Generative AI and information privacy : Users' assessment of the risks and benefits of information disclosure


School of Business | Master's thesis

Language

en

Pages

75

Abstract

Generative Artificial Intelligence (AI) has been valued for its interactive and advanced multimodal capabilities, and it has seen wide adoption in recent years. Alongside this development, information privacy issues have been highlighted, as generative models can memorize personal information from their training data and leak it. Further privacy issues arise for users of generative AI because its interactive nature and advanced capabilities can encourage users to share more information, raising important questions about how user input is collected and stored for purposes such as model development. The objective of this thesis is to study which factors affect users' perceptions of the risks and benefits of information disclosure with generative AI. A research model was developed based on the privacy calculus framework and extended with constructs from the technology acceptance model. Data were collected with a survey and analysed with Partial Least Squares Structural Equation Modelling (PLS-SEM). Most of the hypotheses were supported, and the results highlight that technical features, organizational practices, and regulation can influence users' intention to disclose information when interacting with generative AI. In more detail, high risk perceptions lowered the intention to disclose, while perceived benefits increased it. Expectations of regulation heightened risk perceptions, whereas trust in the AI provider's data handling practices reduced them. Additionally, ease of use and social presence positively influenced the perceived benefits of information disclosure. Contrary to previous research, motor impulsivity did not have a significant effect on the intention to disclose. This thesis contributes to existing research by extending the privacy calculus model with AI-specific constructs, offering insights into the factors affecting information disclosure decisions in generative AI contexts.
As for practical implications, the results suggest that excessive perceived risk may diminish users' sense of privacy, which could lead them to refrain from interacting with generative AI. On the other hand, increased perceived benefits of information disclosure may also lead to privacy and security risks. For organizations, clear guidelines and awareness of data use can support the secure adoption and utilization of generative AI. For policymakers, it is important to support AI literacy at the societal level.

Supervisor

Ghanbari, Hadi
