In a world rapidly embracing generative artificial intelligence (AI), a recent study by Veritas Technologies, conducted by 3Gem in December 2023, sheds light on a concerning trend: employees across the globe are inputting sensitive data into publicly available generative AI tools despite being aware of the inherent risks. The study, which surveyed 11,500 employees across Australia, China, Japan, Singapore, South Korea, Europe, and the US, exposes a significant gap in workplace policies and in understanding of generative AI's potential threats.
The research highlights a paradox in the workplace: while 39% of respondents recognize the risk of sensitive data leaks through the use of public generative AI tools, a startling 31% admitted to inputting confidential information such as customer details, financial data, and personally identifiable information into these platforms. This practice not only jeopardizes customer trust and corporate integrity but also poses serious compliance risks, with 37% of the survey's participants citing concerns over adhering to regulatory standards.
Despite these risks, the allure of generative AI's benefits, such as improved productivity and the automation of mundane tasks, has led to its widespread use. Approximately 57% of employees reported using generative AI tools weekly in the office, for tasks ranging from research and analysis to drafting emails and memos. However, this surge in usage has yet to be matched with adequate guidance on safe and appropriate use: 36% of respondents reported a lack of formal policies or guidelines governing generative AI tools at work.
The study underscores a critical disconnect between the perceived value of inputting sensitive data into AI tools and the potential for business harm. While some employees see integrating customer and financial information into generative AI as beneficial, a significant portion (27%) disagrees, highlighting the diversity in understanding and attitudes towards these technologies within the corporate environment.
This situation is further complicated by the competitive dynamics that generative AI introduces among colleagues. Over half of the surveyed employees viewed the use of AI tools as an unfair advantage, suggesting a divide in perceptions of fairness and teamwork within the workplace. This divide is exacerbated by the absence of clear guidelines, with a large majority (90%) stressing the importance of establishing comprehensive policies for the use of emerging technologies.
The implications of this study extend beyond individual organizations, suggesting a broader industry challenge as generative AI continues to gain traction. According to IBM's X-Force Threat Intelligence Index 2024, the growing adoption of generative AI will likely increase security vulnerabilities, with the potential for large-scale attacks on AI technologies that achieve significant market share. The report, based on an analysis of over 150 billion security events per day, emphasizes the urgent need for businesses to secure their AI models against cyber threats and to adopt holistic security measures in this new era.
The findings from Veritas Technologies and IBM highlight a crucial turning point for businesses globally: the need to balance the innovative potential of generative AI with the imperative of safeguarding sensitive information and maintaining regulatory compliance. As generative AI tools become an integral part of the corporate toolkit, establishing clear, mandatory guidelines and fostering a culture of security and ethical use will be paramount in navigating the promises and perils of this transformative technology.