More than a third (38%) of employees share sensitive work information with AI tools without their employer’s permission, according to a new study by CybSafe and the National Cybersecurity Alliance (NCA).
The report found that this behavior was particularly pronounced among young people.
Nearly half (46%) of Gen Z and 43% of Millennials surveyed admitted to using such tools to share sensitive work information without their employer’s knowledge.
As part of its research, CybSafe surveyed more than 7,000 people in the United States, United Kingdom, Canada, Germany, Australia, India, and New Zealand.
The survey also revealed that 52% of employed participants have not yet received training on the safe use of AI.
Furthermore, 58% of students, 84% of those not actively employed, and 83% of retirees have not received any AI training.
Oz Alashe, CEO and Founder of CybSafe, commented: “The introduction of AI has created a whole new category of security behaviors that CISOs and business leaders should be concerned about. The security community is well aware of the threat posed by AI, but this awareness is not consistent within the workforce. It is clear that it has not yet been reflected in our security behaviors.”
The “biggest” risk posed by AI
Ronan Murphy, a member of the Irish Government’s AI Advisory Board, told InfoSec that AI tools accessing organizational data represent the biggest risk any industry has ever faced when it comes to cybersecurity, governance and compliance.
“Once you input all your IP into the AI model, anyone with access to it can ask the AI model to spill the beans,” he explained.
Murphy added: “In order to implement AI and drive operational efficiency, organizations need to ensure that the foundational layer, which is data, is properly sanitized before it enters the AI application.”
Concerns and distrust about AI are widespread
Almost two-thirds (65%) of respondents also expressed concern about AI-related cybercrime, such as the use of these tools to create more convincing phishing emails.
More than half (52%) think AI will make it harder to detect fraud, and 55% say the technology will make it more difficult to stay safe online.
Furthermore, a significant proportion of respondents expressed distrust of their companies’ implementation and use of AI. Similar proportions reported high confidence (36%) and low confidence (35%) in their organization’s AI implementation.
The remainder (29%) were neutral.
More than a third (36%) believe their companies ensure that AI technology is free of bias, while 30% remain unconvinced.
Respondents were also evenly divided on their ability to recognize AI-generated content, with 36% expressing high confidence and 35% expressing low confidence.
Alarmingly, 36% believe AI is likely to influence decisions about what is real and what is fake during election campaigns.