A new poll of global digital trust professionals reveals a high degree of uncertainty around generative artificial intelligence (AI), few company policies governing its use, a lack of training, and fears about its exploitation by bad actors, according to Generative AI 2023: An ISACA Pulse Poll.
Digital trust professionals from around the globe (those who work in cybersecurity, IT audit, governance, privacy and risk) weighed in on generative AI, artificial intelligence that can generate text, images and other media, in a new pulse poll from ISACA that explores employee use, training, attention to ethical implementation, risk management, exploitation by adversaries, and impact on jobs.
Diving in, even without policies
The poll found that many employees at respondents’ organizations are using generative AI, even without policies in place for its use. Only 28 percent of respondents say their organizations expressly permit the use of generative AI, only 10 percent say a formal, comprehensive policy is in place, and more than one in four say no policy exists and there is no plan for one. Despite this, more than 40 percent say employees are using it anyway, and the true percentage is likely much higher given that an additional 35 percent aren’t sure.
These employees are using generative AI in a number of ways, including to:
- Create written content (65%)
- Increase productivity (44%)
- Automate repetitive tasks (32%)
- Provide customer service (29%)
- Improve decision making (27%)
Lack of familiarity and training
However, despite employees quickly moving forward with the technology, only six percent of respondents’ organizations provide training to all staff on AI, and more than half (54 percent) say that no AI training at all is provided, even to teams directly impacted by AI. Only 25 percent of respondents indicated they have a high degree of familiarity with generative AI.
“Employees are not waiting for permission to explore and leverage generative AI to bring value to their work, and it is clear that their organizations need to catch up in providing policies, guidance and training to ensure the technology is used appropriately and ethically,” said Jason Lau, ISACA board director and CISO at Crypto.com. “With greater alignment between employers and their staff around generative AI, organizations will be able to drive increased understanding of the technology among their teams, gain further benefit from AI, and better protect themselves from related risk.”
Risk and exploitation concerns
The poll explored the ethical concerns and risks associated with AI as well, with 41 percent saying that not enough attention is being paid to ethical standards for AI implementation. Fewer than one-third of their organizations consider managing AI risk to be an immediate priority, 29 percent say it is a longer-term priority, and 23 percent say their organization does not have plans to consider AI risk at the moment, even though respondents note the following as top risks of the technology:
- Misinformation/Disinformation (77%)
- Privacy violations (68%)
- Social engineering (63%)
- Loss of intellectual property (IP) (58%)
- Job displacement and widening of the skills gap (tied at 35%)
More than half (57 percent) of respondents indicated they are very or extremely worried about generative AI being exploited by bad actors. Sixty-nine percent say that adversaries are using AI as successfully or more successfully than digital trust professionals.
“Even digital trust professionals report a low familiarity with AI—a concern as the technology iterates at a pace faster than anything we’ve seen before, with use spreading rampantly in organizations,” said John De Santis, ISACA board chair. “Without good governance, employees can easily share critical intellectual property on these tools without the correct controls in place. It is essential for leaders to get up to speed quickly on the technology’s benefits and risks, and to equip their team members with that knowledge as well.”
Impact on jobs
Examining how current roles are involved with AI, respondents believe that security (47 percent), IT operations (42 percent), and risk and compliance (tied at 35 percent each) are responsible for the safe deployment of AI. Looking ahead, one in five organizations (19 percent) are opening job roles for AI-related functions in the next 12 months. Forty-five percent believe a significant number of jobs will be eliminated due to AI, but digital trust professionals remain optimistic about their own jobs, with 70 percent saying AI will have some positive impact on their roles. To realize that positive impact, 80 percent think they will need additional training to retain their job or advance their career.
Optimism in the face of challenges
Despite the uncertainty and risk surrounding AI, 80 percent of respondents believe AI will have a positive or neutral impact on their industry, 81 percent believe it will have a positive or neutral impact on their organizations, and 82 percent believe it will have a positive or neutral impact on their careers. Eighty-five percent of respondents also say AI is a tool that extends human productivity, and 62 percent believe it will have a positive or neutral impact on society as a whole.