Research from enterprise analytics AI firm Alteryx reveals that while 76% of consumers in EMEA believe AI will have a significant impact in the next five years, nearly half (47%) doubt the value it will bring, and 41% express concerns about its applications.
Since the introduction of ChatGPT by OpenAI in November 2022, there has been considerable excitement surrounding the transformative potential of generative AI, with many viewing it as one of the most revolutionary technologies of our time.
However, despite 79% of organizations reporting positive contributions from generative AI to their businesses, there remains a gap in demonstrating AI’s value to consumers, both personally and professionally. According to the ‘Market Research: Attitudes and Adoption of Generative AI’ report, based on surveys of 690 IT business leaders and 1,100 members of the general public in EMEA, significant issues related to trust, ethics, and skills persist, potentially hindering the successful deployment and broader acceptance of generative AI.
The impact of misinformation, inaccuracies, and AI hallucinations
AI hallucinations, instances in which AI generates incorrect or illogical outputs, pose a significant concern, and trusting the outputs of generative AI is a substantial issue for both business leaders and consumers. Over a third of the public expresses anxiety about AI’s potential to create fake news (36%), and 42% worry about its susceptibility to misuse by hackers. Similarly, half of business leaders report grappling with misinformation produced by generative AI.
Moreover, the reliability of information provided by generative AI has come into question. Half of the general public found the data they received from AI to be inaccurate, and 38% perceived it as outdated. On the business front, concerns include generative AI potentially infringing copyright or intellectual property rights (40%) and producing unexpected or unintended outputs (36%).
A critical trust issue for both businesses (62%) and the public (74%) revolves around AI hallucinations. For businesses, the challenge lies in applying generative AI to appropriate use cases, supported by the right technology and safety measures, to mitigate these concerns. Close to half of consumers (45%) are advocating for regulatory measures on AI usage.
Ethical concerns and risks persist in the use of generative AI
In addition to these challenges, business leaders and consumers share significant concerns about the ethical issues and risks associated with generative AI. More than half of the general public (53%) oppose the use of generative AI in making ethical decisions, while 41% of business respondents express concern about its application in critical decision-making areas. The specific areas where its use is discouraged differ, however: consumers notably oppose its use in politics (46%), while businesses are cautious about its deployment in healthcare (40%).
These concerns are echoed in the research findings, which reveal troubling gaps in organizational practices. Only a third of leaders confirmed that their businesses ensure the data used to train generative AI is diverse and unbiased. Furthermore, just 36% have established ethical guidelines, and only 52% have implemented data privacy and security policies for generative AI applications.
This lack of emphasis on data integrity and ethical considerations puts firms at risk: 63% of business leaders cite ethics as their primary concern with generative AI, closely followed by data-related issues (62%). This underscores the importance of better governance to instill confidence and mitigate the risks associated with how employees use generative AI in the workplace.
The rise of generative AI skills and the need for enhanced data literacy
As generative AI continues to advance, developing relevant skill sets and enhancing data literacy will be essential for unlocking its full potential. Consumers are increasingly utilizing generative AI technologies across various scenarios, including information retrieval, email communication, and skill acquisition. Similarly, business leaders are leveraging generative AI for data analysis, cybersecurity, and customer support. However, despite the reported success of pilot projects, numerous challenges persist, including security vulnerabilities, data privacy concerns, and issues related to output quality and reliability.
Trevor Schulze, Alteryx’s CIO, stressed the importance of both enterprises and the general public fully grasping the value of AI and addressing common concerns as they navigate the early stages of generative AI adoption. He highlighted the critical tasks of addressing trust issues, ethical concerns, skills shortages, fears of privacy infringement, and algorithmic bias. Schulze emphasized the urgency for enterprises to accelerate their data journey, implement robust governance practices, and empower non-technical individuals to access and analyze data securely and reliably. This approach, he noted, is crucial for genuinely benefiting from this “game-changing” technology while addressing privacy and bias concerns effectively.