
Survey Exposes AI Trust Gap, Urging Caution Among Users


Feb 23, 2024
2 min read
by CryptoPolitan

A recent survey of 1,300 respondents across ten African and Middle Eastern countries sheds light on a growing trend of excessive trust in generative Artificial Intelligence (AI) tools. The findings highlight a concerning pattern: 83% of users express confidence in the accuracy and reliability of these tools, and 63% are willing to share their personal information with them, exposing themselves to potential risks.

Generative AI: Opportunities and risks

Generative AI, exemplified by popular platforms like ChatGPT, has transformed various sectors since its introduction in late 2022. Its seamless integration into daily tasks, from marketing campaigns to content creation, has driven widespread adoption.

However, the survey findings suggest that this adoption may have outpaced users’ awareness of associated risks.

Lack of awareness and policies

Anna Collard, Senior Vice President of Content Strategy and Evangelist at KnowBe4 AFRICA, emphasizes the need for increased user training and awareness regarding the potential risks of generative AI. 

Despite the technology's time-saving and productivity benefits, the survey reveals a glaring lack of comprehensive organizational policies to address these challenges. Nearly half of respondents reported that their workplace has no generative AI policy, and 8% are prohibited from using it altogether.

User comfort and trust

Interestingly, the survey also unveils discrepancies in user comfort levels across different regions. While 75% of respondents in Nigeria express comfort in sharing personal information with generative AI tools, only 54% in South Africa feel the same. 

This variation underscores the importance of considering cultural and regional nuances in addressing trust and security concerns related to AI technologies.

Addressing the threat landscape

Collard stresses the importance of cultivating a zero-trust mindset to combat the threats posed by the malicious use of generative AI. With the prevalence of deepfakes and disinformation campaigns, organizations must prioritize employee training and implement comprehensive policies to safeguard against potential risks. 

Failure to do so not only exposes individuals to financial loss and data breaches but also threatens the integrity of democratic processes, particularly with elections on the horizon in various countries.

Read the article at CryptoPolitan
