
Emotional intelligence is the best defense against GenAI threats
By Öykü Işık, Ankita Goswami | Published: 2024-12-12 18:00:00 | Source: The Present – Big Think
Imagine receiving a frantic call from your grandchild, his voice full of fear as he describes a terrible car accident. This is what happened to Jane, a 75-year-old senior citizen from Regina, Canada.
At the caller's insistence, she was at the bank within hours to withdraw bail money for her grandson. Only later did she discover that she had been the victim of an artificial intelligence scam.
By 2030, GenAI is expected to automate up to 70% of global business operations, leaving leaders excited and intrigued. But GenAI also has a dark side: it can turn deception into a weapon.
Jane’s story is not an isolated one. Emotional scams also pose a serious threat to businesses, exposing them to financial and reputational risks.
Recent data indicates that one in 10 CEOs has already encountered a deepfake threat (AI-generated media trained on real video and audio material), while 25% of leaders remain unaware of the risks.
Detecting AI-generated content remains a challenge. To verify digital content, we first turn to technology for help. While many tools boast their ability to detect AI-generated content, their accuracy is inconsistent.
With tools available that can bypass detection, such as AI watermark removers for photos, we remain far from reliable automated classification of AI-generated content.
There are some promising tools, such as Intel's FakeCatcher, but widespread implementation is still lacking. As organizations play catch-up with GenAI, human skepticism remains our best insurance.
Emotional intelligence, the ability to recognize, understand, and manage one's own emotions, is an asset against AI-enabled manipulation.
GenAI presents three main risks that emotional intelligence can help mitigate:
- GenAI’s hyper-realistic output becomes a trickster’s tool to exploit emotional responses, such as urgency or compassion.
- The ethical pitfalls of GenAI are often unintentional: employees may defer to AI recommendations or fall into automation bias, prioritizing efficiency over ethics.
- From job concerns to ethical apathy, employees can make rash decisions under the emotional pressure of GenAI.
As GenAI threats advance, emotional intelligence is key
To manage the impact of GenAI threats, individuals must recognize the emotions at play, consider the implications, and respond with informed and empathic actions. Here are three ways to do this.
1. Help teams recognize when emotions are being used as a weapon
Research indicates that human behavior is behind 74% of data breaches, and it's not hard to see why.
GenAI can personalize fraud simply by analyzing employee data or company-specific content. Threat actors can also leverage your digital footprint to create a deepfake video of you.
All it takes is less than eight minutes and minimal cost. Against this backdrop, the first step in mitigating GenAI threats is education.
Employees need to understand how GenAI can leverage emotions to override rational decision-making. A senior manager at a cybersecurity company shared a telling experience with us.
Recently, he received a WhatsApp message and audio recording from a scammer posing as his CEO, discussing legitimate details about an urgent business deal.
The emotional pressure of a request from a higher authority initially triggered a reaction. However, the manager's awareness of standard organizational communication practices helped him spot the red flags: sensitive data should not be shared over informal channels, and unusual requests warrant follow-up calls.
Leaders should invest in training that enhances emotional intelligence. Workshops focused on identifying emotional triggers, combined with simulations that reward creative thinking and flexible strategies over rigid rule-following, help employees detect fraud sooner.
2. Make reflection the default for your team
Individuals must also consider how their actions, driven by emotions, can lead to unintended consequences. Reflection allows us to examine how our emotions unconsciously shape our behavior.
A recent example of lacking ethical oversight comes from a German magazine that published an AI-generated interview with Formula 1 legend Michael Schumacher without his consent. The interview included fabricated quotes about his health and family life since his accident in 2013.
In the excitement of publishing a "scoop," the journalists failed to consider its emotional impact on Schumacher and his family. The episode did significant damage to the magazine's reputation.
Critical thinking at work encourages us to consider different perspectives and factors that influence our choices.
Leaders can facilitate this by offering group reflection exercises.
One good example is "fly on the wall," where one team member presents a GenAI output and then watches silently while the others discuss its ethical considerations and biases.
Finally, the insights gained from the discussion are shared. Through this kind of reflection, familiar situations can take on new meaning, revealing underlying assumptions and warning against over-reliance on artificial intelligence.
3. Turn quick reactions into thoughtful responses
The final step is to translate awareness and reflection into deliberate action. Even when the risks are understood, the pressure of AI-enabled manipulation can overwhelm good judgement. Give your employees the power to set boundaries and regain control of decision-making processes.
Encouraging them to voice discomfort or delay action until verification, for example by saying, "I need written confirmation before I proceed," can slow the momentum of a manipulation.
Such responses are made possible by a culture of open dialogue, where employees are encouraged to question instructions or express concerns without fear.
A recent example involves a Ferrari executive who received a phone call from someone claiming to be CEO Benedetto Vigna. At first, the call seemed plausible. But when the conversation turned to confidential matters, the executive grew suspicious.
Unwilling to take any risks, he asked a question that only the real CEO could answer; the caller abruptly hung up, revealing the scam. We are also seeing a growing risk of emotional entanglement with AI, through over-reliance, anthropomorphism, and the blurring of boundaries between fantasy and reality.
Being swayed by our emotions is part of what makes us human. But it is also possible to harness those emotions to make better decisions. Ultimately, an increasingly automated world can benefit from the deliberateness, human touch, and sensitivity these strategies bring.
Republished with permission of the World Economic Forum. Read the original article.