November 2024
The integration of AI into the workplace offers immense promise: boosting efficiency, improving decision-making, and automating routine tasks. These advancements free employees to focus on creativity and strategy. However, AI also brings significant risks, including job displacement, ethical concerns over bias and transparency, and challenges in maintaining human oversight. Of greater concern, criminals, extremists, and other bad actors are exploiting AI’s capabilities, posing specific challenges for threat assessment, cybersecurity, and workplace violence prevention.
The Peril
AI has escalated the creation and sophistication of deepfakes, enabling realistic but fabricated content that threatens reputations, political stability, and public trust. Deepfakes are used in misinformation campaigns, identity theft, and blackmail, making it increasingly difficult to distinguish truth from deception. AI is also weaponized in cyberattacks, automating phishing schemes, hacking attempts, and large-scale data breaches. These evolving threats demand advanced detection systems, ethical AI governance, and public education to mitigate their impact.
In the workplace, AI can escalate violence by automating threatening messages or using data analysis to identify and exploit vulnerabilities in employees or organizational security. Regulatory safeguards, employee education, and proactive countermeasures are critical to ensuring AI serves as a tool for protection rather than harm.
For example, a recent Economist survey found that while 77% of executives agree generative AI needs new governance, only 35% have enterprise-wide structures in place. Many organizations still rely on outdated frameworks developed before the advent of modern AI.
AI also presents significant risks in domestic violence, stalking, and workplace violence. Abusers misuse AI-driven tools such as smart devices, surveillance systems, and geolocation trackers to control and intimidate victims. Stalkers exploit AI-powered facial recognition and data mining to track victims and invade their privacy. A recent domestic violence case in Spain highlights the risk of relying solely on AI when conducting threat assessments. Spanish police use an AI-based algorithm to assess risk levels for domestic violence victims. In this case, although the algorithm generated a score indicating the victim was safe from violence, she was in fact murdered by her abuser. Spanish authorities had relied solely on the algorithm to assess risk and conducted no independent investigation.
AI-generated fraud is increasingly difficult to detect. In 2024, deepfakes imitating President Biden, Florida Gov. Ron DeSantis, and corporate CEOs caused political and financial harm. One deepfake impersonating the CFO of a multinational company led to a $26 million loss through fraudulent bank transfers.
Extremist groups exploit AI for propaganda, moving from traditional media to interactive digital platforms such as social media, music streaming, and online gaming. These platforms are used to spread ideology, recruit followers, and encourage violence. A US-based social media network popular with extremists has developed AI chatbot characters that enable users to interact with prominent political and historical figures. Hitler and Osama bin Laden chatbots created on this platform have encouraged radicalization and violence by responding to queries with antisemitic answers that deny the Holocaust and justify terrorist attacks. A 2017 British Home Office study found that ISIS supporters used over 400 platforms to distribute extremist content, a number likely to grow as AI advances.
AI’s ability to automate targeting, spread misinformation, and incite violence requires threat managers to remain vigilant and proactive.
The Promise
AI also provides valuable tools for threat assessment. It can analyze large data sets in real time, detect patterns in threatening communications, and identify escalation in violent rhetoric. By reducing cognitive bias, AI can offer objective insights during the initial evaluation of potential threats.
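To make the pattern-detection idea above concrete, here is a minimal illustrative sketch of trend analysis over threatening communications. It is not any specific vendor tool: the keyword lexicon, weights, and threshold logic are hypothetical assumptions for illustration, and real systems use trained language models rather than keyword lists.

```python
# Hypothetical escalation lexicon; weights are illustrative only.
# Production systems would use trained classifiers, not keyword lists.
ESCALATION_TERMS = {
    "annoyed": 1, "angry": 2, "hate": 3, "hurt": 4, "kill": 5,
}

def threat_score(message: str) -> int:
    """Sum the weights of escalation terms found in a message."""
    words = message.lower().split()
    return sum(ESCALATION_TERMS.get(w.strip(".,!?"), 0) for w in words)

def is_escalating(messages: list[str], window: int = 2) -> bool:
    """Flag when the average score of the most recent messages
    exceeds the average of earlier ones -- a crude trend check."""
    if len(messages) <= window:
        return False
    scores = [threat_score(m) for m in messages]
    recent = sum(scores[-window:]) / window
    earlier = sum(scores[:-window]) / (len(scores) - window)
    return recent > earlier

history = [
    "I'm annoyed about the schedule change.",
    "I'm angry no one listens to me.",
    "I hate everyone in that office.",
    "Someone is going to get hurt.",
]
print(is_escalating(history))  # True: rhetoric is trending upward
```

The value of such tooling is in surfacing a trend for a human assessor to evaluate, not in rendering a verdict, consistent with the caution in the Spanish case above.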
Recommendations
- Train Employees on AI Threat Detection:
Employees must learn to authenticate the source and content of messages. Indicators of AI-generated threats include:
  - Inconsistent audio or video quality
  - Mismatched lip-syncing or unnatural movements
  - Uncharacteristic speech patterns
  - Message sources that cannot be verified
- Implement Verification Tools and Processes:
Organizations should provide tools for verifying message authenticity. If unavailable, management must empower employees to question suspicious content and follow structured verification procedures.
- Adopt Advanced AI Detection Technologies:
Invest in next-generation cybersecurity tools capable of identifying harmful AI-generated messages or content.
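One structured verification procedure the second recommendation could take is cryptographic message authentication. The sketch below, using Python's standard `hmac` module, assumes a shared secret distributed through a trusted out-of-band channel; the secret value and messages are hypothetical examples.

```python
import hashlib
import hmac

# Hypothetical shared secret; in practice it would be provisioned
# securely, never hard-coded in source.
SECRET = b"example-shared-secret"

def sign(message: bytes) -> str:
    """Produce an authentication tag for a message."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Check a message against its tag in constant time."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"Transfer approved by CFO")
print(verify(b"Transfer approved by CFO", tag))   # True: authentic
print(verify(b"Transfer approved by CEO", tag))   # False: content altered
```

A deepfaked voice or video cannot produce a valid tag without the secret, which is why pairing human skepticism with a simple out-of-band check defeats attacks that fool the eye and ear.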
Conclusion
AI should never replace the expertise of trained threat assessment professionals. While its potential to enhance assessments is undeniable, so are the risks it poses in the hands of criminals and extremists. A successful threat assessment program must balance these dynamics, recognizing all potential threats while leveraging AI to prevent and mitigate harm.