Most Cybersecurity Teams Are Unprepared for AI Cyberattacks

Cybersecurity teams aren’t the only ones using artificial intelligence to their advantage. Cybercriminals are leveraging the technology to launch never-before-seen cyberattacks, catching organizations off guard. How can professionals prepare for this unknown? Here is the latest on AI-driven cyber threats and several ways teams can better prepare.


AI cyber threats will catch cybersecurity teams off guard

Recent research commissioned by Darktrace indicates an overwhelming majority of cybersecurity teams are unprepared to defend against artificial intelligence (AI)-powered cyber threats. The report echoes other studies documenting the rising frequency, sophistication and severity of cyberattacks.

In a survey of nearly 2,000 information technology (IT) security professionals, ranging from those in junior positions to Chief Information Security Officers (CISOs), Darktrace found that 74% of IT security leaders believe their organizations are already experiencing the effects of AI-powered cyber threats.

Despite growing concerns that cybercriminals will leverage AI for malicious purposes, many organizations admit they are unprepared. While 89% of IT security teams agree AI-assisted cyber threats will substantially impact their organization by 2026, 60% report their current defenses are inadequate.

Since AI accelerates cyber threat development, reconnaissance and deployment, and lowers the barrier to entry for cybercrime, preventative action is increasingly pressing. If cybersecurity teams do not act quickly, the rapid evolution of the threat landscape will catch them off guard.


The rapid emergence of AI-powered cyber threats

As AI advances, organizations become more susceptible to cyberattacks. According to one survey, the minimum level of cybersecurity resilience dropped by about 30% in 2023, leaving organizations more vulnerable to emerging cyber threats.

One emerging threat involves the use of AI-powered deepfakes for phishing. Threat actors only need a single audio snippet or a handful of images to recreate a person’s voice and likeness. Concerningly, they may be able to use this technology to bypass biometric access controls.  

The frequency of these AI-driven phishing attempts has grown sharply in recent years. In fact, the share of organizations experiencing deepfake-related security incidents rose from 13% in 2021 to 66% in 2022. These modern social engineering attempts are often used in spear-phishing and whaling campaigns.

AI-assisted ransomware attacks are another emerging threat, as cybercriminals use generative models to develop malicious code. The rapid evolution of malware poses a severe issue for understaffed cybersecurity teams.   

Another emerging trend is the deployment of distributed denial-of-service (DDoS) attacks led by AI-driven botnets. This cyber threat is particularly dangerous because it is capable of autonomous execution and can adapt to evade countermeasures.

Once cybercriminals infiltrate a network, they can leverage AI to launch trigger-based attacks at the most opportune time, enabling them to prioritize data exfiltration. The damage they can do with this approach is comparable to a burglar setting explosive charges throughout a building before robbing it: the automation capabilities of algorithms elevate their attacks.

Within the next few years, cybercriminals may be able to fully automate cybercrime-as-a-service with the help of AI. If this prediction becomes a reality, IT security teams will be inundated with hard-to-trace, highly sophisticated cyberattacks.


AI cyber threats are already impacting organizations 

Various industry professionals have claimed AI-powered cyber threats are a far-off possibility, implying the widespread concern is just hype. While it may be true that the worst impacts are yet to come, that sentiment could not be further from reality: cybercriminals are already using algorithms to launch cyberattacks.

In 2020, a branch manager received a call from the director of his company’s parent business requesting authorization for a $35 million transfer for an upcoming acquisition. After emails from a lawyer purportedly hired to coordinate the process appeared in his inbox, he authorized the transfer. He later discovered fraudsters had used AI-powered deepfake technology to mimic the director’s voice.

Advanced AI malware has already reached the proof-of-concept stage. One research team developed a computer worm that attacks AI-powered email assistants using an adversarial self-replicating prompt. The worm forces models to output personally identifiable information, regardless of guardrails, and can embed malicious prompts in outgoing emails to trigger a cascading infection that reaches more clients.
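
Defenses against this class of attack are still maturing, but teams can begin with basic input screening. The following minimal sketch scans inbound email text for self-replication cues before it ever reaches an AI assistant; the patterns are hypothetical examples for illustration, not a vetted signature set.

```python
# Illustrative screen for prompt-injection attempts in inbound email before
# the text reaches an AI assistant. The patterns below are hypothetical
# examples only; production defenses need far more robust detection.
import re

INJECTION_PATTERNS = [
    r"ignore (all|any) (previous|prior) instructions",
    r"forward this (message|email) to",
    r"include this (entire )?(prompt|message) in (your|every) repl(y|ies)",
]

def looks_like_prompt_injection(email_body: str) -> bool:
    """Return True if the email contains known self-replication cues."""
    text = email_body.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

if looks_like_prompt_injection("Please ignore all previous instructions..."):
    print("Quarantine: possible self-replicating prompt")
```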

Real-world AI-powered cyberattacks are already on record, too. Back in 2018, TaskRabbit, a marketplace for freelance labor, was hit by a DDoS attack controlled by an AI-powered botnet. The cybercriminals exfiltrated the Social Security and account numbers of 3.8 million users before the platform temporarily shut down to recover.


What cybersecurity teams can do to strengthen defenses 

Some industry experts suggest many AI-powered cyber threats are of little concern because they remain largely conceptual, but they are mistaken. It is not a matter of if these attacks will happen, but when, and all indicators suggest it will be sooner rather than later. In the meantime, security teams must strengthen their defenses.

1. Deploy a defensive AI 
Cybersecurity teams should deploy their own AI to make their defenses more dynamic. Research shows 96% of security decision-makers believe AI-driven countermeasures are critical for defending against malicious models, making this a sound strategy.
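
As a starting point, here is a minimal sketch of what a defensive model might look like, assuming scikit-learn is available; the traffic features, sample values and contamination rate are illustrative assumptions.

```python
# Minimal sketch of AI-driven anomaly detection on network traffic.
# Assumes scikit-learn; feature columns and sample data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: [bytes_sent, bytes_received, duration_s]
baseline_traffic = np.array([
    [1200, 3400, 0.8],
    [900, 2100, 0.5],
    [1500, 4000, 1.1],
    [1100, 2800, 0.7],
])

# Fit on known-good traffic so outliers surface as potential threats.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline_traffic)

# An unusually upload-heavy flow is scored against the learned baseline.
new_connection = np.array([[95000, 150, 12.4]])
if model.predict(new_connection)[0] == -1:
    print("Anomalous connection flagged for analyst review")
```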

2. Audit AI technology
Organizations should consider auditing data sources and model behavior periodically, whether they develop their own algorithm or rely on a third-party tool. This way, they can verify that no adversarial training, prompt injection or data set poisoning has taken place.
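
One lightweight way to do this is to replay a fixed set of probe inputs on a schedule and compare the responses to recorded baselines. In the sketch below, `query_model` is a hypothetical stand-in for whatever model or vendor API the organization uses, and the baseline file format is assumed.

```python
# Minimal sketch of a periodic model-behavior audit: replay fixed probe
# inputs and flag responses that no longer match the recorded baseline,
# which can indicate drift, tampering or data set poisoning.
import hashlib
import json

def query_model(prompt: str) -> str:
    # Hypothetical stand-in; replace with the real model or vendor API call.
    raise NotImplementedError

def fingerprint(text: str) -> str:
    # Normalize lightly so trivial whitespace changes do not trip the audit.
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

def audit(baseline_path: str = "probe_baseline.json") -> list[str]:
    """Return the probe prompts whose responses drifted from baseline."""
    with open(baseline_path) as f:
        baseline = json.load(f)  # {"probe prompt": "expected fingerprint"}
    return [
        prompt for prompt, expected in baseline.items()
        if fingerprint(query_model(prompt)) != expected
    ]
```

Exact-match fingerprints only suit deterministic models; for generative systems, comparing responses by semantic similarity is the more realistic variant of the same idea.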

3. Leverage automation 
According to research from Darktrace and the MIT Technology Review, 60% of C-suite professionals agree that human-driven security solutions are inadequate for defending against AI cyber threats. Cybersecurity teams should instead rely on the power of automation.   

IT security professionals can use AI’s computational power to audit security logs, identify emerging cyber threats and optimize security parameters in real time, freeing themselves to focus on high-priority matters.
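
Even simple scripts can absorb routine log review. Here is a minimal sketch that flags possible brute-force sources in a standard Linux auth log; the log path and failure threshold are assumptions to tune per environment.

```python
# Minimal sketch of automated log triage: flag source IPs with bursts of
# failed SSH logins. Log format, path and threshold are assumptions.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def flag_brute_force(log_path: str, threshold: int = 10) -> dict[str, int]:
    """Return {ip: failure_count} for IPs at or above the threshold."""
    counts: Counter[str] = Counter()
    with open(log_path) as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    for ip, failures in flag_brute_force("/var/log/auth.log").items():
        print(f"Possible brute force from {ip}: {failures} failed logins")
```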

4. Raise awareness 
With AI-driven deepfake phishing on the rise, cybersecurity teams would be wise to urge the human resources department or the board to require organization-wide security awareness training. The fewer employee mistakes teams have to fix, the more they can prioritize external threats.

5. Utilize access controls 
Cybersecurity professionals should leverage authentication measures and access controls regardless of their other strategies. Considering AI deepfakes can bypass biometrics, no single factor should be trusted on its own; the toolset should layer multiple verification methods.
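
As one illustration, the following minimal sketch pairs a biometric check with a time-based one-time password so a spoofed voice or face alone is never enough. It assumes the pyotp package and greatly simplifies enrollment and secret storage.

```python
# Minimal sketch of layering a TOTP one-time code on top of a biometric
# check, so a deepfake that fools the biometric still cannot log in.
# Assumes the pyotp package; secret handling is simplified for illustration.
import pyotp

# In practice, the secret is generated at enrollment and stored server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def verify_login(biometric_ok: bool, submitted_code: str) -> bool:
    """Require BOTH a biometric match and a valid time-based one-time code."""
    return biometric_ok and totp.verify(submitted_code)

# A deepfake might defeat the biometric check, but without the enrolled
# device's current code, verify_login still returns False.
print(verify_login(biometric_ok=True, submitted_code=totp.now()))  # True
print(verify_login(biometric_ok=True, submitted_code="000000"))    # False
```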


Is the average IT security team ready to defend itself? 

Today, few security teams can withstand a sudden onslaught of AI cyberattacks—but that does not mean their situation is hopeless. They can defend against the modern threat landscape with additional technology investments and upskilling. 

About The Author


Zac Amos is the features editor at ReHack, where he covers trending tech news in cybersecurity and artificial intelligence. For more of his work, follow him on Twitter or LinkedIn.

