Artificial Intelligence is already affecting many aspects of our lives—and has been for decades. For better or worse, that’s going to continue. But as AI becomes more powerful and more deeply woven into the structure of our daily reality, it is critical for organizations to realistically assess its full potential as both tool and threat.
AI enables both good and bad actors to work faster at scale
The prevalence of machine learning in business makes it an appealing tool and target
The hype surrounding AI has the potential to obscure the risks
The scope of emerging threats is enormous and varied
New AI-driven security approaches will be required to combat AI-generated threats.
Part of the problem of predicting the real implications of generative AI technology is the massive, buzzy cloud of hype that surrounds it. Even the term itself has become something of a cliché. Want to fill an auditorium at a technology event? Put AI in the title of your presentation. Want to draw attention to a machine learning feature in your software? Market it as “AI.” This has the unfortunate effect of obscuring the reality of the technology—sensationalizing benefits and dangers while simultaneously anesthetizing many to the topic as a whole.
This is compounded by the fact that many—especially the less technical—don’t really understand what, exactly, AI is.
In simple terms, artificial intelligence is exactly what it sounds like: the use of computer systems to simulate human intelligence processes.
Examples: language processing, speech recognition, expert systems, and machine vision.
Computer systems governed by algorithms that enable them to learn and adapt automatically after they have been trained on a data set.
Examples: Content recommendation algorithms, predictive analysis, image recognition
A technique of machine learning that uses layers of algorithms and computing units to simulate a neural network like the human brain.
Examples: Large Language Models, Translation, Facial recognition
Content authenticity
Identity manipulation
Phishing with dynamite
Prompt injection
Machine Hallucinations
Attack sophistication
Custom malware
Poisoned data
Privacy leaks
Content authenticity
Generative AI has the ability to create highly realistic copies of original content. Not only does this present potential intellectual property risks for organizations using AI for content generation, but it also allows bad actors to steal and realistically copy all sorts of data to either pass off as an original creation or to facilitate other attacks.
Identity manipulation
Generative AI can create ultra-realistic imagery and video in seconds, and even alter live video as it is generated. This can erode confidence in a variety of vital systems—from facial recognition software to video evidence in the legal system to political misinformation—and undermine trust in virtually all forms of visual identity.
Phishing with dynamite
Attackers can use generative AI tools to realistically simulate faces, voices, and written tone, as well as emulate corporate or brand identity which can then be leveraged for highly effective and difficult to detect phishing attacks.
Prompt injection
Because many organizations are using off-the-shelf generative AI models, they are potentially exposing information used to train or give prompts to their instance to injection attacks refined by attackers to target popular models. Without stringent safeguards in place and frequent updates, an exploit for the base model could expose any organization using that model.
Machine Hallucinations
While AI can generally produce convincing speech or text at speed, isn’t always accurate. This is particularly problematic for organizations relying on AI to generate informational or support content for users, as well as for organizations using machine learning for threat detection, where an anomalous result could be especially costly.
Attack sophistication
Because AI is able to write functional code with superhuman speed, it could potentially be used to scale attacks with unprecedented speed and complexity. In addition, AI could be used to detect vulnerabilities in a compromised code base and could expand the scope of attackers by lowering the barrier of entry.
Custom malware
While popular LLMs have some safeguards against users creating malicious code, sophisticated attackers can find exploits and loopholes. Stolen or copied models can also be stripped of such safeguards, allowing bad actors to rapidly generate nearly undetectable, highly customizable exploits.
Poisoned data
Attacks don’t necessarily need to exploit the AI itself. Instead, they could target the data used to train a machine learning model in order to false output. This could then be further leveraged to create exploits within the model itself—such as falsifying a DNA sequence in a criminal database—or simply to produce results that could damage the targeted organization.
Privacy leaks
AI that is trained with or handles sensitive data could potentially expose that data, whether through a bug, as has happened with several of the major commercial models, or through a targeted attack.
We asked ChatGPT to lay out the top threats posed by generative AI. Here was its response:
Generative AI, while offering incredible potential for innovation and creativity, also presents unique challenges and threats in the realm of cybersecurity. Here are some key points to consider:
The features that make AI a useful tool for bad actors can—and must—be used to harden cybersecurity measures. Not only will this allow organizations to develop more effective and agile cybersecurity technologies, but better address human vulnerabilities as well.