With an estimated market size of $102 billion by 2032, it’s no secret that artificial intelligence (AI) is taking every industry by storm. Most of us know the basic idea: we train clever systems by showing them large amounts of labeled data — lots of pictures, say, each tagged with what it contains — and letting them learn from that data so they can figure things out on their own.
However, AI runs on data, and where that data comes from, how it is processed and what comes out of those processes all raise questions of identity and security. Understandably, many people are concerned about the security of that data. A 2023 survey found that 81% of respondents were concerned about the security risks associated with ChatGPT and generative AI, while only 7% were optimistic that AI tools would enhance internet safety. Strong cybersecurity measures will therefore be even more critical as AI technologies spread.
But there are also myriad opportunities to apply AI in cybersecurity to improve threat detection, prevention and incident response. Companies need to understand both the opportunities and the weaknesses of AI in cybersecurity to stay ahead of forthcoming threats. In today’s post, I’m diving into the key things companies need to know when exploring AI adoption in cybersecurity, and how to protect against emerging AI-driven threats.
On the bright side, AI can help transform cybersecurity with faster, more accurate and more effective responses. Some of the ways AI can be applied to cybersecurity include:
We are already seeing attackers use AI in attacks. For instance:
In addition, AI requires a lot of data, and companies need to limit exactly what is shared, since every AI service is another third party where data could be breached. Even ChatGPT itself suffered a data breach due to a vulnerability in the Redis open-source library, allowing users to access others' chat history. OpenAI swiftly resolved the issue, but it highlights the potential risks for chatbots and their users. Some companies have started banning the use of ChatGPT altogether to protect sensitive data, while others are implementing AI policies that limit what data can be shared with AI tools.
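One practical way to enforce such a policy is to redact sensitive fields before a prompt ever leaves the company boundary. The sketch below is a minimal, hypothetical illustration — the pattern names and regexes are my own placeholders, and a real policy engine would cover far more categories and edge cases:

```python
import re

# Illustrative patterns only; a real data-sharing policy would
# cover many more categories (names, addresses, account numbers, ...).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive substrings with labeled placeholders
    before the text is sent to an external AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact jane.doe@example.com, SSN 123-45-6789, token sk-abcdef1234567890."
print(redact(prompt))
# Contact [EMAIL REDACTED], SSN [SSN REDACTED], token [API_KEY REDACTED].
```

The key design point is that redaction happens client-side, so the third party never receives the sensitive values at all.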
The lesson here is that while threat actors are evolving to use AI in new attacks, companies need to familiarize themselves with these potential avenues of compromise in order to protect against them.
It would be remiss to talk about adopting AI in cybersecurity without mentioning the ethical considerations. It’s important to use responsible AI practices and human oversight to ensure security and privacy. AI can only replicate what it has learned, and some of what it has learned is lacking. Thus, before adopting AI solutions, companies should weigh the ethical implications, including the following:
Data bias amplification: AI algorithms learn from historical data, and if the data used for training contains biases, the algorithms can inadvertently perpetuate and amplify those biases. This can result in unfair or discriminatory outcomes when the algorithms make decisions or predictions based on biased data.
Unintended discrimination: AI algorithms may discriminate against certain groups or individuals due to biases in the training data or the features the algorithms consider. This can lead to unfair treatment in areas like hiring, lending, or law enforcement, where decisions impact people's lives based on factors beyond their control.
Transparency and accountability: Many AI algorithms, especially complex ones like deep neural networks, can be challenging to interpret and understand. Lack of transparency makes it difficult to identify how biases are introduced and decisions are made, leading to concerns about accountability when biased or unfair outcomes occur.
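To make the bias-amplification point concrete, here is a deliberately simplified toy example — the data is fabricated and the "model" is just per-group frequency counting, standing in for any learner that picks up group membership as a predictive feature:

```python
from collections import Counter

# Hypothetical, deliberately skewed historical hiring records:
# (group, hired?) pairs in which group "A" was favored.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """'Learn' the hire rate per group -- a stand-in for any model
    that latches onto group membership as a feature."""
    hired = Counter(group for group, was_hired in records if was_hired)
    total = Counter(group for group, _ in records)
    return {group: hired[group] / total[group] for group in total}

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.3}

# The "model" now scores candidates purely by group membership,
# faithfully reproducing the bias baked into its training data.
```

However sophisticated the real model, the mechanism is the same: if the training data encodes a historical bias, the model will learn it and apply it at scale.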
While AI is a bit of a wild west right now, we will see emerging regulation requiring transparency and accountability to offset some of these privacy and ethical concerns. For instance, the European Commission has already been calling on major tech corporations such as Google, Facebook and TikTok to label AI-generated content as part of their efforts to counter the spread of disinformation online. Under the EU Digital Services Act, platforms will soon be obligated to clearly mark deepfakes with noticeable indicators.
Given the limitations of AI, humans should always be the final decision makers, with AI used to speed up the process. Companies can use AI to surface multiple options so that key decision makers can act quickly; AI will supplement, not replace, human decision-making. Together, AI and humans can accomplish more than either can alone.
[Table: respective strengths of AI and humans]
Technologies such as Public Key Infrastructure (PKI) can play a fundamental role in protecting against emerging AI-related threats, such as deepfakes, and in maintaining the integrity of digital communications.
For example, a consortium of leading industry players, including Adobe, Microsoft and DigiCert, is working on a standard known as the Coalition for Content Provenance and Authenticity (C2PA). This initiative introduced an open standard designed to tackle the challenge of verifying the legitimacy of digital files. C2PA leverages PKI to generate a verifiable trail, empowering users to discern between genuine and counterfeit media. The specification lets users ascertain the source, creator, creation date, location and any modifications of a digital file. The primary goal of the standard is to foster transparency and trustworthiness in digital media, especially given the increasing difficulty of distinguishing AI-generated content from reality.
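The core idea C2PA builds on is ordinary digital signing: a creator signs a digest of the file with a private key, and anyone can verify it with the matching public key. The sketch below illustrates just that sign-and-verify loop using textbook RSA with tiny primes — this is a toy for intuition only, nothing like the real C2PA manifest format or production key sizes:

```python
import hashlib

# Toy RSA key pair with tiny primes -- for illustration only,
# orders of magnitude smaller than real PKI parameters.
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def sign(data: bytes) -> int:
    """Creator signs a digest of the media file with the private key."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(digest, d, n)

def verify(data: bytes, signature: int) -> bool:
    """Anyone can check the signature using only the public key (e, n)."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == digest

media = b"original photo bytes"
sig = sign(media)
print(verify(media, sig))              # True: provenance intact
print(verify(b"tampered photo", sig))  # changed bytes break verification
```

In the real standard, the signed manifest travels with the file, so any edit that is not re-signed becomes detectable — which is exactly what makes PKI useful against deepfakes.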
In sum, AI will open up many opportunities in cybersecurity, and we have only scratched the surface of what it can do. AI will serve as both an offensive and a defensive tool, used to cause cyberattacks as well as to prevent them. The key is for companies to be aware of the risks and start implementing solutions now, while keeping in mind that AI cannot fully replace humans.