With every powerful discovery or invention from fire through nuclear energy to artificial intelligence, people have found ways to use it for both positive and negative purposes. As Generative Artificial Intelligence (Gen-AI) has opened the door to magnificent life-changing possibilities, the question arises: How can generative AI be used in cybersecurity?
This emerging technology holds great potential for bolstering cyber defenses, but cybercriminals are also exploring ways to orchestrate sophisticated, automated AI-driven cyberattacks that could render today’s conventional security measures ineffective.
This blog post will walk you through approaches that can help you harness the many benefits of generative AI in cybersecurity defense while using the same technology to minimize the risk of AI-generated cyberattacks that could lead to a major, costly data breach.
Even as global governmental initiatives get underway to address this rapidly growing and extremely powerful technology, a fundamental gap remains between cybersecurity professionals who do not understand data science and AI data scientists who do not understand cybersecurity.
Artificial intelligence brings many promises to cybersecurity, along with some disadvantages. However, we cannot and should not squelch its positive forward movement, and we should maintain awareness of its vulnerabilities.
Read our blog post, “Is the U.S. Energy Sector Prepared for Increased Cybersecurity Threats?”
Examples of AI/ML in cybersecurity: offense and defense
Attackers have pivoted to using AI and machine learning offensively at unprecedented speed and scale, outpacing security teams’ ability to combat this new wave of “offensive AI.”
The National Institute of Standards and Technology (NIST) has identified four types of cyberattacks that manipulate the behavior of AI systems:
- Evasion, which alters an input or prompt to change how the system responds to it, potentially causing misinterpretation or confusion about the true intent
- Poisoning, which introduces corrupted data into a large language model’s training set by inserting numerous instances of inappropriate content (a toy illustration follows this list)
- Privacy, which attempts to extract sensitive information from the training data in order to misuse it
- Abuse attacks, which involve inserting incorrect information into a source that an AI system absorbs, repurposing the system away from its intended use
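To make the poisoning category concrete, here is a toy, self-contained sketch: synthetic one-dimensional data and a simple threshold classifier stand in for a real model, and all data and names are invented for illustration. Injecting mislabeled points near the decision boundary drags the learned threshold away from its clean position and degrades accuracy on clean test data.

```python
# Toy poisoning illustration (synthetic data, not a real LLM pipeline):
# mislabeled samples near the decision boundary shift the learned model.
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: class 0 clusters near -2, class 1 near +2.
X_clean = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])
y_clean = np.array([0] * 200 + [1] * 200)

# Attacker injects points near the boundary, falsely labeled class 0.
X_poison = rng.normal(1.0, 0.3, 150)
y_poison = np.zeros(150, dtype=int)

def train_threshold(X, y):
    """Fit a 1-D 'predict 1 if x > t' classifier by minimizing training error."""
    candidates = np.sort(X)
    errors = [np.mean((X > t).astype(int) != y) for t in candidates]
    return candidates[int(np.argmin(errors))]

def accuracy(t, X, y):
    return np.mean((X > t).astype(int) == y)

# Held-out clean test set.
X_test = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
y_test = np.array([0] * 100 + [1] * 100)

t_clean = train_threshold(X_clean, y_clean)
t_poisoned = train_threshold(np.concatenate([X_clean, X_poison]),
                             np.concatenate([y_clean, y_poison]))

print(f"clean-trained accuracy:  {accuracy(t_clean, X_test, y_test):.2f}")
print(f"poison-trained accuracy: {accuracy(t_poisoned, X_test, y_test):.2f}")
```

The same dynamic plays out at far greater scale when inappropriate content is planted in an LLM’s training corpus.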
NIST has also developed the Artificial Intelligence Risk Management Framework (AI RMF 1.0) to support trustworthy and responsible use of this technology, while the EU AI Act stands as the world’s first comprehensive AI legislation.
The future of AI in cybersecurity
“You don’t want to bring a knife to a gun fight.”
AI-driven attacks are already underway. In fact, AI, which anyone can use, is being used by global threat groups for hacking campaigns. OpenAI and Microsoft have confirmed that threat groups linked to Russia, North Korea, Iran, and China are using OpenAI’s tools for attacks: running open-source queries, translating content, searching for errors in code, and performing basic coding tasks. These state-sponsored criminal groups can rapidly adopt generative AI to scale their attack capabilities beyond conventional network security controls.
For example, there are no secure coding standards for AI-generated code, yet organizations are using it for decision-making and putting it into production systems.
Another example is “hallucination abuse,” which produces unintended data outcomes. A threat actor tries to manipulate a large language model’s (LLM’s) output by feeding it biased or toxic information or prompts, forcing a decision the LLM would not otherwise make, such as granting access to something or including content it should not in a response.
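One common defense against this kind of manipulation is to treat the model’s output as a suggestion, never as an authorization. The minimal sketch below, with hypothetical user names, actions, and function names, shows a deny-by-default policy check sitting between the LLM and any sensitive action:

```python
# Hedged sketch: raw LLM output never grants access directly. A manipulated
# ("hallucination abuse") response cannot authorize anything on its own,
# because a deterministic policy check has the final say. The users,
# actions, and function names here are hypothetical.
ALLOWED_ACTIONS = {
    "alice": {"read_reports"},
    "bob": {"read_reports", "export_data"},
}

def authorize(user: str, llm_suggested_action: str) -> bool:
    """Deny by default: the LLM proposes, the explicit policy decides."""
    return llm_suggested_action in ALLOWED_ACTIONS.get(user, set())

# Even if a toxic prompt coaxes the model into suggesting "export_data"
# for alice, the policy check still refuses it.
print(authorize("alice", "export_data"))  # False
print(authorize("bob", "export_data"))    # True
```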
Due to the rapid adoption of AI into our lives, the attack surface is increasing.
How can AI be used in cybersecurity defense?
Organizations need to make a concerted effort to safeguard their LLMs and related data from attack. This would mean adopting a proactive cybersecurity posture, which includes an “AI-defensive” approach to cybersecurity controls to meet the “AI-offensive” attacks.
Steps toward securing against AI attacks using AI defenses
AI attack methods need to be met with the same technology and rigor. AI security controls should not hold the technology back; they should enable it to move forward and innovate rapidly while providing guardrails for safe, secure, and ethical use.
- Train your employees in ethical, productive, and responsible usage
- Develop an acceptable use policy defining appropriate usage
- Hold AI to the same standard security controls (encryption, access controls, segmentation, code reviews, and secure coding standards) as any other technology, without slowing innovation
- Validate your model to ensure it is producing the expected output (a minimal validation harness follows this list)
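As a starting point for that validation step, a simple harness can run a fixed set of golden test cases against the model on every update. The sketch below is one possible shape for such a harness; the model_predict stub, label names, and prompts are hypothetical placeholders for your own model’s interface and expected outputs.

```python
# Minimal model-validation harness sketch. Swap model_predict for your
# model's real inference call; GOLDEN_CASES are invented examples.
from typing import Callable

def model_predict(prompt: str) -> str:
    # Stub standing in for the model's inference call.
    return "account_support" if "password" in prompt.lower() else "fraud_review"

# Golden cases: (input prompt, substring the output must contain).
GOLDEN_CASES = [
    ("Classify: 'Reset my password'", "account_support"),
    ("Classify: 'Wire $10k to this account now'", "fraud_review"),
]

def validate(predict: Callable[[str], str]) -> bool:
    """Run every golden case; report and fail on any mismatch."""
    ok = True
    for prompt, expected in GOLDEN_CASES:
        output = predict(prompt)
        if expected not in output:
            print(f"FAIL: {prompt!r} expected {expected!r}, got {output!r}")
            ok = False
    return ok

if __name__ == "__main__":
    print("model validation passed:", validate(model_predict))
```

Running a harness like this in CI means a model update that silently changes behavior fails the build instead of reaching production.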
Read our blog post, “Can You Spot a Cyberattack?”
If you are bringing a pre-trained model into your use case, scan it for vulnerabilities before placing it into production. Determine whether the model contains code that does not belong or has been altered in any way.
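For example, many models are distributed as Python pickle files, and loading an untrusted pickle can execute arbitrary code. As a first-pass heuristic (dedicated model scanners go much further), a sketch like the one below can flag suspicious imports before the file is ever loaded; the file path and the DANGEROUS module list are assumptions for illustration.

```python
# Heuristic sketch: inspect a pickle-serialized model's opcodes for imports
# of dangerous modules WITHOUT loading (and thus executing) the pickle.
import pickletools

# Assumed denylist for illustration; extend for your environment.
DANGEROUS = {"os", "subprocess", "builtins", "posix", "nt", "socket"}

def scan_pickle(path: str) -> list:
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    recent_strings = []  # tracks string pushes feeding STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
        elif opcode.name == "GLOBAL":
            # GLOBAL's arg is "module name" joined by a space.
            module = arg.split(" ")[0]
            if module.split(".")[0] in DANGEROUS:
                findings.append(f"GLOBAL import: {arg}")
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            module = recent_strings[-2]
            if module.split(".")[0] in DANGEROUS:
                findings.append(
                    f"STACK_GLOBAL import: {module}.{recent_strings[-1]}")
    return findings

# Example (hypothetical path): refuse to load the model on any finding.
# if scan_pickle("model.pkl"): raise RuntimeError("suspicious model file")
```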
Adopt security tools that leverage generative AI to reduce the risk of cyberattacks, such as:
- Detection: Know who is using your model, what they are using it for, and whether threat actors are using these models at your endpoints or on your network
- Response: Automate incident response to detect anomalies in network traffic and malicious software (see the anomaly-detection sketch after this list)
- Prediction: Use prediction capabilities to prepare for, and potentially prevent, future attacks based on historical data
- Input/output validation: If we are providing inputs/outputs to customers, we need to understand how they may be interacting with attackers
- User/customer interaction: Understand how users are interacting with your models. Are they trying to abuse them or just trying to leverage your solution?
- Visibility into LLM training platforms: Gain visibility into what data sources and code repositories your AI models are being trained on. Malicious actors are intentionally contributing vulnerable code to public code repositories, knowing that generative AI systems will learn from and reproduce that vulnerable code when training on those repositories.
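As a concrete illustration of the detection and response bullets above, the sketch below flags traffic measurements that deviate sharply from a rolling baseline. It is a minimal z-score heuristic over invented byte counts; production tooling would use far richer features and models.

```python
# Minimal anomaly-detection sketch: flag samples far from a rolling baseline.
# The traffic numbers below are invented for demonstration.
from collections import deque
import statistics

class TrafficAnomalyDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent bytes-per-second samples
        self.threshold = threshold           # z-score considered anomalous

    def observe(self, bytes_per_sec: float) -> bool:
        """Return True if this sample looks anomalous vs. the baseline."""
        anomalous = False
        if len(self.history) >= 10:  # wait for a minimal baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            anomalous = abs(bytes_per_sec - mean) / stdev > self.threshold
        self.history.append(bytes_per_sec)
        return anomalous

detector = TrafficAnomalyDetector()
samples = [1000, 1050, 980, 1020, 990, 1010, 1005, 995, 1030, 1000, 50000]
for sample in samples:
    if detector.observe(sample):
        print(f"ALERT: anomalous traffic volume {sample} B/s")
```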
Work with reputable companies offering generative AI security products:
- Verify their experience and knowledge in AI and cybersecurity
- Seek dedicated firms that focus primarily on this area
- Ensure their experience with cybersecurity includes user end points
- Obtain references to avoid the many companies that overpromise
- Ensure privacy and security policies are in place for ethical usage
- Ensure you have control of data that is being fed into the models
Incoming AI cybersecurity regulations
As profit is a driving force, ethics and security risk being sidestepped when it comes to generative AI. That is why this blog post has outlined why companies must take a proactive, ethical approach to utilizing generative AI for cybersecurity defense while mitigating the risks of offensive AI attacks.
From understanding the promises and pitfalls, to implementing defensive strategies and technologies, to getting ahead of forthcoming regulations, taking proactive, company-wide ownership of AI cybersecurity best practices will reduce risk and build customer trust.
Those who get out in front will gain an advantage before compliance is forced upon the industry.
Unlock cybersecurity insights for the energy industry
Interested in learning more about cybersecurity and its implications in the energy sector? Click below to request access to our webinar, “Evolving Cybersecurity Threats & Challenges to Public Power.”