Everything you need to know about generative AI and security
What is generative artificial intelligence?
Generative AI is a subset of artificial intelligence (AI) capable of generating new data by using data it was trained on. For example, say an AI was trained on textual data. You could ask it to describe a flower.
ChatGPT would respond with a fluent, detailed description of petals, stems, and colors.
Alternatively, giving DALL·E 2, a generative AI trained on visual data, the prompt “a flower” yields an entirely new image of one.
Impressive, right?
Now think about what an AI could do with a bank of passwords obtained from any of the 1,063 security incidents in 2022, which resulted in over 408 million breached records. And hold that thought for a moment.
How does generative AI work?
Recently, we spoke about what you can do with ChatGPT – arguably the most popular generative AI right now – where we dig into how it works.
Generative AI relies on machine learning and neural networks to identify patterns in the datasets it’s “fed”. These learned patterns are then used to process user prompts and output something new.
There are differences in how this is done, depending on the learning approach, foundation model, and algorithms used – basically, how inputs are processed, what they’re matched against, and what form the output takes.
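To make the “learn patterns, then generate” idea concrete, here’s a deliberately tiny sketch in Python: a bigram model that counts which word follows which in its training data, then samples from those counts to produce new text. Real models learn far richer patterns with billions of parameters, but the core loop – absorb patterns, then generate from them – is the same. The toy corpus here is made up for illustration.

```python
import random
from collections import defaultdict

# Toy training corpus standing in for the massive datasets real models use.
corpus = "a flower is a delicate bloom . a flower has soft petals . petals catch the light".split()

# "Training": count which word tends to follow which (a bigram model).
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(prompt: str, length: int = 8) -> str:
    """Generate new text by sampling from the learned word-to-word patterns."""
    word = prompt
    output = [word]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # sample the next word from observed patterns
        output.append(word)
    return " ".join(output)

print(generate("a"))  # e.g. "a flower has soft petals . petals catch"
```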
Examples of generative AI
Most people will have encountered OpenAI’s ChatGPT large language model (LLM) by either using it or having at least heard of it. It’s all but unavoidable.
But LLMs aren’t the only type of generative AI tools. Others include:
- Text generation tools (Jasper, AI Writer)
- Image generation AI (DALL·E, Midjourney, Stable Diffusion)
- Music generation tools (OpenAI’s Jukebox)
- Code generation (OpenAI Codex)
- Multimodal models (GPT-4)
These generative AI tools all draw from existing data to create new textual, visual, and audio content, code, even semiconductor blueprints – the possibilities are virtually endless.
They can also be used to generate new security threats by analyzing and learning from past attacks and network breaches. But the inverse is also true – and generative AI can be used to protect from cyber threats.
How can AI be used for security?
What is AI security?
First off, let’s define what AI security is. Simply put, AI security is the practice of applying artificial intelligence and machine learning to identify, analyze, predict, and remedy cybersecurity threats – and so protect people and businesses from them.
These threats are multiplied in enterprises as security experts deal with protecting:
- A long frontline vulnerable to cyber attacks
- Multiple devices in each organization
- Numerous potential attack vectors
- High volumes of internal network traffic humans can’t monitor
How AI is changing cyber security
The machine learning tools employed in generative AI can be used to do more than turn prompts into homework.
AI and machine learning can analyze immense volumes of data far faster than humans can, allowing for the detection of even well-hidden threats.
For example, machine learning can learn to detect new malware by drawing on analyses of previously detected threats. What’s more, it can detect these threats even when they’re hidden in seemingly innocuous code.
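As an illustration of that idea, here’s a minimal sketch of a classifier trained on features of previously detected threats and used to score a new sample. The features and values are hypothetical stand-ins; real detection pipelines extract hundreds of signals from files and traffic.

```python
# A minimal sketch of learning to flag malicious samples from past detections.
from sklearn.ensemble import RandomForestClassifier

# Each row: [file_entropy, suspicious_api_calls, is_packed]; label 1 = malware.
# Values are invented for illustration.
X_train = [
    [7.9, 12, 1],
    [7.5,  9, 1],
    [4.2,  1, 0],
    [3.8,  0, 0],
    [7.8, 15, 1],
    [4.5,  2, 0],
]
y_train = [1, 1, 0, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# New, unseen sample: high entropy and many suspicious calls,
# hidden in otherwise innocuous-looking code.
new_sample = [[7.7, 11, 1]]
print(model.predict(new_sample))        # [1] -> flagged as likely malware
print(model.predict_proba(new_sample))  # confidence scores
```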
By analyzing past threats, AI and machine learning can also help:
- Predict breach risks
- Detect phishing and smishing attempts
- Filter spam
- Protect passwords
- Identify bots
- Conduct vulnerability management
These are just a few examples of AI/ML-assisted real-time threat prevention.
But AI and machine learning can do even more. Cybersecurity experts are also using generative AI to enhance security.
How AI can improve cyber security
AI and machine learning predictions can help you identify potential attack vectors and set up your defenses accordingly. And generative AI can be used by cyber security experts to hone these defenses.
One example we’re familiar with is the use of AI and machine learning in SMS firewalls, where certain keywords and combinations are tracked to filter out smishing attempts.
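A stripped-down sketch of that keyword-tracking idea might look like the following. The patterns and threshold here are hypothetical; production SMS firewalls combine many more signals, such as sender reputation, URL analysis, and ML scoring.

```python
import re

# Hypothetical keyword/pattern rules for illustration only.
SMISHING_PATTERNS = [
    r"you(?:'ve| have) won",
    r"claim your (?:prize|reward)",
    r"verify your account",
    r"https?://bit\.ly/\S+",   # shortened links are a common red flag
]

def looks_like_smishing(message: str, threshold: int = 2) -> bool:
    """Flag a message when enough suspicious patterns match."""
    hits = sum(bool(re.search(p, message, re.IGNORECASE)) for p in SMISHING_PATTERNS)
    return hits >= threshold

msg = "You have WON a prize! Claim your reward here: https://bit.ly/abc123"
print(looks_like_smishing(msg))  # True
```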
Other broader examples include:
Simulating attacks
Cybersecurity teams can use generative AI to create highly realistic attacks to test and expose human and system preparedness.
This helps prevent future attacks by exposing and remedying any vulnerabilities.
Simulating environments
Generative AI can also simulate real-world environments to test security systems and expose vulnerabilities, helping shore up defenses.
Hardened defenses deter malicious actors, who tend to move on to more vulnerable systems.
Can AI be a threat to cyber security?
Generative AI can be used to defend against a multitude of attacks, but it can also be used to generate a broad range of new and improved threats.
Risks posed by generative AI
Ever received a poorly spelled email or text telling you you’ve won a prize and to click on a link to claim it?
We all have.
And you probably didn’t click on that link because poor spelling is a telltale sign of malicious intent hiding in an innocent email.
Or it used to be.
One of the biggest security threats generative AI poses is generating convincing spam and smishes devoid of telltale spelling and grammatical errors.
Threats used to be isolated to widely spoken languages – mostly English. But generative AI’s multilingual abilities spread the threat across language barriers.
But well-worded scam emails and smishes avoiding early detection aren’t the only threats generative AI poses.
I’m reminded of the scene in Terminator 2: Judgment Day when the T-1000 answers the phone disguised as John Connor’s guardian. [SPOILER ALERT] The savior of humanity in the robot uprising nearly gives away his location to a robot sent back in time to kill him.
Generative AI can almost perfectly synthesize anyone’s voice from a brief sample of a recording. This means a malicious actor can disguise themselves as you, or as someone you trust – or even fool voice biometric identifiers to access your personal data.
It gets scarier.
Sticking with popular culture… last year a “deepfake artist” made it to the finals of America’s Got Talent. Artist Chris Umé’s company Metaphysic deepfaked, in real time, an opera performance by the show’s own host and judges. It was so convincing that the show’s creator called it the best performance in history and fans voted it into the final.
If this can be done live on stage in front of an audience and panel of judges, then just imagine what malicious actors can do.
Generative AI and security vulnerabilities
In April 2023, a group called Home Security Heroes published research on a password-cracking AI, PassGAN. In their tests, it cracked many common passwords in under a minute – 65% within an hour, 71% within a day, and 81% within a month. That’s fast.
It’s a concerning development. PassGAN is a game changer because it employs a Generative Adversarial Network (GAN) – a machine learning model that autonomously learns from actual data breaches (a minimal illustration of the GAN setup follows below).
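Here’s the promised sketch of the adversarial setup a GAN relies on: a generator learns to produce samples a discriminator can’t distinguish from real data. The “real data” below is just a toy numeric distribution; PassGAN itself trains on strings from real password leaks, but the training loop follows the same pattern.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps random noise to a sample; discriminator judges real vs. fake.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) + 4.0   # "real" samples (toy stand-in for leaked data)
    noise = torch.randn(32, 8)
    fake = generator(noise)

    # Train discriminator: label real -> 1, fake -> 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Train generator: try to fool the discriminator into outputting 1 for fakes.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real distribution (4.0).
print(generator(torch.randn(5, 8)).detach().flatten())
```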
Not only are your passwords at risk, but so are existing security systems.
Most current systems are designed to detect attacks based on common tells – like the aforementioned spelling and grammar mistakes. But just as people learn from their mistakes, so does generative AI.
Generative AI can be used to debug code. While this is great for developers, it also means the same AI can be used to find system vulnerabilities and adapt to slip through them with ever-increasing efficiency.
Generative AI security concerns
In addition to the threats we’ve mentioned regarding realistic faked content and biometric voice identification systems, there are general security concerns related to how the technology is used.
A common defense of AI is that it is neutral and can’t do anything the user doesn’t want it to. OpenAI even put safeguards on ChatGPT that place ethical and moral limits on what it will do.
Enter the DAN GPT jailbreak.
Users discovered a prompt that commands ChatGPT to “Do Anything Now”, effectively stripping those safeguards from the world’s most accessible AI.
Around the same time, AutoGPT – an open-source tool that uses ChatGPT to autonomously complete user-prompted tasks – demonstrated its ability when a user employed it to create ChaosGPT.
Warning – the story makes for scary reading.
It’s even scarier when you consider that OpenAI founder Sam Altman himself is calling for regulation of AI, adding “now that [large language models] are getting better at writing computer code, [they] could be used for offensive cyberattacks.”
That is concerning.
Also concerning is the lack of precaution being exercised when using generative AI. Recently, workers at electronics giant Samsung leaked confidential data when they enlisted ChatGPT’s help. Big oof.
One of the greatest threats posed by generative AI is the people using it. This risk can only be mitigated by educating users and enacting company policies that regulate how employees interact with generative AI.
What are the advantages and disadvantages of AI in cybersecurity?
Generative AI has both advantages and disadvantages in cybersecurity. Here are some of them:
Advantages of generative AI in cybersecurity
- Detecting anomalies: Generative AI can detect anomalies in network traffic or system logs that indicate an attack (see the sketch after this list).
- Identifying vulnerabilities: Simulating attacks helps identify and remedy system weaknesses.
- Creating synthetic data: Synthetic data can be used to train machine learning models for cybersecurity tasks like malware detection, resulting in larger and more diverse datasets.
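Here’s the anomaly-detection sketch promised above, using an unsupervised model that learns what “normal” looks like and flags outliers. The traffic features and values are invented for illustration; real systems would extract them from network flows or system logs.

```python
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, requests_per_minute, failed_logins] (hypothetical features).
normal_traffic = [
    [500, 12, 0], [620, 15, 1], [480, 10, 0],
    [550, 14, 0], [600, 13, 1], [530, 11, 0],
]

# Learn the shape of normal activity.
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_traffic)

suspicious = [[50_000, 400, 25]]         # sudden spike: possible exfiltration
print(detector.predict(suspicious))      # [-1] -> flagged as an anomaly
print(detector.predict([[540, 12, 0]]))  # [1]  -> looks normal
```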
Disadvantages of generative AI in cybersecurity
- Vulnerability to adversarial attacks: An attacker crafts input data designed to fool the AI model into producing incorrect outputs, circumventing security measures to gain unauthorized access to systems (a minimal example follows this list).
- Lack of interpretability: Generative AI models can be difficult to interpret, making it challenging to understand how they make decisions or to identify biases and flaws. This is problematic where transparency and accountability matter – for example, in legal proceedings.
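And here’s the promised minimal example of an adversarial attack – the fast gradient sign method (FGSM), which nudges an input just enough to try to flip a model’s decision. The tiny classifier and input are illustrative, not a real security model.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.tensor([[0.5, -0.2, 0.8, 0.1]], requires_grad=True)
true_label = torch.tensor([0])

# Compute the gradient of the loss with respect to the input itself.
loss = loss_fn(model(x), true_label)
loss.backward()

epsilon = 0.3  # perturbation budget: small enough to look "normal"
x_adv = x + epsilon * x.grad.sign()

print(model(x).argmax(dim=1))      # original prediction
print(model(x_adv).argmax(dim=1))  # may flip, despite the tiny change
```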
Future of AI in cybersecurity
According to a Salesforce survey of over 500 IT leaders, generative AI is viewed as a game changer: 67% will prioritize it for their business over the next 18 months.
But 71% expect that generative AI will introduce new security risks to their data, and 99% of respondents believe their business must take measures to leverage it properly.
In other words: the technology is here, and most organizations are excited about it – but unprepared for it.
Generative AI models are, however, expected to play an increasingly important role in cybersecurity. One of the key applications of generative AI is predicted to be in the development of new cybersecurity tools.
For example, generative models can be used to generate signatures for new types of attacks, or to test security systems by generating new attack scenarios.
Humans are already struggling to monitor the immense volumes of network traffic even without the threat of increased attacks from generative AI-fueled bad actors.
Security teams will need to arm themselves with their own generative AI security tools – tools that learn patterns from previous threats to detect new ones in real time.
Already there are tools that can be used to identify whether content was created by AI – and similar tools are expected to be used in cybersecurity to detect AI attacks.
Will AI take over cyber security?
While the role of generative AI is certain to grow in importance, it is unlikely to ever fully replace security teams. There will always be a need for human expertise and intervention.
Generative AI can be used to enhance cybersecurity systems, but it can’t replace the creativity or critical thinking skills of human experts.
In addition, training a generative AI model requires enormous amounts of computing power, data, and expertise. The cost of these elements puts in-house development out of reach for most organizations.
Instead, most will need to rely on commercial or open-source models – which may not be a perfect out-of-the-box fit for them. This, again, will necessitate human involvement.
Conclusion: The rise and risk of generative AI
Coming out of the field of machine learning, generative AI can create realistic images, text, and even full songs. In the hands of skilled programmers, this technology has numerous beneficial applications.
However, it can also be used to undermine online privacy by creating fake profiles or manipulating images and video.
Clearly, this technology has both positive and negative implications.
Authentication systems
An authentication system is the first line of defense against unauthorized access to customer data. It works by verifying the identity of a user or system before allowing access.
Authentication can come in various forms, from passwords to certain biometric scans. These systems prevent unauthorized access to personal data and can also halt intruders attempting to manipulate or extract data for nefarious purposes.
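As a concrete sketch of the password form of authentication: a system should store only a salted hash of each password and compare candidates in constant time. This is a minimal illustration, not a production implementation (which would also handle rate limiting, lockouts, and so on).

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Derive a salted hash; only the salt and digest are ever stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-derive the hash and compare in constant time to resist timing attacks."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("letmein", salt, stored))                       # False
```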
Customer data platforms
A customer data platform (CDP) is another way to secure customer privacy from generative AI. CDPs collect data from various sources, which is then organized and presented to create a single view of customer behavior across multiple channels.
CDPs enable marketers and businesses to segment consumer audiences and tailor messaging while keeping data private. With a CDP, customer data is central to the brand rather than getting lost in disparate systems.
Safeguarding personal data
Businesses that employ authentication systems and customer data platforms are doing what they can to safeguard personal data against cyberattacks or malicious application of generative AI.
However, technology is not enough. Employees and customers must have a basic understanding of how to prevent bad actors from accessing or manipulating information.
What is risk-based authentication?
Risk-based authentication (RBA) dynamically assesses the risk of each login attempt or transaction based on its context – for example, the device, location, and user behavior involved.
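A toy sketch of how such a risk score might be computed is below. The signals, weights, and thresholds are entirely hypothetical; real RBA engines weigh far more factors.

```python
def assess_login_risk(known_device: bool, usual_location: bool,
                      failed_attempts: int, odd_hour: bool) -> str:
    """Score a login attempt and map the score to an action."""
    score = 0
    score += 0 if known_device else 30
    score += 0 if usual_location else 25
    score += min(failed_attempts, 5) * 10
    score += 15 if odd_hour else 0

    if score < 30:
        return "allow"      # low risk: normal login
    if score < 60:
        return "challenge"  # medium risk: require a second factor
    return "block"          # high risk: deny and alert

print(assess_login_risk(True, True, 0, False))   # allow
print(assess_login_risk(False, False, 3, True))  # block
```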
What is passwordless authentication?
Passwordless authentication is a way to verify your identity without using a password. Instead, it uses more secure alternatives like possession factors or biometrics.