Infobip and SplxAI: Ensuring the safety and security of GenAI solutions

Senior Product Marketing Manager

Julian Dawkins

Senior Product Marketing Manager

Given the development of more capable GenAI technologies with practical applications, there are significant opportunities for businesses to enhance customer experience at the same time driving efficiencies and reducing costs.


However, many companies are still reluctant to sign off on GenAI projects due to the perceived risks associated with the new technology. Some early adopters have unfortunately suffered reputational damage after deployments didn’t go to plan. Incidents have included Air Canada’s chatbot providing incorrect information and AI tricking users into sharing private data.


Although these isolated incidents have been magnified by media reporting, many organizations are wary about embedding the technology, despite the overwhelmingly positive benefits. There is a clear need for more stringent controls that minimize the risk and build public trust in GenAI.


This is why we have partnered with SplxAI, a specialist in securing AI apps and solutions, to integrate their AI security platform into Infobip’s conversational AI offerings. Together, we are creating solutions that can help enterprises leverage AI successfully by making them resilient to both existing and emerging threats. 

The security challenges for GenAI projects

Generative AI technology has the potential to revolutionize the way businesses interact with audiences from all backgrounds. We don’t see GenAI replacing humans, however it is already proving itself as a tool that can complement and enhance business operations. By deploying conversational interfaces, brands can provide a better service to more customers and automate a greater portion of their operations without losing the human touch.


From conversational customer service chatbots to automated marketing and sales solutions GenAI has the potential to change the way people interact with and buy from the brands they love.


However, there is a need for technology that can protect businesses and their customers from some of the unique security challenges that GenAI presents. These include leaking proprietary company data, producing false or damaging data, or even executing malicious instructions.


Some of the new terms that have entered our collective vocabulary in the last couple of years include:

Hallucinations: These can occur when an AI model generates incorrect or nonsensical information that appears plausible. This can happen because the model predicts text based on patterns in its training data, not actual human understanding. This can make it confidently give garbage or factually wrong outputs.


Prompt injections: Prompt injections exploit the fact that the Large Language Models (LLMs) that form the basis of GenAI solutions process both developer instructions and user inputs as natural language text. Using specific inputs, hackers can make the AI ignore its original instructions and follow the malicious ones instead.


Jailbreaks: These are techniques used to trick the AI into ignoring the safety protocols intended to ensure that it operates safely and responsibly. These guardrails are meant to prevent the AI from producing harmful content, making biased decisions, or executing malicious instructions. Jailbreaks use tactics like prompt injections, evasion, and model manipulation to bypass these protocols.

In addition to these threats, AI chatbots can also promote competitor companies if the correct input and output filters aren’t applied. This can lead to reputational damage for the business, loss of consumer trust, and even regulatory fines in cases where there are data losses.


This is where specialist technology providers like SplxAI can help to ensure that AI deployments are robust, secure, and trusted – before and after going live.

Securing AI, now and in the future

According to Gartner, by 2027 40% of Generative AI solutions will have multimodal functionality, allowing users to interact with AI systems via text, voice, and image inputs. While this advancement provides more opportunities for customer engagement, it also opens the door to new types of exploitation. Multimodal AI systems increase the complexity of potential attacks, such as jailbreaks through voice manipulation or phishing through malicious image inputs.

This is where the Infobip and SplxAI collaboration comes into play. SplxAI’s platform, the first to automate red-teaming for multimodal AI assistants, can generate hundreds of domain-specific attack simulations across various input types within minutes. By continuously testing for the latest AI attack scenarios, SplxAI reinforces Infobip’s ability to ensure that GenAI systems handling text, voice, or image interactions are resilient and secure. 

By proactively identifying vulnerabilities, security concerns can be addressed faster, meaning that AI-powered solutions can be deployed more efficiently. By integrating automated red-teaming and continuous security testing into the AI lifecycle, organizations are able to reduce the risk surface by 95% to keep their AI productive.

Any channel, any industry

As GenAI continues to get integrated into platforms like WhatsApp, Microsoft Teams, and Slack, ensuring secure deployments across different channels is critical. Infobip’s platform, enhanced by SplxAI’s out of the box integrations, makes deploying AI-powered assistants up to 12x faster.

The solution supports any LLM (large language model) and most communication channels, offering a lightweight and efficient way to protect AI applications in industries as diverse as healthcare, finance, and retail.


Whatever the vertical that you operate in, our partnership with SplxAI will enable you to unlock the full potential of conversational AI systems without the worry of data breaches and hallucinations.

Talk to us about how you can benefit from our partnership with SplxAI

Get in touch
Nov 14th, 2024
4 min read
Senior Product Marketing Manager

Julian Dawkins

Senior Product Marketing Manager