RBM Handbook for CPaaS – Chapter 5
AI Messaging Threats: How AI Empowers Both Cyber Spammers and Defenders
AI is both an asset and a threat in spam protection.
As the use of rich media grows, AI messaging threats are of increasing concern to the messaging ecosystem. Offensive AI must be mitigated by defensive AI, and spammers and defenders are locked in an arms race, highlighting AI’s critical role on both sides of the A2P messaging security landscape.
Handbook: Securing Messaging in the New Age of Rich Media
This is an excerpt from our handbook Securing Messaging in the New Age of Rich Media, a guide to RBM security for CPaaS providers. Download the full handbook for a comprehensive overview of RBM security.
AI Creates New Security Challenges
Both legitimate brands and spammers can leverage AI to increase their impact through messaging channels. The transition to rich media messaging enables new possibilities for creating credible scam messages using AI-created media content. A spammer may, for example, use it to:
- Create huge volumes of personalized spam. AI can generate spam and highly convincing phishing messages that are personalized and contextually relevant, making it harder for recipients to identify them as fraudulent.
- Localize messages. Presenting spam in the recipient’s native language makes it more trustworthy, and AI provides spammers with high-quality, natural-sounding translations.
- Manipulate data in transit at scale. Using AI, unencrypted data can be manipulated at scale while in transit, altering messages or injecting malicious content. This can compromise the integrity of communications and lead to data breaches.
- Fine-tune messages to evade spam filters. By constantly adapting to and learning from detection patterns, spammers can evade traditional security measures, making it more challenging for CPaaS providers to identify and block malicious activities.
- Create polymorphic spam campaigns. A large number of variations makes pattern or signature recognition more difficult.
AI improves spam quality, making scam messages more believable and increasing the risk that targets will fall victim. Through AI, spammers can scale up campaigns with highly targeted, quality content that will be more difficult for subscribers to identify as malicious. While still necessary, consumer education will be less effective when personalization and contextualization make scams more relevant and seemingly legitimate, preventing many subscribers from recognizing them as fraudulent. This means that messaging security measures must be applied before subscribers are reached.
For legitimate brands, AI also allows quick, hands-off personalization of the messages sent to recipients. Even with good intentions, a lack of human intervention can lead to content that violates policies, such as content inappropriate for the intended audience. The major challenge for CPaaS providers is identifying which messages contain spam or non-compliant content in a flood of media-enriched messages. Spam filters that apply rules to text alone will not be enough to counter AI-driven spam.
AI Enables New Protection Mechanisms
Spammers’ use of AI challenges rule-based detection and filtering, which cannot keep up with the advanced personalization, instant morphing, and rich media content that AI puts in spammers’ hands. However, AI is also becoming an indispensable tool in the spam filtering solutions CPaaS providers deploy to detect and prevent misuse of A2P messaging. A key benefit of AI-driven systems is that they can quickly analyze vast amounts of data, identifying patterns and anomalies that may indicate intentional spam or accidental non-compliance. Unlike rule-based systems, AI can interpret pictures and other media, which is an invaluable feature in the rich media messaging landscape. Many spam filtering use cases can benefit from AI-driven solutions, including:
- Rich media interpretation. Rich media, such as images, may contain restricted or prohibited material. AI-driven image analysis can interpret images and conclude whether they are compliant.
- Text interpretation. Text can be interpreted and classified into different types of communication, such as marketing, notifications, or spam. This can be used to ensure compliance with local regulations and to filter spam.
- OCR. AI improves the accuracy and adaptability of optical character recognition (OCR), allowing it to handle a wide range of text types, including handwriting and text deliberately obfuscated to be difficult for machines to read.
- URL analysis. Spam messages often include URLs pointing to malicious websites. AI-driven URL analysis can detect malicious links without previous knowledge of the link or its target, thereby protecting subscribers in real time.
- Behavioral analysis. AI can detect anomalies in sending patterns indicating misuse of messaging services. Using AI-driven anomaly detection, fraud cases can be detected and prevented.
- Multi-modal analysis. AI can simultaneously analyze text, images, and other elements in rich media messages, providing a comprehensive approach to content moderation.
- Conversational analysis. Two-way conversations with several messages in each direction need intelligent spam detection and input validation.
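To illustrate how several of these use cases can work together, the sketch below combines text interpretation, OCR-extracted image text, and URL analysis into a single verdict. The classifier functions are stubs standing in for trained AI models; the category names, keyword heuristics, and the `.bad-example` domain check are purely illustrative assumptions, not a real detection method.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    text: str
    image_text: str = ""               # text extracted from rich media via OCR
    urls: list = field(default_factory=list)

# Stub classifiers standing in for trained AI models; the categories and
# heuristics below are purely illustrative.
def classify_text(text: str) -> str:
    """Classify text as 'marketing', 'notification', or 'spam' (stubbed)."""
    lowered = text.lower()
    if "you have won" in lowered or "urgent payment" in lowered:
        return "spam"
    return "notification"

def url_is_malicious(url: str) -> bool:
    """Stand-in for AI-driven URL analysis (lexical and reputation features)."""
    return url.endswith(".bad-example")

def verdict(msg: Message) -> str:
    """Combine text, OCR'd image text, and URL signals into a single verdict."""
    if classify_text(msg.text) == "spam" or classify_text(msg.image_text) == "spam":
        return "block"
    if any(url_is_malicious(u) for u in msg.urls):
        return "block"
    return "allow"

print(verdict(Message(text="Your parcel is on its way")))                  # allow
print(verdict(Message(text="Hi!", image_text="URGENT PAYMENT required")))  # block
```

The key design point is that the message is blocked if any one modality trips a detector, so spam moved from the text body into an image is still caught once OCR has extracted it.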
Case Study: I Know Where You Live
There have recently been reports of spammers using screenshots from Google Street View showing the outside of a victim’s house to intimidate the victim into believing the scammers have physically been there (see, for example, APWG’s Phishing Reports). This tactic reinforces threats in “I know where you live” extortion campaigns, where scammers demand payment to leave the victim alone. These often evolve from sextortion campaigns, where scammers claim to have compromising material that they threaten to share unless they receive payment.
So far, this tactic has been seen mainly in email scam campaigns. However, scammers often adapt their tactics across different communication platforms. The effectiveness of using street view images to make threats appear more credible could potentially be applied to other messaging formats, including MMS and RBM/RCS, as these can also carry rich media.
Scaling these spam campaigns requires automation, which can be achieved using AI agents. Spammers can obtain personal details through data breaches, but capturing screenshots and composing personalized messages is tedious. An AI agent can scrape information from the internet, capture the screenshots, and compose and send the messages.
These types of spam campaigns are typically detectable through the text part of the message, either using fingerprints that can detect variations and bypass techniques, or using AI that can understand the intent of the message text. However, spammers are creative, and moving the text into an image instead requires defensive AI tools that can interpret text within images.
Mock-up of how an “I know where you live” scam could look in MMS format
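One simple form of the text fingerprinting mentioned above normalizes a message before hashing it, so trivial variations (spacing, casing, common homoglyph substitutions) collapse to the same signature. The sketch below is a minimal illustration; the tiny homoglyph table is an assumption, and a real system would use a far larger mapping (e.g. based on Unicode confusables data).

```python
import hashlib
import re

# Tiny illustrative homoglyph table; a real system would map many more
# look-alike characters.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "i", "3": "e", "5": "s", "@": "a"})

def fingerprint(text: str) -> str:
    """Hash a normalized form of the text so trivial variants collide."""
    norm = text.lower().translate(HOMOGLYPHS)
    norm = re.sub(r"[^a-z]+", "", norm)  # drop spacing, punctuation, digits
    return hashlib.sha256(norm.encode()).hexdigest()[:16]

a = fingerprint("I know where you live. Pay now!")
b = fingerprint("I  kn0w  where  y0u  l1ve... P@y n0w")
print(a == b)  # True: both variants collapse to the same fingerprint
```

This catches cosmetic evasion of an exact-match filter; once the text is rendered into an image instead, the same fingerprint can only be computed after an OCR step has recovered the text.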
AI vs Humans in Spam Detection
Fundamentally, any task that AI can perform, a human analyst can also perform. Compared to human analysts, however, AI has several advantages, including:
- Language skills. AI is not bound to specific language skills but can operate in any language with the same semantic comprehension.
- Scaling. Machine-based spam detection is more straightforward to scale up (or down) when needed.
- Reduced stress. Prohibited content is prohibited for a reason. Human analysts reviewing messages are constantly exposed to violence, hate, and other stress-inducing content, which is a workplace hazard.
- Privacy. With the increasing personalization of legitimate A2P communication, privacy is a growing concern, and brands may want to keep their communication with consumers private. AI-based analysis preserves privacy as long as it excludes human review.
- Timeliness. Human review cannot be done in real time. AI, on the other hand, can provide immediate analysis and a verdict on whether a message should be blocked, flagged, or allowed through. If detection takes on the order of milliseconds, it can be done inline.
- Availability. Many organizations struggle to find and finance skilled analysts, and the growing demands caused by increasing message volumes make offloading to intelligent solutions imperative.
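The timeliness point above can be made concrete: an inline filter must return a verdict within a fixed latency budget, or fall back to letting the message through and flagging it for asynchronous review. The sketch below illustrates this pattern; the 50 ms budget and the "allow-and-review" fallback label are illustrative assumptions, not a prescribed design.

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

LATENCY_BUDGET_S = 0.05  # 50 ms inline budget; the value is illustrative

def inline_verdict(classify, message: str) -> str:
    """Return the classifier's verdict if it arrives within budget,
    otherwise let the message through and flag it for async review."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(classify, message)
    try:
        return future.result(timeout=LATENCY_BUDGET_S)
    except FutureTimeout:
        return "allow-and-review"
    finally:
        pool.shutdown(wait=False)

print(inline_verdict(lambda m: "block", "spam text"))  # block
slow = lambda m: (time.sleep(0.2), "block")[1]         # model slower than budget
print(inline_verdict(slow, "slow case"))               # allow-and-review
```

The trade-off this encodes is the one described in the text: a model that cannot answer within the budget can still contribute, but only to after-the-fact flagging rather than inline blocking.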
There are, however, also some limitations of AI systems in spam filtering:
- Intuition. AI has yet to conquer intuition, which is essential in much threat intelligence analysis work, and is fundamentally human. Understanding why someone would do something—what they can gain from an action—can be critical in determining whether messages are spam, especially with social engineering techniques at play.
- Accuracy. The AI models available today are not accurate enough to independently manage spam filtering. It is not possible to combine an acceptable detection rate with a low enough false positive rate, not even with extensive training of the model.
- Training and monitoring. AI models need constant retraining to tackle new spam variants. Learning systems can be effective in finding variants in existing spam techniques, but when spamming tactics evolve into completely new approaches, these need to be detected with other means and the models must be retrained.
- Resource usage and real-time constraints. AI is resource-hungry, and analyzing rich media content requires far more processing than analyzing text. Even where computing resources are available, the processing can introduce latencies that make AI unsuitable for inline, real-time deployments.
AI is no silver bullet for spam protection, but its impact and potential in terms of both its offensive and defensive capabilities cannot be ignored. Rich media messaging will require AI in spam protection, but it needs to be combined with human analysts to train, supervise, and complement it to achieve the accuracy needed to provide sufficient protection.
Keeping up With AI Developments
Thanks to the blistering speed of advancements in the field, some of the shortcomings of AI-driven solutions today will be solved within a couple of years, though others will take longer to fix.
AI agents, for example, make AI a much more helpful technology for spam filtering. An AI agent can improve spam filtering by connecting it with threat intelligence, e.g., data about spam content paired with metadata such as sender, sending times, recipients, and targeted networks. This allows the agent to evaluate messages against current, continuously updated intelligence, providing contextual understanding. However, building the models and the agents requires specific expertise and continuous threat intelligence updates.
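As a minimal sketch of combining a content model’s score with threat-intelligence metadata, the function below merges three signals: sender reputation, an AI content score, and sending rate. The reputation table, score thresholds, and verdict labels are all illustrative assumptions; a real deployment would query a continuously updated intelligence feed.

```python
# Illustrative threat-intelligence store; names and thresholds are assumptions.
SENDER_REPUTATION = {"shortcode-1234": "trusted", "sender-777": "flagged"}

def contextual_verdict(sender: str, content_score: float,
                       msgs_last_minute: int) -> str:
    """Combine an AI content score (0.0 clean .. 1.0 certain spam) with
    sender metadata and observed sending rate."""
    reputation = SENDER_REPUTATION.get(sender, "unknown")
    if reputation == "flagged" and content_score > 0.3:
        return "block"   # weak content signal + bad reputation is enough
    if content_score > 0.9:
        return "block"   # strong content signal alone suffices
    if reputation == "unknown" and msgs_last_minute > 1000:
        return "flag"    # sudden burst from an unknown sender: route to review
    return "allow"

print(contextual_verdict("shortcode-1234", 0.2, 50))  # allow
print(contextual_verdict("sender-777", 0.5, 10))      # block
print(contextual_verdict("new-sender", 0.1, 5000))    # flag
```

The point of the sketch is the contextual understanding described above: a borderline content score that would pass on its own becomes a block when intelligence about the sender says otherwise.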
The announcement of DeepSeek in early 2025 showed that it is possible to build high-performing large language models with less computational power and at a lower cost. Small language models developed specifically for spam filtering can improve this efficiency further. There is still a long way to go before resource consumption is no longer an issue, especially considering the shift toward rich media in messaging. However, a smaller resource footprint allows more tasks to be accomplished while maintaining the accuracy and pace of the solution, which in turn allows more filtering to be done inline.
AI should be adopted wherever it provides advantages to spam filtering. It is not a silver bullet for mitigating every kind of misuse of messaging channels, but it can improve the efficiency of mitigative actions. As AI keeps developing, its usability and effectiveness in messaging security will only increase.
Like what you’ve read? Click below to get your free copy of Enea’s RBM Handbook for CPaaS providers – Securing Messaging in the Age of Rich Media and Artificial Intelligence.
