RBM Handbook for CPaaS – Chapter 4
The Risks of Rich Media for Business Messaging
Rich media in RBM may be what marketers have missed in A2P SMS. Still, in addition to enabling brands to send more compelling and engaging content, it provides new opportunities for scammers to include malicious content in messages. It also increases the risk of legitimate brands sending non-compliant content inadvertently.
Spammers can include non-compliant content in images and videos. Malicious PDFs can be used for smishing or to distribute malware. Rich media can also be used to make malicious messages appear more legitimate and engaging, increasing the likelihood of users falling victim to scams. For legitimate brands, RBM’s rich features can make it easier to unintentionally cross the line between acceptable and prohibited content. Rich media increases the complexity of managing compliance and opens a new range of possible mistakes that can lead to accidental violations of regulations and policies, including sending restricted promotional content or misleading advertisements.
Handbook: Securing Messaging in the New Age of Rich Media
This is an excerpt from our handbook Securing Messaging in the New Age of Rich Media, a guide to RBM security for CPaaS providers. Download the full handbook for a comprehensive overview of RBM security.
How Rich Media Content Can be Misused
Rich media introduces an array of possible misuse, including:
- Inappropriate Media Content: Brands may send media content inappropriate for their audience, including SHAFT content or advertising images that are too provocative. Distinguishing between what is appropriate and what is not can be more challenging for rich media and complex content types than for plain text mediums.
- Illegal content: Rich media can be illegal or used in ways that make it illegal. Spammers may send information in images about where and when they will be selling illegal drugs. Pictures can be used in hate campaigns.
- Text-as-images: Spammers can bypass text filters by sending images with text embedded. Examples of known uses are sending malicious links as images or drug price lists as images. Hate content can also be disguised in images containing text.
- Leveraging OCR: Some smartphones can automatically translate text in an image to actual text. For example, the app would understand an image of a URL and open the URL in a browser. Spammers use this method to send malicious links or links to prohibited content as images.
- Malicious PDFs: Malicious PDFs are a cybersecurity concern. Recently, some noticeable attacks have happened where PDFs were used to distribute mobile malware.
- Multi-modal content: The combination of text and rich media in RCS business messages can potentially be misleading even when the individual components are not inherently deceptive.
- Impersonation: Spammers can use rich media to impersonate well-known organizations by falsifying logos and branding. Examples include sending images that look like official documents from well-known organizations such as government agencies or logistics companies. Another well-known example is using a bank’s insignia to make it look legitimate. Images like photos of fake IDs of bank representatives have been used in banking frauds.
- Unauthorized content: Using media assets without rights or infringing on copyrights is a risk.
- Dark design: using rich media to design messages to manipulate recipients into taking specific actions, such as clicking links or engaging in a two-way conversation. While call-to-action buttons and 2-way conversations are valuable features in RBM, spammers may use dark designs to lure subscribers to take actions they otherwise would not have.
Case Study: Spyware Distributed through Malicious PDFs
A recent example where malicious PDFs were used to compromise phones was a zero-click attack through WhatsApp targeting almost 100 journalists and members of civil society. A malicious PDF was sent to the targets via WhatsApp groups in a campaign linked to spyware maker Paragon. The story was featured in several media outlets, including Reuters and TechCrunch.
Chatbots Behaving Badly
Interactive two-way conversations with chatbots are seen as one of the key advantages of RBM, but the conversational style of interaction also introduces new threats and vulnerabilities that spammers can exploit. This can happen from either end; spammers can use chatbots to send spam, and badly protected chatbots can be exploited from the subscriber side.
Protecting Subscribers from Rogue Chatbots
- Spam: Spammers can use chatbots to simultaneously hold conversational interactions with many users in the languages of their choice and with a tone that reflects brands and services in a way that boiler-room manual spam gangs cannot.
- Phishing Attacks: Malicious actors can use chatbots to trick users into providing sensitive information, such as passwords or credit card details by emulating similar interactions with existing brands.
- Malware Distribution: Chatbots can be manipulated to distribute malware or ransomware. This is a known technique with online chatbots.
Example from a spam campaign intercepted in January 2025. The logo of PNC (one of the largest and best known banks in the US), is used to give the message a legitimate feel. The URL leads to a phishing site.
Protecting Chatbots from Rogue Subscribers
- Data Breaches: If a chatbot’s security is compromised, unauthorized access to sensitive user data stored within the chatbot’s database can occur.
- Prompt injections: Chatbots can be manipulated from the outside through prompt injections, causing them to leak sensitive and personal data or change their behavior.
- Rogue chatbots: Spammers can set up a rogue chatbot from scratch or manipulate an existing bot through unauthorized access. A compromised chatbot representing a well-known brand can be particularly dangerous as it is likely to be trusted by users and has an extensive reach. Subtle manipulations where the bots silently leak sensitive data to threat actors can go unnoticed for a long time, compromising many users. The conversational interaction between chatbots and users can lower the user’s guard, creating a false sense of trust. Mimicking the language and tone of trusted entities makes it easier to impersonate banks and other institutions. Spammers leverage this to make phishing attacks more convincing and harder for users to detect.
Text is hidden in the image to bypass text-based filters in this spam message intercepted in January 2025. The URL leads to a phishing site. AI-powered real-time URL analysis can identify the link as malicious and block the message. Alternatively, OCR capabilities can extract text from the image for fingerprint analysis.
Like what you’ve read? Click below to get your free copy of Enea’s RBM Handbook for CPaaS providers – Securing Messaging in the Age of Rich Media and Artificial Intelligence.