As generative AI rapidly evolves to process both text and images, a new Multimodal Safety Report released today by Enkrypt AI, a leading provider of AI safety and compliance solutions for agent and multimodal AI, reveals critical risks that threaten the integrity and safety of multimodal systems.
The red teaming exercise was conducted on several multimodal models, covering safety and harm categories described in the NIST AI Risk Management Framework (AI RMF). Newer jailbreak techniques exploit the way multimodal models process combined media, bypassing content filters and producing harmful outputs—without any obvious red flags in the visible prompt.
“Multimodal AI promises incredible benefits, but it also expands the attack surface in unpredictable ways,” said Sahil Agarwal, CEO of Enkrypt AI. “This research is a wake-up call: the ability to embed harmful textual instructions within seemingly innocuous images has real implications for enterprise liability, public safety, and child protection.”
