OpenAI is doubling down on safety with its latest video generation model, Sora 2, building in robust protections from the ground up. The company's approach, detailed in its latest announcement, aims to balance creative freedom with concrete safeguards.
Content Provenance and Identity

Every Sora 2 output will carry both visible and invisible provenance signals. On the invisible side, each video is signed with industry-standard C2PA metadata, allowing high-accuracy tracing back to the platform; internal detection tools further bolster this, building on systems already proven in ChatGPT's image generation capabilities. On the visible side, each video carries a dynamically generated watermark that also features the creator's name, clearly distinguishing AI-generated content from authentic media.
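C2PA metadata rides inside the video container itself. As a rough illustration of where such signatures live, the sketch below walks the top-level box layout of an ISO BMFF (MP4) file; it does not verify anything cryptographically, and the specific box type and identifier C2PA uses are defined by the C2PA specification, so assume real verification is done with a proper C2PA SDK rather than code like this.

```python
import io
import struct

def list_top_level_boxes(data: bytes):
    """List the top-level boxes of an ISO BMFF (MP4) byte stream.

    Returns (box_type, size) tuples. Provenance manifests are embedded
    in container-level boxes, so spotting a candidate box is the first
    step before handing the file to a real C2PA verifier.
    """
    boxes = []
    stream = io.BytesIO(data)
    while True:
        header = stream.read(8)
        if len(header) < 8:
            break
        # Each box starts with a 32-bit big-endian size and a 4-char type.
        size, box_type = struct.unpack(">I4s", header)
        if size == 1:
            # size == 1 means a 64-bit extended size follows the type field.
            size = struct.unpack(">Q", stream.read(8))[0]
            body_len = size - 16
        else:
            body_len = size - 8
        boxes.append((box_type.decode("ascii", "replace"), size))
        stream.seek(body_len, io.SEEK_CUR)  # skip the box body
    return boxes

# Synthetic two-box stream: an 'ftyp' box and a 'uuid' box (the kind of
# container-level box where embedded metadata can live).
sample = (
    struct.pack(">I4s", 16, b"ftyp") + b"isom\x00\x00\x00\x01"
    + struct.pack(">I4s", 24, b"uuid") + b"\x00" * 16
)
print(list_top_level_boxes(sample))
```

Running this on the synthetic stream lists both boxes with their declared sizes; on a real Sora 2 download, the same walk would surface the container boxes that a C2PA-aware tool would then parse and validate.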
When creating videos from images of real people, users must attest that they have consent from the individuals featured and hold the rights to the media. These image-to-video generations are subject to particularly stringent safety guardrails, exceeding even those applied to Sora Characters.
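The attestation requirement amounts to a gate on the request path: both claims must be present before generation proceeds. The sketch below illustrates that gating logic; the type and field names are hypothetical, invented here for illustration, and do not reflect OpenAI's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class UploadAttestation:
    # Hypothetical fields mirroring the two attestations described:
    # consent from the people pictured, and rights to the media itself.
    has_subject_consent: bool
    has_media_rights: bool

def may_generate_from_image(att: UploadAttestation) -> bool:
    """Gate an image-to-video request: both attestations are required."""
    return att.has_subject_consent and att.has_media_rights

# A request missing either attestation is refused before generation starts.
print(may_generate_from_image(UploadAttestation(True, True)))   # allowed
print(may_generate_from_image(UploadAttestation(True, False)))  # refused
```

Keeping the check as a single pure function makes it easy to apply uniformly, and to layer the stricter image-to-video guardrails on top of it.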