The article focuses primarily on the newly enacted Take It Down Act, a law designed to combat revenge porn and AI-generated deepfakes. While the law's effectiveness and potential for overreach remain debated, the article highlights the role of AI-powered content detection startups in helping online platforms comply. One such startup, Hive, is presented as a key player in this space. The article does not specify how much funding Hive has raised, but it does note that the company works with major platforms including Reddit, Giphy, Vevo, Bluesky, and BeReal to detect deepfakes and child sexual abuse material (CSAM), suggesting substantial investment in its detection technology and a growing role in the fight against harmful online content.
Hive's technology is offered as software-as-a-service (SaaS): online platforms integrate Hive's API into their own systems, enabling proactive monitoring of uploaded content so that non-consensual intimate imagery (NCII) can be detected and removed before it spreads. Hive's support for the Take It Down Act suggests the company expects the legislation, despite its controversies, to increase demand for such services, and the undisclosed funding round likely fuels the expansion and development needed to meet that demand.
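The integration pattern described above, where a platform calls a moderation API on each upload and blocks flagged content before publication, can be sketched as follows. This is a hypothetical illustration only: the function names, label names, and response shape are invented for this sketch and are not Hive's actual API.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModerationResult:
    # Hypothetical response shape: label -> confidence score in [0, 1].
    labels: Dict[str, float]

def should_block(result: ModerationResult, threshold: float = 0.9) -> bool:
    """Block the upload if any harmful-content label exceeds the threshold."""
    harmful = {"ncii", "csam", "deepfake"}  # illustrative label names
    return any(result.labels.get(label, 0.0) >= threshold for label in harmful)

def handle_upload(content: bytes,
                  classify: Callable[[bytes], ModerationResult]) -> str:
    """Proactive pipeline: classify first, publish only if nothing is flagged.

    In production, `classify` would wrap a network call to the vendor's
    moderation endpoint; here it is injected so the flow is testable offline.
    """
    result = classify(content)
    if should_block(result):
        return "rejected"   # removed before it can be disseminated
    return "published"

# Usage with a stubbed classifier standing in for the remote API call:
stub = lambda _content: ModerationResult(labels={"deepfake": 0.97})
print(handle_upload(b"example-image-bytes", stub))  # rejected
```

The key design point is that moderation runs synchronously in the upload path, which is what makes removal "proactive" rather than complaint-driven.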