DeepMedia Unveils Groundbreaking GenAI Voice Benchmark Dataset

September 3, 2024

In the rapidly evolving landscape of AI-generated content, DeepMedia continues to lead the charge in deepfake detection and AI disinformation security. Today, we're thrilled to announce the release of our latest high-quality GenAI voice benchmark dataset – the largest and most diverse of its kind available for public benchmarking.

A New Standard in Voice AI Security

Our new dataset represents a significant leap forward in the field of voice AI security:

  • 200,000+ voice samples

  • 16 different generative capabilities

  • 8,000 unique speaker identities

  • Diverse representation across age, race, gender, and emotional expression

This comprehensive collection covers a wide range of potential threats, from social media disinformation to AI voice manipulations in call centers and real-time communications.

Unparalleled Accuracy and Robustness

DeepMedia's detection system has achieved remarkable results on this challenging dataset:

  • 97% overall accuracy

  • False positive rate under 3%

  • False negative rate of approximately 3.6%

  • ROC AUC of 0.98
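For readers who want to see how figures like these relate to one another, the sketch below computes accuracy, false positive rate, and false negative rate from a confusion matrix. The counts are hypothetical, chosen only so the resulting numbers land near the ones reported above; they are not DeepMedia's actual evaluation data.

```python
# Illustrative sketch: how accuracy, FPR, and FNR fall out of a
# confusion matrix. All counts below are hypothetical examples.

def detection_metrics(tp, fp, tn, fn):
    """Return (accuracy, false_positive_rate, false_negative_rate)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)   # real voices wrongly flagged as AI-generated
    fnr = fn / (fn + tp)   # AI-generated voices that slip through
    return accuracy, fpr, fnr

# Hypothetical run: 10,000 AI-generated and 10,000 real samples.
acc, fpr, fnr = detection_metrics(tp=9640, fp=290, tn=9710, fn=360)
print(f"accuracy={acc:.1%}  FPR={fpr:.1%}  FNR={fnr:.1%}")
```

With these made-up counts the metrics come out to roughly 97% accuracy, a 2.9% false positive rate, and a 3.6% false negative rate, mirroring the shape of the results reported above.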

Notably, our system maintained over 95% accuracy on most generators, including 8 out of 16 that were completely unseen during training. This demonstrates the robustness and adaptability of our detection algorithms in the face of new and evolving threats.

Collaboration for a Secure Digital Future

This breakthrough wouldn't have been possible without the invaluable contributions of our research partners, including SRI International. Their expertise has been crucial in ensuring the dataset's comprehensiveness and the continued refinement of our detection algorithms.

Join the Fight Against AI Disinformation

We're making this dataset available to qualified researchers, tech platforms, and government agencies. If you're interested in accessing this valuable resource or learning more about our state-of-the-art detection capabilities, please visit deepidentify.ai or reach out to our research team.

Together, we can build a more secure digital world, where the authenticity of voice content can be verified with confidence.