**OpenAI Co-founder Urges Cross-Company AI Safety Testing**
A prominent OpenAI co-founder has called on artificial intelligence laboratories to safety-test one another's models. The proposal reflects growing concern within the AI community that robust safety protocols are needed as the technology advances rapidly.
Under the proposal, independent evaluation across competing firms would establish a baseline for security and help mitigate the risks posed by increasingly powerful AI systems. Proponents argue that such collaboration would improve transparency, surface vulnerabilities before widespread deployment, and put the industry on a more responsible and safer development trajectory. If adopted, it would mark a shift toward industry-wide accountability in the race for advanced AI.