An OpenAI co-founder has called for artificial intelligence labs to safety-test models built by their rivals, a cross-lab arrangement in which competitors probe one another's systems for flaws. The proposal advocates a collaborative, industry-wide approach to identifying and mitigating risks before advanced AI systems are deployed. Coming from a leader at one of the foremost AI research institutions, the call underscores the growing emphasis on robust safety protocols even within a fiercely competitive AI landscape.