Read at www.theguardian.com
The UK should prioritize setting global standards for AI testing rather than trying to carry out all the vetting itself, according to Marc Warner, CEO of Faculty AI. Warner believes the newly established AI Safety Institute (AISI) could end up responsible for reviewing a range of AI models because of the UK's expertise in AI safety, and argues it should focus on becoming a world leader in setting testing standards rather than attempting to do all the work on its own.
"I think it's important that it sets standards for the wider world, rather than trying to do everything itself," he said.
The UK government announced the formation of the AISI last year and secured a commitment from major tech companies to cooperate on testing advanced AI models, positioning the country as an important player in global AI safety efforts. Faculty AI holds contracts to test AI models for the institute, helping determine whether they adhere to safety guidelines. Warner commends the institute's progress but stresses that it should put in place standards other governments and companies can follow, such as red teaming, where specialists simulate misuse of an AI model, rather than take on all the work itself.