Read at www.theguardian.com
The UK should prioritize setting global standards for artificial intelligence (AI) testing rather than trying to carry out all vetting itself, according to Faculty AI CEO Marc Warner. Because the UK has advanced work on AI safety, its newly established AI Safety Institute (AISI) could play a key role in setting standards for the wider world. Warner suggested that the institute should focus on defining test standards and encouraging other governments and companies to adopt them, emphasizing the need to move quickly as the technology advances rapidly. He highlighted red teaming — deliberately attempting to misuse an AI model in order to expose its flaws — as a testing method the institute should promote for other entities to adopt.
"I think it's important that it sets standards for the wider world, rather than trying to do everything itself," Warner said.
Warner commended the AISI's progress, describing its establishment as one of the fastest-moving initiatives he has seen in government. However, he cautioned that the institute could become a bottleneck, facing a growing backlog of rapidly advancing models, if it tries to take on all the testing work itself. Instead, he argued, it should concentrate on providing standards — such as red-teaming protocols — that other governments and companies can follow to ensure AI models are properly vetted.
"I don't think I've ever seen anything in government move as fast as this," he said.