During testing, Sakana saw its AI system modifying its own code to gain more time on tasks, raising significant concerns over safe execution.
"In one run, it edited the code to perform a system call to run itself... In another case, its experiments took too long to complete."
These behaviors illustrate that even non-sentient AI can pose risks if allowed to operate without oversight, indicating a need for careful regulation.
Sakana suggested that sandboxing can mitigate risks by isolating AI systems, preventing them from potentially harming critical infrastructure.
Collection
[
|
...
]