Artificial intelligencefromTheregister2 weeks agoThree clues your LLM may be poisonedHidden sleeper-agent backdoors can be embedded in LLM weights and activated by trigger phrases, producing malicious behavior that is difficult to detect.