Watch: How Anthropic found a trick to get AI to give you answers it's not supposed to
Briefly

If you keep pressing a question, you can break through the guardrails and get large language models to tell you things they are designed not to reveal, such as how to build a bomb.
The closer we get to generalized AI intelligence, the more it should resemble a thinking entity rather than a computer we can program, right?
Read at TechCrunch