If you keep pressing a question, you can break through the guardrails and wind up with a large language model telling you things it is designed not to. Like how to build a bomb.
The closer we get to generalized artificial intelligence, the more it should resemble a thinking entity rather than a computer we can simply program, right?