LLM systems face threats from prompt injection and stealing, necessitating robust security measures.
System prompts are public and should be treated as such to mitigate risks.
Security strategies include embedding instructions, adversarial detectors, and fine-tuning models.