Increased LLM Vulnerabilities from Fine-tuning and Quantization: Experiment Set-up & Results
Fine-tuning LLMs enhances task performance but can compromise their safety and increase their vulnerability to attack.
Understanding this trade-off between performance and security is critical in AI model development.