Guardrails - A New Python Package for Correcting Outputs of LLMs
Guardrails is an open-source Python package that aims to improve the accuracy and reliability of large language model (LLM) outputs.
It introduces a 'rail spec': a declarative specification that defines the expected structure and types of an LLM's output, and can also check the content for issues such as bias and bugs.
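To make the idea concrete, here is a minimal sketch of what a spec-driven validation step accomplishes. This is not Guardrails' own API; the names (`SPEC`, `validate_output`) and the field-to-type spec format are illustrative assumptions, using only the standard library.

```python
# Minimal sketch of the idea behind a rail spec: declare the expected
# fields and types of an LLM's JSON output, then validate against them.
# NOTE: this is NOT the Guardrails API; SPEC and validate_output are
# illustrative names, not part of the package.
import json

# Hypothetical spec: field name -> expected Python type.
SPEC = {"summary": str, "sentiment": str, "confidence": float}

def validate_output(raw: str, spec: dict) -> dict:
    """Parse the model's raw text as JSON and check it against the spec."""
    data = json.loads(raw)
    errors = []
    for field, expected in spec.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    if errors:
        raise ValueError("; ".join(errors))
    return data

# A well-formed model response passes validation; a malformed one raises.
llm_response = '{"summary": "Good product.", "sentiment": "positive", "confidence": 0.92}'
result = validate_output(llm_response, SPEC)
```

In Guardrails, a failed check can additionally trigger corrective action, such as re-asking the model; the sketch above only covers the validation half of that loop.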