GPT4All: Model Training, Model Access, and Model Evaluation | HackerNoon
Briefly

The original GPT4All model was a fine-tuned variant of LLaMA 7B, where we froze the base weights and trained only a small set of LoRA weights to enhance efficiency.
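The idea behind LoRA fine-tuning with a frozen base can be sketched in a few lines of NumPy. This is an illustrative toy (the layer sizes, rank `r`, and scaling `alpha` are made up, not GPT4All's actual configuration): the base weight `W` is never updated, and only the small low-rank factors `A` and `B` would receive gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one linear layer (d_out x d_in); never trained.
d_out, d_in, r = 8, 16, 4          # r: LoRA rank, much smaller than d_out/d_in
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank adapters. B starts at zero, so before any training
# the adapted layer computes exactly the same function as the base layer.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))
alpha = 8.0                        # LoRA scaling hyperparameter (illustrative)

def lora_forward(x, W, A, B, alpha, r):
    """y = W x + (alpha / r) * B (A x); only A and B are trainable."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x, W, A, B, alpha, r)
```

Because `B` is initialized to zero, the model starts from the frozen base's behavior, and the adapters store far fewer parameters than `W` itself (`r * (d_in + d_out)` versus `d_in * d_out`).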
We released all data, training code, and weights publicly, including a 4-bit quantized version to empower users to run it on commodity hardware.
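The 4-bit quantization that makes the model fit on commodity hardware can be illustrated with a minimal symmetric quantizer. This is a generic sketch of the technique, not the specific scheme the released weights use: each weight is mapped to a signed 4-bit integer plus one shared float scale.

```python
import numpy as np

def quantize_4bit(w):
    """Symmetric 4-bit quantization: map floats to integers in [-7, 7]."""
    max_abs = np.abs(w).max()
    scale = max_abs / 7.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    """Recover approximate float weights from 4-bit codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale)
```

Each weight now costs 4 bits instead of 32, roughly an 8x memory reduction, at the price of a reconstruction error bounded by half a quantization step.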
Our preliminary evaluation showed that GPT4All achieves lower ground-truth perplexity than the best openly available alpaca-lora model, highlighting its competitive performance.
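The perplexity metric used in that comparison is standard: the exponential of the average negative log-probability the model assigns to the ground-truth tokens (lower is better). A minimal sketch, assuming you already have per-token log-probabilities from a model:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean per-token log-probability); lower is better."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A model that assigns probability 0.25 to every ground-truth token
# has a perplexity of exactly 4.
lp = [math.log(0.25)] * 10
ppl = perplexity(lp)  # 4.0
```

Intuitively, a perplexity of N means the model is, on average, as uncertain as if it were choosing uniformly among N tokens at each step.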
The costs for our research were minimal: around $800 for GPU usage and $500 for the OpenAI API, with the final model costing about $100 to train.