Are You Still Using LoRA to Fine-Tune Your LLM?

The original LoRA insight is that fine-tuning all of a model's weights is overkill. Instead, LoRA freezes the pretrained weights and trains only a small pair of low-rank adapter matrices per weight matrix; their product approximates the full weight update, so the effective weight becomes W + BA.
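The article itself includes no code, but the idea fits in a few lines. Below is a minimal PyTorch sketch of a LoRA-wrapped linear layer; the class name `LoRALinear` and the default `r` and `alpha` values are illustrative choices, not taken from the original post.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update.

    The effective weight is W + (alpha / r) * B @ A, where W stays
    frozen and A (r x in_features) and B (out_features x r) are the
    only trainable parameters.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / r
        # A starts with small random values, B starts at zero, so at
        # step 0 the update B @ A is zero and the wrapped layer behaves
        # exactly like the pretrained one.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap a projection and train only the adapter parameters.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
y = layer(torch.randn(2, 4096))
```

The arithmetic shows where the savings come from: for a 4096x4096 projection with r = 8, the adapter adds only 2 * 4096 * 8 ≈ 65K trainable parameters against the roughly 16.8M frozen ones.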