#fine-tuning

[ follow ]
#language-models
fromHackernoon
55 years ago
Artificial intelligence

The Last Rank We Need? QDyLoRA's Vision for the Future of LLM Tuning | HackerNoon

fromHackernoon
55 years ago
Artificial intelligence

The Last Rank We Need? QDyLoRA's Vision for the Future of LLM Tuning | HackerNoon

fromHackernoon
55 years ago

Keep the Channel, Change the Filter: A Smarter Way to Fine-Tune AI Models | HackerNoon

Efficient fine-tuning methods are critical to address the high computational and parameter complexity while adapting large pre-trained models to downstream tasks.
Artificial intelligence
#large-language-models
Growth hacking
fromArs Technica
3 months ago

Gemini hackers can deliver more potent attacks with a helping hand from... Gemini

Indirect prompt injections are an effective method for exploiting large language models, revealing vulnerabilities in AI systems.
fromHackernoon
8 hours ago
Artificial intelligence

Beyond Static Ranks: The Power of Dynamic Quantization in LLM Fine-Tuning | HackerNoon

Growth hacking
fromArs Technica
3 months ago

Gemini hackers can deliver more potent attacks with a helping hand from... Gemini

Indirect prompt injections are an effective method for exploiting large language models, revealing vulnerabilities in AI systems.
fromHackernoon
8 hours ago
Artificial intelligence

Beyond Static Ranks: The Power of Dynamic Quantization in LLM Fine-Tuning | HackerNoon

#machine-learning
fromLogRocket Blog
2 weeks ago

Fine-tuning vs. RAG: Which AI strategy fits your frontend project? - LogRocket Blog

Fine-tuning provides consistent and fast responses, but requires lengthy retraining for updates, while RAG offers instant updates but involves handling latency and interface challenges.
Artificial intelligence
fromHackernoon
1 month ago

Comparing Chameleon AI to Leading Image-to-Text Models | HackerNoon

In evaluating Chameleon, we focus on tasks requiring text generation conditioned on images, particularly image captioning and visual question-answering, with results grouped by task specificity.
Artificial intelligence
OMG science
fromMail Online
3 months ago

Harvard scientist says God formula proves there is a creator

A mathematical formula suggests evidence of God's existence through the fine-tuning of the universe.
The asymmetry between matter and antimatter points to intentional design rather than randomness.
fromHackernoon
8 months ago

Increased LLM Vulnerabilities from Fine-tuning and Quantization: Experiment Set-up & Results | HackerNoon

The testing on different downstream tasks, including fine-tuning and quantization, shows that while fine-tuning can improve task effectiveness, it can simultaneously increase jailbreaking vulnerabilities in LLMs.
Data science
[ Load more ]