#fine-tuning

Artificial intelligence
from TechCrunch
1 day ago

'Selling coffee beans to Starbucks' - how the AI boom could leave AI's biggest companies behind | TechCrunch

Foundation models are increasingly commoditized; fine-tuning, post-training methods, and user-facing interfaces now drive competitive advantage in many AI businesses.
from InfoQ
1 week ago

GenAI at Scale: What It Enables, What It Costs, and How To Reduce the Pain

My name is Mark Kurtz. I was the CTO at a startup called Neural Magic. We were acquired by Red Hat at the end of last year, and I'm now working under the CTO arm at Red Hat. I'm going to be talking about GenAI at scale: essentially what it enables, a quick overview of that, what it costs, and generally how to reduce the pain. Running through the structure, we'll go through the state of LLMs and real-world deployment trends.
Artificial intelligence
from The JetBrains Blog
3 weeks ago

Fine-Tuning and Deploying GPT Models Using Hugging Face Transformers | The PyCharm Blog

Fine-tuning pre-trained GPT models customizes performance for domain-specific math tasks, improving accuracy and efficiency while reducing training time and resources.
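The post covers fine-tuning GPT models with Hugging Face Transformers; as a library-free illustration of the core idea (start from pre-trained weights and continue training on domain data with a smaller learning rate and fewer steps), here is a toy NumPy sketch. All of it is illustrative and not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-training": fit a linear model y = w * x on broad, noisy data.
x_pre = rng.standard_normal(1000)
y_pre = 3.0 * x_pre + rng.standard_normal(1000) * 0.1

w = 0.0
for _ in range(200):
    grad = np.mean((w * x_pre - y_pre) * x_pre)  # d/dw of mean squared error
    w -= 0.1 * grad

# "Fine-tuning": adapt the pre-trained weight to a shifted domain
# (true slope 3.5) with a lower learning rate and far less data,
# instead of retraining from scratch.
x_ft = rng.standard_normal(50)
y_ft = 3.5 * x_ft

w_ft = w
for _ in range(100):
    grad = np.mean((w_ft * x_ft - y_ft) * x_ft)
    w_ft -= 0.05 * grad

print(w, w_ft)  # w ≈ 3.0, w_ft ≈ 3.5
```

The same shape carries over to the Transformers workflow in the article: load pre-trained weights, then run a short training loop on the domain-specific dataset.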
from InfoQ
1 month ago

Unsloth Tutorials Aim to Make it Easier to Compare and Fine-tune LLMs

Qwen3-Coder-480B-A35B delivers SOTA advancements in agentic coding and code tasks, matching or outperforming Claude Sonnet-4, GPT-4.1, and Kimi K2. The 480B model achieves 61.8% on Aider Polyglot and supports a 256K token context, extendable to 1M tokens.
Artificial intelligence
#qdylora
from Hackernoon
Artificial intelligence

The Last Rank We Need? QDyLoRA's Vision for the Future of LLM Tuning | HackerNoon

from Hackernoon
2 months ago
Artificial intelligence

Beyond Static Ranks: The Power of Dynamic Quantization in LLM Fine-Tuning | HackerNoon
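The QDyLoRA entries above concern dynamic low-rank adapters for LLM fine-tuning. As a rough NumPy sketch of the underlying LoRA/DyLoRA idea (a frozen weight plus a trainable low-rank update, truncatable to different ranks), here is a toy example; the shapes, names, and zero-initialization convention are illustrative, and the quantization half of QDyLoRA is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, max_rank = 64, 64, 8

# Frozen "pre-trained" weight (in QLoRA-style methods it would also be
# quantized, e.g. to 4-bit).
W = rng.standard_normal((d_out, d_in)) * 0.1

# Low-rank adapter factors: only these would be trained.
A = rng.standard_normal((max_rank, d_in)) * 0.01  # down-projection
B = np.zeros((d_out, max_rank))                   # up-projection, zero-init

def adapted_forward(x, rank):
    """Forward pass using only the first `rank` adapter components —
    the DyLoRA-style truncation that lets one adapter serve many ranks."""
    delta = B[:, :rank] @ A[:rank, :]
    return (W + delta) @ x

x = rng.standard_normal(d_in)

# With B zero-initialized, every rank starts out identical to the base model.
for r in (1, 4, max_rank):
    assert np.allclose(adapted_forward(x, r), W @ x)

# Trainable parameters: full fine-tune vs. adapter-only.
full, adapter = W.size, A.size + B.size
print(full, adapter)  # → 4096 1024
```

Training would update only A and B (here a 4x reduction in trainable parameters; far larger in real LLM layers), while dynamic methods sample a different truncation rank per step so the trained adapter works at any rank up to the maximum.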

Artificial intelligence
from Hackernoon

Keep the Channel, Change the Filter: A Smarter Way to Fine-Tune AI Models | HackerNoon

Efficient fine-tuning of large pre-trained models can be achieved by adjusting only filter atoms while preserving overall model capabilities.
Artificial intelligence
from Hackernoon
3 months ago

Comparing Chameleon AI to Leading Image-to-Text Models | HackerNoon

Chameleon was evaluated on image captioning and visual question-answering tasks against other leading models, focusing on maintaining the fidelity of pre-training data.