AWS offers new service to make AI models better at work
Briefly

"Enterprises are no longer asking whether they should adopt AI; rather, they want to know why the AI they have already deployed still can't reason as their business requires it to. Those AI systems are often missing an enterprise's specific business context, because they are trained on generic, public data, and it's expensive and time-consuming to fine-tune or retrain them on proprietary data, if that's even possible."
"Third-party models don't have access to proprietary data, he said, and building models with that data from scratch is impractical, while adding it to an existing model through retrieval augmented generation (RAG), vector search, or fine-tuning has limitations. But, he asked, "What if you could integrate your data at the right time during the training of a frontier model and then create a proprietary model that was just for you?""
"AWS's answer to that is Nova Forge, a new service that enterprises can use to customize a foundation large language model (LLM) to their business context by blending their proprietary business data with AWS-curated training data. That way, the model can internalize their business logic rather than having to reference it externally again and again for inferencing. Analysts agreed with Garman's assessment of the limitations in existing methods that Nova Forge aims to circumvent."
AWS positions Nova Forge as an infrastructure-first alternative to Microsoft's IQ approach by moving enterprise context into the LLM. Enterprises increasingly require AI that reasons using specific business context rather than generic public training data. Fine-tuning or retraining on proprietary data is expensive and may be impractical, while RAG, vector search, and prompt engineering have limitations. Nova Forge allows enterprises to blend proprietary business data with AWS-curated training data during model training so the resulting foundation model internalizes business logic. That approach aims to avoid repeated external retrieval during inference and produce proprietary models tailored to each enterprise.
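To make the contrast concrete, the toy Python sketch below illustrates the difference the article describes: a RAG-style setup retrieves business context and injects it on every request, while training-time blending folds proprietary data into the model itself so inference needs no retrieval step. Everything here is a hypothetical stand-in for illustration only; none of these names or functions are the Nova Forge API, which the article does not detail.

```python
# Illustrative sketch only: all names and data below are hypothetical stand-ins,
# not the Nova Forge API or any real AWS interface.
from dataclasses import dataclass, field


@dataclass
class Model:
    """Toy stand-in for a foundation model."""
    internalized_facts: set = field(default_factory=set)

    def answer(self, question: str, context: list[str] | None = None) -> str:
        # The model can only use what it has internalized plus any context passed in.
        known = self.internalized_facts | set(context or [])
        hits = [fact for fact in known if question.lower() in fact.lower()]
        return hits[0] if hits else "I don't know."


PROPRIETARY_DATA = ["refund policy: enterprise customers get 45-day refunds"]
CURATED_PUBLIC_DATA = ["refund policy: typical retail refunds are 30 days"]


def rag_inference(model: Model, question: str) -> str:
    # RAG-style: the model stays generic; business context is retrieved
    # and supplied externally on every single request.
    retrieved = [doc for doc in PROPRIETARY_DATA if "refund" in doc]
    return model.answer(question, context=retrieved)


def blended_training(model: Model) -> Model:
    # Training-time blending, as the article describes at a high level:
    # proprietary data is mixed with curated data once, up front, so the
    # resulting model carries the business logic without external lookups.
    model.internalized_facts |= set(PROPRIETARY_DATA) | set(CURATED_PUBLIC_DATA)
    return model


if __name__ == "__main__":
    generic = Model()
    print(rag_inference(generic, "refund policy"))  # needs retrieval every call

    custom = blended_training(Model())
    print(custom.answer("refund policy"))  # answers with no retrieval step
```

The sketch compresses both approaches into trivial set operations purely to show where the business context lives: outside the model at inference time in the first case, inside it after training in the second.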
Read at InfoWorld