Incorporating Domain Knowledge Into LLMs So They Can Give You the Answers You're Looking For
Briefly

Suppose I want to ask about a policy at my company. I store the policy documents in a database and ask a question about them. A search system retrieves the most relevant documents and returns the information; we call this information "knowledge". We then pass the knowledge and the query to an LLM and get the desired results.
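To make that store-retrieve-answer loop concrete, here is a minimal sketch. It uses a naive keyword-overlap retriever and a hypothetical call_llm() helper standing in for a real LLM client; the policy documents, function names, and prompt template are all illustrative, not from the article. A production system would use vector embeddings for retrieval.

```python
# Minimal sketch of the store-retrieve-answer flow. The retriever is a
# naive keyword-overlap ranker and call_llm() is a hypothetical stand-in
# for a real LLM client; swap in embeddings and your provider's SDK.

POLICY_DOCS = [
    "Remote work policy: employees may work remotely up to 3 days per week.",
    "Expense policy: meals during travel are reimbursed up to $50 per day.",
    "Leave policy: employees accrue 1.5 vacation days per month.",
]

def retrieve(query: str, docs: list) -> str:
    """Return the stored document sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; replace with an actual API call."""
    return f"<model answer based on a prompt of {len(prompt)} chars>"

def answer(query: str) -> str:
    knowledge = retrieve(query, POLICY_DOCS)  # the retrieved "knowledge"
    prompt = (
        "Answer using only the context below.\n"
        f"Context: {knowledge}\n"
        f"Question: {query}"
    )
    return call_llm(prompt)

print(answer("How many days can I work remotely?"))
```

The key design point is that the model never has to "know" the policy; it only has to read the retrieved context, which is why answer quality hinges on retrieval quality.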
The second question I asked was, "Who won the last T20 World Cup?" We all know that India won the ICC T20 World Cup 2024, but large language models are next-word predictors trained on public knowledge only up to a certain cutoff, so they give us outdated information.
So, how can we incorporate domain knowledge into an LLM so that it can answer such questions correctly? There are three main ways people go about it: Prompt Engineering, Fine-Tuning, and Retrieval Augmentation.
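As a toy illustration of the first approach, prompt engineering, the sketch below injects a current fact directly into the prompt so the model is not left relying on stale training data. The FACT string and build_prompt() helper are hypothetical, chosen to mirror the World Cup example above.

```python
# Sketch of prompt engineering: place up-to-date domain knowledge
# directly in the prompt instead of relying on the model's training data.
# FACT and build_prompt() are illustrative, not a specific library's API.

FACT = "India won the ICC Men's T20 World Cup 2024."

def build_prompt(question: str, knowledge: str = "") -> str:
    """Wrap a question with optional domain knowledge for the model."""
    if not knowledge:
        return question  # bare prompt: the model may answer from stale data
    return (
        "Use the following fact when answering.\n"
        f"Fact: {knowledge}\n"
        f"Question: {question}"
    )

print(build_prompt("Who won the last T20 World Cup?"))        # risks an outdated answer
print(build_prompt("Who won the last T20 World Cup?", FACT))  # grounded in current knowledge
```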
Responses are only as good as the context and the data that support them. If we provide the LLM with domain knowledge, it can answer accurately, though the result still depends on retrieval efficacy.
Read at HackerNoon