ChatGPT is lying, how can we fix it?
06-20, 14:00–14:40 (Europe/Berlin), Kesselhaus

Large Language Models are great at grammar but tend to confabulate. Building a reliable knowledge base might be a way to solve it. Here is how.


ChatGPT was a revolution nobody was ready for. Social channels have been flooded with prompts and answers that look fine at first glance but turn out to be fabricated. Factuality is the biggest concern about Large Language Models, not just OpenAI's product. If you build an application with LLMs, you need to be aware of this.

Retrieval Augmented Language Models seem to be a way to overcome that issue. They combine the language capabilities of LLMs with the accuracy of a knowledge base. The talk will review possible ways to implement this with humans in the loop; a minimal sketch of the general pattern follows.
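The abstract does not include the implementation shown in the talk, but the retrieval-augmented pattern it refers to can be sketched in a few lines. The snippet below is an illustration, not the speaker's code: it assumes the qdrant-client Python package, uses toy 4-dimensional vectors in place of embeddings produced by a real model, and stubs out the LLM call with a hypothetical ask_llm function.

```python
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# In-memory Qdrant instance; a real deployment would point at a server URL.
client = QdrantClient(":memory:")

# Toy 4-dimensional vectors stand in for embeddings from a real model.
client.create_collection(
    collection_name="knowledge_base",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)
client.upsert(
    collection_name="knowledge_base",
    points=[
        PointStruct(id=1, vector=[0.9, 0.1, 0.0, 0.0],
                    payload={"text": "Berlin Buzzwords 2023 takes place in Berlin."}),
        PointStruct(id=2, vector=[0.1, 0.9, 0.0, 0.0],
                    payload={"text": "Qdrant is an open-source vector search engine."}),
    ],
)


def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for any LLM completion API.
    return f"[LLM answer grounded in the prompt below]\n{prompt}"


def answer(question: str, query_vector: list[float]) -> str:
    # 1. Retrieve the most relevant facts from the knowledge base.
    hits = client.search(
        collection_name="knowledge_base",
        query_vector=query_vector,
        limit=2,
    )
    context = "\n".join(hit.payload["text"] for hit in hits)

    # 2. Ground the model by putting the retrieved facts into the prompt.
    prompt = (
        "Answer using only the context below. "
        "If the context is not enough, say you do not know.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Let the LLM phrase the answer from the retrieved evidence.
    return ask_llm(prompt)


if __name__ == "__main__":
    # In a real system the question would be embedded with the same model
    # that produced the stored vectors; here a toy vector is passed directly.
    print(answer("What is Qdrant?", query_vector=[0.2, 0.8, 0.0, 0.0]))
```

The key design choice is that the model is asked to answer only from retrieved, verifiable facts, which is also where humans in the loop can step in, for example by curating the knowledge base or reviewing answers whose retrieved context is weak.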

See also: Slides (2.5 MB)

Kacper Łukawski is a Developer Advocate at Qdrant, an open-source neural search engine. Recently he's been exploring the world of similarity learning and vector search.