Berlin Buzzwords 2024

Large language models are not a paradigm shift
06-10, 14:00–14:40 (Europe/Berlin), Maschinenhaus

Many of the putatively novel challenges of building systems around LLMs are analogous to problems we've solved for conventional ML systems. This talk will show you why the things you already know about building ML systems are still relevant for LLM systems — and where the true novelty of LLMs lies.


Hundreds of millions of people have used generative large language models (LLMs) to entertain themselves and improve their productivity. Sophisticated enterprises have scrambled to quickly put LLM initiatives and applications in place. It's hard not to wonder if LLMs are finally the technology that will make AI "real" to a broad audience after years of steady progress in machine learning (ML) applications.

This talk will argue that the emergence of LLMs does not represent the paradigm shift that it might appear to. As we'll see, many of the putatively novel challenges that LLM applications face have strong analogies to challenges faced by conventional ML systems. We'll discuss the differences between LLM applications and ML applications, and see how many of these are important but not fundamental. We'll discuss the challenges inherent in getting the best out of LLMs, learn how AI systems can go wrong and how to build them responsibly, and see why the aspects of LLMs that are truly new ultimately give us a reason to believe at least some of the hype after all.

See also: Slides (2.1 MB)

William Benton is passionate about making it easier for machine learning practitioners to benefit from advanced infrastructure and making it possible for organizations to manage machine learning systems. His recent roles have included defining product strategy and professional services offerings related to data science and machine learning, leading teams of data scientists and engineers, and contributing to many open source communities related to data, ML, and distributed systems. Will was an early advocate of building machine learning systems on Kubernetes and developed and popularized the “intelligent applications” idiom for machine learning systems in the cloud. He has also conducted research and development related to static program analysis, language runtimes, cluster configuration management, and music technology.