Episode #284: Running Local LLMs With Ollama and Connecting With Python - The Real Python Podcast
"We cover a recent Real Python step-by-step tutorial on installing local LLMs with Ollama and connecting them to Python. It begins by outlining the advantages this strategy offers, including reducing costs, improving privacy, and enabling offline-capable AI-powered apps. We talk through the steps of setting things up, generating text and code, and calling tools. This episode is sponsored by Honeybadger."
"00:00:00 - Introduction 00:02:37 - Take the Python Developers Survey 2026 00:03:07 - How to Integrate Local LLMs With Ollama and Python 00:08:15 - Sponsor: Honeybadger 00:09:01 - Create Callable Instances With Python's .__call__() 00:12:13 - GeoPandas Basics: Maps, Projections, and Spatial Joins 00:16:03 - Ending 15 Years of subprocess Polling 00:18:57 - Video Course Spotlight 00:20:23 - Backseat Software - Mike Swanson"
Ollama installs and runs local large language models (LLMs) and exposes APIs that Python projects can call. Running models locally reduces cloud costs, improves data privacy, and enables offline-capable AI applications. The setup involves installing Ollama, selecting a model, running the model server, and calling it from Python to generate text, generate code, and invoke external tools. The episode also touches on community topics: the Python Developers Survey 2026, callable instances with .__call__(), GeoPandas for maps and spatial joins, improvements to subprocess handling, a peer-to-peer encrypted CLI chat, and a retry library that classifies errors.
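Returning to the tool-calling step: the sketch below shows how a local model served by Ollama can be offered a Python function as a tool. The get_current_weather function, its JSON schema, and the llama3.2 model name are illustrative assumptions, not code from the tutorial.

import ollama

def get_current_weather(city: str) -> str:
    # Hypothetical tool the model may ask us to run.
    return f"It is currently sunny in {city}."

# Describe the tool to the model in JSON-schema form.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model requested a tool call, execute it and print the result.
for call in response["message"].get("tool_calls") or []:
    if call["function"]["name"] == "get_current_weather":
        print(get_current_weather(**call["function"]["arguments"]))

Recent releases of the ollama package can also accept the Python function object itself in the tools list and derive the schema from its signature, but the explicit schema above works across versions.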
Read at Real Python