
"As with the previous posts, the code for this walkthrough is available on GitHub. Defining a function schema We'll define our function using JSON Schema format. The schema is similar to what we defined in the OpenAI post, with slight differences: Ollama's API is available at http://localhost:11434 by default. Let's use curl to make a request to the /api/chat/ endpoint and see the raw JSON response from the model. We'll provide our function schema in the tools parameter."
"This returns a response that includes a message object with a tool call: We see that the model decided to make a function call. As with the OpenAI example, the model simply tells us which function it would like to call and with which arguments. It's up to us to handle the function execution and pass back the result. We can also pass the actual Python function in the tools argument and Ollama will generate the schema for us in the background."
- Install Ollama and download the Llama 3.2 model with ollama pull llama3.2.
- Define functions using JSON Schema format or provide Python functions; Ollama can generate schemas from annotated Python functions with Google-style docstrings.
- Send requests to Ollama's API at http://localhost:11434/api/chat/, supplying the function schema in the tools parameter.
- The model may return a tool call indicating which function to invoke and with which arguments.
- Execute the function logic locally, then feed the result back to the model.
- Use the ollama Python package to make requests and inspect responses for tool-call messages (a sketch follows this list).
- Provide type annotations and docstrings for best results.
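Putting those steps together, here is a minimal sketch of the full round trip using the ollama Python package (version 0.4 or later, which accepts plain Python functions in tools). The get_current_weather stub is hypothetical, not the post's actual code:

```python
import ollama


def get_current_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: The name of the city.

    Returns:
        A short description of the current weather.
    """
    # Hypothetical stub; a real implementation would call a weather API.
    return f"It is sunny and 22 degrees Celsius in {city}."


messages = [{"role": "user", "content": "What is the weather in Paris?"}]

# Passing the function itself lets Ollama build the JSON schema from the
# type annotations and the Google-style docstring.
response = ollama.chat(model="llama3.2", messages=messages, tools=[get_current_weather])

available_functions = {"get_current_weather": get_current_weather}

if response.message.tool_calls:
    messages.append(response.message)  # keep the model's tool-call message in the history
    for call in response.message.tool_calls:
        # The model only names the function; executing it is up to us.
        result = available_functions[call.function.name](**call.function.arguments)
        messages.append({"role": "tool", "name": call.function.name, "content": str(result)})

    # Send the function result back so the model can produce a final answer.
    final = ollama.chat(model="llama3.2", messages=messages)
    print(final.message.content)
else:
    print(response.message.content)
```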