Tool Calling

You can pass a tools array in chat completion requests. The model may respond with tool_calls instead of text; your client runs the requested functions, sends the results back as tool messages, and repeats the request until the model returns a final text response.

Include a tools array in the request body. Each tool is an object with type: "function" and a function object that has:

  • name (required): Function name the model will use when calling.
  • description (optional): Description for the model; helps it decide when to call the tool.
  • parameters (optional): JSON Schema for the arguments the function accepts.
```json
{
  "model": "8080/taalas/llama3.1-8b-instruct",
  "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current temperature for a location by latitude and longitude.",
        "parameters": {
          "type": "object",
          "properties": {
            "latitude": {"type": "number", "description": "Latitude"},
            "longitude": {"type": "number", "description": "Longitude"}
          },
          "required": ["latitude", "longitude"]
        }
      }
    }
  ]
}
```
  1. First request: Send messages and tools. The response may be:

    • Normal text: choices[0].message.content is set and finish_reason is "stop". You’re done.
    • Tool calls: choices[0].message.tool_calls is set and finish_reason is "tool_calls". Each item has id, type: "function", and function with name and arguments (JSON string).
  2. Append assistant and tool messages: Add the assistant message (including tool_calls) to your conversation. For each tool call, append a message with role: "tool", tool_call_id (same as in the assistant’s tool_calls), and content set to the result of running that function (string, e.g. JSON).

  3. Second request: Send the updated messages (user + assistant + tool messages) with the same tools. Repeat until finish_reason is "stop" or you hit a max-turns limit.
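The message bookkeeping in steps 1–2 can be sketched in Python. The response dict below is illustrative only: its shape follows the fields described above, but the id and argument values are made up.

```python
import json

# Hypothetical tool-call response from the first request (values are made up).
response = {
    "choices": [{
        "finish_reason": "tool_calls",
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_abc123",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": "{\"latitude\": 48.85, \"longitude\": 2.35}"
                }
            }]
        }
    }]
}

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Step 2: append the assistant message, then one tool message per tool call.
assistant_message = response["choices"][0]["message"]
messages.append(assistant_message)

for tc in assistant_message["tool_calls"]:
    args = json.loads(tc["function"]["arguments"])  # arguments arrive as a JSON string
    result = json.dumps({"temperature_c": 18})      # stand-in for running the function
    messages.append({
        "role": "tool",
        "tool_call_id": tc["id"],  # must match the id from the assistant's tool_calls
        "content": result,
    })

# messages is now ready for the second request (user + assistant + tool).
print([m["role"] for m in messages])  # → ['user', 'assistant', 'tool']
```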

This example defines a get_weather tool, sends a user message, and runs the tool-call loop until the model returns a final answer.

```python
import os
import json

import requests

API_KEY = os.environ.get("_8080_API_KEY")
BASE_URL = "https://api.8080.io"


def get_weather(latitude: float, longitude: float) -> str:
    """Get current temperature for a location (mock implementation)."""
    # In production you might call a real weather API
    return json.dumps({"temperature_c": 18, "conditions": "Partly cloudy"})


def run_tool(name: str, arguments: str) -> str:
    args = json.loads(arguments)
    if name == "get_weather":
        return get_weather(args["latitude"], args["longitude"])
    return json.dumps({"error": f"Unknown tool: {name}"})


def chat_with_tools():
    messages = [
        {"role": "user", "content": "What's the weather in Paris right now?"}
    ]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current temperature for a location by latitude and longitude.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "latitude": {"type": "number", "description": "Latitude"},
                        "longitude": {"type": "number", "description": "Longitude"}
                    },
                    "required": ["latitude", "longitude"]
                }
            }
        }
    ]

    while True:
        resp = requests.post(
            f"{BASE_URL}/v1/chat/completions",
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json"
            },
            json={"model": "8080/taalas/llama3.1-8b-instruct", "messages": messages, "tools": tools}
        )
        resp.raise_for_status()
        data = resp.json()
        choice = data["choices"][0]
        message = choice["message"]
        messages.append(message)

        if choice.get("finish_reason") == "stop":
            print(message.get("content", ""))
            return

        if choice.get("finish_reason") == "tool_calls" and message.get("tool_calls"):
            for tc in message["tool_calls"]:
                fn = tc["function"]
                result = run_tool(fn["name"], fn["arguments"])
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc["id"],
                    "content": result
                })
        else:
            print(message.get("content", ""))
            return


if __name__ == "__main__":
    chat_with_tools()
```

Run it (after setting _8080_API_KEY):

```bash
export _8080_API_KEY="your-api-key"
python chat_with_tools.py
```

The e80 Python SDK simplifies tool calling: decorate your functions with @tool and pass them directly as tools:

```python
import requests

from eighty80 import chat, tool, Message


@tool
def get_weather(latitude: float, longitude: float) -> str:
    """Get the current temperature for a location by latitude and longitude."""
    response = requests.get(
        f"https://api.open-meteo.com/v1/forecast?latitude={latitude}&longitude={longitude}&current=temperature_2m"
    )
    data = response.json()
    return str(data["current"]["temperature_2m"])


result = chat(
    model="8080/taalas/llama3.1-8b-instruct",
    messages=[Message("user", "What's the weather in San Francisco?")],
    tools=[get_weather]
)
```

The SDK handles the tool-call loop and argument parsing for you.