Using the API

Quickstart guide to using TheAgentic LLMs

TheAgentic API is compatible with the OpenAI SDK, so you can call TheAgentic's LLMs from your existing OpenAI SDK code by swapping in TheAgentic's base_url and your api_key.

from openai import OpenAI

client = OpenAI(
    base_url="https://api.theagentic.ai/v1",
    api_key="YOUR_API_KEY",
)

answer = client.chat.completions.create(
    model="agentic-turbo",
    messages=[
        {"role": "system", "content": "Your system prompt"},
        {
            "role": "user",
            "content": "Tell me a joke",
        },
    ],
)
# Print just the assistant's reply
print(answer.choices[0].message.content)

Supported Models

  • TheAgentic-Turbo: Low-latency, production-optimized LLM
    Slug for OpenAI SDK: agentic-turbo

  • TheAgentic-Thinker: Reasoning LLM suited to agentic tasks
    Slug for OpenAI SDK: agentic-thinker or agentic-large
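Because the API is OpenAI-compatible, you can also discover the available slugs at runtime. The sketch below assumes TheAgentic exposes the standard OpenAI /v1/models endpoint, which the docs above do not explicitly confirm:

# Assumes the standard OpenAI-compatible /v1/models endpoint is available
for model in client.models.list():
    print(model.id)  # expected to include "agentic-turbo" and "agentic-thinker"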


Tool Calling

TheAgentic's LLM API supports OpenAI-style tool calling.
Here's an example that sends a request to TheAgentic API with a tool definition, using the OpenAI SDK:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.theagentic.ai/v1",
    api_key="YOUR_API_KEY",
)

answer = client.chat.completions.create(
    model="agentic-turbo",
    messages=[
        {"role": "system", "content": "Your system prompt"},
        {"role": "user", "content": "What is the weather like in San Francisco?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get weather information for a specific location.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "Name of the location",
                        }
                    },
                    "required": ["location"],
                },
            },
        }
    ],
)

# The model should respond with a tool call rather than a plain text answer
print(answer.choices[0].message.tool_calls)
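Each entry in tool_calls carries the function name and JSON-encoded arguments. To complete the round trip, execute the tool yourself and send its result back in a "tool" role message, following the standard OpenAI tool-calling flow. Note that get_weather below is a hypothetical local implementation, not part of TheAgentic API:

import json

def get_weather(location: str) -> str:
    # Hypothetical stand-in; call a real weather service here.
    return f"Sunny, 18°C in {location}"

tool_call = answer.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

followup = client.chat.completions.create(
    model="agentic-turbo",
    messages=[
        {"role": "user", "content": "What is the weather like in San Francisco?"},
        answer.choices[0].message,  # the assistant turn containing the tool call
        {
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": get_weather(**args),
        },
    ],
)
print(followup.choices[0].message.content)

The model then uses the tool output to produce its final natural-language answer.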

Reasoning Tokens

To see the reasoning tokens in each response, pass return_reasoning=True in the extra_body parameter of the chat completion request.

response = client.chat.completions.create(
    model="agentic-large",
    messages=[...],
    extra_body={"return_reasoning": True},
)

The response will include the reasoning tokens in the reasoning_content field of the ChatCompletionMessage object.

print(response.choices[0].message.reasoning_content)

When streaming (pass stream=True to the request), the reasoning tokens arrive in the reasoning_content field of each chunk's delta rather than on a message object.

for chunk in response:
    delta = chunk.choices[0].delta
    if getattr(delta, "reasoning_content", None):
        print(delta.reasoning_content, end="")
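Putting it together, here is a minimal streaming sketch, assuming return_reasoning also works with stream=True (implied but not stated above), that prints the reasoning tokens followed by the final answer:

stream = client.chat.completions.create(
    model="agentic-thinker",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    stream=True,
    extra_body={"return_reasoning": True},
)

for chunk in stream:
    delta = chunk.choices[0].delta
    # Reasoning tokens arrive first; the regular answer follows in `content`.
    if getattr(delta, "reasoning_content", None):
        print(delta.reasoning_content, end="", flush=True)
    elif delta.content:
        print(delta.content, end="", flush=True)
print()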