A 6-Step Guide to Developing AI Agent Applications with FastAPI (Complete Code Included)

Recently, AI Agents have become wildly popular. Many beginners want to start developing them but get stuck on choosing a framework and setting up the workflow. For beginners, FastAPI is one of the best options for developing AI Agents: it's lightweight, fast, auto-generates API documentation, and connects seamlessly with LLM APIs.

Today, we’ll break down the complex process into 6 simple steps. From environment setup to deployment and testing, each step provides runnable code you can copy and detailed explanations. Beginners can follow along and build their first AI Agent application!

First, clarify the core logic: an AI Agent's loop is "receive task → LLM thinks → execute tool → return result". We'll use FastAPI to build the interface layer and connect an LLM (using ByteDance's Doubao as an example, which offers ample free credits and is beginner-friendly) with tool functions, forming a complete loop.
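
The loop in the sentence above can be sketched in a few lines of plain Python. Everything here is a placeholder (llm_think, run_tool, and the canned replies are invented for illustration); Steps 3 and 4 build the real versions:

```python
# Minimal sketch of the Agent loop; all logic is stubbed for illustration.

def llm_think(task: str) -> dict:
    # Placeholder for the real LLM call in Step 3: decide whether a tool is needed
    if "weather" in task.lower():
        return {"need_tool": True, "tool_name": "get_weather", "tool_params": {"city": "Beijing"}}
    return {"need_tool": False, "answer": f"Direct answer for: {task}"}

def run_tool(name: str, params: dict) -> str:
    # Placeholder for the real tool library in Step 4
    tools = {"get_weather": lambda city: f"{city}: sunny, 25°C"}
    return tools[name](**params)

def agent_loop(task: str) -> str:
    # receive task -> LLM thinks -> execute tool -> return result
    decision = llm_think(task)
    if decision["need_tool"]:
        return run_tool(decision["tool_name"], decision["tool_params"])
    return decision["answer"]

print(agent_loop("What's the weather like?"))  # -> Beijing: sunny, 25°C
```

Once the stubs are swapped for a real LLM call and real tools, the loop itself barely changes.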


Step 1: Environment Preparation – Set Up Basic Development Environment 🔧

First, get all dependencies installed. This step requires installing 3 core dependencies:

  • FastAPI: Core framework for building web interfaces.
  • Uvicorn: ASGI server for FastAPI, used to run the application.
  • OpenAI SDK: Needed for interacting with Doubao’s API.

1.1 Install Dependencies (Code/Command)

bash

# Open terminal and execute the following command (recommend creating a virtual environment first)
pip install fastapi uvicorn openai python-dotenv

Note: python-dotenv manages environment variables so you don't hard-code your LLM API key in the source (a security best practice).

1.2 Configure Environment Variables (Create .env file)

Create a new .env file in the project root directory and write your domestic LLM API key (obtained from the ByteDance Doubao open platform):

env

# .env file content
DOUBAO_API_KEY="Your Doubao LLM API Key"
DOUBAO_MODEL="doubao-lite"  # Doubao Lite version, ample free credits, fast response, beginner's first choice

Step 2: Build FastAPI Basic Framework – Minimal Application Example 🚀

First, write the simplest FastAPI app to verify the environment works. Create a new main.py file with the following code:

2.1 Basic Application Code

python

from fastapi import FastAPI
from dotenv import load_dotenv  # Load environment variables

# Load environment variables from .env file
load_dotenv()

# Create FastAPI app instance
app = FastAPI(
    title="Beginner-Friendly AI Agent Application",
    description="AI Agent developed with FastAPI, supports tool calling",
    version="1.0.0"
)

# Define first endpoint: test endpoint
@app.get("/")
def read_root():
    return {"message": "AI Agent app started successfully! Visit /docs to view API documentation."}

2.2 Run Application and Test

bash

# In terminal, execute command to run the app
uvicorn main:app --reload

Note:

  • --reload parameter: Auto-reload in development mode, no need to restart server after code changes.
  • After successful run, visit http://127.0.0.1:8000. You’ll see {"message": "AI Agent app started successfully!..."}.
  • Visit http://127.0.0.1:8000/docs to see the API documentation auto-generated by FastAPI (beginner-friendly!).

Step 3: Implement AI Agent Core – LLM Calling & Thinking Logic 🧠

The "brain" of an AI Agent is the LLM. Its core job is "receive user task → think about what needs to be done → decide whether to call a tool". In this step, we implement the Doubao LLM calling function and the Agent's basic thinking logic.

3.1 Encapsulate LLM Calling Function

Add code in main.py to encapsulate the ByteDance Doubao API call:

python

import os
from openai import OpenAI

# Get Doubao API KEY from environment variable. See supplemental instructions for application process.
api_key = os.getenv("DOUBAO_API_KEY")
# Add key validation to catch configuration issues early.
if not api_key:
    raise ValueError("Please configure DOUBAO_API_KEY in the .env file first. See document supplement for application process.")

# Initialize OpenAI client, configure Doubao model service URL
client = OpenAI(
    base_url="https://ark.cn-beijing.volces.com/api/v3",
    api_key=api_key,
)

def call_llm(prompt: str) -> str:
    """
    Call the Doubao LLM to get a thinking result.
    :param prompt: Prompt for the LLM
    :return: LLM response text
    """
    try:
        # Call the Doubao chat completions endpoint (text-only interaction)
        response = client.chat.completions.create(
            model=os.getenv("DOUBAO_MODEL"),  # e.g., doubao-lite
            messages=[
                {"role": "user", "content": prompt}  # Plain-string content is the simplest supported form
            ]
        )
        # Extract the reply (the response follows the OpenAI chat completions schema)
        return response.choices[0].message.content.strip()
    except Exception as e:
        return f"LLM call failed: {str(e)}"

3.2 Implement Agent Basic Thinking Logic

Add the Agent thinking function, letting the LLM decide if a task requires calling a tool (define the logic here, implement tools in Step 4):

python

def agent_think(task: str) -> dict:
    """
    Agent thinking logic: Decide if task requires calling a tool.
    :param task: User input task
    :return: Thinking result (need tool?, tool name, parameters)
    """
    # Prompt: Make LLM clearly decide if a tool is needed, which tool, and what parameters.
    prompt = f"""
    You are an AI Agent that needs to handle user task: {task}
    Please judge based on the following rules:
    1. If the task is getting real-time info (e.g., weather, news), performing calculations, or manipulating data, you need to call a tool.
    2. For other tasks (e.g., Q&A, summarization, creative writing), no tool is needed, answer directly.
    3. When a tool is needed, return format: {{"need_tool": true, "tool_name": "tool_name", "tool_params": {{"param_name": "param_value"}}}}
    4. When no tool is needed, return format: {{"need_tool": false, "answer": "your_answer"}}
    5. Return ONLY the JSON object, with no extra explanation or markdown fences.
    Available tools:
    - get_weather: Get weather, parameter: city (city name, string)
    """
    # Call LLM to get thinking result
    llm_response = call_llm(prompt)
    # Parse the LLM's string reply into a dict (beginner note: LLMs sometimes
    # add extra text around the JSON, so production code should parse defensively)
    import json
    try:
        return json.loads(llm_response)
    except Exception as e:
        return {"need_tool": False, "answer": f"Thinking failed: {str(e)}, please re-enter the task."}

Note: The prompt is the Agent’s “rules of conduct”. You must clearly tell the LLM “when to call a tool” and “how to return results”, otherwise the LLM will output randomly. Using JSON format here constrains the return result, making it easy to parse later.
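
Even with a strict prompt, LLMs sometimes wrap the JSON in markdown code fences or add stray prose around it. A defensive parsing helper like the sketch below (parse_llm_json is an invented name, not part of the tutorial code) can recover those cases:

```python
import json
import re

def parse_llm_json(text: str) -> dict:
    """Defensively parse JSON from an LLM reply.

    Tries a direct parse first, then falls back to extracting the first
    {...} span, which handles replies wrapped in code fences or prose.
    """
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Fall back: grab the outermost-looking JSON object in the text
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    # Last resort: treat the whole reply as a plain direct answer
    return {"need_tool": False, "answer": text}

print(parse_llm_json('{"need_tool": false, "answer": "hi"}'))
```

Swapping json.loads for a helper like this in agent_think makes the Agent much more tolerant of imperfect LLM output.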


Step 4: Implement Agent Tool Library – Using “Get Weather” as an Example 🔧

Tools are the Agent’s “hands and feet”, allowing it to interact with the external world. We’ll use the most common “get weather” as an example to implement a tool function (can be extended to translation, DB operations, etc., in real development).

Here, we use the free “Amap Weather API” (easy for beginners to apply for). First, go to the Amap Open Platform to apply for a Web Service Key (free), then add it to the .env file:

env

# Add to .env file
AMAP_WEATHER_KEY="Your Amap Weather API Key"

4.1 Implement Weather Tool Function

Add code in main.py (requires installing requests library: pip install requests):

python

import requests

def get_weather(city: str) -> str:
    """
    Tool function: Get real-time weather for specified city.
    :param city: City name (e.g., "Beijing")
    :return: Weather information
    """
    # Amap Weather API interface (real-time weather)
    url = "https://restapi.amap.com/v3/weather/weatherInfo"
    params = {
        "key": os.getenv("AMAP_WEATHER_KEY"),
        "city": city,
        "extensions": "base"  # base=real-time, all=forecast
    }
    try:
        response = requests.get(url, params=params, timeout=10)  # Timeout avoids hanging on network issues
        data = response.json()
        # Parse weather data (simplified; production code should validate more)
        if data.get("status") == "1" and len(data.get("lives", [])) > 0:
            weather_data = data["lives"][0]
            return f"{city} current weather: {weather_data['weather']}, Temp: {weather_data['temperature']}°C, Humidity: {weather_data['humidity']}%, Wind: {weather_data['winddirection']} force {weather_data['windpower']}"
        else:
            return f"Failed to get {city} weather: {data.get('info', 'Unknown error')}"
    except Exception as e:
        return f"Failed to get weather: {str(e)}"
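
To make the parsing logic concrete, here is a mocked payload shaped like the fields get_weather() reads. The values are invented for illustration and the real API returns additional fields:

```python
import json

# Mocked Amap-style payload containing only the fields the tool reads (values invented)
mock_response = json.loads("""
{
  "status": "1",
  "info": "OK",
  "lives": [
    {
      "city": "Beijing",
      "weather": "Sunny",
      "temperature": "25",
      "humidity": "40",
      "winddirection": "NE",
      "windpower": "3"
    }
  ]
}
""")

# Same checks the tool function performs
if mock_response.get("status") == "1" and mock_response.get("lives"):
    live = mock_response["lives"][0]
    summary = f"{live['city']}: {live['weather']}, {live['temperature']}°C"
else:
    summary = mock_response.get("info", "Unknown error")

print(summary)  # -> Beijing: Sunny, 25°C
```

Testing the parsing against a mocked payload like this lets you debug without spending real API calls.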

4.2 Encapsulate Tool Call Entry

Add a tool dispatch function to call the corresponding tool based on the Agent’s thinking result:

python

def call_tool(tool_name: str, tool_params: dict) -> str:
    """
    Tool call dispatch center.
    :param tool_name: Tool name
    :param tool_params: Tool parameters
    :return: Tool execution result
    """
    # Tool mapping table: When adding new tools, just add the mapping here.
    tool_map = {
        "get_weather": get_weather
    }
    # Check if tool exists
    if tool_name not in tool_map:
        return f"Unknown tool: {tool_name}"
    # Call tool (**tool_params: Unpack dict to keyword arguments)
    try:
        return tool_map[tool_name](**tool_params)
    except Exception as e:
        return f"Tool call failed: {str(e)}"

Note: Using tool_map for tool mapping. When adding new tools later (e.g., translation, calculation), just implement the function and add it to tool_map without modifying core logic, adhering to the “Open/Closed Principle”.
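
One way to keep the tool map maintainable as tools accumulate is a small registry decorator. This is a design sketch with invented names (register_tool, TOOL_MAP, and a toy calculate tool), not part of the tutorial's code:

```python
# Registry-decorator sketch: tools self-register instead of being listed by hand
TOOL_MAP: dict = {}

def register_tool(func):
    """Decorator: add the function to the tool registry under its own name."""
    TOOL_MAP[func.__name__] = func
    return func

@register_tool
def get_weather(city: str) -> str:
    return f"{city}: sunny (stubbed)"

@register_tool
def calculate(expression: str) -> str:
    # Toy calculator for illustration; never eval untrusted input in real code
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        return "Invalid expression"
    return str(eval(expression))

def call_tool(tool_name: str, tool_params: dict) -> str:
    if tool_name not in TOOL_MAP:
        return f"Unknown tool: {tool_name}"
    try:
        return TOOL_MAP[tool_name](**tool_params)
    except Exception as e:
        return f"Tool call failed: {str(e)}"

print(call_tool("calculate", {"expression": "2 + 3 * 4"}))  # -> 14
```

With this pattern, adding a tool is just writing the function and stacking @register_tool on it; the dispatch code never changes.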


Step 5: Develop FastAPI Interface – Connect to Frontend/Client 📡

Now, encapsulate the Agent’s core logic and tools into an API interface for frontend or other clients to call. We’ll define a POST endpoint that receives a user task and returns the Agent’s processing result.

5.1 Define Request/Response Models (Pydantic)

FastAPI uses Pydantic for data validation. Add request and response models in main.py:

python

from pydantic import BaseModel

# Request model: Receive user task
class TaskRequest(BaseModel):
    task: str  # User input task
    user_id: str | None = None  # Optional: User ID for multi-user distinction (beginners can ignore)

# Response model: Return processing result
class AgentResponse(BaseModel):
    code: int = 200  # Status code: 200 success, 500 failure
    message: str = "success"
    data: dict  # Specific processing result (thinking result + tool execution result / direct answer)

5.2 Implement Agent Core Endpoint

python

@app.post("/agent/process", response_model=AgentResponse)
def process_task(request: TaskRequest):
    """
    AI Agent core endpoint: Process user task.
    :param request: Request body containing user task.
    :return: Agent processing result.
    """
    try:
        # 1. Agent thinks: Decide if tool is needed.
        think_result = agent_think(request.task)
        # 2. Process thinking result.
        if think_result.get("need_tool"):  # .get() avoids a KeyError if the LLM returned malformed JSON
            # 3. Call tool.
            tool_result = call_tool(
                tool_name=think_result["tool_name"],
                tool_params=think_result["tool_params"]
            )
            # 4. Organize result (return tool result to user).
            return AgentResponse(
                data={
                    "task": request.task,
                    "think_process": "Requires tool processing",
                    "tool_name": think_result["tool_name"],
                    "tool_params": think_result["tool_params"],
                    "result": tool_result
                }
            )
        else:
            # No tool needed, return LLM's answer directly.
            return AgentResponse(
                data={
                    "task": request.task,
                    "think_process": "No tool needed, answer directly",
                    "result": think_result["answer"]
                }
            )
    except Exception as e:
        # Exception handling: Return error message.
        return AgentResponse(
            code=500,
            message="Processing failed",
            data={"error": str(e)}
        )

5.3 Test Endpoint (Auto-Documentation)

Restart the Uvicorn server and visit http://127.0.0.1:8000/docs. Find the /agent/process endpoint and click “Try it out”:

json

// Request body example 1 (requires tool call)
{
  "task": "Check the weather in Beijing today",
  "user_id": "test001"
}

// Request body example 2 (doesn't require tool call)
{
  "task": "Explain what an AI Agent is",
  "user_id": "test001"
}

Click “Execute” to see the Agent’s processing result! For example, querying weather returns Beijing’s real-time weather, explaining a concept returns the LLM’s answer directly.


Step 6: Simple Deployment – Let Others Access Your Agent 🌐

After development, deployment is needed for external access. Beginners are recommended to use “PythonAnywhere” (free tier is enough for testing). Steps:

6.1 Prepare Deployment Files

Package 3 files from the project: main.py, .env, and requirements.txt (create a new requirements.txt listing all the dependencies):

txt

# requirements.txt content
fastapi
uvicorn
openai
python-dotenv
requests

6.2 Deploy to PythonAnywhere

  1. Register a PythonAnywhere account (https://www.pythonanywhere.com/).
  2. Go to the “Files” page, upload main.py, .env, and requirements.txt.
  3. Go to “Consoles” → “Bash”, execute pip install -r requirements.txt --user to install dependencies.
  4. Go to the “Web” page, click “Add a new web app”, select “Manual configuration” → “Python 3.10+”.
  5. In the “WSGI configuration file”, modify the code:

python

import os
import sys

path = '/home/your_username'  # Your project path
if path not in sys.path:
    sys.path.append(path)

from main import app as application  # Import FastAPI app instance

  Note: FastAPI is an ASGI application, so the plain WSGI setup above may not serve it directly; check PythonAnywhere's ASGI support, or wrap the app with an ASGI-to-WSGI adapter if requests fail.
  6. Click “Reload”, wait for deployment to complete. Visit “Your web app URL” to use the Agent interface!

Beginners Must Read: Common Issues & Optimization Directions 💡

1. Common Pitfalls and Solutions

  • LLM call fails: Check that the API key is correct and the network is up (Doubao is hosted in mainland China, so direct access works without a proxy).
  • Weather tool call fails: Check that the Amap API key is correct and the city name is in standard form (e.g., “北京” (Beijing), not “北京市”).
  • Endpoint returns “500” error: Check PythonAnywhere “Logs” to locate the cause (likely missing dependencies or path issues).
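
On the first pitfall: transient network errors are common in practice, and a simple retry-with-backoff wrapper can absorb them. This is a sketch (with_retries and the flaky demo function are invented for illustration); in a real project you would wrap call_llm or the requests call:

```python
import time

def with_retries(func, attempts: int = 3, base_delay: float = 0.1):
    """Return a wrapper that retries func with exponential backoff."""
    def wrapper(*args, **kwargs):
        for i in range(attempts):
            try:
                return func(*args, **kwargs)
            except Exception:
                if i == attempts - 1:
                    raise  # Out of attempts: let the caller handle it
                time.sleep(base_delay * (2 ** i))  # 0.1s, 0.2s, 0.4s, ...
    return wrapper

# Demo with a function that fails twice before succeeding
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky)())  # -> ok (succeeds on the third attempt)
```

Keep the attempt count small for user-facing endpoints so failures surface quickly instead of stacking delays.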

2. Future Optimization Directions (Beginner’s Next Steps)

  • Expand tool library: Add translation, database query, file operations, etc.
  • Optimize Agent thinking logic: Use multi-turn conversation to remember context (add session management).
  • Add access control: Add API key to endpoints to prevent malicious calls.
  • Frontend interface: Build a simple frontend page with Vue/React for more intuitive use.
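
As a taste of the session-management direction, a minimal in-memory conversation store might look like the sketch below (SESSIONS, add_message, and get_history are invented names; production code would cap and persist history):

```python
from collections import defaultdict

# In-memory per-user conversation history (illustrative sketch only)
SESSIONS: dict = defaultdict(list)

def add_message(user_id: str, role: str, content: str) -> None:
    """Append one chat message to the user's session history."""
    SESSIONS[user_id].append({"role": role, "content": content})

def get_history(user_id: str, max_turns: int = 10) -> list:
    # Keep only the most recent messages to stay within the model's context limit
    return SESSIONS[user_id][-max_turns:]

add_message("test001", "user", "Check the weather in Beijing")
add_message("test001", "assistant", "Beijing: sunny, 25°C")
print(get_history("test001"))
```

Passing get_history(user_id) as the messages list in the LLM call is what turns the single-shot Agent into a multi-turn one; the optional user_id field in TaskRequest is the natural session key.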

Supplement: Doubao LLM API Key Application Process 📝

  1. Application Address: Volcano Engine (stable domestic access, beginner-friendly, ample free credits).
  2. Application Steps:
    • Register/Log in with a ByteDance account (can register with phone number, supports linking with Douyin/Toutiao accounts).
    • After login, go to “Open Platform” → “API Key Management” module (can be found in left navigation).
    • Click “Create API Key”, select corresponding project (beginners can create a personal test project), save the key after generation (shown only once, keep it safe).
    • Beginner’s Bonus: New users receive generous free credits upon registration. Lightweight models like doubao-lite have very low call costs, sufficient for development and testing.

Summary: Core Mindset for Beginners Developing AI Agents 🎯

The core of developing AI Agents with FastAPI is “layered design”:

  1. Interface Layer (FastAPI): Responsible for receiving and returning data.
  2. Core Layer (Agent thinking logic): Responsible for judging tasks and dispatching tools.
  3. Tool Layer: Responsible for interacting with external systems (weather, translation, etc.).

Beginners don’t need to pursue complex features from the start. First achieve the loop of “calling LLM + one simple tool”, then gradually expand. By following the 6 steps in this article, you’ve completed the breakthrough from 0 to 1!

If you encounter problems during development, feel free to leave a comment for discussion. More advanced AI Agent tutorials will follow!