Eino: AgenticModel User Guide [Beta]

Introduction

AgenticModel is an abstraction of model capabilities centered on “goal-driven autonomous execution”. As capabilities such as caching and built-in tools become natively supported in APIs from advanced providers (e.g. the OpenAI Responses API and the Claude API), models are evolving from “one-shot Q&A engines” into “goal-oriented autonomous agents”: they can plan around a user’s objective, invoke tools, and iterate in a closed loop to accomplish more complex tasks.

Differences from ChatModel

• Positioning
  • AgenticModel: model component abstraction based on AgenticMessage
  • ChatModel: model component abstraction based on Message
• Core Entities
  • AgenticModel: AgenticMessage, ContentBlock
  • ChatModel: Message
• Capabilities
  • AgenticModel: multi-turn model conversation generation, session caching, support for various built-in tools, support for MCP tools, better model adaptability
  • ChatModel: single-turn model conversation generation, session caching, support for simple built-in tools
• Related Components
  • AgenticModel: AgenticTemplate, AgenticToolsNode
  • ChatModel: ChatTemplate, ToolsNode

Server-side tools (like web_search) and MCP tools are natively supported by model providers, so a single API request/response may contain the results of multiple reasoning-action cycles. Taking web search capability as an example:

    • Implemented with ChatModel: requires pre-defining a custom web search tool. One reasoning-action process is as follows:
      1. Model generates tool call parameters
      2. User-side executes the tool
      3. Returns tool result to the model
    • Implemented with AgenticModel: can directly configure native web search tools provided by the model provider. One API request process is as follows:
      1. The model autonomously calls the web search tool based on the user’s question and completes multiple tool calls on the model server side, generating multiple reasoning-action results until the user’s task is completed.
      2. The user side only needs to receive the results.

    Component Definition

    Interface Definition

    Code location: https://github.com/cloudwego/eino/tree/main/components/model/interface.go

    type AgenticModel interface {
        Generate(ctx context.Context, input []*schema.AgenticMessage, opts ...Option) (*schema.AgenticMessage, error)
        Stream(ctx context.Context, input []*schema.AgenticMessage, opts ...Option) (*schema.StreamReader[*schema.AgenticMessage], error)
    
        // WithTools returns a new Model instance with the specified tools bound.
        // This method does not modify the current instance, making it safer for concurrent use.
        WithTools(tools []*schema.ToolInfo) (AgenticModel, error)
    }
    

    Generate Method

    • Function: Generate a complete model response
    • Parameters:
      • ctx: Context object for passing request-level information and Callback Manager
      • input: Input message list
      • opts: Optional parameters for configuring model behavior
    • Return values:
      • *schema.AgenticMessage: The response message generated by the model
      • error: Error information during generation

    Stream Method

    • Function: Generate model response in streaming mode
    • Parameters: Same as Generate method
    • Return values:
      • *schema.StreamReader[*schema.AgenticMessage]: Stream reader for model response
      • error: Error information during generation
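A stream returned by Stream is typically consumed by calling Recv in a loop until it reports io.EOF, then closing the reader. The sketch below uses a local stand-in type with the same Recv/Close contract; the stand-in, its string chunks, and the `drain` helper are illustrative, not Eino types:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// fakeStream stands in for a stream reader: Recv returns the next
// chunk, then io.EOF once the stream is exhausted.
type fakeStream struct {
	chunks []string
	pos    int
}

func (s *fakeStream) Recv() (string, error) {
	if s.pos >= len(s.chunks) {
		return "", io.EOF
	}
	c := s.chunks[s.pos]
	s.pos++
	return c, nil
}

func (s *fakeStream) Close() {}

// drain shows the usual consumption loop: read until io.EOF,
// closing the reader when done.
func drain(s *fakeStream) (string, error) {
	defer s.Close()
	var sb strings.Builder
	for {
		chunk, err := s.Recv()
		if err == io.EOF {
			return sb.String(), nil
		}
		if err != nil {
			return "", err
		}
		sb.WriteString(chunk)
	}
}

func main() {
	s := &fakeStream{chunks: []string{"Hel", "lo"}}
	out, _ := drain(s)
	fmt.Println(out) // Hello
}
```

With the real API, each received value is a *schema.AgenticMessage chunk rather than a string, but the loop shape is the same.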

    WithTools Method

    • Function: Bind available tools to the model
    • Parameters:
      • tools: List of tool information
    • Return values:
      • AgenticModel: A new AgenticModel instance with the specified tools bound
      • error: Error information during binding

    AgenticMessage Struct

    Code location: https://github.com/cloudwego/eino/tree/main/schema/agentic_message.go

    AgenticMessage is the basic unit for interacting with the model. A complete response from the model is encapsulated as an AgenticMessage, which carries complex composite content through an ordered set of ContentBlocks. The biggest difference between AgenticMessage and Message is the introduction of the ContentBlock array concept, which can carry multiple reasoning-action outputs from the AgenticModel. Definition:

    type AgenticMessage struct {
        // Role is the message role.
        Role AgenticRoleType
    
        // ContentBlocks is the list of content blocks.
        ContentBlocks []*ContentBlock
        
        // ResponseMeta is the response metadata.
        ResponseMeta *AgenticResponseMeta
    
        // Extra is the additional information.
        Extra map[string]any
    }
    

    ContentBlock is the basic building unit of AgenticMessage, used to carry the specific content of a message. It’s designed as a polymorphic structure that identifies what type of data the current block contains through the Type field and holds the corresponding non-null pointer field. ContentBlock allows a message to contain mixed rich media content or structured data, such as “text + image” or “reasoning process + tool call”. Definition:

    type ContentBlockType string
    
    const (
        ContentBlockTypeReasoning               ContentBlockType = "reasoning"
        ContentBlockTypeUserInputText           ContentBlockType = "user_input_text"
        ContentBlockTypeUserInputImage          ContentBlockType = "user_input_image"
        ContentBlockTypeUserInputAudio          ContentBlockType = "user_input_audio"
        ContentBlockTypeUserInputVideo          ContentBlockType = "user_input_video"
        ContentBlockTypeUserInputFile           ContentBlockType = "user_input_file"
        ContentBlockTypeAssistantGenText        ContentBlockType = "assistant_gen_text"
        ContentBlockTypeAssistantGenImage       ContentBlockType = "assistant_gen_image"
        ContentBlockTypeAssistantGenAudio       ContentBlockType = "assistant_gen_audio"
        ContentBlockTypeAssistantGenVideo       ContentBlockType = "assistant_gen_video"
        ContentBlockTypeFunctionToolCall        ContentBlockType = "function_tool_call"
        ContentBlockTypeFunctionToolResult      ContentBlockType = "function_tool_result"
        ContentBlockTypeServerToolCall          ContentBlockType = "server_tool_call"
        ContentBlockTypeServerToolResult        ContentBlockType = "server_tool_result"
        ContentBlockTypeMCPToolCall             ContentBlockType = "mcp_tool_call"
        ContentBlockTypeMCPToolResult           ContentBlockType = "mcp_tool_result"
        ContentBlockTypeMCPListToolsResult      ContentBlockType = "mcp_list_tools_result"
        ContentBlockTypeMCPToolApprovalRequest  ContentBlockType = "mcp_tool_approval_request"
        ContentBlockTypeMCPToolApprovalResponse ContentBlockType = "mcp_tool_approval_response"
    )
    
    type ContentBlock struct {
        Type ContentBlockType
    
        // Reasoning contains the reasoning content generated by the model.
        Reasoning *Reasoning
    
        // UserInputText contains the text content provided by the user.
        UserInputText *UserInputText
    
        // UserInputImage contains the image content provided by the user.
        UserInputImage *UserInputImage
    
        // UserInputAudio contains the audio content provided by the user.
        UserInputAudio *UserInputAudio
    
        // UserInputVideo contains the video content provided by the user.
        UserInputVideo *UserInputVideo
    
        // UserInputFile contains the file content provided by the user.
        UserInputFile *UserInputFile
    
        // AssistantGenText contains the text content generated by the model.
        AssistantGenText *AssistantGenText
    
        // AssistantGenImage contains the image content generated by the model.
        AssistantGenImage *AssistantGenImage
    
        // AssistantGenAudio contains the audio content generated by the model.
        AssistantGenAudio *AssistantGenAudio
    
        // AssistantGenVideo contains the video content generated by the model.
        AssistantGenVideo *AssistantGenVideo
    
        // FunctionToolCall contains the invocation details for a user-defined tool.
        FunctionToolCall *FunctionToolCall
    
        // FunctionToolResult contains the result returned from a user-defined tool call.
        FunctionToolResult *FunctionToolResult
    
        // ServerToolCall contains the invocation details for a provider built-in tool executed on the model server.
        ServerToolCall *ServerToolCall
    
        // ServerToolResult contains the result returned from a provider built-in tool executed on the model server.
        ServerToolResult *ServerToolResult
    
        // MCPToolCall contains the invocation details for an MCP tool managed by the model server.
        MCPToolCall *MCPToolCall
    
        // MCPToolResult contains the result returned from an MCP tool managed by the model server.
        MCPToolResult *MCPToolResult
    
        // MCPListToolsResult contains the list of available MCP tools reported by the model server.
        MCPListToolsResult *MCPListToolsResult
    
        // MCPToolApprovalRequest contains the user approval request for an MCP tool call when required.
        MCPToolApprovalRequest *MCPToolApprovalRequest
    
        // MCPToolApprovalResponse contains the user's approval decision for an MCP tool call.
        MCPToolApprovalResponse *MCPToolApprovalResponse
    
        // StreamingMeta contains metadata for streaming responses.
        StreamingMeta *StreamingMeta
    
        // Extra contains additional information for the content block.
        Extra map[string]any
    }
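Because only the pointer field matching Type is non-nil, consumers normally read a ContentBlock with a switch on Type. A minimal, self-contained sketch of that dispatch pattern, using local stand-ins for two of the schema types (the stand-in names are illustrative):

```go
package main

import "fmt"

// Local stand-ins for schema.ContentBlock and friends, trimmed to two
// variants; the real types live in Eino's schema package.
type contentBlockType string

const (
	blockAssistantGenText contentBlockType = "assistant_gen_text"
	blockFunctionToolCall contentBlockType = "function_tool_call"
)

type assistantGenText struct{ Text string }
type functionToolCall struct{ CallID, Name, Arguments string }

type contentBlock struct {
	Type             contentBlockType
	AssistantGenText *assistantGenText
	FunctionToolCall *functionToolCall
}

// describe dispatches on Type and reads the matching non-nil pointer
// field, which is how a polymorphic block is normally consumed.
func describe(b *contentBlock) string {
	switch b.Type {
	case blockAssistantGenText:
		return "text: " + b.AssistantGenText.Text
	case blockFunctionToolCall:
		return fmt.Sprintf("tool call %s(%s)", b.FunctionToolCall.Name, b.FunctionToolCall.Arguments)
	default:
		return "unhandled block type: " + string(b.Type)
	}
}

func main() {
	blocks := []*contentBlock{
		{Type: blockAssistantGenText, AssistantGenText: &assistantGenText{Text: "Checking the weather."}},
		{Type: blockFunctionToolCall, FunctionToolCall: &functionToolCall{Name: "get_weather", Arguments: `{"city":"Beijing"}`}},
	}
	for _, b := range blocks {
		fmt.Println(describe(b))
	}
}
```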
    

    AgenticResponseMeta is the metadata returned in model responses, where TokenUsage is common metadata returned by all model providers. OpenAIExtension, GeminiExtension, and ClaudeExtension are extension field definitions specific to OpenAI, Gemini, and Claude models respectively; extension information from other model providers is placed in Extension, with specific definitions provided by the corresponding component implementations in eino-ext.

    type AgenticResponseMeta struct {
        // TokenUsage is the token usage.
        TokenUsage *TokenUsage
    
        // OpenAIExtension is the extension for OpenAI.
        OpenAIExtension *openai.ResponseMetaExtension
    
        // GeminiExtension is the extension for Gemini.
        GeminiExtension *gemini.ResponseMetaExtension
    
        // ClaudeExtension is the extension for Claude.
        ClaudeExtension *claude.ResponseMetaExtension
    
        // Extension is the extension for other models, supplied by the component implementer.
        Extension any
    }
    

    Reasoning

    The Reasoning type represents the model’s reasoning process and thinking content. Some advanced models can perform internal reasoning before generating the final answer, and this reasoning content can be conveyed through this type.

    • Definition
    type Reasoning struct {
        // Text is either the thought summary or the raw reasoning text itself.
        Text string
    
        // Signature contains encrypted reasoning tokens.
        // Required by some models when passing reasoning text back.
        Signature string
    }
    
    • Example
    reasoning := &schema.Reasoning{
        Text:      "The user now needs me to solve...",
        Signature: "asjkhvipausdgy23oadlfdsf",
    }
    

    UserInputText

    UserInputText is the most basic content type, conveying plain text input. It’s the primary way users interact with models, suitable for natural language dialogue, instruction delivery, and asking questions.

    • Definition
    type UserInputText struct {
        // Text is the text content.
        Text string
    }
    
    • Example
    textInput := &schema.UserInputText{
        Text: "Please help me analyze the performance bottleneck of this code",
    }
    
    // Or use convenience functions to create complete messages directly
    userMsg := schema.UserAgenticMessage("Please help me analyze the performance bottleneck of this code")
    systemMsg := schema.SystemAgenticMessage("You are an intelligent assistant")
    developerMsg := schema.DeveloperAgenticMessage("You are an intelligent assistant")
    

    UserInputImage

    UserInputImage is used to provide image content to the model. It supports passing image data via URL reference or Base64 encoding, suitable for visual understanding, image analysis, and multimodal dialogue scenarios.

    • Definition
    type UserInputImage struct {
        // URL is the HTTP/HTTPS link.
        URL string
    
        // Base64Data is the binary data in Base64 encoded string format.
        Base64Data string
    
        // MIMEType is the mime type, e.g. "image/png".
        MIMEType string
    
        // Detail is the quality of the image url.
        Detail ImageURLDetail
    }
    
    • Example
    // Using URL method
    imageInput := &schema.UserInputImage{
        URL:      "https://example.com/chart.png",
        MIMEType: "image/png",
        Detail:   schema.ImageURLDetailHigh,
    }
    
    // Using Base64 encoding method
    imageInput := &schema.UserInputImage{
        Base64Data: "iVBORw0KGgoAAAANSUhEUgAAAAUA...",
        MIMEType:   "image/png",
    }
    

    UserInputAudio

    UserInputAudio is used to provide audio content to the model. It’s suitable for speech recognition, audio analysis, and multimodal understanding scenarios.

    • Definition
    type UserInputAudio struct {
        // URL is the HTTP/HTTPS link.
        URL string
    
        // Base64Data is the binary data in Base64 encoded string format.
        Base64Data string
    
        // MIMEType is the mime type, e.g. "audio/wav".
        MIMEType string
    }
    
    • Example
    audioInput := &schema.UserInputAudio{
        URL:      "https://example.com/voice.wav",
        MIMEType: "audio/wav",
    }
    

    UserInputVideo

    UserInputVideo is used to provide video content to the model. It’s suitable for video understanding, scene analysis, and other advanced visual tasks such as action recognition.

    • Definition
    type UserInputVideo struct {
        // URL is the HTTP/HTTPS link.
        URL string
    
        // Base64Data is the binary data in Base64 encoded string format.
        Base64Data string
    
        // MIMEType is the mime type, e.g. "video/mp4".
        MIMEType string
    }
    
    • Example
    videoInput := &schema.UserInputVideo{
        URL:      "https://example.com/demo.mp4",
        MIMEType: "video/mp4",
    }
    

    UserInputFile

    UserInputFile is used to provide file content to the model. It’s suitable for document analysis, data extraction, and knowledge understanding scenarios.

    • Definition
    type UserInputFile struct {
        // URL is the HTTP/HTTPS link.
        URL string
    
        // Name is the filename.
        Name string
    
        // Base64Data is the binary data in Base64 encoded string format.
        Base64Data string
    
        // MIMEType is the mime type, e.g. "application/pdf".
        MIMEType string
    }
    
    • Example
    fileInput := &schema.UserInputFile{
        URL:      "https://example.com/report.pdf",
        Name:     "report.pdf",
        MIMEType: "application/pdf",
    }
    

    AssistantGenText

    AssistantGenText is the text content generated by the model, the most common form of model output. Extension field definitions differ for different model providers: OpenAI models use OpenAIExtension, Claude models use ClaudeExtension; extension information from other model providers is placed in Extension, with specific definitions provided by corresponding component implementations in eino-ext.

    • Definition
    import (
        "github.com/cloudwego/eino/schema/claude"
        "github.com/cloudwego/eino/schema/openai"
    )
    
    type AssistantGenText struct {
        // Text is the generated text.
        Text string
    
        // OpenAIExtension is the extension for OpenAI.
        OpenAIExtension *openai.AssistantGenTextExtension
    
        // ClaudeExtension is the extension for Claude.
        ClaudeExtension *claude.AssistantGenTextExtension
    
        // Extension is the extension for other models.
        Extension any
    }
    
    • Example

      • Creating a response
      // AssistantGenTextExtension, TextAnnotation, and annotation below are
      // implementation-specific definitions supplied by the component in eino-ext.
      textGen := &schema.AssistantGenText{
          Text: "Based on your requirements, I suggest the following approach...",
          Extension: &AssistantGenTextExtension{
              Annotations: []*TextAnnotation{annotation},
          },
      }
      
      • Parsing a response
      import (
          "github.com/cloudwego/eino-ext/components/model/agenticark"
      )
      
      // Assert to concrete implementation definition
      ext := textGen.Extension.(*agenticark.AssistantGenTextExtension)
      

    AssistantGenImage

    AssistantGenImage is the image content generated by the model. Some models have image generation capabilities and can create images based on text descriptions, with output results conveyed through this type.

    • Definition
    type AssistantGenImage struct {
        // URL is the HTTP/HTTPS link.
        URL string
    
        // Base64Data is the binary data in Base64 encoded string format.
        Base64Data string
    
        // MIMEType is the mime type, e.g. "image/png".
        MIMEType string
    }
    
    • Example
    imageGen := &schema.AssistantGenImage{
        URL:      "https://api.example.com/generated/image123.png",
        MIMEType: "image/png",
    }
    

    AssistantGenAudio

    AssistantGenAudio is the audio content generated by the model. Some models have audio generation capabilities, with output audio data conveyed through this type.

    • Definition
    type AssistantGenAudio struct {
        // URL is the HTTP/HTTPS link.
        URL string
    
        // Base64Data is the binary data in Base64 encoded string format.
        Base64Data string
    
        // MIMEType is the mime type, e.g. "audio/wav".
        MIMEType string
    }
    
    • Example
    audioGen := &schema.AssistantGenAudio{
        URL:      "https://api.example.com/generated/audio123.wav",
        MIMEType: "audio/wav",
    }
    

    AssistantGenVideo

    AssistantGenVideo is the video content generated by the model. Some models have video generation capabilities, with output video data conveyed through this type.

    • Definition
    type AssistantGenVideo struct {
        // URL is the HTTP/HTTPS link.
        URL string
    
        // Base64Data is the binary data in Base64 encoded string format.
        Base64Data string
    
        // MIMEType is the mime type, e.g. "video/mp4".
        MIMEType string
    }
    
    • Example
    videoGen := &schema.AssistantGenVideo{
        URL:      "https://api.example.com/generated/video123.mp4",
        MIMEType: "video/mp4",
    }
    

    FunctionToolCall

    FunctionToolCall represents a user-defined function tool call initiated by the model. When the model needs to execute a specific function, it generates a tool call request containing the tool name and parameters, with actual execution handled by the user side.

    • Definition
    type FunctionToolCall struct {
        // CallID is the unique identifier for the tool call.
        CallID string
    
        // Name specifies the function tool invoked.
        Name string
    
        // Arguments is the JSON string arguments for the function tool call.
        Arguments string
    }
    
    • Example
    toolCall := &schema.FunctionToolCall{
        CallID:    "call_abc123",
        Name:      "get_weather",
        Arguments: `{"location": "Beijing", "unit": "celsius"}`,
    }
    

    FunctionToolResult

    FunctionToolResult represents the execution result of a user-defined function tool. After the user side executes a tool call, the result is returned to the model through this type, allowing the model to continue generating responses.

    • Definition
    type FunctionToolResult struct {
        // CallID is the unique identifier for the tool call.
        CallID string
    
        // Name specifies the function tool invoked.
        Name string
    
        // Result is the function tool result returned by the user
        Result string
    }
    
    • Example
    toolResult := &schema.FunctionToolResult{
        CallID: "call_abc123",
        Name:   "get_weather",
        Result: `{"temperature": 15, "condition": "sunny"}`,
    }
    
    // Or use convenience function to create message
    msg := schema.FunctionToolResultAgenticMessage(
        "call_abc123",
        "get_weather",
        `{"temperature": 15, "condition": "sunny"}`,
    )
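Putting FunctionToolCall and FunctionToolResult together: the user side unmarshals the Arguments JSON, executes the tool, and sends a JSON string back as Result. A self-contained sketch of that round trip for the hypothetical get_weather tool (the argument and result shapes, and the `runGetWeather` helper, are illustrative, not Eino APIs):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// weatherArgs mirrors the JSON arguments the model emits for the
// hypothetical get_weather tool.
type weatherArgs struct {
	Location string `json:"location"`
	Unit     string `json:"unit"`
}

// runGetWeather executes the tool on the user side and returns the
// JSON result string to place in a FunctionToolResult block.
func runGetWeather(arguments string) (string, error) {
	var args weatherArgs
	if err := json.Unmarshal([]byte(arguments), &args); err != nil {
		return "", err
	}
	// A stubbed lookup; a real tool would query a weather service.
	result := map[string]any{
		"location":    args.Location,
		"temperature": 15,
		"unit":        args.Unit,
	}
	out, err := json.Marshal(result)
	return string(out), err
}

func main() {
	// Arguments as they would arrive in FunctionToolCall.Arguments.
	res, err := runGetWeather(`{"location": "Beijing", "unit": "celsius"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(res)
}
```

The returned string would then be wrapped with schema.FunctionToolResultAgenticMessage using the original CallID, as shown above.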
    

    ServerToolCall

    ServerToolCall represents the invocation of a model server-side built-in tool. Some model providers integrate specific tools (like web search, code executor) on the server side, and the model can autonomously call these tools without user intervention. Arguments are the parameters for the model to call the server-side built-in tool, with specific definitions provided by corresponding component implementations in eino-ext.

    • Definition
    type ServerToolCall struct {
        // Name specifies the server-side tool invoked.
        // Supplied by the model server (e.g., `web_search` for OpenAI, `googleSearch` for Gemini).
        Name string
    
        // CallID is the unique identifier for the tool call.
        // Empty if not provided by the model server.
        CallID string
    
        // Arguments are the raw inputs to the server-side tool,
        // supplied by the component implementer.
        Arguments any
    }
    
    • Example

      • Creating a response
      // ServerToolCallArguments and the nested WebSearch types below are
      // implementation-specific definitions (e.g. from agenticopenai).
      serverCall := &schema.ServerToolCall{
          Name:      "web_search",
          CallID:    "search_123",
          Arguments: &ServerToolCallArguments{
              WebSearch: &WebSearchArguments{
                  ActionType: WebSearchActionSearch,
                  Search: &WebSearchQuery{
                     Query: "Beijing weather today",
                  },
              },
          },
      }
      
      • Parsing a response
      import (
          "github.com/cloudwego/eino-ext/components/model/agenticopenai"
      )
      
      // Assert to concrete implementation definition
      args := serverCall.Arguments.(*agenticopenai.ServerToolCallArguments)
      

    ServerToolResult

    ServerToolResult represents the execution result of a server-side built-in tool. After the model server executes a tool call, the result is returned through this type. Result is the result of the model calling the server-side built-in tool, with specific definitions provided by corresponding component implementations in eino-ext.

    • Definition
    type ServerToolResult struct {
        // Name specifies the server-side tool invoked.
        // Supplied by the model server (e.g., `web_search` for OpenAI, `googleSearch` for Gemini).
        Name string
    
        // CallID is the unique identifier for the tool call.
        // Empty if not provided by the model server.
        CallID string
    
        // Result refers to the raw output generated by the server-side tool,
        // supplied by the component implementer.
        Result any
    }
    
    • Example

      • Creating a response
      // The Result payload below uses implementation-specific definitions
      // (e.g. from agenticopenai); this inner ServerToolResult is the
      // implementation's type, distinct from schema.ServerToolResult.
      serverResult := &schema.ServerToolResult{
          Name:   "web_search",
          CallID: "search_123",
          Result: &ServerToolResult{
              WebSearch: &WebSearchResult{
                 ActionType: WebSearchActionSearch,
                 Search: &WebSearchQueryResult{
                    Sources: sources,
                 },
              },
          },
      }
      
      • Parsing a response
      import (
          "github.com/cloudwego/eino-ext/components/model/agenticopenai"
      )
      
      // Assert to concrete implementation definition
      args := serverResult.Result.(*agenticopenai.ServerToolResult)
      

    MCPToolCall

    MCPToolCall represents an MCP (Model Context Protocol) tool call initiated by the model. Some models allow configuring MCP tools and calling them autonomously without user intervention.

    • Definition
    type MCPToolCall struct {
        // ServerLabel is the MCP server label used to identify it in tool calls
        ServerLabel string
    
        // ApprovalRequestID is the approval request ID.
        ApprovalRequestID string
    
        // CallID is the unique ID of the tool call.
        CallID string
    
        // Name is the name of the tool to run.
        Name string
    
        // Arguments is the JSON string arguments for the tool call.
        Arguments string
    }
    
    • Example
    mcpCall := &schema.MCPToolCall{
        ServerLabel: "database-server",
        CallID:      "mcp_call_456",
        Name:        "execute_query",
        Arguments:   `{"sql": "SELECT * FROM users LIMIT 10"}`,
    }
    

    MCPToolResult

    MCPToolResult represents the MCP tool execution result returned by the model. After the model autonomously completes an MCP tool call, the result or error information is returned through this type.

    • Definition
    type MCPToolResult struct {
        // ServerLabel is the MCP server label used to identify it in tool calls
        ServerLabel string
    
        // CallID is the unique ID of the tool call.
        CallID string
    
        // Name is the name of the tool to run.
        Name string
    
        // Result is the JSON string with the tool result.
        Result string
    
        // Error returned when the server fails to run the tool.
        Error *MCPToolCallError
    }
    
    type MCPToolCallError struct {
        // Code is the error code.
        Code *int64
        
        // Message is the error message.
        Message string
    }
    
    • Example
    // MCP tool call succeeded
    mcpResult := &schema.MCPToolResult{
        ServerLabel: "database-server",
        CallID:      "mcp_call_456",
        Name:        "execute_query",
        Result:      `{"rows": [...], "count": 10}`,
    }
    
    // MCP tool call failed
    errorCode := int64(500)
    mcpError := &schema.MCPToolResult{
        ServerLabel: "database-server",
        CallID:      "mcp_call_456",
        Name:        "execute_query",
        Error: &schema.MCPToolCallError{
            Code:    &errorCode,
            Message: "Database connection failed",
        },
    }
    

    MCPListToolsResult

    MCPListToolsResult represents the query result for available tools from an MCP server returned by the model. Models that support configuring MCP tools can autonomously query the MCP server for available tools, with the query result returned through this type.

    • Definition
    type MCPListToolsResult struct {
        // ServerLabel is the MCP server label used to identify it in tool calls.
        ServerLabel string
    
        // Tools is the list of tools available on the server.
        Tools []*MCPListToolsItem
    
        // Error returned when the server fails to list tools.
        Error string
    }
    
    type MCPListToolsItem struct {
        // Name is the name of the tool.
        Name string
    
        // Description is the description of the tool.
        Description string
    
        // InputSchema is the JSON schema that describes the tool input parameters.
        InputSchema *jsonschema.Schema
    }
    
    • Example
    toolsList := &schema.MCPListToolsResult{
        ServerLabel: "database-server",
        Tools: []*schema.MCPListToolsItem{
            {
                Name:        "execute_query",
                Description: "Execute SQL query",
                InputSchema: &jsonschema.Schema{...},
            },
            {
                Name:        "create_table",
                Description: "Create database table",
                InputSchema: &jsonschema.Schema{...},
            },
        },
    }
    

    MCPToolApprovalRequest

    MCPToolApprovalRequest represents an MCP tool call request that requires user approval. In the model’s autonomous MCP tool calling process, certain sensitive or high-risk operations (like data deletion, external payments) require explicit user authorization before execution. Some models support configuring MCP tool call approval policies, and before each high-risk MCP tool call, the model returns an authorization request through this type.

    • Definition
    type MCPToolApprovalRequest struct {
        // ID is the approval request ID.
        ID string
    
        // Name is the name of the tool to run.
        Name string
    
        // Arguments is the JSON string arguments for the tool call.
        Arguments string
    
        // ServerLabel is the MCP server label used to identify it in tool calls.
        ServerLabel string
    }
    
    • Example
    approvalReq := &schema.MCPToolApprovalRequest{
        ID:          "approval_20260112_001",
        Name:        "delete_records",
        Arguments:   `{"table": "users", "condition": "inactive=true", "estimated_count": 150}`,
        ServerLabel: "database-server",
    }
    

    MCPToolApprovalResponse

    MCPToolApprovalResponse represents the user’s approval decision for an MCP tool call. After receiving an MCPToolApprovalRequest, the user needs to review the operation details and make a decision, choosing to approve or reject the operation, with an optional reason for the decision.

    • Definition
    type MCPToolApprovalResponse struct {
        // ApprovalRequestID is the approval request ID being responded to.
        ApprovalRequestID string
    
        // Approve indicates whether the request is approved.
        Approve bool
    
        // Reason is the rationale for the decision.
        // Optional.
        Reason string
    }
    
    • Example
    approvalResp := &schema.MCPToolApprovalResponse{
        ApprovalRequestID: "approval_789",
        Approve:           true,
        Reason:            "Confirmed deletion of inactive users",
    }
    

    StreamingMeta

    StreamingMeta is used in streaming response scenarios to identify the position of a content block in the final response. During streaming generation, content may be returned in multiple chunks, and the index allows for correctly assembling the complete response.

    • Definition
    type StreamingMeta struct {
        // Index specifies the index position of this block in the final response.
        Index int
    }
    
    • Example
    textGen := &schema.AssistantGenText{Text: "This is the first part"}
    meta := &schema.StreamingMeta{Index: 0}
    block := schema.NewContentBlockChunk(textGen, meta)
    

    Common Options

    AgenticModel and ChatModel share a common set of Options for configuring model behavior. Additionally, AgenticModel provides some exclusive configuration options.

    Code location: https://github.com/cloudwego/eino/tree/main/components/model/option.go

    Option           | AgenticModel                      | ChatModel
    -----------------|-----------------------------------|---------------
    Temperature      | Supported                         | Supported
    Model            | Supported                         | Supported
    TopP             | Supported                         | Supported
    Tools            | Supported                         | Supported
    ToolChoice       | Supported                         | Supported
    MaxTokens        | Supported                         | Supported
    AllowedToolNames | Not Supported                     | Supported
    Stop             | Supported by some implementations | Supported
    AllowedTools     | Supported                         | Not Supported

    Correspondingly, AgenticModel adds the following function for setting these Options:

    // WithAgenticToolChoice is the option to set tool choice for the agentic model.
    func WithAgenticToolChoice(toolChoice schema.ToolChoice, allowedTools ...*schema.AllowedTool) Option {}
    

    Implementation-Specific Custom Options

    The WrapImplSpecificOptFn function lets component implementations inject custom Options. Developers define an implementation-specific options type and provide corresponding Option constructor functions.

    type openaiOptions struct {
        maxToolCalls    *int
        maxOutputTokens *int64
    }
    
    func WithMaxToolCalls(maxToolCalls int) model.Option {
        return model.WrapImplSpecificOptFn(func(o *openaiOptions) {
           o.maxToolCalls = &maxToolCalls
        })
    }
    
    func WithMaxOutputTokens(maxOutputTokens int64) model.Option {
        return model.WrapImplSpecificOptFn(func(o *openaiOptions) {
           o.maxOutputTokens = &maxOutputTokens
        })
    }
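    The mechanics behind this pattern can be sketched with the standard library alone. The Option and WrapImplSpecificOptFn below are simplified stand-ins for the real types in github.com/cloudwego/eino/components/model, shown only to illustrate how an untyped option carrier is narrowed back to the implementation's concrete options struct; details of the real signatures may differ:

```go
package main

import "fmt"

// Option carries an untyped apply function, so a generic component can
// transport options whose concrete type only one implementation knows.
type Option struct {
	apply func(opts any)
}

// WrapImplSpecificOptFn wraps a typed setter; the setter fires only when
// the concrete options type matches, so options aimed at one
// implementation are no-ops for every other implementation.
func WrapImplSpecificOptFn[T any](f func(*T)) Option {
	return Option{apply: func(opts any) {
		if typed, ok := opts.(*T); ok {
			f(typed)
		}
	}}
}

type openaiOptions struct {
	maxToolCalls *int
}

func WithMaxToolCalls(n int) Option {
	return WrapImplSpecificOptFn(func(o *openaiOptions) {
		o.maxToolCalls = &n
	})
}

func main() {
	// The implementation applies all collected options to its own struct.
	o := &openaiOptions{}
	for _, opt := range []Option{WithMaxToolCalls(3)} {
		opt.apply(o)
	}
	fmt.Println(*o.maxToolCalls) // 3
}
```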
    

    Usage

    Standalone Usage

    • Non-streaming call
    import (
        "context"

        "github.com/cloudwego/eino-ext/components/model/agenticopenai"
        "github.com/cloudwego/eino/schema"
        "github.com/eino-contrib/jsonschema"
        orderedmap "github.com/wk8/go-ordered-map/v2"
    )
    
    func main() {
        ctx := context.Background()
    
        am, _ := agenticopenai.New(ctx, &agenticopenai.Config{})
    
        input := []*schema.AgenticMessage{
           schema.UserAgenticMessage("what is the weather like in Beijing"),
        }
    
        am_, _ := am.WithTools([]*schema.ToolInfo{
           {
              Name: "get_weather",
              Desc: "get the weather in a city",
              ParamsOneOf: schema.NewParamsOneOfByJSONSchema(&jsonschema.Schema{
                 Type: "object",
                 Properties: orderedmap.New[string, *jsonschema.Schema](
                    orderedmap.WithInitialData(
                       orderedmap.Pair[string, *jsonschema.Schema]{
                          Key: "city",
                          Value: &jsonschema.Schema{
                             Type:        "string",
                             Description: "the city to get the weather",
                          },
                       },
                    ),
                 ),
                 Required: []string{"city"},
              }),
           },
        })
        
    msg, _ := am_.Generate(ctx, input)
    _ = msg // use the generated message
    }
    
    • Streaming call
    import (
        "context"
        "errors"
        "io"
    
        "github.com/cloudwego/eino-ext/components/model/agenticopenai"
        "github.com/cloudwego/eino/components/model"
        "github.com/cloudwego/eino/schema"
        "github.com/openai/openai-go/v3/responses"
    )
    
    func main() {
        ctx := context.Background()
    
        am, _ := agenticopenai.New(ctx, &agenticopenai.Config{})
    
        serverTools := []*agenticopenai.ServerToolConfig{
           {
              WebSearch: &responses.WebSearchToolParam{
                 Type: responses.WebSearchToolTypeWebSearch,
              },
           },
        }
    
        allowedTools := []*schema.AllowedTool{
           {
              ServerTool: &schema.AllowedServerTool{
                 Name: string(agenticopenai.ServerToolNameWebSearch),
              },
           },
        }
    
        opts := []model.Option{
       model.WithAgenticToolChoice(schema.ToolChoiceForced, allowedTools...),
           agenticopenai.WithServerTools(serverTools),
        }
    
        input := []*schema.AgenticMessage{
           schema.UserAgenticMessage("what's cloudwego/eino"),
        }
    
        resp, _ := am.Stream(ctx, input, opts...)
    
        var msgs []*schema.AgenticMessage
    for {
       msg, err := resp.Recv()
       if err != nil {
          if errors.Is(err, io.EOF) {
             break
          }
          return // handle the receive error
       }
       msgs = append(msgs, msg)
    }
    
    concatenated, _ := schema.ConcatAgenticMessages(msgs)
    _ = concatenated // the fully assembled response message
    }
    

    Usage in Orchestration

    import (
        "github.com/cloudwego/eino/schema"
        "github.com/cloudwego/eino/compose"
    )
    
    func main() {
        /* Initialize AgenticModel
        * am, err := xxx
        */
        
        // Use in Chain
        c := compose.NewChain[[]*schema.AgenticMessage, *schema.AgenticMessage]()
        c.AppendAgenticModel(am)
        
        
        // Use in Graph
        g := compose.NewGraph[[]*schema.AgenticMessage, *schema.AgenticMessage]()
    g.AddAgenticModelNode("model_node", am)
    }
    

    Option and Callback Usage

    Option Usage

    import "github.com/cloudwego/eino/components/model"
    
    response, err := am.Generate(ctx, messages,
        model.WithTemperature(0.7),
        model.WithModel("gpt-5"),
    )
    

    Callback Usage

    import (
        "context"
        "errors"
        "io"

        "github.com/cloudwego/eino/callbacks"
        "github.com/cloudwego/eino/components/model"
        "github.com/cloudwego/eino/compose"
        "github.com/cloudwego/eino/schema"
        callbacksHelper "github.com/cloudwego/eino/utils/callbacks"
    )
    
    // Create callback handler
    handler := &callbacksHelper.AgenticModelCallbackHandler{
        OnStart: func(ctx context.Context, info *callbacks.RunInfo, input *model.AgenticCallbackInput) context.Context {
           return ctx
        },
        OnEnd: func(ctx context.Context, info *callbacks.RunInfo, output *model.AgenticCallbackOutput) context.Context {
           return ctx
        },
        OnError: func(ctx context.Context, info *callbacks.RunInfo, err error) context.Context {
           return ctx
        },
        OnEndWithStreamOutput: func(ctx context.Context, info *callbacks.RunInfo, output *schema.StreamReader[*model.AgenticCallbackOutput]) context.Context {
            defer output.Close()
        
        for {
            chunk, err := output.Recv()
            if err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                break // handle the receive error
            }
            _ = chunk // process the streamed output chunk
        }
        
            return ctx
        },
    }
    
    // Use callback handler
    helper := callbacksHelper.NewHandlerHelper().
        AgenticModel(handler).
        Handler()
    
    /*** compose a chain
    * chain := NewChain
    * chain.Appendxxx().
    *       Appendxxx().
    *       ...
    */
    
    // Use at runtime
    runnable, err := chain.Compile(ctx)
    if err != nil {
        return err
    }
    result, err := runnable.Invoke(ctx, messages, compose.WithCallbacks(helper))
    

    Official Implementations

    To be added


    Last modified March 2, 2026: feat: sync en files (c14c5a55)