Conversation

@tuannvm tuannvm commented May 12, 2025

Summary:
This update refactors the registration of various HAProxy management tools within the MCP server to follow a consistent pattern that enhances readability, error handling, and maintainability. The code now uniformly extracts arguments, logs execution context, and uses a centralized callExec function to perform actions with proper error propagation.

Affected Modules:

  • registerTools function responsible for registering tools such as enable_server, disable_server, set_weight, set_maxconn_server, health and agent checks, and reload commands.

Key Details:

  • Standardized argument extraction from req.Params.Arguments with inline declarations.
  • Improved logging via slog.InfoContext with clear action labels.
  • Utilization of callExec wrapper for executing client commands, simplifying error handling and response formatting.
  • Addition of success messages with specific server/backend details for improved feedback.

Potential Impacts:

  • Enhanced maintainability due to consistent code structure.
  • Slightly smaller codebase from removing duplicated logic; no runtime performance change is expected.
  • Need to verify that callExec and the logging calls behave as expected in all tools.
  • No behavioral change expected; testing should confirm this.

Testing Steps:

  • Invoke each tool individually via the MCP interface with valid backend, server, and other required arguments.
  • Confirm that logs contain correct context information for each operation.
  • Check that success messages display accurate server/backend data.
  • Validate error handling by providing invalid or missing arguments and ensuring proper error responses.
  • Perform a full reload cycle (reload_haproxy) and verify that HAProxy reloads successfully without errors.
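As a concrete illustration of the first testing step, an MCP client invokes a tool with a JSON-RPC `tools/call` request along these lines (the tool and argument names come from this PR; the exact envelope and `id` depend on the client):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "enable_server",
    "arguments": { "backend": "web", "server": "s1" }
  }
}
```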

Summary by CodeRabbit

  • New Features

    • Added a comprehensive set of HAProxy management tools to the MCP server, enabling users to manage backends, servers, health checks, agent checks, statistics, and perform reloads through a unified interface.
  • Refactor

    • Improved internal consistency and reduced duplication in HAProxy client logic, resulting in more reliable and maintainable tool operations.
    • Modularized tool registration for better organization and scalability.
  • Bug Fixes

    • Enhanced error handling and validation for HAProxy runtime operations and statistics parsing.

tuannvm added 3 commits May 11, 2025 20:34
Signed-off-by: Tommy Nguyen <tuannvm@hotmail.com>

coderabbitai bot commented May 12, 2025

Walkthrough

The changes modularize and refactor the MCP tool registration for HAProxy management by splitting tool registration into focused helper functions, adding new utility helpers, and centralizing runtime checks in the HAProxy client. Several new files introduce tool registration for stats, backends, servers, health/agent checks, and reload operations. The HAProxy client and runtime logic are refactored for consistency and maintainability.

Changes

File(s) and change summaries:

  • internal/haproxy/client.go: Refactored HAProxyClient to add ensureRuntime() for centralized runtime client checks and toggleCheck() to handle health/agent check toggling. Updated public methods to use these helpers, reducing code duplication and standardizing error handling.
  • internal/haproxy/runtime/runtime.go: Refactored ShowStat to delegate CSV parsing to a new parseCSVStats helper, improving error handling and code clarity.
  • internal/mcp/register_backend_tools.go: Added new function registerBackendTools to register backend management tools (list_backends, get_backend, show_servers_state) with the MCP server.
  • internal/mcp/register_health_agent_tools.go: Added new function registerHealthAgentTools to register tools for enabling/disabling health and agent checks on backend servers.
  • internal/mcp/register_reload_tool.go: Added new function registerReloadTool to register a tool for reloading HAProxy configuration with the MCP server.
  • internal/mcp/register_server_tools.go: Added new function registerServerTools to register server management tools (list, get, add, delete, enable/disable, set weight/maxconn) with the MCP server.
  • internal/mcp/register_stat_tools.go: Added new function registerStatTools to register statistics and diagnostic tools (show_stat, show_info, debug_counters, clear_counters_all, dump_stats_file) with the MCP server.
  • internal/mcp/tools.go: Refactored to delegate all tool registration to new helper functions, removing inline tool definitions and simplifying imports.
  • internal/mcp/tools_helpers.go: Added utility functions (callJSON, callExec, getString, getInt) to standardize MCP tool call execution, error handling, and response formatting.

Sequence Diagram(s)

Example: MCP Tool Registration and Execution Flow

sequenceDiagram
    participant Admin as Admin User
    participant MCP as MCPServer
    participant HAProxy as HAProxyClient

    Admin->>MCP: Calls RegisterTools()
    MCP->>MCP: Calls registerStatTools()
    MCP->>MCP: Calls registerBackendTools()
    MCP->>MCP: Calls registerServerTools()
    MCP->>MCP: Calls registerHealthAgentTools()
    MCP->>MCP: Calls registerReloadTool()
    Note right of MCP: Each function registers tool handlers

    Admin->>MCP: Invokes a tool (e.g., "enable_health")
    MCP->>HAProxy: Calls EnableHealth(backend, server)
    HAProxy->>HAProxy: ensureRuntime() check
    HAProxy->>HAProxy: toggleCheck("enable", "health", backend, server)
    HAProxy->>MCP: Returns result or error
    MCP->>Admin: Returns tool result

Poem

🐇
In burrows deep, we organize,
With helpers new and tools so wise!
Refactored code, less duplication,
Modular flows for every station.
HAProxy’s tasks, now clear and neat—
A rabbit’s hop makes code complete!
🥕

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (1.64.8)

Error: you are using a configuration file for golangci-lint v2 with golangci-lint v1: please use golangci-lint v2
Failed executing command with error: you are using a configuration file for golangci-lint v2 with golangci-lint v1: please use golangci-lint v2



@tuannvm tuannvm changed the title refactor(tools): introduce reusable call handlers for mcp tools Refactor MCP Tools for HAProxy: Consistent Pattern and Error Handling May 12, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (9)
internal/mcp/register_reload_tool.go (1)

18-19: Prefer slog.InfoContext for consistency

Other tool handlers log action execution with slog.InfoContext, while the registration log uses plain slog.Info. For consistency – and to keep correlation IDs – consider:

- slog.Info("Registering HAProxy reload tool...")
+ slog.InfoContext(context.TODO(), "Registering HAProxy reload tool")

Or simply drop it, as the enclosing caller already announces “Registering HAProxy MCP tools…”.

internal/haproxy/runtime/runtime.go (1)

73-80: parseCSVStats header is ignored – risk of silent format drift

parseCSVStats returns both header and rows.
By discarding the header entirely (_), the function silently accepts any future HAProxy column changes; downstream code will still index by map keys but may miss new columns or mis-interpret renamed ones.

Consider validating the header (e.g. ensure mandatory columns like pxname, svname, status are present) before accepting the data. This provides an early, explicit failure mode when HAProxy versions change.
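A header-validating parseCSVStats along the lines of this suggestion might look like the sketch below. The function name comes from the PR, but the signature and body are assumptions; the real helper in runtime.go may differ.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseCSVStats parses HAProxy "show stat" CSV output into maps keyed by
// column name, failing early if mandatory columns are missing or renamed.
func parseCSVStats(raw string) ([]map[string]string, error) {
	// HAProxy prefixes the header line with "# ".
	raw = strings.TrimPrefix(strings.TrimSpace(raw), "# ")
	r := csv.NewReader(strings.NewReader(raw))
	r.FieldsPerRecord = -1 // tolerate the trailing comma HAProxy emits
	records, err := r.ReadAll()
	if err != nil {
		return nil, fmt.Errorf("parsing stats CSV: %w", err)
	}
	if len(records) == 0 {
		return nil, fmt.Errorf("empty stats output")
	}
	header := records[0]
	// Explicit failure mode when HAProxy column layout changes.
	for _, want := range []string{"pxname", "svname", "status"} {
		found := false
		for _, col := range header {
			if col == want {
				found = true
				break
			}
		}
		if !found {
			return nil, fmt.Errorf("mandatory column %q missing from stats header", want)
		}
	}
	rows := make([]map[string]string, 0, len(records)-1)
	for _, rec := range records[1:] {
		row := map[string]string{}
		for i, v := range rec {
			if i < len(header) {
				row[header[i]] = v
			}
		}
		rows = append(rows, row)
	}
	return rows, nil
}

func main() {
	sample := "# pxname,svname,status,\nweb,BACKEND,UP,\n"
	rows, err := parseCSVStats(sample)
	fmt.Println(len(rows), rows[0]["pxname"], rows[0]["status"], err)
}
```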

internal/mcp/tools.go (1)

11-18: Duplicate boilerplate – register functions could be looped for cleaner scaling

As the number of tool categories grows, adding manual calls bloats RegisterTools. A small slice of registrar functions centralises the flow and keeps the file tiny:

- slog.Info("Registering HAProxy MCP tools...")
- registerStatTools(s, client)
- registerBackendTools(s, client)
- registerServerTools(s, client)
- registerHealthAgentTools(s, client)
- registerReloadTool(s, client)
- slog.Info("All HAProxy MCP tools registered successfully")
+ slog.Info("Registering HAProxy MCP tools…")
+ for _, f := range []func(*server.MCPServer, *haproxy.HAProxyClient){
+     registerStatTools,
+     registerBackendTools,
+     registerServerTools,
+     registerHealthAgentTools,
+     registerReloadTool,
+ } {
+     f(s, client)
+ }
+ slog.Info("All HAProxy MCP tools registered successfully")

This adds no complexity but reduces future merge conflicts when new categories appear.

internal/mcp/register_health_agent_tools.go (1)

17-83: Consider reducing duplication in tool handlers

The four tools (enable_health, disable_health, enable_agent, disable_agent) follow almost identical patterns. Consider refactoring to reduce duplication using a helper function that takes the tool name, description, and client method as parameters.

+ // healthAgentToolConfig holds configuration for a health/agent check tool
+ type healthAgentToolConfig struct {
+     name        string
+     description string
+     action      string
+     clientFunc  func(backend, server string) error
+     successMsg  string
+ }
+ 
+ // registerHealthAgentTool is a helper to register a health/agent check tool
+ func registerHealthAgentTool(s *server.MCPServer, config healthAgentToolConfig) {
+     tool := mcp.NewTool(config.name,
+         mcp.WithDescription(config.description),
+         mcp.WithString("backend", mcp.Required(), mcp.Description("Name of the backend containing the server")),
+         mcp.WithString("server", mcp.Required(), mcp.Description("Name of the server to "+config.action+" for")),
+     )
+     s.AddTool(tool, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
+         backend := getString(req, "backend")
+         serverName := getString(req, "server")
+         slog.InfoContext(ctx, "Executing "+config.name, "backend", backend, "server", serverName)
+         return callExec(ctx, config.action, func() (string, error) {
+             if err := config.clientFunc(backend, serverName); err != nil {
+                 return "", err
+             }
+             return fmt.Sprintf(config.successMsg, backend, serverName), nil
+         })
+     })
+ }
+ 
  func registerHealthAgentTools(s *server.MCPServer, client *haproxy.HAProxyClient) {
      slog.Info("Registering HAProxy health & agent check tools...")
      
+     // Define tool configurations
+     tools := []healthAgentToolConfig{
+         {
+             name:        "enable_health",
+             description: "Enables health checks for a server in a backend",
+             action:      "enable health checks",
+             clientFunc:  client.EnableHealth,
+             successMsg:  "Health checks for server %s/%s enabled successfully",
+         },
+         {
+             name:        "disable_health",
+             description: "Disables health checks for a server in a backend",
+             action:      "disable health checks", 
+             clientFunc:  client.DisableHealth,
+             successMsg:  "Health checks for server %s/%s disabled successfully",
+         },
+         {
+             name:        "enable_agent",
+             description: "Enables agent checks for a server in a backend",
+             action:      "enable agent checks",
+             clientFunc:  client.EnableAgent,
+             successMsg:  "Agent checks for server %s/%s enabled successfully",
+         },
+         {
+             name:        "disable_agent",
+             description: "Disables agent checks for a server in a backend",
+             action:      "disable agent checks",
+             clientFunc:  client.DisableAgent,
+             successMsg:  "Agent checks for server %s/%s disabled successfully",
+         },
+     }
+     
+     // Register all tools based on configurations
+     for _, config := range tools {
+         registerHealthAgentTool(s, config)
+     }
-
-     enableHealth := mcp.NewTool("enable_health",
-         mcp.WithDescription("Enables health checks for a server in a backend"),
-         mcp.WithString("backend", mcp.Required(), mcp.Description("Name of the backend containing the server")),
-         mcp.WithString("server", mcp.Required(), mcp.Description("Name of the server to enable health checks for")),
-     )
-     s.AddTool(enableHealth, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
-         backend := getString(req, "backend")
-         serverName := getString(req, "server")
-         slog.InfoContext(ctx, "Executing enable_health", "backend", backend, "server", serverName)
-         return callExec(ctx, "enable health checks", func() (string, error) {
-             if err := client.EnableHealth(backend, serverName); err != nil {
-                 return "", err
-             }
-             return fmt.Sprintf("Health checks for server %s/%s enabled successfully", backend, serverName), nil
-         })
-     })
-
-     disableHealth := mcp.NewTool("disable_health",
-         mcp.WithDescription("Disables health checks for a server in a backend"),
-         mcp.WithString("backend", mcp.Required(), mcp.Description("Name of the backend containing the server")),
-         mcp.WithString("server", mcp.Required(), mcp.Description("Name of the server to disable health checks for")),
-     )
-     s.AddTool(disableHealth, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
-         backend := getString(req, "backend")
-         serverName := getString(req, "server")
-         slog.InfoContext(ctx, "Executing disable_health", "backend", backend, "server", serverName)
-         return callExec(ctx, "disable health checks", func() (string, error) {
-             if err := client.DisableHealth(backend, serverName); err != nil {
-                 return "", err
-             }
-             return fmt.Sprintf("Health checks for server %s/%s disabled successfully", backend, serverName), nil
-         })
-     })
-
-     enableAgent := mcp.NewTool("enable_agent",
-         mcp.WithDescription("Enables agent checks for a server in a backend"),
-         mcp.WithString("backend", mcp.Required(), mcp.Description("Name of the backend containing the server")),
-         mcp.WithString("server", mcp.Required(), mcp.Description("Name of the server to enable agent checks for")),
-     )
-     s.AddTool(enableAgent, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
-         backend := getString(req, "backend")
-         serverName := getString(req, "server")
-         slog.InfoContext(ctx, "Executing enable_agent", "backend", backend, "server", serverName)
-         return callExec(ctx, "enable agent checks", func() (string, error) {
-             if err := client.EnableAgent(backend, serverName); err != nil {
-                 return "", err
-             }
-             return fmt.Sprintf("Agent checks for server %s/%s enabled successfully", backend, serverName), nil
-         })
-     })
-
-     disableAgent := mcp.NewTool("disable_agent",
-         mcp.WithDescription("Disables agent checks for a server in a backend"),
-         mcp.WithString("backend", mcp.Required(), mcp.Description("Name of the backend containing the server")),
-         mcp.WithString("server", mcp.Required(), mcp.Description("Name of the server to disable agent checks for")),
-     )
-     s.AddTool(disableAgent, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
-         backend := getString(req, "backend")
-         serverName := getString(req, "server")
-         slog.InfoContext(ctx, "Executing disable_agent", "backend", backend, "server", serverName)
-         return callExec(ctx, "disable agent checks", func() (string, error) {
-             if err := client.DisableAgent(backend, serverName); err != nil {
-                 return "", err
-             }
-             return fmt.Sprintf("Agent checks for server %s/%s disabled successfully", backend, serverName), nil
-         })
-     })
internal/mcp/register_backend_tools.go (1)

40-50: Add validation for the optional backend parameter

The show_servers_state tool has an optional backend parameter, but there's no validation for its correctness if provided.

 showServersState := mcp.NewTool("show_servers_state",
     mcp.WithDescription("Shows the state of servers including sessions and weight"),
     mcp.WithString("backend", mcp.Description("Optional backend name to filter servers")),
 )
 s.AddTool(showServersState, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
     backend := getString(req, "backend")
+    // Log differently based on whether we're filtering by backend or showing all
+    if backend != "" {
+        slog.InfoContext(ctx, "Executing show_servers_state with backend filter", "backend", backend)
+    } else {
+        slog.InfoContext(ctx, "Executing show_servers_state for all backends")
+    }
-    slog.InfoContext(ctx, "Executing show_servers_state", "backend", backend)
     return callJSON(ctx, "show servers state", "servers_state", func() (interface{}, error) {
         return client.ShowServersState(backend)
     })
 })
internal/mcp/register_server_tools.go (2)

54-67: Validate optional parameters when non-zero

The add_server tool has optional port and weight parameters, but there's no validation to ensure they're valid when provided.

 s.AddTool(addServer, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
     backend := getString(req, "backend")
     name := getString(req, "name")
     addr := getString(req, "addr")
     port := getInt(req, "port")
     weight := getInt(req, "weight")
+    
+    // Log port and weight only when they're provided
+    logParams := []any{"backend", backend, "name", name, "addr", addr}
+    if port > 0 {
+        logParams = append(logParams, "port", port)
+    }
+    if weight > 0 {
+        logParams = append(logParams, "weight", weight)
+    }
+    slog.InfoContext(ctx, "Executing add_server", logParams...)
-    slog.InfoContext(ctx, "Executing add_server", "backend", backend, "name", name, "addr", addr, "port", port, "weight", weight)
     
     return callExec(ctx, "add server", func() (string, error) {
         if err := client.AddServer(backend, name, addr, port, weight); err != nil {
             return "", err
         }
         return fmt.Sprintf("Server %s added successfully to backend %s", name, backend), nil
     })
 })

86-121: Code duplication between enable_server and disable_server

Similar to the health and agent check tools, the enable_server and disable_server tools have very similar code. Consider refactoring to reduce duplication.

+// serverToggleConfig holds configuration for a server enable/disable tool
+type serverToggleConfig struct {
+    name        string
+    description string
+    action      string
+    clientFunc  func(backend, server string) error
+}
+
+// registerServerToggleTool is a helper to register a server enable/disable tool
+func registerServerToggleTool(s *server.MCPServer, config serverToggleConfig) {
+    tool := mcp.NewTool(config.name,
+        mcp.WithDescription(config.description),
+        mcp.WithString("backend", mcp.Required(), mcp.Description("Name of the backend containing the server")),
+        mcp.WithString("server", mcp.Required(), mcp.Description("Name of the server to "+config.action)),
+    )
+    s.AddTool(tool, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
+        backend := getString(req, "backend")
+        serverName := getString(req, "server")
+        slog.InfoContext(ctx, "Executing "+config.name, "backend", backend, "server", serverName)
+        return callExec(ctx, config.action+" server", func() (string, error) {
+            if err := config.clientFunc(backend, serverName); err != nil {
+                return "", err
+            }
+            // The past participle of enable is "enabled", disable is "disabled"
+            actionPastParticiple := config.action + "d"
+            return fmt.Sprintf("Server %s/%s %s successfully", backend, serverName, actionPastParticiple), nil
+        })
+    })
+}

 // In registerServerTools:
-// enable_server tool
-enableServer := mcp.NewTool("enable_server",
-    mcp.WithDescription("Enables a server in a backend"),
-    mcp.WithString("backend", mcp.Required(), mcp.Description("Name of the backend containing the server")),
-    mcp.WithString("server", mcp.Required(), mcp.Description("Name of the server to enable")),
-)
-s.AddTool(enableServer, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
-    backend := getString(req, "backend")
-    serverName := getString(req, "server")
-    slog.InfoContext(ctx, "Executing enable_server", "backend", backend, "server", serverName)
-    return callExec(ctx, "enable server", func() (string, error) {
-        if err := client.EnableServer(backend, serverName); err != nil {
-            return "", err
-        }
-        return fmt.Sprintf("Server %s/%s enabled successfully", backend, serverName), nil
-    })
-})
-
-// disable_server tool
-disableServer := mcp.NewTool("disable_server",
-    mcp.WithDescription("Disables a server in a backend"),
-    mcp.WithString("backend", mcp.Required(), mcp.Description("Name of the backend containing the server")),
-    mcp.WithString("server", mcp.Required(), mcp.Description("Name of the server to disable")),
-)
-s.AddTool(disableServer, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
-    backend := getString(req, "backend")
-    serverName := getString(req, "server")
-    slog.InfoContext(ctx, "Executing disable_server", "backend", backend, "server", serverName)
-    return callExec(ctx, "disable server", func() (string, error) {
-        if err := client.DisableServer(backend, serverName); err != nil {
-            return "", err
-        }
-        return fmt.Sprintf("Server %s/%s disabled successfully", backend, serverName), nil
-    })
-})
+// Register server toggle tools
+registerServerToggleTool(s, serverToggleConfig{
+    name:        "enable_server",
+    description: "Enables a server in a backend",
+    action:      "enable",
+    clientFunc:  client.EnableServer,
+})
+
+registerServerToggleTool(s, serverToggleConfig{
+    name:        "disable_server",
+    description: "Disables a server in a backend",
+    action:      "disable",
+    clientFunc:  client.DisableServer,
+})
internal/haproxy/client.go (2)

196-210: Consider refactoring SetWeight to use the ensureRuntime helper.

While most methods now use the centralized ensureRuntime helper for runtime client validation, this method still uses direct checking. For consistency, consider refactoring.

func (c *HAProxyClient) SetWeight(backend, server string, weight int) (string, error) {
-	if c.RuntimeClient == nil {
-		return "", fmt.Errorf("runtime client is not initialized")
-	}
+	if err := c.ensureRuntime(); err != nil {
+		return "", err
+	}

	// Directly execute the command since it might be different across versions
	cmd := fmt.Sprintf("set weight %s/%s %d", backend, server, weight)
	_, err := c.RuntimeClient.ExecuteRuntimeCommand(cmd)
	if err != nil {
		return "", err
	}

	return fmt.Sprintf("Weight for %s/%s set to %d", backend, server, weight), nil
}

339-351: Consider refactoring remaining methods to use the ensureRuntime helper.

Several methods (DumpStatsFile, DebugCounters, ClearCountersAll, AddServer, DelServer, ReloadHAProxy) still use direct checking for runtime client initialization. Consider refactoring them to use the centralized ensureRuntime helper for consistency with the rest of the codebase.

For example, for the DumpStatsFile method:

func (c *HAProxyClient) DumpStatsFile(filepath string) (string, error) {
-	if c.RuntimeClient == nil {
-		return "", fmt.Errorf("runtime client is not initialized")
-	}
+	if err := c.ensureRuntime(); err != nil {
+		return "", err
+	}

	cmd := fmt.Sprintf("show stat > %s", filepath)
	_, err := c.RuntimeClient.ExecuteRuntimeCommand(cmd)
	if err != nil {
		return "", err
	}

	return filepath, nil
}

Apply similar refactoring to the other mentioned methods.

Also applies to: 354-385, 388-403, 406-414, 417-427

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between debdc99 and aef2bcf.

📒 Files selected for processing (9)
  • internal/haproxy/client.go (7 hunks)
  • internal/haproxy/runtime/runtime.go (1 hunks)
  • internal/mcp/register_backend_tools.go (1 hunks)
  • internal/mcp/register_health_agent_tools.go (1 hunks)
  • internal/mcp/register_reload_tool.go (1 hunks)
  • internal/mcp/register_server_tools.go (1 hunks)
  • internal/mcp/register_stat_tools.go (1 hunks)
  • internal/mcp/tools.go (1 hunks)
  • internal/mcp/tools_helpers.go (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
internal/mcp/tools.go (1)
internal/haproxy/client.go (1)
  • HAProxyClient (14-18)
🔇 Additional comments (30)
internal/mcp/register_health_agent_tools.go (2)

14-16: Good modularization of tool registration

The function is well-structured and follows a clear single responsibility principle by focusing only on health and agent check tools.


22-32: Consistent error handling and detailed logging

Good use of structured logging with context and consistent error handling pattern. The success message includes both backend and server names which improves user experience.

internal/mcp/tools_helpers.go (2)

12-25: Well-structured JSON response wrapper

The callJSON function provides a clean abstraction for executing client calls and formatting JSON responses. Good error handling with context logging.


27-35: Clean execution result wrapper

The callExec function provides a consistent pattern for executing client actions and returning text responses.

internal/mcp/register_backend_tools.go (3)

12-14: Clean module design with clear logging

The function has a well-defined scope and follows good practices with descriptive logging.


19-24: Concise tool implementation

The list_backends tool implementation is clean and follows a consistent pattern with context logging and error handling.


31-37: Good parameter extraction and structured logging

The get_backend tool correctly extracts the name parameter and includes it in the context log.

internal/mcp/register_server_tools.go (1)

14-16: Well-structured module with clear logging

Good practices with initialization logging for server management tools.

internal/mcp/register_stat_tools.go (8)

1-13: Well-structured imports and package declaration.

Clean and organized imports with appropriate grouping of standard library and external packages.


14-16: Good function documentation and logging.

The function has appropriate logging to indicate when the registration process starts, which is helpful for debugging and operational visibility.


17-28: Well-implemented show_stat tool registration.

The tool registration follows a clean pattern with:

  • Clear description
  • Proper parameter definition with description
  • Structured logging with context
  • Use of helper functions for consistent response formatting

30-39: Well-implemented show_info tool registration.

Similar to other tools, this maintains the consistent pattern for registration and execution.


41-50: Well-implemented debug_counters tool registration.

Follows the same consistent pattern as other tools.


52-64: Well-implemented clear_counters_all tool registration.

This tool uses callExec appropriately as it performs an action rather than returning data. The success message provides clear feedback.


66-81: Well-implemented dump_stats_file tool registration.

The tool properly:

  • Requires a filepath parameter
  • Logs the parameter value
  • Provides a success message that includes the filepath

83-84: Good function completion logging.

Proper closing log message to indicate when registration is complete.

internal/haproxy/client.go (14)

20-26: Good addition of helper method for runtime client validation.

The ensureRuntime method centralizes the runtime client initialization check, improving consistency and reducing code duplication throughout the codebase.


32-40: Well-implemented toggleCheck helper method.

This helper method effectively encapsulates the common logic for toggling health and agent checks, reducing code duplication. It also leverages the ensureRuntime helper for consistent error checking.
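The relationship between the two helpers can be sketched as below. Field and method names (`RuntimeClient`, `SetServerCheckState`) are assumptions based on the review, not the client's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// RuntimeClient stands in for the real runtime-socket client; the method
// name below is illustrative only.
type RuntimeClient struct{}

func (r *RuntimeClient) SetServerCheckState(backend, server, check, state string) error {
	fmt.Printf("set %s check on %s/%s -> %s\n", check, backend, server, state)
	return nil
}

type HAProxyClient struct {
	runtime *RuntimeClient // nil until the runtime socket is connected
}

var errNoRuntime = errors.New("runtime client not initialized")

// ensureRuntime centralizes the nil check that every runtime operation needs.
func (c *HAProxyClient) ensureRuntime() error {
	if c.runtime == nil {
		return errNoRuntime
	}
	return nil
}

// toggleCheck funnels the four enable/disable health/agent methods
// through one validated path.
func (c *HAProxyClient) toggleCheck(backend, server, check, state string) error {
	if err := c.ensureRuntime(); err != nil {
		return err
	}
	return c.runtime.SetServerCheckState(backend, server, check, state)
}

// EnableHealth collapses to a one-line wrapper once toggleCheck exists.
func (c *HAProxyClient) EnableHealth(backend, server string) error {
	return c.toggleCheck(backend, server, "health", "enable")
}

func main() {
	c := &HAProxyClient{}
	fmt.Println(c.EnableHealth("web", "s1")) // runtime is nil -> error
	c.runtime = &RuntimeClient{}
	fmt.Println(c.EnableHealth("web", "s1")) // succeeds, prints <nil>
}
```

Because `toggleCheck` already calls `ensureRuntime`, the four public check methods cannot forget the validation step.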


114-119: Improved consistency in GetRuntimeInfo method.

Refactored to use the centralized ensureRuntime helper for validation before executing the operation, making the code more consistent with other methods.


130-135: Improved consistency in GetBackends method.

Refactored to use the centralized ensureRuntime helper for validation, maintaining consistent error handling pattern.


138-141: Improved consistency in GetBackendDetails method.

Refactored to use the centralized ensureRuntime helper for validation.


158-163: Improved consistency in ListServers method.

Refactored to use the centralized ensureRuntime helper for validation.


166-169: Improved consistency in GetServerDetails method.

Refactored to use the centralized ensureRuntime helper for validation.


181-186: Improved consistency in EnableServer method.

Refactored to use the centralized ensureRuntime helper for validation.


189-194: Improved consistency in DisableServer method.

Refactored to use the centralized ensureRuntime helper for validation.


213-218: Improved consistency in SetServerMaxconn method.

Refactored to use the centralized ensureRuntime helper for validation.


221-223: Improved EnableHealth method with toggleCheck.

Refactored to use the centralized toggleCheck helper method, which reduces code duplication and improves consistency across health and agent check methods.


226-228: Improved DisableHealth method with toggleCheck.

Refactored to use the centralized toggleCheck helper method, improving consistency.


231-233: Improved EnableAgent method with toggleCheck.

Refactored to use the centralized toggleCheck helper method, improving consistency.


236-238: Improved DisableAgent method with toggleCheck.

Refactored to use the centralized toggleCheck helper method, improving consistency.

Comment on lines +15 to +26
reloadTool := mcp.NewTool("reload_haproxy",
mcp.WithDescription("Triggers a reload of the HAProxy configuration"),
)
s.AddTool(reloadTool, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
slog.InfoContext(ctx, "Executing reload_haproxy")
return callExec(ctx, "reload haproxy", func() (string, error) {
if err := client.ReloadHAProxy(); err != nil {
return "", err
}
return "HAProxy configuration reloaded successfully", nil
})
})

🛠️ Refactor suggestion

Return value from HAProxyClient.ReloadHAProxy is discarded – consider surfacing HAProxy’s own feedback

ReloadHAProxy currently returns only an error, so the handler falls back to a hard-coded success string.
If the underlying runtime command prints warnings (e.g. “nothing to reload”, “configuration unchanged”), they are lost. Either:

  1. Extend ReloadHAProxy() to return (string, error) so the actual CLI output can be propagated, or
  2. Run the raw runtime command directly in the closure and forward its output.

Propagating HAProxy’s own message yields better observability when operators trigger a reload from MCP.

- return callExec(ctx, "reload haproxy", func() (string, error) {
-     if err := client.ReloadHAProxy(); err != nil {
-         return "", err
-     }
-     return "HAProxy configuration reloaded successfully", nil
- })
+ return callExec(ctx, "reload haproxy", func() (string, error) {
+     out, err := client.ExecuteRuntimeCommand("reload")
+     if err != nil {
+         return "", err
+     }
+     // Bubble raw output back to the caller for maximum transparency
+     return strings.TrimSpace(out), nil
+ })

Note: requires importing strings and making ExecuteRuntimeCommand public (or adding a wrapper).

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +37 to +43
// getString extracts a string argument from the request
func getString(req mcp.CallToolRequest, key string) string {
if v, ok := req.Params.Arguments[key].(string); ok {
return v
}
return ""
}

🛠️ Refactor suggestion

Add validation for missing required parameters

The getString function returns an empty string when the parameter is missing or not a string, which could lead to silent failures. For required parameters, this should return an error rather than an empty string.

-// getString extracts a string argument from the request
-func getString(req mcp.CallToolRequest, key string) string {
+// getString extracts a string argument from the request
+func getString(req mcp.CallToolRequest, key string) string {
     if v, ok := req.Params.Arguments[key].(string); ok {
         return v
     }
     return ""
 }
+
+// getRequiredString extracts a required string argument and returns an error if missing
+func getRequiredString(req mcp.CallToolRequest, key string) (string, error) {
+    val, ok := req.Params.Arguments[key]
+    if !ok {
+        return "", fmt.Errorf("missing required parameter: %s", key)
+    }
+    
+    strVal, ok := val.(string)
+    if !ok {
+        return "", fmt.Errorf("parameter %s must be a string", key)
+    }
+    
+    if strVal == "" {
+        return "", fmt.Errorf("parameter %s cannot be empty", key)
+    }
+    
+    return strVal, nil
+}

Comment on lines +45 to +51
// getInt extracts a numeric argument (as int) from the request
func getInt(req mcp.CallToolRequest, key string) int {
if f, ok := req.Params.Arguments[key].(float64); ok {
return int(f)
}
return 0
}

🛠️ Refactor suggestion

Improve handling of numeric parameters

Similar to getString, the getInt function defaults to 0 when the parameter is missing or not a number. This could cause issues if 0 is a valid value for some parameters.

-// getInt extracts a numeric argument (as int) from the request
-func getInt(req mcp.CallToolRequest, key string) int {
+// getInt extracts a numeric argument (as int) from the request
+func getInt(req mcp.CallToolRequest, key string) int {
     if f, ok := req.Params.Arguments[key].(float64); ok {
         return int(f)
     }
     return 0
 }
+
+// getRequiredInt extracts a required numeric argument and returns an error if missing
+func getRequiredInt(req mcp.CallToolRequest, key string) (int, error) {
+    val, ok := req.Params.Arguments[key]
+    if !ok {
+        return 0, fmt.Errorf("missing required parameter: %s", key)
+    }
+    
+    floatVal, ok := val.(float64)
+    if !ok {
+        return 0, fmt.Errorf("parameter %s must be a number", key)
+    }
+    
+    return int(floatVal), nil
+}

Comment on lines +130 to +138
s.AddTool(setWeight, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
backend := getString(req, "backend")
serverName := getString(req, "server")
weight := getInt(req, "weight")
slog.InfoContext(ctx, "Executing set_weight", "backend", backend, "server", serverName, "weight", weight)
return callExec(ctx, "set weight", func() (string, error) {
return client.SetWeight(backend, serverName, weight)
})
})

💡 Verification agent

🧩 Analysis chain

Handle success message consistently

The set_weight tool handles the success message differently from other tools, where the client method returns the message directly instead of constructing it in the handler.

 s.AddTool(setWeight, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
     backend := getString(req, "backend")
     serverName := getString(req, "server")
     weight := getInt(req, "weight")
     slog.InfoContext(ctx, "Executing set_weight", "backend", backend, "server", serverName, "weight", weight)
     return callExec(ctx, "set weight", func() (string, error) {
-        return client.SetWeight(backend, serverName, weight)
+        if err := client.SetWeight(backend, serverName, weight); err != nil {
+            return "", err
+        }
+        return fmt.Sprintf("Weight for server %s/%s set to %d", backend, serverName, weight), nil
     })
 })

🏁 Script executed:

#!/bin/bash
# Checking if SetWeight returns a message or just an error
# We need to verify the method signature to confirm the refactoring

rg -A 5 "func.*SetWeight" --type go

Length of output: 959


🏁 Script executed:

#!/bin/bash
sed -n '1,200p' internal/mcp/register_server_tools.go

Length of output: 8275


Handle success message consistently for set_weight

The set_weight tool currently returns the raw output from client.SetWeight. To align with the other tools (which always format their own success messages), update the handler in internal/mcp/register_server_tools.go as follows:

 s.AddTool(setWeight, func(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
     backend := getString(req, "backend")
     serverName := getString(req, "server")
     weight := getInt(req, "weight")
     slog.InfoContext(ctx, "Executing set_weight", "backend", backend, "server", serverName, "weight", weight)
     return callExec(ctx, "set weight", func() (string, error) {
-        return client.SetWeight(backend, serverName, weight)
+        if err := client.SetWeight(backend, serverName, weight); err != nil {
+            return "", err
+        }
+        return fmt.Sprintf("Weight for server %s/%s set to %d", backend, serverName, weight), nil
     })
 })

This ensures consistent formatting of success messages across all server-management tools.

@tuannvm tuannvm merged commit 8ad4b20 into main May 12, 2025
7 checks passed

🎉 This PR is included in version 1.0.12 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

@tuannvm tuannvm added this to MCP May 28, 2025
@tuannvm tuannvm moved this to Done in MCP May 28, 2025