A standalone agent runner that executes tasks using MCP (Model Context Protocol) tools via the Anthropic Claude, AWS Bedrock, and OpenAI APIs. It enables AI agents to run autonomously in cloud environments and interact with various systems securely.
```bash
cd dashboard
npm i
npm run dev
```
Dashboard URL: http://localhost:3000
API Documentation: http://localhost:3000/api-docs
https://github.com/user-attachments/assets/c98be6d2-0096-40f2-bd78-d3fb256fec83
```bash
uv sync
```
Here is an example configuration file:
```json
{
  "task": "Find all image files in the current directory and tell me their sizes",
  "model": "claude-3-7-sonnet-20250219",
  "system_prompt": "You are a helpful assistant that completes tasks using available tools.",
  "verbose": true,
  "max_iterations": 10
}
```
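A task file like the one above can be loaded and sanity-checked before handing it to the worker. Below is a minimal sketch; the helper name, the set of required fields, and the defaults are illustrative assumptions, not the project's actual API:

```python
import json

# Fields the example always includes vs. ones we treat as optional (assumed split).
REQUIRED_FIELDS = {"task", "model"}
DEFAULTS = {
    "system_prompt": "You are a helpful assistant that completes tasks using available tools.",
    "verbose": False,
    "max_iterations": 10,
}

def load_task_config(path):
    """Load a task configuration file and fill in defaults for optional fields."""
    with open(path) as f:
        config = json.load(f)
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        raise ValueError(f"task config is missing required fields: {sorted(missing)}")
    return {**DEFAULTS, **config}
```

Failing fast on a missing `task` or `model` gives a clearer error than letting the agent loop start with an incomplete configuration.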
```bash
uv run agentic_mcp_client/agent_worker/run.py
```
The project requires a `config.json` file in the root directory to define the inference server settings and available MCP tools. Here's an example configuration:
```json
{
  "inference_server": {
    "base_url": "https://api.anthropic.com/v1/",
    "api_key": "YOUR_API_KEY_HERE",
    "use_bedrock": true,
    "aws_region": "us-east-1",
    "aws_access_key_id": "YOUR_AWS_ACCESS_KEY",
    "aws_secret_access_key": "YOUR_AWS_SECRET_KEY"
  },
  "mcp_servers": {
    "mcp-remote-macos-use": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "-e",
        "MACOS_USERNAME=your_username",
        "-e",
        "MACOS_PASSWORD=your_password",
        "-e",
        "MACOS_HOST=your_host_ip",
        "--rm",
        "buryhuang/mcp-remote-macos-use:latest"
      ]
    },
    "mcp-my-apple-remembers": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "-e",
        "MACOS_USERNAME=your_username",
        "-e",
        "MACOS_PASSWORD=your_password",
        "-e",
        "MACOS_HOST=your_host_ip",
        "--rm",
        "buryhuang/mcp-my-apple-remembers:latest"
      ]
    }
  }
}
```
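Each entry under `mcp_servers` is essentially an argv specification: `command` names the executable and `args` are its arguments. A minimal sketch of how such an entry could be flattened into a launchable command line (the helper name is illustrative, not the project's actual API):

```python
def build_launch_command(server: dict) -> list:
    """Flatten an mcp_servers entry into the argv list used to spawn the tool,
    e.g. via subprocess.Popen with stdin/stdout pipes for a stdio transport."""
    return [server["command"], *server["args"]]

# Abbreviated version of the example entry above
entry = {
    "command": "docker",
    "args": ["run", "-i", "--rm", "buryhuang/mcp-remote-macos-use:latest"],
}
argv = build_launch_command(entry)
```

The `-i` flag matters for Docker-based tools: it keeps stdin open so the runner can exchange MCP messages with the container over standard streams.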
The `inference_server` section configures the connection to your language model provider:

- `base_url`: The API endpoint for your chosen LLM provider
- `api_key`: Your authentication key for the LLM service
- `use_bedrock`: Set to `true` to use Amazon Bedrock for model inference

The `mcp_servers` section defines the available MCP tools. Each tool entry has:

- `command`: The command to execute (typically `docker` for containerized tools)
- `args`: Configuration parameters for the tool

This example shows MCP tools for remotely controlling a macOS system through Docker containers.
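The `use_bedrock` flag implies two different credential sets: AWS credentials for Bedrock, or `api_key` for a direct API. A hedged sketch of how a client might validate this, assuming the exact field names from the example (the actual runner's internals may differ):

```python
def select_provider(inference_server: dict) -> str:
    """Pick the inference backend implied by the config section and
    check that the matching credentials are present."""
    if inference_server.get("use_bedrock"):
        # Bedrock authenticates with AWS credentials, not the api_key field
        required = ["aws_region", "aws_access_key_id", "aws_secret_access_key"]
        missing = [k for k in required if not inference_server.get(k)]
        if missing:
            raise ValueError(f"use_bedrock is true but missing: {missing}")
        return "bedrock"
    if not inference_server.get("api_key"):
        raise ValueError("api_key is required when not using Bedrock")
    return "direct"
```

Validating credentials up front surfaces misconfiguration before the first model call rather than as an opaque HTTP error mid-task.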
The Model Context Protocol provides a standardized way for applications to:

- Share contextual information with language models
- Expose tools and capabilities to AI systems
- Build composable integrations and workflows
The protocol uses JSON-RPC 2.0 messages to establish communication between hosts (LLM applications), clients (connectors within applications), and servers (services providing context and capabilities).
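For concreteness, here is what one such JSON-RPC 2.0 message looks like on the wire: a request from a client asking an MCP server to invoke a tool via the protocol's `tools/call` method. The tool name and arguments are hypothetical:

```python
import json

# A JSON-RPC 2.0 request invoking an MCP tool via the "tools/call" method.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_files",            # hypothetical tool name
        "arguments": {"path": "."},      # tool-specific arguments
    },
}
wire = json.dumps(request)  # serialized form sent to the server
```

The server replies with a response object carrying the same `id`, so the client can match results to in-flight requests.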
Our agent worker implements this workflow:
```mermaid
sequenceDiagram
    participant User
    participant AgentWorker
    participant LLM as Language Model
    participant MCP as MCP Tools

    User->>AgentWorker: Task + Configuration
    AgentWorker->>MCP: Initialize Tools
    AgentWorker->>LLM: Send Task
    loop Until completion
        LLM->>AgentWorker: Request Tool Use
        AgentWorker->>MCP: Execute Tool
        MCP->>AgentWorker: Tool Result
        AgentWorker->>LLM: Send Tool Result
        LLM->>AgentWorker: Response
    end
    AgentWorker->>User: Final Result
```
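The loop in the diagram can be sketched in a few lines. This is a simplified model, not the worker's actual implementation: `llm` stands in for a model call that returns either a tool request or a final answer, and `tools` maps tool names to callables:

```python
def run_agent(task, llm, tools, max_iterations=10):
    """Drive the tool-use loop from the diagram: send the task, execute any
    tool the model requests, feed the result back, repeat until the model
    answers without requesting a tool."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_iterations):
        reply = llm(messages)  # {"tool": name_or_None, "args": dict, "text": str}
        messages.append({"role": "assistant", "content": reply["text"]})
        if reply["tool"] is None:
            return reply["text"]            # no tool requested: task complete
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("max_iterations reached without completion")
```

This also shows why the `max_iterations` setting in the task configuration matters: it bounds the loop when a model keeps requesting tools without converging.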
Contributions to Agentic MCP Client are welcome!
This project was inspired by and builds upon the work of excellent open-source projects in the MCP ecosystem.
We are grateful to the contributors of these projects for their pioneering work in the MCP space, which has helped make autonomous agent development more accessible and powerful.
Agentic MCP Client is licensed under the Apache 2.0 License. See the LICENSE file for more information.
peakmojo/agentic-mcp-client
March 30, 2025
June 4, 2025
Python