<p align="center">
<img src="https://raw.githubusercontent.com/cagostino/npcpy/main/npcpy.png" alt="npcpy logo of a solarpunk sign">
</p>
# npcpy
Welcome to `npcpy`, the Python library for the NPC Toolkit and the home of the core command-line programs that make up the NPC Shell (`npcsh`).
`npcpy` is an AI framework designed for AI response handling and agent orchestration. It facilitates the seamless integration of AI models into your daily workflow by providing diverse interfaces to use, test, and explore the capabilities of AI models, agents, and agent systems.
<p align="center">
<a href= "https://github.com/cagostino/npcpy/blob/main/docs/npcpy.md">
<img src="https://raw.githubusercontent.com/cagostino/npcpy/main/npcpy/npc-python.png" alt="npc-python logo" width=250></a>
</p>
## Key Features
* **AI Response Handling:** Streamlined processing and management of AI model outputs.
* **Agent Orchestration:** Tools for creating and managing AI agents and multi-agent systems.
* **Flexible Integration:** Designed to easily integrate with various AI models and providers.
* **Command-Line Interface (CLI):** Includes the NPC Shell (`npcsh`) for interacting with NPCs and LLMs via the command line.
* **Model Context Protocol (MCP):** Facilitates the transfer of context between models, agents, and systems.
* **LiteLLM Integration:** Supports a wide range of LLM providers, including Ollama, LMStudio, OpenAI, Anthropic, Gemini, and Deepseek.
* **Data Handling:** Supports inclusion of images, PDFs, and CSVs in LLM response generation.
* **Image and Video Generation:** Capabilities for creating images and videos using Hugging Face's Diffusers library, OpenAI, or Gemini.
* **Streaming Responses:** Supports streaming responses from LLMs, allowing for real-time processing.
* **Structured Outputs:** Can return structured outputs in JSON format or according to a Pydantic schema.
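To make the structured-output bullet concrete: the idea is that a model's reply can be requested as JSON and validated before use. Below is a minimal stdlib-only sketch of that validation step; it illustrates the pattern, not the `npcpy` API itself (the function name and schema check are illustrative assumptions).

```python
import json

def parse_structured(reply: str, required: set) -> dict:
    """Parse a model reply as JSON and check that required keys are present."""
    data = json.loads(reply)
    missing = required - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

# A model asked to answer as {"city": ..., "country": ...} might reply:
reply = '{"city": "Bogota", "country": "Colombia"}'
print(parse_structured(reply, {"city", "country"}))
```

With a Pydantic schema, the same check becomes a `model_validate_json` call; the principle is identical.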
## Getting Started
### Installation
`npcpy` is available on PyPI and can be installed using pip. Before installing, ensure you have the necessary dependencies installed on your system.
```bash
pip install npcpy
```

For additional features, you can install with extras:

```bash
pip install 'npcpy[lite]'   # API libraries
pip install 'npcpy[local]'  # local model support: Ollama, Diffusers, Transformers, CUDA, etc.
pip install 'npcpy[yap]'    # TTS/STT support
pip install 'npcpy[all]'    # all features
```

<details>
<summary>Linux</summary>

```bash
# These are for audio primarily, skip if you don't need TTS
sudo apt-get install espeak
sudo apt-get install portaudio19-dev python3-pyaudio
sudo apt-get install alsa-base alsa-utils
sudo apt-get install libcairo2-dev
sudo apt-get install libgirepository1.0-dev
sudo apt-get install ffmpeg

# For triggers
sudo apt install inotify-tools

# If you don't have Ollama installed:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
ollama pull llava:7b
ollama pull nomic-embed-text

pip install npcpy
# if you want to install with the API libraries
pip install 'npcpy[lite]'
# if you want the full local package set up (ollama, diffusers, transformers, cuda etc.)
pip install 'npcpy[local]'
# if you want to use tts/stt
pip install 'npcpy[yap]'
# if you want everything:
pip install 'npcpy[all]'
```

</details>
<details>
<summary>Mac</summary>
```bash
# Mainly for audio
brew install portaudio
brew install ffmpeg
brew install pygobject3

# For triggers
brew install inotify-tools

brew install ollama
brew services start ollama
ollama pull llama3.2
ollama pull llava:7b
ollama pull nomic-embed-text

pip install npcpy
# if you want to install with the API libraries (quote the extras under zsh)
pip install 'npcpy[lite]'
# if you want the full local package set up (ollama, diffusers, transformers, cuda etc.)
pip install 'npcpy[local]'
# if you want to use tts/stt
pip install 'npcpy[yap]'
# if you want everything:
pip install 'npcpy[all]'
```
</details>
<details>
<summary>Windows</summary>
Download and install the Ollama executable. Then, in PowerShell, download and install FFmpeg.

```powershell
ollama pull llama3.2
ollama pull llava:7b
ollama pull nomic-embed-text

pip install npcpy
# if you want to install with the API libraries
pip install 'npcpy[lite]'
# if you want the full local package set up (ollama, diffusers, transformers, cuda etc.)
pip install 'npcpy[local]'
# if you want to use tts/stt
pip install 'npcpy[yap]'
# if you want everything:
pip install 'npcpy[all]'
```
</details>
<details>
<summary>Fedora (Under Construction)</summary>
```bash
sudo dnf install python3-devel    # fixes hnswlib issues with Chroma DB
xhost +                           # PyAutoGUI
sudo dnf install python3-tkinter  # PyAutoGUI
```
</details>
After installation, initialize the NPC Shell:

```bash
npcsh
```

This will generate a `.npcshrc` file in your home directory containing your `npcsh` settings. Example:

```bash
# NPCSH Configuration File
export NPCSH_INITIALIZED=1
export NPCSH_CHAT_PROVIDER='ollama'
export NPCSH_CHAT_MODEL='llama3.2'
export NPCSH_DB_PATH='~/npcsh_history.db'
```
Add the following to your `.bashrc` or `.zshrc` to source the configuration:

```bash
# Source NPCSH configuration
if [ -f ~/.npcshrc ]; then
    . ~/.npcshrc
fi
```
To use tools requiring API keys, create a `.env` file in your project directory or add the keys to your `~/.npcshrc`. Example `.env` file:

```bash
export OPENAI_API_KEY="your_openai_key"
export ANTHROPIC_API_KEY="your_anthropic_key"
export DEEPSEEK_API_KEY='your_deepseek_key'
export GEMINI_API_KEY='your_gemini_key'
export PERPLEXITY_API_KEY='your_perplexity_key'
```
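If you prefer to load the `.env` keys from Python rather than sourcing the file in your shell, a small stdlib loader handles the `export KEY=value` format shown above (this helper is illustrative, not part of `npcpy`; tools like `python-dotenv` do the same job):

```python
import os

def load_env(path: str) -> None:
    """Load KEY=value lines (optionally prefixed with 'export ') into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue  # skip blanks and comments
            if line.startswith('export '):
                line = line[len('export '):]
            key, _, value = line.partition('=')
            # strip surrounding quotes, if any
            os.environ[key.strip()] = value.strip().strip('"').strip("'")
```

Call `load_env('.env')` before creating any NPCs so the provider libraries can pick up the keys.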
`npcsh` creates a directory at `~/.npcsh/` containing global NPCs, tools (jinxs), and workflow pipelines. For project-specific configurations, create an `npc_team` directory in your project:

```bash
./npc_team/            # Project-specific NPCs
├── jinxs/             # Project jinxs
│   └── example.jinx
├── assembly_lines/    # Project workflows
│   └── example.pipe
├── models/            # Project models
│   └── example.model
├── example1.npc       # Example NPC
├── example2.npc       # Example NPC
└── team.ctx           # Team context
```
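The `.npc` files above are plain-text agent definitions. A hypothetical `example1.npc` might mirror the constructor arguments used in the Python examples below; the field names here are assumptions, so check the npcpy documentation for the exact schema:

```yaml
# example1.npc -- field names assumed from the NPC() constructor arguments
name: example1
primary_directive: You are a helpful assistant for this project.
model: llama3.2
provider: ollama
```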
You can use `npcpy` directly from Python to create and query an NPC:

```python
from npcpy.npc_compiler import NPC

simon = NPC(
    name='Simon Bolivar',
    primary_directive='Liberate South America from the Spanish Royalists.',
    model='gemma3',
    provider='ollama'
)

response = simon.get_llm_response("What is the most important territory to retain in the Andes mountains?")
print(response['response'])
```

```
The most important territory to retain in the Andes mountains is **Cuzco**.
It's the heart of the Inca Empire, a crucial logistical hub, and holds immense symbolic value for our liberation efforts. Control of Cuzco is paramount.
```
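Streaming responses (see Key Features) are consumed incrementally rather than waiting for the full reply. The consumption pattern looks roughly like this, sketched here with a stub generator standing in for a real streaming model call (the stub is illustrative only):

```python
def fake_stream():
    # Stand-in for a streaming LLM call; a real stream yields text chunks similarly.
    for chunk in ["The ", "Andes ", "campaign ", "begins."]:
        yield chunk

pieces = []
for chunk in fake_stream():
    pieces.append(chunk)   # render or process each chunk as it arrives
full_response = "".join(pieces)
print(full_response)
```

The same accumulate-as-you-go loop applies whatever the chunk source is, which is what makes streaming useful for real-time display.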
NPCs can also be grouped into a `Team`:

```python
from npcpy.npc_compiler import NPC, Team

ggm = NPC(
    name='gabriel garcia marquez',
    primary_directive='You are the author gabriel garcia marquez. see the stars ',
    model='deepseek-chat',
    provider='deepseek',  # anthropic, gemini, openai, or any provider supported by litellm
)

isabel = NPC(
    name='isabel allende',
    primary_directive='You are the author isabel allende.',
    model='deepseek-chat',   # provider settings assumed to mirror ggm; the original text is truncated here
    provider='deepseek',
)
```