Open Source MCP Server

Stop Switching Tabs. Command Every LLM From One Prompt.

Mid-conversation: "Ask Perplexity AND Grok to research this, then have Kimi K2 and GPT-5.1 analyze the findings." It just routes to the right models.

Gateway: One OpenRouter key
BYOB: Your own provider keys
Perplexity: Always needs its own key
Works best with Claude Code MCP integration
"Have Grok check Twitter for that error message"
"Ask Perplexity what changed in React 19 this week"
"Get Gemini to brainstorm, then have Kimi K2 and GPT-5.1 both analyze it"
6 Providers
GPT-5.1, Gemini, Grok, Perplexity, Kimi, Qwen
31 Tools
Turn on/off to control token costs
YAML Workflows
Chain models, reuse patterns
Two Ways to Use It

Ad-hoc Commands or Saved Workflows

Ask models anything mid-conversation. Or chain them into repeatable workflows.

Natural Language Routing

Just say what you need. "Ask Perplexity AND Grok to research this memory leak, then have Kimi K2 analyze." No config files. It routes to the right models automatically.

zero config

Real-Time Web Search

"Have Grok check Twitter for that error." "Ask Perplexity what changed in Next.js 15." Get actual sources from the past week, not hallucinated citations.

live search

Multi-Model Debates

Models challenge each other. Grok analyzes → Gemini critiques → Qwen finds edge cases → GPT-5.1 synthesizes. Bad answers get caught before you see them.

cross-checking
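
A debate like this can also be saved as a workflow. Here is a minimal sketch using tool names from this page (the file name and the "critique" param value are illustrative, not part of the shipped tool set):

debate-chain.yaml
# Sketch: Grok analyzes → Gemini critiques → Qwen probes edge cases → GPT-5.1 synthesizes
steps:
  - tool: grok_reason          # Grok analyzes the question
    output: analysis
  - tool: gemini_analyze_text  # Gemini critiques the analysis
    params:
      type: "critique"         # illustrative param value
    output: critique
  - tool: qwen_coder           # Qwen hunts for edge cases
    output: edge_cases
  - tool: openai_brainstorm    # GPT-5.1 synthesizes the final answer
    params:
      style: "systematic"
    output: final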

Model Strengths

Each model has a specialty. Perplexity for research, Grok for first-principles, Kimi K2 for 128k context, Gemini for brainstorming, GPT-5.1 for synthesis, Qwen for code.

right tool for job

Gateway or BYOB

Gateway Mode: one OpenRouter key for all models. Or BYOB: use your own provider keys directly. Perplexity always needs its own key for web search.

flexible setup

Control Token Costs

Need 1 tool? 400 tokens. All 31 tools? 13k tokens. Turn tools on/off in one config file. You decide the tradeoff between capability and cost.

400-13k tokens

YAML Workflows

For repeatable tasks: Perplexity researches → Grok reasons → Kimi K2 analyzes → GPT-5.1 synthesizes. Define once, reuse whenever. One model's output feeds the next.

chain models
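
As a sketch, that chain looks like this (tool names match the Custom Workflows example further down; the file name is illustrative):

research-chain.yaml
# Perplexity researches → Grok reasons → Kimi K2 analyzes → GPT-5.1 synthesizes
steps:
  - tool: perplexity_ask    # research with live web search
    output: research
  - tool: grok_reason       # first-principles reasoning over the findings
    output: reasoning
  - tool: kimi_thinking     # long-context analysis
    output: analysis
  - tool: openai_brainstorm # final synthesis
    output: final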

Rich Terminal Output

ASCII art headers, pie charts, braille progress bars, and gradient colors. See usage stats, workflow progress, and tool outputs as beautiful terminal visualizations.

visual feedback

How It Reduces Hallucinations

Run multiple AI models on the same question. They check each other's answers, debate solutions, and catch mistakes before you see them.

Based on peer-reviewed research (arXiv:2406.04692)

The Problem

A single AI model can confidently give you wrong answers. It doesn't know when it's making things up.

No way to verify if the answer is correct
You have to manually fact-check everything
Mistakes only show up after you've used the answer

What TachiBot Does

TachiBot runs your question through multiple models at once. They generate answers independently, then review each other's work to catch mistakes.

Step 1
4-6 models answer your question separately
Step 2
Each model reviews the others' answers and points out errors
Step 3
Final answer combines the best parts and removes mistakes
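
These three steps map onto the challenger and verifier tools from the profile list below. A sketch of the pattern (how each step's output feeds the next is illustrative):

verify-answers.yaml
# Step 1: independent answers → Step 2: cross-review → Step 3: synthesis
steps:
  - tool: grok_reason       # independent answer A
    output: answer_a
  - tool: perplexity_ask    # independent answer B, with sources
    output: answer_b
  - tool: kimi_thinking     # independent answer C
    output: answer_c
  - tool: challenger        # Step 2: each answer gets challenged
    output: objections
  - tool: verifier          # Step 2: factual claims get checked
    output: checks
  - tool: openai_brainstorm # Step 3: combine the best parts
    output: final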

Why This Works

Researchers tested this approach and published the results. When models check each other, hallucinations drop significantly.

30-40% Fewer Mistakes
Published research shows measurable reduction
More Accurate Answers
Models catch errors other models would miss
Open Research
Published on arXiv, code on GitHub - verify it yourself
Real Workflow Output

The AI your AI calls for help.

Real GPT-5 to GPT-5.1 migration analysis with 5 AI models

5 steps
~3 minutes
5 AI models
QUERY
"I'm using GPT-5 in production. Should I migrate to GPT-5.1? What are the differences, breaking changes, and migration steps?"
Technology: GPT-5
Status: Released Aug 7, 2025
Best For: Your current production model
Key Considerations: Stable and working well

Technology: GPT-5.1
Status: Released Nov 12, 2025
Best For: Latest version with improvements
Key Considerations: 20-30% faster, better reasoning

Technology: Migration
Status: Automatic for ChatGPT
Key Considerations: No breaking changes until Q1 2026; API users: re-test for factuality improvements

Real Example: Deep Research with Verification

Stop Trusting. Start Verifying.

Single Model

"What breaking changes are in React 19?"
Generic advice, might miss recent updates
No sources or official documentation links
Could confuse React 18 vs 19 features
Unreliable, unverified answers

TachiBot

"What breaking changes are in React 19?"
1. Run in Parallel
OpenAI · Google · Perplexity · OpenRouter

2. Debate & Refine
Models challenge each other (3-200 rounds)

3. Verify Facts
Perplexity · Grok
Search live sources with recency filters

Accurate list with official documentation
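
The fact-check pass in step 3 is just another workflow step. A sketch of what it might look like (the recency param is an assumption based on the "recency filters" mentioned above, not a documented parameter):

verify-step.yaml
# Sketch: verify claims against live sources from the past week
steps:
  - tool: perplexity_ask   # live web search with citations
    params:
      recency: "week"      # assumed param; the page mentions recency filters
    output: sources
  - tool: grok_reason      # cross-check claims against the sources
    output: verdict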

The Problem

One model hallucinates. You ask Claude something, it sounds confident, but it's wrong. Hours wasted debugging fake information.

Manual model-hopping. Copy from Claude, paste into GPT, paste into Perplexity, compare manually. Tedious and error-prone.

No web access. Claude doesn't know what happened yesterday. You need to manually search and paste context.

Different models, different strengths. GPT-5.1 is great at synthesis. Grok has Twitter. Kimi K2 handles 128k context. Using the wrong one wastes time.

The Solution

Models verify each other. "Ask Perplexity AND Grok to both research this." They catch each other's mistakes.

Natural language routing. Just ask. "Have Grok check Twitter for that error." "Get Gemini to brainstorm alternatives." No config needed.

Live web search built in. Perplexity and Grok search the web in real-time. Actual sources, not hallucinated citations.

Right model for each task. Research? Perplexity. First-principles? Grok. Long context? Kimi K2. Synthesis? GPT-5.1. Code? Qwen. It routes automatically.

Works With Leading AI Providers

Always using the latest models from each provider

OpenAI
GPT-5.1 series
Google
Gemini 3 Pro
Perplexity
Sonar Pro search
xAI
Grok 4.1
OpenRouter
Qwen, Kimi K2 & more
Anthropic
Claude models

Works best with Claude Code MCP integration

Full Control. Zero Lock-In.

Customize Everything

Control token costs and build custom workflows with simple config files

Profile System

Toggle tools on/off to control token usage

Choose a preset profile or create your own. Toggle individual tools on/off to control exactly which capabilities load and how many tokens you use.

tools.config.json
{
  "customProfile": {
    "enabled": true,  // ← Use custom profile
    "tools": {
      // Research tools
      "perplexity_ask": true,    // ✓ ON
      "scout": true,             // ✓ ON

      // Reasoning tools
      "grok_reason": true,       // ✓ ON
      "challenger": true,        // ✓ ON
      "verifier": true,          // ✓ ON

      // Creative tools
      "openai_brainstorm": true, // ✓ ON
      "gemini_analyze_code": false, // ✗ OFF
      "qwen_coder": false        // ✗ OFF
    }
  }
}
1 tool enabled: ~400 tokens
All 31 tools enabled: ~13k tokens

Optimize your context. Tools take token space. Load only what you need. Switch profiles anytime.

Custom Workflows

Write your own multi-step AI workflows

Define custom workflows in YAML or JSON. Chain any tools together, pass outputs between steps, run models in parallel. This example gathers four models' perspectives simultaneously, extracts the patterns they share, then synthesizes a final answer.

general-council.yaml
# Multi-model council workflow
steps:
  # Step 1: Gather perspectives (these steps run in parallel)
  - tool: grok_reason
    output: grok_view
  - tool: perplexity_ask
    output: research_facts
  - tool: qwen_coder
    output: technical_view
  - tool: kimi_thinking
    output: systematic_view

  # Step 2: Extract patterns
  - tool: gemini_analyze_text
    params:
      type: "key-points"
    output: patterns

  # Step 3: Final synthesis
  - tool: openai_brainstorm
    params:
      style: "systematic"
    output: final

Build your own workflows. Create unlimited variations. Save as .yaml or .json files. Run with workflow(name: "general-council")

Get Started in Minutes

Add to Claude Desktop or any MCP client

Installation
# 1. Install via npm
npm install -g tachibot-mcp

# 2. Add to Claude Desktop config
# ~/.config/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "tachibot": {
      "command": "tachibot-mcp",
      "args": ["--config", "~/.tachibot/config.json"]
    }
  }
}

# 3. Configure API keys (optional)
# ~/.tachibot/config.json
{
  "apiKeys": {
    "openai": "sk-...",
    "gemini": "...",
    "perplexity": "..."
  }
}

# 4. Start using!
tachibot workflow run general-council "Your query here"