Stop Switching Tabs. Command Every LLM From One Prompt.
Mid-conversation: "Ask Perplexity AND Grok to research this, then have Kimi K2 and GPT-5.1 analyze the findings." It just routes to the right models.
"Have Grok check Twitter for that error message""Ask Perplexity what changed in React 19 this week""Get Gemini to brainstorm, then have Kimi K2 and GPT-5.1 both analyze it"Ad-hoc Commands or Saved Workflows
Ask models anything mid-conversation. Or chain them into repeatable workflows.
Natural Language Routing
Just say what you need. "Ask Perplexity AND Grok to research this memory leak, then have Kimi K2 analyze." No config files. It routes to the right models automatically.
zero config

Real-Time Web Search
"Have Grok check Twitter for that error." "Ask Perplexity what changed in Next.js 15." Get actual sources from the past week, not hallucinated citations.
live search

Multi-Model Debates
Models challenge each other. Grok analyzes → Gemini critiques → Qwen finds edge cases → GPT-5.1 synthesizes. Bad answers get caught before you see them.
cross-checking

Model Strengths
Each model has a specialty. Perplexity for research, Grok for first-principles, Kimi K2 for 128k context, Gemini for brainstorming, GPT-5.1 for synthesis, Qwen for code.
right tool for the job

Gateway or BYOB
Gateway Mode: one OpenRouter key for all models. Or BYOB: use your own provider keys directly. Perplexity always needs its own key for web search.
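As a rough sketch, the two setups might look like this in the config file (the `openrouter` and `xai` key names are illustrative assumptions; check the actual config schema for the exact fields):

```json
// Gateway Mode: one OpenRouter key covers every model
{
  "apiKeys": {
    "openrouter": "sk-or-...",
    "perplexity": "pplx-..."   // always separate, for web search
  }
}

// BYOB: one key per provider
{
  "apiKeys": {
    "openai": "sk-...",
    "gemini": "...",
    "xai": "...",
    "perplexity": "pplx-..."
  }
}
```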
flexible setup

Control Token Costs
Need 1 tool? 400 tokens. All 31 tools? 13k tokens. Turn tools on/off in one config file. You decide the tradeoff between capability and cost.
400-13k tokens

YAML Workflows
For repeatable tasks: Perplexity researches → Grok reasons → Kimi K2 analyzes → GPT-5.1 synthesizes. Define once, reuse whenever. One model's output feeds the next.
chain models

Rich Terminal Output
ASCII art headers, pie charts, braille progress bars, and gradient colors. See usage stats, workflow progress, and tool outputs as beautiful terminal visualizations.
visual feedback

How It Reduces Hallucinations
Run multiple AI models on the same question. They check each other's answers, debate solutions, and catch mistakes before you see them.
Based on published research (arXiv:2406.04692)
The Problem
A single AI model can confidently give you wrong answers. It doesn't know when it's making things up.
What TachiBot Does
TachiBot runs your question through multiple models at once. They generate answers independently, then review each other's work to catch mistakes.
Why This Works
Researchers tested this approach and published the results. When models check each other, hallucinations drop significantly.
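The mechanism can be sketched in a few lines of Python. The "models" here are stubbed out with lambdas; in TachiBot the real providers fill these roles, and agreement is judged by the models themselves rather than by exact string match:

```python
# Minimal sketch of multi-model cross-checking with stubbed model calls.
# A real system replaces `models` with API calls to different providers
# and uses an LLM judge instead of exact-match voting.

def cross_check(question, models):
    """Each model answers independently, then peers vote on every answer."""
    answers = {name: ask(question) for name, ask in models.items()}
    verified = {}
    for name, answer in answers.items():
        # Count how many models independently produced the same answer.
        votes = sum(answer == other for other in answers.values())
        if votes > len(models) / 2:  # keep only majority-confirmed answers
            verified[name] = answer
    return verified

# Three stub "models": two agree, one hallucinates.
models = {
    "model_a": lambda q: "React 19 removes legacy context",
    "model_b": lambda q: "React 19 removes legacy context",
    "model_c": lambda q: "React 19 drops hooks entirely",  # wrong
}

result = cross_check("What breaking changes are in React 19?", models)
print(result)  # the hallucinated answer gets no majority and is dropped
```

With independent errors, the chance that a majority of models make the *same* mistake is far lower than the chance that any single model slips.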
The AI your AI calls for help.
Real GPT-5 to GPT-5.1 migration analysis with 5 AI models
| Item | Status | Details | Notes |
|---|---|---|---|
| GPT-5 | Released Aug 7, 2025 | Your current production model | Stable and working well |
| GPT-5.1 | Released Nov 12, 2025 | Latest version with improvements | 20-30% faster, better reasoning |
| Migration | Automatic for ChatGPT | No breaking changes until Q1 2026 | API users: re-test for factuality improvements |
Real Example: Deep Research with Verification
Stop Trusting. Start Verifying.
Single Model
"What breaking changes are in React 19?"TachiBot
"What breaking changes are in React 19?"Models challenge each other (3-200 rounds)
Search live sources with recency filters
The Problem
One model hallucinates. You ask Claude something, it sounds confident, but it's wrong. Hours wasted debugging fake information.
Manual model-hopping. Copy from Claude, paste into GPT, paste into Perplexity, compare manually. Tedious and error-prone.
No web access. Claude doesn't know what happened yesterday. You need to manually search and paste context.
Different models, different strengths. GPT-5.1 is great at synthesis. Grok has Twitter. Kimi K2 handles 128k context. Using the wrong one wastes time.
The Solution
Models verify each other. "Ask Perplexity AND Grok to both research this." They catch each other's mistakes.
Natural language routing. Just ask. "Have Grok check Twitter for that error." "Get Gemini to brainstorm alternatives." No config needed.
Live web search built in. Perplexity and Grok search the web in real-time. Actual sources, not hallucinated citations.
Right model for each task. Research? Perplexity. First-principles? Grok. Long context? Kimi K2. Synthesis? GPT-5.1. Code? Qwen. It routes automatically.
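Conceptually, routing is just a classifier from request to model. A toy keyword version shows the shape of the idea (purely illustrative; the actual routing is done by the LLM reading your request, not by keyword rules):

```python
# Toy keyword router: maps a natural-language request to a model name.
# Illustrative only; real routing interprets intent, not keywords.

ROUTES = [
    (("research", "what changed", "sources"), "perplexity"),
    (("twitter", "first principles"),         "grok"),
    (("long context", "whole repo"),          "kimi-k2"),
    (("synthesize", "summary"),               "gpt-5.1"),
    (("code", "refactor", "bug"),             "qwen"),
]

def route(request: str) -> str:
    text = request.lower()
    for keywords, model in ROUTES:
        if any(k in text for k in keywords):
            return model
    return "gpt-5.1"  # default generalist

print(route("Ask what changed in React 19 this week"))  # perplexity
print(route("Refactor this code for clarity"))          # qwen
```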
Works With Leading AI Providers
Always using the latest models from each provider
Works best with Claude Code MCP integration
Customize Everything
Control token costs and build custom workflows with simple config files
Profile System
Toggle tools on/off to control token usage
Choose a preset profile or create your own. Toggle individual tools on/off to control exactly which capabilities load and how many tokens you use.
{
  "customProfile": {
    "enabled": true,                // ← Use custom profile
    "tools": {
      // Research tools
      "perplexity_ask": true,       // ✓ ON
      "scout": true,                // ✓ ON
      // Reasoning tools
      "grok_reason": true,          // ✓ ON
      "challenger": true,           // ✓ ON
      "verifier": true,             // ✓ ON
      // Creative tools
      "openai_brainstorm": true,    // ✓ ON
      "gemini_analyze_code": false, // ✗ OFF
      "qwen_coder": false           // ✗ OFF
    }
  }
}

Optimize your context. Tools take token space. Load only what you need. Switch profiles anytime.
Custom Workflows
Write your own multi-step AI workflows
Define custom workflows in YAML or JSON. Chain any tools together, pass outputs between steps, run models in parallel. This example gathers perspectives from four models, extracts the shared patterns, then synthesizes a final answer.
# Multi-model council workflow
steps:
  # Step 1: Gather perspectives
  - tool: grok_reason
    output: grok_view
  - tool: perplexity_ask
    output: research_facts
  - tool: qwen_coder
    output: technical_view
  - tool: kimi_thinking
    output: systematic_view
  # Step 2: Extract patterns
  - tool: gemini_analyze_text
    params:
      type: "key-points"
    output: patterns
  # Step 3: Final synthesis
  - tool: openai_brainstorm
    params:
      style: "systematic"
    output: final

Build your own workflows. Create unlimited variations. Save as .yaml or .json files. Run with workflow(name: "general-council")
Get Started in Minutes
Add to Claude Desktop or any MCP client
# 1. Install via npm
npm install -g tachibot-mcp

# 2. Add to Claude Desktop config
# ~/.config/Claude/claude_desktop_config.json
{
  "mcpServers": {
    "tachibot": {
      "command": "tachibot-mcp",
      "args": ["--config", "~/.tachibot/config.json"]
    }
  }
}

# 3. Configure API keys (optional)
{
  "apiKeys": {
    "openai": "sk-...",
    "gemini": "...",
    "perplexity": "..."
  }
}

# 4. Start using!
tachibot workflow run general-council "Your query here"