TachiBot provides 51 tools with full parameter schemas. Click any tool to see all available parameters, types, and examples. Control which tools load via the Profile System.
Use profiles to control token usage (4k-19k tokens):

- TACHIBOT_PROFILE=minimal - 12 tools
- TACHIBOT_PROFILE=code_focus - 29 tools
- TACHIBOT_PROFILE=research_power - 31 tools
- TACHIBOT_PROFILE=balanced - 39 tools
- TACHIBOT_PROFILE=heavy_coding - 45 tools (DEFAULT)
- TACHIBOT_PROFILE=full - 51 tools

See Profile Configuration for details.
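A profile is selected through the environment variable shown above; for example, in your shell or server startup script:

```shell
# Pick a leaner profile to reduce context-token usage.
# Profile names and tool counts come from the list above.
export TACHIBOT_PROFILE=code_focus   # 29 tools instead of the default 45
```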
Some tools require API keys in your .env file:

- PERPLEXITY_API_KEY - For Perplexity tools
- XAI_API_KEY - For Grok tools
- OPENAI_API_KEY - For OpenAI/GPT-5.2 tools
- GEMINI_API_KEY - For Gemini tools
- OPENROUTER_API_KEY - For Qwen & Kimi tools

See the API Keys Guide for setup instructions.
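A minimal .env might look like the sketch below. Only the keys for providers you actually use are needed; the values are placeholders, not real key formats.

```shell
# .env - set only the keys for the providers you use.
# Values below are placeholders.
PERPLEXITY_API_KEY=your-perplexity-key
XAI_API_KEY=your-xai-key
OPENAI_API_KEY=your-openai-key
GEMINI_API_KEY=your-gemini-key
OPENROUTER_API_KEY=your-openrouter-key
```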
Log reasoning thoughts in a dedicated scratchpad. Provides structured thinking space for step-by-step problem solving. The foundation of the 54% performance improvement reported in Anthropic's research.
Multi-model collaborative reasoning with 10+ specialized modes (deep-reasoning, code-brainstorm, architecture-debate, etc.). Coordinate different AI models to solve problems together. Also available as 'tachi' alias.
Sequential thinking with multi-model execution, context distillation, and auto-judgment. Chain reasoning across models with smart context management.
Live web search using Grok-4.1 with advanced filtering. Configure sources (web/news/x/rss), domain restrictions, recency filters, and max results. IMPORTANT: Costs $5 per 1000 sources searched.
Deep logical reasoning with Grok using first principles thinking. Break down complex problems to fundamental truths and build solutions from the ground up.
Code analysis, optimization, debugging, review, and refactoring with Grok 4.1. Specialized for technical tasks requiring deep code understanding.
Deep debugging assistance with context-aware analysis. Grok examines errors, code, and context to find root causes and suggest fixes.
System architecture and design for small/medium/large/enterprise scale. Design scalable systems with proper constraints and requirements.
Creative brainstorming with Grok 4.1 for deep creative thinking. Generate innovative ideas with configurable constraints and quantity.
Web search with real-time information using Perplexity Sonar Pro. Filter by recency (hour/day/week/month/year) and search domains (general/academic/news/social).
Deep research using Perplexity's sonar-deep-research model. Synthesizes hundreds of sources into a comprehensive report in a single call. High latency (minutes) but exhaustive results.
Complex reasoning with Perplexity Sonar Reasoning Pro. Analytical, creative, systematic, or comparative approaches for deep problem solving.
Generate ASCII visualization of workflow structure showing steps, tools, models, parallel execution, and conditional logic. Perfect for understanding workflow composition.
Start a workflow in step-by-step streaming mode. Execute one step at a time with session management for long-running workflows. Returns session ID for continuing execution.
Continue executing the next step of a streaming workflow session. Use the session ID returned from workflow_start or previous continue_workflow calls.
Check the progress and status of a running streaming workflow session. Shows completed steps, current step, remaining steps, and latest output.
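The start/continue cycle above can be sketched as a simple client-side loop. Everything here except the tool names `workflow_start` and `continue_workflow` is an assumption: `call_tool` stands in for whatever invocation method your MCP client exposes, and the parameter and result field names (`file`, `session_id`, `done`) are illustrative, not the real schema.

```python
def run_streaming_workflow(call_tool, workflow_file, max_steps=100):
    """Drive a streaming workflow session one step at a time.

    `call_tool` is a stand-in for an MCP client's tool-invocation
    method; parameter and result field names are hypothetical.
    """
    # Start the session; the docs say this returns a session ID.
    session = call_tool("workflow_start", {"file": workflow_file})
    session_id = session["session_id"]

    # Execute one step per call until the workflow reports completion.
    for _ in range(max_steps):
        step = call_tool("continue_workflow", {"session_id": session_id})
        if step.get("done"):  # assumed completion flag
            return step
    raise RuntimeError("workflow did not finish within max_steps")
```

This keeps long-running workflows resumable: each `continue_workflow` call advances exactly one step, so a client can checkpoint or inspect progress between calls.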
Execute multi-step tool sequences from YAML/JSON files. Variable interpolation, parallel execution, unlimited steps. Build complex automated reasoning processes.
List all available workflow templates and custom workflows in your project directory.
Create custom workflows from templates (code-review/brainstorm/debug/research/custom). Define your own tool sequences with YAML/JSON.
Validate workflow YAML/JSON content for correctness. Checks syntax, interpolation references, tool names, and circular dependencies. Returns detailed error messages with fix suggestions.
Validate a workflow file from filesystem. Same validation as validate_workflow but reads from file path. Checks syntax, tool names, variable references, and dependencies.
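The exact workflow schema is documented elsewhere, so the sketch below is a guess at its shape: the field names (`name`, `steps`, `id`, `tool`, `input`), the tool names, and the `{{...}}` interpolation syntax are all hypothetical. Run any real file through validate_workflow before use.

```yaml
# Hypothetical two-step workflow - field names, tool names, and
# interpolation syntax are illustrative, not the real schema.
name: quick-review
steps:
  - id: research
    tool: perplexity_search          # hypothetical tool name
    input: "Recent best practices for {{vars.topic}}"
  - id: review
    tool: gpt_code_review            # hypothetical tool name
    input: "Review using this context: {{steps.research.output}}"
```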
Creative brainstorming with GPT-5.2 series. Choose model variant, reasoning effort (none/low/medium/high), and style.
Mathematical and scientific reasoning using GPT-5.2. Excels at logical analysis, mathematical proofs, scientific reasoning, and analytical problem-solving.
Comprehensive code review with GPT-5.2. Focus on security, performance, readability, bugs, or best practices. Get detailed feedback and improvement suggestions.
Explain complex concepts at different levels (beginner/intermediate/expert) using various styles (technical/simple/analogy/visual). Perfect for learning and teaching.
Multi-model consensus and conflict resolution using GPT-5.2. Compare options, identify conflicts, and provide recommendations based on criteria.
Web search with OpenAI. Retrieve real-time information grounded in web search results using GPT models with search capabilities.
Collaborative ideation with Gemini. Multi-round brainstorming where Gemini builds upon previous thoughts to generate deeper insights.
Code quality, security, performance, and bug analysis with Gemini. Focus on specific aspects or get general comprehensive analysis.
Text analysis for sentiment, summary, entities, key points, or general insights. Extract structured information from unstructured text.
Qwen3-Coder-Next (80B/3B MoE, 262K context, SWE-Bench >70%) for code generation, review, optimization, debugging, refactoring, and explanation. 3x cheaper than legacy model ($0.07/$0.30 per M tokens).
Qwen3-Competitive for competitive programming, algorithmic challenges, and LeetCode-style problems. Only enabled when ENABLE_QWEN_COMPETITIVE=true.
Heavy reasoning with Qwen3-Max-Thinking 235B (>1T total params, 98% on the HMMT math competition). Excels at complex multi-step reasoning, mathematical proofs, and analytical problem-solving.
Algorithm analysis with Qwen3-235B-Thinking (235B MoE, LiveCodeBench 91.4, HMMT 98%). O(1)-first optimization, complexity profiling, competitive programming patterns, and data structure selection.
Advanced agentic reasoning with Moonshot AI's Kimi K2.5 (1T MoE, 32B active, 256k context). Multimodal (vision/video) with Agent Swarm (100 sub-agents). Excels at long-horizon reasoning, multi-step analysis, and complex problem-solving. Tops SWE-Bench.
Task decomposition with Kimi K2.5's Agent Swarm reasoning. Breaks complex tasks into structured subtasks with IDs, dependencies, acceptance criteria, and parallel execution hints. Feeds into planner synthesis for ordered implementation plans.
SWE-focused code analysis with Kimi K2.5 (76.8% SWE-Bench). Specialized for code review, bug detection, refactoring suggestions, and implementation planning. Agent Swarm spawns sub-agents for different aspects of code analysis.
Long-context analysis with Kimi K2.5's full 256K token window. Analyze entire codebases, long documents, or extensive conversation histories. Ideal for cross-file analysis and large-scale code review.
Code fix, review, and optimization with MiniMax M2.5 (SWE-Bench 80.2%). Embedded SCoT, reflexion, and rubber_duck techniques. Per-task temperatures for optimal output.
Multi-step task decomposition with MiniMax M2.5. ReAct + least-to-most protocol with HALT criteria. Breaks tasks into ordered steps and executes with verification.
Google Search grounding with Gemini. Uses dynamic retrieval to ground responses in real-time Google Search results. Get factual, sourced answers with search citations.
Science-backed LLM-as-a-Judge evaluation (Gu et al., arXiv:2411.15594). Synthesize, evaluate, rank, or resolve multiple AI perspectives into a unified verdict. Uses chain-of-thought, first-principles, and adversarial reasoning.
Multi-model jury panel. Runs your question through configurable AI jurors in parallel (grok, openai, qwen, kimi, perplexity, minimax), then Gemini Judge synthesizes a unified verdict. Based on 'Replacing Judges with Juries' (Cohere, arXiv:2404.18796).
Multi-model council for creating verified implementation plans. 6-step process: Grok searches for ground truth, Qwen analyzes feasibility, GPT-5.2 critiques gaps, Gemini scores quality. Returns confidence-scored plans.
Step-by-step execution of implementation plans with verification checkpoints. Tracks progress, validates each step, and provides 50% and 100% verification milestones.
Browse recent implementation plans. View plan summaries, status, and confidence scores. Filter by status or search by keywords.
Browse all 31 research-backed prompt engineering techniques. View descriptions, aliases, and usage examples for patterns like first_principles, tree_of_thoughts, council_of_experts, and more.
Preview how a prompt technique will enhance your query before execution. See the exact system prompt and user prompt that will be sent to the AI model. No API calls made.
Apply a research-backed prompt technique to any AI tool. Enhances your query with structured prompting patterns, then routes to the best model for that technique. Composable building blocks for thinking.
For exhaustive parameter documentation with all edge cases and advanced examples, see the Complete Tools Reference on GitHub.
Configure which tools load via environment variable or config file:
Enabling fewer tools reduces token usage. Each tool adds ~400-800 tokens to context. Choose wisely based on your needs and budget.
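As a back-of-envelope check, multiplying the ~400-800 tokens-per-tool figure by each profile's tool count gives a rough context budget. These are only the docs' approximations; real schemas vary in size.

```python
# Rough context cost per profile: tool counts from the profile list,
# ~400-800 tokens per tool per the note above. Approximation only.
PROFILES = {
    "minimal": 12, "code_focus": 29, "research_power": 31,
    "balanced": 39, "heavy_coding": 45, "full": 51,
}

def token_range(tool_count, low=400, high=800):
    """Return (min, max) estimated context tokens for a profile."""
    return tool_count * low, tool_count * high

for name, count in PROFILES.items():
    lo, hi = token_range(count)
    print(f"{name:>14}: ~{lo:,}-{hi:,} tokens")
```

For example, the minimal profile's 12 tools work out to roughly 4.8k-9.6k tokens by this estimate.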
Configure and optimize your TachiBot setup