
Practical Examples

Real-world examples showing how to use TachiBot tools and workflows effectively. From simple tool calls to complex multi-step orchestrations.

Using Tools Directly (No Workflows)

You can call any TachiBot tool directly from Claude Code without creating a workflow. Just use natural language:

Example 1: Simple Research

Ask Claude to use perplexity_ask tool:

conversation
👤 You
"Use perplexity_ask to research the latest developments in Rust async runtime"
🤖 Claude
I'll use the perplexity_ask tool to research that.
⚙️ Tool Call: perplexity_ask
✓ Result
Shows recent tokio updates, async-std developments, and more.

Example 2: Code Analysis

Direct code review with qwen_coder:

conversation
👤 You
"Use qwen_coder to review this code for bugs:"
🤖 Claude
I'll use qwen_coder to review this code.
⚙️ Tool Call: qwen_coder
✓ Result
Found issues: 1. Weak validation — doesn't check domain 2. Missing null checks 3. No email format validation (RFC 5322) Recommendation: Use proper regex or validation library

Multi-Tool Chains

Chain multiple tools together for more powerful analysis. Claude can orchestrate this automatically:

Research → Validate → Analyze → Save

Complete analysis pipeline in natural language:

conversation
👤 You
"Research GraphQL vs REST with perplexity, validate findings with grok_search, analyze the trade-offs with qwen_coder, and save the results to a markdown file with timestamp"
🤖 Claude
Claude executes a 4-step pipeline: 1. perplexity_research({ topic: "GraphQL vs REST API comparison" }) 2. grok_search({ query: "GraphQL REST performance benchmarks 2024" }) 3. qwen_coder({ task: "analyze", requirements: "Compare based on research" }) 4. Saves output to timestamped markdown file
✓ Result
File created: ./graphql-vs-rest-2025-11-14-16-30.md

Multi-Model Consensus

Get opinions from 3 different models:

conversation
👤 You
"Should I use TypeScript or Go for a new microservice API? Get recommendations from grok_reason, gemini_brainstorm, and openai_brainstorm"
🤖 Claude
I'll gather perspectives from multiple models. 1. grok_reason says: "Go for performance-critical services..." 2. gemini_brainstorm suggests: "TypeScript for faster development..." 3. openai_brainstorm recommends: "Consider team expertise..."
✓ Result
Based on 3 model consensus: • Go: Better for CPU-intensive, low-latency services • TypeScript: Better for rapid prototyping, Node.js ecosystem Recommendation: TypeScript for MVP, migrate to Go if performance becomes bottleneck

Running Workflows

Workflows are pre-built multi-step processes. TachiBot includes 11 ready-to-use workflows:

How to Run a Workflow
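As the conversation examples further down this page show, a workflow is invoked by simply naming it in a plain-language request to Claude; for example (the target path is hypothetical):

```
"Run the code-review workflow on src/ and summarize the findings"
```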

Available Workflows

  • ultra-creative-brainstorm (18 steps, 14 techniques): comprehensive creative ideation pipeline
  • code-review (7 steps, parallel execution): multi-perspective code analysis
  • iterative-problem-solver (15 steps, conditional): Research → Analyze → Solve with refinement
  • code-architecture-review (7 steps): systematic architecture analysis
  • accessibility-code-audit (multiple steps): WCAG compliance checking
  • pingpong (8 steps, debate): multi-model debate and refinement

System Workflows (Advanced)

High-capacity workflows for large-scale analysis:

  • verifier (~58k tokens, 7 steps): multi-model consensus verification
  • scout (~90k tokens, 6 steps): multi-source information gathering
  • challenger (~64k tokens, 6 steps): devil's advocate critical analysis

Finding & Reading Workflow Outputs

Output Directory Structure

All workflow outputs are saved to workflow-output/ directory:
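The exact layout depends on which workflow you run; an illustrative sketch (all directory and file names here are hypothetical, following the naming convention in this section):

```
workflow-output/
└── code-review/
    └── 2025-11-14-16-30-00/
        ├── manifest.json
        ├── 1-research-topic-perplexity-2025-11-14-16-30-12-Thursday.md
        └── 2-analyze-code-qwen-coder-2025-11-14-16-31-04-Thursday.md
```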

File Naming Convention

{stepNumber}-{stepName}-{modelName}-{YYYY-MM-DD-HH-MM-SS-DayName}.md
Example: 3-brainstorm-ideas-gpt-5-2025-11-14-16-32-05-Thursday.md

Configuration

Override default output directory in your .env file:
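For example (the variable name below is an assumption, not a confirmed setting; check the Tools Configuration page for the exact key):

```
# Hypothetical key: verify the real name in the Tools Configuration docs
WORKFLOW_OUTPUT_DIR=./my-workflow-outputs
```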

Reading Results

1. Check manifest.json

Contains execution summary and metadata:
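A rough sketch of what such a manifest could contain (field names are illustrative, not the confirmed schema):

```json
{
  "workflow": "code-review",
  "startedAt": "2025-11-14T16:30:00Z",
  "status": "completed",
  "steps": [
    {
      "name": "analyze-code",
      "tool": "qwen_coder",
      "status": "completed",
      "outputFile": "2-analyze-code-qwen-coder-2025-11-14-16-31-04-Thursday.md"
    }
  ]
}
```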

2. Read Individual Step Outputs
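Since each step writes a standalone markdown file, reading a result is just a matter of opening it (the filename below is illustrative; substitute a real one from your run):

```shell
# Print the output of a single step; use ls to find the real filenames first
cat workflow-output/3-brainstorm-ideas-gpt-5-2025-11-14-16-32-05-Thursday.md
```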

3. Debugging Failed Workflows

Use jq to find errors in manifest.json:
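Assuming the manifest records a status per step (the `.steps[]`, `.status`, and `.error` field names here are assumptions; inspect your own manifest.json first), a query along these lines surfaces the failures:

```shell
# Print the name and error of every step that did not complete
# Field names are assumed; adapt them to the real manifest schema
jq -r '.steps[] | select(.status != "completed") | "\(.name): \(.error // "no error recorded")"' \
  workflow-output/manifest.json
```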

Real-World Use Cases

Use Case 1: Analyzing a GitHub Issue

Multi-tool analysis pipeline:

conversation
👤 You
"Analyze GitHub issue #234 about performance regression: 1. Research similar issues with perplexity_research 2. Analyze the code changes with qwen_coder 3. Get optimization suggestions from grok_code 4. Validate solution with verifier workflow"
✓ Result
Complete analysis with: • 5 similar historical issues found • Root cause identified (N+1 query problem) • 3 optimization strategies suggested • Solution verified by 3 models (95% confidence)

Use Case 2: Technical Decision Making

Compare database options:

conversation
👤 You
"Run iterative-problem-solver workflow to decide between PostgreSQL and MongoDB for a social media analytics platform with 1M daily active users"
🤖 Claude
Workflow executes 15 steps: Step 1–3: Research both databases (perplexity + grok) Step 4–6: Analyze requirements vs capabilities Step 7–10: Challenge assumptions with different scenarios Step 11–13: Verify recommendations with multi-model consensus Step 14–15: Synthesize final decision with trade-offs
✓ Result
Output: 15-page analysis in workflow-output/ Recommendation: PostgreSQL for ACID compliance, with MongoDB for time-series data

Use Case 3: Code Refactoring Plan

Comprehensive refactoring strategy:

conversation
👤 You
"Use code-architecture-review workflow on src/ directory"
🤖 Claude
Workflow analyzes: • Code structure and patterns • Coupling and cohesion • Performance bottlenecks • Security vulnerabilities • Test coverage gaps
✓ Result
Delivers: ✓ 12 refactoring recommendations ✓ Prioritized by impact (high/medium/low) ✓ Before/after code examples ✓ Estimated effort (story points) ✓ Risk assessment for each change Saved to: workflow-output/code-architecture-review/2025-11-14.../

Advanced Patterns

Parallel Execution

Run multiple tools simultaneously for faster results:
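The code-review workflow is described above as using parallel execution; as a sketch only (these key names are guesses, not confirmed TachiBot workflow syntax; generate real YAML with the create_workflow tool):

```yaml
# Illustrative sketch: key names are assumptions, not confirmed syntax
steps:
  - name: gather-perspectives
    parallel:
      - tool: grok_search
        input: "Known CVEs for the libraries used in src/"
      - tool: perplexity_research
        input: "Best practices for the patterns found in src/"
  - name: synthesize
    tool: qwen_coder
    input: "Merge the parallel findings into one review"
```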

Conditional Execution

Execute steps based on previous results:
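A hedged sketch of the idea (the `when` key and its expression form are hypothetical):

```yaml
# Illustrative sketch: conditional syntax is assumed, not confirmed
steps:
  - name: initial-scan
    tool: qwen_coder
    input: "Scan src/ for potential security issues"
  - name: deep-audit
    when: "initial-scan reported critical issues"   # hypothetical condition
    tool: grok_code
    input: "Investigate the critical issues from the initial scan"
```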

File-Based Chaining (Large Outputs)

Handle outputs larger than 1MB by saving to disk:
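One way this can look, sketched with assumed key names (`save_to` and `input_file` are illustrative, not confirmed syntax):

```yaml
# Illustrative sketch: file-chaining keys are assumptions
steps:
  - name: bulk-research
    tool: perplexity_research
    input: "Collect recent publications on the chosen topic"
    save_to: workflow-output/research-raw.md    # large output parked on disk
  - name: summarize
    tool: gemini_brainstorm
    input_file: workflow-output/research-raw.md # next step reads the file, not the context window
```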

Prompt Engineering Integration

Use 31 research-backed techniques for better reasoning:
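As a sketch of how a technique might be attached to a step (the `technique` key is an assumption; see the Prompt Techniques page for the actual mechanism):

```yaml
# Illustrative sketch: the technique key is hypothetical
steps:
  - name: weigh-tradeoffs
    tool: grok_reason
    technique: chain-of-thought   # named here only as an example technique
    input: "Evaluate TypeScript vs Go for this service"
```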

Related Resources

  • All Tools Reference: complete documentation for all 31 tools
  • Workflows Guide: learn to create custom workflows
  • Tool Profiles: pre-configured tool sets
  • Tools Configuration: customize which tools are enabled

Pro Tip

Start with direct tool calls to understand what each tool does. Once you find a pattern you use often, convert it to a workflow for repeatability. Use the create_workflow tool to generate YAML from your natural language description!