Practical Examples
Real-world examples showing how to use TachiBot tools and workflows effectively, from simple tool calls to complex multi-step orchestrations.
Using Tools Directly (No Workflows)
You can call any TachiBot tool directly from Claude Code without creating a workflow. Just use natural language:
Example 1: Simple Research
Ask Claude to use perplexity_ask tool:
👤 You
"Use perplexity_ask to research the latest developments in Rust async runtime"
🤖 Claude
I'll use the perplexity_ask tool to research that.
⚙️ Tool Call: perplexity_ask
✅ Result
Shows recent tokio updates, async-std developments, and more.
Example 2: Code Analysis
Direct code review with qwen_coder:
👤 You
"Use qwen_coder to review this code for bugs:"
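The transcript omits the snippet under review. A hypothetical example that would produce findings like the ones below is a naive email validator:

```typescript
// Hypothetical snippet under review: naive email validation.
// Intentionally weak -- it only checks for an "@", performs no domain
// or RFC 5322 format validation, and has no null/undefined guard.
function isValidEmail(email: string): boolean {
  return email.includes("@");
}

console.log(isValidEmail("alice@example.com")); // true
console.log(isValidEmail("not-an-email@"));     // also true -- the bug
```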
🤖 Claude
I'll use qwen_coder to review this code.
⚙️ Tool Call: qwen_coder
✅ Result
Found issues:
1. Weak validation: doesn't check the domain
2. Missing null checks
3. No email format validation (RFC 5322)
Recommendation: use a proper regex or a validation library
Multi-Tool Chains
Chain multiple tools together for more powerful analysis. Claude can orchestrate this automatically:
Research → Validate → Analyze → Save
Complete analysis pipeline in natural language:
👤 You
"Research GraphQL vs REST with perplexity, validate findings with grok_search, analyze the trade-offs with qwen_coder, and save the results to a markdown file with timestamp"
🤖 Claude
Claude executes a 4-step pipeline:
1. perplexity_research({ topic: "GraphQL vs REST API comparison" })
2. grok_search({ query: "GraphQL REST performance benchmarks 2024" })
3. qwen_coder({ task: "analyze", requirements: "Compare based on research" })
4. Saves output to timestamped markdown file
✅ Result
File created: ./graphql-vs-rest-2025-11-14-16-30.md
Multi-Model Consensus
Get opinions from 3 different models:
👤 You
"Should I use TypeScript or Go for a new microservice API?
Get recommendations from grok_reason, gemini_brainstorm, and openai_brainstorm"
🤖 Claude
I'll gather perspectives from multiple models.
1. grok_reason says: "Go for performance-critical services..."
2. gemini_brainstorm suggests: "TypeScript for faster development..."
3. openai_brainstorm recommends: "Consider team expertise..."
✅ Result
Based on the three-model consensus:
• Go: better for CPU-intensive, low-latency services
• TypeScript: better for rapid prototyping and the Node.js ecosystem
Recommendation: TypeScript for the MVP; migrate to Go if performance becomes a bottleneck
Running Workflows
Workflows are pre-built multi-step processes. TachiBot includes 11 ready-to-use workflows:
How to Run a Workflow
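Workflows are invoked by name in natural language from Claude Code; the exact phrasing is flexible. A hypothetical exchange:

```
You: "Run the code-review workflow on src/auth/login.ts"
Claude: Executes the workflow's steps in order, then saves each step's
        output to the workflow-output/ directory
```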
Available Workflows
ultra-creative-brainstorm
18 steps, 14 techniques
Comprehensive creative ideation pipeline
code-review
7 steps, parallel execution
Multi-perspective code analysis
iterative-problem-solver
15 steps, conditional
Research → Analyze → Solve with refinement
code-architecture-review
7 steps
Systematic architecture analysis
accessibility-code-audit
Multiple steps
WCAG compliance checking
pingpong
8 steps, debate
Multi-model debate and refinement
System Workflows (Advanced)
High-capacity workflows for large-scale analysis:
Multi-model consensus verification (7 steps)
Multi-source information gathering (6 steps)
Devil's advocate critical analysis (6 steps)
Finding & Reading Workflow Outputs
Output Directory Structure
All workflow outputs are saved to workflow-output/ directory:
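A typical run produces a layout like the following (workflow and file names are illustrative):

```
workflow-output/
└── code-review/
    ├── manifest.json
    ├── 1-analyze-structure-gpt-5-2025-11-14-16-30-12-Thursday.md
    ├── 2-find-bugs-qwen-coder-2025-11-14-16-30-48-Thursday.md
    └── 3-synthesize-gemini-2025-11-14-16-31-22-Thursday.md
```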
File Naming Convention
{stepNumber}-{stepName}-{modelName}-{YYYY-MM-DD-HH-MM-SS-DayName}.md
Example: 3-brainstorm-ideas-gpt-5-2025-11-14-16-32-05-Thursday.md
Configuration
Override default output directory in your .env file:
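For example (the variable name is an assumption; check your TachiBot `.env.example` for the exact key):

```
# .env -- override the default output directory (hypothetical variable name)
WORKFLOW_OUTPUT_DIR=./my-analysis-results
```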
Reading Results
1. Check Manifest.json
Contains execution summary and metadata:
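A sketch of what it may contain (field names are illustrative, not the exact schema):

```json
{
  "workflow": "code-review",
  "startedAt": "2025-11-14T16:30:05Z",
  "status": "completed",
  "steps": [
    { "step": 1, "name": "analyze-structure", "model": "gpt-5", "status": "completed" },
    { "step": 2, "name": "find-bugs", "model": "qwen-coder", "status": "completed" }
  ]
}
```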
2. Read Individual Step Outputs
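Because each file name starts with its step number, a numeric sort lists the outputs in execution order (the path is illustrative):

```shell
# List a run's step outputs in execution order (path is illustrative)
RUN_DIR=workflow-output/code-review
ls "$RUN_DIR" 2>/dev/null | sort -n
```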
3. Debugging Failed Workflows
Use jq to find errors in manifest.json:
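A self-contained sketch (the manifest fields are assumptions; adjust the filter to your actual schema):

```shell
# Create a sample manifest to query (structure is hypothetical)
cat > /tmp/manifest.json <<'EOF'
{
  "workflow": "code-review",
  "steps": [
    { "step": 1, "name": "analyze", "status": "completed" },
    { "step": 2, "name": "verify",  "status": "failed", "error": "model timeout" }
  ]
}
EOF

# Show only the failed steps with their error messages
jq '.steps[] | select(.status == "failed") | {name, error}' /tmp/manifest.json
```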
Real-World Use Cases
Use Case 1: Analyzing a GitHub Issue
Multi-tool analysis pipeline:
👤 You
"Analyze GitHub issue #234 about performance regression:
1. Research similar issues with perplexity_research
2. Analyze the code changes with qwen_coder
3. Get optimization suggestions from grok_code
4. Validate solution with verifier workflow"
✅ Result
Complete analysis with:
• 5 similar historical issues found
• Root cause identified (N+1 query problem)
• 3 optimization strategies suggested
• Solution verified by 3 models (95% confidence)
Use Case 2: Technical Decision Making
Compare database options:
👤 You
"Run iterative-problem-solver workflow to decide between PostgreSQL and MongoDB for a social media analytics platform with 1M daily active users"
🤖 Claude
Workflow executes 15 steps:
Steps 1–3: Research both databases (perplexity + grok)
Steps 4–6: Analyze requirements vs. capabilities
Steps 7–10: Challenge assumptions with different scenarios
Steps 11–13: Verify recommendations with multi-model consensus
Steps 14–15: Synthesize the final decision with trade-offs
✅ Result
Output: 15-page analysis in workflow-output/
Recommendation: PostgreSQL for ACID compliance, with MongoDB for time-series data
Use Case 3: Code Refactoring Plan
Comprehensive refactoring strategy:
👤 You
"Use code-architecture-review workflow on src/ directory"
🤖 Claude
Workflow analyzes:
• Code structure and patterns
• Coupling and cohesion
• Performance bottlenecks
• Security vulnerabilities
• Test coverage gaps
✅ Result
Delivers:
✓ 12 refactoring recommendations
✓ Prioritized by impact (high/medium/low)
✓ Before/after code examples
✓ Estimated effort (story points)
✓ Risk assessment for each change
Saved to: workflow-output/code-architecture-review/2025-11-14.../
Advanced Patterns
Parallel Execution
Run multiple tools simultaneously for faster results:
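Since `create_workflow` generates YAML, a parallel step can be sketched like this (the schema is hypothetical; inspect a generated workflow for the real field names):

```yaml
# Hypothetical workflow schema -- field names are illustrative
steps:
  - name: research
    parallel:                        # run these three tools at the same time
      - tool: perplexity_research
        input: "GraphQL vs REST comparison"
      - tool: grok_search
        input: "GraphQL REST benchmarks"
      - tool: gemini_brainstorm
        input: "GraphQL vs REST trade-offs"
  - name: synthesize
    tool: qwen_coder
    input: "Combine the three research results above"
```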
Conditional Execution
Execute steps based on previous results:
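A hedged sketch of what a conditional step might look like (both the schema and the condition syntax are assumptions):

```yaml
# Hypothetical schema -- a step that runs only if a prior step found problems
steps:
  - name: review
    tool: qwen_coder
    input: "Review src/ for security issues"
  - name: deep-audit
    tool: grok_reason
    condition: "steps.review.output contains 'vulnerability'"  # illustrative syntax
    input: "Analyze each reported vulnerability in depth"
```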
File-Based Chaining (Large Outputs)
Handle outputs larger than 1MB by saving to disk:
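One way this could look in a workflow definition (field names are illustrative, not the actual schema):

```yaml
# Hypothetical schema -- pass large step outputs through files, not context
steps:
  - name: gather
    tool: perplexity_research
    output_file: ./tmp/research.md    # write the (possibly >1MB) result to disk
  - name: analyze
    tool: qwen_coder
    input_file: ./tmp/research.md     # read it back instead of inlining it
```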
Prompt Engineering Integration
Use 31 research-backed techniques for better reasoning:
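Techniques can be requested by name directly in the prompt. For example (the technique names here are common prompting techniques, assumed to be among the 31 available):

```
You: "Use grok_reason with chain-of-thought and self-consistency to evaluate
whether we should shard the users table, and list the assumptions behind
each conclusion"
```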
Pro Tip
Start with direct tool calls to understand what each tool does. Once you find a pattern you use often, convert it to a workflow for repeatability. Use the create_workflow tool to generate YAML from your natural language description!