TachiBot

PingPong Workflow

A pre-built 9-step workflow that orchestrates a multi-model debate with analysis, challenge, and consensus phases. Four models refine ideas through structured iteration, and a fifth synthesizes the final consensus.

What is PingPong?

PingPong is a YAML workflow that runs your question through 4 AI models (Grok, Gemini, Qwen, Perplexity) in 3 phases: initial analysis, critical challenge, and final consensus (produced by a fifth model, OpenAI GPT-5). Each model builds on previous outputs to refine solutions through structured debate.

How to Use

Execute the pingpong workflow using the workflow tool:
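A minimal invocation sketch, shown as YAML tool parameters. The parameter names (`workflow`, `input`) are assumptions about the tool's schema, not confirmed on this page:

```yaml
# Illustrative parameters for the workflow tool.
# "workflow" and "input" are assumed field names; check the tool's schema.
workflow: pingpong
input: "Should we migrate our API from REST to GraphQL?"
```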

Or via Claude Code MCP:
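Once TachiBot is registered as an MCP server, you can ask Claude Code to run it in plain language (the prompt wording below is illustrative):

```
Use the workflow tool to run the pingpong workflow on this question:
"Should we migrate our API from REST to GraphQL?"
```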

Three-Phase Structure

Phase 1: Analysis (4 steps)

Each model analyzes your question independently from its unique perspective:

  • Grok - First-principles reasoning
  • Gemini - Creative brainstorming
  • Qwen - Technical code-focused analysis
  • Perplexity - Analytical reasoning with research

Phase 2: Challenge (4 steps)

Models critique and refine each other's analyses:

  • Grok - Finds flaws, edge cases, hidden assumptions
  • Gemini - Synthesizes analyses and proposes alternatives
  • Qwen - Identifies technical risks and implementation challenges
  • Perplexity - Research validation with current evidence

Phase 3: Consensus (1 step)

OpenAI GPT-5 compares all perspectives and generates the final recommendation with a consensus analysis.

Output Files

The workflow automatically saves all outputs to workflow-output/pingpong/<timestamp>/*.md:

  • analyze-grok.md - Grok's initial analysis
  • analyze-gemini.md - Gemini's brainstorming
  • analyze-qwen.md - Qwen's technical perspective
  • analyze-perplexity.md - Perplexity's research-backed analysis
  • challenge-*.md - Critical refinements from each model
  • consensus.md - Final multi-model comparison and recommendation

Required API Keys

The pingpong workflow requires API keys for all 5 providers:

  • XAI_API_KEY - For Grok reasoning
  • GOOGLE_API_KEY - For Gemini analysis
  • OPENROUTER_API_KEY - For Qwen coder
  • PERPLEXITY_API_KEY - For research validation
  • OPENAI_API_KEY - For final consensus (GPT-5)

Chaining with Other Workflows

Use pingpong's output as input to other workflows, or chain multiple analyses:
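One possible chaining pattern, sketched as two sequential tool calls. The parameter names and the follow-up workflow name are hypothetical, and the `<timestamp>` placeholder is filled in at run time:

```yaml
# Step 1: run the debate.
- workflow: pingpong
  input: "Should we adopt event sourcing for the order service?"

# Step 2: feed the consensus into a follow-up workflow.
# "implementation-plan" is a hypothetical workflow name; the consensus
# file lives under workflow-output/pingpong/<timestamp>/consensus.md.
- workflow: implementation-plan
  input_file: workflow-output/pingpong/<timestamp>/consensus.md
```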

Or create a custom workflow that includes pingpong as a step:

custom-architecture-review.yaml
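A sketch of what such a file might look like. The TachiBot workflow schema is not shown on this page, so the field names (`name`, `steps`, `uses`, `model`, `prompt`) are assumptions:

```yaml
# custom-architecture-review.yaml — illustrative only; adjust to the
# actual TachiBot workflow schema.
name: custom-architecture-review
steps:
  - name: debate
    uses: pingpong            # run the full 9-step debate as one step
    input: "{{ question }}"
  - name: action-items
    model: gpt-5              # turn the consensus into concrete tasks
    prompt: |
      Convert the following consensus into a prioritized task list:
      {{ steps.debate.output }}
```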

Best Practices

  • Clear, specific questions - Better input = better analysis across all 9 steps
  • Review all phase outputs - Each model provides unique insights worth examining
  • Focus on consensus.md - Final step synthesizes all perspectives with recommendations
  • Cost awareness - Each run makes 9 API calls across 5 providers
  • Use for complex decisions - Architecture choices, technical tradeoffs, design debates

Example: Architecture Decision
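As an illustration, a question like the following exercises all three phases (the parameter names are assumptions, as above):

```yaml
workflow: pingpong
input: >
  Should we split our monolithic order service into microservices now,
  or defer until we hit concrete scaling limits?
```

The Analysis phase gives four independent takes, the Challenge phase stress-tests them, and consensus.md distills a single recommendation.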

Performance Notes

⚠️ Cost Considerations

The pingpong workflow makes 9 total API calls across 5 different providers:

  • 4 calls during Analysis phase (Grok, Gemini, Qwen, Perplexity)
  • 4 calls during Challenge phase (same models)
  • 1 call for Consensus (OpenAI GPT-5)

Estimated cost: $0.10-0.30 per run depending on response lengths and provider pricing.
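That estimate is easy to sanity-check. The per-call cost range below is an assumption chosen to match the quoted figure, not actual provider pricing:

```python
# Back-of-envelope cost check for one pingpong run.
analysis_calls = 4    # Grok, Gemini, Qwen, Perplexity
challenge_calls = 4   # same four models critique each other
consensus_calls = 1   # OpenAI GPT-5 final synthesis
total_calls = analysis_calls + challenge_calls + consensus_calls

# Assumed cost per call (illustrative, not real pricing).
low_per_call, high_per_call = 0.011, 0.033

low = total_calls * low_per_call
high = total_calls * high_per_call
print(f"{total_calls} calls -> ${low:.2f}-{high:.2f} per run")
```

Actual cost scales with prompt and response lengths, so treat the per-call figures as placeholders.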

Next Steps

Enhance your multi-model orchestration:

  • Learn Workflows - Combine PingPong with other tools
  • Explore Tools - See all available models
  • Quick Start - More examples