Wave-Based vs Sequential AI Agent Execution: When Parallelism Pays Off
Modern AI coding agents are remarkably capable. Given a well-scoped task, a frontier model like Claude can produce working code in minutes. Yet most multi-agent frameworks still execute tasks one at a time, waiting for each to finish before starting the next. The bottleneck isn’t the AI. It’s the serial execution model wrapping it.
When you have a project with 20 tasks and half of them have no dependencies on each other, running them sequentially means you’re leaving massive throughput on the table.
How Does Sequential Execution Work?
Sequential execution runs tasks in a linear pipeline where each task must finish before the next one begins, regardless of whether the tasks depend on each other.
The orchestrator picks a task, assigns it to an agent, waits for completion, validates the result, and then moves to the next task. Even if Task 3 and Task 4 share no dependencies whatsoever, Task 4 waits idly while Task 3 runs.
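That loop can be sketched in a few lines of Python. Everything here is a hypothetical stand-in for illustration (`Task`, `run_agent`, `validate` are not SPOQ's actual API); the point is the shape of the control flow: one blocking call per task, in order.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    deps: list = field(default_factory=list)  # declared dependencies (ignored here)

def run_agent(task):
    # Stand-in for dispatching the task to a coding agent and blocking
    # until it finishes.
    return f"result of {task.name}"

def validate(result):
    # Stand-in for validating the agent's output (tests, lint, review).
    return True

def run_sequential(tasks):
    results = {}
    for task in tasks:                # strictly one at a time, in declared order
        result = run_agent(task)      # blocks; nothing else runs meanwhile
        if not validate(result):
            raise RuntimeError(f"{task.name} failed validation")
        results[task.name] = result
    return results

tasks = [Task("A"), Task("B"), Task("C")]
print(list(run_sequential(tasks)))    # → ['A', 'B', 'C']
```

Note that the dependency declarations never influence scheduling: even two tasks with no relationship at all are forced into line.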
Sequential Execution (Total: 5 time units)

Time:     1     2     3     4     5
        ┌───┐
Task A  │ A │
        └───┘
              ┌───┐
Task B        │ B │
              └───┘
                    ┌───┐
Task C              │ C │
                    └───┘
                          ┌───┐
Task D                    │ D │
                          └───┘
                                ┌───┐
Task E                          │ E │
                                └───┘

This model is simple and predictable. There are no race conditions, no merge conflicts, and debugging is straightforward. But for projects with any degree of task independence, it wastes time.
How Does Wave-Based Dispatch Achieve Topological Parallelism?
Wave-based dispatch builds a directed acyclic graph from task dependencies, then groups independent tasks into waves that execute simultaneously rather than one at a time.
SPOQ takes a fundamentally different approach. Before execution begins, the system builds a directed acyclic graph (DAG) from the task dependency declarations. It then performs a topological sort to identify waves, which are groups of tasks that have no inter-dependencies and can safely execute in parallel.
Wave-Based Execution (Total: 2 time units)
Dependencies: A → C, B → D, E is independent

Time:     1     2
        ┌───┐
Task A  │ A │
        └───┘
        ┌───┐ ┌───┐
Task B  │ B │ │ D │ Task D
        └───┘ └───┘
        ┌───┐ ┌───┐
Task E  │ E │ │ C │ Task C
        └───┘ └───┘

Wave 1: [A, B, E]    Wave 2: [C, D]
(parallel)           (C depends on A; D depends on B)

In Wave 1, tasks A, B, and E run simultaneously because none depend on each other. Wave 2 starts only after Wave 1 completes, running C (which depends on A) and D (which depends on B); since C and D do not depend on each other, they share a wave. The total wall-clock time drops from 5 units to 2.
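The wave grouping itself can be sketched as a layered variant of Kahn's topological sort: repeatedly peel off every task whose dependencies are already satisfied. This is an illustrative implementation, not SPOQ's, demonstrated on a small diamond-shaped graph.

```python
def compute_waves(deps):
    """Group tasks into waves. `deps` maps each task to the set of
    tasks it depends on. Each wave contains only tasks whose
    dependencies were all completed in earlier waves."""
    remaining = {t: set(d) for t, d in deps.items()}
    waves = []
    while remaining:
        # Every task with no unmet dependencies joins the next wave.
        ready = sorted(t for t, d in remaining.items() if not d)
        if not ready:
            raise ValueError("dependency cycle detected")
        waves.append(ready)
        for t in ready:
            del remaining[t]
        for d in remaining.values():
            d.difference_update(ready)   # mark this wave's tasks as done
    return waves

# Hypothetical diamond: B and C both depend on A; D depends on both.
deps = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
print(compute_waves(deps))   # → [['A'], ['B', 'C'], ['D']]
```

Because `ready` is recomputed each round, the wave schedule falls out of the dependency declarations automatically; no task is ever ordered relative to another unless an edge in the graph requires it.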
How the DAG Gets Built
Each task in a SPOQ epic declares its dependencies explicitly in its YAML definition. The orchestrator parses these declarations, constructs the DAG, validates it for cycles, and computes the wave schedule. This happens during the planning phase, before any agent writes a single line of code.
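As a minimal sketch of the cycle-validation step, assume the YAML has already been parsed into a plain mapping of task names to their declared dependencies (the task names and `deps` schema below are hypothetical, not SPOQ's actual format). A depth-first search with three node colors rejects any cycle before scheduling begins.

```python
# Hypothetical shape of an epic's task declarations after YAML parsing.
tasks = {
    "api-endpoint": {"deps": []},
    "ui-component": {"deps": []},
    "integration":  {"deps": ["api-endpoint", "ui-component"]},
}

def validate_acyclic(tasks):
    """Raise ValueError if the dependency declarations contain a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2     # unvisited / in progress / done
    color = {t: WHITE for t in tasks}

    def visit(t):
        color[t] = GRAY
        for dep in tasks[t]["deps"]:
            if color[dep] == GRAY:    # back edge: we're still inside dep's subtree
                raise ValueError(f"dependency cycle through {dep!r}")
            if color[dep] == WHITE:
                visit(dep)
        color[t] = BLACK

    for t in tasks:
        if color[t] == WHITE:
            visit(t)

validate_acyclic(tasks)   # no exception: this graph is a valid DAG
```

Running this during planning means a cyclic declaration fails fast, before any agent is dispatched.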
When Does Parallelism Pay Off?
Parallelism delivers the largest gains when a project has many independent tasks with few inter-dependencies, creating wide dependency trees that fill each wave with simultaneous work.
Wide Dependency Trees
Projects with many independent tasks see the largest gains. A feature sprint that adds several unrelated components like a new API endpoint, a UI component, a database migration, and a set of tests can often run all four in parallel. In our benchmarks, wide trees achieved speedups up to 5.3x compared to sequential execution.
Feature Sprints and Test Generation
Test generation is a particularly strong use case. Once the implementation tasks complete, unit tests for different modules have no dependencies on each other. A wave of test-writing agents can cover an entire codebase simultaneously.
When Does Parallelism Fail to Help?
Parallelism provides minimal benefit when tasks form strict sequential chains or when multiple tasks modify the same files, forcing serialization regardless of logical independence.
Deep Dependency Chains
If your tasks form a strict chain (A depends on B, B depends on C, C depends on D) there is nothing to parallelize. Each wave contains exactly one task. In these cases, SPOQ’s wave-based approach delivers only a modest 1.3x speedup (from reduced orchestration overhead and pre-fetching), not the dramatic gains seen with wider graphs.
Single-File Refactors
When multiple tasks modify the same file, they cannot safely run in parallel regardless of their logical dependencies. SPOQ detects file-level conflicts during planning and serializes these tasks even if the DAG would otherwise allow parallelism. This is a correctness constraint, not a scheduler shortcoming: concurrent writes to one file would produce merge conflicts or silently clobbered edits.
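One way such serialization could work is a greedy split of a planned wave into conflict-free sub-waves: two tasks share a sub-wave only if their declared file sets are disjoint. This is an illustrative sketch, not SPOQ's actual conflict handler.

```python
def split_on_file_conflicts(wave, files_touched):
    """Split one planned wave into sub-waves such that no two tasks in
    the same sub-wave touch a common file. `files_touched` maps each
    task to the set of file paths it will modify."""
    sub_waves = []
    for task in wave:
        placed = False
        for sub in sub_waves:
            # Task fits here only if it shares no file with anything
            # already scheduled in this sub-wave.
            if all(files_touched[task].isdisjoint(files_touched[t]) for t in sub):
                sub.append(task)
                placed = True
                break
        if not placed:
            sub_waves.append([task])   # open a new sub-wave for the conflict
    return sub_waves

# Hypothetical wave: A and B both edit main.py, C edits utils.py.
files = {"A": {"main.py"}, "B": {"main.py"}, "C": {"utils.py"}}
print(split_on_file_conflicts(["A", "B", "C"], files))   # → [['A', 'C'], ['B']]
```

The conflicting pair A/B is serialized across sub-waves, while C keeps running alongside A, so only the minimum necessary parallelism is sacrificed.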
What Do Real Deployment Numbers Show?
Across 9 real-world deployments, wave-based dispatch averaged approximately 2.4x speedup, with results ranging from 1.3x for deep chains to 5.3x for wide dependency trees.
The full data is documented in the SPOQ research paper:
- Wide dependency trees (many independent tasks): up to 5.3x speedup
- Mixed graphs (typical projects): 2.0x to 3.5x speedup
- Deep chains (mostly sequential dependencies): 1.3x speedup
- Average across all 9 deployments: approximately 2.4x
The key takeaway: even in the worst case, wave-based dispatch doesn’t hurt. The overhead of computing waves is negligible, and the orchestration improvements provide a small baseline gain. In the best case, you complete your project more than five times faster.
How Should You Choose Between the Two Approaches?
Wave-based dispatch subsumes sequential execution, so the real decision is whether your task decomposition reveals parallelism opportunities worth exploiting.
A deep chain is just a series of single-task waves. The real question is whether your project’s task decomposition reveals parallelism opportunities.
Good task decomposition is the prerequisite. If you lump everything into three monolithic tasks, there’s nothing to parallelize. If you break work into atomic, well-scoped units with explicit dependencies, the wave scheduler finds the parallelism automatically.
For the full benchmark data across all 9 deployments, including detailed dependency graph shapes and timing breakdowns, see the SPOQ research paper.
Related Posts
- Why Multi-Agent AI Orchestration Changes Everything
- Getting Started with Claude Code for Multi-Agent Development
Interested in multi-agent AI architecture? Schedule a conversation to discuss how these patterns can accelerate your team.