The 777-1 Experiment

The Project That Never Ends

How 129 code reviews led to 7 specialized subagents and a new approach to AI-assisted development.

The Origin Story

After reviewing 129 AI-generated web applications, I noticed patterns. The same issues appeared again and again, regardless of how well-written the initial prompts were.

Some prompts were detailed and specific. Others were vague and lazy. But the types of issues remained consistent. Responsive design broke on mobile. State didn't persist across refreshes. Accessibility was an afterthought. Features worked in isolation but not together.

This led to a realization: prompt engineering wasn't enough. We needed context engineering.

The Insight

Seven recurring issues emerged from those 129 reviews. Each became a specialized subagent — a context engineer who provides domain-specific context just-in-time.

1. Responsive/mobile design failures: 62+ occurrences (12-15% of issues)
2. Cross-feature integration failures: 58+ occurrences (11-13%)
3. Incomplete functionality: 57+ occurrences (11-13%)
4. UI inconsistency: 53+ occurrences (10-12%)
5. Accessibility gaps: 40+ occurrences (8-10%)
6. State management problems: 38+ occurrences (7-9%)
7. Code quality issues: 35+ occurrences (7-8%)
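For concreteness, those seven categories might be captured as plain data, something like the sketch below. The type and field names are hypothetical; only the counts and percentage ranges come from the review numbers above.

```typescript
// Hypothetical encoding of the seven issue categories. The shape is
// illustrative; the numbers come from the 129 reviews cited above.
interface IssueCategory {
  rank: number;
  name: string;
  occurrences: number;             // minimum observed count ("62+" means at least 62)
  shareOfIssues: [number, number]; // approximate share range, in percent
}

const issueCategories: IssueCategory[] = [
  { rank: 1, name: "Responsive/mobile design failures",  occurrences: 62, shareOfIssues: [12, 15] },
  { rank: 2, name: "Cross-feature integration failures", occurrences: 58, shareOfIssues: [11, 13] },
  { rank: 3, name: "Incomplete functionality",           occurrences: 57, shareOfIssues: [11, 13] },
  { rank: 4, name: "UI inconsistency",                   occurrences: 53, shareOfIssues: [10, 12] },
  { rank: 5, name: "Accessibility gaps",                 occurrences: 40, shareOfIssues: [8, 10] },
  { rank: 6, name: "State management problems",          occurrences: 38, shareOfIssues: [7, 9] },
  { rank: 7, name: "Code quality issues",                occurrences: 35, shareOfIssues: [7, 8] },
];
```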

The Experiment Design

The 777-1 Experiment tests a hypothesis: Can we predict prompt failures before execution by understanding prompt characteristics?

The Method

7 Diverse Applications

From educational simulations to e-commerce to gaming — different categories, different complexity levels.

Sequential Review

Each application reviewed by all 7 subagents in a specific order, because later context depends on earlier context.

Git Documentation

Every transformation documented via Git commits for full transparency and pattern extraction.

Pattern Extraction

Analyzing the transformations to build a failure prediction algorithm that keeps improving.
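Taken together, the Git Documentation and Pattern Extraction steps suggest a simple mining loop over commit history. The sketch below assumes a commit-subject convention like `amber: fix viewport overflow`, which this writeup does not specify; it is only one plausible way to turn commits into per-subagent counts.

```typescript
import { execSync } from "node:child_process";

// Hypothetical pattern extraction over a project's commit history.
// Assumes commit subjects are tagged per subagent, e.g.
// "amber: fix viewport overflow on mobile" -- a convention used here
// purely for illustration, not confirmed by the experiment.
function countCommitsBySubagent(repoPath: string): Map<string, number> {
  const subjects = execSync("git log --pretty=%s", { cwd: repoPath })
    .toString()
    .trim()
    .split("\n");

  const counts = new Map<string, number>();
  for (const subject of subjects) {
    const match = subject.match(/^(\w+):/); // tag before the first colon
    if (!match) continue;
    const tag = match[1].toLowerCase();
    counts.set(tag, (counts.get(tag) ?? 0) + 1);
  }
  return counts;
}

// Usage: countCommitsBySubagent("./projects/demo-app")
// might yield Map { "amber" => 9, "cassandra" => 4, ... }.
```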

Why This Order Matters

The subagents run in a fixed sequence because each one builds on the context established before it. Amber (responsive design) goes first because layout affects everything. Cassandra (cross-feature integration) goes last because she needs to see how all the features work together.
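A minimal sketch of that pipeline, assuming each subagent folds its findings into a shared context that later subagents read. All names and types here are illustrative except Amber and Cassandra; the five middle subagents are omitted because this section does not name them.

```typescript
// Sequential review pipeline (hypothetical shape). Each subagent
// receives the notes accumulated by everyone before it, which is
// exactly why the order matters.
interface ReviewContext {
  notes: string[];
}

interface Subagent {
  name: string;
  focus: string;
  review(app: string, ctx: ReviewContext): ReviewContext;
}

// Amber is first: layout decisions affect everything after her.
const amber: Subagent = {
  name: "Amber",
  focus: "responsive/mobile design",
  review: (app, ctx) => ({
    notes: [...ctx.notes, `Amber: checked ${app} across breakpoints`],
  }),
};

// Cassandra is last: she needs every feature in view at once.
const cassandra: Subagent = {
  name: "Cassandra",
  focus: "cross-feature integration",
  review: (app, ctx) => ({
    notes: [...ctx.notes, `Cassandra: checked ${app} feature interactions`],
  }),
};

function runPipeline(app: string, subagents: Subagent[]): ReviewContext {
  // Fold the application through every subagent in order,
  // accumulating context as we go.
  return subagents.reduce<ReviewContext>(
    (ctx, agent) => agent.review(app, ctx),
    { notes: [] },
  );
}

console.log(runPipeline("demo-app", [amber, cassandra]).notes);
```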

The Timeline

Phase 1 (Completed)

129 code reviews analyzed

Systematic analysis of AI-generated web applications to identify recurring issues.

Phase 2 (Completed)

7 subagent specifications created

Each recurring issue became a specialized subagent with clear responsibilities.

Phase 3 (Completed)

7 projects selected

Diverse applications chosen to test different aspects of context engineering.

Phase 4 (In Progress)

Experiments in progress

Systematic transformation of each project through all 7 subagents.

Phase 5

Algorithm development

Extract patterns to predict prompt failures before execution.

Phase 6 (Ongoing)

Continuous iteration

The project that never ends: every new project adds data and refines the algorithm.

Why "The Project That Never Ends"

Seven projects will give us directional insights, not statistical rigor. That's intentional. This is Phase 1 of an ongoing experiment.

Every new project adds data. Every new pattern refines the algorithm. The goal isn't perfection — it's continuous improvement.

Future phases will expand beyond Next.js, test different frameworks, explore new application types, and push the boundaries of what context engineering can achieve.

The Algorithm That Keeps Learning

Each case study contributes to a growing understanding of prompt failure patterns. The more projects we analyze, the better we can predict and prevent issues before they occur.
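What might such a predictor look like? One plausible shape, and only a sketch: scan the prompt for signals tied to each failure category and assign a risk seeded from the review frequencies. The keywords, weights, and function names below are hypothetical, not the experiment's actual algorithm.

```typescript
// Hypothetical failure predictor: flag which failure categories a
// prompt is likely to hit based on what the prompt does NOT mention.
// Keywords and the 0.4 discount are illustrative; base risks are
// loosely seeded from the category shares seen in the 129 reviews.
interface RiskSignal {
  category: string;
  mentions: RegExp; // evidence the prompt already considered this
  baseRisk: number; // prior risk when the prompt is silent about it
}

const riskSignals: RiskSignal[] = [
  { category: "Responsive/mobile design", mentions: /responsive|mobile|breakpoint/i, baseRisk: 0.15 },
  { category: "State management",         mentions: /persist|localStorage|refresh/i, baseRisk: 0.09 },
  { category: "Accessibility",            mentions: /accessib|aria|a11y/i,           baseRisk: 0.10 },
];

function predictFailures(prompt: string): { category: string; risk: number }[] {
  // A prompt that mentions a concern gets a discounted risk;
  // silence keeps the full prior.
  return riskSignals.map(({ category, mentions, baseRisk }) => ({
    category,
    risk: mentions.test(prompt) ? baseRisk * 0.4 : baseRisk,
  }));
}

console.log(predictFailures("Build a todo app with drag-and-drop"));
// All three priors survive untouched: the prompt says nothing about
// responsiveness, persistence, or accessibility.
```

The intuition matches the review data: prompts that never mention responsiveness correspond to the single most common failure category.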

Follow the Journey

Meet the team, explore the case studies, and see context engineering in action.