Canonical Pattern Seeding

Establish strong local examples first, because future work will imitate the surrounding pattern.

Agents are highly pattern-following: templates, initial tests, file layout, README style, and CI setup all become implicit policy. The fastest way to improve future output is to seed the codebase with the shape of work you want copied.

Embedded Standards

Put quality and style expectations into concrete artifacts the system can inspect and execute.

Treat tests, starter templates, and existing code patterns as the real carriers of standards. Delegation gets more reliable when norms are demonstrated and machine-checkable.
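As a sketch, here is one norm ("public functions carry docstrings") made machine-checkable; the helper and toy module are illustrative, not a real library:

```python
import inspect
import types

def check_public_docstrings(module):
    """Return the public functions in `module` that lack a docstring."""
    missing = []
    for name, obj in vars(module).items():
        if name.startswith("_") or not inspect.isfunction(obj):
            continue
        if not (obj.__doc__ or "").strip():
            missing.append(name)
    return sorted(missing)

# Build a toy module to demonstrate the check.
mod = types.ModuleType("example")
def documented():
    """Has a docstring."""
def undocumented():
    pass
mod.documented = documented
mod.undocumented = undocumented

print(check_public_docstrings(mod))  # ['undocumented']
```

Run against a real package in CI, a check like this turns a style expectation into an executable artifact future work can imitate.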

Encode the Hard Part

Put human effort into the few original conceptual moves that are still hard to generate, and let systems handle explanation, adaptation, and distribution.

Once the core insight exists, systems can explain it in many ways to many people. The scarce human contribution is no longer generic explanation; it is producing the distilled conceptual bit that others can then amplify.

External Working Memory

Extend long-running work by storing evolving state outside immediate context and refreshing it continuously.

Rather than rely on one uninterrupted context window, use a scratchpad and continuation loop. Offload intermediate understanding into stable external state so work can remain coherent over long horizons.
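A minimal sketch of the scratchpad-and-continuation loop, assuming a JSON file as the external store (the filename and state shape are illustrative):

```python
import json
from pathlib import Path

SCRATCHPAD = Path("scratchpad.json")  # external state that outlives any one run

def load_state():
    """Resume from the scratchpad instead of a live context window."""
    if SCRATCHPAD.exists():
        return json.loads(SCRATCHPAD.read_text())
    return {"step": 0, "notes": []}

def save_state(state):
    """Refresh the external memory before the run ends."""
    SCRATCHPAD.write_text(json.dumps(state, indent=2))

def continuation_step(state):
    # Hypothetical unit of work: record what was learned this pass.
    state["step"] += 1
    state["notes"].append(f"finished step {state['step']}")
    return state

state = continuation_step(load_state())
save_state(state)
```

Each invocation picks up where the last one left off, so coherence comes from the file, not from one long context.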

Independent Reality Checks

Keep long-running work aligned by checking it against multiple external signals of correctness rather than trusting local reasoning alone.

Use specs, conformance tests, compilation, and visual comparisons as separate feedback channels. Long autonomous loops drift unless reality keeps correcting them from the outside.
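A sketch of running channels as independent subprocesses so no single channel can vouch for the others (the check commands are placeholders, and the failing import is deliberate):

```python
import subprocess
import sys

def run_checks(checks):
    """Run each independent check command; report {name: passed}."""
    results = {}
    for name, cmd in checks.items():
        proc = subprocess.run(cmd, capture_output=True)
        results[name] = proc.returncode == 0
    return results

checks = {
    # Fails on purpose: the module is deliberately missing.
    "imports_cleanly": [sys.executable, "-c", "import this_module_should_not_exist"],
    "unit_tests": [sys.executable, "-c", "assert 1 + 1 == 2"],
}
print(run_checks(checks))  # {'imports_cleanly': False, 'unit_tests': True}
```

Because each channel runs in its own process and reports separately, a drifting loop gets corrected by whichever signal disagrees with it.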

Intent Over Interfaces

Hide fragmented systems behind one coordinating layer that accepts human goals, remembers context, and routes the underlying actions.

People want to express intent to one persistent coordinator, while the underlying tools become modular capabilities exposed through APIs.
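A toy coordinator as a sketch, using substring matching as a stand-in for real intent routing (all class, tool, and method names here are illustrative):

```python
class Coordinator:
    """One entry point that remembers context and routes goals to capabilities."""
    def __init__(self):
        self.context = []   # persistent memory across requests
        self.tools = {}     # capability name -> callable

    def register(self, name, fn):
        self.tools[name] = fn

    def handle(self, goal):
        self.context.append(goal)
        # Toy routing: pick the capability whose name appears in the goal.
        for name, fn in self.tools.items():
            if name in goal:
                return fn(goal)
        return "no capability matched"

hub = Coordinator()
hub.register("schedule", lambda g: f"scheduled: {g}")
hub.register("email", lambda g: f"drafted: {g}")
print(hub.handle("email the team about Friday"))  # drafted: email the team about Friday
```

The user talks to `hub`; the tools stay modular behind it and can be swapped without the user noticing.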

Lethal Trifecta

A system becomes structurally dangerous when it combines private data, exposure to hostile input, and a path to send data back out.

This is the basic breach geometry behind many failures. If you cannot trust the inputs, remove one of the three legs, especially the outbound path.
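The three-legs rule can be sketched as a simple configuration check (the class and leg names are illustrative):

```python
class TrifectaGuard:
    """Track the three legs; flag configurations that combine all of them."""
    def __init__(self, private_data=False, untrusted_input=False, outbound=False):
        self.legs = {
            "private_data": private_data,
            "untrusted_input": untrusted_input,
            "outbound": outbound,
        }

    def is_safe(self):
        # Structurally dangerous only when all three legs are present at once.
        return not all(self.legs.values())

# Reading hostile input over private data is tolerable -- until you add egress.
agent = TrifectaGuard(private_data=True, untrusted_input=True, outbound=False)
print(agent.is_safe())  # True
agent.legs["outbound"] = True
print(agent.is_safe())  # False
```

Dropping any one leg restores safety; in practice the outbound path is usually the easiest to cut.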

Normalization of Deviance

Repeated near-misses make risky behavior feel normal and safe until a failure finally lands.

Surviving unsafe behavior teaches organizations the wrong lesson. Lack of disaster can increase overconfidence instead of proving safety.

Plurality Over Monoculture

High-stakes systems are healthier when capability and decision-making are distributed across multiple specialized actors and a shared open layer, rather than concentrated in one closed center.

Precise Ends, Flexible Means

Define outcomes, priorities, boundaries, and corner cases precisely, while leaving local implementation choices open.

The system is rigorously obedient to its instructions. When the work goes in an unwanted direction, the remedy is usually not more effort inside the run, but a better top-level brief that is detailed on goals and constraints without micromanaging execution.

Process as Code

Make instructions, roles, and workflows explicit enough that they can themselves be tested, compared, and improved.

Once the operating process is written down clearly enough, it stops being invisible overhead and becomes an improvable system.
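As a sketch, a workflow written as inspectable data, with a test over the process itself (the step and owner names are illustrative):

```python
# Each step names its owner and its required predecessor, so the process
# can be validated like any other code.
WORKFLOW = [
    {"step": "draft",  "owner": "author",   "requires": None},
    {"step": "review", "owner": "reviewer", "requires": "draft"},
    {"step": "merge",  "owner": "author",   "requires": "review"},
]

def validate(workflow):
    """Check that every dependency points at an earlier step."""
    seen = set()
    for item in workflow:
        if item["requires"] is not None and item["requires"] not in seen:
            return False
        seen.add(item["step"])
    return True

assert validate(WORKFLOW)  # the process is now testable, not tribal knowledge
```

Two candidate processes can now be compared the way two implementations are: by what their checks say.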

Prototype-to-Proof Loop

Treat ideas as hypotheses, generate multiple versions cheaply, and decide by testing them against reality.

Initial ideas are usually wrong, and cheap prototyping changes the economics of exploration.
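The loop can be sketched in a few lines: generate many cheap variants, then let an objective test pick the survivor (the data and MSE metric are a toy stand-in for "reality"):

```python
import random

def evaluate(candidate, data):
    """Reality check: mean squared error of a single-value summary (toy metric)."""
    return sum((x - candidate) ** 2 for x in data) / len(data)

def prototype_loop(data, n_variants=20, seed=0):
    """Generate many cheap variants, keep whichever survives the test."""
    rng = random.Random(seed)
    variants = [rng.uniform(min(data), max(data)) for _ in range(n_variants)]
    return min(variants, key=lambda c: evaluate(c, data))

data = [1.0, 2.0, 3.0, 4.0]
best = prototype_loop(data)
print(best)  # likely lands near the mean (2.5), the MSE-optimal choice
```

The decision is made by `evaluate`, not by attachment to any one variant.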

Quality by Longevity

Set the quality bar according to how long the software must live and how much change it must absorb.

Distinguish disposable software from maintained software: for short-lived utilities, working output may be enough; for long-lived systems, structure, refactoring, and readability matter much more. Match engineering investment to expected maintenance burden.

Recursive Scoped Ownership

Break a large objective into nested scopes with clear owners, and keep each unit inside its assigned boundary.

The planner-worker tree is the core way the system stays scalable: the root owns the whole objective, sub-planners own narrower slices, and leaf tasks become increasingly specific. The key pattern is bounded delegation that prevents scope creep and overlap.
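A sketch of bounded delegation, modeling each scope as a set of paths a node may touch (the class and owner names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Scope:
    """A node in the planner-worker tree: an owner plus a bounded slice."""
    owner: str
    paths: set                              # areas this node may touch
    children: list = field(default_factory=list)

    def delegate(self, owner, paths):
        # Bounded delegation: a child scope must nest inside its parent.
        if not paths <= self.paths:
            raise ValueError(f"{owner} would exceed {self.owner}'s scope")
        child = Scope(owner, paths)
        self.children.append(child)
        return child

root = Scope("root-planner", {"api/", "ui/", "db/"})
api = root.delegate("api-planner", {"api/"})
worker = api.delegate("endpoint-worker", {"api/"})
print(worker.owner)  # endpoint-worker

# Scope creep is rejected at delegation time:
try:
    api.delegate("rogue-worker", {"db/"})
except ValueError:
    print("scope creep blocked")
```

Because the check runs when work is handed down, overlap between siblings is impossible by construction, not by convention.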

Risk-Scaled Autonomy

The amount of autonomy you grant a system should rise or fall with the downside of failure.

Draw a clear boundary between low-stakes personal experimentation and outputs that can harm other people. The larger the blast radius, the more review, restraint, and expertise you need.
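The sliding scale can be sketched as a simple policy function; the tiers and labels below are illustrative, not prescriptive:

```python
def autonomy_level(blast_radius):
    """Map the downside of failure to how much unsupervised action is allowed."""
    if blast_radius == "personal":       # only the operator can be hurt
        return "full-auto"
    if blast_radius == "team":           # colleagues absorb the failure
        return "auto-with-review"
    return "human-approval-required"     # customers or the public at risk

for radius in ["personal", "team", "production"]:
    print(radius, "->", autonomy_level(radius))
```

The point is that the gate is chosen by consequence, not by how capable the system feels.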

Task Bundles, Not Jobs

Analyze technological impact by breaking roles into tasks and then asking how lower task costs change total demand.

Treat jobs as bundles of activities, some of which speed up earlier than others. Pair that with demand elasticity: when something becomes cheaper and easier, total demand can rise rather than fall, so the effect on whole roles is not mechanically eliminative.
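The elasticity point can be made concrete with a constant-elasticity demand curve, Q = Q0 * (cost ratio)^(-e); the numbers below are illustrative:

```python
def demand_after_cost_drop(q0, cost_ratio, elasticity):
    """Constant-elasticity demand: quantity scales with cost^(-elasticity)."""
    return q0 * cost_ratio ** (-elasticity)

# A task gets 4x cheaper (cost_ratio 0.25). With elastic demand (e = 1.5),
# quantity demanded grows 8x, so total spend on the task doubles
# (0.25 * 8 = 2x) rather than falling.
q = demand_after_cost_drop(100, 0.25, 1.5)
print(round(q))  # 800
```

Whether a role shrinks therefore depends on the elasticity of each task in the bundle, not just on which tasks got cheaper.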

Tolerate Recoverable Errors

Allow small, non-compounding local errors when eliminating them immediately would slow the whole system more than fixing them shortly afterward.

The system accepts some transient breakage because insisting on perfect local correctness at every step would create synchronization bottlenecks. The important distinction is between errors that accumulate destructively and errors that are quickly repaired as part of normal flow.
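A sketch of the repair-later pattern: failures are parked in a queue and retried within a bounded number of passes, so the pipeline never stalls on a single local error (the toy domain and function names are illustrative):

```python
from collections import deque

def process(items, handle, repair, max_repair_passes=3):
    """Handle items optimistically; park local failures for quick repair
    instead of halting the whole pipeline on every error."""
    retry, done = deque(), []
    for item in items:
        try:
            done.append(handle(item))
        except ValueError:
            retry.append(item)  # recoverable: fix shortly afterward
    for _ in range(max_repair_passes):
        for _ in range(len(retry)):
            item = retry.popleft()
            try:
                done.append(handle(repair(item)))
            except ValueError:
                retry.append(item)  # still broken: compounding, surface it
    return done, list(retry)

# Toy domain: negative items are "broken" but trivially repairable.
def handle(x):
    if x < 0:
        raise ValueError("broken item")
    return x * 2

def repair(x):
    return abs(x)

done, unrecovered = process([1, -2, 3], handle, repair)
print(done, unrecovered)  # [2, 6, 4] []
```

Anything left in `unrecovered` after the bounded passes is, by definition, the compounding kind of error and deserves a hard stop.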

Verifiability First

Autonomous improvement works best where success can be measured clearly, repeatedly, and cheaply enough to close the loop.

Rapid progress is tied to objective metrics and easy evaluation, while softer, harder-to-score domains remain uneven. The same principle explains both where systems become highly capable and why they stay jagged elsewhere.
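A sketch of why the loop needs a verifiable score: the hill-climber below only works because `score` is objective, cheap, and repeatable (the target string and mutator are illustrative):

```python
def improve(candidate, perturb, score, budget=50):
    """Keep a change only when the objective metric says it helped."""
    best, best_score = candidate, score(candidate)
    for step in range(budget):
        nxt = perturb(best, step)
        if score(nxt) > best_score:
            best, best_score = nxt, score(nxt)
    return best

# Verifiable toy objective: per-character match against a known target.
target = "verify"

def score(s):
    return sum(a == b for a, b in zip(s, target))

def perturb(s, step):
    i = step % len(target)
    return s[:i] + target[i] + s[i + 1:]  # cheating mutator, for brevity

print(improve("......", perturb, score))  # verify
```

Swap `score` for a vague judgment call and the same loop has nothing to climb, which is exactly why soft domains stay jagged.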