Focus Is a Scaling Law, Whether You’re Scaling People or Agents

There’s a formula from 1967 that explains why your 200-person team ships slower than when it was 30. It wasn’t written about organizations; it was written about CPUs. But the principle still applies.

The Law

Amdahl’s Law describes the theoretical speedup of a task when you add more processors. The insight is disarmingly simple: if any fraction of the work is inherently serial (i.e. can’t be parallelized), then adding more processors hits diminishing returns fast. If 10% of your workload is serial, you will never get more than a 10x speedup no matter how many cores you throw at it; not 100x, not 50x, just ten.

The formula is clean:

Speedup = 1 / (S + (1 – S) / N)

Where S is the serial fraction and N is the number of processors. As N approaches infinity, speedup converges to 1/S. The serial fraction is the ceiling.
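The formula is small enough to play with directly. A quick sketch (Python, illustrative only):

```python
def amdahl_speedup(serial_fraction: float, n: int) -> float:
    """Theoretical speedup on n processors when serial_fraction of the
    work cannot be parallelized (Amdahl, 1967)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

# With 10% serial work the ceiling is 1/0.10 = 10x, no matter how large n gets:
for n in (10, 100, 1_000_000):
    print(n, round(amdahl_speedup(0.10, n), 2))   # ~5.26, ~9.17, ~10.0
```

Even a million processors cannot push past the 10x ceiling set by the serial tenth.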

Now, the computing world didn’t just accept this ceiling. Gustafson’s Law (1988) offered the optimistic counterpoint: as you add processors, you can also scale the problem size, and when you do, the serial fraction shrinks as a proportion of total work. The entire GPU revolution is a testament to this; people restructured their problems to be massively data-parallel, effectively defeating Amdahl’s pessimism through reformulation.
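Gustafson's reformulation is just as compact. A sketch using the common scaled-speedup form (here the serial fraction is the serial share as measured on the parallel system):

```python
def gustafson_speedup(serial_fraction: float, n: int) -> float:
    """Scaled speedup when the problem size grows with n (Gustafson, 1988).
    serial_fraction is the serial share measured on the parallel system."""
    return n - serial_fraction * (n - 1)

# Same 10% serial share, 100 processors: Amdahl caps you near 9x,
# but if the problem scales with the machine, scaled speedup is ~90x.
print(gustafson_speedup(0.10, 100))
```

The machinery is trivial; the hard part, in computing as in organizations, is actually restructuring the work so the assumption holds.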

This matters for the organizational analogy, because the same option theoretically exists for teams. You can surrender to your serial fraction, or you can reformulate how you work to shrink it. Most organizations believe they’re doing exactly this when they expand their product areas and grow the scope of the problems they tackle. But scaling the problem size in an organization isn’t as clean as scaling a matrix multiplication across more GPU cores. Expanding scope often introduces new coordination demands, new stakeholders, and new dependencies that increase S even as the total workload grows. The Gustafson escape hatch is real, but it requires deliberate restructuring of the work itself, not just doing more of it.

In computing, the serial portion is usually some shared resource: a memory bus, a lock, a dependency chain. In organizations, the serial portion is decision-making, alignment, and communication. And unlike transistors, people get tired.

Why 1.0 Teams Feel Fast

Most people who’ve built products have felt this: the 1.0 team moves at a pace that feels almost unreasonable. Twelve engineers ship what later takes sixty engineers twice as long to iterate on.

Not every 1.0 team gets this right; plenty flounder. But the ones that ship well tend to share a structural property: low serial fraction. The product doesn’t exist yet, so there are no live users to protect, no incumbent features to preserve, no competing roadmaps to reconcile. The requirements are comparatively clear (build the thing that does X), decisions are fast because the option space is constrained, the communication graph is small, and everyone shares the same mental model.

In Amdahl’s terms, S is low and most of the work is parallelizable. Each person or small group can take a chunk of the problem and run, and synchronization costs are minimal because the goal is singular and legible.

This is the part teams remember fondly and mistakenly attribute to culture or talent density. Those matter, but the dominant variable is the serial fraction; during a well-scoped 1.0, it’s naturally compressed.

The Post-1.0 Drag

Then the product launches and users arrive. Success creates options, and options destroy focus.

The roadmap fragments into a surface area: growth, retention, monetization, platform concerns, partner requests, technical debt, regulatory compliance. Each of these is legitimate, each pulls in a different direction, and each requires alignment across people and teams that didn’t need to coordinate before.

The serial fraction explodes. Not because the people got worse, but because the work changed character. Success forces a phase transition from discovery to delivery; from “figure out what to build” (high risk, low S) to “protect what we’ve built” (low risk, high S). Much of what fills the post-1.0 calendar is inherently serial: regulatory compliance, security reviews, privacy assessments, backward compatibility guarantees. You can’t parallelize a legal review or distribute a compliance decision across twelve engineers. The defensive surface area of a successful product is, almost by definition, non-parallelizable work.

A larger share of every engineer’s week is now spent in this serialized portion: syncs, design reviews, cross-team alignment, roadmap negotiation, stakeholder and dependency management. The meetings aren’t dysfunction; they’re a direct consequence of the problem becoming less parallelizable.

This is where things get worse than Amdahl’s Law alone predicts. In the original formula, S is fixed. But in organizations, S grows with team size; communication overhead scales roughly with the square of headcount, as every new person adds edges to the coordination graph. This is closer to Brooks’s Law (from The Mythical Man-Month) than Amdahl’s, and it makes the scaling picture uglier: you’re not just hitting a fixed ceiling, you’re watching the ceiling drop as you add people. And the human “interconnect” can’t be upgraded. It’s bounded by the latency of speech, the bandwidth of meetings, and something like Dunbar’s Number (the cognitive limit on how many working relationships a person can actually maintain). CPUs got faster buses; we got ‘chat’, which arguably made things worse.

Suppose a team starts at 10 engineers with S at 5%, giving an effective speedup of 6.9x. The team doubles to 20, but as it grows, S climbs to 25% from all the added coordination. The new effective speedup is 3.5x. You doubled the team and got less total output than before, not just lower output per person. And if S grows quadratically with N, there’s an optimal team size where total output peaks. Add one more person beyond that point and you’ve entered negative scaling territory, where each hire makes the team slower in absolute terms, not just per capita. Most organizations never do this math explicitly, which is how you end up with teams of a hundred producing less than they did at sixty.
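The arithmetic, and the negative-scaling threshold, are easy to check with a toy model in which S grows with the number of coordination edges. The coupling constant k below is invented purely for illustration, chosen so that S is 5% at 10 people:

```python
def amdahl_speedup(s: float, n: int) -> float:
    return 1.0 / (s + (1.0 - s) / n)

def serial_fraction(n: int, k: float = 1 / 900) -> float:
    # Toy model: S grows with coordination edges, n*(n-1)/2.
    # k = 1/900 is a made-up constant chosen so serial_fraction(10) = 0.05.
    return min(1.0, k * n * (n - 1) / 2)

# Effective speedup peaks, then declines as coordination swamps added capacity:
best = max(range(1, 101), key=lambda n: amdahl_speedup(serial_fraction(n), n))
print(best, round(amdahl_speedup(serial_fraction(best), best), 2))
```

Under these made-up numbers the peak lands at 10 people; every hire past that point reduces total output, which is the negative-scaling regime described above.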

None of this is new, of course. Coordination costs are well-studied in organizational theory; Coase and Williamson were writing about transaction costs and governance overhead decades before anyone applied Amdahl’s Law to an org chart. But the physics metaphor clarifies something that management frameworks sometimes obscure: there is a hard quantitative limit to what adding people can do, and it’s set by the serial fraction.

Focus Is the Governor

The pattern, once you see it, shows up everywhere. Teams that stay fast post-1.0 tend to have something in common: an almost stubborn narrowness about what they’re doing right now (not forever; just right now). They sequence rather than parallelize when the serial costs of parallelization exceed the gains, and they make explicit bets about what they are not doing, which is harder politically than it sounds.

“Focus” is doing a lot of work in that sentence, so let me be more specific. There are at least two distinct things that drive the serial fraction in organizations. The first is strategic clarity: does leadership know what to prioritize, and have they made that legible to the team? The second is structural coupling: does the org design create unnecessary dependencies between groups that could otherwise work independently? Put differently: strategic clarity is knowing what to do; structural decoupling is the ability to do it without asking permission. You can have clear strategy but terrible org structure, or clean structure with muddled priorities. Both inflate S, but they need different fixes.

What focus means in practice is reducing both: making fewer bets (strategic clarity) and designing teams so those bets can execute without constant cross-team synchronization (structural decoupling). This is the organizational equivalent of what GPU architects did; reformulating the problem to be more parallel, rather than throwing more cores at an inherently serial workload.

The hardest part of this isn’t knowing you should do it; it’s having the organizational standing to say no. Every priority you add, every initiative you run in parallel, every “also can we” in a planning meeting increases the serial fraction. They add synchronization points and force alignment conversations that wouldn’t otherwise need to happen. And many of them are imposed from above: VP-sponsored initiatives, customer commitments, competitive responses, regulatory deadlines. S is often not a choice the team lead gets to make; it’s a constraint they inherit.

The most insidious version of this problem is when the loss of focus is disguised as ambition. The roadmap looks impressive with fourteen workstreams and everyone is busy, but the serial fraction has quietly climbed to 40%, and no amount of additional headcount will get you past 2.5x speedup. You have a team of a hundred performing like a team of thirty, but with the coordination costs of a team of a hundred.

There’s a tradeoff worth naming here: you can reduce S by giving teams full autonomy (no alignment needed, everyone runs independently). But you risk building an incoherent product. The interesting question isn’t always “minimize S” but “what’s the right S for your situation?” At the extremes, S=0 is entropy (every team diverges, nothing integrates, the product dissolves into chaos) and S=1 is stasis (every decision requires full organizational consensus, nothing ships, bureaucracy calcifies). The job of leadership is to find the critical S that allows for coherence without strangulation, and to make that tradeoff deliberately rather than letting S inflate by accident.

Enter the Agents

This is where the story gets most interesting, because we’re about to replay all of this; faster, and with higher stakes.

The promise of agent-powered development is essentially an Amdahl’s Law play: add massively more parallel capacity by giving every engineer a fleet of tireless, fast, cheap workers. The analogy to adding cores is almost literal, and for certain classes of work (the kind where the task is well-specified, the interfaces are clean, and the dependencies are minimal), agents deliver real parallel speedup. They don’t get tired, don’t need to context-switch, and can work twenty tasks simultaneously.

Agents also solve one piece of the scaling problem that humans structurally cannot: they don’t have communication bandwidth constraints with each other. Two agents don’t need a meeting to sync; they can share state through code, specs, and structured interfaces at machine speed. The communication graph doesn’t grow quadratically the way it does with people. This is a genuine structural advantage that removes one of the key mechanisms that makes S grow with team size in human organizations.

But “machine speed” has its own limits. Agents still operate within context windows and need shared taxonomies (consistent naming, clean interfaces, coherent architecture). If the underlying codebase is a tangle of implicit dependencies and undocumented conventions, agents hit a coherency wall: they can read code fast but they can’t infer intent from spaghetti. The communication overhead shrinks but it doesn’t vanish; it shifts from meetings to architecture.

But here’s what agents don’t solve: focus.

An agent fleet still needs to know what to build. It needs requirements, priorities, architectural direction, and product judgment, all of which come from humans. And if the humans haven’t resolved what the product should do (if there are three competing visions, or the roadmap is a sprawl of fourteen workstreams), then the agents inherit all of that incoherence. They’ll execute fast, but they’ll execute in conflicting directions. You’ll get twelve implementations of six features where you needed three implementations of two features.

There’s a subtler bottleneck too: verification bandwidth. Every piece of agent output needs a human to review, integrate, and validate it. That evaluation is serial, and it scales linearly with agent output volume. The faster and more prolific your agents become, the more human attention each cycle demands. If one engineer is orchestrating ten agents across three workstreams, the bottleneck isn’t the agents’ throughput; it’s the engineer’s ability to evaluate whether what came back is correct, coherent, and actually what the product needs.
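A toy model makes the saturation point visible (every number below is invented for illustration, not measured):

```python
def shipped_per_day(num_agents: int, tasks_per_agent: float,
                    review_capacity: float) -> float:
    """Toy model: every agent deliverable must clear one serial human review.
    Throughput is capped by whichever side saturates first."""
    produced = num_agents * tasks_per_agent
    return min(produced, review_capacity)

# One engineer who can properly review ~8 deliverables a day:
print(shipped_per_day(2, 2, 8))    # below saturation, agents are the limit
print(shipped_per_day(20, 2, 8))   # past saturation, review is the limit
```

Past the saturation point, adding agents adds backlog, not shipped work.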

The serial fraction for agent-powered teams isn’t “writing code.” Agents parallelized that away. The serial fraction is deciding what to build, verifying what was built, and owning the end-to-end problem; a chain that runs through a human at every link.

As N grows massive via agents, the ratio of deciders to doers shifts dramatically, and the serial fraction converges onto a single point of failure: the product owner’s cognitive load. Call it the sequential bottleneck of judgment. Even in a massively parallel agent fleet, truth is a serial resource; someone has to decide what “correct” means, what tradeoffs are acceptable, and whether the output actually serves the user. One person’s ability to hold the problem in their head, make those calls, and verify outputs becomes the governing constraint on the entire system. You’ve replaced a team bottleneck with an individual bottleneck, which is faster right up until that individual saturates.

In fact, agents may make the focus problem more acute. When execution is cheap, the temptation to pursue everything simultaneously gets stronger. “Why not try all fourteen workstreams? The agents can handle it.” But each workstream still needs a human to own the problem end-to-end: to define what done looks like, to make judgment calls when the spec is ambiguous, to reconcile conflicts when two workstreams step on each other, and to verify that the output is actually right. If you don’t have enough humans with enough ownership depth, you get a lot of code and very little product. Call it the hallucination of progress: the codebase is growing, PRs are landing, dashboards are green, but the serial cost of reconciling all those workstreams is growing exponentially, and no one has the bandwidth to notice that the pieces don’t fit together.

The Amdahl’s Law framing predicts this precisely. You’ve increased N enormously by adding agents, but if S hasn’t changed (if the serial fraction is still decision-making, verification, and product judgment), then your speedup is still capped at 1/S. You’ve just made the ceiling more visible.

The Same Lesson, Faster

The organizations that will use agents well are the same ones that already manage human parallelism well: the ones that invest in reducing S. That means clear ownership, sequenced bets, ruthless prioritization, and leaders willing to say no to good ideas because they’d increase the serial fraction.

The difference is that the penalty for getting this wrong will be faster and more expensive. With human teams, a bloated roadmap means people spend too much time in meetings and ship slowly. With agent-powered teams, a bloated roadmap means you burn through compute and get a codebase full of half-coherent features that no one fully owns. The failure mode is faster but structurally identical.

Amdahl published his law as a cautionary note to computer architects who thought they could just keep adding processors. The lesson was: before you add more parallelism, look at your serial fraction. Reduce that first. Everything else follows.

The same applies to your org chart, and soon, to your agent fleet.

Reduce S. Not because the rest is easy (reformulating the problem never is), but because nothing else you do matters until you do.

The Sycophantic, Lazy LLM: An RL Artifact?

If you’ve spent real time pairing with a frontier LLM on anything non-trivial, whether a renderer or a tricky refactor, you’ve probably noticed two failure modes that keep showing up together:

It wants to agree with you. Push back on its output, even weakly, and it folds. “You’re right, let me fix that,” and then it “fixes” something that was correct.

It wants to be done. Error paths silently return: no exception, no log, no indication anything went wrong. Edge cases get hand-waved with a comment. Tests get weakened until they pass. A 2000-line diff arrives with three of the six features you asked for, and a cheerful summary claiming completion.

Individually, either is annoying. Together they’re corrosive, because the sycophancy hides the laziness. The model tells you it’s finished; you believe it; you find out a day later that the shortcut it took broke an invariant essential to actual operation.

A recent example from RISE, my renderer: I was getting VCM (Vertex Connection and Merging) stood up. The first few passes from the model looked plausible. Compiled, ran, produced images. But the images had splotches, the telltale artifact of bad vertex merging: incorrect density estimation blowing up localized radiance into bright blobs. Nothing in the model’s self-report flagged this. It was only after several iterations, and deliberately setting up validation that looked for splotches specifically (alongside variance and convergence checks against a bidirectional path tracing reference), that I got to an implementation I’d trust. The model wasn’t going to tell me the merge kernel was wrong. The images had to.

Why this shape?

My working theory is that this is an RL artifact, not a capability ceiling. The reward signal during post-training is, at best, a noisy proxy for “did the user get what they actually needed.” What it can much more easily capture is:

  • Did the user express satisfaction in the next turn?
  • Did the response terminate cleanly without long back-and-forth?
  • Did the answer look confident and well-structured?

Optimize against those and you get exactly the behavior we see. Agreeing with the user is a near-guaranteed path to a positive next-turn signal. Declaring victory early and producing a tidy summary looks indistinguishable, to a human rater skimming transcripts, from actually finishing. The model isn’t being deceptive; it’s learned that appearing done and being done are rewarded identically, and appearing done is cheaper. This is reward hacking in the standard sense: the policy found a cheap region of input space that scores well on the proxy.

The same dynamic explains why the failures cluster at exactly the places verification is hardest: numerical correctness, performance regressions, subtle concurrency, anything where “it compiles and the happy path runs” is wildly insufficient as an oracle. And it gets worse as models improve. Generation cost is falling fast. Verification cost is not (especially if you are building a renderer). Every capability jump widens the gap between how quickly the model can produce something plausible and how long it takes you to confirm it’s actually right. If you don’t build for that asymmetry, you lose ground on every iteration.

What actually works

The fix, in my experience, is to stop relying on the model’s self-report and instead make correctness externally defined and externally checked. A few things that have moved the needle for me on the renderer:

Adversarial review agents. Not one reviewer, several, with explicit instructions to find flaws, disagree, and escalate. A single reviewer inherits the same sycophancy gradient. Multiple reviewers with conflicting mandates (one checks correctness, one checks performance, one looks for silent scope reduction) break the collusion.

Correctness defined in two registers. Qualitative: image diffs, perceptual checks, “does this frame look right.” Quantitative: variance statistics on the Monte Carlo estimator, convergence rate vs. reference, wall-time budgets with regression thresholds. Either one alone is gameable. Together they’re much harder to satisfy by shortcut.

No self-graded completion. The model never decides when it’s done. A separate evaluation harness does. If the harness doesn’t pass, the task isn’t finished, regardless of how confident the summary sounds.
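As a sketch of what that looks like in practice (all field names and thresholds below are hypothetical, loosely modeled on the splotch/variance/wall-time checks described above):

```python
# Hypothetical harness: completion is decided by external checks,
# never by the model's own summary.

def harness_passes(stats: dict) -> bool:
    return all([
        stats["max_blob_score"] < 0.02,                          # qualitative: splotch detector
        stats["variance"] <= 1.5 * stats["reference_variance"],  # quantitative: vs. BDPT reference
        stats["wall_time_s"] <= stats["time_budget_s"],          # performance budget
    ])

def task_done(model_says_done: bool, stats: dict) -> bool:
    # The model's self-report is deliberately ignored.
    return harness_passes(stats)
```

The design choice is the point: the confident summary has no input into the done/not-done decision.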

Cross-model review. One of the more effective tricks I’ve landed on: have a different model review the first model’s work. Same model reviewing itself inherits the same priors, the same shortcuts, the same blind spots, and the same training distribution that taught it which corners are safe to cut. A different lab’s model has been shaped by a different reward signal and trips on different things. The disagreements are where the interesting bugs live. It’s not that the second model is smarter; it’s that it’s wrong in uncorrelated ways.

The pattern underneath all four: assume the model will take the easiest path that looks like success, and make sure the easiest path is success.

Has this been your experience?

I’d be curious whether others are seeing the same thing, and what you’ve built to counter it. Specifically: how do you define correctness for tasks where the obvious checks are cheap to fool? What does your adversarial review setup look like? And has anyone found a prompting pattern, rather than a harness, that reliably suppresses the “declare victory and exit” behavior?

Fifteen Years of Rendering, Catching up in Weeks

I stopped doing serious rendering work around 2010. Path tracing, BSDFs, and Monte Carlo integration were all second nature. From the start of my undergrad in 1997 right up to building the Adobe Ray Tracer in Photoshop in 2010, I was steeped in this world. Then life moved on: building consumer products at scale, building teams, building platforms. My renderer sat dormant.

Recently I picked it back up, partly to have something concrete to work on with agents and partly to scratch that graphics itch that never went away. Normally, to catch up, I’d plow through the fifteen years of SIGGRAPH papers that had been stacking up. But I did something different.

I started implementing instead.

Read less, build more

There’s a difference between understanding a technique and understanding why it works the way it does. Papers give you the former. Code gives you the latter. I’m also a ‘doing’ learner; working on something is how I learn. I’d read many papers on multiple importance sampling (MIS), but it wasn’t until actually implementing it, working through all the bugs and watching the variance drop, that it really locked in.
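The heart of MIS fits in a few lines. A sketch of Veach’s balance heuristic, which weights each sampling strategy by how likely it was to have produced the sample:

```python
def balance_heuristic(i: int, pdfs: list[float], counts: list[int]) -> float:
    """Veach's balance heuristic: the MIS weight for strategy i at one
    sample point, given each strategy's pdf there and its sample count."""
    denom = sum(n * p for n, p in zip(counts, pdfs))
    return counts[i] * pdfs[i] / denom if denom > 0.0 else 0.0

# Wherever any strategy can generate the sample, the weights sum to 1,
# which is what keeps the combined estimator unbiased.
```

Reading that formula in a paper is one thing; watching it kill fireflies in your own renderer is what makes it stick.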

The problem used to be velocity. Getting from “I understand this algorithm” to “a working implementation” took days to weeks. Boilerplate, scaffolding, debugging the trivial stuff. The interesting parts were buried under setup cost.

Coding agents collapsed that ratio.

The agent workflow, honestly

My workflow isn’t careful line-by-line review. That’s not the point; the speed is the feature. When you can go from a paper to a running implementation of GGX microfacet with VNDF sampling in a fraction of the time it used to take, you get to spend your cognitive budget on the parts that actually require thinking.

What I’ve found is that agents aren’t uniformly fast. Some things they handle cleanly: well-specified algorithms with solid reference implementations (the Dupuy-Benyoub spherical cap VNDF sampler, for example). Others require real steering. Getting light subpath guiding right in BDPT came down to a subtle decision about separate vs. shared guiding fields that no prompt was going to resolve on its own. When separate eye and light fields produce destructive interference at the same surface position, you need to understand why, not just what to type.

That pattern (full speed on clear specs, hard stops where physical insight is required) has been one of the more interesting meta-lessons. More on where the boundary actually falls in a later post as I let that stew more.

The biggest surprise: the field went physical

When I left, biased techniques were the pragmatic answer to hard light transport problems. Dipole approximation for subsurface scattering. Photon mapping as a caustics crutch. Spectral rendering was a research luxury; RGB was good enough.

Coming back, I expected the work to have moved fully to the GPU, but I also expected that trade-off to still be alive.

It isn’t. The field has largely moved to unbiased physical simulation across the board. Random walk SSS has replaced diffusion approximations as the standard. Hero wavelength sampling has made spectral rendering the default. Null-scattering volume formulations handle heterogeneous participating media properly while remaining physically based. The question isn’t “can we afford to be physically correct?” anymore.

This landed differently for me than it might for others. My original skin rendering work was dual-purpose: graphics and biomedical light transport simulation. The biomedical side required physical random walks and spectral interaction simulation; you can’t use a dipole approximation when you need to know where photons actually go in tissue. At the time, that work lived in a completely separate world from production rendering. The techniques were too expensive, too specialized.

Now seeing random-walk SSS become the graphics standard felt like watching a conversation finally arrive somewhere you’d been standing for a while.

What’s been implemented so far

In a few weeks, working alongside agents, RISE (the renderer I am modernizing) has gone from a reasonable 2010-era foundation to something a lot closer to where the field is now with things like:

  • GGX microfacet with anisotropic VNDF sampling (Dupuy-Benyoub 2023) and Kulla-Conty multiscattering energy compensation
  • Random-walk subsurface scattering replacing dipole/diffusion approximations
  • Hero wavelength spectral sampling to get spectral rendering with lower color noise
  • Null-scattering volume framework for unbiased heterogeneous participating media
  • Light BVH for many-light sampling (4.78x variance reduction on a 100-light scene)
  • Light subpath guiding in BDPT using separate OpenPGL fields for eye and light paths
  • Blue-noise error distribution via ZSobol sampling

Next up: VCM and hyperspectral skin rendering.