Context

Here are practical patterns, observations, and lessons I’ve collected while adopting LLM/Gen AI-assisted software engineering in both hobby projects and my day-to-day work as a software engineer and engineering manager.

One idea in particular shifted my perspective. In an interview, Simon Willison argues that effective LLM use requires intentional practice - including the exercise of completing tasks entirely through prompting, with no manual edits.

As someone who enjoys programming, this felt counterintuitive (and expensive). I’d always treated LLM output as a starting point, not the final draft - it just needed to be good enough for me to take over.

But once I approached this hands-off style as a practice kata of sorts, it made sense. Like learning any skill, the more repetitions you do, the faster you learn.

What follows are the patterns that emerged from that practice.

Tactics

Provide essential context by speccing before coding

Vibe Speccing is a semi-automated way to ensure that LLMs actually understand your tasks before touching code. Luke Bechtel’s post outlines the pitfalls of underspecified requests and offers concrete prompts to preempt them.

His observation is that the same need exists when delegating to humans:

Too little context and the LLM flails. Too much and it gets lost, or expensive… The way we solve this with humans is to write a concise set of specs (AKA requirements / PRDs)…

<aside> 💡

When handing an LLM any non-trivial task, the first step should be to have at least one model go through the speccing process with you.

</aside>

This often surfaces requirements or constraints I hadn’t considered. The earlier invalid assumptions are spotted, the cheaper they are to correct.
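
As a rough illustration, a speccing prompt might look something like the following. This is my own wording as a sketch of the idea, not a prompt taken from Luke’s post:

```
Before writing any code, act as my spec partner for this task.

1. Ask me clarifying questions, one at a time, until you are confident you
   understand the goal, the constraints, and the edge cases.
2. Then produce a short spec covering: goal and non-goals, inputs and outputs,
   acceptance criteria, and anything you are still unsure about.
3. Wait for my explicit sign-off on the spec before proposing an implementation.
```

The exact wording matters less than forcing the question-asking step before any code is written.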

Improve specs by having models critique each other

When developing specs, pass the plans from one model to another and ask them to: