
LLMs and agents seem to serve experts far more than they benefit everyone else. A common comparison is that an LLM is less a professional chef than an incredibly fast sous-chef who has read every recipe ever written. In the hands of an experienced head chef, that is a potent combination. The chef knows the cuisine, knows the balance of flavors, knows when a recipe is missing acid or carrying too much salt, and can tell when a suggestion would clash with the rest of the menu.

The sous-chef speeds up prep work, suggests modifications, and helps explore ideas, but the chef is still responsible for taste, consistency, and quality. The same sous-chef is far more dangerous for someone with no cooking experience. The recipes look perfect, the directions sound authoritative, the substitutions seem natural. But without a grasp of the fundamentals, heat control, seasoning, timing, even how to follow the instructions correctly, the result can be inedible. When something tastes off, it isn't clear why. The problem isn't the recipe generator; it's the lack of judgment to evaluate its output. This maps closely to how LLMs are used in software development.

Experienced engineers treat LLM-generated code like a recipe draft. They can see right away what's missing: error handling, clear contracts, separation of concerns, long-term maintainability. They adjust the proportions, swap ingredients, and sometimes throw out the dish entirely. Newer users tend to assume the code must be "good" if it compiles and executes, just as you might assume a dish is right because the recipe was followed from start to finish.

Hyrum's Law shines through in this analogy too. In cooking, any undocumented quirk, say a pan that runs hotter than expected or an oven that browns unevenly, eventually becomes something a cook relies on. Change the pan, and the dish breaks. In software, the quirks of LLM-generated code become accidental dependencies just as easily. Experts compensate for them deliberately; novices bake them into every future meal.

Agents and agent-based systems are the kitchen's automated stations. In a well-run restaurant, automation speeds up service without sacrificing the quality of dishes, because the menu is set, the processes are established, and the chefs stay in control. In an inexperienced kitchen, the very same automation produces inconsistent, uneven dishes at scale. Errors are not isolated; they are multiplied. The technical-debt analogy fits just as well.
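To make the accidental-dependency idea behind Hyrum's Law concrete, here is a minimal Python sketch. All names are hypothetical; the point is that callers come to depend on observable behavior, not on the documented contract.

```python
def active_user_ids(users):
    """Return the ids of active users. Ordering is NOT part of the contract."""
    # Implementation detail: iterating a dict preserves insertion order
    # (guaranteed in Python 3.7+), so callers can observe a stable ordering
    # even though the docstring never promises one.
    return [uid for uid, active in users.items() if active]

users = {"u3": True, "u1": False, "u2": True}

# A caller that quietly depends on the incidental ordering:
first_active = active_user_ids(users)[0]
print(first_active)  # "u3", purely as an artifact of insertion order

# Swap the internal storage for a set ("change the pan") and "first"
# is no longer meaningful: this caller breaks even though the documented
# contract was never violated.
```

This is the pan that runs hot: the quirk works until the implementation changes, and whoever built on it, deliberately or not, pays the cost.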

Fast food is cheap to produce but costly to sustain over the long haul. LLMs make "fast food software" extremely easy to create: fast, filling, instantly satisfying. Experienced teams take the time to turn it into a balanced, sustainable meal. Others end up with indigestion: systems that are hard to refactor, hard to reason about, and growing more fragile. I keep coming back to this restaurant analogy because it captures a simple truth repeated across developer circles: LLMs don't eliminate the need for taste, judgment, and responsibility. They amplify it. Just as good tools don't make a great cook, LLMs don't make a great engineer. They just make the difference visible sooner, and at scale.