Will Smidlein's Blog

How I Quickly Build Complex Side Projects With LLMs (in Feb 2025)

I’m a programmer both professionally and, somewhat begrudgingly, as a hobby. I am constantly building dumb hyper-specific side projects for an audience of 1. I have a shell script that orders a coffee at my local coffee drive-through. I have a Chrome extension that scrapes and syncs my Amazon purchases to YNAB. I have a neural network constantly checking the IP cam in my garage on trash day to make sure I took the trash out. These are things I build- almost compulsively- upon encountering the tiniest annoyance or inconvenience. And it’s been that way for as long as I can remember.

LLMs have hypercharged this. At first I was able to do projects in 1 or 2 iterations with a chatbot, but as these things’ capabilities have grown, so has my appetite. I’m building dumber projects at grander scales than I’ve ever built before. I figured I should share my approach and see how it compares to others.

This workflow particularly shines for data-heavy automation projects that would normally be tedious to build and maintain. Some examples I’ve built using this approach:


Step 1: Lay out the requirements

This stage usually happens in a chatbot UI, and I’m generally reaching for Sonnet or 4o unless the project has complex data parsing/structures, at which point I’m usually going for o1.

First specify that the goal of the conversation is to create a comprehensive bullet point list of requirements “in the style of a Jira ticket”. Next specify that you want to have a conversation about the problem and the requirements prior to generating the list- ask it to be extremely detailed about its requirement gathering and verbose in its response. Instruct it to remain language-agnostic, reaching for pseudocode where necessary. Finally the fun part… just word dump the problem. Explain what you want to do in plain English. If you have example input data, provide it. If you have a HAR file from a reverse engineered API, (turn it into Markdown and) provide it. API docs? Provide them.
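To make that concrete, here’s the rough shape of the opening message I mean (the wording is illustrative, not a magic incantation):

```
The goal of this conversation is to produce a comprehensive bullet point list of
requirements, in the style of a Jira ticket, for a tool I want to build. Before you
generate that list, I want to have a conversation about the problem and the
requirements. Be extremely detailed in your requirement gathering and verbose in your
responses. Stay language-agnostic; use pseudocode where necessary.

Here's the problem: <plain-English word dump, example input data, Markdown-ified HAR
file, API docs, etc.>
```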

I have yet to find a UI that lets me tag a chunk of response (e.g. a specific bullet point) to come back to, or what I really want, which is a waterfall of the different conversations that split out of a long response. Typically I want to respond and ask more (or provide more detail) about 2-5 bullet points, but in a purely linear structure I’m constantly scrolling back up and trying to remember things I want to loop back and ask about.

At the end you’ll get a nice big text blob with requirements. Save this as a markdown doc - initial-reqs.md. Assume that this requirements list is missing about 30% of what you think you told it. You also cannot assume any logic whatsoever for what it decides to drop and at what part of the process it decides to drop it. But it’s okay! We have a solution…

Step 2: Do it again

Usually still in a chatbot UI, usually reaching for a “smarter” model from a different provider. So if I used Sonnet in the last step, this time I’m probably going for o1. If I used a GPT model, I’m reaching for Gemini 2.0. I have not seen any evidence that this actually matters, but anecdotally it feels right, so ¯\_(ツ)_/¯. I’m using llm more and more for this step.

First specify that the goal is to find any gaps in the requirements document you just outlined “so that a talented but new Junior engineer can complete it without interruption”. Specify that you’re writing Python (even if the end goal is not Python- these things seem to be trained on a lot of Python… and it’s very easy to iterate on). Specify that you require small, well-defined methods with verbose commentary. Ask an open-ended question like “am I missing anything?” and answer any questions you feel are relevant. At the end, dump that entire conversation to a markdown file - detailed-reqs.md.
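As with step 1, the exact phrasing doesn’t matter much; the step 2 opener I’m describing looks roughly like this:

```
Attached is a requirements document for a tool I want to build (initial-reqs.md). Your
goal is to find any gaps in it, so that a talented but new Junior engineer could
complete the project without interruption. The implementation will be in Python. I
require small, well-defined methods with verbose commentary. Am I missing anything?
```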

Step 3: Unit tests!

I’m generally in an “LLM-native” code editor like Cursor, VS Code with GH Copilot, or Windsurf for this step. I can’t say there is any distinct pattern of which model I’m reaching for- in fact I’m generally hot swapping between them just for funsies.

I’ve found I generally ask for the intended output language at this point. For me that’s either Python (I need to iterate on this quickly and run it once), Node.js (I need to run this on a regular basis and can quickly deal with it when something breaks), or Go (I need this to run exactly the same today as it will run 5 years from now). However on a few projects- specifically one parsing really complex CSV structures using Go and another transcoding G.711->Opus with libopus- I found the models choked a bit and got stuck in a loop writing code that would never execute. The trick is to ask it to write the unit tests and code in Python and then ask it to turn that code into Go/Swift/whatever. For the sake of a (marginally) interesting post, I’m going to do that.
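That “port it later” follow-up, once the Python version is passing its tests, is nothing fancier than a message along these lines (again, just a sketch):

```
This Python implementation and its unit tests are the source of truth. Rewrite the
implementation and the tests in Go, keeping the behavior and test coverage identical.
```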

Specify that you want a comprehensive set of unit tests that check for conformity against the specifications attached. Attach both initial-reqs.md and detailed-reqs.md. Ask it to include excerpts of the requirements alongside the relevant tests.
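As a sketch of what I’m after, using the garage/trash-day checker from the intro as a stand-in project (the module, function names, and requirement excerpts below are all hypothetical):

```python
# Hypothetical example: tests that quote the requirement they enforce.
import datetime

from trash_checker import is_collection_day, build_reminder  # hypothetical module


def test_only_checks_on_collection_day():
    """Requirement excerpt (initial-reqs.md): 'The camera check MUST only run on the
    configured collection day.'"""
    assert is_collection_day(datetime.date(2025, 2, 4), configured_weekday="Tuesday")      # a Tuesday
    assert not is_collection_day(datetime.date(2025, 2, 5), configured_weekday="Tuesday")  # a Wednesday


def test_reminder_includes_snapshot_path():
    """Requirement excerpt (detailed-reqs.md): 'Reminders MUST reference the snapshot
    that triggered them.'"""
    reminder = build_reminder(snapshot_path="snapshots/2025-02-04-0600.jpg")
    assert "snapshots/2025-02-04-0600.jpg" in reminder.body
```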

This is the part of the process that requires the most brainpower. You need to go through all of the generated tests and ensure they

  1. Make sense
  2. Are comprehensive enough
  3. Cover every iota of intended functionality

This is made even easier by the ability to chat with the models about specific individual tests. I am finding that I generally 3-5x each test size from the initial generation. This may be out of habit but it’s likely because it’s so goddamn easy.

Make sure you’re breaking tests into individual files; I have completely unscientifically settled on <1k lines/file. This is really where an LLM-native editor starts to show its perks vs copying and pasting from a chatbot UI.
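Purely for illustration, the layout for one of these projects ends up looking something like this (file names and line counts invented):

```
tests/
  test_input_parsing.py      # ~400 lines: fixtures, malformed input, edge cases
  test_core_logic.py         # ~800 lines: the bulk of the requirements
  test_output_and_sync.py    # ~600 lines: payload shape, retries, idempotency
```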

It is absolutely crucial that you’re reviewing the diffs on a line-by-line basis rather than just blindly accepting the “full file” results. The LLMs will drop comments to an almost comical degree- even for lines they haven’t touched the logic on. This is probably the biggest flaw of these models (or rather, the “code apply” models the IDEs are using) at the moment.

By the end of this step you want a comprehensive test suite. It’s on you, the human, to make sure this test suite actually encompasses everything you want your final program to do. You won’t (or at least I don’t) but you should try really really really hard to. Actually read every single one and question if it makes sense. This is literally the first time in the process that this is happening, so it’s critical that you take your time here.

Step 4: Write the code

If you did your job right in the previous steps, the model should be able to generate code that gets the expected results in 1-3 iterations. If you did not do a good job, it has probably quickly become clear to you what gaps exist in your unit tests. Do not fix the code. Fix the unit tests. It is extremely tempting to be like “oh duh I just forgot to tell it Fahrenheit instead of Celsius, if I correct it I’ll fix this one remaining bug with the program and be done.” It’s tempting because it’s probably true! For simple enough corrections, the LLM will get it right first iteration. But for more complex implementations, you may need to cycle back and forth for many messages in order to get it to do what you want in the way you want.

These conversations sometimes go totally off the rails into a loop where it’s just suggesting the same 3 solutions that will never work over and over and over again. Those conversations are where the unit tests really shine, because you can close them and start fresh ones with zero fear of losing context. You’re only losing conversation-time context, which in this case you actually want to lose. I have yet to encounter a problem I could not solve in this way- although it does sometimes require several attempts to guide the model through the exact implementation path I want. I always always “save the state” of the requirements back into the unit tests.
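To make the Fahrenheit example concrete: instead of telling the model to patch the conversion in the generated code, I’d pin the requirement in a test and let it fix the code to match (module and function names hypothetical):

```python
# Hypothetical example: encoding the "Fahrenheit, not Celsius" correction as tests
# rather than hand-editing the generated code.
from weather_report import format_temperature  # hypothetical module


def test_temperatures_are_reported_in_fahrenheit():
    """Requirement excerpt: 'All temperatures MUST be displayed in Fahrenheit.'"""
    # 20 °C is 68 °F; if the code still thinks in Celsius, this fails immediately.
    assert format_temperature(celsius=20.0) == "68.0°F"


def test_freezing_point():
    assert format_temperature(celsius=0.0) == "32.0°F"
```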

Even if you get the thing to run in a way that seems like it’s working, it’s very possible some (or even many) unit tests will fail. Again, that’s the beauty of unit tests. Now you’ve just gotta run through them and figure out if it’s the code that’s wrong, the unit test that’s wrong, or both- and then fix. I find the cycle time on this to be fast enough (and fun enough) that I haven’t tried anything “agentic” to automate this process beyond a little bit of experimentation with Block’s Goose.

I cannot hammer this home enough… the code is the byproduct of the unit tests. In 6 months or 2 years or 10 years, when the languages change or the dependencies change or your tastes change, you can simply ask the most cutting-edge model of the time to write you new code against those unit tests, and then you can objectively evaluate, in a single shot, whether it did it. Sure sounds like a practical implementation of those “reward functions” I hear the thinkfluencers talking about.

Step 5: Write a Readme

I’m not actually sure if this helps the LLMs at all, but it definitely helps me when I’ve gotta go back and figure out what the hell I built and why I built it that way.

I will usually feed in both initial-reqs.md and detailed-reqs.md and ask it something along the lines of “Generate a comprehensive Readme with sections outlining goals, data structures, dependencies, how to run locally, how to run unit tests”, which pretty much any model can do in one shot. Sometimes I’m feeding the unit tests and the code in (which definitely improves the quality of the Readme), but sometimes I’m limited by context size. On my next project I want to try this stage with Gemini 2.0 Pro’s insane 2M context window.


Please email me with feedback and ideas! I am excited to see how this process evolves (and likely continues to get simplified by better tools and better models) over time.