How to Vibe Code Effectively

September 3, 2025

Since 2023, I’ve been using LLMs as coding assistants for a variety of projects. From my own A/B testing, I think I’ve developed a pretty good intuition for using these tools effectively. Many people online have come to similar conclusions, and my goal is to aggregate my intuition into one post.

I write this blog post based on LLM capabilities as of September 2025, when the SOTA coding LLM is arguably GPT-5.

Part 1: Do not trust the LLM

Chain yourself to the semantic graph

It’s easy for an LLM’s supervisor to approve LLM-written code without fully understanding what it’s doing. The next time the LLM contributes to that code, it will be even harder for the supervisor to understand the consequences of these further contributions. This is deadly. Do not let yourself fall into that cycle.

Remember that as the LLM’s supervisor, a big part of your job is to stop it from making stupid decisions. This is impossible if you don’t understand what the code is doing. For every code change, you must understand what every semantic chunk of code is doing.

It’s a matter of taste and opinion what comprises a semantic chunk of code, but you should at least understand what each function does. If the function is not trivial, you should be able to generate a list of bullet points in your head that describe what the function does.

Force yourself to ask the LLM to explain code if you have any doubts regarding your understanding. You’re probably not doing this sufficiently.

If you chain yourself to the semantic graph of your codebase, it is much harder to trap yourself in debugging loops.

Test, refactor, and commit

Get comfortable with version control. It’s important to commit often (branching as necessary) to keep the changes digestible. Before committing, test your code. Then re-read the codebase to find opportunities to refactor and simplify. If the new code expands a function, consider splitting it into smaller pieces.
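As a concrete illustration of the split-and-test step, here’s what breaking an expanded function into smaller, individually testable pieces might look like. The function names and logic are invented for illustration; they don’t come from any real codebase.

```python
# Hypothetical example: a checkout function an LLM has expanded, split
# into small pieces that can each be tested before committing.

def parse_price(raw: str) -> float:
    """Strip currency formatting and convert to a float."""
    return float(raw.replace("$", "").replace(",", ""))

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamping percent to [0, 100]."""
    percent = max(0.0, min(100.0, percent))
    return price * (1 - percent / 100)

def checkout_total(raw_prices: list[str], discount_percent: float) -> float:
    """Total a cart: parse each price, sum, then discount once."""
    subtotal = sum(parse_price(p) for p in raw_prices)
    return apply_discount(subtotal, discount_percent)

# Each piece is small enough to sanity-check before the commit.
assert parse_price("$1,200.50") == 1200.50
assert apply_discount(100.0, 10) == 90.0
assert checkout_total(["$10.00", "$5.00"], 50) == 7.5
```

Splitting this way also keeps each commit’s diff small: a later change to discount logic touches one short function, not a monolith.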

Remember, code is disposable now, so be open to discarding all the changes made, improving the prompt, then regenerating.

You want to keep the code simple and your commits small. Minimize the ratio of code complexity to functionality.

This is obvious good software engineering practice, but experienced developers in particular can often ignore it and still get good results when working without an LLM. With an LLM, ignoring it can seem harmless at first, but it is much more likely to waste hours of your time later.

Be more interactive

Don’t just ask the LLM to write code. If you’re making a big change, ask it to analyze the codebase thoroughly and suggest potential implementation strategies and their pros/cons. You want to have a conversation with the LLM and make sure it fully understands the implementation details before it writes code.

After it writes code, if you find anything questionable, interrogate the LLM and figure out why it implemented it in that manner. Even if you’re unsure, ask it, “Are you sure this doesn’t create a race condition?”
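For what it’s worth, here’s a minimal sketch of the kind of race condition that question is probing for: a shared counter incremented from multiple threads. Without the lock, the read-modify-write of `counter += 1` can interleave across threads and lose updates.

```python
# Illustrative only: a classic lost-update race. Remove the lock and
# the final count can come up short nondeterministically.
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # delete this line to reintroduce the race
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 400_000  # deterministic only because of the lock
```

The point isn’t this specific bug; it’s that a pointed question forces the LLM to re-examine an interleaving it may never have considered.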

It seems obvious, but you should probably be doing this more often. If you’re hyperfixated on shipping quickly, it’s easy to fall into the habit of only talking to the LLM when you want it to write code.

Recognize common failure patterns

LLMs are predictable and often write low-quality code in similar ways. Here are a few that still appear in modern LLMs:

  • Wrapping errors in broad try-except blocks that silently swallow failures
  • Writing placeholder code instead of the actual functionality
  • Rewriting documentation to aggressively emphasize the last change
  • Forming opinions and never questioning them later on
  • Simply assuming expected functionality instead of asking you for more context
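The first pattern is especially common. Here’s a sketch of what it looks like in practice; the config-loader functions are hypothetical, invented for illustration.

```python
# Sketch of the try-except failure pattern: a hypothetical config loader
# where a broad try-except silently swallows every failure.
import json

def load_config_llm_style(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        # A missing file, bad permissions, and corrupt JSON all look
        # identical to an intentionally empty config.
        return {}

def load_config(path: str) -> dict:
    # Better: handle only the errors you actually expect, and let the
    # rest propagate so you (and the LLM) can see what broke.
    with open(path) as f:
        return json.load(f)
```

The second version fails loudly, which is exactly what you want when you’re trying to stay chained to the semantic graph of your codebase.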

As you gain experience using LLMs in development, you’ll notice these patterns, and you’ll learn which specific technologies the models are less familiar with.

Part 2: General workflow improvements

Waiting for the LLM to write code

If you’re waiting for an LLM response and expect it to take a while, be very cognizant of what you do in that time. It’s generally a bad idea to scroll social media. If you don’t have anything work-related to do, maybe listen to a podcast or read a book.

Make sure you’re notified as soon as a response is ready. Recent versions of Cursor support this; check that desktop notifications are enabled. Otherwise it’s easy to waste time, and this is an easy thing to overlook.

The trap you want to avoid is forgetting about the code that was generated 10 minutes ago.

Context

If you ask an LLM to write code that interfaces with file A but you don’t give it file A, there’s a high probability it will simply assume what file A does based on context cues instead of asking you to give it file A.

Modern coding IDEs like Cursor do let the LLM run semantic searches and tools like grep, but LLMs very commonly underuse these tools.

I suggest you manually add any files that could be relevant to the context. This has a greater impact than you’d expect.

A useful trick is to clone a relevant repo inside the repo you’re working on, so the LLM can read it easily. You can also download Markdown files and add them to your working directory.

On debugging

If you write and debug code yourself, you’ll have a good intuition for when to give up on a piece of code or a debugging session. You’ll also fatigue more quickly, which naturally pushes you to give up sooner.

But if you use an LLM, it’s easy to waste time by repeatedly asking it to fix an issue without understanding what it’s stuck on.

Usually, it’s best to have the LLM explain why the code is failing and suggest potential solutions. Have a back-and-forth conversation. The LLM’s assumptions are often wrong, but use the LLM to challenge your own assumptions as well.

Give up early. If you’re letting the LLM fix code, give it only one or two chances. You can visualize the process of developing or debugging as graph search in the space of possible programs. When you waste time with LLM coding tools, it’s usually because your search algorithm resembles depth-first search more than breadth-first search. Giving up early forces you to investigate the problem in more detail.

When testing code, minimize the time it takes to test, so you can maximize the iteration rate.

Concluding thoughts

Many of these ideas seem obvious in hindsight, but it would’ve saved me quite a bit of time to have forced myself to strictly adhere to these best practices. I hope this can be of use to you for as long as AI requires human supervision.