In no particular order:
1. Maintain AI Documents
Depending on your agent of choice this might be .github/copilot-instructions.md or CLAUDE.md. I wrote about what you might include in such a document in my post I'm not a 'Vibe Coder', but when I am, this is my set up. This document is essential for successful pair programming with an AI: it will save you significant corrective work and ensure the value of your expertise is baked into every iteration of every solution you build.
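As a rough illustration, a hypothetical excerpt from such a document might look like the following (the headings, file names and rules here are invented examples, not a prescribed format):

```markdown
# Project conventions for the AI

- Use TypeScript strict mode; no `any` without a justifying comment.
- Every new module gets a matching unit test file.
- Prefer the existing `src/api/client.ts` wrapper over raw fetch calls.
- Ask before adding a new dependency.
```

The point is to capture the opinions you would otherwise repeat in every prompt, so each new session starts from your standards rather than the model's defaults.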

2. Read and understand EVERY Line
This is non-negotiable. You have to read and understand every line that the AI writes. There will be times when the AI is doing well for long stretches at a time, and it's tempting to take your hands off the wheel. But a time WILL arrive when things are not going so well, and by then you've lost touch with the code base and don't fully understand how it works.
Better still, provide feedback to the AI after every iteration and actively refactor code as the AI works. This keeps you in touch with the code, catches problems early when things go awry, and gives you the opportunity to see what needs to be added to your AI Documents as you go.
A great way of doing this is to keep your favourite Git / version control client open as you code, and to stage changes with each iteration, making it easy to see what the last task added.
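A minimal sketch of that rhythm with the plain git CLI (the same idea applies in any GUI client; the file name here is hypothetical):

```shell
# Throwaway demo repo showing the stage-per-iteration loop.
tmp=$(mktemp -d) && cd "$tmp" && git init -q .

echo "feature v1" > app.txt
git add -A            # reviewed iteration 1: stage it, unstaged diff is now clean

echo "feature v2" >> app.txt
git diff --stat       # unstaged diff shows ONLY iteration 2's changes
git add -A            # reviewed and accepted: stage iteration 2, repeat
```

Because everything already reviewed is staged, `git diff` always answers the question "what did the AI just do?" in isolation.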
Another useful practice is to ask the AI to explain its reasoning when a choice isn't clear, and to ask it to maintain a document describing its solution. As the solution starts to get big, or when you have to step away from the project for a while and come back later, such a document can be useful for re-orientating yourself with the code base.
3. Be wary of developing in languages and frameworks you are not familiar with
Things tend to go well when you are able to step in and provide a clear opinion on how a project should be architected. When you're not familiar with the language or framework the AI is building with, it can look like progress is being made, but the result is a fragile solution that you cannot be fully accountable for. As AIs do more, accountability is going to be a greater part of the value that the human-in-the-loop provides.
4. Set up Tests Early and Include Test Tooling
Start with unit and e2e / integration testing early. Configuring the Playwright MCP server (for example) will allow the AI to be more independent and hand off to you for testing less frequently. Moreover, for web apps this gives the AI access to console logging, which is often useful in allowing it to gather its own feedback.
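For example, many MCP-capable agents register servers via a JSON config along these lines (the exact file location and top-level key vary by tool, so check your agent's documentation):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Once registered, the AI can drive a browser itself: load the app, click through a flow, and read the console output, rather than pausing to ask you to try it.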
Use your AI Documents to reinforce this: tell the AI where to find server logs, and tell it whether and how it can inspect your development (emphasis on development!) database.
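Concretely, that might be a short section in your AI Documents like the following (the paths and commands are hypothetical; substitute your project's own):

```markdown
## Feedback sources

- Server logs: tail `./logs/dev.log` after each change.
- Database: you MAY run read-only queries against the local dev
  database via `npm run db:console` — never against staging or production.
- Run `npm test` before handing a task back to me.
```

Spelling out the boundaries (read-only, dev only) matters as much as the locations: it lets the AI gather its own feedback without you worrying about what it might touch.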
Summary
That's it for now. I'll keep this post up to date with more rules as I discover them!