

ENGINEERING
5 MINS
Validating vibe code:
Notes from a developer
Himanshu Patil, Full Stack Developer at Canvs, talks about how vibe coding works best when you already have a handle on your craft. It helps you get past the starting bottleneck, explore directions quickly, and move faster through parts that don’t need deep thought. But knowing your code, how it works, where it lives, and how to debug it, still matters just as much.
Vibe coding has caught on in a big way
92% of U.S.-based developers are already using AI coding tools, both at work and outside of it. That’s largely because writing code with these tools has gotten really good. In February this year, Andrej Karpathy coined a term for this new way of building: vibe coding. You just describe what you want, and the model writes the code.
Since then, the phrase has caught on. “Vibe coding” searches are up 6700% in the last 3 months. Fully working tools and apps, built through prompting alone, are now everywhere, especially in side projects and quick prototypes.
At Canvs, we’ve always taken a deep interest in both design and engineering. As AI starts to shape how software is built, we wanted to understand its role more closely.
We spoke to Himanshu, a full-stack developer at Canvs, to understand how he uses vibe coding and get his perspective on it. He walked us through the tools he uses, the parts where AI speeds things up, and the areas where it still falls short.
Searches for the phrase 'vibe coding' are up 6700% globally since mid-March 2025.
How much do you rely on vibe coding when you’re starting something new?
It really depends on the size of the project or feature. If I’m working on a small side project or building something that’s fairly self-contained, I usually just start directly with vibe coding. It’s quick, and I can get to a working version fast without spending too much time planning every detail.
But when it’s a bigger feature or part of a more complex system, I don’t go all in at once. I break the project down into smaller chunks and then use AI to help me build those chunks individually. Once I have those pieces, I integrate everything manually.
For me, AI works best as a way to get past the initial inertia. It helps me get a working scaffold in place. But as the logic gets more nuanced, I take over. I usually ask AI to help me build rough versions of each function, then go one by one and refine or fix them where needed.
For example, in a recent project, we had state management logic tied to video segments. AI couldn’t handle that kind of layered complexity, so I used it only to get the initial draft, and then rewrote the specific interactions myself.



"It's less about using a single tool all the time and more about choosing the right one."
"It's less about using a single tool all the time and more about choosing the right one."
What tools do you use in your daily workflow?
I mainly use GitHub Copilot inside VS Code. It helps a lot with snippets and autocompletions as I write. It’s especially handy when I already know what I’m trying to build and just need to move fast through repetitive logic or syntax.
If it’s a side project, something small, or a new functionality, I prefer using ChatGPT or Bolt.new. These tools help me think through how to structure the feature or explore options if I’m unsure of how to go about it.
It’s less about using a single tool all the time and more about choosing the right one depending on the problem.
Can you talk about a recent feature or project where you used vibe coding? How did it help?
I have a few I've dabbled in that I'd like to share here.
This was a fun weekend experiment. I wanted to display time as a binary pattern, just lights representing bits. I used AI to help build the logic that converts the current time into binary and then updates the DOM every second. It’s a simple idea, but AI helped speed up the starting logic.
The whole thing came together in under 15 minutes with fewer than 3 prompts. Building it manually would’ve taken a full day, and even then the UI wouldn’t have been as effective.
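The core of a clock like this fits in a few lines. Here is a minimal sketch of the time-to-binary logic described above; the function names and bit widths are assumptions for illustration, not the actual project code:

```javascript
// Hypothetical sketch of the binary-clock logic; names and bit widths
// are illustrative, not the project's actual code.

// Convert a number to a fixed-width binary string.
function toBinary(value, bits) {
  return value.toString(2).padStart(bits, "0");
}

// Represent the current time as binary strings: hours fit in 5 bits (0-23),
// minutes and seconds in 6 bits (0-59).
function renderBinaryTime(now = new Date()) {
  return {
    hours: toBinary(now.getHours(), 5),
    minutes: toBinary(now.getMinutes(), 6),
    seconds: toBinary(now.getSeconds(), 6),
  };
}

// In the browser, a timer would re-render the "lights" every second, e.g.:
// setInterval(() => updateLights(renderBinaryTime()), 1000);
```

Each `1` bit maps to a lit element and each `0` to an unlit one, so the DOM update is just toggling a class per bit.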



This one is more functional. At Cassini, we often need to convert CSS styles from Figma into Tailwind classes for our code. I built this tool with AI to convert raw CSS into the closest-matching Tailwind classes. It saves a lot of time, especially for repetitive conversions.
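The heart of a converter like this is a nearest-match lookup against Tailwind's spacing scale. A minimal sketch, assuming Tailwind's default 4px scale; the table and function names are illustrative, not the actual tool's code:

```javascript
// Hypothetical sketch of closest-match conversion; the table covers only a
// slice of Tailwind's default spacing scale for illustration.
const SPACING_PX = { 0: "0", 4: "1", 8: "2", 12: "3", 16: "4", 20: "5", 24: "6", 32: "8" };

// Find the Tailwind class whose pixel value is nearest to the raw CSS value.
function closestSpacingClass(prefix, px) {
  const sizes = Object.keys(SPACING_PX).map(Number);
  const nearest = sizes.reduce((best, candidate) =>
    Math.abs(candidate - px) < Math.abs(best - px) ? candidate : best
  );
  return `${prefix}-${SPACING_PX[nearest]}`;
}

// closestSpacingClass("p", 15) → "p-4" (16px is the nearest step)
```

A real converter would also parse declarations like `padding: 15px` into a property prefix (`p`) plus value before the lookup, and handle colors, typography, and shorthand properties.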



Video Progress Bar for Cassini:
For one of our internal tools at Cassini, we needed a custom video progress bar that synced precisely with annotated segments. AI helped me structure the state logic. AI also suggested using the canvas tag instead of divs to handle complex states of the progress bar. The harder part was integrating this into our app’s architecture, and that’s where I had to step in and write manually.
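The canvas approach boils down to mapping each annotated segment (in seconds) to a pixel range on the bar and filling one rectangle per segment. A rough sketch of that mapping, with illustrative names rather than the internal tool's code:

```javascript
// Hypothetical sketch: map an annotated segment (in seconds) to a pixel
// range on the progress bar.
function segmentToPixels(segment, duration, barWidth) {
  return {
    x: (segment.start / duration) * barWidth,
    width: ((segment.end - segment.start) / duration) * barWidth,
  };
}

// On a <canvas> 2D context, drawing is then one fillRect per segment:
// segments.forEach((s) => {
//   const { x, width } = segmentToPixels(s, video.duration, canvas.width);
//   ctx.fillRect(x, 0, width, canvas.height);
// });
```

Redrawing a single canvas also avoids the layout and reflow cost of keeping one div per segment in sync, which is presumably why the canvas suggestion helped here.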






Full length capture tool for Cassini's Chrome Extension:
We wanted our Cassini Chrome extension to let users capture a screenshot of the entire webpage, not just the visible area, since browsers limit captures to the visible viewport.
Using AI, we built a solution where JavaScript scrolls the page programmatically, captures each viewport, and then stitches them together into one continuous image. The AI helped with writing the base logic for scrolling, capturing, and stitching.
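The scroll-and-stitch step starts with computing the list of scroll offsets that covers the page; each offset then drives one visible-area capture. A minimal sketch under that assumption (the helper name is hypothetical, not the extension's actual code):

```javascript
// Hypothetical sketch: compute the vertical scroll offsets needed so that
// viewport-sized captures cover the whole page, clamping the last offset
// so the final capture aligns with the page bottom instead of overshooting.
function scrollOffsets(pageHeight, viewportHeight) {
  const offsets = [];
  for (let y = 0; y < pageHeight; y += viewportHeight) {
    offsets.push(Math.max(0, Math.min(y, pageHeight - viewportHeight)));
  }
  return offsets;
}

// In the extension, each offset drives window.scrollTo(0, offset) followed by
// a visible-tab capture (chrome.tabs.captureVisibleTab); the resulting images
// are drawn top-to-bottom onto one canvas and exported as a single image.
```

The clamp on the last offset matters: without it the final capture would overlap past the page bottom, and the stitched image would repeat the last viewport's content.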


"Chaining fixes with AI often results in a loop of errors. Instead, once I have an initial draft, I prefer to manually handle edge cases and logic."
"Chaining fixes with AI often results in a loop of errors. Instead, once I have an initial draft, I prefer to manually handle edge cases and logic."
Vibe coding usually gets you 80% of the way there fast, but starts to fall apart as things get more complex. Have you felt that in your own projects?
Yes, multiple times. Especially when I’m trying to implement complete functionality, it gives me an answer, but if there's a bug and I ask it to fix it, it usually creates a new one.
Chaining fixes with AI often results in a loop of errors. Instead, once I get an initial draft, I prefer to manually handle edge cases and complex logic.
And when I hit that 80% mark, it’s faster to just go in myself. I break the problem down, isolate functions, and give them to AI one at a time if needed. That’s far more efficient.



Are there types of work where vibe coding doesn’t help at all?
CSS, especially when styles are deeply tied to the parent or surrounding elements. AI doesn’t know the broader context, like what styles have already been applied to parent elements, or how different components interact across files.
If I want to make a child view scrollable, for example, it often doesn’t work the way I expect because the AI doesn’t know how the parent is styled. And that context really matters in CSS.
For logic-driven code, AI is super helpful. But when it comes to precise interface styling, I usually have to take over.
"Vibe coding doesn't work so well for CSS, especially when styles are deeply tied to the parent or surrounding elements."
"Vibe coding doesn't work so well for CSS, especially when styles are deeply tied to the parent or surrounding elements."
In your opinion, does good vibe coding show in the output, or is it only visible to the maker?
Sometimes, yes. I can tell if a piece of code was written by AI. Especially because of the comments. AI tends to add these clean, slightly over-explained comments that feel a bit too polished. If the developer hasn’t edited them, it’s usually obvious.
It’s not always a giveaway, but it leaves a bit of a signature.



Do you think vibe coding only works if you already know what you’re doing? Or can anyone use it and figure it out later?
Anyone can use it to generate a basic page or feature, and it’s great for getting things off the ground. But the moment something breaks and you don’t know why, that’s when things start to fall apart. Debugging still needs a real understanding of how code works.
For a non-coder, it’s hard to debug because you don’t even know what to look for. Plus, sometimes you won’t even know if and when AI starts hallucinating.
If someone’s only worked using vibe coding for a few months, what habits or blind spots should they watch out for?
One thing I’d definitely call out is not reading documentation. That’s a habit I’ve seen slip in myself, too. Before using AI tools, I always made it a point to go through the docs or look up GitHub issues when I ran into something new or got stuck. Now, it’s easy to just ask the AI and move on, but that means you miss out on a deeper understanding.
Another blind spot is losing touch with your own codebase and your understanding of the project. When you’re manually writing code, you remember where things are, what file does what, and what lives in which folder. But with AI, especially when you let it write large chunks, that familiarity fades. You start relying on search and guesswork more than actual recall, which adds friction over time.
"Reading documentation is important, as is being in touch with your own codebase and having an understanding of the project."
"Reading documentation is important, as is being in touch with your own codebase and having an understanding of the project."



How do you judge the quality of code that comes from vibe coding tools? (Is it enough that it works quickly, or does it only count as good if you don’t have to fix it a lot later?)
For small weekend projects or proof-of-concepts, speed is fine. Even if the code isn’t great, it doesn’t matter.
But for large-scale products that will scale or be maintained by multiple developers, quality takes precedence.
With Canvs client projects, we know the code will stay for years. Multiple developers will use it. It has to be clean, readable, and maintainable. Speed doesn’t matter if the output will break in a few months.
Maintainability is part of a developer’s job. AI can help, but it can’t replace that responsibility.
Canvs is an interface design and engineering studio based in Mumbai, India. We are group design partners to some of India’s market leaders in Banking and Finance and have been around since 2016.