AI Builds $10,000 App for $2 in 2 Hours
AI Fully Coded My Shader Art Web App
Yesterday, I built a text simulation game, Vampire Security Checkpoint, 100% coded by AI.
This morning, AI made me ANOTHER full web app called LLM Shader Art, which transforms natural language into 2D GLSL shaders.
You describe what you want to visualize, then AI generates GLSL code for the shader, and finally the shader renders the visualization in browser.
I use a "fragment shader," which determines what color a pixel should be, given an (x, y) coordinate.
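As a rough mental model (this is my own JavaScript sketch, not code from the app), you can think of a fragment shader as a pure function that runs once per pixel: coordinates in, color out. Here is that idea drawing a smooth white ring, mirroring the first sample instruction in the spec:

```javascript
// Clamped cubic interpolation, same shape as GLSL's built-in smoothstep().
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// "Fragment shader": given a pixel coordinate and the canvas size,
// return an RGB color with components in [0, 1].
function ringShader(x, y, width, height) {
  // Normalize to [-1, 1] with the origin at the canvas center.
  const u = (x / width) * 2 - 1;
  const v = (y / height) * 2 - 1;
  const dist = Math.sqrt(u * u + v * v);
  // Bright only in a thin band around radius 0.5: a smooth ring.
  const band = smoothstep(0.4, 0.5, dist) - smoothstep(0.5, 0.6, dist);
  return [band, band, band]; // white ring on black
}
```

On the GPU this function runs for every pixel in parallel; in GLSL the same logic is a few lines using `smoothstep` and `length`.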
According to MVPCalculator:
This web app would cost around $10k and 2-3 weeks minimum to build!
Can I do it 10x faster and cheaper with AI?
YES!
Time and Cost Estimate from MVPCalculator.co
Here's my YouTube walkthrough.
I highly recommend watching it to learn how AI can help you build a full-fledged web app JUST BY TALKING!
Initialize Project
Similar to my project yesterday, I use Anthropic Claude to help initialize a new Next.js project.
I ask Claude what I should do, and I follow Claude's directions step-by-step.
Everything works smoothly.
Then, I open the new project in Cursor, an AI-powered development environment that streamlines coding with AI.
Moving forward in this post, I work entirely within Cursor.
Prompt
Here's my spec for the app.
I feed this spec as a prompt into Cursor AI Chat.
Your goal is to build the following web app:
A natural language driven, interactive way to create 2D GLSL shaders directly in browser by interacting with LLM. A user writes simple instructions on how to update the shader and the AI automatically creates the corresponding vertex and fragment shaders.
The UI consists of 3 sections in a 3-column layout:
left | middle | right
- left screen is a chat session with an LLM
- middle screen is the shader GLSL code
- right screen is the actual visualization
For example: User can write the following sequence of instructions:
- make a smooth colorful white ring
- have the ring expand outward into an infinite animation and pulsate
- split the screen into 4 quadrants and duplicate the ring in each one of them
- make the rings expand into infinity
- change background to black and make the rings glow random colors
- make the ring orbit around while expanding
After these instructions, the shader code section is populated with the working shader code, and the visualization section is populated with the visualization. This should be all done in browser using WebGL. I suggest starting by creating the UI layout and the WebGL canvas to display a blank shader. The code should take user input and interactively update the shader code with suggestions and redraw the right screen. In addition, the shaders should support animation by introducing a variable. For simplicity, we can just focus on the fragment shader and leave the vertex shader as is.
The fragment shader should accept the following attributes:
- uniform vec2 u_resolution -- canvas resolution
- uniform float u_tick -- animation tick
To summarize the key points:
- A 3-column layout web application built with Next.js and raw WebGL.
- Command-driven interface using Claude 3.5 Sonnet as the LLM for shader generation.
- Real-time GLSL shader code display with syntax highlighting and editable functionality.
- WebGL visualization that updates based on user commands and debounced manual code edits.
- Focus on fragment shaders with default uniforms for resolution and animation tick.
- Error handling with up to 3 retry attempts by the LLM.
- Targeting major browsers (Chrome/Firefox).
- No mobile responsiveness, version control, saving/loading, or user instructions for the MVP.
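The "up to 3 retry attempts" requirement is worth sketching. Below is a minimal, hypothetical version in JavaScript: `askLLM` and `compileShader` are stand-in names, not the app's actual functions. In the real app they would wrap the Claude API call and `gl.compileShader` plus `gl.getShaderInfoLog`, respectively.

```javascript
// Generate a shader, feeding compiler errors back to the LLM and
// retrying up to maxAttempts times before giving up.
async function generateShaderWithRetries(askLLM, compileShader, prompt, maxAttempts = 3) {
  let lastError = null;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    // On retries, include the compiler error so the model can fix its own output.
    const fullPrompt = lastError
      ? `${prompt}\n\nThe previous shader failed to compile:\n${lastError}\nPlease fix it.`
      : prompt;
    const source = await askLLM(fullPrompt);
    const error = compileShader(source); // returns null on success, an info log on failure
    if (error === null) return { source, attempt };
    lastError = error;
  }
  throw new Error(`Shader still failing after ${maxAttempts} attempts: ${lastError}`);
}
```

The key design choice is the feedback loop: each retry prompt carries the previous compiler error, so the model is debugging its own output rather than guessing blind.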
Then, I follow Cursor's instructions to create the necessary files and add code.
Bugfixing
No surprise, there were multiple bugs and missing features.
Never expect coding to be perfect on the first pass, whether by AI or humans.
After Cursor created the first-draft "plumbing and scaffolding" of the project, I spent the majority of my time working through each issue.
Here are some examples:
- unreadable text, because the font was white inside a white-colored input
- the OpenAI API call was made client-side when it should have been a server-side call
- GLSL compilation errors, because the LLM embedded the GLSL code inside free-form text
- the prompt used to generate shader code did not include the previously generated shader code
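The GLSL compilation bug is a classic when piping LLM output into a compiler: the model wraps the shader in prose and markdown fences. A minimal fix (my sketch, assuming the reply uses standard markdown code fences; not the code Cursor actually generated) is to extract the first fenced block before compiling:

```javascript
// Pull the first fenced code block out of an LLM reply, falling back
// to the whole reply if no fence is found.
function extractGLSL(llmReply) {
  const FENCE = "`".repeat(3); // the three-backtick markdown fence
  // Match an optional "glsl" language tag, then capture everything
  // (lazily) up to the closing fence.
  const pattern = new RegExp(FENCE + "(?:glsl)?[ \\t]*\\r?\\n([\\s\\S]*?)" + FENCE);
  const match = llmReply.match(pattern);
  return (match ? match[1] : llmReply).trim();
}
```

A more robust variant would also instruct the model to reply with code only, but defensive parsing is still worth keeping, since models ignore such instructions often enough.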
But, in about an hour…
I resolved all issues and still did NOT touch a single line of code!
MVP
Here's the result of running the sample instructions from my prompt:
- make a smooth colorful white ring
- have the ring expand outward into an infinite animation and pulsate
- split screen into 4 quadrants and duplicate the ring in each one of them
- change background to black and make the rings glow random colors
Super cool to see the visual animations in the YouTube video (timestamp)!
Claude vs Cursor AI
I'm on the free plan of Cursor AI, so I'm limited to 50 slow premium requests.
Likely, I burned through my free credits yesterday, which means I'm automatically downgraded and NOT using Claude Sonnet.
This could explain why today's version of the app seems lower quality than the one I generated a few days ago using Claude Artifacts (Sonnet 3.5).
For example, the version today, built with Cursor AI, did not use CodeMirror to display the shader code. It also missed key specs, such as iteratively updating the shader code based on user input.
It'd be nice to see what API calls Cursor makes. I don't love the lack of transparency.
Last Thoughts
Again, AI-driven coding did not disappoint!
Given the same spec, estimated to cost thousands of dollars and take weeks…
AI created the MVP
… in under 2 hours
… for less than a cup of coffee
This is far greater than a 10x improvement.
And, it feels like just the START of AI-driven coding.
I canāt imagine how powerful it will become over the next decade.
To close things out:
Here are some of my less organized, wandering thoughts and observations, now that I've used AI to build 4 fully functional apps this week:
- feels like pair programming, except AI is the driver and I'm the navigator, explaining what I want, looking out for potential issues, etc.
- I shouldn't write specs late at night! Looking back, I definitely could've been clearer in describing user flows, expected behaviors, & test cases
- Running the spec piecemeal through AI would probably be better than asking AI to generate the entire codebase one-shot (maybe, agents?!?)
- Aider is another AI coding assistant I plan to check out, but I do like how Cursor (and Copilot) are embedded in your IDE
- I'm still actively engaged reading Cursor's output and trying to make sure I have a high-level understanding of what it's trying to do. I don't feel like I'm "blindly copy-pasting" even if it appears that way.
Did I miss anything?
Have ideas or suggestions?
Message me on LinkedIn!
Sabrina Ramonov
P.S. If you're enjoying my free newsletter, it'd mean the world to me if you share it with others. My newsletter just launched, and every single referral helps. Thank you!
Share by copying and pasting the link: https://www.sabrina.dev