We've developed a way to make AI coding actually work by systematically identifying and fixing the places where LLMs typically fail in full-stack development. Today we're relaunching as Lovable (previously gptengineer.app), since the product has changed so much.
The problem? AI that writes code typically makes small mistakes and then gets stuck. Anyone who has tried it knows the frustration. We fixed most of this by mapping out where LLMs fail in full-stack dev and engineering around those pitfalls with prompt chains.
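To make "prompt chains" concrete, here is a minimal sketch of the general technique (illustrative only, not our actual pipeline): each step is a separate, narrowly scoped LLM call, so small mistakes get caught before they compound. The callLLM helper and the prompts are placeholders.

    // Sketch of a prompt chain: plan -> edit -> review, each a separate LLM call.
    // `callLLM` stands in for any chat-completion client; prompts are illustrative.

    type LLM = (system: string, user: string) => Promise<string>;

    async function promptChain(callLLM: LLM, task: string, files: Record<string, string>) {
      // 1. Plan: decide which files to touch and what to change, before writing any code.
      const plan = await callLLM(
        "You are a senior engineer. Output a short, numbered plan of file edits.",
        `Task: ${task}\nFiles:\n${Object.keys(files).join("\n")}`
      );

      // 2. Edit: generate the actual changes, constrained by the plan.
      const edits = await callLLM(
        "Apply the plan. Output the full contents of every changed file.",
        `Plan:\n${plan}\nCurrent files:\n${JSON.stringify(files)}`
      );

      // 3. Review: a cheap verification pass that catches the small mistakes
      //    a single big prompt tends to let through.
      const review = await callLLM(
        "Check the edits for syntax errors, missing imports and broken references. Reply OK or list fixes.",
        edits
      );

      return { plan, edits, review };
    }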
Thanks to this, in every comparison I've found against v0, Replit, Bolt, etc., we actually come out ahead, often by a wide margin.
What we have been working on since my last post (https://news.ycombinator.com/item?id=41380814):
> Handling larger codebases. We actually found that using small LLMs works much better than traditional RAG for this (a rough sketch of the idea is after this list).
> Infra work to enable instant preview (dev environments spin up quickly thanks to microVMs and idle pools of machines)
> A native integration with Supabase. This enables users to build full-stack apps (complete with auth, db, storage, edge functions) without leaving our editor (also sketched below).
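On the "small LLMs instead of RAG" point: here is a minimal sketch of the general idea, assuming the small model is asked to shortlist relevant files from the repo's file tree rather than doing embedding retrieval. The callSmallLLM helper and the prompt are placeholders, not our actual pipeline.

    // Sketch: let a small, fast model pick relevant files from the file tree,
    // instead of chunking + embedding the codebase (traditional RAG).
    // `callSmallLLM` is a placeholder for any cheap chat-completion endpoint.

    type SmallLLM = (prompt: string) => Promise<string>;

    async function selectContext(
      callSmallLLM: SmallLLM,
      task: string,
      filePaths: string[],
      maxFiles = 10
    ): Promise<string[]> {
      const answer = await callSmallLLM(
        `Task: ${task}\n` +
        `Repository files:\n${filePaths.join("\n")}\n` +
        `Reply with the at most ${maxFiles} file paths most relevant to the task, one per line.`
      );

      // Keep only paths that really exist in the repo, in case the model invents one.
      const chosen = new Set(answer.split("\n").map((line) => line.trim()));
      return filePaths.filter((p) => chosen.has(p)).slice(0, maxFiles);
    }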
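And on the Supabase point: a rough sketch of the kind of client code a generated full-stack app ends up with. The calls are standard @supabase/supabase-js v2, but the project URL, anon key, "todos" table and "hello-world" edge function are placeholders, not necessarily what our editor generates.

    // Sketch of what a generated full-stack app wires up via Supabase.
    import { createClient } from "@supabase/supabase-js";

    const supabase = createClient("https://YOUR_PROJECT.supabase.co", "YOUR_ANON_KEY");

    async function demo() {
      // Auth: email/password sign-in handled entirely by Supabase.
      const { error: authError } = await supabase.auth.signInWithPassword({
        email: "user@example.com",
        password: "hunter2",
      });
      if (authError) throw authError;

      // Database: table access protected by row-level security.
      await supabase.from("todos").insert({ title: "Ship weekly improvements" });
      const { data: todos } = await supabase.from("todos").select("*");
      console.log(todos);

      // Edge functions: server-side logic without leaving the Supabase project.
      const { data: fnResult } = await supabase.functions.invoke("hello-world");
      console.log(fnResult);
    }

    demo();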
An interesting example project:
https://likeable.lovable.app – a clone of our product, built with our AI. It looks like a perfect copy and it works (click "edit with lovable" to get a recursive editor...)
Going forward, we're shipping improvements weekly, focusing on making it faster and even more reliable, and adding a visual editing experience similar to Figma.
If you want to try it, there's a completely free tier for now at lovable.dev
Would love your thoughts on where this could go and what you'd want to build with it. And what it means for the future of software engineering...