Eibira development kick-started in February this year.
It’s been three months of building the app with an AI-first, AI-only approach. In the pre-AI world, I had never imagined I would get back to development; the last line of code I wrote was almost two decades ago.
So I had to restart my development journey by learning what GitHub is…
Fast forward to today, I still haven’t written a single line of code, but product development is on track: it works just as I had imagined, it looks just as it was designed, and at a quality I would only have prayed for as a bootstrapped start-up.
It feels surreal how trivial development suddenly seems. And not just from a coding perspective – the same holds across all three domains: development, design, and product management.
AI is a thoroughbred workhorse; the only skill needed from a human is to be a good equestrian. This shift should make one feel empowered instead of bitter, showing how human guidance remains vital.
Here is my experience working as a developer, designer, and product manager: going from implementing 6 user journeys in 2 months to shipping 10 user journeys this month.
What do I mean by an AI-first approach?
I think this is an important distinction to make before I talk about my experience.
AI-first is not about writing a prompt and then expecting an outcome to magically meet the exact needs.
The old-school process is still valid even with AI (and so is burning the midnight oil).
- I still write the source of truth, which is then spec’ed by AI using contracts that have been revised and versioned numerous times. The requirements have to be thorough and precise – especially with AI, since your requirements will be taken at face value.
- I had to orchestrate the framework (the process) before allowing an AI to write a single line of code.
- AI is the workhorse. It will do the heavy lifting for you.
- But I still have to manually test it, have AI fix bugs, and refactor.
- I still need to make infrastructure decisions from the choices offered.
All of these have to be done, but what makes the AI-first approach so desirable is the time, people, and cost factor. You’ll know as you read further.
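To make the contract idea above concrete, here is a minimal sketch in Python. This is purely illustrative – the class, field names, and values are hypothetical, not Eibira’s actual schema – but it captures the shape of a versioned, guardrailed spec that an AI would be expected to take at face value:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SpecContract:
    """Hypothetical versioned spec contract handed to an AI.

    All fields are illustrative, not an actual production schema.
    """
    journey: str          # the user journey being specified
    version: str          # bumped on every human revision
    source_of_truth: str  # human-written requirement, taken at face value
    guardrails: tuple     # constraints the AI must not violate
    acceptance: tuple     # outcomes a human can manually test


# A hypothetical contract for an onboarding journey.
contract = SpecContract(
    journey="onboarding",
    version="3.2",
    source_of_truth="User completes profile setup in under 2 minutes.",
    guardrails=("no third-party trackers", "offline-first storage"),
    acceptance=("profile saved locally", "setup resumes after app restart"),
)

print(contract.journey, contract.version)  # → onboarding 3.2
```

The `frozen=True` flag mirrors the workflow described above: a contract is never mutated in place; a human revision produces a new version instead.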
Why don’t you do it?
I never intended to do the development. I was forced to take it up.
When I began documenting Eibira features, I had to discard my product management background and, with it, all the learning I had about how I approached product development. The focus shifted from writing specifications for a human to communicating with an AI.
How do you have an AI document a feature and its behaviour, so that another AI can build it exactly?
This was the SDD phase: writing specifications and operationalizing them for an AI. Though it took some time, it was much easier than what came next.
The more challenging part was finding a developer or architect who believed their role was a hands-off orchestrator, rather than a developer.
After spending many months and interacting with many developers, a passing comment from a friend hit home: “Why don’t you do it?”
Thus began my journey into development…

But building an app is complex
Yes. Building an app is complex.
Especially one that borrows design and UX principles from gaming and product psychology to bring together neuroscience, evolutionary biology, psychology, and mindfulness into a coherent app that helps people make better decisions in life.
And all of this requires technical infrastructure, but not necessarily people with the know-how to build and manage it. That last part is the reality that many people are still unwilling to accept.
As a sceptic with a critique-first approach to AI outputs, I have had people from the technology industry review the Eibira codebase and its setup.
The comments are not “It’s amazing” or “oh my god, what level of coding”; they have been more subdued and humble: “yeah, this makes sense”, “can you ask your AI why it did A and not B?”, or simply “looks good to me”.
Code feels less like an asset and more like a consequence. And how that code is written, and which languages the tech stack is built on, are minor concerns.
I often joke with my friends that the next programming language will not be from a human. It will be generated by an AI, and only it will understand its semantics and syntax. We will be meek spectators.
What about design?
Broadly, there are two ways to look at product design.
Some designers push boundaries in visual language to create a user experience that no one imagined – like an artwork.
But most of product design work is systems thinking, assembling patterns and components into something coherent.
As a start-up, you are looking at a middle ground: designs that look original and give you a brand identity, while also being easy on your pocket.
Needless to say, given an ecosystem like Dribbble or Material Design, AI can connect dots that otherwise may require more brains and iterations than what a start-up can afford.
That’s not the only reason for adopting AI in design. I had hired a designer early on and soon realised it’s the same challenge: how do you get designers to shift their perspective from designing for a human to designing for an AI?
Unable to change the existing mindset and the design practice, I had to begin my tertiary role as a full-time designer, building components in Figma with AI assistance for a downstream AI to implement.
The design journey so far has been unbelievable. The speed at which I can design and the accuracy of the front-end development are humbling when you see AI at work.
Why 90 days, when AI can build a website in 90 seconds?
I recently came across an ad on YouTube that claimed a website can be built in 90 seconds using their AI.
Well, they are right, you can. But if you are aiming for a product you can take to the market and expect people to pay for it, brace yourself for a longer iteration cycle.
It took me 4 months to build an AI framework for my needs, and another 2 months to have it implement around 6 user journeys reliably. Once the system was proven, I could implement 10 user journeys in 30 days.
My goal is to oil the machine enough to implement one user journey every two days: from ideation to a working feature (design, spec, dev, everything included).
AI is dangerously fast. So it’s worth slowing down to fine-tune the direction every now and then.
A word of caution
This two-day goal assumes a system designed and built around one person owning everything.
I define an AI-first development lifecycle consisting of four levels (this is not an industry standard, but a working model that has been useful for me):
- L1: Prompt Engineering – “Please fix this for me.”
- L2: Structured Prompt Systems – “Please fix this for me, based on what you did here.”
- L3: Spec-driven AI Development – “You are fixing the following. Below are the policies governing the fix approach. You are not to violate the guardrails. Before you commence, read the current journey and then run the handshake.”
- L4: Compiler-Enforced AI Systems – “I [AI] identified an issue in production when the user did X. It happened due to Y in your spec. I have applied patch Z as a temporary fix. The issue is resolved, but it deviates from your policy P. I have three possible spec options on policy P. Let me know which one to pick. If you’re thinking of a different approach, I’d be curious to know.”
Today, Eibira functions as an L3 system. But now that Eibira is a 3-person company (so happy to be writing this 🙂 ), and development will be handled by all three of us, the goal will be to move to an L4 system if GTM is successful.
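As a hedged illustration of the L3 style described above, an L3 prompt preamble could be assembled mechanically from the spec, policies, and guardrails. The function and its parameters are my own invention for this sketch, not an industry standard or Eibira’s actual tooling:

```python
def build_l3_prompt(task, policies, guardrails, journey_doc):
    """Assemble a hypothetical L3 (spec-driven) prompt preamble.

    Every section mirrors the L3 template: the task, the policies
    governing the fix, the guardrails, and the journey to read first.
    """
    lines = [
        f"You are fixing the following: {task}",
        "Below are the policies governing the fix approach:",
        *[f"- {p}" for p in policies],
        "You are not to violate these guardrails:",
        *[f"- {g}" for g in guardrails],
        f"Before you commence, read the current journey: {journey_doc}",
        "Then run the handshake.",
    ]
    return "\n".join(lines)


# Illustrative usage with made-up inputs.
prompt = build_l3_prompt(
    task="login timeout bug",
    policies=["prefer minimal diffs", "no silent behaviour changes"],
    guardrails=["no schema changes"],
    journey_doc="journeys/login.md",
)
print(prompt.splitlines()[0])  # → You are fixing the following: login timeout bug
```

The point of the sketch is that at L3, the human still authors the policies and guardrails; only the assembly and execution are delegated. At L4, the AI would start proposing revisions to the policies themselves.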
What is the cost of an AI-first approach?
There are two prices you pay:
- The cost of building the product: Right now, it’s ridiculously low. So low that it may skew the entire conversation. A better cost to measure would be the final cost of bringing the product to market. I’m tracking these costs, and hopefully there aren’t any surprises down the road.
- The cost of human interaction: AI can suck you in. It’s a black hole. I spend an average of 6 hours every day on AI tools. No question is left unanswered. No algorithm is left unresolved. No need for me to speak with another human being.
That second point really bothers and hurts me. So much so that I had to design an antidote, a deliberate engine that enforces human interaction.
Here is how I define these three engines dictating my everyday work:
- Implementation engine: The AI workhorse I’ve been talking about.
- Reasoning engine: The AI engine used for critical thinking and learning. Also, a critique of the implementation engine.
- Homo sapiens engine: Questions and concerns reserved for homo sapiens. A deliberate attempt to keep certain aspects of the product resolved only through human interaction.
It’s a weird and unnerving situation that I have to make a deliberate attempt at human interaction!
Not everything is hunky-dory… yet.
Despite all the amazing things AI can do, it still cannot predict whether a product will be successful, and AI hallucination is real.
There are days when I wake up and count that 527 days have passed, and I feel my stomach sink. Is it too long? Am I missing the boat? Will the product be accepted?
There are days when you are jolted, when the AI you’ve been training for months gives a response that is alien and scares you.
The first calls for patience on my part; the latter, for AI to evolve. Even with modest improvements in LLMs, the impact will be manifold.
I can already see the difference with GPT 5.5 and Opus 4.7.
It’s been an exciting journey learning, exploring, and using AI so far. But the next few months will not be about AI. They will be about strengthening each product journey to ensure every touchpoint stays meaningful and helps a user make better decisions on Eibira.

Liked the article?
If it sparked a thought or made you pause, consider sharing it on LinkedIn – it might do the same for someone else.