MVP launched… now what?
Our AI product has seen the light of day. Where do we go from here?

Six months ago, I dove into the fast-moving AI “revolution” to see what might be possible. Now, after a lot of weekend AI warrioring, we’ve launched a version of Burrow on the web to a dozen or so initial testers.

The next steps are:

  • Harden. Run focused, real-world testing to refine the features and technologies already in place.
  • Validate. Perform user testing with people not involved in development against our hypotheses and collect new ideas (our ideas = hobby, actual user input = real product).
  • Plan. Hypothesize what might be useful for the roadmap.
  • Go. Keep building; Burrow has proven to be a good vehicle for the AI educational journey.

Overlooked Topics

For housecleaning, there are a few items the blog hasn’t covered that seem important for rounding out where we’ve been so far.

iOS and Android Apps

Even though we only launched on the web, I incrementally tested iOS and Android deployments to ensure they were functional. It would be terrible to put in all of this work and then find it doesn’t actually work as an app.

If it turns out that people are actually interested in what we’ve deployed to web, it should be relatively easy to deploy apps thanks to Replit’s integration with Expo.

Use of Vibe Coding to Draft “Pillars”

While my teammate, Christian, owns the pillars of functionality that support the product, we did draft a first version of one pillar using Replit. Our search algorithm is driven by some complex logic that Christian outlined, and we refined it in several steps.

  • Christian helped to draft the approach we wanted to take, including the creation of embedding vectors using the OpenAI API.
  • Replit took a first pass at implementing the logic.
  • We followed a Human-Review-AI-Clean feedback loop:
    • Christian added comments in the search logic file via GitHub to specify the code cleanup needed.
    • Replit walked through and addressed each comment one by one as Christian and I supervised.
    • When all comments had been processed, we repeated the human review/comment cycle three more times.
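To give a flavor of what embedding-driven search looks like, here is a minimal sketch. It is not Burrow's actual search logic (which Christian owns), just the generic pattern: each document gets an embedding vector, and a query is matched by cosine similarity. The toy vectors below stand in for real embeddings, which would come from the OpenAI embeddings endpoint (e.g. client.embeddings.create(...)); all function and variable names here are illustrative.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_by_similarity(query_vec, docs):
    # docs: list of (doc_id, embedding_vector) pairs.
    # Most-similar documents first.
    return sorted(docs, key=lambda d: cosine_similarity(query_vec, d[1]), reverse=True)

# Toy 2-D vectors in place of real OpenAI embeddings, so the
# sketch runs standalone without an API call.
docs = [("a", [1.0, 0.0]), ("b", [0.7, 0.7]), ("c", [0.0, 1.0])]
ranked = rank_by_similarity([1.0, 0.1], docs)  # "a" ranks highest
```

In practice the comparison runs over vectors with hundreds or thousands of dimensions, and the ranking step is usually handled by a vector index rather than a full sort.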

The result was pretty clean code and a file that Christian will likely take ownership of eventually. If/when that happens, I will inform Replit that the file is off-limits for modification, and Christian will use his console approach, combining Claude and Gemini to refine further.

The Road Ahead

We plan to continue tinkering as the tools evolve and blog about what we’re working on, along with any insights that might be useful to others on a similar journey. AI is definitely not magic or a replacement for good product development practices. It is, however, pretty impressive, and I’m looking forward to the ride.

If you're on the same road, I'd love to hear what tools and insights you've discovered.