Validating Products Before Building: Why I'm Not Coding offer.guide Yet

SelfCEO Team

I have a problem many founders would envy: I know exactly how to build offer.guide. The tech stack is clear, the UI design is mapped out, and I could probably ship an MVP in 2-3 weeks.

But I'm not building it yet.

Instead, I'm doing something that feels inefficient and tedious: manually running offer analyses for people, one by one, using Claude AI. No automation. No slick interface. Just me, processing each request by hand.

Here's why.

The "Build It and They Will Come" Trap

I've fallen into this trap before with side projects. You spend weeks building something, polish it until it's perfect, launch it with excitement, and then... crickets.

The problem isn't that the product is bad. The problem is you built the wrong thing, or built it for the wrong people, or solved a problem nobody actually cared about enough to pay for.

Building is the easy part. Validation is the hard part.

What I'm Actually Validating

With offer.guide, I'm not just validating "do people want this?" I already know they want help making offers on houses. What I'm really validating is:

1. Will people actually use it?

There's a difference between "yeah that sounds useful" and actually taking action. By offering to manually run analyses, I see who's willing to:

  • Fill out the questions
  • Send me listing URLs
  • Actually read the report I send back

If people won't do this when it's FREE and MANUAL, they definitely won't pay for an automated version.

2. Is the output actually valuable?

Does the offer recommendation help people feel more confident? Do they understand the reasoning? Would they actually use this number when making an offer?

I can only learn this by watching real people interact with real output.

3. What's the real workflow?

I thought people would want to analyze 5-10 properties at once. Turns out, they want to analyze 1-2 properties they're serious about. That completely changes the pricing model and feature set.

I wouldn't have learned this from building an MVP and watching analytics. I learned it by literally talking to people.

4. What questions are they asking?

After I send someone an analysis, they always have follow-up questions:

  • "What if I'm competing with other offers?"
  • "Should I waive inspection contingency?"
  • "How firm should I be on this number?"

These questions tell me what features to build. Maybe I need a "competitive offer" mode. Maybe I need negotiation scripts. Maybe I need a calculator for contingency trade-offs.

I would have missed all of this if I just built and launched.

The Manual Process

Here's what I'm actually doing:

  1. Someone expresses interest (Reddit, Twitter, or DMs)
  2. I send them a simple form with ~9 questions about the property
  3. They fill it out and send me the listing URL
  4. I feed everything into Claude with my offer.guide prompt
  5. I format the output nicely and send it back
  6. I ask for feedback

It takes me 15-20 minutes per analysis. It's not scalable. That's the point.
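For the curious, steps 2-5 boil down to assembling a prompt by hand and pasting it into Claude. A minimal sketch of what that looks like; the question names and prompt wording here are my illustration, not the real offer.guide prompt:

```python
def build_analysis_prompt(answers: dict, listing_url: str) -> str:
    """Assemble the text pasted into Claude for one manual analysis.

    `answers` maps form questions to the buyer's responses.
    Both are hypothetical examples, not the actual offer.guide form.
    """
    lines = [f"Listing: {listing_url}", "", "Buyer context:"]
    for question, answer in answers.items():
        lines.append(f"- {question} {answer}")
    lines.append("")
    lines.append("Recommend an offer price and explain the reasoning step by step.")
    return "\n".join(lines)


# Example with two of the ~9 questions filled in:
prompt = build_analysis_prompt(
    {"Max budget?": "$425K", "Days on market?": "34"},
    "https://example.com/listing/123",
)
print(prompt)
```

The point of the sketch is how little machinery this stage needs: a form, a template, and a chat window. Automating it later is mostly a matter of wrapping this in a web form and an API call.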

If I can't get 10-20 people to let me do this manually, I definitely can't get 100 people to pay $10 for the automated version.

What I've Learned So Far

After running ~8 analyses manually:

People actually want this. Multiple people have said "I wish this existed when I bought my house" or "can you analyze 2 more properties for me?"

The reasoning matters more than the number. People don't just want "offer $395K" - they want to understand WHY. The breakdown of adjustments is what makes them trust the recommendation.

Different buyers have different priorities. First-time homebuyers want reassurance. Investors want speed. The same tool needs to serve both, but differently.

Realtors aren't the enemy. I was worried realtors would see this as competition. Instead, several people said "I'm going to show this to my realtor and use it to have a better conversation with them."

When I'll Actually Start Building

I'll start building the automated version when:

  1. ✅ I've manually processed 20+ analyses
  2. ⏳ At least 10 people say they'd pay $10 for this (currently at 6)
  3. ⏳ I have a clear understanding of the workflow and feature set
  4. ⏳ I've validated that the output is actually influencing real offers

I'm probably 2-3 weeks away from hitting these milestones. Then I'll build.

But by waiting, I'm ensuring that when I DO build, I'm building something people actually want to pay for. Not just something I think is cool.

The Unsexy Truth About Validation

Nobody wants to hear "spend 3 weeks doing manual work before you write code." It's not exciting. It doesn't feel like progress. You can't screenshot it and post "shipped this weekend 🚀".

But it's the difference between:

  • Building the right thing vs building something nobody wants
  • Launching with 20 paying customers vs launching to an empty room
  • Sustainable growth vs "I spent 2 months building this and got 3 users"

Validation isn't glamorous, but it's the highest ROI work you can do as a solo founder.

What's Next for offer.guide

Over the next few weeks, I'm continuing manual analyses while working on:

  • Growing the waitlist on offer.guide
  • Documenting the validation process (like this post)
  • Refining the questions and output format
  • Planning the actual build

If you're house hunting and want a free manual analysis while I'm still in validation mode, DM me or join the waitlist. You'll help me build a better product, and you'll get a data-driven offer recommendation in return.


Building something? Don't skip validation. It's tempting to jump straight to code, but the best products come from deeply understanding the problem first.

Want to follow along? I'm documenting the entire journey of building SelfCEO ventures here on the blog. Subscribe for updates or connect with me on X/Twitter.