Learning when to be patient as a founder
Founders are impatient by default. I know I am. One lesson I’ve had to internalize is knowing when impatience is a superpower, and when it’s actually harmful to your product.
Over the years as a founder, I’ve found myself demanding results in days (or a couple weeks) for things that simply can’t reveal truth that quickly. This unfortunately resulted in broken systems, constant resets, and teams that couldn’t build momentum because the goalposts kept moving.
Here’s the deeper lesson I’ve learned:
Each experiment loop comes with a different time-to-signal. Your job is to know the acceptable latency for each loop, commit to it upfront, and manage accordingly.
Some activities should produce a signal in 2–14 days. If they don’t, you’re not moving fast enough.
Others are supposed to take 4–12 weeks. If you judge them in week 2, you’ll either kill something too early, or burn your team with constant measurement that isn’t meaningful yet.
The real skill: assign the right latency to each loop
Here are the rules I wish I’d had earlier:
Decide the acceptable time-to-signal for the activity.
Hold the team to inputs until the signal window opens.
Once the window opens, be ruthless about interpreting the signal.
The subtlety is in #2: you shouldn’t expect week 1 indicators for everything. For many loops, weeks 1–3 are about activity and inputs, because the real signal doesn’t show up until week 4 (or week 8).
An easy example is running cold outbound as a brand-new startup: the number of demos you’ve booked after 100 emails and two weeks is not evidence that no one wants your product.
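To see why, run the arithmetic. Here’s a quick sketch (the 2% email-to-demo rate is an assumption I picked for illustration, not a benchmark): even when the underlying rate is healthy, 100 emails is a tiny sample.

```python
from math import comb

def prob_at_most(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p): chance of k or fewer demos from n emails."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

n_emails = 100
true_rate = 0.02  # assumed 'healthy' cold-email -> demo rate, for illustration only

print(f"P(0 demos)   = {prob_at_most(0, n_emails, true_rate):.0%}")  # ~13%
print(f"P(<= 1 demo) = {prob_at_most(1, n_emails, true_rate):.0%}")  # ~40%
```

Roughly four times out of ten you’d see one demo or none, even though the channel is actually working. At that sample size, a quiet inbox can’t tell you whether the product is unwanted or the sample is just small.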
That means the founder’s job isn’t “measure everything early.” It’s:
Know what can be measured early (and measure it)
Know what can’t (and don’t pretend it can)
Avoid premature measurement, which leads to thrash
A quick checklist for setting the latency window
Before you start an initiative, force yourself to answer:
What’s the natural cycle time here (customer cycles, product cycles, investor cycles, etc.)?
What’s the minimum sample size before outcomes mean anything?
What are the inputs we can measure immediately while outcomes are still noisy?
What would make us stop early because the inputs are clearly wrong?
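If it helps, here’s a minimal sketch of how I’d write the answers down before kicking off a loop; the structure and the example values are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class ExperimentLoop:
    name: str
    signal_window_days: tuple[int, int]  # when outcomes become meaningful
    min_sample_size: int                 # outcomes below this are noise
    input_metrics: list[str]             # measurable immediately
    kill_criteria: list[str]             # inputs so clearly wrong you stop early

# Illustrative values only -- set your own per loop, before you start.
outbound = ExperimentLoop(
    name="cold outbound to a new ICP",
    signal_window_days=(28, 56),         # judge demos booked in weeks 4-8
    min_sample_size=500,                 # emails sent before reply rates mean anything
    input_metrics=["emails sent per day", "deliverability", "reply rate"],
    kill_criteria=["replies consistently say 'this isn't a problem for us'"],
)
```

The point of writing the window down up front is that you can’t quietly renegotiate it with yourself in week 2.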
A story from 2022: the cost of being impatient in the wrong area
In 2022, we were building a card + payments platform for the construction sector. At the time, I was extremely impatient with revenue growth. I pushed for 30%+ growth essentially from MVP.
The good news: it worked… for a while. We ended up growing 10× over the following 12 months.
The bad news: we paid for that growth with our product evolution.
Almost all product work got pulled into maintenance and infrastructure: building and maintaining a ledger, restructuring our issuing/rails platform, resolving sensitive customer issues tied to their business finances, keeping the machine running.
We didn’t spend enough time iterating on a few key features that customers actually needed: the ones we knew would differentiate us from horizontal products.
By Q1 2023, growth started slowing and the funnel looked worse. The root cause wasn’t “marketing stopped working.” It was that our product felt more like a credit offering than the differentiated software we set out to build. And as a result, we attracted the wrong types of customers from the wrong sources.
The failure mode was simple in retrospect: I assigned “fast-loop urgency” to revenue growth, which should have run on a longer cycle, while product iteration and customer love on our core software features should have been kept on the tighter loop.
That experience taught me a very specific lesson: if you get impatient with the wrong loop, you can win in the short term, but completely miss the mark in the long term.
A few practical examples
Here are a few examples to highlight what I mean:
Fast signal loops (2–14 days): stay default impatient
These are loops where delay is basically waste. Set clear output metrics up front and run the experiment ruthlessly. Make quick judgments and don’t talk yourself into waiting longer. Otherwise, you’re wasting precious cash and runway, and likely losing the attention of potential customers who trust you to solve their pain points.
MVP testing, customer discovery, product iteration, and shipping
Aim to iterate in days and get to customer reactions as quickly as possible.
If you’re taking 4–8 weeks to test a hypothesis, you’re likely building for a customer context that has already changed, especially in the current AI-supercharged environment.
Tools like v0 make this dramatically easier. At Payflow, I took our marquee feature from idea to something customers could react to over a single weekend. In the past, that would have easily taken 2–4 weeks.
Taking 3–6 months to test an MVP often means you’re delivering for a world that no longer exists.
People
It’s easy to say “hire slow, fire fast.” The truth is: you hired someone because you were excited about something in their background or something you anchored on during the interview process. A default mindset of “I’ll fire everyone in two weeks if they don’t produce” is chaos.
So why do I still put people decisions in the fast-signal category?
Inputs. When the outcome signal is delayed or nuanced, inputs are your early warning system.
You can’t always demand outcomes in 48 hours, but you can demand motion, critical thinking, ownership, intensity, and genuine excitement about the product.
Within the first week, you can usually tell:
Are they putting in the right effort?
Are they taking ownership without being chased?
Are they doing the unglamorous work (cleaning leads, setting up outreach, writing tests)?
Are they producing artifacts you can review (commits, PRs, notes, outreach drafts)?
Do they seem genuinely excited about what you’re building or are they emotionally somewhere else?
To make “fire fast” decisive rather than chaotic, we now run 2-week work trials with clear expectations. If the inputs aren’t there within that window, I’d rather end it quickly than hope they magically appear later.
Paid marketing experiments
Unlike cold outreach, paid marketing experiments tend to produce near-immediate signals. At a minimum, an ad should drive measurable traffic. A webinar should get you a solid lead list.
I’ve been fortunate to work with some great marketers. And the best ones iterate on copy and creatives on a daily basis (and sometimes multiple times in a day).
Paid marketing can create a serious drag on your burn and break your LTV:CAC math, both of which raise questions about your viability as a venture-backed company. This is a place to accept nothing less than extremely fast feedback cycles.
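For reference, the math paid spend can break is simple. Here’s the standard LTV:CAC calculation with made-up numbers (a common venture rule of thumb is to want the ratio at 3+):

```python
# All numbers are made up for illustration.
monthly_revenue_per_customer = 200.0
gross_margin = 0.80
avg_lifetime_months = 18

ad_spend = 30_000.0
customers_acquired = 60

ltv = monthly_revenue_per_customer * gross_margin * avg_lifetime_months  # $2,880
cac = ad_spend / customers_acquired                                      # $500

print(f"LTV:CAC = {ltv / cac:.1f}")  # 5.8 -- comfortably above the ~3x rule of thumb

# Same spend at half the conversion: CAC doubles to $1,000 and the ratio
# drops to 2.9 -- which is why slow iteration on a paid channel is so expensive.
```

A channel that misses by 2× on conversion flips from healthy to marginal, and every week you don’t iterate is burn spent finding that out.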
Slow signal loops (4–12 weeks): be patient on outcomes, intense on inputs
These are loops where the system needs time to show you the truth, where you need multiple iterations before you can truly say you nailed it.
These are hard for me because my default is to want to “start building the system.” However, I’ve learned that impatience here often creates a worse outcome.
Some examples:
SEO / inbound growth
You often need 6+ weeks to understand what’s happening and another 12–18 weeks to see real results, so judging it at week 3 is premature. Of course, within the first week you can track keyword rankings and validate your content pillars. Those are all inputs. Actual signals that your SEO is working will take a lot longer.
True PMF
“I’ve already iterated 4 times on this product/feature, surely we should just scale it now.” I really made this mistake. Feels awful just thinking about it.
True PMF can take 4 weeks, but it can also take 2+ years. Until it’s abundantly clear that people are lining up to pull the product out of your hands, you need to keep iterating and keep experimenting.
How does this square with the product iteration loop above? You can be very impatient with each iteration loop, but patient with the overall outcome. You might need multiple experiments before you see breakout motion. The key isn’t being “patient” as in passive; it’s being disciplined about the latency. It might take 30 iterations to get it right. Airbyte and Segment are great examples of this.
Nailing the ICP
I used to jump the gun on this one. Once I felt like I’d talked to enough people, I would convince myself that I had found my ICP and tunnel-vision on them.
The better approach is to keep gathering signals from your current ICP — conversion rates, willingness to pay, eagerness to onboard, product usage once onboarded, feedback once they’ve used it — until it’s extremely clear you’ve found an ICP so eager to pay for the current iteration of the product that you can maintain healthy gross margins and scale.
Until then, you might just be selling to the wrong person (or selling the wrong product).
Fundraising
It’s so easy to spiral in weeks 3–4 of a fundraise (heck, maybe even week 2!). This always leads to subpar decisions.
“If investors don’t like this, maybe it’s not worth building.”
“Should I listen to one investor about pivoting to a different vertical?”
“This investor made a great point about our business model, should we change it?”
Investors are smart, but you shouldn’t read too much into a lengthy fundraise, especially given the additional noise from external factors like market timing. Your customers, your intuition, and your team are materially superior sources of truth for what you should build.
Be patient in fundraising.
Some counterexamples
Every rule has exceptions. And in startups, any piece of advice has 20 counterexamples where founders did the opposite and things went extremely well. So here are a couple I spotted:
You should be impatient with “slow” loops if the early signals are clearly wrong.
Example: outbound targeting an ICP that doesn’t have the pain. You send 100 emails, and the responses are overwhelmingly “this isn’t a problem for us.” Don’t wait 9 weeks to call this one; go back to the drawing board.
You must be patient with “fast” loops if you’re gating on real constraints.
Example: compliance/security work that must be correct, or a genuinely complex feature that requires layered engineering to be reliable (e.g., credit card ledger, AI financial analyst for large businesses).
The point isn’t “always impatient” or “always patient.” The point is: choose the correct latency window for the loop and commit upfront.
Closing: a founder’s job is managing time-to-signal
As a founder, you have to learn to manage feedback cycles.
If you get the latency wrong, you either:
Move too slowly where speed is the advantage, or
Thrash prematurely where patient analysis is the requirement.
So build your own map of all the experiments you’re running or planning to run this month:
What should pay you back in 2–14 days?
What takes 4–12 weeks before the signal is real?
What inputs do you hold the team accountable to until the outcomes are measurable?
That ultimately will make the difference between being “impatient” in a way that compounds, and being impatient in a way that creates constant fire drills with little to show for it in the long term.
Special thanks to Jose Pons Vega for reading and commenting on drafts of this post.

