please stop throwing spaghetti at the wall.
it doesn’t work.
i know you can launch a landing page in five minutes. i know it’s easy to toss up a quick a/b test. and i know how easy it is to chat with the 3 people sitting next to you in the coffee shop about your hot idea.
this isn’t experimenting. nor is it validation.
in fact, it’s a waste of time.
now don’t get me wrong. i’m not saying don’t experiment. i’m saying stop experimenting at random.
be strategic about what you test
over the last couple of weeks, we looked at how to generate and evaluate ideas. you should have a list of ideas that are consistent with your vision that you suspect will help drive progress toward your goal.
most teams will take this list, prioritize it, and start building.
don’t do that.
go ahead and prioritize your list based on your best guess at expected outcomes. how will this idea impact your goal? by how much?
but do realize that these are guesses at best and are likely wrong.
this is hard for a lot of people to swallow. i’ve heard countless product folks fall back on their many years of experience or espouse the art of building products. they don’t need to test, they claim. they’ve done this before.
the internet changes quickly. new products are launched at a rate we’ve never seen before. user behavior adapts.
what worked last year may not work this year. what worked in one context won’t work in another context.
so yes, you do have to test.
but don’t test everything.
the “test everything” camp has similar flaws in its logic. it is true that everything changes. it is possible that what didn’t work last week might work this week. but there are infinite possibilities and you can’t test them all.
you want to accelerate your learning. if you test every possibility, all you’ll do is get good at testing. but you won’t build the right product faster.
there is a role for human judgment. now by that i don’t mean your personal preference or opinion. don’t pick the ideas you like the best. i mean judgment informed by data and user context.
you can use judgment to estimate the expected outcome. and you can use expected outcome to rank your ideas. involve other people and follow these estimation tips.
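one way to make the ranking concrete is a rough expected-outcome score. this is only a sketch — the ideas, impact estimates, and confidence numbers below are invented for illustration, and your real estimates will be guesses too:

```python
# sketch: rank ideas by expected outcome (every number here is a guess)
ideas = [
    # (idea, estimated impact on the goal, confidence that the estimate holds)
    ("redesign signup form", 500, 0.3),
    ("add referral program", 2000, 0.1),
    ("weekly email digest", 300, 0.6),
]

def expected_outcome(impact, confidence):
    # discount the raw impact estimate by how much you trust it
    return impact * confidence

ranked = sorted(ideas, key=lambda i: expected_outcome(i[1], i[2]), reverse=True)
for name, impact, confidence in ranked:
    print(f"{name}: {expected_outcome(impact, confidence):.0f}")
```

the point isn't the arithmetic. writing down explicit estimates forces you to notice they are guesses, and gives you something to compare against reality once the results come in.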
and of course, never forget the most important rule of building products: prepare to be wrong. –tweet this
you don’t want to just prioritize and build. instead, assume you are wrong and go out and look for evidence to the contrary.
test your assumptions, not your ideas
you shouldn’t test your ideas. you should test the assumptions behind your ideas.
that’s an important distinction. so much so, it’s worth saying again.
don’t test your ideas. test the assumptions that have to be true to make your ideas work. –tweet this
this may seem like an obvious point. but it’s often overlooked.
let’s look at a simple example. suppose your goal is to drive email signups. you suspect that if you drive enough people to the sign up page, sign ups will go up.
you have 3 different ideas designed to drive traffic to your sign up form. you test all three ideas. should you build the one that drives the most traffic?
do the people who land on that sign up page sign up?
each idea is designed to drive traffic. but each idea also has an impact on what happens after the user arrives at your sign up page. does the idea that drives the most traffic, also lead to the most sign ups?
your assumption is that more traffic will lead to more sign ups. that assumption may be true for one idea but not for another.
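here is that funnel in numbers. the traffic and conversion figures are hypothetical, but they show how the traffic winner and the signup winner can be different ideas:

```python
# hypothetical results for three ideas that drive traffic to the signup page
# (visitors driven to the page, conversion rate once they arrive)
ideas = {
    "idea a": (1000, 0.02),  # the most traffic, but the weakest page fit
    "idea b": (600, 0.05),
    "idea c": (400, 0.04),
}

signups = {name: round(visitors * rate) for name, (visitors, rate) in ideas.items()}

most_traffic = max(ideas, key=lambda name: ideas[name][0])
most_signups = max(signups, key=signups.get)
print(most_traffic, most_signups)  # idea a wins on traffic; idea b wins on signups
```

measured at the top of the funnel, idea a looks best. measured at the goal, it loses.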
this may seem like a silly example. of course, you should go with the idea that drives the most sign ups. but what if your funnel were more complicated?
how many stores optimize for adding more items to the shopping cart and then lose the sale when the shopper is scared away by the high price? the idea is “get shoppers to add more products to their cart.” the underlying assumption is, “more items in the cart means more revenue per shopper.” but if the shopper abandons the cart, this assumption is false.
don’t make this mistake. don’t just test your ideas. make sure the underlying assumptions that need to be true are true.
run the right experiments
once you’ve identified the assumptions that need to be true for your ideas to work, you need to design experiments to test them.
throwing something up to see what happens is not an experiment. it’s a waste of time. –tweet this
no matter what happens, you will convince yourself it was good. you might as well skip the test altogether.
different assumptions require different types of tests
are you trying to figure out why someone is doing something? talk to them.
are they not doing something you want them to do? observe them while they think aloud.
are you trying to figure out which layout works better? split test them.
are you curious about whether what you hear from one or two people is prevalent across your entire population? run a survey.
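for the split-test case, a minimal significance check needs nothing beyond the standard library. this is a sketch of a two-proportion z-test with made-up conversion counts — a real test also needs a sample size decided in advance:

```python
import math

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """two-sided z-test for a difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal cdf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical experiment: layout b converts 6.5% vs layout a at 4.0%
z, p = two_proportion_z(40, 1000, 65, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

with these invented counts the difference clears the conventional 0.05 bar. with smaller samples the same rates often wouldn't — which is exactly why the method matters.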
too many people are only familiar with one testing method. of late, everyone is hot on customer development. i’m never going to argue with talking to your customers more. but when all you have in your toolbox is a hammer, everything starts looking like a nail.
add to your repertoire. learn a variety of testing methods and know when to use them. laura klein does a great job of explaining what to do when in her book ux for lean startups.
draw lines in the sand
you’ve done your homework. you know what your assumptions are. you’ve designed good experiments to test them. you are feeling pretty good about yourself.
but you have one more step to do before you run the experiment.
you need to draw a line in the sand.
what does that mean? you need to decide before you run the experiment what a good result looks like.
this is critical. why?
your brain is wired to see what you want to see. if you don't draw a line in the sand before you run the experiment, it doesn't matter what results you get. you will conclude the results are good.
you know what i’m talking about. you’ve done it before. you get a mediocre result and you start to rationalize to yourself why it’s okay. the next thing you know you are looking for reasons why it is good.
don’t do this. you are cheating yourself out of a learning experience.
instead, take the time to ask yourself before you run the experiment, how will i act on these results? set a threshold. if the results are above this threshold, i will do x. if the results are below this threshold, i will do y.
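the threshold rule is simple enough to write down literally. a sketch, with an invented metric and actions — the useful part is that the plan is fixed before the result exists:

```python
def decide(result, threshold, above_action, below_action):
    # the decision is mechanical: no rationalizing after the fact
    return above_action if result >= threshold else below_action

# the line in the sand, drawn before the experiment runs (numbers invented)
plan = {"threshold": 0.05, "above_action": "ship it", "below_action": "kill it"}

print(decide(result=0.03, **plan))  # prints "kill it" -- below the line
```

the function is trivial on purpose. if your post-experiment decision can't be expressed this mechanically, you haven't really drawn the line.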
be honest with yourself
setting this threshold is easy once you know to do it. sticking to the action you committed to is the hard part. your brain is going to work hard to convince you the results are good.
be ruthless. work with someone who will hold you accountable to that threshold.
did the results clear your threshold? do x. did they miss the mark? do y.
it’s that simple. and yet, it’s difficult to do.
do it anyway.
many teams do everything right up until this point. but when they see the result they lack the discipline to act on it. they start to second-guess the results. they question the experiment itself.
don’t fall into this trap. if you do, you will waste your experimentation efforts. instead, set up the systems to ensure that you trust and act on the results. make the hard decisions.
on thursday, we’ll move on to prioritization based on experimentation results. don’t miss it. subscribe to the mailing list.