Book Notes by Abi Noda

Running Lean - by Ash Maurya

ISBN: 9781449305178
READ: Feb 7, 2014
ENJOYABLE: 6/10
INSIGHTFUL: 5/10
ACTIONABLE: 4/10

Critical Summary

I'll begin my summary by quoting the author's promise: "Running Lean is a repeatable, actionable process for building products, one that raises your odds for success by helping you identify your success metrics and measure progress against those metrics."

At a high level, the Running Lean framework is fairly straightforward: validate the problem, define a solution, validate the solution, then develop your solution iteratively while continuing to test and validate along the way. Running Lean offers concrete, actionable instructions and templates for each step of this process.

However, the greatest flaw in this book is hinted at in the language of the author's promise. Running Lean is designed more like an algorithm — painfully detailed, comprehensive, and unemotional — than a practical field guide for the real world. The book delves into everything from landing page design to Kanban boards. In other words, in its attempt at engineering a comprehensive framework for business creation, Running Lean fails to deliver a strong set of core principles (I will revisit this later in my summary).

Another problem I have with the author's promise is that the word "metrics" is mentioned twice, when in actuality Running Lean incorporates very few metrics. In fact, it's not until the very last stage that actual numbers are even mentioned (e.g. the Sean Ellis test, 40% customer retention). I found it incongruent that Running Lean is characterized as algorithmic yet is largely based on qualitative experiments, without discussion of potential quantitative benchmarks or test methodologies.

Since Running Lean is considered the de facto field manual for Lean Startup methodology, I was eager to read it and compare it to Nail it Then Scale It, which I had read previously.

At a high level, NISI and Running Lean prescribe very similar methodologies. However, where Running Lean stumbles, NISI shines. NISI's focus on simplicity makes it far more powerful and practical. For example, as a first step, NISI focuses only on pain, whereas Running Lean starts off with a lean canvas, which forces you to simultaneously consider other parts of the business model. NISI's "less is more" approach proves more effective, because as formulaic and well-engineered as Running Lean tries to be, the reality is that starting a company is stressful and unpredictable.

Another example of unnecessary complexity is useless jargon like "iteration meta-pattern" and "build-measure-learn loop", as well as tangential topics like usability testing, Kanban boards, or an annoyingly complex definition of risk: "the way you quantify risk in your business model is by quantifying the probabilities of a specific outcome along with quantifying the associated loss if you're wrong." As a result of this complexity, the milestones and objectives defined in Running Lean are less concrete and powerful than NISI's. NISI does a better job painting a holistic picture of common entrepreneurial fallacies, and of how to break through them by focusing on the most important goal — acquiring paying customers.

I also want to highlight two methodological differences between NISI and Running Lean:

1) NISI gets you in front of customers faster. The Lean Canvas is simple, but it seems like the entire exercise should hinge on the customer pain being validated first. Validating pain first gets entrepreneurs in front of customers sooner, which in turn saves time and wasted energy on the subsequent steps.

2) NISI recommends an objective, quantitative testing method for initially validating the customer pain, whereas Running Lean relies on customer interviews. I would argue that, as a whole, NISI approaches the startup process more objectively, while Running Lean leans on subjective interview data.

Overall, I believe Running Lean is a worthwhile complement to NISI in bits and pieces. Specifically, I found its structured customer interview templates, advice on establishing pricing, and mention of the "Sean Ellis Test" to be valuable and actionable.


Running Lean Overview

Three stages of a startup:

  1. Problem/Solution Fit - Do I have a problem worth solving?
  2. Product/Market Fit - Have I built something people want?
  3. Scale - How do I accelerate growth?

Stages of Running Lean:

  1. Understand the problem

    Conduct formal customer interviews or other techniques to understand whether you have a problem worth solving. Who has the problem and how is it solved today?

    Nail it Then Scale It uses quantitative tests such as response rate to gauge whether a problem is worth solving.

  2. Define the solution

    Build a demo that helps the customer visualize the solution then test it with customers. Will it solve the customers' problem? Does the pricing model work?

  3. Validate qualitatively

    Build your MVP and then soft-launch it to your early adopters. Do they realize the UVP? Are you getting paid?

  4. Verify quantitatively

    Launch your refined product to a larger audience. How will you reach customers at scale? Do you have a viable business?

Three Biggest Startup Risks:

  1. Product risk - Not solving a big enough problem
  2. Customer risk - Not being able to reach customers
  3. Market risk - Not being able to monetize profitably

Document your Plan A

1. Create Lean Canvases

Capture your business model hypotheses using the Lean Canvas (adaptation of Business Model Canvas), laying out a plan that you believe should work. Fill out lean canvases quickly and concisely, and don't feel bad about leaving sections unfilled.

It's interesting that Running Lean kicks off right away with a "Plan A", whereas Nail it Then Scale It starts by focusing on whether or not you have a strong customer pain. While the distinction is, practically speaking, semantic, I personally prefer the NISI philosophy because it keeps you focused on the customer instead of your business model. No pain, no business.

Start by brainstorming the list of possible customers for your product. A customer is someone who pays for your product (versus a user). Split broad customer segments into smaller ones, and create a lean canvas for each customer segment. Next, list the top one to three problems and how you think your early adopters address them today. Another way of thinking about problems is in terms of the jobs customers need done.

UVP: Why you are different and worth paying attention to. Derive your UVP from the number-one problem you are solving. Target early adopters, not the middle of the market. Focus on "finished story benefits", not features. The most effective way to get noticed is to nail a customer problem.

Instant Clarity Headline = End Result Customer Wants + Specific Period of Time + Address the Objections

Answer: what, who, and why (if possible)

Maurya recommends Positioning. Note to self to also consult Made to Stick.

Keep your solution general, don't over-invest. Entrepreneurs are especially gifted at rationalizing their vision. It is important to accept that your initial vision is built largely on untested assumptions.

Address pricing from day one, because:

- Price is part of the product (remember Positioning)
- Price defines your customers
- Getting paid is one of the best forms of validation

Unfair advantage is something that cannot be easily copied or bought, e.g. insider information, personal authority, access to the right endorsements.

2. Prioritize, Identify Risks, and Prepare to Test

  1. Rank your lean canvases based on 1) customer pain level, 2) ease of reach, 3) profitability, 4) market size

  2. Assess and rank risks based on uncertainty and the potential negative impact of those uncertainties. The top three risks are not solving a big enough problem, reaching customers, and being able to monetize profitably. Seek external advice on identifying risks. The biggest risk for most startups is building something nobody wants.

  3. Prepare to Test

- Maximize for speed, learning, and focus.

Do the smallest thing possible to learn. You don't need code to test a software product. You don't need a restaurant to test a new food concept. You don't need automation to test a marketplace.

- Convert assumptions into testable "falsifiable hypotheses"

e.g. Too vague: "Being known as an expert will drive early adopters". Specific and testable: "A blog post will drive 100 signups."

Falsifiable hypothesis = [Specific Repeatable Action] will [Expected Measurable Outcome]

- Validate Qualitatively, Verify Quantitatively

When you have a lot of uncertainty, you don't need a lot of data to reduce uncertainty significantly. You can get a strong positive or negative signal with as few as five customer interviews. A strong positive signal gives you permission to move on to quantitative verification.


Stage One: Understand Problem

Don't try to learn from surveys and focus groups. Surveys assume you know the right questions to ask, as well as the right answer choices. You can't gauge body language in a survey. Focus groups devolve into groupthink.

Tips for talking to customers and finding prospects:

The Problem Interview

How do customers rank the top three problems? How do customers solve these problems today? Is this a viable customer segment?

While you can quickly gauge customer reaction by measuring engagement with a problem-centric teaser landing page or blog post, you need to actively engage customers to truly understand the problems they face and how they solve them today. Techniques for doing this include informal methods like "Design Thinking" and "User-Centric Design", and/or structured customer interviews.

Reference detailed problem interview script and instructions (l. 1550)

You are done when you have interviewed at least 10 people and you:


Stage Two: Define Solution

The Solution Interview

Build a demo that helps customers visualize your solution and validate that it will solve their problem.

Essentially the "virtual prototype" from Nail it Then Scale It

Reference detailed solution interview script and instructions (l. 1864)

You are done when you are confident that you:

Next step: Build an MVP (see Chapter 9 — "Get to release 1.0")

Establishing Pricing

When it comes to pricing, "learning versus pitching" does not mean you can be vague and open-ended. Pricing needs to be tackled more directly than understanding customer behavior.

Don't ask customers what they'll pay, tell them. You can't convince a customer that they have a problem, but you can and should convince a customer to pay a certain price for your product.

If you ask customers what they'll pay, there's no reasonable economic justification for them to offer anything but a low-ball figure. Pricing is part of your product and defines the customer segment you attract.

Recall the discussion of pricing strategies from Positioning

Reference example pricing discussion (l. 1799)

Early-adopter pricing strategies:

The right price is one the customer accepts, but with a little resistance.

AIDA framework for structuring solution interviews

AIDA = Attention, Interest, Desire, and Action

Reminds me of Made to Stick

Attention: Get the customer's attention with your UVP.

Interest: Use your demo to show how you will deliver your UVP and generate interest.

Desire: Trigger desire through scarcity and pricing (see earlier notes)

Similar to using EMOTION to elicit action in Made to Stick

Action: Get a verbal, written, or prepayment commitment


Stage Three: Validate Qualitatively

The MVP Interview

With your MVP, marketing website, and conversion dashboard in hand, your objective is to meet with prospects face to face and sign them up to use your service.

If you can't convert a warm prospect in a 20-minute face-to-face interview, it will be harder to convert a visitor in less than eight seconds on your landing page.

Does your landing page get noticed? Do customers make it all the way through your activation flow? Does your MVP demonstrate and deliver on your UVP? Do customers pay for your solution?

Reference detailed MVP interview script and instructions (l. 2299)

Once you get signups, your goal is to retain your users, get paid, and collect favorable customer testimonials.

Learn from paying customers. Also learn from "lost sales" prospects.

Stage Four: Verify Quantitatively

Achieving Product/Market Fit == building something people want == delivering on your UVP

For one-time services, activation is the key metric.

For recurring products/services, you have early traction when you are retaining 40% of your activated users, month after month.
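
The 40% retention benchmark is a simple cohort calculation. Here's a minimal sketch of how I'd check it — my own illustration, not code from the book (function name and sample user IDs are hypothetical):

```python
# Sketch (my own illustration, not from the book): checking the 40%
# month-over-month retention benchmark for activated users.

def monthly_retention(activated_users, active_this_month):
    """Fraction of previously activated users who are still active this month."""
    activated = set(activated_users)
    if not activated:
        return 0.0
    # Only users who were in the activated cohort count toward retention.
    return len(activated & set(active_this_month)) / len(activated)

activated = ["u1", "u2", "u3", "u4", "u5"]
active_now = ["u1", "u3", "u9"]  # u9 activated later, so only the overlap counts
print(monthly_retention(activated, active_now))  # 2 / 5 = 0.4, right at the 40% bar
```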

Sean Ellis Test for evaluating early traction

Survey users with the key question "How would you feel if you could no longer use [product]?", with possible answers: Very disappointed, Somewhat disappointed, Not disappointed, and N/A - I no longer use it. The exact wording of the question and answers should be adapted.

If over 40% of users say that they would be "very disappointed" without your product, there is a great chance you can build sustainable, scalable customer acquisition growth.
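
Scoring the survey is just a tally against that 40% threshold. The sketch below is my own illustration, not from the book; in particular, excluding "N/A" respondents from the denominator is my assumption about how lapsed users should be handled:

```python
# Sketch (my own illustration, not from the book): tallying Sean Ellis
# survey responses and applying the 40% "very disappointed" threshold.

from collections import Counter

def sean_ellis_score(responses):
    """Return the share of active users answering 'very disappointed'.

    Assumption: 'n/a' responses (people who no longer use the product)
    are excluded from the denominator, since the test targets active users.
    """
    counts = Counter(r.strip().lower() for r in responses)
    active = sum(v for k, v in counts.items() if k != "n/a")
    if active == 0:
        return 0.0
    return counts["very disappointed"] / active

def has_early_traction(responses, threshold=0.40):
    return sean_ellis_score(responses) >= threshold

responses = (
    ["very disappointed"] * 45
    + ["somewhat disappointed"] * 35
    + ["not disappointed"] * 20
    + ["n/a"] * 10
)
print(sean_ellis_score(responses))    # 45 / 100 = 0.45
print(has_early_traction(responses))  # True
```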

Try this out ASAP for OrangeQC

You have achieved early traction when you can retain 40% of your users and pass the Sean Ellis Test