
Evaluating Innovation with Scorecards

In this article we will take a look at some scorecards used to evaluate innovations, innovators, ideas and startups. If you’d like to use a scorecard shown here, feel free to create an account at Innoscout – the tool for innovation scouting – or Innovote – the mobile award system for startup competitions.

When deciding on an evaluation scorecard you have two major things to think about: the criteria and the scales.

A criterion represents a property or attribute that is important to you or in general. For example, the innovator (person or team) is generally important for successful execution of any innovation. What represents a large opportunity, on the other hand, can differ depending on whether you are evaluating as an angel investor, a government-sponsored fund or a venture capitalist.

For each criterion the evaluator chooses a value on a scale as their evaluation. Often you provide guidance on how many points to award based on qualifying questions or attributes. Let’s take the example above: you might give one point each for a team that includes a CEO, CTO and CMO, and additional points if they are serial entrepreneurs. For the size-of-opportunity criterion you may define minimum and maximum values representing 0 and 5 points, and the evaluator can choose values in between.
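As an illustration, here is a minimal sketch of how points and weights combine into an overall score. The criteria names, weights and maximum points are hypothetical placeholders, not taken from any particular scorecard below.

```python
# Minimal weighted scorecard sketch; criteria, weights and maximum points
# are hypothetical placeholders.
WEIGHTS = {"team": 0.4, "opportunity": 0.6}  # assumed weighting, sums to 1.0

def overall_score(points: dict[str, int], max_points: int = 5) -> float:
    """Combine per-criterion points (0..max_points) into one weighted score."""
    return sum(WEIGHTS[c] * p / max_points for c, p in points.items())

# Example: CEO, CTO and CMO present (3 points) plus serial entrepreneurs
# (1 extra point), and an opportunity judged at 3 of 5 points.
print(overall_score({"team": 4, "opportunity": 3}))  # 0.68
```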

Here we present a number of scorecards sourced from various use cases and adapted for use as early-stage evaluation questionnaires for innovations.

Cabbage and Zhang Scorecard

This scorecard evaluates an innovative company (startup) along six equally weighted criteria. Each criterion consists of three questions that award one point each, resulting in a quantitative 0-3 scale per criterion (0 meaning that none of the factors are met). A minimal tally sketch follows the list.

  • Customer
    • Is there an unmet need or desire?
    • Is the market large enough? Either a niche market in which they are the only player (big fish strategy) or a much larger market in which they can win market share (big market strategy).
    • Do they have reliable access to that market? No point if the sole channel is a single point of failure, or if market regulation/manipulation is in place or to be expected.
  • Product
    • Is the solution customer focused? No point if value is unclear or multiple goals are targeted.
    • Does the solution have a low barrier to adoption? Cost of the solution also includes migration or adoption cost, so award a point if the solution requires mostly low financial and time investment to adopt, has a low learning curve and is easily integrable into other systems and processes.
    • Is the value proposition clear? A point if the value calculation (ROI) for the solution is easily understood and perceived.
  • Competition
    • Is a clear market inefficiency being met? One point for new markets (demand exceeds supply), a fragmented market with no clear market leader, or a stagnant market that is ready for disruption.
    • Is there a barrier to entry? No point if there are existing economies of scale, existing mature products, well-established brands or price competition.
    • Does the solution have a defensible USP (e.g. patent, technology, experience, unique approach)?
  • Timing
    • Is this a new innovation?
    • Does the demand exist?
    • Is the solution already commoditized? No point if low-cost players exist or there are many players with similar, substitutable products.
  • Financial
    • Is large capital risk involved? One point if sunk costs are minimal.
    • Is a large amount of working capital required? One point if little working capital is required.
    • Are economies of scale expected? One point if it can be shown that margins increase with volume.
  • Team
    • Does the team have the experience? Are they subject matter experts?
    • Do they have the skills to deliver? Technical, engineering, professional or network.
    • Do they have the network to deliver? Connections to partners and suppliers.
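A minimal tally sketch for this scorecard, assuming the straightforward reading that each criterion’s three questions are yes/no answers worth one point each; the dictionary keys are shorthand for the checklist above.

```python
# Cabbage and Zhang tally sketch: three yes/no questions per criterion,
# one point each, six equally weighted criteria scored 0-3.
CRITERIA = ["customer", "product", "competition", "timing", "financial", "team"]

def tally(answers: dict[str, list[bool]]) -> dict[str, int]:
    """Map each criterion's three yes/no answers to a 0-3 score."""
    return {c: sum(answers[c]) for c in CRITERIA}

answers = {
    "customer":    [True, True, False],  # unmet need, market size, access
    "product":     [True, True, True],
    "competition": [False, True, True],
    "timing":      [True, True, False],
    "financial":   [True, False, True],
    "team":        [True, True, True],
}
scores = tally(answers)
print(scores, sum(scores.values()))  # per-criterion 0-3, total out of 18
```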

Source

Adapted Angel Scorecard Method

Popularized by Bill Payne in 2011, this scorecard is often used by angel investors for evaluating financial investment opportunities. It can certainly be used as a formal method for a detailed evaluation, but this minimally adapted version can also be used in earlier and faster evaluation rounds.

1. Strength of the Entrepreneur and the Management Team

A large part of the success of any innovation is its execution team, starting with the founder (or founding team) and the team they can assemble around them.

Subcriteria weighted at 30%; points range from -2 (lowest) to 3 (highest):

  • Experience of the founder: no business experience → in sales or technology → as a product manager, or many years in any business sector → as COO, CFO or CTO, or in this business sector → as CEO
  • Team strength: only the entrepreneur → one competent player in place → team identified on the sidelines → competent team in place

2. Size of the Opportunity

This is the most difficult criterion to provide a reasonable scale for, as it will vary between countries, investment types and evaluators. What one person sees as a small market is a large one for another. I have decided to keep the figures as originally provided by Payne, but be cautious and adapt these to your own needs if used in a different context.

Note though that the future revenue scale is not linear but parabolic: both too small a potential and too large a potential are deemed negative for early stage investments. The latter usually implies further required capital down the line, introducing more risk.

Subcriteria weighted at 25%; points range from -2 to 3:

  • Size of the target market (total sales): < $50 million → $100 million → > $100 million
  • Potential for revenues of the target company in 5 years: < $20 million or > $100 million (both score low, per the note above) → $20 – $50 million (scores highest)

3. Strength of the Product and Intellectual Property

Here the questions ask how far along and mature the product development is and whether it can be protected from competing innovators.

Subcriteria weighted at 15%; points range from -2 to 3:

  • Is the product defined and developed? Not well defined, still looking at prototypes → well defined, prototype looks interesting → good feedback from potential customers → orders or early sales from customers
  • Is the product compelling to customers? This product is a vitamin pill → this product is a pain killer → this product is a pain killer with no side effects
  • Can this product be duplicated by others? Easily copied, no intellectual property → duplication difficult → product unique and protected by trade secrets → solid patent protection

4. Competitive Environment

Will the innovation have difficulties entering the market and acquiring market share?

Subcriteria weighted at 10%; points range from -2 to 3:

  • Strength of competitors in this marketplace: dominated by a single large player → dominated by several players → fractured, many small players
  • Strength of competitive products: competitive products are excellent → competitive products are weak

5. Marketing/Sales/Partners

Can they produce the innovation and deliver it to the market?

Subcriteria weighted at 10%; points range from -2 to 3:

  • Sales channels: haven’t even discussed sales channels → key beta testers identified and contacted → channels secure, customers placed trial orders
  • Sales and marketing partners: no partners identified → key partners in place

6. Funding needs

Similar to the second factor in the opportunity-size criterion, this scale should be adapted to the audience or organisation performing the evaluation, as a later-stage or corporate investor will have a different scale to the angel investor depicted here.

Subcriteria weighted at 5%; points range from -2 to 3:

  • Need for additional rounds of funding: need venture capital → another angel round → none

7. Other factors

It is not efficient to add more elements to the above criteria, especially when used for fast evaluations. But there may be obvious factors not covered by the previous criteria that have a substantial impact on the innovation’s success. These are represented by this last criterion.

Subcriteria weighted at 5%; points range from -2 to 3:

  • Other factors: negative other factors → positive other factors
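Putting the seven criteria together, here is a minimal sketch of the overall calculation. The weights follow the section headings above; treating each criterion as a single -2..3 value (averaging its subcriteria first) is an assumption of this sketch, not part of the source.

```python
# Adapted angel scorecard sketch: weights from the section headings above,
# per-criterion points on the -2..3 scale. Averaging a criterion's
# subcriteria into one value first is an assumption of this sketch.
WEIGHTS = {
    "team": 0.30, "opportunity": 0.25, "product_ip": 0.15,
    "competition": 0.10, "sales": 0.10, "funding": 0.05, "other": 0.05,
}

def weighted_score(points: dict[str, float]) -> float:
    """Weighted sum of per-criterion points; the result ranges -2.0..3.0."""
    return sum(WEIGHTS[c] * points[c] for c in WEIGHTS)

print(weighted_score({"team": 2, "opportunity": 1, "product_ip": 3,
                      "competition": 0, "sales": 1, "funding": 2,
                      "other": 0}))  # 1.5
```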

Adapted from Source and Source

Ulu Rubric

Developed by Ulu Ventures for the early parts of their funnel, this scorecard approach evaluates innovators in seven categories, each on a five-point scale. (A scoring sketch follows the category lists below.)

Points awarded and the verdict in that category:

  • -2: Showstopper
  • -1: Fares poorly
  • 0: Neutral
  • 1: Stands out
  • 2: Outstanding

Fit

In this category the fit between the innovator and the adopting/interested organisation is measured. This will usually be the score for how well the innovation matches one of your innovation scouting search scopes.

Factors to consider are:

  • Is the innovation in the correct stage for your purpose/interest?
  • Does it fit the industry/area of expertise?
  • Does it fit your (desired) organisational values?
  • Does it fit your location requirements?

Market / Opportunity

  • Does the solution address a top-3 problem for a customer with budget?
  • Does the solution have traction?
  • Is the total addressable market large enough for you to invest?
  • Does the team have a focused go-to-market strategy?

Team to Market Fit

  • Does the team have the required domain expertise?
  • Is the team committed?

Team in General

  • Does the team come across as authentic?
  • Does the team have good ethics?
  • Does the team have character?

Product Development

  • Is learning baked into the product development?
  • Is the solution complexity handled well?
  • How well does the demo communicate the value?
  • How is the overall product experience?

Financial Viability

  • Is the current valuation good for us to invest?
  • Is the business model sound?
  • How diluted is the cap table now, and how diluted will it be in the future?
  • Are there exit possibilities?

Super Powers

  • Do they have a substantial competitive advantage?
  • Are there network effects built into the innovation?
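A minimal scoring sketch for this rubric. Treating a -2 (“Showstopper”) as a veto is one plausible reading of the verdict labels, not something the source confirms.

```python
# Ulu-style rubric sketch: seven categories scored on the -2..2 scale above.
# Reading "Showstopper" (-2) as a veto is an assumption of this sketch.
CATEGORIES = ["fit", "market", "team_market_fit", "team", "product",
              "financial", "super_powers"]

def evaluate(scores: dict[str, int]) -> tuple[int, bool]:
    """Return (total score, passes), where any -2 acts as a veto."""
    total = sum(scores[c] for c in CATEGORIES)
    return total, all(scores[c] > -2 for c in CATEGORIES)

print(evaluate({"fit": 2, "market": 1, "team_market_fit": 1, "team": 2,
                "product": 0, "financial": -1, "super_powers": 1}))
# (6, True)
```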

Adapted from Source

Anchored Risk Comparison Scorecard

If you have existing innovators, innovations, projects or experiences to anchor your evaluation against, then the risk comparison scorecard is a useful tool. When evaluating in a team, though, the anchor needs to be either identical or at least similar in most characteristics for all evaluators. Alternatively you can define an anchor value for each of the seven risk categories and evaluate against that:

  • Management Risk
  • Stage of Business Risk
  • Legislation / Political Risk
  • Manufacturing Risk
  • Sales and Marketing Risk
  • Funding / Capital Raising Risk
  • Competition Risk

For each of the seven risk categories you compare the risk for that category against the anchor and give points as follows:

Points awarded, compared to the anchor:

  • -2: A lot worse
  • -1: Slightly worse
  • 0: Normal / about the same
  • 1: Slightly better
  • 2: A lot better

Depending on your innovation activity (e.g. investments or procurement) you may weight each risk category differently to calculate the overall score.
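A minimal sketch of such a weighted comparison; the category weights here are hypothetical examples and would be set per innovation activity.

```python
# Anchored risk comparison sketch: each category is compared against the
# anchor on the -2..2 scale above. The weights are hypothetical examples.
RISK_WEIGHTS = {
    "management": 0.25, "stage": 0.10, "legislation": 0.05,
    "manufacturing": 0.15, "sales_marketing": 0.15,
    "funding": 0.15, "competition": 0.15,
}

def relative_score(comparisons: dict[str, int]) -> float:
    """Weighted sum of -2..2 comparisons; > 0 means lower risk than the anchor."""
    return sum(RISK_WEIGHTS[c] * v for c, v in comparisons.items())

print(relative_score({"management": 1, "stage": 0, "legislation": 2,
                      "manufacturing": -1, "sales_marketing": 1,
                      "funding": 0, "competition": -2}))  # 0.05
```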

Source

Conclusion

In the scorecards we’ve looked at there are many similarities. Depending on where they are used there is a different focus (e.g. on the financials), but the recurring factors are:

  • Team / Execution: Is the innovator the right person, and have they assembled the right team, to execute on the innovation?
  • Product / Innovation: Is the innovation solving a market need successfully?
  • Business Model / Delivery: Can the innovator ultimately deliver the innovation to market at a cost below what someone is willing to pay for it?
  • Market / Opportunity: Does the expected return on investing in the innovation justify the cost (time, money and any other resources, including opportunity costs)?

Most evaluations (especially early in your innovation funnel) will boil down to these factors. You may choose to weight one factor higher than another (e.g. team is commonly weighted higher than the rest, as a good team can change course on a bad product, but a bad team cannot execute on a good product).



Methods for evaluating Innovation from Ideas to Startups

The evaluation step of an innovation scouting process is the most time and resource consuming. A saying goes: “ideas are worthless, execution is everything”. While the core message, that an unexecuted idea cannot result in value, is true, the blanket statement that ideas are worthless distracts from a fact: the value in the idea is unearthed by executing it. The idea itself is a necessary precondition for value creation, but not a sufficient one. The execution must follow to turn the raw material into diamonds.

Investing early on in an idea with potential therefore often results in higher returns than investing at a later stage, where some of the risk has been mitigated, others have recognized the hidden value and the diamond is already on the horizon.

Evaluations are mostly seen as risk mitigation for investing resources into innovations. But they should also be seen as the opportunity to “get in” at the ground floor and be an early mover when none of your competitors has even realized a change is coming.

Benefits of structured evaluation of innovators

Evaluation is a trade-off between the resources invested to make a decision and the consequences of that decision. Both false positives (i.e. deciding to invest further resources into an innovation that turns out to be a failure) and false negatives (i.e. deciding to pass on an innovation that turns out to be successful) are costly. Add to that the notion that 95% of innovations fail [2] and the need for a repeatable and improvable approach to evaluating innovation becomes apparent.
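To make the trade-off concrete, here is a toy calculation with entirely hypothetical cost figures, comparing the two degenerate policies of investing in everything versus passing on everything:

```python
# Toy expected-cost comparison; all figures are hypothetical.
p_success = 0.05                 # roughly the cited ~95% failure rate
cost_false_positive = 100_000    # resources sunk into a failed innovation
cost_false_negative = 500_000    # value forgone by passing on a winner

invest_all = (1 - p_success) * cost_false_positive  # invest in everything
pass_all = p_success * cost_false_negative          # pass on everything
print(invest_all, pass_all)  # 95000.0 25000.0 per evaluated innovation
```

A good evaluation process earns its cost by beating both of these extremes.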

“We often find several purposes for evaluating innovation. The main purposes though, are to study and communicate the value, the innovation has created. At the same time, the evaluation is also used as a progress and management tool. The key ingredients here are systematic data collection and measurement.”

National Centre for Public Sector Innovation Denmark (COI)

Does this mean that gut feelings or anecdotal observations must give way to purely evidence-based methods? Absolutely not. Structure can be established in any evaluation method with the goal of documenting the result. Structure does not imply quantitative methods, but rather repeatable methods. The accuracy of a subjective evaluation by an individual domain expert can be tremendous, but it always includes bias; oftentimes inertia to change and other factors reduce its accuracy. The only way to identify these issues, though, is to tie the original evaluation to the eventual result, which can come months, years or decades later. Structured documentation leads to both transparency and accountability in the evaluation process.

Categories of Innovation Evaluation

The term evaluation applies to a vast array of methods, from evidence-based financial methods and pattern matching to purely experience-based gut feeling. Each of these has its place in the innovation scouting process at different times, but there are four distinct categories.

Automated Innovation Evaluation Methods

In early stages of your innovation funnel, where the quantity of ideas is high, the choice falls on highly automatable (i.e. low investment) evaluations. These can be as basic as a questionnaire that doesn’t let the innovator continue if certain preconditions aren’t met (e.g. in government-based funding schemes it is often a requirement to be an established company in that country), or an innovation scout determining with a checklist that the innovation is outside of the predetermined scope they are working from.
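A minimal sketch of such a gate; the two rules are hypothetical examples of the hard preconditions mentioned above.

```python
# Automated precondition gate sketch; the rules are hypothetical examples.
def passes_gate(application: dict) -> bool:
    """Reject an application if any hard precondition fails."""
    rules = [
        application.get("registered_in_country") is True,       # funding rule
        application.get("industry") in {"energy", "mobility"},  # search scope
    ]
    return all(rules)

print(passes_gate({"registered_in_country": True, "industry": "energy"}))   # True
print(passes_gate({"registered_in_country": False, "industry": "energy"}))  # False
```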

Often these filters act as “gates”: they do not allow the innovator to enter your realm. But in innovation scouting it can be useful to establish the innovator as a lead anyway. The same innovator may not qualify at the moment, but your search scope may change or the innovation could pivot, so keeping an eye out for previously disqualified innovations is a useful tool for the innovation scout.

Typical evaluation methods in this category:

  • Automated questionnaires
  • Accounting integration (e.g. cannot surpass a certain revenue point)
  • Government reports (digital tax returns)

Typically these evaluation methods are used in:

  • Industry events by the innovation scout collecting leads
  • Hackathons with specific topics during the ideation phase
  • Investment or grant programs before submission

Fast Innovation Evaluation Methods

Automatic evaluations rely on hard metrics, whereas most innovation requires an analysis of its qualitative substance. When a large number of innovations requires evaluation and there is little negative impact from a bad decision, a trade-off on the quality of the judgement is made in favor of speed.

One approach is to use the “wisdom of the crowds”. This term, coined by Surowiecki [4], denotes the observation that polling large audiences often averages out the biases present in individual members. You see this approach applied at startup events and idea and pitch competitions, where the audience casts a vote to determine a winner.
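A small simulation illustrates the averaging effect; the “true” quality and the noise model are made up for the demonstration.

```python
# Wisdom-of-the-crowds sketch: individual votes are noisy, but their mean
# approaches the underlying consensus as the audience grows. Simulated data.
import random

random.seed(1)
true_quality = 3.4  # hypothetical underlying quality on a 1-5 scale
votes = [min(5, max(1, round(random.gauss(true_quality, 1.0))))
         for _ in range(500)]
print(sum(votes) / len(votes))  # close to 3.4
```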

In a similar vein, startup events can employ jury or expert panels during pitching competitions to score the innovation potential. Where the audience is often only asked to choose their favorite, the panel will usually judge a handful of criteria on a quantitative scale, but then follow up in a round of discussion later to determine the ultimate winner. (See our follow-up article on evaluating startups and innovations using scorecards.)

Often, though, the quantitative result is overridden by a discussion among the jury or experts. There are valid reasons for this, as predetermined scorecards do not always cover the breadth of innovation correctly. Unfortunately, discussions also have the potential to let the “loudest person in the room” get their way. Therefore an overridable quantitative approach should be audited by an independent party, depending on the implications of the decision being made.

In any case the organizer should ask the jury and experts which factors were missing from the scorecard, especially if it is determined that a missing factor caused a different winner to be chosen.

Structured innovation evaluation at events with innovote.io

Typical evaluation methods in this category:

  • Real-time Jury / Expert Panels using Scorecards
  • Crowd-sourced / Audience ratings using Winner Voting

Typically these evaluation methods are used in:

  • Startup events, pitch and idea competitions
  • Hackathons, Meetups, Unconferences
  • High level evaluation at the beginning of an innovation funnel

Analytical Innovation Evaluation Methods

The higher the potential impact of the decisions you are making based on an evaluation, the higher the need for more analytical methods to be applied. This will be the bulk of evaluations performed in an innovation scouting process by the innovation knowledge network.

Each group of people in the knowledge network should be involved in creating the detailed scorecard and include criteria that correspond to their expertise.

  • Domain experts will include criteria for evaluating technical or production feasibility, emerging market trends, innovativeness, etc., but also the industry experience of the innovation team (or single innovator).
  • Business experts will look at the business model in general: whether all parts of the supply chain are covered, which markets are being serviced and at what cost. A high-level view of the financials is also often useful, but more in terms of a trend analysis over a short window (3, 6 or 12 months).
  • Innovation scouts and managers will include criteria such as team skills, the distribution or lack of competencies and roles in the innovation entity, and the uniqueness of the idea and approach with respect to other innovations in the industry.

Some criteria of one group may overlap with those of another. In this case it is important to identify whether the same value is being measured and that it is not simply a naming issue. For example the term “team experience” can mean industry or professional experience to the domain experts but entrepreneurial experience to the innovation experts – two very different skill sets.

Screenshot of Structured Innovation Scouting And Evaluation With Innoscout

Typical evaluation methods in this category:

  • Scorecards
  • Qualitative summary judgement

Typically these evaluation methods are used in:

  • Innovation Scouting Funnels
  • Government funds to a certain degree
  • Accelerator programs

Formal Innovation Evaluation Methods

If the innovation is considered later stage (i.e. it has reached product-market fit and gained some traction), then any available financial data can be used to set a valuation for the innovation, innovator or company in question. Both early (angel) and later (venture capital) investors will use a variety of calculation models to estimate the worth of a company before investing.

The reliability of these calculation models can be improved through the use of statistical simulations that create a model from the base financial data and automate the introduction of certain events (investment, hires, repeating trends) into a future projection of the KPIs.

Screenshot from Startup Simulation Software Summit (summit.com)
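As a rough illustration of the idea, here is a minimal Monte Carlo sketch projecting one KPI under an uncertain growth rate with one injected event; all numbers are hypothetical.

```python
# Monte Carlo projection sketch: base revenue, uncertain monthly growth and
# an assumed one-off investment effect. All figures are hypothetical.
import random

def median_projection(base: float, months: int, runs: int = 10_000) -> float:
    """Median projected revenue after `months` across simulated runs."""
    outcomes = []
    for _ in range(runs):
        revenue = base
        for month in range(months):
            growth = random.gauss(0.04, 0.03)  # uncertain monthly growth
            if month == 6:
                growth += 0.02                 # assumed investment effect
            revenue *= 1 + growth
        outcomes.append(revenue)
    return sorted(outcomes)[runs // 2]

print(median_projection(100_000.0, months=12))
```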

Typical evaluation methods in this category:

  • Financial methods: First Chicago, Venture Capital Method, Discounted Cash Flow
  • Traction and/or Market Analysis
  • Simulations

Typically these evaluation methods are used in:

  • Angel, Accelerator or Venture Capital Investments
  • Merger and Acquisitions

How to structure innovation evaluation?

Many of the mentioned methods can and should be performed in a structured fashion to achieve transparency and repeatability and often even comparability. Let’s look at some methods in detail.

Questionnaire Automation

The most trivial family of methods to document are questionnaire automations: by definition the questionnaire schema is stored and can be pulled up at any time. Note though that versioning is important where questionnaires are reused. If possible, the innovation scouting system that offers application questionnaires should be able to clone previous questionnaires for reuse, which allows you to revisit past questionnaire versions instead of simply updating one master version (which would leave you not knowing which automatic filters were applied in previous instances).
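A minimal sketch of the clone-instead-of-mutate idea, with a hypothetical schema:

```python
# Questionnaire versioning sketch: every revision becomes a new immutable
# version, so past evaluations stay traceable to the exact filters applied.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Questionnaire:
    name: str
    version: int
    questions: tuple[str, ...] = ()

def clone_with(q: Questionnaire, questions: tuple[str, ...]) -> Questionnaire:
    """Create the next version instead of mutating the master copy."""
    return replace(q, version=q.version + 1, questions=questions)

v1 = Questionnaire("grant-intake", 1, ("Registered company?",))
v2 = clone_with(v1, v1.questions + ("Revenue below threshold?",))
print(v1.version, v2.version)  # 1 2 -- both versions remain retrievable
```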

Scorecards (Audience, Jury and Experts Panels)

Even a single audience question about which innovation is their favorite can be considered a scorecard. But as the complexity of these questions grows you arrive at a structure you would probably recognize as a typical scorecard.

Using digital tools for innovation evaluation, you have the automatic benefit of knowing which questions were asked, who answered them and how, and which weighting factors were employed, and you can reproduce the result at any time in the future.

More importantly, over time you can analyse which variables had the best predictive power, and not only change future evaluations but also course-correct more recent evaluations with pending decisions regarding new and follow-on investments.
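As a toy illustration of such an analysis, one could correlate stored per-criterion scores with later outcomes; all data here is made up, and statistics.correlation requires Python 3.10+.

```python
# Sketch of checking which scorecard variables were predictive: correlate
# historical per-criterion scores with a later success label. Made-up data.
from statistics import correlation  # Python 3.10+

team_scores   = [3, 1, 2, 3, 0, 2, 1, 3]
market_scores = [1, 2, 3, 1, 2, 0, 3, 2]
outcomes      = [1, 0, 1, 1, 0, 1, 0, 1]  # 1 = later judged successful

print(correlation(team_scores, outcomes))    # ~0.89: strong positive signal
print(correlation(market_scores, outcomes))  # ~-0.47: weaker, inverse relation
```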

Formal methods

This family of evaluations is structured by default, but there are a few things to be mindful of.

Algorithms may change over time, so a versioned history is necessary to fulfill the requirement of transparency. Similarly, all inputs to the algorithm must be documented. This obviously includes the data provided by the innovator, but also any parameter variables that may have been set and any context data that was used (for example historical market data or a machine learning training data set). Only if all inputs are available at a later stage can the algorithm produce the same output and, if required, be changed to adapt to new learnings over time.

Note: Some algorithms (e.g. Monte Carlo simulations) may be non-deterministic and include random elements. While these are powerful tools, they are also hard to document. The same simulation, even if run in parallel, cannot be expected to produce the exact same output given the same inputs; rather, the output range (a statistical set of probabilities) should be reproducible.
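One way to get full run-for-run reproducibility anyway is to record the RNG seed alongside the other inputs, as in this small sketch:

```python
# Recording the seed makes even a stochastic simulation reproducible
# run-for-run; without it, only the output distribution should match.
import random

def simulate(seed: int, n: int = 5) -> list[float]:
    rng = random.Random(seed)  # seed documented with the other inputs
    return [round(rng.gauss(0, 1), 3) for _ in range(n)]

print(simulate(42) == simulate(42))  # True: identical outputs, same seed
```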

Further reading and References

  1. National Center for Public Sector Innovation Denmark. (n.d.). Evaluating innovation. Center for Offentlig Innovation. https://www.coi.dk/en/what-we-do/evaluating-innovation/
  2. Nobel, C. (2011, February 14). Clay Christensen’s milkshake marketing. HBS Working Knowledge. https://hbswk.hbs.edu/item/clay-christensens-milkshake-marketing
  3. Merz, A. (2018). Mechanisms to select ideas in crowdsourced innovation contests – A systematic literature review and research agenda.
  4. Surowiecki, J. (2005). The wisdom of crowds. Anchor.
