Knowledge Base

Evaluating Innovation with Scorecards

In this article we will take a look at some scorecards used to evaluate innovations, innovators, ideas and startups. If you’d like to use a scorecard shown here, feel free to create an account at Innoscout, the tool for innovation scouting, or Innovote, the mobile award system for startup competitions.

When deciding on an evaluation scorecard you have two major things to think about: the criteria and the scales.

A criterion represents a property or attribute that is important to you or in general. For example, the innovator (person or team) is generally important for successful execution of any innovation. What represents a large opportunity, on the other hand, can differ depending on whether you’re evaluating as an angel investor, a government-sponsored fund or a venture capitalist.

For each criterion the evaluator chooses a value on a scale as their evaluation. Oftentimes you provide guidance on how many points to award based on qualifying questions or attributes. Let’s take the example above: you might give one point each for a team that includes a CEO, CTO and CMO, and additional points if they are serial entrepreneurs. For the size-of-opportunity criterion you may define minimum and maximum values representing 0 or 5 points, and the evaluator can choose values in between.
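As a sketch of how such a scorecard is typically totalled, the overall score is the weight-adjusted sum of the per-criterion points. The criterion names, weights and point values below are illustrative, not prescribed by any particular method:

```python
# Minimal weighted-scorecard sketch; the criteria, weights and points
# below are illustrative, not taken from any specific scorecard.

def weighted_score(points, weights):
    """Combine per-criterion points into one overall score.

    points  -- dict mapping criterion -> points awarded by the evaluator
    weights -- dict mapping criterion -> relative weight
    """
    total_weight = sum(weights.values())
    return sum(points[c] * weights[c] for c in points) / total_weight

points = {"team": 3, "opportunity": 2, "product": 1}   # evaluator's choices
weights = {"team": 0.5, "opportunity": 0.3, "product": 0.2}

print(round(weighted_score(points, weights), 2))  # 2.3
```

Dividing by the weight sum keeps the result on the same scale as the individual points, even when the weights do not add up to exactly 1.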

Here we present a number of scorecards, sourced from various use cases, that have been adapted for use as early-stage evaluation questionnaires for innovations.

Cabbage and Zhang Scorecard

This six-part scorecard evaluates an innovative company (startup) along six equally weighted criteria. Each criterion consists of three possible questions that each award a point, resulting in a quantitative 0–3 scale per criterion (0 meaning that none of the factors are met).

  • Customer
    • Is there an unmet need or desire?
    • Is the market large enough? Either a niche market in which they are the only player (big fish strategy) or a much larger market in which they can gain market share (big market strategy).
    • Do they have reliable access to that market? No point if the sole channel is a single point of failure, or if market regulation/manipulation is in place or to be expected.
  • Product
    • Is the solution customer focused? No point if the value is unclear or multiple goals are targeted.
    • Does the solution have a low barrier to adoption? Cost of the solution also includes migration or adoption cost, so a point if the solution requires mostly low financial and time investment to adopt, has a low learning curve and is easily integrate-able into other systems and processes.
    • Is the value proposition clear? A point if the value calculation (ROI) for the solution can easily be made, understood and perceived.
  • Competition
    • Is a clear market inefficiency being met? One point for new markets (demand exceeds supply), a fragmented market with no clear market leader, or a stagnant market that is ready for disruption.
    • Is there a barrier to entry? Consider no point if there are existing economies of scale, existing mature products, well-established brands or price competition.
    • Does the solution have a defend-able USP (e.g. patent, technology, experience, unique approach)?
  • Timing
    • Is this a new innovation?
    • Does the demand exist?
    • Is the solution already commoditized? No point if low-cost players exist or there are many players with similar, substitutable products.
  • Financial
    • Is large capital risk involved? 1 point for minimal sunk costs.
    • Is a large amount of working capital required? 1 point if little working capital is required.
    • Are economies of scale expected? 1 point if it can be proven that margins increase with volume.
  • Team
    • Does the team have the experience? Are they subject matter experts?
    • Do they have the skills to deliver? Technical, engineering, professional or network.
    • Do they have the network to deliver? Connections to partners and suppliers.
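The scoring described above, one point per affirmative answer across six equally weighted criteria, can be sketched as follows (the boolean answers are a made-up example evaluation):

```python
# Sketch of the six-criteria, three-questions-each scoring described
# above; the boolean answers are a made-up example evaluation.

answers = {
    "Customer":    [True, True, False],
    "Product":     [True, False, True],
    "Competition": [False, False, True],
    "Timing":      [True, True, True],
    "Financial":   [False, True, False],
    "Team":        [True, True, True],
}

criterion_scores = {c: sum(qs) for c, qs in answers.items()}  # each 0-3
total = sum(criterion_scores.values())                        # 0-18 overall

print(criterion_scores["Timing"], total)  # 3 12
```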


Adapted Angel Scorecard Method

Popularized by Bill Payne in 2011, this scorecard is often used by angel investors to evaluate financial investment opportunities. It can certainly be used as a formal method for a detailed evaluation, but this minimally adapted version can also be used in earlier and faster evaluation rounds.

1. Strength of the Entrepreneur and the Management Team

A large part of the success of any innovation is its execution team, starting with the founder (or founding team) and the team they can assemble around them.

Subcriteria (weighted at 30%, scored on a scale of -2 to 3):

  • Experience of the founder: -2 no business experience; -1 in sales or technology; 0 as a product manager; 1 many years in any business sector; 2 in this business sector; 3 as CEO
  • Team strength (from lowest to highest score): only the entrepreneur; one competent player in place; team identified on the sidelines; competent team in place

2. Size of the Opportunity

This is the most difficult criterion to provide a reasonable scale for, as it will vary between countries, investment types and evaluators. What one person sees as a small market is a large one for another. I have decided to keep the figures as originally provided by Payne, but caution you to adapt these to your own needs if used in a different context.

Note though that the future revenue scale is not linear but parabolic. Both too small a potential and too large a potential are deemed negative for early stage investments; the latter usually implies further required capital down the line, introducing more risk.

Subcriteria (weighted at 25%, scored on a scale of -2 to 3):

  • Size of the target market (total sales): < $50 million; $100 million; > $100 million
  • Potential for revenues of target company in 5 years: < $20 million; > $100 million; $20 – $50 million

3. Strength of the Product and Intellectual Property

Here the questions ask how far along and mature their product development is, and whether it can be protected from competing innovators.

Subcriteria (weighted at 15%, scored on a scale of -2 to 3):

  • Is the product defined and developed? Not well defined, still looking at prototypes; well defined, prototype looks interesting; good feedback from potential customers; orders or early sales from customers
  • Is the product compelling to customers? This product is a vitamin pill; this product is a pain killer; this product is a pain killer with no side effects
  • Can this product be duplicated by others? Easily copied, no intellectual property; duplication difficult; product unique and protected by trade secrets; solid patent protection

4. Competitive Environment

Will the innovation have difficulties entering the market and acquiring market share?

Subcriteria (weighted at 15%, scored on a scale of -2 to 3):

  • Strength of competitors in this marketplace: dominated by a single large player; dominated by several players; fractured, many small players
  • Strength of competitive products: competitive products are excellent; competitive products are weak

5. Marketing/Sales/Partners

Can they produce the innovation and deliver it to the market?

Subcriteria (weighted at 10%, scored on a scale of -2 to 3):

  • Sales channels: haven’t even discussed sales channels; key beta testers identified and contacted; channels secure, customers placed trial orders
  • Sales and marketing partners: no partners identified; key partners in place

6. Funding needs

Similar to the second factor in the criteria on opportunity size this scale should be adapted to the audience or organisation performing the evaluation as a later stage or corporate investor will have a different scale to the angel as depicted here.

Subcriteria (weighted at 5%, scored on a scale of -2 to 3):

  • Need for additional rounds of funding: need venture capital; another angel round; none

7. Other factors

It is not efficient to add more elements to the above criteria, especially when used for fast evaluations. But there may be obvious factors not mentioned in the previous criteria that have a substantial impact on the innovation’s success. These are represented by this last criterium.

Subcriteria (weighted at 5%, scored on a scale of -2 to 3):

  • Other factors: negative other factors; positive other factors
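Assuming the weights named in the seven sections above and a hypothetical evaluation on the -2 to 3 scale, the weighted overall score could be computed like this (normalizing by the weight sum is our own choice for this sketch, not part of Payne's method):

```python
# Weighted total for the adapted scorecard; the weights come from the
# sections above, while the per-criterion points (-2..3) are a made-up
# example evaluation. Normalizing by the weight sum is our own choice.

WEIGHTS = {
    "team": 0.30, "opportunity": 0.25, "product": 0.15,
    "competition": 0.15, "marketing": 0.10, "funding": 0.05, "other": 0.05,
}

points = {
    "team": 2, "opportunity": 1, "product": 3,
    "competition": 0, "marketing": 1, "funding": 2, "other": 0,
}

score = sum(points[c] * w for c, w in WEIGHTS.items()) / sum(WEIGHTS.values())
print(round(score, 3))  # 1.429
```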

Adapted from Source and Source

Ulu Rubric

Developed by Ulu Ventures to filter the early parts of their funnel, this scorecard approach evaluates innovators in 7 categories, each on a 5-part scale.

Points and the verdict in that category (scale endpoints):

  • -1: Fares poorly
  • 1: Stands out


In this category the fit between the innovator and the adopting/interested organisation is measured. This will usually be the score for how well it matches one of your innovation scouting search scopes.

Factors to consider are:

  • Is the innovation in the correct stage for your purpose/interest?
  • Does it fit the industry/area of expertise?
  • Does it fit your (desired) organisational values?
  • Does it fit your location requirements?

Market / Opportunity

  • Is the innovation addressing a top 3 problem for a customer with budget?
  • Does the solution have traction?
  • Is the total addressable market large enough for you to invest?
  • Does the team have a focused go-to-market strategy?

Team to Market Fit

  • Does the team have the required domain expertise?
  • Is the team committed?

Team in General

  • Does the team come across as authentic?
  • Does the team have good ethics?
  • Does the team have character?

Product Development

  • Is learning baked into the product development?
  • Is the solution complexity handled well?
  • How well does the demo communicate the value?
  • How is the overall product experience?

Financial Viability

  • Is the current valuation good for us to invest?
  • Is the business model sound?
  • How diluted is the cap table now, and how diluted will it be in the future?
  • Are there Exit possibilities?

Super Powers

  • Do they have a substantial competitive advantage?
  • Are network effects built into the innovation?

Adapted from Source

Anchored Risk Comparison Scorecard

If you have existing innovators, innovations, projects or experiences to anchor your evaluation against, then the risk comparison scorecard is a useful tool. When evaluating in a team, however, the anchor needs to be either identical or at least similar in most characteristics. Alternatively you can define an anchor value for each of the seven risk categories and evaluate against that:

  • Management Risk
  • Stage of Business Risk
  • Legislation / Political Risk
  • Manufacturing Risk
  • Sales and Marketing Risk
  • Funding / Capital Raising Risk
  • Competition Risk

For each of the seven risk categories you compare the risk for that category against the anchor and award points as follows:

Points compared to the anchor:

  • -2: A lot worse
  • -1: Slightly worse
  • 0: Normal / about the same
  • 1: Slightly better
  • 2: A lot better

Depending on your innovation activity (e.g. investments or procurement) you may weight each risk category differently to calculate the overall score.
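A minimal sketch of combining the per-category comparisons into an overall score, assuming hypothetical category scores and an illustrative weighting:

```python
# Sketch of the anchored risk comparison: each of the seven categories
# is scored -2..2 against the anchor; the scores and the extra weight
# on funding risk below are purely illustrative.

scores = {
    "management": 1, "stage": 0, "legislation": -1, "manufacturing": 0,
    "sales_marketing": 2, "funding": -2, "competition": 1,
}
assert all(s in (-2, -1, 0, 1, 2) for s in scores.values())

weights = dict.fromkeys(scores, 1.0)  # start from equal weights
weights["funding"] = 2.0              # e.g. an investor stressing funding risk

overall = sum(scores[c] * weights[c] for c in scores) / sum(weights.values())
print(overall)  # -0.125
```

A negative overall value indicates the candidate carries more risk than the anchor under this weighting.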



In the scorecards we’ve looked at there are many similarities. Depending on where they are used there is a different focus (e.g. on the financials), but the recurring factors are:

  • Team / Execution: Is the innovator the right person and/or assembled the right team to execute on the innovation?
  • Product / Innovation: Is the innovation solving a market need successfully?
  • Business Model / Delivery: Can the innovator ultimately deliver the innovation to market at a cost below what someone is willing to pay for it?
  • Market / Opportunity: Does the result of investing in the innovation justify the cost (time, money and any other resources, including opportunity costs)?

Most evaluations (especially early in your innovation funnel) will boil down to these factors. You may choose to weight one factor higher than another (commonly team is weighted higher than the rest, as a good team can change course on a bad product, but a bad team cannot execute on a good product).

Photos by Sigmund, engin akyurt and Sean Benesh on Unsplash

Knowledge Base

Methods for evaluating Innovation from Ideas to Startups

The evaluation step of an innovation scouting process is the most time- and resource-consuming. A saying goes: “ideas are worthless, execution is everything”. While the core message, that an unexecuted idea cannot result in value, is true, the blanket statement that ideas are worthless distracts from a fact: the value in an idea is unearthed by executing it. The idea itself is a necessary precondition for value creation, but not a sufficient one. Execution must follow to turn the raw material into diamonds.

So investing early in an idea with potential often results in higher returns than investing at a later stage, when some of the risk has been mitigated, others have recognized the hidden value and the diamond is already on the horizon.

Evaluations are mostly seen as risk mitigation for investing resources into innovations. But they should also be seen as the opportunity to “get in” at the ground floor and be an early mover when none of your competitors has even realized a change is coming.

Benefits of structured evaluation of innovators

Evaluation is a trade-off between the resources invested to make a decision and the consequences of that decision. Both false positives (i.e. the decision to invest further resources into an innovation that turns out to be a failure) and false negatives (i.e. the decision to pass on an innovation that turns out to be successful) are costly. Add to that the notion that 95% of innovations fail [2] and the need for a repeatable and improvable approach to evaluating innovation becomes apparent.

“We often find several purposes for evaluating innovation. The main purposes though, are to study and communicate the value, the innovation has created. At the same time, the evaluation is also used as a progress and management tool. The key ingredients here are systematic data collection and measurement.”

National Centre for Public Sector Innovation Denmark (COI)

Does this mean that gut feelings or anecdotal observations must give way to purely evidence-based methods? Absolutely not. Structure can be established in any evaluation method with the goal of documenting the result. Structure does not imply quantitative methods, but rather repeatable methods. The accuracy of a subjective evaluation by an individual domain expert can be tremendous, but it always includes bias; oftentimes inertia to change and other factors are present that reduce its accuracy. The only way to identify these issues, though, is to tie the original evaluation to the result, which can come months, years or decades later. Structured documentation leads to both transparency and accountability in the evaluation process.

Categories of Innovation Evaluation

The term evaluation applies to a vast array of methods, from evidence-based financial methods and pattern matching to purely experience-based gut feeling. Each of these has its place in the innovation scouting process at different times, but there are four distinct categories.

Automated Innovation Evaluation Methods

In the early stages of your innovation funnel, where the quantity of ideas is high, the choice falls on highly automate-able (i.e. low-investment) evaluations. These can be as basic as a questionnaire that doesn’t let the innovator continue if certain preconditions aren’t met (e.g. in government-based funding schemes it is often a requirement to be an established company in that country), or an innovation scout determining via a checklist that the innovation is outside of the predetermined scope they are working from.

Oftentimes these filters act as “gates”: they do not allow the innovator to enter your realm. But in innovation scouting it can be useful to establish the innovator as a lead anyway. That same innovator may not qualify at the moment, but your search scope may change or the innovation could pivot, so keeping an eye on previously disqualified innovations is a useful tool for the innovation scout.

Typical evaluation methods in this category:

  • Automated questionnaires
  • Accounting integration (e.g. cannot surpass a certain revenue point)
  • Government reports (digital tax returns)

Typically these evaluation methods are used in:

  • Industry events by the innovation scout collecting leads
  • Hackathons with specific topics during the ideation phase
  • Investment or grant programs before submission

Fast Innovation Evaluation Methods

Automatic evaluations rely on hard metrics, whereas most innovation requires an analysis of its qualitative substance. When a large number of innovations require evaluation and there is little negative impact from a bad decision, a trade-off on the quality of the judgement is made in favor of speed.

One approach is to use the “wisdom of the crowds”. This term, coined by Surowiecki [4], denotes that polling large audiences often averages out the biases present in the individual members. You see this approach applied at startup events and idea and pitch competitions, where the audience casts a vote to determine a winner.
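A toy simulation illustrates why large audiences help: individually noisy votes average toward the underlying quality as the crowd grows. All numbers below are invented for illustration:

```python
# Toy "wisdom of the crowds" illustration: individually noisy votes
# average toward the underlying quality as the audience grows.
# All numbers are simulated, not real event data.
import random

random.seed(7)
TRUE_QUALITY = 6.5                 # hypothetical "true" score out of 10

def vote():
    # each audience member errs by a random personal bias
    return TRUE_QUALITY + random.uniform(-3, 3)

small_crowd = [vote() for _ in range(5)]
large_crowd = [vote() for _ in range(5000)]

def mean(xs):
    return sum(xs) / len(xs)

# the small crowd's average can land anywhere in a wide band; the
# large crowd's average lands close to the true quality
print(round(mean(small_crowd), 2), round(mean(large_crowd), 2))
```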

In a similar vein, startup events can employ jury or expert panels during pitching competitions to score the innovation potential. Where the audience is often only asked to choose their favorite, the panel will usually judge a handful of criteria on a quantitative scale, but then follow up in a round of discussions to determine the ultimate winner. (See our follow-up article on evaluating startups and innovations using scorecards.)

Oftentimes, though, the quantitative result is overridden by a discussion among the jury or experts. There are valid reasons for this, as predetermined scorecards do not always cover the breadth of innovation correctly. Unfortunately, discussions also have the potential to let the “loudest person in the room” get their way. Therefore an override-able quantitative approach should be audited by an independent party, depending on the implications of the decision being made.

In any case the organizer should ask the jury and experts which factors were missing from the scorecard, especially if it is determined that a missing factor caused a different winner to be chosen.

Structured innovation evaluation at events

Typical evaluation methods in this category:

  • Real-time Jury / Expert Panels using Scorecards
  • Crowd-sourced / Audience ratings using Winner Voting

Typically these evaluation methods are used in:

  • Startup events, pitch and idea competitions
  • Hackathons, Meetups, Unconferences
  • High level evaluation at the beginning of an innovation funnel

Analytical Innovation Evaluation Methods

The higher the potential impact of the decisions you are making based on an evaluation, the higher the need for more analytical methods to be applied. This will be the bulk of evaluations performed in an innovation scouting process by the innovation knowledge network.

Each group of people in the knowledge network should be involved in creating the detailed scorecard and include criteria that correspond to their expertise.

  • Domain experts will include criteria for evaluating the technical or production feasibility, emerging market trends, innovative-ness etc, but also industry experience of the innovation team (or single innovator).
  • Business experts will look at the business model in general, if all parts of the supply chain are covered, which markets are being serviced and at what cost. A high level view on the financials is also often useful, but more in terms of a trend analysis of a short window (3, 6, or 12 months).
  • Innovation scouts and managers will include criteria such as team skills, distribution or lack of competencies and roles in the innovation entity, uniqueness of their idea and approach with respect to other innovations in the industry.

Some criteria of one group may overlap with those of another. In this case it is important to identify whether the same value is being measured and that it is not simply a naming issue. For example, the term “team experience” can mean industry or professional experience to domain experts but entrepreneurial experience to innovation experts – two very different skill sets.

Screenshot of Structured Innovation Scouting And Evaluation With Innoscout

Typical evaluation methods in this category:

  • Scorecards
  • Qualitative summary judgement

Typically these evaluation methods are used in:

  • Innovation Scouting Funnels
  • Government funds to a certain degree
  • Accelerator programs

Formal Innovation Evaluation Methods

If the innovation is considered later stage (i.e. has reached product market fit and gained some traction) then using any available financial data to set a valuation of the innovation, innovator or company in question is possible. Both early (angel) and later (venture capital) investors will use a variety of calculation models to estimate the worth of a company before trying to invest.

The reliability of these calculation models can be improved through the use of statistical simulations that create a model from the base financial data and automate the introduction of certain events (investment, hires, repeating trends) into a future projection of the KPIs.

Screenshot from Startup Simulation Software Summit

Typical evaluation methods in this category:

  • Financial methods: First Chicago, Venture Capital Method, Discounted Cash Flow
  • Traction and/or Market Analysis
  • Simulations

Typically these evaluation methods are used in:

  • Angel, Accelerator or Venture Capital Investments
  • Merger and Acquisitions

How to structure innovation evaluation?

Many of the mentioned methods can and should be performed in a structured fashion to achieve transparency and repeatability and often even comparability. Let’s look at some methods in detail.

Questionnaire Automation

The most trivial family of methods to document are questionnaire automations: by definition, the questionnaire schema is stored and can be pulled up at any time. Note though that versioning is important where questionnaires are reused. If possible, the innovation scouting system that offers application questionnaires should be able to clone previous questionnaires for reuse. This allows you to revisit past questionnaire versions instead of simply updating one master version (which would leave you not knowing which automatic filters were applied in previous instances).
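One way to sketch the clone-based versioning described above (the `Questionnaire` class and its fields are hypothetical, not the API of any specific tool):

```python
# Sketch of clone-based questionnaire versioning; the Questionnaire
# class is hypothetical, not the API of any specific tool.
import copy

class Questionnaire:
    def __init__(self, questions, version=1):
        self.questions = list(questions)
        self.version = version

    def clone(self):
        # reuse creates a new version; past versions stay queryable
        return Questionnaire(copy.deepcopy(self.questions), self.version + 1)

v1 = Questionnaire(["Registered company in this country?"])
v2 = v1.clone()
v2.questions.append("Revenue below a set threshold?")

print(v1.version, len(v1.questions))  # 1 1 -- old filters remain known
print(v2.version, len(v2.questions))  # 2 2
```

Because the old version is never mutated, you can always reconstruct which automatic filters were applied to any past application round.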

Scorecards (Audience, Jury and Experts Panels)

Even a single audience question regarding which innovation is their favorite can be considered a scorecard. But as the complexity of these questions grows, you arrive at a structure you would probably recognize as a typical scorecard.

Using digital tools for innovation evaluation, you have the automatic benefit of knowing which questions were asked, who answered them in what way and which weighting factors were employed, and you can reproduce the result at any time in the future.

More importantly, over time you can analyse which variables had the best predictive power and not only change future evaluations but also course-correct more recent innovation evaluations with pending decisions regarding new and follow-on investments.

Formal methods

This family of evaluations is structured by default, but there are a few things to be mindful of.

Algorithms may change over time, so a versioned history is necessary to fulfill the requirement of transparency. Similarly, all inputs to the algorithm must be documented. This obviously includes the data provided by the innovator, but also any parameter variables that may have been set and any context data that was used (for example historical market data or a machine learning training data set). Only if all inputs are available at a later stage can the algorithm reproduce the same output and, if required, be changed to adapt to new learnings over time.

Note: some algorithms (e.g. Monte Carlo simulations) may be non-deterministic and include random elements. While these are powerful tools, they are also hard to document. The same simulation, even if run in parallel, cannot be expected to produce the exact same output given the same inputs; rather, the output range (a statistical set of probabilities) should be reproduce-able.
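A tiny Monte Carlo sketch illustrates the point: individual runs differ, but the output distribution is reproducible within statistical bounds. The growth model and all figures below are invented for illustration:

```python
# Tiny Monte Carlo sketch: single runs differ, but the distribution of
# outcomes is reproducible within statistical bounds. The growth model
# and all figures are invented for illustration.
import random

def simulate_year(revenue, rng):
    # hypothetical model: next year's growth is uncertain, -10% to +40%
    return revenue * (1 + rng.uniform(-0.10, 0.40))

def mean_projection(n_runs, seed):
    rng = random.Random(seed)
    outcomes = [simulate_year(1_000_000, rng) for _ in range(n_runs)]
    return sum(outcomes) / len(outcomes)

# different seeds give different individual runs, yet both means land
# near the analytic expectation of 1.15 million
print(round(mean_projection(10_000, seed=1)))
print(round(mean_projection(10_000, seed=2)))
```

Recording the seed alongside the other inputs makes even a single run exactly reproducible, while the distribution remains the property to document for audits.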

Further reading and References

  1. National Center for Public Sector Innovation Denmark. (n.d.). Evaluating innovation. Center for Offentlig Innovation.
  2. Carmen Nobel. (2011, February 14). Clay Christensen’s milkshake marketing. HBS Working Knowledge.
  3. Merz, Alexander. (2018). Mechanisms to Select Ideas in Crowdsourced Innovation Contests – A Systematic Literature Review and Research Agenda.
  4. Surowiecki, J. (2005). The wisdom of crowds. Anchor.

Photos by Teemu Paananen, Alain Pham and Jon Tyson on Unsplash

Knowledge Base

The Innovation Scouting Process

In What is Innovation Scouting? we discussed the necessity for organisations to implement innovation scouting into their culture. As these organisations grow and innovation leads become numerous they need to start adding dedicated resources so that leads don’t start falling through the cracks. In this article we present a best practice approach to establishing a structured innovation scouting process.

Innovation is often considered finding the “needle in the haystack”. Consequently many organisations stick with an unstructured, reactive approach to finding innovation, the assumption being that if they just shout out their needs loudly enough and are well connected, an innovation will appear on the horizon. It should be obvious that the competition for innovation is large and growing constantly, so this approach is akin to playing the lottery.

Simply copying the processes that thought leaders in the industry have published, or subscribing to one of the many innovation scouting tools and service providers, is similarly doomed to failure. Not investing the required work to shape and define your innovation activities means, at best, a low return on your time and money invested and, at worst, a worse position than you started out with.

“Simply implementing a nicely designed methodology for technology foresight or running weighted queries in a startup database is of little or no value.”

Matteo Fabiano, Firematter (Source)

Although the four phases of the innovation scouting funnel are presented linearly here, they are very much parallel in practice. A classic waterfall approach, where the search steps stop and push the results of that phase into an evaluation and implementation phase, is not only wasteful (you might miss out on an important new trend during the following cycles) but also prevents you from reacting fast enough to the learnings you acquire in each phase. Borrowed from software development, your innovation processes should be agile, with hypothesis testing at the core and repeated questioning of assumptions and course corrections. The primary actors in each step of the innovation scouting process are distinct, which also facilitates the parallelism.

Especially the first phase, the definition of the search scopes, should be repeated constantly. It is so dependent on the organisation, the strategy, the market and the environment that small changes in this context have a magnified impact downstream in the process. Months or even years invested in building an innovative partnership may turn out to be dead on arrival because a certain industry trend that formed its basis has since disappeared.

The following diagram shows all four phases of the innovation scouting process with the actors involved. Note the funnel at the center, which links the deliverables of each phase to each other. The qualified innovation, as the desired result, can be traced back to the evaluations performed, to the scouting activity (where did the lead come from) and ultimately to the search scope that the innovation need originated from.

The images show the four steps of the innovation scouting process organized as an innovation scouting funnel.
The Innovation Scouting Funnel


The scopes for scouting for innovation commonly fall into three categories:

  • Claims about the future: an extrapolation of trends or innovative seeds that the organisation discovers through their network or own research.
  • Unmet needs: internal or external requirements that are either not met or not met at the desired level (e.g. customers that do not purchase your products or services for lack of certain functionality, or road blocks in internal processes)
  • Desired benefits: the complement of needs; rather than a pain that requires solving, these are more often vitamins with improvement potential (e.g. existing customers requesting additional value or internal processes that could be improved through better tooling)

Each category requires tapping into your “knowledge network”. Neither the innovation managers nor the innovation scouts can be relied on to have the complete picture for this phase. It is their objective to gather this information from the experts. Members of the innovation team should be seen as facilitators of knowledge collection and management, not as oracles.

“scouting needs to follow a methodical and systematic approach, irrespective of the technology in question. There are likely to be hundreds of thousands of technology sources to identify and annotate – even in a focused search – and this requires careful collation and tracking of information for later selection.”

Sagentia (Source)

In this early stage it is important to be open to a vast number of ideas. The scout should free themselves as much as possible from bias rooted in past experience, their own network and other influences, as any such effect will follow your innovation efforts down to the results.

Oftentimes tools borrowed from market research (e.g. surveys, workshops) or specific ideation methods (e.g. lead user) will be used to brainstorm or collect ideas for each of the three categories. The knowledge network should find a balance between users/consumers and internal and external business and domain experts, with care to have at least a representation of each group. Don’t overlook the obvious opportunities that exist in your own organisation: front-line employees may have been ignored previously but can have valuable ideas to improve their daily work. These often carry the potential for high return at very low cost and investment.

The beginning of the scoping phase is marked by creative freedom without much constraint, similar to formal brainstorming activities, which results in a broad set of ideas. But during the later refinement stages of the search scopes the organisational strategy will play an important role. This is certainly a bias being reintroduced, but not at the personal level mentioned above; rather at the organisational level. As an example, an automotive company may have uncovered a great need for solar panels during the early phases, but it requires a rare organisational culture to be open to such a big departure from the core competency. The plethora of new and unique solutions will need to fit the innovative capability of the organisation it serves.

The scoping phase may profit from some tooling, but keep in mind that tooling also restricts the creativity which is so vital in this step. The results usually take the form of a presentation to management. Once they have given their approval, though, tooling should enter the process. The scoping information forms the top of the innovation scouting funnel, and descriptions of the agreed-upon search scopes, what they mean for the organisation and their specifics (very analogous to requirements for a software project) are entered into the search database (sometimes referred to as a “claim database”). All future steps and results will ultimately lead back to one of your defined search scopes.

If you follow the manifesto of open innovation you may choose to publish your search scopes to the public (e.g. on your website) in order to display your openness to external participation and to signal your innovative nature. Marketing can be applied to make your innovation needs known and sometimes shortcut the scouting process for leads that find you instead of the other way around. We will look into this topic in a future article.


In this step you go from a clear definition of what kinds of innovations you are looking for to suitable candidates (innovators, innovations and ideas) that can fulfill them. As discussed previously, a lead can come from any employee in the organisation if the culture supports it, but in the context of a more formal innovation scouting process, this is where innovation scouts spend most of their time.

Depending on the number of search scopes and available innovation scouts, each individual may keep all of the scopes in the back of their head but focus their efforts on a few specific ones. When visiting a broad industry conference, for instance, a scout may find an innovation by coincidence while walking around or talking to someone, but will actively search for innovations in their assigned search scopes. The same applies to searching through databases, news sources and contacts. A good scout is triggered by any suitable innovation, but hunts for specific innovations.

Depending on the number of scouts available and the iterative nature of the process, choose a subset of the search scopes to work on in each iteration rather than overwhelming each scout with a large number.

Where the search scope is the input to this phase, the connection to a lead is the activity and a dataset of information on that lead is the resulting deliverable. The information received from an innovator has traditionally centered around a pitch deck, but this approach has several disadvantages. This has led to the pitch deck only being a small part of a more structured data gathering process used in a digital innovation scouting process.

Pitch decks are designed to provide high-level information to quickly spark interest in the reader. Most of the time they are created with investors in mind, so the financial aspects are front and center. Riskier topics (e.g. technology or market challenges) are often not highlighted, or are missing entirely, so as not to impede an investment opportunity. This also makes pitch decks very hard to compare. The evaluators in later stages then start long cycles of information gathering, requesting details and closing information gaps. Doing this at a later stage is far less efficient than enriching the data at the discovery stage.

Especially with a growing number of innovation leads, structured information datasets become exponentially more valuable for the evaluation phase. An innovation scout will use the pitch deck as a fast and easy tool to decide whether the innovator fits the search profile, but once that is confirmed, the lead is ushered through a process that has similarities to the KYC (know your customer) checks used in banking.

The set of information collected from each lead, usually through a series of questionnaires, is refined over time and is defined by the innovation knowledge network. Because its members act as evaluators in later stages, they are best suited to define what information they will need to make a decision. Basic factors such as financial KPIs will be among them, but also very domain- and industry-specific questions.
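A minimal sketch of how such a questionnaire-driven dataset might be assembled (the expert groups, the questions and the `collect_answers` helper are purely illustrative, not from any specific scouting tool):

```python
# Each expert group in the knowledge network contributes the questions
# it will need answered to evaluate a lead later on.
questionnaire = {
    "financials": [
        "What was last year's revenue?",
        "What is the current monthly burn rate?",
    ],
    "domain": [
        "Which regulatory approvals does the technology require?",
    ],
}

def collect_answers(questionnaire, answers):
    """Build the structured dataset for a lead, flagging unanswered questions."""
    dataset = {}
    for group, questions in questionnaire.items():
        dataset[group] = {q: answers.get(q, "MISSING") for q in questions}
    return dataset

# Answers received from one lead; gaps are made explicit rather than lost.
lead_answers = {"What was last year's revenue?": "EUR 1.2m"}
dataset = collect_answers(questionnaire, lead_answers)
```

Making missing answers explicit is what lets evaluators work without long back-and-forth cycles: the gaps are visible at discovery time, not at evaluation time.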

Obviously, additional requests for information during the evaluation phase cannot be fully avoided, but the dataset gathered during discovery should allow a large part of the evaluation to be performed without further questions. Leads that pass an early evaluation then move on to more detailed datasets and interviews to complete the evaluation.

A modern innovation scouting solution will allow the creation and management of these questionnaires and their distribution to identified leads, and will manage the process all the way through to the evaluation.


The innovation scout acts as the facilitator in this step, but there is a heavy reliance on the innovation knowledge network used during scoping.

In contrast to the first phase, however, there is an implicit order in which the groups in the network should evaluate. Where scoping relied on users and consumers for ideas first (to set the stage, so to speak), here they are involved at a later stage. By no means should you ignore the users or consumers, but rather let the more objective measures (the financials by the business experts, the feasibility checks by the domain experts and even the company strategy) decide first. An innovation that users find excellent is of no use if the financials don’t make sense or the promise cannot be fulfilled technically.

Similar to the data collection from the innovators, the evaluation data should follow a structured approach as well. In this case the expertise of the innovation team (managers and scouts) should define which evaluations are performed and on which scales (ideally a mix of quantitative, to foster automation, and qualitative, so as not to lose the intricacies of each evaluator’s opinion).
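One way to combine the two kinds of scales is to aggregate the quantitative scores into a weighted total while carrying the qualitative notes along unchanged. A hedged sketch (the `score_innovation` function, the criteria and the weights are hypothetical, echoing the scorecard examples at the top of this article):

```python
def score_innovation(evaluations, weights):
    """Aggregate quantitative scores per criterion; keep qualitative notes intact.

    evaluations: list of dicts like {"criterion": ..., "score": 0-5, "note": ...}
    weights: dict mapping criterion -> weight (assumed to sum to 1.0)
    """
    total = sum(e["score"] * weights[e["criterion"]] for e in evaluations)
    notes = [(e["criterion"], e["note"]) for e in evaluations if e.get("note")]
    return total, notes

# Two equally weighted criteria, as in a simple scorecard
weights = {"team": 0.5, "market": 0.5}
evaluations = [
    {"criterion": "team", "score": 4, "note": "serial entrepreneurs"},
    {"criterion": "market", "score": 2, "note": ""},
]
total, notes = score_innovation(evaluations, weights)  # total = 3.0
```

The numeric total enables automated ranking and filtering across many leads, while the paired notes preserve the evaluator opinions that a single number would flatten away.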

How long an innovation remains in this evaluation phase will depend on the speed of the organisation and the urgency of the innovation. If longer cycles are necessary, it is important to maintain an active innovator relationship (especially since, if you have collected all information beforehand, there is less need to communicate in the early evaluation phase). The innovation scout remains the primary point of contact during this phase and should update the innovator on new developments and touch base regularly; innovators are prone to change, and any new information should be added to the evaluation dataset. Set clear expectations for the innovator about the organisation’s intentions and processes. Otherwise you may decide to proceed with an innovation after evaluation, only to find out the innovator has moved on.

This communication effort and, effectively, hand-holding of innovators distracts the innovation scout from their core objective – finding new innovation. So although it is necessary, tooling support should automate it as much as possible, for example by informing the innovator automatically about the progress of the evaluation via email, requesting updates and so on. A good innovation scouting system becomes the central hub for your innovation funnel activities.
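As a small illustration of such automation, a scouting tool might generate status updates from a template whenever an evaluation stage changes (the template wording and the `status_email` helper are hypothetical):

```python
from string import Template

# Hypothetical status-update template; a scouting tool would fill and send
# this automatically whenever a lead moves to a new evaluation stage.
STATUS_TEMPLATE = Template(
    "Hello $innovator,\n"
    "your submission '$title' has moved to stage: $stage.\n"
    "We will be in touch with next steps."
)

def status_email(innovator, title, stage):
    """Render the notification text for one lead's stage change."""
    return STATUS_TEMPLATE.substitute(innovator=innovator, title=title, stage=stage)

msg = status_email("Ada", "Solid-state battery", "expert review")
```

Even this trivial automation keeps the innovator informed without consuming scout time, which is the point of making the scouting system the central hub.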


What comes out of the innovation scouting funnel are qualified innovations. That means they have support from domain and business experts, they fit the strategy of the organisation and have signs of early market validation. Depending on the scope they originated from and also the nature of the innovation, an appropriate implementation is then chosen.

Generally speaking, there are three typical outcomes to an innovation scouting initiative: procurement, product development, and investment/M&A. Sometimes one approach is chosen as a test and then morphs into another (e.g. close procurement partnership results in acquisition of the innovation).

During this phase a hand-off occurs from the innovation scout, as the process lead so far, to a dedicated implementation manager. This role will often still be part of the larger innovation organisation in order to bridge the different cultures that will ultimately collide, but the scout’s job is done at this point.


Very often, tools and knowledge from sales or procurement funnels are applied to the innovation scouting process without further adaptation. But innovation, with its impact across organisational boundaries, influence from numerous stakeholders and very particular challenges, simply doesn’t fit neatly into those existing templates, causing the associated methods and tools to fail.

The innovation scouting process described here can be used as a blueprint to mitigate a large part of the uncertainty involved in innovation scouting. But the role of the innovation scout is central to all activities, through all phases, until the innovation is implemented.

“By driving needs definition and aiming for a strong fit between need and idea, [the innovation scout] can reduce the risk of launching products that don’t sell. With organizational support and strong incentives, a formal scouting process can also effectively address internal resistance to externally-generated ideas.”

The Management Roundtable (Source)

Further reading and References

  1. Dahlander, L., & O’Mahony, S. (2017, July 18). A Study Shows How to Find New Ideas Inside and Outside the Company. Harvard Business Review.

Photos by N. and Scott Graham on Unsplash


What is Innovation Scouting?

Roots in open innovation

The term innovation scouting finds its roots in the field of open innovation. Building on earlier findings that openness benefits innovation, Prof. Chesbrough (Haas School of Business) coined the term.

The idea is that companies need to allow a flow of innovation across their organisational borders in order to cope with the challenges of ever faster innovation life cycles. The traditional approach of secrecy and siloed research and development must give way to a distributed innovation process.

“Half the company’s ideas must come from the outside”

A. G. Lafley, retired CEO of P&G (Source: Making Innovation Work)

Internal research and development, often burdened with organisational inertia, is being outpaced by smaller, unencumbered teams of innovators. This leads to a high risk of missing out on emerging market opportunities, especially those that are in stark contrast to the existing modus operandi.

At the same time, decreased internal R&D budgets must mitigate the ever-present threat of the parent organisation being disrupted by globalization and by new competitors with easy access to resources that used to be a significant barrier to entering the market.

Open innovation taps into external innovation, turning it into an opportunity before it becomes a risk.

Innovation scouting as a part of the discovery step of the open innovation process

In general an open innovation process follows four steps:

  1. Definition (where lies the interest/focus),
  2. Discovery (the actual search and evaluation of innovation),
  3. Procurement (the process of adopting what was found) and
  4. Integration into the organisation in order to multiply the expected effect.

Each step has its challenges, but the discovery phase in particular is prone to information overload. Even with a narrowly defined problem, there is a sea of hyperbole in innovation that makes it hard to find valuable innovators. But just as important is identifying the innovation that does not appear on the global innovation stage and is instead known only to real insiders. So-called “hidden champions” can be treasure troves of innovation.

“Hidden champions […] have reached leading positions in their markets with specific strategies [and] are far more suitable role models and instructive examples [that] small and midsize enterprises across the world can learn a lot from […]”

Simon, H. (2009). Hidden champions of the twenty-first century: The success strategies of unknown world market leaders. Springer Science & Business Media.

On a higher level it is important to recognize emerging technologies and trends before they become mainstream and be able to creatively combine overlapping technology areas to see big picture innovation on the horizon.

Innovation can be sourced vertically, that is from consumers to suppliers, and horizontally, between producers, competitors and industries.

Popular streams of open innovation research have concentrated on the vertical borders between producers and consumers or suppliers. Methods such as crowdsourcing, the lead user method and design thinking, which all tap into the creative potential of users and consumers, have evolved out of academia to become commonplace in most innovators’ toolboxes.

Innovation across organisational borders

The previously mentioned methods for uncovering innovative ideas all rest on the fact that, as a producer, finding your users and consumers is straightforward. In contrast, the discovery process across organisational borders, in the ocean of industry players, poses a much more challenging sourcing problem.

There are many ways of innovation cooperation between organisations: licensing of technology, joint ventures, spin-offs and acquisition.

Scouting, specifically “technology scouting”, refers to the active search for cooperation opportunities as part of the discovery step. The term has more recently been replaced by “innovation scouting” to represent a more holistic approach, encompassing the search not only for new technology, but for any form of new ideas, methods, processes and devices.

“Innovation is a process of redefining and reshaping a category or a reality. It may be connected to corporate culture, organization’s structure, business model, existing products and needs, customers and yes — technology. But it does not equal new technology.”

Monika Rozalska-Lilo, CEO CREATORS (Source)

Scouting is often seen primarily as inbound sourcing, where the provider of the innovation is external to your organisation. But more benefits are gained from bi-directional sourcing and combined discovery. That means, true to the original open innovation concept, offering your own insights and innovations to others, allowing the combined innovation to have a greater effect.

Bi-directional information exchange in the discovery process will also simplify the integration and adoption of any innovation you discover.

Types of Innovation Scouting: Trend, Startup and Cross-Industry

Startup scouting is a type of innovation scouting that currently dominates the space, and the two are often regarded as synonymous. But another type – trend scouting – is the actual foundation of any innovation scouting effort. With the goal of identifying change in your industry early and familiarizing yourself with new and upcoming terminology and stakeholder hot spots (i.e. conferences, trade shows and other networking opportunities), it builds the network and language you need to master in order to find, reach and talk to innovators.

The term “startup” has evolved from meaning an early-stage, externally financed business venture with scalable properties to describing any person or group of people working on a business idea. Being innovative or new is no longer a necessary attribute. This makes scouting for real innovators the core goal of “startup scouting”. Of course an innovator can take on various forms: from inventors and ideators to fully fledged innovative businesses, the innovation scout needs to be receptive to a whole spectrum of possibilities. After all, early discovery results in maximum participation potential.

A certain amount of effort, albeit with lower budgetary priority, should be invested in industries adjacent to your own – both in terms of trends and developments and in terms of significant innovators and innovations – as their core elements may find application in your own industry. For example, the core concepts of early peer-to-peer technology (as applied to music sharing) can today be seen in any number of industries and applications in a derived form (shared ownership, decentralized exchanges, …).

Pillars of Innovation Scouting

A lack of openness to external innovation is the first barrier to overcome in order to facilitate open innovation, and especially innovation scouting. Mature organisations that are used to developing everything in house must undergo a mindset change or face a high risk of being disrupted sooner or later.

Thus the first step, before dedicating resources to innovation scouting, is simply to create a culture of scouting. Every employee should be on the lookout for innovation as part of their daily work. This is true for outward-facing roles such as sales and business development, but less obvious team members may also come into contact with new ideas simply by having a vested interest in their industry and the interactions that follow from it.

A 2017 Harvard Business Review paper on current innovation scouting research stated that

“exposing employees to a broad range of external partners can lead to more innovation at the company. But if they spend too much time searching for new ideas outside their firm, this could detract from the work they do inside the company. They will presumably have less time to attend internal meetings, talk to colleagues, and stay on top of email. So while they may be increasing the potential for future innovation, their time away from firm activities could negatively affect the firm’s current productivity.”

So when the innovation scouting machine starts humming, the need for a structured approach grows and dedicating a role to innovation as a whole and innovation scouting specifically becomes necessary.

Further reading and References

  1. Davila, T., Epstein, M., & Shelton, R. (2012). Making innovation work: How to manage it, measure it, and profit from it. FT Press.
  2. Simon, H. (2009). Hidden champions of the twenty-first century: The success strategies of unknown world market leaders. Springer Science & Business Media.
  3. O’Connell, D. (2016). Harvesting external innovation: Managing external relationships and intellectual property. Routledge.
  4. Dahlander, L., & O’Mahony, S. (2017, July 18). A Study Shows How to Find New Ideas Inside and Outside the Company. Harvard Business Review.

Photos by Paul Skorupskas & Matt Ridley on Unsplash