
Leverage a balanced scorecard framework for software evaluation

There are a number of ways to approach software selection projects. Some are more Six Sigma focused, such as the quality function deployment (QFD) framework, and some are less academic in nature (e.g., selection based on a brand name). Either way, the goal of a software evaluation is to establish the criteria against which individual requirements will be evaluated to satisfy our client’s future needs. I recommend a balanced scorecard framework, in which four key criteria are used to evaluate software, each weighted based on its relative importance to the client. These criteria are as follows:

  1. Functionality: How well does the solution meet the business requirements?  The requirements (i.e., the specific “hows”) should be driven by the future “target business model,” and demonstrations should focus on evaluating the requirements that are truly unique to your business.  Functionality is best evaluated in scenario-based demonstrations, where participants grade business processes and the key requirements within each process.
  2. Technology Platform: How scalable is the software, and does the architecture fit with the business model? Look at the software’s deployment model (e.g., multi-tenant SaaS vs. on-premise), interoperability and integration platform, extensibility, maintainability, support, security and compliance, workflow and alerts, and document management capabilities.  Work with a technical expert to review the architecture rather than relying solely on the vendor’s word.
  3. Vendor Viability: How confident are you that the vendor will be around for years to come to support and evolve the software you select? The vendor you choose will be your long-term partner and is just as important as the software itself. Consider the vendor’s level of investment in R&D, financial health and risk of acquisition, size and relevancy of customer install base, strength of partner ecosystem, implementation methodology, input from customer references, and user and technical support capabilities.
  4. Usability & End User Productivity: Will your employees be able to use the tool easily and effectively? After all demonstrations are completed, participants should rank each software package on usability. Examples include how easy it is to navigate, find, and search for information within the system’s user interface, and the support for multiple client options, including mobile and tablet clients, which are growing in importance in the enterprise applications market.

After evaluating these four criteria, each software vendor will have a weighted overall score. 
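To make the scoring mechanics concrete, here is a minimal sketch of how a weighted overall score could be tallied. The criterion weights, the 1–5 grading scale, and the two vendors shown are illustrative assumptions only; in practice, the weights should come directly from the client’s prioritization of the four criteria.

```python
# Minimal sketch of the weighted balanced scorecard described above.
# The weights, the 1-5 scale, and the vendor scores are illustrative
# assumptions, not prescribed values.

# Weights reflect relative importance to the client and should sum to 1.0.
WEIGHTS = {
    "functionality": 0.40,
    "technology_platform": 0.25,
    "vendor_viability": 0.20,
    "usability": 0.15,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Return one vendor's weighted overall score.

    `scores` maps each criterion to its graded score (e.g., a 1-5 grade
    from demo participants and technical reviewers).
    """
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

# Example: two hypothetical vendors graded on a 1-5 scale.
vendors = {
    "Vendor A": {"functionality": 4.2, "technology_platform": 3.8,
                 "vendor_viability": 4.5, "usability": 3.5},
    "Vendor B": {"functionality": 3.9, "technology_platform": 4.4,
                 "vendor_viability": 3.7, "usability": 4.6},
}

for name, scores in vendors.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```

Whatever scale you choose, the point is that every vendor is graded against the same criteria and the same weights, so the resulting scores are directly comparable.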

You may be asking, “Why is Total Cost of Ownership (TCO) not included in the balanced scorecard?”  We purposely do not assign a weight or grade to the TCO.  The TCO should be calculated carefully and considered alongside the weighted score, but it shouldn’t be graded (I will discuss the specifics of why in detail in a later blog post). The total score for each vendor is then compared to each vendor’s TCO and potential ROI to make a cost vs. value-based decision on which solution is right for your company.

In every software selection I’ve led, my clients have peppered in qualitative feedback throughout the lifecycle of the project, which is always valuable. But using this quantitative framework provides a simple way for my clients to define their priorities up front and then evaluate software side by side using the same criteria for each.  When the final score produced by this framework matches your “gut feel,” you’ll be satisfied that you have done the proper diligence and you will be confident in your decision!
