Electronic dashboards and scorecards are powerful enterprise tools that provide executives with quick insight into business performance. They can be custom-built or based on reporting solutions offered by a number of vendors. Many organizations have come to rely on the key performance indicators (KPIs) found in dashboards. A brief analysis of KPIs often highlights important trends that can significantly impact strategic performance improvement initiatives. Dashboards have matured significantly over the last decade and have evolved into rich solutions possessing comprehensive graphical and tabular reporting capabilities.
However easy they may sound on paper, dashboard implementation projects can be extremely challenging because of complexities that aren't obvious to the inexperienced dashboard development team. Experience shows that such complexities, if not mitigated early in the project, can cause unnecessary delays or even project failure.
It's our experience that a dashboard project needs to be treated like any other enterprise IT initiative and attacked with rigor and diligence. In this article, we discuss some common mistakes in dashboard implementations. If you can avoid these pitfalls, you'll significantly improve your chances of having a successful project.
1. Defining too many KPIs. It's all too easy to get carried away during your KPI design process. Defining a large number of indicators is a common mistake, and it takes time, patience, and peer reviews to ensure that your KPIs are consolidated to the smallest possible MECE (mutually exclusive, collectively exhaustive) set. You might find it useful to start with a longer list and work it down to a smaller one.
Bear in mind that the complexity of your dashboard, and the effort required to design and implement it, will grow rapidly as the number of KPIs increases. KPI design is complex, and gaining consensus on how a KPI will work is time-consuming. If you have many KPIs, you'll probably need to extract data from many data sources. As a result, development will be slower, and user acceptance may take longer because the dashboard will likely become unwieldy and hard to understand.
Here's how to reduce your KPI set:
Extract dimensions. If you have three KPIs for three products, for example, consider consolidating these into one KPI that features "Product" as a dimension.
Normalize your KPIs. You may find that your initial list contains KPIs that are not mutually exclusive. Two KPIs may measure similar aspects of performance in different ways or contribute to more than one aggregated metric. One effective technique is to make a list of all source data fields required in your KPI set, then count the number of KPIs in which each source data field is used. Examine all KPIs that share the same source data field to see if there's room for normalization.
Take your time. Give your KPI design process the time it deserves. Trying to complete it in a day or two may generate a lot of output, but it won't produce the best results. Consider putting the KPI set away for a while and reviewing it later with a fresh eye.
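The field-counting technique described above can be sketched in a few lines. This is a minimal illustration; the KPI names and source data fields are made up for the example:

```python
from collections import defaultdict

# Hypothetical KPI definitions: each KPI maps to the source data
# fields its calculation requires (names are illustrative only).
kpi_sources = {
    "OnTimeDelivery": ["ship_date", "promise_date"],
    "AvgDeliveryDelay": ["ship_date", "promise_date"],
    "RevenuePerOrder": ["order_value", "order_count"],
}

# Invert the mapping: for each source field, list the KPIs that use it.
field_usage = defaultdict(list)
for kpi, fields in kpi_sources.items():
    for field in fields:
        field_usage[field].append(kpi)

# Fields used by more than one KPI are normalization candidates.
overlap = {f: kpis for f, kpis in field_usage.items() if len(kpis) > 1}
for field, kpis in sorted(overlap.items()):
    print(f"{field}: shared by {kpis}")
```

In this example, the overlap on ship_date and promise_date would prompt you to examine whether the two delivery KPIs can be merged or restructured.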
As you work through this process, make sure that your KPIs are MECE and well focused, and that ownership of the indicators is shared across the business rather than limited to specific segments. Also, ensure that your design allows for occasional redefinition of KPIs as target areas of the business improve and new challenges come into focus.
2. Implementing your solution before fully considering the structure and behavior of your metrics. Because the desired behavior and structure of a KPI can be complex, make an initial attempt to fully design each one before selecting the technology and implementing the KPIs. If you choose your reporting software before considering the KPI requirements, you may get stuck with a solution that supports your metrics less effectively than an alternative would.
At the start of the project, you should consider the structure and behavior of your KPIs in reasonable detail. A KPI's structure includes its text; its hierarchy (i.e., its subsets, if any); and its dimensions. A KPI's behavior includes considerations such as how it's banded, how its value is calculated from the source data, and how that value behaves when aggregated.
A good approach is to record the design of your metrics in a spreadsheet and then create prototypes that can be reviewed collaboratively to ensure that you start with the end in mind. Experience shows that these reviews will be iterative in nature, so don't be surprised if it takes time to gain consensus from key stakeholders. To further validate your design (and as a sanity check), consider manually calculating your suggested KPIs using historic data.
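A design record of this kind can also be captured as a simple structured type, mirroring the columns you might keep in the spreadsheet. This is only a sketch; every field name and value below is an illustrative assumption, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# A minimal sketch of a KPI design record; all fields are assumptions.
@dataclass
class KpiDesign:
    name: str
    description: str
    dimensions: List[str]        # e.g. ["Time", "Product", "Region"]
    parent: Optional[str]        # parent KPI in the hierarchy, if any
    aggregation: str             # "sum", "average", "weighted_average", ...
    bands: Dict[str, float]      # banding thresholds at each boundary
    higher_is_better: bool       # direction of the metric
    owner: str                   # business owner responsible for the design
    data_source: str             # system the source fields come from

on_time = KpiDesign(
    name="OnTimeDelivery",
    description="Share of orders shipped by the promise date",
    dimensions=["Time", "Product"],
    parent="CustomerSatisfaction",
    aggregation="weighted_average",
    bands={"green": 0.95, "amber": 0.85},
    higher_is_better=True,
    owner="Head of Logistics",
    data_source="OrderManagement",
)
print(on_time.name, on_time.bands["green"])
```

Keeping the record machine-readable makes it easy to generate prototypes and review checklists directly from the design.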
You should consider these eight key factors to ensure a comprehensive design:
Hierarchy. KPIs can have "parent-child" relationships, so consider whether a higher-level KPI can be modularized into smaller but still useful KPIs that, when combined, calculate the parent. Be careful, however, to keep hierarchies simple. A good rule of thumb is to carefully reconsider any three-level hierarchy; anything deeper is probably too much.
Aggregation. Child metrics can be aggregated into parent metrics, and parent metrics can be exploded into underlying child metrics. When calculating the value of an aggregation, you can use simple addition or averages, or you can use a more complex calculation such as a weighted average, median, or percentile range. Sometimes, complex multidimensional expression (MDX) formulas can be used to define aggregation logic, and these can be articulated independently of the selected technology (most tools support MDX). Double-check that all aggregations are accurate, because the obvious aggregation may not always be the correct one. Bear in mind that dimensions can also be used to aggregate data, as we explain below.
Dimensions. Across what dimensions will you want to aggregate and filter your data? Time is an obvious choice, but also consider other useful dimensions within your specific business context. Dimensions will have parent-child relationships with other dimensions in the case of a "snowflake" schema design - for example, a geography dimension might consist of countries that are split into states, which are further split into counties. When you aggregate using a parent dimension and the data is summarized, you need to consider the rules for calculating summary values. For example, you need to know the rules for what happens to the data if you summarize from daily to monthly.
Banding. Be sure to clearly define your banding rules at each level of aggregation. Green, amber, and red constitute the most common and effective banding structure. Care is needed when defining how the banding will behave. How do you know when the metric moves from green to amber, for example? Will the banding boundaries be spread equally across the possible range of values, or will some other scheme - for example, absolute values or an MDX formula - be used to define the behavior?
Weighting. A parent KPI may be a weighted summary of its child KPIs; however, these weightings will need to be carefully considered to ensure that the summarized KPI accurately reflects its definition.
Direction. As obvious as this may sound, you need to make sure it's clear whether a high or a low value indicates a positive result. For people unfamiliar with the KPI, this can be a source of confusion. Also, ensure that scales and measurements are clear. For example, does 0.1 actually mean 1 percent?
Ownership. Assign a business owner for the design of each KPI (one owner may own multiple KPIs). This ensures that someone is monitoring its utility and can suggest changes as the KPI comes into common use.
Data source. Ensure that your organization has data source(s) that contain the information needed to calculate the KPI. If you discover that electronic source data is not readily available, the scope of the project will increase; this is an issue that needs to be raised sooner rather than later. Manual entry of metric data can be time-consuming and complex, and if you need to support it, make sure that you have a tool that can easily be deployed for this purpose.
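Several of the factors above interact: a parent KPI's value comes from weighting its children, and banding is then applied to the result. The following sketch shows one way those pieces fit together; the weights, thresholds, and child values are hypothetical:

```python
# A minimal sketch of weighting and banding; all numbers are made up.

def weighted_parent(children):
    """children: list of (value, weight) pairs for the child KPIs."""
    total_weight = sum(w for _, w in children)
    return sum(v * w for v, w in children) / total_weight

def band(value, green_at, amber_at, higher_is_better=True):
    """Map a KPI value onto green/amber/red using absolute thresholds."""
    if not higher_is_better:
        # Flip the comparison for "lower is better" KPIs.
        value, green_at, amber_at = -value, -green_at, -amber_at
    if value >= green_at:
        return "green"
    if value >= amber_at:
        return "amber"
    return "red"

# Three child KPIs with values and business-assigned weights.
children = [(0.97, 0.5), (0.88, 0.3), (0.91, 0.2)]
parent_value = weighted_parent(children)
print(round(parent_value, 3), band(parent_value, green_at=0.95, amber_at=0.85))
```

Note how the direction flag changes the banding comparison: this is exactly the kind of behavior that needs to be agreed on in design, before any tool is configured.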
3. Allowing scope creep of KPI requirements. As implementation begins and end-users review progress, they often request changes. However, if they're not carefully managed, changes can impact your dashboard delivery schedule. Your KPI requirements should be treated formally, like any other software requirements.
A Statement of Requirements (SoR) documents the initial requirements and ensures that there's a common understanding of the scope of your dashboard implementation project. It includes MECE KPI requirements as well as other functional and nonfunctional requirements.
You should create a formal SoR and manage it carefully throughout your project to avoid scope creep and to ensure that your stakeholders get the dashboard they need. To ensure that value is delivered early to the business, prioritize KPIs to allow earlier iterations to implement the most important indicators.
One approach to prioritizing KPIs is to tag them with one of the following "MoSCoW" (must, should, could, want to) rules:
Must have - for KPIs that are fundamental to the system. Without these metrics, the system will be unworkable and useless. The "must haves" define the minimum usable subset, and your project guarantees to satisfy this set.
Should have - for important KPIs for which there is a work-around in the short term and which would normally be classed as mandatory in less time-constrained developments. The system will still be useful and usable without them.
Could have - for KPIs that can more easily be left out of the iteration under development.
Want to have but won't have this time around - for those KPIs that can wait until later. These are indicators captured in the design process but deferred or defined as out of scope.
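MoSCoW tagging can be kept in the same machine-readable form as the rest of the KPI design record. A minimal sketch, with made-up KPI names, might look like this:

```python
from enum import IntEnum

# MoSCoW priorities, ordered so that sorting puts "must haves" first.
class MoSCoW(IntEnum):
    MUST = 1
    SHOULD = 2
    COULD = 3
    WONT = 4  # want to have, but won't have this time around

# A hypothetical KPI backlog tagged with MoSCoW priorities.
backlog = [
    ("CustomerChurnRate", MoSCoW.SHOULD),
    ("OnTimeDelivery", MoSCoW.MUST),
    ("SocialMediaMentions", MoSCoW.WONT),
    ("AvgResolutionTime", MoSCoW.COULD),
]

# Plan iterations so the minimum usable subset is implemented first.
plan = sorted(backlog, key=lambda item: item[1])
for name, priority in plan:
    print(priority.name, name)
```

Sorting the backlog this way makes the iteration plan a direct consequence of the priorities, rather than a separate document that can drift out of sync.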
To minimize scope creep, make sure that your stakeholders fully understand the SoR and that you can achieve their approval and sign-off before commencing development. However, bear in mind that the SoR is not frozen in time; it's a living document that needs to be continually reviewed and updated as the dashboard is developed.
For larger projects, ensure that a change management process is set up that includes a workflow to raise change requests, estimate costs, and proceed with an approval or rejection decision by a project steering committee.
4. Leaving high-risk data sources until the end of the process. The greatest uncertainty in your project before development begins, and therefore the hardest thing to estimate, will be the complexity of the integration points with the data sources that will feed your dashboard. Since these will be disparate systems, they may not readily support the integration that's required for your reporting solution. So the integration points will need to be planned and designed early in the project.
As you design your dashboard development project, plan to tackle high-risk, difficult data sources early. Teams naturally tend to implement the easier data sources first to demonstrate early progress; however, if you leave the difficult ones until the end, you greatly increase the chances of falling behind schedule. Tackling data sources with high architectural risk first ensures that you have time in the project to absorb the inevitable unforeseen issues.
You may need to deal with many types of input data: relational databases, text files, Excel, CSV (comma-separated values) files, nonrelational data stores, application exports, and manually entered data and reports. Check that your tool supports the formats you have so that you won't need to deal with complex format conversions - don't underestimate the time and ongoing maintenance effort that you're signing up for in converting formats. Also, if your dashboard involves batch updates, don't be too ambitious with the frequency. Daily updates are often too aggressive; weekly updates may be more sensible.
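One common way to keep multiple input formats manageable is a loader registry that maps each format to a parser function, so every source yields rows in a uniform shape. This is a minimal sketch using standard-library parsers; the sample data is made up:

```python
import csv
import io
import json

# Parser functions that turn raw text into a list of row dicts.
def load_csv(text):
    return list(csv.DictReader(io.StringIO(text)))

def load_json(text):
    return json.loads(text)

# Registry mapping a format name to its parser.
LOADERS = {"csv": load_csv, "json": load_json}

def load(fmt, text):
    try:
        return LOADERS[fmt](text)
    except KeyError:
        raise ValueError(f"unsupported source format: {fmt}")

# Two hypothetical sources feeding the same KPI in different formats.
rows = load("csv", "region,orders\nNorth,120\nSouth,95\n")
rows += load("json", '[{"region": "West", "orders": 80}]')
print(rows)
```

Adding a new format then means writing one parser and registering it, rather than touching the KPI calculation code.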
5. Overlooking existing technology investments. Many organizations have already invested in dashboard technology without knowing it. Before considering a purchase of new dashboard toolsets, examine existing applications to see what reporting solutions they provide. Other departments or units may own tools that are already providing the functionality you need.
Review your organization's enterprise road map. You may find that certain packages are a better fit with your company's longer-term IT strategy, and this may impact your selection decision.
If your company has enterprise resource planning (ERP) or business intelligence (BI) systems in place, bear in mind that these assets may have reasonable dashboard features. It's worth investigating them to see if there's a fit. You may also want to review your in-house technical skills to determine whether they are sufficient to maintain the dashboard using your selected technology.
A thorough and comprehensive system selection process is critical in ensuring that you choose the tool that will best address your requirements. Take time to diligently perform a full evaluation of the various options. Your specific needs will dictate which application works best for you.
6. Developing all of your data sources and KPIs at once. As with many software projects, it's prudent to develop a dashboard iteratively using best-practice agile principles, including continuous testing, rapid prototyping, time-boxed development, and frequent releases. Because of the complexity of the data sources you're working with and the frequently changing requirements you'll encounter as end-users start to see progress, agile approaches work better than traditional waterfall approaches for dashboard projects.
Aim to implement a data source in each iteration rather than starting all of your KPIs at the same time, which may overwhelm the development team. A divide-and-conquer approach is well suited to the development of your KPIs.
This approach has the following advantages:
Lower risk. By dividing KPIs into iterations, you improve the project's chances of overall success because at least some KPIs will likely be delivered even in budget-constrained environments.
Higher quality. Each iteration is completely tested and should in principle be a fully functional release. Early, continuous testing results in fewer product defects.
Earlier benefit delivery. Since the first iteration implements a complete KPI, users can start taking advantage of the new tool quickly, rather than having to wait until the end of the project.
7. Underestimating testing effort. A dashboard is like any other piece of custom software: Early errors will destroy credibility and user confidence. The structure and behavior of KPIs and scorecards can be very complex, and testing them can be even more complicated. The complexity increases with the number of KPIs in your complete set, and it escalates rapidly if custom code has been developed to calculate and render your facts and dimensions. Develop robust scripts that test every scenario. These should be based on reliable source data sets that reflect the full range of inputs that a production environment generates.
In fact, from a project scheduling perspective, the complex calculations that dashboards typically implement mean that you can expect to spend at least as much time on testing as on development. Relying on users' experiences alone is not enough. You should prepare a comprehensive test plan and staff the project with specialized testers who are experienced in identifying unusual boundary cases.
One helpful technique is to create spreadsheets that manually calculate KPIs and compare the results to your dashboard output.
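The spreadsheet cross-check can be automated as a simple assertion: recalculate the KPI independently from raw figures and fail loudly if the dashboard disagrees. A minimal sketch, with hypothetical numbers:

```python
# A minimal sketch of cross-checking a dashboard KPI value against an
# independently calculated figure; all numbers are hypothetical.

def assert_kpi_matches(dashboard_value, manual_value, tolerance=1e-6):
    """Fail loudly if the dashboard disagrees with the manual calculation."""
    if abs(dashboard_value - manual_value) > tolerance:
        raise AssertionError(
            f"dashboard reports {dashboard_value}, "
            f"manual check gives {manual_value}"
        )

# Manual recalculation from raw figures, as you might do in a spreadsheet.
on_time_orders, total_orders = 912, 1000
manual = on_time_orders / total_orders

assert_kpi_matches(0.912, manual)  # dashboard value agrees
print("OnTimeDelivery cross-check passed")
```

Run against real historic data, a handful of such checks catches calculation and aggregation errors long before users do.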
Key areas that your test scripts should target include the following:
Dimensions. Ensure that your collection of test scripts has full coverage of all dimensions within each KPI.
Banding. Check that your banding behaves as intended.
Hierarchy. If a KPI is dependent on child KPIs, the children must be independently verified, followed by verification of the parent calculation.
Aggregation. If a KPI involves aggregation (roll-up), ensure that a comprehensively representative sample of results is correct.
Data source. Compare the dashboard output directly to the source to ensure that it matches, particularly where there are possible data type conversions.
Time. Confirm that data is being reported correctly over time as a dimension, especially if aggregation is involved. Time-based calculations can be prone to errors and warrant higher levels of testing.
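The areas above lend themselves to data-driven test scripts, where boundary cases and hand-calculated roll-ups are checked in bulk. The following sketch assumes an illustrative banding rule and a daily-average roll-up; both are examples, not prescribed behavior:

```python
# A minimal sketch of a data-driven KPI test script; the banding rule
# and aggregation rule below are assumptions for illustration.

def band(value):
    """Example banding rule: >= 0.95 green, >= 0.85 amber, else red."""
    if value >= 0.95:
        return "green"
    if value >= 0.85:
        return "amber"
    return "red"

def monthly_rollup(daily_values):
    """Example time aggregation rule: monthly value is the daily average."""
    return sum(daily_values) / len(daily_values)

# Banding: exercise both sides of each boundary, not just typical values.
cases = [(0.95, "green"), (0.9499, "amber"), (0.85, "amber"), (0.8499, "red")]
for value, expected in cases:
    assert band(value) == expected, f"band({value}) != {expected}"

# Time aggregation: verify the roll-up against a hand-calculated result.
assert abs(monthly_rollup([0.9, 0.95, 1.0]) - 0.95) < 1e-9
print("all KPI test cases passed")
```

Note that every banding boundary is tested from both sides; off-by-one behavior at band boundaries is a classic source of dashboard defects.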