Raising the bar in benchmarking
‘If you know the enemy and know yourself, you need not fear the result of a hundred battles.’
Even the legendary Chinese general Sun Tzu was a sucker for benchmarking. But honestly, aren’t we all? One of the most common human traits is our strong desire to always compare with others. ‘Did you see the neighbors’ new car? It’s bigger than ours. How can they afford that?’
Though benchmarking is, in practice, one of the most popular management tools, it's time to raise the bar. In this article I'll explore the origins, flaws, benefits, and future of benchmarking.
The origins of benchmarking
The term benchmarking (in its current meaning) originated in the eighties. Charles Christ, a senior Xerox executive at the time, saw an ad from the competition for a cheaper copy machine.
He wanted to look into it and called for 'a benchmark, something that I can measure myself against to understand where we have to go from here'. The benchmarking program that grew out of this led to the first book on the subject, Robert C. Camp's 'Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance'.
In that book benchmarking is defined as ‘the approach of establishing operating targets and productivity programs based on industry best practices that leads to superior performance’.
The book mentions four basic steps of benchmarking: knowing your operation; knowing the industry leaders or competitors; incorporating the best practices; and gaining superiority. If only it were this easy. If it were, we'd all be driving a car as big as my neighbors'.
The flaws of current benchmarking
To understand the flaws of benchmarking, let’s first take a look at why the concept is so widely spread.
At the risk of shooting myself in the foot: for a consultant, there is nothing easier than making money from benchmarking. Just choose a subject, come up with high-level questions and generic definitions, design a report, automate the analysis, send out your surveys, and start churning out reports at the click of a button.
Because benchmarking is so easily productized, consultancy firms aggressively push their benchmarking solutions into the market, and with success. Participating in a benchmark is a great way to take the pressure off the manager.
If you outperform the benchmark, you can prove you’re doing well. If you score below the benchmark, but you can show progress compared to the last time that you participated, it means you are on the right track. In all other cases, you can blame it on the benchmark.
Either there is something wrong with the questions asked or your organization is simply ‘incomparable to other organizations, because…’.
The problem is that because benchmarks are so productized, their effect is hollowed out. The more high-level and generic the questions and definitions are, the more easily other potential clients can fit the mold.
For example, if you want to compare how successful the people in your street are, there are a ton of variables that you can take into account. Do they enjoy life, spend time with their kids, are they doing something they love, how much money do they make, et cetera.
Taking loads of variables into account is labor intensive and risks exclusion of potential participants. What happens in benchmarking is that things get simplified. ‘All my neighbors have cars. Let’s define success by the price of their cars.’
The problem is that simplification can lead to false conclusions. Just look at the founder of IKEA. Even when he had a net worth of $58.7 billion, he drove a $22,000 Volvo 240 GL for two decades.
Another common flaw is found in the perverse effects of aiming for the largest possible benchmark.
Because benchmarking is so widely productized, the most common way to stand out in the market is by the size of your benchmark. One of our partners offers a tool to continuously measure the vitality of employees. They have a benchmark of around 2,000 employees from a range of different organizations.
One potential client's feedback was that they loved the concept but were only interested if the benchmark exceeded 10,000 employees. As a result, consultancy firms tend to cherish every single respondent, because it (falsely) adds credibility to their benchmark in the eyes of their clients.
The problem with striving for the biggest possible benchmark is twofold. Firstly, it enhances the urge for high level, generalized benchmarking, as described above. Benchmark suppliers have more potential clients and gain credibility if they make sure as many organizations as possible can fit the mold.
Secondly, it prevents keeping the benchmark up to date. Times change and so do conditions, but adapting the benchmark or throwing out old respondents would harm the position of benchmark suppliers. The result is that when you decide today to benchmark on, for example, IT costs, chances are you are comparing your organization against IT costs from the financial crisis a couple of years ago.
Raising the bar
Don't get me wrong: benchmarking, if done properly, can help you build a successful organization. Even before Xerox coined the term, Japanese companies had been incredibly successful using benchmarking methods.
But as technology progresses, now is the time to start raising the bar on benchmarking. There are two ways, which ideally can be combined.
The first is measuring frequency. As in my previous article on the second wall in HR, I'd urge you to read Patrick Coolen's great article on continuous analytics. These days there are great examples in employee engagement where the focus is not on external benchmarking (comparing with others) but on internal benchmarking (comparing your organization with itself over a period of time).
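The idea of internal benchmarking can be sketched in a few lines: each team is compared against its own history rather than against an external norm. This is a minimal illustration with hypothetical quarterly engagement scores, not a real engagement product.

```python
# A minimal sketch of internal benchmarking: compare each team's latest
# engagement score against its own historical baseline instead of an
# external benchmark. All scores below are hypothetical.
from statistics import mean

# Quarterly engagement scores per team (hypothetical survey results)
history = {
    "Sales":   [6.8, 7.0, 7.1, 7.4],
    "Support": [7.5, 7.3, 7.0, 6.8],
}

for team, scores in history.items():
    baseline = mean(scores[:-1])   # average of the earlier quarters
    latest = scores[-1]            # most recent measurement
    trend = "improving" if latest > baseline else "declining"
    print(f"{team}: latest {latest:.1f} vs own baseline {baseline:.2f} ({trend})")
```

The point of the sketch: the "benchmark" here is generated from your own data, so it is always up to date and always comparable, which sidesteps both flaws described above.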
The other way to raise the bar is by comparing on a more detailed and more advanced level. Let's explain this using an example.
In the Netherlands there is a fiscal rule called the WKR. It allows organizations to pay employees untaxed remunerations in specific cases (for example, to cover travel expenses or meals during overtime). Let's say you want to benchmark your organization on this subject. The standard way would be to count and compare the amounts per category, maybe divide the outcomes per employee, and compare results on a high level.
Forget the standard way. There is a better way. In this example we've collected the WKR data from three organizations, not on an aggregate level, but on a booking level. We used this data to build a self-learning model. At this point (the model is a work in progress), it can already predict with 94% accuracy in which category each booking should fall and whether or not the booking is fiscally sound.
That means that with the input of just three organizations, we can not only offer a comparison on an aggregate level, but also a list of every potential fault on a booking level.
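To make "classifying on a booking level" concrete, here is a deliberately toy sketch. The article's actual self-learning model is not public, so this illustrative version just matches keywords in booking descriptions against hypothetical WKR categories; a real model would be trained on labeled bookings.

```python
# Illustrative sketch only: a toy rule-based classifier that assigns a
# WKR booking to a category based on keywords in its description.
# Categories and keywords are hypothetical examples, not fiscal advice.

RULES = {
    "travel expenses": ("train", "taxi", "mileage", "flight"),
    "overtime meals":  ("dinner", "meal", "pizza"),
}

def classify_booking(description: str) -> str:
    """Return the first category whose keywords match, else 'unclassified'."""
    text = description.lower()
    for category, keywords in RULES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "unclassified"

bookings = [
    "Taxi to client meeting",
    "Pizza during overtime on release night",
    "Office plants",
]
for booking in bookings:
    print(f"{booking!r} -> {classify_booking(booking)}")
```

Because the classification happens per booking rather than per aggregate total, every item the model flags as unclassified or misfiled becomes an actionable finding, which is exactly the advantage over high-level comparison.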
If you combine the two (high-frequency measurement and advanced analysis on a detailed level), you can maximize the impact for your organization. That might sound far-fetched, but it is in fact closer than you might think.
With more and more administrations that run in the cloud, existing boundaries between administrations are starting to fade. When administrations are connected, it allows you to continuously compare on an advanced and detailed level.
In an era where a company like Xerox, which defined benchmarking decades ago, is transforming from a printing machine company into a software company, benchmarking itself has hardly progressed. Technology is not the problem. The only thing missing is a change in mindset.
One of our clients recently told me they wanted to start a benchmark on the performance of their Financial Shared Services Center. The client created an Excel format with a number of questions and definitions that they had sent out to their peers.
When I reminded the client that our software continuously scans their financial administration to look for faults and risks on a booking level and that we do the same for some of their peers, it struck a chord.
I hope this article will strike the same chord with you.