Preface
This column is the first of a two-part series that should be read together. In it, I will tackle general measurement principles as well as a top-down approach to gaining the appropriate measurement data. Part 2 will deal with a bottom-up approach and with some cautions regarding the use of measurement information.
Introduction
Business Architecture is hard at the best of times, but there is nothing better for connecting the dots than measurement. Trustworthy measures are gold when it comes to making data-centric business decisions. Everyone can agree that relying mostly on perception-based decisions may be misleading.
This article is intended as an update to previous ones that strove to describe the challenge of having useful measurements. It is amended based on more recent learnings about how best to define performance and build the capability to measure and report in a traceable manner. How can we tell if our Business Model is effective? How do we know if our Operating Model choices were good ones? Are our resources being well used? Do we have the culture we strive for? How can we know what's working well and what's not? Which of the landscape domains (Figure 1) are measurable? Which are worth measuring?


Defining a connected measurement framework, as shown in Figure 2, makes sense now that we have things to measure and can connect them in a traceable way. We now know the external opportunities and threats, the needs and expectations of the business's external stakeholders, the ends and means of the business value chains, and the aspirations we are striving for in the North Star target. We also know which value-creating business processes we have to manage, and which capabilities need to be in place to do so. In our strategy and process design work we should also have articulated what behaviors and cultural attributes we deem desirable or critical to attain. So, what kind of measurement data do we need to make the decisions that must be made in the remaining work of architecture development, in change resource allocation, and in day-to-day business operations? What do we need to know in order to manage? It should be no surprise that the answer may depend on who you ask. The senior executive will be interested in trends in overall success factors and risks, which may not be the same as what the production supervisor values for making decisions in near real time.
The value of measurement
Anyone who has had the experience of trying to define measures to live by will confirm that this is one of the hardest things to get right, although it looks deceptively straightforward at the outset. Is it worth the aggravation? I think it is, so let's see what measurement is used for. There are many things that can be measured, but useful measurement should first and foremost support decision making, about both small adjustments and more significant change. Most organizations do not have a shortage of existing measures, but many fail to measure the right things. Often, they have insufficient data to back up many of the decisions they need to make. One of the benefits of having the right performance indicators and trustworthy performance data is being able to better understand what is really happening in the business, for both executive and front-line management decisions.
If you are a senior manager, you want support to:
- Plan Business Strategies and Tactics
- Connect Operations to Strategy
- Develop traceability scorecards from top to bottom
- Develop leading and lagging indicators
- Assign responsibilities for performance to organizational units
- Assure organizational alignment to value chain outcomes
- Evaluate strategy execution progress and adapt
- Control work and resources
- Learn what works and does not and why
- Change tactics and strategy based on reality
- Monitor alignment of work assignment
- Assess the human aspects of work
- Align incentives of people with desired performance results
- Determine the behavioral fit and gaps of individuals and groups with the envisioned culture
- Motivate managers and staff
- Evaluate the human acceptance and readiness for change
If you are an operational supervisor, you may be more interested in the ability to know when to:
- Make quick adjustments in operational execution
- Monitor business operations and evaluate what's working and what's not
- Reallocate resources operationally very quickly for control or improvement
- Align individual performance incentives with formal organization targets
- Change behavior and build culture
The quality of measurement
For both the top and bottom points of view, the measurement structure enables the business nervous system to function and keeps all work on track so long as the measurement data is accurate, timely and trustworthy, and is used. This requires us to have high quality performance indicators. Similar to the concept of 'SMART' objectives, good measures have the following characteristics.
- Relevant:
- supports the assessment of a vision or goal in order to track and make a management decision
- Comparable:
- has a distinct Unit of Measure that can be compared over time periods and or locations or other companies (benchmarking)
- Time bound:
- is associated with a period of time or a point in time
- Measurable:
- reliable data can be attained without bias or excessive time and cost
- Reliable:
- the more factual the better, although external perceptions may also serve as facts if collected in an unbiased way
- Trustworthy:
- people feel confident that the data is accurate even if they do not like it
Oftentimes we hear business managers and analysts talk in terms of very vague or non-specific measures such as:
- We want to improve our reliability
- We aim to increase customer satisfaction
- We need to get better staff loyalty
None of these has a performance indicator in it. These are goals to be attained, but they are not measurable without some interpretation. We need a unit of measure for which we can gather data:
- We want to improve our reliability as measured by the percentage of orders that are perfect (right product delivered to the right place, at the agreed time, for the agreed price, paid on time).
- We aim to increase customer satisfaction as measured by net promoter score.
- We need to get better staff loyalty as measured by annual turnover percentage of employees.
According to the Business Motivation Model from the Object Management Group (OMG), we can now change these vague goals into precise objectives by adding a target and a timeframe:

- We want to improve our reliability as measured by the percentage of orders that are perfect (right product delivered to the right place, at the agreed time, for the agreed price, paid on time) to 95% within one year.
- We aim to increase customer satisfaction as measured by increasing net promoter score by 10 points in eighteen months.
- We need to get better staff loyalty as measured by decreasing annual turnover percentage of employees to 8% by the end of the calendar year.
By structuring measures well, we now have a set of indicators that can be evaluated periodically, over the long and short term, to monitor progress against our business and personal targets.
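As a rough sketch of that anatomy, a well-formed objective can be captured as a simple record; the field names below are my own illustration, not part of the OMG model:

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """A vague goal made precise: indicator + unit + target + timeframe."""
    goal: str        # directional intent, e.g. "improve reliability"
    indicator: str   # the performance indicator that measures it
    unit: str        # unit of measure, so readings are comparable
    target: float    # the level to be attained
    timeframe: str   # when the target must be met

perfect_orders = Objective(
    goal="Improve our reliability",
    indicator="Perfect order rate",
    unit="% of orders",
    target=95.0,
    timeframe="within one year",
)
```

If any of these fields cannot be filled in, the statement is still a goal, not yet an objective.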
Top-down or Bottom-up?

Please note that a Performance Indicator (PI) does not exist in its own right. It is always associated with something else and essentially becomes an attribute of that object. So the connective structure of the measurement system will typically follow some other structure within the Business Architecture that we may have already figured out. Figure 3 shows some potential places to hang Performance Indicators that may be good choices depending on your needs. As you can see, different performance indicators are of interest to different parties. If you are driving towards strategic transformation, you will most likely start at the top-level KPIs and work your way down the stack to determine the contributing (lower-level), more detailed PIs. This is aimed at the interests of the executive and board. If you are trying to get control over operations and are less concerned with strategic change, you may start with front-line PI measures and selectively synthesize them up to the top-level KPIs of the executive scorecard. This approach serves operations managers first. A third, and popular, option is to tackle aspects of both approaches concurrently: conduct KPI determination for a very thin top level and then, for selected and focused priority areas, build bottom-level KPIs and reconcile the fit. There is no right or wrong approach, just the one that best deals with the prime issues facing the organization and is concurrently supported by your readiness and architecture maturity. This third approach will be of great benefit to both executives and operations managers in the area of focus.
The Top-Down view
Original Balanced Scorecard

In the early 90's, Kaplan and Norton published a number of books advocating a multi-perspective approach to measuring business performance, aimed at getting away from solely after-the-fact financial indicators. Their rationale was that once the financials were in, it was typically too late to do anything about them. They advocated a number of indicator categories, to be reported and evaluated periodically, that were not purely lagging. To get better insight into what was happening along the way, they proposed indicators in different categories. Their idea was to apply a 'Balanced Scorecard' with the four quadrants shown in Figure 4. This is now considered the traditional balanced scorecard. The approach kept the financial view but added others. The Customer view reflects what is important to the markets the enterprise is serving and shows how well it is doing with them in terms of well-established indicators such as market share. The Operations perspective (often called the internal process view) looks at internal efficiencies and quality, such as average time spent by call center staff with a caller online. The Innovation and Learning perspective focuses on key indicators such as time-to-market and emphasizes the sharing of knowledge. These additional types of measures were very useful for executives tracking the overall progress of the company and, along with Kaplan and Norton's companion innovation – the strategy map – gave organizations a way to develop plans that tackled multiple perspectives and to strategize how to achieve the targets in each going forward. The challenge with the traditional Balanced Scorecard became apparent when attempting to build a scorecard that would cascade well down the organization chart. As the four quadrants were pushed down to departments in order to find a traceable system of measurement, it became apparent that several of these sectors were ill-suited to segmentation, since the decomposition broke up things that were not decomposable in terms of the organization chart. The result of force-fitting lower-level work into the same set of Balanced Scorecard categories was that intentions of value creation were compromised by focusing internally rather than on the ultimate outside stakeholders for whom value is created. End-to-end processes were broken into organizational-unit sub-processes that were only a part of the whole. The drive to optimize work overall was lost, and sub-optimization ran rampant. In addition, Innovation and Learning issues became locally focused rather than spanning the whole business; sharing of insights was difficult to accomplish outside of a group. For these structural reasons, the balanced scorecard has fallen out of favor as a management tool in the last few years as organizations become more concerned with end-to-end value creation.
Value-oriented balanced scorecard

Seeing the value of multiple measurement perspectives, we at PRG felt that a value-oriented and cross-functional scorecard was in order. We felt that measures directly traceable to customer outcomes of value and to other stakeholders' needs and expectations were needed. Consequently, we looked at the prevalent value-oriented approaches that focused on the results we would want to create for the myriad external stakeholders of the organization, and we worked back from there. The four main categories that we have found to be most useful are shown in Figure 5.
Process centered balanced scorecard – Effectiveness
Effectiveness asks whether we are doing the right things for external customers and consumers in the first place, and how well we do them as far as the recipients are concerned. This sector reflects the customer value creation point of view as well as the business's view of success with recipients. Although each industry and organization is different, typical indicators would be something like the following:
- Customer satisfaction rating
- Net Promoter Score
- Customer effort score
- Market share
- Wallet share
- Cost of non-compliance to customer expectations
- Repeat business revenue
- Lifetime revenue value to us
Process centered balanced scorecard – Efficiency
Efficiency looks at how well your business uses its consumable and reusable resources to deliver its outputs. These measures are the classic production types that have been the subject of process improvement regimes such as Lean and Six Sigma over the years. They are typically not concerned with the question 'are we doing the right things?' and are more focused on 'are we doing things right?' Some indicators may be:
- Cost of service per transaction
- % transactions that are straight through with no manual intervention
- Ratios of outputs to time and cost incurred
- Proportion of Waste (as defined by Lean)
- Average time to resolve a problem
Process centered balanced scorecard – Quality
Quality deals with how well we meet the expectation of the product or service recipient in terms of consistency and how well we meet standards and compliance requirements. It also covers the implications of a lack of quality. Some examples are:
- Defects / returns ratios to total counts
- Service Level Agreement % compliance
- Consistency of outputs as shown by ratio of variants to standard over locations and time
- Cost of non-compliance driven extra work to correct lack of quality or risk compliance (rework)
- Returns ratio on non-performing products
- Complaints ratios to total orders
- Cost of lost future business due to poor quality
- Regulatory compliance costs (fines and restrictions)
Process centered balanced scorecard – Business Agility
Business Agility covers the ability to change quickly and effectively. It includes operational agility, which tackles the flexibility to respond quickly to fast-moving market conditions. It also covers reconfiguration agility, which enables insights to be turned into designs quickly and designs into products, services or operations rapidly. Quick change includes all aspects of change, not just technology. Some illustrations are:
- Time to change a business rule while in operation
- Time to reallocate resources to an incident or crisis response
- Time to market for a product or service
- Number of insights generated annually and the conversion rate into executable offerings
- Proportion of customer special requests or variations turned down
- Cost of change for a product specification update
- Number of shared uses of a developed capability
- Lost time between human resource assignments (resources downtime on a change)
- Cost of staff retraining due to staff turnover
The scorecard structure
The structure of the indicators will, for the most part, follow a value creation pattern, meaning value streams and business processes are quite often what's being measured, or at least they are the place to hang measures in the reporting hierarchy. At the top levels, the indicators are very much associated with the stakeholders of the end-to-end process or value chain. These measures will be strong contributors to the overall satisfaction of the stakeholders but may be lagging. As we dive deeper into the hierarchy, we will find the component processes or value stream stages as the places to attach our KPIs. Some of these will deliver direct stakeholder value, but some will be indirect and not apparent to the stakeholder while in progress. In those cases, we will still have value measures, but we are acting on behalf of the recipient as well as taking other stakeholders' requirements into account. For example, running a credit check before accepting a loan request is still of value to the business for risk purposes, and may be required by regulation, even if it is not appreciated by the requestor. This hierarchy typically follows the process architecture structure, as shown in Figure 6.
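As an illustrative sketch only (the value chain, stage names and KPIs below are hypothetical), such a hierarchy might be represented and walked like this:

```python
# KPIs hang on the process architecture, not the organization chart.
scorecard = {
    "Fulfil Customer Orders (value chain)": {
        "kpis": ["Perfect order rate", "Net promoter score"],
        "stages": {
            "Capture Order": {"kpis": ["Order entry error rate"]},
            "Check Credit": {"kpis": ["Credit checks within SLA %"]},
            "Deliver Goods": {"kpis": ["On-time delivery %"]},
        },
    },
}

def collect_kpis(node: dict) -> list:
    """Walk the hierarchy top-down, gathering every attached KPI."""
    kpis = list(node.get("kpis", []))
    for stage in node.get("stages", {}).values():
        kpis.extend(collect_kpis(stage))
    return kpis

for chain, node in scorecard.items():
    print(chain, "->", collect_kpis(node))
```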

As described in our work hierarchy we can measure a lot of things, some directly and some indirectly. We need to have indicators of:
- Strategic Objectives
- Stakeholders' relationships
- The North Star directional guidance
- The work we do (business processes / value streams)
A number of other architectural elements are also important but are hard to measure directly, since they have no inherent value until put into action in a process. These include information and capabilities. While we obviously need good, accessible information and certainly strong capabilities, if these are never actioned or needed, their value cannot be seen and may not be directly assessable. The value they bring is in the difference each makes, once enhanced, to the KPIs of the business in action (i.e. the processes). A big performance indicator gap, as shown by our KPI data, implies a big capability gap. Fixing the right capabilities should close the process performance gap. This is harder to appreciate and capture explicitly because information is used all over the organization and a good capability may serve many value streams; we have to understand the usage to see the value.
We also have to connect indicators into a cause/effect pattern. With the strategy structure connected to the process structure, we can accomplish this and define our reporting dashboard requirements. Of course, for the dashboard to function we not only have to provide the structure, we also have to define ranges of performance data levels that signify danger, concern and safety. These levels are often shown as red, yellow and green lights on the management dashboard. With the KPI structure and defined rules on the limits of performance warnings, we should also be able to drill down the stack to the causes of concern regarding actual performance problems in the details.
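A minimal sketch of such a traffic-light rule, with made-up threshold values for a higher-is-better indicator:

```python
def rag_status(value: float, danger_below: float, safe_above: float) -> str:
    """Map a KPI reading to the red/yellow/green dashboard bands.
    Assumes higher is better; swap the comparisons for the reverse case."""
    if value < danger_below:
        return "red"      # danger: act now
    if value < safe_above:
        return "yellow"   # concern: watch and investigate
    return "green"        # safety: on track

# e.g. a perfect order rate of 92% against an 85%/95% banding
print(rag_status(92.0, danger_below=85.0, safe_above=95.0))  # yellow
```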
A scorecard planning template
For the upper levels of the scorecard hierarchy there are a number of attributes we can describe. A template for each level of the work hierarchy is shown in Figure 7, illustrating the type of measure, the desired direction of the measurement data, the method of data collection, and the current and desired levels of performance for the indicator. Not all of this may be known when starting our architecture work, but over time we should strive to fill in the blanks.
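One way to hold such a template row, with the unknowns left blank to be filled in later, might look like this; the field names are my reading of the template, not a published standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScorecardEntry:
    """One row of the scorecard planning template."""
    measure: str
    desired_direction: str          # "up" or "down"
    collection_method: str          # e.g. survey, system log, manual sample
    current_level: Optional[float]  # may be unknown when starting out
    target_level: Optional[float]

    def gap(self) -> Optional[float]:
        """Distance from target, or None while a blank remains."""
        if self.current_level is None or self.target_level is None:
            return None
        return self.target_level - self.current_level

nps = ScorecardEntry("Net promoter score", "up", "survey", 32.0, 42.0)
print(nps.gap())  # 10.0
```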

Preface
This column is the second of a two-part series that should be read together. In it, I will present a bottom-up approach and alert you to potential problems regarding the use of measurement information. Part 1 dealt with general measurement principles as well as a top-down approach to gaining the appropriate measurement data.
The Bottom-up View
The structure of detailed measurement

One tool that can help us determine performance indicators is the Concept Model (Figure 1) that we used previously to determine information, capabilities, business rules and processes. For every box (noun = thing), there is the potential of having a count, if we feel it will be useful to someone making an operational or management decision. We can look for the relevant measures among the concepts. Please note that I am not assuming these are managers' decisions; they are decisions required to be able to operate or manage, regardless of who makes them. Organizational structure is irrelevant to what you need to know at this point.
A way to structure and represent the determination of possible PIs and KPIs based on the concept model is shown in Figure 2.

Counts (things)
The starting point for all measurement is to count things; an inventory, if you like. Clearly there are lots of things we could count, but typically we choose the ones of importance to us because of risk, strategy, or some form of operational importance. Stakeholders are important, so we count them. How many customers of various types do we have? How many employees of various classifications do we have? Physical items are the next most obvious. How many branch offices are there? How many mobile phones are there? How many locations are there? There are also many non-physical items that need to be tracked, such as orders, transactions and agreements. Do we know what the counts are? In addition, can we determine other important attributes of each, such as the size of each branch office, the tenure of staff, and the age of each brand and model of mobile device we have?
In this sample matrix, each concept from the concept model is listed in both the columns and the rows. Then we look at the top-left to bottom-right diagonal, where each concept meets itself, to determine which concepts are worthy of being counted – i.e., of relevance – for tracking and decision-making purposes. In the illustration, we decided that only three of the ten shown are of interest – things to be counted or quantified, and reported. The business representatives will have the answers as to what is useful for them.
Associations (things per thing)
In the other cells, where one concept intersects a different concept below the diagonal, we can identify which combinations are relevant. Not every combination is of importance. Item 4 in the sheet shows that we have chosen to know about Financial Services as related to Customers. The other combinations of relevance are also shown in the highlighted cells – such as Customers to Orders and Financial Services to Orders. This shows us the places where PI or KPI data must be captured in the process execution, as well as defining what any IT requirement must include for data capture.
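A small sketch of the idea, with hypothetical concepts: the diagonal marks what is worth counting, and the cells below it mark the associations worth measuring:

```python
concepts = ["Customer", "Order", "Financial Service", "Location"]

count_these = {"Customer", "Order", "Financial Service"}  # diagonal picks
measure_pairs = {                                         # below-diagonal picks
    ("Customer", "Order"),
    ("Customer", "Financial Service"),
    ("Order", "Financial Service"),
}

# Walk the lower triangle of the matrix, diagonal included.
for i, row in enumerate(concepts):
    for col in concepts[: i + 1]:
        if row == col and row in count_these:
            print(f"count: {row}")
        elif (col, row) in measure_pairs:
            print(f"measure association: {row} per {col}")
```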
Many things are relevant in their association with the other things. Typical factors would be counts of some things of interest relative to items such as organization unit, role, or person. They can also be tied to some work mechanism such as a system, a process, or location. Some examples would be:
- Number of orders received and the total dollar amount by location for each order type
- Number and size distribution of financial service transactions for each channel of customer interaction type (web, kiosk, branch…)
Possible measurement associations can often be seen directly from the concept model by looking for the direct linkage of concepts (nouns) by the wordings between them (verbs). Some examples would be:
- Number of orders received by and the total dollar amount for each consumer category (link from the consumer to order)
- Number and size distribution of financial service transactions for each financial service type (link from the financial transaction to financial service linkages)
Looking at every direct link in the concept model between concepts will allow us to question whether or not there is some associative measure of importance to the decision making or execution of the business.
Timing of things
I often see organizations initially defining measures in non-comparable ways. When it comes to nailing down useful KPIs, the timing factor has to come into play so we can see trend lines. The examples just above are still not fully formed, since we have not defined the period over which we will compare and contrast them. Are we counting daily or annually? The numbers will be hugely different, and the reporting periods and the systems requirements for gathering and consolidating the data will be quite different also. By adding in the time factor, we are now able to compare apples to apples meaningfully across all places and time periods in which we sell apples. Reframing the previous examples gives us useful measurement data to work with:
- Number of orders received by consumer category and the total dollar amount for the category per month.
- Number and size distribution of financial service transactions for each financial service type per quarter.
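As a sketch, counting per category and per period is a simple grouping; the order records and field names below are invented for illustration:

```python
from collections import Counter

# Hypothetical order records; in practice these come from your systems.
orders = [
    {"consumer_category": "retail", "month": "2024-01", "amount": 120.0},
    {"consumer_category": "retail", "month": "2024-01", "amount": 80.0},
    {"consumer_category": "wholesale", "month": "2024-02", "amount": 900.0},
]

# Count orders per (consumer category, month); the time factor is what
# makes the numbers comparable across periods.
counts = Counter((o["consumer_category"], o["month"]) for o in orders)
for (category, month), n in sorted(counts.items()):
    print(category, month, n)
```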
Ratios
Most associative performance indicators are based on counts factored by the counts of other associated things, for example 'number of orders per customer category per month'. It is typical to see performance indicators report on exceptions to the norm or to the desired outcome as a ratio. Many meaningful indicators are best expressed as a comparison of one count to the volume of another, such as:
- The percentage of all financial transactions delivered by partners per month.
- The ratio of returned orders over total orders by sales channel per month.
Again, the usefulness of the performance indicator is gauged by how well it informs those who need to know in order to act and change something about how work is performed.
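A sketch of such a ratio, again using invented records and field names:

```python
from collections import defaultdict

orders = [
    {"channel": "web", "month": "2024-01", "returned": True},
    {"channel": "web", "month": "2024-01", "returned": False},
    {"channel": "branch", "month": "2024-01", "returned": False},
]

totals, returns = defaultdict(int), defaultdict(int)
for o in orders:
    key = (o["channel"], o["month"])
    totals[key] += 1
    returns[key] += o["returned"]  # True counts as 1

for key in sorted(totals):
    print(key, f"returned: {returns[key] / totals[key]:.0%}")
```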
Who cares?
So far, I have delved into the quantifiable part of measurement: counting and comparing the things and associations that are discrete and for which data is more readily available as a by-product of doing the work, so long as we have the capture mechanism or can derive it from our work mechanisms such as IT systems. Now the hard part, which is the soft part, comes in. With the unrelenting push toward customer focus comes the question 'how do we know how they feel about us?' Customer journey mapping, customer satisfaction surveys and the drive to customer experience improvement are all aspects of this phenomenon. In an earlier article I discussed the issue of stakeholder expectations of value and the fact that a great experience in terms of how things were done is not so useful if the main value delivered through the product or service was not up to par. For example, the staff were nice and fast but sold us the wrong product. We have to evaluate both factors in light of the customer expectation. The challenge is that the expectation may be easily met if you do not expect much in the first place. A great example of this is seen in hotel ratings listed online by various travel sites. The super high-end hotels often do not get the best ratings because visitors anticipated perfection due to the high price they paid. In the same ranking list you will often see much lower-priced hotels with great ratings, because no one expected the features and services of a five-star property in a two-star hotel at one quarter of the price. Comparables here are much harder to rationalize. Nonetheless, we can still evaluate the satisfaction level and the experience perception of the external stakeholder. If this can be captured through counts and associations, that may be the best we can do. Sometimes proxy measures are an easier way to judge this factor, although they may be imperfect. For example, can we trust that easily measurable indicators of repeat business are a good indicator of satisfaction, or should we ask, or do both?
Reconciling the measurement indicators with your current measurement scorecard
We all know that a clean sheet is unrealistic when it comes to defining measurement. There are invariably lots of pre-existing measurements being reported today, but are they useful for current managers? A good idea is to reconcile these with your concept model and process architecture measures, to see whether all your current measures will have a home in the future state and whether any can be retired in favor of better ones, based on either the top-down or bottom-up point of view.
By cross correlating the list of KPIs to the process hierarchy several questions can be asked:
- Are there too many KPIs for this bucket of work?
- Are there too few KPIs for it?
- Are existing KPIs sufficient or do we need some new ones as well?
- Can we drop any current ones in favor of some new ones?
- Do we have KPIs which have no process associated?
- Does a KPI cover too many processes or should each have more specific indicators?
This is a good sanity check that should be done with the management team to gain commitment to a better way of measuring and of managing.
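A sketch of the cross-correlation, with made-up KPI and process names, flagging the gaps the questions above are probing for:

```python
# Which processes does each existing KPI claim to cover?
kpi_to_processes = {
    "Perfect order rate": ["Capture Order", "Deliver Goods"],
    "On-time delivery %": ["Deliver Goods"],
    "Legacy widget index": [],  # an orphan: no process attached
}
processes = ["Capture Order", "Check Credit", "Deliver Goods"]

coverage = {p: [k for k, ps in kpi_to_processes.items() if p in ps]
            for p in processes}

for process, kpis in coverage.items():
    if not kpis:
        print(f"{process}: no KPIs -- a measurement gap")
    elif len(kpis) > 3:
        print(f"{process}: {len(kpis)} KPIs -- possibly too many")

orphans = [k for k, ps in kpi_to_processes.items() if not ps]
print("KPIs with no process:", orphans)
```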
Measurement opportunities, challenges and the problem of bias
Gathering the data: How much is enough?
As we have seen, measurement can be overwhelming if taken too far. Our challenge is to capture just as much as we need to make good operational and management decisions. It is easy to get caught up in trying to achieve absolute precision in all our measurement data. If you are fortunate enough to have measurement data capture built into all your IT systems, or smart enough to have designed them to capture everything as you go, then congratulations, you are on your way. The challenge is that for all the things you want to know that are not systematizable, you will have to design data capture into your processes and go out and collect that information. At worst, you will have to sample the population of transactions. The question of sampling becomes one of statistical significance, and you will have to decide what a sound sample looks like according to the rules of sampling theory, so you can remain unbiased and assured. You also have to decide what degree of precision you need since, if you are not careful, you will expend more energy gathering the data than the effort required to do the work itself. There is a fine line between enough and not worth it. Furthermore, some data may be wonderful to have, but the methods to get it may be convoluted and the results unreliable. Perhaps a simple proxy may be better and still give sufficient insight into what's going on. Attention to how the data can be acquired is an important consideration.
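For the sampling case, the standard sample-size formula for estimating a proportion gives a feel for what 'enough' means; a sketch:

```python
import math

def sample_size(margin_of_error: float, confidence_z: float = 1.96,
                expected_proportion: float = 0.5) -> int:
    """n = z^2 * p * (1 - p) / e^2, the standard sample size for
    estimating a proportion. p = 0.5 is the most conservative choice."""
    p = expected_proportion
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# To estimate an error rate within +/-5 points at 95% confidence:
print(sample_size(0.05))  # 385 transactions, for any large population
```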
Alignment with personal motivation
There will always be arguments over what data to collect, since managers know that if we are going to capture it then someone (perhaps themselves) will become accountable for it; something they may shy away from. Performance indicator data and the associated targets are almost always tied to the formal or informal incentives of an organization and the people within it. Peter Drucker is said to have observed that without measurement it is hard to hold staff's attention, and that without feedback data it is like having staff hit 'golf balls into the fog'. So long as the individual's measures are in alignment with everyone else's indicators and are traceable to overall strategic objectives, personal incentive will positively push behavior and decision-making in the intended direction. Sadly, much of the time this traceability and alignment is lacking, and indicators are not well connected. If done poorly, laser focus on the official personal and organizational objectives can actually lead to significant sub-optimization and conflict in terms of end-to-end results for the stakeholders, and to non-realization of strategic intent. Everyone drives toward targets, but oftentimes these are the wrong targets, since they are biased towards divisional motivations, not the customer. It is imperative that the hierarchy of measures not be derived from the organization chart but from the concept model, the process architecture and the results of value streams, all of which are agnostic to the formal organogram. With the right set of performance indicators aligned to value delivery, rather than to an arbitrary formal hierarchy that fractures value propositions, we can ask who can take accountability for monitoring and for advocating whatever needs to be done to attain intended results. Then and only then can we see how the organization structure can map to the performance hierarchy.
Customer Measurement Bias
There is also a challenge with perception-based measures, since it has become easier than ever to ask customers for their opinion. So, what is a good measurement strategy? With so much online booking of services and digital delivery, it is simple for the service provider to automatically generate surveys for perception-based feedback and scoring. Since I travel a lot, I expect to see survey requests coming at me for everything I do. On a trip in 2019, I had survey e-mails from my airline for each of two flights, from the hotel I stayed at, and from my restaurant booking company for three restaurants I visited. I responded to exactly none of them. My concern is that we may have reached the stage of survey overload (at least for me) and that we are back in the old realm of the customer comment card in hotel rooms that guests rarely filled out. The only times I filled them out were when I was over the moon about the great service I received (e.g. someone going out of their way to satisfy a critical requirement of mine) or when something was dealt with so poorly that I just had to tell them. Anything in between got no action from me. I have reached the same point with online surveys now. I typically just delete them, and I think I am not alone. I have to wonder how representative the samples are of reality. I call this measurement fatigue. Are too many surveys of customer experience detracting from the actual experience? Furthermore, we have lost the ability, in my opinion, to truly score or trust the results. Differences between vendors and service providers all seem to fall between 4 stars and 5. What happened to the real estate from 1 to 4? If we notice that our Uber driver has only a 4.8, we are conditioned to ask what's wrong with the person, since it was not a perfect 5 every time. The differences are so minute that they offer little in the way of valuable differentiating feedback, and I cannot trust what I see as a consumer. This is made worse when the organization games the numbers by telling the customer what the score should be, or has fake reviewers who jack the ratings up or down in ethically questionable ways.
The observer effect
In science, the term 'observer effect' means that the act of observing influences the phenomenon being observed. In business, it means that the act of measuring itself will bias the measurement data. We all know from high school physics that inserting a thermometer into a substance may not accurately capture the temperature of the material, because the thermometer changes it. The classic business example goes back to the studies behind the Hawthorne effect, which showed that when people are watched, they change their behavior. Experiments at the Hawthorne Works in the 1920s adjusted working conditions in multiple ways to observe worker productivity. No matter what the experimenters did, such as brightening the workplace and later dimming it, performance improved, but only for short periods. The conclusion was that the workers were getting attention and wanted to please the experimenters; that, not the interventions per se, was the driving reason. As a former Industrial Engineer forced to do time studies in full view of the work subjects, I can assure you that the workers did not work the same way when I was not there watching.
Visibility of measurement data alone can be a blessing if handled appropriately. On a recent process improvement exercise, we noticed a significant amount of disagreement regarding the straight-through processing (STP) rate for loan applications (the percentage of applications completed without manual intervention). We got estimates from 40% to 70% from different groups, including executives. Once we gathered 100% of the universe of transactions from the preceding year, we found it was unquestionably 55%. The measurement indicator was then added to the scorecard for all branch locations and for all staff to see, and within two months the results jumped to closer to 70% with zero process or technology changes. Awareness of the data was a powerful motivator affecting behavior in its own right.
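The calculation itself is trivial once the capture mechanism exists; a sketch with invented records:

```python
# Each record notes whether a loan application needed manual intervention.
applications = [
    {"id": 1, "manual_interventions": 0},
    {"id": 2, "manual_interventions": 2},
    {"id": 3, "manual_interventions": 0},
]

# Straight-through = completed with no manual touches at all.
stp_count = sum(a["manual_interventions"] == 0 for a in applications)
print(f"STP rate: {stp_count / len(applications):.0%}")  # 67%
```

The hard part, as the example above shows, is not the arithmetic but gathering the full population of transactions and making the result visible.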
Discovering bias-free ways of getting the data and aligning them with motivation is as important as the data being sought.
Measurement and Behavior
One of the benefits of measurement is its ability to align process work to results assigned to people, to analyze those results, and to discover causes of poor process and individual performance. This allows us to help people do better. A problem, however, comes when the organization wants a culture that is different from today's and the hard measures are not sufficient to capture the behaviors of individuals, which collectively reflect that culture. The jury is still out on appropriate measures to indicate behavioral consistency with what's needed; there is no simple scorecard to show this yet. It is still our view that defining the behaviors desired under a set of circumstances is a key part of defining requirements. Designing the observation and coaching roles required (Figure 3) as part of process design and development is a critical aspect that is often missed. It is an essential complement to measurement.

Measurement and organizational maturity
Making a serious commitment to aligned and traceable measurement is a big ask. It does not mean that your organization has no measurements today and simply needs more. You probably have lots already, but a real commitment to foundational measurement as a more formal discipline, based on the business architecture and the operating model of the organization, implies that responsibilities for measurement outcomes be established and honored. Typically, that requires certain aspects of the architecture to be in place. It is hard to assure traceability if no clear strategic framework exists and no business process architecture is available. If they are not in play, do your best to get some measurement thinking and some common-sense indicators in place while you build out the operating model. If you do have these models, then determine your performance structure, your indicators and your targets, and strive to make measurement a key part of managing. Everything will reconfigure itself because there will be a 'why' to aim for.
Measurement ability builds on the business model and the operating model. Gaining alignment for all, and assuring a traceable measurement dashboard and data capture mechanisms, will be worth it. These will help us become better focused on customers and on end-to-end management. With clear strategic requirements framed earlier in our journey, good architectural models for concepts, processes and capabilities, and measurable performance, we will be able to prioritize the changes of biggest strategic importance and performance improvement potential, as well as the capabilities that should be tackled, from which a transformation roadmap can be derived.