Goodhart's law says...

"When a measure becomes a target, it ceases to be a good measure."

This "law" is often used when denouncing poorly devised safety metrics and/or incentive programs.  In most situations where there is a weak/immature Safety Process/SMS, the "law" makes sense; however, this is because, without strong leadership and a functioning Safety Process/SMS, organizations will inevitably and eventually turn "leading indicators" into a "numbers game," just like they turned their lagging indicators into a numbers game to achieve what they defined as "safety success." 

But there is a simple solution to combat this "law"...

VERIFY and VALIDATE our safety metrics and the Safety Process/SMS that produced those metrics!

 

Let's first understand what I mean by saying, "verify and validate our safety metrics."  The Merriam-Webster dictionary says:

To VERIFY is “to establish the truth, accuracy, or reality.”

To VALIDATE is “to recognize, establish, or illustrate the worthiness.”

 

What does this look like in safety metrics, and how do we use it?

Both leading and lagging safety metrics are necessary for any functioning Safety Process/SMS.  However, we must also recognize that humans will honestly, and in some situations dishonestly, manipulate these metrics to achieve success, however we have defined success in safety.

With this in mind, I introduce you to Gage Repeatability and Reproducibility (Gage R&R), a tool I learned in my Six Sigma training.  Gage R&R is a means/method to ensure that the data used in the Safety Process/SMS is reliable and consistent, enabling us to make informed decisions that help us achieve our safety goals.  This is the VALIDATION aspect of the data we receive from our Safety Process/SMS.
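
To make the concept concrete, here is a minimal sketch of the Gage R&R idea in Python.  The auditor names, scores, and the simplified variance-of-means calculation are all illustrative assumptions (a full AIAG-style ANOVA study involves more), but the core idea is the same: separate within-auditor variation (repeatability) from between-auditor variation (reproducibility).

    # Minimal Gage R&R sketch (crossed study, simplified variance-of-means).
    # Hypothetical data: 3 auditors each score the same 5 Safe Work Permits twice.
    from statistics import mean, pvariance

    # scores[auditor][permit] = repeated audit scores (percent correct)
    scores = {
        "Auditor A": {1: [92, 90], 2: [85, 87], 3: [78, 80], 4: [95, 93], 5: [88, 86]},
        "Auditor B": {1: [80, 82], 2: [75, 77], 3: [70, 68], 4: [85, 88], 5: [79, 81]},
        "Auditor C": {1: [91, 89], 2: [84, 86], 3: [79, 77], 4: [94, 92], 5: [87, 85]},
    }

    # Repeatability: average variance within each auditor/permit cell.
    repeatability = mean(
        pvariance(trials)
        for permits in scores.values()
        for trials in permits.values()
    )

    # Reproducibility: variance between each auditor's overall mean score.
    auditor_means = [
        mean(s for trials in permits.values() for s in trials)
        for permits in scores.values()
    ]
    reproducibility = pvariance(auditor_means)

    print(f"Repeatability (within-auditor) variance:    {repeatability:.2f}")
    print(f"Reproducibility (between-auditor) variance: {reproducibility:.2f}")
    # A large between-auditor component means the measurement system itself,
    # not the permits being audited, is driving the scores.

If the between-auditor variance dominates, the problem is the audit instrument or the auditors, not the units being audited, and that is exactly the kind of gap a VALIDATION effort is meant to surface.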

Look at it this way: if the data we are receiving is BAD DATA and we make decisions (or assumptions) on that BAD DATA, then we are making BAD DECISIONS (or assumptions).

 

How do we VALIDATE our safety data?

We use 2nd and 3rd parties to perform the same exercise(s) in the same environment(s) to see if they achieve the same result(s). 

For example, let's imagine we have a leading indicator metric that requires each operating unit to perform ten (10) Safe Work Permit audits each month. 

It is easy to VERIFY this is being done by the number of audits they turn in.  But how do we know these audits were not "pencil whipped" to achieve success in this goal? 

Doing 10 SWP audits poorly each month is NOT what we expect to accomplish with the Auditing/Inspection element of our Safety Process/SMS! 

We do these audits/inspections to IDENTIFY GAPS (i.e., hazards) within one of our most critical administrative controls. So, I think we can agree that the quality of these audits/inspections plays as big a role in our success as the numbers being completed.

I always used my safety teams to perform a "sampling" of these audits and inspections, and this data was kept separate from the Unit's Performance data.  We would try to get 2 to 3 of our audits/inspections in each of these units each month. 

But understand this: our audits/inspections were simply a means to gather data - we were NOT playing the "safety police," although we would address any concerns we found during our audits.  In other words, we were really auditing the auditors.

So, each month, we would tally the scores of these audits/inspections for a "grade" of sorts.  We always strived to be >85% correct in:

  1. how the permit was completed
  2. how the permit was issued
  3. how the workers complied with the permit requirements
  4. how the permit was closed*

* these were done solely via "desktop audits."

This worked just like the tests we were graded on back in school.  Each SWP had a total point value, and each failure was subtracted from that total to produce a percentage grade.  We started with an expectation of 85% and continually raised it each year in our Leading Metric Goals, so that a very mature Safety Process/SMS would eventually be expected to perform at around 95-98% accuracy.
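
Here is a quick sketch of that grading arithmetic.  The point total and the findings are hypothetical examples, not values from any actual permit:

    # Sketch of the SWP audit grading arithmetic (hypothetical values).
    TOTAL_POINTS = 50  # assumed point value for one Safe Work Permit audit

    # Each finding (failure) carries a point deduction.
    findings = [
        ("Atmospheric test results not recorded", 5),
        ("Worker signature missing", 3),
        ("Permit not posted at the job site", 4),
    ]

    deducted = sum(points for _, points in findings)
    grade = 100 * (TOTAL_POINTS - deducted) / TOTAL_POINTS

    print(f"Deductions: {deducted} of {TOTAL_POINTS} points")
    print(f"Audit grade: {grade:.0f}%")   # 76%, below an 85% expectation
    print(f"Meets 85% expectation: {grade >= 85}")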

So, let's say Unit #1 turned in its ten (10) audits with a score of 92% for that month.  However, when we look at the 2nd/3rd party audits done in that Unit over the same period, we discover that the safety team's audit score was 72%.  Is this too much variation for us?  I would say yes, way too much.  This should lead us to review the permits to identify any trends in the errors associated with them and to IDENTIFY on which side of the fence these errors are occurring:

  1. the Unit Supers who are issuing the permits
  2. the workers working under the permits

Please keep in mind that 10-12 audits are NOT enough data points to be statistically significant; however, they may allow us to recognize some "influences" that may have arisen that month, which we should examine to determine if we need to "intervene" from a management system perspective.
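
A sketch of that comparison in Python follows.  Note that the 15-point gap threshold is purely an assumption for illustration; the example above only establishes that 92% versus 72% is way too much variation:

    # Sketch of the unit-vs-safety-team score comparison.
    def review_needed(unit_score: float, team_score: float,
                      max_gap: float = 15.0) -> bool:
        """Flag a unit for permit review when its self-reported audit scores
        diverge too far from the 2nd/3rd party sampling scores."""
        return abs(unit_score - team_score) > max_gap

    unit_1_self = 92.0  # Unit #1's own ten audits
    unit_1_team = 72.0  # safety team sampling audits, same period

    if review_needed(unit_1_self, unit_1_team):
        print("20-point gap: review the permits for error trends and identify"
              " whether the issuers or the workers are the source.")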

This effort to "close the gap" is how we improve our safety performance in this critical administrative control.

When everyone knows we are not only counting the number of audits being conducted (VERIFYING) but also VALIDATING the data we receive from those audits, it changes the effort made by those issuing the permits and those working under them.  And we end the "numbers game" that so frequently infects a Safety Process/SMS.

A little trick I learned: as the Safety Process/SMS matures and our expectations increase, we begin NOT counting permit audits that fail to meet certain QUALITY standards.  For example, a unit turns in its ten (10) audits, but two (2) of them fail to identify X% of errors; the unit does NOT get credit for those two (2) audits and thus fails to meet its goal of ten (10) audits for the month.  Missing this goal has consequences, usually in terms of my Leading Indicator Safety Incentive Program.
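
As a sketch of that quality gate, assuming a hypothetical 85% quality threshold and illustrative audit records:

    # Sketch of quality-gated audit counting (hypothetical records/threshold).
    MONTHLY_GOAL = 10
    QUALITY_THRESHOLD = 85.0  # assumed minimum quality score to earn credit

    # (audit_id, quality_score), e.g., % of known errors the audit identified
    submitted = [(1, 90), (2, 95), (3, 60), (4, 88), (5, 92),
                 (6, 91), (7, 55), (8, 89), (9, 93), (10, 87)]

    credited = [audit for audit in submitted if audit[1] >= QUALITY_THRESHOLD]
    print(f"Submitted: {len(submitted)}, credited: {len(credited)}")
    print(f"Monthly goal met: {len(credited) >= MONTHLY_GOAL}")
    # Two audits fall below the quality bar, so the unit is credited with only
    # eight of ten and misses its leading-indicator goal for the month.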

I have written several other articles on how we VERIFY and VALIDATE our Behavior-Based Safety Observation data, Safety Training data, Incident and Near-Miss reporting data, etc.  Every safety metric we use to gauge the performance of our Safety Process/SMS MUST be VERIFIED and VALIDATED before we believe it and begin making decisions based on it.

Do not fear the human flaw of manipulating safety data; recognize this flaw and build a means/method to VERIFY and VALIDATE each data stream.  Then work to close the gaps we identify in these Process/SMS elements using this VERIFIED and VALIDATED data.

Strengthening the Safety Process/SMS is how we improve BOTH safety performance and culture!

Using VERIFIED and VALIDATED data to measure the effectiveness of our Safety Process/SMS is vital to protecting the men and women doing the dirty and dangerous work; protecting them is our privilege.
