About this seminar
In this seminar, international safety expert Professor Patrick Hudson reflects on the challenges facing organisations trying to implement processes to improve their safety performance. He discusses his safety culture ladder model, and how companies can assess their safety maturity by using this model.
Who is this seminar for?
This presentation is for leaders, managers and work health and safety consultants; however, anyone with a passion for improving organisational performance will find this presentation insightful.
About the presenter
Professor Hudson is a psychologist and internationally recognised safety expert who has worked in a wide range of high-hazard industries and is Professor of the Human Factor in Safety at Delft University of Technology in the Netherlands. He was one of the developers of the Tripod model for Shell, which is better known as the 'Swiss cheese' model.
Professor Hudson was selected as a Distinguished Lecturer of the Society of Petroleum Engineers in 2012–13, and an expert witness on process safety and safety culture in the BP Deepwater Horizon lawsuit in New Orleans.
- Moving up the culture ladder, video featuring Professor Patrick Hudson
- Safety and culture leadership, video featuring Professor Patrick Hudson
Risk and profitability: Reflections and insights from Patrick Hudson
Virtual Seminar Series - Transcript
[On screen: Professor Patrick Hudson reflections and insights]
[On screen: Part 1 – What has driven step changes in WHS maturity?]
When I look back at my history of working in the area of safety in particular, and occupational health and environment, I realise that we've gone through a number of step changes.
And the first big step change tends to be made by organisations when they're faced with a massive disaster in one form or another and they have to really start to take this seriously. That leads to the next stage of saying, "Well, we're doing all sorts of good things, but we don't know if we're doing the right things. We're doing too much. We're doing too little."
So we get the next stage, where we start getting organised. We work out what our biggest problems are, our greatest risks; we work out which problems we can put aside because they're very infrequent. The next step change is the one that everyone these days really wants to make, and the one that they're finding quite difficult to make, and that is moving from being organised to actually having the performance that you really want to achieve. One of the things driving that is a realisation that you can actually do better than you are right now, and that's moving to being ahead of the game rather than being driven by events.
The final step change we can identify is the one where you really want to integrate, get everything together. You need to actually understand how you're doing it, what you're doing, who's doing it, and whether you're being successful, and then permanently get into a state where you think, "We must be doing better. We're not doing as well as we should."
The other way to make that original step change is when regulators basically step in and say, "Either you change or we're going to make your life a misery." But many organisations haven't needed that strong push from a regulator, because they realise themselves that what they're doing is not very good. The problem is that in that reactive stage you're always waiting for the next thing to go wrong, and if things are going okay, you think you're doing quite well. What drives the next step change is that you don't get many of those relaxation moments when things do go well, and you get an awful lot of them when they don't.
And the next step you really have to consider then is, "Wait a minute. We're just reacting to events as they come along." The most frequent events are one of our biggest problems, but there are also the much less frequent major process disasters, and those are the ones which we really don't quite know how to deal with, except that we just hope we've got a system that is robust enough not to have them happen very often.
We actually do get a serious improvement when we start thinking about how we're going to be managing our safety. What we had quite often hoped is that if we managed personal safety, process safety would come along with it. As it turned out, this isn't always the case. Big disasters like Longford, BP's Texas City, and the Deepwater Horizon have shown that simply relying on personal safety wasn't quite good enough. But once you've got that organised, and you've got your risk assessments done and your priorities set, you're very successful. It does work. It makes a big difference to performance. Organisations that once said, "Hey, we wish we could reach that level of performance", reach that level, and now they say, "This isn't good enough. We've got to get better", and that's much, much harder: moving to a proactive situation, where what you're trying to do is get ahead of the game. And then the final step change is the one where you really want to make sure that all this is sustainable, so that even if everybody got up and went away, the new people coming in and carrying on the operations would be as safe and would carry on doing it the same way as they are today.
[On screen: Stages of the model]
When I think about the critical elements of my model, I start of course with the pathological, and the pathological is the only one which really is not a culture of safety. It's a culture of "get the job done however you can, don't get caught, and if anything goes wrong we know who to blame, and it's not me. It's somebody else. It's the victim." The remaining elements, the steps on the ladder, are all cultures, but they're distinct cultures, which are the core cultures of safety.
The first one is the reactive, which is basically where we wait until things go wrong and then we try and fix it, and then we wait for the next one and we try and fix that. And the problem there is that what happened last time, what happens this time, and what will certainly happen next time quite quickly don't have the same immediate direct causes, except that usually there are people doing things, and so those are the people you initially try to blame, because they're the only thing the events have got in common.
The next level that we move to is now called the calculative; it used to be called bureaucratic, but people didn't like that. I can imagine people happily saying, "We're being very calculative today", but they'd hate to sound bureaucratic. That next stage is one where in fact you've got systems and processes: you get organised, you get your priorities, you get your resources, you make sure you've got your training, you do your risk assessments. The one after that, the next level up, is the proactive, and the proactive is the one that everybody really wants to attain these days. The proactive is where you're dealing with the problems before the problems come and attack you. The calculative, by collecting a lot of data, is still inherently reactive; it is waiting. The proactive is really looking at what's the next thing coming down the line: rather than trying to fight yesterday's battle, it is trying to win tomorrow's skirmish.
And finally there's the generative, where everybody is doing their own job: you do your job and I'll do mine. A lot of the power that was still being held at the levels of upper management and line management in the calculative and the proactive is now dispersed down to the level of the workforce, because the workforce are really the experts who have to do it safely. And the job of management is to make sure the workforce gets what the workforce needs.
[On screen: Is culture static or dynamic?]
The ladder has got five levels on it, five treads, but they're not really discrete points. They represent clusters of attributes and behaviours within the organisation. It's much more dynamic; there are lots of different points. We distinguished 18 dimensions for personal safety, and when we add in process safety we add about another 10 dimensions. You can be at different points on each of those dimensions, but they form a cluster. What I often find very useful is thinking about where you are on the ladder not as a point but as a footprint. So what you have is that the main weight of the foot may be carried in the middle. Typically it's going to be somewhere in the calculative area.
We have processes, but we also nevertheless still manage to exhibit very, very clear reactive behaviours: when some sorts of things happen we just react as if we've been stung or bitten. I think of it as a footprint: the reactive is the heel, the middle of the foot is in the calculative, and up there at the front of the foot, in the proactive, there are a few bits and pieces, parts of the organisation which are really scrambling to try and get ahead of the game.
So when you want to understand how an organisation's culture is operating, it's not a single point. It's a whole series of points, and they're dynamically moving. People are getting better, and sometimes people are getting worse. People often think that the only way to go with the ladder is up. But in fact, if people are left on their own and they're not supported in the appropriate sorts of ways, they can also go down the ladder.
But if you stand back a bit, you immediately realise that those cultural characteristics of organisations are much, much broader than just safety. They refer to how we do the finance, how we go about dealing with our customers. What makes the ladder specifically relevant for safety is the transition going up the ladder in terms of the level of understanding of the risks and hazards being faced by the organisation. This applies to safety. It also naturally applies to the environment. It applies to occupational health. It applies to security, and probably even to finance. The realisation is that down at the bottom of the ladder you really don't understand your risks. You haven't a clue, and the best thing you can do is shut your eyes and hope it all goes away. And in a well-regulated world where other partners and other players are doing it well, you can get away with it. It's rather like a bad driver in traffic. A pathological organisation can be like someone who's doing terrible things on the highway, and they don't cause an accident just because everybody else is avoiding them and making up for their bad behaviour.
As we go up the ladder we move to a basic, simple understanding of what the risks are. Then you're moving to a slightly more nuanced idea of what the risks are and where the risks are coming from, up to a full understanding, not just by the people at the top, not just by the safety department, but by the people who are facing the hazards and the people who are managing the hazards, of exactly what those hazards are, what makes them more likely to be a problem, what makes them less likely to be a problem, what's the best way of controlling them, and which things we actually don't need to do.
And so when you get to the top, you've got a lot of nuance. And you can actually quite often avoid having to do some of the things that you have to do lower down the ladder, because lower down, failing to understand what we're doing means that we really have few choices. We can't be nuanced.
[On screen: Risk and profitability]
One of the ways I think about operating in a risky environment is a bit like having a bull's eye. And you can have a bit in the middle where there's basically no risk; it's inherently safe, and it doesn't matter what you do or how badly you behave. But the returns on investment at that point are pretty minimal because everybody can do it and anybody can operate in that particular part of the space.
As you move out a bit, you move into an outer ring where the risks are pretty normal; they're standard, we understand them. Not everybody wishes to take them, so we can make more money, we can get better returns, because we know how to do it, and we do it, and we do it well. And the better you get at doing it, the further out you move towards what I call the edge, which is where it gets very exciting. But if you fall over the edge, that's when you have an accident or a major incident.
And what is interesting about thinking about things like high reliability organisations, and proactive and generative cultures in general, is that they enable you to operate out close to the edge for two reasons: one is you know how to operate, and the second is you've got pretty good systems for telling you where the edge is, and your operating processes keep you away from the edge.
So for organisations that have to sweat their assets in a harsh commercial environment, it's absolutely vital to do this kind of stuff well. When you're sweating the assets, you'd better be jolly good at what you're doing, rather than just doing what the bookkeepers told you.
One of the natural questions you can ask is: doesn't all the safety stuff just cost money? And the answer is no. It makes you money, but you've got to get your head around how it does it. And I think that's very important, because typically what people do is complain that they have to do this, they have to do that; safety is just a cost, it gets in the way of doing the business. But the reality is: if you've got your safety right and you can do it safely, then you can go in and do interesting, and exciting, and dangerous, and dare I say profitable things, because you're good at it, you know what you're doing, and you know when to back off so you don't get hurt. Whereas if you're not very good at this, you don't know when to back off, and you may not have the nerve to do the really exciting stuff either.
[On screen: Costing risk]
To make the argument, I often do this, especially with senior people like boards. I have a figure in my head, which is that roughly 10% of turnover is wasted on poor performance in areas like OHS, environment, and process safety. So if you're turning over 20 billion a year, two billion is vanishing in smoke because you're not actually managing it very well.
Now people disagree with me, and I've had two types of disagreement. One was a friend of mine from a very big company that makes an awful lot of profit, who said that he thought I was entirely wrong. And I said, "Well, what figure should it be?" And he said, "15%."
So the trick I do is to say to people, "Well, I may be wrong." But if they object I say, "Well, you must have the figures, so you know what the figures are." They'll usually retreat in some confusion and say, "But they still can't be right." And I say, "Okay, it's fine. It's only guesswork anyway, but it's got the conversation started. And we can use a spreadsheet where you can look at the costs of different types of accidents and different levels of consequence." And we can say, "Well, you fill in your own data. You fill in the likelihood that these kinds of incidents are going to happen, from unlikely, to very unlikely, to almost impossible. And you fill in whether they're going to be really expensive or just a little expensive. And then you put it all together. Lo and behold, you come up with something that looks suspiciously like 10% of turnover. But they're your figures, not mine."
By the time you've got people to that level of understanding, the finance people's only complaint is, "Why didn't you tell me this earlier?" And so all of a sudden the finance people can become the safety people's best friend, rather than what they thought was their natural enemy. The way we go about it is quite interesting: we create basically what looks like a risk assessment matrix, where the cells contain, for different sorts of incident, the costs at each level, from a level five total disaster, down to level one, which is almost a near miss, and level zero, which hardly counts as an incident at all.
So in the case of the oil industry, we had a quick rule of thumb that a total platform loss was going to cost about 1.6 billion dollars, whereas at level four, where more than one person is killed and there's major asset damage, you're probably looking at something more like 160 million, an order of magnitude less. Now, losing the total platform is very unlikely. So the expected cost that you're actually exposed to is the product of a very small probability and a very large amount, and it usually comes down to maybe a couple of hundred dollars on an annual basis. What we discovered was interesting: the place where all the money is vanishing is not the big headline events. We're actually quite good at managing those most of the time, although we could still be better. The way it turned out is that the level two and level three stuff, which is typically regarded as hardly worth reporting more than a little way up the line, gets aggregated, lost, and isn't considered worth bothering about. So it's what I call the death of a thousand cuts that is where almost all of that 10% exposure actually comes from. When you see the figures looking at you, you say, "Ooh, now we know what we can do. We can actually do something about that, and if we're clever, manage it in a way that the bigger, more headline items get covered at the same time as well."
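The "spreadsheet" arithmetic described here can be sketched in a few lines. This is an illustrative model only: the 1.6 billion and 160 million figures are the rules of thumb quoted in the talk, but every probability and every lower-level figure below is a hypothetical placeholder, not data from the seminar.

```python
# Sketch of the expected-cost risk matrix Professor Hudson describes.
# Levels 5 and 4 use the rough dollar figures quoted in the talk; the
# probabilities and the level 1-3 figures are hypothetical placeholders.

# consequence level -> (annual likelihood, cost in dollars)
incident_levels = {
    5: (1e-7, 1.6e9),   # total platform loss: very unlikely, very costly
    4: (1e-6, 1.6e8),   # multiple fatalities, major asset damage
    3: (0.05, 2.0e6),   # hypothetical mid-level incident
    2: (0.5,  3.0e5),   # frequent, "hardly worth reporting up the line"
    1: (5.0,  1.0e4),   # near misses, several per year
}

def expected_annual_cost(levels):
    """Expected exposure per level: likelihood multiplied by cost."""
    return {lvl: p * cost for lvl, (p, cost) in levels.items()}

exposure = expected_annual_cost(incident_levels)
total = sum(exposure.values())
for lvl in sorted(exposure, reverse=True):
    share = exposure[lvl] / total
    print(f"level {lvl}: ${exposure[lvl]:>10,.0f}/yr ({share:.0%} of exposure)")
```

With these placeholder numbers, the level five expected cost comes out at a few hundred dollars a year, exactly as described, while the aggregated level two and three "thousand cuts" dominate the total exposure.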
[On screen: Litigation]
One of the big problems that we face in today's world is litigation. It scares people to the point where they think that they shouldn't say what's going on, they shouldn't say what's happened to them, because they're afraid that if they get into court they're going to be in terrible trouble. Now, I think this is actually misunderstood. The really crucial discovery is that probably your best defence in court is the realisation that things will always go wrong. Life is not fair. And this holds in court and in the spirit at least of the law, although sometimes the letter of the law might need to be tidied up a bit, and I'm not just talking about Australian legislation here. I'm talking about America. I'm talking about Europe as well.
What counts is: were you trying? If you were trying hard, and you were doing your darndest to avoid an accident, and nevertheless you just got caught by something which came completely out of left field, then you really should be able to get off. You might still be required to compensate the people, but the problem is that if people are terrified by litigation, then what can often happen is that they're going to say, "Don't tell me. I don't want to know. I don't want to hear."
One of the things that you have to do is really realise that you've got to do certain things because that's what's expected. That is the right thing to do. I'll give you a classic example, which we discovered in the Deepwater Horizon case that I've been personally involved in as an expert witness. It turned out that BP had what I would argue was the best safety management system in the world at that time, called OMS, and they developed OMS as a specific response to their Texas City disaster. And we thought that they hadn't rolled out OMS in the Gulf of Mexico because that was a difficult region, and they'd done it elsewhere.
It turned out BP had rolled it out in the Gulf of Mexico, but what they did was roll it out on their own assets first, and they left the non-BP assets, like Transocean's Deepwater Horizon, to a later date. So what demonstrated that they failed to exercise their true duty of care was that they took the active decision not to implement the local operating management system, LOMS, on that particular well. And what I was going to argue in court was quite clear, which was that the system was so good that if they had implemented it, it would have prevented the disaster.
Now, if they'd failed to do it because they just forgot, or they were in a hurry and it was literally coming along the next week, that would be understandable. Where they went wrong was that they actually had a meeting of the risk committee and took the active decision not to implement it. So don't do this at home, folks.
[On screen: Assessing your organisational and cultural maturity]
One of the things people want to do is to find out what their culture is like, and the standard way of doing this is to carry out a safety culture survey.
[On screen: Safety culture survey]
The problem with that is, first of all, they're big. Everyone's forced to fill them in. Also, they're really attitude surveys, and the problem I have is that people know what answers to give. And they may well give the answers to achieve the results they want to achieve. So I've seen surveys that have been filled in by groups of people who had a very clear message they wanted to send to their management. It wasn't about safety; it was about their relationships and their industrial relations.
But leaving that aside, we also have complications, because you've got 150 questions, and you say,
[On screen: Safety culture survey. How to interpret?]
"What are we going to do once we've got the data? What does a 3.8 mean on a five-point scale? What does a 4.2 mean?" They are useful, but they're not that useful, I find at least. What they do is fit with the requirements of, for instance, the U.K. Health and Safety Executive's definition of safety culture in terms of values, beliefs, attitudes, and behaviours with respect to safety, which is a perfectly good definition. Except it doesn't really capture values too well. It doesn't capture beliefs at all. It's very good on attitudes and somewhat weak on behaviours. But there's a paradox, which I discovered as well: if I know what your values, beliefs, attitudes, and behaviours are from your questionnaire, from your survey, I may not necessarily be able to predict exactly what it is you're going to do when you're on your own at 3 o'clock in the morning. This, as I have discovered, is the classic situation in aircraft line maintenance: it all happens at three o'clock in the morning, with a single engineer who's working on their own at night, trying to make sure everyone stays safe.
[On screen: Observe behaviour]
If, on the other hand, I observe you behaving at three o'clock in the morning, I can work out pretty accurately what your values, beliefs, and attitudes are, because I know your behaviour.
So one of the things that you really need to concentrate on is working out what people actually do, rather than what they say they do. For a lot of senior management, their behaviour is saying the right things rather than necessarily doing the right things. They're very good at talking, but they're not always quite so good at walking. So the way we try and assess safety culture is, rather than giving people very carefully crafted single-item questions like "my supervisor tells me when I'm not behaving correctly", or something like that,
what we do is have what we call rich descriptions, where people can say, "That's us. That feels like us. That's what we are like." These can be around: what is the status of the safety department? What are the rewards of good safety performance? How do we do audits? How do we communicate? Who communicates? With three or four sentences you can actually put a description together along each of those dimensions.
So you take each of the five steps on the ladder, and then you can pick one of those descriptions and say, "That's us." We derived the original tool that we used in the Hearts and Minds Programme, going right back to the early study in 2000, by doing this. What we also discovered was that we'd made the tool so that people would say, "Well, we're a bit calculative, and we're a bit proactive, and it's in between the two somewhere." So we made a system and we scored those. And that's where we left it for a long time. But I became dissatisfied, and I realised that there was something going on. What was going on was that people were not picking the description of where they were. They were picking a description that also reflected where they would like to be, just for themselves and for their colleagues. They would like to feel that their workplace wasn't as bad as they were tempted to score it, and so they would edge it up a bit.
And so what we found was that the scores on these tests were probably being heavily influenced by the effect of self-esteem, an aspiration rather than the actuality. So I decided, single-handedly, a few years ago to change the way we measured. It turned out to be very useful and very insightful. What I did was I said, "I'm not interested in measuring where you are. Read the descriptions, and choose those descriptions where you think, reasonably, that your organisation could be 24 months from this day." And I say 24 months because 24 months is long enough to think that you might actually be able to effect the change within an organisation, and short enough that you haven't been moved away from your job. It's just a way of anchoring people to a point in time which is not tomorrow, but is not ten years down the line.
And so I said, "Just pick those, what I call aspiration scores. We're not interested in where you are." So they do this, and all of a sudden they're not saying, "Well, we want to be a bit proactive but also somewhat generative." 99.9% picked one box out of the five and said, "That's us. That's what we reckon we could be. That describes us really well."
So we get a new score and a profile, that footprint with a heel and the toes, usually calculative and proactive. And now what we've got is a gap. So then we pick the most impactful gaps, the ones we think you've got the best chance of success with. Let's work on those, and what you are now doing is picking quite concrete steps that will be exhibited as an organisation, rather than simply going around saying, "What we need is better values around here."
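The gap analysis described here can be thought of as a small calculation per dimension: the distance on the five-step ladder between where the footprint sits today and the 24-month aspiration. The sketch below is purely illustrative; the dimension names and every score are hypothetical, and only the five ladder levels come from the model itself.

```python
# Illustrative gap analysis on the five-step safety culture ladder.
# Dimension names and all scores below are hypothetical examples.

LADDER = ["pathological", "reactive", "calculative", "proactive", "generative"]

# dimension -> (current level, 24-month aspiration level)
assessment = {
    "status of the safety department": ("reactive", "calculative"),
    "rewards of good safety performance": ("calculative", "calculative"),
    "how we do audits": ("calculative", "proactive"),
    "who communicates, and how": ("reactive", "proactive"),
}

def gaps(assessment):
    """Gap per dimension: aspiration minus current position on the ladder."""
    return {
        dim: LADDER.index(target) - LADDER.index(current)
        for dim, (current, target) in assessment.items()
    }

# Rank dimensions by gap size, largest first, to pick where to work.
for dim, gap in sorted(gaps(assessment).items(), key=lambda kv: -kv[1]):
    print(f"{dim}: gap of {gap} step(s)")
```

Ranking the dimensions by gap size is one simple way to surface "the most impactful gaps" to work on first; a real assessment would weigh feasibility as well as distance.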
[On screen: Small business application]
These tools and approaches have typically been developed with large, resource-rich industries, like the oil and gas industry, like aviation, and the question that often arises is: what about the little guy, where they're all working their tootsies off because they've got to stay in business, and they can't spend a lot of time going around filling in paperwork for people because they've got a job to do? In fact, in some ways it's easier, because you've got fewer people to persuade, fewer people to work on, and they know each other. When people know each other, they know what other people are good at and what they're not so good at. So we can get everyone into a shed, or if it's an aviation operation, we can get them all into a hangar.
And what we do at the end of every day is try and say, "What are the kinds of things that we should be doing now? What went well? What went badly? Why did it go wrong? What are we going to do to make sure we never get into that problem again?" And sometimes you say, "Well, we thought we fixed it, and we haven't, so we'll have to try again." What you realise with small organisations is that they can do this if they are given enough time. And I think one of the problems quite often is that clients don't give small contractors enough time to become better. One of the things that you can do, if you're a bigger company, is actually make an investment in your contractors by saying, "Take some time at our charge." And we may be talking 10 or 20 minutes in a day, or even half an hour in a week, to say, "What are the things that we could do that would make us better next week than we were this week, than we were last week?"
[On screen: What are the things we can do to improve?]
When I think about progression up the ladder, which is why people really approach me and ask whether I can help, there are five steps on the ladder and there are four arrows: the arrow from pathological to reactive, from reactive to calculative, and so on. And what we're trying to do when we're getting better is make a transition over one of those arrows.
And I discovered that there's a very simple structure which helps me a lot when I'm trying to advise organisations on how to do it and design plans for improvement. It comes from the first realisation, when I was talking to organisations: they would say to me, "We're pretty good. We're definitely heading up the ladder. We're heading towards the higher reaches." And I'd say, "Yeah. I'm impressed. Pretty good stuff." I would do, because they're paying me. But I'm tricky. So I'd say, "Well, why are you so good?" And they'd say, "Well, we've got this in place and that in place, and this in place, and that's in place." And I'd hear "in place" coming in like mortar fire from the enemy trenches. And I'd say, "Yes, again I'm deeply impressed. Just one question." And they'd say, "Yes?" At this point they'd be beginning to know I'm tricky. "Just one question. Are you using any of it yet? Is it in operation?" "Ah," they'd say, "we're going to. We've got a plan. We've got an implementation plan. We've got a work group, and we're starting next week." I'd say, "Good. So you're not actually using it yet, but you're going to." Or sometimes, "We're using some of it, but we're still planning on using some more."
So the transition from reactive to calculative is taking the stuff that you put in place when you stopped being purely pathological and actually getting it to work. So we have standards, but we actually use them, as opposed to having them sitting on a shelf looking bright and shiny but not actually influencing anything. And then there comes another problem: "Okay. So we're using them." I'd say, "Are they any good?" "Well," they'd say, "we've got a few processes that really don't work well, but we don't dare stop using them, because that'll show our lack of commitment to safety." I'd say, "Well, wait a minute. Why aren't they working?" "Well, they're not very good." So the realisation I came to was that there's another transition, the transition from calculative to proactive, which is a difficult one: it is about making what you've got effective. So actually taking what you put in place, and then making sure that you're actually going to achieve the performance, and the results, and the behaviours that you intended when you put the stuff in place.
[On screen: Future challenges]
If we look into the future, there's one thing we know: things are going to change. What we've got to do when we change is recognise how we're moving, as we change, into the way in which we're going to operate with the world. We may slip back into a reactive mode because we don't actually understand how our new technology is working. But if you understand that you don't understand, then you are already beginning to get a head start. So I think that in my ideal world, as people start to design new approaches to work, they would design in how the organisation is going to handle the changes, not simply in classic change management terms but much more at the cultural level as well. And one thing I can guarantee is that if there are major technological changes, and these cultural aspects are not considered, you're going to get a few massive disasters along the way.