Safety Management: A Personal Development Strategy

 

Abstract (Summary)

One axiom many SH&E professionals have historically been taught in safety management is that unsafe acts plus unsafe conditions plus time = an incident. It sounds true and the logic is simple and neat. Yet, as one spends time in safety management, it becomes readily apparent that this axiom is, in fact, false. H.W. Heinrich, considered by many to be the father of modern industrial safety, said that 85% of industrial accidents were the direct result of workers' unsafe actions. Safety management has focused its effort to prevent accidents on fixing worker behaviors ever since. The focus on changing workers' behaviors as the way to improve the quality of any safety program opened the door for psychology to enter the field of safety management. Because top management tends to accept that the ideal quality of a safety program equates to meeting specifications, safety performance has not come close to matching results achieved in quality.

 

One axiom many SH&E professionals have historically been taught in safety management is that unsafe acts + unsafe conditions + time = an incident. It sounds true and the logic is simple and neat. Yet, as one spends time in safety management it becomes readily apparent that this axiom is, in fact, false.

The theory that unsafe acts cause most industrial accidents grew out of research conducted from 1931 to 1950 by H.W. Heinrich, considered by many to be the father of modern industrial safety.

Heinrich worked in the safety department of Travelers Insurance, where he investigated thousands of industrial incidents and injuries. He concluded that a worker's failure was at the heart of accidental injuries and that methods of control must be directed toward that failure (Heinrich, 1950, pp. 2, 10). Heinrich said that 85% of industrial accidents were the direct result of workers' unsafe actions. Safety management has focused its effort to prevent accidents on fixing worker behaviors ever since.

The focus on changing workers' behaviors as the way to improve the quality of any safety program opened the door for psychology to enter the field of safety management. In the 1960s, the dominant psychological theory in American academics was behaviorism. It was not difficult to link behaviorism with Heinrich's theory. The marriage of these ideas provided a simple solution to the complex problem of accident prevention.

But by the 1970s, behaviorism proved to have serious flaws and lost its level of dominance with the move in psychology toward cognitivism (Hunt, 1993). Despite this, by the 1980s, companies were embracing behaviorism as the way to prevent employee accidents. Many articles were published suggesting that improving safety was just a matter of motivating workers by applying behaviorism in some form. Many managers followed suit because this meant the workers had to change, not them.

When some safety managers examined the theory behind this process, they found that behaviorism could explain only "the elementary forms of behavior that make up only part of the psychology of rats, and a very small part of the psychology of human beings" (Hunt, 1993, p. 279). Behaviorism continues to be applied in today's workplaces in the form of observing workers and providing positive reinforcement when they are observed working safely.

This article takes an in-depth look at what the word quality means when applied to safety management and how it affects the skills SH&E managers need to perform their jobs effectively.

Quality of Safety Management

In any endeavor, one must step back and evaluate whether actions being taken make good sense. That time has come for safety management. SH&E professionals need to identify the ultimate goals. What does the word quality mean when applied to a safety program? Do we only want to control short-term behaviors of workers, or do we want employees to become responsible thinkers, decision makers and problem solvers? How does one know when a safety program is functioning at its highest level or in the ideal state? Should compliance be used as a measure of an ideal safety program?

The author believes that the SH&E profession should be concerned with the development of people's skills. This encompasses the quality of safety associated with people being engaged, enthused and empowered to take action on safety problems faced each day - problems caused by the universal force of entropy, the measure of disorder that exists in all systems. It is not about compliance and its goal to meet standards/regulations and maintain the status quo; rather, it is about continual improvement where improving safety is a constant, consistent process. The latter requires the mental and manual labor of all people at every level of an organization.

For top management today, safety management often equates to meeting specifications. This means that once "X" is achieved, the level of safety in operations is deemed "good enough." Furthermore, it may not even need to be done all the time. It is most important during a safety inspection or safety audit. No safety program can guarantee that a guard will be in place or a safety rule will be enforced or adhered to 100% of the time. However, if compliance is verified during safety inspections or audits, then the safety manager's job is secure.

However, most SH&E professionals would agree that only meeting safety specifications has little to do with the quality of safety delivered to employees. Employees must deal with the daily reality of work and the effect of disorder on the safety of operations each minute of every day.

Because top management tends to accept that the ideal quality of a safety program equates to meeting specifications, safety performance has not come close to matching results achieved in quality. For example, in 1987, Motorola set a goal of 10 times improvement of quality by 1989 and 100 times improvement by 1991; another goal was to reach a level of 3.4 defective parts per million by 1992. The objective was to achieve zero defects delivered to customers. The company achieved this goal in some parts of its operations, with overall operations achieving 40 defects per million (dpm), which is a marked improvement over the accepted level of 2,700 dpm just a few years before (Shina, 2002). Almost all electronics companies now work at these quality levels.

The origins of quality management theory practiced today can be traced to what was taught to the leaders of Japan after World War II by Homer Sarasohn and W.E. Deming (Dobyns & Crawford-Mason, 1991). They stressed two important ideas about quality management that were largely abandoned by American managers after the war. Both ideas apply to the quality of safety management.

Idea 1: It's the System

First, Deming and Sarasohn emphasized that to understand and improve quality, one must view work as a system. They realized that producing anything is done in a system, whether it is mining coal, assembling cars, building radios or processing paperwork. The system is where problems originate. Deming (1994) defined a system as "a network of interdependent components that work together to accomplish the aim of the system." Both men knew that a system cannot be improved until those involved examine the entire system simultaneously to find out what is happening in it and what it is capable of producing.

Deming (1994) linked systems with management by explaining why production must be viewed as a system. He explained that when people obtain and apply profound knowledge to systems they look at work differently. The overriding theory of managing a system for high quality and continual improvement is to manage processes so the output is as consistent as possible. That means one must work hard to reduce the variation inherent in every operation in the system. The quality of safety management can be improved using the same methods Sarasohn and Deming advocated to improve the quality of any process (Dobyns & Crawford-Mason, 1991).

Understanding Systems

Learning about and improving systems requires systems thinking. Not until the 1990s did American managers rediscover the importance of thinking about systems. Senge (1990) describes systems thinking as "the discipline of seeing wholes." By definition, a system always exists to serve something other than itself. Safety management is a system. It exists to serve other work systems such as research and development, sales, marketing, service, maintenance, production, assembly and delivery so that each one can operate without causing injuries or harm to people who work in or around them.

Traditional safety management relies heavily on analysis of single events to study the outcomes of the system and take corrective action. Analysis involves taking something apart, studying the behavior of each part separately, then trying to aggregate the understanding of the parts into an understanding of the whole.

This is opposite of the approach used in systems thinking. One cannot understand the behavior of a system through analysis. Explaining a system requires synthesis. This means instead of breaking something down and examining each part separately, one keeps the parts together and studies outcomes while the system is running. This is an impossible task for analysis (Dobyns & Crawford-Mason, 1994).

Consider this analogy about how a football team prepares for a game. Each position player practices with a purpose. Linemen block and tackle. Tight ends work on catching passes. Running backs practice carrying and running with the ball. The quarterback works on handing off and throwing the ball. Defensive players practice skills to prevent the offense from achieving its goal. This allows one to analyze the skills of each individual player.

However, no one can tell how well a team will play by gauging how each individual player performs in practice. The team's performance depends on how well everyone works together in the game. Ironically, a football team spends many hours practicing during the week to prepare for a single game in which the playing time of an individual player is only a few minutes. This is opposite of the working world where people get only a few minutes to practice job skills and are expected to always use them.

The lack of systems thinking often results in people misinterpreting single events (symptoms) as causes to explain why things happen. Consequently, they try to solve problems by removing symptoms rather than causes. Heinrich worked from a single-event perspective. It would be easy to interpret unsafe actions as causes from that viewpoint. From a systems viewpoint, however, unsafe acts are interpreted differently.

Symptoms can be called the facts of the case. When a patient visits the doctor, s/he starts by describing the symptoms, such as a headache or nausea. These are the facts of the case or the symptoms of poor health. The doctor makes initial observations to identify any other symptoms to add to the facts of the case. Once the doctor completes the diagnosis, s/he develops a hypothesis about the illness, then prescribes what s/he thinks will eliminate the cause. If the prescription is correct, it will remove the cause and the symptoms will disappear. In other words, symptoms are caused by something else.

By applying systems thinking, SH&E professionals can look at work from a different vantage point. From this view, it is clear that unsafe acts are actually symptoms of safety problems, not causes. Once employees are hired, trained and ready to perform their jobs, unsafe actions exist mostly because of deficiencies in the safety management system, not the individual. They can be reduced or eliminated, or their impact minimized through continual improvement of safety in production. But even if all unsafe actions are eliminated, incidents will continue to occur until the system is corrected.

The only management theory Heinrich had was command and control. From that perspective, it made sense for him to classify workers' unsafe actions as the cause of accidents. It was a simple way to solve the problem.

As is now known, however, how one manages work is optional. In a command-and-control environment, managers do all of the thinking and mental labor while workers wield the tools and perform the manual labor. The ultimate objective in such an organization is for the output to meet specifications. This means the company will be unable to improve its outputs because of all the variation allowed within specification limits. As long as the site meets specifications, its output, with all its inherent variation, will be considered "good enough."

In this world, outcomes vary as much as possible. The attitude that "things are good enough" when output is within specifications permeates such an organization. It becomes the company's culture and it means quality, productivity and safety can be only mediocre at best. No law says a company cannot manage its business this way, but no law says it must, either.

Idea 2: Workers Are Not the Problem

The second important idea Sarasohn and Deming professed was that the interactions of a system's components are responsible for most of its outcomes (Dobyns & Crawford-Mason, 1991). That means individual workers are not responsible for most of the things that go wrong in the system (including quality, productivity and safety). That responsibility belongs to management.

In other words, most work problems, including employee accidents, are a result of management decisions and directions. Only management has the power to make decisions about the system. Deming (1994) noted that management cannot abdicate its responsibility for system outcomes. It must be held accountable for deficiencies in the system, not workers. Deming was not placing blame; he wanted to ensure that management provided the leadership necessary to improve the system.

This thinking is the opposite of one of the most enduring beliefs held by managers: if something goes wrong at work, someone must be identified and held accountable for it. For managers, Heinrich's theory makes sense. It is a logical, easy way to hold workers accountable for accidents. It reinforces what they have been taught and makes it acceptable to assume that everything was working properly up to the moment just before an incident occurred. It is easy to accept the premise that if workers would only do what they are told and obey safety rules and regulations, no accidents would occur.

A worker is in no position to refute this proposition. Suppose a worker is involved in an incident. If the worker agrees that s/he cannot control his/her actions, the worker sounds incompetent. Who wants to admit that? The easiest thing to do is to accept the blame (85%), commit to doing better the next time and hope for no reprimand from management.

The Need for Higher-Level Thinking & a Better Way of Managing

Most critical to a company's success is how it manages its operations and treats its people. The traditional American model, with its ultimate goal being to meet specifications to maintain the status quo, cannot get the best from any system.

Witness the bankruptcy of GM, once one of the largest and most admired corporations in the world, and hundreds of its suppliers. Its leaders believed that for a company to be at its best, each department had to operate at peak efficiency. This theory served GM well during its first 50 years. But for the last 30 years, numerous signs warned that this approach was not working. That is because such thinking simply is not true. Imagine how an orchestra would sound if each member played as loud as possible just to show off.

Organizations need to move beyond the current management model and learn how to manage so work can be completed most efficiently with the highest quality outcomes. This includes people not getting hurt while working. It requires a new way of thinking about work with the utmost respect for human capabilities. Workers must be viewed not as bionic machines or "human capital," but as the brains required to solve the daily quality and safety problems found in every work system.

For most management, the goal for measuring quality is to supply 100% conforming product. This is achieved by conducting a final inspection to identify and remove any defective parts. Meeting specifications is the only concern. All costs are passed on to the customer so management has little incentive to reduce waste, including the worst form of waste, employee injuries.

In continual improvement, one measures quality in two ways: 1) shipping 100% good parts and 2) the state of statistical control of processes. Statistical control helps those involved identify the types of problems in a system and predict how the system will behave in the future. By combining these two ways of measuring processes, one can identify four possible states (Wheeler & Chambers, 1992):

1) the ideal state;

2) the threshold state;

3) the brink of chaos;

4) the state of chaos.
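Combining the two measures above yields the four states directly. The following is a minimal sketch of that classification (the function name and labels are illustrative; the state definitions follow Wheeler & Chambers' framework as described in the text):

```python
# Sketch of Wheeler & Chambers' (1992) four states, classified from the
# two measures in the text: whether the process is in statistical control,
# and whether it ships 100% conforming output.
def process_state(in_statistical_control: bool, all_output_conforming: bool) -> str:
    """Map the two quality measures onto the four possible states."""
    if in_statistical_control and all_output_conforming:
        return "ideal state"        # predictable and 100% conforming
    if in_statistical_control:
        return "threshold state"    # predictable, but some nonconforming output
    if all_output_conforming:
        return "brink of chaos"     # conforming today, but unpredictable
    return "state of chaos"         # unpredictable and nonconforming

print(process_state(True, True))   # → ideal state
```

The point of the sketch is that conformance alone cannot distinguish the ideal state from the brink of chaos; only the state of statistical control reveals whether today's good results will persist.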

The force of disorder within the system acts on all of these states, pushing everything toward the state of chaos. A constant push and pull exists between entropy and managing to achieve the ideal state. The goal for production is to have the work system operate at the highest possible level (the ideal state). The ideal state is achieved when production is running smoothly, quality levels are on target and accidents are consistently zero. When it comes to work, how should one manage systems to keep them operating in the ideal state?

Management & the Four States

The states of chaos and the brink of chaos revolve around the circle of incompetence. Traditional management keeps shifting between unconscious incompetence and conscious incompetence (Figure 1). This system is managed through firefighting routines by managers whose jobs are to bring the process out of chaos up to the brink of chaos, where outcomes meet specifications and the situation is considered to be "out of trouble." Once the fires are extinguished, managers move on to the next problem.

Soon, however, the process begins the entropy slide and the fires return. The manager's tools are what hourly employees call "flavors of the month." Basically, the goal to meet specifications and maintain the status quo through firefighting keeps an operation trapped in the circle of incompetence. The ability of managers shifts back and forth from unconscious incompetence to conscious incompetence. They believe they are doing something that is working, but it is not. These managers are certain of things that simply are not so.

To escape this trap, companies are always trying new management techniques. The past 30 years have seen approaches such as total quality management, right first time, reengineering, zero defects and six sigma. All have good points to make about problem solving and managing operations, but the success of these programs depends on a new management philosophy.

Many companies tried to adapt or modernize Taylorism to blend in with these new techniques. American companies started to learn quality management from Japanese competitors that were making superior products at lower costs. Management learned about these new techniques, but implemented them with limited success because managers often could not abandon existing management methods. Many were slow to realize that Taylorism does not work well with the philosophy of continual improvement.

The only way managers can stop fighting fires and work on preventing them is to advance to a higher level of thinking. Deming was one of the first to realize this. He labeled this new way of thinking profound knowledge (Deming, 1994). Without profound knowledge and the effective use of control charts, Deming knew managers could not advance to a higher level of thinking. Until managers obtain profound knowledge, they cannot escape this circle of incompetence.

Many believe Deming's only contribution to quality was the use of statistics and control charts. Many also thought control charts were only meant to be used in a manufacturing setting. Deming (1994; 1995) understood how they could be applied to general management. Deming showed that control charts can help an organization study and improve any work process so it can predict future results of processes. These include the intellectual side of operations, such as designing, planning and training, all of which apply to safety.

American managers tend to use control charts as report cards, to adjust or pilot processes, and for extended monitoring to track quality characteristics. Many do not recognize and appreciate the power of these charts to help management reduce variation and control and improve processes over time (Deming, 1994; 1995).

The state of statistical process control is not the natural state of any production process. Over time, if a process is left alone it will deteriorate and go out of control. Control charts provide signals from the process that enable management to remove any special or assignable causes and improve the process. When managers use control charts in this manner, they are exercising conscious competence.

Control charts also help to create process knowledge. To be effective, they cannot exist in a vacuum. Only when a company's culture, from top to bottom, is attuned to how to use them effectively can their maximum benefit of continual improvement be achieved. A company with internal barriers to cooperation and communication will never gain the advantage of control charts. When the culture is constructive, a company will have unconscious competence for continual improvement where operating at high levels of quality, safety and productivity is "the way we work around here."

Safety management has made little use of the theory of control charts. Like production, the SH&E profession has not recognized the power and usefulness of these charts. This may be because many felt the tools were after-the-fact measurements and did not realize that their successful application requires a new way of managing and new competencies.

Some important reasons for using control charts to manage safety are:

1) They are the only reliable method to determine common versus special causes of variation that are responsible for accidents in a process. They will help prevent management from making the mistake of blaming individuals for causing accidents when those causes are actually built into the system.

Without statistical aids, managers often default to the idea that something special or unusual happened, such as the person being careless. When one looks for something special and takes action on a particular person, yet the incident is actually the result of a common cause, one is doomed to fail in the effort to improve safety.

2) Control charts provide a common language between management and hourly workers so they can communicate without emotion and focus on the business of fixing the system so it can be moved to the ideal state. When used for this purpose, control charts create knowledge about safety in a system.

3) They help people focus on reducing process variation, which leads to consistency and predictability.

4) They prevent managers from making the mistake of judging every single data point and classifying up or down movement as good or bad.
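As one illustration of how such a chart separates common from special causes (the data and helper names here are hypothetical, not from the article), an individuals (XmR) chart computes its limits from the average moving range; points beyond the limits signal candidate special causes, while movement inside the limits is common-cause variation that should not trigger action on individuals:

```python
# Sketch of an individuals (XmR) control chart applied to hypothetical
# monthly recordable-injury counts. The 2.66 factor applied to the average
# moving range is the standard XmR chart constant.
def xmr_limits(values):
    """Return (center line, lower limit, upper limit) for an XmR chart."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    lcl = max(0.0, mean - 2.66 * avg_mr)   # counts cannot go below zero
    ucl = mean + 2.66 * avg_mr
    return mean, lcl, ucl

def special_causes(values):
    """Indices of points outside the limits -- candidate special causes."""
    _, lcl, ucl = xmr_limits(values)
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]

injuries = [3, 2, 4, 3, 2, 3, 12, 3, 4, 2, 3, 3]  # hypothetical monthly data
print(xmr_limits(injuries))
print(special_causes(injuries))  # the month with 12 injuries signals a special cause
```

Every other month in this hypothetical series falls inside the limits: the drift between 2 and 4 injuries is the system talking, not individual workers, and judging those months up or down (reason 4 above) would be a mistake.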

Cause & Effect: The Missing Link

Milan Kundera said, "The stupidity of people comes from having an answer to everything. The wisdom of the novel comes from having a question for everything." Many managers have been trained in the certainty of cause and effect: do this and that will happen. Command-and-control theory is contingent on the assumption that cause and effect are always closely related in time and space.

Heinrich (1950) applied the idea to safety and it became convenient for management to not worry about making this distinction. Consequently, traditional accident prevention theory mandates that all employee incidents be thoroughly investigated. The unspoken objective of this exercise is to use analysis to discover who messed up and take corrective action on that party. Managers involved in such an activity justify it on the basis that they are doing something proactive to prevent a similar incident in the future. In reality, they are merely closing the door after the horses have escaped.

This process has evolved to a popular investigation method called root-cause analysis. The theory is that one can isolate the single thing that caused all the other things and, ultimately, the incident itself. Once the root cause is identified, the idea is to change something about it or eliminate it so the incident will not recur.

For a nonsystems thinker, seeking a root cause is logical and rational. But systems thinking suggests otherwise (Corcoran, 1996). A simple control chart often reveals that even when every defect is thoroughly investigated and corrective action taken to address each one, the average number and variation of defects remains constant. For example, most companies have a policy that all employee injuries must be investigated and follow this requirement 100%. Yet, their average number of accidents and amount of variation stays the same over time.
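The pattern the control chart reveals can be shown with a small simulation (an assumption for illustration, not data from the article): if injuries arise from a stable system of common causes, modeled here as a Poisson process, investigating and "correcting" each individual incident leaves the long-run average unchanged, because the system generating the incidents was never altered:

```python
import math
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def poisson(lam):
    """Poisson draw via Knuth's method (stdlib only)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def monthly_injury_counts(rate_per_month=3.0, months=60):
    """Monthly injury counts from a stable (common-cause) system."""
    return [poisson(rate_per_month) for _ in range(months)]

before = monthly_injury_counts()
after = monthly_injury_counts()  # the system itself was never changed
print(sum(before) / len(before), sum(after) / len(after))  # both hover near 3.0
```

However many individual incidents are investigated during the first 60 months, the next 60 months are drawn from the same stable system, so the average and variation persist.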

This is because root-cause analysis often does not address systemic problems. A root-cause analysis is based on the premise that cause and effect are connected instantaneously (Gano, 1999). Systems thinking shows that cause and effect are not always closely related in time and space. Think of the effect of deficient safety training. It may not be exposed for weeks, months or even years after being administered.

When one corrects a root cause, the belief is that action has been taken to prevent an accident. However, no two accidents can be exactly the same (there's that variation again), so the accident being responded to will not occur exactly the same in the future. It's also likely that something will be missed, not because of inadequate powers of observation but because the wrong theory is being applied.

Let's consider two fundamentally different kinds of problems. One type is called convergent - the problem being addressed has a definite answer. The other type is divergent - the more that people with knowledge and intelligence study the problem, the more their solutions contradict each other. When using root-cause analysis, the premise is that the focus is on a convergent problem (Senge, 1990, p. 283). However, if the issue involves a divergent problem, then one must use a heuristic approach to problem solving as opposed to an algorithmic one.

When it comes to safety, classifying a problem as convergent or divergent can be difficult. For example, suppose a safety director wants to stop lacerations. S/he may identify a solution and believe it will solve the laceration problem. The safety director sees a convergent problem with an obvious answer: Enforce the safety rule. But workers, who likely know more about the operations, might have answers that contradict what the safety director believes. They see a divergent problem and may suggest many different solutions.

This is similar to the conundrum Deming (1994) warned of regarding common and special causes. Who is to say whether the person or team conducting a root-cause analysis has reached the correct conclusion?

Deming's quality system has proven that once a stable system is in place, spending time looking for a scapegoat to blame for something caused by the system is a major mistake. As noted, this involves applying analysis, which does little to help those involved understand the system's behavior.

Most problematic outcomes of a work system stem from common cause variation. If the problem is systemic, yet workers are blamed, management will lose their respect now and in the future. In addition, nothing has been done to fix the system that allowed the mistake to occur. That means something similar can recur.

The obvious question is, if investigating every incident does little to improve safety, what should an SH&E manager do? The answer is to think about, examine and manage work systems in a new and different way.

Where Do Work Accidents Come From?

Walter Kelly's famous comic strip character Pogo said, "We have met the enemy and he is us!" After much deliberation, what's clear is that the answer to the problem is the question itself. Employee accidents are built into the work system. They are not the result of workers messing up management's plans to control things. Deming's (1994) quality theory explains that variation of common causes is primarily responsible for most accidents and things wrong in the system, not the unsafe actions of workers as Heinrich (1950) suggested. Making things more difficult is the fact that interactions of common causes are not easy to discern. That is why even simple systems can be quite complex and prove difficult to manage.

Common causes are required for the system to function. They are inherent in the system hour after hour, day after day, and affect everyone working in the system. They include but are not limited to people, materials, methods, machinery, equipment and environment. They are not good or bad in and of themselves.

However, one common characteristic they share is variation. They depend on each other in some way to get the job done, and their variation determines the quality of these interactions. According to Deming's (1995) theory of quality, common causes are responsible for 85% to 99% of both the desired and undesired outcomes of a system; that includes defects and accidents. The causes responsible for the remaining 1% to 15% of outcomes are those things not normally found in the system. They come and go without warning. These are special or assignable causes.

Variation in common causes is difficult to observe and manage, made more so because these causes may not be connected physically, yet can still have a strong influence over what people do. Picture the mental processes associated with training people to perform their jobs safely. Variation always exists in what each person learns.

This variation causes many things to occur in many different ways (e.g., how workers handle customers, follow safety procedures while operating machinery and equipment). Most employee incidents are created by the interaction of this variation. Making their detection even more difficult is the fact that once the work system is operating, the effect of common causes can be instantaneous or long term. Common causes have no starting point; they just are.

For example, variation of training interacts with the variation of people, methods, environment, machinery and equipment. If safety training is not properly designed, it could be disastrous. How safety training fits with all other common causes influences safety outcomes. If an employee does not understand some information conveyed during training, s/he will not be able to apply it on the job and could be injured as a result.

Management controls most common causes involved in safety training, including such factors as training facilities, lighting, content and delivery, room temperature and duration. These factors all have variation and could reduce the effectiveness of safety training. One common cause management does not control is the fact that humans forget topic content information over time. Who is to blame for that?

When the variation of common causes lines up properly, the quality of the safety training will likely be good. Employees will learn the important information and apply it on the job.

However, if common cause variation goes awry, the training's quality and, ultimately, its effect will be less than desired. What happens if some employees are unable to see data on a screen due to poor lighting or cannot hear the instructor because of poor acoustics? What if they do not learn valuable information as a result? Who's at fault? These are common causes controlled by management, not employees.

Common cause deficiencies in safety training occur every day, which means additional variation has been introduced to operations. When managers do not know about them or how to manage them, they tend to hold individual workers accountable for them. That's what Heinrich did. He did not have the benefit of profound knowledge.
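The way many small, individually acceptable common causes can combine into trouble can be sketched with a short simulation. The factors, ranges and threshold below are purely illustrative assumptions, not measurements from any real operation:

```python
import random

random.seed(1)

def shift_outcome():
    """One simulated shift: several common-cause factors, each varying
    a little, combine to determine whether a near miss occurs.
    All factor ranges are hypothetical, chosen only for illustration."""
    training_retention = random.uniform(0.6, 1.0)  # forgetting varies by person
    lighting = random.uniform(0.7, 1.0)            # facility conditions vary
    workload = random.uniform(0.5, 1.0)            # box sizes and volume vary
    # No single factor is "bad"; trouble comes from how they line up.
    combined = training_retention * lighting * workload
    return combined < 0.45                         # near miss if alignment is poor

near_misses = sum(shift_outcome() for _ in range(1000))
print(f"Near misses in 1,000 shifts: {near_misses}")
```

Every factor stays inside its normal range on every shift, yet near misses still occur; no worker "caused" any of them, which is the point of common-cause thinking.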

Profound Knowledge Applied to Safety on the Job

Consider this example of what happens when people are given the opportunity to apply their profound knowledge to their daily work routines. It is based on an actual operation to which most operators in any plant can relate. The system involves operators unpacking component parts coming off a conveyor line, performing an operation, then packing them in another container for the next operation. The plant has a full-time safety manager. Safety inspections are conducted and all incidents are reported, logged and investigated. In the previous 12 months, this operation has had no lost-time accidents, no other recordable accidents and no minor first-aid injuries. Figure 2 presents a basic flow chart of the operation.

This job had been in operation for more than 1 year when it was brought to the attention of an improvement team. Operators were asked what was bothering them about safety on the job. They said they were constantly hitting their shins and knees against parts stacked in boxes on the pallet adjacent to their workspace, and they had the bruises to prove it. There was only 14 in. of clearance between their machine and the pallets.

When the operators first told the foreman about the problem, he said nothing could be done because the pallets were located inside a safety line and nothing could be stacked past the line. He also said the operation would be changed soon when a new part was designed and this process would be eliminated. (This promise was made months before the team revisited the problem.) Out of frustration, workers eventually stopped complaining and the manager assumed things were okay.

In reality, the safety process remained in the state of chaos. The physical injuries were minor and the management system had no way to identify them. They are an example of the hidden factory that does not produce numbers, which is what Deming (1994; 1995) meant when he said, "The most important numbers are unknown and unknowable." These unknown figures add to the cycle of despair people experience when the fires they put out reignite.

The team was trained to apply the plan-do-study-act (PDSA) cycle. First, team members were asked to view the operators as customers since they would benefit from the project. Second, they decided they had to view the operation as a system which had processes and could be changed. Third, they had to be innovative and consider things differently to find an effective solution. They were seeking the ideal state where quality, productivity and safety would be at their highest level and maintained that way over time.

Profound Knowledge: Psychology, Systems Thinking, Variation & Knowledge

To make Deming's quality management system work, he advocated the application of profound knowledge. Profound knowledge has four areas: psychology, systems thinking, variation and knowledge itself (Figure 3) (Deming, 1994, p. 92).

Let's start with how the team used psychology, which deals with extrinsic and intrinsic motivation and how people interact with each other in different circumstances. Workers have intrinsic motivation to perform a job to the best of their ability and to do so safely. In this case, the operators were tired of bruising their legs and knees, and they had concluded management did not care. The foreman assumed the problem had gone away because operators stopped complaining.

When employees were asked to make a difference, they jumped at the chance without being offered any incentives. Operators and supervision worked together to study the system. Everyone realized that although they were a team, their individual effort would make a difference. Using the tools to define, measure, analyze/synthesize and solve the problem allowed all involved to cooperate instead of compete.

Cognitive psychology holds that extrinsic motivators do much to destroy intrinsic motivation (Pink, 2009). The supervisor's initial response to the problem was destroying the workers' intrinsic motivation to work with and respect management. Fixing the problem restored their intrinsic motivation to do a good job and do it safely. It restored their pride and joy in work.

The second part of profound knowledge was the application of systems thinking. The improvement team examined everything in the operation to make sure any changes would be evaluated to determine what else would be affected. To do this, team members created a flowchart to evaluate and understand the system. This helped them redesign the operation by moving the gear box to the opposite side of the work area, then moving the safety line a total of 11 in. into the aisle. This gave operators 25 in. of workspace compared to the original 14 in.

The team also started to understand variation and how it affected the safety of the process. Some days, the problems were not so great due to the amount and size of boxes; consequently, operators were not always bumping against them. The operators were different sizes, shapes and ages as well. Some hit the parts boxes more frequently than others. The team could not change the size and shapes of the operators, but it could change the size and shape of the work process itself. Team members also learned that variation demonstrates the saying, "The absence of a negative doesn't mean you have a positive."

Finally, consider the knowledge gained by working on this project. Knowledge requires one to make a prediction. Team members learned how to predict what would happen by gathering data about how many times people bruised their legs when they worked in the area (Figure 4). The team obtained these data by asking workers; they were not available in the existing system.

The team used the flowchart and data to convey the concerns to the maintenance department, which made the changes. Once improvements were made, the team predicted zero instances of the problem, then verified its prediction with data. The team tracked data for 6 months, during which time no injuries or near hits were recorded.
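The team's predict-then-verify step can be illustrated in a few lines. The weekly counts below are hypothetical stand-ins for the Figure 4 data, which are not reproduced in the text:

```python
# Weekly "bruised leg" counts gathered by asking workers (hypothetical
# numbers standing in for Figure 4; the real figures are not shown here).
before = [4, 6, 3, 5, 7, 4, 5, 6]  # eight weeks before the redesign
after = [0, 0, 0, 0, 0, 0]         # six months tracked after, by month

avg_before = sum(before) / len(before)
print(f"Average bumps per week before redesign: {avg_before}")  # → 5.0

# The team predicted zero instances after the change, then checked
# that prediction against the data it continued to collect.
prediction = 0
assert all(count == prediction for count in after), "prediction failed"
print("Prediction of zero instances held for the tracking period.")
```

The code is trivial on purpose: the knowledge component of profound knowledge lies in stating the prediction before the data arrive, not in the arithmetic.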

The goal was to take the safety process to an ideal state so operators could perform their task at maximum efficiency with no injuries. Workers believed nothing could be done to fix the system. They used the flowchart and the data to communicate with the safety and maintenance departments. After seeing the data, everyone agreed action was needed. The team then brainstormed and drew up a reconfigured workspace (Figure 5).

By moving some machinery, the team gained enough space so that workers would no longer bump against the pallets and machinery. The new design worked to eliminate any injuries over time. The maintenance department moved the equipment during a weekend shift at minimal cost. The team was able to achieve the ideal state (no injuries) and keep it at that level over time.

Once the team examined the entire system, it quickly realized the injuries were not the fault of operators simply not paying attention, but rather were the result of the interactions between many different things: process layout; variation in each worker's size and physical ability; and the amount of work load (common causes).

The team saw how factors that were not related directly to the work process still influenced the operation. The safety department required the line on the floor so equipment would be kept out of the aisle, but this group had no problem with moving the line to open up space.

The interaction of processes of the system created the minor bruises. Initially, in a subtle way, the supervisor in this operation shifted the problem back to the workers. He held them accountable for their own actions and appealed to them to use common sense while working. As a result, he felt no urgency to change the system. The manager required employees to work around the problem instead of trying to solve it. This is an event-focused response instead of a systems-thinking approach.

The example also shows how easy it is to shift the burden for safety away from the system to workers. Too often, managers believe that most industrial incidents are caused primarily by individuals not applying common sense or paying attention. They do not want to address the system because they know how hard it is to change.

Team members used what profound knowledge they had to consider the problem. Doing so helped them approach it with a different attitude. Instead of competing, management and workers cooperated as did the production, safety and maintenance departments. They stayed focused on fixing the system, not on affixing blame. The final result was a synergistic solution that increased ownership of safety by everyone involved which in turn created pride and joy in work. All involved were able to extricate themselves from a cycle of despair by looking at the situation through the lens of profound knowledge that they all possessed to some degree.

What Safety Managers Need to Learn & Apply

The tendency among managers and the general public is to pay more attention to safety when severe or catastrophic events occur. But what about the attention paid to hazards people face on the job when they perform mundane tasks related to assembly and process operations? In 2009, more than 4 million people in private and public sectors were injured at work (BLS, 2010). Management needs to examine the common causes behind these injuries. Investigating such incidents to fix blame or find a single root cause has not been effective. A better approach is to apply the methods used in industry to eliminate product defects in order to improve quality.

Since most incidents are caused by the system, they are divergent problems of a systemic nature. Any investigation should use a heuristic approach: learning by discovery through inquiry. That is why Deming (1994; 1995) advocated the PDSA cycle.

New Challenges Require New Skills

People responsible for ensuring that safety is functioning in daily operations must learn new ideas about leadership and management (Deming 1994; 1995). To improve the quality of safety in operations, they need to learn new tools and techniques, including the following:

1) the management philosophy of continual improvement;

2) profound knowledge and how it can be used to examine safety in work systems:

*systems thinking;

*how variation relates to incidents;

*psychology (e.g., the important role of intrinsic motivation);

*knowledge (e.g., how confident you are about your predictions);

3) basic knowledge about elementary statistics and statistical process control theory;

4) how to determine common and special causes of accidents and react appropriately;

5) heuristic problem-solving tools including process flowcharts, run charts, control charts, operational definitions, getting the customer's voice into the system's voice, the Pareto principle, brainstorming and the PDSA cycle;

6) how to develop and lead teams for problem solving and process improvement;

7) why and how the customer principle applies to safety;

8) how to make constancy of purpose for safety a reality;

9) how to restore pride and joy (intrinsic motivation) in work;

10) culture and its impact on safety.
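The control charts in item 5 give a concrete test for item 4's distinction between common and special causes. A minimal sketch of c-chart limits for incident counts, using made-up monthly figures, might look like this:

```python
import math

def c_chart_limits(counts):
    """Control limits for a c-chart (incident counts per equal-size period):
    center line c-bar and c-bar +/- 3*sqrt(c-bar), lower limit floored at 0."""
    c_bar = sum(counts) / len(counts)
    sigma = math.sqrt(c_bar)
    return max(0.0, c_bar - 3 * sigma), c_bar, c_bar + 3 * sigma

# Hypothetical monthly first-aid counts; the spike in month 9 is the
# kind of point a manager should investigate as a special cause, while
# the remaining month-to-month wobble is common-cause variation.
monthly = [3, 2, 4, 3, 5, 2, 3, 4, 14, 3, 2, 4]
lcl, center, ucl = c_chart_limits(monthly)
special = [(i + 1, c) for i, c in enumerate(monthly) if c < lcl or c > ucl]
print(f"center={center:.1f}, UCL={ucl:.1f}")
print("Special-cause months:", special)  # only month 9 falls outside the limits
```

Reacting to points inside the limits as if they were special causes (tampering, in Deming's vocabulary) adds variation instead of removing it; only the system changes in items 1 and 2 move the center line itself.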

Today's SH&E manager should be able to recognize, define, describe and improve the safety of the work systems in which customers (employees) work. This requires the application of the theory of continual improvement to replace what is being done now.

Today's SH&E manager must understand the customer-supplier link and how it affects safety. The SH&E manager should spend most of his/her time working with other managers and employees, either training them on the elements in the list or using them to improve the safety of work systems.

Conclusion

Deming said, "No organization can survive with just good people. They need people that are improving." When people see a completed continual process improvement project, such as the one described in this article, they express amazement at the seemingly simple and obvious solutions. They forget they are observing things after the PDSA cycle has been completed and changes have been made. They assume the solutions were obvious and easy to implement.

However, as anyone who has tried to fix a system, especially while it is running, can attest, fixing the system is anything but easy. Nonetheless, a company the size of Toyota implements a million ideas each year that are similar to the kind presented in this article (May, 2007). Command-and-control management does not promote such change and innovation. It cannot because its focus is on maintaining the status quo.

In the world of profound knowledge and continual improvement, the theory of accident causation is elevated to a higher level. From there, one can see how the system itself creates most accidents.

This places most of the responsibility for safety on management and prevents it from being shifted back to workers. It does not change or alleviate the responsibility of workers to employ proper safety practices or to pay attention. It provides a theory to ensure that accountability and responsibility for safety is always properly assigned.

When it comes to safety, the most senior managers should ask: What are we doing? Why are we doing it? No production system can exceed the amount of safety designed into it. Managing to meet specifications and emphasizing compliance will never deliver continual improvement.

Systems thinking helps all stakeholders understand that when a safety problem exists, management likely caused it. The lack of quality in a safety program typically stems from the absence of profound knowledge and a management philosophy that meeting safety standards and complying with safety regulations is "good enough." This is simply an outmoded theory.

Without profound knowledge, managers will continue to believe that holding workers accountable is the key to good safety management. For them, managing safety to meet specs or find fault is more important and makes more sense than fixing the system.

When traditional techniques fail, these managers resort to incentivizing workers to get them to follow procedures, use common sense or try harder. Administering a safety incentive program is much easier than changing a management paradigm.

With profound knowledge, management ascends to a higher level of understanding about what causes accidents. It gives management the ability to examine the system in a new way and teach others how to remove barriers around workers that prevent them from being safe on the job each minute of every day. Profound knowledge will improve the quality of safety efforts. It's time to upgrade the system.

[Sidebar]

IN BRIEF

*The word quality and how it applies to safety management is explored, as is its effect on the skills SH&E managers need to perform their jobs effectively.

*Systems thinking helps all stakeholders understand that when a safety problem exists, management likely caused it.

 

 

[Sidebar]

Explaining a system requires synthesis. This means instead of breaking something down and examining each part separately, one keeps the parts together and studies outcomes while the system is running.

 

 

[Sidebar]

The states of chaos and the brink of chaos revolve around the circle of incompetence. Traditional management keeps shifting between unconscious incompetence and conscious incompetence.

 

 

[Sidebar]

Employee accidents are built into the work system. They are not the result of workers messing up management's plans to control things.


 

 





[Reference]

References

Bureau of Labor Statistics (BLS). (2010). Workplace injuries and illnesses, 2009 (Table 2: Numbers of nonfatal occupational injuries and illnesses by case type and ownership, selected industries). Washington, DC: U.S. Department of Labor.

Corcoran, W.R. (1996). Don't believe in the myth of the root cause. Quality Progress.

Cole, R.E. (1999). Managing quality fads. New York: Oxford University Press.

Deming, W.E. (1994). The new economics for industry, government and education. Cambridge, MA: MIT Press.

Deming, W.E. (1995). Out of the crisis. Cambridge, MA: MIT Press.

Dobyns, L. & Crawford-Mason, C. (1991). Quality or else. Boston: Houghton-Mifflin Co.

Dobyns, L. & Crawford-Mason, C. (1994). Thinking about quality: Progress, wisdom and the Deming philosophy. New York: Times Books/Random House.

Gano, D.L. (1999). Apollo root-cause analysis. Yakima, WA: Apollonian Publications.

Heinrich, H.W. (1950). Industrial accident prevention: A scientific approach (3rd ed.). New York: McGraw Hill.

Hunt, M. (1993). The story of psychology. New York: Doubleday.

May, M. (2007). The elegant solution: Toyota's formula for innovation. New York: Free Press.

Pink, D.H. (2009). Drive: The surprising truth about what motivates us. New York: Riverhead Books.

Senge, P. (1990). The fifth discipline. New York: Doubleday.

Shina, S.G. (2002). Six sigma for electronics design and manufacturing. New York: McGraw Hill.

Wheeler, D.J. & Chambers, D.S. (1992). Understanding statistical process control. Knoxville, TN: SPC Press.

 

 

[Author Affiliation]

Thomas A. Smith is president of Mocal Inc., a management consulting firm in Lake Orion, MI. His work focuses on teaching managers and hourly employees how to apply the principles of continual improvement to safety management. Smith has more than 30 years' experience in safety management, having worked as a loss control representative and manager in the insurance industry for 16 years before starting Mocal. He has written numerous articles as well as the book System Accidents: Why Americans Are Injured at Work and What Can Be Done About It.