Corporate Pathogens

Summary
The term “pathogen” was coined by medical science around 1880 from “patho”, meaning disease, and “gen”, meaning producer, and is defined as an infectious agent that can only cause disease when the host’s resistance is low (MedicineNet, 2010). The concept was later borrowed and applied to risk management by Reason (1990) in his study of several accidents in complex high-risk technologies, where accidents occurred through the adverse conjunction of a large number of causal factors, each a vital cause but singly incapable of producing disaster. Such unfortunate factors in an organization are, in essence, organizational pathogens. This report first provides a simple outline of the concept of pathogens based on Turner (1994a) before critically evaluating various issues in the concept and its solutions through the notions of anticipating risk, being resilient to risk, high reliability theory and normal accident theory. The report subsequently examines root cause analysis as a more structured approach to combating pathogens before suggesting future directions for academic studies of risk management.
1 Introduction
The butterfly effect, based on a meteorological study originally conducted by Edward Lorenz, describes how seemingly insignificant actions or circumstances, like a single flap of a butterfly’s wings, can lead to history-altering events such as the occurrence of a hurricane (Dizikes, 2008). Oddly enough, the central theme of Turner’s concept of pathogens is rather similar. Turner (1994a) states that pathogens are elements available to contribute to a disaster (e.g. the butterfly’s wing flap) that, when not neutralized over the incubation period and later triggered by some event, could lead to disaster (e.g. the hurricane). Turner’s (1994a) paper begins by establishing that disasters are commonly caused by both social and technological events and that, despite the necessity of some form of technical control, such controls are insufficient to nullify pathogens when used in isolation. The paper further provides a full list of management failings and system properties that are presumably preconditions for disasters, and later attempts to provide a solution to curbing pathogens through improved management standards without rigid orthodoxies.
2 Pathogens
2.1 How Pathogens Could Lead to a Large-Scale Disaster
Pathogens, as Turner (1994a) notes, do not instantaneously cause catastrophe but are merely accidents waiting to occur if, and only if, the environment allows it. Turner (1994a) suggests that for pathogens to lead to a large-scale disaster, the following three factors must first exist:
i) More than one pathogen
ii) An incubation period
iii) Management or technical issues
Turner (1978) presented an example of disaster through the case of a coal mine whose management incorrectly assumed that the mine was free from methane. In this instance, one could state that a pathogen existed. This single pathogen would have been of little significance without other pathogens such as an inefficient ventilation system, pressure to sustain production and the poor practice of testing repaired electrical equipment without its cover. All these pathogens would again have been of little significance without the incubation period, during which a mixture of air and methane accumulated unnoticed. And all of these would have been of no significance had the management carried out a proper check for methane in the mine, or even noticed the signs of disaster just around the corner. As such, when all these sociotechnical pathogens interacted with one another, an explosion occurred.
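To make the “vital but singly incapable” logic concrete, the toy simulation below treats each coal-mine failing as an independent per-shift event and registers a disaster only when all of them coincide. It is a sketch only: the pathogen names and probabilities are invented for illustration, not figures from Turner (1978).

```python
import random

# Hypothetical pathogens from the coal-mine case; the per-shift
# probabilities are illustrative assumptions, not empirical figures.
PATHOGENS = {
    "methane_assumed_absent": 0.05,    # management skips the proper methane check
    "poor_ventilation": 0.10,          # ventilation fails to clear accumulating gas
    "uncovered_equipment_test": 0.08,  # repaired gear tested without its cover
    "production_pressure": 0.20,       # warning signs ignored to sustain output
}

def shift_ends_in_disaster() -> bool:
    """A disaster needs EVERY pathogen active in the same shift;
    each one alone is 'singly incapable' of causing the explosion."""
    return all(random.random() < p for p in PATHOGENS.values())

trials = 1_000_000
disasters = sum(shift_ends_in_disaster() for _ in range(trials))
print(f"disasters in {trials:,} simulated shifts: {disasters}")
# The expected rate is the product of the probabilities (about 8 per
# 100,000 shifts): individually mundane failings still combine into
# a rare but real explosion risk.
```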
2.2 Improved Standards of Management as the Way to Prevent Pathogens from Combining into Disaster
In an attempt to prevent pathogen-induced disasters, Turner (1994a) prescribed a combination of management and technical “best practices” that mirrors high-reliability organizations (HROs), targeted at preventing the last two factors mentioned above (an incubation period and management or technical issues). Turner (1994a) notes that when a management system loses touch with operational realities, it is indicative that an incubation period is building. During the incubation period, the flow of information is inevitably disturbed: gaps in important information are tolerated, information is not revealed, information reaches only members of the organization who lack the necessary understanding, and rigid hierarchies inhibit its flow (Turner, 1994a). Understanding that poor or incomplete information conditions are not removed by communicating everything, Turner (1994a) prescribes the formation of a high-reliability management in which trade-offs between mutually incompatible system demands and decisions are made quickly and, more importantly, accurately (Halpern, 1989). Turner (1994a) further promotes the establishment of a company culture that stresses safe operations and cultivates openness through a “no-blame” attitude to uncover and deal with pathogens. On the technical side, understanding that complex systems will inevitably generate “normal accidents” (Turner and Toft, 1988), Turner (1994a) suggests that these can only be eliminated through large-scale system redesign.
2.3 Critical Issues in the Concept of Pathogens, Disaster and its Solutions
One might perhaps say that the term pathogen is merely another piece of fancy academic jargon for the study of human failings that have long existed (Reason, 1990). Turner’s (1994a) concept extends Reason’s (1990) study by blaming sloppy management for the end result of pathogens, drawing largely on two theoretical positions: first, that organizations should anticipate risk (as opposed to being resilient to it), and second, high reliability theory (as opposed to normal accident theory). While the concept of pathogens, disasters and their solutions outlined above appears logically valid and is further supported by Smith and Elliot (2006), one can still remain highly critical of the concept and its solutions. Even Turner (1994a) stated in an endnote that all these solutions may be for naught, as a management comparable to an HRO remains insufficient to avoid all catastrophes and can at best only dampen the potential disasters arising from pathogens.
Academic support for Turner’s concepts has long been divided. Firstly, regarding the corporate culture that stresses a concern for safe operations as a means of combating pathogens, as presented by Turner (1994a), few academic studies with substantial empirical validation exist, making the end result vague at best (Clarke, 2000). It is crucial at this point to note that safety and reliability are separate properties, and neither implies nor requires the other (Leveson et al., 2009). But taking the HRO assumption that safety and reliability are equivalent, the best examples of a safe culture would then be HROs. Roberts (1990) notes that HROs are highly reliable organizations that, operating under highly hazardous conditions, could have caused catastrophes several thousand times over but did not.
La Porte and Consolini (1991) state that in HRO management cultures, both performance and safety are strived for. While these objectives are desirable, a delicate balance between the two must be maintained in the HRO, as the objectives are generally negatively correlated with one another. Furthermore, the fact that lives are at stake in most HROs creates an organization-wide awareness of, and caring attitude towards, the ramifications of individuals’ actions. As such, for an ordinary corporation, blindly mirroring HRO managerial practices would prove hazardous in terms of both capability and attitude. In a typical organization where employees may lack even a basic concern for their own bodily integrity (Turner, 1994a), would it be wise to attempt to mirror HROs’ delicate management of performance and safety? Would increased financial expenditure on training coal miners to HRO standards really be of any use? A simple example of the futility of such training is a fire drill in a university dormitory: rather than making students more responsive to danger signals, such false alarms, especially when conducted frequently, cause students to become complacent and to ignore the danger signs when a real fire occurs.
On the aircraft carrier, one of the classic examples of an HRO, where planes constantly take off and land, accidents do in fact regularly occur despite claims by HRO proponents that there are few or no accidents (Marais et al., 2004). Shrivastava et al. (2009) note that in such an event HRO proponents would simply state that the organization has ceased being reliable, making the reasoning behind high reliability theory appear skewed and non-falsifiable (Rosa, 2005). Assuming that this were not the case, and if the management system in HROs were perfect at curbing pathogens as Turner (1994a) claims, such minor accidents would not have occurred at all. Furthermore, in yet another example of HRO management, the space shuttle, at any time a shuttle has over 3,000 waivers that allow flight even when potential problems have yet to be resolved (Marais et al., 2004). The truth is that HROs, rather than merely combating pathogens by anticipating risks through managerial best practice, are reliable because of their resilience to disaster: managers in HROs are well aware that, as normal accident theory (NAT) suggests, accidents are inevitable in complex organizations (Perrow, 1984). And if accidents will occur even in HROs, what good would the duplicated system be when applied to a less reliable organization?
When managing risks based on NAT, an accident is taken to be any unintended and untoward event that disrupts ongoing or future outputs (Perrow, 1984). Accidents tend to form from lower-level incidents, and as such can be countered using various engineered safety procedures incorporated into systems (Perrow, 1984). This approach serves to limit disasters arising from complex interactions of systems that are not immediately visible or comprehensible (Perrow, 1994), for which a safe management culture alone would not suffice. Squier (2008) notes that rather than blindly believing in management culture, the best solution to managing pathogens is to allow front-line workers to trust and use their own common sense. In the case of the coal mine mentioned earlier, if the workers had used their own common sense to check the mine for methane rather than trusting the management, if the workers had had the common sense to repair electrical equipment with its cover on, if, and only if, the workers had placed some trust in their own common sense, those pathogens would not have led to a mine explosion. While it is true that human factors are the main cause of pathogens, humans are still one of the best mediums for uncovering pathogens, particularly in the incubation stage. Lehrer (2009) states that through subconscious learning abilities, our emotions are highly empirical. The fact that, in the mine explosion depicted in Turner (1978), some workers were cursorily checking for methane indicates that some of the workers’ instincts were right: there was methane in the air.
For several reasons, academic research on the prevention of pathogens through safe high-reliability organizational cultures or through normal accident theory has been inconclusive thus far. Turner’s (1994a) claim that management culture is the key to reliability remains unproven: thus far, HRO proponents have merely produced a list of factors assumed to be associated with reliability, and until systematic empirical evidence exists, causality between corporate culture and pathogens cannot be established (Shrivastava et al., 2009). While corporate culture and related theories remain unjustified as a means of combating pathogens, and Turner (1994a) merely provides a simple list of environments that cultivate pathogens and corporate cultures that nullify them, alternative approaches must be taken to detect and nullify pathogens.
3 Root Cause Analysis and Alternatives to Detecting Turner’s Pathogens
3.1 Root Cause Analysis and Pathogens
One of the major applicability failings of Turner’s (1994b) pathogen concept is that, while it might be easy to look back in time and note that a few factors were pathogens, it lacks a proper structure for determining the exact causal factors and as such can only be seen as general advice directed at risk management. A structured approach to studying such causal factors is root cause analysis. Root cause analysis proceeds through four stages - data collection, causal factor charting, root cause identification, and recommendation generation and implementation - to answer what, how and why a particular event occurred (Rooney and Lee, 2004). The end result of this analysis is a list of the pathogens and the incubators that allowed the disaster to occur, which can later be incorporated into the organization’s learning to prevent similar pathogens from transforming into yet another corporate crisis. While this may serve as a tool to detect pathogens and to learn risk management lessons from them, and while various methods of applying root cause analysis exist, it is insufficient for two reasons. Firstly, organizations often fail to go far enough in detecting pathogens through this analysis (Kingery et al., 2010). Secondly, and more importantly, root cause analysis only indicates pathogens after an incident has occurred.
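As a sketch of how the four stages fit together, the fragment below hand-codes a causal factor chart for the coal-mine case discussed earlier. The class names and tree layout are illustrative assumptions made for this example, not the notation of Rooney and Lee (2004).

```python
from dataclasses import dataclass, field

@dataclass
class CausalFactor:
    """One node in a causal factor chart (stage 2); names are assumptions."""
    description: str
    causes: list["CausalFactor"] = field(default_factory=list)

    def root_causes(self) -> list[str]:
        # Stage 3: factors with no deeper explanation are candidate pathogens.
        if not self.causes:
            return [self.description]
        return [r for c in self.causes for r in c.root_causes()]

# Stage 1 (data collection) would populate the chart; here it is hand-coded.
explosion = CausalFactor("mine explosion", [
    CausalFactor("methane accumulated unnoticed", [
        CausalFactor("management assumed the mine was methane-free"),
        CausalFactor("inefficient ventilation system"),
    ]),
    CausalFactor("ignition source present", [
        CausalFactor("repaired equipment tested without its cover"),
        CausalFactor("pressure to sustain production"),
    ]),
])

# Stage 4 would attach a recommendation to each root cause found here.
for pathogen in explosion.root_causes():
    print("candidate pathogen:", pathogen)
```

Note, as the text observes, that the chart can only be drawn once the explosion has already supplied its top node.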
3.2 Alternative Approaches to Determining Pathogens
Understanding that pathogens are often overlooked because they appear meaningless (Holland, 2002), McKelvey and Andriani (2010) propose the application of several scalability theories to detect pathogens or incubation periods as early as possible.
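One simple operationalization of this idea, an assumption of this report rather than McKelvey and Andriani’s (2010) own procedure, is to screen incident-size logs for the heavy, scale-free tails that complexity science associates with incubating extremes, using the maximum-likelihood exponent estimator given by Newman (2005):

```python
import math
import random

def tail_exponent(sizes: list[float], x_min: float) -> float:
    """Maximum-likelihood power-law exponent (Newman, 2005):
    alpha = 1 + n / sum(ln(x_i / x_min)) over incidents of size >= x_min."""
    tail = [s for s in sizes if s >= x_min]
    return 1.0 + len(tail) / sum(math.log(s / x_min) for s in tail)

# Hypothetical incident log: mostly near-misses, occasionally a large loss.
random.seed(42)
incident_sizes = [random.paretovariate(1.5) for _ in range(5_000)]

alpha = tail_exponent(incident_sizes, x_min=1.0)
print(f"estimated tail exponent: {alpha:.2f}")
# A small exponent (a heavy tail) suggests that small incidents are not
# independent noise but samples from a scale-free process - a possible
# signal that an incubation period is building.
```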
4 The Concept of Risk Management and Concluding Remarks
One of the main downfalls of the applicability of risk management through pathogens is that virtually anything can contribute to a disaster of great magnitude. The danger of pathogens lies not in their potential to create a disaster but in their very nature of being perceived as anything but a vital factor. Given the limited ability to pinpoint pathogens prior to the incubation stage, one can become overly paranoid and employ stringent management to avert potential disaster from every single pathogen. The paranoia, rigid control and attempts at removing pathogens may well become the root cause of the disaster; to quote a catchy line from Kung Fu Panda, one often meets his destiny on the road he takes to avoid it (IMDb, 2008). Aside from the several issues in Turner’s (1994a) views on pathogens, one must still understand that pathogens are not always found within the company; pathogens can stem from global market conditions or even government ignorance. Simple examples of such disasters are the recent global financial crisis and the September 11 terrorist attacks.
Disasters and pathogens are inevitable, as most of them are beyond the corporation’s control. As in stock markets, a crisis will occur, but this remains necessary both as a learning opportunity and as a correctional process for the weaknesses and human failings in the corporation. Furthermore, in a globalized economy where politics and weather are both chaotic and unpredictable, the key to corporate risk management is to manage pathogens as best the corporation can, based on Turner’s (1994a) suggestions of a culture of safety and high reliability, while at the same time having disaster mitigation and support plans in place.
One might further argue that this self-perceived access to information relevant to risk management can only be seen as a comforting illusion that no disaster will befall the corporation. The famous capital asset pricing model (CAPM) illustrates that, despite the best efforts of academics, historical information only goes so far when it comes to making uncertainty certain. The concepts of risk management have thus far been relatively similar. When applying Turner’s (1994a) concept, we are in essence making present decisions based on present information about historical circumstances, and attempting to construct possible futures through best practices in HROs (Turner, 1994b) that are, ironically, not even empirically established.
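To make the point concrete, the standard textbook CAPM relation (recalled here for reference rather than drawn from the works cited) prices an asset entirely from quantities estimated on historical data:

\[
E(R_i) = R_f + \beta_i \left( E(R_m) - R_f \right),
\qquad
\beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}
\]

Every input, the risk-free rate, the expected market return and the beta, is estimated from past observations, which is precisely why such models can say little about genuinely novel uncertainty.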
Turner’s (1994a) account of how a disaster can arise from a combination of pathogens is true, in the same way that the meteorological researcher Edward Lorenz indicated that a single butterfly’s wing flap could cause a hurricane. The issue, however, is that the chances of this occurring are relatively low. Unless the management is as lax as that described in Turner (1994a), it is highly improbable for a series of random unfortunate pathogens to interact, incubate and cause disaster. And if the corporation is as lax as that described by Turner (1994a), no amount of HRO concepts will prevent disaster from befalling it. As such, Turner’s (1994a) concept of pathogens and of managing them through an HRO-based management culture, like all other approaches that make risk-related uncertainties appear certain, does not make the inevitable avoidable but merely gives hope that such disasters can be minimized in both frequency and magnitude (Roberts and Bea, 2001).
Reference List
Bak, P. (1996) How Nature Works: The Science of Self-Organized Criticality. Copernicus: New York.
Bak, P., Tang, C. and Wiesenfeld, K. (1987) Self-Organized Criticality: An Explanation of the 1/f Noise. Physical Review Letters. Vol. 59, pp. 381-384.
Clarke, S. G. (2000) Safety Culture: Under-Specified and Over Rated?. International Journal of Management Reviews. Vol. 2(1), pp. 65-90.
Dizikes, P. (2008) The Meaning of the Butterfly. Why Pop Culture Loves the ‘Butterfly Effect,’ and Gets it Totally Wrong. Global Newspaper Company. Available at: http://www.boston.com/bostonglobe/ideas/articles/2008/06/08/the_meaning_of_the_butterfly/?page=full Accessed: 30 April 2010.
Frigg, R. (2003) Self-Organized Criticality - What it is and What it isn’t. Studies in History and Philosophy of Science. Vol. 40(2), pp. 229-231.
Halpern, J. J. (1989) Cognitive Factors Influencing Decision-Making in a Highly Reliable Organization. Industrial Crisis Quarterly. Vol. 3(2), pp. 143-158.
Holland, J. H. (2002) Complex Adaptive Systems and Spontaneous Emergence. In A. Q. Curzio and M. Fortis (eds.) Complexity and Industrial Clusters. Heidelberg: Germany, pp. 24-34.
IMDb (2008) Memorable Quotes from Kung Fu Panda. Available at: http://www.imdb.com/title/tt0441773/quotes Accessed: 30 April 2010.
Kingery, C., Krueger, J. and Nguyen, K. (2010) Root Cause Analysis. Available at: http://trs-new.jpl.nasa.gov/dspace/bitstream/2014/37730/1/05-2450.pdf Accessed: 30 April 2010.
La Porte, T. and Consolini, P. (1991) Working in Practice But Not in Theory: Theoretical Challenges of High-Reliability Organizations. Journal of Public Administration Research and Theory. Vol. 1, pp. 19-47.
Lehrer, J. (2009) The Decisive Moment: How the Brain Makes Up Its Mind. Canongate Books Ltd: Edinburgh.
Leveson, N., Dulac, N., Marais, K. and Carroll, J. (2009) Moving Beyond Normal Accidents and High Reliability Organizations: A Systems Approach to Safety in Complex Systems. Organization Studies. Vol. 30, pp. 227-249.
Marais, K., Dulac, N. and Leveson, N. (2004) Beyond Normal Accidents and High Reliability Organizations: The Need for an Alternative Approach to Safety in Complex Systems. MIT Working Paper.
McKelvey, B. and Andriani, P. (2010) Avoiding Extreme Risk Before it Occurs: A Complexity Science Approach to Incubation. Risk Management. Vol. 12(1), pp. 54-82.
MedicineNet (2010) Definition of Pathogen. Available at: http://www.medterms.com/script/main/art.asp?articlekey=6383 Accessed: 30 April 2010.
Newman, M. E. J. (2005) Power Laws, Pareto Distributions and Zipf’s Law. Contemporary Physics. Vol. 46(5), pp. 323-351.
Perrow, C. (1984) Normal Accidents: Living With High Risk Technologies. Basic Books: New York.
Perrow, C. (1994) The Limits of Safety: The Enhancement of a Theory of Accidents. Journal of Contingencies and Crisis Management. Vol. 2, pp. 212-220.
Reason, J. (1990) The Contribution of Latent Human Failures to the Breakdown of Complex Systems. Philosophical Transactions of the Royal Society B. Vol. 327, pp. 475-484.
Roberts, K. H. (1990) Some Characteristics of One Type of High Reliability Organization. Organization Science. Vol. 1(2), pp. 160-176.
Roberts, K. H. and Bea, R. (2001) Must Accidents Happen? Lessons from High Reliability Organizations. Academy of Management Executive. Vol. 15(3), pp. 70-79.
Rooney, J. J. and Lee, N. V. H. (2004) Root Cause Analysis for Beginners. Quality Progress. July, pp. 45-53.
Rosa, E. A. (2005) Celebrating a Citation Classic - and More. Organization and Environment. Vol. 18, pp. 229-234.
Scheinkman, J. and Woodford, M. (1994) Self-Organized Criticality and Economic Fluctuations. American Economic Review. Vol. 84(2), pp. 417-421.
Shrivastava, S., Sonpar, K., and Pazzaglia, F. (2009) Normal Accident Theory Versus High Reliability Theory: A Resolution and Call for an Open Systems Review of Accidents. Human Relations. Vol. 62(9), pp. 1357-1390.
Smith, D. and Elliot, D. (2006) Key Readings in Crisis Management: Systems and Structures for Prevention and Recovery. Routledge: New York, pp. 99-114.
Squier, S. (2008) The Sky is Falling: Risk, Safety and the Avian Flu. South Atlantic Quarterly. Vol. 107(2), pp. 387-409.
Turner, B. A. (1978) Man-Made Disasters. Wykeham Press: London.
Turner, B. A. (1994a) Causes of Disaster: Sloppy Management. British Journal of Management. Vol. 5, pp. 215-219.
Turner, B. A. (1994b) The Future of Risk Management. Journal of Contingencies and Crisis Management. Vol. 2(3), pp. 146-156.
Turner, B. A. and Toft, B. (1988) Emergency Planning for Industrial Hazards. Elsevier Applied Science: London, pp. 297-313.
West, G. B., Brown, J. H. and Enquist, B. J. (1997) A General Model for the Origin of Allometric Scaling Laws in Biology. Science. Vol. 276, pp. 122-126.
 