The human factor in healthcare

August 14, 2012

Moin Rahman wrote a very informative piece about the various factors which influence emergency healthcare.

He clearly illustrates the stages in the case study of a child who died from septic shock as a result of a small cut he received whilst playing basketball.  The case fits beautifully into the safety management system framework.

What is immediately apparent is that it reflects a common theme in society – the tendency to attribute blame to the end user regardless of the underlying reasons for an incident.  As is so often the case in other areas such as aviation, road use and military applications, ‘human error’ is commonly given as the reason an incident occurred, often with deadly consequences.  However, as Moin succinctly points out, there are very clear underlying factors that are probably more important and should be highlighted.  The root cause is the process which makes the final act, in this case the death of a child, almost inevitable.

Unfortunately, as in many fields where a profession such as medicine involves a high degree of skill or ‘art’, these root causes are too often subsumed because there is an easy scapegoat on whom to focus attention.  But what about the lack of funding, high workload and lack of resourcing common in the medical field, especially in publicly funded or profit-driven private hospitals?

As is now the case in OH&S matters, managers are increasingly being scrutinised regarding their contribution to an incident.  Under Reason’s (1990) model, as described in Moin’s article, their function is to provide the first three layers of the safety system, and one would expect that they should shoulder an appropriate proportion of the blame if something does go wrong.  Perhaps they would be less inclined to reduce services if they were held truly accountable for their actions. Perhaps the accountants who have no knowledge of the coalface and make cost-cutting decisions without first taking a reasonable view of the potential results could cop a fair share as well.
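
As a rough illustration of Reason’s (1990) layered ‘Swiss cheese’ idea, here is a minimal sketch of my own (the layer names are hypothetical and not taken from Moin’s article): a hazard only becomes an incident when the holes in every defensive layer happen to line up.

    # Minimal sketch of Reason's (1990) "Swiss cheese" model: a hazard
    # only becomes an incident when every defensive layer fails at once.
    # The layer names and failure flags below are purely illustrative.
    layers = [
        ("Organisational decisions (funding, staffing)", True),   # latent failure
        ("Supervision and workload management",          True),   # latent failure
        ("Preconditions (fatigue, time pressure)",       True),   # latent failure
        ("Frontline act (clinician at the bedside)",     True),   # active error
    ]

    def incident_occurs(layers):
        """The hazard propagates only if every layer's defence has a hole."""
        return all(failed for _, failed in layers)

    if incident_occurs(layers):
        holes = [name for name, failed in layers if failed]
        print("Incident: holes aligned in -> " + "; ".join(holes))
    else:
        print("Hazard trapped by at least one intact layer.")

The point of the sketch is the same one made above: the frontline act is only the last hole in the stack, so accountability logically extends to whoever owns the earlier layers.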

But then, how will they know what is wrong?  What is a reasonable view? A theme which I have espoused in my other blogs is that many, if not all, systems contain humans as an integral part. Therefore, a scientific, objective assessment of the human in the system should be fundamental.  And given human scientists’ expertise in this area, it should be evident that they would be best placed to undertake this role.


Autonomous vehicles – a true step forward?

August 7, 2012
 

Movement within a roundabout in a country where traffic drives on the left. Note the clockwise circulation. (Photo credit: Wikipedia)

I was reading the August RACV Royalauto magazine article ‘Smart Vehicle Safety: Removing the driver from a vehicle may be the smartest safety decision of all’ (Bruce Newton, page 66).

As a human factors researcher in the automotive field I read this article with great interest.

Everyone acknowledges that human error is the major cause of accidents. However, there is a great deal of evidence that the final human error typically occurs as the result of a systemic problem – the ‘tip of the iceberg’, to borrow a phrase.  Examples of this would be the design of vehicles with significant ‘blind spots’, or the design of roads with blind corners, which make it difficult or even impossible for drivers to perceive and respond to a dangerous scenario.  The Safety Management System taxonomy currently being adopted by many industries recognises and illustrates this fact.

The claim that automation would eliminate human error raises a philosophical question.  My contention is that automation itself, because it relies on hardware and software, will also carry inherent human error – that of the designers and programmers of the system.
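
To make that concrete, here is a deliberately buggy and entirely hypothetical fragment (the function, the figures and the unit mix-up are my own illustration, not drawn from any real vehicle system): the ‘automation’ fails not because of anything the driver does, but because a designer confused units.

    # Hypothetical following-distance routine in an automated vehicle.
    # The sensor already reports speed in m/s, but the designer assumed
    # km/h and "converted" it again, so the gap is computed for a much
    # lower speed than the car is actually travelling.

    def required_gap_m(speed_m_s, reaction_time_s=1.5, decel_m_s2=7.0):
        """Stopping distance = reaction distance + braking distance (metres)."""
        return speed_m_s * reaction_time_s + speed_m_s ** 2 / (2 * decel_m_s2)

    sensor_speed_m_s = 27.8                               # about 100 km/h

    buggy_gap = required_gap_m(sensor_speed_m_s / 3.6)    # designer's spurious conversion
    correct_gap = required_gap_m(sensor_speed_m_s)        # what is actually needed

    print(f"Gap the buggy code maintains:    {buggy_gap:5.1f} m")    # roughly 16 m
    print(f"Gap actually needed at 100 km/h: {correct_gap:5.1f} m")  # roughly 97 m

The driver never touches that code, yet the error it embodies is every bit as human as a missed mirror check; it has simply moved upstream to the design office.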

I’m sure many of us will have experienced the problems of using common computer software – I certainly would not relish having to reboot my automated vehicle whilst driving in heavy traffic.  The problems experienced by drivers using software menu systems to control vehicle functions also illustrate the human error inherent in poorly designed automated systems.

Cruise control has been a fantastic aid for driver fatigue and vigilance management in long-distance driving.  However, the new adaptive cruise control systems have been shown to induce human error: they can cause confusion when the driver is required to provide braking inputs, especially once their cognitive state, workload and other factors are taken into account.

A salient case of highly automated systems is that of the aircraft which crashed into the Atlantic and made headlines recently.  Whilst details are still being determined and investigated, it is suspected that icing of the pitot tubes, which the aircraft’s systems use to determine its airspeed, was the primary cause.  In effect, the automation in the aircraft had a perception failure.  I’m wondering whether the passengers and crew of that aircraft felt safer because of the level of automation, which appears to have hindered the pilots’ ability to manually resume control of the aircraft and possibly recover the situation.  This aspect of the crew being flight managers, rather than pilots, is a hot topic in the aviation industry at present and should inform where we go with regard to automation in the road environment.

In summary, I support automation, subject to it being designed with substantial human factors and human science input to ensure that one type of human error is not replaced with another.

What do you think?