The rise of the drones

September 17, 2012
Kawasaki KAQ-1 Drone at Castle Air Museum (Photo credit: Wikipedia)

The ever-increasing capability and availability of drone technology will have a major impact on defence and law-enforcement operations in the future.  Where the capabilities of a platform used to be defined by the assets it could deploy within its class or type (in naval operations, for example, a carrier versus an offshore combatant vessel), smaller platforms may soon be able to deploy an aerial capability which could previously only be provided by larger and far more expensive vessels.

As noted in the article, this could have far-reaching consequences for navies where the majority of tasks are routine patrol and ‘constabulary’ operations protecting sea lanes and territorial integrity.  With drone technology, a single smaller platform could perform tasks which currently require several more capable and expensive assets (for example, patrolling a shipping route subject to piracy).

A telling point made in the article is the close involvement of the human in the loop.  Similar to other advances in automation, the command and control function remains within the human domain.  As stated, “…there will inevitably be a human in the operational Observe, Orient, Decide and Act (OODA) loop – be they a remote operator, data handler or decision maker within any of a number of mission sets.”
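For readers unfamiliar with the OODA cycle, a toy Python sketch (all names and decision logic here are invented purely for illustration, not drawn from any real control system) shows where the human sits in the loop:

```python
# Illustrative sketch of a human-in-the-loop OODA cycle for a remote
# operator. Everything here is hypothetical, for illustration only.

def observe(sensors):
    """Collect raw data from the platform's sensor feeds."""
    return {"contacts": sensors()}

def orient(observation, context):
    """Fuse observations with mission context into situational awareness."""
    return {"assessment": (observation["contacts"], context)}

def human_decide(assessment):
    """The human operator remains the decision authority in the loop."""
    contacts, context = assessment["assessment"]
    return "engage" if contacts and context == "hostile" else "monitor"

def act(decision):
    """Execute the chosen action on the remote platform."""
    return f"action: {decision}"

# One pass through the loop with stubbed inputs.
result = act(human_decide(orient(observe(lambda: ["vessel"]), "hostile")))
```

The point of the sketch is simply that automation can own `observe`, `orient` and `act`, while the decide step stays with the human.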

So the design of the human-machine interface (HMI) will determine how successful this shift in technology will be.  As has been seen in Afghanistan, the ability of the remote operator of a drone aircraft to gain and maintain situational awareness and perform their mission without unintended consequences will depend greatly on the amount, type and quality of information available to them, and on the range of tasks they need to perform.  Many combat aircraft have a crew of two because the piloting and situational awareness/engagement tasks are separate and demanding, and military drone strike operations seem to reflect this crewing model.  Perhaps this model is a historical legacy which may also change in the future as drones dispense with the constraint of having to fit the aircrew into the platform.

This may cause a shift in emphasis within the Human Factors and Ergonomics discipline.  A lot of effort was traditionally expended on the physical, anthropometric aspect of the human in the loop, probably because it was so obvious.  For example, range of movement, posture and so on within a cockpit could be calculated and the 95th percentile adopted as a standard, which could then be used to determine interaction within the crew space available in the airframe.  As we all know, engineers love standards, so perhaps this aspect was pursued to the detriment of equally or possibly more important aspects of the human/machine interface.

Computer Workstation Variables

Similar adoption of standards cannot be readily applied to the much more esoteric aspects of neurological interaction with a system.  For example, although Multiple Resource Theory provides a very good framework to predict and test how an operator will interact with an HMI, it doesn’t provide the level of certainty available from physical ergonomic models.  Each aspect needs to be tested against the many variables which could arise and against the neural adaptability inherent in humans, which is what makes them so important to the command and control function.  That’s why the non-physical human interaction field is so interesting to us practitioners (and perhaps perplexes many physical scientists, who cannot seem to grasp that humans cannot be treated as a simple probability or linear contributor in their decision models).
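To make the contrast concrete, here is a toy Python sketch of the kind of prediction Multiple Resource Theory makes.  The dimension names follow Wickens’ model, but the tasks and the scoring are invented for illustration and carry no empirical weight:

```python
# Toy Wickens-style multiple-resource interference scoring.
# Two concurrent tasks interfere more the more resource channels
# they share. Task definitions below are hypothetical examples.

def conflict(task_a, task_b):
    """Count shared resource channels between two concurrent tasks.

    Each task is described on the model's dimensions, e.g.
    {"stage": "perceptual", "modality": "visual", "code": "spatial"}.
    More shared channels -> more predicted interference.
    """
    return sum(task_a[d] == task_b[d] for d in ("stage", "modality", "code"))

piloting   = {"stage": "response",   "modality": "visual",   "code": "spatial"}
target_id  = {"stage": "perceptual", "modality": "visual",   "code": "spatial"}
radio_chat = {"stage": "perceptual", "modality": "auditory", "code": "verbal"}

# Two visual/spatial tasks clash with each other far more than either
# does with voice comms - one argument for splitting the pilot and
# sensor-operator roles between two crew members.
assert conflict(piloting, target_id) > conflict(piloting, radio_chat)
```

Unlike a 95th-percentile reach envelope, the weights in a model like this have to be validated task by task, which is exactly the uncertainty described above.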

So while drone technology will enhance capability, it will only do so effectively if there is a requisite paradigm shift in how the interface is designed to incorporate the more difficult ‘neural ergonomic’ aspects described above.  Perhaps we can finally move away from the tyranny of standards, which are sometimes adopted without further thought for the equally important sensory, perceptual and cognitive aspects that we pesky Human Factors types are constantly trying to highlight to our seemingly dullard peers in other fields, sometimes with, but unfortunately many times without, success.

The human factor in healthcare

August 14, 2012

Moin Rahman wrote a very informative piece about the various factors which influence emergency healthcare.

He clearly illustrates the stages which occurred in the case study of a child who died from septic shock as a result of a small cut he received while playing basketball.  The case fits beautifully into the safety management system framework.

What is immediately apparent is that it reflects a common theme in society – the tendency to attribute blame to the end user despite the underlying reasons for an incident.  As is so often the case in other areas such as aviation, road use and military operations, ‘human error’ is commonly given as the reason an incident occurred, often with deadly consequences.  However, as Moin succinctly points out, there are very clear underlying factors that are probably more important and should be highlighted.  The root cause is the process which makes the final act, in this case the death of a child, almost inevitable.

Unfortunately, as in many fields where there is a high degree of skill or ‘art’ in a profession, such as medicine, these root causes are too often subsumed because there is an easy scapegoat on whom to focus attention.  But what about the lack of funding, high workload and lack of resourcing common in the medical field, especially in publicly funded or profit-driven private hospitals?

As is now the case in OH&S matters, managers are increasingly being scrutinised for their contribution to an incident.  Adopting Reason’s (1990) model as described in Moin’s article, their function is to provide the first three layers of the safety system, and one would expect them to shoulder an appropriate proportion of the blame if something does go wrong.  Perhaps they would be less inclined to reduce services if they were held truly accountable for their actions.  Perhaps the accountants who have no knowledge of the coalface and make cost-cutting decisions without first taking a reasonable view of the potential results could take a fair cop as well.
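A back-of-the-envelope Python sketch of Reason’s layered-defences idea (the layer names and probabilities below are invented for illustration, not taken from any real data) shows why those upstream layers matter:

```python
# Toy illustration of Reason's layered-defences ("Swiss cheese") model:
# harm requires a hole in every layer to line up, so the chance of an
# accident is roughly the product of each layer's failure probability.
# All figures below are made up purely to illustrate the arithmetic.

layers = {
    "organisational decisions": 0.10,   # e.g. funding and staffing cuts
    "supervision": 0.05,
    "preconditions (fatigue, workload)": 0.20,
    "frontline act": 0.10,              # the 'human error' that gets blamed
}

p_accident = 1.0
for name, p_fail in layers.items():
    p_accident *= p_fail

# Halving the failure rate of ANY single layer halves the overall risk,
# so a management layer is as good a lever as the frontline one.
print(f"p(accident) = {p_accident:.6f}")
```

The arithmetic is trivially simple, but it makes the accountability point: blaming only the last layer ignores three other levers that were equally decisive.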

But then, how will they know what is wrong?  What is a reasonable view?  A theme I have espoused in my other blog posts is that many, if not all, systems contain humans as an integral part.  Therefore, a scientific, objective assessment of the human in the system should be fundamental.  And given human scientists’ expertise in this area, it should be evident that they are best placed to undertake this role.