Big Brother on Mars

August 14, 2012


Great little piece re getting people to go to Mars.

Seems the engineers and physicists are really across the technical aspects of this idea.  Fantastic stuff!

But using the ‘Big Brother’ paradigm to investigate the individual and group behaviour of small teams in special environments is a bit too left field as far as I’m concerned.

Now I don’t profess to be a ‘Big Brother’ admirer (in fact, I try to keep away from it as much as possible).  I don’t find the intrusion, trivialisation and sensationalism of the human condition that occur on the program entertaining.  We see enough of that on news and current affairs shows all too often.  I would rather watch something uplifting, educational or inspirational.

However, I am interested in some of the really important experiments that have been conducted in various deserts around the world over the past several years, exploring exactly how human teams interact in these extreme environments and testing what life would really be like on Mars.  These experiments, unlike what appears to be planned for Big Brother, are scientifically and ethically based.

Remember Ethics?  That aspect which supposedly is the core of all scientific research?  And remember Ethics in Human Research?

I wonder how the ethics of the Big Brother paradigm on Mars will be handled.  I suspect it will be overlooked, as all efforts will be focused on the technical and physical science brilliance required to go to Mars.  It seems that all too often the technical sciences are either unaware of Ethics or conveniently forget it.  Perhaps training in this area should be compulsory in their undergraduate courses, as it is for the social sciences.  Maybe then there would be more understanding of this critical issue.

I wonder how the principle of informed consent would be handled for the typical ‘Big Brother’ contestant.  Imagine the breadth of discussion around the table as a Human Research Ethics Committee considered the proposal – now that would be fantastic TV viewing!


The human factor in healthcare

August 14, 2012

Moin Rahman wrote a very informative piece about the various factors which influence emergency healthcare.

He clearly illustrates the stages which occurred in the case study of a child who died from septic shock as a result of a small cut he received whilst playing basketball.  It fits beautifully into the safety management system framework.

What is immediately apparent is that it reflects a common theme in society – the tendency to attribute blame to the end user regardless of the underlying reasons for an incident.  As is so often the case in other areas such as aviation, road use and military applications, ‘human error’ is commonly given as the reason an incident occurred, often with deadly consequences.  However, as Moin succinctly points out, there are very clear underlying factors that are probably more important and should be highlighted.  The root cause is the process which makes the final act, in this case the death of a child, almost inevitable.

Unfortunately, as in many fields where there is a high degree of skill or ‘art’ in a profession such as medicine, these root causes are too often subsumed because there is an easy scapegoat on whom to focus attention.  But what about the lack of funding, high workloads and poor resourcing common in the medical field, especially in publicly funded or profit-driven private hospitals?

As is now the case in OH&S matters, managers are increasingly being scrutinised regarding their contribution to an incident.  Adopting Reason’s (1990) model as described in Moin’s article, their function is to provide the first three layers of the safety system, and one would expect them to shoulder an appropriate proportion of the blame if something does go wrong.  Perhaps they would be less inclined to reduce services if they were held truly accountable for their actions.  Perhaps the accountants who have no knowledge of the coalface and make cost-cutting decisions without first taking a reasonable view of the potential results should cop a fair share as well.
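
To make the layered idea concrete, here’s a toy sketch of Reason’s model – my own illustration with made-up numbers, not anything from Moin’s article.  Harm reaches the patient only when a hole lines up in every defensive layer, so weakening the upstream managerial layers multiplies the incident rate even though the clinician at the coalface performs exactly as well as before:

```python
from math import prod

# Purely illustrative failure probabilities for each defensive layer.
# The first three layers are managerial (funding/policy, staffing/workload,
# protocols); the last is the clinician at the coalface.
well_resourced  = [0.05, 0.05, 0.05, 0.10]
under_resourced = [0.40, 0.40, 0.30, 0.10]  # same clinician, weaker upstream layers

def incident_probability(layer_failure_rates):
    """A hazard causes harm only if it slips through every layer."""
    return prod(layer_failure_rates)

print(incident_probability(well_resourced))   # 1.25e-05
print(incident_probability(under_resourced))  # 0.0048 – nearly 400 times higher
```

Note that the ‘human error’ at the sharp end is identical in both rows; the only thing that has changed is the layers management controls.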

But then, how will they know what is wrong?  What is a reasonable view? A theme I have espoused in my other blogs is that many, if not all, systems contain humans as an integral part. Therefore, a scientific, objective assessment of the human in the system should be fundamental.  And given human scientists’ expertise in this area, it should be evident that they are best placed to undertake this role.


Are Security Questions a Joke? Or is the way the Systems are Designed the Real Joke?

August 9, 2012
Security questions (Photo credit: janetmck)

I read a great article the other day on the threat posed by the use of password security questions as a computer security issue.

I too have been quite amused by the poorly designed questions which purport to help you if you forget your login information for a site.  Frank Voisin suggests a few ideas to make them more applicable.

However, the second item jarred with me – ‘Applicable: the question should be possible to answer for as large a portion of users as possible (ideally, universal)’.

Why?

I would have thought that the primary (and only) function was to have something which was individual to the person involved.

Now I’m only a human factors scientist, but my training suggests that we ask the individual to design their own questions.  Sure, give them some advice and make the process as intuitive as possible, but give them the ability to make it as individual as they like – surely that’s the whole point!  After all, this information is only kept in a secure database to be accessed as the need arises.

Is it more that the systems designer was trying to make his or her job easier?  Sort of fitting the human to the system rather than designing it to the individual’s explicit needs?  Did this save them a few lines of code?
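
For what it’s worth, here is a minimal sketch of what supporting a user-authored question might look like – an assumed design of my own, not taken from any actual site.  The question is stored as display text and the answer is salted and hashed exactly as a password would be, so nothing about it needs to be ‘universal’, and it is hardly any code at all:

```python
import hashlib
import hmac
import os

def enroll(question: str, answer: str) -> dict:
    """Store the user's own question in plain text; hash the answer like a password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", answer.strip().lower().encode(), salt, 100_000
    )
    return {"question": question, "salt": salt, "hash": digest}

def verify(record: dict, attempt: str) -> bool:
    """Constant-time comparison of the attempted answer against the stored hash."""
    digest = hashlib.pbkdf2_hmac(
        "sha256", attempt.strip().lower().encode(), record["salt"], 100_000
    )
    return hmac.compare_digest(digest, record["hash"])

record = enroll("Name of the creek behind my first house?", "Mullum Mullum")
print(record["question"])               # shown back to the user at reset time
print(verify(record, "mullum mullum"))  # True – normalisation tolerates case
```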

Obviously some human science input into this area is sorely needed.  This raises the question of whether someone who is a computer scientist first and has cross-trained into the human interface is the best person for this role, or someone with a psychology or social science background.
My suggestion is that in this case, you really need some cross-disciplinary interaction to arrive at an optimal solution.


Was Steve Jobs the Commercial Messiah?

August 9, 2012


I recently viewed a Simon Sinek presentation on TED.

He used Apple as an example of a business which uses the why – its underlying belief system – as its primary corporate message, which then leads into the how and the what of what it does.

This brings to mind an article reflecting on the Steve Jobs legacy that I read after he passed away.  Steve insisted that the design of a product be the key factor.  This then informed the subsequent engineering process and marketing.  As Sinek notes, he did the opposite of what other technology companies typically do.

In doing so he not only made Apple a premier company but also made it a leader in its field.  If imitation is the sincerest form of flattery, the design of competing mobile phones, entertainment devices and tablets signals that Apple’s business method is the one to follow.

How does all this relate to Human Factors Science and Human Science generally?

I believe that we provide the why, based on our knowledge of the end user – the human.  Unfortunately, all too often the technical and marketing areas dictate what is produced without any input or thought about the human interface, reflecting some of Sinek’s assertions.  If the end user does not find the product intuitive or empowering to their human experience (informed by our scientific approach to this aspect), the product will probably fail commercially.

So really the challenge is not much of a challenge at all.  Get professionals to handle matters at each stage of the process.  Start with the Human Factors Scientists to provide the why, then let the engineers and technicians loose to produce what they’re good at – the how and the what.


Autonomous vehicles – a true step forward?

August 7, 2012

Movement within a roundabout in a country where traffic drives on the left. Note the clockwise circulation. (Photo credit: Wikipedia)

I was reading the August RACV Royalauto magazine article ‘Smart Vehicle Safety: Removing the driver from a vehicle may be the smartest safety decision of all’ (Bruce Newton, page 66).

As a human factors researcher in the automotive field I read this article with great interest.

Everyone acknowledges that human error is the major cause of accidents. However, there is a great deal of evidence that the final human error typically occurs as the result of a systemic problem – the ‘tip of the iceberg’ to coin a phrase.  Examples of this would be the design of vehicles with significant ‘blind spots’ or the design of roads with blind corners which make it difficult or even impossible for drivers to perceive and respond to a dangerous scenario.  The Safety Management System taxonomy currently being adopted by many industries recognises and illustrates this fact.

A philosophical point is raised by the claim that automation would eliminate human error.  My contention is that automation itself, because it relies on hardware and software, carries its own inherent human error – that of the designers and programmers of the system.

I’m sure many of us will have experienced the problems of using common computer software – I certainly would not relish having to reboot my automated vehicle whilst driving in heavy traffic.  The problems experienced by drivers using software menu systems to control vehicle functions also illustrate the human error inherent in poorly designed automated systems.

Cruise control has been a fantastic aid for driver fatigue and vigilance management in long-distance driving.  However, the new adaptive cruise control systems have been shown to induce human error, as they can cause confusion about when the driver is required to provide braking inputs, especially once the driver’s cognitive state, workload and other factors are taken into account.
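
As a sketch of the design point – with invented names and thresholds, not any manufacturer’s actual logic – the handover moment can at least be made explicit rather than left for a loaded driver to infer.  The controller knows its own braking authority, so it can announce the instant a situation exceeds it:

```python
MAX_SYSTEM_DECEL = 3.0  # m/s^2 – assumed limit of the system's braking authority

def required_decel(own_speed: float, lead_speed: float, gap_m: float) -> float:
    """Constant-deceleration estimate needed to avoid closing the gap (SI units)."""
    closing = own_speed - lead_speed
    if closing <= 0 or gap_m <= 0:
        return 0.0
    return closing ** 2 / (2 * gap_m)

def acc_step(own_speed: float, lead_speed: float, gap_m: float) -> str:
    need = required_decel(own_speed, lead_speed, gap_m)
    if need > MAX_SYSTEM_DECEL:
        # Explicit, unambiguous handover – audible and visual, not a silent fade-out.
        return "TAKE OVER: BRAKE NOW"
    return f"system braking at {need:.1f} m/s^2"

print(acc_step(30.0, 15.0, 80.0))  # mild closure -> 'system braking at 1.4 m/s^2'
print(acc_step(30.0, 15.0, 35.0))  # beyond system authority -> 'TAKE OVER: BRAKE NOW'
```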

A salient case of highly automated systems is that of the aircraft which crashed into the Atlantic and made headlines recently.  Whilst details are still being determined and investigated, it is suspected that icing of the pitot tubes, which the aircraft’s systems use to determine airspeed, was the primary cause.  In effect, the automation in the aircraft had a perception failure.  I wonder whether the passengers and crew of that aircraft felt safer because of the level of automation, which evidently hindered the pilots’ ability to resume manual control of the aircraft and possibly recover the situation.  This aspect of the crew being flight managers rather than pilots is a hot topic in the aviation industry at present and should inform where we go with regard to automation in the road environment.

In summary, I support automation, subject to it being designed with substantial human factors and human science input to ensure that one type of human error is not replaced with another.

What do you think?