What are ‘the human factor’, ‘factors of humans’, ‘factors affecting humans’ and ‘socio-technical system interaction’?

Four Kinds of ‘Human Factors’: 1. The Human Factor

Posted on 11/08/2017 by stevenshorrock

Over the last decade or so, the term ‘human factors’ has gained currency with an increasing range of people, professions, organisations and industries. It is a significant development, bringing what might seem like a niche discipline into the open, to a wider set of stakeholders. But as with any such development, there are inevitable differences in the meanings that people attach to the term, the mindsets that they bring or develop, and their communication with others. It is useful to ask, then: what kind of ‘human factors’ are we talking about? At least four kinds seem to exist in our minds, each with somewhat different meanings and – perhaps – implications. These will be outlined in this short blog post series, beginning with the first: The Human Factor.


Posts in this series:

  1. The Human Factor

  2. Factors of Humans

  3. Factors Affecting Humans

  4. Socio-Technical System Interaction

What is it?

The first kind of human factors is the most colloquial: ‘the human factor’. Human-factors-as-the-human-factor seems to enter discussions about human and system performance, usually in relation to unwanted events such as accidents and – increasingly – cybersecurity risks and breaches. It is rarely defined explicitly.

Who uses it?

As a colloquial term, ‘the human factor’ seems to be most often used by those with an applied interest in (their own or others’) performance. The term was the title of an early text on human factors in aviation (see David Beaty’s ‘The Human Factor in Aircraft Accidents’, originally published in 1969, now ‘The Naked Pilot: The Human Factor in Aircraft Accidents’). It can be found in magazine articles concerning human performance by aviators (e.g., this series by Jay Hopkins in Flying magazine) and information security specialists (e.g., Kaspersky, Proofpoint). Journalists tend to use the term in a vague way to refer to any adverse human involvement. Aside from occasional books and reports on human factors (e.g., Kim Vicente’s excellent ‘The Human Factor: Revolutionizing the Way People Live with Technology’), the term is rarely used by human factors specialists.

The Good

In a sense, ‘the human factor’ is more intuitively appealing than the term ‘human factors’, which implies plurality. It seems to point to something concrete – a person, a human being with intention and agency. And yet it also hints at something vague – mystery, ‘human nature’. Human-factors-as-the-human-factor might therefore be seen in the frame of humanistic psychology, reminding us that:

  1. Human beings, as human, supersede the sum of their parts. They cannot be reduced to components.

  2. Human beings have their existence in a uniquely human context, as well as in a cosmic ecology.

  3. Human beings are aware and aware of being aware – i.e., they are conscious. Human consciousness always includes an awareness of oneself in the context of other people.

  4. Human beings have some choice and, with that, responsibility.

  5. Human beings are intentional, aim at goals, are aware that they cause future events, and seek meaning, value and creativity. (Association for Humanistic Psychology in Britain)

The individual, and her life and experience, cannot be reduced to ‘factors’ in the same way that a machine can be reduced to its parts, nor isolated from her context. The individual cannot be fully generalised, explained or predicted, since every person is quite different, even if we have broadly similar capabilities, limitations, and needs. Importantly, we also have responsibility, borne out of our goals, intentions and choices. This responsibility is something that professional human factors scientists and practitioners are often nervous about approaching, and they may deploy reductionism, externalisation or obfuscation to put responsibility ‘in context’ (this is sometimes at odds with others such as front-line practitioners, patients and their families, management and the judiciary, who perceive these narratives as absolving or sidestepping individual responsibility; see also just culture regulation).

Unfortunately, these possible upsides to human-factors-as-the-human-factor are more imaginary than real, since the term itself is rarely used in this way in practice.

The Bad

In use, ‘the human factor’ is loaded with simplistic and negative connotations about people, almost always people at the sharp end. ‘The human factor’ usually frames the person as a source of trouble – an unreliable and unpredictable element of an otherwise (imagined to be) well-designed and well-managed system. It comes with a suggestion that safety problems – and causes of accidents – can be located in individuals; safety (or rather, unsafety) is an individual behaviour issue. For example, Kaspersky’s blogpost ‘The Human Factor in IT Security: How Employees are Making Businesses Vulnerable from Within’ repeatedly uses adjectives such as ‘irresponsible’ and ‘careless’ to describe users. That is not to say that people are never careless or irresponsible, since we observe countless examples in everyday life, and the courts deal with many in judicial proceedings, but the question is whether this is a useful way to frame human interaction with systems in a work context. In the press, ‘the human factor’ is often used as a catch-all ‘explanation’ for accidents and breaches. It is a throwaway cause.

The human-factors-as-the-human-factor mindset tends to generate a behaviour modification solution to reduce mistakes – psychology, not ergonomics – via fear (threats of punishment or sanctions), surveillance (monitoring and supervision), or awareness raising and training (information campaigns, posters, training). The mindset may lead to sacking perceived ‘bad apples’, or removing people altogether (by automating particular functions). In some cases, each of these is an appropriate response (especially training, for issues requiring knowledge and skill), but they will tend not to be effective (or fair) without considering the system as a whole, including the design of artefacts, equipment, tasks, jobs and environments.

Four Kinds of ‘Human Factors’: 2. Factors of Humans

In the first post in this series, I reflected on the popularisation of the term ‘human factors’ and discussion about the topic. This has brought into focus various differences in the meanings ascribed to ‘human factors’, both within and outside the discipline and profession itself. The first post explored human factors as ‘the human factor’. This second post explores another kind of human factors: Factors of Humans.

What is it?

This kind of human factors focuses primarily on human characteristics, understood largely via reductionism. Factors of humans include, for example:

  • cognitive functions (such as attention, detection, perception, memory, judgement and reasoning (including heuristics and biases), decision making – each of these is further divided into sub-categories)

  • cognitive systems (such as Kahneman’s dual process theory, or System 1 and System 2)

  • types of performance (such as Rasmussen’s skill-based, rule-based, and knowledge-based performance)

  • error types (such as Reason’s slips, lapses, and mistakes, and hundreds of other taxonomies, including my own)

  • physical functions and qualities (such as strength, speed, accuracy, balance and reach)

  • behaviours and skills (such as situation awareness, decision making, teamwork, and other ‘non-technical skills’)

  • learning domains (such as Bloom’s learning taxonomy) and

  • physical, cognitive and emotional states (such as stress and fatigue).

These factors of humans may be seen as limitations and capabilities. As with human-factors-as-the-human-factor, the main emphasis of human-factors-as-factors-of-humans is on the human; but general constituent human characteristics, not the person as an individual. The factors of humans approach acts like a prism, splitting human experience into conceptual categories.

This kind of human factors is emphasised in a definition provided by human factors pioneer Alphonse Chapanis (1991):

“Human Factors is a body of knowledge about human abilities, human limitations, and other human characteristics that are relevant to design.”

But Chapanis went on to say that “Human factors engineering is the application of human factors information to the design of tools, machines, systems, tasks, jobs, and environments for safe, comfortable, and effective human use.” He therefore distinguished between ‘human factors’ and ‘human factors engineering’. The two would probably be indivisible to most human factors practitioners today (certainly those who identify as ‘ergonomists’, i.e., designers), and knowledge and application come together as parts of many definitions of human factors (or ergonomics). Human factors is interested in these factors of humans, then, to the extent that they are relevant to design, at least in theory (in practice, the sheer volume of literature on these factors suggests otherwise!).

Who uses it?

Factors of humans have been researched extensively, by psychologists (especially cognitive psychologists, and increasingly neuropsychologists), physiologists and anatomists, and ergonomists/human factors specialists. Human abilities, limitations and characteristics are therefore the emphasis of many academic books and scientific articles concerning human performance, applied cognitive psychology, cognitive neuropsychology, and human factors/ergonomics, and are the standard fare of such courses.

This kind of human factors is also of interest to front-line professionals in non-technical skills training, where skilled performance is seen through the lenses of decision making, situational awareness, teamwork, and communication.

The Good

Factors of humans – abilities, limitations, and other characteristics – must be understood, at least at a basic level, for effective design and management. Decades of scientific research have produced a plethora of empirical data and theories on factors of humans, along with a sizeable corpus of measures. Arguably, the literature is far more voluminous for this kind of human factors than for any other kind. We therefore have a sophisticated understanding of these factors. Much is now known from psychology and related disciplines (including human factors/ergonomics) about sustained attention (vigilance), divided attention, selective attention, working memory, long-term memory, skilled performance, ‘human error’, fatigue, stress, and so on. Much is also known about physiological and physical characteristics. These are relevant to the way we think about, design, perform, and talk about, record or describe human work: work-as-imagined, work-as-prescribed, work-as-done and work-as-disclosed. Various design guidelines (such as the FAA Human Factors Design Standard, HF-STD-001) have been produced on the basis of this research, along with hundreds of HF/E methods.

This kind of human factors may also help people, such as front-line professionals, to understand their own performance in terms of inherent human limitations. While humanistic psychology emphasises the whole person, and resists reducing the person into parts, cognitive psychology emphasises functions and processes, and resists seeing the whole person. So while reductionism often comes in for attack among humanistic and systems practitioners, knowledge of limits to sustained attention, memory, judgement, and so on, may be helpful to better understand failure, alleviating the embarrassment or shame that often comes with so-called ‘human error’. Knowledge of social and cultural resistance to speaking up can help to bring barriers out into the open for discussion and resolution. So perhaps reductionism can help to demystify experience, help to manage problems by going down and in to our cognitive and physical make-up, and help to reduce the stigma of failure.

The Bad

Focusing on human abilities, human limitations, and other human characteristics, at the expense of the whole person, the context, and system interactions, comes with several problems, but only a few will be outlined here.

One problem relates to the descriptions and understandings that emerge from the reductive ‘factors of humans’ approach. Conceptually, human experience (e.g., of performance) is understood through one or more conceptual lenses (e.g., situation awareness, mental workload), which offer only partial and fragmented reflections of experience. Furthermore, measurement relating to these concepts often favours quantification. So one’s experience may be reduced to workload, which is reduced further to a number on a 10-point scale. The result is a fragmented, partial and quantified account of experience, and these numbers have special power in decision making. However, as humanistic psychology and systems thinking remind us, the whole is greater than the sum of its parts; measures of parts (such as cognitive functions, which are not objectively identifiable) may be misleading, and will not add up to form a good understanding of the whole. Understanding the person’s experience is likely to require qualitative approaches, which may be more difficult to gain, more difficult to publish, and more difficult to digest by decision-makers.
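
To make this kind of reduction concrete, here is a hypothetical sketch of how several rating-scale responses might be collapsed into a single workload number. The dimension names and weights below are invented for illustration; they are not those of any published instrument such as the NASA-TLX.

```python
# A hypothetical illustration of the reduction described above: several
# rating-scale responses collapsed into one 'workload' number.
# The dimensions and weights are invented, not a published instrument.

def overall_workload(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of subscale ratings on a 0-10 scale."""
    total_weight = sum(weights.values())
    return sum(ratings[k] * weights[k] for k in ratings) / total_weight

if __name__ == "__main__":
    ratings = {"mental demand": 7, "time pressure": 8, "effort": 6, "frustration": 4}
    weights = {"mental demand": 3, "time pressure": 2, "effort": 2, "frustration": 1}
    print(round(overall_workload(ratings, weights), 1))  # 6.6
```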

Related to this, analytical and conceptual accounts of performance with respect to factors of humans can seem alien to those who actually do the work. This was pointed out to me by an air traffic controller friend, who said that the concepts and language of such human factors descriptions do not match her way of thinking about her work. Human factors has inherited and integrated some of the language of cognitive psychology (which, for instance, talks about ‘encoding, storing and retrieving’, instead of ‘remembering’; cognitive neuropsychology obfuscates further still). So while reductionism may help to demystify performance issues, this starts to backfire, and the language in use can mystify, leaving the person feeling that their experience has been described in an unnatural and decontextualised way. Going further, the factors of humans approach is often used to feed databases of incident data. ‘Human errors’ are analysed, decomposed, and entered into databases to be displayed as graphs. In the end, there is little trace of the person’s lived experience, as their understandings are reduced to an analytical melting pot.

By fragmenting performance problems down to cognitive functions (e.g., attention, decision making), systems (e.g., System 1), error types (e.g., slips, mistakes), etc., this kind of human factors struggles with questions of responsibility. At what point does performance become unacceptable (e.g., negligent)? On the one hand, many human factors specialists would avoid this question, arguing that it is a matter for management, professional associations, and the judicial system. On the other hand, many human factors specialists use terms such as ‘violation’ (often further divided into sub-types: situational violations, routine violations, etc.) to categorise decisions post hoc. (Various algorithms are available to assist with this process.) To those caught up in situations involving harm (e.g., practitioners, patients, families), this kind of analysis, reductionism and labelling may be seen as sidestepping or paying lip service to issues of responsibility.
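
To give a flavour of what such post-hoc classification algorithms look like, here is a deliberately simplified, hypothetical sketch. The questions and categories are reduced to a handful of the distinctions mentioned above (slips and lapses, mistakes, situational and routine violations); real decision aids ask many more questions and are applied with far more care.

```python
# A deliberately simplified, hypothetical sketch of a post-hoc
# classification decision aid for unsafe acts. Real tools ask many more
# questions; the categories mirror the common slip/lapse/mistake/violation
# split mentioned in the text.

def classify_act(intended_action: bool, intended_outcome: bool,
                 rule_knowingly_broken: bool, routine: bool) -> str:
    """Return a coarse label for an unsafe act."""
    if not intended_action:
        # Action not carried out as planned: an execution failure.
        return "slip or lapse"
    if rule_knowingly_broken:
        # Deliberate deviation from a rule or procedure.
        return "routine violation" if routine else "situational violation"
    if not intended_outcome:
        # Action went as planned, but the plan itself was inadequate.
        return "mistake"
    return "no error identified"

if __name__ == "__main__":
    print(classify_act(intended_action=True, intended_outcome=False,
                       rule_knowingly_broken=False, routine=False))  # -> mistake
```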

While fundamental knowledge on factors of humans is critical to understanding, influencing and designing for performance, reductionist (including cognitivist) approaches fail to shed much light on context. By going down and in to physical and cognitive architecture, but not up and out to context and the complex human-in-system interactions, this kind of human factors fails to understand performance in context, including the physical, ambient, informational, temporal, social, organisational, legal and cultural influences on performance. This problem stems partly from the experimental paradigm that is the foundation for most of the fundamental ‘factors of humans’ knowledge. This deliberately strips away most of the richness and messiness of real context, and also tends to isolate factors from one another.

Because this kind of human factors does not understand performance in context, it may fail to deal with performance problems effectively or sustainably. For instance, simple design patterns (general reusable solutions to commonly occurring problems) are often used to counter specific cognitive limitations. These can backfire when designed artefacts are used in natural environments, and the design pattern is seen as a hindrance to be overcome or bypassed (problems with the design and implementation of checklists in hospitals are an example). Another example may be found in so-called ‘human factors training’ (which, often, should be called ‘human performance training’). This aims to improve human performance by improving knowledge and skills concerning human cognitive, social and physical limitations and capabilities. While in some areas this has had success (e.g., teamwork), in others we remain severely constrained by our limited abilities to stretch and mitigate our native capacities and overcome system conditions (e.g., staffing constraints). Of course, in the absence of design change, training may also be the only feasible option.

A final issue worth mentioning here is that, more than any other kind of human factors, the ‘factors of humans’ kind has arguably been over-researched. Factors of humans are relatively straightforward to measure in laboratory settings, and related research seems to attract funding and journal publications. Accordingly, there are many thousands of research papers on factors of humans. The relative impact of this huge body of research on the design of real systems in real industry (e.g., road transport, healthcare, maritime) is dubious, but that is another discussion for another time.

References

Chapanis, A. (1991). To communicate the human factors message, you have to know what the message is and how to communicate it. Bulletin of the Human Factors Society, 34, 1-4.

Four Kinds of ‘Human Factors’: 3. Factors Affecting Humans

In the first post in this series, I reflected on the popularisation of the term ‘human factors’ and discussion about the topic. This has brought into focus various differences in the meanings ascribed to ‘human factors’, both within and outside the discipline and profession itself. The first post explored human factors as ‘the human factor’. The second post explored human factors as ‘factors of humans’. This third post explores another kind of human factors: Factors Affecting Humans.

What is it?

This kind of ‘human factors’ turns to the factors – external and internal to humans – that affect human performance: equipment, procedures, supervision, training, culture, as well as aspects of human nature, such as our capabilities and limitations. Factors affecting humans tend to include:

  • aspects of planned organisational activity (e.g., supervision, training, regulation, handover, communication, scheduling)

  • organisational artefacts (e.g., equipment, procedures, policy)

  • emergent aspects of organisations and groups (e.g., culture, workload, trust, teamwork, relationships)

  • aspects of the designed environment (e.g., airport layout, airspace design, hospital design, signage, lighting)

  • aspects of the natural environment (e.g., weather, terrain, flora, fauna)

  • aspects of transient situations (e.g., emergencies, blockages, delays, congestion, temporary activities)

  • aspects of work and job design (e.g., pacing, timing, sequencing, variety, rostering)

  • aspects of stakeholders (e.g., language, role)

  • aspects of human functions, qualities and states that affect performance (e.g.,

    • cognitive functions such as attention, detection, perception, memory, judgement and reasoning, decision making, motor control, speech;

    • physical functions and qualities such as strength, speed, accuracy, balance and reach;

    • physical, cognitive and emotional states such as stress and fatigue).

The following well-known definition from the UK Health and Safety Executive (1999) seems to emphasise the ‘factors that affect humans’ kind of human factors:

“Human factors refer to environmental, organisational and job factors, and human and individual characteristics, which influence behaviour at work in a way which can affect health and safety” (Health and Safety Executive, Reducing error and influencing behaviour HSG48)

Who uses it?

This kind of human factors is the most traditional in human factors guidance and courses, and so is familiar to human factors specialists. It naturally fits courses on human factors (as modules), texts on human factors (as chapters), and studies on human factors (which might consider specific factors as independent variables).

This kind of human factors is also of interest to safety specialists, who might use taxonomies to classify ‘causal factors’ in incidents and accidents, or select ‘performance shaping factors’ as part of human reliability assessments.
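
As an illustration of the ‘performance shaping factors’ style of reasoning in human reliability assessment, the sketch below scales a nominal human error probability by a set of PSF multipliers, loosely in the manner of methods such as SPAR-H. The factor values in the example are illustrative assumptions, not figures from any published table.

```python
# A minimal sketch of a performance-shaping-factor (PSF) adjustment to a
# nominal human error probability (HEP), loosely in the style of SPAR-H.
# The multipliers used in the example are illustrative assumptions only.

def adjusted_hep(nominal_hep: float, psf_multipliers: list[float]) -> float:
    """Scale a nominal HEP by the product of PSF multipliers,
    bounded so the result never exceeds 1.0."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    raw = nominal_hep * composite
    # Bound the result (SPAR-H applies a similar correction when several
    # PSFs act in the negative direction).
    return raw / (nominal_hep * (composite - 1.0) + 1.0)

if __name__ == "__main__":
    # Illustrative values: nominal HEP 0.001, degraded by time pressure (x10)
    # and a poor interface (x5), improved by strong procedures (x0.5).
    print(round(adjusted_hep(0.001, [10, 5, 0.5]), 4))  # ~0.0244
```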

It also suits the way that organisations tend to be organised (functionally, e.g. training, procedures, engineering) and so tends to make natural sense in an organisational context; it is obvious that the various factors affect behaviour. It is just not obvious how.

The Good

Some of the positive aspects of this kind of human factors are shared with the ‘factors of humans‘ kind. One is a great body of knowledge to help understand, classify and predict or imagine these effects. The design of artefacts such as equipment, tools and procedures, as well as of tasks, jobs and work systems, affects human performance in different ways. This understanding can therefore be applied to and integrated in the design of equipment, procedures, tools, regulations, roles, jobs, management systems, and so on.

The ‘factors affecting humans’ kind of human factors is also relatively easy to understand at a basic level. Most people seem to know that the design of artefacts (even simple ones, such as door handles, or more complicated ones, such as self-assembly furniture instructions) affects our behaviour. The details of the effects are not obvious, but the existence of some effect is fairly obvious.

While the ‘factors of humans’ perspective goes down and in to the cognitive, emotional and physical aspects of human nature, the ‘factors affecting humans’ perspective extends also up and out into the system, environment and context of work. This acknowledges the influence of factors outside of humans on human performance, and therefore helps to explain it. ‘Human error’ is not usually ‘simple carelessness’, but a symptom of various aspects of the work situation. This acknowledges an important reality for any of us; our performance is subject to many factors, and many of these are beyond our direct control.

This kind of human factors therefore more clearly points to design as a primary means to influence performance and wellbeing, as well as instruction, training and supervision. The view of factors affecting humans also mirrors to some degree the way that organisations are designed and operated, as functional specialisms (e.g., training, procedures, design).

Together, ‘factors affecting humans’ and ‘factors of humans’ comprise what many would think of as ‘human factors’, especially staff and managers in organisations.

The Bad

Many of the downsides of the ‘factors of humans’ perspective on human factors are addressed by the ‘factors affecting humans’ perspective. But some other issues remain. One concerns the difficulty in understanding the influence of multiple, interacting factors affecting humans in the real work context. How do factors affect performance when those factors interact dynamically and in concert in the real environment, which is probably far messier than imagined?

In trying to understand performance, we tend to dislike the mess of complexity and instead prefer single-factor explanations. This can be seen in organisations, the media, the judiciary, and even in science, which is one facet of human factors. But the effects of multiple interacting factors in messy environments are hard to extrapolate from experiments. Experiments tend to focus on each variable of interest (e.g., a new interface or shift system or a checklist; ‘independent variables’) while controlling, removing or ignoring myriad other factors that are relevant to work-as-done (e.g., readiness for change, culture, supervision, staffing pressures, unusual demand, history of similar interventions, resources available for implementation; ‘confounding variables’), in order to measure things of interest (e.g., time, satisfaction, errors; ‘dependent variables’). Even where we go beyond single-factor explanations, the effects of multiple, interacting factors affecting humans in real environments are hard to understand from reading about these factors or from factorial tools such as taxonomic safety databases. They are also hard or impossible to estimate with predictive tools, such as human reliability assessments or safety risk assessments.

A reductionist, factorial approach can hide system-wide patterns of influence and emergent effects. Factors can appear disconnected, when in reality they are interconnected. Influence appears linear, when it is non-linear. Effects appear resultant, when they are emergent. Wholes are split into parts. Information is analysed but not synthesised. Hence, when a change is introduced, in the full richness of the real environment, surprises are encountered. The air traffic control flight data interface is fine in standard conditions but not for complex re-routings at short notice under high traffic load. The new individual roster system is good for staff availability but adversely affects teamwork. The checklist is completed but before the task steps have actually been completed. Interventions on factors affecting humans are designed and implemented but don’t work as imagined; they are less effective than predicted, have unintended consequences or create new unforeseen influences, changing the context in unexpected ways.

The direction of influence of ‘factors affecting humans’ is often assumed to be one-way (linear), as per the HSE definition above. But people also influence these influencing ‘factors’ in the context of a sociotechnical system. So the design of a shift system influences behaviour, but people also influence shift patterns (e.g., via shift swapping). Interfaces influence people, but people use interfaces outside of design intent. Feedback loops are hard to see with a fragmented and linear approach to human factors. These might sound like rather abstract or theoretical problems, but the examples above are just the first real ones that come to mind; there are many cases of interventions that fail in large part because factors are considered in a non-systemic and decontextualised way that is too far from the messy reality of work.

Additionally, when applied in a safety management context, the ‘factors affecting humans’ perspective is almost entirely negative. From a safety perspective, the positive influence of ‘factors affecting humans’ (and indeed of ‘factors of humans’ and ‘the human factor’) is mostly ignored. What is it that makes people and organisations perform effectively to ensure that things go right? Safety management has little idea. Only the contribution of ‘factors’ to unwanted outcomes (real or potential) is usually considered. This can give human factors in safety a negative tone, reducing human activity to ‘causal factors’. Human factors (or ergonomics) is really about something much broader: improving performance and wellbeing, especially by design.

There can be something unintuitive and distancing about human factors viewed from a reductionist, factorial point of view. Perhaps it is partly that the narrative of real experience is lost amid the analysis. Consider textbooks, the initial source material for anyone learning human factors (or ergonomics) as a discipline. Relatively few human factors texts are organised around narrative. Instead, they are usually organised around ‘factors’. One of the rare examples of the narrative approach is Set Phasers on Stun by Steven Casey, while an example of the factorial approach is Human Performance: Cognition, Stress and Individual Differences, by Gerald Matthews, Stephen Westerman and Rob Stammers. Both are excellent in their own ways, but the latter is the default (and happens to be far less interesting to the wider audience). Rich narrative tries to recreate or bring to life lived experience and context, while a factorial or analytical approach deconstructs experience and context into concepts. (Again, an example is incident databases, which analyse factors extracted from multiple situations, partly with the intention of understanding factor prevalence across scale.)

Finally, but related to all of the above, this kind of human factors struggles with questions of responsibility (as with the ‘factors of humans‘ perspective). At what point does performance become unacceptable (e.g., negligent)? How do we locate responsibility and accountability amid the ‘factors’? And if top management is responsible for those ‘factors’, then what happens when they move on? The ‘human factor‘ perspective, while much misused, at least seems to acknowledge that human beings have some choice and, with that, responsibility. To those affected by situations involving harm (e.g., harmed patients and families, local communities affected by chemical exposure and oil spills), deconstructing the influences on behaviour, in an attempt to explain, may be seen as excusing unacceptable behaviour, sidestepping issues of responsibility and turning a blind eye to the dark sides of organisations, and even human nature.

Four Kinds of ‘Human Factors’: 4. Socio-Technical System Interaction

This is the fourth in a series of posts on different ‘kinds’ of human factors, as understood both within and outside the discipline and profession of human factors and ergonomics itself. The first post explored human factors as ‘the human factor’. The second post explored human factors as ‘factors of humans’. The third post explored human factors as ‘factors affecting humans’. This post explores a fourth kind of human factors: Socio-technical system interaction.

What is it?

This kind of ‘human factors’ aims to understand and design or influence purposive interaction between people and all other elements of socio-technical systems, concrete and abstract. For industrial applications, a good shorthand for this is ‘work’. The following definition, from the International Ergonomics Association, and adopted by the Human Factors and Ergonomics Society and Chartered Institute of Ergonomics and Human Factors and other societies and associations, characterises this view of human factors.

“Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.”

Note from this definition that ‘human factors’ is formally indistinguishable from ‘ergonomics’. While some people attempt to make a distinction between the terms, the relevant professional societies and associations do not, and typically instead recognise that the two terms have different origins (in the US and Europe, respectively). The terms are often used interchangeably by HF/E specialists, akin to ‘counselling’ and ‘psychotherapy’, with scientific journals (e.g., Ergonomics, Human Factors, Applied Ergonomics) using one term or the other but with the same scope. (The equivalence of the terms is sometimes a surprise to those who are not formally trained in human factors and ergonomics, especially those from anglophone backgrounds, since many languages use translations of ‘ergonomics’ (ergonomia, ergonomie, ergonomija, eirgeanamaíocht, ergonoomika, ergonomika…).)

It is relevant that ‘ergonomics’ derives from the Greek ergon (‘work’) and nomos (‘laws’). There are, in fact, very few accepted laws in human factors/ergonomics (aside from familiar laws such as Fitts’ Law and Hick’s Law), but many would acknowledge and agree on certain ‘principles’. It is also relevant that the origin of human factors and ergonomics was in the study of interaction between people and equipment, and how the design of this equipment influenced performance. Notably, Fitts and Jones (1947) analysed ‘pilot error’ accidents and found that these were really symptoms of interaction with aircraft cockpit design features. For instance, flap and gear controls looked and felt alike and were co-located (a problem that has been largely solved in cockpits but remains in pharmacy in terms of medicines).
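
For reference, the two laws mentioned above can be stated in their common textbook forms, as in the sketch below. The constants a and b are empirical and specific to the device, task and population, so the numbers in the example are placeholders rather than published values.

```python
import math

# Common textbook forms of two of the few widely cited 'laws' in HF/E.
# The constants a and b are empirical; the example values are placeholders.

def fitts_movement_time(a: float, b: float, distance: float, width: float) -> float:
    """Fitts' Law: movement time grows with the index of difficulty,
    log2(2D / W), for a target of width W at distance D."""
    return a + b * math.log2(2 * distance / width)

def hick_reaction_time(a: float, b: float, n_choices: int) -> float:
    """Hick's Law (Hick-Hyman): choice reaction time grows with
    log2(n + 1) for n equally likely alternatives."""
    return a + b * math.log2(n_choices + 1)

if __name__ == "__main__":
    print(fitts_movement_time(a=0.1, b=0.15, distance=200, width=20))  # seconds
    print(hick_reaction_time(a=0.2, b=0.15, n_choices=8))              # seconds
```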

The beginnings of human factors and ergonomics, then, focused not on the human or the factors that affect the human per se, but on interaction, and how context shapes that interaction. If we ignore context, ‘factors of humans’ and ‘factors that affect humans’ become less problematic. If I turn on the wrong burner on my stove (which I do, about 30-40% of the time), it is not a problem. I simply turn it off, and now I know the correct dial to turn. If I want to be sure, I can bend down to look at the little diagram, but often I can’t be bothered. If an anaesthetist presses the wrong button, she might inadvertently turn off the power to a continuous-flow anaesthetic machine because of a badly positioned power switch. If the consequence of my turning the wrong dial were more severe, I would bother to check the little diagram more often, but I would still make mistakes, mostly because the layout of the burners is incompatible with the layout of the dials, which look identical and are co-located.

This fourth kind of human factors is a scientific discipline, especially from an academic point of view, and a design discipline, especially from an applied point of view. But what we are designing is not so much an artefact or procedure, as the interactions between people, tools, and environments, in particular contexts. This design involves science, engineering and craft.

Human-factors-as-sociotechnical-interaction has a dual purpose to improve system performance and human wellbeing. System performance includes all system goals (e.g., production, efficiency, safety, capacity, security, environment). Human wellbeing, meanwhile, includes human needs and values (e.g., health, safety, meaning, satisfaction, comfort, pleasure, joy).

Who uses it?

This perspective – more nuanced than the other three – is most prevalent among professional human factors specialists/ergonomists, who are accredited, certified, registered or chartered by relevant societies and associations. However, it is also a natural fit with the work of systems engineers, interaction designers, and even anthropologists.

The Good

This kind of human factors takes account of human limitations and capabilities, influences on human performance, and human influences on system performance. It is rooted in:

  • systems thinking, including an understanding of system goals, system structure, system boundaries, system dynamics and system outcomes;

  • design thinking, and the principles and processes of designing for human use; and,

  • scientific understanding of people and the nature of human performance, and empirical study of activity.

This kind of human factors also makes system interaction and influence visible. It uses systems methods to understand and map this interaction, and how interaction propagates across scale, over time, as non-linear interactions within and between systems: legal, regulatory, organisational, social, individual, informational, technical, etc. While the ‘factors affecting humans’ perspective tends to be restricted to linear ‘resultant’ causation, the systems interaction perspective is alert to emergence.

As an example, what can seem like a simple and common sense intervention from one perspective (e.g., a performance target, such as the four-hour accident and emergency target in UK hospitals), can create complex non-linear interactions and emergent phenomena across almost all aspects of the wider context noted above. (See the example from General Practitioner Doctor Margaret McCartney in this post, concerning targets for dementia screening [examples are at the bottom of the post]).

Human factors as system interaction considers all stakeholders’ needs and system/design requirements, in the context of all relevant systems, including an intervention (or designed solution) as a system (e.g., a sat nav), the context as a system (e.g., vehicles, drivers, pedestrians, roads, buildings), competing systems (e.g., smartphone apps, signs), and systems that collaborate with the intervention system to deliver a function (e.g., satellites, power sources). Most failed interventions can be traced to a failed understanding of one or more of these systems, especially the context as a system. (See the example from surgeon Craig McIlhenny in this post on the installation of a fully computerised system for ordering tests [radiology requests, lab requests, etc.])

This kind of human factors is the only kind that really recognises the world as it is: complex interaction and interdependency across micro, meso, and macro scales. Also unlike the other three kinds of human factors, at least in terms of their connotations, human-factors-as-sociotechnical-interaction has a clear dual purpose: improved system performance and human well-being. It is one of the only disciplines to have this dual focus.

The Bad

This kind of human factors is the least intuitive of the four. It is much easier to restrict ourselves to discussion of ‘the human factor’, ‘factors of humans’ and ‘factors affecting humans’, since these tend to restrict us to isolated factors and linear cause-effect thinking, usually within a restricted system boundary. This kind of human factors is therefore the perspective that tends to be neglected in favour of simplistic approaches to ‘human factors’.

It is also the most difficult of the four kinds of human factors to address in practice. In safety management, for instance, the tools that are routinely in use tend not to address system interactions. Taxonomies focus on ‘factors of humans’ and ‘factors affecting humans’, but do not model system interactions. Fault and event trees map interactions, but only in the context of failure, and the interactions typically are fixed (unchanging), linear (lacking feedback loops), and assume direct cause-effect relationships, with no consideration of emergence. There is an important distinction here between thinking systematically (thinking in an ordered or structured way) and systems thinking (thinking about the nature and functioning of systems).
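
To make the contrast concrete, the sketch below shows the kind of computation a simple fault tree supports: fixed AND/OR gates over basic events assumed to be independent, with no feedback loops, adaptation or emergence. The event names and probabilities are invented for illustration.

```python
# A minimal fault tree sketch: fixed AND/OR gates over basic events assumed
# independent. The event names and probabilities are invented.

def or_gate(*probabilities: float) -> float:
    """Probability that at least one independent input event occurs."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

def and_gate(*probabilities: float) -> float:
    """Probability that all independent input events occur."""
    p_all = 1.0
    for p in probabilities:
        p_all *= p
    return p_all

if __name__ == "__main__":
    # Hypothetical top event: "wrong medicine administered".
    p_lookalike_selected = 0.01   # basic event
    p_label_misread = 0.02        # basic event
    p_check_omitted = 0.1         # basic event
    selection_error = or_gate(p_lookalike_selected, p_label_misread)
    top_event = and_gate(selection_error, p_check_omitted)
    print(round(top_event, 5))  # ~0.003
```

Nothing in this structure can represent feedback loops, adaptation or emergent effects, which is precisely the limitation described above.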

When human factors is approached as the study and design or influence of system interaction, it is rare that simple, straightforward answers can be given to questions. The reason that the answer is so often ‘it depends’ (usually an unwanted answer to a question) is that the answer to a question, the solution to a problem, or the realisation of an opportunity in a sociotechnical system does depend on many factors: the stakeholders (and their skills, knowledge, experience, etc.), their activities, the artefacts that they interact with, the demand and pressure, resources and constraints, incentives and punishments, and other aspects of the wider context – informational, temporal, technical, operational, natural, social, financial, organisational, political, cultural, and judicial. Not all of these will always be relevant, but they need to be considered in the context of interactions across scale and over time.

It is fair to say that this kind of human factors is depersonalising. As we study, map and design system interaction, the person (‘the human factor’) can seem to be an anonymous system component, certainly less interesting than system interaction. Even tools that we use to try to capture this in design – such as personas – tend to depict imaginary people. So this kind of human factors can feel more like an engineering discipline than a human discipline. It is important that this be addressed in the way that human factors is practised, both in general interpersonal approach and via qualitative methods that aim at understanding personal needs, assets and experience. Systems thinking and design thinking must be combined with humanistic thinking.

Finally, as with the second and third kinds of human factors, this kind struggles with issues of responsibility and accountability (concepts that are subtly different in English, but not distinguished in many languages). Responsibility for system outcomes now appears to be distributed among complex system interactions, which change over time and space. Outcomes in complex sociotechnical systems are increasingly seen as emergent, arising from the nature of complex non-linear interactions across scale. But when something goes wrong, we as people, and our laws, demand that accountability be located. The nature of accountability often means that it must be held by one person or body. People at all levels – minister, regulator, CEO, manager, supervisor, front-line operator – have choice. With that choice comes responsibility and accountability. A police officer chooses to drag a woman by the hair for trying to vote. A senior nurse chooses whether to bully junior nurses. A professional cyclist chooses to take prohibited drugs. A driver chooses whether to drink before driving, to drive without insurance, to drive at 60mph in a 30mph zone, or to send text messages while driving. There may well be contextual influences on all of these behaviours, but we make choices in our behaviour. In these kinds of cases, it is important that ‘systems thinking’ is not used to scatter such choices into the ether of ‘the system’, stripping people of responsibility and accountability. That would be the ruin of both systems thinking and justice.
