• Adjust their communications to the agents' capabilities (for example, graphical interfaces work well for a human but wouldn't help most agents)
Because the team members will be in different states depending on how much of their original plan they've executed, CMAs must support further negotiation and replanning at runtime.
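To make this concrete, here is a minimal sketch (our illustration, with hypothetical names; the article prescribes no particular mechanism) of one way agents could detect that runtime renegotiation is needed: each member reports the remaining portion of its plan, and if the members no longer agree on the next joint step, the team replans.

```python
def report_state(name: str, plan: list[str], completed: int) -> dict:
    """Share enough state for teammates to see where we are in our plan."""
    return {"member": name, "remaining": plan[completed:]}

def next_joint_step(reports: list[dict]) -> str | None:
    """Return the next step if all members still agree on it; None means replan."""
    heads = {r["remaining"][0] for r in reports if r["remaining"]}
    return heads.pop() if len(heads) == 1 else None

# Two members have executed different amounts of their original plans.
reports = [
    report_state("uav-1", ["survey", "relay", "return"], completed=1),
    report_state("ops-agent", ["monitor", "relay", "return"], completed=2),
]
step = next_joint_step(reports)
if step is None:
    print("members no longer agree on the next step -- renegotiate the plan")
else:
    print(f"proceed with: {step}")
```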
Attention management
Challenge 9: Agents must be able to participate in managing attention.
As part of maintaining common ground during coordinated activity, team members direct each other's attention to the most important signals, activities, and changes. They must do this in an intelligent and context-sensitive manner, so as not to overwhelm others with low-level messages containing little signal mixed with a great deal of distracting noise.
Relying on their mental models of each other, responsible team members expend effort to appreciate what the others need to notice within the context of the task and the current situation.19
Cockpit automation offers a cautionary example. Automation can compensate for trouble (for example, asymmetric lift due to wing icing), but it currently does so invisibly. Crews can remain unaware of the developing trouble until the automation nears the limits of its authority or its capability to compensate. As a result, the crew might take over too late or be unprepared to handle the disturbance once they take over, resulting in a bumpy transfer of control and significant control excursions. This general problem has been a part of several aviation incident and accident scenarios.
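The wing-icing scenario suggests a straightforward remedy: have the automation expose how much of its control authority it is spending on compensation, instead of compensating silently. The sketch below is ours (hypothetical names and numbers, not any real avionics interface) and shows the idea of publishing a graded status that a crew display could render.

```python
def compensation_status(correction: float, max_authority: float) -> dict:
    """Report how much control authority compensation is consuming.

    correction:    current corrective command (e.g., trim units)
    max_authority: the most the automation can or may command
    """
    used = abs(correction) / max_authority  # fraction of authority consumed
    # Map effort onto a graded level so the crew sees trouble developing,
    # rather than hearing a single alarm at the moment of saturation.
    level = ("nominal" if used < 0.5 else
             "elevated" if used < 0.8 else "near-limit")
    return {"authority_used": round(used, 2), "level": level}

# Asymmetric lift builds up; the reported effort rises long before
# the automation reaches the limit of its authority.
for correction in (0.5, 2.0, 3.5, 4.4):
    print(compensation_status(correction, max_authority=5.0))
```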
It will push the limits of technology to get the machines to communicate as fluently as a well-coordinated human team working in an open, visible environment. The automation will have to signal when it's having trouble and when it's taking extreme action or moving toward the extreme end of its range of authority. Such capabilities will require interesting relational judgments about agent activities: How does an agent tell when another team member is having trouble performing a function but has not yet failed? How and when does an agent effectively reveal or communicate that it's moving toward its limit of capability?
Adding threshold-crossing alarms is the usual answer to these questions in automation design. In practice, however, rigid, context-insensitive thresholds will typically be crossed either too early (producing an agent that speaks up too often and too soon) or too late (producing an agent that stays silent and speaks up too little). However, focusing on the basic functions of joint activity rather than on machine autonomy has already produced some promising successes.20
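To illustrate one context-sensitive alternative (a sketch of our own, not the approach of the cited work): instead of alarming on a fixed value, project when the agent will reach its limit at the current rate of change, and speak up based on the remaining time margin, so the alert adapts to how quickly trouble is developing.

```python
def time_to_limit(value: float, rate: float, limit: float) -> float:
    """Projected seconds until `value` reaches `limit` at the current `rate`."""
    if rate <= 0:
        return float("inf")  # not trending toward the limit
    return (limit - value) / rate

def should_speak_up(value: float, rate: float, limit: float,
                    margin_s: float = 30.0) -> bool:
    """Alert on projected margin, not on a rigid value threshold.

    A slowly drifting value crosses a fixed threshold 'too early';
    a fast excursion crosses it 'too late'. Projecting time-to-limit
    ties the alert to how quickly the situation is deteriorating.
    """
    return time_to_limit(value, rate, limit) < margin_s

# Same current value, very different urgency:
print(should_speak_up(value=60, rate=0.1, limit=100))  # False: ~400 s of margin
print(should_speak_up(value=60, rate=2.0, limit=100))  # True:  ~20 s of margin
```

The margin parameter is still a design choice, but it now tracks the dynamics of the situation rather than an arbitrary point in the value's range.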
Cost control
Challenge 10: All team members must help control the costs of coordinated activity.
The Basic Compact commits people to coordinating with each other and to incurring the costs of providing signals, improving predictability, monitoring the others' status, and so forth. All of these take time and energy. Coordination costs can easily get out of hand, so the partners in a coordination transaction must do what they reasonably can to keep them down. This is a tacit expectation: to try to achieve economy of effort. Coordination requires continuing investment; hence the power of the Basic Compact, a willingness to invest energy and to accommodate others rather than just performing alone within one's narrow scope and subgoals. Coordination doesn't come for free, and coordination, once achieved, doesn't allow us to stop investing. Otherwise, the coordination breaks down.
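Read computationally (a sketch under our own assumptions, not a formula from the article), economy of effort might mean that an agent weighs a coordination message's expected value against the interruption cost it imposes before sending it:

```python
def worth_sending(info_value: float, interruption_cost: float,
                  receiver_load: float) -> bool:
    """Send only when the expected team value beats the cost of interrupting.

    info_value:        estimated value of the update to the receiver (0..1)
    interruption_cost: base cost of pulling the receiver's attention (0..1)
    receiver_load:     how busy the receiver currently is (0..1)
    """
    # Interrupting a busy teammate costs more; scale the cost accordingly.
    return info_value > interruption_cost * (1.0 + receiver_load)

# A routine status ping is suppressed when the receiver is busy...
print(worth_sending(info_value=0.2, interruption_cost=0.3, receiver_load=0.9))  # False
# ...but a high-value warning still goes through.
print(worth_sending(info_value=0.9, interruption_cost=0.3, receiver_load=0.9))  # True
```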
Keeping coordination costs down is partly, but only partly, a matter of good human-computer interface design. Beyond that, the agents must conform to the operators' needs rather than requiring operators to adapt to them. Information handoff, a basic exchange in coordination involving humans and agents, depends on common ground and mutual predictability. As the notions of HCC suggest, agents must become more understandable, more predictable, and more sensitive to people's needs and knowledge.
The 10 challenges we've presented can be viewed in different lights:

• As a blueprint for designing and evaluating intelligent systems: requirements for successful operation and for the avoidance or mitigation of coordination breakdowns.
• As cautionary tales about the ways technology can disrupt rather than support coordination: simply relying on explicit procedures, such as common operating pictures, isn't likely to be sufficient.
• As the basis for practicable human-agent systems. All the challenges have us walking a fine line between two views of AI: the traditional view, that AI's goal is to create systems that emulate human capabilities, and the nontraditional, human-centered computing goal of creating systems that extend human capabilities, enabling people to reach into contexts that matter for human purposes.
We can imagine that in the future some agents will be able to enter into some form of a Basic Compact, with diminished capability.6 Agents might eventually be fellow team members with humans in the way a young child or a novice can be: subject to the consequences of brittle and literal-minded interpretation of language and events, limited ability to appreciate or even attend effectively to key aspects of the interaction, poor anticipation, and insensitivity to nuance. In the meantime, we hope you might use the 10 challenges we've outlined to guide research in the design of team and organizational simulations that seek to capture coordination breakdowns and other features of joint activity. Through further research, restricted types of Basic Compacts might be created that could be suitable for use in human-agent systems.
Acknowledgments
Klein Associates, Ohio State University, and
the Institute for Human and Machine Cognition