
Raising the Bar for Human-Machine Teams



After a decade of studies, one Air Force researcher says the key to information dominance isn't AI's smarts alone – it's the human touch that unlocks true human-machine teamwork.

 

I recently attended a presentation by Dr. Chad Tossell on the importance of the human in human-machine teaming, and I felt like I was back in the classroom – in the good way, not the worried-I'd-have-to-retake-Astro-over-the-summer way. Dr. Tossell delivered an incredible presentation that was both easily digestible and thought-provoking. The talk – titled “The Importance of the Human in Human-Machine Teaming: Reflections on a Decade of Studies, Simulations, and Surprises” – distilled years of research and simulations into five core lessons for anyone interested in AI, defense, and the future of warfighting.

 

Five Takeaways from a Decade of HMT Research: Dr. Tossell’s five “wisdom” takeaways about human-machine teaming (HMT) were as insightful as they were hard-won:

 

  1. Humans are susceptible to influence from AI – often in surprising, subtle ways. In other words, AI teammates can persuade and anchor human decision-making more than we realize. From experiments where even a simple Roomba vacuum robot successfully convinced people to keep following its suggestions, to an "untrusted but obeyed" scenario in a command-and-control simulation (where operators followed an AI’s recommendations despite labeling it untrustworthy), the evidence is clear: we humans can be led by our machine partners (pmc.ncbi.nlm.nih.gov). This influence can be beneficial – like an AI assistant nudging us toward a smarter decision – but it can also be dangerous if a flawed or malicious AI steers us wrong.

 

  2. Moral advice from AI must be earned, not assumed – a point Dr. Tossell made succinctly. An AI might speak with confidence or a friendly, human-like face, but that doesn’t mean its ethical compass is aligned with ours. We should not automatically trust a machine’s judgment on right and wrong. This point was driven home by scenarios involving a Furhat social robot engaging in moral reasoning dialogues. The takeaway: an AI advisor needs to demonstrate reliability and uphold our values over time before we rely on it for life-and-death decisions.

 

  3. Context is king in human-machine teaming. The success of an AI teammate depends more on its role, the concept of operations (CONOPS), and the environment than on any flashy tech specs or fancy algorithms. In practical terms, a mediocre AI properly integrated into a unit’s workflow can outperform a cutting-edge AI that’s bolted on without context. A navigation AI that works great in the lab might flop in a desert battlefield if soldiers aren’t trained in how to use it under stress. Or consider space operations: an autonomous satellite monitoring system is only as useful as the command structure that can interpret and act on its alerts. Dr. Tossell emphasized that getting the human-machine organizational setup right – who trusts whom, and under what conditions – often matters more than the AI’s raw capability.

 

  4. We need to raise the bar for both humans and AI. There’s a lot of hand-wringing these days that reliance on AI could erode human skills like critical thinking. Tossell’s response: instead of lowering our standards, let’s raise them. He challenged the audience to think beyond just “AI might make us lazy.” What about ensuring our AI partners embody (or at least respect) qualities like ethical reasoning, loyalty, values, esprit de corps – even the will to fight? In one example from education, Tossell and colleagues reframed a college essay assignment to require the use of ChatGPT as a collaborator, rather than treating it as a cheating shortcut. The result? Students shifted from seeing the AI as a threat to academic integrity to viewing it as a “trusted partner” (albeit one under oversight) (scribd.com). They produced higher-quality essays when they engaged critically with the AI, whereas passive, uncritical use of ChatGPT led to worse outcomes. This “raising the bar” approach meant students had to do more thinking, not less – exactly what we need in military settings, too. Instead of AI replacing human judgment, it should provoke deeper human analysis. Imagine applying this in a military intelligence cell: analysts could be required to critique and refine the AI’s reports rather than just rubber-stamping them. The human-AI team would likely outperform either alone, much like how in chess the best results come from centaur teams (human+AI) rather than AI alone.

 

  5. Human-centered AI isn’t a feature – it’s a foundation. Perhaps the overarching theme of Dr. Tossell’s talk was that human-centric design and testing must be the bedrock of all our AI efforts. “If we want resilient, adaptable, multi-domain teams, we have to design with the human at the core — not as an afterthought, but as the constant,” he argued. In defense tech development, too often the human factors get bolted on late in the process. Tossell’s message: flip that script. By building systems around human strengths and limitations from day one, we not only avoid the “bad surprises” (like an autonomous system misinterpreting an order with catastrophic results), but we also enable “good surprises” – emergent advantages that arise when humans and machines truly complement each other. A well-designed human-machine team can yield creative solutions on the fly; we’ve seen hints of this in exercises where, for instance, an AI decision aid and a human commander together devised a tactic neither would have conceived alone. Those are the kind of positive surprises we want.

 

Raising the Bar for Information Dominance: One discussion question from the presentation struck a chord with me: “How can we raise the bar in HMT to ensure information dominance (in space and/or other domains)?” In today’s strategic environment, information dominance – the ability to see, decide, and act faster and better than any adversary – is the holy grail. Achieving it will absolutely require seamless human-machine teaming. So, how do we raise the bar?

 

From my perspective, it starts with training and trust. We need to train our operators and analysts with AI as part of the team, not separate from it. Just as Dr. Tossell’s students learned to use ChatGPT in a supervised way, our warfighters should regularly exercise with AI wingmen, scouts, and battle managers so that using these tools becomes second nature. This builds calibrated trust: neither blind faith in the computer, nor reflexive skepticism, but an accurate understanding of what the AI can and cannot do. For example, Space Force guardians working with an AI surveillance system should practice scenarios where the AI spots a potential orbital threat – sometimes correctly, sometimes erroneously – and learn how to verify and respond. Through repetition, the human-machine team learns each other’s tendencies and improves as a whole.
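To make “calibrated trust” a bit more concrete, here is a minimal sketch in Python of the kind of drill I have in mind. Everything in it is a made-up illustration – the 80% hit rate, the trial count, and the three operator “styles” are assumptions, not anything from Dr. Tossell’s research. The point it demonstrates is simple: when an operator’s willingness to accept alerts matches the AI’s actual reliability, the team scores better than either blind faith or reflexive skepticism.

import random

AI_HIT_RATE = 0.80   # assumption: the AI's alerts are correct 80% of the time
N_TRIALS = 200       # number of simulated alerts in one drill

def run_drill(accept_probability, seed=0):
    """Simulate an operator who accepts AI alerts at a fixed rate and score the drill."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(N_TRIALS):
        alert_is_true = rng.random() < AI_HIT_RATE          # ground truth behind the alert
        operator_accepts = rng.random() < accept_probability
        # Accepting a true alert, or verifying away a false one, counts as a correct call.
        if operator_accepts == alert_is_true:
            correct += 1
    return {"accuracy": correct / N_TRIALS,
            "trust_gap": abs(accept_probability - AI_HIT_RATE)}

if __name__ == "__main__":
    for style, p in [("blind faith", 1.0), ("reflexive skepticism", 0.2), ("calibrated", 0.8)]:
        r = run_drill(p)
        print(f"{style:22s} accuracy={r['accuracy']:.2f} trust_gap={r['trust_gap']:.2f}")

A real exercise would obviously involve judgment, context, and verification steps rather than a coin flip, but even this toy version shows why repetition against a known-imperfect AI is how a team learns each other’s tendencies.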

 

Raising the bar also means demanding more from the AI itself. Information dominance isn’t just about crunching data faster; it’s also about making sound judgments. That means our AI systems need to be imbued with our doctrine and values, or at least be able to factor them in. If an AI algorithm flags a piece of enemy propaganda, can it also assess the context – is it just noise, or could it sway our troops or allies? A human-aware AI that understands, say, the importance of morale or the rules of engagement will be far more useful than one that blindly optimizes a spreadsheet. In practice, this could involve developing AI decision aids that explain their reasoning in human terms (“I recommend moving satellite X because it has a higher risk of cyber compromise, and we value assured comms over keeping that asset in its current orbit”). Such transparency and alignment help the human partner grasp the bigger picture and act decisively, maintaining the tempo needed for information superiority.
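Here is a rough sketch of what that kind of value-aware, explainable decision aid could look like in code. All of it is hypothetical: the value weights, the courses of action, and the risk figures are invented for illustration, not drawn from any real system or doctrine. What it shows is the shape of the idea – score options against command priorities, not just raw metrics, and say why in plain language.

from dataclasses import dataclass

# Assumed command priorities: assured communications is weighted above keeping an asset in place.
VALUE_WEIGHTS = {"assured_comms": 0.6, "orbit_continuity": 0.4}

@dataclass
class CourseOfAction:
    name: str
    cyber_risk: float       # 0..1, chance of compromise if we take this course
    comms_preserved: float  # 0..1, how well assured comms is preserved
    orbit_preserved: float  # 0..1, how well the current orbit is preserved

def score(coa):
    """Weighted value score, discounted by cyber risk."""
    return (VALUE_WEIGHTS["assured_comms"] * coa.comms_preserved
            + VALUE_WEIGHTS["orbit_continuity"] * coa.orbit_preserved) * (1 - coa.cyber_risk)

def recommend(options):
    best = max(options, key=score)
    return (f"Recommend '{best.name}' (score {score(best):.2f}): cyber risk is "
            f"{best.cyber_risk:.0%}, and we weight assured comms "
            f"({VALUE_WEIGHTS['assured_comms']:.0%}) over orbit continuity "
            f"({VALUE_WEIGHTS['orbit_continuity']:.0%}).")

if __name__ == "__main__":
    print(recommend([
        CourseOfAction("keep satellite X in current orbit", 0.35, 0.90, 1.00),
        CourseOfAction("move satellite X to a backup slot", 0.10, 0.85, 0.60),
    ]))

The specific numbers don’t matter; what matters is that the recommendation arrives with its reasoning attached, so the human partner can see the trade-off and override it when the context demands.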

 

We can also draw lessons from recent real-world events. In the Russo-Ukrainian conflict, both sides have employed drones and AI-enabled surveillance, but the most successful operations paired cheap autonomous systems with clever human tactics. Ukrainian forces, for instance, often jury-rig simple drones (essentially flying robots) and use them in ways the Russians didn’t expect – a creativity that comes from human minds on the ground. The side that best fuses human ingenuity with machine efficiency tends to gain the upper hand. The same will hold true in space and cyber domains. Adversaries are racing to field AI for electronic warfare, targeting, and propaganda; our edge will come from out-teaming them, not just out-coding them. That means developing doctrines where human judgment and machine intelligence are each used where they’re strongest. It also means investing in HMT research as a priority, not a footnote – something Dr. Tossell, now a research lead at the University of Colorado’s national security institute, has championed (colorado.edu).

 

In reflecting on Dr. Tossell’s presentation and the broader state of human-machine teaming, I’m struck by how clear the path is, if we’re willing to follow it. We must be proactive and thoughtful: test our AIs not just in ideal conditions but in messy, human ones; educate our people to leverage AI as a partner; and build a culture that values the human-machine bond as a critical warfighting capability.

 

In the end, maintaining information dominance through HMT isn’t about ceding the high ground to machines – it’s about elevating ourselves. The human element is our decisive advantage, and if we harness technology to amplify that advantage, we won’t just avoid being outsmarted by algorithms; we’ll shape the battlefield on our terms. Dr. Tossell’s final message was a forward-looking one: we have an opportunity to design the future of warfare as human-centered by design. It’s an opportunity we can’t afford to squander. The next era of conflict will be won by teams that expertly blend human intuition and ethical judgment with machine speed and precision. In other words, the side that best answers the call to “raise the bar” in human-machine teaming will dominate – in space and everywhere else.

 

 

Sources:

Tossell, C. (2025, April 30). The importance of the human in human-machine teaming: Reflections on a decade of studies, simulations, and surprises. Wisdom.011 Session, Center for National Security Initiatives, University of Colorado Boulder.

 

Tossell, C., Tenhundfeld, N., Momen, A., Cooley, K., & de Visser, E. J. (2024). Student perceptions of ChatGPT use in a college essay assignment: Implications for learning, grading, and trust in artificial intelligence. IEEE Transactions on Learning Technologies. https://doi.org/10.1109/TLT.2024.XXXXX

 

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457. https://doi.org/10.1017/S0140525X00005756

 

Wagner, A. R., & Arkin, R. C. (2011). Acting deceptively: Providing robots with the capacity for deception. International Journal of Social Robotics, 3(1), 5–26. https://doi.org/10.1007/s12369-010-0071-3 (Referenced for “burning room” robot trust study from Georgia Tech.)

 

Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human-Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118

 
 
 
