Dialogue is one of the most basic ways humans use language, and it is a desirable capability for autonomous systems. Army researchers have developed a novel dialogue capability intended to transform soldier-robot interaction and enable joint tasks at operational speeds.
The fluid communication achieved by dialogue will reduce training overhead in controlling autonomous systems and improve soldier-agent teaming.
Researchers from the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory, in collaboration with the University of Southern California’s Institute for Creative Technologies, developed the Joint Understanding and Dialogue Interface, or JUDI, capability, which enables bi-directional conversational interactions between soldiers and autonomous systems.
The Institute for Creative Technologies, or ICT, is a Department of Defense-sponsored University Affiliated Research Center, or UARC, working in collaboration with DOD services and organizations. UARCs are aligned with prestigious institutions conducting research at the forefront of science and innovation. ICT brings film and game industry artists together with computer and social scientists to study and develop immersive media for military training, health therapies, education and more.
This effort supports the Next Generation Combat Vehicle Army Modernization Priority and the Army Priority Research Area for Autonomy by reducing soldier burden when teaming with autonomous systems and by allowing verbal command and control of those systems.
“Dialogue will be a critical capability for autonomous systems operating across multiple echelons of Multi-Domain Operations so that soldiers across land, air, sea and information spaces can maintain situational awareness on the battlefield,” said Dr. Matthew Marge, a research scientist at the laboratory. “This technology enables a soldier to interact with autonomous systems through bidirectional speech and dialogue in tactical operations where verbal task instructions can be used for command and control of a mobile robot. In turn, the technology gives the robot the ability to ask for clarification or provide status updates as tasks are completed. Instead of relying on pre-specified, and possibly outdated, information about a mission, dialogue enables these systems to supplement their understanding of the world by conversing with human teammates.”
In this innovative approach, he said, dialogue processing is based on a statistical classification method that interprets a soldier’s intent from their spoken language. The classifier was trained on a small dataset of human-robot dialogue where human experimenters stood in for the robot’s autonomy during initial phases of the research.
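The sketch below is purely illustrative of the kind of statistical intent classification the researchers describe; it is not JUDI's implementation, and the utterances, intent labels, and model choice are assumptions. It shows how a lightweight classifier can be trained on a small set of transcribed instructions, in line with the article's point that the approach works with limited data.

```python
# Illustrative sketch only -- not the actual JUDI classifier.
# Utterances and intent labels below are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice, a few hundred (utterance, intent) pairs collected from
# human-robot dialogue would stand in for this toy data.
utterances = [
    "move to the doorway on your left",
    "turn around and face me",
    "send a picture of what you see",
    "stop where you are",
]
intents = ["move", "turn", "report", "stop"]

# TF-IDF features plus a linear classifier: a common, data-efficient choice
# when training examples number in the hundreds rather than the thousands.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(utterances, intents)

print(model.predict(["drive forward to the next door"]))  # e.g. ['move']
```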
The software developed as part of the collaboration with USC ICT leverages technologies developed in the institute’s Virtual Human Toolkit.
“JUDI’s ability to leverage natural language will reduce the learning curve for soldiers who will need to control or team with robots, some of which may contribute different capabilities to a mission, like scouting or delivery of supplies,” Marge said.
The goal, he said, is to shift the paradigm of soldier-robot interaction from today’s heads-down, hands-full joystick operation of robots to a heads-up, hands-free mode of interaction where a soldier can team with one or more robots while maintaining situational awareness of their surroundings.
According to the researchers, JUDI is distinct from similar research currently being conducted in the commercial sector.
“Commercial industry has largely focused on intelligent personal assistants like Siri and Alexa – systems that can retrieve factual knowledge and perform specialized tasks like setting reminders, but do not reason over the immediate physical surroundings,” Marge said. “These systems also rely on cloud connectivity and large, labeled datasets to learn how to perform tasks.”
In contrast, Marge said, JUDI is designed for tasks that require reasoning in the physical world, where data is sparse because it requires previous human-robot interaction and there is little to no reliable cloud-connectivity. Current intelligent personal assistants may rely on thousands of training examples, while JUDI can be tailored to a task with only hundreds, an order of magnitude smaller.
Moreover, he said, JUDI is a dialogue system adapted to autonomous systems such as robots, allowing it to draw on multiple sources of context, including soldier speech and the robot’s perception system, to support collaborative decision-making.
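To make that idea concrete, here is a hypothetical sketch of how a dialogue manager might combine a soldier's interpreted intent with the robot's perception context before acting or asking for clarification. The names, data structures, and decision logic are assumptions for illustration, not JUDI's API.

```python
# Hypothetical sketch: fusing spoken intent with perception context.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intent:
    action: str            # e.g. "move"
    target: Optional[str]  # e.g. "doorway", or None if no object was named

def respond(intent: Intent, perceived_objects: set) -> str:
    """Decide whether to act, report status, or ask for clarification."""
    if intent.target is None:
        return f"Executing '{intent.action}'."
    if intent.target in perceived_objects:
        return f"Moving to the {intent.target} now."
    # Grounding failure: the robot cannot see the referenced object,
    # so it asks the soldier to clarify rather than guessing.
    return f"I don't see a {intent.target} from here. Can you describe where it is?"

print(respond(Intent("move", "doorway"), {"vehicle", "doorway"}))
print(respond(Intent("move", "bridge"), {"vehicle", "doorway"}))
```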
This research represents a synergy of approaches created by ARL researchers from both the lab’s Maryland locations and ARL West in Playa Vista, California, who are part of the lab’s Human Autonomy Teaming, or HAT, and Artificial Intelligence for Maneuver and Mobility, or AIMM, Essential Research Programs, and experts in dialogue from USC ICT. The group’s speech recognizer also leveraged a speech model developed as part of the Intelligence Advanced Research Projects Activity’s Babel program, designed for reverberant and noisy acoustic environments.
JUDI will be integrated into the CCDC ARL Autonomy Stack, a suite of algorithms, libraries and software components developed under the decade-long Robotics Collaborative Technology Alliance that performs functions required by intelligent systems, such as navigation, planning, perception, control and reasoning.
Successful innovations in the stack are also rolled into the CCDC Ground Vehicle System Center’s Robotics Technology Kernel.
“Once ARL develops a new capability that is built into the autonomy software stack, it is spiraled into GVSC’s Robotics Technology Kernel where it goes through extensive testing and hardening and is used in programs such as the Combat Vehicle Robotics, or CoVeR, program,” said Dr. John Fossaceca, AIMM ERP program manager. “Ultimately, this will end up as Army owned intellectual property that will be shared with industry partners as a common architecture to ensure that Next Generation Combat Vehicles are based on best of breed technologies with modular interfaces.”
Moving forward, the researchers will evaluate the robustness of JUDI and the soldier-robot interaction with physical mobile robot platforms at an upcoming AIMM ERP-wide field test currently planned for September.
“Our ultimate goal is to enable soldiers to more easily team with autonomous systems so they can more effectively and safely complete missions, especially in scenarios like reconnaissance and search-and-rescue,” Marge said. “It will be extremely gratifying to know that soldiers can have more accessible interfaces to autonomous systems that can scale and easily adapt to mission contexts.”
Editor’s Note: This article was republished from the U.S. Army CCDC Army Research Laboratory.