Human systems engineering combines engineering and psychology to create systems designed around humans' capabilities and limitations. Interest in the subject has grown among government agencies, like the FDA, the FAA and NASA, as well as in private-sector industries like cybersecurity and defense.
More and more, we’re seeing robots deployed in real-world situations that have to work alongside or directly with people. In manufacturing and warehouse settings, it’s common to see collaborative robots (cobots) and autonomous mobile robots (AMRs) work alongside humans with no fencing or restrictions to divide them.
Dr. Kelly Hale, of Draper, a nonprofit engineering innovation company, has seen that too often human factors principles are an afterthought in the robotics development process. She gave some insight into things roboticists should keep in mind to make robots that can successfully work with humans.
Specifically, Hale outlined three overarching ideas that roboticists should keep in mind: start with your end goal in mind; consider how human and robot strengths and limitations can complement each other; and minimize communication to make it as efficient as possible.
Start with an end goal in mind
It’s important that human factors are considered at every stage of the development process, not just at the end when you’re beginning to put a finished system into the world, according to Dr. Hale.
“There’s not as many tweaks and changes that can be made [at the end of the process],” Dr. Hale said. “Whereas if we were brought in earlier, some small design changes probably would have made that interface even more useful.”
Once the hardware capabilities of a system are set, Dr. Hale’s team has to work around those parameters. In the early design phase, researchers should consider not only how a system functions but where and how a human comes in.
“I like to start with the end in mind,” Dr. Hale said. “And really, that’s the operational impact of whatever I’m designing, whether it’s an operational system, whether it’s a training system, whatever it is. I think that’s a key notion of the human-centered system, really saying, okay, at the end of the day, how do I want to provide value to the user through this increased capability?”
Working with human limitations and robot limitations
“From my perspective, human systems engineering is really about combining humans and technology in the best way so that the overall system can be more capable than the parts,” Dr. Hale said. “So more useful than a human by themselves or a machine or a system by themselves.”
There are many questions roboticists should ask themselves early in the process of building their systems. According to Dr. Hale, roboticists should understand human capabilities and limitations and ask whether the system's design effectively accounts for them. They should also consider human physical and cognitive limits, as there's only so much data a person can handle at once.
Knowing human limitations will help roboticists build systems that fill in those gaps; conversely, they can build systems that maximize the things humans are good at.
Another hurdle in building systems that work with humans is earning the trust of the people working alongside them. It's important for those people to understand what the robot can do, and to trust that it will do it consistently.
“Part of it is building that situational awareness and an understanding from the human’s perspective of the system and what its capabilities are,” Dr. Hale said. “To have trust, you want to make sure that what I believe the system is capable of matches the automation capability.”
For Dr. Hale, it’s about pushing humans and robotic systems toward learning from each other and having the ability to grow together.
For example, while driving, there are many things humans do better than autonomous vehicles. Humans have a richer understanding of the complexity of road rules and can better read cues from other drivers. At the same time, there are many things autonomous vehicles do better than humans. With advanced sensors and vision, they have fewer blind spots and can see things from farther away than humans can.
In this case, the autonomous system can learn from human drivers as they’re driving, taking note of how they respond to tricky situations.
“A lot of it is having that shared experience and having an understanding of the baseline of what the system’s capable of, but then having that learning opportunity with this system over time to really kind of push the boundaries.”
Making systems that communicate effectively with humans
People can tell when a system hasn't been optimized for their use. The manner and frequency with which the technology interacts with them is often a dead giveaway.
“What you’ll find with some of the systems that were less ideally designed, you start to get notified for everything,” Dr. Hale said.
Dr. Hale compared these systems to Clippy, the animated paperclip that used to show up in Microsoft Word. Clippy was infamous for butting in too often to tell users things they already knew. A robotic system that interrupts people too often while they're working, with information that isn't important, results in a poor user experience.
“Even with those systems that have a lot of user experience and human factors considered, there are still those touch points and those endpoints that make it tricky. And to me, it’s a lot of those ‘false alarms’, where you’re getting notified when you don’t necessarily want to be,” Dr. Hale said.
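One common way to cut down on these "false alarms" is to gate alerts by priority and suppress repeats within a cooldown window. The sketch below is a hypothetical illustration of that idea, not a pattern from any specific robot's software; the priority scale and timing values are assumptions.

```python
import time


class NotificationFilter:
    """Suppress low-priority and repetitive robot alerts.

    Hypothetical illustration: the priority levels and cooldown
    window here are assumed values, not part of any real robot API.
    """

    def __init__(self, min_priority=2, cooldown_s=30.0):
        self.min_priority = min_priority  # e.g. 0=info, 1=low, 2=warning, 3=critical
        self.cooldown_s = cooldown_s      # suppress repeats within this window
        self._last_sent = {}              # alert key -> timestamp of last delivery

    def should_notify(self, key, priority, now=None):
        """Return True only for alerts worth interrupting the operator."""
        now = time.monotonic() if now is None else now
        if priority < self.min_priority:
            return False  # below the interruption threshold
        last = self._last_sent.get(key)
        if last is not None and now - last < self.cooldown_s:
            return False  # duplicate alert within the cooldown window
        self._last_sent[key] = now
        return True
```

With a filter like this, a routine "battery at 60%" message would be dropped, while a first "obstacle detected" warning gets through and its repeats are muted until the cooldown elapses.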
Dr. Hale also advises that roboticists should consider access and maintenance when designing robots to prevent downtime.
With these things in mind, Hale said, the robot development process can be greatly shortened, resulting in a robot that not only works better for the people who need to work with it, but can also be quickly deployed in many environments.