Self-Walking Exoskeletons

Instead of being directed movement by movement, an AI-based prosthetic infers the wearer’s destination

By Steven Cherry

The Existentialist philosopher Jean-Paul Sartre has an example that he uses to describe what it is to decide to do something … what the process of decision-making is. Or, more precisely, what the process isn’t.

He says, imagine you’re driving your car and waiting at a stop light. When the light turns green, you don’t first decide to take your foot off the brake pedal and put it on the gas pedal. You see the light change, and your foot moves from the one pedal to the other, and that is the decision.

Unfortunately, in the world of prosthetics, specifically exoskeletons, that’s not how it works. We’re still at the stage where a person has to instruct the prosthetic to first do one thing, then another, then another. As University of Waterloo Ph.D. researcher Brokoslaw Laschowski put it recently, “Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode.”

Until perhaps now, or at least soon. Laschowski and his fellow researchers have been developing a device, using a system called ExoNet, that uses wearable cameras and deep learning to figure out the task the exoskeleton-wearing person is engaged in, perhaps walking down a flight of stairs, or along a street while avoiding pedestrians and parking meters, and to have the device make the decisions about where a foot should fall, which new direction the person should face, and so on.

Brokoslaw Laschowski is the lead author of a new paper, “Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons,” which is freely downloadable from bioRxiv, and he’s my guest today.

Brock, welcome to the podcast.

Brokoslaw Laschowski Thank you for having me, Steven. Appreciate it.

Steven Cherry You’re very welcome. A press release compares your system to autonomous cars, but I would like to think it’s more like the Jean-Paul Sartre example, where instead of making micro-decisions, the only decisions a person wearing a robotic exoskeleton has to make are at the level of perception and intention. How do you think of it?

Brokoslaw Laschowski Yeah, I think that’s a fair comparison. Right now, we rely on the user to communicate their intent to these robotic devices. It is my contention that there is a certain level of cognitive demand and inconvenience associated with that. So by developing autonomous systems that can sense and decide for themselves, hopefully we can lessen that cognitive burden, since the device is essentially controlling itself. So in some ways, it’s similar to the idea of an autonomous vehicle, but not quite.

Steven Cherry Let’s start with the machine learning. What was the data set and how did you train it?

Brokoslaw Laschowski The data set was generated by myself. This involved using a wearable camera system and collecting millions and millions of images of different walking environments at various places throughout Ontario. Those images were then labeled, which is important for what’s known as supervised learning. So this is when you give the machine an image and the machine also has access to the label, that is, what the image is showing. And then machine learning is … essentially you can think of it almost like tuning knobs, fine-tuning parameters in the machine, so that its output, the prediction for that image, matches the label that you gave the image.
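To make that supervised-learning setup concrete, here is a minimal sketch in Python of what a labeled walking-environment dataset could look like. The directory layout and class names are illustrative assumptions, not the actual ExoNet data.

```python
# Illustrative sketch only: a labeled image dataset for supervised learning.
# The folder layout and class names are assumptions, not the real ExoNet data.
from pathlib import Path

# Each class name doubles as the label for every image stored under it.
CLASSES = ["level_ground", "incline_stairs", "decline_stairs"]

def load_labeled_images(root: str):
    """Collect (image_path, label_index) pairs from class-named subfolders."""
    samples = []
    for label_index, class_name in enumerate(CLASSES):
        for image_path in sorted(Path(root, class_name).glob("*.jpg")):
            samples.append((image_path, label_index))
    return samples

# e.g. dataset/level_ground/0001.jpg -> label 0 ("level_ground")
samples = load_labeled_images("dataset")
```

Each image is paired with the environment class it depicts, which is the label the network’s predictions are later compared against.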

So in some ways, it’s kind of similar to an optimization problem, and it’s referred to as machine learning because the machine itself learns the optimal design based on a certain optimization algorithm that allows it to iteratively update the tunable weights within, say, a neural network.
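As a rough illustration of that optimization loop, here is a hedged sketch assuming PyTorch; the tiny model and random tensors are placeholders rather than the network or data described in the paper.

```python
# Rough sketch of supervised training: an optimizer iteratively updates the
# network's weights so its predictions come to match the image labels.
# Assumes PyTorch; the small model and random data are placeholders.
import torch
import torch.nn as nn

NUM_CLASSES = 3  # e.g. level ground, incline stairs, decline stairs

model = nn.Sequential(              # stand-in for a real image classifier
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, NUM_CLASSES),
)
loss_fn = nn.CrossEntropyLoss()     # penalizes predictions that disagree with labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(16, 3, 64, 64)            # placeholder batch of camera images
labels = torch.randint(0, NUM_CLASSES, (16,))  # placeholder environment labels

for step in range(10):
    logits = model(images)           # predict an environment class for each image
    loss = loss_fn(logits, labels)   # how far the predictions are from the labels
    optimizer.zero_grad()
    loss.backward()                  # compute gradients of the loss
    optimizer.step()                 # nudge ("tune") the weights to reduce the error
```

The loop repeatedly measures how far the predictions are from the labels and lets the optimizer adjust the network’s weights to reduce that error, which is the knob-tuning Laschowski describes.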

Steven Cherry So basically, you were training it to understand different walking environments.