Underactuated Robotics (csail.mit.edu)
242 points by jeffreyrogers on Feb 18, 2019 | 27 comments



Interesting. I used to work on legged locomotion in the 1990s.[1]

The notes point out that locomotion on the flat is a special case. Too much work on legged locomotion assumes flat ground. On the flat, balance dominates the problem. Once you get off a flat surface, slip/traction control dominates.

Slip control is like "ABS for feet". You have to keep the forces applied parallel to the ground below the point where slip starts. That changes the shape of the problem. Classically, most robot control is positional. Slip control is in force space. Until you have slip control, hill climbing will not work. So the first step is to constrain those forces to stay below the break-loose point.
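
Roughly, the "ABS for feet" constraint looks like this (a minimal Python sketch; the friction coefficient and force convention are illustrative, not from my old controller):

    import numpy as np

    MU = 0.6  # assumed friction coefficient; in practice you estimate it online

    def clamp_to_friction_cone(f, mu=MU):
        """Scale the tangential part of a ground-reaction force f = [fx, fy, fz]
        (z up) so it stays inside the friction cone |f_t| <= mu * f_n."""
        f_n = max(f[2], 0.0)               # normal component
        f_t = np.asarray(f[:2], dtype=float)
        limit = mu * f_n
        mag = np.linalg.norm(f_t)
        if mag > limit and mag > 0.0:
            f_t = f_t * (limit / mag)      # back off, like ABS releasing the brakes
        return np.array([f_t[0], f_t[1], f_n])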

I pointed out in the 1990s that legs with three joints allow manipulating the contact force vector and the position independently.[2] This is visible once you watch people climbing hills, and even clearer with horses, where the leg bones are closer to being of equal length.
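
To make the three-joint point concrete, here's a minimal planar sketch (made-up link lengths and pose, not the geometry from the paper). With three joints and a 2-D foot position, tau = J^T F gives the torques for any desired contact force, and one degree of freedom is left over:

    import numpy as np

    L_LINKS = (0.4, 0.4, 0.2)   # hypothetical link lengths (m)

    def foot_pos(q, l=L_LINKS):
        """Foot position of a planar 3-link leg hanging from the hip."""
        a = np.cumsum(q)                       # absolute link angles
        x = sum(li * np.sin(ai) for li, ai in zip(l, a))
        z = -sum(li * np.cos(ai) for li, ai in zip(l, a))
        return np.array([x, z])

    def jacobian(q, eps=1e-6):
        """Numerical 2x3 Jacobian of foot position w.r.t. joint angles."""
        J = np.zeros((2, 3))
        for i in range(3):
            dq = np.zeros(3); dq[i] = eps
            J[:, i] = (foot_pos(q + dq) - foot_pos(q - dq)) / (2 * eps)
        return J

    q = np.array([0.3, -0.6, 0.4])             # an arbitrary pose
    F = np.array([20.0, -300.0])               # desired contact force (N)
    tau = jacobian(q).T @ F                    # joint torques producing F

The left-over degree of freedom is the point: you can re-pose the leg without moving the foot, which changes J and hence which force directions are cheap to produce.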

Most legged robots don't make fast starts and stops or fast turns. Even the Boston Dynamics machines usually start by trotting in place and then shifting to forward. Motion with high accelerations is traction-limited.

The first step is like ABS, constraining the forces below the break-loose point. You need that because in the real world bad stuff is going to happen and recovery involves backing off the forces until traction is regained. The second step is considering forces when planning, so that movements get near the limits of traction but don't exceed them. This is where you can begin to do more aggressive movements.

The lesson notes are finally going in the right direction, looking at this as a two-point boundary value problem. Most previous work has focused on finding some expression that measures stability and maintaining that. If you want agility, you have to give up stability maintenance throughout the gait cycle. You need good landings. Everything else is secondary.

You have a set of constraints that apply at a landing, as a foot touches down - be within slip tolerance on forces, joints not too close to limits, impact not too high. And, importantly, the situation must be within the basket that allows stability recovery during the ground contact phase. Land, stabilize, launch, reposition for landing while in air, repeat.
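
As a sketch, the touchdown test might look like this (every threshold here is an assumption, and the hard part, checking that the state is inside the recoverable basket, is omitted):

    import numpy as np

    def landing_ok(f, q, q_min, q_max, impact,
                   mu=0.6, joint_margin=0.1, impact_max=2.0):
        """Feasibility at touchdown: slip, joint margins, impact size.
        f = contact force [fx, fy, fz], q = joint angles, impact = normalized impulse."""
        slip_ok   = np.linalg.norm(f[:2]) <= mu * f[2]     # inside friction cone
        joints_ok = np.all(q > q_min + joint_margin) and \
                    np.all(q < q_max - joint_margin)       # away from the stops
        impact_ok = impact <= impact_max
        return bool(slip_ok and joints_ok and impact_ok)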

Most work focuses on stabilization. That's only part of the problem. What to do in the air is classic rocket science trajectory planning. Launch control is mostly force-limited, and that's when you get to apply big forces and get big accelerations.
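
The ballistic part is simple enough to write down: once you leave the ground, the landing point is fixed, and all you can do in the air is move the legs to meet it (a sketch, ignoring leg geometry):

    import numpy as np

    G = 9.81

    def flight_time(vz, dh):
        """Time until the CoM falls to a touchdown height dh below launch."""
        return (vz + np.sqrt(vz**2 + 2 * G * dh)) / G

    def landing_target(x, vx, vz, dh):
        """Horizontal landing point, decided entirely at launch."""
        return x + vx * flight_time(vz, dh)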

How far ahead do you plan? One landing ahead for basic locomotion, two landings ahead for athletics. If you plan two landings ahead, the stability criterion for the first landing can be relaxed somewhat. You might fail to zero out rotation in yaw because that will be fixed at the next landing, for example.

This stuff is cool, but there's no market. It was fun to work on, though. In the end, I sold much of the technology to a game middleware company, so it worked out OK.

[1] https://www.youtube.com/watch?v=kc5n0iTw-NU
[2] http://www.animats.com/papers/articulated/articulated.html


I've been thinking about this stuff for decades but never saw anything like this. I think you're the first person to nail the three-joint thing. Super impressive, thanks.


As I remember from biology class, legged creatures like us have another major consideration besides stability: staying on the ground. If you try to walk too fast, your contact force in the middle of the stride goes to zero and you risk leaving the ground! To go faster, you either need to flatten your trajectory or run.
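
The back-of-the-envelope version: in the inverted-pendulum model of walking, the CoM arcs over the stance foot, so staying in contact requires v^2/L <= g (a Froude number below 1). With an assumed leg length of 0.9 m:

    from math import sqrt

    g, L = 9.81, 0.9          # gravity; assumed leg length (m)
    v_max = sqrt(g * L)       # contact force hits zero when v**2 / L == g
    print(v_max)              # ~2.97 m/s: beyond that you must run (or crouch)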

Also:

> How far ahead do you plan? One landing ahead for basic locomotion, two landings ahead for athletics.

More than two if you want to make a mogul-skiing robot :)


Another version of this course (I think it was an OCW version on YouTube) had a video of a fish holding position in a stream. There was an obstacle in the stream, and the fish swam up into its wake to save energy. Neat. Then they showed a video of a dead fish being towed through a simulated stream by a line. It exhibited "swimming" behavior that could be dismissed as just flapping in the current, right up until the dead fish swam up into the lee of the nearby obstacle for shelter.

It's amazing how much adaptation and complex behavior can be embedded into an underactuated system. A fish's dead body "knows" how to save energy when swimming against a current!


I'm curious about the results of that experiment with a towed sphere. My gut says it too would find the lee, simply because that's a locally stable, energetically favorable configuration of the system.


I think this might be the video you mention: https://vimeo.com/44887922 - interesting stuff.


I can't find a link right now, but a dead whale will move forward at a knot or two just due to the action of its flukes in the waves. Some folks were working on a means to propel boats by artificial flukes.


Yep, that's an amazing video. The course is being taught this semester and is being livestreamed and recorded on YouTube. I think the 5th lecture is tomorrow. Videos from previous semesters are also on YouTube.


This looks very good. It's actually a stealth optimization textbook ;)

Relevant recent work from Google AI: building a dynamics model of the world from pixel observations only

https://ai.googleblog.com/2019/02/introducing-planet-deep-pl...


I took the course in 2015, when it was Matlab-only and Russ was working on a C++ version; it was one of the best robotics courses I've taken (together with Udacity's Self-Driving Car ND). It also had a simple version of Boston Dynamics' Atlas robot, which was massively improved recently. Does anyone know if the course contains new content related to the recent advances at BD? Is the optimization code now compatible with ROS? Thanks!


All the new optimization code has C++ and/or python (2.7) bindings.

I don't think TRI has made ROS wrappers, but it could conceivably be done without much pain.

I'm not sure there is much in there about new stuff from BD. There is some deep RL stuff now, though; I'm not really sure how much the course has changed since 2015.


Was the Udacity Self-driving course good?



Looks like it's archived. There are video lectures available at MIT's OCW: https://ocw.mit.edu/courses/electrical-engineering-and-compu...


This is somewhat relevant if you are interested in underactuated robots in production. One of our customers, Tokyo Kitty [1], a nightclub in Cincinnati, uses two underactuated robots to deliver drinks to private rooms without spilling. We use convex optimization (chapter 10) to keep the drinks from spilling, and the bartenders and patrons love it [2].

[1] https://www.facebook.com/thattokyobar/ [2] https://www.youtube.com/watch?v=n_PeLAlVpzQ
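
For flavor, here's a toy version of the kind of convex program chapter 10 covers: a minimum-wiggle 1-D trajectory with an acceleration cap. All numbers are illustrative; this is a sketch, not the actual system.

    import cvxpy as cp

    N, dt = 100, 0.05            # 5 s horizon
    d, a_max = 10.0, 0.8         # travel distance (m), spill limit (m/s^2)

    x = cp.Variable(N)                        # positions along the path
    acc = cp.diff(x, k=2) / dt**2             # finite-difference acceleration
    prob = cp.Problem(
        cp.Minimize(cp.sum_squares(acc)),     # smooth = less sloshing
        [x[0] == 0, x[N-1] == d,
         x[1] == x[0], x[N-1] == x[N-2],      # start and end at rest
         cp.abs(acc) <= a_max])
    prob.solve()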


This makes me wonder: imagine you have a quadcopter-type drone or something, where some of the propellers are mounted on inverted pendulums, or double-inverted pendulums, or something equally wildly unstable.

Could one then learn to control it with deep learning, and maybe gain some benefits from doing so? Like gaining the rapid maneuverability that birds have.


You don't need an inverted pendulum to make an aircraft unstable. See the Grumman X-29: the forward-swept wing made it aerodynamically unstable, and piloting it was only made possible by feedback control software.

https://en.wikipedia.org/wiki/Grumman_X-29


Relating to the current deep RL hotness... If I'm training a network to control a walker/swimmer, will simply adding a penalty on energy expenditure, on top of whatever the goal may be (e.g. maximise forwards distance), lead to underactuation?
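
For instance, the usual way to write that (the weight and the work term are just one plausible choice):

    import numpy as np

    W_ENERGY = 1e-3   # penalty weight, a hyperparameter you'd have to tune

    def reward(dx, tau, qdot, dt):
        """Forward progress minus mechanical work. The energy term rewards
        exploiting passive dynamics rather than fighting them."""
        work = np.sum(np.abs(tau * qdot)) * dt    # actuator mechanical work
        return dx - W_ENERGY * work

Strictly speaking it won't make the plant underactuated (that's a property of the hardware), but it does push the policy toward the passive, low-torque gaits this course is about.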


This looks really really interesting. Thank you for posting this!

Right now I am trying to build a quadruped robot and am still a bit scared of the mathematics, but it's awesome and I'll fight my way through it.


I'm wondering if/when all the control theory will be replaced by some simple generic black-box deep-learning scheme.


It really depends on your application. If your system is so complex that you give up on understanding the dynamics, then a "generic" ML algorithm makes sense. However, a big problem with applying ML to robotics is that you need a ton of data, and producing a representative dataset for your system can be hard. Traditional control methods don't need nearly as much data, including for the system-identification process.


I think that depends on how good our physical simulations become. If we can model our robot to a sufficient degree of physical precision, and our world simulation is good enough, then with the right objective functions it seems like one could probably build a robust control system without really understanding any of the control theory at all. Granted, this might take the equivalent of 50 years' worth of learning, but it seems possible. This is how animals have learned to move over time, right?


Why is it a useful goal to not understand control theory?

Animals like us have plenty of control theory built into our firmware, even if we aren't going through the calculations on the cognitive software level.

A control system, tuned and running, is a few lines of code in a fast loop, or a feedback amplifier circuit. Just like a neural net, it takes a while to tune the coefficients, but then it becomes muscle memory. And since much of control theory is fully general (like Bode's sensitivity integral), those principles must hold for biological systems as well.
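
For example, a complete PD controller really is just a few lines in a fast loop (the I/O functions here are stand-ins):

    import time

    def read_sensor():        # stand-in for real sensor I/O
        return 0.0

    def write_actuator(u):    # stand-in for real actuator I/O
        pass

    setpoint, kp, kd = 1.0, 40.0, 4.0   # the gains are what tuning produces
    prev_err, dt = 0.0, 0.001
    while True:
        err = setpoint - read_sensor()
        u = kp * err + kd * (err - prev_err) / dt   # the whole controller
        prev_err = err
        write_actuator(u)
        time.sleep(dt)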

A baby doesn't think about Laplace transforms when learning to walk, but neither do they think about backpropagation.

If you're keen to replace control theory with something else, I recommend solidly understanding it first.


That presumes the existence of an arbitrarily precise simulator that you can run efficiently enough to generate the large datasets deep learning requires. That's a tall ask for not a lot of gain. We've understood rigid-body dynamics for hundreds of years. Finding the equations of motion for a system is just running an algorithm at this point. Why not use what we know?
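
For instance, sympy will grind out a pendulum's equation of motion mechanically from the Lagrangian; the same recipe scales to whole linkages:

    import sympy as sp

    t = sp.symbols('t')
    m, g, l = sp.symbols('m g l', positive=True)
    th = sp.Function('theta')(t)

    T = m * (l * th.diff(t))**2 / 2     # kinetic energy
    V = -m * g * l * sp.cos(th)         # potential energy (theta = 0 hangs down)
    L = T - V
    # Euler-Lagrange: d/dt(dL/d(theta')) - dL/d(theta) = 0
    eom = sp.diff(L.diff(th.diff(t)), t) - L.diff(th)
    print(sp.simplify(eom))             # -> l*m*(l*theta'' + g*sin(theta))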

That said, deep learning is great for when we can't model things well. For example, there are a number of mathematical issues with how we model object grasping even before visual data is introduced. Since the setting is so difficult to work with analytically, there have been a number of exciting data-driven solutions proposed in this space.


I'm not convinced it will completely replace current techniques. If it does start significantly outperforming old-school control systems, we'll still need a lot of progress in explaining what the controller learned and in proving that it will work within the assumptions the engineers make.

Personally I think the future will be merging parts of control theory/dynamics with machine learning.


One of the few MIT classes I had the pleasure to attend in person. So good. Russ Tedrake is amazing.


Thanks for posting this! It looks very interesting. I wish I had the time to go through all those books!



