Robotics: A new approach

In this podcast, Jon Udell invites Tandy Trower and Henrik Nielsen to explain why robotics is taking off, and how their new approach to the technology will generalize to a broad range of scenarios.

JU: So you were just in Japan. What did you see and do?

TT: We were at IREX, the International Robot Exhibition, in Tokyo. All forms of robots were there, heavily dominated by industrial robots. That was the big-ticket item. But we were in a smaller section that focused on this new market, service robots, which are moving into new areas. Industrial robots have done the dangerous, dull, and dirty jobs. Now there's a new market coming, where robots move outside the factories and into the homes.

It's a dramatic change. Industrial robots are very expensive, they require special operators, they perform repetitive functions, and they're dangerous for humans to interact with. But that market is starting to flatten out. So a lot of the vendors in that area, including one of our best supporting partners, KUKA, one of the top industrial robotic arm manufacturers in the world, are looking for new markets, and are very anxious to engage with us in this new service, or personal, robotics market.

Bill Gates reflected this in his January article in Scientific American, where he likened the personal and service robotics world to the PC world in the 1970s. The personal computer market, in its infancy, looked kind of weird. You had the Commodore PET, which had a strange little keyboard and saved programs to cassette, and you had the Apple II. The transition we see in this robotics market now is very similar to what we saw coming out of that era, and even the industrial vendors are starting to look to this new market as a place to go.

JU: In this case, there's also a particular demographic driver: aging populations create the need for these personal assistants in the home. And in Japan, in particular, there's a special interest in companionable robots.

TT: Yes. In Japan and in many Asian countries, there's much more interest in the social aspect of robots. It's partly cultural; they grew up with Astro Boy and the idea that robots were friendly companions. So you're right, one of the biggest motivating factors is this aging of the population. I face this myself. My father-in-law is 84, he lives on his own, and he needs help from his family to be able to live independently. It certainly would be helpful if we had more technology that would allow us to stay in touch with him, remind him to take his medications, and connect him better with his health care providers; these are all tasks that robots could perform.

It's also the case that in the Asian countries, because of family and cultural traditions, it's more important to take care of your elders.

JU: So if the analogy is to the early PC era, then you're providing what is, in a sense, DOS.

TT: Exactly.

HN: I actually think robotics can grow far beyond where the PC started out. Because the PC, until very recently, had a fairly uniform form factor. You could rely on a screen and a keyboard and a mouse, and that dictated what the user interface could be. As soon as you start having what I call more context-aware applications, things that know where you are, what you are doing, what the surroundings are doing -- and not just the local environment -- the computation and the applications become completely different. They are inherently part of the environment. They have to become much more aware, and the ways you interact with them have to become more aware. You might want to use speech for some things, or touch, or just you being there in person so it can track you using heat, or motion.

Robotics hardware has come a long way in terms of price, functionality, and flexibility. But service robots have yet to reach a level of usefulness that defines how they might be able to take off. There are some obvious entertainment opportunities, and remote presence opportunities, but beyond that we're only in the beginning phase of figuring out what these applications look like.

And we actually think it applies not only to robotics but also to how you might start thinking about interaction with information systems in general.

TT: And that's been one of the challenges. How do you create applications like this, where you have a lot of things going on? PCs have had it easy. They just sit there, they take the keyboard input, the mouse input, but when they have to go and sense things in our environment, and actually operate in our environment, it takes a much more complex model. How do you deal with all these different sensory inputs that are coming in at the same time? How do you deal with controlling the activations of many different things at the same time? This, we believe, is not just a model for robotics, but is a model for software of the future.
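
[A minimal sketch, in Python, of the coordination problem Trower describes: several sensors post readings concurrently, and a single coordinator handles whichever message arrives next instead of polling each input in turn. The CCR itself is a .NET library, so the queue, sensor names, and timings here are purely illustrative.]

```python
import asyncio
import random


async def sensor(name: str, queue: asyncio.Queue) -> None:
    """Simulated sensor: posts a reading whenever one becomes available."""
    for _ in range(3):
        await asyncio.sleep(random.uniform(0.1, 0.5))
        await queue.put((name, random.random()))


async def coordinator(queue: asyncio.Queue) -> None:
    """Handles whichever sensory message arrives next, regardless of source."""
    while True:
        name, value = await queue.get()
        print(f"{name}: {value:.2f}")
        queue.task_done()


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    readers = [sensor(n, queue) for n in ("bumper", "sonar", "camera")]
    worker = asyncio.create_task(coordinator(queue))
    await asyncio.gather(*readers)   # all sensors run concurrently
    await queue.join()               # drain any remaining messages
    worker.cancel()


asyncio.run(main())
```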

JU: Absolutely, because there no longer is the illusion of god-like control of the machine. In the early PC era, pre-network, you really did make the rules and you really did have that control. But in the network era, and now as the network extends into the physical world, you're an actor on a stage with a number of other actors running around with their own agendas. It becomes a negotiation, a game of interaction. So yes, it absolutely mandates a different model, and that model extends equally to loosely-coupled services that communicate by sending messages over the network.

TT: Yes, a model that deals with the inherent complexity of concurrency, and the coordination or orchestration of what's going on. This was the whole reason for choosing the CCR and DSS pieces for robotics. This was actually an advanced programming model designed not for robotics per se, but as a general purpose programming model. We put it into the robotics SDK as a way to test this out, but now we're seeing that people are lifting the hood on the engine inside this SDK and finding other uses for it. We have people who are using it to build trading systems, who are doing large data-set scientific modeling, the folks at MySpace are using it to manage their server farms.

JU: So let's review, for people who may not have followed the story. The CCR, which is the Concurrency and Coordination Runtime, and DSS, which stands for Decentralized Software Services, are projects that were in the works, and had a relationship to one another, prior to their incorporation into the robotics kit. Is that true?

TT: Yes, that's right.

HN: Yes, absolutely. DSS is built on top of CCR. By way of background, the challenge was to answer the question: What is the programming and application model when it's no longer true that you have a single process running on a single CPU on a single machine? We think that is already no longer true. When you look down you see many cores under you that operate concurrently...

JU: And many nodes on the network...

HN: That's right, and when you look up you see many nodes on the network, and you want your application to function in that environment. In fact you need to define what an application is. If you are building a composition of services, you need to deal with the concurrency, but also with the messages flowing around in the system. It becomes much more autonomous computing. And this is why it fits nicely with robotics. It's about sensing, getting a huge amount of input from the environment in a very asynchronous and loosely-coupled way.

Everything becomes an autonomous unit. And each can be participating in many different applications at the same time, without even knowing it...

JU: Or not participating, because some of them went AWOL, but that's OK because you have the redundancy to handle that.

HN: Exactly. The web has been trying to push toward this model for a long time, and now the appearance of many-core CPUs has started to push toward it. So the whole idea of an application, which hasn't changed for 30 years, now has to change. And that's the question we tried to answer when we started out with CCR and DSS. They work nicely together. One provides a programming model, the other an application model, and together they fit nicely around messaging, as you said. We think it leads you down a path of building very robust, scalable, and flexible applications.

JU: So in this context how do you define an application?

HN: It is a composition of a set of loosely-coupled services that function individually. Kind of like in a mashup environment. You have a variety of inputs, and a different set of outputs that you want to be able to affect; it is the orchestration of messages going in and out. It's the collection -- it is effectively, when you look at it, a graph of services that you start thinking of as your application.

JU: And a ruleset.

HN: And a ruleset, yes, exactly. So it's about having a set of services hooked together, and a ruleset for how to orchestrate messages over that set. And it's about partial failure, and redundancy, because you don't have control over all of these services. Some run locally, some run across the network, some run in the cloud. You want to be able to leverage them all, and hook new things in.
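
[A hedged sketch, again in Python, of the "graph of services plus a ruleset" idea: each service is just a message handler, the ruleset routes messages by type, and a service that fails is dropped from the graph instead of bringing the whole application down. The service names and the dispatch function are made up for illustration; none of this is the DSS API.]

```python
from typing import Callable, Dict, List


def light_service(msg: dict) -> None:
    print("light:", msg["level"])


def flaky_logger(msg: dict) -> None:
    raise RuntimeError("logger went AWOL")


# The "ruleset": which services care about which message types.
routes: Dict[str, List[Callable[[dict], None]]] = {
    "motion-detected": [light_service, flaky_logger],
}


def dispatch(msg_type: str, msg: dict) -> None:
    """Deliver a message to every subscribed service; drop any that fail."""
    for service in list(routes.get(msg_type, [])):
        try:
            service(msg)
        except Exception as err:
            print(f"dropping {service.__name__}: {err}")
            routes[msg_type].remove(service)


dispatch("motion-detected", {"level": 80})   # the logger fails and is removed
dispatch("motion-detected", {"level": 60})   # the rest keeps running
```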

Here's a very practical problem from a robotics point of view. You might have had your robot in the home for a couple of years. It has learned where you go, it knows your calendar, it knows a bunch of things about you. Now you might get another robot. Rather than wait a couple of years for it to get up to speed on what you think matters to it, you might want to be able to hook it into the same application context. It's a web of information that you want the new guy to be able to hook into. It's all about the connectedness of applications.

TT: And of course this is the way that living systems operate, whether we're talking about the cellular structure of our bodies, or our neural systems, or even full ecosystems. It's all based on the fact that the nodes themselves have a certain importance, but it's the connectivity through the nodes -- the way they communicate with one another -- that provides the inherent power. Our own neural system is a massive network. The individual nodes provide insignificant data, yet they pass these messages along, and through the orchestration of these connections we get the ability to see, or to hear, or to be able to function in our world.

JU: Biomimicry, that's the ticket. Nature's already done all this R&D; why don't we piggyback on what it's already figured out?

TT: Exactly. When I first looked at applying the technology that Henrik was working on, that was one of the areas I looked at. Now it turns out that biologically inspired techniques are still in a crude stage, so my second attempt was to apply this to robotics because it's a more practical technology that may eventually evolve toward more biologically inspired technologies.

Again, this whole model was never designed to be exclusively for robotics. It was designed to be a programming model for the future that would enable a new generation of applications. We've been trying to create them, today, as if they were all on a single neuron. What this technology addresses is the trends that are coming: Intel and AMD are both now shipping 4-core systems, with 8-core systems coming next year. How are we going to manage all this power? And the Internet shows us that we've already moved past the idea of a single application running on a single core on a single machine; that's just obsolete. How do you reduce the overall complexity when your application runs in five different places at the same time? Is it even a solvable problem? Well, it turns out that the CCR and DSS have solved that problem; they do provide that programming model. And that's not just me saying that: we have customers who are embracing them because they are helping solve these complex problems.

JU: One of the challenges, as we see in the web services space, is that when the application becomes a set of actors on the stage, with a lot of other actors, how do I know that I'm meeting my requirements, how do I test? I think these are all extensions of things we know how to do, but still, it changes the game.

HN: Oh, it changes dramatically. We hold the basic assumption that bad things happen, and things fail for unknown reasons. In the case of robotics it works beautifully, because the robot falls off the cliff, and it's gone. But you can't just stop. It would be smart to say, well, don't do what that thing did. Try to avoid falling off the cliff. That's where this magic term loose coupling comes in. It's often seen as a good thing to do, an important architectural principle, but in fact how to do it turns out to be difficult. How can you write an application that can fail partially without the rest of it going down?

JU: How do you evaluate the performance of an application? We're used to a model where the testable performance is discrete. It did or didn't do this function. But in this world, it tends toward the probabilistic.

HN: Oh, absolutely.

JU: It's not whether it vacuumed the room or not, but how well did it do that? And over a series of trials, how did that average out? It's fuzzier.

HN: Yes. Of course people already know that on the web, when they use search engines. They know they'll get a decent response, but an exact snapshot is just not possible.

JU: It won't be authoritative or complete.

HN: It's a snapshot of a moment in time. I think a lot of the applications we deal with will have to think about that, and be organized around that. And that boils down to, well, I have information, how do I orchestrate it, how do I put weights on the different pieces of information? And how do I spread it around so I can build something that doesn't freeze?

TT: Related to that, what do you do when one of your program components does freeze up, or crashes? In this world, it's fine. If you lose one of the services in the set, because its state is separated out, you can drop the service or restart it or replace it...

JU: And reattach the state to another instance of the service.

TT: Exactly. What do you do when you find out that code has failed? Do you reboot the system? Do you remove the whole application? Or do you just surgically go in there and remove or fix one piece? I mean, we lose cells all the time, and they're replaced, and yet we don't have to be rebooted every time a new cell comes in. It just fits into the network, finds its place, replaces the old one, and we continue on. Software needs that kind of resiliency. You need to do that kind of surgical maintenance.
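
[A small illustration, with hypothetical names rather than any DSS API, of the state separation Trower and Udell are describing: the state lives outside the service instance, so a crashed instance can be replaced and the replacement simply reattaches to the state that survived.]

```python
class StateStore:
    """Holds state independently of any service instance."""
    def __init__(self) -> None:
        self.data = {"visits": 0}


class CounterService:
    def __init__(self, state: StateStore) -> None:
        self.state = state            # attach to externally held state

    def handle(self) -> int:
        self.state.data["visits"] += 1
        return self.state.data["visits"]


store = StateStore()
first = CounterService(store)
print(first.handle())                 # 1
del first                             # the instance "crashes"; state survives

replacement = CounterService(store)   # a new instance reattaches the same state
print(replacement.handle())           # 2 -- no reboot, no lost history
```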

Back to robotics, the classical model was this. You read your sensors, you decide what to do about that sensory input, and then you drive your actuators. The problem with that was twofold. First, it's very brittle: you get one wrong instruction, you bring the whole application down. Second, while you're processing your sensory input or actuator outputs, you're not reading your sensors. So at the time you should be noticing that you're running into the wall, you're telling the wheels to move forward.
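
[A hedged sketch of the alternative to that blocking read-decide-act loop: the long-running drive command runs as a cancellable task, so the bumper event can interrupt it the moment the robot hits the wall. The names and timings are illustrative only, not the robotics SDK's API.]

```python
import asyncio


async def drive_forward() -> None:
    print("driving forward...")
    await asyncio.sleep(5)            # long-running actuation
    print("finished the move")        # never reached if we hit something


async def watch_bumper(drive: asyncio.Task) -> None:
    await asyncio.sleep(1)            # simulated bumper press after 1 second
    print("bumper pressed -- stop!")
    drive.cancel()                    # react immediately, mid-actuation


async def main() -> None:
    drive = asyncio.create_task(drive_forward())
    await watch_bumper(drive)
    try:
        await drive
    except asyncio.CancelledError:
        print("drive cancelled before the robot keeps pushing into the wall")


asyncio.run(main())
```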

The fact that we talk about this as orchestration is a very apt metaphor. What happens in an orchestra, what does a conductor do? He has a lot of people playing at the same time, and his task is to make sure that it all blends together and sounds beautiful. This is the key, this is the programmer's challenge in the future. How are they going to keep an application flowing that way? It needs a simple model, but one that is scalable from the lowest level of abstraction to the highest level. That's what we believe we have here in the CCR/DSS combination.

JU: I was going to ask how you begin to instill this way of doing things into a new generation of programmers, but I think I got the answer in a recent conversation with Matt MacLaurin, in the Creative Systems Group. He's developing a thing called Boku, which is both a game and a game development system, built on these same principles. A kid puts an object into the world, then declares what are the goals or the reactions that it can have. Then you start to get emergent things happening, and you are learning to operate in a world which is much like the one you're describing. You're not controlling this world. You're injecting things into it that participate and interact, and you need to shape those interactions.

HN: Robotics offers a lot of excitement in terms of education. It ties together a lot of disciplines: science, math, applied technologies like vision and audio, and also computer science. So it's a powerful vehicle for getting attention from students.

So we had this problem, people said, well, if you want to use it for computer science, then computer science 101 has to be a for loop, or a function call...

JU: Sorting.

HN: Sorting, exactly. And we said, well, we think it might be interesting to expose this model of distribution and concurrency directly. We don't think the students will freeze up; they are already familiar with asynchrony from IM and email.

JU: This is what Matt is doing, actually. It's beautiful. So, to make this concrete, let's come back to home automation. In the case of HealthVault, currently, any of the home health devices that connect to it will be satellites of the PC. But you're imagining a model where the home is more of a network of...well, in a sense, the entire home is a complex robot.

HN: My view is that the P in the PC will go away. Because it's about computers in the network, and the connectedness of them, and the fact that you want them to be orchestrated, but you don't really go and sit in front of any of them. When the robot's around you might do some stuff with it, then you go down into your basement and do something else there, but you want the information to be continuous.

JU: It's not like your cellphone, a thing that's permanently attached to you. When it's in your environment, you can interact with it, but it doesn't have to be there, and you can interact with lots of other things.

HN: Yes. Of course the cellphone has had a clearly subordinate role to the PC, you dock it and sync it, but these devices are becoming full-fledged network devices. So again you have to have an application and programming model that allows you to build applications that can float around these devices as they come and go.

JU: And the support software is light enough for these devices?

HN: You mean in terms of CCR and DSS? Yes. We run today on Windows CE, and I think we can say for the next release we will run -- we are already running now -- on the .NET Micro Framework, which doesn't have any Windows underneath; it's really a very lightweight managed environment running straight on top of the hardware. We can run on very small things. Light switches.

JU: Really?

HN: Yes. We run a limited version of what we have, but it's the same bits, fundamentally. We had a researcher in MSR implement a very lightweight version of our protocol, on a small device, and have it show up in our environment, without having to do anything else. It could be a light switch, a thermostat, a security alarm, any number of devices that don't do much computation but provide sensory input.
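
[An illustrative sketch of the idea that even a tiny device can show up in the environment as a service that does little more than expose its current state. This is plain HTTP with made-up names, not the actual lightweight protocol Nielsen mentions.]

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SWITCH_STATE = {"device": "light-switch", "on": False}


class SwitchHandler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        # Serve the device's state so other services can read it.
        body = json.dumps(SWITCH_STATE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Any other service on the network can now discover this device's state.
    HTTPServer(("", 8080), SwitchHandler).serve_forever()
```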

JU: So if somebody wants to get their feet wet with this, and do a little project that gives them a taste of what it's like, what would you recommend? I mean, they can get the kit, but what's a good example to try?

HN: There are a lot of people who'd be excited about going to the store and buying a robot, and that's great. But assuming you couldn't, what you would start with is the simulation environment. It allows you, without having touched a robot at all, to play around with a set of robots that we provide simulation models for. You can very easily, in 5 minutes, download the SDK and then get going with a robot that exists only in the visual environment. However, it's more than that: it is a fully physics-aware environment.

JU: Sensors, actuators...

HN: Yes, so when you bump into other things they will move, and if you made them very heavy, your robot will flip or crash. So you have a very easy way to get started.

JU: Thanks Tandy. And thanks, Henrik.
