IAN LANG ELECTRONICS
It used to be when I was a child that first steps into electronics were taken by building simple radio sets and then moving on to more complicated radio sets. Then you'd discover the electric motor and motorise your toys, then you'd discover other bits and completely transform your Action Man tanks/jeeps/armoured cars or whatever into something loud and noisy that would annoy your parents no end. When you got on to train sets you'd have a bonanza electrifying everything until something blew up, and your Meccano projects would be frankly a danger to everybody who came near them. Then you'd get better at handling electronics and you'd make stuff that would amaze your mother and father and make them wonder if they'd created some sort of mini James Bond villain who would one day attempt to dominate the Earth whilst stroking a white cat.
These days kids go straight to robots. Such is progress, as you are far more likely to dominate the Earth and all that is in it with an army of mechanoids than you are with your Hornby 00 model of the Flying Scotsman whether or not you've made it radio-controlled and given it steam-engine sounds. Electronics may change. Kids don't.
The point of that unfocussed rambling above, in case you were wondering, is that nowadays there is a great interest in making mechanoids that do something useful and/or entertaining. Sadly the current state of robotics (or cybernetics to give it its posh name) is such that nothing like the Terminator robot exists, and nor is it likely to before you and I are merely somebody's distant ancestors (unless the Ministry of Defence knows something we don't), simply because you and I and even the annoying fly buzzing about on the lightshade as I write this have far more computing power available to us than the machine you are reading this on. Really. I don't care if you have spent two thousand-jillion pounds on the machine that you are looking at and it occupies the space the size of Amazon's biggest warehouse. That lump of organic matter that sits in your head can do in milliseconds what your whizz-bang machine would take years to process, if it doesn't topple over and give you an error message first. The computer however can do maths a lot faster than you can, even if you are the Professor of Doing Really Hard Maths Very Quickly at the University of Cleverton. God alone knows what would have happened if Maxwell, Einstein et al had got their hands on my laptop and started doing some spreadsheets.
So, after that extra bit of philosophical meandering on my part, let's get back on topic. What's needed to make a robot? Here's a diagram of a basic robot topology:
(Diagram: basic robot topology, with the Human Interface Device at the top feeding the control system)
There are seven components here and not all seven need be present on a single robot, but one of them is essential and common to everything - the control system. Just like you would be an empty shell without a brain, a robot will not do anything without a control system. It can be as simple as a couple of transistors and relays, but usually there is a microcontroller involved somewhere; PIC, Intelligent Brick, Lego NXT controller, Arduino, Raspberry Pi - take your pick, there's something like it in there somewhere. There's also usually a human interface device (although there need not be), even if it's just something as simple as an on/off switch.
Now, cybernetics is a branch of physical computing which has achieved an unholy mating with electro-mechanics. So the next two down the diagram from the HID are stuff the robot does for itself, or gets other robots to do and report back. The latter is variously known as swarm, hive, or multiple-device cybernetics and is very much an infant and experimental science (although some warehouses do run a robot workforce, the control is done by a central computer and no robot bosses any other about or directly influences a co-robot's actions). The former is the co-ordination input, and is usually achieved by radio transmissions of data to and from the controller based on what the controller knows all the other robots to be currently doing. It works like this: the controller will send a message to robot A and robot A will send an acknowledgement. The acknowledgement will contain a message to say the instruction has been received, and in a well-designed system the instruction received will be relayed back to the controller, which will compare what the robot thinks it's got to do with what the controller told it to do. If the controller finds a discrepancy it will send the robot equivalent of "no, you dolt" and re-send the instruction. The cycle will begin again until the robot gets it right, and if it doesn't get it right after a number of tries, or doesn't acknowledge at all, the controller will whistle up some humans to go and sort the errant unit out and find another robot to do the job in hand. That, in a nutshell, is what co-ordination input is all about, and it brings us on to the third type of input, which is autonomous.
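The acknowledge-and-check cycle just described can be sketched in a few lines of code. This is purely an illustration of the idea, not any real warehouse system's software; all the names (`Robot`, `Coordinator`, `send_instruction`) and the retry limit are invented for the example.

```python
# A minimal sketch of the co-ordination handshake: the robot echoes
# the instruction back, and the controller re-sends on a mismatch.
# All names and figures here are illustrative, not from a real system.

MAX_RETRIES = 3

class Robot:
    def __init__(self, name, flaky=False):
        self.name = name
        self.flaky = flaky        # a flaky robot garbles what it heard
        self.current_task = None

    def receive(self, instruction):
        # In a well-designed system the acknowledgement relays the
        # instruction back so the controller can check it was understood.
        heard = instruction + "?" if self.flaky else instruction
        self.current_task = heard
        return heard              # the acknowledgement

class Coordinator:
    def send_instruction(self, robot, instruction):
        for attempt in range(MAX_RETRIES):
            ack = robot.receive(instruction)
            if ack == instruction:
                return True       # robot has it right, job on
            # "No, you dolt" - re-send and try again
        # Too many failures: whistle up the humans instead
        return False

coord = Coordinator()
print(coord.send_instruction(Robot("A"), "fetch bin 7"))        # True
print(coord.send_instruction(Robot("B", flaky=True), "fetch"))  # False
```

A real implementation would carry this over radio with checksums and timeouts, but the compare-and-re-send loop is the heart of it.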
Autonomy means being empowered to do it oneself. In the business world many employees are empowered to act autonomously, and what this means is "I'm going to blame you when this goes wrong". In the robot world autonomy is granted to stop accidents happening. For instance, in the warehouse where robots fetch and carry and do all the donkey work, you still need people to do the packing and make sure the right goods are going to the right place. There's no robot postman for the very simple reason that robots don't know the difference between Glasgow and Milton Keynes, and even if they did they would not know which lorry to put a parcel on without somebody telling them. Even if they know which lorry to put it on, they don't know that some daft driver going to Newcastle has backed his lorry onto bay 19 rather than 18, and 19 is where a lorry bound for Swansea should go. So the robots are going to load all the Welsh stuff onto a lorry that the driver is going to point at Newcastle unless somebody stops them.
So, you've got machines and people working together. Despite the fact that the people are excluded from the robot area whilst robots are working, despite the fact that big signs in foot-high red letters say "DANGER - ROBOTS WORKING - KEEP OUT" and despite the fact that he or she knows that anything made of meat is going to fare badly when it meets anything made of metal and moving rapidly, there will always be at least one thickie who ventures into the robots-only area when the robots are working. For an initial safeguard you can put a beam-break device on the human entrance points that will bring all robots to a standstill until somebody resets it. There's no guarantee that the thickie will actually use the human entrance points though, and may well enter through a robot entrance/exit or even climb over a fence. Equally you can't be sure that somebody won't reset the beam-break signal whilst the thickie is still in there. Now you and I might say that anybody that thick should be run over by a robot for the good of humanity in general, and preferably before they have had a chance to breed, however the courts tend to take a dim view of that sort of argument and in the ensuing compensation case will find in favour of the thickie or, if the thickie has gone to meet his or her maker as a result of getting squashed by a robot, the thickie's family. This is where the autonomy of the individual robot comes in.
It's in the nature of capitalism that it does not care about its workforce, only the wealth they generate, and if you are of the Socialist Worker persuasion you can scream and shout for as long as you like and it won't change. So certain laws are in place to make sure workers are not too badly treated, and here in the UK it's the Health and Safety at Work Act 1974 (HASAWA) and subsequent amendments. Generally speaking HASAWA is a pain in the neck, but what it does ensure is that our thickie in the example above is not going to meet his or her untimely demise at the hands of a warehouse robot, because a robot that can't stop itself when it meets an unexpected human would not be allowed to be installed in an environment where the possibility exists that it may do so. So, what happens is that the robot itself is fitted with sensors to let it detect the presence of our thickie. This could be anything - usually it will be infrared - but what happens is as soon as the sensor kicks out a voltage, the signal is picked up by the unit's controller which stops the motors dead. The unit then sends a message to the co-ordinator, which stops all the other robots dead and whistles up another human to go in, fish out the thickie and tear him or her off a strip for being in the robot area. Only when the co-ordinator is told by a human that the robot area is clear (i.e. the human pressing the right button) will robot operations resume. In this way each of the robots has the autonomy to stop all the others working if something is amiss in the robot area, but no robot has the autonomy to make decisions for all the others without the direct intervention of the co-ordinator. This is most unlike human workers, who can actually be doing jobs that management don't even know exist in the course of a working day, such as an impromptu repair to the racking or helping another worker lift a heavy weight.
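The stop-everything-until-a-human-says-otherwise behaviour can be sketched like this. Again, this is a toy illustration of the logic, not a real safety system; `WarehouseRobot`, `sensor_triggered` and `area_clear` are names made up for the example, and real safety interlocks are done in hardware, not a few lines of Python.

```python
# A sketch of the autonomous safety stop: one robot's sensor fires,
# it halts its own motors and tells the co-ordinator, which halts
# everyone else until a human presses the all-clear button.

class Coordinator:
    def __init__(self):
        self.robots = []
        self.halted = False

    def emergency_stop(self, reported_by):
        # One robot's report stops the whole fleet dead.
        self.halted = True
        for r in self.robots:
            r.running = False

    def area_clear(self):
        # Only a human pressing the right button restarts operations.
        self.halted = False
        for r in self.robots:
            r.running = True

class WarehouseRobot:
    def __init__(self, name, coordinator):
        self.name = name
        self.running = True
        self.coordinator = coordinator
        coordinator.robots.append(self)

    def sensor_triggered(self):
        # The IR sensor has kicked out a voltage: stop dead and report.
        self.running = False
        self.coordinator.emergency_stop(reported_by=self)

coord = Coordinator()
a, b = WarehouseRobot("A", coord), WarehouseRobot("B", coord)
a.sensor_triggered()     # A has spotted the thickie
print(b.running)         # False - B stopped too
coord.area_clear()       # human presses the button
print(b.running)         # True - back to work
```

Note that the robot's autonomy extends only to *stopping*; restarting always goes through the co-ordinator and a human, exactly as described above.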
Of course these sensors need careful planning and calibration. Many things can go wrong. Most things often do. The more complex a robot system is, the more things there are to break.
Alright then, mobility devices. You can get flying robots; they're known as UAVs and they are a particular favourite of the US Air Force, who use them to attack targets in Afghanistan, reportedly controlling them from as far afield as Lincolnshire, and that is one very fast communications system. These robots have no autonomy whatsoever, they do as their pilots tell them, and as such they aren't considered to be robots by cyberneticists, but remotely-operated vehicles (ROVs). In fact (once again unless the MoD knows something we don't) there is no flying autonomous robot, for the simple reason that you don't want something to break whilst it's 30,000 feet up and the robot to come crashing down on somebody's head. The three most common mobility devices are wheels, tracks and, less commonly, legs. We all know about wheels: they're round and they turn, and due to friction on the ground surface they push or pull along whatever it is they're attached to, turning rotary motion into linear motion. There are many ways of steering a wheeled robot. A common one is to have only two wheels and a ball castor to support the chassis on the ground.
On the left you can see a picture of the Dagu Magician Chassis which Robotbits.co.uk sell for £15.95 and which is very good for home-made robots being a reasonably well made plastic chassis for the price. It isn't flimsy and you construct the assembly yourself meaning that you can make the odd modification or two if you're careful. There's two wheels, each has an independent gearbox and if you look carefully towards the bottom left you can see the ball castor that supports the other end of the frame and stops it tipping forwards. The ball castor can move in a variety of directions and so does not impede the movement of the chassis but it can't be used to steer it.
The Magician can be obtained from Robotbits.co.uk among other sources.
So how do you steer it? The trick is in the movement of the wheels themselves. If you consider the ball castor as the reference point and say it is at the front of the chassis, and you are looking down from the top, then when both wheels are turning forwards the chassis and anything you've attached to it will go forwards. If both wheels are turning backwards so will the chassis. If the left wheel is turning forwards but the right is turning backwards, the reference point of the ball castor will turn right, if the left wheel goes backwards and the right forwards then the castor will turn left, and of course so will the chassis.
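That forwards/backwards combination logic is simple enough to write down as a little truth table in code. This is just a sketch of the steering rule described above; the function name and the +1/-1 convention are inventions for the example.

```python
# A sketch of two-wheel (differential drive) steering logic.
# Each wheel is driven forwards (+1), backwards (-1) or stopped (0);
# the combination decides what the chassis does.

def chassis_motion(left, right):
    """left/right: +1 forwards, -1 backwards, 0 stopped."""
    if left == right == 0:
        return "stopped"
    if left == right:
        return "forwards" if left > 0 else "backwards"
    # Wheels disagree: the chassis pivots towards the slower side.
    return "turning right" if left > right else "turning left"

print(chassis_motion(+1, +1))   # forwards
print(chassis_motion(-1, -1))   # backwards
print(chassis_motion(+1, -1))   # turning right
print(chassis_motion(-1, +1))   # turning left
```

On a real chassis like the Magician, the two `left`/`right` values would become the drive directions of the two motors, typically via an H-bridge.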
This system has advantages and disadvantages. Although the robot can turn on its own axis and thus requires hardly any space to turn in, it isn't guaranteed that both motors are running at the same rate in terms of revolutions per second. This would result in a slight bias to one side or the other in the run of the robot, causing it to drift left or right. If the robot is a line follower it will correct itself, but if it's an ROV then you'd have to correct by steering. A small trimmer potentiometer can be added to one or both motors to correct the fault. In addition, although the Magician uses DC motors you can use continuous-rotation servos, which turn at a much more constant rate than motors but require more complex circuitry to run them. A second solution is to use two wheels on one axle for driving and a third in place of the ball castor for steering. Invariably the steering is done with a servo for simplicity, which turns the wheel assembly directly - the entire assembly is attached to the servo horn and the servo body is attached to the chassis.
A third option is to use caterpillar tracks, like a tank does. The steering system is exactly the same as for a two-wheeled chassis and it has just the same problems, but the advantage is that where a wheel would get stuck, a track will just roll right over.
Robots on legs? I don't doubt that you've seen Hexbug toys. These work in two ways - they can react to their environments or you can control them. In the former case they are completely autonomous and in the latter they have no autonomy at all. They are actually quite remarkably advanced, but for the most advanced robot in the world (which is on legs) you have to look at ASIMO, developed by Honda since the year 2000. It's humanoid for a start, stands 4ft 3 inches tall (130cm), can work in co-operation with other ASIMO units and has a range of cutting-edge technologies embedded in the works. Much of the technology is patented and kept secret by Honda, as they use what they learn from ASIMO in their products. However, ASIMO can walk at about 2mph and run at about 4mph. The control is achieved by zero-moment point control, i.e. finding the point where the dynamic reaction force of the foot on the ground produces no horizontal moment. When this is achieved the ASIMO unit knows it is stable and begins the next step. This is not made any easier by the fact that the centre of gravity on an ASIMO unit is quite high due to the mass of the torso, shoulders and head. Nevertheless one unit on display at Disneyland can dance. ASIMO has limited interactivity with humans; it can move out of the way when one approaches, it can interpret various voice commands and gestures, and recognises up to ten different faces and addresses them by name.
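To get a feel for what zero-moment point control means, here's a rough numerical sketch using the standard ZMP formula for point masses, with the robot collapsed to a heavy torso and light legs and the foot modelled as a short interval on the ground. The masses, positions and foot size are all invented for illustration; ASIMO's actual control system is, as noted, Honda's secret.

```python
# A sketch of the zero-moment point check: the robot is stable when
# the ZMP lies inside the support foot. Standard ZMP formula for
# point masses; all the figures are invented for illustration.

G = 9.81  # gravity, m/s^2

def zmp_x(masses, xs, zs, ax, az):
    """x-coordinate of the ZMP for point masses m_i at (x_i, z_i)
    with horizontal/vertical accelerations (ax_i, az_i)."""
    num = sum(m * (x * (a_z + G) - z * a_x)
              for m, x, z, a_x, a_z in zip(masses, xs, zs, ax, az))
    den = sum(m * (a_z + G) for m, a_z in zip(masses, az))
    return num / den

def stable(zmp, foot=(-0.05, 0.15)):
    """Stable when the ZMP lies within the foot's contact interval."""
    return foot[0] <= zmp <= foot[1]

m = [40, 10]      # torso, legs (kg) - mass carried high, as on ASIMO
x = [0.05, 0.0]   # horizontal positions (m)
z = [0.9, 0.4]    # heights (m)

# Standing still (no accelerations): the ZMP sits under the centre
# of mass, inside the foot - stable, so the next step can begin.
still = zmp_x(m, x, z, ax=[0, 0], az=[0, 0])
print(round(still, 3), stable(still))   # 0.04 True

# Lurch the torso forwards hard and the ZMP shoots out of the foot.
lurch = zmp_x(m, x, z, ax=[4.0, 0.0], az=[0, 0])
print(stable(lurch))                    # False
```

The high torso mass in the example shows why a top-heavy humanoid is hard work: even a modest torso acceleration swings the ZMP a long way.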
Notice that I said nowhere in that that it can make a cup of tea. Notice too that I said that ASIMO was the most advanced robot in the world. It still can't do as much as the average human toddler. That is the state of robotics today; though great strides have been made (literally in ASIMO's case) we are not much nearer to creating the T-9000 now than we were thirty years ago.
In these pages the intention is to look at some theoretical concerns pertaining to definitions of robots and then look at what we can actually do practically with these mechanisms. On the way we'll meet some technology that can be used to help you in your projects and explore the electronic and mechanical principles behind them. There's going to be a lot of servos. There's going to be a lot of motors. Transistors and relays will be making appearances and oh yes, there will be Arduinos as it's my favourite microcontroller system. Exciting isn't it?
Ian Lang, August 2013