WeeklyWorker

04.06.2015

In service of humanity?

Can robots and artificial intelligence be subordinated to human need? Yassamine Mather examines the issues

The word ‘robot’ comes from robota - a Czech term meaning hard labour or servitude, introduced by the Czech writer Karel Čapek. But the reality is that the robots we face in our time are sometimes just software - we use the term ‘software robots’ in the industry - and sometimes they deal with the automation of parts of microsurgery, so we are no longer talking of hard labour as such.

Although people define robots differently, there are estimated to be a couple of million industrial and household robots in the world, and every year around 200,000 new robots are sold. It is certainly true that in a number of monotonous, very repetitive and clearly defined jobs the robot has taken over - car plants are a very good example of this. What has not happened, however, is what was predicted in the 1950s: a world where robots do everything or where robots with intelligence dominate humanity. Obviously we are nowhere near that. This does not mean that things cannot change very rapidly, given the speed with which computing science is progressing - at times exponentially - and with technology and nanotechnology advancing at a much faster rate than a decade ago. But I am not making predictions on such matters.

There are a number of obstacles to robotic research, and in describing the technology I want to make sure that we also understand the politics, particularly as they apply to the limitations. The way in which research funding works has directed developments towards a very particular type of robot: in a capitalist society we do not develop robots because of human necessity: we develop them where there is good funding, and that is usually in the military or in catering for the specific circumstances required by the aerospace industry. The National Aeronautics and Space Administration (Nasa) has very good robots, for example. But robots are also employed in cases of absolute necessity, such as in a nuclear-contaminated disaster zone.

As such, it is not surprising that in a capitalist society we have robots that can take scientific measurements on Mars controlled from Nasa on Earth, yet dustbins are still being collected by human beings. It is still cheaper to employ a person, even in advanced capitalist countries, to pick up dustbins and empty them, as opposed to using a robot. A robot can in theory do this - it is a very simple task, and I will explain why this is amongst the tasks that a robot can perform well - yet humans remain cheaper.

There is also the issue of funding for research - most universities, with the exception of a very few elite institutions, ‘restructured’ their engineering science departments at the onset of the recession, so that they now do testing for big companies. So if you look at most UK universities, there is very little innovative research going on. What has happened is that the big companies have closed down their own testing divisions - it is very cheap to get university staff and postgraduates to do this instead, imagining that they are doing research, when in reality it is just testing, or quality control.

The other limitation is connected to what can be done in terms of recreating the human being’s five senses. As you can imagine, touch and smell are more difficult and are a real challenge for computers to deal with. Robots can typically be given two or three of these senses, but it is very difficult for them to coordinate all five - a limitation which means that in some areas the dumbest human being is still well ahead of the most advanced robot. The issue of dexterity - the way we can use our fingers, for instance - is a challenge in any mechanised system, despite the huge progress that has been made in the last 10 or 15 years. That applies particularly in terms of more detailed and intricate work, including within the household.

One of the other challenges faced by scientists and engineers - a serious one that affects both robotics and artificial intelligence - is that medical science still has a lot to learn about how the human brain works. We know, using animal testing and various other means, exactly how an eye works, but our knowledge about the brain is less developed. Again, artificial intelligence is making huge progress in this field, but it is a challenge.

Last but not least, competition, despite what capitalism says, does not help research. Competition means that companies refuse to share their research, while universities are market-oriented, with each department applying for a grant in opposition not only to the same department in a different university, but also to other departments in the same university. There is much duplication, with a lot of people reinventing the wheel, not just in robotics and artificial intelligence, but in every field. Basically, if our cumulative knowledge had been better gathered, we would be further ahead.

Limits

Earlier I referred to assembly work, such as painting car doors. Nowadays the best robot arms operate with six axes: whereas they used to be capable of spraying only one side of a car, they can now get around every angle to do the job.

Such robots have developed from the very basic models that initially existed. In one of my first jobs, at the National Engineering Laboratory, we were making robots - in conjunction with university grants - that assembled an electric rotor. It was a challenge to develop a robot that could assemble all the parts of a rotor, finish it off and make it ready to go into another part of the factory.

A robot works by following a series of instructions relating to how it moves. For example, a number of sensors, or transducers, are used to stop it hitting obstacles, which could cause a huge amount of damage. Such sophisticated robots are very expensive to make, and it was essential to ensure that the robot arm would not inadvertently strike a worker, but also, for example, that, as it picked up a section of the rotor, it did not collide with the container carrying it - in that case everything would have spilled out, which would have cost more time than it saved. There is a complicated set of programs - and programming languages - used in this kind of work.
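
To make this concrete, here is a minimal sketch of the kind of guard logic involved. The sensor and arm interfaces are hypothetical stand-ins, not any particular manufacturer’s API:

    SAFE_DISTANCE_MM = 50  # halt if anything comes closer than this

    def step_towards(target, arm, proximity_sensor):
        """Move the arm one increment towards the target, halting on obstacles."""
        distance = proximity_sensor.read_mm()  # distance to the nearest object
        if distance < SAFE_DISTANCE_MM:
            arm.halt()  # freeze all joints before a collision can occur
            raise RuntimeError('obstacle detected at %d mm' % distance)
        arm.move_increment(target)  # otherwise take one small step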

A more advanced type is a two-armed robot, where coordination of the two arms requires more complicated computer programs: one arm picks up various components - nuts, bolts, rotors, etc - moving them to the right location on a production belt, while the other arm waits, turns the components and places them in the right location in relation to the first arm. A video is available of a number of adults looking in utter amazement at what could be described as a reasonably simple task being performed by such a robot.1 Yet such a task could be undertaken by a person who was not necessarily very bright and it would not be a challenge at all.
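
As an illustration of the coordination problem, here is a minimal sketch of a two-arm handover using a simple handshake. The arm controllers (arm_a, arm_b) and their methods are hypothetical; the point is that the second arm must wait until the first has released a part:

    import threading

    handed_over = threading.Event()  # arm A has placed a part
    taken_away = threading.Event()   # arm B has cleared the handover point

    def feeder(arm_a, parts):
        """Arm A: pick each component and place it at the handover point."""
        for part in parts:
            arm_a.pick(part)
            arm_a.place_at('handover_point')
            handed_over.set()   # tell arm B a part is waiting
            taken_away.wait()   # block until arm B has removed it
            taken_away.clear()

    def fitter(arm_b):
        """Arm B: wait for each part, turn it and place it in position."""
        while True:
            handed_over.wait()  # block until arm A signals
            handed_over.clear()
            arm_b.take_from('handover_point')
            arm_b.orient_and_place()
            taken_away.set()    # tell arm A the point is clear again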

According to the mass media, there are currently more domestic than industrial robots, and it is apparently fashionable to use them in the home, especially in the United States - robot pets, perhaps. Joking aside, take the example of a lawnmower, which can be programmed to do the task in hand and is obviously a time-saving device. If you look at advertisements for existing robotic lawnmowers, you will see that they come with an instruction video. I have attempted to follow the instructions on one such video and found it so complicated that the average customer would surely give up halfway through and mow the lawn themselves. Given that the equipment cost £1,300 to buy, it would be cheaper to employ someone instead. What is more, this particular lawnmower seems to be accident-prone, and the customer has to add functions to program its scope, etc. If you press the wrong button, the robot could veer off the wrong way and self-destruct.

As new generations of robots are launched, such tasks will become easier. More advanced robotic lawnmowers are equipped with sensitive transducers and artificial vision, making them far more expensive. However, the more commonly used robotic lawnmowers deploy very basic means of detecting obstacles and so on. In one example,2 the customer has to create a perimeter around the garden using a wire which the robot is able to detect. But if the wire were not of the exact material used in the original design, the lawnmower would not recognise it and would go straight through the perimeter - into your window! Or imagine a scenario in which a child or pet is running around the garden and the lawnmower’s transducers have not been programmed to detect them. In other words, there are limits to such ‘intelligence’.
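
The boundary-wire logic can be sketched in a few lines. The sensor and drive interfaces below are hypothetical, and a real mower detects a signal induced in the wire rather than the wire itself:

    SIGNAL_THRESHOLD = 0.2  # below this we assume the wire is undetectable

    def mow_step(wire_sensor, drive):
        """One step of mowing, respecting the perimeter wire."""
        strength = wire_sensor.signal_strength()  # normalised 0.0-1.0
        if strength < SIGNAL_THRESHOLD:
            # Wrong wire material, a broken loop, or the mower has already
            # crossed the boundary: stop rather than carry on blindly.
            drive.stop()
            return
        if wire_sensor.at_boundary():
            drive.turn_degrees(120)  # turn back into the lawn
        else:
            drive.forward()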

Given the lifespan of some of these types of equipment, it is difficult to see how they could take over most domestic tasks, free of human intervention. Of course, there will be further development of domestic robots, with the use of cheaper sensors, faster processor power and improvements in artificial vision. However, for most tasks I doubt they will be cheaper or more reliable than human beings in the foreseeable future. Current rates of pay for domestic labour are often below the minimum wage in most advanced capitalist countries and, given the current need to supervise domestic robots, it will take some time before robotic vacuum cleaners, lawnmowers, etc cause major job losses.

When it comes to robotic toys, they should not be dismissed - some have been used for testing space projects by Nasa, for instance. So the pet robot industry (which is quite lucrative) can have its uses, although it mostly remains more of a gimmick for the rich.

Employment

Today’s mobile phones are actually a form of robot. Using Google’s ‘Speak now’ voice search, you can ask any question and Google will answer you. However, the answer is only as accurate as the person who last entered the information. To give one example, in August 2014 I asked my device who the prime minister of Iraq was, and I was informed it was Nouri al-Maliki - who had been dismissed a few days earlier! The human beings who wrote the software to update the Google data with the latest news had made a mistake, as a result of which the correct information had not been added to the data bank.

The reality is that a huge amount of data is stored by the likes of Google or Apple in the form of data lakes, or ‘Glaciers’. Right now Google is not just buying every kind of software: it is buying hardware for the manufacture of robots. It is one of the most aggressive firms when it comes to acquisitions in the robotics industry. It is also investing in cutting-edge artificial intelligence companies - it recently bought up DeepMind, which is considered to be one of the most advanced. But the traditional blue-collar worker is no longer the only person who might lose their job to a robot, to a computer, to automation. We all know the arguments about driverless trains, which in reality are semi-driverless - they need constant monitoring.

Birmingham University is currently attempting to develop a robot security guard - such jobs can be hazardous, after all. Security guard robots with three-dimensional vision need a level of mobility allowing them to pass through the building concerned. At the end of the day we are talking of a moving camera capable of initiating a series of programmed actions in response to specific, well defined conditions (detecting a moving object, a broken door or window, say), and of initiating further action, such as automated telephone calls or the automated locking of entrances and exits.
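
At its core this is a condition-action table. A minimal sketch, with invented event names and responses:

    RESPONSES = {
        'moving_object': ['record_video', 'phone_control_room'],
        'broken_door':   ['phone_control_room', 'lock_exits'],
        'broken_window': ['phone_control_room', 'lock_exits'],
    }

    def handle(event, robot):
        """Dispatch a well defined event to its programmed actions."""
        for action in RESPONSES.get(event, []):
            getattr(robot, action)()  # e.g. calls robot.lock_exits()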

A number of researchers have looked at the potential unemployment resulting from such developments. In one well-known study, Carl Benedikt Frey and Michael Osborne of Oxford University examined 702 US occupations and calculated that about half - 47% of existing US jobs - are at risk of computerisation over the next decade or two. Everyone knows that the use of automated cash dispensers and self-service checkouts in supermarkets is constantly increasing, and many people have phone or computer apps allowing them to carry out financial transactions rather than going to the bank.

There are also studies explaining how other white-collar jobs are at risk. For example, some basic legal work can be performed by machines/robots, and the same is true of paralegal and other administrative jobs. The computing logic goes along the lines of: ‘If condition A holds, or process B or C applies, look for a Boolean combination of words in pages 1-410 of legal document X and use algorithm Z for the legal exemption or addition.’ You can imagine how many mistakes such a program can make and the possibilities for confusion. Having said that, I am not denying that a minimum of such paralegal work could be conducted this way. The Los Angeles Times, for example, created an algorithm that reacts to official reports of a quake and automatically produces a short article whenever a small earthquake occurs.
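
A minimal sketch of the kind of Boolean document search described above - and of how easily it goes wrong - might look like this (the query terms are invented):

    def matches(page_text, all_of=(), any_of=()):
        """True if the page contains every word in all_of and,
        when any_of is given, at least one word in any_of."""
        text = page_text.lower()
        if not all(word in text for word in all_of):
            return False
        return not any_of or any(word in text for word in any_of)

    def search(pages, all_of=(), any_of=()):
        """Return the page numbers that satisfy the Boolean combination."""
        return [i + 1 for i, page in enumerate(pages)
                if matches(page, all_of, any_of)]

    # e.g. search(document_pages, all_of=('exemption',),
    #             any_of=('liability', 'indemnity'))
    # Note that naive matching finds 'liability' inside 'reliability' -
    # exactly the kind of confusion mentioned above.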

In Japan they seem very keen to leave their children in the care of robots - I am not quite sure why. There are pictures of happy Japanese families, complete with robot child-minder. Now, any mother who lets her child be held by a robot needs her brain examining. Robots are notorious for taking the wrong action, such as swinging around too quickly.

A robot can perform a number of functions, but we should distinguish between ‘functions’ and ‘jobs’. A human being doing a job carries out a number of functions at the same time. For example, the assembly line mentioned earlier requires a number of distinct functions, which is why two robot arms are needed. In general many jobs are more complicated and beyond the current capabilities of robots - most can, at best, perform two or three functions simultaneously, whereas humans can manage far more. Mowing the lawn is a good example: while doing this, we use our ears to hear if anyone is at the door, we look to see if a child is in our way, and our brain might be engaged in a totally separate task. We do a number of diverse, parallel tasks at the same time, and this is beyond the capability of the robot.

What robots excel at is precision - being accurate - and it is in this way that they should be used. Currently there is extensive use of robots in surgery, where they help to perform operations on patients. Advances in laser surgery have increased the use of robots in the operating theatre. The cameras in the robot head are used for constant scanning - something the surgeon cannot do. The robot moves to the necessary position with great precision and then makes the incision, for example. None of this is done without humans, of course - there can be no surgery without surgeons. But a robot can eliminate the shaking hand, or the human eye that may cut a couple of millimetres off target.

The other area where automation has made quite a lot of progress (although not as much as people expected) is in aircraft. In fact most of the advances in recent years have been applied in military planes, but for passenger flights too there is automation in terms of take-off, taxiing, cruising, descent, approach and landing - the two most difficult, and therefore most useful, areas being taxiing and take-off. But we are not talking of a fully autonomous autopilot - the human pilot has not been replaced. At all times a human being is actually in charge, and another responsible adult has to be in the cockpit in case the pilot is injured or taken ill.

However, there have been advances in this area, and new research is concentrating on creating systems that will act as a true autopilot. The idea is that, if the plane is hijacked, the door to the cockpit locks, or, if the pilot is ill, dies or leaves the cockpit, the device allows the plane to land. Yet we know this does not always work.

Then there are drones, of course, which are in a way automated. But there are limitations to the use of this sophisticated, expensive, auto-guided system. In the case of the US military, the cameras pick up satellite images of a gathering, which is interpreted by the automated system as a Taliban meeting; the drone is sent to blow up the gathering - which turns out to be a wedding party. No big problem for the United States air force - it is just ‘collateral damage’ and the event will not make the news. But if a civilian plane carrying 400 European or American passengers falls out of the sky and crashes, that is a different story. In other words, automation is employed extensively in drones and military planes, but is used with great caution in civilian air travel.

Artificial intelligence

Once we start to discuss vision and hearing, we begin to understand the problems of artificial intelligence. In general a vision-guided robot is quite advanced, because it has to perform a lot of complex tasks. It has to recognise various parts and components, and it has to understand what they mean for the task in hand. Usually a robot does not simply rely on its own ‘vision’ to guide its action: it relies on a whole set of other equipment to find what it is looking for. This complexity is important to grasp because, for all the talk of artificial vision and robots that can see, it is not quite as simple as that.

A vision-guided robot tries to match what it sees against a stored image. There is a level of image processing that allows the robot to understand the patterns the image contains, so that it can put the various shapes back together. But it has to recognise what it is looking for, and it needs vision guidance. While a human would simply look at all the parts and recognise that, say, a triangle is needed, for the robot all the shapes must be exactly where it is expecting them to be - a millimetre out and they will be missed. So, for all the computer-generated ability to recognise shapes and to understand what shapes are, this still has serious limitations.
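
A minimal sketch of that rigidity, with invented coordinates: the robot only accepts a shape if the vision system reports it within a millimetre of where it is expected:

    EXPECTED = {'triangle': (120.0, 45.5)}  # expected x, y position in mm
    TOLERANCE_MM = 1.0                      # a millimetre out and it is missed

    def locate(shape, detections):
        """detections: (name, x_mm, y_mm) tuples from the vision system."""
        expected_x, expected_y = EXPECTED[shape]
        for name, x, y in detections:
            if (name == shape and abs(x - expected_x) <= TOLERANCE_MM
                    and abs(y - expected_y) <= TOLERANCE_MM):
                return (x, y)
        return None  # perhaps present in the scene, but not where expected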

In terms of robots that can ‘listen’ and respond, we should remember there are those that read a lot of data in the form of plain text, find what is needed and display the results: Google and other search engines are examples of such software robots. But ‘hearing’ is more of a problem. We, as humans, can hear four or five voices, distinguish between them and respond to various pieces of information. The robot is at an immediate disadvantage, because the motor and electronics inside it are already making some noise. So the first thing you have to do is neutralise this sound, so that the robot is not confused. Then you have to give it instructions on how to interpret audio instructions and data.
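
One standard way to ‘neutralise’ the robot’s own noise is spectral subtraction: record the motor noise on its own, then subtract its average spectrum from the incoming audio. A minimal sketch, assuming both recordings are equal-length arrays of samples:

    import numpy as np

    def subtract_noise(signal, noise_profile):
        """Subtract the noise magnitude spectrum, keeping the signal's phase."""
        spectrum = np.fft.rfft(signal)
        noise_magnitude = np.abs(np.fft.rfft(noise_profile))
        cleaned_magnitude = np.maximum(np.abs(spectrum) - noise_magnitude, 0.0)
        cleaned = cleaned_magnitude * np.exp(1j * np.angle(spectrum))
        return np.fft.irfft(cleaned, n=len(signal))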

There is a video on YouTube of a lab where a man is teaching a robot to dance to music with different tempos, and the robot learns to do so.3 The robot is listening to what he says, but also to the music, and this is a complication. Eventually the robot is able to distinguish between the man’s voice and the music, and it changes tempo. But there is an 11-second delay, whereas for a human two-year-old the delay would be a fraction of this. The man enunciates the instructions very carefully, but even then the robot is confused and keeps asking him to repeat the command.

All this shows the current limitations of an average robot in responding to voice instructions. Even though my mobile device can answer my question via Google search, robots have a lot of difficulty in coping with multiple commands, with multiple sounds.

What does this mean for artificial intelligence? Sound and vision are used to teach robots to react to a given environment, but for humans it is not just what we see and hear that guides us. We also have memories of seeing the same sight, or hearing the same sound, on previous occasions. There is a level of processing, of adaptation and learning over time, and of our ability to use efficiently what we already know. All of these combine to determine our reaction to a phenomenon. In our brain there are neural networks holding interconnected data stores and accumulated knowledge.

However, the brain is a very complex organ. It does not work in a simple, straightforward way, but relies on billions of neurons. That is the problem when it comes to robots. It is the lack of precise information about how these neurons interact that makes developing artificial intelligence difficult - unlike, say, artificial vision. True, great progress is being made and the situation could change dramatically in a very short time.

Currently there are different hypotheses about how our brain works. What we know from observation, from experience, from medical tests and so on is that some of the tasks that a teenager undertakes are objectively easier than those done by a two-year-old. What we lack is sufficient understanding of the cognitive science of the early years of a human. We know what we have taught the teenager - the knowledge and experience that allows him/her to read, write or whatever. But what we do not fully understand is how the two-year-old has gained the kind of knowledge that makes her/him much more clever than most robots.

Neural networks

Of course, artificial intelligence is not about recreating human intelligence, at least not at this stage. It is about giving robots or automated devices the capability to understand their own surroundings in the way a human would. But in this whole process the amount of knowledge we transfer to the robot or automated device plays a crucial role in how that device will react and how it will work.

First of all, there is the concept of ‘fuzzy logic’. It is not as much in fashion as it was in the 1990s and early 2000s, but it refers to the science of getting a computer to handle not just crisp values, such as 0 and 1, but the grey areas in between - values that do not fit exactly into either category. There is a vast array of software packages that allow the computer to do that. But the most advanced forms of artificial intelligence rely on neural networks. These are modelled on an animal’s central nervous system, which enables it to understand - to compute an image, for instance - and then use its short-term or long-term memory to relate it to what has to be done. In this connection there are a lot of theories about how our brain works. Some experts say that our brains differ in relation to the speed with which their electrical signals are interpreted: humans do not have different levels of intelligence, but the speed of short-term and long-term memory differs.
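
Returning to fuzzy logic for a moment: the usual building block is a membership function, which assigns a degree of belonging rather than a hard 0 or 1. A minimal sketch, with invented distance categories:

    def triangular(x, left, peak, right):
        """Degree (0.0-1.0) to which x belongs to a triangular fuzzy set."""
        if x <= left or x >= right:
            return 0.0
        if x <= peak:
            return (x - left) / (peak - left)
        return (right - x) / (right - peak)

    def classify_distance(mm):
        """A reading can be partly 'near' and partly 'medium' at once."""
        return {
            'near':   triangular(mm, -1, 0, 400),
            'medium': triangular(mm, 200, 500, 800),
            'far':    triangular(mm, 600, 1000, 2000),
        }

    # classify_distance(300) -> 'near' 0.25, 'medium' about 0.33, 'far' 0.0
    # - a grey area, not a crisp 0-or-1 answer.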

For artificial intelligence this is quite an important discovery, if it is true. But it is only a hypothesis at present: there is no proof. If it is true, it would make a lot of difference in getting AI to work. What we have to do is find a way to observe the intricacies of human behaviour, and then convert those observations into instructions that a computer can understand. Here there has been a lot of development in commercial as well as open-source software - converting the observations we have of human responses to actions and reactions, and then adding this to the memory of the robot or the computer. This is a cumulative process: as the machine learns more information, it becomes more efficient, more reliable.
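
A minimal sketch of that cumulative process is the classic perceptron update: each observed example (a set of features plus a 0/1 label) nudges the stored weights, so the machine’s ‘memory’ improves as observations accumulate. The example data is invented:

    def train_step(weights, bias, features, label, rate=0.1):
        """One online update from a single observation (label is 0 or 1)."""
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if activation > 0 else 0
        error = label - prediction  # -1, 0 or +1
        weights = [w + rate * error * x for w, x in zip(weights, features)]
        return weights, bias + rate * error

    # Feed observations in one by one; knowledge accumulates in the weights.
    weights, bias = [0.0, 0.0], 0.0
    for features, label in [([1, 0], 1), ([0, 1], 0), ([1, 1], 1)]:
        weights, bias = train_step(weights, bias, features, label)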

And this takes me back to Google. Neural networks were a real challenge for science and engineering departments. How exactly do you replicate this mass of knowledge artificially? But in some ways Google is the world’s largest neural network. Every time you ask a question it memorises it and adds it to its knowledge base. Google is not buying up robotics companies because it thinks they are a good source of income, but because Google is learning from hardware robotics as well as software robotics, and it believes that by combining this knowledge it will go very far. Again, the speed of computers, the speed of processors will make a big difference.

And Google is not the only neural network worldwide. A number of prestigious universities are now learning from Google and getting involved with other companies - travel companies and so on - trying to learn from ‘big data’ gathered from the internet. Distributed computing and parallel processing allow us to make sense of huge databases.
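
The basic pattern is simple: split the work across processor cores, then merge the partial results. A minimal sketch, counting word frequencies across a collection of documents:

    from collections import Counter
    from multiprocessing import Pool

    def count_words(document):
        """Map step: word frequencies for one document."""
        return Counter(document.lower().split())

    def total_counts(documents):
        """Reduce step: merge the per-document counts into one table."""
        with Pool() as pool:
            partial_counts = pool.map(count_words, documents)
        total = Counter()
        for partial in partial_counts:
            total.update(partial)
        return total

    if __name__ == '__main__':
        docs = ['the robot moved', 'the arm stopped', 'the robot stopped']
        print(total_counts(docs).most_common(3))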

A human being’s understanding of a page of text is very high compared to that of a computer. A computer can ‘read’ a page, but is perfectly capable of taking the wrong information from it. But if it can read a page one million or one billion times quicker than a human being - and that is what is happening with the new neural network processors - then its ability to correct its mistakes can also be enhanced. It is this that gives the defenders of artificial intelligence the optimism to predict miracles in the next 10 to 15 years.

Will a computer be able to decipher information, to accumulate it and then to learn from that accumulated knowledge in the way a two-year-old does? That is the prospect which makes artificial intelligence quite interesting. All of this will change artificial intelligence and robotics, as well as the way we see human labour.

yassamine.mather@weeklyworker.co.uk

Notes

1. www.dualarmrobot.com/media.html.

2. www.youtube.com/watch?v=7HRHMo_ZY3w.

3. www.youtube.com/watch?v=QSszdZoyXTA.