A couple of million years ago we learnt how to walk on our two hind limbs. Natural evolution selected it into our genes because it improved our chances of survival. We could see above the tall grasses of the East African plains and detect approaching predators before it was too late. And once we had detected a danger, we could escape at a higher speed. But that evolutionary step also had another profound effect on our ancestors: it freed their hands(1).
Our cousins, the great apes, also use tools. Chimpanzees move objects to step on when they need to reach fruits hanging high above their heads. They also use sticks to collect termites and stones to crack nuts, but they cannot take their tools with them when they need to run. We could.
The initial tools we used strengthened and extended our arms. They were stones, sticks, and bones as we found them. We realised that certain shapes suited our purposes better than others, and started refining our tools. Thanks to the dexterity of our hands, we became a species of tool makers and inventors. Our motivation was survival, both as individuals and as a species. In that, we were exactly the same as any other animal, but we were unique in the way in which we went about it.
Instead of accepting the environment in which we lived, we did our best to modify it and insulate ourselves from natural dangers. We covered ourselves in clothes in order to survive in cold climates. We also learnt how to keep and create fire at will, so that we could chase away predators, stay warm in winter, and have light after sunset. Then, starting from around 10000 BME (Before Modern Era), we domesticated animals and sowed cereals, so that we could have a regular supply of food.
By 5000 BME, farming was common in Asia and Europe. The planting of crops forced us to settle, and the first permanent villages came into existence. Supported by the growing population density in and around these settlements, some people became specialist craftsmen and started producing goods for others. Trade thrived, and with it the need to measure and record property and transactions. This led to the invention of writing, for which we have to thank the Sumerians.
They lived in Mesopotamia, the alluvial plain between the rivers Tigris and Euphrates(2), in what is now part of Iraq, Syria, and Turkey. For some centuries, from around 3300 BME, the Sumerians had made lists of valuable items by pressing little stamps on clay tablets and then baking them to preserve the imprints. But shortly after 3000 BME, they started pressing onto their tablets signs that corresponded to spoken sounds. For the first time, people were able to record and exchange any type of information. This marked the birth of History as we know it.
Writing made it possible to transfer knowledge from place to place and from generation to generation in a reliable way. Thinkers and inventors could effectively build upon the ideas and discoveries of others. During the next four and a half millennia, the medium used for writing changed from clay to papyrus, waxed tablets, parchment, and finally paper. But it was only with the invention of the printing press in the mid fifteenth century that written information could reach everyone. Many more people could take advantage of the knowledge that already existed, and our intellectual and technological evolution took off.
In one way or another, most of the technological developments that have taken place during the past five and a half centuries have kept extending our senses and physical capabilities. With microphones, amplifiers, loudspeakers, earphones, radio, telephones, and mobile phones, we have become able to hear beyond the limits of our ears. With spectacles, telescopes, microscopes (optical, electronic, and tunnelling), night vision goggles, radar, radiography, magnetic resonance imaging (MRI), positron emission tomography (PET), and television, we have become able to see what our unaided eyes had not evolved to see. With bikes, trains, ships, submarines, lifts, motorcars, balloons, dirigibles, airplanes, parachutes, hang gliders, escalators, helicopters, and rockets, we have reached all corners of our world and beyond. With books, photography, magnetic tapes, vinyl records, cinematography, CDs, and DVDs, we have developed the means of saving the accumulated knowledge of billions of people.
Today, almost all the tools we use or know of are in fact extensions of ourselves, from the knife to the food processor, from the spade to the oil drill, from the hatchet to the nuclear bomb. During the Second World War, we invented the digital computer. The first computers were bulky machines confined to air-conditioned rooms and attended by technicians in white lab coats. Many thought that it would always be that way and that computers were so complicated that only professionally trained people could deal with them. But the introduction of transistors, with their reliability and small size, changed all that.
Around the mid 1960s, computers entered the normal office environment as minicomputers. For a while it seemed that the minis would be around forever, but their appeal began to wane in the mid 1970s, when the first microprocessor-based systems hit the market. In 1977 Apple introduced the Apple II microcomputer, which became extremely popular in the education sector. Two years later, the Apple II broke into the business market with the introduction of the first spreadsheet software. It was only in the mid 1980s that Macintoshes and personal computers began to replace the Apple II in homes and offices.
Towards the end of 1969, what was to become the Internet was born as a connection between two American research labs(3). By the late 1980s, the Internet connected almost 100,000 computers worldwide. They were mostly in universities and research institutions, although large and progressive companies had created their own private networks and exchanged data through the Internet via gateway systems. The year 1990 saw the birth of the World Wide Web, through which personal computers have become our window to the world. Thanks to the Web, we can now tap sources of information and interact with each other in ways that were unthinkable as recently as the turn of the century.
For a couple of decades, computers remained for most of us something that we switched on and off when we needed them. Only during the past few years has the advent of fast wireless networks made access to the Internet almost ubiquitous, although many people still see computers as confined within the four walls of homes and offices, where fixed Internet connections are available.
The introduction of the third and fourth generations of mobile networks has blurred the distinction between telephones, computers, and television sets. With a handheld device you can now remain connected to the rest of the world almost anywhere. And the current trend of using web services to store personal information (the Cloud) means that you are less and less bound to the storage space provided by computers resting on or under your desk.
One of the limitations of modern handheld computers is the size of their screens. The Apple iPad and other tablet computers, with their 10-inch screens, are an exception, but they do not fit inside a shirt pocket. All major computer screen manufacturers, like Fujitsu, Samsung, and Toshiba, have been working on prototypes of flexible liquid crystal displays (LCDs). But, although it would be nice to be able to take a large screen out of a pocket and unfold it, I prefer the idea of using virtual screens.
A researcher who agrees with me is Steve Mann(4). In 1970, while still in high school, he invented WearComp0, the first version of a wearable computer. His eye-tap captured the images seen by his right eye and sent them wirelessly to a remote computer for electronic analysis and manipulation. It then merged the result of this processing with the original images before presenting them to the eye.
Fighter jets use Head-Up Displays (HUDs) to present flight information directly in the pilot's line of sight, but Steve Mann's eye-tap is much more than that, because it transfers information in both directions. Steve describes it as mediated reality, because his system can filter out and modify the captured images, while HUDs can only superimpose data on what the pilot sees.
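To make the distinction concrete, here is a minimal Python sketch of the difference, with a synthetic frame standing in for the camera. The function names and the "processing" are mine, purely illustrative placeholders, not Prof. Mann's actual software.

```python
import numpy as np

def capture_frame(height=480, width=640):
    """Stand-in for the eye-tap camera: a synthetic RGB frame."""
    return np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)

def mediate(frame):
    """Mediated reality: the remote computer may attenuate or remove
    parts of the scene before it reaches the eye."""
    out = (frame * 0.6).astype(np.uint8)   # dim the whole scene
    out[100:200, 200:400] = 0              # mask out ("filter") one region
    return out

def hud_overlay(frame):
    """HUD-style display, by contrast, only draws data on top of reality."""
    out = frame.copy()
    out[0:20, :] = 255                     # a white status strip
    return out

frame = capture_frame()
eye_tap_view = mediate(frame)    # what the eye sees through the eye-tap
hud_view = hud_overlay(frame)    # what a pilot sees through a HUD
```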
Today, Steve Mann is professor of Electrical and Computer Engineering at the University of Toronto. He has perfected his invention to the point that it is practically indistinguishable from a normal pair of sunglasses (see figure below(5)).
The latest versions of Prof. Mann's devices are so small and light that he can wear them for long periods of time. Living in such a video-mediated reality develops its own habits of thought. For example, he began pointing his finger at objects shown by the eye-tap but not present in the real world.
The wearable computer with an eye-tap opens up a host of new possibilities, because it connects to other computers via wireless networks. For example, it makes it possible to read emails through the eye-tap. Prof. Mann's group developed programs that analyse the mini-cam images and display emails only on suitable blank areas, like empty walls. Once attached to a wall in the real world, the text of an email moves in and out of sight together with it, as if the email had been painted on the wall.
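One plausible way to find such a blank area is to look for the patch of the image with the least visual texture. The following Python sketch illustrates the idea on a synthetic frame; it is my own simplification of the approach, not Prof. Mann's code.

```python
import numpy as np

def flattest_block(gray, block=64):
    """Scan the frame in block x block tiles and return the top-left corner
    of the tile with the lowest intensity variance -- a crude proxy for a
    blank, featureless patch such as an empty wall."""
    best_var, best_xy = np.inf, (0, 0)
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            v = gray[y:y + block, x:x + block].var()
            if v < best_var:
                best_var, best_xy = v, (x, y)
    return best_xy

# A synthetic greyscale frame standing in for the mini-cam feed,
# with one artificially uniform region playing the part of a bare wall.
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
frame[320:448, 128:320] = 128

x, y = flattest_block(frame)
print(f"Anchor the email text at pixel ({x}, {y})")
# The real system would then redraw the text at the same wall coordinates
# in every new frame, so that it appears painted on the wall.
```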
Prof. Mann built an eye-tap with a heat-sensitive camera. A remote computer connected to the eye-tap made the captured infrared images visible by associating different colours with different temperatures. This effectively provided night vision through eye-tap technology. In another test, Prof. Mann installed face recognition software on the remote computer. The eye-tap was then able to display the names of people who appeared in its field of vision.
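The false-colour step amounts to mapping each measured temperature onto a colour. Here is a small sketch of one such mapping, with invented temperatures and an arbitrary blue-to-red scale; the real system's colour scheme may differ.

```python
import numpy as np

def temperature_to_rgb(temps, t_min=15.0, t_max=40.0):
    """Map a 2-D array of temperatures (degrees Celsius) onto a simple
    blue-to-red false-colour image: cold pixels blue, hot pixels red."""
    t = np.clip((temps - t_min) / (t_max - t_min), 0.0, 1.0)
    rgb = np.zeros(temps.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = (255 * t).astype(np.uint8)          # red grows with heat
    rgb[..., 2] = (255 * (1.0 - t)).astype(np.uint8)  # blue fades with heat
    return rgb

# A made-up infrared frame: cool background with one warm "body" in it.
ir = np.full((240, 320), 18.0)
ir[80:160, 120:200] = 36.5
visible = temperature_to_rgb(ir)   # ready to be shown through the eye-tap
```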
Coupled with a GPS (Global Positioning System) device, the eye-tap can show route indications, both for walkers and for drivers.
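The arithmetic behind such an indication is straightforward: given the wearer's GPS position and the next waypoint, the device mainly needs the compass bearing between the two. A minimal sketch of that calculation follows, with purely illustrative coordinates.

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees (0 = north, 90 = east) from the
    wearer's position to the next waypoint, using the standard
    great-circle formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0

# From one point in Toronto to another (coordinates approximate).
print(initial_bearing(43.6629, -79.3957, 43.6525, -79.3839))
```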
Three technological developments have made it possible for Prof. Mann to achieve such remarkable results: firstly, computers have become powerful enough to process images in real time; secondly, wireless transmission has become fast enough to transfer images back and forth without significant delay; and thirdly, the miniaturisation of electronic components has transformed the heavy backpacks of the first prototypes into light and unobtrusive devices.
We will relate to wearable computers as we now relate to our clothes and the simple tools of everyday life, and they will become new status symbols. People will show off their wearables as they now do with designer-label clothing, watches and, increasingly, laptops and mobile phones. We will still take them off before going to bed or taking a shower, but wearables are only the next step on a path of increasing human–computer integration. Many further steps will follow.
If you are concerned about the look of the eye-taps, don’t be. Babak Parviz, an assistant professor of electrical engineering at the University of Washington, is working on contact lenses to replace external eye-taps. He unveiled the first prototype in January 2008. Although it was not functional, the prototype demonstrated that imaging and transmission circuitry could fit on a contact lens without impairing normal eyesight.
In any case, the main limitations of Prof. Mann's wearable computers are on the input side. Today you can only send commands to the computer via mini-keyboards. Voice input could improve the situation, but the privacy of the commands would normally be lost, as bystanders could hear them. The commands could be sub-vocalised and picked up by throat microphones, but that still does not seem an ideal solution.
What we should be able to do is think our commands. We would formulate a query in our mind and see possible answers scroll directly before our eyes, or hear them whispered in our ear. This might seem far-fetched, but it will take less than a decade before we are able to do just that.
Notes:
(1) Some researchers actually believe that our ancestors started walking erect precisely to keep their hands free. But this theory is somewhat controversial and goes beyond the scope of this book. The end result is in any case the same.
(2) Mesopotamia is a Greek word (Μεσοποταμία) that means ‘between rivers’.
(3) UCLA (University of California Los Angeles) and SRI (Stanford Research Institute) International, on October 29th.
(4) http://www.eecg.toronto.edu/~mann/ http://wearcam.org/ http://eyetap.org
(5) The image is a composition realised by taking parts of two images freely licensed on Wikimedia Commons: http://en.wikipedia.org/wiki/Image:Wearcompevolution.jpg and http://en.wikipedia.org/wiki/Image:Aimoneyetap.jpg