In the opening sequence of a popular 1970s television show, the music swells in the background; we see images of a spacecraft crashing on re-entry, followed by an operating room, and then a man on a treadmill running at speeds exceeding 50 miles per hour. A voice-over intones, “Steve Austin, astronaut, a man barely alive. Gentlemen, we can rebuild him, we have the technology. We can make him better than he was before: better, stronger, faster.”
This theme of human augmentation via bionics was introduced to primetime television during the run of the series, The Six Million Dollar Man, based on the novel Cyborg by Martin Caidin.
Following this series, the Nebula Award-nominated cyberpunk novel, When Gravity Fails, by George Alec Effinger, introduced characters with implanted brain/computer interfaces, allowing a character to attach a module (“moddy”) to assume the traits of a fictional character such as Sherlock Holmes, or an add-on (“daddy”) to provide unlearned knowledge such as language translation.
These are just two of the multitude of science fiction novels and movies that have addressed the concept of human augmentation, a recurrent theme that dates back several millennia. The use of prosthetic devices to replace or augment human limbs can be traced to 600 BCE, augmentation and correction of human vision via eyeglasses was introduced in the 1200s, and the use of “ear trumpets,” mechanical tubes designed to guide and amplify sound into the human ear, was described in a book published in 1624. In 1868, Edward Ellis published a pulp novel, “The Steam Man of the Prairies,” which depicted a giant human-shaped steam engine that towed its inventor at speeds of 60 miles per hour and chased buffaloes and terrorized Native Americans (thus, the first “Iron Man”).
In modern times, the cochlear implant (or bionic ear) was invented in 1961 and was first used effectively in human subjects in 1972. Today more than 200,000 people worldwide have such implants. Augmentation of human vision via implanted chips is being developed by a number of scientists throughout the world. The augmentation or replacement of human limbs and motion has also been an active area of development. In 1961, the Pentagon invited proposals for real-life wearable “exoskeletons” that could augment the capabilities of individual soldiers. In the 1980s, scientists at Los Alamos National Laboratory created a design for the “Pitman” suit, a full-body powered exoskeleton intended to augment U.S. Army infantrymen. Recent developments include artificial limbs enhanced via microprocessor control and the ability to create a sense of heat, cold, and “touch” in artificial hands and limbs. Unfortunately, many of these developments have been spurred by the need to assist soldiers injured by improvised explosive devices (IEDs).
These advances are made possible by a combination of advances in materials (e.g., nanoscale and lightweight flexible materials), new micro-scale sensing and computing devices, and improvements in processing algorithms and artificial intelligence. The advent of cloud computing and near-universal internet connectivity has enabled “augmented reality” concepts such as Google Glass, which superimposes information about the wearer's surroundings onto his or her field of vision. This offers the equivalent of “X-ray” vision, giving ordinary people the ability to see what is inside buildings or machines, to identify surrounding people with automated facial recognition software, and to obtain extensive background information via social media links.
In effect, no one is a stranger, and every person can access near-genius-level information about his or her surroundings. Thus, Effinger's fictional capability of enhanced intelligence can be achieved with technology available to consumers today.
These new capabilities raise a number of issues and challenges for IST researchers: How do we design “human-computer interfaces” when the computer is already an intrinsic part of us? How do we manage the intrusiveness of computer-aided information; could we inadvertently create people who are permanently distracted by their online access (analogous to the driver-safety issues posed by texting motorists)? What happens if a computer or sensor system embedded in a human gets “hacked” or infected with a virus? How do we manage upgrades? Will we create a new class system of “haves and have-nots” (i.e., augmented versus un-augmented humans)?
As we move into a new era of pervasive technology, these are just a few of the issues we will need to address. We have already experienced controversy surrounding physically enhanced humans (e.g., the South African “Blade Runner,” whose prosthetic legs sparked debate at the 2012 Olympic Games). Will we need to develop “classes” of humans in sporting events, analogous to stock car racing versus fully enhanced racing cars? What about sensory- or intelligence-enhanced humans?
IST faculty and graduate students are working at the forefront of these issues. John Yen and Mike McNeese address issues of collective cognition aided by intelligent agents; Jeff Rimland is completing a dissertation on hybrid human-computer distributed sense-making. Mark Ballora conducts experiments in converting data into sound, allowing aural interfaces to large data sets. Lee Giles and his students develop increasingly sophisticated search engines to access nuggets of information in huge data sets. Guoray Cai develops new methods for human-data interaction, and Chao Chu leads research on the “Internet of Things,” in which computers and sensors are embedded in everyday devices and communicate among themselves.
One thing is certain: the one-time science fiction visions of cyber-enhanced humans are becoming a reality today. In the near future, we will no doubt view these enhancements as “natural” and routine. Moreover, we will look back on The Six Million Dollar Man as a very expensive antique!