The history of artificial intelligence (AI) is replete with waves of enthusiasm followed by disappointment and funding cutbacks. In the enthusiasm phase, some researchers and writers gush about the possibilities of computing capabilities in areas such as machine translation (in the mid-1960s), pattern recognition (in the mid-1970s), symbolic computing (in the mid-1980s), and knowledge-based systems (in the early 1990s). This enthusiasm becomes tempered (perhaps overly so) by realizations of the challenges of the problems being addressed and the limitations of computers. Indeed, it has been said that the careers of AI researchers follow a common trajectory. Early in their careers, AI researchers say, in effect, “Computers are wonderful – I will study how to make computers do what humans can do.” Later they say, “Computers are still wonderful, but this is a challenging problem – I need more computational speed and memory.” Still later in their careers, they say, “Humans are wonderful!”
Fundamentally, AI is the study of how to make computers do what humans and animals can do. Fields of study within AI include purposeful motion (robotics), vision (computer vision), thinking and reasoning (knowledge-based systems), language (natural language processing), and, more recently, emotion and even humor. A number of IST faculty are working in AI and have achieved worldwide recognition for their research. Examples include James Wang, who studies computer vision; John Yen, who works on knowledge-based systems, including intelligent team-based agents and fuzzy logic; Lee Giles, who works on knowledge representation and intelligent information search; and Michael McNeese, who works on cognitive psychology, knowledge elicitation, and computer-aided cognition. Moreover, researchers at the Penn State Applied Research Laboratory (ARL) have an extensive history of success in creating autonomous underwater vehicles (in effect, robots under water).
It can be frustrating for researchers making significant progress in AI to deal with both overly enthusiastic hype and undeserved disdain. I would suggest that there are three main issues related to AI. First, AI addresses fundamentally challenging problems in human cognition and action. In natural language processing (NLP), for example, understanding human language requires understanding context, inflection, and “real-world” information that humans take for granted. The phrase “eats shoots and leaves” could refer to the normal actions of a panda or, with the addition of a comma after the word “eats,” the actions of a robber in a restaurant. Similarly, human-level reasoning generally requires extensive “real-world” knowledge that must somehow be encoded for a computing system. We easily know that the syntactically correct sentence “George Washington threw a 747 across the Potomac” is nonsense, because we know who George Washington was, what a 747 airplane is, and that the Potomac is a river that flows into the Chesapeake Bay in Maryland. None of this is known a priori by a computer.
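The “eats shoots and leaves” ambiguity can be made concrete with a toy part-of-speech enumeration. The sketch below is purely illustrative, not a real NLP system: the lexicon and the two “grammatical” tag patterns are hypothetical stand-ins for the far larger grammars real parsers use, but they show how one word sequence yields two legitimate readings that only context (or a comma) can separate.

```python
from itertools import product

# Hypothetical toy lexicon: each ambiguous word maps to its
# possible parts of speech (N = noun, V = verb, CONJ = conjunction).
LEXICON = {
    "eats":   ["V"],
    "shoots": ["N", "V"],
    "and":    ["CONJ"],
    "leaves": ["N", "V"],
}

# Two tag patterns this word sequence can realize:
#   V N CONJ N -> "eats (bamboo) shoots and leaves": the panda's diet
#   V V CONJ V -> "eats, shoots, and leaves": three actions in sequence
VALID_PATTERNS = {
    ("V", "N", "CONJ", "N"): "panda reading: eats (shoots and leaves)",
    ("V", "V", "CONJ", "V"): "robber reading: eats, shoots, and leaves",
}

def readings(words):
    """Enumerate every part-of-speech assignment for the words and
    keep the assignments that match a grammatical pattern."""
    found = []
    for tags in product(*(LEXICON[w] for w in words)):
        if tags in VALID_PATTERNS:
            found.append(VALID_PATTERNS[tags])
    return found

if __name__ == "__main__":
    for r in readings(["eats", "shoots", "and", "leaves"]):
        print(r)
```

Running this prints both readings: the parser has no way to prefer one without the real-world knowledge (pandas eat shoots; robbers shoot) that humans bring for free.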
The second challenge in AI is that, as soon as AI accomplishes a difficult task, people shrug and say, “Well, of course a computer can do that.” An example is symbolic computing – the ability to perform algebra, calculus, and other mathematics using symbols. This was a very challenging problem in the late 1970s and early 1980s. We now take programs such as Mathematica for granted. Such a program would have amazed scientists in the 1970s and, incidentally, would have allowed me to complete my entire Ph.D. dissertation on stellar structure in a week instead of a year!
Finally, the very term AI seems to go in and out of fashion (and funding). John Yen, of our faculty, once remarked that sometimes people seem to think AI means “anything interesting.” In other words, the term is sloppily applied to virtually any field of study involving computers and the emulation of human tasks.
Despite these issues, I believe AI has a very bright future and will lead to new applications that we will quickly come to treat as routine: smart homes, self-driving cars, computerized medical diagnosis, robotic homecare aides, intelligent agent assistants, and self-aware machines.