INTERFACING TO HUMANS
Artificial Reality Corporation
In the late 1960s, when the idea of the human interface as a research discipline was just dawning, the usual approach was to think in terms of the minimum devices and software that would allow users to operate existing computer applications within the economic constraints of the time.
The work described in this talk took a very different approach. It asked: what are the possible ways that humans and machines might interact? It began with the creation of interfaces that were as far from existing conventions as possible. Instead of interfacing the human to the computer, it interfaced the computer to the human. The interface would be the human body and the human senses. Full-body participation in distributed computer simulations was considered the interface of the future.
As in the virtual reality efforts that came later, encumbering devices such as wireless HMDs and tracking systems were considered but discarded because there were many situations in which wearing such paraphernalia would be unwelcome. Instead, unencumbering technology was developed based on environmental sensors such as video cameras and pressure-sensitive floors. Feedback was provided through projected graphics and synthesized sound. In addition to the full-body format, a desktop version was created.
Mature applications, which were significant research projects in their own right, were developed and widely demonstrated. These included systems for gas flow visualization, teletutoring, multipoint gesture control, range-of-motion therapy, spatial interfaces for the blind, and olfactory display.
Myron Krueger is a computer visionary who implements his predictions as both science and art. He received a BA in Mathematics from Dartmouth College, where he was in the first BASIC class. He spent two years in the military at the U.S. Electronic Proving Ground at Fort Huachuca, Arizona. He then attended the University of Wisconsin and earned MS and PhD degrees in Computer Science. His thesis title was “Computer-Controlled Responsive Environments.” Upon completing his thesis, he moved across the street to the Space Science and Engineering Center, where he proposed a worldwide virtual reality telecommunication project as the theme of the U.S. Bicentennial. In 1978 he moved to the University of Connecticut, where he taught Computer Science and completed the preliminary development of the Videoplace system, which was first demonstrated at CHI '85 and SIGGRAPH '85. He left UConn in 1985 to form Artificial Reality Corporation, which has performed research and consulting for corporations and government agencies.
Dr. Krueger pioneered the development of unencumbered, full-body participation in computer-created telecommunication experiences and coined the term “Artificial Reality” in 1973 to describe the ultimate expression of this technology. His 1974 dissertation was published as Artificial Reality (Addison-Wesley, 1983) and reissued as Artificial Reality II (Addison-Wesley, 1991). These writings were the first to introduce the concepts of virtual reality and interactive art to the broad technological community. Additional ideas introduced in his writings include wireless wearable computers and displays, augmented reality, motion capture, the CAVE, the shared telecommunication space, and application areas such as embedded training, as well as consumer products like the V-Chip, TiVo™, and the EyeToy™.
His technologies have been used to implement innovative applications in gas flow visualization, teletutoring, range-of-motion therapy, and a spatial interface for the blind that includes speech input and output. He also repurposed his vision hardware to perform wide-area wireless tracking for head-mounted displays and developed a wireless olfactory display.
Dr. Krueger has given over a hundred invited talks worldwide and received awards from both the art and scientific communities for his work.