This article is the first in a short series about virtual reality (VR); it traces the origins of the term and the concept's journey from imagination to realization, and disambiguates the related term augmented reality (AR). Later articles will focus on VR in gaming, adopting AR as a lifestyle, and modern VR devices.
Virtual Reality Series
Part I: From Imagination to Realization
When Oscar Wilde said “Life imitates art,” he was not referring to science fiction; however, in an age of exponential technological growth and nearly ubiquitous smart devices, it is easy to draw a connection. Virtual Reality (VR) was first described in 1935, less than a decade after the invention of the television, by Stanley G. Weinbaum. In his short story, “Pygmalion’s Spectacles,” characters could use a pair of goggles to experience a holographic version of invented scenarios, but unlike modern VR headsets, such as Google Daydream, Weinbaum’s device also allowed the user to experience smell and touch.
The term “virtual reality” was coined three years later, in 1938, by French author and playwright Antonin Artaud in a collection of essays titled “The Theater and Its Double.” However, the term “virtual” has only been used to describe something rendered via software since 1959. The partial realization of Weinbaum’s invention was achieved in 1968 by Harvard professor Ivan Sutherland and his student Bob Sproull. Their device was not a pair of goggles but a head-mounted display (HMD) so heavy it had to be suspended from the ceiling, earning it the name “The Sword of Damocles.”
The Sword of Damocles was not only the first VR device but also the first augmented reality (AR) device; the difference being that the former replaces the user’s visual input with a virtual one, while the latter overlays graphics on the existing physical landscape. The world portrayed in The Matrix is more akin to VR, while the portrayal of the Terminator’s vision in the Terminator films more closely resembles AR. For a more modern example, readers who have played Pokémon Go may be familiar with how the device’s camera ‘sees’ the surroundings, including Pokémon that may be hiding in a tree, perched on top of a mailbox, or standing on the sidewalk.
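In software terms, the distinction can be thought of as per-pixel compositing: VR discards the camera's view entirely, while AR blends rendered graphics over it. The sketch below is purely illustrative; the pixel values and the `alpha` blend factor are invented for the example.

```python
# Illustrative sketch of the VR/AR distinction as per-pixel compositing.
# VR replaces the camera frame entirely; AR alpha-blends graphics over it.

def vr_composite(camera_px, virtual_px):
    """VR: the virtual scene fully replaces the user's visual input."""
    return virtual_px

def ar_composite(camera_px, overlay_px, alpha):
    """AR: graphics are blended on top of the real camera image."""
    return tuple(
        round(alpha * o + (1 - alpha) * c)
        for o, c in zip(overlay_px, camera_px)
    )

camera = (120, 130, 110)   # a pixel from the physical scene
graphic = (255, 0, 0)      # a rendered overlay pixel (e.g. a marker)

print(vr_composite(camera, graphic))       # reality fully replaced
print(ar_composite(camera, graphic, 0.6))  # blend: reality still visible
```

With partial `alpha`, the physical scene remains visible beneath the overlay, which is exactly why AR (unlike VR) can annotate a mailbox or a sidewalk rather than replace them.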
Despite VR and AR developing during the early 1970s, the technologies seemed relegated to marginal cultures, such as cyberpunks and recreational drug users, who viewed them as a vehicle for social change or as a new frontier and art form, respectively. VR gained more mainstream popularity through movies like Tron (1982), Brainstorm (1983), and, later, The Lawnmower Man (1992), and by the early 1990s CyberEdge and PCVR, two VR industry magazines, were in circulation.
The research boom of the 1990s was aided by the publication of Howard Rheingold’s non-fiction book Virtual Reality, which expanded interest in VR beyond sci-fi and computer enthusiasts. Unfortunately, because of the limits of available computing power and the exorbitant cost of producing such devices, VR remained largely theoretical. Research and development between the 1970s and early 1990s was conducted mostly by the medical, military, and automobile industries for simulation, design, and training purposes.
This article is the second in a short series about virtual reality (VR) and augmented reality (AR); having discussed the difference between VR and AR, as well as the origins of the concepts and technologies, this article will focus on the development of VR for training programs and a new frontier of amusement. Later articles pertain to contemporary VR devices and adopting AR as a lifestyle.
Virtual Reality Series
Part II: Training and Gaming
As discussed in the previous article, although the first head-mounted display (HMD), the “Sword of Damocles,” was invented in 1968, VR and AR remained the purview of military research and video game design until the research boom of the 1990s. Both industries focused on increasing the systems’ immersiveness and responsiveness, resulting in more realistic graphics, wearable tech, and the expansion of VR and AR into niche fields.
Recognizing the revolutionary possibilities of VR, the US Air Force commissioned Thomas A. Furness III in 1966 to develop visual flight simulators. Working out of Wright-Patterson Air Force Base in Dayton, Ohio, from 1966 to 1989, Furness developed advanced cockpit simulators for fighter aircraft. In 1982, his Visually Coupled Airborne Systems Simulator (VCASS) offered trainees a virtual environment where they could develop skills without real-world consequences for mistakes.
Beyond offering a safe environment for soldiers to learn, simulators also allow trainees to experience varied scenarios, landscapes, and situations. By using a virtual program, soldiers can repeat training exercises and therefore log more training hours with less downtime; simulators are also far cheaper and more environmentally friendly than in-air flight training.
Using flight or driving simulators allows a trainee to become familiar with the controls and handling of expensive, and potentially lethal, vehicles before being placed in the cockpit or behind the wheel. Simulations can also be used to train medical personnel to better perform complicated surgeries and become familiar with various procedures in a controlled, corrective, and repeatable manner.
The other major industry for VR and AR during the decades before the technology became relatively mainstream was video gaming. In particular, Atari played a key role by hiring Jaron Lanier and Thomas G. Zimmerman, who would later go on to co-found VPL Research in 1984. VPL Research is credited with developing early wearable tech, such as the DataGlove, which lets people manipulate virtual objects in three dimensions; the EyePhone, an HMD that tracks head and eye movements; and the DataSuit, a full-body outfit covered in sensors that measures arm, leg, and trunk movements.
Unfortunately, the technology remained prohibitively expensive for the everyday consumer, and VR for the layman was largely relegated to arcades. For example, in 1991 Virtuality released the first mass-produced, networked, multiplayer VR entertainment system under the same name. A Virtuality system cost roughly $75,000 and contained multiple player pods, headsets, and exoskeleton gloves, making it the first immersive VR experience available to the public. Other VR arcade systems were more widespread, such as driving and first-person shooter games, some of which incorporated haptic feedback to more fully immerse the player.
The next article in this series will delve deeper into contemporary — here meaning “since 2000” — VR and AR devices, particularly those developed for personal use.
This article is the third in a short series about virtual reality (VR) and augmented reality (AR); having discussed the origins of the concepts and the applications of the technologies, this article will focus on contemporary VR and AR devices on the market. The next and final article will be a case study on adopting AR as a lifestyle.
Virtual Reality Series
Part III: Contemporary VR and AR Devices
Today’s commercial VR devices are either mobile or tethered; that is, they either work off of a smartphone or require a physical connection to a computer or game console. Each option has its advantages and drawbacks, such as portability versus graphical fidelity.
Several devices are currently on the market, falling into one of the two categories; Samsung Gear VR and Google Daydream View, for example, are mobile, while the Oculus Rift and PlayStation VR are tethered.
Mobile headsets are essentially two lenses in a cardboard or plastic shell with a slot for a smartphone. The software splits the screen into two nearly identical images, one per eye and each rendered from a slightly offset viewpoint, while the lenses bend the light so that the user perceives a 3D landscape. Because these headsets contain no hardware of their own, they tend to be inexpensive.
Additionally, because the smartphone serves as both the display and the VR system, the headset can be worn anywhere. However, smartphones are not specialized for VR, so their graphics cannot match those of tethered VR devices.
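The screen-splitting described above can be sketched with a toy pinhole-projection model: the scene is rendered twice, once per eye, with the virtual cameras offset horizontally by the interpupillary distance. All numbers here (IPD, focal length, screen size) are hypothetical, and real mobile-VR renderers also correct for lens distortion.

```python
# Toy sketch of side-by-side stereo rendering on a phone screen.
# Hypothetical parameters; real renderers also apply lens-distortion
# correction matched to the headset's optics.

IPD = 0.063           # a typical adult interpupillary distance, in meters
HALF_W, H = 640, 720  # each eye gets one half of a 1280x720 screen

def project(point, eye_x, focal=700.0):
    """Pinhole projection of a 3D point (x, y, z) for a camera at
    (eye_x, 0, 0) looking down +z; returns pixel coords in one half."""
    x, y, z = point
    u = HALF_W / 2 + focal * (x - eye_x) / z
    v = H / 2 - focal * y / z
    return (u, v)

point = (0.0, 0.0, 2.0)           # a point 2 m in front of the user
left  = project(point, -IPD / 2)  # image for the left half of the screen
right = project(point, +IPD / 2)  # image for the right half

# The horizontal disparity between the two projections (focal * IPD / z)
# is what the brain fuses into depth once each lens magnifies its half.
disparity = left[0] - right[0]
```

Note that the disparity shrinks as the point moves farther away (larger `z`), which is why distant objects look flat even in stereo.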
Tethered devices are able to offer a more complex experience by relegating the computing and processing to the VR or gaming console. Tethered devices also tend to offer better head-tracking and less image lag thanks to the built-in motion sensors and camera(s).
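As a toy illustration of the motion-sensor side of head tracking, one can integrate gyroscope rate samples over time to estimate where the user is looking. Real headsets fuse gyroscope, accelerometer, and camera data to prevent drift; the sample rate and values below are invented for the example.

```python
# Toy sketch of one piece of head tracking: integrating gyroscope
# yaw-rate samples to estimate head orientation. Real systems fuse
# this with accelerometer and camera data to correct drift.

def integrate_yaw(samples, dt):
    """samples: yaw rates in degrees/second; dt: seconds per sample.
    Returns the accumulated yaw angle in degrees."""
    yaw = 0.0
    for rate in samples:
        yaw += rate * dt
    return yaw

# 100 samples at 1000 Hz of a steady 90 deg/s head turn -> about 9 degrees
samples = [90.0] * 100
estimated_yaw = integrate_yaw(samples, 0.001)
```

Because each sample carries a small error that accumulates, pure integration drifts over time; this is one reason tethered headsets add external cameras for absolute position and orientation fixes.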
Beyond having to be physically connected to a console or PC, tethered devices are also more expensive than mobile units because they are more than just a shell. Users must own a PlayStation 4, for example, and then buy the PS VR for about $400, plus any add-ons and accessories, while PC-based platforms require powerful computers.
- Samsung Gear VR ($85) offers an on-board touchpad and a resolution of 2,560 by 1,440 pixels, but the refresh rate is dependent on the phone.
- Google Daydream View ($50) is the least expensive option but the resolution and refresh rate depend on the phone.
- Sony PlayStation VR ($400) includes external visual positioning, a field of view of 100 degrees, and a refresh rate of 120 Hz, but the resolution is 960 by 1,080 pixels (per eye).
- HTC Vive ($800) includes camera and external motion tracking with 110 degrees of visibility and 1,080 by 1,200 pixel resolution (per eye), but is run off of a PC and is the most expensive VR package.
- Oculus Rift ($700) includes external visual positioning, a field of view of 110 degrees, and 1,080 by 1,200 pixel resolution (per eye), but the refresh rate is 90 Hz and it requires the Oculus Touch or Xbox One Gamepad.
This article is the fourth and final in a short series about virtual reality (VR) and augmented reality (AR); having discussed the origins of the concepts, the applications of the technologies, and contemporary devices, this article will focus on adopting AR as a lifestyle.
Virtual Reality Series
Part IV: The Evolution of Mann
Steve Mann — Professor at the University of Toronto’s Department of Electrical and Computer Engineering and Chief Scientist of the Rotman School of Management’s Computer Design Lab — is known as the “Father of AR” for good reason: he has been living in what he calls “computer-mediated reality” for over thirty-five years.
Mann’s current HMD, or what he refers to as “computerized eyewear,” is known as EyeTap Generation 4 and is physically attached to his skull, such that special tools are required for its removal. Because of this, Mann has been called “the world’s first cyborg” by the Canadian press, though he himself dismisses the term as too vague.
Mann — who has a doctorate in Media Arts from MIT, a Bachelor of Science degree, and both a Bachelor’s and a Master’s degree in Engineering — is Founder and Director at both the FL_UI_D Laboratory and the EyeTap Personal Imaging Lab. On the FL_UI_D website, Mann describes the group as one that “designs, invents, builds and uses wearable computers and digital prosthesis in ordinary day-to-day settings.”
Mann explains why he prefers the term “mediated reality” in an article published in IEEE Spectrum:
My computerized eyewear can augment the dark portions of what I’m viewing while diminishing the amount of light in the bright areas … For example, when a car’s headlights shine directly into my eyes at night, I can still make out the driver’s face clearly. That’s because the computerized system combines multiple images taken with different exposures before displaying the results to me… I say that it provides a “mediated” version of reality.
Mann describes the “less interesting” AR, by contrast, as “the overlay of text or graphics on top of your normal vision,” and points out that this often makes eyesight worse, not better, by “obscuring your view with a lot of visual clutter.”
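The exposure-combining Mann describes can be sketched as a simple exposure fusion: the same scene is captured at several exposures, and each pixel is weighted toward well-exposed mid-range values so that neither headlight glare nor deep shadow dominates. This is a generic toy illustration, not Mann's actual EyeTap algorithm.

```python
# Generic toy exposure fusion (not the EyeTap's actual algorithm):
# weight each pixel sample toward well-exposed mid-range values so
# clipped highlights (e.g. headlight glare) contribute almost nothing.

def weight(v):
    """Favor mid-range 8-bit values; clipped highlights/shadows get ~0."""
    return max(1e-6, 1.0 - abs(v - 127.5) / 127.5)

def fuse(exposures):
    """Weighted average of the same pixel across several exposures."""
    total_w = sum(weight(v) for v in exposures)
    return sum(weight(v) * v for v in exposures) / total_w

# One pixel in a headlight region: blown out (255) in the long
# exposure, usable (140) in the short one.
fused = fuse([140, 255])
# fused stays near the well-exposed value instead of saturating
```

The same weighting lifts detail out of dark regions, which matches Mann's description of augmenting dark portions of a scene while diminishing the bright ones.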
Mann believes that once a person has experienced day-to-day life with computerized eyewear, they’ll understand the numerous advantages it grants and will be reluctant to give up their new abilities. For example, Mann explains that his EyeTap includes an infrared camera capable of detecting subtle heat signatures, which allows him to “see which seats in a lecture hall had just been vacated, or which cars in a parking lot most recently had their engines switched off.” Additionally, the EyeTap can enhance text, making it easy to read signs that would otherwise be too far away to discern or that are printed in foreign languages.
In 2013, Google released its own version of the EyeTap, called Google Glass. The prototype was the first widely known, commercialized computerized eyewear, though its development followed more than a decade after Mann’s first-generation EyeTap. Despite the many strides made by Mann over thirty years, Google Glass failed to incorporate several features that reduce eyestrain in the wearer.
Mann is expanding his influence and spreading his knowledge of wearable technology through work with his companies and with the IEEE (Institute of Electrical and Electronics Engineers). Decades ahead of the curve, Mann’s innovations continue to break down the barriers between man and machine.