August 15, 1994
SECTION: Vol. 8; No. 12; Pg. 3; ISSN: 0889-9762
HEADLINE: Siggraph: virtual-reality applications and immersion experiences; Assn
for Computing Machinery's Special Interest Group on Computer Graphics and
Interactive Techniques 1994 conference and exhibition highlights
BYLINE: Rossello, Rosanne
BODY:
Sidestepping our normal realm of graphic arts and desktop publishing trade
shows, we spent a week (July 24-28) at the 21st annual Siggraph conference and
exhibition in Orlando, FL. At first glance, you might have mistaken Siggraph for
a scaled-down version of Woodstock, with sandal-clad, ponytailed attenders who
preferred backpacks and tie-dyes to briefcases and business suits. The
atmosphere was laid back, yet full of anticipation for all things virtual.
The annual event, sponsored by the Association for Computing Machinery's
Special Interest Group on Computer Graphics and Interactive Techniques, is
mainly structured around a plenitude of panels and courses, which have always
been at the heart of Siggraph shows. A nice offering for first-time attenders
was a fundamentals seminar, complete with a course book containing the slides
used in the presentation. It provided a helpful three-hour, nonstop crash course
in the software, hardware and application terms you're likely to come across.
Our coverage. Because of its expansive scope, it was difficult to cover
everything at the show. However, we tried to get around enough to taste a little
of everything. We came away with our eyes widened (or perhaps sore from wearing
3d glasses and vr head mounts for the better part of five days) and our heads
spinning.
Experiencing is believing. The problem with reporting this kind of event is
that it is difficult to describe what is meant to be experienced. Imagine trying
to describe to someone who has never eaten a piece of cake what it tastes and
feels like. A session called "Computer Graphics: Are We Forcing People to
Evolve?" touched on this concept. The panel included Brenda Laurel of Interval
Research, and Leonard Shlain and Terence McKenna, both independents. The core
of the discussion centered on the idea that we have "maxed out" our language;
there are things now that words alone cannot fully describe. There is nothing
new about that situation, we would point out. There have always been concepts
that words can't describe adequately. In assigning words to an idea or
experience, some meaning is lost, resulting in ambiguity.
The panelists predicted that the language of the future will be visual, which
will affect how and what people think. Our concept of a literate person may be
determined by how subtle a shade of pink we can render. Literacy will be shaped
by the acts of sharing and imagination. Here's where vr fits in, according to
this theory: Vr will take nonfeeling equipment (computers) and provide the
embodiment needed to create a natural experience. It will allow us to show each
other our dreams, turning the human body inside out so that the soul may be free
to share its images. An iconic language will address the mind and the body, thus
allowing us to understand each other better. By distributing images, we will
promote a universal language.
But we cannot possibly know where this technology will take us. As McKenna
pointed out, "Technology always exceeds the ideas of its inventor." In other
words, we don't ever know all the consequences of technology.
Of course, the virtual worlds being created now are far from perfect.
Processing speeds still limit how realistically images can be rendered, but the
immersion itself is remarkably effective. However, one obstacle you must
overcome to truly appreciate these new developments is the embarrassment
involved in trying out these new gizmos. You must ignore the fact that several
dozen people are watching you as you jump, crawl and move across the floor with
a helmet on your head like a madman chasing his shadow. Forgetting this, and
letting go of inhibitions about how you will look sprawled on a waterbed or in a
motion chair with a black bag over your head, you can experience a bit of the
beyond -- some of what every science-fiction fan has been reading about for
years: the future. (Or, perhaps, the present.)
Siggraph was not primarily a show of new-technology introductions, but one of
how people are using today's technology to create fuller immersion experiences.
The imagination and dedication that went into these new art and educational
forms are remarkable. Thanks to this technology, we can now experience what
it's like to swim like a dolphin or run like a wolf across the frozen tundra of
the Arctic.
(We don't necessarily agree with all of the claims made for virtual reality.
In fact, we'll be amazed if even 10% of this year's "miracles" are still around
in two years. But that doesn't stop us from appreciating the r&d behind the
hype and from trying to anticipate which parts of it might someday apply to our
lives.)
The Special Venues
Aside from the panels, papers and courses, Siggraph also had several special
venues for the vr- and interactive-curious: The Edge, Sigkids, the Art & Design
Show, the Electronic Theater and Vroom. The goal was to show how technology
interacts with, and is affected by, the human spirit: specifically, how people,
through their capacity to imagine and create, can develop new ways to interact
with technology.
Life at The Edge
The Edge featured more than 25 explorable interactive exhibits by scientists,
artists and educators. If you weren't timid about trying some of them out (and
most people weren't), you might have waited up to two hours to take them for a
test ride. (It was almost like waiting for rides at Disney World.) But it was
worth the wait to swim with dolphins and to be propelled into a spinning virtual
world, clutching a motion chair for dear life.
However, some of the less obtrusive exhibits turned out to be the most
exciting. And if you were so moved, you could get on the Video Soapbox
and record your feelings about the expo and what you experienced, as well as
view what others said.
Teaching devices. The Personal Communicator, a communication environment for
deaf children, was developed by the Comm Tech Lab at Michigan State University.
Words entered via the keyboard are read back to the user in sign language. Users
also can click on objects in a room and learn how to sign them. This project,
which will become an actual product, will make learning and understanding sign
language easier not only for the hearing-impaired, but also for those wanting to
communicate with them.
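At its heart, such a tool is a lookup from vocabulary words to sign-language
video clips. Here is a minimal sketch in Python (our own illustration, with
invented names and file paths; the Comm Tech Lab did not describe its actual
implementation):

    # Hypothetical word-to-sign lookup, loosely modeled on the Personal
    # Communicator's described behavior. All names and data are invented.
    SIGN_CLIPS = {
        "hello": "signs/hello.mov",
        "book": "signs/book.mov",
        "chair": "signs/chair.mov",
    }

    def clips_for_phrase(phrase):
        """Return the sign clips to play back for a typed phrase."""
        clips = []
        for word in phrase.lower().split():
            # Unknown words get None; a real tool might fall back to
            # fingerspelling the word letter by letter.
            clips.append(SIGN_CLIPS.get(word))
        return clips

    print(clips_for_phrase("Hello book"))  # ['signs/hello.mov', 'signs/book.mov']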
A similar teaching tool, developed by the folks at Berlitz, is called English
Discoveries. It helps Spanish-speaking students learn English as a second
language. Students learn through recording their voice, listening to it and
comparing it with the computer's version, as well as through interactive games
and study sessions.
Medical applications. Virtual reality is rapidly being accepted by the
medical community as a way to train physicians and to heal the sick. David
Warner, a medical neuroscientist at Loma Linda University Medical Center, has
used Immersive Systems' Meme software to give a seven-year-old quadriplegic girl
the ability to experience motion visually through vr.
GMD (Gesellschaft für Mathematik und Datenverarbeitung) has developed the
Responsive Workbench, which allows you to view objects, such as the human body
or a building, from different perspectives. An actual-size model of the human
skeleton can be viewed, and users can study a beating heart and even remove it
from the rib cage.
WaxWeb Mosaic-moo. Another interesting exhibit was the WaxWeb
network-delivered hypermedia project based on David Blair's electronic film, Wax
or the Discovery of Television Among the Bees. (In 1993, Wax was sent out as a
relatively high-bandwidth multicast over the experimental multimedia backbone
known as the MBone.) It combines one of the largest hypermedia narrative databases on the
Internet with a unique authoring interface that allows Mosaic or moo users to
make immediate, publicly visible hypermedia links to the main document.
WaxWeb consists of more than 900 pages of hypertext (with 5,000 links to data
including index links). English, French and Japanese (Kanji) text versions of
the film's monologue can automatically be inserted into the hypertext. Mosaic
users will have access to the hypermedia portions of the document, which contain
the entire film embedded as 1,500 color stills, 500 mpeg video clips and 2,000
aiff audio clips, including the soundtrack in English, French, German and
Japanese.
Users can attach links to any word, add comments to any page, create their
own pages and place bookmarks on others. Any and all changes are immediately
visible to other viewers, creating a dynamic html document. Once registered on
the system, users' names are automatically attached to any notes they may have
created so that others might know who provided the information.
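To make the mechanism concrete, here is a minimal Python sketch of the
annotation model as we understand it (our own illustration, not WaxWeb's actual
code; the names and sample url are invented): every change is stored in a
shared space with its author attached, and is rendered into the html that other
readers see.

    # Sketch of WaxWeb-style shared annotation: links are publicly visible
    # the moment they are made, and each carries its author's name.
    annotations = []  # the shared, publicly visible store

    def add_link(author, page, word, url):
        """Attach a link to a word on a page, attributed to a registered user."""
        note = {"author": author, "page": page, "word": word, "url": url}
        annotations.append(note)

    def render_word(page, word):
        """Render a word as html, wrapped in any link a reader has attached."""
        for note in annotations:
            if note["page"] == page and note["word"] == word:
                return '<a href="%s">%s</a><!-- added by %s -->' % (
                    note["url"], word, note["author"])
        return word

    add_link("suzanne", "wax-042", "bees", "http://example.org/bees.html")
    print(render_word("wax-042", "bees"))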
The implications of this functionality, which was written by Tom Meyer with
Suzanne Hader, are limitless. It could be used as a new form of moviemaking as well as a
new type of publishing. Rather than serving as a method to transmit data, like
Adobe Acrobat, it provides a dynamic interface for communicating ideas.
Blair is also working with the Labyrinth Group (see below) to use the Virtual
Reality Markup Language (vrml) to create objects that are rendered as you
walk through a room. Html hot links can be attached to any object in the room to
pull up text or additional images. Blair will use this to map the images from
his film in 3d to add a "walk-through" experience to the film.
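The idea, reduced to a sketch (ours, in Python rather than actual vrml syntax,
with invented names): every object in the rendered room can carry an optional
hot link, and picking the object makes the browser follow it.

    # Conceptual sketch of vrml-style object anchors: clicking an object
    # while walking the room follows its attached html hot link.
    class RoomObject:
        def __init__(self, name, geometry, link=None):
            self.name = name
            self.geometry = geometry  # placeholder for the 3d model data
            self.link = link          # optional html hot link

    def pick(obj):
        """Called when the user clicks an object in the room."""
        if obj.link:
            return "fetch " + obj.link  # pull up text or additional images
        return "nothing attached to " + obj.name

    door = RoomObject("door", geometry="door-model",
                      link="http://example.org/door-history.html")
    print(pick(door))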
A cross-platform cd-rom of the database will be available, which will allow you
to read the media locally while interacting with the dynamic database on the
moo.
The video is available for about $30. The cd will be priced between $30 and
$35.
World Wide Web users can reach WaxWeb Mosaic-moo at this universal resource
locator (url): http://bug.village.virginia.edu:7777/. Moo people can reach it at
bug.village.virginia.edu 7777.
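If the server is still answering, the Web side can also be checked from any
scripting language rather than from Mosaic; a minimal Python fetch (our
example, not part of WaxWeb) looks like this:

    # Fetch the WaxWeb front page over http. The moo side is a plain
    # telnet-style connection to bug.village.virginia.edu, port 7777.
    from urllib.request import urlopen

    with urlopen("http://bug.village.virginia.edu:7777/") as response:
        print(response.read(200))  # the first few bytes of hypertext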
David Blair, PO Box 174, Cooper Station, New York, NY 10276; phone/fax (212)
228-1514, internet artist1@bug.village.virginia.edu.
Kids lead the way to the future
The Sigkids exhibit, housed in the same area as The Edge, gave us a look at
what education could be like in the future. Kids could sign up to participate in
any or all of the 20 exhibits and learn everything from 3d modeling to how to
take apart a '486 and put it back together. This venue was just as popular with
the adults -- although sometimes ten-year-olds mastered the technology with
ease, while the adults didn't know where to begin.
A virtual tour. Using a vrml-based browser, Labyrinth ties together WWW and
vr into one environment. As its first project, the Labyrinth Group has rendered
rooms from the Holocaust Memorial Museum and, using the vrml browser, provides
access to these images in the database via WWW. The images are rendered in real
time as you move through the room. The environment is navigated with a
trackball or mouse. Objects in the room are linked to textual information and other
environments.
The technology can be applied to other areas, such as creating virtual
libraries that can be linked to provide a vast information resource. The project
will be available on the Internet in October.
Labyrinth Group, 45 Henry St. #2, San Francisco, CA 94114; phone (415)
621-1981.
Museum of the future. Another virtual walk-through exhibit was created for
the Wakayama Prefectural Museum in Japan by Cyber Network. The exhibits in the
museum were photographed with a motion-control camera that provided a 360-degree view
of the exhibit. These images were then combined with animation, audio, still
images and hyperlinked text to produce the piece.
Users walk through various ages in Japan's history and visit the dwellings of
its inhabitants. You encounter many virtual people who, when you touch them,
tell you about their lives. The actual version produced for the museum uses a
touch screen to travel through the exhibits, but it works equally well with a
mouse. Common desktop programs, including Adobe Illustrator, Photoshop and
Premiere, were used to create the Mac-based piece. Proprietary software was used
to create a full-screen QuickTime movie.
The publishers of tomorrow. Another Sigkids exhibit, called Time Travel, gave
visitors the opportunity to create (using Kid's Studio by CyberPuppy Software)
their own stories, complete with pictures, text and sound. Alicia Van Borssum of
the Henry W. Longfellow School in Rochester, NY, created a curriculum to teach
students about story conceptualization and writing. With Macromedia Director,
Backyard Multimedia created a cd that included the students' stories, a
presentation documenting their creation and Van Borssum's curriculum notes. The
cd-rom is available from Backyard Multimedia for $7.95.
At the show, the kids' stories were converted into QuickTime movies when
completed. The kids took home both printed and digital copies of their
masterpieces.
Backyard Multimedia, 407 Pearl St., Rochester, NY 14607; phone (716)
473-0389.
The Art and Design Show
The Art and Design Show was a juried exhibition of 93 framed works,
sculptures, animation, installations and site-specific artwork featuring the use
of computers to generate art. The show even extended into the restrooms with a
work by Tim Binkley from the School of Visual Arts, aptly called "Rest Rooms."
Participants in the women's room could communicate with those in the men's room.
A video camera mounted on a monitor served to put an end to the mystery of
what's on the other side by allowing members of the opposite sex to see some of
each other's environment. Participants could also send messages to each other by
drawing on the touch-sensitive screen.
The piece was also designed to illustrate the breakdown of privacy, which is
rapidly becoming an issue we have to face as technology becomes more invasive.
It also touches on the subjects of segregation and gender equality. At the very
least, it may help to solve the age-old problem of the line for the ladies' room
being so much longer than the line for the men's room.
Other works in the show, especially the animation, took advantage of visual
and auditory stimulation. The music associated with many pieces almost replaced
the spoken word in its ability to evoke emotions and add depth to the
accompanying action.
If you still don't perceive computer graphics as art, consider this: Like
traditional art, computer art can be stolen, and it was -- right off the exhibit
wall of the Art and Design show.
Full-body immersion experiences
Vroom (short for virtual-reality room) consisted of three caves (cave
automatic virtual environment), each costing $100,000. (Funding was provided by
the National Science Foundation, the Advanced Research Projects Agency, the U.S.
Department of Energy and NASA.) A cave is a multiperson, 10x10x9-foot,
high-resolution, 3d, video and audio environment developed by the Electronic
Visualization Laboratory at the University of Illinois. It consists of three
rear-projection screens for the walls and a down-projection screen for the
floor.
As viewers move around wearing StereoGraphics' lcd stereo shutter glasses
(which separate the alternate fields going to the eyes), the images move with
the viewers and surround them. An image's direction and its positioning are
determined by the person wearing a tracking device.
Power behind the display. Electrohome Marquee 8000 projectors throw
full-color workstation fields (1,024x768) at 96 hz onto the screens, giving
approximately 2,000-linear-pixel resolution to the surrounding composite image.
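The arithmetic behind those figures appears to work out as follows (our
back-of-envelope reading, not the laboratory's): with field-sequential stereo,
the 96-hz field rate splits into 48 images per second for each eye, and two
adjacent 1,024-pixel-wide walls in the viewer's field of view add up to roughly
2,000 pixels across the composite image.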
The processing power required to run these displays is worthy of mention.
Silicon Graphics' ChallengeArray configuration, from the company's
Supercomputing Systems Division, supplied the power behind several of the
applications in the Vroom exhibits. Two of the three caves were driven by a
72-cpu ChallengeArray, which provides more than five gflops (five billion
floating-point operations per second) to allow interactive steering of complex
simulations.
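(By our back-of-envelope division, five gflops spread across 72 cpus comes to
roughly 70 mflops per processor.)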
The display in each of the caves was controlled by 12-cpu Onyx graphics
supercomputers from SGI. Two of these were connected to the ChallengeArray using
HIPPI connections operating at 100 mbytes per second. Each Onyx has three
RealityEngine2 rendering engines with stereoscopic output, which work together
to allow the user to walk around within the data.
The caves. Vroom exhibits focused on four areas: the environment, medicine,
manufacturing and education. The simulations and immersion experiences
illustrate how scientists and engineers are solving real-world problems through
virtual reality.
Some of the more interesting exhibits included interactive molecular
modeling, which showed how drug designers can interact with molecular models,
such as hiv, by docking a drug molecule to its molecular receptor. For
those more into machines than medicine, another simulation allowed you to drive
a new piece of equipment designed by Caterpillar to test it for design defects
before it is built.
One endeavor, "Detour: Brain Deconstruction Ahead," allowed viewers to
experience what artist Rita Addison went through after a debilitating car crash.
The simulation takes you from a time before the accident, illustrating how she
saw things in her mind, to a time after the accident when she was struggling
with brain damage. In addition to the emotional empathy it evoked, it also
suggested a new way for patients and health-care professionals to communicate.
The project was a collaboration with Marcus Thiebaux of the University of
Illinois and David Zeltzer, Ph.D., of the Massachusetts Institute of Technology.
Boom, boom, boom. Another part of the Vroom exhibit was the boom (Binocular
Omni-Orientation Monitor) room, which featured three vr displays -- RealEyes: a
system for visualizing large physical structures (Boeing Computer Services), The
Virtual Windtunnel (NASA Ames Research Center) and MUSE: Multidimensional,
user-oriented, synthetic environment (Sandia National Laboratories). These
exhibits took advantage of the Fakespace binocular units that help create the
virtual experience. The systems allow users to fly through the passenger
section of an aircraft, see wind as it deflects off the wings of the Shuttle or
view the inside of an explosion.
Scaled-down version. If you didn't get a chance to see all the Vrooms, you
could wander over to the Learn More area, where documentation of the Vroom
applications could be viewed on SGI workstations running NCSA Mosaic. On display
were photographs, 3d animations and even some cave simulations. The interface
will be posted on the Internet for mass viewing. (But you'll need an SGI
workstation to run the cave simulations.)