RE: VR Art vs culture

Claude L. Bullard (bullardc@source.asset.com)
Tue, 28 Mar 1995 13:06:42 -0500


[Gavin Nicol]

| One thing I often wonder about VRML, is what does it do for
| sight-disabled people? I guess this is just an extreme case of the
| above differences.

This is a debate that goes on in the kiosk industry daily. There
are handheld devices that scan a screen to pick up cues
and render them on a braille pin set. What matters is how the
cues are rendered: usually good old text, rendered as sound or
braille.
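
For the curious, a minimal sketch of that text-to-braille
step in Python (the dot patterns are the standard Grade 1
letters; the Unicode braille block stands in for a physical
pin set, and everything else is my illustration):

  # Sketch only: Unicode braille cells stand in for hardware
  # pins.  Dot n of a cell sets bit n-1 above U+2800.
  A_TO_J = {                   # dots for the letters a-j
      "a": [1], "b": [1, 2], "c": [1, 4], "d": [1, 4, 5],
      "e": [1, 5], "f": [1, 2, 4], "g": [1, 2, 4, 5],
      "h": [1, 2, 5], "i": [2, 4], "j": [2, 4, 5],
  }

  def cell(dots):
      return chr(0x2800 + sum(1 << (d - 1) for d in dots))

  BRAILLE = {c: cell(d) for c, d in A_TO_J.items()}
  for i, c in enumerate("klmnopqrst"):  # k-t: a-j plus dot 3
      BRAILLE[c] = cell(A_TO_J["abcdefghij"[i]] + [3])
  for i, c in enumerate("uvxyz"):       # add dots 3 and 6
      BRAILLE[c] = cell(A_TO_J["abcde"[i]] + [3, 6])
  BRAILLE["w"] = cell([2, 4, 5, 6])     # w is a latecomer
  BRAILLE[" "] = chr(0x2800)

  print("".join(BRAILLE.get(c, "?") for c in "hello world"))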

VRML may not do much for them simply as a geometric set.
The blind depend on the other senses for information. However,
spatial ordering is not lost on them: the auditory effects of
reflective surfaces, air currents on the skin, smell, etc., all
convey it. VRML won't support this, but then, VRML is, in SGML
terms, the notation for the 3-D space. A fully-multimedia
document in a complete cyberspace could account for the
other sensory aspects and provide the visually impaired with
the required cues. Yes, quite a complex application is required,
but I see no requirement for major technical breakthroughs.
This is where the HTML/WWW location models begin to
constrain developers. Applications capable of rendering a more
complex set of sensory dimensions must be capable
of *getting* data back from within the aligned applications. For
example, to provide auditory cues, software exists that
analyses the surfaces of a room and provides the depth
cues required by the human ear.

The auditory component must request that the spatial component
deliver its spatial dimensions, the contents of the room, the
types of surfaces (e.g., painted cinder block is reflective
while unpainted block is not, as the material is porous - a sad
fact discovered while building a few audio studios), the types
of available filters if any (equalizing a room depends on
"shooting" the room to find dead spots, placing a material such
as Sonex on the walls, and setting the EQ for the near-field
monitors), etc. This is complex indeed and, like the use of
visual cues, dependent on standard descriptions of environments.
Also, one should remember that the movement of bodies through a
room changes both ambient lighting and sound dynamically. I
won't deal with smell
as I know of no technology for rendering it and hope one
isn't fielded soon. Chemical stimulants could be added to
the computer and sprayed into the environment, but
yeachhh! We're back to patchouli oil and black light posters...
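
Back to the auditory request - to make it concrete, a minimal
sketch in Python (the interface, material names, and
coefficients are my illustration, not anything in VRML): the
auditory side takes the volume and surface list from the
spatial side and estimates reverberation with the standard
Sabine formula, RT60 = 0.161 * V / sum(area * absorption):

  # Rough mid-band absorption coefficients, for illustration.
  ABSORPTION = {
      "painted_cinder_block": 0.07,  # sealed, highly reflective
      "bare_cinder_block": 0.35,     # porous, soaks up energy
      "sonex_foam": 0.90,            # the treated, "shot" room
  }

  def rt60(volume_m3, surfaces):
      # surfaces: (material, area_m2) pairs handed back by the
      # spatial component of the document.
      total = sum(area * ABSORPTION[mat] for mat, area in surfaces)
      return 0.161 * volume_m3 / total  # Sabine reverberation time

  # A 5 x 4 x 3 m room: one wall treated, floor left bare.
  room = [
      ("painted_cinder_block", 62.0),  # other walls and ceiling
      ("sonex_foam", 12.0),            # one 4 x 3 m wall
      ("bare_cinder_block", 20.0),     # floor
  ]
  print(f"estimated RT60: {rt60(60.0, room):.2f} seconds")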

So while creating a VRML for geometric realization is
quite important and the task at hand, a fully *Real*
virtuality would accommodate stronger modes of
addressing and communication. VRML is a good
beginning but may be overtitled. As HyTime apps
begin to appear, integration of the notations may
become more common. It seems that each sensory
channel requires its own notation, and that a common
integrating application has to *play them* concurrently.
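
A sketch of what *playing them* concurrently might look like
(the channel names and renderer hooks are hypothetical): one
integrating application releases every notation's renderer in
lockstep on a shared tick:

  import threading
  import time

  def play(channels, ticks=3, rate_hz=10):
      # One renderer per sensory notation; all parties meet at
      # the barrier, then render one frame together.
      barrier = threading.Barrier(len(channels) + 1)  # +1 conductor
      stop = threading.Event()

      def run(name, render):
          while True:
              barrier.wait()       # wait for the shared tick
              if stop.is_set():
                  return
              render(name)

      threads = [threading.Thread(target=run, args=nr)
                 for nr in channels.items()]
      for t in threads:
          t.start()
      for _ in range(ticks):
          time.sleep(1.0 / rate_hz)
          barrier.wait()           # tick: release every channel
      stop.set()
      barrier.wait()               # final tick lets threads exit
      for t in threads:
          t.join()

  play({
      "geometry": lambda n: print(n, "-> redraw the 3-D scene"),
      "audio":    lambda n: print(n, "-> refresh the depth cues"),
      "haptic":   lambda n: print(n, "-> raise the braille pins"),
  })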

One step at a time. I think it should be noted that while
the impairment of blindness is problematic, what VR
can do for those with other impairments, such as quadriplegics
or autistic children, is phenomenal. Excellent research is
going on in this area.

Len Bullard