Archaeological visualization is slowly realizing its spatial goals, but it still has cubic kilometers to travel before it approximates the phenomenologically convincing replication of a Star Trek holodeck.
Recent collaborations between the CISA3 visual groups and the Calit2 audio labs have added sound layers to some of the visualization demonstration systems, mainly voice-overs and explanations of the visuals on screen.
Long chatted about, but not yet accomplished, are dreams of recording the audio landscapes of archaeological sites in order to add the sensory perception of sound to the visual analytics. One of the huge problems with virtual archaeological visualizations, with viewing an archaeological landscape on equipment in a lab instead of on-site, is that the audience is too distracted by their bright shiny prettiness to analyze them thoroughly. The closer the experience comes to the real thing, the more likely realistic interpretations become. What's more, these missing phenomenological pieces are often the most interesting and relevant parts of the puzzle. Soundscapes are increasingly being recognized as important elements of ancient constructs. The landscapes of ancient groups, like those who lived in New Mexico's Chaco Canyon or those who cultivated the monumental complex around Avebury in southern England, were built around the soundscape potential of the natural environment.
To prepare the Calit2 audio lab to coordinate its spatial data collection and processing for future work with CISA3, as well as for its own upcoming projects in Chaco Canyon, I trained a group of its sensory specialists in the wonderful world of terrestrial laser scanning.
Working first with the Faro Focus 3D scanner to digitize the interior of the University of California, San Diego's Conrad Prebys Concert Hall, and then with a Leica ScanStation 2 in the great wilds of the backyard at Asgard (our house), the audio lab got a glimpse into the beautiful and troublesome realm of point clouds.