Layered Realities at the Theoretical Archaeology Group 2013

In May of 2013, Vid, John, David, and I presented the following project via teleconference to the combined Theoretical Archaeology Group conference. Broadcasting from our home base in the Visualization Room (V-Room) at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego, we joined our peers at the multinational event held simultaneously at the University of Southampton in the United Kingdom and the University of Chicago in Illinois.

Expanding Layered Realities: Cognitive Annotated Imaging Systems for Accurate Archaeological Visualizations and Augmentations of Space and Time in 3D Immersive Virtual and Physical Environments for Collaborative Research and Public Dissemination

Ashley M. Richter, Vid Petrovic, David Vanoni, John Mangan, Tom Wypych, James M. Darling, Shelby Cohantz, Falko Kuester, and Thomas E. Levy


With current technology and networking capacity, archaeological visualization needs to take center stage within the discipline: not only for the cross-referenceable, accessible archive it could create for preservation purposes, but for its ability to make the past transparently accessible and relevant to contemporary and future societies as an omnipresent, interactive feature within their own temporal space. At the Center of Interdisciplinary Science for Art, Architecture, and Archaeology (CISA3) at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego, new systems are being developed to streamline the scientific, replicable collection, processing, and auto-publishing of archaeological data for use in dynamic visualization environments and out in the real world. Our system utilizes a series of integrated diagnostic imaging systems (terrestrial LiDAR, drone aerial photography, multispectral imaging, Panoscan, GigaPan CAVEcams, etc.) to collect visual data, which together form a tapestried scaffold of raw point data. The combined visualization data is displayed in our immersive 3D Cave environments, for which we are building a cognitively minded annotation interface that displays layers reflecting every level of data collection and analysis for visual collaboration. The system created for the Caves is mirrored in a linked augmented reality application for cultural heritage data known as ARtifact, which can display the same explorable layers of annotation and visualization available in our labs via Android tablets and phones on archaeological sites and at cultural heritage monuments. Archaeological visualization in our system is intended to be phenomenologically realized on several strata, creating collaborative research space out in the real world that embraces public input, as well as enhancing the potential of shareable visual analytics in our research labs.
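For the curious, here is a rough sketch of the layered-annotation idea at the heart of the abstract: annotations grouped into named layers over the point data, exported in a form that both an immersive Cave viewer and a mobile AR client could consume. This is a simplified, hypothetical illustration, not the actual CISA3 or ARtifact code; every class and field name below is invented for the example.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Annotation:
    """A note attached to a region of the point-cloud scaffold."""
    author: str
    text: str
    center: tuple   # (x, y, z) world coordinates of the annotated region
    radius: float   # spherical extent of the region, in meters

@dataclass
class Layer:
    """One explorable layer, e.g. 'LiDAR', 'multispectral', 'field notes'."""
    name: str
    source: str     # instrument or analysis stage that produced the layer
    annotations: list = field(default_factory=list)

class AnnotatedScene:
    """Container shared, conceptually, by the Cave viewer and the AR client."""
    def __init__(self):
        self.layers = {}

    def add_layer(self, layer):
        self.layers[layer.name] = layer

    def annotate(self, layer_name, annotation):
        self.layers[layer_name].annotations.append(annotation)

    def to_json(self):
        # A flat JSON export either client could fetch and render.
        return json.dumps(
            {name: asdict(layer) for name, layer in self.layers.items()},
            indent=2,
        )

# Example: one LiDAR layer with a single collaborative annotation.
scene = AnnotatedScene()
scene.add_layer(Layer("LiDAR", source="terrestrial laser scan"))
scene.annotate("LiDAR", Annotation(
    author="amr",
    text="Possible wall foundation",
    center=(12.4, 3.1, 0.8),
    radius=0.5,
))
print(scene.to_json())
```

The design point the sketch tries to capture is that the layers, not the rendering, are the shared artifact: the same JSON could drive the Cave walls in the V-Room or an Android tablet standing on the site itself.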

Banner Image: The holographic display system from the science fiction film Prometheus, which, along with its flying LiDAR balls, is one of the best harbingers of the future of the kind of surveying technology Vid and I have been imagining/discussing/building…despite the film not being the best, or even fifth best, entry in the Alien series.
