The Problem with Point Clouds

A screenshot of a naked composite point cloud of St. Trophime and its environs in the south of France. Courtesy of CyArk’s Archive.

Point Clouds as Collaboration Makers

Though beautiful and elusive, the point cloud is ultimately an awkward creature that does not fare well outside of its native file formats, in any form of media.

But what, you ask, is this ‘point cloud’ thing that everyone keeps bandying about in digital cultural heritage?

Contrary to a misconception we’ve run across, ‘point clouds’ and ‘The Cloud’ are very different things. A ‘point cloud’ is a digital recreation of a space, comprising geo-referenced points collected by a laser scanner (often LiDAR, short for Light Detection and Ranging), whereas ‘The Cloud’ is a buzzword sometimes used to describe networks of distributed computational services (storage, processing, whatever) but never point clouds. One can therefore store point clouds in The Cloud. And if flying on an airplane possessed of wi-fi, one can actually be playing with point clouds in The Cloud while in the clouds (otherwise known as cloud-ception). And with aerial LiDAR, one can also collect these point clouds from airplanes (ooo, new meta levels!).

Most of what I’m going to be babbling about in this and subsequent blogs, however, will be terrestrial laser scanning, i.e. laser scanning done from the ground. And for the sake of all those supra-detail-focused techies out there clamoring to know what kinds of scanners and software we’ve been using: most often I’m talking about the wonderful but outdated Leica ScanStation 2 or the fabulous Faro Focus 3D, and therefore Leica’s tricky but wonderful Cyclone software package and Faro’s speedier Scene system.
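
For the programmatically inclined, here’s a minimal sketch (in Python, with made-up coordinates) of what a point cloud boils down to once it leaves the scanner: a very large table of XYZ positions, usually with color or intensity tacked onto each point.

```python
import numpy as np

# A toy "point cloud": each row is one laser return, with an x, y, z
# position (in meters, in some site coordinate system) and an RGB color
# sampled from the scanner's camera. Real scans have tens or hundreds of
# millions of these rows, not three. All values below are invented.
points = np.array([
    [12.031, 4.872, 1.204],   # x, y, z of one point on a wall
    [12.034, 4.870, 1.301],
    [12.029, 4.869, 1.398],
])
colors = np.array([
    [182, 170, 151],          # r, g, b for the corresponding point
    [179, 168, 150],
    [185, 173, 155],
])

print(points.shape, colors.shape)  # (3, 3) (3, 3)
```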

Depending on the ability of the scanner, the time constraints of scanning, and the needed resolution, a point cloud can be super ‘dense’ and really, really high resolution, or it can be ‘sparse’ and feature fewer points. Cultural heritage terrestrial laser scanning, being a time-sensitive operation, often falls somewhere in the middle of the density scale, except where supra-high resolution is absolutely demanded and allowed within the time constraints of a site.

Typically, how nicely a combination of scans comes together to form a three-dimensional digital replica of a space or object depends both on how well it was put together (it’s literally a 3D puzzle that one gets to assemble; yay, puzzles!) and on the quality of the scanner. Most scanners collect a combination of images from a camera and the geo-referenced points via a laser and a time-of-flight calculation. The images are then draped over the points to make it all look even more life-like. Sounds cool, right? And it looks even cooler.
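
If you’re curious what ‘time of flight’ means in practice, here’s a rough sketch of the basic geometry (not any particular scanner’s firmware, and with invented numbers): the laser’s round-trip time gives a distance, and the mirror angles turn that distance into a point.

```python
import math

C = 299_792_458.0  # speed of light, in meters per second

def point_from_pulse(round_trip_s, azimuth_rad, elevation_rad):
    """Turn one laser pulse into an (x, y, z) point in the scanner's frame.

    round_trip_s  : time for the pulse to go out and bounce back, in seconds
    azimuth_rad   : horizontal angle of the beam when it fired
    elevation_rad : vertical angle of the beam when it fired
    """
    distance = C * round_trip_s / 2.0  # halved, because the pulse goes there and back
    x = distance * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = distance * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = distance * math.sin(elevation_rad)
    return (x, y, z)

# A pulse with a ~100 nanosecond round trip hit something roughly 15 m away.
print(point_from_pulse(100e-9, math.radians(30), math.radians(10)))
```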

But it’s kind of a pain in the ass to get it all to work right, and it can take absolute ages in the scanner’s proprietary software package. Yep, that’s right: you have to start off your data in the specific scanner company’s file format, piece it together there, and usually make media like videos and whatnot there too (if you’ve the patience). There is no standard point cloud format, and everyone’s software has different things it’s good at.
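
As a small illustration of the format shuffle, here’s a sketch using the open-source Open3D library, assuming the registered cloud has already been exported from the vendor software (Cyclone, Scene, etc.) into an open exchange format like PLY. The file names are hypothetical.

```python
import open3d as o3d

# Hypothetical export of a registered scan from the vendor software to PLY.
cloud = o3d.io.read_point_cloud("st_trophime_export.ply")
print(cloud)  # prints something like: PointCloud with 48000000 points.

# Re-save in another open format so other tools can pick it up.
o3d.io.write_point_cloud("st_trophime_export.pcd", cloud)
```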

But what’s worse is that in order to translate point clouds out of these software systems and into something that can be put online or published digitally (in video clips, for instance), the point clouds have to be severely decimated. Meaning that the data that gets shown off is a mere, tiny fraction of the data collected, typically only 1% of it.
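
For a sense of how brutal that decimation is, here’s a sketch in plain NumPy using random subsampling (real pipelines often use smarter voxel-based thinning) that knocks a cloud down to 1% of its points:

```python
import numpy as np

def decimate(points, keep_fraction=0.01, seed=0):
    """Randomly keep only a fraction of the points (default: 1%)."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(points) * keep_fraction))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

# A fake 5-million-point cloud, decimated for web or video use.
full_cloud = np.random.rand(5_000_000, 3)
small_cloud = decimate(full_cloud)
print(len(full_cloud), "->", len(small_cloud))  # 5000000 -> 50000
```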

As part of his graduate studies at the University of California, Irvine, and subsequently the University of California, San Diego, Vid began working with point clouds as a visualization challenge, initially with an emphasis on displaying neuroscience data. Because surely there ought to be some way to get all of this gorgeous data up and running faster, at its full capacity and resolution, so that it wasn’t just pretty data but data that lent itself to critical analysis. And eventually, the point buffer was born. Check out more details on the early stages of his point buffering project ‘Rapid Visualization for Cultural Heritage Diagnostics’ at the National Science Foundation’s website.

Meanwhile, I (Ashley) was working as a Content Creation intern at the fabulously amazing non-profit foundation CyArk. Dedicated to the digital data collection and archiving of cultural heritage monuments, CyArk was founded by one of the pioneers of laser scanning technologies, Ben Kacyra. It is one of the most progressive and profound combinations of technology, archaeology, and philanthropy, and I loved every minute I was there. But as I coaxed all of those gorgeous point clouds into shareable data, it was clear that more communication needed to happen between archaeological and archival groups and the computer scientists and engineers developing tools for them. And so when I had a choice of graduate schools, I opted to head towards the University of California, San Diego and its Center of Interdisciplinary Science for Art, Architecture, and Archaeology, which had snapped up Vid and other like-minded techno-optimistic individuals in its quest to gear technology towards art historical and archaeological purposes.

And so, per the problems with point clouds, Vid and I were united on what would become the first of many research quests and international adventures.
