Capturing Large-Scale Artifacts via Reflectance Transformation Imaging with a Drone
Abstract

Reflectance Transformation Imaging (RTI), also known as Polynomial Texture Mapping, was developed to create textures with self-shadowing. However, it is most widely used by the archeological community to collect information about artifacts' surfaces via specular reflection, the bright highlights created when light reflects off objects. To capture RTI, several photos are taken of an object with a light placed in different positions, each at a consistent radius from the center of the object's face. Currently, this data is captured either by using a string between the light and the artifact's center to maintain a consistent distance or by placing a dome of lights over the object with each light in a fixed position. These methods limit the size of artifact that archeologists and other cultural heritage researchers can capture: larger objects must be captured in smaller subsections, which increases the difficulty and duration of the capture process. In our work, we introduce a new method that uses a drone to hold the lights in position, allowing researchers to capture the detailed specular information and inner-object shadows of large artifacts.
Keywords: Reflectance Transformation Imaging, Drone Capture, Polynomial Texture Mapping
Reflectance Transformation Imaging (RTI) has revolutionized the way in which archeologists and other cultural heritage researchers are able to study detailed surface indentations. While 3D scanning and photogrammetry create a 3D representation of a surface as a mesh or 3D point cloud, RTI creates a 2D mathematical representation of the surface texture. According to Caine and Magen (2017), the “practicality, accuracy and immediacy” of RTI make it a useful technique for scanning cultural heritage artifacts. As an alternative to 3D scanning, researchers can view details of an artifact that are difficult to see in photographs or with the naked eye. However, the current methods for capturing data for RTI make it difficult to capture large objects. To perform RTI, researchers must take photographs of an artifact with a light in different positions to gather specular information for every part of the object’s face. They must then either use a string to keep the light a consistent distance from the object (CHI, 2013) or have access to an expensive dome of lights; the dome alleviates the tedium of the string method but limits the size of the object, as the lights must be positioned in a dome formation with a radius of two to three times the length of the object’s facial diagonal (Piquette, 2011).
We propose a novel method for capturing RTI with a drone, enabling the gathering of detailed surface information of larger artifacts. Using a drone, we can position the lights an arbitrary distance from the object without scaffolding or the use of a fixed dome. To validate our method, we collected data on the Santa Cruz Surfer Memorial Statue, located at the edge of a cliff in Santa Cruz, California, using a commercial Lume Cube Lighting Kit mounted to the drone, with a brightness of 750 lux at one meter. The statue is 18 feet tall, so we concentrated the RTI capture on the body of the human figure, which has a six-foot diagonal. The statue’s height and proximity to the cliff would make it difficult and even dangerous to use the string method. With our method, we were able to capture the statue quickly and effectively.
Although our initial tests validate our approach, the use of a drone is not without challenges. For example, programming and/or flying a drone requires specialized expertise. In our tests, we manually controlled our entry-level Phantom 4 drone. We used a garden hose to mark the rings that the drone flew along in the XZ plane, i.e., the plane parallel to the ground, so that the drone pilot could concentrate on maintaining the height over the hose; this simplified the imaging process in windy, outdoor conditions. Future iterations of this project will use a fully programmable drone. Our method invites cultural heritage researchers to capture RTI in outdoor environments with large immobile artifacts, which introduces another challenge. RTI is usually captured indoors, where the lighting can be precisely controlled. As the brightness of sunlight can affect outdoor RTI, diluting the contrast created by the drone’s changing light positions, we flew the drone at night. Future work will investigate the use of image processing techniques to enable outdoor RTI during the day.
As we demonstrate, drones provide a functional alternative for distancing light in order to capture artifacts with RTI. RTI using a drone provides a low cost solution for museums wishing to capture objects of arbitrary size, even in outdoor environments.
In this paper, we utilize RTI to read surface detail from objects. The original innovation of RTI, also known as Polynomial Texture Mapping, was to model self-shadowing and inner-object reflections on textures (Malzbender, 2001). Since its invention, RTI has become a useful method for studying detailed indentations on artifacts. Cultural Heritage Imaging (CHI, 2013) is a nonprofit organization providing software and starter kits to help people conduct their own RTI. They provide detailed directions on how to position lights, cameras, and objects for their software to work, and they detail the string method for determining the distance between the light and the object (CHI, 2013; Mudge, 2006). We used their software to synthesize our images. Another commonly used method of capturing RTI is a dome of lights. This method is faster than manually holding the string and light, but the size of the dome limits the size of the object. Piquette (2011) describes a dome one meter (39.3 inches) in diameter, designed at the University of Southampton to capture Ancient Egyptian artifacts. Since the radius of the dome must be two to three times the diagonal of the object’s face, this dome could, at a maximum, only capture the face of an object with a diagonal of approximately nine inches. Using a drone to hold the light in positions forming a dome, we were able to capture a vase with a 32-inch diagonal and a statue with a six-foot (72-inch) diagonal.
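The dome-sizing constraint above works out to simple division; the following minimal sketch (the function name and `min_ratio` parameter are illustrative, not part of any cited software) shows how the two rigs compare:

```python
# Rule of thumb from the text: the light radius should be two to three times
# the diagonal of the object's face, so the largest capturable diagonal at a
# given radius uses the most permissive ratio of 2.
def max_diagonal(light_radius_inches, min_ratio=2.0):
    """Largest face diagonal capturable at the given light radius."""
    return light_radius_inches / min_ratio

# Southampton dome: 1 m (39.3 in) diameter -> 19.65 in radius -> ~9.8 in diagonal.
max_diagonal(39.3 / 2)
# Drone-formed dome for the surfer statue: 12 ft (144 in) radius -> 72 in diagonal.
max_diagonal(144.0)
```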
The string and dome methods are difficult to use on large objects. Miles (2014) describes the difficulties of capturing RTI of Hoa Hakananai’a, an Easter Island statue currently displayed in the British Museum. The statue is 8.2 feet tall and stands on a 4.2-foot-high plinth, making it difficult to capture due to its size. The researchers additionally had a limited time window and were only permitted two sessions with the statue. Since RTI requires the lights to be at a distance of two to three times the diagonal length of the area being captured, Miles (2014) split the statue into multiple sections in order to minimize the distance the lights would have to be from the statue. In capturing the RTI of this large statue, the researchers painstakingly held a string to measure the distance from the light to the object and then manually held the light in position on a pole. Using our approach, the team could have captured an entire face of the large statue more easily and efficiently.
Most drones have lighting mounts available for users to illuminate scenes for photography, videography, and safety; however, to our knowledge they have not previously been used to hold a light for RTI purposes. Hepp et al. (2018) capture clean 3D scans of buildings with an automatically routed quadcopter (a drone with four rotors) carrying an RGB camera. Their work demonstrates that drones can be automatically programmed to follow flight paths and that they have the potential to capture information for objects as large as full buildings. In this paper, we demonstrate that these attributes of a drone can be applied to RTI so that researchers can capture RTI of much larger objects more easily.
In setting up the RTI experiment, we followed the instructions provided with the free software by Cultural Heritage Imaging. We used two billiard 8-balls for the software to calculate the locations of the light based on the highlight reflected on the two spheres. We secured the spheres by placing them in the center of hair ties so that the wind would not move them. The software uses the locations of the light’s reflection on both spheres to fit a polynomial to the observed lighting intensity on the object. After the polynomial fitting, users can view all lighting angles on the object smoothly by opening the output in their RTIViewer application (CHI, 2013).
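The highlight-based light estimation described above can be sketched as follows. This is our simplified illustration, assuming an orthographic camera looking straight down the Z axis at a reflective sphere; the CHI software detects the highlight in the photographs and its actual geometric model may differ:

```python
import math

# The light direction is the camera's view vector reflected about the sphere's
# surface normal at the highlight pixel (assumed orthographic geometry).
def light_direction(highlight_xy, sphere_center_xy, sphere_radius_px):
    hx, hy = highlight_xy
    cx, cy = sphere_center_xy
    nx = (hx - cx) / sphere_radius_px
    ny = (hy - cy) / sphere_radius_px
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))  # normal at the highlight
    # Reflect the view vector v = (0, 0, 1) about the normal: L = 2(n.v)n - v
    return (2 * nz * nx, 2 * nz * ny, 2 * nz * nz - 1)

# A highlight at the sphere's center means the light is directly overhead.
light_direction((100, 100), (100, 100), 40)  # -> (0.0, 0.0, 1.0)
```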
We conducted experiments on three different artifacts. First, we used the string method on a Buddha statue with a diagonal length of 15 inches in order to test the original method through the software pipeline. In our first experiment with a larger artifact, we used a drone to hold the light for a vase with a diagonal of 32 inches. Since the recommendation is a radius of two to three times the length of the diagonal, we made the radius of the dome for the vase 82 inches (2.56 times its diagonal). The vase is small enough that it would still have been physically possible to capture it with the string method, manually holding the light.
For objects that sit on the ground rather than being elevated, the polynomial is fit to a quarter sphere of lighting directions instead of a hemisphere. The resulting calculations are accurate over the range of captured lighting directions but are inaccurate when attempting to extrapolate and visualize lighting from angles that were never observed.
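The polynomial model underlying this fit (Malzbender et al., 2001) and the extrapolation caveat can be illustrated with a minimal sketch. Per pixel, luminance is modeled as a biquadratic polynomial of the light direction's projection (lu, lv) onto the image plane; the function names and the lv >= 0 captured range below are illustrative assumptions:

```python
# Biquadratic PTM basis: six terms in the projected light direction (lu, lv).
def ptm_basis(lu, lv):
    return (lu * lu, lv * lv, lu * lv, lu, lv, 1.0)

def luminance(coeffs, lu, lv):
    """Evaluate one pixel's fitted polynomial at light direction (lu, lv)."""
    return sum(a * b for a, b in zip(coeffs, ptm_basis(lu, lv)))

# For a ground-level object only a quarter sphere of directions is observed;
# here we assume, for illustration, that only directions with lv >= 0 were
# captured, so queries below that range extrapolate and are unreliable.
def is_extrapolating(lu, lv, captured_lv_min=0.0):
    return lv < captured_lv_min
```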
To demonstrate our ability to capture RTI of an artifact that would prove too difficult to capture with the traditional string or dome methods, we captured the Santa Cruz Surfer Memorial Statue. Though the entire statue is approximately 18 feet tall, we isolated our RTI capture to the human figure portion, which has a diagonal of six feet. This meant we needed to form a dome with a 12-foot radius. Since the human figure is elevated within the composition of the statue, we were able to capture slightly more than half of the hemisphere by also collecting lighting information down to 18 degrees below the horizontal.
In today’s drone market, pilots can program their drones to reach accurate locations, as demonstrated by research using modern drones (Hepp, 2018). However, in order to prove our concept for this project to the greater RTI community, we made use of our older-generation drone, a Phantom 4. Since it does not have the easily programmable capabilities that consumer drones have today, we flew it manually. So that the drone pilot could focus on keeping the drone at the correct height, we made it easy to place the drone in the correct location in the XZ plane, the plane parallel to the ground: we laid out a garden hose in rings around the object. We precomputed the height the lights should be at for each ring, spacing the rings far enough apart that the pilot could distinguish the XZ positions. This way, the pilot could simply center the hose in the drone’s downward-facing camera and hold the drone at the correct height for that ring.
For our lighting, we mounted a commercially available Lume Cube Lighting Kit, with a brightness of 750 lux at one meter, to our Phantom 4 drone. The challenge with the lighting in our drone experiment was ensuring that the lights, at the correct radius from the artifact, would clearly illuminate shadows and indentations on the object. Due to the intense brightness of sunlight, we conducted our experiments at night. This added the challenge of working in the dark but made our lights strong enough to have a visible effect on our artifacts at their proper distances (82 inches away for the vase and 12 feet away for the Surfer Memorial Statue). The Lume Cube lights are directional, allowing them to have a strong effect but requiring angle adjustment during the experiment. After flying around an entire ring at a given height, we needed to land the drone and adjust the angle for the next height to ensure the lights pointed at the artifact. For both the vase and the statue, we had no trouble completing the entire experiment within the drone’s thirty-minute battery life. Since advanced drones have adjustable camera mounts, lights that adjust programmatically mid-flight would also speed up capture.
While flying a drone opens up the opportunity for cultural heritage researchers to collect RTI of artifacts in places where scaffolding would be challenging to build, such as at the Santa Cruz Surfer Memorial Statue, it does become sensitive to wind. Though drones stabilize themselves to handle some wind interference, excessively windy conditions would have made it difficult to conduct our outdoor experiments.
Santa Cruz Surfer Memorial Statue Measurements

|Angle|Ring Radius|Light Height|
|---|---|---|
|-18°|11.4 feet|6.3 feet|
|18°|11.4 feet|13.7 feet|
|36°|9.7 feet|17.0 feet|
|54°|7.0 feet|19.7 feet|
|72°|3.7 feet|21.4 feet|
|90°|0.0 feet|22.0 feet|
Table 1: To form a partial hemisphere of lights around the figure in the Santa Cruz Surfer Memorial Statue, we increment the angle measured from the horizontal plane through the capture center and calculate the ring radius (domeRadius * cos(angle)) and the light’s height above that center (domeRadius * sin(angle)). The heights listed in the table are measured from the ground, i.e., they include the elevation of the capture center.
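Table 1 can be approximately reproduced from these formulas (a couple of entries differ by a tenth of a foot from rounding). The 10-foot capture-center elevation below is our inference from the 90° row (22.0 ft minus the 12 ft dome radius), not a value stated in the text:

```python
import math

# Assumptions: dome radius of 12 feet (from the text) and a capture center
# roughly 10 feet above the ground (inferred from the 90-degree row).
DOME_RADIUS_FT = 12.0
CENTER_HEIGHT_FT = 10.0  # assumed elevation of the capture center

def ring(angle_deg):
    """Ring radius in the ground plane and light height for one elevation angle."""
    a = math.radians(angle_deg)
    ring_radius = DOME_RADIUS_FT * math.cos(a)
    light_height = CENTER_HEIGHT_FT + DOME_RADIUS_FT * math.sin(a)
    return round(ring_radius, 1), round(light_height, 1)

for angle in (-18, 18, 36, 54, 72, 90):
    print(angle, ring(angle))
```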
In our three experiments with the Buddha statue, the vase, and the Santa Cruz Surfer Memorial Statue, we produced reasonable results with the RTI software (CHI, 2013). Using our method, researchers will now be able to capture RTI of larger artifacts. Because the Easter Island statue (discussed in the Related Work section) is 12.4 feet tall, that team had to section it into parts in order to capture RTI of the entire statue. Within the drone’s thirty-minute battery life, we were able to capture RTI of the human figure portion of the Santa Cruz Surfer Memorial Statue, which has a six-foot diagonal. Additionally, the statue resides on a cliff that would make constructing scaffolding for the string method dangerous and difficult.
Limitations and Future Work
One limitation of our work was the need for a skilled drone pilot. The drone we used was an older model without programmable functionality, so we flew it manually in order to hold it at the right height above measured rings on the ground. Another limitation of flying an older drone manually is a degree of imprecision in the drone’s location. Though we do not currently have a way of readily measuring this possible error, we did not observe any negative consequences in the created visualizations, and we note that existing methods also introduce minor errors. Future work will explore how best to measure and mitigate any discrepancies. We also plan to explore the use of programmable drones to capture RTI in order to gather data from very large objects both precisely and quickly.
Another limitation of our work was that we had to collect the data at night: with the strength of the lights we used, the specular differences are indistinguishable to the naked eye outdoors during the day. Collecting data at night worsens operational visibility. Future work will explore ways to capture RTI in daylight by subtracting the average lighting information from every image in order to isolate the differences between images.
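One form this daylight correction could take is sketched below. This is an illustrative grayscale version under our own assumptions, not a pipeline we have implemented: capture an ambient-only frame with the drone light off, then subtract it from every lit frame so that only the drone light's contribution remains.

```python
# Per-pixel ambient subtraction on grayscale frames stored as nested lists
# (an image library's array type would replace these in practice).
def subtract_ambient(lit_frame, ambient_frame):
    """Per-pixel difference of two grayscale frames, clamped at zero."""
    return [
        [max(0, lit - amb) for lit, amb in zip(lit_row, amb_row)]
        for lit_row, amb_row in zip(lit_frame, ambient_frame)
    ]

ambient = [[40, 40], [40, 40]]
lit = [[90, 45], [40, 200]]
subtract_ambient(lit, ambient)  # -> [[50, 5], [0, 160]]
```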
When collecting the RTI data of the Santa Cruz Surfer Memorial Statue, we decided to focus on only collecting the human figure. This was because, in order to collect the RTI data for the entire statue, we would have had a dome radius of over twenty feet. At that distance, the lights we had would have little effect on our statue. In future work, with more powerful lights mounted on a stronger drone, we plan to capture larger objects.
Using drones to capture larger objects for Reflectance Transformation Imaging expands the possible objects and locations for which this technology can be used. With the investment of a drone, museums can create domes of lights of arbitrary size and will not need to invest in a physical dome that limits the size of the objects they can capture. In this paper, we demonstrated that drones provide a functional alternative for holding the light at the proper distance from the object in order to capture RTI.
We would like to thank the following people for their assistance and feedback on the project: John Fowler for flying the drone, Amy Fowler for providing the camera, Theo Fowler for demonstrating the string method, Anna Sofia Frattini for graphics assistance, and Lucas Ferreira for the initial inspiration.
Caine, M., & Magen, M. (2017). “Low cost heritage imaging techniques compared.” Proceedings of the Conference on Electronic Visualisation and the Arts (EVA 2017), 430-437. Available https://dl.acm.org/doi/10.14236/ewic/EVA2017.85
Cultural Heritage Imaging. (2013). Reflectance transformation imaging: Guide to highlight image capture. Available http://culturalheritageimaging.org/What_We_Offer/Downloads/
Hepp, B., M. Nießner, & O. Hilliges. (2018) “Plan3d: Viewpoint and trajectory optimization for aerial multi-view stereo reconstruction.” ACM Transactions on Graphics, 38(1):1–17. Available https://dl.acm.org/doi/abs/10.1145/3233794
Malzbender, T., D. Gelb, & H. Wolters. (2001) “Polynomial texture maps.” Proceedings of the 28th Conference on Computer Graphics and Interactive Techniques (SIGGRAPH). Available https://www.hpl.hp.com/research/ptm/papers/ptm.pdf
Miles, J., M. Pitts, H. Pagi, & G. Earl (2014). “New applications of photogrammetry and reflectance transformation imaging to an Easter Island statue.” Antiquity 88(340), 596–605. Available https://www.cambridge.org/core/journals/antiquity/article/new-applications-of-photogrammetry-and-reflectance-transformation-imaging-to-an-easter-island-statue/3D70CA5B8B3B56A23411E4A8125EFB36
Mudge, M., T. Malzbender, C. Schroer, & M. Lum. (2006) “New reflection transformation Imaging methods for rock art and multiple-viewpoint display.” The 7th International Symposium on Virtual Reality, Archaeology and Cultural Heritage. Available http://culturalheritageimaging.org/What_We_Do/Publications/vast2006/index.html
Piquette, K. (2011) “Reflectance transformation imaging (RTI) and ancient Egyptian material culture.” Damqatum: The CEHAO newsletter – El boletín de noticias del CEHAO 7. Available https://repositorio.uca.edu.ar/bitstream/123456789/7350/1/damqatum7-eng.pdf
Fowler, Montana, James Davis, & Angus G. Forbes. “Capturing Large-Scale Artifacts via Reflectance Transformation Imaging with a Drone.” MW20: MW 2020. Published February 16, 2020.