As We May Learn: The Coming Fusion of Maker Technologies, Spatial Computing, and IoT

Bill Meyer, Inquireables, USA


This paper discusses how location-based learning exhibits—3D experiential interactives—can now be developed and scaled in ways not previously possible, or even imaginable. By analogy: Uber, Lyft, and Airbnb might seem like obvious developments in hindsight, but early in the 21st century, when the new medium of mobile computing was still in its early stages, such applications (which merge GPS, the Internet, smart mobile devices, cloud computing, and social networking) were far from self-evident. Today the most valuable hotel company doesn’t own the properties its customers stay in. The most valuable public transportation companies don’t own the vehicles that transport their customers. The most valuable movie rental company streams its content. Might museums of the future exist as networked physical entities, where IKEA-style, kit-of-parts exhibits are easily duplicated and assembled, existing in high schools, libraries, malls, and other public spaces? Could a docent or facilitator be virtually present, with a Learning Record Store (LRS) seamlessly integrated? The author believes the answer is yes. All the key pieces exist today, though they have yet to be coherently merged. This paper explores how digitally controlled fabrication devices (3D printers, laser cutters, and CNCs) and the emerging medium of spatial computing (virtual, augmented, and mixed reality), integrated with the Internet, are destined to make the above vision a physical and virtual reality.

Keywords: VR, AR, Maker, LRS, Scalability, IoT


Going to a physical location with friends or family to share a museum experience can be profound. A lot of valuable learning happens that doesn’t feel like learning. As good as a website about the same content might be, it’s just not as impactful. Large well-funded exhibitions with superior interactive experiences tend to open in and travel to big cities. Millions of dollars may go into developing what can only exist in one place at a time. Because today’s exhibitions aren’t scalable, they may never be experienced by people who are geographically distant. This paper discusses how location-based learning exhibits—3D experiential interactives—might be developed so they can scale. Duplicated exhibits, existing at all kinds of non-traditional locations, would reach far larger and more diverse audiences.

Questions of an ideological nature (e.g. “What constitutes a museum?”) and practical implementation details may arise in the reader’s mind. Although these discussions remain outside the scope of this paper, the author acknowledges their importance. Likewise, there is no dispute that historical sites, buildings, and artifacts will always be important to preserve and maintain. Being in their actual physical presence is powerful and unique, and they assure historical accuracy. 

This paper explores the art of the possible—how IKEA-style, kit-of-parts interactive exhibits, easily duplicated and assembled, can exist in high schools, libraries, malls, and other local public venues. It shows how a docent/facilitator can be virtually present, and how a Learning Record Store (LRS) might be integrated.

Enabling technologies include today’s digitally controlled fabrication devices (3D printers, laser cutters, and CNCs), the emerging medium of spatial computing (virtual, augmented, and mixed reality), smart home, and maker devices. To the greatest extent possible, designs, content, and tools employed would be open-source.

A key idea is modularity at every level: from content to the physical space and associated hardware and software. A small, standardized, individual space can function as a multipurpose interactive theater “playing” one exhibit at a time. Public venues could have clusters of such spaces. Here, we’ll refer to each individual exhibit experience as a Location Based Exhibit (LBX).

LBX Development and Content

Initially, LBXs might feature adaptations of “best of” experiences that have already been time-tested and iterated with the public. These can come from science, art and history museums of every kind. Experiences from heritage sites, zoos, and aquariums could feature content adapted for virtual interaction. Some hidden gems might come from customer experience centers.

Most museums have deeper exhibit content available that museum educators, docents, explainers, or facilitators access and share with outside educators and visitors. Such information can be incorporated into a collective LBX back-end. This content could be made available in an experience itself, or via a single, unified mobile app that functions as a companion for all LBX experiences.

Spatial Computing “Realities”

Spatial computing is steadily evolving, encompassing all the avenues by which computers send outputs to and receive inputs from our physical world. Today it’s not only possible to digitize and physically duplicate objects, but also to publish three-dimensional objects and environmental experiences for virtual, mixed, and augmented reality interactions. Such experiences have traditionally been platform-specific stand-alone apps. OpenXR (Orland, 2019) and Mozilla Mixed Reality (n.d.) signal a change: experiences will work on any manufacturer’s headset, and increasingly complex VR environments can be published and made instantly available on the Internet as VR Web URLs.

In their myriad forms, museums create, preserve, and display vast amounts of three-dimensional content. From reality capture to virtual docents, the increasing importance of digitally spatial mediums and associated content for museums can’t be overstated. The following sections describe emerging spatial computing “realities” and associated devices. All of them have a role in delivering transformative museum experiences to remote venues.

Virtual Reality (VR)

A head-mounted display tracks the user’s head location and orientation, showing a dynamic, perspectively correct, computer-generated simulation of a three-dimensional environment that uses visual, audio, and other perceptual cues to create an immersive experience. The Oculus Quest and Rift S, HTC Vive, and PlayStation® VR are examples.

One question about VR is “If an exhibit is created for VR, why can’t visitors do it at home?” The answer is that a majority of would-be visitors don’t yet have these devices, and a key goal is to keep these activities social in local public environments. As will be explained further down, VR experiences are much richer when encountered in an appropriate physical space that includes touchable elements.

Mixed Reality (MR)

As in VR, the user’s head location and orientation are tracked. The difference is that in MR, a perspectively correct, dynamic blend of the virtual and real worlds is experienced as one. Microsoft’s HoloLens, Magic Leap, and many AR glasses fall into this category and utilize semi-transparent screens. Much more vivid virtual images and wider fields of view are possible with (currently much bulkier) VR headsets that have front-mounted stereo cameras feeding in video from the outside world. If possible, this type of headset is worth experiencing to understand how strong MR will become as displays improve and form factors shrink.

Augmented Reality (AR)

A smartphone or tablet is held and viewed at arm’s length, and an app superimposes perspectively correct dynamic digital images onto the device’s display screen blended with the live camera feed. In this case, the location and orientation of the display screen, rather than the position of the user’s head, is tracked by the app.

Google Cardboard

Not true immersive VR, but good enough to watch 360-degree videos. The user slips a smartphone into an inexpensive cardboard or plastic box with lenses, then holds the assembly to their face. An app using the smartphone’s internal inertial sensors estimates in which direction the user is looking (orientation), and changes the view accordingly. The app can’t, however, determine how close or far the user has moved in any direction (position).

Extended Reality (XR)

A new industry term encompassing all of the above “realities” (Goode, 2019). In this paper, XR and spatial computing are used somewhat interchangeably.

Internet of Things (IoT)

Objects and devices of all kinds are increasingly embedded with sensors, software, and networking capabilities, enabling them to be remotely controlled or to collect and exchange data. Collectively, this is the vast and evolving Internet of Things (IoT).

Using Bluetooth beacons and mobile devices, IoT has been employed to study how museum visitors move through a large castle (Pierdicca, Marques-Pita, Paolanti, & Malinverni, 2019). A more familiar example may be Philips Hue products, where nuanced control of the color and brightness of LED bulbs happens via WiFi. Such control can come from an app or via voice commands processed by other familiar IoT devices such as Alexa or Google Home. The next section discusses one of many possible ways LBXs might leverage IoT technologies.
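To ground the Philips Hue example, the sketch below builds the kind of JSON state change a Hue bridge’s local REST API accepts. The bridge address, authorized username, and light ID are placeholders a real installation would supply during setup; only the payload builder runs without a bridge on the network.

```python
import json
from urllib import request

def hue_state_payload(on=True, brightness=254, hue=8402, saturation=140):
    """Build the JSON body the Hue bridge's REST API expects for a light state.
    brightness is 1-254; hue is 0-65535; saturation is 0-254."""
    return json.dumps({"on": on, "bri": brightness, "hue": hue, "sat": saturation})

def set_light(bridge_ip, username, light_id, payload):
    """PUT the state change to the bridge (requires a real bridge on the LAN)."""
    url = f"http://{bridge_ip}/api/{username}/lights/{light_id}/state"
    req = request.Request(url, data=payload.encode(), method="PUT")
    return request.urlopen(req)

# A warm, dimmed lighting cue an LBX scene might trigger:
cue = hue_state_payload(on=True, brightness=120)
```

The same payload could just as well be produced by a Node-RED flow or fired from a voice-command handler, which is what makes these devices easy to weave into an exhibit.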

Using Voice and XR as Docent, Explainer, or Facilitator

XR applications can detect and respond to fabricated objects and external images in LBX experiences. Responses can be designed to explain how to take the next action, or define what something is or does. Apple’s developer documentation explicitly mentions museums as use cases:

“For example, a museum app might show a virtual curator when the user points their device at a painting.” (“Detecting Images in an AR Experience” n.d.).

“One way to build compelling AR experiences is to recognize features of the user’s environment and use them to trigger the appearance of virtual content. For example, a museum app might add interactive 3D visualizations when the user points their device at a displayed sculpture or artifact.” (“Scanning and Detecting 3D Objects” n.d.)

Richer XR implementations can allow visitors to ask questions that can be heard and answered by robust artificial intelligence engines. LBX developers can already deploy such functionality using “Alexa Voice Service Integration” (n.d.) and “Google Assistant SDK” (n.d.). The physical devices Amazon and Google sell aren’t required to access the power of these systems. A simple $35 open-source Raspberry Pi single-board computer with microphone and speaker can be used to access either (see “Resources”).

An AI voice facilitator’s answers would not simply be software doing an Internet search. Rather, the system would access curated LBX content previously input into the cloud and associated with particular exhibits or exhibit groupings. For Alexa, these are called “skills” (“Alexa: Build Skills for Voice” n.d.).
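A platform-agnostic sketch of that idea: curated content keyed by exhibit and topic, so a voice query resolves against the shared LBX back-end rather than an open web search. The exhibit IDs, topics, and wording below are invented for illustration; on Alexa, this mapping would live inside a skill.

```python
# Curated LBX content, keyed by (exhibit, topic). In a deployed system this
# table would live in the cloud and be shared by every copy of an exhibit.
CURATED_CONTENT = {
    ("grand-canyon-lookout", "geology"):
        "The canyon's rock layers record nearly two billion years of Earth's history.",
    ("grand-canyon-lookout", "wildlife"):
        "California condors, reintroduced in 1996, now soar over the rim.",
}

def answer(exhibit_id, topic):
    """Return curated text for this exhibit and topic, with a graceful fallback."""
    return CURATED_CONTENT.get(
        (exhibit_id, topic),
        "I don't have that detail yet. Try asking about the exhibit another way.",
    )
```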

Creating Virtual Worlds Visitors Hear and Touch (Passive Haptics)

Designing what users see is an obvious first step when creating a VR- or MR-enabled LBX. Less obvious is how 3D audio can be included to wordlessly guide the user’s attention to where it should be (Thakur, n.d.). In immersive VR and MR, users can be looking anywhere. Because the user’s head (and therefore the location of her ears) is tracked, software automatically spatializes sounds, enabling experience designers to “attach” sounds to virtual objects. For instance, the chirping of a virtual bird on a perch will stay in place as it would in real life, even as the visitor physically walks around the virtual bird. If it takes flight, the sound perceptually moves with the bird.
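The geometry behind spatialization can be illustrated with a toy stereo panner: from the listener’s tracked position and heading and a source’s position, compute left/right gains. Real audio engines add head-related transfer functions, distance rolloff, and reverb; this sketch only shows why head tracking is what makes a sound stay “attached” to its object.

```python
import math

def stereo_gains(listener_pos, listener_yaw, source_pos):
    """Constant-power stereo pan from 2D positions (x, z) and a heading in radians."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    # Bearing of the source relative to where the listener is facing.
    azimuth = math.atan2(dx, dz) - listener_yaw
    pan = math.sin(azimuth)  # -1 = hard left, 0 = center, +1 = hard right
    left = math.cos((pan + 1) * math.pi / 4)
    right = math.sin((pan + 1) * math.pi / 4)
    return left, right

# A bird straight ahead is heard equally in both ears; as the visitor turns
# or walks, the gains shift so the chirp stays fixed at the perch.
```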

LBXs that include a VR world can also be designed to leverage the visitor’s sense of touch. Digital interactions involving touch are called haptic. If a hand controller vibrates when a visitor “touches” something virtual, that vibration is haptic feedback. When a designer incorporates a virtual model of an object that precisely aligns with a real physical object (of the same size/dimensions in the physical space), this is called passive haptics. The visitor might see a beautiful Eames & Saarinen cabinet in VR. In the real world, only an identically sized inexpensive block of white Styrofoam need exist, glued into place. When touching the cabinet in VR, the coordinated visual and tactile experience increases the visitor’s sense of presence (the feeling of really being there in the virtual world) and intensifies memories of the experience (Insko, Meehan, Whitton, & Brooks, 2001).
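Registering a virtual model to its physical stand-in is, at its simplest, a measured offset. The sketch below handles translation only, under the assumption that the prop and model share an orientation; a real installation would also calibrate rotation, and commercial tracking systems do this automatically.

```python
def alignment_offset(physical_anchor, virtual_anchor):
    """Translation (x, y, z) that moves the virtual model onto the measured prop."""
    return tuple(p - v for p, v in zip(physical_anchor, virtual_anchor))

def place(model_vertices, offset):
    """Apply the calibration offset to every vertex of the virtual model."""
    return [tuple(c + o for c, o in zip(vertex, offset)) for vertex in model_vertices]

# If the Styrofoam block's corner is measured at (2.0, 0.0, 1.0) in the play
# space, shift the cabinet model (authored at the origin) to match:
offset = alignment_offset((2.0, 0.0, 1.0), (0.0, 0.0, 0.0))
```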

If a physical object’s location is tracked, and a model corresponding to the object is incorporated into the virtual world, passive haptics becomes applicable to objects that can be moved. The simplest example is when holding a VR hand controller. In VR, a visitor sees a 3D model of that same controller. When their hand rotates, the controller model rotates. How the controller appears in VR is easily (and often) altered in the software.

As of this writing, it is straightforward to detect and track 3D objects for AR applications. This should soon become easier to do for VR (“Object Recognition” 2019), because newer, “untethered” headsets have outward facing cameras (Coldewey, 2019).

A Standardized LBX Footprint

For a variety of reasons, it would be important to standardize LBX theater spaces. If we standardize on a minimum unit of space for a single visitor, then when we scale to N units we can accommodate N visitors simultaneously. If a footprint unit were agreed upon as, for example, 2.5 x 2.5 meters, larger spaces could be simple multiples: for instance, 5 x 2.5 meters (2 units) or 5 x 5 meters (4 units).
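With a 2.5-meter unit as assumed above, capacity planning for a candidate venue becomes simple arithmetic:

```python
UNIT = 2.5  # meters per side of one standardized footprint unit

def lbx_capacity(width_m, depth_m):
    """Number of whole footprint units (one simultaneous visitor each) that fit."""
    return int(width_m // UNIT) * int(depth_m // UNIT)

# A 5 x 2.5 m space holds 2 units; a 5 x 5 m space holds 4.
```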

In VR, the “play space” or “play area” is the physical footprint of the area in which a user can move without obstruction. The standard LBX footprint would correspond to such a play space. This enables development of VR models and their associated scenes that work unmodified in any LBX theater conforming to the standard. This vastly reduces development time and effort, because digital VR model libraries and spaces can be efficiently repurposed for different LBX experiences.

VR Location Based Entertainment companies such as Dreamscape Immersive have amplified experience strength by installing a railing around the footprint and exactly modeling that railing in VR. “The walkable space in each pod is encircled by railings that are heavily incorporated into the various VR worlds” (Bishop, 2019). This is a practical application of passive haptics that would work well for LBXs. It makes it possible, for instance, for visitors to feel as if they’ve stepped up to the edge of a Grand Canyon lookout. As they lean on the physical/virtual railing, they can look around and take in the vast virtual scene just as they would at a real lookout.

Digital Fabrication of LBX Components

Fast Company recently reported on how “3D printing is quietly transforming an unexpected industry: museums” (Samaroudi & Rodriguez Echavarria, 2019). Working from CAD files and 3D scans, exact reproductions of shapes and volumes are now possible on almost any scale via CNCs, laser cutters, and 3D printers.

LBXs would smartly employ XR so only a minimum amount of digitally fabricated “stuff” need exist to make strong and memorable interactive experiences possible. Physical objects would be created with AR/MR recognition or passive haptics for VR in mind. Key fabricated items would include:

  • Railings and walls that define the standardized exhibit footprint (play area) boundaries.
  • Enclosures for IoT devices.
  • Objects where tactile interactions measurably increase learning value.
  • Tactile objects that are user interface elements, manipulated in the real world (and seen in the virtual) with specifications sensed by XR software.

The standard LBX structure and exhibit objects can be open-sourced. The Open Source Hardware Association (OSHWA) has a well-considered set of “Best Practices for Open-Source Hardware” (2012). These practices also apply to how electronics, such as single-board computers like the Raspberry Pi and Arduino, are integrated into an LBX’s design.

Reality Capture and 3D Digital Assets

Scanning real world locations and objects and converting them into 3D digital assets usable for physical and XR design is termed reality capture. The simplest method is photogrammetry, in which ordinary 2D camera photos from different perspectives are uploaded and then analyzed in software that creates a highly accurate 3D model. Autodesk’s ReCap (n.d.) is one example museums can use.

CyArk is a non-profit that captures and preserves cultural heritage sites via:

“…a combination of 3D recording technologies to accurately map environments using LiDAR, high resolution photogrammetry and drone imagery. Data from these sources are combined to produce a centimeter accurate and photorealistic model. These models can then provide the basis for 3D environments used in VR applications” (“CyArk Virtual Reality” n.d.).

CyArk recently teamed with Google Arts & Culture (“Open Heritage” n.d.). National Public Radio reported on the partnership and considerations around ownership of the related data (Sydell, 2018).

Evolving libraries of VR ready models include Sketchfab, Google Poly, and the Unity3D Asset store. The National Historical Museums of Stockholm, Sweden, has already made high-resolution 3D reality capture models of several of its museums freely available for download on Sketchfab (Lernestål, 2019).

Computer Hardware: Simple as Possible

Raspberry Pi and Arduino are ideal platforms for integrating IoT and computer-mediated interactions into an LBX. Both are open-source hardware. Once configured, they require no ongoing support and rarely fail, making them ideal for environments such as malls, libraries, high schools, and other public spaces. When operational issues do arise, problems need to be easily diagnosed and resolved remotely. To report on LBX exhibit health and usage, each Raspberry Pi can run Node-RED (Mobberley, n.d.), an open-source, IBM-created tool that can be used to remotely monitor LBXs via a cloud-based application. This architecture facilitates remote monitoring of a large number of exhibits, as well as gathering data on how visitors interact with those exhibits.
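Node-RED flows pass JSON messages, so exhibit health can be reported as a small structured payload. The field names below are illustrative, not a fixed schema:

```python
import json
import time

def heartbeat(exhibit_id, sessions_today, headset_battery_pct, last_error=None):
    """JSON heartbeat a Raspberry Pi's Node-RED flow could forward to a dashboard."""
    return json.dumps({
        "exhibit": exhibit_id,
        "ts": int(time.time()),            # when this report was generated
        "sessions_today": sessions_today,  # coarse usage signal
        "headset_battery_pct": headset_battery_pct,
        "last_error": last_error,          # None while the exhibit is healthy
    })

msg = heartbeat("lbx-library-03", 27, 81)
```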

For networked museum VR exhibits, “untethered” headsets that operate without any need for an external computer are ideal. The Oculus Quest is currently the best-selling example, and supports “a comprehensive approach to scaled deployments” that includes “cloud-based management tools” (“A Closer Look at the New Oculus for Business” 2019).

Informal Learning Experiences as Data

Individually and collectively, museums are experts at informal learning. An evolving and distributed collection of LBXs could also be a “best of” informal learning experiences from a variety of institutions. Details of experiences and interactions visitors have at LBXs could be captured by software. The resulting data would be useful to visitors to show they’ve had certain learning experiences. However, such data capture could also be useful for LBX creators to improve experience design and enable automated scaffolding of learning experiences across LBXs. For example, if a visitor has already experienced “this simple idea” at one LBX, expose them to “this next, slightly deeper idea” at another LBX.
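A minimal sketch of such scaffolding: order related ideas into a ladder, and surface the first rung a visitor’s captured experience data doesn’t yet cover. The concept names are invented for illustration.

```python
# An illustrative concept ladder for a color-and-light exhibit family,
# ordered from simplest idea to deepest.
LADDER = ["additive color", "color perception", "spectral light"]

def next_concept(experienced):
    """First ladder rung the visitor hasn't encountered yet (None when complete)."""
    for concept in LADDER:
        if concept not in experienced:
            return concept
    return None
```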

Some in the museum community may know of Open Badges, originally created by Mozilla. Less well known may be the US government’s Advanced Distributed Learning (ADL) Initiative, which funded creation of the open-source Experience API (xAPI). The goal was to enable informal recognition and communication of learning experiences. Examples include “mobile learning, simulations, virtual worlds, serious games, real-world activities, experiential learning, social learning, offline learning, and collaborative learning” (“What is xAPI?” n.d.).

The xAPI allows for electronic communication of “actor, verb, object” statements (“Anatomy of an xAPI statement” n.d.). For example: “Rhonda” “mixed” “colored lights,” or “Jamal” “walked through” “Dan Flavin’s art installations at Marfa.” These statements are sent to a cloud-based Learning Record Store (“What is an LRS?” n.d.). LBXs, learners, and educators could all access this.
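A minimal statement of this kind can be assembled as plain JSON before being sent to an LRS. The verb IRI below is one of ADL’s published xAPI verbs; the actor, object ID, and display names are invented for illustration.

```python
import json

def xapi_statement(actor_name, actor_email, verb_iri, verb_display,
                   object_iri, object_name):
    """Assemble a minimal actor-verb-object xAPI statement as a dict."""
    return {
        "actor": {"name": actor_name, "mbox": f"mailto:{actor_email}"},
        "verb": {"id": verb_iri, "display": {"en-US": verb_display}},
        "object": {
            "id": object_iri,
            "definition": {"name": {"en-US": object_name}},
        },
    }

stmt = xapi_statement(
    "Rhonda", "rhonda@example.com",
    "http://adlnet.gov/expapi/verbs/experienced", "experienced",
    "https://example.org/lbx/color-mixing", "Colored Light Mixing LBX",
)
payload = json.dumps(stmt)  # body of a POST to the LRS's statements endpoint
```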

An organized list of ADL’s resources, including an open-source Learning Record Store, is available on GitHub.

Conclusion

Companies such as Uber, Lyft and Airbnb (that merge GPS, Internet, smart mobile devices, cloud computing, and social networking) only became possible when enabling technological ecosystems emerged (Desmet, Maerkedahl & Shi, 2017). Such companies then radically reinvented customer journeys in their service industries (Bughin, LaBerge & Mellbye, 2017). Similarly, how visitors physically experience and interact with museum content can now be reinvented and scaled.

This paper suggests a way to bring great museum experiences right to where visitors live, rather than the other way around. LBXs could invigorate libraries and enliven high schools. Malls may be seeing a resurgence as places that feature collections of location-based entertainment experiences (Hess, 2019; Schaefer, 2019). Engaging museum experiences can be part of this mix too.

Vannevar Bush’s 1945 essay “As We May Think” helped people consider new possibilities, afforded by then-emerging technologies, to access and organize the expanding global mass of written material, photos, illustrations, and moving images that no individual could otherwise efficiently access. In our time, there’s a vast and growing number of three-dimensional museum and cultural experiences to be had. These exist at more locations than are possible to visit in a lifetime. Maker technologies, IoT, and spatial computing hold out the possibility of making these experiences remotely browsable, accessible, and memorable—as if the visitor were “really there.”

References

Alexa: Build Skills for Voice. Retrieved Dec 28, 2019, from

Anatomy of an xAPI statement. Retrieved Jan 5, 2020, from

Best Practices for Open-Source Hardware 1.0. (2012). Retrieved Jan 7, 2020, from

Bishop, B. (2019). Dreamscape Immersive is bringing VR to the masses. Retrieved Jan 7, 2020, from

Bughin, J., LaBerge, L. & Mellbye, A. (2017). The case for digital reinvention | McKinsey. Retrieved Dec 31, 2019, from

Bush, V. (1945, July 1). As We May Think. The Atlantic. Retrieved Dec 15, 2019, from

A Closer Look at the New Oculus for Business | Oculus. (2019). Retrieved Jan 3, 2020, from

Coldewey, D. (2019, Aug 22,). How Oculus squeezed sophisticated tracking into pipsqueak hardware. Retrieved Dec 28, 2019, from

CyArk Virtual Reality. Retrieved Jan 2, 2020, from

Desmet, D., Maerkedahl, N. & Shi, P. (2017). Adopting an ecosystem view of business technology | McKinsey. Retrieved Dec 31, 2019, from

Detecting Images in an AR Experience | Apple Developer Documentation. Retrieved Jan 6, 2020, from

Goode, L. (2019). Get Ready to Hear a Lot More About ‘XR’. Retrieved Jan 2, 2020, from

Google Assistant SDK. Retrieved Dec 28, 2019, from

Hess, A. (2019, Dec 27,). Welcome to the Era of the Post-Shopping Mall. NYTimes, Retrieved from

Insko, B. E., Meehan, M., Whitton, M., & Brooks, F. (2001). Passive haptics significantly enhances virtual environments. University of North Carolina at Chapel Hill. Doctoral Dissertation. Retrieved Jan 2, 2019, from

Lernestål, E. (2019). API Spotlight: Swedish Historical Museums. Retrieved Jan 15, 2020, from

Object Recognition. Retrieved Dec 26, 2019, from

Open Heritage — Google Arts & Culture. Retrieved Jan 2, 2020, from

Pierdicca, R., Marques-Pita, M., Paolanti, M., & Malinverni, E. (2019). IoT and Engagement in the Ubiquitous Museum. Sensors, 19, 1387. doi:10.3390/s19061387. Retrieved Jan 2, 2020, from

ReCap | Reality Capture Software | 3D Scanning Software | Autodesk. Retrieved Jan 2, 2020, from

Samaroudi, M., & Rodriguez Echavarria, K. (2019). 3D printing is quietly transforming an unexpected industry: museums. Retrieved Jan 3, 2020, from

Scanning and Detecting 3D Objects | Apple Developer Documentation. Retrieved Jan 3, 2020, from

Schaefer, K. (2019). Malls have a future: location-based entertainment. Retrieved Dec 27, 2019, from

Sydell, L. (2018). 3D Scans Help Preserve History, But Who Should Own Them? Retrieved Jan 2, 2020, from

Thakur, A. Spatial Audio. Retrieved Jan 3, 2020, from

What is an LRS? Learn more about Learning Record Stores. Retrieved Dec 16, 2019, from

What is xAPI aka the Experience API. Retrieved Jan 5, 2020, from

Resources

Alexa Voice Service Integration for AWS IoT Core, a new way to cost-effectively bring Alexa Voice to any type of connected device. (2019). Retrieved Dec 28, 2019, from

Google AIY Voice. Retrieved Dec 28, 2019, from

Raspberry Pi Hosting Node-Red. Retrieved Dec 28, 2019, from

Using the AWS IoT SDKs on a Raspberry Pi – AWS IoT. Retrieved Dec 28, 2019, from

Cite as:
Meyer, Bill. "As We May Learn: The Coming Fusion of Maker Technologies, Spatial Computing, and IoT." MW20: MW 2020. Published January 15, 2020.