2020-21 Magic Grant Profile: Leitmotif

Our 2020-21 Magic Grant Leitmotif: Location-Driven Audio Storytelling uses geolocation to deliver dynamic audio storytelling and connect users to stories of people, places, and things they walk by. The team is creating software to enable the generation of location-specific audio stories and a paired smartphone application to allow users to consume audio content. Through this application, users will be able to preview and select audio stories and listen to them as they move through the physical world.

This post is part of a series of interviews with our current 2020-21 Magic Grant teams. Since we are back in Magic Grant application season, we want to showcase some of the great work our current grantees are doing and encourage you to consider applying for a Magic Grant. Here’s the link to the call for proposals, and to our FAQ. And here’s the link to the application itself; applications are due May 1.


Here’s a (lightly) edited version of the interview.

What was the main impetus for your project? Where did the idea come from?

The main impetus for the project was our observation that despite the increasing popularity of audio media like podcasts, content creators lack the computational tools that would allow them to create dynamic content that can come alive by reacting to listeners and their environment. We were inspired to develop a system that can make use of advances in sensing and machine learning to create that kind of dynamic content and to investigate what strategies artists might use to take advantage of the new affordances this technology will have.

How has your project evolved during (or because of) the pandemic? I assume being forced to stay indoors doesn’t help with the “location-specific” aspect of your project. Has Covid made you rethink parts of your idea?

The pandemic has definitely limited the kinds of locations that we can visit, and it’s also emptied out many locations which would normally be full of people and interesting activities that we could use as material for stories. Thankfully, we’ve still been able to (safely) visit a wide range of places to conduct background research and test out our ideas.

The pandemic has also provided us with new opportunities to tell different kinds of audio stories. Over the last few months, we have been supporting the Medicine and the Muse Program at the Stanford School of Medicine, helping them create a location-specific audio experience to memorialize those lost to COVID-19. Through a mobile site, listeners can take a walking tour with content linked to meaningful locations on campus. For example, at the campus’s Angel of Grief statue, they hear the history of this memorial, are guided through a reflection on the deaths from COVID-19, and listen to a specially recorded piece of classical music that was selected to match both the narration and the space itself. Working with professional musicians and an ordained clergyperson on this project helped us get a sense of how location-based storytelling can support serious activities like contemplation and mourning, and to identify the usability challenges involved in bringing these experiences to users.

What are some of the biggest milestones you’ve achieved so far?

We’ve completed several phases of the project, including a survey of existing interactive audio tours, in which I (Jacob) physically completed a set of tours to understand what techniques creators were using and what problems existed with current location-based audio software. We’ve also been through several rounds of brainstorming and prototyping to get a sense of the functionality necessary for the system we want to build, and recently completed a working prototype that uses an off-the-shelf machine learning model to add interactivity to short audio stories.

What are some of the most challenging aspects of your project?

This project requires us to bring together lots of different kinds of knowledge – for example, software system-building, strategies for effective storytelling, and understanding how human beings perceive their environment. Balancing all of these aspects has been challenging, but necessary, and we’re hopeful that it will result in an effective final product.

What are some of the ideal use cases for what you are developing? Where do you hope to have the biggest impact?

One use case we are really excited about is using storytelling to help children learn. By telling educational stories that react to a child’s environment and their actions, and that are based on proven techniques from learning science, we’re hoping that we can create a compelling tool for experiential, self-directed learning. The audio-based experiences we build would be one part of Prof. Landay’s larger Smart Primer project on educational augmented reality storytelling.

What comes next?

We’re currently testing ways to combine geolocation and machine learning to create audio stories that unfurl as users walk through an environment, increasing their immersion and sense of place. We hope that in the next few months we can produce a functioning system that showcases the new kinds of storytelling we are making possible.