The Brown Institute for Media Innovation, a collaboration between Columbia Journalism School and Stanford University’s School of Engineering, is pleased to announce its 2018-2019 Magic Grants. Each year, the Brown Institute awards close to $1M in funding to foster new tools and modes of expression, and to create stories that escape the bounds of page and screen. We are committed to radical experimentation with the potential to define new priorities and practices for both engineering and journalism. This year, we have awarded 12 Magic Grants. Among them you will find both powerful works of journalism and inventive new technologies that shift the ways we find and tell stories.
Each project—in its own way—addresses an important contemporary issue, be it political, cultural, or technical.
With the support of the Institute, one team will develop a database and interactive display documenting deaths in Mexico stemming from U.S. deportations. Another will conduct algorithmic audits of forensic DNA software used for criminal prosecution. And still another project pairs a documentary filmmaker and a theater director to examine the suppression of the African-American vote in the 2016 Presidential election.
Through our grants this year, we will also continue to explore interfaces that support creative processes — from an application that creates dynamic screen overlays to help photographers be more intentional about their artistic decisions, to a platform assisting with script writing and rough character sketches by remixing and reusing images and video from the web.
David and Helen Gurley Brown believed that magic happens when innovative technology is combined with great content and talented people are given the opportunity to explore and create new ways to inform and entertain. The Brown Institute annually awards fellowships, grants and scholarships, and designs public events and novel educational experiences in digital storytelling.
This year, the Institute’s projects are also supported by the Data Science Institute at Columbia University, as well as through a longstanding partnership with PBS FRONTLINE.
Learn more about the Brown Institute for Media Innovation at http://brown.columbia.edu or http://brown.stanford.edu, on Twitter at @BrownInstitute and Facebook at https://www.facebook.com/MadeAtBrown.
The following is a complete list of Magic Grants funded by the Brown Institute for 2018-19.
Traditionally, fashion stories were a specialized type of consumer reporting. They focused on exposing the reader to innovation and trends in clothing, accessories and beauty products. Today, social media and online retail allow fashion designers and marketers direct communication with consumers, putting fashion reporting at something of a crossroads. Lineage is a research tool that promotes better understanding of the cultural context and history of contemporary fashion. It uses the publicly available databases of art and design institutions such as the Met and its Costume Institute. When a reporter uploads a new fashion image, Lineage will display similar items from its database: clothing, craft, furniture, architecture, and visual arts. The tool, to be designed by Journalist and Data Scientist Noya Kohavi, is not meant to find identical images but rather images that evoke the same visual language in a more playful and serendipitous way, like a reverse-engineered mood board that tells the story of the item or collection the reporter is covering.
The crucial footage for breaking news reports often comes from eyewitnesses, “citizen journalists,” using their smartphones. While these videos often do not meet the quality standards set by news organizations, there is a hesitation to perform much post-processing to improve the content — in the spirit of being accurate and truthful. With their Magic Grant, two Computer Scientists, Jane E and Ohad Fried, will help people capture higher quality content and, ultimately, contribute more impactful, immediate, on-scene documentation of breaking events. E and Fried will create tools that overlay directly on the screen of a traditional camera, dynamically augmenting the current view of a scene with information that will help people make better photo capture decisions. “Our hope is that such interfaces will empower users to be more intentional about their storytelling and artistic decisions while taking photos.”
Ninety-two journalists have been killed in Mexico since 2000. Contrary to popular belief, these reporters did not die as the result of generalized violence. Instead, they were targeted. Their deaths cannot be understood without reading and listening to their work. Consequently, the worth of their journalism — and the risks they undertook — cannot be fully comprehended without understanding the rich context and history of the places where they lived, the social forces they faced, and the stories they told. Alejandra Ibarra Chaoul, a Journalist, and Rigel Jarabo, a student in Urban Planning, want to give these reporters’ work a home and provide that context so that “through this repository, their fight for democracy will continue.”
Ten years of U.S. TV News — Since 2009, the Internet Archive has been actively curating a collection of news broadcasts from across the country, assembling a corpus of over 200,000 hours of video. Computer Scientists Will Crichton and Haotian Zhang will perform an in-depth longitudinal study of this video collection, scanning for trends in both its audio and visual content. How has coverage of different topics changed over the years? How often do women get cut off in conversation versus men? What is the relationship between still images and subject? How do clothing and fashion differ across networks and shows? This project will tackle these and many other difficult questions, demonstrating the new potential for large-scale video analysis. This Magic Grant will build on a previous grant from Brown, also led by Will Crichton, called Esper. That project created an open-source software infrastructure that helped journalists and researchers “scale up” their investigations, to analyze, visualize and query extremely large video collections.
Sarah Stillman, Staff Writer at The New Yorker, will lead a team to build the first-ever searchable database of deaths-by-deportation, in a manner that is empirically rigorous, narratively engaging, and visually stunning. The team will merge cutting-edge data journalism (pursued alongside foreign correspondence in refugee camps, migrant shelters, mortuaries) with technological innovation (focusing on the aesthetic power of the mobile experience) to build a practical but elegant database that turns their massive spreadsheet into an unshakable story. The team includes the powerful data visualization expertise of Giorgia Lupi, co-founder of Accurat. They will make their findings and ongoing investigation accessible through a website that amplifies the very best of what Lupi calls “data humanism.” In Stillman’s words, “Absent this new effort to bring these data to light, the stories will remain buried, unspoken, and unaccounted-for in the public record.”
News organizations like The New York Times and The Guardian have experimented with fast-paced, serial production schedules for 360 videos, hoping to prove out the medium. While 360 videos offer viewers more freedom to explore scenes in a story, that freedom also poses an added challenge to directors and creators. Because users can be looking anywhere at any time, they might be looking in the wrong direction while important events or actions in a story take place outside their field of view. By contrast, Virtual Reality environments can address this problem by controlling the animation of objects, perhaps having a scene pause or loop until the user is looking in the right direction. With her Magic Grant, Computer Scientist Sean Liu will consider how to adapt these strategies to 360 videos, providing better storytelling without compromising the immersive feeling of these videos.
Imagine testing the fingernail scrapings of a murder victim to determine if a suspect could be the killer, only to have one DNA interpretation software program incriminate the suspect and a different program absolve them. Such a scenario played out two years ago in the widely publicized murder trial of Oral Nicholas Hillary, raising questions that the criminal justice system still cannot answer: why, when, and by how much do these programs differ from one another? To answer these questions, this Magic Grant assembles a multi-disciplinary team — Jeanna Matthews, a Computer Scientist; Nathan Adams, a DNA investigations specialist; Jessica Goldthwaite of The Legal Aid Society; Dan Krane, a Biologist; Surya Mattu, a Journalist; and David Madigan, a Statistician. The team will systematically compare forensic DNA software, moving the story beyond anecdotal examples to a rigorous investigative strategy. In the process, they will explore important issues of algorithmic transparency, and the role of complex software systems in the criminal justice system and beyond.
Stories come in many forms, and in a wide range of detail — from casual anecdotes told among friends, to epic Hollywood blockbusters, heavily engineered and rendered in vivid high-definition. But regardless of how they are told, great stories do not simply appear fully formed in the mind; they are inspired by the work of others, crafted with familiar tools, and refined through iteration. The Magic Grant team of Computer Scientists Abe Davis and Mackenzie Leake will provide users with tools that focus on the construction of a narrative (specifically, through the writing of a script or the posing of rough character sketches) and use algorithms to search the Internet for visuals that can be repurposed or remixed to fit that narrative. In doing so, their work will offer an accessible way for untrained users to learn from and build on the work of experts.
Barack Obama’s two Presidential campaigns were defined, in part, by the black voters they brought to the polls. In both 2008 and 2012, African-American women voted at a higher rate than any other demographic group in the country. But the latest analyses show that in 2016, African-Americans voted at a lower rate than any other group. Magic Grantees June Cross, a Documentary Filmmaker, and Charlotte Braithwaite, a Theater Director, will explore how foreign interference, gerrymandering, and domestic legal measures like voter ID laws combined to suppress the black vote in 2016. They will use “big data” to inform shoe leather reporting, with the results presented as projected data, pre-recorded audio interviews, and some re-enacted interviews in a theatrical setting. The team will include historical video archives and develop a production design for five 3-5 minute videos. Their aim is to “Wake” the larger African-American community to the impact of voter suppression campaigns waged on social media, in the courts, and in state legislatures.
State patrols stop and search drivers in every state, but until recently it has been nearly impossible to understand what they’ve been doing — and whether these searches discriminate against certain drivers. The data was scattered across jurisdictions, “public” but not online, and in a dizzying variety of formats. In 2014, Cheryl Phillips began the Stanford Open Policing Project to provide open, ongoing and consistent access to police stop data in 31 states, and created a new statistical test for discrimination. This is just one example of how sharing local data can improve local journalism. Phillips — together with Columbia Journalist Jonathan Stray, Stanford Electrical Engineering PhD student Irena Fischer-Hwang, and Columbia Journalism/Computer Science MS student Erin Riglin — was awarded a Magic Grant to build on this success, creating a pipeline that will enable more local accountability journalism and boost the likelihood of big policy impact. The team will collect, clean, archive and distribute data that can be used to tell important journalistic stories. The data will be archived in the Stanford Digital Repository, and the team’s work will also help extend Columbia’s Workbench computational platform, making the analysis of local data broadly available to even novice data journalists.
People are interacting with artificial intelligence (AI) systems more every day. AI systems play roles in call centers, mental health support, and workplace team structures. As AI systems enter these human environments, they inevitably will need to interact with people in order to achieve their goals. Most AI systems to date, however, have focused entirely on performance and rarely, if ever, on their social interactions with people, and how to balance the AI’s goals against their human collaborators’ goals. Success requires learning quickly how to interact with people in the real world. Stanford Computer Scientists Ranjay Krishna and Apoorva Dornadula were awarded a Magic Grant to create a conversational AI agent on Instagram, where it will learn to ask engaging questions of people about the photos they upload. Its goal will be to simultaneously learn new facts about the visual world by asking questions, and learn how to interact with people around their photos in order to expand its knowledge of those concepts.
Particularly in the American South, historical memory is distorted by outdated structures in public spaces. Antebellum and Confederate era monuments celebrate the oppressive legacy of white men and exclude the contributions of women and people of color to American society, complicating claims to equality in the present. White supremacists gather around them, local governments fight over whether to remove them, and activists tear them down. It is a slow-moving process toward creating a physical space that reflects more current ideas about the past and present. With a seed grant, Columbia Documentary Journalism student Robert Tokanel, Stanford Computer Scientist Kyle Qian, and Stanford undergraduates Khoi Le and Hope Schroeder will help audiences imagine a powerful new reality. The team will work toward digitally transforming public spaces in Charleston, South Carolina, using narrative film techniques and augmented reality to flip the power structures of the past, hoping to expose users to a range of perspectives about the value of monuments as they currently stand.