The Brown Institute for Media Innovation, a collaboration between Columbia Journalism School and Stanford University’s School of Engineering, is pleased to announce its 2019-20 Magic Grant recipients. Each year, the Brown Institute awards $1M in grants and fellowships to foster new tools and modes of expression, and to create stories that escape the bounds of page and screen.
This year’s awards include nine Magic Grants and, for the first time, four seed grants. Each project addresses an important contemporary question, be it political, cultural or technical — from a large-scale study of “inauthentic activity” on platforms like Facebook, Instagram and YouTube; to a new tool that securely transforms a smartphone into a socially-minded diagnostic device offering insights into digital behavior; to the first comprehensive database of rights violations in Pakistan.
The Brown Institute was established in 2012 with a generous endowment gift from longtime Cosmopolitan magazine editor and author Helen Gurley Brown. It was inspired by the memory of Ms. Brown’s late husband, David Brown, a graduate of both Stanford University and the Columbia School of Journalism. Through its Magic Grants, the institute encourages unique interdisciplinary collaborations. As Gurley Brown put it, “Sharing a language is where this magic happens.”
We are grateful to everyone who applied this year — the field was incredibly diverse.
Below we list our 13 winning teams. Please join us in congratulating them on an incredible collection of projects!
2019-20 Magic Grants
Media innovation comes in many forms. This year, the nine winning Magic Grant projects include both powerful stories and technically rich platforms and tools.
Public analysis of TV news
James Hong and Dan Fu, both Stanford doctoral candidates in computer science
Each day, cable TV news networks determine what information millions of Americans receive and the context in which they receive it. This project will refine research analyzing nearly 10 years of 24/7 news video archived from CNN, FOX and MSNBC, including breakdowns of screen time by gender, individuals and topics, as well as detection of patterns such as interviews. Over the coming year, Hong and Fu will test and release a set of tools that lets journalists, news organizations and the general public interactively explore the data along both audiovisual and textual dimensions. The goal is to release these tools online as TV news search and analysis widgets, together with a video query language for mining high-level spatiotemporal patterns.
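To give a flavor of the kind of analysis these tools enable, here is a minimal sketch of a screen-time breakdown computed from face-detection intervals; the data model and names are illustrative assumptions, not the team's actual schema.

```python
from collections import defaultdict

# Each detection is (start_sec, end_sec, person, gender), as a face
# detector might emit for one video. Values here are invented.
detections = [
    (0.0, 12.5, "Anchor A", "female"),
    (12.5, 40.0, "Guest B", "male"),
    (40.0, 55.0, "Anchor A", "female"),
]

def screen_time(detections, key):
    """Sum on-screen seconds, grouped by a caller-supplied key function."""
    totals = defaultdict(float)
    for det in detections:
        start, end = det[0], det[1]
        totals[key(det)] += end - start
    return dict(totals)

print(screen_time(detections, key=lambda d: d[3]))  # breakdown by gender
print(screen_time(detections, key=lambda d: d[2]))  # breakdown by individual
```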
Trump Town
Derek Kravitz, contributing reporter, ProPublica, Columbia Journalism School
Who exactly is running the federal government? To answer this question, ProPublica launched Trump Town, the only public collection of data on the Trump administration’s current and former appointees. Over 3,000 people are documented in the database, including their jobs and specific offices, employment history, lobbying records, government ethics documents, financial disclosures and, in some cases, resumes. Kravitz, an independent journalist who was part of the Trump Town project, together with the Stabile Center and Columbia Journalism Investigations (CJI), will use their Magic Grant to significantly expand and automate the data collection effort behind Trump Town. A researcher overseen by the Brown Institute will create scripts that request financial disclosures and staffing lists from federal agencies; develop open-source tools to scrape data out of the returned PDFs; and design new visualizations of the organizational structure of top federal agencies. Fellows with CJI will use the database to report stories about appointees’ interests and potential conflicts, and how those impact policymaking.
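As an illustration of the PDF-scraping step, a minimal sketch using the open-source pdfplumber library is shown below; the file name and field pattern are placeholders, not ProPublica's actual pipeline.

```python
# Hypothetical sketch: pull appointee rows out of a text-based disclosure PDF.
# "disclosure.pdf" and the line pattern are placeholders for illustration.
import re
import pdfplumber  # pip install pdfplumber

rows = []
with pdfplumber.open("disclosure.pdf") as pdf:
    for page in pdf.pages:
        text = page.extract_text() or ""
        for line in text.splitlines():
            # Assumed layout: "Smith, Jane   Deputy Chief of Staff   Dept. of Energy"
            match = re.match(
                r"^(?P<name>[^,]+,\s*\S+)\s{2,}(?P<title>.+?)\s{2,}(?P<agency>.+)$",
                line,
            )
            if match:
                rows.append(match.groupdict())

print(f"extracted {len(rows)} appointee rows")
```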
Voice-based interface for storytelling
Elizabeth Murnane, Stanford computer science postdoc, and Griffin Dietz, Stanford computer science doctoral candidate
Storytelling has long been considered a crucial part of children’s early social development. Today, we are seeing a parallel push for early computer science education. Students in early elementary school, however, lack approachable, engaging and accessible interdisciplinary computer science learning tools. Through this project, Stanford computer scientists Murnane and Dietz will develop a voice interface that supports the acquisition and practice of these two modern skills: programming and storytelling. By experimenting with this emerging interface in new ways, the team will develop and test novel speech and audio interactions designed to address the literacy challenges facing young audiences and better equip children to create and share their own narratives with the world.
Maternal figures
Ashley Okwuosa, reporter, The Teacher Project at Columbia Journalism School, and Chuma Asuzu, data analyst at Immigration, Refugees and Citizenship Canada
Nigeria’s estimated 40,000 maternal deaths account for a staggering 14% of the world’s annual total — a statistic from a country that represents just 2.6% of the world’s population. Mobile blood banks, free health care for mothers and newborns in some Nigerian states, and community health care centers in underserved regions have led to noticeable reductions in maternal deaths. But to date, there has been no systematic assessment of the efficacy of any given health intervention, and analysis is complicated by the fact that the statistics are scattered and often undercount actual deaths. The researchers, in partnership with Nigeria Health Watch, will centralize health data from the WHO, World Bank and others, and research shifts in technology, policy and culture that have impacted the Nigerian maternal death rate.
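As a back-of-the-envelope check of these figures (the division is ours, implied by the numbers above, not the team's):

```latex
\text{implied world total} \approx \frac{40{,}000}{0.14} \approx 286{,}000 \ \text{maternal deaths per year}
```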
Screenomics interactive dashboard
Jung Cho, Jihye Lee, Yingdan Lu, Dan Muise and Katie Roehrick, all doctoral candidates in communication at Stanford
The smartphone is redefining what it means to be human. As digital devices become more accessible, powerful and personalized, a growing proportion of our life experience is concentrated on a smartphone screen that weaves quickly through apps, videos, pictures and texts. Yet most media research collects aggregate data from individual digital platforms, such as the amount of time spent on social media or reading the news. Due to this limitation, no one really knows what people actually see on their screens. Using a novel software framework that captures digital life in action, the researchers aim to transform the smartphone into a socially-minded diagnostic tool that drives both personal and public insight into digital behavior. Specifically, the team will design a public, web-based dashboard for showcasing research-based socio-psychological analytics related to digital behavior. By creating innovative analytics, the team plans to provide users with psychologically meaningful metrics that drive increased understanding of topics such as political polarization, multitasking, and mental and physical well-being. The team will use Stanford-vetted privacy and security protocols based on transparency and confidentiality, explicit statements about what data is collected and why, and continuous internal and external inspection of data management systems.
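For intuition, a minimal desktop analogue of this kind of capture might look like the sketch below; the real framework runs on smartphones, and the interval, file naming and use of Pillow here are illustrative assumptions.

```python
# Illustrative sketch only: periodically save a screenshot with a timestamp.
# The capture interval and storage layout are placeholders, not the team's design.
import time
from datetime import datetime
from PIL import ImageGrab  # pip install pillow; screen grab works on Windows/macOS

CAPTURE_INTERVAL_SEC = 5  # assumed cadence for the example

while True:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    ImageGrab.grab().save(f"screenome-{stamp}.png")  # one frame of "digital life"
    time.sleep(CAPTURE_INTERVAL_SEC)
```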
Contrast agent
Cameron Hickey, technology manager, Shorenstein Center, Harvard Kennedy School; Laura Edelson, doctoral candidate in computer science at New York University
To get a clearer view of the organs and other structures inside the human body, doctors often give patients a “contrast agent” before taking medical images like MRIs and X-rays. Using the digital equivalent of a contrast agent, the team will develop a methodology and toolkit for the large-scale study of “inauthentic activity” on platforms like Facebook, Instagram and YouTube. They will create a comprehensive view of the industry behind the development and deployment of bots, fake accounts, sock puppets and click farms. How prevalent are these accounts? What forms of content do they promote? Do they exhibit common patterns of behavior? Can these observations lead to an automated detection system – a diagnostic test for this infection of inauthentic activity?
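As one illustration of a “common pattern of behavior,” the sketch below flags groups of accounts that post identical text within a short window; the data model and threshold are invented for the example and are not the team's methodology.

```python
# Illustrative coordination check: multiple accounts posting the same text
# within a short window. Posts and the 5-minute threshold are invented.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    {"account": "user_a", "text": "Buy now!!!", "time": datetime(2019, 6, 1, 12, 0, 0)},
    {"account": "user_b", "text": "Buy now!!!", "time": datetime(2019, 6, 1, 12, 0, 40)},
    {"account": "user_c", "text": "Nice weather today", "time": datetime(2019, 6, 1, 12, 1, 0)},
]

WINDOW = timedelta(minutes=5)

by_text = defaultdict(list)
for post in posts:
    by_text[post["text"]].append(post)

for text, group in by_text.items():
    group.sort(key=lambda p: p["time"])
    accounts = {p["account"] for p in group}
    if len(accounts) > 1 and group[-1]["time"] - group[0]["time"] <= WINDOW:
        print(f"possible coordination: {sorted(accounts)} -> {text!r}")
```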
Synthesizing novel video from GANs
Anh Truong and Haotian Zhang, both doctoral candidates in computer science at Stanford
While video has become an increasingly popular way for people to share and consume stories, there are currently few inexpensive ways for users or filmmakers to quickly fix continuity errors or expand creative content for entertainment. Stanford graduate students Truong and Zhang will build on advances in generative adversarial networks (GANs) to create tools that allow users or editors to synthesize new video that captures different viewpoints, perspectives or actions. For example, if a director has multiple takes of an actor’s performance but likes the actor’s delivery in one and his gestures in another, this tool would allow the director to combine aspects of both takes into one “perfect performance.” Ultimately, the output of this project will be a set of tools that enables users to interactively control the camera and pose of human actors in video.
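For background, a GAN trains a generator G against a discriminator D under the standard minimax objective introduced by Goodfellow et al. (2014); the editing tools described here build on models trained this way:

```latex
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```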
The right to have rights
In their Magic Grant proposal, an interdisciplinary team of journalists and researchers noted that the last two decades of the “war on terror” have impacted the state of human rights in Pakistan. This project will mine Pakistani newspapers, government reports, papers gathered by Pakistani human-rights groups and other public records to build a database that will present and quantify stories of rights violations in Pakistan. Alongside the data collection, the team will report out stories centered on individuals through narrative and visualization. The project aims to accelerate the conversation around the issue of rights in Pakistan, with a storytelling and research practice rooted in data.
Next chapter 360
Sean Liu, doctoral candidate in computer science at Stanford
Today, many filmmakers and directors use 360-degree video for innovative storytelling. It gives viewers the freedom to explore scenes and creates a more immersive experience than traditional video. However, because viewers can look anywhere at any time, they often don’t know where to focus and may be looking in the wrong direction when important story events happen outside their field of view. In this project, Stanford graduate student Liu will build on tools that modify 360-degree video playback to ensure viewers see all important story elements in a video, without compromising the immersive feeling. She will also develop methods that support a wider range of 360-degree scenes and help video creators achieve a more diverse set of narrative goals.
Seed grants
In addition to Magic Grants, the Brown Institute is providing seed funds to the following initiatives to assist in prototyping and early project development:
Tech tweets
Katy Gero, doctoral candidate in computer science at Columbia University; Lydia Chilton, assistant professor in computer science at Columbia University; Tim Requarth, lecturer in science and writing at New York University
Searching for #tweetorial on Twitter produces a stunning number of threads in which scientists explain their new research or put a body of scientific work in context. These are often made by scientists for scientists – from the history of hydroxychloroquine, an anti-malarial drug, to how steroids increase white blood cell counts; and from the roots of interventional cardiology to an economic study of how unprepared seniors are for housing and health care costs after retirement. The team will collect and study tweetorials across multiple scientific domains. The hope is to build a web application that can explore and extend the potential of this new form of explanatory writing, with the net effect of increasing collaboration between scientific fields and serving as an entry point for journalists to both science and the scientific community.
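For a sense of the collection step, the sketch below reconstructs a tweetorial thread by following reply links; the fields are loosely modeled on Twitter's API and the tweets are invented for the example.

```python
# Illustrative thread reconstruction. Assumes a linear self-thread
# (each tweet has at most one reply), which is typical of tweetorials.
tweets = {
    "1": {"id": "1", "text": "A #tweetorial on steroids and white blood cells. 1/3", "in_reply_to": None},
    "2": {"id": "2", "text": "Steroids push neutrophils off vessel walls into circulation. 2/3", "in_reply_to": "1"},
    "3": {"id": "3", "text": "So a high count isn't always infection. 3/3", "in_reply_to": "2"},
}

def thread_from_root(tweets, root_id):
    """Walk reply links forward from a root tweet to rebuild the thread."""
    children = {t["in_reply_to"]: t for t in tweets.values() if t["in_reply_to"]}
    thread, current = [tweets[root_id]], root_id
    while current in children:
        nxt = children[current]
        thread.append(nxt)
        current = nxt["id"]
    return thread

for tweet in thread_from_root(tweets, "1"):
    print(tweet["text"])
```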
Digital reappropriation
Hillary Crosley Coker, senior news producer, Genius.com
In 2018, Epic Games included two hip-hop dances by rappers BlocBoy JB and 2 Milly in its game Fortnite. The dances were renamed and sold as “emotes,” custom animations that players buy and use to express themselves in the game. The renaming distanced the emotes from the dances’ roots in hip-hop and from their unacknowledged and unpaid creators. This prompted a backlash, with one lawyer for the rappers accusing Epic of “brazenly misappropriating” the dances. Crosley Coker will build a tool that links current musical trends to their origins. Based on her detailed reporting methods, the tool will help creators demonstrate their ownership and make “the gray area of cultural origin … undoubtedly less gray,” writes Crosley Coker. It will also act as a new form of cultural critique by tracing phrases, beats and “movements” from their inception to popular or even mainstream usage.
Social lives of urban trees
Patricia Culligan, professor of civil engineering, Columbia University; Rachel Strickland, independent filmmaker
A tree growing in a sidewalk pit is an “architectural organism.” It organizes its urban surroundings and, through its body language and habits, gives definition to public space. Strickland and Culligan will document trees on the exceptionally slow time-scales on which they live. They will develop a multi-modal “Treecorder” device: an audio-visual-sensor recording system designed to capture, in intelligible form, the intricate lives of urban trees and the impacts of human habits on trees’ everyday experiences.
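In the spirit of the Treecorder, a bare-bones time-lapse loop might look like the sketch below; the hourly interval and OpenCV capture are illustrative assumptions, not the team's design.

```python
# Illustrative time-lapse capture: one frame at long intervals, matching a
# tree's slow time-scale. Interval, camera index and file names are placeholders.
import time
from datetime import datetime
import cv2  # pip install opencv-python

INTERVAL_SEC = 3600  # assumed: one frame per hour

camera = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = camera.read()
        if ok:
            stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
            cv2.imwrite(f"treecorder-{stamp}.jpg", frame)
        time.sleep(INTERVAL_SEC)
finally:
    camera.release()
```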
AI identities
Margot Hanley, research assistant at Cornell Tech and graduate student in sociology at Columbia University; Shiri Azenkot, assistant professor at Cornell Tech
For blind people, interactions with visual media on the web occur through “alt text,” a caption that describes the image and its purpose on the page. The idea is as old as HTML itself, with <img> tags providing a text-based alternative to a graphic. On platforms like Facebook and Twitter, this description is increasingly being written by AI captioning algorithms. Like the other algorithms underlying these social media platforms, AI captioning algorithms are not impervious to bias. This project will examine concepts of identity and representation within the images being described. The team will explore who decides how identities are represented in captions, and uncover what guidelines exist to help navigate this complex task. This work will draw on analysis of AI-generated alt text as well as interviews with computer scientists across various tech companies and with the blind users most affected by alt text.
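For readers unfamiliar with alt text, the short sketch below shows how it is attached to an image and what a screen reader would announce; the HTML snippet is invented for illustration, with a caption in the style of Facebook's automatic alt text.

```python
# Illustrative only: collect the alt attribute of every <img> tag on a page.
from html.parser import HTMLParser

class AltTextCollector(HTMLParser):
    """Record the alt text (or its absence) for each image encountered."""
    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.alts.append(dict(attrs).get("alt", "(no alt text)"))

page = '<p>Profile photo: <img src="me.jpg" alt="Image may contain: one person, smiling, outdoors"></p>'
collector = AltTextCollector()
collector.feed(page)
print(collector.alts)  # what a screen reader would announce for each image
```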