Announcing the 2019 Brown Institute Showcase

Mark Hansen and Maneesh Agrawala cordially invite you to the Brown Institute for Media Innovation 2019 Showcase!

Join us for a reception and exhibition of our 2018-2019 projects.

October 17, 2019 – 6:00pm
at the Brown Institute
at Columbia University

Descriptions of the projects can be seen below. The event will take place in the Brown Institute, located in Pulitzer Hall (2950 Broadway) at Columbia University.

Artistic Vision. The crucial footage for breaking news reports often comes from eyewitnesses, “citizen journalists,” using their smartphones. While these videos often do not meet the quality standards set by news organizations, there is a hesitation to perform much post-processing to improve the content, in the spirit of being accurate and truthful. With their Magic Grant, two Computer Scientists, Jane E and Ohad Fried, will help people capture higher-quality content and, ultimately, contribute more impactful, immediate, on-scene documentation of breaking events. E and Fried will create tools that overlay directly on the screen of a traditional camera, dynamically augmenting the current view of a scene with information that helps people make better photo capture decisions. As the team puts it, “Our hope is that such interfaces will empower users to be more intentional about their storytelling and artistic decisions while taking photos.”
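
By way of illustration only, here is a minimal sketch of the kind of on-screen capture aid the project describes, assuming OpenCV (opencv-python) and a webcam; the rule-of-thirds grid and crude exposure warning below are hypothetical stand-ins, not E and Fried's actual tools.

```python
# Minimal, hypothetical sketch of an on-screen capture aid (not the project's tool):
# draw a rule-of-thirds grid over a live camera feed and warn when the frame
# looks under- or over-exposed. Assumes opencv-python and a webcam.
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    for x in (w // 3, 2 * w // 3):            # vertical thirds
        cv2.line(frame, (x, 0), (x, h), (255, 255, 255), 1)
    for y in (h // 3, 2 * h // 3):            # horizontal thirds
        cv2.line(frame, (0, y), (w, y), (255, 255, 255), 1)
    brightness = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
    if brightness < 60 or brightness > 200:   # crude exposure hint
        cv2.putText(frame, "check exposure", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    cv2.imshow("capture aid", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```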


Audiovisual Analysis of 10 Years of TV News. Since 2009, the Internet Archive has been actively curating a collection of news broadcasts from across the country, assembling a corpus of over 200,000 hours of video. Computer Scientists Will Crichton and Haotian Zhang will perform an in-depth longitudinal study of this collection, scanning for patterns in both its audio and its visuals. How has coverage of different topics changed over the years? How often do women get cut off in conversation versus men? What is the relationship between still images and subject? How do clothing and fashion differ across networks and shows? This project will tackle these and many other difficult questions, demonstrating the new potential for large-scale video analysis. The Magic Grant builds on a previous Brown Institute grant, also led by Will Crichton, called Esper. That project created an open-source software infrastructure that helps journalists and researchers “scale up” their investigations, letting them analyze, visualize and query extremely large video collections.
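
To give a flavor of the kind of question such a corpus can answer, here is a minimal, hypothetical sketch of counting cross-gender interruptions from a speaker-labeled transcript; it is an illustration only, not the team's Esper pipeline, and the Utterance structure and sample data are invented.

```python
# Minimal sketch: counting cross-gender interruptions in a transcript.
# Illustration only, not the Esper pipeline; the Utterance structure and
# the sample data below are hypothetical.
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    gender: str   # "F" or "M", as labeled upstream
    start: float  # seconds
    end: float

def count_interruptions(utterances):
    """Count times a new speaker starts before the previous one finishes."""
    counts = {"F_interrupted": 0, "M_interrupted": 0}
    for prev, curr in zip(utterances, utterances[1:]):
        if curr.speaker != prev.speaker and curr.start < prev.end:
            counts[f"{prev.gender}_interrupted"] += 1
    return counts

sample = [
    Utterance("Anchor A", "F", 0.0, 12.4),
    Utterance("Guest B", "M", 11.8, 30.0),   # starts before Anchor A finishes
    Utterance("Anchor A", "F", 30.5, 41.0),
]
print(count_interruptions(sample))  # {'F_interrupted': 1, 'M_interrupted': 0}
```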


BigLocal News. State patrols stop and search drivers in every state, but until recently it has been nearly impossible to understand what they’ve been doing, and whether these searches discriminate against certain drivers. The data was scattered across jurisdictions, “public” but not online, and in a dizzying variety of formats. In 2014, Cheryl Phillips began the Stanford Open Policing Project to provide open, ongoing and consistent access to police stop data in 31 states, and created a new statistical test for discrimination. This is just one example of how sharing local data can improve local journalism. Phillips, together with Columbia Journalist Jonathan Stray, Stanford Electrical Engineering PhD student Irena Fischer-Hwang, and Columbia Journalism/Computer Science MS student Erin Riglin, was awarded a Magic Grant to build on this success, creating a pipeline that will enable more local accountability journalism and boost the likelihood of big policy impact. The team will collect, clean, archive and distribute data that can be used to tell important journalistic stories. The data will be archived in the Stanford Digital Repository, and the team’s work will also help extend Columbia’s Workbench computational platform, making the analysis of local data broadly available to even novice data journalists.
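
As a rough, hypothetical illustration of one step in such a pipeline, the sketch below standardizes a state's raw stop records onto a shared schema using pandas; the column names and code mappings are invented, not the project's actual schema.

```python
# Minimal sketch of one cleaning step in a police-stop data pipeline:
# mapping a state's idiosyncratic column names and codes onto a shared
# schema so stops can be compared across jurisdictions. The column names
# and code mappings here are hypothetical, not the project's schema.
import pandas as pd

COLUMN_MAP = {"StopDate": "date", "RaceCd": "driver_race", "SearchYN": "search_conducted"}
RACE_CODES = {"W": "white", "B": "black", "H": "hispanic", "A": "asian", "O": "other"}

def standardize(raw: pd.DataFrame) -> pd.DataFrame:
    """Rename columns, parse dates, and decode categorical fields."""
    df = raw.rename(columns=COLUMN_MAP)
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    df["driver_race"] = df["driver_race"].map(RACE_CODES).fillna("unknown")
    df["search_conducted"] = df["search_conducted"].eq("Y")
    return df[["date", "driver_race", "search_conducted"]]

raw = pd.DataFrame({
    "StopDate": ["2018-03-01", "2018-03-02"],
    "RaceCd": ["B", "W"],
    "SearchYN": ["Y", "N"],
})
print(standardize(raw))
```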


Charleston Reconstructed. Particularly in the American South, historical memory is distorted by outdated structures in public spaces. Antebellum- and Confederate-era monuments celebrate the oppressive legacy of white men and exclude the contributions of women and people of color to American society, complicating claims to equality in the present. White supremacists gather around them, local governments fight over whether to remove them, and activists tear them down. Progress toward creating physical spaces that reflect more current ideas about the past and present is slow. With a seed grant, Columbia Documentary Journalism student Robert Tokanel, Stanford Computer Scientist Kyle Qian, and Stanford undergraduates Khoi Le and Hope Schroeder will help audiences imagine a powerful new reality. The team will work toward digitally transforming public spaces in Charleston, South Carolina, using narrative film techniques and augmented reality to flip the power structures of the past, hoping to expose users to a range of perspectives about the value of the monuments as they currently stand.


Decoding Differences in DNA Forensic Software. Imagine testing the fingernail scrapings of a murder victim to determine if a suspect could be the killer, only to have one DNA interpretation software program incriminate the suspect and a different program absolve them. Such a scenario played out two years ago in the widely publicized murder trial of Oral Nicholas Hillary, raising questions that the criminal justice system still cannot answer: why, when, and by how much do these programs differ from one another? To answer these questions, this Magic Grant assembles a multi-disciplinary team: Jeanna Matthews, a Computer Scientist; Nathan Adams, a DNA investigations specialist; Jessica Goldthwaite, of The Legal Aid Society; Dan Krane, a Biologist; Surya Mattu, a Journalist; and David Madigan, a Statistician. The team will systematically compare forensic DNA software, moving the story beyond anecdotal examples to a rigorous investigative strategy. In the process, they will explore important issues of algorithmic transparency and the role of complex software systems in the criminal justice system and beyond.
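
One simple, hypothetical way to frame such a comparison is to line up the likelihood ratios that different programs report for the same evidence and flag disagreements; the sketch below is an illustration with made-up program names and numbers, not the team's methodology.

```python
# Minimal, hypothetical sketch of comparing probabilistic genotyping outputs:
# align the likelihood ratios (LRs) each program reports for the same
# sample/suspect pair, then flag cases where they point in opposite
# directions or differ by orders of magnitude. All values are made up.
import math

results = {
    "sample_001": {"ProgramA": 2.3e4, "ProgramB": 0.6},    # disagree on direction
    "sample_002": {"ProgramA": 1.1e2, "ProgramB": 3.0e5},  # agree, but far apart
}

for sample, lrs in results.items():
    values = list(lrs.values())
    inculpatory = [v > 1 for v in values]
    spread = abs(math.log10(max(values)) - math.log10(min(values)))
    if len(set(inculpatory)) > 1:
        print(f"{sample}: programs disagree on inclusion vs. exclusion {lrs}")
    elif spread >= 2:
        print(f"{sample}: LRs differ by {spread:.1f} orders of magnitude {lrs}")
```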


Democracy Fighters. Ninety-two journalists have been killed in Mexico since 2000. Contrary to popular belief, these reporters did not die as the result of generalized violence. Instead, they were targeted. Their deaths cannot be understood without reading and listening to their work. Consequently, the worth of their journalism, and the risks they undertook, cannot be fully comprehended without understanding the rich context and history of the places where they lived, the social forces they faced, and the stories they told. Alejandra Ibarra Chaoul, a Journalist, wants to give these reporters’ work a home and provide that context so that “through this repository, their fight for democracy will continue.”


Casting the Vote. Casting the Vote is an interactive, live theatrical experiment in cross-disciplinary co-creation. It’s also a dinner party, a gathering of strangers, friends, students, organizers and artists – a temporary intentional community brought together to discuss democracy, address its history, embody its presence, and imagine its futures. Conceived by documentarian/journalist June Cross and director Charlotte Brathwaite, Casting the Vote experiments with participatory co-creation across forms: journalism, documentary film, theatre, activism, culinary arts, and alternative pedagogies. It creates a space intended to hold and mediate the most difficult, most important questions of our time – denying neither the integrity of our anger nor the fullness of our collective spirit.

Learning to Engage in Conversations for AI Systems. People are interacting with artificial intelligence (AI) systems more every day. AI systems play roles in call centers, mental health support, and workplace team structures. As AI systems enter these human environments, they inevitably will need to interact with people in order to achieve their goals. Most AI systems to date, however, have focused almost entirely on task performance and rarely, if at all, on their social interactions with people, or on how to balance the AI’s goals against those of its human collaborators. Success requires learning quickly how to interact with people in the real world. Stanford Computer Scientists Ranjay Krishna and Apoorva Dornadula were awarded a Magic Grant to create a conversational AI agent on Instagram, where it will learn to ask engaging questions of people about the photos they upload. Its goal will be to simultaneously learn new facts about the visual world by asking questions and learn how to interact with people around their photos in order to expand its knowledge of those concepts.
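
A minimal, hypothetical sketch of the question-asking idea might look like the following; it is not wired to Instagram and is not the team's agent, just an illustration of choosing what to ask about and recording what is learned.

```python
# Hypothetical sketch of a question-asking loop (not the team's agent, not
# connected to Instagram): given concepts detected in an uploaded photo, ask
# about the concept the agent knows least about, then record what it learns.
# The detector output and knowledge counts below are made up.
from collections import defaultdict

knowledge = defaultdict(int)        # concept -> number of facts learned so far

def pick_question(detected_concepts):
    """Ask about the least-understood concept to maximize what the agent learns."""
    target = min(detected_concepts, key=lambda c: knowledge[c])
    return target, f"That {target} looks interesting! Where was this taken?"

def record_answer(concept, answer):
    """Stand-in for extracting facts from the person's reply."""
    knowledge[concept] += 1

detected = ["surfboard", "beach", "dog"]   # pretend output of an image model
concept, question = pick_question(detected)
print(question)
record_answer(concept, "At Ocean Beach in San Francisco.")
```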


Lineage.

Lineage is an artificially intelligent engine that enables the exploration of digitized visual archives in a human-like manner. With Lineage, the user can input any image, and get in return visually similar images from thousands of years of art and design. The returned images are not identical to the input but rather give the user the visual context in which it exists, allowing for a deeper understanding of the input image. Lineage uses the publicly available databases of art and design institutions, museums, archives and libraries. It eschews verbal, keyword-based search, preferring a visual, open-ended, non-definitive result schema. Its similarity algorithm relies on colors, shapes, patterns and their layered combinations, mimicking the way humans look at objects, and encouraging serendipitous connections across time periods, locations of origin, creators and mediums: clothing, craft, furniture, architecture, graphic and industrial design, visual arts and so on.
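
As a rough illustration of visual, non-keyword retrieval, the sketch below ranks archive images by color-histogram similarity to a query image; Lineage's actual engine layers color, shape and pattern features, so this captures only the simplest color signature (it assumes Pillow and numpy, and the file paths are placeholders).

```python
# Minimal sketch of visual, non-keyword retrieval in the spirit of Lineage:
# rank archive images by color-histogram similarity to a query image.
# Lineage layers color, shape and pattern features; this illustration keeps
# only a simple color signature. Assumes Pillow + numpy; paths are placeholders.
import numpy as np
from PIL import Image

def color_signature(path, bins=8):
    """3D RGB histogram, normalized so images of different sizes compare fairly."""
    rgb = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(rgb, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def most_similar(query_path, archive_paths, top_k=5):
    """Score every archive image by histogram intersection with the query."""
    q = color_signature(query_path)
    scores = []
    for path in archive_paths:
        a = color_signature(path)
        scores.append((np.minimum(q, a).sum(), path))
    return sorted(scores, reverse=True)[:top_k]

# Example usage with placeholder paths:
# for score, path in most_similar("query.jpg", ["archive/vase.jpg", "archive/chair.jpg"]):
#     print(f"{score:.3f}  {path}")
```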


NeverEnding 360.

News organizations like The New York Times and The Guardian have experimented with fast-paced, serial production schedules for 360 videos, hoping to prove out the medium. While 360 videos offer viewers more freedom to explore the scenes in a story, that freedom also poses an added challenge for directors and creators. Because users can be looking anywhere at any time, they may be looking in the wrong direction when important events or actions in a story take place outside their field of view. Virtual Reality environments, by contrast, can address this problem by controlling the animation of objects, perhaps having a scene pause or loop until the user is looking in the right direction. With her Magic Grant, Computer Scientist Sean Liu will consider how to adapt these strategies to 360 video, providing better storytelling without compromising the immersive feeling of these videos.
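
One hypothetical way to adapt the VR strategy to 360 video is sketched below: hold playback when a key moment is about to happen outside the viewer's field of view. The event list, angles and threshold are invented, not Liu's design.

```python
# Hypothetical sketch of gaze-aware playback for 360 video, one way to adapt
# the VR "wait until they look" idea described above (not Liu's implementation).
# Angles are yaw in degrees; the event list and threshold are made up.
def angular_distance(a, b):
    """Smallest difference between two headings, in degrees (0..180)."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def playback_action(current_time, viewer_yaw, events, fov_half_angle=45):
    """Return 'play' normally, or 'hold' when a key event is outside the view."""
    for event in events:
        starting_now = event["time"] - 0.5 <= current_time <= event["time"]
        if starting_now and angular_distance(viewer_yaw, event["yaw"]) > fov_half_angle:
            return "hold"    # pause or loop until the viewer turns toward the action
    return "play"

events = [{"time": 12.0, "yaw": 150}]          # important action behind the viewer
print(playback_action(11.8, viewer_yaw=0, events=events))    # hold
print(playback_action(11.8, viewer_yaw=140, events=events))  # play
```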


ParaFrame.

Stories come in many forms, and in a wide range of detail — from casual anecdotes told among friends, to epic Hollywood blockbusters, heavily engineered and rendered in vivid high-definition. But regardless of how they are told, great stories do not simply appear fully formed in the mind; they are inspired by the work of others, crafted with familiar tools, and refined through iteration. The Magic Grant team of Computer Scientists Abe Davis and Mackenzie Leake will provide users with tools that focus on the construction of a narrative (specifically, through the writing of a script or the posing of rough character sketches) and use algorithms to search the Internet for visuals that can be repurposed or remixed to fit that narrative. In doing so, their work will offer an accessible way for untrained users to learn from and build on the work of experts.
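
As a small, hypothetical illustration of turning a rough script into visual search queries, the sketch below pulls settings out of standard screenplay scene headings; the search itself is left as a stub, and none of this is the team's system.

```python
# Hypothetical sketch of turning a rough script into visual search queries,
# in the spirit of ParaFrame but not the team's system: pull settings out of
# standard screenplay scene headings and turn them into queries for a
# stock/archive image search (the search itself is left out).
import re

SCENE_HEADING = re.compile(r"^(INT|EXT)\.\s+(.+?)\s+-\s+(DAY|NIGHT)$", re.MULTILINE)

def scene_queries(script_text):
    """One search query per scene heading, e.g. 'roadside diner interior, night'."""
    queries = []
    for place_type, location, time_of_day in SCENE_HEADING.findall(script_text):
        setting = "interior" if place_type == "INT" else "exterior"
        queries.append(f"{location.lower()} {setting}, {time_of_day.lower()}")
    return queries

script = """INT. ROADSIDE DINER - NIGHT
A tired reporter nurses a cup of coffee.

EXT. COUNTY COURTHOUSE - DAY
Protesters gather on the steps."""

print(scene_queries(script))
# ['roadside diner interior, night', 'county courthouse exterior, day']
```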


When Deportation is a Death Sentence.

Sarah Stillman, Staff Writer at The New Yorker, will lead a team to build the first-ever searchable database of deaths-by-deportation, in a manner that is empirically rigorous, narratively engaging, and visually stunning. The team will merge cutting-edge data journalism (pursued alongside foreign correspondence in refugee camps, migrant shelters, and mortuaries) with technological innovation (focusing on the aesthetic power of the mobile experience) to build a practical but elegant database that turns their massive spreadsheet into an unshakable story. The team also draws on the powerful data visualization expertise of Giorgia Lupi, co-founder of Accurat. They will make their findings and ongoing investigation accessible through a website that amplifies the very best of what Lupi calls “data humanism.” In Stillman’s words, “Absent this new effort to bring these data to light, the stories will remain buried, unspoken, and unaccounted-for in the public record.”