News

Reporting “on” and “with” Twitter

The last session of the Transparency Series took place last Saturday and featured Craig Silverman, BuzzFeed’s media editor. Silverman has an impressive record of reporting on Twitter and on social media platforms more generally. During the day-long workshop, he shared his strategies for breaking stories down into “actors, content, behavior, and networks.” His investigative routine when looking…

Read More

Becoming more mindful about visual information: A Q&A with Alberto Cairo, author of ‘How Charts Lie’

Alberto Cairo is an associate professor and Knight Chair of Visual Journalism at the University of Miami. He recorded this interview with Alex Calderwood before delivering a lecture about his recently released book How Charts Lie: Getting Smarter about Visual Information. What spurred you to write this book? Have you been thinking about it for…

Read More

Innovating with AI

Medium’s Chief Architect Xiao Ma spoke at Stanford on Nov. 5, asking the question: How does technology reshape content discovery and delivery? He unpacked Medium’s recommendation system, a hybrid model that joins collaborative filtering (“How can we recommend content based on your previous history and people similar to you?”) and content-based filtering (“I don’t…
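The hybrid approach Ma described can be sketched in a few lines. This is a hypothetical toy example, not Medium’s actual system: it blends a collaborative score (similarity-weighted ratings from other users) with a content-based score (similarity to a profile of the user’s liked items), with an illustrative blending weight `alpha`.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def collaborative_scores(ratings, user):
    # Score items by the ratings of users similar to `user`
    # (user-user similarity weighted average).
    sims = np.array([cosine(ratings[user], ratings[u]) for u in range(len(ratings))])
    sims[user] = 0.0                       # exclude the user themselves
    weighted = sims @ ratings              # similarity-weighted rating sums
    return weighted / (sims.sum() + 1e-9)  # normalize by total similarity

def content_scores(item_features, liked_items):
    # Score items by similarity to the user's liked-item profile.
    profile = item_features[liked_items].mean(axis=0)
    return np.array([cosine(profile, f) for f in item_features])

def hybrid_scores(ratings, item_features, user, liked_items, alpha=0.5):
    # Blend the two signals; alpha trades off collaborative vs. content.
    return (alpha * collaborative_scores(ratings, user)
            + (1 - alpha) * content_scores(item_features, liked_items))

# Tiny example: 3 users x 4 items, 2-dimensional item features.
ratings = np.array([[5, 0, 3, 0],
                    [4, 0, 0, 2],
                    [0, 5, 4, 0]], dtype=float)
features = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [0.7, 0.3],
                     [0.9, 0.1]])
scores = hybrid_scores(ratings, features, user=0, liked_items=[0, 2])
```

In practice each signal covers for the other’s weakness: collaborative filtering fails for brand-new items with no ratings, while content-based filtering can recommend them from their features alone.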

Read More

Visual Relationships as Functions: Enabling Few-Shot Scene Graph Prediction

Authors: Apoorva Dornadula, Austin Narcomey, Ranjay Krishna, Michael Bernstein, Li Fei-Fei. We introduce a scene graph approach that formulates predicates as learned functions, which result in an embedding space for objects that is effective for few-shot prediction. Our formulation treats predicates as learned semantic and spatial functions, which are trained within a graph convolution network. First,…
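The core idea of predicates as learned functions can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper’s model: each predicate is represented here as a learned linear map `W_p` over a shared object-embedding space, and a triple (subject, predicate, object) scores highly when the predicate function maps the subject embedding close to the object embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding dimension (illustrative)

def predicate_score(W_p, subj_emb, obj_emb):
    # Apply the predicate's learned function to the subject embedding,
    # then measure cosine similarity to the object embedding.
    pred = W_p @ subj_emb
    return float(pred @ obj_emb /
                 (np.linalg.norm(pred) * np.linalg.norm(obj_emb) + 1e-9))

# Toy "learned" parameters for a hypothetical predicate, e.g. "above".
W_above = np.eye(D) + 0.1 * rng.standard_normal((D, D))
subj = rng.standard_normal(D)    # e.g. embedding of a subject object
obj = W_above @ subj             # an object the predicate maps the subject to

score_match = predicate_score(W_above, subj, obj)                      # near 1
score_random = predicate_score(W_above, subj, rng.standard_normal(D))  # arbitrary
```

Because the object-embedding space is shared across predicates, a new predicate only needs its own small function to be learned, which is what makes the few-shot setting tractable.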

Read More

Scene Graph Prediction with Limited Labels

Authors: Vincent Chen, Paroma Varma, Ranjay Krishna, Michael Bernstein, Christopher Ré, Li Fei-Fei. Our semi-supervised method automatically generates probabilistic relationship labels to train any scene graph model. Abstract: Visual knowledge bases such as Visual Genome power numerous applications in computer vision, like visual question answering and captioning, but suffer from sparse, incomplete relationships. All scene…
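The notion of automatically generated probabilistic labels can be sketched as follows. This is a hypothetical illustration in the spirit of weak supervision, not the paper’s actual labeling functions: a few noisy heuristics over bounding boxes vote on whether a relationship holds, and the probabilistic label is the fraction of positive votes among the heuristics that did not abstain.

```python
from statistics import mean

def probabilistic_label(votes):
    # Each vote is 1 ("relationship holds"), 0 ("does not"), or None (abstain).
    cast = [v for v in votes if v is not None]
    return mean(cast) if cast else 0.5  # uninformative prior when all abstain

# Hypothetical heuristics over bounding boxes (x, y, w, h); these rules
# are made up for illustration.
def above(subj, obj):
    # Vote positive if the subject box sits entirely above the object box.
    return 1 if subj[1] + subj[3] <= obj[1] else 0

def overlapping(subj, obj):
    # Vote positive when the boxes overlap horizontally; abstain otherwise.
    horiz = subj[0] < obj[0] + obj[2] and obj[0] < subj[0] + subj[2]
    return 1 if horiz else None

subj_box, obj_box = (10, 0, 5, 5), (10, 20, 5, 5)  # subject above object
label = probabilistic_label([above(subj_box, obj_box),
                             overlapping(subj_box, obj_box)])
```

The resulting soft labels can then be used as training targets for a scene graph model in place of scarce human annotations.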

Read More

A Taxonomy for VR

Eve Weston, CEO and founder of the Los Angeles-based VR studio Exelauno, told Stanford students that she has developed a way to talk about VR: what she calls a “taxonomy” for VR. This taxonomy unpacks the emotional intensity of the VR experience into its key parts: Narrative (1st person, 2nd person, 3rd person?) Visual Options (Embodied…

Read More