Searching for #tweetorial on Twitter produces a stunning number of threads in which scientists explain their new research or put a body of scientific work in context. These threads are often made by scientists for scientists – from the history of hydroxychloroquine, an anti-malarial drug, to how steroids increase white blood cell counts, and from the roots of interventional cardiology to an economic study of how unprepared seniors are for housing and health care costs after retirement. The team will collect and study tweetorials across multiple scientific domains. The hope is to build a web application that can explore and extend the potential of this new form of explanatory writing, with the net effect of increasing collaboration between scientific fields and serving as an entry point for journalists to both science and the scientific community.
Social lives of urban trees
A tree growing in a sidewalk pit is an “architectural organism.” It organizes its urban surroundings and, through its body language and habits, gives definition to public space. Strickland and Culligan will document trees on the exceptionally slow time-scales on which they live. They will develop a multi-modal “Treecorder” device – an audio-visual-sensor recording system designed to capture, in intelligible form, the intricate lives of urban trees and the impacts of human habits on trees’ everyday experiences.
For blind people, interactions with visual media on the web occur through “alt text,” a caption that describes an image and its purpose on the page. The idea is as old as HTML itself, with <img> tags providing a text-based alternative to a graphic. On platforms like Facebook and Twitter, these descriptions are increasingly written by AI captioning algorithms. Like the other algorithms underlying these social media platforms, AI captioning algorithms are not impervious to bias. This project will examine concepts of identity and representation within the images, exploring who decides how identities are represented in captions – and how – and uncovering what guidelines exist to help navigate this complex task. This work will draw on analysis of AI-generated alt text as well as interviews with computer scientists across various tech companies and with the blind users most affected by alt text.