Social media posts can be difficult to interpret because they are often highly contextualized. Misunderstandings have real consequences: interpersonal violence, police arresting innocent people, administrators barring students from opportunities, or journalists reproducing stories that pathologize vulnerable communities. To bring greater attention to context, racial-bias reflection, and restorative justice to social media interpretation, we created Interpret Me, a simulation-based learning platform that trains law enforcement officials, journalists, and educators to recognize racism in their interpretations of social media posts by Black people. By partnering with the Stanford Social Media Lab and with members of the Brownsville Community Justice Center (BCJC), who will act as advisors and co-designers throughout the training development process, we will ensure the intervention is community-driven.

Interpret Me embeds AI-generated, human-in-the-loop feedback for stakeholders who use social media and predictive reporting algorithms to make decisions based on speculative interpretations of posts. Our simulations will provide continuous feedback and opportunities for self-reflection, helping users build a new vocabulary for culturally aware and ethical social media risk assessment.