2020-21 Magic Grant Profile: Self-Moderating Online Focus Groups and Deliberation

Our 2020-21 Magic Grant, Self-Moderating Online Focus Groups and Deliberation, is building a tool for better online deliberation. Through an automated moderator that fosters equitable, respectful, and constructive conversation, the team behind this grant aims to support deliberative democracy by developing a scalable online platform that addresses common challenges such as a small group dominating the conversation, a single topic absorbing too much time, or a biased moderator.

This post is part of a series of interviews with our current 2020-21 Magic Grants. Since we are back in Magic Grant application season, we want to showcase some of the great work our current grantees are doing and encourage you to consider applying for a Magic Grant. Here’s the link to the call for proposals, and to our FAQ. And here’s the link to the application itself; applications are due May 1.


Here’s a (lightly) edited version of the interview with Lodewijk Gelauff and Sukolsak Sakshuwong from Stanford University.

What was the main impetus for your project? Where did the idea come from?

LG: We are part of the Crowdsourced Democracy Team, a cross-disciplinary lab with Prof. Ashish Goel in the Management Science and Engineering department at Stanford. The goal of this Platform for Online Deliberation project is to develop an online platform that can host deliberations. Deliberations consist of structured, constructive conversations of many groups of 8-15 people in parallel. The challenge is to have these groups moderate themselves, and to stimulate equitable behavior within the groups.

SS: The Internet is often seen nowadays as a place that sows division and polarization. But we believe that we can build a platform that brings people together to deliberate on complex issues in a civil way and learn from each other. We have been collaborating with Prof. James Fishkin and Alice Siu from the Center for Deliberative Democracy who were looking for ways to bring deliberations online and make them more scalable. Both the travel of participants to a single location and the training of moderators to guide the groups in a neutral way were limiting how deliberative polls could be executed.

How has your project evolved during (or because of) the pandemic? You mention Covid as one of the main impulses behind your project, but how has the pandemic actually shaped it?

LG: Due to the restrictions imposed by the pandemic, we have seen increased interest in taking deliberations online. Deliberations that were scheduled to take place offline are now being considered for a move online. People increasingly expect meetings to take place online, and we think that may have increased the interest in collaborating with us. At the same time, we all experience the challenges of unmoderated and unstructured conversations.

What are some of the biggest milestones you’ve achieved so far?

LG: We have been very excited to have already hosted some large deliberative polls in Hong Kong, Japan, Canada, and Chile, and on US schools, with a few hundred participants discussing the same agenda over multiple sessions spanning a few hours. Last January, the Stanford faculty used our platform to discuss the new School for Sustainability, and it was very nice to receive such positive feedback from people around campus. We’re pushing our boundaries with every deployment and collaboration – sometimes in size, and sometimes by providing more flexibility in conversation design. It’s now a matter of minutes to set up a structured conversation for dozens of groups in parallel!

SS: In the past few months, we have been working hard to scale up the platform by automating parts of the deliberation process. For example, it used to be that we had to assign each participant to a group one by one. That process is now automatic. The setup and teardown of servers to handle workload on demand is also automatic. We went from being able to host at most 2-3 groups simultaneously to 50 groups, and we plan to support more. We have also been simplifying the user interface for deliberation organizers so that they can set everything up by themselves with minimal assistance from us.
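The interview doesn’t spell out the assignment logic itself, but as a purely illustrative sketch, automatically shuffling participants into fixed-size groups could look something like the following (all names and parameters here are hypothetical, not the team’s actual implementation):

```python
import random

def assign_groups(participants, group_size=10, seed=None):
    """Shuffle participants and split them into groups of roughly `group_size`.

    Purely illustrative: the platform's actual assignment logic is not
    described in the interview.
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    n_groups = max(1, round(len(shuffled) / group_size))
    groups = [[] for _ in range(n_groups)]
    for i, person in enumerate(shuffled):
        groups[i % n_groups].append(person)  # round-robin keeps group sizes balanced
    return groups

# Example: 500 participants split into ~50 groups of ~10
groups = assign_groups([f"participant_{i}" for i in range(500)], group_size=10, seed=42)
print(len(groups), [len(g) for g in groups[:3]])
```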

What are some of the most challenging aspects of your project?

SS: There are two main challenging components to our platform. First, there is the challenge of scaling up the infrastructure: as you increase the number of people on the platform, you keep running into problems that you never knew existed before – unusual setups that make it hard for people to log on, or workflows that break down when too many people participate. Not only do technical processes break down as you scale them; human processes, too, are sometimes only feasible at a certain scale. We have to continuously rethink our assumptions and implementation to keep up.

The second challenge is demonstrating that this platform is effective compared to a human moderator. For this purpose, we have recently organized a controlled experiment and are collecting data from our deliberations. We’re now processing the data from it, and hoping for more news soon!

Any publications or conferences where you’ve showcased your project?

LG: We demonstrated an earlier version of our platform at HCOMP. Since then, we have collected a lot of data that will hopefully be the foundation for more publications. There are several technical reports and press releases about the deliberations held on our platform, available through the Center for Deliberative Democracy website.

What are some of the ideal use cases for what you are developing? Where do you hope to have the biggest impact?

LG: Each time we show our platform to a new group, we get excited feedback and suggestions to roll it out in a different direction. People are looking for better ways to have structured conversations, and we believe there are many ways we can help to make those more constructive and effective.

We’re currently optimizing to perform really well in the use case of online deliberative polls: parallel groups that discuss a structured agenda, with each group producing remaining questions as an outcome that is then presented in a plenary session. However, it is easy to imagine this being useful in much broader use cases, and we’re exploring which of those are most worth pursuing. Different outcome models and more flexibility in the conversation flow come to mind. Our goal is that any organizer of a structured set of small-group conversations can easily set up a set of rooms and invite participants to enjoy a constructive and effective discussion – without intervention from our team.

What comes next?

LG: The next steps are analyzing our data and pressing ahead. We already have requests for more collaborations and to scale up even further. Our goal remains to have a real impact on how people meet online, and to make it easy to set up a structured conversation for thousands of people at the same time.

SS: We are trying to get to the point where it doesn’t matter whether there are 10 or 100,000 participants, because everything takes the same amount of work. For things that can’t be automated because the technology isn’t there yet, we are trying to create tools that support human moderators as much as possible. We are also working on using machine learning and natural language processing to gain more insight into how people participate in these discussions. We want to analyze this in real time and use the data to nudge the conversation towards more engagement.
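One simple signal such real-time analysis could build on is each participant’s share of speaking time, which can flag groups where a few voices dominate. The sketch below is purely illustrative and not the team’s actual method; the data format and function names are assumptions.

```python
from collections import defaultdict

def speaking_shares(turns):
    """Compute each speaker's share of total speaking time.

    `turns` is a hypothetical list of (speaker, seconds_spoken) tuples,
    a stand-in for whatever the platform actually records.
    """
    totals = defaultdict(float)
    for speaker, seconds in turns:
        totals[speaker] += seconds
    grand_total = sum(totals.values()) or 1.0  # avoid division by zero
    return {speaker: secs / grand_total for speaker, secs in totals.items()}

def dominance_flag(shares, threshold=0.5):
    """Flag a group if any single participant exceeds `threshold` of the talk time."""
    return any(share > threshold for share in shares.values())

# Example usage
turns = [("alice", 120), ("bob", 30), ("carol", 20)]
shares = speaking_shares(turns)
print(shares, dominance_flag(shares))  # alice holds ~71% of the talk time -> True
```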