Quantifying the impact of AI-powered pro-social feedback
One of the biggest problems facing online platforms today is the prevalence of so-called “toxic” behavior, such as personal attacks, harassment, and general incivility. My prior research, conducted during my Ph.D., pioneered a novel approach to this problem: rather than *reactively* detecting toxic content and flagging it for removal, I developed language models that can accurately identify ongoing conversations that are at risk of becoming toxic, enabling *proactive* intervention to prevent such an outcome. However, the real-world impact of such interventions on actual users remains poorly understood. This project aims to develop and implement user studies that will enable us to observe how users behave when presented with AI-powered pro-social interventions.
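To make the *proactive* framing concrete, here is a minimal sketch of what forecasting-based intervention could look like in code. It assumes a Hugging Face text-classification pipeline; the model name, the "RISKY" label, and the risk threshold are all illustrative placeholders, not the actual forecasting models from my prior work.

```python
# Illustrative sketch only: score an in-progress conversation for the risk
# that it will *become* toxic, so an intervention can fire before any toxic
# comment is actually posted.
from transformers import pipeline

# Placeholder model name: any classifier fine-tuned to forecast conversational
# derailment could be dropped in here.
risk_classifier = pipeline("text-classification", model="derailment-forecaster")

RISK_THRESHOLD = 0.7  # illustrative sensitivity setting

def derailment_risk(comments: list[str]) -> float:
    """Estimate the probability that this conversation will turn toxic,
    given all comments posted so far."""
    context = " [SEP] ".join(comments)  # flatten the thread into one input
    result = risk_classifier(context, truncation=True)[0]
    # "RISKY" is a hypothetical label name for the positive class.
    return result["score"] if result["label"] == "RISKY" else 1 - result["score"]

def maybe_intervene(comments: list[str]) -> str | None:
    """Return a pro-social prompt when forecasted risk crosses the threshold."""
    if derailment_risk(comments) >= RISK_THRESHOLD:
        return "This discussion seems to be getting heated. Consider rephrasing."
    return None
```

The key design point is that the classifier consumes the full conversational context rather than a single comment, which is what distinguishes forecasting from reactive toxicity detection.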
The project entails a joint technical and social approach. From a technical perspective, we will need to implement a platform that allows volunteers to interact with the AI-powered intervention system, and allows us to observe and record their behavior (for instance, this could take the form of a mockup social media comment section). From a social perspective, we will need to analyze and interpret the observed behavior of the volunteers, aided by the context of existing literature on intervention design.
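As one possible shape for the technical side, the sketch below models the mockup comment section as a small Flask backend that records every comment for later analysis and checks each new comment against the intervention hook. The endpoint layout, the log format, and the `maybe_intervene` helper (from the sketch above) are assumptions for illustration, not a finalized design.

```python
# Minimal sketch of the observation platform backend, assuming Flask.
import json
import time
from flask import Flask, request, jsonify

app = Flask(__name__)
threads: dict[str, list[str]] = {}  # in-memory store; a real study would use a database

@app.post("/threads/<thread_id>/comments")
def post_comment(thread_id: str):
    comment = request.get_json()["text"]
    thread = threads.setdefault(thread_id, [])
    thread.append(comment)

    # Record the event so volunteer behavior can be analyzed after the study.
    with open("behavior_log.jsonl", "a") as log:
        log.write(json.dumps({"thread": thread_id,
                              "comment": comment,
                              "time": time.time()}) + "\n")

    # Hook into the AI-powered intervention system (hypothetical helper
    # from the forecasting sketch above).
    warning = maybe_intervene(thread)
    return jsonify({"accepted": True, "intervention": warning})

if __name__ == "__main__":
    app.run(debug=True)
```

Logging every comment alongside any intervention shown is what would let us later correlate user behavior with the presence or absence of a pro-social prompt.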
This project will be a novel contribution to the emerging literature on designing more pro-social online communities. While there has long been interest in technology that can *reactively detect* toxicity, interest in technology to help *proactively prevent* it is more recent. The findings will be of interest both to researchers who study intervention design in software platforms and to researchers who study content moderation and online community governance (a group that spans disciplines, from CS to political and social science). The broader societal impact should not be understated, either: successfully promoting more pro-social behavior online has the potential to create healthier online communities and reduce the harms associated with toxicity.
Interested students should apply by emailing Prof. Chang at jpchang@g.hmc.edu.
Name of research group, project, or lab
Jonathan P. Chang Lab
Why join this research group or lab?
Our work straddles the boundary between computer science and social science. If you are a Harvey Mudd student itching for an opportunity to jointly make use of your interests in STEM and HSA subjects, this may be a perfect fit for you! Students with interests in sociology, government, or philosophy are especially encouraged to apply. Furthermore, this project has room for contributions both from students who are new to computer science and from students with more extensive experience. First- and second-year students will get a chance to apply their CS5 knowledge to help build the Python-based backend of our proposed platform, while also gaining exposure to the underlying machine learning concepts, which could spark a future interest in that area. Meanwhile, more senior students with prior AI/ML/NLP experience will have the chance to refine and further develop the underlying models.
Hours per week
Summer - Full Time
Project categories
Computer Science