"OpenBloom": A Question-Based LLM Tool to Support Stigma Reduction in Reproductive Well-Being

"OpenBloom": A Question-Based LLM Tool to Support Stigma Reduction in Reproductive Well-Being
Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Reproductive well-being education remains widely stigmatized across diverse cultural contexts, constraining how individuals access and interpret reproductive health knowledge. We designed and evaluated OpenBloom, a stigma-sensitive, AI-mediated system that uses LLMs to transform reproductive health articles into reflective, question-based learning prompts. We employed OpenBloom as a design probe, aiming to explore the emerging challenges of reproductive well-being stigma through LLMs. Through surveys, semi-structured interviews, and focus group discussions, we examine how sociocultural stigma shapes participants’ engagement with AI-generated questions and the opportunities for inquiry-based reproductive health education. Our findings identify key design considerations for stigma-sensitive LLMs, including empathetic framing, inclusive language, values-based reflection, and explicit representation of marginalized identities. However, while current LLM outputs largely meet expectations for cultural sensitivity and non-offensiveness, they default to superficial rephrasing and factual recall rather than critical reflection. This guides well-being HCI design in sensitive health domains toward culturally grounded, participatory workflows.


💡 Research Summary

This paper presents OpenBloom, a question‑based large language model (LLM) tool designed to explore how AI‑mediated interactions can surface and potentially reduce stigma surrounding reproductive well‑being. The authors position OpenBloom not as a definitive solution but as a design probe that converts user‑submitted reproductive health articles into concise summaries and a series of multiple‑choice questions (MCQs). Implemented as a Flutter/Dart web application for Android, the system calls the OpenAI API to generate both the summary and the questions, then collects real‑time user feedback on relevance, clarity, creativity, and cultural sensitivity.
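The pipeline described above can be sketched as two small functions: one that builds the LLM request for a summary plus MCQs, and one that parses the structured reply. This is a minimal Python sketch of the idea only; the prompt wording, the JSON schema, and the function names are illustrative assumptions, not the authors' actual implementation (which is a Flutter/Dart client calling the OpenAI API).

```python
import json


def build_mcq_request(article_text: str, n_questions: int = 3) -> list[dict]:
    """Build chat messages asking an LLM for a summary plus MCQs.

    The system prompt encodes the stigma-sensitive framing the paper
    describes; the exact wording here is hypothetical.
    """
    system = (
        "You are a stigma-sensitive reproductive health educator. "
        "Use empathetic, inclusive, non-judgmental language."
    )
    user = (
        f"Summarize the article below in 3-4 sentences, then write "
        f"{n_questions} multiple-choice questions about it. Reply as JSON: "
        '{"summary": "...", "questions": [{"question": "...", '
        '"options": ["...", "...", "...", "..."], "answer": 0}]}'
        f"\n\nArticle:\n{article_text}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


def parse_mcq_response(raw: str) -> tuple[str, list[dict]]:
    """Split the model's JSON reply into (summary, questions)."""
    data = json.loads(raw)
    return data["summary"], data["questions"]
```

The messages list would be passed to a chat-completion endpoint; the parsed questions would then be shown to users alongside the relevance/clarity/creativity/cultural-sensitivity feedback prompts the paper mentions.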

The study recruited 34 participants for an initial phase of surveys and semi‑structured interviews, followed by a second phase of focus‑group discussions with eight participants. Participants represented diverse cultural backgrounds, including regions where reproductive topics are heavily taboo. The research addressed three core questions: (1) Which stigmas emerge when users engage with LLM‑generated questions? (2) How do users interpret and react to questions that challenge those stigmas? (3) What design principles can guide culturally appropriate, stigma‑sensitive LLM tools?

Findings reveal a nuanced picture. On the positive side, participants praised the system’s non‑offensive language and inclusive terminology; the LLM reliably avoided overtly vulgar or judgmental phrasing. However, the generated questions tended to stay at the level of factual recall or superficial re‑phrasing of article content, failing to provoke deeper critical reflection. In culturally sensitive contexts, this “safety‑first” approach resulted in questions that skirted the very stigma they were meant to address, offering little opportunity for users to confront internalized shame, social judgment, or structural barriers. For example, many items asked “What are the recommended methods for menstrual hygiene?” rather than probing beliefs such as “How does your community’s view of menstruation affect your willingness to seek help?”

Through thematic analysis, the authors distilled four design considerations for stigma‑sensitive LLMs:

  1. Empathetic Framing – prepend questions with language that acknowledges users’ feelings (“How did you feel about…?”).
  2. Inclusive Language – explicitly reference gender diversity, sexual orientation, and cultural groups to signal belonging.
  3. Values‑Based Reflection – embed ethical or social‑justice prompts that invite users to examine the broader implications of reproductive norms.
  4. Explicit Representation of Marginalized Identities – directly name groups such as LGBTQIA+ or unmarried women to make their experiences visible.

Technically, the study highlights the limits of prompt engineering alone. The authors argue for a hybrid approach that combines scenario‑based prompts, continuous user‑feedback loops, and fine‑tuning on culturally annotated corpora. They also recommend integrating educational frameworks like Bloom’s taxonomy into the question‑generation pipeline to move beyond recall toward application, analysis, and evaluation levels. Privacy safeguards and ethical review are emphasized given the sensitivity of health data.
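The Bloom's-taxonomy recommendation can be made concrete with a small scaffold that selects question stems at or above a target cognitive level, so the generation pipeline is steered past recall toward application, analysis, and evaluation. The stem templates below are illustrative assumptions, not taken from the paper.

```python
# Bloom's taxonomy levels, ordered from recall to higher-order thinking.
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

# Hypothetical stem templates per level (not the authors' prompts).
STEMS = {
    "remember": "What does the article say about {topic}?",
    "understand": "How would you explain {topic} to a friend?",
    "apply": "How could you use this advice about {topic} in your own community?",
    "analyze": "How do cultural norms around {topic} shape who seeks help?",
    "evaluate": "Do you agree with the article's stance on {topic}? Why?",
    "create": "What would a stigma-free conversation about {topic} look like?",
}


def question_stems(topic: str, min_level: str = "apply") -> list[str]:
    """Return stems at or above min_level, skipping recall-only questions."""
    start = BLOOM_LEVELS.index(min_level)
    return [STEMS[lvl].format(topic=topic) for lvl in BLOOM_LEVELS[start:]]
```

Feeding such stems into the prompt (or fine-tuning data) is one way to implement the paper's point that the default "safety-first" output stays at the *remember* level, while stigma reduction requires the *analyze* and *evaluate* levels.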

In conclusion, OpenBloom demonstrates that LLMs can produce culturally aware, non‑offensive content, yet they fall short of generating the deep, reflective questions needed for stigma reduction. Future work should involve co‑design workshops with target communities, embed cultural‑ethics metadata into LLM prompts, and conduct longitudinal field trials to assess educational and attitudinal outcomes. The insights extend beyond reproductive health, offering a roadmap for building culturally grounded, stigma‑sensitive AI tools in other sensitive health domains.

