How Do Human Creators Embrace Human-AI Co-Creation? A Perspective on Human Agency of Screenwriters
Generative AI has greatly transformed creative work in various domains, such as screenwriting. To understand this transformation, prior research has often captured a snapshot of human-AI co-creation practice at a specific moment, paying less attention to how humans mobilize, regulate, and reflect on their AI use to gradually form that practice. Motivated by Bandura's theory of human agency, we conducted a two-week study with 19 professional screenwriters to investigate how they embraced AI in their creation process. Our findings revealed that screenwriters not only mindfully planned, foresaw, and responded to AI usage but also, through reflection on practice, developed themselves and their human-AI co-creation paradigms, including their cognition, strategies, and workflows. They also expressed various expectations for how future AI should better support their agency. Based on our findings, we conclude the paper with an extensive discussion and actionable suggestions for screenwriters, tool developers, and researchers toward sustainable human-AI co-creation.
💡 Research Summary
This paper investigates how professional screenwriters actively embrace and shape human‑AI co‑creation over time, using Bandura’s theory of human agency as an analytical lens. While prior work has largely captured isolated moments of AI‑assisted writing—focusing on immediate strategies, roles, or workflow patterns—this study shifts to a developmental perspective, asking how creators mobilize, regulate, and reflect on AI use to evolve their practice.
Bandura’s model defines four core properties of agency: (1) intentionality (forming plans to achieve goals), (2) forethought (anticipating outcomes), (3) self‑reactiveness (monitoring and adjusting actions), and (4) self‑reflectiveness (adapting thoughts, actions, and capabilities based on experience). The authors mapped these properties onto three research questions: (RQ1) How do screenwriters establish intentions, anticipate outcomes, and regulate actions in current AI‑assisted creation? (RQ2) How do they develop themselves and their co‑creation paradigms through reflection on accumulated interaction experience? (RQ3) How do they envision future AI support based on their experience?
Methodologically, the study recruited 19 professional screenwriters (average of 8 years of experience, ages 20‑39) through snowball sampling. Over two weeks, participants completed a three‑stage qualitative protocol: (1) a co‑creation session in which they wrote a short screenplay with a large‑language‑model (LLM) tool, allowing observation of intentionality and self‑reactiveness; (2) a retrospective think‑aloud session in which they revisited their interactions, verbalizing the reasoning behind prompt choices, evaluation criteria, and on‑the‑fly adjustments, capturing forethought and self‑reactiveness; and (3) a semi‑structured interview probing self‑reflectiveness, the evolution of their practices, and expectations for future AI tools.
Findings reveal a rich, iterative process. First, screenwriters entered each task with clear creative goals (e.g., plot arc, character voice) and deliberately defined the AI's role, whether as idea generator, dialogue drafter, or structural assistant, demonstrating strong intentionality. Second, they projected likely AI outputs, tailoring prompts and setting expectations accordingly (forethought). When outputs deviated from those expectations, they quickly re‑prompted, edited, or used the AI's "mistakes" as creative leverage, exemplifying self‑reactiveness. Third, across multiple sessions, participants built a repertoire of strategies; a typical cycle emerged ("idea sketch → AI expansion → human validation → refinement") that they formalized into personal workflows. This strategic portfolio grew as they learned which prompts yielded useful content, how to steer sampling settings such as temperature, and when to intervene manually.
Crucially, participants reported meta‑cognitive changes (self‑reflectiveness). They recognized that sustained AI collaboration reshaped their creative identity: they felt both empowered (AI accelerated ideation, reduced routine drafting) and wary of skill erosion (over‑reliance could blunt narrative instincts). They reflected on the balance between leveraging AI’s breadth and preserving their unique voice.
When asked about future AI, screenwriters articulated three primary desiderata: (1) Plan‑making assistance—AI that helps construct structured prompts aligned with narrative goals, perhaps offering templates or guided question trees; (2) Outcome feedback—automated evaluation of AI‑generated text for coherence, character consistency, and tonal fit, providing actionable scores or suggestions; (3) Mentoring—an AI “coach” that can suggest alternative story beats, highlight narrative gaps, and support skill development, especially for less‑experienced writers. These expectations point toward AI as a collaborative partner rather than a mere generator.
The authors discuss implications for tool designers: embed agency‑supportive features (planning scaffolds, transparent feedback loops, reflective prompts) and consider ethical safeguards (preventing over‑automation, preserving authorship credit). They also note risks: potential deskilling, homogenization of storytelling, and the need for ongoing human critical judgment.
Contributions are threefold: (1) introducing a human‑agency framework to HCI studies of creative AI, offering a systematic way to analyze intention, anticipation, regulation, and reflection; (2) providing empirical evidence of how professional screenwriters evolve their co‑creation practices over sustained interaction, moving beyond snapshot analyses; (3) delivering actionable design recommendations and highlighting future research avenues (larger longitudinal studies, cross‑genre comparisons, quantitative measures of skill change).
Limitations include the modest sample size, reliance on a single LLM‑based tool, and cultural specificity (all participants based in Hong Kong/China), which may affect generalizability. Future work should expand participant diversity, explore multimodal AI tools (e.g., visual storyboarding), and develop metrics to track long‑term impacts on creative competence.
In sum, the paper demonstrates that human creators do not passively consume AI output; they actively plan, anticipate, regulate, and reflect, thereby co‑constructing new creative paradigms and reshaping their professional agency in the age of generative AI.