From Labor to Collaboration: A Methodological Experiment Using AI Agents to Augment Research Perspectives in Taiwan's Humanities and Social Sciences

Notice: This research summary and analysis were generated automatically using AI technology. For full accuracy, please refer to the original arXiv source.

Generative AI is reshaping knowledge work, yet existing research focuses predominantly on software engineering and the natural sciences, with limited methodological exploration in the humanities and social sciences. Positioned as a "methodological experiment," this study proposes an AI Agent-based collaborative research workflow (Agentic Workflow) for humanities and social science research. Taiwan's Claude.ai usage data (N = 7,729 conversations, November 2025) from the Anthropic Economic Index (AEI) serves as the empirical vehicle for validating the feasibility of this methodology. The study operates on two levels. The primary level is the design and validation of a methodological framework: a seven-stage modular workflow grounded in three principles (task modularization, human-AI division of labor, and verifiability), with each stage delineating clear roles for human researchers (research judgment and ethical decisions) and AI Agents (information retrieval and text generation). The secondary level is the empirical analysis of the AEI Taiwan data, which serves as an operational demonstration of the workflow's application to secondary-data research, showcasing both the process and the output quality (see Appendix A). The study contributes a replicable AI-collaboration framework for humanities and social science researchers and, through reflexive documentation of the operational process, identifies three operational modes of human-AI collaboration: direct execution, iterative refinement, and human-led. This taxonomy reveals the irreplaceability of human judgment in research question formulation, theoretical interpretation, contextualized reasoning, and ethical reflection. Limitations, including single-platform data, a cross-sectional design, and AI reliability risks, are acknowledged.


💡 Research Summary

This paper presents a methodological experiment that designs, implements, and validates an AI‑agent‑based collaborative workflow for secondary‑data research in the humanities and social sciences. Recognizing that most generative‑AI studies focus on engineering or natural‑science contexts, the author argues that the interpretive, theory‑building, and context‑sensitive nature of humanities and social‑science work demands a distinct methodological approach. The proposed “Agentic Workflow” consists of seven modular stages: (1) research goal definition, (2) literature and data scouting, (3) data collection and preprocessing, (4) analysis design and code generation, (5) result interpretation and visualization, (6) manuscript drafting, and (7) reference management and verification. Three design principles underlie the workflow: task modularization, a clear human‑AI division of labor, and verifiability of outputs. Human researchers retain authority over research‑question formulation, theoretical framing, ethical judgment, and nuanced interpretation, while AI agents handle large‑scale information retrieval, summarization, code writing, and draft generation.
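The seven stages and the human-AI division of labor described above can be sketched as a simple data structure. This is an illustrative reduction, not the paper's implementation: the stage names follow the summary, but the single "lead actor" label per stage is a simplification of the paper's division-of-labor principle.

```python
from dataclasses import dataclass

# Hypothetical sketch of the seven-stage Agentic Workflow summarized above.
# Each stage is assigned one primary actor: "human" for stages requiring
# research judgment, "agent" for retrieval/generation tasks.

@dataclass(frozen=True)
class Stage:
    order: int
    name: str
    lead_actor: str  # "human" or "agent"

WORKFLOW = [
    Stage(1, "research goal definition", "human"),
    Stage(2, "literature and data scouting", "agent"),
    Stage(3, "data collection and preprocessing", "agent"),
    Stage(4, "analysis design and code generation", "agent"),
    Stage(5, "result interpretation and visualization", "human"),
    Stage(6, "manuscript drafting", "agent"),
    Stage(7, "reference management and verification", "human"),
]

def stages_led_by(actor: str) -> list[str]:
    """Return the names of stages where the given actor holds primary responsibility."""
    return [s.name for s in WORKFLOW if s.lead_actor == actor]
```

A call such as `stages_led_by("human")` makes the division of labor explicit and auditable, which is in the spirit of the workflow's verifiability principle.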

To test feasibility, the study uses the Anthropic Economic Index (AEI) dataset of Claude.ai conversations from Taiwan (7,729 records collected during 13–20 November 2025). Descriptive analyses of task type, collaboration mode, autonomy level, and success rate serve as the empirical vehicle. Three operational modes of human‑AI collaboration emerge: (a) direct execution, where the AI performs most steps with minimal human oversight; (b) iterative refinement, where AI outputs are repeatedly revised through human feedback; and (c) human‑led, where the researcher drives the process and the AI acts as a supportive tool. The analysis highlights that human intervention is indispensable at stages requiring judgment—research‑question refinement, theoretical integration, ethical review, and nuanced result interpretation—where AI performance declines or “hallucinates.” Conversely, AI excels in well‑structured, bounded tasks such as data cleaning, code scaffolding, and drafting, dramatically reducing time costs.
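The descriptive analyses of task type, collaboration mode, and success rate mentioned above amount to categorical tallies over conversation records. A minimal sketch, assuming illustrative field names (`task`, `mode`, `success`) rather than the actual AEI schema:

```python
from collections import Counter

# Toy records mimicking the kind of per-conversation fields tallied above.
# Field names and values are hypothetical, not the real AEI data format.
records = [
    {"task": "coding", "mode": "direct execution", "success": True},
    {"task": "writing", "mode": "iterative refinement", "success": True},
    {"task": "analysis", "mode": "human-led", "success": False},
]

def describe(records):
    """Descriptive tallies: counts per category plus an overall success rate."""
    return {
        "by_task": Counter(r["task"] for r in records),
        "by_mode": Counter(r["mode"] for r in records),
        "success_rate": sum(r["success"] for r in records) / len(records),
    }
```

On the real dataset the same tallies would be run over all 7,729 records; the point of the sketch is that this analytic step is a well-bounded task of the kind the paper identifies as suitable for AI delegation.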

The paper situates its contribution within three perspectives on human‑AI interaction: the tool view (AI as efficiency enhancer), the collaboration view (complementary strengths), and the agency view (autonomous agents). By integrating these, the workflow balances efficiency with scholarly rigor. Limitations include reliance on a single platform, a cross‑sectional snapshot, and inherent AI reliability risks. The author calls for future work that expands to multiple AI platforms, cross‑national datasets, and robust verification mechanisms, as well as deeper ethical and legal frameworks. In sum, the study offers a replicable, transparent framework that demonstrates how AI agents can augment, rather than replace, critical human judgment in humanities and social‑science research.

