ReTracing: An Archaeological Approach Through Body, Machine, and Generative Systems

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

We present ReTracing, a multi-agent embodied performance artwork that adopts an archaeological approach to examining how artificial intelligence shapes, constrains, and produces bodily movement. Drawing on science-fiction novels, the project extracts sentences that describe human–machine interaction. We use large language models (LLMs) to generate a paired "what to do" and "what not to do" prompt for each excerpt. A diffusion-based text-to-video model transforms these prompts into choreographic guides for a human performer and motor commands for a quadruped robot. Both agents enact the actions on a mirrored floor, captured by multi-camera motion tracking and reconstructed into 3D point clouds and motion trails that form a digital archive of motion traces. In doing so, ReTracing offers a novel way to reveal how generative systems encode socio-cultural biases in choreographed movement. Through an immersive interplay of AI, human, and robot, it confronts a critical question of our time: what does it mean to be human among AIs that also move, think, and leave traces behind?


💡 Research Summary

ReTracing is a multi‑agent embodied performance that treats generative AI as an archaeological object, tracing how language models encode and reproduce bodily movement. The authors begin by selecting seven science‑fiction excerpts that describe human‑machine interactions (e.g., Frankenstein, The Yellow Wallpaper, The Handmaid’s Tale). Using the Qwen‑2.5 large language model at a temperature of 0.7, each excerpt is transformed into a pair of prompts: a “what to do” (positive) prompt and a “what not to do” (negative) prompt. This dual‑prompt strategy forces the model to articulate both permissible and prohibited actions, exposing the implicit normative logic of the AI.
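The dual-prompt step above can be sketched in a few lines. The paper's exact prompt wording and its interface to Qwen-2.5 are not reproduced here; the function below is a hypothetical illustration of how one excerpt yields a positive/negative prompt pair, with all template text being an assumption.

```python
# Hypothetical sketch of the dual-prompt strategy: one excerpt produces
# a "what to do" (positive) and a "what not to do" (negative) prompt.
# The template wording is an assumption, not the paper's actual prompt.

def build_prompt_pair(excerpt: str) -> dict:
    """Wrap a literary excerpt into a positive/negative prompt pair
    intended to be sent to an LLM (e.g., Qwen-2.5 at temperature 0.7)."""
    positive = (
        "Read the following passage and describe, as a short movement "
        f"instruction, what a body should do:\n\n{excerpt}"
    )
    negative = (
        "Read the following passage and describe, as a short movement "
        f"instruction, what a body should NOT do:\n\n{excerpt}"
    )
    return {"positive": positive, "negative": negative}


pair = build_prompt_pair("It was on a dreary night of November...")
```

Each pair would then be submitted as two separate LLM calls, so the model must commit to both a permissible and a prohibited reading of the same passage.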

The positive prompts are fed into a diffusion‑based text‑to‑video model (similar to MDM, Di2Pose) to generate short choreography videos that serve as visual guides for a human performer. The same prompts are also mapped onto a predefined action set for a quadruped robot (Unitree Go2), producing a numbered movement list (e.g., stretch, pounce, run forward). Negative prompts list actions that would contradict the emotional tone of the source text, highlighting what the model deems unsuitable.
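The mapping onto the robot's predefined action set can be illustrated as a simple lookup. The action names and numbering below are assumptions for illustration; the Unitree Go2's actual command interface is not shown in the summary.

```python
# Hypothetical mapping from LLM-named actions to a predefined quadruped
# action set. The entries and their numbers are illustrative assumptions,
# not the Unitree Go2's real command IDs.

ACTION_SET = {
    "stretch": 1,
    "pounce": 2,
    "run forward": 3,
    "sit": 4,
}

def to_movement_list(actions):
    """Convert named actions into the numbered movement list the robot
    executes, silently dropping anything outside the predefined set."""
    return [ACTION_SET[a] for a in actions if a in ACTION_SET]
```

Restricting output to a closed action set is what makes the robot's enactment a "mechanical interpretation": any textual nuance that has no matching entry is simply discarded.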

Both agents perform simultaneously on a mirrored floor. A multi‑camera rig captures the scene from several angles; the footage is processed by a monocular 3D point‑tracking system (SpatialTrackerV2) to reconstruct temporally consistent 3D skeletal keypoints and point‑cloud trails for both the human and the robot. These reconstructions become “motion traces” that are archived as a digital dataset, intended for open release so other researchers can replay, analyze, or extend the pipeline.
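A minimal sketch of how per-frame 3D keypoints could be flattened into an archivable "motion trail" follows. The field names are assumptions and do not reflect the released dataset's actual schema or SpatialTrackerV2's output format.

```python
# Hypothetical layout for a motion-trace archive entry: per-frame 3D
# keypoints are flattened into one time-stamped trail. Field names are
# assumptions, not the dataset's real schema.

def accumulate_trail(frames):
    """Flatten a sequence of per-frame (x, y, z) keypoint lists into a
    single temporally ordered motion trail."""
    trail = []
    for t, keypoints in enumerate(frames):
        for (x, y, z) in keypoints:
            trail.append({"t": t, "x": x, "y": y, "z": z})
    return trail
```

Storing traces in a flat, time-stamped form like this is what would let other researchers replay or re-analyze a performance without rerunning the capture pipeline.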

Through this workflow the authors demonstrate that generative systems are not culturally neutral. The LLM frequently generates feminine‑coded bodies and associates certain emotions with stereotypical gestures, while the robot’s action set reflects mechanical interpretations of the same textual cues. By juxtaposing human and robotic enactments, the performance visualizes how AI‑driven control is inscribed across biological, mechanical, and computational substrates.

The paper also discusses ethical considerations: hidden biases in training data, privacy risks associated with capturing detailed body motion, and the danger that the aesthetic appeal of the performance could obscure its critical message. The authors advocate for transparent model documentation, careful data governance, and contextual framing to mitigate these risks.

In conclusion, ReTracing proposes an “archaeology of AI” where language acts as a distributed logic of control that leaves physical traces in human and robot bodies. The work reframes generative AI from a purely computational tool to an active agent that shapes identity, memory, and embodiment. Future directions include expanding to other robot morphologies, incorporating multimodal prompts (audio, haptic), and establishing real‑time feedback loops to deepen the archaeological investigation of AI‑generated movement.

