"If You're Very Clever, No One Knows You've Used It": The Social Dynamics of Developing Generative AI Literacy in the Workplace
Notice: This research summary and analysis were automatically generated using AI. For full accuracy, please refer to the original arXiv source.

Generative AI (GenAI) tools are rapidly transforming knowledge work, making AI literacy a critical priority for organizations. However, research on AI literacy lacks empirical insight into how knowledge workers’ beliefs around GenAI literacy are shaped by the social dynamics of the workplace, and how workers learn to apply GenAI tools in these environments. To address this gap, we conducted in-depth interviews with 19 knowledge workers across multiple sectors to examine how they develop GenAI competencies in real-world professional contexts. We found that, while knowledge sharing from colleagues supported learning, the ability to remove cues indicating GenAI use was perceived as validation of domain expertise. These behaviours ultimately reduced opportunities for learning via knowledge sharing and undermined transparency. To advance workplace AI literacy, we argue for fostering open dialogue, increasing visibility of user-generated knowledge, and greater emphasis on the benefits of collaborative learning for navigating rapid technological developments.


💡 Research Summary

The paper investigates how knowledge workers develop generative AI (GenAI) literacy within the social dynamics of the workplace. Using semi‑structured interviews with 19 professionals from diverse sectors, the authors address two research questions: (1) which competencies are valued when using GenAI at work, and (2) what strategies are considered essential for developing those competencies.

Key findings reveal that informal knowledge sharing among colleagues is a primary driver of learning. Participants routinely exchange prompts, evaluation techniques, and troubleshooting tips, which accelerates skill acquisition beyond formal training programs. However, the study also uncovers a pervasive “AI hiding” behavior: workers deliberately remove or mask cues that reveal AI involvement in their outputs. Unlike prior interpretations that frame this as shame or status‑seeking, the authors argue that workers view successful erasure of AI signatures as a validation of their domain expertise. By editing AI‑generated text, re‑styling images, or otherwise obscuring the tool’s contribution, they signal that the work is genuinely theirs.

While this practice boosts perceived personal credibility, it undermines transparency and reduces opportunities for peer learning. When colleagues cannot see how AI is being used, they lose a valuable source of insight, weakening the knowledge‑sharing network that sustains long‑term AI literacy within the organization.

The authors extend existing AI literacy frameworks by explicitly incorporating a socio‑technical dimension that captures these social influences. They propose three practical implications: (1) design HCI interventions that make AI usage visible—such as automatic usage logs, shared edit histories, or UI cues that flag AI‑generated content; (2) foster open dialogue about AI through formal forums, mentorship programs, and “AI usage certifications” that normalize disclosure; and (3) embed policies that encourage collaborative learning rather than individual concealment.

By highlighting the tension between personal reputation management and collective learning, the paper argues that organizations must balance the desire for individual expertise signaling against the need for transparent, shared knowledge. Future work should expand the sample, examine the longitudinal effects of AI hiding on performance, and empirically test the proposed design and policy interventions. Overall, the study offers a nuanced view of GenAI literacy as a socially embedded practice, urging both researchers and practitioners to consider the cultural and relational factors that shape how AI tools are adopted and mastered in real‑world work settings.
