Algorithmic Governance in the United States: A Multi-Level Case Analysis of AI Deployment Across Federal, State, and Municipal Authorities

Notice: This research summary and analysis were automatically generated using AI technology. For authoritative details, please refer to the original arXiv source.

The rapid expansion of artificial intelligence in public governance has generated strong optimism about faster processes, smarter decisions, and more modern administrative systems. Yet despite this enthusiasm, we still know surprisingly little about how AI actually takes shape inside different layers of government, especially in federal systems where authority is fragmented across multiple levels. In practice, the same algorithm can serve very different purposes depending on where it is deployed. This study responds to that gap by examining how AI is used across the federal, state, and municipal levels in the United States. Drawing on a comparative qualitative analysis of thirty AI implementation cases, and guided by a digital-era governance framework combined with a sociotechnical perspective, the study identifies two broad modes of algorithmic governance: control-oriented systems and support-oriented systems. The findings reveal a clear pattern of functional differentiation across levels of government. At the federal level, AI is most often institutionalized as a tool for high-stakes control: supporting surveillance, enforcement, and regulatory oversight. State governments occupy a more ambiguous middle ground, where AI frequently combines supportive functions with algorithmic gatekeeping, particularly in areas such as welfare administration and public health. Municipal governments, by contrast, tend to deploy AI in more pragmatic and service-oriented ways, using it to streamline everyday operations and improve direct interactions with residents. By foregrounding institutional context, the study advances debates on algorithmic governance, demonstrating that the character, function, and risks of AI in the public sector are fundamentally shaped by the level of governance at which these systems are deployed.


💡 Research Summary

This paper investigates how artificial intelligence (AI) is embedded in public governance across the three tiers of the United States federal system—federal, state, and municipal—through a comparative qualitative analysis of thirty implementation cases. Drawing on the “third wave” theory of digital transformation and a multi‑level governance framework, the authors conceptualize two analytically distinct regimes of algorithmic governance: control‑oriented AI (designed to strengthen supervisory, coercive, and enforcement capacities) and support‑oriented AI (intended to expand service delivery, reduce administrative burdens, and enable predictive risk prevention).

The introduction situates the study within the rapid expansion of AI in the public sector, noting policy milestones such as the 2025 executive order that accelerated AI deployment and America's AI Action Plan, as well as industry forecasts predicting 19% annual growth in public‑sector AI spending through 2027. Despite these trends, scholarship has largely examined single levels of government or isolated technologies, leaving a gap in understanding how institutional context shapes AI use.

The theoretical section links the third‑wave perspective—where AI becomes a structural component of decision‑making rather than a peripheral automation tool—to the fragmented nature of U.S. federalism. At the federal level, AI is typically embedded through centralized regulations, cross‑agency standards, and formal oversight bodies, producing a highly coordinated, control‑oriented regime. In contrast, states and municipalities experience greater local adaptation, leading to mixed or support‑oriented configurations. The authors also discuss algorithmic accountability, opacity (“black‑box” problems), and vendor dependence, emphasizing that these risk factors vary by level of government.

Methodologically, the study selects ten federal, ten state, and ten municipal cases drawn from public reports, audits, and interviews. Each case is coded for functional orientation (control vs. support) using criteria such as sanctioning capability, monitoring intensity, service expansion, and risk prediction. When a case exhibits both dimensions, the dominant institutional impact determines the classification; a minimal sketch of this coding rule follows.
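To make the decision rule concrete, here is a minimal sketch of how such a coding scheme might be operationalized. The criterion names, the 0–2 intensity scale, and the tie-handling are illustrative assumptions inferred from the description above, not the authors' actual instrument.

```python
from dataclasses import dataclass

# Illustrative coding sketch. The criteria and scoring below are assumptions
# inferred from the paper's description, not the authors' actual instrument.
CONTROL_CRITERIA = {"sanctioning_capability", "monitoring_intensity"}
SUPPORT_CRITERIA = {"service_expansion", "risk_prediction"}

@dataclass
class Case:
    name: str
    level: str                 # "federal", "state", or "municipal"
    criteria: dict[str, int]   # criterion -> coded intensity (0-2)

def classify(case: Case) -> str:
    """Label a case control-oriented, support-oriented, or mixed.

    When both dimensions are present, the dominant institutional
    impact (approximated here by the higher total score) decides.
    """
    control = sum(case.criteria.get(c, 0) for c in CONTROL_CRITERIA)
    support = sum(case.criteria.get(c, 0) for c in SUPPORT_CRITERIA)
    if control and support and control == support:
        return "mixed"
    return "control-oriented" if control > support else "support-oriented"

# Hypothetical example: a state welfare-eligibility system that
# both gatekeeps benefits and expands service access.
example = Case(
    name="welfare eligibility automation",
    level="state",
    criteria={"sanctioning_capability": 1, "monitoring_intensity": 2,
              "service_expansion": 2, "risk_prediction": 1},
)
print(classify(example))  # -> "mixed"
```

In the actual study, ties and borderline cases would be resolved by qualitative judgment of the dominant institutional impact rather than by equal scores, so the "mixed" branch above is only a stand-in for that interpretive step.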

Findings reveal a clear functional differentiation: 78% of federal cases are control‑oriented, focusing on surveillance, sanctions, and large‑scale regulatory enforcement (e.g., border‑security analytics, financial fraud detection, environmental compliance monitoring). State cases are more heterogeneous: 54% are mixed, combining welfare‑eligibility automation, public‑health risk modeling, and transportation management, thereby blending supervisory and service functions. Municipal cases are overwhelmingly support‑oriented (over 71%), using AI for routine operations such as 311 call routing, building‑permit review, smart‑city traffic optimization, and citizen‑service chatbots.

Risk analysis shows that the federal tier faces the highest “black‑box” and vendor‑lock‑in challenges, with diffuse accountability and complex regulatory responses. State governments grapple with fragmented standards and uneven auditing capacity, while municipalities suffer from limited resources and informal oversight, leading to ad‑hoc accountability practices. These divergent risk profiles underscore that AI’s benefits and harms are not intrinsic to the technology but are shaped by institutional architecture.

The discussion highlights three contributions: (1) providing the first empirically grounded, cross‑level comparison of AI use in U.S. public administration; (2) introducing the control‑vs‑support analytical lens to help policymakers clarify the purpose and risk orientation of AI projects; and (3) mapping level‑specific risk profiles to inform tailored regulatory and oversight designs. Limitations include reliance on publicly documented cases (potentially omitting pilot projects) and the absence of quantitative performance metrics. The authors call for longitudinal studies, cost‑benefit analyses, and integrated ethical‑fairness assessments to deepen understanding of algorithmic governance.

In conclusion, the paper demonstrates that the character, function, and risk of AI in the public sector are fundamentally contingent on the level of governance at which the technology is deployed. Recognizing these institutional differences is essential for crafting effective, accountable, and equitable AI policies across the United States’ multi‑layered governmental landscape.

