Secure Semantic Communications via AI Defenses: Fundamentals, Solutions, and Future Directions



Lan Zhang, Senior Member, IEEE, Chengsi Liang, Zeming Zhuang, Yao Sun, Senior Member, IEEE, Fang Fang, Senior Member, IEEE, Xiaoyong Yuan, Member, IEEE, Dusit Niyato, Fellow, IEEE

Abstract—Semantic communication (SemCom) redefines wireless communication from reproducing symbols to transmitting task-relevant semantics. Enabled by learned encoders, decoders, and shared knowledge modules, SemCom supports efficient and robust integration with downstream and distributed AI tasks across sensing, computing, and control. However, this AI-native architecture also introduces new vulnerabilities, as semantic failures may arise from adversarial perturbations to models, corrupted training data, desynchronized priors, or misaligned inference, even when lower-layer transmission reliability and cryptographic protection remain intact. This survey provides a defense-centered and system-oriented synthesis of security in SemCom via AI defense. We analyze AI-centric threat models by consolidating existing studies and organizing attack surfaces across model-level, channel-realizable, knowledge-based, and networked inference vectors. Building on this foundation, we present a structured taxonomy of defense strategies organized by where semantic integrity can be compromised in SemCom systems despite correct symbol delivery, spanning semantic encoding, wireless transmission, knowledge integrity, and coordination among multiple agents. These categories correspond to distinct security failure modes, including representation fragility, channel-realizable manipulation, semantic prior poisoning or desynchronization, and adversarial propagation through distributed inference.
To bridge design and deployment, we examine security–utility operating envelopes that capture tradeoffs among semantic fidelity, robustness, latency, and energy under realistic constraints, survey evaluation frameworks and representative applications, and identify open challenges in cross-layer composition and deployment-time certification. Overall, this survey offers a unified system-level perspective that enables readers to understand major threat and defense mechanisms in AI-native SemCom systems and to leverage emerging security techniques in the design and deployment of robust SemCom architectures for next-generation intelligent networks.

Index Terms—Semantic communication, semantic security, AI-native threats, AI defense, secure communication systems.

This paper was produced by the IEEE Publication Technology Group, Piscataway, NJ. Manuscript received April 19, 2021; revised August 16, 2021.

I. INTRODUCTION

A. Background

Semantic communication (SemCom) redefines the goal of wireless communication from reproducing symbols to conveying task-relevant meaning [1]. Rather than treating communication as a bit-level reproduction problem, SemCom systems leverage learned encoders, decoders, and shared knowledge modules to extract and convey information that is most critical for downstream tasks such as classification, decision-making, or control [2], [3]. This shift enables substantial gains in bandwidth efficiency, robustness to channel impairments, and tight coupling with inference across a wide range of applications, including image and video transmission, natural language interaction, multimodal sensing, autonomous systems, and distributed AI tasks [4]–[8]. A defining characteristic of modern SemCom systems is that they are fundamentally AI-native.
Core components of the SemCom pipeline, including semantic encoders, semantic decoders, and knowledge-access or context-inference modules, are implemented as learned models trained on data and queried at inference time [4], [6]–[8]. These models determine what is deemed semantically important, how it is represented, and how it is reconstructed, introducing dependencies that are absent in conventional communication architectures. Consequently, semantic fidelity depends not only on channel conditions but also on model performance, which is affected by training data quality, generalization behavior, model robustness, and context alignment. Failures may therefore arise even when packet delivery ratios and bit error rates remain within acceptable bounds, making semantic degradation difficult to observe using conventional metrics.

This shift toward learned, model-driven semantic processing fundamentally alters the communication security landscape. Traditional mechanisms such as encryption, authentication, and error control remain necessary, but are insufficient to protect semantic integrity [9]–[11]. Even when lower-layer reliability and confidentiality are preserved, adversaries can influence communication outcomes by exploiting learned model behaviors, manipulating training data, or desynchronizing shared context, without disrupting packet delivery or triggering conventional alarms. These attacks target semantic meaning rather than symbols and therefore evade traditional communication-layer security abstractions.

B.
Motivation and Contribution

While the AI security community has developed powerful techniques for improving robustness, privacy, and integrity, such as adversarial training, data poisoning defenses, membership inference protection, and secure federated learning [12]–[15], they are primarily designed for standalone models and rarely account for deployment within dynamic, resource-constrained, over-the-air communication pipelines where learning, transmission, and inference are tightly coupled and interdependent. As a result, isolated model vulnerabilities can propagate across the SemCom pipeline, leading to system-level failures that are not addressed by existing AI security frameworks.

Reflecting growing awareness of these challenges, a number of recent surveys have begun to examine security issues in SemCom systems [7]–[9], [16]–[20].

Table I: Comparison of survey and overview papers on security in semantic communication systems. A checkmark indicates that the corresponding AI-native semantic security surface (semantic model, wireless channel, shared knowledge, semantic inference, networked inference) is explicitly discussed, rather than system-level integration or deployment-aware analysis.
- Xin et al. (2024) [7]: Theoretical foundations of SemCom, including semantic entropy, semantic rate–distortion, and mathematical representations of knowledge resources. ✓
- Guo et al. (2024) [8]: SemCom networks (SemComNet) for multi-agent systems, emphasizing architecture, orchestration, and network-level security. ✓ ✓
- Won et al. (2025) [16]: Resource management, scheduling, and security/privacy considerations for SemCom systems. ✓
- Li et al. (2024) [17]: Robust semantic transmission under adversarial attacks and privacy risks for learned encoders and decoders. ✓ ✓
- Meng et al. (2025) [18]: Stage-wise analysis of security and privacy threats and countermeasures across the SemCom life cycle. ✓ ✓ ✓
- Yang et al. (2024) [9]: Conceptual foundations and key challenges of secure SemCom, distinguishing semantic information security from semantic model security. ✓ ✓
- Shen et al. (2023) [19]: Survey of security threats and defenses across the SemCom pipeline, including training, transmission, and knowledge base maintenance. ✓ ✓ ✓
- This survey: Defense-centered, system-level synthesis of AI-native semantic threats and defenses, bridging AI security and SemCom, with emphasis on semantic inference integrity, cross-layer composition, deployment constraints, and security–utility tradeoffs. ✓ ✓ ✓ ✓ ✓

As summarized in Table I, existing surveys span topics ranging from semantic information theory and robust transmission to network architectures and stage-wise security analysis. However, most surveys concentrate on individual components or attack surfaces, offering limited insight into how threats and defenses interact and compose across encoder and decoder models, wireless transmission, shared knowledge, and inference. As a result, a defense-centered, system-level understanding of SemCom security remains largely absent.

Motivated by these gaps, this survey examines SemCom security with a focus on AI defense, integrating model-centric AI security insights with system-level SemCom design. We advocate a system-level perspective in which robustness is treated as an explicit and central design objective rather than a post hoc add-on. Departing from prior surveys that emphasize isolated threats or individual components, we focus on cross-layer composition, deployment constraints, and runtime security–utility tradeoffs.
We argue that securing SemCom requires coherent alignment of threats, defenses, and performance objectives across the integrated pipeline, spanning semantic encoding and decoding, wireless transmission, shared knowledge, and networked inference and coordination in multi-agent systems.

As shown in Fig. 1, recent advances in SemCom have rapidly expanded beyond point-to-point transmission toward model-driven, knowledge-assisted, and multi-agent systems that operate under tight resource and latency constraints [6]–[8]. At the same time, the security landscape of AI systems has evolved significantly, with demonstrated vulnerabilities in robustness, privacy, and integrity that directly impact learned semantic representations and inference pipelines [12]–[15]. These developments have progressed largely in parallel, creating a growing disconnect between how SemCom systems are designed and how they can be defended in practice. As SemCom technologies move closer to deployment in safety- and mission-critical applications, there is an urgent need for a unified, defense-centered perspective that bridges AI security advances with the architectural and operational realities of SemCom. This survey responds to that need by consolidating emerging threats, defenses, and deployment challenges into a coherent system-level framework.

In summary, this survey makes the following contributions.

• AI-Centric Threat Model: We synthesize and systematize an AI-centric threat model for SemCom by reviewing and consolidating existing works across SemCom security and AI security. This synthesis characterizes semantic-level adversarial objectives, attacker capabilities, and access assumptions, and organizes representative attack classes spanning model-centric threats to learned semantics, channel-realizable semantic attacks, and semantic manipulation via knowledge and context, highlighting recurring patterns and gaps in the literature.
• Layered Defense Taxonomy and Synthesis: We provide a structured, system-level synthesis of defense mechanisms across the SemCom pipeline, covering semantic encoders and decoders, wireless transmission, shared knowledge resources, and networked inference. By connecting advances in AI security with SemCom-specific architectural and operational constraints, this taxonomy clarifies how existing defenses operate, interact, and fall short when deployed in AI-native communication systems.
• Bridging Design and Deployment: Drawing on insights from prior studies and practical deployment considerations, we articulate a deployment-oriented perspective centered on security–utility operating envelopes. This perspective consolidates how robustness, semantic fidelity, latency, energy, and adversarial budget tradeoffs are analyzed in the literature, and distills design principles for integrating defenses under realistic resource and runtime constraints.
• Evaluation and Benchmarking Gaps: We review existing evaluation practices for SemCom security and identify critical gaps in metrics, benchmarks, and experimental settings. Based on this analysis, we outline requirements for deployment-aware evaluation, reproducible benchmarks, and adversarial testbeds that better support system-level assessment of semantic robustness and security.

Fig. 1. The organization structure of this survey paper.

C. Paper Organization

The organization of the paper is shown in Fig. 1. Section II reviews the architectural foundations of SemCom and analyzes AI-induced dependencies that give rise to new security vulnerabilities. Section III summarizes AI security and defense foundations, including canonical threat models and design principles. Section IV formalizes an AI-centric threat model that characterizes semantic attacks targeting learned models, over-the-air transmission, and shared knowledge or contextual inference in SemCom systems. Section V presents a layered defense taxonomy and synthesis, surveying security mechanisms across semantic encoding, transmission, shared knowledge, and networked inference. Section VI bridges design and deployment by examining system-level constraints and introducing security–utility operating envelopes for robust SemCom operation under real-world conditions.
Section VII surveys representative application domains of secure SemCom across sensing, control, and distributed AI systems. Section VIII outlines open research challenges and future directions toward secure and trustworthy SemCom systems. Finally, Section IX concludes the survey.

II. AI-NATIVE SEMANTIC COMMUNICATION: ARCHITECTURES, LEARNING PARADIGM, AND DEPENDENCIES

SemCom departs from classic communication architectures by shifting the design objective from bit-level fidelity to task-level semantic correctness [2]. This shift is fundamentally enabled by AI, which learns task-relevant representations and governs how information is extracted, compressed, transmitted, and interpreted under uncertainty. A central architectural abstraction is the AI-powered semantic transceiver [4], [6], [8], which consists of three tightly coupled components: a semantic encoder, a semantic decoder, and a shared knowledge base. The semantic encoder maps raw inputs into compact latent representations that preserve task utility, while the semantic decoder reconstructs the intended outcome at the receiver by leveraging shared context or knowledge.

In this section, we focus on AI-based semantic transceivers and knowledge-assisted architectures that govern how meaning is encoded, transmitted, and reconstructed, rather than access network- or network-level architectures, which are beyond the scope of this section. We introduce the architectural foundations and learning paradigms of modern SemCom systems, highlighting the unique dependencies and design affordances that arise from embedding intelligence throughout the communication pipeline. These foundations set the stage for a deeper security analysis in Section IV.

A.
AI-Powered Semantic Transceiver Architectures

This architectural abstraction builds on the end-to-end learning paradigm, in which an autoencoder represents a complete communication system by jointly learning modulation and demodulation without explicitly separating source coding, channel coding, and modulation stages [21]. Extending this idea beyond symbol transmission, architectures such as DeepSC for text [2] and DeepJSCC for images [3] demonstrate that learned semantic encoders and decoders can transmit task-relevant representations directly over the channel and recover meaning even under severe channel impairments. In these systems, semantic correctness is determined by the alignment between the encoder–decoder mapping and the task objective, rather than by symbol-level fidelity.

As SemCom architectures mature, the encoder–decoder pipeline is increasingly augmented with explicit semantic knowledge. Knowledge-assisted SemCom incorporates shared priors, such as ontologies, knowledge graphs, or structured semantic memories, directly into the encoding and decoding process. A line of work proposed mapping sentences into knowledge-graph triplets and selectively transmitting semantically important units, enabling receivers to reconstruct meaning even under limited communication resources [22], [23]. More recent work integrates structured knowledge with neural encoders using graph neural networks (GNNs) [24] and Transformer-based models [25], allowing semantic representations to be grounded in entities and relations rather than surface-level symbols. In such architectures, semantic correctness depends not only on the transmitted signal but also on the consistency, integrity, and synchronization of shared knowledge.

Building on these foundations, modern SemCom systems increasingly adopt advanced neural backbones and adaptive structures to cope with dynamic environments and diverse task requirements.
Transformer-based joint source–channel coding (JSCC) improves robustness to contextual and syntactic variations in text, while adaptive semantic transceivers dynamically adjust coding strategies to meet latency and resource constraints [25]–[28]. At the frontier, foundation models and generative architectures are being embedded directly into SemCom pipelines, where large language model (LLM) based tokenizers are developed to transmit compact semantic cues and reconstruct rich outputs [29]–[33]. These generative designs enable extremely high compression ratios and flexible semantic reconstruction, but they also expand the trust boundary of SemCom systems toward learned models and shared semantic priors.

Overall, modern SemCom transceiver architectures are AI-native systems composed of learned semantic-level encoders and decoders, often augmented with shared knowledge or generative models. While these architectures provide substantial gains in efficiency and flexibility, they also introduce new dependencies on learning algorithms, training data, adaptation mechanisms, and external knowledge. These dependencies fundamentally shape system behavior and directly influence robustness and security. These evolving architectures raise new questions about how meaning is learned, updated, and preserved, which we address next by examining the learning paradigms that underpin SemCom systems.

B. Learning Paradigms in Semantic Communication

SemCom systems are fundamentally learning-driven: semantic encoders, decoders, and knowledge modules are realized through data-driven training and adaptation rather than fixed signal mappings. Consequently, the learning paradigm directly determines how meaning is defined, represented, shared, and updated, and thus shapes the dominant security dependencies of a SemCom system.
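To make the end-to-end transceiver abstraction of Section II-A concrete, the following minimal sketch simulates a linear joint source–channel encoder/decoder pair over an AWGN channel. The SVD-based linear codec, the dimensions, and the SNR values are illustrative stand-ins for a trained DNN pair, not any system from the cited works; the point is only that reconstruction fidelity is governed jointly by the learned mapping and the channel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy source with low-rank "semantic" structure: 16-dim observations
# whose task-relevant content lives in a 4-dim subspace.
basis = rng.normal(size=(4, 16))
x = rng.normal(size=(100, 4)) @ basis

# Linear encoder/decoder fit by SVD, a stand-in for a trained DNN pair.
_, _, vt = np.linalg.svd(x, full_matrices=False)
enc = vt[:4].T   # 16 -> 4 latent semantic symbols
dec = vt[:4]     # 4 -> 16 reconstruction

def transmit(x, snr_db):
    """Encode, power-normalize, add AWGN at the given SNR, decode."""
    z = x @ enc
    power = np.sqrt(np.mean(z ** 2))
    noise = rng.normal(scale=10 ** (-snr_db / 20), size=z.shape)
    return (z / power + noise) * power @ dec

def mse(a, b):
    return float(np.mean((a - b) ** 2))

err_hi_snr = mse(x, transmit(x, snr_db=20))  # mild channel noise
err_lo_snr = mse(x, transmit(x, snr_db=0))   # severe channel noise
assert err_hi_snr < err_lo_snr               # fidelity degrades with SNR
```

Because the codec here is linear and exact on the low-rank source, all residual error is channel-induced; in a trained DeepJSCC-style system, model error and channel error compose, which is precisely why semantic fidelity cannot be read off bit-level metrics alone.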
Most early SemCom designs rely on supervised learning (SL), where encoders and decoders are trained in an end-to-end manner using labeled data and task-oriented objectives [2], [3]. In this setting, semantic correctness is implicitly tied to the integrity, coverage, and bias of the training data, making learned representations sensitive to data manipulation and distributional mismatch between training and deployment.

Self-supervised learning (SSL) is increasingly adopted to reduce reliance on labeled data and improve generalization. By exploiting intrinsic data structure through contrastive, masked, or multimodal objectives, SSL enables transferable semantic abstractions that can be reused across tasks and environments. At the same time, semantics learned via SSL depend strongly on pretraining corpora and alignment assumptions, introducing new representation-level dependencies [34]–[36].

Reinforcement learning (RL) arises naturally in SemCom systems operating in dynamic or closed-loop settings, where semantic transmission decisions are optimized with respect to long-term task rewards rather than explicit reconstruction losses [37]–[39]. While RL enables adaptive semantic behavior under time-varying conditions, it tightly couples semantics to reward design, feedback fidelity, and exploration dynamics.

Learning paradigms also differ in how training and adaptation are distributed. Centralized training enables global optimization but scales poorly under data locality and privacy constraints, motivating distributed and federated learning approaches [40], [41]. These methods allow collaborative learning of semantic models and shared knowledge without raw data exchange, at the cost of additional dependencies on synchronization, aggregation, and trust.

Finally, many SemCom systems [28], [42]–[44] rely on online adaptation or continual learning to remain effective after deployment, as channel conditions, tasks, and context evolve over time.
While such mechanisms support long-lived operation in non-stationary environments, they also extend the temporal window over which semantic behavior can be influenced.

Together, these learning paradigms define how semantic meaning is formed and maintained in SemCom systems, and explain why vulnerabilities can arise even when lower-layer communication remains reliable. We build on this foundation in the next section by analyzing the resulting threat surfaces from an AI defense perspective.

Fig. 2. System fragility across the AI-native semantic communication inference pipeline: 1) the reliance on complex model behavior, 2) the training corpora used to build semantic behavior, and 3) the use of external semantic resources.

C. Lessons Learned: Security Concerns in SemCom from AI-Induced Dependencies

The integration of AI into SemCom introduces new forms of adaptability and efficiency, but also gives rise to security concerns rooted in how meaning is defined, learned, and maintained. As illustrated in Fig. 2, unlike traditional communication pipelines where information fidelity is grounded in analytically specified encoders and decoders, SemCom systems depend on model-driven inference pipelines, where meaning is shaped by the behavior of trained representations under varying conditions.
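As a toy illustration of this fragility, the sketch below uses a deliberately simple nearest-class-mean classifier as a stand-in for a learned semantic decoder; the distributions, dimensions, and shift magnitude are all hypothetical. Task accuracy collapses under a deployment-time distributional shift even though no channel error is involved.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy semantic classifier: nearest-class-mean over 8-dim "latent" features.
train_a = rng.normal(loc=-1.0, size=(200, 8))
train_b = rng.normal(loc=+1.0, size=(200, 8))
mu_a, mu_b = train_a.mean(axis=0), train_b.mean(axis=0)

def classify(x):
    # Returns 0 for class A, 1 for class B (nearest class mean).
    return (np.linalg.norm(x - mu_b, axis=1)
            < np.linalg.norm(x - mu_a, axis=1)).astype(int)

def accuracy_on_class_a(shift):
    # Deployment-time inputs: class A drawn from a shifted distribution.
    test = rng.normal(loc=-1.0 + shift, size=(500, 8))
    return float(np.mean(classify(test) == 0))

acc_matched = accuracy_on_class_a(shift=0.0)  # deployment matches training
acc_shifted = accuracy_on_class_a(shift=1.5)  # distribution drifts toward B
assert acc_matched > 0.95
assert acc_shifted < acc_matched              # semantic accuracy degrades
```

The same mechanism, run gradually through online updates rather than at test time, is how drift and desynchronization arise in continually adapting or federated SemCom deployments.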
Three main security concerns in SemCom are summarized in Table II.

The first concern stems from dependence on complex learned models. Semantic encoders and decoders are typically implemented as deep neural networks trained on high-dimensional data, making their behavior sensitive to subtle variations in input distributions, latent structure, or contextual cues [2], [3]. As a result, even when lower-layer transmission succeeds and syntactic correctness is preserved, small discrepancies between training and deployment conditions can lead to degraded or inconsistent semantic outcomes [45]. For example, by crafting inputs that exploit vulnerabilities in the semantic encoder, an adversary can induce misleading latent semantic representations even though the channel encoder and physical transmission operate normally, resulting in incorrect semantic reconstruction at the decoder.

The second concern stems from data-driven learning, which further binds semantic behavior to the properties of the training corpus. Biases, omissions, or distributional skew in training data can manifest as systematic performance degradation at deployment, particularly in dynamic environments. This effect is amplified in systems that incorporate continual learning or online adaptation, where accumulated updates may gradually shift semantic representations away from their original operating regime. In distributed or federated settings, such drift can occur unevenly across agents, leading to desynchronization of shared semantic models. For example, by poisoning the data used to train or update the semantic encoder/decoder pair, an adversary can gradually shift the mapping between raw data and semantic symbols, causing semantic drift that persists even when the channel encoder preserves bit-level fidelity.

The third concern comes from reliance on external semantic resources, such as pretrained foundation models, shared knowledge bases, or memory modules.
In these architectures, effective semantic reconstruction depends not only on local inference, but also on the consistency and alignment of shared priors across communicating endpoints. Divergence in these priors, even in the absence of explicit transmission failures, can distort interpretation or render received content ambiguous or misleading. For example, tampering with foundation models, shared embeddings, or external memory modules that serve as semantic priors can create mismatches between endpoints, causing ambiguous or misleading semantic decoding even though the channel decoder returns error-free symbols.

Crucially, these AI-induced dependencies do not represent failures or vulnerabilities in isolation. Fidelity is no longer determined solely by symbol-level integrity, but by the coherence of the end-to-end semantic pipeline, encompassing learned model behavior, shared knowledge, and adaptation dynamics. In the next section, we adopt a security-centered perspective that examines how adversarial objectives can interact with these semantic dependencies, and how attacks emerge from the same learned mechanisms.
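A minimal sketch of the first concern above: for a linear score standing in for a semantic encoder plus classifier, a small signed-gradient (FGSM-style) perturbation at the input flips the decoded meaning even though the "channel" delivers every value exactly. The linear model, the dimensions, and the step size are illustrative assumptions, not a model from the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear semantic encoder/classifier: sign(w.x) is the decoded "meaning".
w = rng.normal(size=16)
x = rng.normal(size=16)
if x @ w < 0:
    x = -x  # ensure the clean input decodes as class +1

# FGSM-style attack on the encoder input: an L_inf-bounded step along the
# signed gradient of the score w.x, sized just past the decision boundary.
eps = 1.1 * (x @ w) / np.sum(np.abs(w))
x_adv = x - eps * np.sign(w)

# The channel delivers both inputs perfectly (identity), yet meaning flips:
# the failure lives entirely in the learned mapping, not in transmission.
decode = lambda v: 1 if v @ w > 0 else -1
assert decode(x) == 1
assert decode(x_adv) == -1
assert np.max(np.abs(x_adv - x)) <= eps + 1e-12  # small per-dimension change
```

In a deep encoder the gradient is computed by backpropagation rather than read off a weight vector, but the structure of the attack, and the fact that it leaves bit-level delivery untouched, is the same.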
Table II: Comparison of AI-induced security concerns in SemCom.

1) Dependence on complex learned models.
Underlying dependency: semantic encoder/decoder implemented as deep learned models.
Failure mode in SemCom: semantic meaning shifts under distributional or contextual perturbation despite error-free channel delivery.
Representative attack vector: adversarial input manipulation targeting semantic encoders (e.g., semantic misclassification).
Security implication: latent meaning becomes attackable without violating channel-level integrity.

2) Dependence on data-driven learning corpora.
Underlying dependency: training data, continual learning updates, federated semantic alignment.
Failure mode in SemCom: drift or mismatch in semantic mappings, especially across agents or over time.
Representative attack vector: data poisoning or federated desynchronization attacks.
Security implication: semantic boundaries move over deployment, silently altering meaning.

3) Dependence on external semantic resources.
Underlying dependency: shared embeddings, knowledge bases, foundation models, memory priors.
Failure mode in SemCom: inconsistent semantic reconstruction arising from mismatched or tampered shared priors.
Representative attack vector: ontology manipulation or prior-mismatch injection.
Security implication: meaning reconstruction relies on the integrity of shared external semantics.

III. AI SECURITY AND DEFENSE FOUNDATIONS: THREAT MODELS AND DESIGN PRINCIPLES

The observations in Section II highlight a fundamental shift in how correctness and failure arise in SemCom systems. When meaning is defined and reconstructed through learned models, shared knowledge, and adaptive inference pipelines, semantic degradation can occur without explicit transmission errors or protocol violations. These failure modes are not incidental artifacts of implementation, but direct consequences of adopting AI-native designs. This change raises security questions that differ from classical channel- and protocol-centric settings.
If semantic correctness depends on learned behavior, data distributions, and contextual alignment, then adversaries need not tamper with signals or protocols to induce failure. Instead, they may exploit the same mechanisms that enable semantic efficiency and adaptability, including representation learning, training dynamics, and inference-time sensitivity [12], [46], [47]. Reasoning about secure SemCom therefore requires a security framework that goes beyond reliability and explicitly addresses robustness, integrity, and privacy of learned behavior.

To establish this foundation, we step back from SemCom-specific mechanisms and summarize how security is modeled and defended in AI systems more broadly. Importantly, we do so with an explicit view toward communication-centric instantiations of these concepts. For example, threat models that are classically defined over input perturbations or training data in AI naturally map, in SemCom systems, to perturbations of learned joint source–channel representations, poisoning of end-to-end JSCC training pipelines, or manipulation of shared semantic priors between transmitters and receivers. This section therefore introduces core concepts in AI security and defense not as abstract machine learning constructs, but as building blocks that directly inform the threat surfaces and defense strategies of semantic communication systems. These concepts provide the necessary vocabulary and analytical lens for translating AI-induced dependencies into concrete, communication-relevant attack models and defense principles for SemCom.

A. From Reliability to Robustness: How AI Redefines Security

Traditional communication security is rooted in a reliability-centric abstraction, where correctness is defined by faithful symbol reproduction under noise, interference, and adversarial disruption.
Within this paradigm, failures are explicit and observable, manifesting as decoding errors, packet loss, or violated cryptographic checks. Protecting the channel and enforcing protocol-level integrity are therefore sufficient to ensure correct system behavior.

AI-driven systems depart fundamentally from this model. Learned components do not implement analytically specified transformations with predictable error behavior. Instead, they infer high-dimensional decision functions from data, making system behavior sensitive to subtle variations in inputs, context, or operating conditions. As a result, failures may arise even when inputs are syntactically valid, transmission is reliable, and cryptographic protections remain intact [12], [46], [47].

This shift reframes security from a question of reliability to one of robustness. In AI systems, robustness characterizes the stability of a learned model under admissible perturbations to its inputs. Let $(x, y) \sim \mathcal{D}$ denote a data distribution over inputs $x \in \mathcal{X}$ and task labels (or targets) $y \in \mathcal{Y}$. Let $f_\theta : \mathcal{X} \to \mathcal{Y}$ be a learned predictor parameterized by $\theta$, and let $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}_{\geq 0}$ denote a task loss. Given a threat model specified by an admissible perturbation set $\mathcal{S}(x)$, robustness is commonly formalized as the worst-case loss in a neighborhood of $x$:

$L_{\mathrm{rob}}(f_\theta; x, y) = \sup_{\delta \in \mathcal{S}(x)} \ell\big(f_\theta(x + \delta), y\big),$

and the corresponding robust risk is

$R_{\mathrm{rob}}(f_\theta) = \mathbb{E}_{(x,y) \sim \mathcal{D}}\big[ L_{\mathrm{rob}}(f_\theta; x, y) \big].$

Here, the perturbation set $\mathcal{S}(x)$ encodes the assumed adversary model, and robustness is evaluated with respect to the worst-case degradation in task performance rather than symbol-level error. Robust learning then seeks model parameters that minimize this worst-case risk $R_{\mathrm{rob}}(f_\theta)$, which recovers the standard min–max formulation underlying adversarial training and related robustness methods [48]. This distinction fundamentally alters security objectives.
Whereas reliability-centric systems protect symbols and protocols, AI-centric systems must protect learned behavior itself. Robustness seeks to limit worst-case semantic or task degradation under an explicit threat model [48], [49]. Privacy aims to prevent sensitive information about inputs or training data from being inferred through model outputs or representations. Integrity focuses on ensuring that learned models behave consistently with intended objectives despite malicious data manipulation or adversarial interaction.

Crucially, robustness in AI is not an intrinsic or universal property of a model. Instead, it is defined relative to explicit threat models, perturbation budgets, and operational constraints, and it often trades off against accuracy, efficiency, or adaptability [49]. Complementary notions such as distributional robustness further emphasize robustness under deployment shift rather than pointwise perturbations [50]. This perspective aligns directly with SemCom, where semantic fidelity, latency, energy, and robustness must be jointly managed. Understanding this shift from reliability to robustness is therefore essential before instantiating threat models and defenses for SemCom systems [51].

B. Canonical AI Threat Models and Attack Surfaces

Security analysis in AI systems begins with an explicit threat model that characterizes what the adversary can access, manipulate, and observe, as well as what constitutes a successful attack. Unlike traditional communication systems, where threat models are often defined at the signal or protocol level [52], [53], AI threat models are centered on learned representations, data-driven behavior, and inference dynamics. This shift leads to attack surfaces that are semantic, statistical, and often indirect.

A first organizing dimension is the stage of interaction at which the adversary operates.
Training-time threats target the data and procedures used to construct or update models. In supervised or self-supervised learning pipelines, adversaries may inject poisoned samples to bias learned representations or implant backdoors that activate under specific conditions [54], [55]. In distributed and federated learning settings, which are increasingly common in networked and edge deployments, attackers can further exploit aggregation and synchronization mechanisms, introducing subtle but persistent deviations that are difficult to detect through local validation alone [56], [57]. These attacks are particularly concerning because their effects may remain dormant until deployment, at which point mitigation options are limited.

Inference-time threats arise when adversaries interact with a deployed model through its inputs or query interface. Adversarial examples are the most well-known manifestation, where carefully crafted inputs induce incorrect predictions while remaining perceptually or syntactically valid [46]. Beyond worst-case perturbations, inference-time attacks also include semantic manipulations such as paraphrasing, reordering, or context injection that exploit inductive biases of learned representations [58]. Because inference-time attacks do not require access to training data or model internals, they are especially relevant in open or service-based deployments.

A second dimension concerns information exposure and leakage. Learned models often encode rich information about their training inputs in internal representations and outputs. Adversaries can exploit this through membership inference, model inversion, or representation probing to extract sensitive attributes or reconstruct training samples [59], [60]. Even when models are queried through restricted interfaces, repeated interaction may enable model extraction attacks that approximate decision boundaries or replicate functionality [61].
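The model extraction threat above can be sketched in a few lines: the adversary samples probe inputs, records label-only responses from a black-box query interface, and fits a surrogate that replicates the decision boundary. The linear victim model and query budget below are illustrative stand-ins, not a specific attack from the literature.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Victim" model: a black box the adversary can only query. The linear
# decision rule here is an illustrative stand-in for a deployed model.
w_victim = rng.normal(size=5)
def victim(x):
    return (x @ w_victim > 0).astype(float)  # label-only query interface

# Adversary: query random probes, then fit a surrogate by least squares
# on the recorded (input, label) pairs.
queries = rng.normal(size=(2000, 5))
labels = victim(queries)
w_surrogate, *_ = np.linalg.lstsq(queries, 2 * labels - 1, rcond=None)

# Agreement on fresh inputs measures how well functionality was replicated.
test_x = rng.normal(size=(1000, 5))
agreement = np.mean(victim(test_x) == (test_x @ w_surrogate > 0))
assert agreement > 0.9  # the surrogate closely mimics the black box
```

Even this crude regression recovers the victim's decision boundary from queries alone, which is why restricted interfaces do not by themselves prevent extraction.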
From a security objectives perspective, these dimensions can be interpreted through, rather than replaced by, the classical confidentiality–integrity–availability (CIA) triad [62]. Information exposure and leakage directly relate to confidentiality, training- and inference-time manipulation primarily threatens semantic integrity, and adaptation- or feedback-driven attacks can undermine availability by inducing cascading or persistent semantic failure [63], [64]. However, unlike traditional systems, these objectives are often violated in SemCom without explicit service disruption or syntactic corruption [65], motivating a threat taxonomy that emphasizes where and how learned behavior is exploited rather than categorizing attacks solely by security objectives.

A third dimension captures adaptation and feedback-driven threats. Many AI systems operate in closed-loop settings, where model outputs influence future inputs, data collection, or learning updates. RL, online adaptation, and continual learning introduce feedback paths that adversaries can exploit by shaping observations, rewards, or environment dynamics [66]. Such attacks need not cause immediate failure. Instead, they may induce gradual policy drift, degraded exploration, or biased adaptation over time, making them difficult to attribute to malicious behavior.

Across these dimensions, AI threat models are commonly classified by the adversary's access assumptions. White-box adversaries possess full knowledge of model architecture and parameters, gray-box adversaries observe partial internal signals, and black-box adversaries interact only through inputs and outputs. In SemCom systems, these access assumptions correspond to different degrees of visibility into the semantic pipeline rather than direct access to symbols or protocols.
For instance, adversarial access may range from knowledge of learned semantic mappings, to observation of intermediate semantic signals or adaptation feedback, to interaction only through end-to-end semantic inputs and task-level outputs. Importantly, many effective AI attacks operate under gray- or black-box assumptions, exploiting transferability, query access, or statistical side channels rather than explicit parameter access [67]. We defer a detailed SemCom-specific instantiation of these access models to Section IV, where they are embedded into an AI-centric threat model tailored to semantic communication pipelines.

Taken together, these threat models operationalize the departure from classical security abstractions by exposing how learned behavior becomes an attack surface. Attacks on AI systems do not necessarily violate syntactic correctness, protocol compliance, or signal integrity. Instead, they exploit the statistical and semantic structure of learned behavior, often leveraging the same mechanisms that enable generalization and adaptability, and that underpin semantic efficiency in communication pipelines.

C. AI Defense Taxonomy: Preventive, Reactive, and Certifiable Defenses

Defenses for AI systems are designed in response to these threat models and are typically organized by when and how they intervene in the learning and inference pipeline. Unlike traditional security mechanisms that enforce correctness through fixed rules or protocol guarantees, AI defenses aim to shape, monitor, or constrain learned behavior under uncertainty. Defense effectiveness is therefore tied to explicit assumptions about adversarial capabilities, perturbation sets, and system constraints, with the learned model serving as the primary unit of protection.

A first category consists of preventive defenses, which are applied prior to deployment to reduce vulnerability to anticipated attacks.
Representative examples include adversarial training and its variants [48], [49], robust augmentation [68]–[71], and representation-level regularization that enforces stability through Jacobian or Lipschitz control [72]–[76]. Distributionally robust objectives further improve resilience under deployment drift rather than pointwise perturbations [50]. Preventive defenses are attractive because they incur no runtime overhead, but their effectiveness depends critically on how well the assumed threat model matches the deployment conditions of the model.

A second category comprises reactive defenses, which operate during inference or system execution. These defenses monitor model behavior and intervene when anomalous or suspicious conditions are detected, using detection tests, reconstruction checks, or input transformations [77]–[81]. However, reactive defenses are often brittle under adaptive attackers that explicitly optimize against the detector or the transformation, making careful evaluation essential [82], [83]. In addition, reactive defenses introduce latency and computation overhead, which can be prohibitive in resource- and latency-constrained inference settings.

A third category focuses on certifiable and verifiable defenses, which provide formal guarantees on model behavior under explicitly defined perturbation sets. Certified robustness via randomized smoothing yields probabilistic guarantees under additive noise models [51], [84]. Verification and bound-propagation methods provide deterministic certificates for certain architectures by propagating relaxations through the network [85]–[87]. While such guarantees are attractive in safety- and mission-critical settings, they often trade coverage and tightness for rigor and scalability.

These defense categories are not mutually exclusive.
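The certifiable category can be made concrete with a minimal randomized-smoothing sketch: the smoothed classifier returns the majority vote of the base classifier under Gaussian input noise, and a certified ℓ2 radius follows from the noise level σ and the estimated top-class probability pA via R = σ·Φ⁻¹(pA) [51], [84]. The base classifier and parameters below are illustrative, and the Monte Carlo estimate is used directly rather than a rigorous confidence bound.

```python
import numpy as np
from statistics import NormalDist

def base_classifier(x):
    """Illustrative base classifier: sign of the first coordinate."""
    return int(x[0] > 0)

def smoothed_predict(x, sigma=0.5, n=1000, seed=0):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P(base_classifier(x + N(0, sigma^2 I)) = c),
    with certified l2 radius R = sigma * Phi^{-1}(p_A)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n, x.size))
    votes = np.array([base_classifier(x + eps) for eps in noise])
    p_a = max(votes.mean(), 1 - votes.mean())  # top-class vote frequency
    top = int(votes.mean() > 0.5)
    radius = sigma * NormalDist().inv_cdf(min(p_a, 1 - 1e-6))
    return top, radius

x = np.array([1.0, 0.0, 0.0])
label, radius = smoothed_predict(x)
assert label == 1 and radius > 0  # confidently classified, positive certificate
```

The certificate holds only for the additive-noise threat model assumed by smoothing, which is precisely the coverage-versus-rigor trade-off noted above.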
Preventive defenses can reduce the frequency and severity of failures encountered at runtime, reactive defenses can handle residual or unmodeled threats, and certifiable components can serve as safety envelopes when failure is unacceptable. This compositional view is especially important in systems that operate under dynamic conditions, where adversarial strategies, data distributions, and system objectives may evolve over time. These categories originate from model-centric AI security and treat the learned model as the primary unit of protection; in Section V, we reinterpret this taxonomy under the system-level constraints, physical channels, and cross-layer dependencies unique to SemCom.

D. Lessons Learned: Why AI Defense Cannot Transfer Directly to Semantic Communication

Despite the maturity of AI security research, defense techniques developed for standalone learning systems cannot be applied directly to SemCom systems. This gap arises not from missing mechanisms, but from a fundamental mismatch in system assumptions. Classical AI defense frameworks typically consider a single model operating on well-defined inputs and producing outputs that can be evaluated in isolation. SemCom systems, by contrast, embed learned models within a tightly coupled pipeline that spans physical transmission, shared knowledge, and distributed inference, where failures emerge from cross-layer interactions rather than from any single component.

A primary challenge stems from the presence of the physical communication channel. Most AI defenses assume direct access to model inputs and outputs, with perturbations defined in abstract feature or input spaces. In SemCom, however, semantic representations are transmitted over noisy and bandwidth-constrained wireless channels, where perturbations are shaped by propagation effects, interference, and protocol dynamics.
As a result, admissible attacks are neither arbitrary nor norm-bounded in the conventional sense. Instead of unconstrained ℓp perturbations applied directly to model inputs, semantic perturbations in SemCom must be realized through physical channels and protocol operations, and are therefore constrained by noise statistics, bandwidth, modulation, and coding [3], [21]. Defenses that rely on carefully constructed adversarial examples or certified input neighborhoods may thus fail to capture the structure of channel-realizable semantic distortions, limiting their effectiveness when deployed over the air.

A second challenge arises from the use of shared knowledge and contextual priors. Many AI defense techniques implicitly assume a fixed and trusted context under which inference is performed. SemCom systems violate this assumption by design. Transmitters and receivers may rely on external knowledge bases, pretrained models, or contextual memories that evolve over time and may not remain perfectly synchronized. Attacks that manipulate, poison, or desynchronize this shared context can induce semantic failure without directly affecting the learned encoder or decoder [88], [89]. Such failure modes fall outside the scope of most existing AI defenses, which focus on protecting individual models rather than the integrity and alignment of distributed semantic priors.

Resource and latency constraints further complicate defense deployment. AI defenses are often evaluated under offline or compute-rich settings, where increased model complexity, ensemble inference, or repeated sampling is acceptable. SemCom systems, particularly those operating at the edge or in real-time control loops, must satisfy stringent constraints on latency, energy, and bandwidth [25]–[27]. Defensive mechanisms that significantly increase computation, communication overhead, or decision delay may negate the efficiency gains that motivate SemCom in the first place.
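The contrast with unconstrained ℓp attacks can be illustrated by the admissibility constraint itself: a channel-realizable perturbation must satisfy a transmit-power budget and is observed only through the channel, so the attacker optimizes within that projection rather than over arbitrary input changes. A minimal sketch, assuming an AWGN channel and an average-power budget (both illustrative simplifications of real propagation and protocol constraints):

```python
import numpy as np

def project_power(delta, p_max):
    """Project a candidate interference waveform onto the admissible set
    ||delta||_2^2 / len(delta) <= p_max (an average-power budget)."""
    power = np.mean(delta ** 2)
    if power <= p_max:
        return delta
    return delta * np.sqrt(p_max / power)

def channel(signal, snr_db=10.0, seed=0):
    """AWGN channel: any perturbation is observed only through this mapping."""
    rng = np.random.default_rng(seed)
    noise_power = np.mean(signal ** 2) / (10 ** (snr_db / 10))
    return signal + rng.normal(scale=np.sqrt(noise_power), size=signal.shape)

rng = np.random.default_rng(2)
tx = rng.normal(size=64)                       # transmitted semantic waveform
raw_attack = 5.0 * rng.normal(size=64)         # unconstrained candidate perturbation
attack = project_power(raw_attack, p_max=0.1)  # admissible, power-limited version
rx = channel(tx + attack)                      # receiver observes channel output only

assert np.mean(attack ** 2) <= 0.1 + 1e-9  # the power budget is enforced
```

An attacker would iterate the perturbation inside this projection, which is why certified input neighborhoods defined in abstract feature space can misstate the actual over-the-air threat.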
This resource–latency tension fundamentally alters what constitutes a feasible and deployable defense in SemCom systems.

Crucially, SemCom systems exhibit emergent failure modes driven by feedback and interaction. Many SemCom deployments involve closed-loop operation, multi-agent coordination, or adaptive rate and representation selection based on inferred semantic confidence. In such settings, small semantic distortions introduced at one point in the pipeline can be amplified through feedback, propagated across agents, or reinforced through adaptation. Traditional AI defenses, which are largely evaluated on single-shot inference tasks, are ill-equipped to reason about these cascading, temporal, and collective effects.

Fig. 3. Threat model and attack surfaces in semantic communication. Top: an AI-centric threat model over adversarial objectives, adversarial capabilities, and access assumptions. Bottom: how threats manifest differently across the model, channel, and knowledge layers.

Taken together, these challenges underscore that securing SemCom is not a matter of directly importing AI defense techniques, but of reinterpreting them within a system-level framework. Preventive, reactive, and certifiable defenses must be redesigned to account for physical channels, shared knowledge, resource constraints, and distributed inference. This observation motivates the need for SemCom-native threat models and defense taxonomies, which we develop in the next section to explicitly capture where semantic failures arise and how robustness can be enforced across the entire communication pipeline.

IV. AI-CENTRIC THREAT MODEL AND ATTACK SURFACES IN SEMANTIC COMMUNICATION

Building on the AI-centric security foundations introduced in Section III, we now formalize a threat model tailored to SemCom systems. Rather than focusing on symbol corruption or protocol violations, this model characterizes adversarial actions that target learned semantic representations, shared contextual priors, and adaptive inference behavior across the communication pipeline, as illustrated in Fig. 3. We begin by identifying adversarial assumptions specific to SemCom pipelines, including attacker capabilities related to model training, knowledge sharing, inference behavior, and feedback loops.
We then introduce a taxonomy of semantic attack surfaces, organized by the system layers or dependencies they target, ranging from encoder–decoder representations and context-dependent adaptation mechanisms to the integrity of shared semantic priors.

A. AI-Centric Threat Model

We adopt an AI-centric threat model that reflects the unique vulnerabilities introduced by learning-based semantic communication. Unlike conventional threat models that target symbol-level integrity or channel confidentiality, the emphasis here is on compromising the interpretation of meaning itself. In SemCom systems, adversaries aim to induce failures at the semantic level, such as shaping, revealing, or corrupting the conveyed meaning, while potentially leaving lower-layer communication metrics such as packet delivery and bit error rates unaffected [9], [18].

Adversarial objectives. The adversary's goals are defined at the task or meaning level, rather than at the physical or syntactic levels. Three primary objectives arise: (1) task performance degradation¹, inducing incorrect semantic inference, misclassification, or failure in downstream decision-making [10], [90]; (2) semantic leakage, extracting sensitive information from latent codes or shared knowledge [91]; and (3) interpretation steering, shaping the receiver's understanding toward attacker-chosen semantics without altering the syntactic structure [88], [89]. These objectives may manifest as explicit task failures or silent behavioral shifts and are especially dangerous in applications where decisions are made autonomously based on semantic understanding.

Adversarial capabilities. To achieve these goals, adversaries may exploit vulnerabilities across different phases of a SemCom pipeline. At inference time, they may manipulate semantic inputs using adversarial perturbations that preserve syntactic validity while inducing misinterpretation by the decoder or downstream task module [10].
These attacks often bypass integrity checks, highlighting a mismatch between semantic and bit-level robustness [47]. During model development or online adaptation, adversaries may poison training data [92], implant backdoors, or compromise the integrity of federated or continual learning processes used to refine semantic models and shared knowledge bases [93], [94]. Other attacks exploit model observability: by issuing structured queries or monitoring outputs, adversaries may reconstruct latent representations, infer training data properties, or extract model parameters.

¹Throughout this paper, the term task is used in a broad sense and includes traditional data reconstruction objectives (e.g., minimizing reconstruction error or bit error rate) in conventional communication systems as special cases.

Access assumptions. In SemCom systems, adversarial access is shaped by how semantic encoders/decoders, channels, and shared knowledge are exposed and coordinated [9], [18]. In white-box SemCom settings, an adversary may have access to semantic encoder or decoder architectures, pretrained weights, or update mechanisms used for semantic adaptation or continual learning, consistent with canonical AI threat models [12]. Gray-box access commonly arises when the adversary can observe intermediate semantic representations, confidence scores, channel-aware adaptation signals, or metadata exchanged for coordination, rate control, or semantic alignment [67]. In black-box SemCom scenarios, the adversary interacts only through semantic inputs and outputs, such as transmitted semantic features, decoded task outputs, or query-based access to semantic services, where attacks exploit transferability and query-based probing rather than parameter access [67].

Importantly, SemCom attacks need not target isolated components.
Because meaning is reconstructed through the joint operation of learned models, channels, and contextual priors, partial access at one interface can exploit cross-module dependencies, such as encoder–decoder mismatch, context desynchronization, or biased adaptation.

Throughout this survey, we focus on semantic threat scenarios that persist even when conventional link- and physical-layer protections, such as encryption, authentication, or error correction, function correctly. This framing underscores that semantic attacks exploit vulnerabilities in meaning-making rather than signal delivery, and motivates the need for semantic-aware defenses beyond traditional communication security.

B. Model-Centric Threats to Learned Semantics

At the core of SemCom systems lie learned models that encode, decode, and interpret semantic information. Because these models determine how meaning is abstracted, compressed, and reconstructed, they form a primary attack surface in semantic communication pipelines [12]–[15]. Model-centric threats arise from the fact that semantic representations are learned from data and optimized for task objectives, rather than being analytically specified or symbolically verified.

Inference-time semantic manipulation. At inference time, adversaries may craft inputs that are syntactically valid and perceptually plausible, yet induce incorrect task-level semantics at the decoder or downstream inference module. Unlike classical adversarial examples that focus on pixel- or waveform-level perturbations, semantic manipulation often exploits higher-level structure such as paraphrasing, reordering, or contextual ambiguity to steer meaning while preserving surface validity [10], [90]. Even without access to model internals, repeated queries can enable black-box probing of fragile semantic regions, exposing sensitivity in learned representations.

Training-time poisoning and backdoors.
When semantic encoders and decoders are trained or adapted from data, adversaries may inject poisoned samples that bias the learned semantic space [93], [95]. Clean-label poisoning can gradually distort semantic boundaries without degrading nominal accuracy, while backdoor attacks implant hidden triggers that activate attacker-chosen semantic behavior under specific conditions. Because SemCom models are often trained end to end and reused across tasks or deployments, such training-time compromises can propagate through the communication pipeline and affect multiple downstream functions.

Privacy leakage and model extraction. Semantic representations typically compress rich, high-level information, which creates risks of unintended semantic leakage. Latent codes may encode sensitive attributes of inputs or training data, enabling membership inference, model inversion, or representation probing attacks [91], [96], [97]. In deployed SemCom services, repeated semantic queries can further facilitate model extraction, allowing adversaries to approximate semantic encoders or decoders and replicate or manipulate their behavior.

In SemCom, learned models act as intermediate semantic interfaces within an end-to-end communication pipeline, so privacy leakage, extraction, and manipulation can have system-level consequences beyond a single inference task. As a result, model-level attacks need not cause immediate misclassification to be effective. Small distortions in semantic representations can propagate through channel transmission, knowledge-assisted reconstruction, and downstream inference, leading to amplified or delayed semantic failures even when lower-layer communication remains reliable.
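The membership-inference risk mentioned above rests on a simple observation: samples seen during training tend to incur systematically lower loss than unseen samples, and that gap is itself a leakage channel. The overfit least-squares model below is a deliberately simplified, illustrative stand-in for a trained semantic model; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic setting: a model fit on few samples relative to its
# dimensionality overfits, which leaks membership.
d, n = 10, 20
w_true = rng.normal(size=d)
X_in, X_out = rng.normal(size=(n, d)), rng.normal(size=(200, d))
y_in = X_in @ w_true + 0.5 * rng.normal(size=n)
y_out = X_out @ w_true + 0.5 * rng.normal(size=200)

# The "deployed model": least-squares fit on the member set only.
w_fit, *_ = np.linalg.lstsq(X_in, y_in, rcond=None)

# Loss-threshold attack: members systematically incur lower loss,
# so per-sample loss reveals training-set membership.
loss_in = np.mean((X_in @ w_fit - y_in) ** 2)
loss_out = np.mean((X_out @ w_fit - y_out) ** 2)
assert loss_in < loss_out  # the loss gap is the membership signal
```

In a SemCom pipeline, the analogous signal would be reconstruction or task loss observed through the semantic service interface, which is why query access alone can suffice for membership inference.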
Moreover, because semantic encoders and decoders are jointly optimized with communication objectives and often reused across tasks, model-centric attacks in SemCom can simultaneously affect compression efficiency, robustness to channel impairments, and task-level correctness. This coupling motivates examining additional threat surfaces that arise when semantic models interact with channels, shared knowledge, and networked inference, which we discuss next.

C. Channel-Realizable Semantic Attacks

Beyond direct manipulation of learned models, SemCom systems are vulnerable to attacks that exploit the physical communication channel while remaining within realistic signal, power, and protocol constraints. Unlike classical jamming or interference, which aim to disrupt symbol decoding or synchronization, semantic attacks over the channel target the learned semantic inference process at the receiver.

Channel-constrained semantic perturbations. In SemCom, channel noise and interference are not merely impairments to be corrected; they directly interact with the learned semantic decoders that map received signals to semantic representations [98]–[100]. As a result, adversaries can craft channel-realizable perturbations that leave conventional metrics such as bit error rate, frame error rate, or CRC checks largely unaffected, yet cause substantial degradation in task-level semantic outcomes.

A prominent class of threats is semantic jamming [101]–[103], in which the adversary injects carefully structured interference signals constrained by power, bandwidth, and spectral masks. Rather than maximizing energy or randomness, the adversary optimizes interference to distort specific semantic features extracted by the decoder.
For example, small waveform perturbations may selectively alter latent features corresponding to key entities, labels, or commands, leading to misclassification, mistranslation, or incorrect control actions even when packets are successfully decoded.

Protocol-aware semantic manipulation. Another attack vector arises from protocol-aware manipulation [104], [105], where the adversary exploits retransmission policies, rate adaptation, or scheduling decisions that are coupled with semantic confidence or task feedback. By subtly degrading semantic quality without triggering retransmissions or fallback modes, an attacker can induce persistent semantic drift while avoiding detection by lower-layer reliability mechanisms.

Importantly, channel-realizable semantic attacks expose a fundamental mismatch between syntactic reliability and semantic integrity. Because semantic decoders are trained under assumed channel models, adversarial perturbations that exploit modeling gaps or worst-case channel realizations can push received signals into regions of the semantic decision space that were rarely observed during training. This highlights that robustness to random noise does not imply robustness to adversarial, structure-aware channel manipulation.

D. Semantic Manipulation via Shared Knowledge and Context

A defining feature of modern SemCom architectures is the use of shared knowledge, contextual priors, and coordinated inference across multiple agents to guide semantic encoding and decoding [45], [104]. Knowledge bases, pretrained models, ontologies, and external memory modules enable aggressive semantic compression and robust reconstruction at the link level, while networked deployment allows semantic information to be reused, aggregated, and refined across nodes. At the same time, these capabilities introduce a broad and underexplored attack surface that extends beyond individual encoders or channels.

Knowledge poisoning and semantic prior manipulation.
At the knowledge and context level, adversaries may target the integrity, availability, or alignment of semantic priors used by transmitters and receivers [4], [6]. Unlike attacks on encoders or physical channels, such manipulations may not modify transmitted signals. Instead, they alter how meaning is grounded and inferred. Knowledge poisoning attacks inject corrupted, biased, or misleading entries into a semantic knowledge base, causing systematic misinterpretation of transmitted representations [89], [92], [106]. Because semantic encoders rely on knowledge to decide what information to transmit and semantic decoders rely on the same priors to reconstruct missing details, even small perturbations to shared knowledge can silently bias both sides of the communication.

Context desynchronization and semantic divergence. SemCom also implicitly assumes sufficient alignment between the contextual priors held by the transmitter and receiver. Adversaries may exploit update delays, partial synchronization, or version mismatches to induce semantic divergence, where identical transmitted representations are interpreted differently across endpoints [88], [89], [107]. Such desynchronization attacks are particularly subtle in distributed systems, as similar effects may arise from benign context drift, making malicious manipulation difficult to distinguish from natural system evolution.

Inference-time semantic leakage. Privacy-oriented threats further arise when semantic reconstruction depends on rich contextual knowledge. Adversaries who gain access to shared knowledge, or who can issue queries to semantic models conditioned on that knowledge, may infer sensitive information that was never explicitly transmitted [96], [108]. In these cases, semantic leakage occurs through inference rather than communication, blurring the boundary between data exposure and semantic reasoning.

E. Networked Semantic Inference and Multi-Agent Propagation

Beyond individual links, SemCom systems are frequently deployed in multi-agent and networked settings, where semantic information is relayed, aggregated, or jointly processed across nodes [109]–[111]. In such architectures, semantic inference is no longer localized to a single transmitter–receiver pair, but emerges from collective processing, shared models, and feedback-driven coordination among multiple agents [8]. This networked inference paradigm enables scalability and robustness, but also introduces a distinct and underexplored attack surface.

Propagation and amplification of semantic corruption. In networked SemCom systems, attacks need not compromise a single component to be effective. Corrupted semantic representations, biased contextual updates, or manipulated inference outputs introduced at one node may propagate through shared models, cooperative inference, or adaptive feedback loops [112], [113]. Since downstream agents may treat received semantic information as reliable context, even small perturbations can be amplified through collective decision processes [114], [115], leading to correlated errors, cascading misinterpretations, or emergent misbehavior at the system level.

Coordination-level manipulation and collective misalignment. Adversaries may further exploit coordination protocols, consensus mechanisms, or task-sharing strategies that govern how agents fuse semantic information [116], [117]. By selectively influencing a subset of agents or disrupting coordination signals, an attacker can induce semantic disagreement, delayed convergence, or biased collective inference without necessarily attacking individual semantic encoders or channels. Such effects are particularly difficult to diagnose, as failures manifest only at the global system level and may resemble benign coordination noise or nonstationary environments.
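One way to limit such propagation, borrowed from Byzantine-robust distributed learning rather than from any specific SemCom design, is to fuse agents' semantic estimates with a coordinate-wise median instead of a naive mean, which bounds the influence of a corrupted minority. A minimal sketch with illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(4)

truth = np.zeros(16)                               # the "correct" shared semantic state
honest = truth + 0.05 * rng.normal(size=(9, 16))   # 9 honest agents, small noise
corrupt = truth + 10.0 * np.ones((2, 16))          # 2 compromised agents, large bias
reports = np.vstack([honest, corrupt])

mean_fused = reports.mean(axis=0)              # naive fusion: the bias propagates
median_fused = np.median(reports, axis=0)      # robust fusion: minority is voted out

err_mean = np.linalg.norm(mean_fused - truth)
err_median = np.linalg.norm(median_fused - truth)
assert err_median < err_mean  # median limits amplification of corrupted inputs
```

The median guarantee holds only while corrupted agents remain a minority per coordinate, which is why such mechanisms are typically paired with the trust and misbehavior-detection measures discussed later.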
Networked semantic inference threats highlight that security in SemCom cannot be assessed solely at the level of individual models or links.
Fig. 4. Layered defenses for semantic communication security. According to how and where they intervene in the communication pipeline, defenses are organized into five categories: encoder/decoder level, channel level, knowledge-base level, network level, and cross-domain integration. The layered structure highlights how AI-centric defenses must be instantiated and composed across multiple SemCom subsystems to provide end-to-end semantic security.
When meaning is inferred through distributed, cooperative processes, semantic integrity becomes a collective property of the system, requiring defenses that explicitly account for propagation dynamics, inter-agent trust, and coordination robustness.
F. Lessons Learned
The threats surveyed in this section reveal that vulnerabilities in semantic communication systems arise fundamentally from their AI-native design rather than from unreliable channels or protocol violations. By targeting learned representations, training and adaptation dynamics, shared semantic priors, and networked inference processes, adversaries can induce semantic failures while preserving syntactic correctness and lower-layer integrity. This decoupling between semantic integrity and conventional communication metrics challenges traditional security abstractions based on bit-level reliability, encryption, and authentication.
A central lesson is that semantic failures rarely originate from isolated components. Instead, they emerge from interactions among models, channels, knowledge resources, and coordination mechanisms, where seemingly localized or benign perturbations can propagate and amplify through cross-module dependencies. As a result, assessing SemCom security requires reasoning about semantic inference as a coupled, end-to-end process rather than as a collection of independently secured blocks.
Another key insight is that robustness, security, and privacy in SemCom are tightly intertwined. Design choices that improve semantic efficiency through aggressive compression, context reuse, or cooperative inference often increase exposure to poisoning, leakage, and misalignment risks. This tension underscores the need to treat security as a first-class design objective, shaping architectural choices and operational trade-offs.
Finally, these observations indicate that defending semantic communication systems requires mechanisms that are aware of meaning, context, and coordination, rather than solely of signal fidelity or protocol correctness. We defer a systematic discussion of defenses and mitigation strategies to the following sections.

V. HOW AI DEFENSES CAN BE USED IN SEMANTIC COMMUNICATION SYSTEMS

As discussed in Section IV, SemCom enables attacks that target meaning, leakage, or misalignment while leaving syntactic integrity and lower-layer metrics largely intact. Defending SemCom therefore requires system-level protections that span semantic transceivers, channels, shared knowledge, and networked coordination. Following Fig. 4, we organize defenses by intervention interface and by feasibility under rate, latency, energy, and deployment constraints, and summarize design principles in Table III and the taxonomy in Table IV.

TABLE III
DESIGN PRINCIPLES FOR AI DEFENSE IN SEMANTIC COMMUNICATION SYSTEMS

P1: End-to-end semantic security.
  Defense focus: System-level correctness.
  Key observation: Semantic correctness arises from interactions among learned and non-learned components across the entire SemCom pipeline.
  Design implication: Defenses must address vulnerabilities at semantic encoders/decoders, channel effects, shared knowledge, and multi-agent coordination, rather than protecting individual models in isolation.

P2: Time–location aligned defense.
  Defense focus: Intervention alignment.
  Key observation: Defenses differ in when they act (training-time, inference-time, certified) and where they intervene within the pipeline.
  Design implication: Effective protection requires mapping defense mechanisms to concrete intervention points at the model, channel, knowledge-base, and network levels.
P3: Semantically efficient robustness.
  Defense focus: Cost-aware robustness.
  Key observation: SemCom operates under strict constraints on rate, latency, energy, and computation.
  Design implication: Defense mechanisms should maximize semantic fidelity or task robustness per unit cost, avoiding excessive redundancy or model complexity that negates SemCom efficiency gains.

P4: Layered and composable defenses.
  Defense focus: Cross-layer protection.
  Key observation: No single defense is sufficient to secure system-level semantics in SemCom.
  Design implication: Layered defenses across pipeline stages enable complementary protection, adaptive responses, and safety envelopes for critical components.

A. Defense Design Philosophy for Semantic Communication

Traditional AI defenses are typically developed in a model-centric setting, where the objective is to protect a single learned function against adversarial manipulation, data poisoning, or information leakage. In contrast, SemCom systems embed learned components within a broader pipeline that couples semantic encoding and decoding with wireless transmission, shared knowledge, and distributed decision-making. As a result, defenses that are effective for standalone models do not directly translate to SemCom without reinterpretation at the system level. In SemCom, the security objective is not to preserve intermediate representations, but to preserve task-relevant meaning after the entire pipeline has acted on the message.
Designing defenses for SemCom requires reasoning along two fundamental dimensions. The first concerns when a defense intervenes in the pipeline. Defenses may be preventive, such as training-time hardening of semantic representations; reactive, such as inference-time detection and mitigation; or certifiable, providing formal guarantees under explicit threat models, as discussed in Section III. The second concerns where the defense intervenes.
Because semantics is created, transformed, and consumed across multiple interfaces, natural intervention points arise at the encoder/decoder, channel, knowledge-base, and network levels. Effective protection therefore requires mapping AI defense mechanisms to these concrete system interfaces, rather than treating security as an add-on to a single model.
Beyond this time–location alignment, SemCom introduces domain-specific constraints that fundamentally shape defense feasibility. Transmission is inherently lossy: noise, interference, and bandwidth limits can irreversibly distort semantic representations. Many SemCom designs rely on shared or external knowledge, introducing risks of poisoning, desynchronization, and context leakage. Moreover, SemCom systems often operate under strict rate, latency, energy, and computation budgets, making heavyweight redundancy or expensive inference impractical. Finally, semantic failures can be subtle: outputs may remain fluent or syntactically valid while becoming task-incorrect or unsafe. These constraints motivate four SemCom-specific principles in Table III: end-to-end semantic security (P1), time–location aligned defense (P2), semantically efficient robustness (P3), and layered composability (P4).
Guided by these principles, we organize AI defenses for SemCom along the same pipeline through which semantic information is generated, transmitted, contextualized, and propagated. Rather than grouping defenses by algorithmic family, we group them by their points of intervention within the SemCom system. This organization highlights how different defense mechanisms address distinct semantic failure modes and how protections at one layer interact with vulnerabilities at others. Table IV further refines each interface by stating the dominant threat objective and the corresponding AI defense primitives, making explicit the translation from model-centric defense mechanisms to SemCom intervention points.
The remainder of this section follows this structure, examining encoder/decoder-level, channel-level, knowledge-base-level, and network-level defenses in turn.

B. Encoder/Decoder-Level Defenses

Guided by the design philosophy outlined above, we begin by examining defenses at the semantic encoder/decoder level. This level is the most direct interface between AI security and SemCom: it is where meaning is first abstracted into learned representations and where semantic errors most immediately translate into task failure. As a result, many vulnerabilities and defenses studied in model-centric AI security reappear here as direct effects on semantic representations and task performance.
At this first stage of the pipeline, defenses focus on shaping how meaning is encoded and recovered before transmission. At this layer, the dominant threats are semantic evasion and task degradation under meaning-preserving perturbations, so defenses primarily instantiate robust optimization, stability regularization, purification, and secrecy-aware coding primitives. Early work adapts adversarial training to semantic representations, while subsequent approaches increasingly emphasize representation geometry, correction at inference time, and finally architectural integration of robustness and confidentiality. Together, these efforts reflect a progression from reactive hardening toward robustness by construction.

TABLE IV
LAYERED DEFENSE TAXONOMY FOR SEMANTIC COMMUNICATION SYSTEMS, ORGANIZED BY INTERVENTION INTERFACE, THREAT OBJECTIVE, AND AI DEFENSE PRIMITIVES
Encoder/Decoder-Level:
- Adversarial and Robust Training. Threat/goal: semantic evasion; task degradation under meaning-preserving perturbations. Primitives: robust optimization over semantic neighborhoods using adversarial or minimax training objectives [118], [119]; robust architecture via constrained bottlenecks (e.g., masked VAE) [120].
- Representation Shaping. Threat/goal: representation instability; enlarged semantic attack surface. Primitives: stability regularization through Jacobian, spectral, or Lipschitz constraints [74], [75]; latent compression and quantization to reduce adversarial degrees of freedom [121]; semantic consistency regularization across meaning-preserving views [122], [123].
- Generative Correction and Semantic Restoration. Threat/goal: test-time corruption; adversarial or channel-induced semantic distortion. Primitives: purification and projection of corrupted representations using GANs or diffusion models [124], [125]; generative denoising for semantic manifold recovery [125], [126].
- Joint Source–Channel–Security Coding. Threat/goal: semantic leakage; eavesdropping under compressed representations. Primitives: secrecy-aware representation learning that embeds robustness and confidentiality into the encoder [127]; security-by-design JSCC without higher-layer cryptography [127], [128].

Channel-Level:
- Adversarial Channel Modeling. Threat/goal: over-the-air semantic degradation under physically realizable interference. Primitives: robust training under structured channel noise combining stochastic fading and adversarial perturbations [129], [130]; worst-case utility optimization under power-bounded perturbation models [129], [130].
- Semantic-Aware Channel Coding. Threat/goal: unequal semantic importance; disproportionate task impact. Primitives: importance-weighted protection and unequal semantic error control in Deep JSCC [3], [131]; cost-sensitive robustness [131].
- Semantic Anti-Jamming and Adaptive Reception. Threat/goal: adaptive jamming targeting task-level meaning. Primitives: detection and co-training against learned semantic jammers [101]; adaptive reception via task-driven decoding and interference-aware adaptation [101], [103].

Knowledge-Base-Level:
- Secure Knowledge Construction and Maintenance. Threat/goal: knowledge poisoning; integrity and provenance loss. Primitives: provenance tracking, auditing, and verification for KB ingestion and updates [9], [19]; rollback and isolation of untrusted sources [19].
- Controlled Knowledge Sharing and Synchronization. Threat/goal: knowledge desynchronization; context leakage across agents. Primitives: access control and authenticated synchronization of shared semantic knowledge [45], [132]; separation of global and private knowledge [45].
- Secure Knowledge-Assisted Semantic Coding. Threat/goal: malicious grounding; semantic inference by unauthorized receivers. Primitives: verified grounding and cross-source consistency checks [133], [134]; private-knowledge separation as an implicit semantic security key [45].

Network-Level:
- Reliability-Oriented Semantic Networking. Threat/goal: cascading semantic drift in multi-hop or multi-agent propagation. Primitives: network-level semantic correction using generative reconstruction and relay refinement [135], [136]; cooperative robustness [137], [138].
- Trust, Identity, and Semantic Misbehavior Detection. Threat/goal: impersonation; malicious semantic injection. Primitives: semantic-aware authentication and trust evaluation using environment- or content-level features [139], [140]; semantic anomaly detection [139].
- Feedback-Aware Semantic Defense Coordination. Threat/goal: adaptive, multi-agent semantic attacks over time. Primitives: closed-loop defense orchestration via feedback signals and reinforcement learning [141]; graceful degradation under partial compromise as a system-level resilience objective.

1) Adversarial and robust training of semantic encoders and decoders.
This category extends adversarial training from pixel- or waveform-level perturbations to semantic neighborhoods defined by meaning-preserving transformations. Rather than norm-bounded noise, adversaries are constructed through paraphrasing, masking, reordering, or content-preserving edits that preserve syntactic validity while inducing task-level errors, enabling encoders and decoders to be trained directly against worst-case semantic failures [118]. Robustness can be further strengthened through architectural choices such as masked Variational Autoencoder (VAE) bottlenecks that promote discrete, semantically meaningful latent codes [120], as well as task-aligned objectives that enforce semantic agreement across multiple views of the same content [122], [123]. At a more principled level, robustness has been formalized as a minimax semantic rate–distortion problem, where training explicitly optimizes worst-case semantic distortion rather than average-case performance [119]. Compared with standard adversarial training targeting norm-bounded perturbations [48], these approaches align robustness objectives with task utility and semantic correctness.
2) Representation shaping via regularization and semantic consistency. Rather than explicitly generating adversarial examples, this class improves robustness by shaping the geometry and invariances of learned semantic representations. Latent-space regularization constrains encoder sensitivity by limiting Jacobian norms, spectral radii, or latent dispersion, reducing vulnerability to small but semantically harmful perturbations under noise and interference. Masked or quantized codebooks further compress semantics into compact latent spaces, shrinking the adversary's effective action range while improving stability and efficiency [120], [121].
Complementary semantic consistency constraints encode task- or knowledge-derived invariances directly into the learning objective, encouraging paraphrases or cross-lingual variants to map to nearby representations with aligned task outputs [122], [123]. These techniques parallel classical stability and Lipschitz regularization in AI security [74], [75], but are adapted to semantic fidelity objectives and communication constraints, making them attractive for resource-constrained SemCom deployments.
3) Generative correction and semantic restoration. Instead of hardening the encoder or decoder alone, this family introduces generative modules that actively correct corrupted semantic representations at inference time by projecting noisy signals or latent codes back onto a manifold of semantically plausible content. Generative Adversarial Network (GAN) and diffusion-based pre- and post-processors have been proposed to filter adversarial artifacts before decoding in speech and multimodal SemCom settings [125], [142]. Generative models can also serve a dual role by acting as adaptive attackers during training and as purification modules at inference [126]. These approaches parallel generative purification in AI defense [124], but must remain lightweight and interruptible to respect latency, computation, and energy constraints in communication systems.
4) Joint source–channel–security coding at the encoder/decoder. A distinct line of work embeds robustness and confidentiality directly into the semantic transceiver architecture. Rather than treating security as an external layer or post-hoc correction, neural encoders are trained so that latent representations simultaneously enable reliable semantic recovery at the intended receiver while remaining statistically uninformative to eavesdroppers [128].
Representative designs such as deep joint source–channel and encryption coding unify semantic compression, channel coding, and secrecy within a single learned mapping [127]. In contrast to adversarial training or generative purification, these approaches enforce robustness and confidentiality by construction, making them attractive when higher-layer cryptographic support is unavailable or incompatible with semantic objectives.
Overall, encoder/decoder-level defenses in SemCom closely mirror the AI security toolbox, including adversarial training [48], stability regularization [74], [75], consistency-based learning, and generative purification [124]. However, they differ fundamentally in evaluation criteria and operating constraints. Robustness in SemCom is evaluated not by ℓp distortion or bit error rate, but by task-level semantic utility, often assessed via BLEU score [143], semantic similarity [144], or downstream accuracy under rate, latency, and energy budget constraints in wireless systems.
Despite promising advances, several challenges remain. Most existing defenses assume static or bounded semantic perturbations and are developed for single-shot inference, lacking support for multi-turn or interactive SemCom workflows where feedback loops and adaptation play key roles. Furthermore, robustness is rarely co-optimized with rate–distortion and latency objectives, leaving open how much semantic robustness can be achieved without negating the efficiency gains of SemCom. These gaps underscore the need for SemCom-native robustness evaluation protocols that integrate task utility, communication constraints, and system feedback. Progress will require tighter co-design across encoder training, JSCC architecture, and physical-layer dynamics, where robustness is not bolted on, but emerges from end-to-end semantic fidelity under adversarial and constrained conditions.
C.
Channel-Level Defenses under Semantic Objectives
Even when semantic representations are robustly encoded, transmission over a wireless channel introduces distortions that cannot be controlled at the model level alone. This shifts the defense objective from representation robustness to resilience under physically constrained interference. Wireless channels convey encoded semantic representations over noisy, interference-limited links, making them a primary locus where adversaries can induce semantic degradation under strict physical constraints. Unlike conventional attacks that aim to disrupt symbol recovery, semantic attackers may inject carefully crafted over-the-air perturbations that preserve syntactic reliability while inducing task-level semantic distortion, as discussed in Section IV-C. Channel-level defenses in SemCom must therefore operate within spectrum, power, and protocol constraints while explicitly optimizing semantic utility rather than bit-level fidelity.
Channel-level defenses evolve along three complementary directions, corresponding to how much control the system assumes over the channel during training, coding, and runtime adaptation.
1) Adversarial channel modeling for robust SemCom. A direct defense strategy hardens semantic transceivers by explicitly modeling the wireless channel as a composition of stochastic fading and physically realizable adversarial perturbations constrained by power and bandwidth budgets. Semantic encoders and decoders are jointly trained under this mixed channel to optimize worst-case task utility rather than bit-level fidelity [129]. Extensions of this approach incorporate diffusion-based secure DeepJSCC to improve robustness against structured adversarial interference [130], task- and context-adaptive objectives to handle non-stationary environments [145], and sparse semantic coding schemes that enhance resilience under dynamic channel and task variations [146].
Collectively, these methods emphasize training-time exposure to realistic worst-case channels as a foundation for semantic robustness.
2) Semantic-aware channel coding with unequal protection. A complementary line of work focuses on allocating channel resources asymmetrically according to semantic importance, ensuring that task-critical information receives stronger protection. Deep JSCC inherently exhibits unequal error protection by prioritizing salient semantic features during end-to-end training [3]. Building on this property, explicit semantic importance maps have been introduced to guide adaptive power allocation, redundancy, and diversity across semantic representations [131], [147]. These approaches resemble cost-sensitive robust optimization in AI, but must additionally respect physical-layer constraints such as spectral efficiency, latency, and transmit power.
3) Semantic anti-jamming and adaptive reception. Semantic jamming targets task-level meaning rather than maximizing bit errors, allowing attacks to remain effective even when signal-level metrics appear benign. Recent defenses therefore operate at the semantic level, either by co-training receivers against learned semantic jammers or by embedding coding-aware interference strategies that selectively degrade an eavesdropper while preserving legitimate semantic recovery. Representative examples include GAN-inspired adversarial frameworks that jointly train a semantic jammer and a robust receiver [101], as well as coding-enhanced jamming schemes that strengthen secrecy through encoder-side design under rate and reliability constraints [103]. These approaches highlight both the effectiveness and the adaptivity of semantic jamming, underscoring the need for dynamic, task-aware reception strategies.
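The unequal-protection idea in the second category can be sketched in a few lines (a hypothetical illustration with assumed importance weights, not a scheme from the cited works): a fixed transmit-power budget is split across semantic feature groups in proportion to their task importance, so that task-critical features obtain a higher effective SNR than background ones.

```python
import numpy as np

def allocate_power(importance, total_power=1.0):
    """Importance-proportional power allocation over semantic feature groups."""
    w = np.asarray(importance, dtype=float)
    if np.any(w < 0) or w.sum() == 0:
        raise ValueError("importance weights must be non-negative and not all zero")
    return total_power * w / w.sum()

# Assumed ordering: task-critical > contextual > background features.
importance = [0.6, 0.3, 0.1]
power = allocate_power(importance)
snr = power / 0.1  # effective per-group SNR under equal noise variance 0.1
print(power, snr)
```

In real designs the weights would come from a learned semantic importance map, and the allocation would additionally have to respect per-symbol power masks, latency, and spectral-efficiency constraints, which this sketch omits.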
Taken together, these defenses reflect three complementary design philosophies: hardening semantic transceivers through channel-realizable adversarial training, reshaping physical- and link-layer resources around semantic importance, and incorporating runtime mechanisms that adapt transmission or decoding in response to semantic-aware interference. While differing in mechanism and assumptions, all three shift the protection objective from symbol fidelity to task-level semantic utility under physically realizable attacks.
From an AI defense perspective, channel-level defenses can be viewed as robustness under structured measurement noise, where adversarial perturbations are constrained by wireless physics rather than arbitrary ℓp bounds. The defining challenge in SemCom is that robustness must be engineered jointly with spectral efficiency, latency, and regulatory constraints. Defenses cannot arbitrarily increase redundancy or computation and must interoperate with standardized physical- and link-layer protocols.
Despite growing interest, channel-level defenses remain limited by simplified channel models and narrow adversary assumptions. Most evaluations rely on AWGN or single-antenna fading channels and do not capture dense, multi-user, or multi-RAT environments where attackers may exploit protocol features and scheduling dynamics. Interactions with cryptographic mechanisms such as PHY-layer authentication and link encryption are also underexplored, as semantic robustness and confidentiality are often treated as separate objectives. A key takeaway is that channel-level defenses should be evaluated using SemCom-native metrics that couple semantic utility with adversarial, spectrum- and power-constrained interference, and that robust transceiver design should be co-optimized with adaptive, attack-aware resource management.
D.
Knowledge-Base-Level Defenses
While channel-level defenses protect semantic signals during transmission, they implicitly assume a stable and trusted semantic context at the receiver. In traditional systems with centralized knowledge bases, this assumption is often reasonable: the KB is treated as a backend resource whose integrity and access are enforced through perimeter security, access control, and centralized governance. In SemCom, however, encoding and decoding explicitly depend on shared or external semantic knowledge that is distributed, replicated, and continuously updated across communicating endpoints [4], [19]. As a result, semantic context itself becomes part of the communication process rather than a passive background resource.
Knowledge bases (KBs), pretrained foundation models, ontologies, and structured memories serve as semantic priors that enable aggressive compression and robust reconstruction under limited communication resources [4], [25]. Unlike traditional centralized KBs that primarily support query answering or decision support, semantic knowledge in SemCom directly shapes what information is transmitted and how missing content is inferred at the receiver. Despite this central role, the security implications of semantic knowledge in communication pipelines have received comparatively limited attention relative to encoder- and channel-level defenses.
As discussed in Section IV-D, this tighter coupling between knowledge and communication introduces a unique attack surface: semantic failures may arise from manipulation, inconsistency, or overexposure of knowledge even when learned encoders, decoders, and wireless channels behave nominally [9]. Because semantic knowledge simultaneously influences encoding decisions and decoding interpretation, attacks on the knowledge layer can silently induce semantic misalignment without triggering signal-level anomalies or reliability failures [46], [88].
As semantic knowledge becomes an active participant in encoding and decoding, defenses naturally evolve along three complementary directions that address integrity, synchronization, and secure semantic grounding.
1) Secure knowledge construction and maintenance. The first line of defense treats the semantic KB as a critical system asset whose ingestion, update, and evolution must be explicitly secured. Threats to semantic KBs have been systematized, with poisoning, tampering, and stale or inconsistent updates identified as primary risk factors [19]. Provenance-aware KB pipelines tag each update with source identity, temporal metadata, and verification status, enabling suspicious or conflicting updates to be quarantined or rolled back. Secure maintenance workflows can further incorporate access control, logging, and auditability to bound the blast radius of compromised contributors and support post-hoc forensic analysis [9]. These mechanisms parallel data lineage and governance practices in secure ML pipelines, but are uniquely critical in SemCom because KB corruption affects both semantic compression and reconstruction.
2) Controlled knowledge sharing and synchronization. The second category focuses on regulating how semantic knowledge is shared and synchronized across devices, agents, and administrative domains. SemCom systems often implicitly assume sufficiently aligned knowledge at the transmitter and receiver; adversaries can exploit partial synchronization, delayed updates, or version mismatches to induce semantic divergence without disrupting signal delivery. Recent works therefore distinguish between global knowledge and private or personalized knowledge, and enforce authenticated synchronization policies [132], [148].
A dedicated knowledge management layer can coordinate access control, update authentication, and selective synchronization, preventing adversaries from reconstructing the exact semantic context used by legitimate endpoints [45]. From a defense standpoint, such mechanisms act as semantic access control, limiting both semantic leakage and desynchronization-induced misinterpretation.
3) Secure knowledge-assisted SemCom pipeline. Rather than treating knowledge solely as a background resource, this category integrates structured knowledge directly into semantic encoding and decoding in a security-aware manner. Knowledge-assisted semantic coding schemes inject graph-based or symbolic priors into representation learning so that encoders compress messages in ways consistent with trusted knowledge, while decoders use knowledge-guided reasoning to recover missing semantics under noise. Incorporating verified knowledge sources and cross-source consistency checks can improve robustness against both channel noise and semantic ambiguity [9], [133], [134]. More recent designs emphasize separating public and private knowledge during encoding, such that accurate semantic reconstruction requires access to the intended private knowledge context [45]. This effectively turns private knowledge into an implicit security key: even if an adversary intercepts semantic representations and possesses strong global priors, the absence of synchronized private knowledge prevents correct interpretation of sensitive semantics.
From an AI defense viewpoint, KB-level defenses share commonalities with robust data management, trustworthy knowledge graphs, and secure retrieval-augmented generation. Provenance tracking and access control mirror secure data pipelines in ML, while robust graph learning limits the influence of corrupted nodes or edges during message passing.
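The provenance-aware ingestion and rollback workflow described in the first category can be made concrete with a minimal, hypothetical KB wrapper (illustrative names and logic, not an implementation from the cited works): each update carries its source identity and verification status, untrusted or unverified updates are quarantined for audit instead of being applied, and all entries from a later-compromised contributor can be rolled back in one step.

```python
from dataclasses import dataclass, field

@dataclass
class Update:
    key: str
    value: str
    source: str      # contributor identity (provenance)
    verified: bool   # outcome of an external verification check

@dataclass
class ProvenanceKB:
    trusted_sources: set
    entries: dict = field(default_factory=dict)
    quarantine: list = field(default_factory=list)

    def ingest(self, u: Update):
        # Apply only verified updates from trusted contributors;
        # everything else is held for audit rather than served.
        if u.source in self.trusted_sources and u.verified:
            self.entries[u.key] = u
        else:
            self.quarantine.append(u)

    def rollback_source(self, source: str):
        # Bound the blast radius of a compromised contributor.
        self.entries = {k: u for k, u in self.entries.items() if u.source != source}

kb = ProvenanceKB(trusted_sources={"edge-gateway"})
kb.ingest(Update("scene:tower", "landmark", "edge-gateway", True))
kb.ingest(Update("scene:tower", "obstacle", "unknown-crawler", False))
# The conflicting unverified update is quarantined, not applied.
```

A production pipeline would add authenticated identities, temporal metadata, and persistent audit logs; the sketch only shows why per-update provenance makes quarantine and source-level rollback cheap operations.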
The key distinction in SemCom is that KB integrity and availability directly influence both encoding and decoding behavior. Small perturbations in knowledge can therefore induce large semantic shifts without triggering any signal-level anomaly.

Despite growing recognition of their importance, KB-level defenses remain relatively underdeveloped in SemCom. There is no standardized benchmark for evaluating the robustness or privacy of semantic knowledge bases under dynamic updates and partial trust assumptions. Many proposed defenses rely on strong provenance or trust signals that may be unavailable in open or cross-domain deployments. Moreover, restricting knowledge access to improve security can complicate verification, auditing, and synchronization, creating new trade-offs between robustness, privacy, and system usability.

A key takeaway is that semantic knowledge should be studied as a first-class security asset rather than a benign background resource. Its construction, synchronization, exposure, and integration into semantic models should be co-designed with explicit robustness and privacy objectives, and evaluated under adversarial conditions that reflect realistic SemCom deployments.

E. Network-Level Defenses

Unlike encoder-, channel-, or knowledge-level defenses, network-level defenses address semantic failures that emerge from interaction, propagation, and coordination among multiple terminals over time, rather than from any single compromised component. In networked SemCom systems, meaning is not only decoded locally but also propagated, aggregated, and acted upon across agents, making semantic security an inherently collective property.

From an AI defense perspective, network-level semantic defenses parallel robust multi-agent learning, distributed anomaly detection, and trust-aware decision-making under adversarial conditions.
Generative correction mechanisms resemble robust generative modeling under distribution shift, while trust evaluation and authentication mirror reputation systems in adversarial multi-agent environments. The key distinction in SemCom is that the protected object is not packet delivery or throughput, but the consistency and integrity of shared meaning as it propagates through the network. Existing network-level defenses for SemCom manifest in three recurring defense patterns.

1) Reliability-oriented semantic networking and cooperative robustness. The first category addresses semantic failures that emerge from multi-hop transmission and multi-agent propagation, where small local distortions can accumulate into global task failure. Generative and semantic-aware correction mechanisms can be deployed at intermediate nodes, relays, or edge servers to stabilize semantics before forwarding. Representative examples include InverseJSCC and GenerativeJSCC, which use generative models to denoise or reconstruct semantically plausible outputs from heavily corrupted DeepJSCC transmissions [135]. Semantic reconstruction has also been formulated as an inverse problem, combining invertible neural networks with diffusion models to recover high-quality semantics from degraded intermediate representations [136]. Beyond pure correction, intelligent semantic relaying allows intermediate nodes to partially decode, refine, or re-encode semantic information using shared or local knowledge, reducing semantic drift across hops [137], [138]. Collectively, these approaches act as network-level semantic filters that prevent localized corruption from cascading through the system.

2) Trust, identity, and semantic misbehavior detection. Even when links are reliable, adversaries may impersonate legitimate agents, inject malicious semantic content, or manipulate coordination protocols.
Network-level defenses therefore extend beyond packet-level authentication to reasoning about semantic consistency and task impact. Trust establishment and authentication can leverage semantic and environmental features that are difficult to forge at scale, such as environment-level semantics extracted from massive MIMO channels for physical-layer authentication [139]. Authentication mechanisms have also been co-designed with semantic objectives through metrics that explicitly balance authentication reliability and throughput [140]. More broadly, misbehavior detection in SemCom networks must identify agents whose semantic outputs systematically distort shared meaning or degrade task performance, even when syntactic validity is preserved.

3) Feedback-aware semantic defense coordination. A third category treats semantic defense as a coordinated, adaptive process across the network rather than a static mechanism at individual nodes. In this view, detectors, trust scores, and semantic performance metrics serve as feedback signals that inform routing, scheduling, model selection, or fallback strategies. Detection of semantic jamming or anomalous behavior at one node can trigger route changes, decoder switching, or semantic rate adaptation elsewhere in the network [149]. Although still in an early stage, such orchestration aligns closely with AI-based control and reinforcement learning frameworks for autonomous networks [141]. From an AI defense perspective, these mechanisms resemble robust multi-agent learning and trust-aware decision-making, with the key distinction that SemCom networks protect shared meaning and semantic coherence, rather than packet delivery or throughput alone.

Despite their importance, many existing schemes assume benign cooperation or random impairments and do not explicitly model adaptive adversaries that exploit network dynamics, feedback loops, or generative priors.
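The feedback-aware coordination pattern in 3) above can be caricatured as a simple policy that maps network feedback signals to defensive actions. This is a toy sketch under assumed names and thresholds (`trust_floor`, `mismatch_ceiling`, and the action labels are all hypothetical), not an implementation from the cited works.

```python
def coordinate_defenses(trust_scores, semantic_mismatch_rate,
                        trust_floor=0.4, mismatch_ceiling=0.2):
    """Map feedback signals to network-level defensive actions (toy policy).

    trust_scores: dict mapping agent id -> trust value in [0, 1]
    semantic_mismatch_rate: observed fraction of semantically inconsistent outputs
    """
    actions = []
    # Route around agents whose trust has fallen below the floor.
    for agent, score in trust_scores.items():
        if score < trust_floor:
            actions.append(("reroute_around", agent))
    # Elevated semantic mismatch triggers decoder switching and rate adaptation.
    if semantic_mismatch_rate > mismatch_ceiling:
        actions.append(("switch_decoder", "robust"))
        actions.append(("reduce_semantic_rate", None))
    return actions
```

In a real system the trust scores would come from misbehavior detectors and the mismatch rate from semantic consistency checks; the point of the sketch is only that detection signals at one node can drive routing and decoding changes elsewhere.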
Scalability and latency pose additional challenges: trust management and generative correction may introduce communication, computation, and coordination overhead that is incompatible with real-time semantic tasks. Moreover, trust is often defined at the agent level rather than at the level of semantic content or knowledge items, leaving open how to quantify, propagate, and act on trust in meaning itself. More broadly, these mechanisms aim for graceful degradation under partial compromise, where semantic utility degrades predictably rather than failing catastrophically.

A key takeaway is that semantic security is an emergent, system-level property. Ensuring robustness in networked SemCom requires defenses that jointly reason over agents, links, knowledge, and tasks, and that enable graceful degradation under partial compromise rather than catastrophic failure.

F. Cross-Domain Lessons and Integration Paths

The layered defenses discussed above demonstrate that securing SemCom requires coordinated protection across representation learning, physical transmission, shared knowledge, and networked interaction. At each layer, many proposed mechanisms draw inspiration, implicitly or explicitly, from robustness techniques developed in mature AI domains, particularly NLP and CV. To place these defenses in a broader context and to identify paths toward principled integration, it is instructive to examine how related challenges have been addressed in NLP and CV.

Both NLP and CV confront adversaries that manipulate learned representations to alter meaning without obvious syntactic corruption, a threat model that closely mirrors semantic attacks in SemCom. However, while robustness techniques in these domains are typically developed for standalone inference pipelines, SemCom embeds learned models within communication-constrained, distributed systems where robustness must coexist with rate, latency, energy, and coordination constraints.
This subsection distills transferable lessons from NLP and CV robustness, clarifies where direct adoption breaks down, and outlines integration paths for developing SemCom-native defenses that respect system-level constraints.

a) Transferable Ideas from NLP and CV Robustness: In NLP, consistency-based learning and semantic-preserving perturbations, such as paraphrasing or synonym substitution, have improved robustness against ambiguity and adversarial rewriting [150], [151]. Similarly, in CV, defenses have evolved from norm-bounded perturbation resistance to include semantic shifts and natural distributional corruptions. Notable techniques include representation smoothing, input transformations, and ensemble-based strategies [152], [153]. These advances have inspired emerging SemCom defenses, such as semantic consistency objectives [122], latent-space regularization [121], and generative purification [125], which aim to align encoded representations with underlying meaning under constrained conditions.

b) Adaptation Challenges Under SemCom Constraints: Despite these parallels, SemCom introduces domain-specific challenges that limit direct transfer. As highlighted in the "Distinct Constraints" of Section V-A, SemCom pipelines must meet strict rate, power, and latency budgets, operate over noisy and lossy physical channels, and maintain coherence with shared or external knowledge modules. Defenses must therefore be communication-aware and lightweight, often requiring co-design with encoder–decoder architectures or joint source–channel coding schemes. Standard robustness techniques from NLP or CV, such as computationally expensive adversarial training or always-on ensembles, cannot be directly applied without violating these deployment constraints.

Evaluation practices from NLP and CV also offer useful reference points.
Benchmark datasets like ImageNet-C [154] and Adversarial GLUE [155] have catalyzed standardized robustness evaluation. A SemCom analog should incorporate over-the-air channel traces, semantic misalignment detection, and knowledge desynchronization scenarios that reflect practical operation. These additions would encourage evaluation under constraints that match the semantic and physical realities of deployment.

Fig. 5. Bridging design and deployment through operating envelopes and runtime control in secure semantic communication. Robustness is enforced as a managed system property via a semantic control plane that integrates operating-envelope awareness, resource and priority telemetry, policy coordination, and cross-layer consistency signals from the AI-defense lens.

c) Integration Paths for SemCom-Native Robustness: To meaningfully integrate these lessons, robustness in SemCom should be treated as a system-level resource allocation problem.
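The resource-allocation view can be sketched as a greedy selection of defenses by marginal utility per unit cost under a fixed protection budget. This is an illustrative toy model, not a method from the surveyed literature; the option names, gains, and costs are hypothetical, and a deployed allocator would also account for task priorities and interactions between defenses.

```python
def allocate_protection_budget(options, budget):
    """Greedily pick defenses by marginal utility per unit cost (toy model).

    options: list of (name, robustness_gain, cost) tuples
    budget:  total protection budget available
    Returns the names of the chosen defenses.
    """
    # Rank defenses by gain per unit cost, highest first.
    ranked = sorted(options, key=lambda o: o[1] / o[2], reverse=True)
    chosen, spent = [], 0.0
    for name, gain, cost in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen
```

Under a tight budget such a policy naturally prefers lightweight mechanisms (anomaly detection, abstention) over heavy static defenses (full adversarial training), mirroring the allocation argument in the text.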
Rather than uniformly deploying defenses, protection budgets, such as redundancy, abstention, or adaptive filtering, should be allocated based on marginal utility under task priorities and system constraints. Lightweight mechanisms such as real-time anomaly detection, fallback control strategies, or contract-based semantic monitoring may offer better tradeoffs than heavy, static defenses.

Future SemCom pipelines will benefit from semantically aware, composable defenses that are compatible with constrained environments and adversarial uncertainty. These defenses should be designed with explicit assumptions, verified through contextual testing, and reported using deployment-aware robustness profiles rather than isolated metrics. Lessons from NLP and CV robustness will be instrumental in shaping these systems, but success will require adapting them to the unique architectural and operational demands of SemCom.

VI. BRIDGING DESIGN AND DEPLOYMENT FOR SECURE SEMANTIC COMMUNICATIONS

Despite advances in semantic encoding, adversarial robustness, and threat detection, many SemCom defenses remain demonstrated only in isolated models or simplified simulations rather than integrated into deployable systems. Bridging this gap requires system architectures that support runtime introspection, policy enforcement, and adaptive behavior under stringent rate, latency, energy, and compute constraints. This section highlights three deployment challenges: (i) managing security–utility tradeoffs within constrained operating envelopes, (ii) enforcing robustness without violating resource or timing budgets, and (iii) enabling verifiable behavior through lightweight, system-facing assurance signals.
We emphasize that enforcing robustness under resource and timing budgets is primarily realized through runtime policy enforcement, adaptive fallback, and envelope-aware monitoring, which recur across the certification, red-teaming, and testbed discussions below.

A. Security–Utility Tradeoffs and Operating Envelopes

Security mechanisms applied in SemCom often conflict with task utility, latency, and energy constraints. While these defenses improve robustness under attack or drift, they impose computational or communication overhead that can degrade system throughput or violate timing budgets. This creates a core tension: securing SemCom systems cannot be done in isolation from their performance envelopes.

To reason about these tradeoffs, we adopt the notion of operating envelopes: regions in the multidimensional space of semantic utility, latency, robustness, and system cost that represent acceptable operating points under bounded resources and threat models. This concept, widely used in real-time and networked systems [156], [157], has recently found traction in robust AI and adaptive communication design, where runtime decisions are governed by policies that trade off robustness and performance under explicit budgets [48], [158]. Within this framework, robustness mechanisms are viewed as policy choices constrained by explicit system budgets. For instance, increasing abstention thresholds may improve resilience to semantic drift, but at the cost of higher decision latency or degraded task completion rates.

In practice, most current SemCom pipelines are tuned offline using fixed thresholds and evaluated under average-case conditions. Runtime enforcement of semantic risk budgets is largely absent. Moreover, the lack of standard metrics for semantic degradation under resource constraints makes it difficult to compare or certify defenses across systems. In addition, the resource limitations of SemCom systems themselves must be taken into account.
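The operating-envelope notion above can be made concrete as an admissibility check over per-dimension bounds. A minimal sketch follows; the dimension names and bounds are hypothetical, and a realistic envelope would generally be a non-rectangular region learned or profiled from system measurements rather than a simple box.

```python
def within_envelope(point, envelope):
    """Check whether an operating point lies inside the admissible envelope.

    point:    dict mapping dimension name -> observed value
              (e.g. semantic utility, latency, robustness, energy cost)
    envelope: dict mapping dimension name -> (min, max) admissible bounds
    """
    return all(lo <= point[dim] <= hi for dim, (lo, hi) in envelope.items())
```

A runtime monitor would evaluate this predicate continuously and trigger policy changes (rate adaptation, fallback, abstention) when a violation is detected.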
Thus, to transition from design to deployment, future SemCom systems should support envelope-aware introspection: mechanisms that continuously monitor semantic fidelity, latency, energy usage, and channel state to dynamically adjust semantic security policies under explicit resource budgets. This requires coordination with lower-layer scheduling and system monitors to ensure defenses remain within operating envelopes.

B. Certification and Verifiable Robustness

Certification represents a promising bridge between principled robustness analysis and deployable SemCom systems. In the SemCom context, certification is not merely about proving worst-case guarantees for isolated models, but about enabling system-level assurances that support safe and predictable behavior under real-world uncertainty. This includes certifying task-level properties, such as intent preservation or safe control actions, under physically realizable perturbations and dynamic operating conditions.

Unlike classic AI settings, SemCom systems operate under stochastic fading, time-varying bandwidth, feedback-driven protocols, and adaptive control mechanisms. As a result, robustness guarantees are inherently multi-layered and context-dependent. While techniques such as randomized smoothing, Lipschitz bounding, and convex relaxations provide useful foundations in static learning pipelines [51], [159], [160], their assumptions rarely hold end-to-end in wireless systems. Over-the-air attackers are constrained by physical limits such as transmit power, spectral locality, and temporal coherence rather than abstract ℓp norms, and semantic utility itself may depend on protocol state, decoding slack, or application feedback. Adaptive behaviors such as HARQ, link adaptation, and semantic fallback further complicate static certification.

To address these challenges, certification in SemCom should be treated as a runtime system artifact rather than an offline proof obligation.
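As a toy illustration of certification as a runtime artifact, consider a lookup that maps observed telemetry to whether a certificate is valid and, if not, which fallback action to take. All thresholds, table entries, and action names below are hypothetical assumptions for the sketch, not values from any cited system.

```python
def certificate_lookup(snr_db, load, table):
    """Map observed telemetry to (certificate_valid, fallback_action).

    table: list of (snr_min, load_max, valid, fallback) rows, checked in order;
           the first row whose conditions hold determines the outcome.
    """
    for snr_min, load_max, valid, fallback in table:
        if snr_db >= snr_min and load <= load_max:
            return valid, fallback
    # No row applies: the safest default is to abstain.
    return False, "abstain"

# Hypothetical table: certificates hold only at high SNR and moderate load;
# at medium SNR the certificate lapses but a cached inference path remains usable.
CERT_TABLE = [
    (10.0, 0.7, True,  None),
    (5.0,  0.9, False, "switch_to_cached_inference"),
]
```

Attaching the returned tuple to each semantic output as lightweight metadata lets downstream components act on certificate validity without re-running any verification procedure.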
Certificates, such as confidence radii, abstention indicators, or semantic bounds, should be attached to semantic outputs as lightweight metadata. Such metadata can be derived from calibrated uncertainty estimates, decoder disagreement, or bounded sensitivity of semantically stable representations. Additionally, deployable certification requires threat models aligned with physical-layer realities. Certifiable perturbation sets should be defined in terms of SNR ranges, error vector magnitude (EVM) bounds, or semantic edit distances that reflect realistic wireless degradation. Certification efforts should prioritize semantically stable representations, such as latent embeddings or task-relevant decision statistics, where guarantees are more meaningful and resilient to transmission variability.

Rather than producing static worst-case bounds, certified robustness should be embedded within the operating envelope of the system. Operationally, these regions can be implemented as lookup tables or lightweight predictors that map observed channel and workload telemetry to whether a certificate is valid and what fallback action is required. Future research directions include composing certificates across encoder–channel–decoder pipelines, caching and reusing certified inference regions, and integrating certification signals into adaptive abstention or fallback policies. Ultimately, certification in SemCom must evolve into a dynamic, observable, and composable mechanism that supports system-level decision making under real-world conditions.

Fig. 6. BLEU (1-gram) scores for the proposed secure SemCom framework with varying thresholds (τ = 0.5, 0.6, 0.7) vs. DeepSC [20].
The parameter τ is the safeguarded threshold guaranteed in the proposed framework. DeepSC is a traditional SemCom framework with no consideration of security.

C. Evaluation Metrics and Benchmarking Gaps

The evaluation of security in SemCom systems requires a fundamental rethinking of what constitutes robustness, utility, and performance. Traditional metrics such as bit error rate (BER), SNR, or downstream task accuracy offer limited visibility into the security posture of SemCom pipelines, especially under adversarial conditions. Unlike classical norm-bounded robustness metrics in machine learning [161], adversarial budgets in SemCom reflect semantic corruption, protocol behavior, and physical-layer constraints. This calls for a shift toward semantic robustness, an evaluation axis that measures degradation in meaning preservation, task utility, or reconstruction fidelity under worst-case semantic perturbations or cross-layer interference. For instance, in visual SemCom, degradation can be quantified via mean intersection over union (mIoU) or classification accuracy under semantic perturbation, while in language tasks, BLEU or semantic similarity scores may better reflect task failure. Fig. 6 reports BLEU (1-gram) under varying channel conditions as an example of task-level reliability evaluation in secure SemCom [20].

Semantic robustness should also account for context dependence and knowledge misalignment [6], where identical transmitted representations may yield divergent interpretations due to corrupted priors or desynchronized knowledge bases. For example, attacks may not directly perturb the transmitted signal but instead induce semantic failure by poisoning the receiver's grounding assumptions. Benchmarks should reflect this fragility by modeling both data-space divergence and knowledge-space divergence within the pipeline.
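One minimal way to report semantic robustness in this spirit is as the retention of a task metric (BLEU, mIoU, accuracy) under attack relative to benign conditions. The function below is an illustrative convention, not a standardized metric from the literature; which underlying task metric to use remains modality-dependent, as discussed above.

```python
def semantic_robustness(metric_benign, metric_adversarial):
    """Relative retention of a task metric (e.g. BLEU or mIoU) under attack.

    Returns a value in [0, 1]: 1.0 means no degradation,
    0.0 means total semantic failure (or an uninformative benign baseline).
    """
    if metric_benign <= 0:
        return 0.0
    return max(0.0, metric_adversarial / metric_benign)
```

Reporting this ratio alongside the raw benign and adversarial scores supports the dual-reporting practice advocated next, since the ratio alone can hide a weak benign baseline.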
All evaluation pipelines should support dual reporting under benign and adversarial conditions to expose hidden robustness–utility tradeoffs that would otherwise remain latent under average-case testing.

Beyond robustness, security evaluations should quantify semantic leakage, which reflects the extent to which transmitted representations inadvertently reveal private or structured information about the source input [162]–[164]. Leakage can be measured via mutual information, inversion success rates, or recoverability of sensitive attributes from latent codes. For systems employing modular or cross-task semantic encoding, leakage may also span unrelated downstream applications, requiring broader auditing.

Fig. 7. Time overheads and communication overheads of reliable training method for secure SemCom [19].

In deployment scenarios, operational tradeoffs connect robustness with system viability. These include latency–robustness curves, semantic fidelity versus throughput, and energy per bit of meaning [165]–[170]. Semantic envelope violation rates, i.e., instances where performance falls below acceptable thresholds under attack, are especially useful for quantifying resilience limits in constrained environments such as edge devices or time-critical inference pipelines. Similarly, latent trust drift, where decoder predictions diverge gradually due to upstream semantic desynchronization or poisoned priors, can be monitored to trigger adaptive defenses or trust recalibration protocols. For example, Fig. 7 examines the time overhead and communication overhead of training reliable SemCom coding models [19].

Despite these needs, current SemCom evaluations remain fragmented and ad hoc [171]. Most reuse task-oriented metrics from AI or wireless domains without addressing SemCom-specific risks.
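Among the SemCom-specific metrics above, the inversion-success-rate view of semantic leakage admits a very direct estimator: run an attribute-inference attacker over intercepted latent codes and measure how often it recovers the true private attribute. The sketch below is a toy harness; the codes, labels, and attacker are synthetic placeholders, and a meaningful audit would compare against a majority-class baseline.

```python
def inversion_success_rate(latent_codes, true_attrs, attacker):
    """Fraction of private attributes an attacker recovers from latent codes.

    attacker: callable mapping one latent code to a guessed attribute.
    Higher values indicate more semantic leakage.
    """
    hits = sum(attacker(z) == a for z, a in zip(latent_codes, true_attrs))
    return hits / len(true_attrs)
```

In practice the attacker would be a trained inversion model; leakage is then the gap between its success rate and what a baseline guesser achieves without access to the codes.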
The field urgently requires reproducible, open-source benchmark suites that simulate realistic attacker models, environmental variations, and cross-layer semantics. Only with such foundations can future work offer rigorous guarantees and practical resilience at scale. To capture SemCom-specific risks under realistic stress, static metrics should be complemented with adaptive adversarial evaluation, as elaborated in the next subsection.

D. Threat Simulation and Red-Teaming

While static benchmarks and norm-bounded attacks provide valuable baselines, they fail to capture the adaptive, layered nature of real-world adversaries. Red-teaming fills this gap by simulating intelligent, feedback-driven agents that dynamically probe, manipulate, and stress-test SemCom pipelines under realistic conditions [172]–[174].

Red-teaming agents adapt their strategies based on observed system responses, such as retransmission patterns, decoding confidence, or semantic drift. For instance, a semantic-aware jammer may evolve its interference by monitoring task failure rates, while a gray-box adversary may exploit repeated queries to reconstruct latent embeddings or trigger meaning misalignment. These agents can operate across layers, such as coordinating encoder poisoning, channel perturbations, and knowledge-base desynchronization, to induce emergent failures that span semantic, protocol, and physical layers and that remain latent in isolated testing. Effective implementations may rely on adaptive optimization and co-training to co-evolve with system defenses [173], [174].

Threat roles and surfaces vary in observability and access: white-box agents emulate insider threats with full model access; gray-box attackers observe intermediate features or metadata; black-box adversaries interact only with system inputs and outputs [67].
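The feedback-driven jammer described above can be caricatured in a few lines: it escalates interference while the victim's task keeps succeeding and backs off once its target failure rate is reached, which also keeps it harder to detect. This is a deliberately simplistic sketch with hypothetical parameters, not a model of any agent from the cited works.

```python
class AdaptiveJammer:
    """Toy red-team agent that tunes jamming power from observed task outcomes."""
    def __init__(self, power=0.1, step=0.1, target_failure=0.5, max_power=1.0):
        self.power = power
        self.step = step
        self.target_failure = target_failure
        self.max_power = max_power

    def observe(self, task_failure_rate):
        # Feedback-driven update: escalate while the task still mostly succeeds,
        # back off once the target failure rate is reached (to stay stealthy).
        if task_failure_rate < self.target_failure:
            self.power = min(self.max_power, self.power + self.step)
        else:
            self.power = max(0.0, self.power - self.step)
        return self.power
```

Even this trivial loop illustrates why static benchmarks underestimate risk: the attack strength is a function of the defense's observable behavior, so any fixed-perturbation evaluation misses the equilibrium the two sides reach.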
Attack surfaces span semantic-aware jamming, backdoor insertion, knowledge poisoning, privacy leakage, and coordinated misinformation in multi-agent settings. A modular red-teaming framework should support diverse attacker profiles and allow composition of complex, multi-phase campaigns that reflect real-world threat evolution.

Integrated evaluation embeds red-teaming agents into training, adaptation, or runtime inference to reveal resilience under dynamic pressure. Defenses such as decoder switching, abstention, or knowledge re-synchronization can be stress-tested using runtime triggers like semantic mismatch, latent trust drift, or envelope violations [18]. Beyond degrading performance, red-teaming enables semantic risk auditing, a diagnostic process that identifies brittle input modalities, fragile concept classes, and cascading failure paths. These insights support the development of semantic risk budgets that quantify acceptable degradation bounds under adversarial stress, particularly for safety-critical applications such as autonomous driving, remote surgery, or multi-agent coordination.

Ultimately, red-teaming is not a replacement for theoretical analysis or benchmark testing but a necessary complement. By exposing emergent, cross-layer vulnerabilities that elude static analysis, red-teaming provides a pathway toward more trustworthy, resilient SemCom systems. Future work should prioritize open-source red-teaming toolkits tailored to SemCom, incorporating domain-specific artifacts such as latent codes, shared priors, and task-conditioned decoders. However, even the most sophisticated simulations should ultimately be validated under physical constraints, motivating real-world testbed deployments.

E. Real-World Testbeds and Deployment Studies

Simulation and red-teaming offer powerful abstractions for evaluating SemCom security.
However, validating resilience under real-world constraints requires deployment-centric testbeds that expose the unpredictability of physical hardware, asynchronous control loops, and imperfect communication channels.

Semantic observability is central to effective deployment studies. Unlike conventional wireless testbeds that focus on throughput, delay, or error rates, SemCom evaluation requires introspection into latent code distributions, semantic drift, decoder disagreement, and knowledge desynchronization. Emerging 6G/O-RAN platforms [175], [176], augmented with software-defined radios (SDRs), xApps, and edge AI accelerators, provide partial support for this layered introspection. These platforms should be extended to monitor high-level behaviors such as task-level degradation under semantic mismatch and adversarial drift.

Fig. 8. Semantic communication on SDR-based LoRa communication platform [177].

For example, Fig. 8 shows a simple SemCom platform built on an SDR-based LoRa system, which includes two computers and USRPs to perform practical semantic coding and data transmission over a wireless link [177].

Testbeds expose practical system constraints that often go unnoticed in simulation [19], [177]. Knowledge re-synchronization may fail under lossy control channels or when agents are asynchronously updated [45], [132]. Redundancy-based defenses, such as ensemble decoding or semantic voting, may conflict with stringent real-time latency constraints typical of edge environments [135], [136]. Practical issues such as hardware timing violations, adaptation delays, and resource contention often emerge as performance bottlenecks during deployment and must be explicitly profiled and reported as part of security–utility operating envelopes [137].
Beyond synthetic benchmarks, testbeds allow cross-domain robustness evaluation across realistic and heterogeneous settings, ranging from remote robotic control and telehealth to rural IoT deployments [178]. These domains introduce asymmetric vulnerabilities: low-power encoders at the edge may rely on powerful, semantically informed decoders in the cloud, resulting in one-sided failure modes when semantic codes are distorted or desynchronized. Edge–cloud heterogeneity, link asymmetry, and mobility further stress semantic security under real-world operating envelopes. To consolidate these deployment-centric evaluation needs, Table V summarizes core security objectives, evaluation methods, and representative failure modes that characterize deployment-aware SemCom security studies.

Controlled fault injection provides additional levers for testing adversarial resilience. SDR-based waveform injection can simulate semantic jamming at the physical layer. Compromised agents can be used to poison models or propagate misinformation through shared knowledge graphs. These setups further support fallback evaluations, including abstention, human-in-the-loop verification, and automated triggers for semantic anomalies. To enable reproducible and modular evaluation, future efforts should prioritize open-source testbeds that integrate semantic communication pipelines, red-teaming agents, and runtime observability. Platforms such as Powder-RENEW [179], B5G Playground [180], OpenAirInterface [181], ARA [182], and RISE-6G [183] provide promising foundations for SemCom-aware security experimentation and system-level validation.

F. Lessons Learned

In summary, bridging design and deployment for secure semantic communication requires a system-aware approach that goes beyond isolated model hardening to address interdependencies among semantic fidelity, robustness, latency, and resource constraints.
Practical deployment motivates security–utility operating envelopes for managing tradeoffs under real-world conditions. Table V summarizes the evaluation dimensions in this section, highlighting metrics and methodologies for assessing SemCom security under adversarial and deployment-realistic conditions.

This section reveals that securing SemCom is fundamentally a co-design challenge, requiring tight integration of AI defenses with communication protocols, knowledge management, and network coordination. Success depends not only on advancing robustness techniques but also on developing adaptive, lightweight assurance mechanisms that can operate within strict operational envelopes, supported by continuous adversarial evaluation and deployment-focused validation. In this way, SemCom systems can achieve the resilience and trustworthiness required for real-world, safety-critical applications.

VII. SECURE SEMANTIC COMMUNICATIONS IN PRACTICAL APPLICATIONS

SemCom has attracted growing attention in several application domains where distributed intelligence, perception, and coordinated decision-making play a central role. While SemCom offers considerable advantages in reducing communication load and aligning transmitted information with downstream tasks, its deployment also exposes security concerns that arise from the manipulation, degradation, or misinterpretation of meaning rather than raw syntactic data. This section surveys four typical applications and highlights the corresponding security challenges and threat surfaces of SemCom for each application.

A. Cooperative Perception

Cooperative perception allows multiple terminals (e.g., autonomous vehicles, robotic platforms, and sensor nodes) to share processed interpretations of their local observations to achieve a more complete, accurate, and robust understanding of the environment.
Traditional cooperative perception methods often rely on exchanging high-level features or compressed raw data [184]–[186]. However, these traditional methods may suffer from information loss, limited adaptability to dynamic environments, and reduced perception accuracy under limited communication and computing resources. Considering the promises of SemCom, several works have started to exploit SemCom to enhance cooperative perception by aligning transmitted information with downstream fusion tasks. For example, the authors in [187] proposed an importance-aware semantic encoder that prioritizes perceptually salient regions for cooperative automotive perception, reducing communication load while preserving safety-critical information. More recently, the authors in [188] designed a cross-modal SemCom framework to support heterogeneous perception modalities (e.g., camera and LiDAR) during collaborative perception. In general, these works indicate the significant potential of SemCom in cooperative perception.

TABLE V
DEPLOYMENT-AWARE EVALUATION DIMENSIONS FOR SEMANTIC COMMUNICATION SECURITY

Evaluation Goal | Security Objective | Evaluation Method | Representative Threats and Failure Modes
Semantic Robustness | Meaning preservation and task utility under adversarial semantic perturbations | Semantic fidelity metrics, adversarial perturbation tests, semantic envelope violation rates | Knowledge misalignment, decoder drift, semantic corruption
Semantic Leakage | Privacy exposure from latent representations or shared knowledge | Mutual information analysis, inversion success rate, sensitive attribute recovery | Latent inversion, encoder leakage, cross-task information leakage
Operational Resilience | Tradeoffs between robustness, latency, and energy under adversarial and runtime constraints | Latency–robustness curves, energy per bit of meaning, graceful degradation tests | Time-critical failures, resource exhaustion, edge deployment stress
Threat Adaptability | System behavior under adaptive, multi-phase adversaries | Adaptive red-teaming agents, multi-stage and feedback-driven stress testing | Gray-box jammers, encoder poisoning, semantic desynchronization
Deployment Robustness | Resilience under real-world hardware, protocol, and update constraints | SDR-based testbeds, fault injection, runtime observability and monitoring | Semantic-aware jamming, asynchronous updates, decoder disagreement

However, incorporating SemCom into multi-agent cooperative perception introduces new security risks, as adversarial agents may inject false detections, misleading trajectories, or fabricated hazards, and semantic desynchronization can lead to inconsistent world models. Conventional channel security mechanisms, such as authentication and encryption, do not address semantic integrity violations, where packets are syntactically correct but adversarial at the meaning level. This gap motivates secure SemCom mechanisms that ensure shared semantics remain trustworthy and resilient under adversarial conditions.

Secure cooperative perception can be realized through the four layers of semantic defense outlined in Section V. Encoder- and decoder-level defenses improve robustness against semantic poisoning through constrained and adversarially robust representations. Channel-level mechanisms, including semantic-aware coding and unequal protection, preserve safety-critical semantics under interference and jamming. Knowledge-base defenses enforce semantic provenance and consistency across shared world models, mitigating desynchronization. Network-level defenses further support trust, misbehavior detection, and semantic consensus to ensure reliable collective perception in adversarial environments.

B.
Remote Robotic Systems

Remote robotic systems, also referred to as telerobotics or teleoperation, enable human operators to control robotic platforms at a distance by coupling human cognition with machine embodiment in remote or hazardous environments [189], [190]. Leveraging SemCom for remote robotics has recently emerged as a means to reduce communication overhead and improve task alignment and completion by transmitting task-level semantics instead of raw sensor streams. In [191], the authors introduced SemCom strategies for remote robotic manipulation, showing that task-aware feature selection reduces bandwidth consumption without degrading control performance. Knowledge-based SemCom frameworks for robotic edge intelligence have further been proposed to match transmitted task semantics with stored knowledge graphs for low-latency robotic exploration and control assistance [192].

While SemCom improves communication efficiency in remote robotic systems, it also introduces semantic security risks. Adversaries may manipulate affordance descriptors, inject false constraints, or corrupt task objectives, leading to unsafe behavior or task failure. Semantic ambiguity between operator intent and robot execution can further be exploited at the meaning level, creating mismatches between perceived and actual system states that are not addressed by traditional control-channel security mechanisms.

Secure SemCom for remote robotics can be realized through the four-layer defense framework in Section V. Encoder- and decoder-level defenses enforce robust and constrained semantic representations, channel-level mechanisms protect safety-critical semantics under interference, knowledge-base defenses preserve consistency between task models and environment semantics, and network-level mechanisms enable trust, misbehavior detection, and semantic consensus among distributed robots.
Together, these layers extend security from channel integrity to semantic integrity, enabling safe and reliable remote robotic operation.

C. Agentic AI Systems

Agentic AI systems refer to autonomous or semi-autonomous agents that perceive environments, plan actions, and interact with digital or physical systems to achieve specific goals. These systems integrate perception, decision-making, and actuation loops to operate with minimal human intervention [193], [194]. Recent research has explored the use of SemCom to support agentic AI coordination by transmitting task-oriented semantics instead of raw state observations or control messages. For example, in [195], a SemCom framework for multi-agent coordination was proposed to compress and align exchanged semantics for cooperative decision-making. Knowledge-enhanced SemCom has also been leveraged to support semantic alignment among autonomous agents via shared or partially shared knowledge bases, enabling more robust inference and coordination [192]. Moreover, cognitive and goal-oriented SemCom frameworks have demonstrated improved communication efficiency in distributed planning and navigation tasks, where semantics are ranked according to their value for downstream decision modules [196].

However, SemCom-enabled agentic AI systems introduce new semantic security and trust challenges [197]. Malicious agents may manipulate goals, beliefs, or task outcomes to influence collective planning, while semantic misalignment can lead to inconsistent task interpretations. Conventional cryptographic and access-control mechanisms prevent unauthorized participation but do not guarantee semantic correctness or consistency across distributed decision-making processes, making semantic integrity a critical concern in agentic AI systems.

Secure SemCom for agentic AI can be realized through the four-layer defense framework in Section V.
Encoder- and decoder-level defenses enforce robust and verifiable semantic representations, channel-level mechanisms protect critical task semantics under interference, knowledge-base defenses preserve consistency of shared world models through provenance and synchronization, and network-level mechanisms enable misbehavior detection and semantic consensus. Together, these layers extend security from communication integrity to semantic integrity, enabling trustworthy coordination among distributed agentic AI systems.

D. Smart Manufacturing

With the development of Industry 4.0/5.0, smart manufacturing envisions highly automated, interconnected, and data-driven production systems that integrate sensing, computation, and control across cyber-physical infrastructures, factory equipment, and supervisory systems [198], [199]. Recently, SemCom has been exploited to enhance smart manufacturing by transmitting task-oriented or context-aware semantics rather than raw sensor streams. For example, the work in [200] discusses how semantic communications enable intelligent, goal-centric machine interactions in smart factories, improving operational efficiency by transmitting only the semantic intent of monitoring and control information. Besides, the authors in [201] develop a SemCom framework with continuous federated reinforcement learning capabilities for smart factories and industrial IoT scenarios. Moreover, digital twin-driven semantic communication has been investigated for synchronizing simulation models with factory-floor equipment, exploiting semantic compression to reduce update overhead between physical and virtual entities [202], [203].

Integrating SemCom into smart manufacturing introduces semantic-level vulnerabilities, as adversarial nodes may manipulate fault semantics, scheduling intent, or digital-twin updates to distort factory-state representations.
For example, in a robotic assembly line where sensors transmit semantic fault descriptors rather than raw data, an adversary can subtly alter these semantics to misclassify a critical fault as benign, delaying maintenance and causing cumulative equipment damage. Unlike traditional industrial cybersecurity, which focuses on protocol-level attacks or unauthorized actuation, semantic attacks target meaning and task context, potentially causing performance degradation or safety hazards without violating syntactic integrity.

Secure SemCom for smart manufacturing can be achieved through the four-layer defense framework in Section V. Encoder- and decoder-level defenses enforce physically plausible semantic representations, channel-level mechanisms protect critical semantics under interference, knowledge-base defenses ensure secure digital-twin synchronization and semantic provenance, and network-level mechanisms enable trust-aware semantic fusion and misbehavior detection. Together, these layers protect semantic integrity in industrial cyber-physical systems, enabling safe and reliable manufacturing operations.

E. Lessons Learned

Across domains, SemCom promises to reduce communication overhead, align transmitted information with tasks, and support distributed perception–decision–action loops. Meanwhile, SemCom also creates new threats where adversaries manipulate meaning without violating syntactic integrity. The layered defenses identified in Section V are essential to ensure semantic integrity, provenance, and trust. The tradeoff between security and efficiency can vary across domain requirements, emphasizing that secure SemCom is context-dependent and requires joint optimization across robustness, semantic fidelity, and real-time constraints.

VIII.
OPEN RESEARCH DIRECTIONS

While recent advances in SemCom and robust AI have established foundational principles, translating these developments into secure and deployable systems for real-world wireless environments remains a significant challenge. This section outlines key research directions needed to move from proof-of-concept demonstrations to trustworthy and integrated SemCom deployments.

a) Human-Centered Evaluation and Semantic Trust Metrics: Traditional metrics such as bit error rate (BER), classification accuracy, or BLEU scores fail to capture the semantically grounded objectives of secure communication, including meaning preservation, intent alignment, and robustness to adversarial or contextual shifts. New evaluation metrics are needed that better reflect the goals of SemCom systems, namely preserving semantic correctness, enabling downstream task success, and adapting gracefully to uncertainty and context changes.

Effective metrics should be task-aware, model-agnostic, and interpretable. For example, semantic distortion may be quantified through changes in control accuracy, degradation in policy performance, or inconsistencies at the concept level. Besides, human-in-the-loop evaluation should be incorporated [204], [205]. In safety-critical domains such as autonomous driving, healthcare, and mission planning, systems must expose meaningful indicators, including confidence scores, semantic anomaly signals, and fallback prompts, that allow human operators to identify and mitigate semantic degradation or ambiguity during runtime. Future benchmarks should further include human-centered robustness measures, such as perceived utility loss, semantic disagreement rates, or erosion of user trust [204], [206], [207]. These measures, collected through user studies or simulated evaluation agents, can complement task-level metrics and reveal failure modes that are not visible through standard performance curves.
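As one concrete instance of such task-aware measurement, semantic distortion can be scored as the cosine distance between the transmitted and reconstructed semantic embeddings, paired with the relative drop in downstream task accuracy. The sketch below illustrates this under stated assumptions: the embedding vectors and accuracy figures are illustrative placeholders, not values from any cited system.

```python
import numpy as np

def semantic_distortion(clean_emb, received_emb):
    """Cosine distance between clean and received semantic embeddings:
    0 when meaning is preserved, growing as semantics drift."""
    clean = np.asarray(clean_emb, dtype=float)
    recv = np.asarray(received_emb, dtype=float)
    cos = clean @ recv / (np.linalg.norm(clean) * np.linalg.norm(recv))
    return 1.0 - cos

def task_utility_drop(clean_acc, perturbed_acc):
    """Relative loss in downstream task accuracy under perturbation."""
    return (clean_acc - perturbed_acc) / clean_acc

# A bit-exact delivery has (near-)zero semantic distortion, while an
# adversarial shift in embedding space does not, even though every
# symbol arrived intact.
e = np.array([0.2, -1.3, 0.7, 0.1])
print(semantic_distortion(e, e))
print(semantic_distortion(e, e + np.array([0.5, 0.5, -0.5, 0.5])))
print(task_utility_drop(0.92, 0.61))  # hypothetical clean vs. attacked accuracy
```

Unlike BER, both quantities are computed at the meaning and task level, so they remain informative in exactly the failure modes this section targets: syntactically correct but semantically corrupted delivery.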
Recent work on semantic intent modeling and shared control interfaces suggests promising pathways for aligning semantic representations with human goals and expectations [208], [209]. Finally, standardized evaluation suites and red-teaming benchmarks discussed in Section VI should explicitly support real-time alerts, abstention mechanisms, and interaction modalities between AI agents and human supervisors. Incorporating human feedback into evaluation protocols is essential for developing SemCom systems that remain robust under adversarial conditions and operational uncertainty.

b) Composable System-Wide Assurance: Robustness guarantees derived in isolation often fail when systems are integrated [210]. Achieving system-wide assurance requires frameworks that can compose guarantees across semantic encoders, channel coders, semantic decoders, and application logic, even under runtime adaptation, partial failure, or adversarial interference. Future research should develop composable assurance frameworks that (i) formalize how per-layer certificates can be composed and verified across protocol stacks, (ii) support runtime reconfiguration in response to envelope violations or degraded trust signals, and (iii) expose control hooks that enable graceful degradation and fallback behaviors. Such frameworks may leverage modular runtime governors, programmable semantics-aware control planes, or hybrid verification techniques to enforce multi-stage robustness guarantees.

c) Secure Semantic Knowledge Management: SemCom pipelines increasingly rely on shared semantic assets, including pretrained models, task embeddings, and knowledge graphs. These assets are vulnerable to poisoning, exfiltration, and unauthorized adaptation, which can silently degrade system performance or introduce subtle backdoors [211], [212].
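One lightweight building block against silent tampering with such shared assets is a keyed integrity tag that every agent checks before ingesting an update. The sketch below is a minimal illustration using an HMAC over a canonical JSON serialization; the asset fields and the out-of-band key handling are hypothetical, and a deployed design would add key management, versioning, and revocation.

```python
import hashlib
import hmac
import json

def tag_asset(asset: dict, key: bytes) -> str:
    """Compute a keyed integrity tag over a canonically serialized
    semantic asset (e.g., a knowledge-graph entry or task embedding)."""
    payload = json.dumps(asset, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_asset(asset: dict, tag: str, key: bytes) -> bool:
    """Accept the asset only if its content still matches the tag."""
    return hmac.compare_digest(tag_asset(asset, key), tag)

key = b"provenance-key-shared-out-of-band"  # illustrative key handling
entry = {"entity": "pump_12", "state": "fault", "severity": "critical"}
tag = tag_asset(entry, key)

assert verify_asset(entry, tag, key)
# A poisoned update that silently downgrades the fault fails verification.
assert not verify_asset(dict(entry, severity="benign"), tag, key)
```

The point of the canonical (sorted-keys) serialization is that semantically identical assets always produce the same tag, so verification failures indicate content changes rather than formatting noise.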
Securing semantic assets requires protocols that ensure provenance, integrity, access control, and confidentiality, while maintaining low-latency inference and lightweight memory footprints. Promising research directions include cryptographic tagging of semantic units, differential privacy mechanisms for knowledge sharing [213], and federated update schemes with semantic-aware version control [41]. In particular, the rise of knowledge-centric AI pipelines demands renewed attention to knowledge hygiene and integrity in SemCom, where shared models can become a vector for semantic compromise [214]. Maintaining semantic knowledge hygiene is critical for reliable inference and trustworthy decision-making in distributed SemCom settings.

d) Interoperability with Cryptographic Security: Semantic security must coexist with traditional cryptographic guarantees, including confidentiality, authentication, and integrity [215]–[217]. However, combining semantic processing with cryptographic primitives introduces nontrivial interactions. For instance, encryption may obscure semantic content from downstream inference modules, while semantic inspection may conflict with confidentiality requirements. Addressing this challenge calls for joint designs that support encrypted semantic processing, secure feature extraction, and zero-knowledge verification of semantic properties. Research on semantically aware encryption schemes, homomorphic operations over semantic embeddings [218], and cryptographic protocols for semantic provenance and policy enforcement can enable principled integration of security and semantics. Integrating privacy-preserving mechanisms, such as differential privacy [213] or secure multiparty computation [219], could support semantic robustness in cross-device SemCom scenarios, including vehicular edge networks or federated smart sensors, without compromising trust or confidentiality.
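To make the differential-privacy direction concrete, the sketch below applies the textbook Gaussian mechanism to a semantic embedding before it is shared: the vector is clipped to bound its L2 sensitivity, then perturbed with calibrated noise. All parameter values are illustrative, the sigma calibration assumes the standard small-epsilon analysis, and nothing here reproduces a scheme from the cited works.

```python
import numpy as np

def dp_release_embedding(emb, clip_norm=1.0, epsilon=0.5, delta=1e-5, rng=None):
    """Release a semantic embedding under the Gaussian mechanism:
    clip to bound L2 sensitivity, then add calibrated noise."""
    rng = rng or np.random.default_rng()
    emb = np.asarray(emb, dtype=float)
    norm = np.linalg.norm(emb)
    if norm > clip_norm:
        emb = emb * (clip_norm / norm)  # bound any one contributor's influence
    # Classical calibration for (epsilon, delta)-DP with epsilon < 1.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return emb + rng.normal(0.0, sigma, size=emb.shape)

emb = np.array([0.8, -0.3, 1.9, 0.2])       # hypothetical semantic embedding
noisy = dp_release_embedding(emb, rng=np.random.default_rng(0))
```

Clipping before noising is what makes the privacy budget meaningful: without a bounded norm, a single embedding could dominate the release, and no finite sigma would mask it.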
e) Trustworthiness Dimensions in Semantic Communication: Beyond technical security mechanisms, SemCom systems must align with broader AI assurance principles [13], such as fairness, explainability, and accountability, to support safe and ethical deployment. While preceding sections address robustness, abstention, and cryptographic protection, these capabilities must be situated within frameworks that promote transparent and human-aligned communication [220].

For example, semantic encoders and inference modules should be evaluated for potential bias amplification, particularly in domains like healthcare or autonomous systems [221], [222]. Explainability remains underdeveloped: when semantic miscommunication occurs, systems should expose interpretable reasoning traces or semantic provenance to aid diagnosis and recovery [208], [223]. Accountability is also essential: SemCom pipelines should support traceable decision chains across multi-agent, multi-model systems to assign responsibility when semantic failures arise [224]. Future research should embed these trust dimensions into both system design and evaluation, for example through fairness-aware encoder/decoder training, interpretable semantic reconstruction, contract-based traceability, and human-verifiable semantic intents. By explicitly aligning semantic security with trustworthiness goals, SemCom can evolve into an ethically grounded infrastructure for AI-native communication.

IX. CONCLUSION

SemCom transforms wireless systems by shifting the focus from symbol reproduction to preserving task-relevant meaning, introducing AI-induced dependencies that render conventional communication security models insufficient.
By embedding learned models, shared knowledge, and inference into the communication pipeline, SemCom exposes new semantic-level attack surfaces in which failures may arise even when lower-layer reliability and cryptographic protections remain intact, making semantic integrity a system-level security challenge. This survey presented a system-level, AI-defense-oriented synthesis of security in SemCom, organized around an AI-centric threat model and a pipeline-spanning taxonomy of countermeasures. Our analysis reveals fundamental gaps in robustness transfer across layers and highlights shared semantic knowledge as a first-class vulnerability that demands explicit protection and governance rather than isolated model hardening.

Looking ahead, securing SemCom requires principled system design that integrates human-centered evaluation, composable assurance, and secure semantic knowledge management. Future SemCom systems should expose verifiable semantic interfaces and adopt evaluation practices that capture task performance, user trust, and robustness under adversarial conditions, within explicit security–utility operating envelopes. Interoperability with cryptographic mechanisms and alignment with AI assurance principles, such as fairness, explainability, and accountability, are essential. By unifying these elements, this survey aims to guide the development of robust, trustworthy, and deployable SemCom systems for real-world and safety-critical applications.

REFERENCES

[1] W. Weaver, Recent Contributions to the Mathematical Theory of Communication. University of Illinois Press, 1949.
[2] H. Xie, Z. Qin, G. Y. Li, and B.-H. Juang, “Deep learning enabled semantic communication systems,” IEEE Trans. Signal Process., vol. 69, pp. 2663–2675, 2021.
[3] E. Bourtsoulatze, D. B. Kurka, and D. Gündüz, “Deep joint source-channel coding for wireless image transmission,” IEEE Trans. Cogn. Commun. Netw., vol. 5, no.
3, pp. 567–579, 2019.
[4] W. Yang, H. Du, Z. Q. Liew, W. Y. B. Lim, Z. Xiong, D. Niyato, X. Chi, X. Shen, and C. Miao, “Semantic communications for future internet: Fundamentals, applications, and challenges,” IEEE Commun. Surveys Tuts., vol. 25, no. 1, pp. 213–250, 2022.
[5] C. Liang, X. Deng, Y. Sun, R. Cheng, L. Xia, D. Niyato, and M. A. Imran, “VISTA: Video Transmission over A Semantic Communication Approach,” in Proc. IEEE Int. Conf. Commun. Workshops (ICC Workshops). IEEE, 2023, pp. 1777–1782.
[6] C. Chaccour, W. Saad, M. Debbah, Z. Han, and H. V. Poor, “Less data, more knowledge: Building next-generation semantic communication networks,” IEEE Commun. Surveys Tuts., vol. 27, no. 1, pp. 37–76, 2024.
[7] Y. Xin, M. Chen, and J. Zhang, “Semantic communication: A survey of its theoretical foundations and applications,” Entropy, vol. 26, no. 7, p. 547, 2024.
[8] S. Guo, Y. Wang, N. Zhang, Z. Su, T. H. Luan, Z. Tian, and X. Shen, “A survey on semantic communication networks: Architecture, security, and privacy,” IEEE Commun. Surveys Tuts., vol. 27, no. 5, pp. 2860–2894, 2025.
[9] Z. Yang, M. Chen, G. Li, Y. Yang, and Z. Zhang, “Secure semantic communications: Fundamentals and challenges,” IEEE Netw., vol. 38, no. 6, pp. 513–520, 2024.
[10] Y. E. Sagduyu and S. Ulukus, “Is semantic communication secure? A tale of multi-domain vulnerabilities,” IEEE Commun. Mag., vol. 61, no. 11, pp. 40–46, 2023.
[11] W. Chen, Q. Yang, Y. Jia, J. Pan, S. Shao, J. Dai, M. Tao, and P. Zhang, “Secure digital semantic communications: Fundamentals, challenges, and opportunities,” arXiv preprint arXiv:2512.24602, 2025.
[12] X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial examples: Attacks and defenses for deep learning,” IEEE Trans. Neural Netw. Learn. Syst., vol. 30, no. 9, pp. 2805–2824, 2019.
[13] B. Li, P. Qi, B. Liu, S. Di, J. Liu, J. Pei, J. Yi, and B.
Zhou, “Trustworthy AI: From principles to practices,” ACM Comput. Surveys, vol. 55, no. 9, pp. 1–46, 2023.
[14] W. Wei and L. Liu, “Trustworthy distributed AI systems: Robustness, privacy, and governance,” ACM Comput. Surveys, vol. 57, no. 6, pp. 1–42, 2025.
[15] J. Wen, Z. Zhang, Y. Lan, Z. Cui, J. Cai, and W. Zhang, “A survey on federated learning: Challenges and applications,” Int. J. Mach. Learn. Cybern., vol. 14, no. 2, pp. 513–535, 2023.
[16] D. Won, G. Woraphonbenjakul, A. B. Wondmagegn, A.-T. Tran, D. Lee, D. S. Lakew, and S. Cho, “Resource management, security, and privacy issues in semantic communications: A survey,” IEEE Commun. Surveys Tuts., vol. 27, no. 3, pp. 1758–1797, 2025.
[17] X. Zhang, Robust Semantic Communications and Privacy Protection. Wiley, 2024, pp. 67–86.
[18] R. Meng, S. Gao, D. Fan, H. Gao, Y. Wang, X. Xu, B. Wang, S. Lv, Z. Zhang, M. Sun, S. Han, C. Dong, X. Tao, and P. Zhang, “A survey of secure semantic communications,” J. Netw. Comput. Appl., p. 104181, 2025.
[19] M. Shen, J. Wang, H. Du, D. Niyato, X. Tang, J. Kang, Y. Ding, and L. Zhu, “Secure semantic communications: Challenges, approaches, and opportunities,” IEEE Netw., vol. 38, no. 4, pp. 197–206, 2023.
[20] C. Liang, Y. Sun, D. Liu, D. Yu, and M. A. Imran, “Safeguarded AI-Driven Semantic Communication: Design Principles, Architecture, and Challenges,” IEEE Commun. Standards Mag., 2025.
[21] T. O’Shea and J. Hoydis, “An introduction to deep learning for the physical layer,” IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 4, pp. 563–575, 2017.
[22] S. Jiang, Y. Liu, Y. Zhang, P. Luo, K. Cao, J. Xiong, H. Zhao, and J. Wei, “Reliable semantic communication system enabled by knowledge graph,” Entropy, vol. 24, no. 6, p. 846, 2022.
[23] B. Wang, R. Li, J. Zhu, Z. Zhao, and H. Zhang, “Knowledge enhanced semantic communication receiver,” IEEE Commun. Lett., vol. 27, no. 7, pp. 1794–1798, 2023.
[24] N. Hello, P.
Di Lorenzo, and E. C. Strinati, “Semantic communication enhanced by knowledge graph representation learning,” in Proc. IEEE Int. Workshop Signal Process. Adv. Wireless Commun. (SPAWC). IEEE, 2024, pp. 876–880.
[25] C. Liang, Y. Sun, D. Niyato, and M. A. Imran, “Knowledge Graph Fusion Based Semantic Communication Framework,” IEEE Trans. Mobile Comput., vol. 24, no. 11, pp. 11416–11429, 2025.
[26] S. Liu, Z. Gao, G. Chen, Y. Su, and L. Peng, “Transformer-based joint source channel coding for textual semantic communication,” in Proc. IEEE/CIC Int. Conf. Commun. China (ICCC), 2023, pp. 1–6.
[27] J. Xu, T.-Y. Tung, B. Ai, W. Chen, Y. Sun, and D. Gündüz, “Deep joint source-channel coding for semantic communications,” IEEE Commun. Mag., vol. 61, no. 11, pp. 42–48, 2023.
[28] L. Zhang, M. Das, Y. Sun, D. Niyato, and X. Yuan, “Prompt-based transceiver cooperation for semantic communications with domain-incremental background knowledge,” in Proc. IEEE Global Commun. Conf. (GLOBECOM). IEEE, 2023, pp. 2087–2092.
[29] Z. Wang, L. Zou, S. Wei, K. Li, F. Liao, H. Mi, and R. Lai, “LLM-SC: Large language model-enabled text semantic communication systems,” Applied Sciences, vol. 15, no. 13, p. 7227, 2025.
[30] S. Salehi, M. Erol-Kantarci, and D. Niyato, “LLM-enabled data transmission in end-to-end semantic communication,” arXiv preprint arXiv:2504.07431, 2025.
[31] R. Cheng, Y. Sun, D. Niyato, L. Zhang, L. Zhang, and M. A. Imran, “A Wireless AI-generated Content (AIGC) Provisioning Framework Empowered by Semantic Communication,” IEEE Trans. Mobile Comput., vol. 24, no. 3, pp. 2137–2150, 2024.
[32] L. Xia, Y. Sun, C. Liang, L. Zhang, M. A. Imran, and D. Niyato, “Generative AI for Semantic Communication: Architecture, Challenges, and Outlook,” IEEE Wireless Commun., vol. 32, no. 1, pp. 132–140, 2025.
[33] C. Liang, H. Du, Y. Sun, D. Niyato, J. Kang, D. Zhao, and M. A.
Imran, “Generative AI-driven Semantic Communication Networks: Architecture, Technologies and Applications,” IEEE Trans. Cogn. Commun. Netw., 2024.
[34] H. Zhao, H. Li, D. Xu, S. Song, and K. B. Letaief, “Multi-modal self-supervised semantic communication,” in Proc. IEEE Int. Mediterranean Conf. Commun. Netw. (MeditCom), 2025, pp. 1–6.
[35] S. Tang, Q. Yang, L. Fan, X. Lei, A. Nallanathan, and G. K. Karagiannidis, “Contrastive learning-based semantic communications,” IEEE Trans. Commun., vol. 72, no. 10, pp. 6328–6343, 2024.
[36] J. Zou, Z. Wan, F. Wang, S. Ye, and S. Liu, “The self supervised multimodal semantic transmission mechanism for complex network environments,” Scientific Reports, vol. 15, no. 1, p. 29899, 2025.
[37] X. Yan, F. Xiumei, K.-L. A. Yau, X. Zhixin, M. Rui, and Y. Gang, “A review of reinforcement learning for semantic communications,” J. Netw. Syst. Manage., vol. 33, no. 3, p. 52, 2025.
[38] K. Lu, R. Li, X. Chen, Z. Zhao, and H. Zhang, “Reinforcement learning-powered semantic communication via semantic similarity,” arXiv preprint arXiv:2108.12121, 2021.
[39] F. Zhao, G. Bagwe, E. Mohammed, L. Feng, L. Zhang, and Y. Sun, “Joint computing resource and bandwidth allocation for semantic communication networks,” in Proc. IEEE Veh. Technol. Conf. (VTC), 2023, pp. 1–5.
[40] J. Xu, H. Yao, R. Zhang, T. Mai, S. Huang, and S. Guo, “Federated learning powered semantic communication for UAV swarm cooperation,” IEEE Wireless Commun., vol. 31, no. 4, pp. 140–146, 2024.
[41] L. X. Nguyen, H. Q. Le, Y. L. Tun, P. S. Aung, Y. K. Tun, Z. Han, and C. S. Hong, “An efficient federated learning framework for training semantic communication systems,” IEEE Trans. Veh. Technol., vol. 73, no. 10, pp. 15872–15877, 2024.
[42] P. Si, R. Liu, L. Qian, J. Zhao, and K.-Y. Lam, “Post-deployment fine-tunable semantic communication,” IEEE Trans. Wireless Commun., vol. 24, no. 1, pp. 35–50, 2024.
[43] G.
Zhang, K. Kang, Y. Cai, Q. Hu, Y. C. Eldar, and A. L. Swindlehurst, “O2SC: Realizing channel-adaptive semantic communication with one-shot online-learning,” IEEE Trans. Commun., vol. 73, no. 5, pp. 3268–3282, 2025.
[44] C. Liu, C. Guo, Y. Yang, W. Ni, and T. Q. Quek, “OFDM-based digital semantic communication with importance awareness,” IEEE Trans. Commun., vol. 72, no. 10, pp. 6301–6315, 2024.
[45] X. Liu, Y. Sun, R. Cheng, L. Xia, H. Abumarshoud, L. Zhang, and M. A. Imran, “Knowledge-assisted privacy preserving in semantic communication,” IEEE Wireless Commun., vol. 32, no. 2, pp. 76–83, 2025.
[46] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
[47] B. Biggio and F. Roli, “Wild patterns: Ten years after the rise of adversarial machine learning,” in Proc. ACM SIGSAC Conf. Comput. Commun. Secur. (CCS), 2018, pp. 2154–2156.
[48] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” in Proc. Int. Conf. Learn. Representations (ICLR), 2017.
[49] H. Zhang, Y. Yu, J. Jiao, E. Xing, L. El Ghaoui, and M. Jordan, “Theoretically principled trade-off between robustness and accuracy,” in Proc. Int. Conf. Mach. Learn. (ICML). PMLR, 2019, pp. 7472–7482.
[50] A. Sinha, H. Namkoong, and J. Duchi, “Certifying some distributional robustness with principled adversarial training,” Proc. Int. Conf. Learn. Representations (ICLR), 2018.
[51] J. Cohen, E. Rosenfeld, and Z. Kolter, “Certified adversarial robustness via randomized smoothing,” in Proc. Int. Conf. Mach. Learn. (ICML). PMLR, 2019, pp. 1310–1320.
[52] Y. Zou, J. Zhu, X. Wang, and L. Hanzo, “A survey on wireless security: Technical challenges, recent advances, and future trends,” Proc. IEEE, vol. 104, no. 9, pp. 1727–1765, 2016.
[53] R. K. Nichols, P. Lekkas, and P. C. Lekkas, Wireless Security.
McGraw-Hill Professional Publishing, 2001.
[54] B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” arXiv preprint, 2012.
[55] T. Gu, B. Dolan-Gavitt, and S. Garg, “BadNets: Identifying vulnerabilities in the machine learning model supply chain,” arXiv preprint arXiv:1708.06733, 2017.
[56] A. N. Bhagoji, S. Chakraborty, P. Mittal, and S. Calo, “Analyzing federated learning through an adversarial lens,” in Proc. Int. Conf. Mach. Learn. (ICML). PMLR, 2019, pp. 634–643.
[57] H. Wang, K. Sreenivasan, S. Rajput, H. Vishwakarma, S. Agarwal, J.-y. Sohn, K. Lee, and D. Papailiopoulos, “Attack of the tails: Yes, you really can backdoor federated learning,” Proc. Conf. Neural Inf. Process. Syst. (NeurIPS), vol. 33, pp. 16070–16084, 2020.
[58] A. Ilyas, S. Santurkar, D. Tsipras, L. Engstrom, B. Tran, and A. Madry, “Adversarial examples are not bugs, they are features,” Proc. Conf. Neural Inf. Process. Syst. (NeurIPS), vol. 32, 2019.
[59] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in Proc. IEEE Symp. Secur. Priv. (SP). IEEE, 2017, pp. 3–18.
[60] M. Fredrikson, S. Jha, and T. Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures,” in Proc. ACM SIGSAC Conf. Comput. Commun. Secur. (CCS), 2015, pp. 1322–1333.
[61] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction APIs,” in Proc. USENIX Secur. Symp., 2016, pp. 601–618.
[62] National Institute of Standards and Technology, “An introduction to information security,” NIST, Tech. Rep. Special Publication 800-12 Rev. 1, 2017. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-12r1.pdf
[63] L. Huang, A. D. Joseph, B. Nelson, B. I. Rubinstein, and J. D. Tygar, “Adversarial machine learning,” in Proc.
ACM W orkshop Secur . Artif. Intell. (AISec) , 2011, pp. 43–58. [64] M. Barreno, B. Nelson, A. D. Joseph, and J. D. T ygar , “The security of machine learning, ” Machine learning , vol. 81, no. 2, pp. 121–148, 2010. [65] N. Papernot, P . McDaniel, A. Sinha, and M. W ellman, “T owards the science of security and priv acy in machine learning, ” arXiv preprint arXiv:1611.03814 , 2016. [66] S. Huang, N. P apernot, I. Goodfellow , Y . Duan, and P . Abbeel, “ Adversarial attacks on neural network policies, ” arXiv pr eprint arXiv:1702.02284 , 2017. [67] N. Papernot, P . McDaniel, I. Goodfellow , S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning, ” in Pr oc. A CM Asia Conf. Comput. Commun. Secur . (ASIA CCS) , 2017, pp. 506–519. [68] D. Hendrycks, N. Mu, E. D. Cubuk, B. Zoph, J. Gilmer, and B. Laksh- minarayanan, “ Augmix: A simple data processing method to improve robustness and uncertainty , ” in Pr oc. Int. Conf. Learn. Repr esentations (ICLR) , 2020. [69] H. Zhang, M. Cisse, Y . N. Dauphin, and D. Lopez-P az, “mixup: Beyond empirical risk minimization, ” in Pr oc. Int. Conf. Learn. Repr esentations (ICLR) , 2018. [70] S. Y un, D. Han, S. J. Oh, S. Chun, J. Choe, and Y . Y oo, “Cutmix: Reg- ularization strategy to train strong classifiers with localizable features, ” in Proc. IEEE/CVF Int. Conf. Comput. V ision (ICCV) , 2019. [71] E. D. Cubuk, B. Zoph, D. Mane, V . V asudev an, and Q. V . Le, “ Autoaugment: Learning augmentation policies from data, ” in Pr oc. IEEE/CVF Conf. Comput. V ision P attern Recognit. (CVPR) , 2019. [72] S. Rifai, P . V incent, X. Muller , X. Glorot, and Y . Bengio, “Contractiv e auto-encoders: Explicit in variance during feature extraction, ” in Pr oc. Int. Conf. Mach. Learn. (ICML) , 2011. [73] A. Ross and F . Doshi-V elez, “Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients, ” in Pr oc. AAAI Conf. Artif. Intell. (AAAI) , v ol. 32, no. 
1, 2018. [74] Y . Y oshida and T . Miyato, “Spectral norm regularization for improving the generalizability of deep learning, ” arXiv pr eprint arXiv:1705.10941 , 2017. [75] M. Cisse, P . Bojanowski, E. Gra ve, Y . Dauphin, and N. Usunier, “Parse val networks: Improving robustness to adversarial examples, ” in Pr oc. Int. Conf. Mach. Learn. (ICML) , 2017. [76] Y . Tsuzuku, I. Sato, and M. Sugiyama, “Lipschitz-margin training: Scalable certification of perturbation in variance, ” in Proc. Adv . Neural Inf. Pr ocess. Syst. (NeurIPS) , 2018. [77] X. Ma, B. Li, Y . W ang, S. M. Erfani, S. W ijewickrema, G. Schoenebeck, D. Song, M. E. Houle, and J. Bailey , “Characterizing adversarial subspaces using local intrinsic dimensionality , ” in Proc. Int. Conf. Learn. Repr esentations (ICLR) , 2018. [78] D. Meng and H. Chen, “Magnet: A tw o-pronged defense against adversarial examples, ” in Proc. A CM SIGSAC Conf. Comput. Commun. Secur . (CCS) , 2017. [79] W . Xu, D. Evans, and Y . Qi, “Feature squeezing: Detecting adversarial examples in deep neural networks, ” in Pr oc. Netw . Distrib. Syst. Secur . Symp. (NDSS) , 2018. [80] C. Guo, M. Rana, M. Cisse, and L. V an Der Maaten, “Countering adversarial images using input transformations, ” in Pr oc. Int. Conf. Learn. Representations (ICLR) W orkshop , 2018. [81] C. Xie, J. W ang, Z. Zhang, Z. Ren, and A. Y uille, “Mitigating adversarial effects through randomization, ” in Proc. Int. Conf. Learn. Repr esentations (ICLR) , 2018. [82] N. Carlini and D. W agner, “ Adversarial examples are not easily detected: Bypassing ten detection methods, ” in Pr oc. ACM W orkshop Artif. Intell. Secur . (AISec) , 2017. [83] A. Athalye, N. Carlini, and D. W agner, “Obfuscated gradients giv e a false sense of security: Circumventing defenses to adversarial exam- ples, ” in Pr oc. Int. Conf. Mach. Learn. (ICML) , 2018. [84] H. Salman, M. Sun, G. Y ang, A. Kapoor , and J. Z. 
Kolter , “Denoised smoothing: A prov able defense for pretrained classifiers, ” in Pr oc. Adv . Neural Inf. Process. Syst. (NeurIPS) , 2020. [85] S. Gowal, K. Dvijotham, R. Stanforth, R. Bunel, C. Qin, J. Uesato, R. Arandjelovic, T . Mann, and P . Kohli, “On the effecti veness of interval bound propagation for training verifiably robust models, ” arXiv pr eprint arXiv:1810.12715 , 2018. [86] E. W ong and J. Z. Kolter , “Pro vable defenses against adversarial examples via the conv ex outer adversarial polytope, ” in Pr oc. Int. Conf. Mach. Learn. (ICML) , 2018. [87] H. Zhang, H. Chen, C. Xiao, S. Gowal, R. Stanforth, B. Li, D. Boning, and C.-J. Hsieh, “T ow ards stable and efficient training of verifiably robust neural networks, ” in Pr oc. Adv . Neural Inf . Pr ocess. Syst. (NeurIPS) , 2019. 28 [88] L. Li, Y . He, R. Xu, B. Chen, B. Han, Y . Zhao, and J. Li, “Syn- chronizing llm-based semantic knowledge bases via secure federated fine-tuning in semantic communication, ” F ront. Artif. Intell. , vol. 8, p. 1690950, 2025. [89] V .-T . Hoang, V .-L. Nguyen, R.-G. Chang, P .-C. Lin, R.-H. Hwang, and T . Q. Duong, “ Adversarial attacks against shared knowledge interpreta- tion in semantic communications, ” IEEE T rans. Cogn. Commun. Netw . , vol. 11, no. 2, pp. 1024–1040, 2025. [90] T . Dreossi, S. Jha, and S. A. Seshia, “Semantic adversarial deep learn- ing, ” in Proc. Int. Conf. Comput. Aided V erification (CA V) . Springer , 2018, pp. 3–26. [91] Y . Chen, Q. Y ang, Z. Shi, and J. Chen, “The model inversion ea ves- dropping attack in semantic communication systems, ” in Pr oc. IEEE Global Commun. Conf. (GLOBECOM) . IEEE, 2023, pp. 5171–5177. [92] J. Peng, H. Xing, L. Xu, S. Luo, P . Dai, L. Feng, J. Song, B. Zhao, and Z. Xiao, “ Adversarial reinforcement learning based data poisoning attacks defense for task-oriented multi-user semantic communication, ” IEEE T rans. Mobile Comput. , vol. 23, no. 12, pp. 14 834–14 851, 2024. [93] Y . E. Sagduyu, T . Erpek, S. 
Ulukus, and A. Y ener , “V ulnerabilities of deep learning-driven semantic communications to backdoor (trojan) attacks, ” in Proc. Annu. Conf. Inf. Sci. Syst. (CISS) . IEEE, 2023, pp. 1–6. [94] Z. Guo, A. Kumar , and R. T ourani, “Persistent backdoor attacks in continual learning, ” in Pr oc. USENIX Secur . Symp. , 2025, pp. 6379– 6397. [95] Y . Zhou, R. Q. Hu, and Y . Qian, “Backdoor attacks and defenses on semantic-symbol reconstruction in semantic communications, ” in Proc. IEEE Int. Conf. Commun. (ICC) . IEEE, 2024, pp. 734–739. [96] S. T ang, Y . Chen, Q. Y ang, R. Zhang, D. Niyato, and Z. Shi, “T o- wards secure semantic communications in the presence of intelligent eav esdroppers, ” arXiv preprint , 2025. [97] X. Y uan and L. Zhang, “Membership inference attacks and defenses in neural network pruning, ” in Proc. USENIX Secur . Symp. USENIX Association, 2022, pp. 4561–4578. [98] J. Liu, Y . He, W . Xu, Y . Xie, and J. Han, “Manipulating semantic com- munication by adding adversarial perturbations to wireless channel, ” in Pr oc. IEEE/A CM Int. Symp. Quality of Service (IWQoS) . IEEE, 2024, pp. 1–10. [99] Y . Rong, G. Nan, M. Zhang, S. Chen, S. W ang, X. Zhang, N. Ma, S. Gong, Z. Y ang, Q. Cui, X. T ao, and T . Q. S. Quek, “Semantic entropy can simultaneously benefit transmission efficiency and channel security of wireless semantic communications, ” IEEE T rans. Inf. F orensics Secur . , v ol. 20, pp. 2067–2082, 2025. [100] M. Fallahreyhani, P . Azmi, and N. Mokari, “Countering physical adversarial attacks in semantic communication networks: Innovations and strategies, ” in Pr oc. Int. Symp. T elecommun. (IST) . IEEE, 2024, pp. 637–642. [101] R. T ang, D. Gao, M. Y ang, T . Guo, H. Wu, and G. Shi, “GAN- inspired intelligent jamming and anti-jamming strategy for semantic communication systems, ” in Pr oc. IEEE Int. Conf. Commun. (ICC) , 2023, pp. 1623–1628. [102] K. Zhou, G. Zhang, Y . Cai, Q. Hu, and G. 
Y u, “ROME: Robust model ensembling for semantic communication against semantic jamming attacks, ” arXiv preprint , 2025. [103] W . Chen, Q. Y ang, S. Shao, Z. Shi, J. Chen, and X. Shen, “ A coding- enhanced jamming approach for secure semantic communication over wiretap channels, ” arXiv pr eprint arXiv:2504.16960 , 2025. [104] Q. Zhou, R. Li, Z. Zhao, Y . Xiao, and H. Zhang, “ Adaptiv e bit rate control in semantic communication with incremental knowledge-based HARQ, ” IEEE Open J . Commun. Soc. , v ol. 3, pp. 1076–1089, 2022. [105] W . Gong, H. T ong, S. W ang, Z. Y ang, X. He, and C. Y in, “ Adaptiv e bitrate video semantic communication over wireless networks, ” in Pr oc. Int. Conf. W ir eless Commun. Signal Pr ocess. (WCSP) . IEEE, 2023, pp. 122–127. [106] Y . Zhou, R. Q. Hu, and Y . Qian, “Stealthy backdoor attacks on semantic symbols in semantic communications, ” in Proc. IEEE Global Commun. Conf. (GLOBECOM) . IEEE, 2024, pp. 4975–4981. [107] J. Ren, Z. Zhang, J. Xu, G. Chen, Y . Sun, P . Zhang, and S. Cui, “Kno wl- edge base enabled semantic communication: A generative perspectiv e, ” IEEE Wir eless Commun. , vol. 31, no. 4, pp. 14–22, 2024. [108] B. Han, Y . He, R. Xu, D. Xiao, N. Ruan, and J. Li, “Manipulating digital twin networks by poisoning semantic knowledge base, ” in Pr oc. IEEE Int. Conf. Commun. (ICC) . IEEE, 2025, pp. 720–725. [109] Y . He, X. Y ang, G. Li, and J. Li, “On the invisible backdoor attacks on peer-to-peer semantic vehicular networks, ” Electr onics Letters , vol. 61, no. 1, p. e70431, 2025. [110] X. Y ang, G. Li, M. Dong, K. Ota, J. W u, and J. Li, “Inviins: In visible instruction backdoor attacks on peer-to-peer semantic networks, ” in Pr oc. IEEE Int. Symp. P arallel Distrib . Pr ocess. Appl. (ISP A) . IEEE, 2024, pp. 956–964. [111] G. Li, Y . Zhao, and Y . Li, “Catfl: Certificateless authentication-based trustworthy federated learning for 6g semantic communications, ” in Pr oc. IEEE W ireless Commun. Netw . Conf. (WCNC) . 
IEEE, 2023, pp. 1–6. [112] G. Shi, Y . Xiao, Y . Li, and X. Xie, “From semantic communication to semantic-aware networking: Model, architecture, and open problems, ” IEEE Commun. Mag. , vol. 59, no. 8, pp. 44–50, 2021. [113] E. Uysal, O. Kaya, A. Ephremides, J. Gross, M. Codreanu, P . Popovski, M. Assaad, G. Liv a, A. Munari, B. Soret et al. , “Semantic communi- cations in networked systems: A data significance perspecti ve, ” IEEE Network , vol. 36, no. 4, pp. 233–240, 2022. [114] Z. He, T . Zhang, and R. B. Lee, “Model inv ersion attacks against collaborativ e inference, ” in Proc. Annu. Comput. Secur . Appl. Conf . (ACSA C) , 2019, pp. 148–162. [115] S. Ding, L. Zhang, M. Pan, and X. Y uan, “Patrol: Pri vac y-oriented pruning for collaborati ve inference against model inv ersion attacks, ” in Pr oc. IEEE/CVF W inter Conf. Appl. Comput. V is. (W ACV) , 2024, pp. 4716–4725. [116] W . Shen, H. Li, and Z. Zheng, “Coordinated attacks against federated learning: A multi-agent reinforcement learning approach, ” in Pr oc. Int. Conf. Learn. Repr esentations W orkshops (ICLR W orkshops) . ICLR 2021 W orkshop on Security and Safety in Machine Learning Systems (SecML), 2021. [117] L. Zhao, S. Hu, Q. W ang, J. Jiang, C. Shen, X. Luo, and P . Hu, “Shield- ing collaborative learning: Mitigating poisoning attacks through client- side detection, ” IEEE T rans. Dependable Secure Comput. , vol. 18, no. 5, pp. 2029–2041, 2020. [118] Q. Hu, G. Zhang, Z. Qin, Y . Cai, G. Y u, and G. Y . Li, “Robust semantic communications against semantic noise, ” in Pr oc. IEEE V eh. T echnol. Conf. (VTC) . IEEE, 2022, pp. 1–6. [119] K. W ei, R. Xie, W . Xu, Z. Lu, and H. Xiao, “Rob ust semantic communication via adversarial training, ” IEEE T rans. V eh. T echnol. , vol. 74, no. 12, pp. 19 849–19 853, 2025. [120] Q. Hu, G. Zhang, Z. Qin, Y . Cai, G. Y u, and G. Y . Li, “Robust semantic communications with masked vq-vae enabled codebook, ” IEEE Tr ans. W ireless Commun. , v ol. 22, no. 12, pp. 
8707–8722, 2023. [121] G. Chen, G. Nan, Z. Jiang, H. Du, R. Shi, Q. Cui, and X. T ao, “Lightweight and robust wireless semantic communications, ” IEEE Commun. Lett. , vol. 28, no. 11, pp. 2633–2637, 2024. [122] X. Peng, Z. Qin, D. Huang, X. T ao, J. Lu, G. Liu, and C. Pan, “ A robust deep learning enabled semantic communication system for text, ” in Pr oc. IEEE Global Commun. Conf. (GLOBECOM) . IEEE, 2022, pp. 2704–2709. [123] X. Peng, Z. Qin, X. T ao, J. Lu, and L. Hanzo, “ A rob ust semantic text communication system, ” IEEE T rans. W ir eless Commun. , vol. 23, no. 9, pp. 11 372–11 385, 2024. [124] Y . Song, T . Kim, S. Nowozin, S. Ermon, and N. Kushman, “Pixelde- fend: Lev eraging generativ e models to understand and defend against adversarial examples, ” in Pr oc. Int. Conf. Learn. Repr esentations (ICLR) , 2018. [125] Z. W eng, Z. W ang, Z. Qin, and X. T ao, “Generati ve semantic commu- nications for rob ust speech-to-text translation, ” IEEE T rans. W ir eless Commun. , p. early access, 2025. [126] Y . Cai, “Robust and adaptive semantic noise for complex secure com- munication networks, ” Physical Communication , vol. 72, p. 102763, 2025. [127] T .-Y . Tung and D. G ¨ und ¨ uz, “Deep joint source-channel and encryption coding: Secure semantic communications, ” in Pr oc. IEEE Int. Conf . Commun. (ICC) , 2023, pp. 5620–5625. [128] S. A. Ameli Kalkhoran, M. Letafati, E. Erdemir , B. H. Khalaj, H. Behroozi, and D. G ¨ und ¨ uz, “Secure deep-jscc against multiple eav esdroppers, ” in Proc. IEEE Global Commun. Conf. (GLOBECOM) , 2023, pp. 3433–3438. [129] G. Nan, Z. Li, J. Zhai, Q. Cui, G. Chen, X. Du, X. Zhang, X. T ao, Z. Han, and T . Q. Quek, “Physical-layer adversarial robustness for deep learning-based semantic communications, ” IEEE J. Sel. Areas Commun. , vol. 41, no. 8, pp. 2592–2608, 2023. [130] C. Zhao, J. W ang, R. Zhang, D. Niyato, H. Du, Z. Xiong, D. I. Kim, and P . 
Zhang, “Secdiff: Dif fusion-aided secure deep joint source-channel coding against adversarial attacks, ” arXiv pr eprint arXiv:2511.01466 , 2025. 29 [131] X. Peng, Z. Qin, X. T ao, J. Lu, and K. B. Letaief, “ A robust semantic communication system for image transmission, ” in Proc. IEEE Global Commun. Conf. (GLOBECOM) . IEEE, 2024, pp. 2154–2159. [132] X. Lu, K. Zhu, J. Li, and Y . Zhang, “Efficient knowledge base synchro- nization in semantic communication network: A federated distillation approach, ” in Proc. IEEE W ireless Commun. Netw . Conf. (WCNC) . IEEE, 2024, pp. 1–6. [133] L. Hu, Y . Li, H. Zhang, L. Y uan, F . Zhou, and Q. W u, “Robust semantic communication dri ven by kno wledge graph, ” in Pr oc. Int. Conf. Internet Things: Syst. Manage . Secur . (IO TSMS) . IEEE, 2022, pp. 1–5. [134] X. Liu, H. Liang, C. Dong, and X. Xu, “Semantic synchronization for enhanced reliability in communication systems, ” in Pr oc. IEEE W ireless Commun. Netw . Conf. (WCNC) . IEEE, 2024, pp. 1–6. [135] E. Erdemir, T .-Y . Tung, P . L. Dragotti, and D. G ¨ und ¨ uz, “Generativ e joint source-channel coding for semantic image transmission, ” IEEE J. Sel. Areas Commun. , vol. 41, no. 8, pp. 2645–2657, 2023. [136] J. Chen, D. Y ou, D. G ¨ und ¨ uz, and P . L. Dragotti, “CommIN: Semantic image communications as an in verse problem with INN-guided dif fu- sion models, ” in Pr oc. IEEE Int. Conf. Acoust. Speech Signal Pr ocess. (ICASSP) , 2024, pp. 6675–6679. [137] X. Luo, B. Y in, Z. Chen, and J. W ang, “ Autoencoder-based semantic communication systems with relay channels, ” in Pr oc. IEEE Int. Conf. Commun. (ICC) , 2022, pp. 711–716. [138] S. Ma, W . Liang, B. Zhang, and D. W ang, “ An in vestigation on intelligent relay assisted semantic communication networks, ” in Pr oc. IEEE Wir eless Commun. Netw . Conf. (WCNC) , 2023, pp. 1–6. [139] N. Gao, Q. Huang, C. Li, S. Jin, and M. 
Matthaiou, “EsaNet: En viron- ment semantics enabled physical layer authentication, ” IEEE Wir eless Commun. Lett. , vol. 13, no. 1, pp. 178–182, 2024. [140] H. T an, N. Xie, and A. X. Liu, “ An optimization frame work for active physical-layer authentication, ” IEEE T rans. Mobile Comput. , v ol. 23, no. 1, pp. 164–179, 2024. [141] M. Shokrnezhad and T . T aleb, “ An autonomous network orchestration framew ork integrating large language models with continual reinforce- ment learning, ” arXiv pr eprint arXiv:2502.16198 , 2025. [142] Z. W eng, Z. Qin, and G. Y . Li, “Robust semantic communications for speech transmission, ” in Pr oc. IEEE Int. Conf. Acoust. Speech Signal Pr ocess. (ICASSP) , 2025. [143] K. Papineni, S. Roukos, T . W ard, and W .-J. Zhu, “Bleu: a method for automatic ev aluation of machine translation, ” in Pr oc. Annu. Meeting Assoc. Comput. Linguist. (ACL) , 2002, pp. 311–318. [144] D. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, and L. Specia, “Semeval- 2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation, ” arXiv preprint , 2017. [145] A. W ijesinghe, W . W ang, S. W anninayaka, S. Zhang, and Z. Ding, “T aco: Rethinking semantic communications with task adaptation and context embedding, ” arXiv preprint , 2025. [146] X. Zhan, J. Cao, X. Zhu, Y . Zhang, Z. Dong, and C. Fan, “Sparse vector coding based robust semantic communication for dynamic envi- ronment, ” in Pr oc. IEEE Int. W orkshop Radio F r eq. Antenna T echnol. (iWRF&A T) . IEEE, 2025, pp. 425–430. [147] X. Peng, Z. Qin, X. T ao, J. Lu, and K. B. Letaief, “ A rob ust image semantic communication system with multi-scale vision transformer , ” IEEE J. Sel. Ar eas Commun. , vol. 43, no. 1, pp. 53–68, 2025. [148] Z. T ian, W . W ang, C. Zhang, and S. Y u, “Model-enabled task-oriented semantic communications through knowledge synchronization, ” IEEE T rans. Cogn. Commun. Netw . , 2025. [149] H. Gao, M. Sun, R. Zhang, Y . W ang, X. Xu, N. Ma, D. 
Niyato, and P . Zhang, “ Agentic ai-enhanced semantic communications: Founda- tions, architecture, and applications, ” arXiv preprint , 2025. [150] X. W ang, H. W ang, and D. Y ang, “Measure and impro ve robustness in nlp models: A survey , ” in Pr oc. Conf. North Amer . Chapter Assoc. Comput. Linguist.: Human Lang. T echnol. (NAA CL-HLT) , 2022, pp. 4569–4586. [151] S. Goyal, S. Doddapaneni, M. M. Khapra, and B. Ravindran, “ A survey of adversarial defenses and robustness in nlp, ” ACM Comput. Surveys , vol. 55, no. 14s, pp. 1–39, 2023. [152] N. Drenkow , N. Sani, I. Shpitser, and M. Unberath, “ A systematic revie w of robustness in deep learning for computer vision: Mind the gap?” arXiv pr eprint arXiv:2112.00639 , 2021. [153] Y . Li and C. Xu, “T rade-off between robustness and accuracy of vision transformers, ” in Proc. IEEE/CVF Conf. Comput. V ision P attern Recognit. (CVPR) , 2023, pp. 7558–7568. [154] D. Hendrycks and T . Dietterich, “Benchmarking neural network ro- bustness to common corruptions and perturbations, ” in Proc. Int. Conf. Learn. Representations (ICLR) , 2019. [155] Y . Nie, A. Williams, E. Dinan, M. Bansal, J. W eston, and D. Kiela, “ Adversarial nli: A ne w benchmark for natural language understand- ing, ” in Pr oc. Annu. Meeting Assoc. Comput. Linguist. (ACL) , 2020, pp. 4885–4901. [156] K. J. ˚ Astr ¨ om and R. Murray , F eedback systems: an introduction for scientists and engineers . Princeton uni versity press, 2021. [157] J. W . S. Liu, Real-T ime Systems . Prentice Hall, 2000. [158] A. Zappone, M. Di Renzo, M. Debbah, T . T . Lam, and X. Qian, “Model-aided wireless artificial intelligence: Embedding expert knowl- edge in deep neural networks for wireless system optimization, ” IEEE V eh. T echnol. Mag. , vol. 14, no. 3, pp. 60–69, 2019. [159] L. Li, T . Xie, and B. Li, “Sok: Certified robustness for deep neural networks, ” in Pr oc. IEEE Symp. Secur . Priv . (SP) . IEEE, 2023, pp. 1289–1310. [160] B. Zhang, D. Jiang, D. He, and L. 
W ang, “Rethinking lipschitz neural networks and certified robustness: A boolean function perspective, ” Pr oc. Adv . Neural Inf . Process. Syst. (NeurIPS) , vol. 35, pp. 19 398– 19 413, 2022. [161] N. Carlini and D. W agner , “T ow ards ev aluating the robustness of neural networks, ” in Pr oc. IEEE Symp. Secur . Priv . (SP) . IEEE, 2017, pp. 39–57. [162] M. Nasr , R. Shokri, and A. Houmansadr , “Comprehensive privac y analysis of deep learning: Passiv e and active white-box inference attacks against centralized and federated learning, ” in Pr oc. IEEE Symp. Secur . Priv . (SP) . IEEE, 2019, pp. 739–753. [163] B. Liu, M. Ding, S. Shaham, W . Rahayu, F . Farokhi, and Z. Lin, “When machine learning meets priv acy: A survey and outlook, ” ACM Comput. Surveys , vol. 54, no. 2, pp. 1–36, 2021. [164] D. Elliott and E. Soifer, “ Ai technologies, pri vac y , and security , ” F r ont. Artif. Intell. , v ol. 5, p. 826737, 2022. [165] W . Saad, M. Bennis, and M. Chen, “ A vision of 6g wireless systems: Applications, trends, technologies, and open research problems, ” IEEE Netw . , vol. 34, no. 3, pp. 134–142, 2019. [166] G. Zhu, D. Liu, Y . Du, C. Y ou, J. Zhang, and K. Huang, “T oward an intelligent edge: W ireless communication meets machine learning, ” IEEE Commun. Mag. , vol. 58, no. 1, pp. 19–25, 2020. [167] C. Zhang, L. Huang, and Q. Ning, “Resource allocation in wireless semantic communications: A comprehensi ve surve y , ” IEEE Commun. Surveys T uts. , vol. 28, pp. 2965–3001, 2025. [168] S. Hua, Y . Sun, K. Ma, L. Feng, M. Chen, Z. Y ang, and M. A. Imran, “Bandwidth Management in Semantic Communications: A T radeoff Between Data Sensing and Transmission, ” IEEE T rans. V eh. T echnol. , 2025. [169] K. Ma, H. Abumarshoud, S. Hua, M. Imran, and Y . Sun, “Power Allocation for Throughput Maximization in NOMA-Based Semantic Communication System, ” in Pr oc. IEEE Int. Conf. Commun. (ICC) . IEEE, 2025, pp. 4288–4293. [170] X. Liu, Y . Liu, H. T ang, F . Zhao, L. 
Xia, and Y . Sun, “Joint Knowl- edge and Power Management for Secure Semantic Communication Networks, ” arXiv pr eprint arXiv:2504.15260 , 2025. [171] T . M. Getu, G. Kaddoum, and M. Bennis, “Semantic communication: A survey on research landscape, challenges, and future directions, ” Pr oc. IEEE , vol. 112, no. 11, pp. 1649–1685, 2024. [172] J. Morris, E. Lifland, J. Y . Y oo, J. Grigsby , D. Jin, and Y . Qi, “T extattack: A framework for adversarial attacks, data augmentation, and adversarial training in nlp, ” in Pr oc. Conf. Empir . Methods Nat. Lang. Pr ocess. (EMNLP) Syst. Demonstrations , 2020, pp. 119–126. [173] Y . Zhao, T . Pang, C. Du, X. Y ang, C. Li, N.-M. M. Cheung, and M. Lin, “On ev aluating adversarial robustness of large vision-language models, ” Pr oc. Adv . Neural Inf. Pr ocess. Syst. (NeurIPS) , v ol. 36, pp. 54 111–54 138, 2023. [174] M. Brundage, S. A vin, J. W ang, H. Belfield, G. Krueger , G. Hadfield, H. Khlaaf, J. Y ang, H. T oner , R. Fong et al. , “T o ward trustworthy ai dev elopment: mechanisms for supporting verifiable claims, ” arXiv pr eprint arXiv:2004.07213 , 2020. [175] L. Bonati, S. D’Oro, M. Polese, S. Basagni, and T . Melodia, “Intelli- gence and learning in o-ran for data-driven nextg cellular networks, ” IEEE Commun. Mag. , vol. 59, no. 10, pp. 21–27, 2021. [176] M. Polese, L. Bonati, S. D’oro, S. Basagni, and T . Melodia, “Un- derstanding o-ran: Architecture, interfaces, algorithms, security , and research challenges, ” IEEE Commun. Surveys Tuts. , vol. 25, no. 2, pp. 1376–1411, 2023. [177] J. Ma, D. Xu, T . Zhang, and K. Y u, “Implementation and e valuation of semantic communication on sdr based lora platform, ” in Proc. IEEE W ireless Commun. Netw . Conf. (WCNC) . IEEE, 2024, pp. 1–6. 30 [178] T . Qiu, N. Chen, K. Li, M. Atiquzzaman, and W . Zhao, “How can heterogeneous internet of things build our future: A survey , ” IEEE Commun. Surveys T uts. , v ol. 20, no. 3, pp. 2011–2027, 2018. 
[179] "POWDER-RENEW wireless research platform," https://www.powderwireless.net, accessed: 2025-12-17.
[180] "6G Flagship B5G Playground," https://www.6gflagship.com/b5g-playground/, University of Oulu, Finland, accessed: 2025-12-17.
[181] "OpenAirInterface," https://www.openairinterface.org, 2023, accessed: 2025-12-17.
[182] "ARA: Advanced wireless research testbed," https://arawireless.org, 2020, accessed: 2025-12-17.
[183] "RISE-6G project," https://rise-6g.eu/, 2023, accessed: 2025-12-17.
[184] X. Chen, Z. Feng, Z. Wei, F. Gao, and X. Yuan, "Performance of joint sensing-communication cooperative sensing UAV network," IEEE Trans. Veh. Technol., vol. 69, no. 12, pp. 15545–15556, 2020.
[185] K. Yang, D. Yang, J. Zhang, M. Li, Y. Liu, J. Liu, H. Wang, P. Sun, and L. Song, "Spatio-temporal domain awareness for multi-agent collaborative perception," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2023, pp. 23383–23392.
[186] M.-Q. Dao, J. S. Berrio, V. Frémont, M. Shan, E. Héry, and S. Worrall, "Practical collaborative perception: A framework for asynchronous and multi-agent 3D object detection," IEEE Trans. Intell. Transp. Syst., vol. 25, no. 9, pp. 12163–12175, 2024.
[187] Y. Sheng, H. Ye, L. Liang, S. Jin, and G. Y. Li, "Semantic communication for cooperative perception based on importance map," J. Franklin Inst., vol. 361, no. 6, p. 106739, 2024.
[188] M. Lu, G. Liu, L. Liang, C. Guo, H. Ye, and S. Jin, "Cross-modal semantic communication for heterogeneous collaborative perception," arXiv preprint arXiv:2511.20000, 2025.
[189] T. B. Sheridan, Telerobotics, Automation, and Human Supervisory Control. MIT Press, 1992.
[190] G. Niemeyer, C. Preusche, G. Hirzinger, and M. Buss, "Telerobotics," Springer Handbook of Robotics, pp. 741–757, 2008.
[191] P. Talli, F. Pase, F. Chiariotti, A. Zanella, and M. Zorzi, "Semantic and effective communication for remote control tasks with dynamic feature compression," in Proc. IEEE INFOCOM Workshop. IEEE, 2023, pp. 1–6.
[192] Q. Zeng, Z. Wang, Y. Zhou, H. Wu, L. Yang, and K. Huang, "Knowledge-based ultra-low-latency semantic communications for robotic edge intelligence," IEEE Trans. Commun., 2024.
[193] D. B. Acharya, K. Kuppan, and B. Divya, "Agentic AI: Autonomous intelligence for complex goals – a comprehensive survey," IEEE Access, 2025.
[194] R. Sapkota, K. I. Roumeliotis, and M. Karkee, "AI agents vs. agentic AI: A conceptual taxonomy, applications and challenges," arXiv preprint arXiv:2505.10468, 2025.
[195] P. Li, Z. Liu, W. Pang, and J. Cao, "Semantic collaboration: A collaborative approach for multi-agent systems based on semantic communication," in Proc. Int. Conf. Comput., Netw. Internet Things (CNIOT), 2024, pp. 123–132.
[196] F. Jiang, C. Pan, L. Dong, K. Wang, O. A. Dobre, and M. Debbah, "From large AI models to agentic AI: A tutorial on future intelligent communications," arXiv preprint, 2025.
[197] S. Tang, Y. Jia, Z. Yang, Q. Yang, R. Zhang, J. Du, J. Park, Z. Shi, and K. B. Letaief, "Rethinking secure semantic communications in the age of generative and agentic AI: Threats and opportunities," arXiv preprint arXiv:2601.01791, 2026.
[198] D. Zuehlke, "SmartFactory: Towards a factory-of-things," Annu. Rev. Control, vol. 34, no. 1, pp. 129–138, 2010.
[199] J. Lee, B. Bagheri, and H.-A. Kao, "A cyber-physical systems architecture for Industry 4.0-based manufacturing systems," Manuf. Lett., vol. 3, pp. 18–23, 2015.
[200] X. Luo, H.-H. Chen, and Q. Guo, "Semantic communications: Overview, open issues, and future research directions," IEEE Wireless Commun., vol. 29, no. 1, pp. 210–219, 2022.
[201] S. R. Pokhrel, "Learning from data streams for automation and orchestration of 6G industrial IoT: Toward a semantic communication framework," Neural Comput. Appl., vol. 34, no. 18, pp. 15197–15206, 2022.
[202] S. K. Jagatheesaperumal, Z. Yang, Q. Yang, C. Huang, W. Xu, M. Shikh-Bahaei, and Z. Zhang, "Semantic-aware digital twin for metaverse: A comprehensive review," IEEE Wireless Commun., vol. 30, no. 4, pp. 38–46, 2023.
[203] C. K. Thomas, W. Saad, and Y. Xiao, "Causal semantic communication for digital twins: A generalizable imitation learning approach," IEEE J. Sel. Areas Inf. Theory, vol. 4, pp. 698–717, 2023.
[204] E. Glikson and A. W. Woolley, "Human trust in artificial intelligence: Review of empirical research," Academy of Management Annals, vol. 14, no. 2, pp. 627–660, 2020.
[205] S. Kumar, S. Datta, V. Singh, D. Datta, S. K. Singh, and R. Sharma, "Applications, challenges, and future directions of human-in-the-loop learning," IEEE Access, vol. 12, pp. 75735–75760, 2024.
[206] V. Lai, C. Chen, A. Smith-Renner, Q. V. Liao, and C. Tan, "Towards a science of human-AI decision making: An overview of design space in empirical human-subject studies," in Proc. ACM Conf. Fairness, Accountability, Transparency (FAccT), 2023, pp. 1369–1385.
[207] A. Tocchetti, L. Corti, A. Balayn, M. Yurrita, P. Lippmann, M. Brambilla, and J. Yang, "AI robustness: A human-centered perspective on technological challenges and opportunities," ACM Comput. Surveys, vol. 57, no. 6, pp. 1–38, 2025.
[208] S. Wang, Y. Zhuang, R. Zhang, and Z. Song, "Capsule network-based semantic intent modeling for human-computer interaction," arXiv preprint arXiv:2507.00540, 2025.
[209] P. Vaithilingam, M. Kim, F.-C. Acosta-Parenteau, D. Lee, A. Mhedhbi, E. L. Glassman, and I. Arawjo, "Semantic commit: Helping users update intent specifications for AI memory at scale," in Proc. Annu. ACM Symp. User Interface Softw. Technol. (UIST), 2025, pp. 1–18.
[210] G. Hooker, L. Mentch, and S. Zhou, "Unrestricted permutation forces extrapolation: Variable importance requires at least one more model, or there is no free variable importance," Stat. Comput., vol. 31, no. 6, p. 82, 2021.
[211] N. Carlini and A. Terzis, "Poisoning and backdooring contrastive learning," in Proc. Int. Conf. Learn. Representations (ICLR), 2021.
[212] B. Sabir, F. Ullah, M. A. Babar, and R. Gaire, "Machine learning for detecting data exfiltration: A review," ACM Comput. Surveys, vol. 54, no. 3, pp. 1–47, 2021.
[213] Y. Zhao and J. Chen, "A survey on differential privacy for unstructured data content," ACM Comput. Surveys, vol. 54, no. 10s, pp. 1–28, 2022.
[214] D. Deveaux, T. Higuchi, S. Uçar, J. Härri, and O. Altintas, "A definition and framework for vehicular knowledge networking: An application of knowledge-centric networking," IEEE Veh. Technol. Mag., vol. 16, no. 2, pp. 57–67, 2021.
[215] S. W. Golomb and G. Gong, Signal Design for Good Correlation: For Wireless Communication, Cryptography, and Radar. Cambridge University Press, 2005.
[216] S. V. Kartalopoulos, "A primer on cryptography in communications," IEEE Commun. Mag., vol. 44, no. 4, pp. 146–151, 2006.
[217] N. Sklavos, M. Manninger, X. Zhang, O. Koufopavlou, V. Hassler, P. Kitsos, M. Mcloone, P. Hamalainen, A. P. Fournaris, V. Rijmen et al., Wireless Security and Cryptography: Specifications and Implementations. CRC Press, 2017.
[218] A. Acar, H. Aksu, A. S. Uluagac, and M. Conti, "A survey on homomorphic encryption schemes: Theory and implementation," ACM Comput. Surveys, vol. 51, no. 4, pp. 1–35, 2018.
[219] Y. Lindell, "Secure multiparty computation," Commun. ACM, vol. 64, no. 1, pp. 86–96, 2020.
[220] F. Doshi-Velez and B. Kim, "Towards a rigorous science of interpretable machine learning," Nat. Mach. Intell., vol. 3, no. 6, pp. 422–431, 2021.
[221] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, "A survey on bias and fairness in machine learning," ACM Comput. Surveys, vol. 54, no. 6, pp. 1–35, 2021.
[222] A. Rajkomar, M. Hardt, and M. Howell, "Ensuring fairness in machine learning to advance health equity," Ann. Intern. Med., vol. 169, no. 12, pp. 866–872, 2018.
[223] Z. C. Lipton, "The mythos of model interpretability," Commun. ACM, vol. 61, no. 10, pp. 36–43, 2018.
[224] B. D. Mittelstadt, P. Allo, M. Taddeo, S. Wachter, and L. Floridi, "The ethics of algorithms: Mapping the debate," Big Data Soc., vol. 3, no. 2, p. 2053951716679679, 2016.
