Deep Learning-Based Early-Stage IR-Drop Estimation via CNN Surrogate Modeling
IR-drop is a critical power integrity challenge in modern VLSI designs that can cause timing degradation, reliability issues, and functional failures if not detected early in the design flow. Conventional IR-drop analysis relies on physics-based signoff tools, which provide high accuracy but incur significant computational cost and require near-final layout information, making them unsuitable for rapid early-stage design exploration. In this work, we propose a deep learning-based surrogate modeling approach for early-stage IR-drop estimation using a CNN. The task is formulated as a dense pixel-wise regression problem, where spatial physical layout features are mapped directly to IR-drop heatmaps. A U-Net-based encoder-decoder architecture with skip connections is employed to effectively capture both local and global spatial dependencies within the layout. The model is trained on a physics-inspired synthetic dataset generated by us, which incorporates key physical factors including power grid structure, cell density distribution, and switching activity. Model performance is evaluated using standard regression metrics such as Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). Experimental results demonstrate that the proposed approach can accurately predict IR-drop distributions with millisecond-level inference time, enabling fast pre-signoff screening and iterative design optimization. The proposed framework is intended as a complementary early-stage analysis tool, providing designers with rapid IR-drop insight prior to expensive signoff analysis. The implementation, dataset generation scripts, and the interactive inference application are publicly available at: https://github.com/riteshbhadana/IR-Drop-Predictor. The live application can be accessed at: https://ir-drop-predictor.streamlit.app/.
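The abstract evaluates predictions with MSE and PSNR. As a point of reference, a minimal sketch of both metrics for normalized IR-drop maps (the `data_range=1.0` default assumes maps scaled to [0, 1], which is an assumption, not a detail stated above):

```python
import numpy as np

def mse(pred, target):
    """Mean squared error between predicted and reference IR-drop maps."""
    return float(np.mean((pred - target) ** 2))

def psnr(pred, target, data_range=1.0):
    """Peak signal-to-noise ratio in dB. `data_range` is the maximum
    possible value of the (normalized) IR-drop maps."""
    err = mse(pred, target)
    if err == 0.0:
        return float("inf")  # identical maps: infinite PSNR by convention
    return float(10.0 * np.log10(data_range ** 2 / err))
```

Higher PSNR means a closer match; a constant per-pixel error of 0.5 on a unit range, for example, gives 10·log10(1/0.25) ≈ 6.02 dB.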
💡 Research Summary
The paper addresses the need for rapid early-stage power-integrity assessment in modern VLSI designs, focusing on the prediction of IR-drop, a voltage drop caused by resistive losses in the power delivery network. Traditional sign-off tools solve large-scale electrical equations (e.g., ∇·(σ∇V) = −J) with high accuracy but require near-final layout data and incur substantial computational cost, making them unsuitable for fast design iterations.
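To make the cost of the physics-based route concrete, a minimal sketch (not the sign-off solver) of a finite-difference Jacobi relaxation for ∇·(σ∇V) = −J, assuming uniform conductivity σ and ideal power pads (V = 0) on the boundary:

```python
import numpy as np

def solve_ir_drop_poisson(J, sigma=1.0, h=1.0, n_iter=5000):
    """Jacobi relaxation for the Poisson form of the PDN equation,
    div(sigma * grad V) = -J, with uniform sigma and a Dirichlet
    boundary (ideal pads, V = 0 at the edges). `J` is a 2-D
    current-density map; returns the voltage-drop map V."""
    V = np.zeros_like(J, dtype=float)
    rhs = (h * h / sigma) * J
    for _ in range(n_iter):
        # Each interior node is the average of its four neighbours
        # plus the local current-injection term.
        V[1:-1, 1:-1] = 0.25 * (
            V[:-2, 1:-1] + V[2:, 1:-1] + V[1:-1, :-2] + V[1:-1, 2:]
            + rhs[1:-1, 1:-1]
        )
    return V

# A single current hot-spot in the middle of a small 33 x 33 grid.
J = np.zeros((33, 33))
J[16, 16] = 1.0
V = solve_ir_drop_poisson(J)
```

Even this toy solver needs thousands of iterations on a tiny grid; production sign-off operates on networks with millions of nodes, which is exactly the cost the CNN surrogate sidesteps.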
To overcome these limitations, the authors formulate IR‑drop estimation as a dense pixel‑wise regression problem. Each design region is represented as a 64 × 64 image with three channels: (1) power‑grid strength (inverse of local resistance), (2) cell density (proxy for static and dynamic current demand), and (3) switching activity (dynamic current fluctuations). The target is a single‑channel IR‑drop heatmap of the same resolution.
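The three-channel input described above can be assembled as a channels-first tensor. A minimal sketch with randomly generated placeholder maps (real values would come from the placement database and activity files; the ranges below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64  # per-region resolution stated in the paper

# Hypothetical per-region feature maps.
grid_strength = rng.uniform(0.5, 1.5, (H, W))   # inverse of local resistance
cell_density  = rng.uniform(0.0, 1.0, (H, W))   # proxy for current demand
switching_act = rng.uniform(0.0, 1.0, (H, W))   # dynamic activity factor

# Channels-first CNN input of shape (3, 64, 64); the regression
# target would be a single-channel (64, 64) IR-drop heatmap.
x = np.stack([grid_strength, cell_density, switching_act], axis=0)
```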
A U‑Net‑style convolutional neural network (CNN) is employed as a surrogate model. The encoder progressively downsamples the input, extracting global context, while the bottleneck aggregates information across the whole region. The decoder upsamples the features back to the original resolution, and skip connections between corresponding encoder and decoder layers preserve fine‑grained spatial details. This architecture is well‑suited for image‑to‑image regression tasks where both global patterns (smooth voltage gradients) and local anomalies (hot‑spots) must be captured.
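The encoder/bottleneck/decoder flow with skip connections can be traced shape-by-shape. The sketch below is a structural stand-in only: `conv_like` uses fixed channel-averaging weights in place of learned 3×3 convolutions, and the channel widths (16/32/64) are assumptions, not the paper's actual hyperparameters:

```python
import numpy as np

def conv_like(x, out_ch):
    """Stand-in for a learned conv layer: fixed channel mixing + ReLU.
    A real model would use trained weights; here we only trace shapes."""
    c, _, _ = x.shape
    w_mix = np.full((out_ch, c), 1.0 / c)  # hypothetical fixed weights
    return np.maximum(np.tensordot(w_mix, x, axes=1), 0.0)

def down(x):
    """2x2 max pooling: halves the spatial resolution (encoder)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def up(x):
    """Nearest-neighbour upsampling: doubles the resolution (decoder)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def unet_forward(x):
    e1 = conv_like(x, 16)            # encoder level 1: (16, 64, 64)
    e2 = conv_like(down(e1), 32)     # encoder level 2: (32, 32, 32)
    b  = conv_like(down(e2), 64)     # bottleneck:      (64, 16, 16)
    # Skip connections: concatenate encoder features channel-wise.
    d2 = conv_like(np.concatenate([up(b), e2], axis=0), 32)
    d1 = conv_like(np.concatenate([up(d2), e1], axis=0), 16)
    return conv_like(d1, 1)          # 1-channel IR-drop heatmap (1, 64, 64)

y = unet_forward(np.random.default_rng(0).random((3, 64, 64)))
```

The concatenations are the skip connections: coarse, upsampled context from the bottleneck is fused with the matching encoder features so local detail survives to the output.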
Because real sign-off data are unavailable at early design stages, the authors generate a physics-inspired synthetic dataset. They approximate IR-drop using the simple relation IR-drop ≈ (Cell Density × Switching Activity) / Power-Grid Strength + ε, then apply spatial filtering to emulate both gradual voltage slopes and localized spikes. All maps are normalized before training.
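A minimal sketch of one synthetic training pair following this approximation. The box blur (a stand-in for whatever spatial filter the authors use), the value ranges, the noise level, and the [0, 1] normalization are all assumptions for illustration:

```python
import numpy as np

def box_blur(m, k=5):
    """Separable box filter emulating the spatial smoothing of the
    voltage distribution (a stand-in for a Gaussian filter)."""
    kernel = np.ones(k) / k
    m = np.apply_along_axis(np.convolve, 0, m, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 1, m, kernel, mode="same")

def make_sample(rng, size=64, noise=0.01):
    """One synthetic (input, target) pair following the paper's relation:
    IR-drop ~ (cell density x switching activity) / grid strength + eps."""
    grid_strength = rng.uniform(0.5, 1.5, (size, size))
    cell_density  = box_blur(rng.random((size, size)))   # clustered cells
    switching_act = rng.random((size, size))
    ir = cell_density * switching_act / grid_strength
    ir = box_blur(ir) + noise * rng.standard_normal((size, size))
    ir = (ir - ir.min()) / (ir.max() - ir.min())  # assumed [0, 1] scaling
    x = np.stack([grid_strength, cell_density, switching_act], axis=0)
    return x, ir

rng = np.random.default_rng(42)
x, y = make_sample(rng)
```

Smoothing before and after the pointwise relation gives both the gradual voltage slopes and the localized hot-spots the summary mentions, while the additive noise term ε keeps the mapping from being perfectly deterministic.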