Robustness certification seeks to determine whether a deep neural network (DNN) can be provably guaranteed to resist adversarial perturbations in a neighborhood of a given input. Such perturbations are bounded, intentionally crafted modifications designed to induce misclassification. For ReLU-based architectures, the certification problem is NP-complete, which limits exact verification approaches, such as mixed-integer programming (MIP), to relatively small networks. To scale to larger models, prior work commonly employs incomplete verifiers that compute lower bounds on the certification objective. We introduce a new semidefinite programming (SDP) formulation that verifies robustness against all target classes simultaneously, achieving significant speedups over per-class verification. The SDP is further compressed by eliminating variables associated with neurons whose activation states are provably stable.
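The stable-neuron elimination mentioned above can be illustrated with a minimal sketch. The fragment below uses interval bound propagation, a standard incomplete bounding technique, to flag ReLU pre-activations whose sign is fixed over an entire l-infinity ball; all function and variable names are illustrative assumptions, and this is not the paper's SDP formulation itself.

```python
# Illustrative sketch (assumed names, not the paper's method): interval
# bound propagation through one affine layer to find ReLU neurons whose
# activation state is provably stable for all inputs in an l_inf ball
# of radius eps around x. Stable neurons act linearly (or as constant
# zero) over the whole ball, so a verifier can drop their variables.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    center = W @ mid + b
    radius = np.abs(W) @ rad  # worst-case spread of each output
    return center - radius, center + radius

def stable_neurons(W, b, x, eps):
    """Masks of provably active / provably inactive pre-activations."""
    lo, hi = interval_affine(x - eps, x + eps, W, b)
    return lo >= 0.0, hi <= 0.0

# Tiny example: one hidden layer with 4 neurons, 3 inputs, eps = 0.1.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
x = np.array([0.5, -0.2, 0.1])
active, inactive = stable_neurons(W1, b1, x, eps=0.1)
print(active, inactive)
```

Neurons flagged neither active nor inactive remain "unstable" and keep their associated variables in the verification problem; only the provably stable ones are eliminated.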

