
A Systematic Approach to Robustness Modelling for Deep Convolutional Neural Networks (arXiv:2401.13751v1 [cs.LG])



Convolutional neural networks have proven to be broadly applicable to a large variety of fields when large quantities of labelled data are available. The recent trend has been to use models with increasingly large sets of tunable parameters to increase model accuracy, reduce model loss, or create more adversarially robust models, goals that are often at odds with one another. In particular, recent theoretical work raises questions about the ability of ever larger models to generalize to data outside of the controlled train and test sets. As such, we examine the role of the number of hidden layers in the ResNet model, demonstrated on the MNIST, CIFAR10, and CIFAR100 datasets. We test a number of parameters including the size of the model, the floating-point precision, and the noise level of both the training data and the model output. To encapsulate the model's predictive power and computational cost, we provide a method that uses induced failures to model the probability of failure as a function of time and relate that to a novel metric that allows us to quickly determine whether or not the cost of training a model outweighs the cost of attacking it. Using this approach, we are able to approximate the expected failure rate using a small number of specially crafted samples rather than increasingly larger benchmark datasets. We demonstrate the efficacy of this technique on both the MNIST and CIFAR10 datasets using 8-, 16-, 32-, and 64-bit floating-point numbers, various data pre-processing techniques, and several attacks on five configurations of the ResNet model. Then, using empirical measurements, we examine the various trade-offs between cost, robustness, latency, and reliability to find that larger models do not significantly aid in adversarial robustness despite costing significantly more to train.
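The cost-versus-attack bookkeeping described in the abstract can be illustrated with a short, hypothetical sketch (not the authors' code): FGSM stands in for the paper's attacks, an untrained ResNet-18 and torchvision's FakeData stand in for a trained model and CIFAR10, and the training cost is a placeholder constant. The point is only to show how a failure rate estimated from a small set of crafted samples can feed a training-cost versus attack-cost comparison.

```python
# Hypothetical sketch (not the authors' code): estimate a failure rate from a
# small batch of adversarially crafted samples and compare an assumed training
# cost with the measured cost of inducing failures.
import time
import torch
import torch.nn.functional as F
import torchvision

def fgsm(model, x, y, eps=0.03):
    """Craft FGSM samples: one signed-gradient step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def failure_rate_and_cost(model, loader, eps=0.03, n_batches=4):
    """Fraction of crafted samples that flip the prediction, plus attack time."""
    model.eval()
    failures, total, t0 = 0, 0, time.time()
    for i, (x, y) in enumerate(loader):
        if i >= n_batches:  # only a small number of crafted samples
            break
        x_adv = fgsm(model, x, y, eps)
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        failures += (pred != y).sum().item()
        total += y.numel()
    return failures / max(total, 1), time.time() - t0

if __name__ == "__main__":
    # Untrained ResNet-18 on CIFAR10-shaped fake data: the numbers only
    # illustrate the bookkeeping, not the paper's results.
    model = torchvision.models.resnet18(num_classes=10)
    data = torchvision.datasets.FakeData(
        size=256, image_size=(3, 32, 32), num_classes=10,
        transform=torchvision.transforms.ToTensor())
    loader = torch.utils.data.DataLoader(data, batch_size=64)

    rate, attack_seconds = failure_rate_and_cost(model, loader)
    train_seconds = 3600.0  # assumed training cost (placeholder)
    cost_per_failure = attack_seconds / max(rate, 1e-9)
    print(f"failure rate ~ {rate:.2%}, "
          f"train/attack cost ratio ~ {train_seconds / cost_per_failure:.2f}")
```

Sweeping the other factors the abstract mentions, such as floating-point precision or noise level, would amount to repeating this measurement with the model cast to a different dtype or with noise added to the inputs or outputs.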


