
Invited Talk: Using SMT and Abstraction-Refinement for Neural Network Verification

Guy Katz¹, Yizhak Yisrael Elboher²

¹The Hebrew University of Jerusalem, Jerusalem, Israel
²The Hebrew University of Jerusalem, Jerusalem, Israel

Abstract

Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several neural network verification approaches have recently been proposed, many of them based on SMT solving. However, these approaches afford limited scalability, and applying them to large networks can be challenging. In this talk we will discuss a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network — thus making it more amenable to verification. This approximation is performed such that if the property holds for the smaller (abstract) network, it holds for the original as well. The over-approximation may be too coarse, in which case the underlying verification tool might return a spurious counterexample. Under such conditions, we can perform counterexample-guided refinement to adjust the approximation, and then repeat the process.

This approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with the recently proposed Marabou framework, and observe a significant improvement in Marabou’s performance. Our experiments demonstrate the great potential of abstraction-refinement for verifying larger neural networks.
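For readers who want the control flow in concrete terms, the following is a minimal sketch of the counterexample-guided abstraction-refinement loop the abstract describes. It is not the authors' implementation: the callables abstract, verify, violates, and refine are hypothetical placeholders standing in for, respectively, the neuron-merging abstraction step, an underlying verifier such as Marabou, a counterexample check against the original network, and counterexample-guided refinement.

from typing import Any, Callable, Optional, Tuple

def cegar_verify(
    network: Any,
    prop: Any,
    abstract: Callable[[Any], Any],
    verify: Callable[[Any, Any], Tuple[bool, Optional[Any]]],
    violates: Callable[[Any, Any, Any], bool],
    refine: Callable[[Any, Any], Any],
) -> Tuple[bool, Optional[Any]]:
    """Return (holds, counterexample) for property `prop` on `network`.

    All four callables are hypothetical stand-ins, not the authors' API.
    """
    abstract_net = abstract(network)  # smaller, over-approximating network
    while True:
        holds, cex = verify(abstract_net, prop)
        if holds:
            # Over-approximation is sound: if the property holds on the
            # abstract network, it also holds on the original.
            return True, None
        if violates(network, prop, cex):
            # The counterexample is genuine on the original network.
            return False, cex
        # Spurious counterexample: the abstraction was too coarse.
        # Refine it (e.g. split previously merged neurons) and retry.
        abstract_net = refine(abstract_net, cex)

The loop terminates because refinement strictly grows the abstract network: in the worst case it converges back to the original network, on which any counterexample is genuine.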

SMT’21: 19th International Workshop on Satisfiability Modulo Theories, July 18–19, 2021, Online

"g.katz@mail.huji.ac.il(G. Katz);yizhak.elboher@mail.huji.ac.il(Y. Y. Elboher)

© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org

