
Deceiving Autonomous Cars with Toxic Signs

Abstract

Recent studies show that state-of-the-art deep neural networks are vulnerable to adversarial examples: small-magnitude perturbations added to the input that cause the model to misclassify it. With the advent of self-driving cars, adversarial examples can have serious consequences: a car may misinterpret a traffic sign and cause an accident. In this project we analyze and test the problem, showing that it is possible to craft specific perturbations of the input images that confuse the model and, to some extent, force the network's prediction.

Requirements

To test our code you can simply create a virtual environment on your machine (tutorial), then run the following command to install all the necessary libraries:

python setup.py
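For reference, here is a minimal sketch of what a script like Setup.py typically does, assuming it just installs the packages listed in Requirements.txt; the actual script may differ:

```python
import subprocess
import sys

def install_requirements(path="Requirements.txt"):
    # Invoke the pip of the current interpreter on the requirements file.
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", path])

if __name__ == "__main__":
    install_requirements()
```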

Explanation

  • Adversarial_img: It contains all the perturbed images and the corresponding CSV files, organized in folders according to the last attack performed (read the paper for more details)
  • Blank_samples: It contains all the blank signs used for the blank_signs_attack
  • Dataset: It contains the link to the dataset used in the project
  • Logo_samples: It contains all the logo samples used for the logo_attack
  • Model: It contains the link to the trained model
  • Original_samples: It contains a set of high-definition images used to generate the adversarial images
  • Aug_examples: It contains some sample images generated by the augmentation functions
  • Attack.py: It contains the code shared by all the attacks
  • Call_model.py: It contains the network architecture, along with helper functions to call it from other files
  • Data_augmentation.py: It contains the code to augment the images and balance the dataset
  • Fg_attack.py: It contains the code of the Fast Gradient Attack (a sketch of this attack and the iterative one follows this list)
  • Histogram.py: It contains the code to plot the histogram of the unbalanced dataset
  • Iterative_attack.py: It contains the code of the Iterative Attack
  • Parameters.py: It contains the list of all parameters used in the project
  • Requirements.txt: It contains the list of all libraries needed to execute the code
  • Setup.py: It contains the code to install all libraries needed to execute the code
  • Sign_name.csv: It contains the mapping between the dataset classes, the model classes and the corresponding labels
  • Test.py: It contains the code to test the model
  • Train.py: It contains the code to train the model
  • Utils.py: It contains all the useful functions in the project
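To give an idea of how the two attacks implemented in Fg_attack.py and Iterative_attack.py work, below is a minimal sketch in TensorFlow/Keras. This is not the project's actual implementation: the framework, the epsilon/alpha values, and the function names are illustrative assumptions.

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fast_gradient_attack(model, image, label, epsilon=0.05):
    # Single-step attack: move the image by epsilon in the direction
    # of the sign of the loss gradient (FGSM-style).
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = loss_fn(label, model(image))
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)  # keep a valid pixel range

def iterative_attack(model, image, label, epsilon=0.05, alpha=0.01, steps=10):
    # Iterative variant: apply many small gradient-sign steps, projecting
    # the result back into an epsilon-ball around the original image.
    original = tf.convert_to_tensor(image)
    adversarial = original
    for _ in range(steps):
        adversarial = fast_gradient_attack(model, adversarial, label, alpha)
        adversarial = tf.clip_by_value(adversarial,
                                       original - epsilon, original + epsilon)
        adversarial = tf.clip_by_value(adversarial, 0.0, 1.0)
    return adversarial
```

The iterative variant usually finds effective perturbations within a smaller budget than the single-step attack, at the cost of one forward/backward pass per step.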

Team

Useful links

Paper: https://drive.google.com/file/d/1GtbJW3o_O1Fyb_CTORxyX4GDLOPBC_ee/view?usp=sharing
