
# adv-attack

A reading list of adversarial attack and defense papers, with links to code where available.

CW Attack (Attack on Defensive Distillation) https://arxiv.org/abs/1607.04311
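
For reference, a minimal PyTorch sketch of the C&W L2 objective as it is commonly implemented (not the authors' code; `cw_l2_attack` and all hyperparameters are illustrative): minimize ||x_adv - x||^2 + c * f(x_adv), with a tanh change of variables keeping pixels in [0, 1].

```python
import torch

def cw_l2_attack(model, x, target, c=1.0, kappa=0.0, steps=100, lr=0.01):
    """Hedged sketch of the C&W L2 attack. f is a logit-margin loss that is
    negative only once the target class wins by margin kappa."""
    # tanh change of variables keeps x_adv in [0, 1] without projection.
    w = torch.atanh((2 * x - 1).clamp(-1 + 1e-6, 1 - 1e-6)).detach().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        x_adv = 0.5 * (torch.tanh(w) + 1)
        logits = model(x_adv)
        target_logit = logits.gather(1, target[:, None]).squeeze(1)
        # Largest logit among the non-target classes.
        masked = logits.clone()
        masked.scatter_(1, target[:, None], float('-inf'))
        best_other = masked.max(dim=1).values
        f = torch.clamp(best_other - target_logit, min=-kappa)
        loss = ((x_adv - x) ** 2).flatten(1).sum(1) + c * f
        opt.zero_grad()
        loss.sum().backward()
        opt.step()
    return (0.5 * (torch.tanh(w) + 1)).detach()
```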

AAAI 2018 Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients https://arxiv.org/abs/1711.09404 https://github.com/dtak/adversarial-robustness-public

ICCV 2017 SafetyNet: Detecting and Rejecting Adversarial Examples Robustly https://arxiv.org/abs/1704.00103

ICCV 2017 Adversarial Examples for Semantic Segmentation and Object Detection https://arxiv.org/abs/1703.08603 https://github.com/cihangxie/DAG

CVPR 2018 On the Robustness of Semantic Segmentation Models to Adversarial Attacks https://arxiv.org/abs/1711.09856 https://github.com/hmph/adversarial-attacks

CVPR 2018 Deflecting Adversarial Attacks with Pixel Deflection https://arxiv.org/abs/1801.08926 https://github.com/iamaaditya/pixel-deflection
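
Pixel deflection replaces randomly chosen pixels with random nearby neighbors and then applies wavelet denoising; the sketch below covers only the deflection step for a single CHW image (names and defaults are illustrative, not the authors' implementation):

```python
import torch

def pixel_deflection(x, n_deflections=200, window=10):
    """Replace random pixels with a random neighbor from a local window.
    The paper follows this with wavelet denoising, omitted here."""
    x = x.clone()
    _, h, w = x.shape
    for _ in range(n_deflections):
        i, j = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        di = torch.randint(-window, window + 1, (1,)).item()
        dj = torch.randint(-window, window + 1, (1,)).item()
        ni = min(max(i + di, 0), h - 1)
        nj = min(max(j + dj, 0), w - 1)
        x[:, i, j] = x[:, ni, nj]
    return x
```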

CVPR 2018 Defense against Universal Adversarial Perturbations https://arxiv.org/abs/1711.05929 https://github.com/LTS4/universal

CVPR 2018 Boosting Adversarial Attacks with Momentum https://arxiv.org/abs/1710.06081 https://github.com/dongyp13/Non-Targeted-Adversarial-Attacks
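
MI-FGSM accumulates a decaying average of past gradients and steps along its sign, which stabilizes iterative FGSM and improves transferability. A hedged PyTorch sketch, assuming NCHW inputs in [0, 1] (names and defaults are illustrative):

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    """Momentum Iterative FGSM sketch (non-targeted, L_inf)."""
    alpha = eps / steps          # per-step size so the total budget is eps
    g = torch.zeros_like(x)      # momentum accumulator
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize each sample's gradient by its L1 norm before accumulating.
        l1 = grad.abs().flatten(1).sum(1).view(-1, 1, 1, 1).clamp_min(1e-12)
        g = mu * g + grad / l1
        x_adv = (x_adv.detach() + alpha * g.sign())
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```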

CVPR 2018 Art of Singular Vectors and Universal Adversarial Perturbations https://arxiv.org/abs/1709.03582 https://github.com/KhrulkovV/singular-fool

CVPR 2018 Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser https://arxiv.org/abs/1712.02976 https://github.com/lfz/Guided-Denoise

CVPR 2018 Generative Adversarial Perturbations https://arxiv.org/abs/1712.02328 https://github.com/OmidPoursaeed/Generative_Adversarial_Perturbations

ICLR 2018 Cascade Adversarial Training Regularized with a Unified Embedding https://arxiv.org/abs/1708.02582 https://github.com/taesikna/cascade_adv_training

ICLR 2018 Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality https://arxiv.org/abs/1801.02613 https://github.com/xingjunm/lid_adversarial_subspace_detection
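
The detector characterizes adversarial regions by their Local Intrinsic Dimensionality, estimated from nearest-neighbor distances within a minibatch. A sketch of the standard maximum-likelihood LID estimator the paper builds on (function and argument names are my own):

```python
import torch

def lid_mle(x, reference, k=20):
    """MLE of Local Intrinsic Dimensionality from k-NN distances:
    LID(x) ~= -((1/k) * sum_i log(r_i / r_k))^{-1}, r_i sorted ascending.
    Assumes each row of x also appears in `reference`, so the zero
    self-distance at rank 0 is skipped."""
    d = torch.cdist(x.flatten(1), reference.flatten(1))  # pairwise L2
    r = d.sort(dim=1).values[:, 1:k + 1]                 # r_1 .. r_k
    ratios = (r / r[:, -1:]).clamp_min(1e-12)
    return -1.0 / torch.log(ratios).mean(dim=1)
```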

ICLR 2018 Countering Adversarial Images Using Input Transformations https://arxiv.org/abs/1711.00117 https://github.com/facebookresearch/adversarial_image_defenses

ICLR 2018 Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models https://arxiv.org/abs/1712.04248 https://github.com/greentfrapp/boundary-attack https://github.com/bethgelab/foolbox/blob/master/foolbox/attacks/boundary_attack.py
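
The Boundary Attack needs only the model's top-1 decision: start from any misclassified image, then alternate random steps along the decision boundary with contractions toward the original image. A much-simplified single-image sketch without the paper's step-size adaptation (all names and constants are illustrative):

```python
import torch

@torch.no_grad()
def boundary_attack(model, x, y, x_adv, steps=1000, delta=0.05, eps=0.05):
    """Decision-based attack sketch for a batch of one image.
    x_adv must start adversarial, e.g. an image of another class."""
    def is_adv(z):
        return (model(z).argmax(1) != y).item()
    for _ in range(steps):
        d = x_adv - x
        # 1) Orthogonal step: random proposal rescaled back onto the sphere
        #    of radius ||x_adv - x|| around the original image.
        proposal = x_adv + delta * d.norm() * torch.randn_like(x_adv)
        proposal = x + (proposal - x) * (d.norm() / (proposal - x).norm())
        proposal = proposal.clamp(0, 1)
        if is_adv(proposal):
            x_adv = proposal
        # 2) Contraction step: move toward x while staying adversarial.
        contracted = (x_adv + eps * (x - x_adv)).clamp(0, 1)
        if is_adv(contracted):
            x_adv = contracted
    return x_adv
```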

ICLR 2018 Decision Boundary Analysis of Adversarial Examples https://openreview.net/forum?id=BkpiPMbA- https://github.com/sunblaze-ucb/decision-boundaries

ICLR 2018 Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models https://arxiv.org/abs/1805.06605 https://github.com/kabkabm/defensegan

ICLR 2018 Ensemble Adversarial Training: Attacks and Defenses https://arxiv.org/abs/1705.07204 https://github.com/ftramer/ensemble-adv-training

ICLR 2018 Generating Natural Adversarial Examples https://arxiv.org/abs/1710.11342 https://github.com/zhengliz/natural-adversary

ICLR 2018 Mitigating Adversarial Effects Through Randomization https://arxiv.org/abs/1711.01991 https://github.com/cihangxie/NIPS2017_adv_challenge_defense
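
The defense randomly resizes and zero-pads each input at inference time so that gradients are computed against a moving target. A hedged sketch of that transformation, assuming square NCHW inputs smaller than `max_size` (names are illustrative):

```python
import torch
import torch.nn.functional as F

def random_resize_pad(x, max_size=331):
    """Randomly resize, then randomly pad back to a fixed size."""
    _, _, h, _ = x.shape
    new = torch.randint(h, max_size, (1,)).item()       # random target size
    x = F.interpolate(x, size=(new, new), mode='nearest')
    pad = max_size - new
    left = torch.randint(0, pad + 1, (1,)).item()       # random pad split
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(x, (left, pad - left, top, pad - top))
```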

ICLR 2018 Spatially Transformed Adversarial Examples https://arxiv.org/abs/1801.02612 https://github.com/rakutentech/stAdv

ICLR 2018 Stochastic Activation Pruning for Robust Adversarial Defense https://arxiv.org/abs/1803.01442 https://github.com/anishathalye/obfuscated-gradients

ICLR 2018 Thermometer Encoding: One Hot Way To Resist Adversarial Examples https://openreview.net/forum?id=S18Su--CW https://github.com/Flag-C/ThermometerEncoding https://github.com/anishathalye/obfuscated-gradients/tree/master/thermometer
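
Thermometer encoding discretizes each pixel into a cumulative one-hot ("thermometer") vector, removing the small-gradient paths that attacks follow; the ICML 2018 obfuscated-gradients paper below shows this is circumventable with BPDA. A sketch of the encoding itself (name and shape convention are illustrative):

```python
import torch

def thermometer_encode(x, levels=16):
    """Encode pixels in [0, 1] as cumulative binary vectors: pixel v gets a
    1 for every threshold i/levels it exceeds. Appends a trailing dimension
    of size `levels`."""
    thresholds = torch.arange(levels, dtype=x.dtype, device=x.device) / levels
    return (x.unsqueeze(-1) > thresholds).to(x.dtype)
```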

ICLR 2018 Towards Deep Learning Models Resistant to Adversarial Attacks https://arxiv.org/abs/1706.06083 https://github.com/karandwivedi42/adversarial
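
Madry et al. cast adversarial training as a min-max problem; the inner maximization is L_inf PGD with a random start. A minimal sketch of that inner step (hyperparameters are commonly used values, not prescribed here); wrapping it in a standard training loop gives adversarial training:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L_inf PGD sketch: random start, gradient-sign ascent, projection."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()                   # ascent
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project
    return x_adv.detach()
```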

ICML 2018 Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope https://arxiv.org/abs/1711.00851 (extended by Scaling Provable Adversarial Defenses, NeurIPS 2018) https://github.com/locuslab/convex_adversarial

ICML 2018 Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples (excellent; a must-read that breaks most of the ICLR 2018 defenses above) https://arxiv.org/abs/1802.00420 https://github.com/anishathalye/obfuscated-gradients

NIPS 2017 Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation https://arxiv.org/abs/1705.08475 https://github.com/max-andr/cross-lipschitz

ICML 2018 Synthesizing Robust Adversarial Examples https://arxiv.org/abs/1707.07397 https://github.com/prabhant/synthesizing-robust-adversarial-examples

ICML 2018 Adversarial Attack on Graph Structured Data https://arxiv.org/abs/1806.02371 https://github.com/Hanjun-Dai/graph_adversarial_attack

ICML 2018 Black-box Adversarial Attacks with Limited Queries and Information https://arxiv.org/abs/1804.08598 https://github.com/labsix/limited-blackbox-attacks
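
With only query access to class probabilities, the attack estimates gradients via NES (antithetic Gaussian sampling) and feeds the estimate into PGD-style sign steps. A hedged sketch of the estimator (`loss_fn`, names, and defaults are illustrative):

```python
import torch

@torch.no_grad()
def nes_gradient(loss_fn, x, sigma=0.001, n_samples=50):
    """NES gradient estimate: average of loss-weighted Gaussian directions,
    using antithetic pairs (u, -u). `loss_fn` needs only query access,
    e.g. the model's probability of the true class at a given input."""
    g = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        g += loss_fn(x + sigma * u) * u
        g -= loss_fn(x - sigma * u) * u
    return g / (2 * n_samples * sigma)
```

In the query-limited setting this estimate simply replaces the true gradient in an iterative sign-step attack such as the PGD sketch above.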
