Antonio Emanuele Cinà
Tenure-Track Researcher (RTDA), University of Genoa
antonio.cina@unige.it
"A teacher affects eternity; he can never tell where his influence stops." — Henry Adams "The pursuit of truth and beauty is a sphere of activity in which we are permitted to remain children all our lives." — Albert Einstein

Biosketch

Antonio Emanuele Cinà, born in October 1995, is a Tenure-Track Researcher (RTDA) at the University of Genoa, Italy, where he is a member of the SAIfer Lab, conducting research and officially supervising PhD students. He teaches undergraduate and graduate courses in both Italian and English and supervises several Bachelor’s and Master’s theses.

Previously, Antonio was a postdoctoral researcher at the CISPA Helmholtz Center for Information Security, a leading research institute in Saarbrücken, Germany, where he focused on cutting-edge research in cybersecurity and machine learning security. Antonio obtained his Ph.D. with honors in January 2023 from Ca' Foscari University of Venice, where he also completed his Bachelor’s and Master’s degrees in Computer Science, both with full marks and honors. At Ca' Foscari, he received several awards for academic excellence, including recognition as the third-best Computer Science student in 2016 and as the best graduate of Ca' Foscari in 2017. He served as an elected representative of the Ph.D. program in Computer Science from 2019 to 2021 and was later named an outstanding alumnus.

Antonio is a member of the IEEE Computer Society and the ACM (Association for Computing Machinery).

Research Interests

Antonio Emanuele Cinà’s research focuses on two main fronts:

  1. Machine Learning Security and Reliability: Antonio began working in this field with his Master's thesis on the security and reliability of machine learning and deep learning models. He investigates vulnerabilities and errors that arise from spurious or adversarial correlations learned during training, which can lead to unexpected behaviors such as misclassification or the generation of toxic content. His work has contributed to categorizing these risks, developing robustness benchmarks, and creating guidelines for designing resilient models.
  2. Cybersecurity and AI for Scam Detection: During his postdoctoral research at CISPA, Antonio expanded his work to natural language processing and data clustering techniques for identifying cybercriminals and analyzing the methods they use to manipulate victims. This research aims to understand cybercriminals’ strategies and develop AI-based systems that help users identify and avoid these threats.

Research Objective

The core objective of Antonio’s research is to open the "black box" of learning models to ensure their correct, robust, reliable, and ethical use in both academic and industrial contexts. This involves thoroughly understanding these systems, identifying their vulnerabilities, interpreting the mechanisms behind failures, and addressing them to build more secure and transparent AI.

Keywords

Machine Learning Security, Adversarial ML, Cybersecurity, NLP for Cybercrime, AI Robustness

Education and Academic Experience

University of Genoa
2023 - Present | Tenure-Track Researcher (RTDA)
CISPA Helmholtz Center for Information Security
2023 | Postdoctoral Researcher
Ph.D. in Computer Science, Ca' Foscari University of Venice
2019 - 2023 | With honors
M.Sc. in Computer Science, Ca' Foscari University of Venice
2017 - 2019 | Summa cum laude
B.Sc. in Computer Science, Ca' Foscari University of Venice
2014 - 2017 | Summa cum laude, Best Graduate

Publications

Journal Articles

  • Energy-Latency Attacks via Sponge Poisoning. Information Sciences, 2025.
  • Machine Learning Security against Data Poisoning: Are We There Yet? IEEE Computer, 2024.
  • Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions. International Journal of Machine Learning and Cybernetics, 2024.
  • Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning. ACM Computing Surveys, 2023.
  • Hardening RGB-D Object Recognition Systems against Adversarial Patch Attacks. Information Sciences, 2023.
  • A Black-box Adversarial Attack for Poisoning Clustering. Pattern Recognition, 2022.

Conference Papers

  • σ-zero: Gradient-based Optimization of ℓ0-norm Adversarial Examples. International Conference on Learning Representations (ICLR), 2025.
  • Pirates of Charity: Exploring Donation-based Abuses in Social Media Platforms. The Web Conference (WWW), 2025.
  • AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples. AAAI Conference on Artificial Intelligence (AAAI), 2025.
  • Understanding XAI Through the Philosopher’s Glasses: A Historical Perspective. European Conference on Artificial Intelligence (ECAI), 2024.
  • The Imitation Game: Exploring Brand Impersonation Attacks on Social Media Platforms. USENIX Security Symposium, 2024.
  • Conning the Crypto Conman: End-to-End Analysis of Cryptocurrency-based Technical Support Scams. IEEE Symposium on Security and Privacy (SP), 2024.
  • Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training. International Conference on Image Analysis and Processing (ICIAP), 2023.
  • On the Limitations of Model Stealing with Uncertainty Quantification Models. European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), 2023.
  • The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers? International Joint Conference on Neural Networks (IJCNN), 2021.

Participation as Speaker at International Conferences and Workshops

  • 2025 – σ-zero: Gradient-based Optimization of ℓ0-norm Adversarial Examples @ ICLR
  • 2025 – AttackBench: Evaluating Gradient-based Attacks for Adversarial Examples @ AAAI
  • 2023 – On the Limitations of Model Stealing with Uncertainty Quantification Models @ ICML Workshop: New Frontiers in Adversarial Machine Learning
  • 2022 – Mislead Machine Learning @ ITASEC Workshop: AI for Security and Security of AI
  • 2021 – Explaining Backdoor Poisoning @ ICCV Workshop: Adversarial Robustness in the Real World
  • 2021 – Is Bilevel Optimization Really Needed to Poison Linear Classifiers? @ IJCNN

Participation as Invited Speaker at International Conferences and Workshops

  • 2024 – Foundations of LLMs, Applications, and Security Risks – Keynote, ICMLC
  • 2024 – Reliable Machine Learning Security Benchmarking – ML Security Workshop @ ICMLC
  • 2024 – Robust ML: Benchmarking Best Practices – ML for Cybersecurity Workshop @ ECML PKDD
  • 2023 – Training with Malicious Teachers – Robustness in Deep Learning Workshop @ GAMM
  • 2022 – Where ML Security Is Broken: Poisoning Attacks – Security of ML @ Dagstuhl Seminar
  • 2022 – Mislead Machine Learning – ASSG: AppSec and Cybersecurity Governance

Other Activities as Speaker

2024

  • Foundations of LLMs, Applications, and Security Risks – PRALAB, University of Cagliari
  • Data & MLOps in Sustainable Transportation & Logistics – University of Pisa, Spoke 10 PNRR
  • Training with Malicious Teachers: Poisoning Attacks – PRALAB, University of Cagliari
  • Handling Scientific Experiments with HPC Clusters & Slurm – PRALAB, University of Cagliari

2023

  • Training with Malicious Teachers – SoSySec Seminar, INRIA
  • Dose Makes the Poison – SAILab, Siena
  • Dose Makes the Poison – SmartLab, Prof. Fabio Roli

2022

  • AI in the Film Industry – AIA, Villorba
  • Mislead Machine Learning – Codemotion Online Seminar