AI RMF Podcast 09 - NIST AI 100-2e2025 - Adversarial Machine Learning
About this title
NIST AI 100-2e2025, Adversarial Machine Learning, from the National Institute of Standards and Technology, examines the security risks posed by malicious actors who intentionally manipulate machine learning systems and outlines strategies for strengthening their resilience. The report explains how adversarial attacks can occur at different phases of the AI lifecycle, including data poisoning during training, model evasion through carefully crafted inputs, model extraction, and inference-time manipulation. It emphasizes that AI systems introduce attack surfaces beyond traditional cybersecurity threats, requiring specialized approaches to risk assessment, testing, and monitoring. The publication promotes secure-by-design principles, robust evaluation techniques, red-teaming, and continuous monitoring to detect and mitigate adversarial behavior. Ultimately, NIST AI 100-2e2025 reinforces the need to integrate AI security into broader risk management and governance frameworks so that machine learning systems remain reliable, trustworthy, and resilient in adversarial environments.
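To make the evasion category concrete: the classic textbook example is the fast gradient sign method (FGSM), which perturbs an input in the direction of the loss gradient's sign to flip a classifier's prediction. The sketch below uses PyTorch with a placeholder model, input, label, and epsilon of our own choosing; none of these specifics come from the NIST report, which is a taxonomy and terminology document rather than code.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier; any differentiable model would do here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28)   # placeholder input scaled to [0, 1]
y = torch.tensor([3])          # placeholder ground-truth label
epsilon = 0.1                  # assumed L-infinity perturbation budget

# Compute the loss gradient with respect to the input itself.
x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y)
loss.backward()

# FGSM step: move each pixel by epsilon in the gradient's sign direction,
# then clamp back to the valid input range.
x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

Against a trained model, the adversarial prediction often differs from the clean one even though the perturbation is imperceptible to a human; evasion mitigations such as adversarial training target exactly this gap.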
