Given the growing role of machine learning and AI in the public sphere, it is increasingly important to ensure that these technologies are safe and secure. To that end, technology companies are turning to 'Red Teams': groups of information security experts who assume the role of potential attackers in order to test and improve security measures. NVIDIA's AI Red Team, composed of data scientists and offensive security professionals, uses a range of tools and methods to assess the company's ML systems and mitigate significant risks.
The Red Team's assessment methodology is built around a comprehensive framework that encompasses not only the red teaming activity itself but also the other related assessments an ML system undergoes. This holistic approach lets the team address issues across the different components of the ML pipeline while giving upper management a complete view of the system's security posture. The framework rests on two pillars: governance, risk, and compliance (GRC), which captures the business's security requirements, and the machine learning development lifecycle, which details the activities GRC needs insight into. NVIDIA's Red Team methodology spans the principal concerns for ML systems, including handling technical vulnerabilities, addressing harmful scenarios, and dealing with model vulnerabilities.
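As a rough illustration of how such a framework might be operationalized, the Python sketch below organizes assessment findings under the two pillars and the three concern categories named above so they can be rolled up into a single view for GRC. The pillar, category, and finding names are assumptions introduced here for illustration, not details taken from NVIDIA's actual tooling.

```python
from collections import defaultdict
from dataclasses import dataclass
from enum import Enum


class Pillar(Enum):
    """The two pillars described above (labels are illustrative)."""
    GRC = "governance, risk, and compliance"
    ML_LIFECYCLE = "machine learning development lifecycle"


class Concern(Enum):
    """Concern categories mentioned in the methodology."""
    TECHNICAL_VULNERABILITY = "technical vulnerability"
    HARMFUL_SCENARIO = "harmful scenario"
    MODEL_VULNERABILITY = "model vulnerability"


@dataclass
class Finding:
    """A single assessment finding (hypothetical structure)."""
    title: str
    pillar: Pillar
    concern: Concern
    severity: str  # e.g. "low", "medium", "high"


def summarize(findings: list[Finding]) -> dict[str, int]:
    """Roll findings up by concern category for a consolidated GRC report."""
    counts: dict[str, int] = defaultdict(int)
    for finding in findings:
        counts[finding.concern.value] += 1
    return dict(counts)


if __name__ == "__main__":
    # Hypothetical findings from a single assessment cycle.
    findings = [
        Finding("Unpatched inference server", Pillar.ML_LIFECYCLE,
                Concern.TECHNICAL_VULNERABILITY, "high"),
        Finding("Prompt elicits unsafe output", Pillar.ML_LIFECYCLE,
                Concern.HARMFUL_SCENARIO, "medium"),
        Finding("Model leaks training data", Pillar.ML_LIFECYCLE,
                Concern.MODEL_VULNERABILITY, "high"),
    ]
    print(summarize(findings))
```

The point of the sketch is simply that findings from different assessment activities share a common schema, so issues found anywhere in the ML pipeline can be aggregated into one picture for the GRC function rather than reported in isolation.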
