AI systems are exposed to many new forms of cyberattack, in addition to all the legacy forms of cyberattack. Understanding these new attack forms is essential to protecting your AI systems and to identifying whether an attack has occurred. A key understanding is that conventional IT security methods are relevant but insufficient to secure an AI system or to determine whether a cyberattack has occurred: they may be lacking in both scale and scope, and they fail to address attacks unique to AI.

It is also important to understand that a cyberattack may leave the AI system functioning “normally” and be difficult to detect:

- A cyberattack might be engineered by means of the successful, planned use of the AI system itself.
- A cyberattack might alter the scope of “successful” outcomes in such a way as to defeat the design goals of the system.
- Additionally, a cyberattack might be implemented via an attack on the sensors or environmental systems (e.g., power) used by an AI system.

This presentation from industry thought leader Allan Cytryn reviews the similarities and differences between attacks on conventional IT systems and attacks on AI systems, and provides a framework for beginning to understand, dimension, and prepare for cyberattacks on AI systems.