{"id":98616,"date":"2020-02-19T01:01:24","date_gmt":"2020-02-19T09:01:24","guid":{"rendered":"\/blogs\/?p=98616"},"modified":"2025-06-02T01:28:53","modified_gmt":"2025-06-02T08:28:53","slug":"introduction-and-application-of-model-hacking","status":"publish","type":"post","link":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/","title":{"rendered":"Introduction and Application of Model Hacking"},"content":{"rendered":"<p><em>Catherine Huang, Ph.D., and Shivangee Trivedi contributed to this blog.<\/em><\/p>\n<p>The term \u201cAdversarial Machine Learning\u201d (AML) is a mouthful!\u00a0 The term describes a research field regarding the study and design of adversarial attacks targeting Artificial Intelligence (AI) models and features.\u00a0 Even this simple definition can send the most knowledgeable security practitioner running!\u00a0 We\u2019ve coined the easier term \u201cmodel hacking\u201d to enhance the reader\u2019s comprehension of this increasing threat.\u00a0 In this blog, we will decipher this very important topic and provide examples of the real-world implications, including findings stemming from the combined efforts of McAfee\u2019s Advanced Analytic Team (AAT) and Advanced Threat Research (ATR) for a critical threat in autonomous driving.<\/p>\n<ol>\n<li>\n<h2><strong><u> First, the Basics<\/u><\/strong><\/h2>\n<\/li>\n<\/ol>\n<p>AI is interpreted by most markets to include Machine Learning (ML), Deep Learning (DL), and actual AI, and we will succumb to using this general term of AI here. \u00a0Within AI, the <em>model <\/em>\u2013 a mathematical algorithm that provides insights to enable business results \u2013 can be attacked without knowledge of the actual model created.\u00a0 <em>Features<\/em> are those characteristics of a model that define the output desired.\u00a0 Features can <em>also <\/em>be attacked without knowledge of the features used! 
\u00a0What we have just described is known as a \u201cblack box\u201d attack in AML \u2013 not knowing the model and features \u2013 or \u201cmodel hacking.\u201d\u00a0 Whether the models and\/or features are known or unknown, attacks can increase false positives or negatives and go unnoticed unless these vulnerabilities are monitored and, ultimately, protected and corrected.<\/p>\n<p>In the feedback learning loop of AI, the model is retrained recurrently in order to comprehend new threats and stay current (see <u>Figure 1<\/u>).\u00a0 With model hacking, the attacker can poison the Training Set.\u00a0 However, the Test Set can also be hacked, causing false negatives to increase, evading the model\u2019s intent, and causing the model\u2019s decisions to be misclassified.\u00a0 Simply by perturbing \u2013 changing the magnitudes of a few features (such as pixels for images), flipping zeros to ones\/ones to zeros, or removing a few features \u2013 the attacker can wreak havoc in security operations with disastrous effects. 
\u00a0Hackers will continue to \u201cping\u201d unobtrusively until they are rewarded with nefarious outcomes \u2013 and they don\u2019t even have to attack with the same model that we are using initially!<\/p>\n<ol start=\"2\">\n<li>\n<h2><strong><u> Digital Attacks of Images and Malware<\/u><\/strong><\/h2>\n<\/li>\n<\/ol>\n<p>Hackers\u2019 goals can be <em>targeted<\/em> (specific features and one specific error class) or <em>non-targeted<\/em> (indiscriminate classifiers and more than one specific error class), <em>digital <\/em>(e.g., images, audio) or <em>physical<\/em> (e.g., speed limit sign).\u00a0 <u>Figure 2<\/u> shows a rockhopper penguin targeted digitally.\u00a0 In a white-box evasion example (we knew the model and the features), just a few pixel changes and the poor penguin is now classified, with excellent accuracy, as a frying pan or a computer.<\/p>\n<p>While most current model hacking research focuses on image recognition, we have investigated evasion attacks and mitigation methods for malware detection and static analysis.\u00a0 We utilized DREBIN<a href=\"#_ftn1\" name=\"_ftnref1\">[1]<\/a>, an Android malware dataset, and replicated the results of Grosse, et al., 2016<a href=\"#_ftn2\" name=\"_ftnref2\">[2]<\/a>.\u00a0 Utilizing 625 malware samples highlighting FakeInstaller, along with 120K benign samples and 5.5K malware samples, we developed a four-layer deep neural network with about 1.5K features (see <u>Figure 3<\/u>). 
\u00a0However, following an evasion attack that modified fewer than 10 features, the malware evaded the neural net nearly 100% of the time.\u00a0 This, of course, is a concern to all of us.<\/p>\n<p>Using the CleverHans<a href=\"#_ftn3\" name=\"_ftnref3\">[3]<\/a> open-source library\u2019s Jacobian Saliency Map Approach (JSMA) algorithm, we generated perturbations creating adversarial examples.\u00a0 Adversarial examples are inputs to ML models that an attacker has intentionally designed to cause the model to make a mistake<a href=\"#_ftn4\" name=\"_ftnref4\">[4]<\/a>.\u00a0 The JSMA algorithm requires that only a minimal number of features be modified.\u00a0 <u>Figure 4<\/u> shows the original malware sample (detected as malware with 91% confidence).\u00a0 After adding just two API calls in a white-box attack, the adversarial example is now classified as benign with 100% confidence. Obviously, that can be catastrophic!<\/p>\n<p>In 2016, Papernot<a href=\"#_ftn5\" name=\"_ftnref5\">[5]<\/a> demonstrated that an attacker doesn\u2019t need to know the exact model that is utilized in detecting malware. 
\u00a0Demonstrating this theory of <em>transferability<\/em> in <u>Figure 5<\/u>, the attacker constructed a source (or <em>substitute<\/em>) model using a K-Nearest Neighbors (KNN) algorithm, creating adversarial examples that targeted a Support Vector Machine (SVM) algorithm.\u00a0 The attack achieved an 82.16% success rate, ultimately proving that substituting one model for another and transferring adversarial examples between them makes black-box attacks not only possible, but highly successful.<\/p>\n<p>In a black-box attack, samples from the DREBIN Android malware dataset were initially detected as malware 92% of the time.\u00a0 However, using a substitute model and transferring the adversarial examples to the victim (i.e., target) system, we were able to reduce the detection of the malware to nearly <em>zero<\/em>.\u00a0 Another catastrophic example!<\/p>\n<ol start=\"3\">\n<li>\n<h2><u> Physical Attack of Traffic Signs<\/u><\/h2>\n<\/li>\n<\/ol>\n<p>While malware represents the most common artifact deployed by cybercriminals to attack victims, numerous other targets exist that pose equal or perhaps even greater threats. Over the last 18 months, we have studied what has increasingly become an industry research trend: digital and physical attacks on traffic signs. Research in this area dates back several years and has since been replicated and enhanced in numerous publications. We initially set out to reproduce one of the original <a href=\"https:\/\/arxiv.org\/abs\/1707.08945\" target=\"_blank\" rel=\"noopener noreferrer\">papers<\/a> on the topic, and built a highly robust classifier, using an RGB (Red Green Blue) webcam to classify stop signs from the <a href=\"http:\/\/cvrr.ucsd.edu\/LISA\/lisa-traffic-sign-dataset.html\" target=\"_blank\" rel=\"noopener noreferrer\">LISA<\/a><a href=\"#_ftn6\" name=\"_ftnref6\">[6]<\/a> traffic sign data set. The model performed exceptionally well, handling lighting, viewing angles, and sign obstruction. 
Over a period of several months, we developed model hacking code to cause both untargeted and targeted attacks on the sign, in both the digital and physical realms. Following on this success, we extended the attack vector to speed limit signs, recognizing that modern vehicles increasingly implement camera-based speed limit sign detection, not just as input to the Heads-Up-Display (HUD) on the vehicle, but in some cases, as input to the actual driving policy of the vehicle. Ultimately, we discovered that minuscule modifications to speed limit signs could allow an attacker to influence the autonomous driving features of the vehicle, controlling the speed of the adaptive cruise control! For more detail on this research, please refer to our extensive <a href=\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/model-hacking-adas-to-pave-safer-roads-for-autonomous-vehicles\/\" target=\"_blank\" rel=\"noopener noreferrer\">blog post on the topic<\/a>.<\/p>\n<ol start=\"4\">\n<li>\n<h2><u> Detecting and Protecting Against Model Hacking<\/u><\/h2>\n<\/li>\n<\/ol>\n<p>The good news is that much like classic software vulnerabilities, model hacking is possible to defend against, and the industry is taking advantage of this rare opportunity to address the threat before it becomes of real value to the adversary. Detecting and protecting against model hacking continues to develop with many articles published weekly.<\/p>\n<p>Detection methods include ensuring that all software patches have been installed, closely monitoring drift of False Positives and False Negatives, noting cause and effect of having to change thresholds, retraining frequently, and auditing decay in the field (i.e., model reliability). \u00a0Explainable AI (\u201cXAI\u201d) is being examined in the research field for answering \u201cwhy did this NN make the decision it did?\u201d but can also be applied to small changes in prioritized features to assess potential model hacking. 
\u00a0In addition, human-machine teaming is critical to ensure that machines are not working autonomously and have oversight from humans-in-the-loop.\u00a0 Machines currently do not understand context; humans do, and can consider all possible root causes and mitigations of a nearly imperceptible shift in metrics.<\/p>\n<p>Protection methods commonly employed include many analytic solutions: Feature Squeezing and Reduction, Distillation, adding noise, Multiple Classifier Systems, Reject on Negative Impact (RONI), and many others, including combinatorial solutions.\u00a0 Each method has pros and cons, and the reader is encouraged to consider their specific ecosystem and security metrics when selecting the appropriate method.<\/p>\n<ol start=\"5\">\n<li>\n<h2><u> Model Hacking Threats and Ongoing Research<\/u><\/h2>\n<\/li>\n<\/ol>\n<p>While there has been no documented report of model hacking in the wild <em>yet<\/em>, the increase in research over the past few years is notable: from fewer than 50 literature articles in 2014 to over 1,500 in 2020.\u00a0 And it would be ignorant of us to assume that sophisticated hackers aren\u2019t reading this literature.\u00a0 It is also notable that, perhaps for the first time in cybersecurity, a body of researchers has <em>proactively<\/em> developed attacks, detection, and protection against these unique vulnerabilities.<\/p>\n<p>We will continue to add to the greater body of knowledge of model hacking attacks, as well as ensure that the solutions we implement have built-in detection and protection.\u00a0 Our research excels in targeting the latest algorithms, such as GANs (Generative Adversarial Networks), in malware detection, facial recognition, and image libraries.\u00a0 We are also in the process of transferring traffic sign model hacking to further real-world examples.<\/p>\n<p>Lastly, we believe McAfee leads the security industry in this critical area. 
One aspect that sets McAfee apart is the unique relationship and cross-team collaboration between ATR and AAT. Each leverages its unique skillset: ATR, with its in-depth and leading-edge security research capabilities, and AAT, with its world-class data analytics and artificial intelligence expertise. When combined, these teams are able to do something few can: predict, research, analyze, and defend against threats in an emerging attack vector with unique components, before malicious actors have even begun to understand or weaponize the threat.<\/p>\n<p>For further reading, please see any of the references cited, or \u201cIntroduction to Adversarial Machine Learning\u201d at <a href=\"https:\/\/mascherari.press\/introduction-to-adversarial-machine-learning\/\" target=\"_blank\" rel=\"noopener noreferrer\">https:\/\/mascherari.press\/introduction-to-adversarial-machine-learning\/<\/a><\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"#_ftnref1\" name=\"_ftn1\">[1]<\/a> Courtesy of Technische Universit\u00e4t Braunschweig.<\/p>\n<p><a href=\"#_ftnref2\" name=\"_ftn2\">[2]<\/a> Grosse, Kathrin, Nicolas Papernot, et al. \u201cAdversarial Perturbations Against Deep Neural Networks for Malware Classification.\u201d Cornell University Library. 16 Jun 2016.<\/p>\n<p><a href=\"#_ftnref3\" name=\"_ftn3\">[3]<\/a> CleverHans: An adversarial example library for constructing attacks, building defenses, and benchmarking both, located at <a href=\"https:\/\/github.com\/tensorflow\/cleverhans\">https:\/\/github.com\/tensorflow\/cleverhans<\/a>.<\/p>\n<p><a href=\"#_ftnref4\" name=\"_ftn4\">[4]<\/a> Goodfellow, Ian, et al. \u201cGenerative Adversarial Nets.\u201d <a href=\"https:\/\/papers.nips.cc\/paper\/5423-generative-adversarial-nets.pdf\">https:\/\/papers.nips.cc\/paper\/5423-generative-adversarial-nets.pdf<\/a>.<\/p>\n<p><a href=\"#_ftnref5\" name=\"_ftn5\">[5]<\/a> Papernot, Nicolas, et al. 
\u201cTransferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples\u201d \u00a0<a href=\"https:\/\/arxiv.org\/abs\/1605.07277\">https:\/\/arxiv.org\/abs\/1605.07277<\/a>.<\/p>\n<p><a href=\"#_ftnref6\" name=\"_ftn6\">[6]<\/a> LISA = Laboratory for Intelligent and Safe Automobiles<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Catherine Huang, Ph.D., and Shivangee Trivedi contributed to this blog. The term \u201cAdversarial Machine Learning\u201d (AML) is a mouthful!\u00a0 The&#8230;<\/p>\n","protected":false},"author":1004,"featured_media":98658,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[442],"tags":[],"coauthors":[5354,4737],"class_list":["post-98616","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-mcafee-labs"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.4 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Introduction and Application of Model Hacking | McAfee Blog<\/title>\n<meta name=\"description\" content=\"Catherine Huang, Ph.D., and Shivangee Trivedi contributed to this blog. The term \u201cAdversarial Machine Learning\u201d (AML) is a mouthful!\u00a0 The term describes a\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Introduction and Application of Model Hacking | McAfee Blog\" \/>\n<meta property=\"og:description\" content=\"Catherine Huang, Ph.D., and Shivangee Trivedi contributed to this blog. 
The term \u201cAdversarial Machine Learning\u201d (AML) is a mouthful!\u00a0 The term describes a\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/\" \/>\n<meta property=\"og:site_name\" content=\"McAfee Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/McAfee\/\" \/>\n<meta property=\"article:published_time\" content=\"2020-02-19T09:01:24+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-06-02T08:28:53+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2020\/02\/shutterstock_718196968.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1268\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Steve Povolny, Celeste Fralick\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@spovolny\" \/>\n<meta name=\"twitter:site\" content=\"@McAfee\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Steve Povolny, Celeste Fralick\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/\"},\"author\":{\"name\":\"Steve Povolny\",\"@id\":\"https:\/\/www.mcafee.com\/blogs\/#\/schema\/person\/210ec6c1c7e372f17c4b1109f06b8267\"},\"headline\":\"Introduction and Application of Model Hacking\",\"datePublished\":\"2020-02-19T09:01:24+00:00\",\"dateModified\":\"2025-06-02T08:28:53+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/\"},\"wordCount\":1597,\"publisher\":{\"@id\":\"https:\/\/www.mcafee.com\/blogs\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2020\/02\/shutterstock_718196968.jpg\",\"articleSection\":[\"McAfee Labs\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/\",\"url\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/\",\"name\":\"Introduction and Application of Model Hacking | McAfee 
Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.mcafee.com\/blogs\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2020\/02\/shutterstock_718196968.jpg\",\"datePublished\":\"2020-02-19T09:01:24+00:00\",\"dateModified\":\"2025-06-02T08:28:53+00:00\",\"description\":\"Catherine Huang, Ph.D., and Shivangee Trivedi contributed to this blog. The term \u201cAdversarial Machine Learning\u201d (AML) is a mouthful!\u00a0 The term describes a\",\"breadcrumb\":{\"@id\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#primaryimage\",\"url\":\"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2020\/02\/shutterstock_718196968.jpg\",\"contentUrl\":\"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2020\/02\/shutterstock_718196968.jpg\",\"width\":1920,\"height\":1268},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Blog\",\"item\":\"https:\/\/www.mcafee.com\/blogs\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Other 
Blogs\",\"item\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"McAfee Labs\",\"item\":\"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/\"},{\"@type\":\"ListItem\",\"position\":4,\"name\":\"Introduction and Application of Model Hacking\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.mcafee.com\/blogs\/#website\",\"url\":\"https:\/\/www.mcafee.com\/blogs\/\",\"name\":\"McAfee Blog\",\"description\":\"Internet Security News\",\"publisher\":{\"@id\":\"https:\/\/www.mcafee.com\/blogs\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.mcafee.com\/blogs\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.mcafee.com\/blogs\/#organization\",\"name\":\"McAfee\",\"url\":\"https:\/\/www.mcafee.com\/blogs\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.mcafee.com\/blogs\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2023\/02\/mcafee-logo.png\",\"contentUrl\":\"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2023\/02\/mcafee-logo.png\",\"width\":1286,\"height\":336,\"caption\":\"McAfee\"},\"image\":{\"@id\":\"https:\/\/www.mcafee.com\/blogs\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/McAfee\/\",\"https:\/\/x.com\/McAfee\",\"https:\/\/www.linkedin.com\/company\/mcafee\/\",\"https:\/\/www.youtube.com\/McAfee\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.mcafee.com\/blogs\/#\/schema\/person\/210ec6c1c7e372f17c4b1109f06b8267\",\"name\":\"Steve 
Povolny\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.mcafee.com\/blogs\/#\/schema\/person\/image\/d83e09f6a46193cbf6406c6f30ba3fde\",\"url\":\"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2019\/04\/steve_p_mcafee-96x96.png\",\"contentUrl\":\"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2019\/04\/steve_p_mcafee-96x96.png\",\"caption\":\"Steve Povolny\"},\"description\":\"Steve Povolny is the Head of Advanced Threat Research for McAfee Enterprise, which delivers groundbreaking vulnerability research spanning nearly every industry. With more than a decade of experience in network security, Steve is a recognized authority on hardware and software vulnerabilities, and regularly collaborates with influencers in academia, government, law enforcement, consumers and enterprise businesses of all sizes. Steve is a sought after public speaker and media commentator who often blogs on key topics. He brings his passion for threat research and a unique vision to harness the power of collaboration between the research community and product vendors, through responsible disclosure, for the benefit of all.\",\"sameAs\":[\"https:\/\/www.linkedin.com\/in\/steve-povolny-595a776\/\",\"https:\/\/x.com\/spovolny\"],\"url\":\"https:\/\/www.mcafee.com\/blogs\/author\/steve-povolny\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Introduction and Application of Model Hacking | McAfee Blog","description":"Catherine Huang, Ph.D., and Shivangee Trivedi contributed to this blog. 
The term \u201cAdversarial Machine Learning\u201d (AML) is a mouthful!\u00a0 The term describes a","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"og_locale":"en_US","og_type":"article","og_title":"Introduction and Application of Model Hacking | McAfee Blog","og_description":"Catherine Huang, Ph.D., and Shivangee Trivedi contributed to this blog. The term \u201cAdversarial Machine Learning\u201d (AML) is a mouthful!\u00a0 The term describes a","og_url":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/","og_site_name":"McAfee Blog","article_publisher":"https:\/\/www.facebook.com\/McAfee\/","article_published_time":"2020-02-19T09:01:24+00:00","article_modified_time":"2025-06-02T08:28:53+00:00","og_image":[{"width":1920,"height":1268,"url":"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2020\/02\/shutterstock_718196968.jpg","type":"image\/jpeg"}],"author":"Steve Povolny, Celeste Fralick","twitter_card":"summary_large_image","twitter_creator":"@spovolny","twitter_site":"@McAfee","twitter_misc":{"Written by":"Steve Povolny, Celeste Fralick","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#article","isPartOf":{"@id":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/"},"author":{"name":"Steve Povolny","@id":"https:\/\/www.mcafee.com\/blogs\/#\/schema\/person\/210ec6c1c7e372f17c4b1109f06b8267"},"headline":"Introduction and Application of Model Hacking","datePublished":"2020-02-19T09:01:24+00:00","dateModified":"2025-06-02T08:28:53+00:00","mainEntityOfPage":{"@id":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/"},"wordCount":1597,"publisher":{"@id":"https:\/\/www.mcafee.com\/blogs\/#organization"},"image":{"@id":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#primaryimage"},"thumbnailUrl":"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2020\/02\/shutterstock_718196968.jpg","articleSection":["McAfee Labs"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/","url":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/","name":"Introduction and Application of Model Hacking | McAfee 
Blog","isPartOf":{"@id":"https:\/\/www.mcafee.com\/blogs\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#primaryimage"},"image":{"@id":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#primaryimage"},"thumbnailUrl":"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2020\/02\/shutterstock_718196968.jpg","datePublished":"2020-02-19T09:01:24+00:00","dateModified":"2025-06-02T08:28:53+00:00","description":"Catherine Huang, Ph.D., and Shivangee Trivedi contributed to this blog. The term \u201cAdversarial Machine Learning\u201d (AML) is a mouthful!\u00a0 The term describes a","breadcrumb":{"@id":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#primaryimage","url":"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2020\/02\/shutterstock_718196968.jpg","contentUrl":"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2020\/02\/shutterstock_718196968.jpg","width":1920,"height":1268},{"@type":"BreadcrumbList","@id":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/introduction-and-application-of-model-hacking\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Blog","item":"https:\/\/www.mcafee.com\/blogs\/"},{"@type":"ListItem","position":2,"name":"Other Blogs","item":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/"},{"@type":"ListItem","position":3,"name":"McAfee 
Labs","item":"https:\/\/www.mcafee.com\/blogs\/other-blogs\/mcafee-labs\/"},{"@type":"ListItem","position":4,"name":"Introduction and Application of Model Hacking"}]},{"@type":"WebSite","@id":"https:\/\/www.mcafee.com\/blogs\/#website","url":"https:\/\/www.mcafee.com\/blogs\/","name":"McAfee Blog","description":"Internet Security News","publisher":{"@id":"https:\/\/www.mcafee.com\/blogs\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.mcafee.com\/blogs\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.mcafee.com\/blogs\/#organization","name":"McAfee","url":"https:\/\/www.mcafee.com\/blogs\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.mcafee.com\/blogs\/#\/schema\/logo\/image\/","url":"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2023\/02\/mcafee-logo.png","contentUrl":"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2023\/02\/mcafee-logo.png","width":1286,"height":336,"caption":"McAfee"},"image":{"@id":"https:\/\/www.mcafee.com\/blogs\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/McAfee\/","https:\/\/x.com\/McAfee","https:\/\/www.linkedin.com\/company\/mcafee\/","https:\/\/www.youtube.com\/McAfee"]},{"@type":"Person","@id":"https:\/\/www.mcafee.com\/blogs\/#\/schema\/person\/210ec6c1c7e372f17c4b1109f06b8267","name":"Steve Povolny","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.mcafee.com\/blogs\/#\/schema\/person\/image\/d83e09f6a46193cbf6406c6f30ba3fde","url":"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2019\/04\/steve_p_mcafee-96x96.png","contentUrl":"https:\/\/www.mcafee.com\/blogs\/wp-content\/uploads\/2019\/04\/steve_p_mcafee-96x96.png","caption":"Steve Povolny"},"description":"Steve Povolny is the Head of Advanced Threat Research for McAfee 
Enterprise, which delivers groundbreaking vulnerability research spanning nearly every industry. With more than a decade of experience in network security, Steve is a recognized authority on hardware and software vulnerabilities, and regularly collaborates with influencers in academia, government, law enforcement, consumers and enterprise businesses of all sizes. Steve is a sought after public speaker and media commentator who often blogs on key topics. He brings his passion for threat research and a unique vision to harness the power of collaboration between the research community and product vendors, through responsible disclosure, for the benefit of all.","sameAs":["https:\/\/www.linkedin.com\/in\/steve-povolny-595a776\/","https:\/\/x.com\/spovolny"],"url":"https:\/\/www.mcafee.com\/blogs\/author\/steve-povolny\/"}]}},"_links":{"self":[{"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/posts\/98616","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/users\/1004"}],"replies":[{"embeddable":true,"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/comments?post=98616"}],"version-history":[{"count":3,"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/posts\/98616\/revisions"}],"predecessor-version":[{"id":214822,"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/posts\/98616\/revisions\/214822"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/media\/98658"}],"wp:attachment":[{"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/media?parent=98616"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/categories?post=98616"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.mcafee.com\/
blogs\/wp-json\/wp\/v2\/tags?post=98616"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.mcafee.com\/blogs\/wp-json\/wp\/v2\/coauthors?post=98616"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}