Soujanya Syamal

Microsoft has released an open-source tool to test the security of artificial intelligence systems.

To better understand ML system security, Microsoft conducted a study titled 'Adversarial Machine Learning - Industry Perspectives.'

Images, audio, emails, and other forms of input feed artificial intelligence systems, which has made filtering, handling, and identifying malicious inputs and behaviours more difficult. Cybersecurity is among the top challenges for businesses all over the world. According to an Adversa survey, the number of AI security papers grew from 617 in 2018 to over 1,500 in 2020, a sign of how important the field has become.


Microsoft recently announced the open-source release of Counterfit, a tool for testing the security of AI systems.


The tech giant's latest move could be a game-changer in establishing a robust AI ecosystem.


The technology behind the scenes

The tool comes in handy for penetration testing and red teaming of AI systems. It ships with published attack algorithms preloaded, so known attacks can be launched out of the box. Security professionals can also use the target interface and built-in cmd2 scripting engine to hook into Counterfit from existing offensive tools.
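To make the idea concrete, here is a minimal, hypothetical sketch of that pattern in Python: a model is wrapped behind a single predict() call so a scripted attack can probe it as a black box. The names SimpleTarget and random_noise_attack are illustrative assumptions, not Counterfit's actual API.

```python
import numpy as np

class SimpleTarget:
    """Wraps any model behind one queryable predict() interface."""
    def __init__(self, model_fn):
        self.model_fn = model_fn  # any callable returning class scores

    def predict(self, x: np.ndarray) -> np.ndarray:
        return self.model_fn(x)

def random_noise_attack(target, x, eps=0.05, tries=100, seed=0):
    """Naive black-box evasion: probe the target with small random
    perturbations until its predicted class changes."""
    rng = np.random.default_rng(seed)
    original = target.predict(x).argmax()
    for _ in range(tries):
        x_adv = np.clip(x + rng.uniform(-eps, eps, x.shape), 0.0, 1.0)
        if target.predict(x_adv).argmax() != original:
            return x_adv  # evasion succeeded
    return None  # this simple probe did not fool the model
```

Real toolkits ship far stronger attacks than random noise, but the black-box query pattern is the same.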


According to the Microsoft blog, "This tool was born out of our own need to assess Microsoft's AI systems for vulnerabilities, with the goal of proactively securing AI services, in accordance with Microsoft's responsible AI principles and Responsible AI Strategy in Engineering (RAISE) initiative."

The tool aims to make attacks publicly accessible to the security community so that fixes can be made quickly, while also providing an interface for creating, managing, and launching attacks on models. Furthermore, it uses terms and workflows similar to Metasploit and PowerShell Empire, offensive tools already in wide use.


Microsoft recommends using Counterfit in combination with the Adversarial ML Threat Matrix, an ATT&CK-style framework for identifying AI threats developed by MITRE and Microsoft.


Counterfit is a tool for scanning AI models. Security professionals can run attacks with default settings, set random parameters, or customise them for extensive vulnerability coverage. Organisations can use the tool to simulate attacks on their own models and, once the exposed flaws are patched, be better prepared for real ones. Counterfit also has logging features that record attacks against a target model; the insights from this data can help data scientists and engineers better understand how their AI systems fail.
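The sketch below is a hedged illustration of that scan-and-log workflow, not Counterfit's actual commands: it runs an attack function over a set of samples with several parameter sets (defaults, then variations) and logs one JSON record per attempt for later review.

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai-scan")

def scan(target, attack_fn, samples, param_sets):
    """Run attack_fn over each sample with every parameter set,
    logging one JSON record per attempt for later analysis."""
    results = []
    for i, x in enumerate(samples):
        for params in param_sets:  # e.g. defaults, then randomised values
            adversarial = attack_fn(target, x, **params)
            record = {"sample": i, "params": params,
                      "evaded": adversarial is not None}
            log.info(json.dumps(record))
            results.append(record)
    return results
```

A run might pass param_sets=[{}, {"eps": 0.1}] to compare the default attack strength against a larger perturbation budget.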


Microsoft previously conducted a study titled "Adversarial Machine Learning – Industry Perspectives" to better understand the security of ML systems. According to the report, tech giants such as Google, Amazon, Microsoft, and Tesla have invested heavily in machine learning systems.


“Through interviews with 28 organisations, we discovered that most machine learning engineers and incident responders lack the tactical and strategic tools to protect, detect, and respond to adversarial attacks on industry-grade machine learning systems,” Microsoft said.


A way forward

AI researchers have shown that computer vision algorithms can be fooled by applying small black and white stickers to stop signs. According to the research, even the most advanced deep neural networks can fail under small input perturbations, which could have disastrous consequences.
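For readers unfamiliar with such perturbation attacks, below is a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. It is a generic illustration, not the stop-sign study's exact technique, and assumes model is any trained classifier with pixel inputs in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    """Nudge x by eps in the direction that most increases the loss;
    even this one-step perturbation often flips a network's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range
```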


Since AI-powered machines are increasingly replacing humans in a variety of roles, AI system security is critical for reliability, safety, and fairness. AI and machine learning systems are usually built from a mix of open-source libraries and code written by non-security experts, and no industry-accepted best practices for designing secure AI algorithms yet exist. Counterfit is a must-have, particularly in light of the Solorigate incident and rising cybersecurity breaches.



 

To support their work, Newsmusk allows writers to use primary sources. White papers, government data, original reporting, and interviews with industry experts are a few examples. Where relevant, we also cite original research from other respected publishers.


Source: Analytics India Magazine


 


