Microsoft Launches Automated 'Counterfit' Security Tool To Probe Your AI Systems

AI is spreading, and not in the creepy sci-fi dystopian kind of way, but by way of programs that help manage large tasks in critical business sectors such as healthcare, finance, and defense. Now, Microsoft is releasing Counterfit, an "automation tool for security testing AI systems," as an open-source project. This way, companies will be able to "ensure that the algorithms used in their businesses are robust, reliable, and trustworthy."

As mentioned, AI systems are becoming more prevalent in business, powering many different services. These systems must therefore be secured against adversaries so that important or confidential information is not compromised. However, performing security audits on these systems typically turns out to be no small task. Microsoft surveyed 28 organizations, spanning Fortune 500 companies, governments, and more, to "understand the current processes in place to secure AI systems." The company found that 25 of the 28 organizations did not have the proper tools in place to secure their AI systems and that guidance was needed.

AI-Powered Robot Sorting Coins

Out of the necessity to secure its own AI systems, Microsoft created Counterfit with the "goal of proactively securing AI services." What started as a collection of attack scripts slowly morphed into an automated AI attack tool, and Counterfit can now "attack multiple AI systems at scale." This makes it easier to test AI systems with routine red-team penetration tests on a framework purpose-built for the work.
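Under the hood, Counterfit wraps existing adversarial machine learning frameworks, including the Adversarial Robustness Toolbox (ART), behind a single interface. As a rough illustration of the kind of black-box evasion test it automates, here is a minimal sketch that uses ART directly against a simple scikit-learn classifier; the victim model and attack parameters are placeholder choices for the example, not part of Counterfit itself.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Train a simple "victim" model standing in for a deployed AI service.
X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model for ART; clip_values bounds pixel intensities (0-16 for digits).
classifier = SklearnClassifier(model=model, clip_values=(0.0, 16.0))

# HopSkipJump is a decision-based black-box attack: it only needs the
# model's predicted labels, much like an attacker probing a live endpoint.
attack = HopSkipJump(classifier=classifier, max_iter=10,
                     max_eval=1000, init_eval=100)
x_adv = attack.generate(x=X[:5])

# Compare predictions on clean vs. adversarially perturbed inputs.
print("original:   ", model.predict(X[:5]))
print("adversarial:", model.predict(x_adv))
```

Counterfit's value is in the orchestration: rather than hand-writing a script like the one above for each target, red teams can point many such attacks at deployed AI endpoints through one consistent tool.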

Microsoft has since published the tool on GitHub, so security researchers and companies alike can leverage it to their advantage. Hopefully, this will help make AI systems more secure across various industries so that private information is not lost in the future.