How Adversarial ML Can Turn An ML Model Against Itself

Discover the main types of adversarial machine learning attacks and what you can do to protect yourself.

Machine learning (ML) is at the very center of the rapidly evolving artificial intelligence (AI) landscape, with applications ranging from cybersecurity to generative AI and marketing. The data interpretation and decision-making capabilities of ML models offer unparalleled efficiency when you’re dealing with large datasets. As more and more organizations integrate ML into their processes, ML models have emerged as a prime target for malicious actors, who typically attack them to extract sensitive data or disrupt operations.

What Is Adversarial ML?

Adversarial ML refers to an attack where an ML model’s prediction capabilities are compromised. Malicious actors carry out these attacks by either manipulating the training data that is fed into the model or by making unauthorized alterations to the inner workings of the model itself.

How Is An Adversarial ML Attack Carried Out?

There are three main types of adversarial ML attacks:

Data Poisoning

Data poisoning attacks are carried out during the training phase. They involve injecting inaccurate or misleading data into the training datasets in order to adversely affect the model’s outputs. Training is the most important phase in the development of an ML model, and poisoning the data used in this step can completely derail the development process, rendering the model unfit for its intended purpose and forcing you to start from scratch.
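
To make the idea concrete, here is a minimal sketch of a label-flipping poisoning attack using scikit-learn on a synthetic dataset. The model, data, and 30% flip rate are illustrative assumptions rather than a recipe from a real incident; the point is simply that corrupted labels at training time degrade the trained model’s accuracy.

```python
# A minimal sketch of label-flipping data poisoning, using scikit-learn and a
# synthetic dataset. The model, data, and 30% flip rate are illustrative
# assumptions; the point is only that corrupted training labels hurt accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline: train on the original labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" the training set by flipping 30% of the labels before training.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=int(0.3 * len(y_poisoned)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```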

Evasion

Evasion attacks are carried out on already-trained and deployed ML models during the inference phase, when the model is put to work on real-world data to produce actionable outputs. These are the most common form of adversarial ML attack. In an evasion attack, the attacker adds noise or disturbances to the input data to cause the model to misclassify it, leading it to make an incorrect prediction or produce a faulty output. These disturbances are subtle alterations to the input data that are imperceptible to humans but are picked up by the model. For example, a car’s self-driving model might have been trained to recognize and classify images of stop signs. In an evasion attack, a malicious actor may feed it an image of a stop sign with just enough noise added to cause the model to misclassify it as, say, a speed limit sign.
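
As an illustration, the sketch below applies a Fast Gradient Sign Method (FGSM)-style perturbation, one common way evasion inputs are crafted, to a toy PyTorch model. The network, input, and epsilon value are assumptions for demonstration; with an untrained toy model the prediction will not necessarily flip, but the mechanics are the same as against a real classifier.

```python
# A hedged sketch of an FGSM-style evasion perturbation against a toy PyTorch
# model. The network, input, and epsilon are assumptions for demonstration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)   # "clean" input
true_label = torch.tensor([0])

# Compute the loss gradient with respect to the input, not the weights.
loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

# Fast Gradient Sign Method: nudge the input in the direction that increases
# the loss; epsilon keeps the change small enough to be hard to notice.
epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```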

Model Inversion

A model inversion attack involves exploiting the outputs of a target model to infer the data that was used in its training. Typically, the attacker sets up their own ML model and feeds it the outputs produced by the target model so that it learns to predict the data the target was trained on. This is especially concerning when you consider that certain organizations may train their models on highly sensitive data.
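
A rough sketch of the underlying idea, assuming white-box gradient access to a toy PyTorch target model: the attacker runs gradient ascent over the input space to find an input the target model associates strongly with a chosen class, which can approximate a representative training sample. Real inversion attacks are considerably more involved, and the toy network here is purely illustrative.

```python
# An illustrative sketch of the idea behind model inversion, assuming white-box
# gradient access to a toy PyTorch target model. Real attacks are more involved.
import torch
import torch.nn as nn

target_model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
target_model.eval()

target_class = 1
x = torch.zeros(1, 10, requires_grad=True)      # start from a blank input
optimizer = torch.optim.Adam([x], lr=0.1)

for _ in range(200):
    optimizer.zero_grad()
    confidence = target_model(x).softmax(dim=1)[0, target_class]
    (-confidence).backward()                    # maximize the class confidence
    optimizer.step()

print("reconstructed input:", x.detach().numpy().round(2))
print("model confidence:   ", target_model(x).softmax(dim=1)[0, target_class].item())
```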

How Can You Protect Your ML Algorithm From Adversarial ML?

While no defense is 100% foolproof, there are several ways to protect your ML model from an adversarial attack:

Validate The Integrity Of Your Datasets

Since training is the most important phase in the development of an ML model, it goes without saying that you need a strict vetting process for your training data. Make sure you’re fully aware of the data you’re collecting and always verify that it comes from a reliable source. By strictly monitoring the data used in training, you can ensure that you aren’t unknowingly feeding your model poisoned data. You could also use anomaly detection techniques to make sure the training datasets do not contain any suspicious samples.
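
As one possible approach, the sketch below screens a training set with scikit-learn’s IsolationForest. The synthetic data, injected outliers, and contamination rate are assumptions for illustration; flagged samples would normally go to a human for review rather than being dropped automatically.

```python
# One possible screening step, sketched with scikit-learn's IsolationForest.
# The synthetic data, injected outliers, and contamination rate are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
clean = rng.normal(0, 1, size=(1000, 5))        # "trusted" training samples
outliers = rng.normal(8, 1, size=(20, 5))       # injected suspicious samples
X_train = np.vstack([clean, outliers])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X_train)
labels = detector.predict(X_train)              # -1 marks suspected anomalies

X_screened = X_train[labels == 1]
print(f"kept {len(X_screened)} of {len(X_train)} samples; "
      f"flagged {(labels == -1).sum()} for review")
```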

Secure Your Datasets

Make sure to store your training data in a highly secure location with strict access controls. Using cryptography also adds another layer of security, making it that much harder to tamper with this data.
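
One simple example of what that extra layer can look like: recording a SHA-256 digest of the dataset when it is validated and checking it again before every training run. The file name below is a throwaway placeholder created so the sketch runs end to end, not a recommended layout.

```python
# A hedged sketch of an integrity check: hash the dataset when it's validated,
# then verify the digest before every training run. "train.csv" is a throwaway
# placeholder created here purely so the example runs end to end.
import hashlib

def dataset_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a dataset file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a real dataset file.
with open("train.csv", "w") as f:
    f.write("feature_1,feature_2,label\n0.1,0.2,0\n")

expected = dataset_digest("train.csv")           # recorded at ingestion time
assert dataset_digest("train.csv") == expected   # checked before training
print("dataset digest verified:", expected[:16], "...")
```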

Train Your Model To Detect Manipulated Data

Feed the model examples of adversarial inputs that have been flagged as such so it will learn to recognize and ignore them.
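
A minimal sketch of that idea, assuming you already have a pool of flagged adversarial inputs: label clean samples and manipulated samples separately and train a dedicated detector on the combined set. Here the “adversarial” samples are just noise-perturbed copies of the clean data, standing in for real flagged examples.

```python
# A minimal sketch of a standalone detector for manipulated inputs, assuming a
# pool of flagged adversarial samples already exists. The noise-perturbed
# copies below stand in for real flagged examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X_clean, _ = make_classification(n_samples=1000, n_features=20, random_state=0)
rng = np.random.default_rng(0)
X_adv = X_clean + rng.normal(0, 0.5, size=X_clean.shape)  # stand-in perturbations

# Label clean inputs 0 and manipulated inputs 1, then train the detector.
X = np.vstack([X_clean, X_adv])
y = np.concatenate([np.zeros(len(X_clean)), np.ones(len(X_adv))])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

detector = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("detector accuracy on held-out inputs:", detector.score(X_test, y_test))
```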

Perform Rigorous Testing

Keep testing the outputs of your model regularly. A decline in quality might indicate an issue with the input data. You could also intentionally feed the model malicious inputs to uncover previously unknown vulnerabilities before they can be exploited.
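
One lightweight way to operationalize this, sketched below under the assumption that you keep a fixed, trusted evaluation set: record a baseline accuracy at deployment time and raise an alert whenever a scheduled re-score falls noticeably below it. The tolerance, model, and data are illustrative.

```python
# A hedged sketch of a scheduled quality check, assuming a fixed, trusted
# evaluation set. The tolerance, model, and data are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
baseline = model.score(X_holdout, y_holdout)     # recorded at deployment time

def check_model_quality(model, X_eval, y_eval, baseline, tolerance=0.05):
    """Raise if accuracy on the trusted evaluation set drops noticeably."""
    score = model.score(X_eval, y_eval)
    if score < baseline - tolerance:
        raise RuntimeError(f"accuracy fell to {score:.3f} (baseline {baseline:.3f})")
    return score

print("current accuracy:", check_model_quality(model, X_holdout, y_holdout, baseline))
```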

Adversarial ML Will Only Continue To Develop

Adversarial ML is still in its early stages, and experts say current attack techniques aren’t highly sophisticated. However, as with all forms of tech, these attacks will only grow more complex and effective. As more and more organizations adopt ML in their operations, now is the right time to invest in hardening your ML models against these threats. The last thing you want is to lag behind on security in an era when threats continue to evolve rapidly.

Apple Announces New iPad Pro With M4 Chip And Updated iPad Air

“This is the biggest day for iPad since its introduction,” said CEO Tim Cook in a video posted to Apple’s website.

Apple’s latest updates to its popular iPad Air and Pro models were announced on Tuesday, May 7. These are the first changes since 2022, the longest stretch between new models since the iconic device was first revealed in 2010.

Both the 11-inch and 12.9-inch versions of the iPad Pro have received a huge design overhaul. The most noteworthy change is the move to OLED screens, with the 12.9-inch version receiving a small bump in size to 13 inches. Apple claims the new tablets are brighter and more vibrant than outgoing models, thanks to a technology it calls “tandem OLED” or “Ultra Retina XDR”.

The 13-inch model now measures an astonishing 5.1 mm in thickness, which Apple says is its slimmest device ever. (The 11-inch version is 5.3 mm thick.) For those who prefer the look of a matte display, a nano-texture coating will also be available for the first time on the Pro models.

Finally, the new iPad Pros have received a processor bump to the latest M4 chip, which Apple says is an “outrageously powerful chip for AI”, citing as an example its ability to quickly and efficiently isolate subjects from backgrounds in video.

The iPad Pro 11-inch starts at $999, and the larger 13-inch version starts at $1,299 with 256GB of storage.

Updated iPad Air In Two Sizes

The sixth-generation iPad Air didn’t receive as many upgrades as the iPad Pro, but it now notably comes in two sizes. As with the Pro models, buyers can choose between an 11-inch and a 13-inch screen, meaning they no longer need to invest in a Pro version just to get a 30% bump in display size.

Apple kept the same design for the iPad Air that it first revealed in 2020, complete with a USB-C port and Touch ID in the top button. The only difference is the front camera placement, which has been moved to the center of the iPad when in landscape orientation.

The 11-inch iPad Air is priced at $599 for the entry-level model, while the 13-inch version starts at $799.

New Magic Keyboard Case

Apple also announced an updated (thinner, lighter) Magic Keyboard for its Pro iPads. The refreshed version now includes a function row (with controls for screen brightness). An aluminum palm rest and a large trackpad with haptic feedback also help the premium case feel more like a MacBook.

The new Magic Keyboard is available for both the 11-inch and 13-inch iPad Pros and will be priced at $299 or $349, respectively.

Apple Pencil Pro

Apple also announced a new Apple Pencil, named Pro, at its event. The new model looks exactly the same but adds a “squeeze” function that opens a new tool palette. Meanwhile, a built-in gyroscope lets you alter the orientation of the tools you’re using as you twist the device, offering finer control. The new Pencil also gains support for Apple’s Find My network, which should put minds at rest about the prospect of losing the $129 device.

Finally, to round off Apple’s series of announcements, the entry-level iPad was reduced to $349 — a $100 price cut.
