Connect with us

Security

DDoS Attacks Are A Growing Threat In Gaming

The cybercriminals behind the attacks have a variety of different motives, from extorting money from gaming companies to causing reputation damage to preventing competing players from winning out of competitiveness.

Published

on

ddos attacks are a growing threat in gaming
Shutterstock

Imagine you’re about to get a Victory Royale in Fortnite, score a deciding goal in FIFA, or defuse the bomb in Counter-Strike when suddenly a message appears on your screen, informing you that you’ve been disconnected.

Wasting no time, you load the game again and discover that a connection can’t be established. Why? Because either you or the game’s servers are under a Distributed Denial of Service (DDoS) attack.

Such attacks are a growing threat in gaming, and we at Tech Magazine had the opportunity to discuss them with Emad Fahmy, Systems Engineering Manager Middle East at NETSCOUT. Here’s what we learned.

What Are DDoS Attacks In Gaming?

DDoS attacks are a type of cybercrime that makes resources unavailable by overloading the network across which they are transmitted with malicious requests. DDoS attacks first appeared in 2010 amid the rise of “hacktivism,” but they have evolved significantly since then, as observed in the NETSCOUT Threat Intelligence Report H2 2021.

emad fehmy netscout

Emad Fahmy, Systems Engineering Manager Middle East @ NETSCOUT

”In gaming, DDoS attacks might be directed at a single user or an entire organization,” explains Fahmy. “While an attack on a single user only affects them by slowing down their gaming experience, an attack on an organization can have a greater impact on the game’s entire user base, resulting in a group of disgruntled players who no longer have access to the game or have had their experience significantly slowed.”

The cybercriminals behind the attacks have a variety of different motives, from extorting money from gaming companies to causing reputation damage to preventing competing players from winning out of competitiveness.

Anyone Can Launch A DDoS Attack

To successfully launch a DDoS attack against a game or its players, attackers need to send so many malicious requests at the same time that the victim can’t possibly answer them all without becoming overloaded.

These requests are typically sent by bots, hacked devices (computers, routers, IoT appliances, and so on) that do what attackers tell them to do. Even a relatively small network of bots, or botnet for short, can be used to launch a massive DDoS attack.

These days, attackers don’t even have to hack vulnerable devices to obtain the DDoS firepower they need to take a target down. They can simply take advantage of DDoS-for-Hire services, which provide DDoS attacks ranging from no cost to greater than $6,500 for terabit-class attacks, according to the NETSCOUT report.

“DDoS-for-Hire services have made attacks easier to launch. We examined 19 DDoS-for-Hire services and their capabilities that eliminate the technical requirements and cost of launching massive DDoS attacks. When combined, they offer more than 200 different attack types,” says Fahmy.

Preventing DDoS Gaming Attacks

In 2021 alone, NETSCOUT recorded 9.7 million DDoS attacks, an increase of 14 percent compared with 2019. To reverse this gloomy trend, both gaming companies and gamers themselves need to take it seriously and adopt specific measures to protect themselves.

“Relying on firewalls and intrusion detection systems is no longer sufficient. This is because DDoS attacks can now manipulate or destroy them. Despite advances in cloud-based detection, the company’s Internet Service Provider (or Managed Security Service Provider) may still struggle to identify threats that wait in the shadows until it is too late,” explains Fahmy. “As a result, an on-premises DDoS risk management solution is critical,” he adds.

Individual gamers, especially eSports players and streamers, can make it harder for cybercriminals to aim DDoS attacks at them using a virtual private network (VPN) service like ExpressVPN, CyberGhost, or NordVPN. Such services channel users’ traffic through their servers, hiding its real origin in the process.

In addition to hiding their IP addresses, gamers should also adhere to cybersecurity best practices. Examples include timely installation of software updates and exercising caution when browsing the web, chatting online, or reading emails.

Conclusion

DDoS, or Distributed Denial of Service attacks, represent a serious threat to the gaming industry because they can compromise the gaming experience and expose developers to the risk of brand damage and potential extortion. DDoS attacks have evolved and become far more sophisticated in recent years. Fortunately, the same can be said about on-premises DDoS risk management solutions that gaming companies use to protect themselves.

Advertisement

📢 Get Exclusive Monthly Articles, Updates & Tech Tips Right In Your Inbox!

JOIN 17K+ SUBSCRIBERS

Click to comment

Leave a Reply

Your email address will not be published. Required fields are marked *

Security

Can LLMs Ever Be Completely Safe From Prompt Injection?

Explore the complexities of prompt injection in large language models. Discover whether complete safety from this vulnerability is achievable in AI systems.

Published

on

can llms ever be completely safe from prompt injection

The recent introduction of advanced large language models (LLMs) such as OpenAI’s ChatGPT and Google’s Gemini has made it possible to have natural, flowing, and dynamic conversations with AI tools, as opposed to the predetermined responses we received in the past.

These natural interactions are powered by the natural language processing (NLP) capabilities of these tools. Without NLP, LLM models would not be able to respond as dynamically and naturally as they do now.

As essential as NLP is to the functioning of an LLM, it has its weaknesses. NLP capabilities can themselves be weaponized to make an LLM susceptible to manipulation if the threat actor knows what prompts to use.

Exploiting The Core Attributes Of An LLM

LLMs can be tricked into bypassing their content filters using either simple or meticulously crafted prompts, depending on the complexity of the model, to say something inappropriate or offensive, or in particularly extreme cases, even reveal potentially sensitive data that was used to train them. This is known as prompt injection. LLMs are, at their core, designed to be helpful and respond to prompts as effectively as possible. Malicious actors carrying out prompt injection attacks seek to exploit the design of these models by disguising malicious requests as benign inputs.

You may have even come across real-world examples of prompt injection on, for example, social media. Think back to the infamous Remotelli.io bot on X (formerly known as Twitter), where users managed to trick the bot into saying outlandish things on social media using embarrassingly simple prompts. This was back in 2022, shortly after ChatGPT’s public release. Thankfully, this kind of simple, generic, and obviously malicious prompt injection no longer works with newer versions of ChatGPT.

But what about prompts that cleverly disguise their malicious intent? The DAN or Do Anything Now prompt was a popular jailbreak that used an incredibly convoluted and devious prompt. It tricked ChatGPT into assuming an alternate persona capable of providing controversial and even offensive responses, ignoring the safeguards put in place by OpenAI specifically to avoid such scenarios. OpenAI was quick to respond, and the DAN jailbreak no longer works. But this didn’t stop netizens from trying variations of this prompt. Several newer versions of the prompt have been created, with DAN 15 being the latest version we found on Reddit. However, this version has also since been addressed by OpenAI.

Despite OpenAI updating GPT-4’s response generation to make it more resistant to jailbreaks such as DAN, it’s still not 100% bulletproof. For example, this prompt that we found on Reddit can trick ChatGPT into providing instructions on how to create TNT. Yes, there’s an entire Reddit community dedicated to jailbreaking ChatGPT.

There’s no denying OpenAI has accomplished an admirable job combating prompt injection. The GPT model has gone from falling for simple prompts, like in the case of the Remotelli.io bot, to now flat-out refusing requests that force it to go against its safeguards, for the most part.

Strengthening Your LLM

While great strides have been made to combat prompt injection in the last two years, there is currently no universal solution to this risk. Some malicious inputs are incredibly well-designed and specific, like the prompt from Reddit we’ve linked above. To combat these inputs, AI providers should focus on adversarial training and fine-tuning for their LLMs.

Fine-tuning involves training an ML model for a specific task, which in this case, is to build resistance to increasingly complicated and ultra-specific prompts. Developers of these models can use well-known existing malicious prompts to train them to ignore or refuse such requests.

This approach should also be used in tandem with adversarial testing. This is when the developers of the model test it rigorously with increasingly complicated malicious inputs so it can learn to completely refuse any prompt that asks the model to go against its safeguards, regardless of the scenario.

Can LLMs Ever Truly Be Safe From Prompt Injection?

The unfortunate truth is that there is no foolproof way to guarantee that LLMs are completely resistant to prompt injection. This kind of exploit is designed to exploit the NLP capabilities that are central to the functioning of these models. And when it comes to combating these vulnerabilities, it is important for developers to also strike a balance between the quality of responses and the anti-prompt injection measures because too many restrictions can hinder the model’s response capabilities.

Securing an LLM against prompt injection is a continuous process. Developers need to be vigilant so they can act as soon as a new malicious prompt has been created. Remember, there are entire communities dedicated to combating deceptive prompts. Even though there’s no way to train an LLM to be completely resistant to prompt injection, at least, not yet, vigilance and continuous action can strengthen these models, enabling you to unlock their full potential.

Continue Reading

#Trending