- AI: a double-edged sword in cybersecurity
- 100,000+
- The huge impact of AI on cybersecurity
- The other side: AI’s impact in enhancing threats and challenges
- ChatGPT has been exploited by threat actors to:
- AI aiding defenders: what is your leverage?
- Fraud detection
- Threat intelligence
- Traffic analysis
- Automation
- Graph analysis
- Dark web investigation
- Phishing detection
- Malware detection and analysis
- Enumerating TTPs of advanced persistent threats (APTs)
- Patching vulnerabilities
- Adaptive responses to cyber threats
- Code generation
- Security testing
- Training and simulation
- Data loss prevention
- Adopting AI the right way: how to gatekeep risks and build defenses
- Building a resilient cyberdefense with AI
The zero-sum game between cyber adversaries and defenders is now becoming lopsided.
The advent of artificial intelligence (AI) was nothing less than revolutionary. It promised efficiency, accuracy, speed, and agility, making businesses keen on using the technology to build their competitive edge.
However, the same technology is now being used by cybercriminals to cause widespread disruption, threatening us all.
AI: a double-edged sword in cybersecurity
At the risk of stating the obvious, AI is changing everything.
Despite its proven usefulness in many areas, when it comes to cyber risk, AI is being exploited to generate malicious code, craft sophisticated social engineering attacks, produce synthetic media such as deepfakes, and even weaponize leaked credentials from platforms like ChatGPT.
100,000+ compromised ChatGPT accounts were discovered on dark web marketplaces in 2023.
Source: Group-IB
“These credentials can not only be used to launch secondary attacks against individuals, but they can also expose private chats and communications on the OpenAI platform, which could be exploited for ransom and blackmail,” said Group-IB’s CEO, Dmitry Volkov.
Alarmingly, most businesses are unaware of the creeping dangers they are now facing with cybercriminals armed with AI. Even those who recognize the severity often lack knowledge about available defense upgrades or options to protect themselves from widespread exploitation.
However, despite the irony, the same technology can also act as your ultimate defender. Many cybersecurity leaders and veterans are taking center stage to discuss where defenders lag in using AI and what upgraded capabilities are required to outpace adversaries.
Strong institutional knowledge of cybersecurity, built over years as a technical or business professional, still matters, but AI in cybersecurity introduces an entirely new set of realities. It is both a clash and a collaboration, and if utilized correctly, it can be a powerful tool against constantly evolving cybersecurity threats.
The huge impact of AI on cybersecurity
AI has long been a curiosity, examined in boutique research labs on university campuses or in sandbox projects of major corporations’ R&D centers.
Expert systems, as AI was commonly known in the late 20th century, handled basic levels of inference, rule-based reasoning, and entry-level domain knowledge. Scientists envisioned expert systems being useful in cases such as first-generation credit scoring and music genre preferences.
Today, those relatively crude and limited-function precursors to what is now known as generative AI (GenAI) have become a powerful force reshaping knowledge, content, and decision-making in every industry.
In fact, research indicates billions of dollars are spent annually on AI-based systems in dozens of different industries. Five industries—banking and financial services, retail, professional services, discrete manufacturing, and process manufacturing—spend more than $10 billion annually on AI solutions.
Source: Statista
However, numerous other forms of AI have burst onto the scene with similar levels of impact and importance, each with its own unique influence on cybersecurity.
For instance, predictive AI, as the name implies, is well suited for predicting how, where, and when cyberattacks will threaten an organization. It is also good at helping users spot and analyze patterns, making it a great fit for organizations looking to predict behavior that may indicate threats or actual attacks.
Causal AI is also rapidly gaining adoption because it helps organizations understand and create models for cause-and-effect patterns—not only for possible attacks but for the most appropriate responses.
Explainable AI (XAI) is crucial for teams and organizations to comprehend the logic or rationale behind AI-generated decisions, such as alerts and recommendations. By providing transparency, XAI enables prompt, effective, and well-calculated decisions, minimizing potential biases that can arise in manual decision-making processes.
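To make XAI concrete, here is a minimal sketch, assuming scikit-learn, synthetic data, and purely illustrative feature names, of how a team might surface which signals an alert-scoring model actually relies on:

```python
# Minimal XAI sketch: explain which features drive an alert-scoring model.
# Assumes scikit-learn; feature names and synthetic data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["failed_logins", "bytes_out", "new_country_login", "off_hours"]
X = rng.normal(size=(500, 4))
# Toy ground truth: alerts driven mostly by failed logins and data egress.
y = ((X[:, 0] + 0.8 * X[:, 1]) > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling them degrades the model: a simple
# global explanation of what the model actually relies on.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>18}: {result.importances_mean[idx]:.3f}")
```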
The other side: AI’s impact in enhancing threats and challenges
Businesses have placed big bets on AI to enhance their operations, reduce toil, and ease mounting resource pressure, but many have overlooked the technology's consequences.
83% of companies claim that AI is a top priority in their business plans. Yet, if asked about the safe use of AI—ensuring it doesn’t introduce additional vulnerabilities, privacy threats, or regulatory challenges—teams have unresolved questions rather than a definitive answer.
In contrast, adversaries seem to have clear goals when using AI technology to achieve their nefarious objectives.
Group-IB’s Hi-Tech Crime Trends Report 2023-24 shows AI weaponization as one of the top challenges in the global cyberthreat landscape.
AI has aided in advancing cybercrime, effectively becoming an openly accessible technology that low-skilled actors can use to launch automated attacks with little effort on their end.
Therefore, more attackers will undoubtedly move toward AI models for capabilities such as technical consultation, scam creation, intelligence gathering, and maintaining their anonymity. Cybercriminals are integrating AI into their workflows to scale their threats’ impact, innovate their threat methodologies, and create new revenue streams.
This has been made much easier for them due to the wider availability of inexpensive (and free) AI tools. They also utilize AI to execute hacking toolkits and build malicious tools for exploits and digital espionage while brainstorming attack techniques, tactics, and procedures (TTPs).
Focusing specifically on GenAI, which is currently attracting the most attention, many threats have already been observed. Phishing remains a primary cyberthreat, with AI being used to craft convincing phishing emails.
Take ChatGPT as an example. The release of the GPT-4 model marked a turning point: it gained global popularity and has been put to both beneficial and harmful use.
ChatGPT has been exploited by threat actors to:
- Develop malware with basic programming knowledge.
- Brainstorm new cyberattack tactics.
- Create localized scam strategies.
- Enhance operational productivity.
- Draft proofs of concept (POCs) for exploiting vulnerabilities.
Users have tried to circumvent ChatGPT’s safety measures with techniques such as rewriting hypothetical responses with real details, breaking up sensitive terms, and abusing text continuation. In one practical case, GPT-4 was able to exploit 87% of a dataset of 15 one-day vulnerabilities based solely on their CVE descriptions.
Source: Group-IB
The obvious question is: while businesses manage the unforeseen threats from this accelerating technology, often with limited cybersecurity resources, how can they be robustly protected against them?
AI aiding defenders: what is your leverage?
Opinions have been divided about whether AI favors cybercriminals or security experts. However, several industry trends and experts suggest that AI can be a cybersecurity force multiplier, helping organizations outsmart criminals sooner rather than later.
Even though attackers often gain the initial advantage in using new tools such as GenAI, defenders can more than make up the difference if they understand how to leverage the technology in key areas such as threat intelligence, analytics, and anomaly detection.
Let’s take a look at the areas where you can leverage AI against attacks.
Fraud detection
In high-risk industries, especially financial services and retail, AI and ML significantly enhance the security of digital and mobile applications by analyzing user behavior and biometrics. ML algorithms monitor real-time data for suspicious activities that security professionals may miss.
For example, they can pick up on unusual keyboard and cursor patterns that indicate a potential threat or fraud attempt.
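As a rough illustration, an unsupervised anomaly detector can flag sessions whose behavioral biometrics deviate from a learned baseline. This is a minimal sketch assuming scikit-learn; the features and values are illustrative, not a production fraud model:

```python
# Hypothetical sketch: flag anomalous user sessions from behavioral biometrics.
# Assumes scikit-learn; the feature set (typing speed, cursor entropy, etc.) is illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: keystrokes/min, avg key dwell time (ms), cursor path entropy, transaction amount
normal_sessions = rng.normal(loc=[220, 95, 3.1, 80], scale=[30, 10, 0.4, 40], size=(1000, 4))

model = IsolationForest(contamination=0.01, random_state=42).fit(normal_sessions)

# A session typed suspiciously fast, with robotic cursor movement and a large transfer.
suspect = np.array([[480, 35, 0.6, 4200]])
print(model.predict(suspect))          # -1 => anomalous, 1 => normal
print(model.score_samples(suspect))    # lower scores are more anomalous
```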
Threat intelligence
With AI-powered threat intelligence, identifying, analyzing, and extrapolating threats relevant to your business and industry becomes a continuous, structured activity.
AI tools can analyze historical logs, records, and data to deduce which attacker may strike which region using what tools next. They can also sift through massive data sets from diverse sources, including social media, forums, and the dark web, to identify threat patterns. These capabilities are essential for businesses preparing for potential threats and building preemptive defenses.
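One small, hedged example of the "sifting through massive data sets" idea: clustering raw threat-report text so recurring themes surface automatically. It assumes scikit-learn, and the sample reports are fabricated:

```python
# Illustrative sketch: group raw threat reports into themes so analysts can
# spot recurring patterns. Assumes scikit-learn; the sample texts are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "Phishing campaign spoofing bank login pages targets retail customers",
    "New ransomware strain encrypts backups before demanding payment",
    "Credential stuffing attacks against e-commerce accounts rise sharply",
    "Ransomware affiliates exploit unpatched VPN appliances",
    "SMS phishing lures deliver banking trojan to mobile users",
    "Stolen credentials from infostealer logs sold on underground forums",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reports)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for label, text in zip(km.labels_, reports):
    print(label, text[:60])
```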
Traffic analysis
It is difficult to handle massive traffic on your digital channels: tracking network activity, assessing traffic quality (including bad bot activity), and identifying deviations from normal behavior. With AI, businesses can quickly sift through massive volumes of network traffic to spot anomalies, optimizing monitoring and detection resources.
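A minimal sketch of this kind of anomaly spotting, assuming pandas and synthetic per-minute byte counts: compare traffic against a rolling baseline and flag sharp deviations.

```python
# Simple sketch: flag minutes where outbound traffic deviates sharply from a
# rolling baseline. Assumes pandas; the synthetic byte counts are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
traffic = pd.Series(rng.normal(5_000_000, 400_000, size=1440),   # bytes/min for one day
                    index=pd.date_range("2024-01-01", periods=1440, freq="min"))
traffic.iloc[600:610] += 25_000_000   # inject a burst resembling data exfiltration

baseline = traffic.rolling("60min").mean()
spread = traffic.rolling("60min").std()
z_score = (traffic - baseline) / spread

anomalies = traffic[z_score.abs() > 4]
print(anomalies.head())
```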
Automation
Automation is key to maximizing AI’s benefits in cybersecurity.
While technologies like endpoint detection and response (EDR), managed detection and response (MDR), and extended detection and response (XDR) integrate AI to accelerate actions, full automation, driven by advanced AI tools, takes it a step further. This speeds up detection and response times, reduces the likelihood of false positives, and streamlines alert management.
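For illustration, here is a simplified triage sketch in plain Python: duplicate alerts are collapsed and only high-confidence signals are escalated. The field names and escalation threshold are assumptions, not any specific vendor's schema:

```python
# Hedged sketch of automated alert triage: deduplicate noisy alerts and route
# only high-confidence ones to humans. Field names and thresholds are assumptions.
from collections import defaultdict

alerts = [
    {"host": "srv-01", "rule": "beaconing", "score": 0.92},
    {"host": "srv-01", "rule": "beaconing", "score": 0.91},   # duplicate burst
    {"host": "wks-17", "rule": "macro_exec", "score": 0.35},
    {"host": "db-02",  "rule": "mass_read", "score": 0.88},
]

def triage(alerts, escalate_at=0.8):
    grouped = defaultdict(list)
    for a in alerts:                      # collapse repeats of the same signal
        grouped[(a["host"], a["rule"])].append(a["score"])

    for (host, rule), scores in grouped.items():
        confidence = max(scores)
        if confidence >= escalate_at:
            print(f"ESCALATE {host} [{rule}] confidence={confidence:.2f}")
        else:
            print(f"auto-close {host} [{rule}] confidence={confidence:.2f}")

triage(alerts)
```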
Graph analysis
Cybercriminals’ illicit networks and operations span geographies and infrastructure, making it difficult to understand the full extent of their crimes. However, with AI-infused graph interpretation, teams can visualize these hidden and disparate connections and sources and turn them into actionable, real-time insights.
With AI, teams can detect suspicious indicators and activities within their infrastructure, recognize patterns and correlate events, and automate insights and responses, enhancing cybersecurity operations and timely responses to potential risks.
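A minimal sketch of graph-based pivoting, assuming the networkx library and fabricated indicators: shared infrastructure links otherwise unrelated domains, samples, and wallets into one campaign cluster.

```python
# Illustrative sketch: link seemingly unrelated indicators through shared
# infrastructure. Assumes networkx; the indicators below are fabricated.
import networkx as nx

G = nx.Graph()
edges = [
    ("phish-domain-a.com", "198.51.100.7"),       # domain -> hosting IP
    ("phish-domain-b.com", "198.51.100.7"),
    ("198.51.100.7", "registrant@example.net"),
    ("malware-sample-1", "phish-domain-b.com"),   # sample beacons to domain
    ("wallet-9xk2", "malware-sample-1"),          # ransom wallet in config
]
G.add_edges_from(edges)

# One connected component = one likely campaign; central nodes are pivot points.
for component in nx.connected_components(G):
    print("campaign cluster:", component)

centrality = nx.degree_centrality(G)
print("most connected indicator:", max(centrality, key=centrality.get))
```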
Dark web investigation
AI can identify all of an attacker’s accounts far more reliably and quickly than manual methods. AI tools can crawl the dark web, analyzing forum posts, marketplaces, and other sources to gather intelligence on potential threats, stolen data, or emerging attack techniques. This proactive approach allows organizations to better prepare for and mitigate potential attacks.
Phishing detection
AI-powered text and image analysis can detect phishing content, reducing the risk of successful phishing attacks. Advanced AI algorithms can identify subtle indicators of phishing, such as language inconsistencies, abnormal URLs, and visual clues, that might slip past users. AI can also learn from existing phishing techniques to improve its detection abilities.
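As a hedged illustration of text-based phishing detection, the sketch below trains a tiny TF-IDF and logistic regression pipeline (assuming scikit-learn); a real deployment would need a far larger, curated dataset.

```python
# Minimal sketch of a text-based phishing classifier. Assumes scikit-learn;
# the tiny training set is illustrative (a real model needs far more data).
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "Your account is locked, verify your password immediately at this link",
    "Urgent: unpaid invoice attached, open and confirm payment details now",
    "Team lunch moved to Thursday, same place as last time",
    "Here are the meeting notes from this morning's project sync",
]
labels = [1, 1, 0, 0]   # 1 = phishing, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)

print(clf.predict_proba(["Please verify your password to avoid account suspension"]))
```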
Malware detection and analysis
AI models can be trained to identify patterns of malicious behavior or anomalous activities in network traffic, aiding in the detection of malware, including polymorphic malware that constantly changes code.
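One way to make the "behavior over signatures" point concrete: classify API-call traces rather than file hashes, so polymorphic variants that change code but not behavior still score as malicious. This is an illustrative sketch assuming scikit-learn, with fabricated traces and labels:

```python
# Hedged sketch: classify behavior by API-call sequences rather than file hashes,
# which is what lets a model generalize across polymorphic variants.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

traces = [
    "OpenProcess VirtualAllocEx WriteProcessMemory CreateRemoteThread",   # injection-like
    "FindFirstFile CryptEncrypt DeleteFile FindNextFile CryptEncrypt",    # ransomware-like
    "CreateFile ReadFile CloseHandle",                                    # benign
    "RegOpenKey RegQueryValue CloseHandle",                               # benign
]
labels = [1, 1, 0, 0]

clf = make_pipeline(
    HashingVectorizer(ngram_range=(1, 2), n_features=2**12),  # hashing copes with unseen tokens
    SGDClassifier(random_state=0),
)
clf.fit(traces, labels)
print(clf.predict(["VirtualAllocEx WriteProcessMemory CreateRemoteThread Sleep"]))
```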
Enumerating TTPs of advanced persistent threats (APTs)
AI is significant in identifying the kill chain: the sequential actions cybercriminals take to infiltrate a network and launch attacks. It also supports building defenses and intrusive cybersecurity engagements such as red teaming, where cyberattack simulations are conducted in a controlled environment to identify security loopholes and test incident response capabilities.
Teams can use GenAI to understand threat actors and their attack maneuvers and get answers to critical questions like “where am I most vulnerable?” through natural language queries.
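A hedged sketch of such a natural language query, assuming the openai Python client (v1+); the model name and prompt format are assumptions, and real detections should be sanitized before being sent to any external service.

```python
# Hedged sketch: ask a GenAI model to summarize likely TTPs from internal
# detections using natural language. Assumes the openai Python client (v1+);
# the model name and prompt format are assumptions, and real detections should
# be sanitized before leaving your environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

detections = [
    "Scheduled task created by non-admin account on domain controller",
    "Outbound DNS queries with high-entropy subdomains from finance subnet",
]

prompt = (
    "Given these detections, which MITRE ATT&CK tactics and techniques are most "
    "likely in play, and where are we most vulnerable?\n- " + "\n- ".join(detections)
)

response = client.chat.completions.create(
    model="gpt-4o",                     # assumed model name; substitute your own
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```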
Patching vulnerabilities
Security teams can utilize GenAI to identify vulnerabilities and automate the generation of security patches. These patches can then be tested in a simulated or controlled environment to understand their effectiveness and to ensure they don’t introduce new vulnerabilities. Thus, using AI not only reduces the time taken to deploy patches but also minimizes the risks of human error in manual patching processes.
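The "test in a controlled environment" step might look like the sketch below: apply a candidate (possibly AI-generated) patch to a throwaway copy of the codebase and run the tests there. The paths, patch file, and pytest command are assumptions.

```python
# Hedged sketch: apply a candidate patch to a sandbox copy of the repo and run
# the test suite there before anything touches production code.
import os
import shutil
import subprocess
import tempfile

def test_candidate_patch(repo_dir: str, patch_file: str) -> bool:
    patch_path = os.path.abspath(patch_file)
    with tempfile.TemporaryDirectory() as sandbox:
        work = shutil.copytree(repo_dir, os.path.join(sandbox, "repo"))
        # Apply the patch only to the sandbox copy, never to production code.
        applied = subprocess.run(["git", "apply", patch_path], cwd=work)
        if applied.returncode != 0:
            return False
        # Run the project's tests inside the sandbox; pytest is an assumption.
        tests = subprocess.run(["pytest", "-q"], cwd=work)
        return tests.returncode == 0

# Hypothetical usage:
# if test_candidate_patch("./my-service", "fix-cve.patch"):
#     print("Patch passes tests in the sandbox; queue it for review and rollout.")
```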
Adaptive responses to cyber threats
With network infrastructure facing growing threats, AI enables a shift from traditional rule-based or signature-based detection to more advanced contextual analysis, helping find the hidden links that reveal the complete intent, chain, and process of threat activity.
Large language models (LLMs) are also used to develop self-supervised threat-hunting AI that autonomously scans network logs and data and provides adaptive, appropriate threat responses, such as quarantining affected systems and detonating suspected malware in a sandbox.
Code generation
The approach to coding and testing has changed drastically with the advent of AI. There is no longer a need to spend countless hours writing and testing code that could inadvertently introduce vulnerabilities. Today, code can be generated, queries answered, and playbooks created in minutes.
Security testing
AI has strengthened offensive security (OffSec) testing by creating diverse and real-life attack simulations, including those based on open-source vulnerabilities. This approach ensures that code is not only robust but also continuously improved.
Training and simulation
Another area in which AI tools efficiently help often overworked, in-house cybersecurity staff is quickly and automatically generating training materials, including simulations based on historical data and rapidly changing industry trends on attack vectors.
Data loss prevention
Data loss prevention is another critical area where AI can help immeasurably. New tools can interpret confusing and contradictory contexts for numerous data types, creating processes, rules, and procedures that further prevent sensitive and personal information from being exfiltrated inappropriately.
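A deliberately simple sketch of the pattern-matching side of DLP, with illustrative regexes and a Luhn check to cut false positives on card-like numbers; real DLP layers classification models, context, and policy enforcement on top of this.

```python
# Minimal DLP sketch: scan outbound text for likely PII before it leaves the
# environment. The patterns are illustrative; the Luhn check reduces false
# positives on random 16-digit numbers.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def scan(text: str) -> list[str]:
    findings = [f"email: {m}" for m in EMAIL.findall(text)]
    findings += [f"card: {m.strip()}" for m in CARD.findall(text) if luhn_ok(m)]
    return findings

print(scan("Invoice for jane.doe@example.com, card 4111 1111 1111 1111"))
```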
Note: Assessing readiness is critical to using AI as part of comprehensive cybersecurity hygiene. Before fully integrating AI solutions into their cybersecurity strategy, companies need to evaluate their current infrastructure, resources, and skill sets.
AI is a powerful force multiplier in fortifying an organization’s cyber defenses, but it must be extended and complemented with well-trained, AI-proficient cybersecurity experts.
Adopting AI the right way: how to gatekeep risks and build defenses
A well-defined AI strategy that aligns with your cybersecurity goals is crucial to best enable your cyberdefenses.
However, there often seems to be a learning curve, or teams may have different opinions regarding AI adoption. Therefore, the first and foremost step is for leadership to reach a consensus and expedite their AI readiness.
While there are specific parameters to address based on each business, the pillars to assess are your tech ecosystem, data infrastructure, and operational processes. A comprehensive AI readiness assessment survey can be a great tool to gauge your preparedness.
AI offers limitless potential, but caution is crucial.
As businesses plan to use GenAI to boost operations, innovation, and growth, they must also create frameworks, compliance solutions, and ethical guidelines to manage the technology responsibly.
Putting the right AI tools, processes, and teams in place requires more than just a checklist of cybersecurity readiness activities. It requires detailed short- and long-term planning, a well-resourced and properly orchestrated rollout and deployment, and the development of metrics to test and ensure the efficacy of AI-powered cybersecurity.
- Data quality really matters. AI systems need to connect to a wide range of high-fidelity data sources to be properly trained on threats, attack vectors, and response methodologies.
- Establish, review, and refine governance and policies frequently. This will often be uncharted territory, so it will pay to be flexible and responsive to new lessons learned about AI usage governance.
- Continuous monitoring is critical. Continuously monitor cyberthreat intelligence, aided by AI and machine learning, to stay ahead of zero-day threats, advanced persistent threats, and emerging threats created and augmented by adversarial AI tools and intentions.
- There is no substitute for human resources. It’s important to understand that although sophisticated and innovative tools like AI help immeasurably, they cannot manage every cybersecurity task without expert intervention. AI isn’t a substitute for but an augmentation of human intelligence. AI tools are great at reacting to new attack vectors and innovative threats, but security experts still play the key role in preventing a security threat from becoming a security incident.
Using AI to enhance an organization’s cybersecurity readiness is a strategic decision, but it should not be mistaken for a complete strategy on its own. It’s a starting point for a broader cybersecurity strategy.
While using AI to create more effective and efficient cybersecurity, it is wise to start with a few use cases to build success and momentum. Don’t try to do everything at once.
Also, in the words of legendary college basketball coach John Wooden, “Be quick but don’t hurry.” There is a sense of urgency here. But don’t rush into decisions. Better to take a little more time and get it right than to take less time and get it wrong.
Building a resilient cyberdefense with AI
For leaders and professionals reviewing whether to integrate AI into their cybersecurity strategy, understand that over 70% of cybersecurity professionals consider it critical for future defense strategies.
Embrace the opportunities provided by AI in cybersecurity, but do it wisely. Partner with AI and cybersecurity experts, use tried-and-tested strategies, and know your infrastructure needs inside out.
With the AI era in cybersecurity, preparation isn’t just an advantage but a necessity.
Edited by Shanti S Nair
Source: https://learn.g2.com/ai-cybersecurity