Clouds In the Attack Horizon | How Identity & Access Controls Fortify Hybrid Environments

Modern enterprises have rapidly adopted hybrid cloud environments to harness the benefits of both on-prem infrastructure and public cloud services. With higher rates of adoption and nearly half of all breaches occurring in the cloud, the question of how to secure this growing hybrid cloud landscape has become a top priority for business leaders.

One significant aspect of securing hybrid clouds is effectively managing identity and access controls. Since identity and access controls provide the framework for authenticating and authorizing user access, understanding them in the context of hybrid clouds is a critical element in establishing a secure environment.

In hybrid cloud deployments where resources are distributed across on-prem and cloud platforms, managing identities can be a challenge for security teams. This blog post covers how security teams and business leaders can combine identity and access control best practices with advanced detection and response capabilities to ensure robust security for their hybrid cloud environments.

Understanding Identity & Access Management In Cloud Security

Identity and access controls are essential measures in the fight against cyberattacks. They involve the processes used to authenticate and authorize user access to resources within an organization’s data infrastructure. Through strong identity and access controls, organizations can establish a robust security framework that helps prevent unauthorized access and mitigate the risk of breaches and cloud ransomware attacks.

Identity and access controls ensure that only authenticated and authorized individuals can access sensitive information, systems, and resources. Effective controls also enable organizations to enforce least privilege principles, limiting user access to only what they need for their specific roles. Maintaining granular control over user permissions means security teams can reduce the attack surface and protect against insider attacks, data breaches, and attacks involving privilege escalation.

When it comes to the digital infrastructure, organizations must consider all fronts: on-prem, public cloud, and hybrid cloud environments:

  • On-premises (on-prem) refers to infrastructure that is owned and managed directly by the organization within its premises.
  • Public cloud involves utilizing resources and services provided by third-party cloud service providers (CSPs) over the internet.
  • Hybrid cloud combines both on-prem and public cloud components, allowing organizations to leverage the benefits of both.

While on-prem offers full control and customization, it requires significant upfront investment and ongoing maintenance. Public cloud offers scalability, flexibility, and cost-effectiveness, but data privacy and compliance concerns may arise. With this in mind, hybrid clouds have become a popular option as they provide a balance, allowing organizations to leverage existing investments while utilizing the scalability and flexibility of the public cloud for specific workloads. Understanding these differences is crucial for organizations to make informed decisions about their infrastructure security.

Key Components of Identity & Access Controls For Hybrid Clouds

Effective management of identity and access controls is crucial in securing hybrid cloud environments. To establish a robust security framework, several key components need to be considered.

Identity Management Systems

Identity management systems play a pivotal role in managing user identities and access rights across hybrid cloud environments. These systems provide a centralized approach to identity management, enabling organizations to streamline user provisioning, authentication, and deprovisioning processes. With a unified identity management system in place, organizations have the capability to enforce password policies, implement multi-factor authentication (MFA), and efficiently manage user lifecycle events across hybrid cloud platforms.

Centralized identity management ensures consistent access control policies throughout the hybrid cloud infrastructure and reduces the risk of unauthorized access or compromised credentials. Also, it simplifies the administration of user identities, giving security teams greater visibility and control over user access privileges. Organizations with a unified identity management system in place are much better positioned to achieve a higher level of security, streamline user management processes, and ensure compliance with any industry-specific regulations.

Authentication Mechanisms

Authentication mechanisms are the gatekeepers along the path to accessing resources in hybrid cloud environments. Organizations must carefully evaluate and implement appropriate authentication methods to strengthen security. While traditional methods like passwords are still often used, they are no longer considered sufficient protection on their own. Advanced techniques such as digital certificates, biometrics, or token-based authentication offer stronger security measures.

One of the most effective authentication mechanisms in hybrid cloud environments is multi-factor authentication (MFA). MFA requires users to provide multiple pieces of evidence to verify their identities. By combining something the user knows (such as a password) with something they have (like a physical token) or something they are (biometrics), MFA significantly elevates the security posture of hybrid cloud deployments. Even if one factor is compromised, the remaining factors still stand between an attacker and unauthorized access.

Implementing strong authentication mechanisms in hybrid clouds ensures that only authorized users can access resources, minimizing the risk of credential theft and unauthorized account access. Organizations should choose authentication methods that align with their security requirements and strike the right balance between usability and protection.
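To make the factor model concrete, here is a minimal sketch of how a server-side check for a time-based one-time password (TOTP), a common “something you have” factor, can work once the password factor has succeeded. The class and method names, six-digit format, and 30-second step are illustrative defaults for the sketch, not a prescription for any particular product.

using System;
using System.Buffers.Binary;
using System.Security.Cryptography;

// Illustrative server-side TOTP verification (RFC 6238 style).
// Secret storage, rate limiting, and replay protection are omitted.
static class TotpSketch
{
    static int Code(byte[] secret, long timeStep)
    {
        var counter = new byte[8];
        BinaryPrimitives.WriteInt64BigEndian(counter, timeStep);
        using var hmac = new HMACSHA1(secret);
        byte[] hash = hmac.ComputeHash(counter);
        int offset = hash[^1] & 0x0F;                        // dynamic truncation
        int binary = ((hash[offset] & 0x7F) << 24) | (hash[offset + 1] << 16)
                   | (hash[offset + 2] << 8) | hash[offset + 3];
        return binary % 1_000_000;                           // six-digit code
    }

    public static bool Verify(byte[] secret, int submittedCode)
    {
        long step = DateTimeOffset.UtcNow.ToUnixTimeSeconds() / 30;
        // Accept the previous, current, and next step to absorb clock drift.
        for (long s = step - 1; s <= step + 1; s++)
            if (Code(secret, s) == submittedCode)
                return true;
        return false;
    }
}

Even in this toy form, the value of the second factor is visible: a stolen password alone never satisfies Verify, because the attacker also needs the shared secret held by the user's device.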

Role-Based Access Control (RBAC)

Role-Based Access Control (RBAC) is a widely adopted authorization model that simplifies access management in hybrid cloud environments. RBAC associates permissions with predefined roles, rather than with specific individuals in the organization. In this approach, security teams work with leadership to assign and approve permissions based on job responsibilities. This ensures that users have access only to the resources necessary for their assigned roles.

In terms of protecting hybrid clouds, RBAC helps maintain consistent access control policies across different platforms, simplifying user privilege management. By implementing RBAC, organizations can reduce their overhead costs by managing access at a role level rather than assigning individual permissions to each user, a process that is arduous and leaves too much room for error or oversight. This granular control is designed to lessen the risk of excessive privileges and unauthorized access – two issues that commonly threaten the overall security posture of hybrid cloud deployments.
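As an illustration of the idea (not any specific vendor's API), the sketch below models RBAC in a few lines: permissions hang off roles, users map to roles, and every access check consults only the role's permission set. The role and permission names are assumptions made up for the example.

using System;
using System.Collections.Generic;

// Minimal RBAC sketch: access decisions consult the role's permission set,
// never per-user permission lists. Role and permission names are illustrative.
class RbacSketch
{
    static readonly Dictionary<string, HashSet<string>> RolePermissions = new()
    {
        ["cloud-auditor"]  = new() { "logs:read", "config:read" },
        ["cloud-operator"] = new() { "logs:read", "vm:start", "vm:stop" },
    };

    static readonly Dictionary<string, string> UserRoles = new()
    {
        ["alice"] = "cloud-auditor",
        ["bob"]   = "cloud-operator",
    };

    static bool IsAllowed(string user, string permission) =>
        UserRoles.TryGetValue(user, out var role) &&
        RolePermissions.TryGetValue(role, out var perms) &&
        perms.Contains(permission);

    static void Main()
    {
        Console.WriteLine(IsAllowed("alice", "vm:stop")); // False: auditors cannot stop VMs
        Console.WriteLine(IsAllowed("bob", "vm:stop"));   // True: operators can
    }
}

Changing what an auditor may do then means editing one role entry rather than touching every auditor's account, which is where the reduced overhead and smaller error surface come from.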

Identity Threat Detection & Response (ITDR)

As the number of digital identities continues to grow exponentially, opportunistic threat actors have seized on this expanding surface as a prime target for cyberattacks. Identity-based cyber threats have surged, challenging conventional identity management tools like Identity and Access Management (IAM), Privileged Access Management (PAM), and Identity Governance and Administration (IGA). These solutions alone are insufficient to shield organizations from the evolving cyber threats targeting both digital and machine identities.

To combat the rising risks and safeguard their enterprises, many organizations are now turning to a combination of identity threat detection and response (ITDR) strategies. By employing ITDR alongside traditional identity management tools, organizations can bolster their defense against advanced cyber threats, mitigate risks, and fortify their security posture effectively.

Managing Identity & Access Controls in Hybrid Clouds

Managing identity and access controls in hybrid cloud environments requires a proactive approach. By following best practices, security teams can establish a robust framework to protect business-critical resources and data effectively.

Establish the Principle of Least Privilege (PoLP)

The Principle of Least Privilege (PoLP) is a fundamental security principle that applies to all IT environments, including hybrid clouds. It dictates that users should only be granted the minimum level of access necessary to perform their job responsibilities. Applying the PoLP ensures that individuals have access only to the resources they need and reduces the risk of unauthorized access or accidental misuse of privileges.

To implement the PoLP in hybrid cloud environments, organizations should conduct regular access reviews to evaluate user permissions and ensure they align with current roles and responsibilities. Also, consider implementing just-in-time (JIT) access where privileges are granted for a limited time when needed and revoked afterward.
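A hedged sketch of the JIT idea follows: elevated roles are granted with an explicit expiry and simply stop matching once the window closes, so standing privileges never accumulate. The record shape and role names are assumptions for illustration only, not tied to any cloud provider's API.

using System;
using System.Collections.Generic;

// Just-in-time access sketch: a grant carries an expiry timestamp and is
// ignored after it lapses, so elevated access is temporary by construction.
class JitAccessSketch
{
    record Grant(string User, string Role, DateTimeOffset ExpiresAt);

    static readonly List<Grant> Grants = new();

    static void GrantTemporary(string user, string role, TimeSpan window) =>
        Grants.Add(new Grant(user, role, DateTimeOffset.UtcNow + window));

    static bool HasRole(string user, string role) =>
        Grants.Exists(g => g.User == user && g.Role == role &&
                           g.ExpiresAt > DateTimeOffset.UtcNow);

    static void Main()
    {
        GrantTemporary("alice", "db-admin", TimeSpan.FromHours(1));
        Console.WriteLine(HasRole("alice", "db-admin")); // True only within the hour
    }
}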

Perform Continuous Monitoring & Auditing

Continuous monitoring and auditing are core pillars of maintaining the security of identity and access controls in hybrid cloud environments. Monitoring user activities is the first step to detecting and responding to potential security incidents in real time and to reducing the time needed to identify and mitigate threats.

Continuous monitoring involves collecting and analyzing security logs and events from various sources, including identity management systems, authentication systems, and access control mechanisms. This enables security analysts to identify atypical behavior within hybrid clouds, such as unusual login patterns or unauthorized access attempts, and take the right actions promptly.
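As a toy example of one such rule (a sketch, not a full detection pipeline), the snippet below flags accounts with an unusual number of failed logins inside a short window; the log record shape and threshold are assumptions.

using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of a single monitoring rule: group recent failed logins by user
// and flag anyone above a threshold. Real deployments would correlate many
// such signals across identity, network, and cloud audit logs.
class LoginMonitorSketch
{
    record LoginEvent(string User, bool Success, DateTimeOffset At);

    static IEnumerable<string> SuspiciousUsers(IEnumerable<LoginEvent> events,
                                               int threshold, TimeSpan window)
    {
        var cutoff = DateTimeOffset.UtcNow - window;
        return events.Where(e => !e.Success && e.At >= cutoff)
                     .GroupBy(e => e.User)
                     .Where(g => g.Count() >= threshold)
                     .Select(g => g.Key);
    }
}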

In addition to monitoring, regular auditing is essential to evaluate the effectiveness of identity and access controls and ensure they are in compliance with regulatory requirements. Auditing involves reviewing user permissions, access logs, and system configurations to identify any vulnerabilities or discrepancies. Having a firm auditing policy in place helps organizations to identify and address security gaps and demonstrate adherence to industry standards and compliance regulations.

Combine Advanced Endpoint Protection With ITDR

In the shifting threat landscape, identity threat detection and response (ITDR) continues to emerge, complementing advanced security solutions like Endpoint Detection and Response (EDR) and Extended Detection and Response (XDR). ITDR focuses on safeguarding credentials, privileges, cloud entitlements, and the systems that govern them, bridging a significant gap in the security realm. By implementing ITDR, organizations can:

  • Protect cloud environments – Cloud infrastructures can present permissions sprawl, overwhelming teams with numerous applications, containers, and servers to manage. ITDR solutions extend their protective umbrella to cloud environments, offering visibility into risky entitlements that could attract opportunistic attackers.
  • Detect & Prevent Identity-Based Attacks – ITDR actively seeks out attacks targeting identity vectors, swiftly identifying credential theft, signs of privilege misuse, and malicious activities on Active Directory (AD) and other systems.
  • Thwart The Attack Lifecycle – ITDR solutions add an extra layer of protection by deploying pre-set decoys to divert attackers, automatically isolating affected systems, and preventing lateral movement into other networks.
  • Build Lasting Cyber Resilience – ITDR proves its value in forensic data collection, gathering critical telemetry on attack processes. The gathered threat intelligence empowers technical teams to fortify weak policies and processes, enhancing long-term cyber resilience.

Conclusion

Hybrid clouds are often targeted by cyberattacks due to their unique complexities and increased attack surface. Exploiting potential misconfigurations, weak authentication mechanisms, and synchronization issues between different platforms, threat actors increasingly have their eyes set on hybrid clouds as a lucrative attack vector.

In the face of relentless identity-based threats affecting industries worldwide, business leaders are intensifying efforts to mitigate these risks with a more proactive approach. By centering their focus on identity and access protection, organizations can fortify their hybrid cloud deployments against unauthorized access attempts, minimize the risk of compromised credentials, and establish a foundation of trust and security across the infrastructure.

SentinelOne has leveraged its deep experience in privilege escalation and lateral movement detection to become a significant player in the ITDR space. Learn about SentinelOne’s approach to defending hybrid cloud environments by contacting us or booking a demo today.


Reverse Engineering Walkthrough | Analyzing A Sample Of Arechclient2

In partnership with vx-underground, SentinelOne recently ran its first Malware Research Challenge, in which we asked researchers across the cybersecurity community to submit their research to showcase their talents and bring their insights to a wider audience.

In today’s post, Millie Nym (@dr4k0nia) demonstrates a problem-solving approach to reverse engineering a malware sample, highlighting not just the practical steps taken but also the logical reasoning conducted as the investigation unfolded. The post offers a fascinating insight into how researchers tackle the challenges in front of them and a perfect example for anyone wishing to learn or develop their reverse engineering skills.

In this post, I will go over my process of analyzing a sample of ArechClient2, covering the initial analysis, deobfuscation, and unpacking of the loader, followed by analysis of the .NET payload to reveal its config and C2 information.

It began with this tweet by @Gi7w0rm. They mentioned me and a few others asking for help analyzing this sample, so I decided to look into it. After publishing some threat intel and a few updates on my progress on Twitter, I decided to write this report for a more detailed documentation of my analysis. The original sample can be found here.

Initial Analysis

The sample consists of two files, an executable and an a3x file. After some quick research, I found that a3x is a “compiled” form of AutoIt script. The executable’s icon is the logo of AutoIt and the copyright information says it’s AutoIt. This leads me to believe that this executable is the runtime required to execute the a3x file.

I ran the file in a Windows Sandbox for some quick intel and immediately got a Windows Defender hit for MSIL:Trojan, which indicates that this AutoIt part is just a loader for a second stage .NET binary. In case you are not familiar with the terms, “MSIL” stands for Microsoft Intermediate Language, which is the bytecode that .NET binaries are compiled to.

The a3x script is human-readable, so after putting it into Visual Studio Code I saw this.

It looks pretty messy at first but taking a closer look I found something that stuck out: The calls to the function called DoctrineDrama look suspiciously like string decryption. So my next step was to find that function. I used the search function to look for its name until I found the actual implementation. All functions start with the keyword Func and end with the keyword EndFunc, making it easy to identify them. I copied the code of the DoctrineDrama function to a separate file. The code is obfuscated and seems to contain some junk code. My first step was to indent the code for easier readability.

Looking at the switch cases inside the loops, I realized that only the branches that use ExitLoop are of importance. Taking a look at the switch conditions confirmed that suspicion. At the beginning of the function, the second variable is the loop condition,  initialized with a value of 921021. Looking at the switch, it matches the case that exits the loop, meaning the other cases are dead code and can be ignored. I removed the dead branches, cleaned up the unnecessary loops and got rid of the unused variables:

After cleaning up we are left with this code. Reading it, we can deduce some more fitting variable names: the first argument seems to be the encrypted input, the second argument is the key, and the first local variable is the resulting string.

To understand the rest of the code I looked at the documentation of AutoIt. The StringSplit function takes the following arguments:

  • a string
  • a delimiter char
  • an optional argument for the delimiter search mode

So the second local variable in DoctrineDrama is an array of strings split from the input.

Next, the code iterates through all the elements of that array and appends a new character to the output string with every iteration. We see a call to a function called Chr, which according to the documentation converts a numeric value between 0 and 255 to an ASCII character. But something is off: what is going on inside that call to Chr? Subtraction on a string? How does that work? I wondered about that, but after a quick web search, I found out that in AutoIt digit strings seem to be auto-converted to a number if you perform any arithmetic operation on them. Once the loop is finished, the output string is returned.

Looking at this fully cleaned-up version, I reimplemented the decryption routine in C# to build a simple deobfuscator.

using System;
using System.Text;

// Reimplementation of the DoctrineDrama routine: the encrypted string is a
// list of numbers separated by 'h', and each number minus the key yields one
// character of the plaintext.
static string Decrypt(string input, int key)
{
    var buffer = input.Split('h');
    var builder = new StringBuilder();
    for (int i = 0; i < buffer.Length; i++)
    {
        // Convert the digit string to a number, subtract the key, and
        // append the resulting ASCII character to the output.
        builder.Append((char)(Convert.ToInt32(buffer[i]) - key));
    }
    return builder.ToString();
}

The deobfuscator uses a simple regex pattern to match every call to DoctrineDrama and replace it with the decrypted string. It also outputs a list of all decrypted strings. The full deobfuscator code can be found here.
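The full tool is linked above; as a rough sketch of the approach (not the author's exact code), a regex-based pass over the script might look like the following, where the call syntax matched by the pattern is an assumption.

using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

// Sketch of a regex-based deobfuscation pass: find every
// DoctrineDrama("<numbers>", <key>) call, decrypt it, and inline the result.
class DeobfuscatorSketch
{
    // Assumed call shape: DoctrineDrama("65h66h67", 1)
    static readonly Regex CallPattern =
        new Regex("DoctrineDrama\\(\"(?<data>[^\"]+)\"\\s*,\\s*(?<key>\\d+)\\)");

    static void Main(string[] args)
    {
        string script = File.ReadAllText(args[0]);

        string cleaned = CallPattern.Replace(script, m =>
        {
            string plain = Decrypt(m.Groups["data"].Value,
                                   int.Parse(m.Groups["key"].Value));
            Console.WriteLine(plain);        // also dump every decrypted string
            return "\"" + plain + "\"";      // replace the call with a literal
        });

        File.WriteAllText(args[0] + ".clean.au3", cleaned);
    }

    static string Decrypt(string input, int key)
    {
        var builder = new StringBuilder();
        foreach (var part in input.Split('h'))
            builder.Append((char)(Convert.ToInt32(part) - key));
        return builder.ToString();
    }
}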

Dumping the Payload

After deobfuscating all the strings, I searched the string dump for some Windows API function names that I would expect from a loader. I found a few hits on NtResumeThread, CreateProcessW and NtUnmapViewOfSection. These three in combination give a huge hint towards process hollowing. After searching the string dump for .exe I found the suspected injection target Microsoft.NET\Framework\v4.0.30319\jsc.exe, a utility of .NET Framework 4.x which comes with every standard Windows 10 install.

My next step was to debug the executable using x64Dbg. I set a breakpoint on CreateProcessW, to ensure we break before the injection process is started. After running past the entry point I was greeted with this nice little message.

The message box claims I violated a EULA which I never read nor agreed to. I guess we can’t debug the malware any further, how unfortunate. Luckily for us, x64Dbg has a built-in AutoIt EULA bypass: it’s called Hide Debugger (PEB). You can find it under Debug>Advanced>Hide Debugger (PEB). Make sure to run x64Dbg in elevated mode.

After dealing with the rather simple anti-debug, we let it run. When debugged, the executable spawns a file dialog asking for an a3x file; when run without a debugger, it automatically finds the script file. After pointing it to the script file, we let it run until the breakpoint for CreateProcessW is hit. At this point, jsc.exe will be started in suspended mode.

Checking Process Explorer confirms that the decrypted path from the AutoIt script was indeed the injection target. We add another breakpoint on NtResumeThread, which will break execution after the injection is finished but before the thread is resumed to execute the malware.

Since we already know the malware is .NET-based, I will use ExtremeDumper to get the managed payload from the jsc.exe process. Run ExtremeDumper as admin and dump jsc.exe; if the process does not show up, make sure you are using the x86 version of ExtremeDumper.

At the time of writing, the loader does not run anymore but fails with an error message about Windows updates. Sifting through the string dump, I suspect there is some sort of date check that prevents further execution. This was likely implemented to prevent future analysis. Luckily, I had dumped the actual payload beforehand.

The .NET Payload

After dumping the loader, I had to deal with the managed payload. The image is heavily obfuscated. I started my hunt in the <Module> class, also referred to as the global type. I checked this class first since its constructor is called before the managed entry point. Many obfuscators call their runtime protections or functions like string decryption here.

My guess was correct: I found a string decryption method c in <Module> (token 0x06000003). The method reads the encrypted string data from an embedded resource and then performs a single-XOR decryption on it. The key used for decryption is supplied via parameters, which leads me to believe that each string has a unique decryption key.
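For readers unfamiliar with this pattern, here is a generic sketch of what such a resource-backed, single-XOR string decryptor typically looks like; the resource name, parameters, and blob layout are assumptions for illustration, not the sample's actual code.

using System;
using System.IO;
using System.Reflection;
using System.Text;

// Generic shape of a resource-backed XOR string decryptor: the obfuscator
// stores all strings in one encrypted blob, and each call site passes an
// offset, length, and per-string key.
static class XorStringDecryptorSketch
{
    // Resource name is assumed for illustration.
    static readonly byte[] Blob = LoadBlob("EncryptedStrings.bin");

    static byte[] LoadBlob(string name)
    {
        using var stream = Assembly.GetExecutingAssembly()
                                   .GetManifestResourceStream(name);
        using var memory = new MemoryStream();
        stream.CopyTo(memory);
        return memory.ToArray();
    }

    internal static string Decrypt(int offset, int length, byte key)
    {
        var builder = new StringBuilder(length);
        for (int i = 0; i < length; i++)
            builder.Append((char)(Blob[offset + i] ^ key));   // single XOR per byte
        return builder.ToString();
    }
}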

After checking references to c, it turned out that the decryption relies on flow-dependent variables. The calls to the decryption routine have encrypted arguments that are computed using several opaque predicates and global variables that are initialized and changed depending on call flow.

This means we would have to emulate or solve all calculations required to obtain the local variables and global fields that are used by the expressions that decrypt the arguments of the call to our decryption method c. The additional dependency on call flow further increases the effort required since we would need to solve all calculations in every method in the correct order. Considering all this I ditched the idea of writing a static string decryption tool.

Sifting through the binary I found quite a few similarities to Redline, both making use of DataContracts and async tasks for the separate stealer modules.

One class in particular seemed interesting. After looking for networking-related functions, I found a class cj (token 0x0200010C) that connects to a server via .NET’s TcpClient. Looking at the code, we can spot the use of another class called xj, which seems to contain the IP and port number for the TCP connection. See line 155: tcpClient.Connect(xj.c, Convert.ToInt32(xj.a.d)).

Apart from that, xj also seems to contain a URL that the malware accesses and downloads a string from, see line 168. Let’s take a closer look at xj (token 0x02000107). It contains quite a few properties, but the most interesting is the constructor.

This looks like a potential config class. It initializes the properties used for the initial TCP connection and the string download we saw in cj, which is a good indicator that we are indeed looking at the malware’s config.

I placed a breakpoint at the end of the constructor. Since the string decryption method was still an issue, the easiest way to get the strings was to run the binary and have it decrypt the strings for me. I debugged the executable using dnSpy until I hit the breakpoint at the end of the constructor. After the breakpoint hit, we can view all the property and field values in the Locals window by expanding the this parameter.

Here we see the C2 IP 77.73.133.83 and port 15647. We can also see a Pastebin link that caught my interest: The paste contains another IP 34.107.35.186, potentially a fallback C2.

Before debugging, I modified the string decryption method by adding a few lines to write every decrypted string to disk. This modification makes it so that instead of immediately returning the string it’s first passed to AppendAllText and written to a file of our choice.
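Building on the hypothetical XorStringDecryptorSketch above (the sample's real method differs), the patch amounts to something like the following: the decrypted string is also appended to a dump file on disk just before being returned. The dump path is an arbitrary choice for the sketch.

// Patched version of the hypothetical Decrypt method from the sketch above:
// every decrypted string is also written to a dump file before returning.
internal static string Decrypt(int offset, int length, byte key)
{
    var builder = new System.Text.StringBuilder(length);
    for (int i = 0; i < length; i++)
        builder.Append((char)(Blob[offset + i] ^ key));

    string result = builder.ToString();
    System.IO.File.AppendAllText(@"C:\analysis\string_dump.txt",
                                 result + System.Environment.NewLine);
    return result;
}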

The dump revealed the same values that we found in the Locals window and a few more strings of interest. For example, we got a list of the paths that the stealer checks for potential credentials. The main targets of this stealer seem to be browsers, mail clients and game clients like Steam. This is similar to most mainstream stealers. You can view the full string dump here.

Speaking of strings, I noticed another similarity to Redline: the use of char array to string conversion at runtime. Redline, however, in many cases inserts some additional junk into these arrays, which is then removed from the constructed string using the Replace or Remove method.

Due to the heavy obfuscation and the rather similar behavior to existing stealers, I decided to not investigate this payload further. We revealed the most important IOCs and got a pretty good understanding of the stealer’s targets.

Identification of ArechClient Malware Family

After analyzing the string dump, I found some indicators that could help with attribution to a certain malware family. Although this sample does look very similar to Redline stealer, it is actually not part of that family. I found this blob of data that looked suspiciously like C2 communication:

 {"Type":"ConnectionType","ConnectionType":"Client","SessionID":"
 ","BotName":"
 ","BuildID":"
 ","BotOS":
 "Caption","URLData":"
 ","UIP":"
 "}

Comparing the above data and the port number against other writeups, like this one from IronNet Threat Research, revealed similarities to a different malware family. The screenshot below shows a network capture of an active ArechClient2 sample performed by the researchers from IronNet. Comparing this data, we can conclude that our sample is also part of the ArechClient2 family.


Summary

We found that the initial loader was implemented in AutoIt and uses process hollowing to load a .NET-based payload. We reconstructed the string decryption method, enabling us to partially deobfuscate the loader. We dumped the managed payload using a debugger and ExtremeDumper. We then analyzed and debugged the managed payload to reveal its config, containing the C2 information.

Readers can find more research by dr4k0nia here.

Indicators of Compromise

Description | Indicator
C2 | 77.73.133.83:15647
Potential fallback C2 | 34.107.35.186:15647
URL for fallback C2 | https[:]//pastebin.com/raw/NdY0fAXm
.NET payload (Test.exe) | SHA1: 054742329f83a5d177dd1937992e6755f43c420e
AutoIt loader (45.exe) | SHA1: 2a4062e10a5de813f5688221dbeb3f3ff33eb417
AutoIt script (S.a3x) | SHA1: 4397b1d855e799f4d38467a848cda2273c1c6c73

This post is licensed under CC BY 4.0 by the author.

Strengthening Cyber Defenses | A Guide to Enhancing Modern Tabletop Exercises

To combat a growing range of cyber threats, enterprise leaders and cybersecurity professionals often employ tabletop exercises as a valuable tool to enhance preparedness and response capabilities. Tabletop exercises simulate real-world cyber incidents in a controlled environment, allowing organizations to test their incident response plans, evaluate team coordination, and identify vulnerabilities.

As the overall threat landscape shifts, though, it is essential to continuously improve tabletop exercises so that they remain effective. Without the right strategy in place, organizations may not find value in their tabletop exercises. Adapting to the changing cybersecurity landscape requires security teams to incorporate the most current emerging threats, technologies, and attack vectors into these exercises.

This blog post discusses how modern enterprises can build their tabletop strategy to meet a changing threat climate and ways to overcome common challenges associated with the exercises. It also covers how tabletop exercises will transform in the future and how businesses can continue to derive value from such tools.

From Military Roots to Cyber Defenses | Defining Tabletop Exercises

Tabletop exercises (TTX) have a rich history in the realm of cybersecurity, dating back to the early days of military and emergency response planning. Originally used to simulate military campaigns and disaster response scenarios, TTXs gradually found their way into the cybersecurity domain. These exercises were initially developed to assess an organization’s ability to respond to physical security incidents, but as cyber threats became more prevalent, their focus expanded to include cyber incidents.

TTXs in cybersecurity typically involve a simulated scenario where participants gather in a controlled environment to collaboratively respond to a fictional cyber incident. The scenario is crafted to mimic real-world situations and may include elements like phishing attacks, data breaches, ransomware infections, or network intrusions. Participants, representing various roles within the organization, such as IT personnel, executives, legal advisors, and public relations representatives, engage in discussions and decision-making processes to address the unfolding incident.

The exercises can take different forms, ranging from informal discussions to more structured and time-constrained simulations. Facilitators guide the exercise, presenting new challenges and information as the scenario progresses, and participants must work together to assess the situation, make decisions, and develop an effective response plan. These exercises allow organizations to evaluate their incident response procedures, identify gaps and weaknesses, and refine their strategies to improve preparedness.

By simulating cyber incidents in a controlled environment, TTXs provide a safe space for learning, fostering collaboration among team members, and enabling the exploration of alternative approaches. They help organizations identify strengths and weaknesses in their incident response capabilities, assess communication channels, and uncover areas for improvement. Additionally, tabletop exercises offer the opportunity to test and validate incident response plans, refine coordination between different teams, and enhance overall cyber resilience.

Understanding the Relevance of Tabletop Exercises In Today’s World

Cyber threats have become more sophisticated and frequent, making tabletop exercises a highly useful tool for organizations. While new solutions provide advanced security measures, cybercriminals continue to exploit vulnerabilities and develop new attack vectors. This makes it essential for organizations to regularly assess and enhance their preparedness to combat cyber threats.

TTXs provide a controlled environment to simulate real-world cyber incidents and test an organization’s response capabilities. The relevance of TTXs to modern security practices can be broken down into these main areas:

  • Risk Management – TTXs allow security teams to understand pain points, challenges, and any weaknesses in processes and communication channels that may not have been apparent in day-to-day operations. The results of the exercise can help teams bolster the weak points in their response strategy and bring in additional oversight where needed.
  • Continuous Improvement & Lessons Learned – TTXs force security teams to validate documented flows that are in place for the current security program. After the exercise, all relevant participants can provide feedback on gaps and work towards revisions.
  • Cybersecurity Training – After a TTX, valuable findings and any updates for processes are documented into training guides and playbooks for future use. New stakeholders can follow vetted documentation to prepare for future exercises.
  • Stakeholder Collaboration – TTXs bring together key stakeholders, including IT personnel, executives, legal advisors, and public relations representatives. Holding regular exercises fosters collaboration and provides an opportunity to practice decision-making under pressure.

Mitigating The Challenges of Building A Tabletop Strategy

TTXs are a key element in developing the human side of incident response and cyber defense. By conducting regular tabletop exercises, organizations can test and enhance the knowledge and skills of incident responders. In the long run, having an established tabletop strategy bolsters the overall security posture of the business.

Many organizations, however, face challenges not only in implementing the strategy, but also in generating ongoing value from TTXs. For some, the exercises are carried out with the best of intentions but still ‘fail’. From resource limitations to lack of engagement and availability, there are several common challenges associated with implementing value-driven TTXs. Here are some ways to overcome these pitfalls and ensure that the strategy works with the business and benefits security teams as cyber threats continue to develop.

Define Clear & Actionable Objectives

When objectives are not laid out in advance of a TTX, the sessions can feel like a perfunctory technical drill or a check-the-box activity with little to no value. Without clear goals in mind, the discussion can quickly unravel.

Defining the objectives comes from having a clear understanding of ‘the why’ behind the TTX. Based on the organization’s risk profile, senior leadership and security leaders need to pinpoint what takeaways the sessions should garner and what incremental improvements they want to make in their security strategy.

Having clear and actionable objectives for a cybersecurity tabletop exercise is key to ensuring its effectiveness. Here are some steps that enterprises can follow:

  • Identify Key Focus Areas – Start by identifying the specific areas of cybersecurity that the exercise should address. This could include incident response procedures, communication protocols, decision-making processes, or testing the effectiveness of security controls. Consider the organization’s priorities, recent trends in cyber threats, and any known vulnerabilities or weaknesses.
  • Align Objectives With Organizational Goals – The exercise objectives should align with the broader goals and priorities of the business. For example, if working towards compliance within a specific security framework or regulatory requirement, the exercise objectives can focus on testing and improving compliance-related processes.
  • Be Specific & Measurable – Objectives should be specific and measurable to enable effective evaluation. Rather than stating a vague goal like “improve incident response,” set measurable targets such as “reduce incident response time by 20%,” or “enhance coordination between IT and legal teams during a data breach scenario.”
  • Document & Communicate Objectives – Clearly document the defined objectives and share them with all participants. This ensures everyone is aligned and working towards common goals during the exercise.

Invite The Right Experts To The Discussion

A successful TTX requires the participation of key individuals who represent the roles and functions applicable to the TTX scenario being discussed. Considering the specific objectives set for a particular TTX, participants should include only those who can answer for their function, as too many observers may dilute the conversation if not managed.

Commonly, most TTX sessions will feature representatives from:

  • Executive Leadership – C-suites should be involved to provide a high-level decision-making perspective, assess the impact of potential cyber incidents on the organization, and give the final word on necessary resources for incident response. Cyber incidents are not only a test of technical defenses, but they also examine executive-level responses when it comes to communicating the impact to both customers and the general public.
  • Security & IT – Security professionals, including cybersecurity analysts, incident response managers, and network administrators, are essential participants. Their expertise in identifying and mitigating cyber threats supplies the technical acumen needed for the exercise.
  • Legal & Compliance – Inclusion of legal advisors and compliance officers ensures that the exercise considers legal and regulatory implications. They can offer guidance on breach notification requirements, legal obligations, and potential liabilities.
  • Communications & PR – Both internal and external communication is vital during a cyber incident. This team can speak to the management of public perception, media inquiries, and stakeholder communications during the scenario.
  • Human Resources – Human resources representatives can contribute by addressing employee-related aspects, such as incident reporting procedures, training, and handling internal communication during an incident.
  • Departmental Heads – It is beneficial to include representatives from different departments to ensure a holistic understanding of the organization’s operations and their interdependencies. Should a scenario deal with one specific department’s data, for example, that department head would be expected to provide input.
  • Operations – Participants from operations and business continuity teams can provide insights into the potential impact of cyber incidents on critical operations and contribute to the development of effective recovery strategies.

Build Business-Tailored Scenarios & Evaluation Criteria

Designing realistic scenarios that accurately reflect the current threat landscape can be challenging. It requires staying updated on the latest attack techniques, emerging technologies, and industry trends. Creating scenarios that strike the right balance between realism and feasibility is crucial for a meaningful exercise.

To foster better TTX discussions, the scenarios should be aligned with the industry-specific risks and active and known threats to similar organizations or competitors in the same space. Scenarios can also be based on the organization’s own history of security incidents.

  • Tie Scenarios to Operations – Design scenarios that reflect the organization’s unique business operations, systems, and processes. Consider the industry, internal procedures, technology infrastructure, and specific threats relevant to the organization. This ensures that participants can relate to the scenarios and their potential impact.
  • Leverage Past Risk Assessments – Using past risk assessments, identify the critical assets, vulnerabilities, and potential impacts within the organization. This helps determine the areas to focus on and ensures that the exercises address the most applicable risks.
  • Incorporate Real-World Scenarios – Draw inspiration from real-world cyber incidents and recent data breach reports. Simulate scenarios that resemble actual incidents faced by similar organizations or that align with prevalent industry-specific threats. This helps participants gain practical experience and understand the implications of such incidents.

Create Follow-Ups For the Next Exercise

Assessing the outcomes of TTXs and translating them into actionable improvements is a necessary but often overlooked part of the discussion. Proper evaluation and analysis of exercise results, followed by effective follow-up actions, are essential to maximize the value of these exercises. This iterative approach ensures that teams learn from each exercise, act on any needed changes, and continuously enhance their response capabilities.

  • Evaluate The Outcome – Conduct a thorough evaluation of the tabletop exercise right after its completion. Gather feedback from participants to identify strengths, weaknesses, and areas for improvement. Document any key insights or ideas for future exercises.
  • Analyze The Gaps – Analyze the gaps and weaknesses identified during the exercise. Categorize them based on severity and prioritize them for action. Determine the root causes behind the gaps, whether they involve processes, technology, communication, or personnel.
  • Assign Action Items – Based on the identified gaps, assign action items for addressing each one to relevant individuals or teams. Set realistic timelines and milestones for completion. Continuously track progress and use key performance indicators (KPIs) to gauge the success of the follow-up initiatives. This provides a basis for further refinement and adjustment.
  • Update Incident Response Plans – Revise and update the organization’s incident response plans to reflect the gaps identified during the exercise. Ensure that all employees have access to the updated plans.
  • Conduct Training and Awareness Programs – Provide training sessions to enhance skills, educate employees on specific cyber threats, and reinforce incident response procedures. This helps fill knowledge gaps and improves preparedness.

Seeing Tabletop Exercises As One Part Of A Whole

When carried out correctly, a strong tabletop exercise strategy can expose weaknesses in incident response strategies, uncover areas for improvement, and foster a better strategy for emergency preparedness. While TTXs are a helpful tool, allowing security teams to simulate various scenarios, the exercises themselves are not enough to build an end-to-end cybersecurity defense posture against advanced cyber threats. In the greater scheme, TTXs are just one part of a whole and only place emphasis on fixing known vulnerabilities and any gaps identified during the sessions.

For ongoing, holistic protection against increasingly sophisticated threat tactics, techniques, and procedures, enterprises can augment their TTX processes with artificial intelligence (AI), machine learning (ML), red teaming, and a combination of autonomous endpoint, cloud, and identity security. TTXs are now beginning to incorporate these emerging technologies, as they can simulate advanced attack vectors and enable organizations to test the effectiveness of automated response mechanisms. This ensures preparedness against new and evolving threats that haven’t already been documented and tracked.

Further, AI and ML can be used to model and simulate the behavior of adversaries, both known and unknown. By analyzing historical attack data, threat intelligence, and patterns, these technologies can generate realistic adversary profiles. TTXs can then include a wide range of adversary behaviors, making the exercises more challenging and reflective of real-world threats.  Algorithms can be written to analyze historical data from previous cyber incidents and help identify patterns and trends. With this data on hand, organizations can predict and anticipate potential future threats, vulnerabilities, or attack vectors. Incorporating predictive analytics in TTXs helps security teams proactively enhance their defenses.

The new wave of TTX strategy is also seeing more involvement from red teams. Red teaming, which involves simulating adversarial attacks, can be augmented by AI and ML. These technologies can automate certain aspects of red teaming exercises, such as generating realistic attack scenarios, identifying vulnerabilities, and assessing the impact of potential attacks. This helps in uncovering weaknesses and testing the resilience of an organization’s defenses.

Conclusion

Tabletop exercises, when implemented alongside AI-powered tools, allow security operations centers (SOCs) to understand their responsibilities and spend less time collecting and analyzing data during an incident. These risk-informed exercises reduce the overall mean-time-to-containment, enhance collaboration, and allow for the refinement of incident response plans. When combined with red teaming, where simulated adversarial attacks are conducted, organizations gain a deeper understanding of their vulnerabilities and can proactively address them.

As cyberattacks grow in frequency and complexity, autonomous security, AI, and ML technologies are bringing valuable capabilities to tabletop exercises. They enable the automation of many security tasks and enhance predictive analytics. By leveraging these technologies, organizations can improve threat detection, response speed, and decision-making, allowing them to stay ahead of threat actors in the ever-changing cyber ecosystem.

SentinelOne focuses on acting faster and smarter through AI-powered prevention and autonomous detection and response. With the Singularity XDR Platform, organizations gain access to back-end data across the organization through a single solution, providing a cohesive view of their network and assets by adding a real time autonomous security layer across all enterprise assets. It is the only platform powered by AI that provides advanced threat hunting and complete visibility across every device, virtual or physical, on-prem or in the cloud.

Learn more about how Singularity helps organizations autonomously prevent, detect, and recover from threats in real time by contacting us or requesting a demo.

LeakedSource Owner Quit Ashley Madison a Month Before 2015 Hack

[This is Part III in a series on research conducted for a recent Hulu documentary on the 2015 hack of marital infidelity website AshleyMadison.com.]

In 2019, a Canadian company called Defiant Tech Inc. pleaded guilty to running LeakedSource[.]com, a service that sold access to billions of passwords and other data exposed in countless data breaches. KrebsOnSecurity has learned that the owner of Defiant Tech, a 32-year-old Ontario man named Jordan Evan Bloom, was hired in late 2014 as a developer for the marital infidelity site AshleyMadison.com. Bloom resigned from AshleyMadison citing health reasons in June 2015 — less than one month before unidentified hackers stole data on 37 million users — and launched LeakedSource three months later.

Jordan Evan Bloom, posing in front of his Lamborghini.

On Jan. 15, 2018, the Royal Canadian Mounted Police (RCMP) charged then 27-year-old Bloom, of Thornhill, Ontario, with selling stolen personal identities online through the website LeakedSource[.]com.

LeakedSource was advertised on a number of popular cybercrime forums as a service that could help hackers break into valuable or high-profile accounts. LeakedSource also tried to pass itself off as a legal, legitimate business that was marketing to security firms and professionals.

The RCMP arrested Bloom in December 2017, and said he made approximately $250,000 selling hacked data, which included information on 37 million user accounts leaked in the 2015 Ashley Madison breach.

Subsequent press releases from the RCMP about the LeakedSource investigation omitted any mention of Bloom, and referred to the defendant only as Defiant Tech. In a legal settlement that is quintessentially Canadian, the matter was resolved in 2019 after Defiant Tech agreed to plead guilty. The RCMP did not respond to requests for comment.

A GREY MARKET

The Impact Team, the hacker group that claimed responsibility for stealing and leaking the AshleyMadison user data, also leaked several years worth of email from then-CEO Noel Biderman. A review of those messages shows that Ashley Madison hired Jordan Evan Bloom as a PHP developer in December 2014 — even though the company understood that Bloom’s success as a programmer and businessman was tied to shady and legally murky enterprises.

Bloom’s recommendation came to Biderman via Trevor Sykes, then chief technology officer for Ashley Madison parent firm Avid Life Media (ALM). The following is an email from Sykes to Biderman dated Nov. 14, 2014:

“Greetings Noel,

“We’d like to offer Jordan Bloom the position of PHP developer reporting to Mike Morris for 75k CAD/Year. He did well on the test, but he also has a great understanding of the business side of things having run small businesses himself. This was an internal referral.”

When Biderman responded that he needed more information about the candidate, Sykes replied that Bloom was independently wealthy as a result of his forays into the shadowy world of “gold farming”  — the semi-automated use of large numbers of player accounts to win some advantage that is usually related to cashing out game accounts or inventory. Gold farming is particularly prevalent in massively multiplayer online role-playing games (MMORPGs), such as RuneScape and World of Warcraft.

“In his previous experience he had been doing RMT (Real Money Trading),” Sykes wrote. “This is the practice of selling virtual goods in games for real world money. This is a grey market, which is usually against the terms and services of the game companies.” Here’s the rest of his message to Biderman:

“RMT sellers traditionally have a lot of problems with chargebacks, and payment processor compliance. During my interview with him, I spent some time focusing in on this. He had to demonstrate to the processor, Paypal, at the time he had a business and technical strategy to address his charge back rate.”

“He ran this company himself, and did all the coding, including the integration with the processors,” Sykes continued in his assessment of Bloom. “Eventually he was squeezed out by Chinese gold farmers, and their ability to market with much more investment than he could. In addition the cost of ‘farming’ the virtual goods was cheaper in China to do than in North America.”

COME, ABUSE WITH US

The gold farming reference is fascinating because in 2017 KrebsOnSecurity published Who Ran LeakedSource?, which examined clues suggesting that one of the administrators of LeakedSource also was the admin of abusewith[.]us, a site unabashedly dedicated to helping people hack email and online gaming accounts.

An administrator account Xerx3s on Abusewithus.

Abusewith[.]us began in September 2013 as a forum for learning and teaching how to hack accounts at Runescape, an MMORPG set in a medieval fantasy realm where players battle for kingdoms and riches.

The currency with which Runescape players buy and sell weapons, potions and other in-game items is virtual gold coins, and many of Abusewith[.]us’s early members traded in a handful of commodities: Phishing kits and exploits that could be used to steal Runescape usernames and passwords from fellow players; virtual gold plundered from hacked accounts; and databases from hacked forums and websites related to Runescape and other online games.

That 2017 report interviewed a Michigan man who acknowledged being administrator of Abusewith[.]us, but denied being the operator of LeakedSource. Still, the story noted that LeakedSource likely had more than one operator, and breached records show Bloom was a prolific member of Abusewith[.]us.

In an email to all employees on Dec. 1, 2014, Ashley Madison’s director of HR said Bloom graduated from York University in Toronto with a degree in theoretical physics, and that he has been an active programmer since high school.

“He’s a proprietor of a high traffic multiplayer game and developer/publisher of utilities such as PicTrace,” the HR director enthused. “He will be a great addition to the team.”

PicTrace appears to have been a service that allowed users to glean information about anyone who viewed an image hosted on the platform, such as their Internet address, browser type and version number. A copy of pictrace[.]com from Archive.org in 2012 redirects to the domain qksnap.com, which DomainTools.com says was registered to a Jordan Bloom from Thornhill, ON that same year.

The street address listed in the registration records for qksnap.com — 204 Beverley Glen Blvd — also shows up in the registration records for leakadvisor[.]com, a domain registered in 2017 just months after Canadian authorities seized the servers running LeakedSource.

Pictrace, one of Jordan Bloom’s early IT successes.

A review of passive DNS records from DomainTools indicates that in 2013 pictrace[.]com shared a server with just a handful of other domains, including Near-Reality[.]com — a popular RuneScape Private Server (RSPS) game based on the RuneScape MMORPG.

Copies of near-reality[.]com from 2013 via Archive.org show the top of the community’s homepage was retrofitted with a message saying Near Reality was no longer available due to a copyright dispute. Although the site doesn’t specify the other party to the copyright dispute, it appears Near-Reality got sued by Jagex, the owner of RuneScape.

The message goes on to say the website will no longer “encourage, facilitate, enable or condone (i) any infringement of copyright in RuneScape or any other Jagex product; nor (ii) any breach of the terms and conditions of RuneScape or any other Jagex product.”

A scene from the MMORPG RuneScape.

AGENTJAGS

Near Reality also has a Facebook page that was last updated in 2019, when its owner posted a link to a news story about Defiant Tech’s guilty plea in the LeakedSource investigation. That Facebook page indicates Bloom also went by the nickname “Agentjags.”

“Just a quick PSA,” reads a post to the Near Reality Facebook page dated Jan. 21, 2018, which linked to a story about the charges against Bloom and a photo of Bloom standing in front of his lime-green Lamborghini. “Agentjags has got involved in some shady shit that may have compromised your personal details. I advise anyone who is using an old NR [Near Reality] password for anything remotely important should change it ASAP.”

By the beginning of 2016, Bloom was nowhere to be found, and was suspected of having fled his country for the Caribbean, according to the people commenting on the Near Reality Facebook page:

“Jordan aka Agentjags has gone missing,” wrote a presumed co-owner of the Facebook page. “He is supposedly hiding in St. Lucia, doing what he loved, scuba-diving. Any information to his whereabouts will be appreciated.”

KrebsOnSecurity ran the unusual nickname “AgentJags” through a search at Constella Intelligence, a commercial service that tracks breached data sets. That search returned just a few dozen results — and virtually all were accounts at various RuneScape-themed sites, including a half-dozen accounts at Abusewith[.]us.

Constella found other “AgentJags” accounts tied to the email address ownagegaming1@gmail.com. The marketing firm Apollo.io experienced a data breach several years back, and according to Apollo the email address ownagegaming1@gmail.com belongs to Jordan Bloom in Ontario.

Constella also revealed that the password frequently used by ownagegaming1@gmail.com across many sites was some variation on “niggapls,” which my 2017 report found was also the password used by the administrator of LeakedSource.

Constella discovered that the email eric.malek@rogers.com comes up when one searches for “AgentJags.” This is curious because emails leaked from Ashley Madison’s then-CEO Biderman show that Eric Malek from Toronto was the Ashley Madison employee who initially recommended Bloom for the PHP developer job.

According to DomainTools.com, Eric.Malek@rogers.com was used to register the domain devjobs.ca, which previously advertised “the most exciting developer jobs in Canada, delivered to you weekly.” Constella says eric.malek@rogers.com also had an account at Abusewith[.]us — under the nickname “Jags.”

Biderman’s email records show Eric Malek was also a PHP developer for Ashley Madison, and that he was hired into this position just a few months before Bloom — on Sept. 2, 2014.

The CEO’s leaked emails show Eric Malek resigned from his developer position at Ashley Madison on June 19, 2015 — just four days before Bloom would announce his departure. Both men left the company less than a month before the Impact Team announced they’d hacked Ashley Madison, and both said they were leaving for health-related reasons.

“Please note that Eric Malek has resigned from this position with Avid and his last day will be June 19th,” read a June 5, 2015 email from ALM’s HR director. “He is resigning to deal with some personal issues which include health issues. Because he is not sure how much time it will take to resolve, he is not requesting a leave of absence (his time off will be indefinite). Overall, he likes the company and plans to reach out to Trevor or I when the issues are resolved to see what is available at that time.”

A follow-up email from Biderman demanded, “want to know where he’s truly going….,” and it’s unclear why there was friction with Malek’s departure. But ALM General Counsel Avi Weisman replied indicating that Malek probably would not sign an “Exit Acknowledgment Form” prior to leaving, and that the company had unanswered questions for Malek.

“Aneka should dig during exit interview,” Weisman wrote. “Let’s see if he balks at signing the Acknowledgment.”

An email dated June 5, 2015, from ALM General Counsel to Biderman, regarding an exit interview with Malek.

Bloom’s departure notice from Ashley Madison’s HR person, dated June 23, 2015, read:

“Please note that Jordan Bloom has resigned from his position as PHP Developer with Avid. He is leaving for personal reasons. He has a neck issue that will require surgery in the upcoming months and because of his medical appointment schedule and the pain he is experiencing he can no longer commit to a full-time schedule. He may pick up contract work until he is back to 100%.”

A follow-up note to Biderman about this announcement read:

“Note that he has disclosed that he is independently wealthy so he can get by without FT work until he is on the mend. He has signed the Exit Acknowledgement Form already without issue. He also says he would consider reapplying to Avid in the future if we have opportunities available at that time.”

Perhaps Mr. Bloom hurt his neck from craning it around blind spots in his Lamborghini. Maybe it was from a bad scuba outing. Whatever the pain in Bloom’s neck was, it didn’t stop him from launching himself fully into LeakedSource[.]com, which was registered roughly one month after the Impact Team leaked data on 37 million Ashley Madison accounts.

Mr. Malek did not respond to multiple requests for comment. A now-deleted LinkedIn profile for Malek from December 2018 listed him as a “technical recruiter” from Toronto who also attended Mr. Bloom’s alma mater — York University. That resume did not mention Mr. Malek’s brief stint as a PHP developer at Ashley Madison.

“Developer, entrepreneur, and now technical recruiter of the most uncommon variety!” Mr. Malek’s LinkedIn profile enthused. “Are you a developer, or other technical specialist, interested in working with a recruiter who can properly understand your concerns and aspirations, technical, environmental and financial? Don’t settle for a ‘hack’; this is your career, let’s do it right! Connect with me on LinkedIn. Note: If you are not a resident of Canada/Toronto, I cannot help you.”

INTERVIEW WITH BLOOM

Mr. Bloom told KrebsOnSecurity he had no role in harming or hacking Ashley Madison. Bloom validated his identity by responding at one of the email addresses mentioned above, and agreed to field questions so long as KrebsOnSecurity agreed to publish our email conversation in full (PDF).

Bloom said Mr. Malek did recommend him for the Ashley Madison job, but that Mr. Malek also received a $5,000 referral bonus for doing so. Given Mr. Malek’s stated role as a technical recruiter, it seems likely he also recommended several other employees to Ashley Madison.

Bloom was asked whether anyone at the RCMP, Ashley Madison or any authority anywhere ever questioned him in connection with the July 2015 hack of Ashley Madison. He replied that he was called once by someone claiming to be from the Toronto Police Service asking if he knew anything about the Ashley Madison hack.

“The AM situation was not something they pursued according to the RCMP disclosure,” Bloom wrote. “Learning about the RCMP’s most advanced cyber investigative techniques and capabilities was very interesting though. I was eventually told information by a third party which included knowledge that law enforcement effectively knew who the hacker was, but didn’t have enough evidence to proceed with a case. That is the extent of my involvement with any authorities.”

As to his company’s guilty plea for operating LeakedSource, Bloom maintains that the judge at his preliminary inquiry found that even if everything the Canadian government alleged was true it would not constitute a violation of any law in Canada with respect to the charges the RCMP leveled against him, which included unauthorized use of a computer and “mischief to data.”

“In Canada at the lower court level we are allowed to possess stolen information and manipulate our copies of them as we please,” Bloom said. “The judge however decided that a trial was required to determine whether any activities of mine were reckless, as the other qualifier of intentionally criminal didn’t apply. I will note here that nothing I was accused of doing would have been illegal if done in the United States of America according to their District Attorney. +1 for free speech in America vs freedom of expression in Canada.”

“Shortly after their having most of their case thrown out, the Government proposed an offer during a closed door meeting where they would drop all charges against me, provide full and complete personal immunity, and in exchange the Corporation which has since been dissolved would plead guilty,” Bloom continued. “The Corporation would also pay a modest fine.”

Bloom said he left Ashley Madison because he was bored, but he acknowledged starting LeakedSource partly in response to the Ashley Madison hack.

“I intended to leverage my gaming connections to get into security work including for other private servers such as Minecraft communities and others,” Bloom said. “After months of asking management for more interesting tasks, I became bored. Some days I had virtually nothing to do except spin in my chair so I would browse the source code for security holes to fix because I found it enjoyable.”

“I believe the decision to start LS [LeakedSource] was partly inspired by the AM hack itself, and the large number of people from a former friend group messaging me asking if XYZ person was in the leak after I revealed to them that I downloaded a copy and had the ability to browse it,” Bloom continued. “LS was never my idea – I was just a builder, and the only Canadian. In other countries it was never thought to be illegal on closer examination of their laws.”

Bloom said he still considers himself independently wealthy, and that he still has the lime green Lambo. But he said he’s currently unemployed and can’t seem to land a job in what he views as his most promising career path: Information security.

“As I’m sure you’re aware, having negative media attention associated with alleged (key word) criminal activity can have a detrimental effect on employment, banking and relationships,” Bloom wrote. “I have no current interest in being a business owner, nor do I have any useful business ideas to be honest. I was and am interested in interesting Information Security/programming work but it’s too large of a risk for any business to hire someone who was formerly accused of a crime.”

If you liked this story, please consider reading the first two pieces in this series:

SEO Expert Hired and Fired by Ashley Madison Turned on Company, Promising Revenge

Top Suspect in 2015 Ashley Madison Hack Committed Suicide in 2014

The Good, the Bad and the Ugly in Cybersecurity – Week 28

The Good | Tougher Times Ahead for Play Store Malware

With so many personal devices now hopping on and off company networks, the risks from mobile malware are always a concern. Google has had more than its fair share of problems with malicious apps on its Play store, but this week new rules for developer accounts aim to curb the problem.

Although new app submissions are vetted before being allowed on the app store, crafty developers typically circumvent these checks by initially uploading a benign version of the app. After approval, they issue an update containing the malicious code. If users are lucky, the malicious app is discovered, reported, and the developer banned from the store. Until now, a ban has rarely been much of an obstacle, as malicious developers simply return by creating a new account and repeating the cycle.

In an effort to combat this, from August 31st Google will begin requiring all new developer accounts to provide a valid D-U-N-S number, a unique nine-digit identifier for businesses. Acquiring a valid D-U-N-S number requires passing various proof-of-identity and business checks, which will make it difficult, expensive, and time-consuming for fraudsters to repeat the process.

A D-U-N-S number will be required for new Google Play developer accounts.

Additionally, the Play store will require apps to display the developer’s business address, website URL and phone number in an effort to improve transparency and limit fraud.

It remains to be seen how effective these moves will be, or whether the extra identification requirements will discourage legitimate independent developers from distributing on Google Play, but if they help to reduce malware on Android devices, they will certainly be welcomed by IT and security teams.

The Bad | Cloud Credentials Stealer Targets Azure and Google

A malicious actor previously targeting AWS accounts for compromise has now turned to stealing credentials for Azure and Google Cloud Platform (GCP) services, it was revealed this week.

Updated versions of known malware scripts now gather credentials from AWS, Azure, Google Cloud Platform, Censys, Docker, Filezilla, Git, Grafana, Kubernetes, Linux, Ngrok, PostgreSQL, Redis, S3QL, and SMB. Successfully harvested credentials are then exfiltrated to a remote server under the threat actor’s control.

Newly implemented get_azure function in g.aws.sh, reflecting new functionality targeting Azure and Google

The campaign has been linked to TeamTNT, a notorious threat group known primarily for targeting cloud and containerized environments with cryptocurrency miners. However, as researchers at SentinelLabs pointed out, attribution remains challenging with script-based tools, which can be readily adapted by anyone for their own use.

The attack scripts target public-facing Docker instances and aim to deploy a worm-like propagation module. They also contain functionality for extensive system and environmental profiling. Post-exploitation activity involves collecting details from the infected host and using Bash to download the curl binary from the attacker’s server. This is notable because attacks against minimal systems like containers are often limited by the absence of common living off the land tools like curl.

Up to eight incremental versions of the credential harvesting scripts have been observed in the last two months, indicating an actively evolving threat. Based on this activity, researchers believe that the actor is likely preparing for larger scale campaigns.

The Ugly | Office 365 Intrusions Compounded by Lack of Visibility

A China-based threat actor has breached email accounts of dozens of organizations worldwide, including U.S. and Western European government agencies, it was revealed this week.

The campaign, which is thought to have begun in mid-May, was reported to Microsoft by the U.S. government after the discovery of unauthorized access to Microsoft cloud-based email services. According to reports, the State Department was the first of a number of government agencies to detect a compromise of its Microsoft Office 365 system. The Department of Commerce and the House of Representatives were also among those targeted.

Microsoft’s own investigation revealed that a China-based threat actor used forged authentication tokens and an acquired Microsoft account (MSA) consumer signing key to access user email. It is unclear how the threat actor acquired the Microsoft key or whether there is an exploitable flaw in Microsoft’s token validation system.

The company says it has blocked all tokens signed with the stolen key and worked to improve its “key management systems” since the theft occurred. Microsoft says it has contacted all targeted or compromised organizations directly with information to help them investigate and respond.

However, the company is facing criticism because access to forensic logs for users of its Office 365 services requires extra licensing. U.S. Senator Ron Wyden reportedly said that Microsoft should offer all its customers full forensic capabilities, arguing that “charging people for premium features necessary to not get hacked is like selling a car and then charging extra for seatbelts and airbags”.

SEO Expert Hired and Fired By Ashley Madison Turned on Company, Promising Revenge

[This is Part II of a story published here last week on reporting that went into a new Hulu documentary series on the 2015 Ashley Madison hack.]

It was around 9 p.m. on Sunday, July 19, when I received a message through the contact form on KrebsOnSecurity.com that the marital infidelity website AshleyMadison.com had been hacked. The message contained links to confidential Ashley Madison documents, and included a manifesto that said a hacker group calling itself the Impact Team was prepared to leak data on all 37 million users unless Ashley Madison and a sister property voluntarily closed down within 30 days.

A snippet of the message left behind by the Impact Team.

The message included links to files containing highly sensitive information, including snippets of leaked user account data, maps of internal AshleyMadison company servers, employee network account information, company bank account data and salary information.

A master employee contact list was among the documents leaked that evening. Helpfully, it included the cell phone number for Noel Biderman, then the CEO of Ashley Madison parent firm Avid Life Media (ALM). To my everlasting surprise, Biderman answered on the first ring and acknowledged they’d been hacked without even waiting to be asked.

“We’re on the doorstep of [confirming] who we believe is the culprit, and unfortunately that may have triggered this mass publication,” Biderman told me on July 19, just minutes before I published the first known public report about the breach. “I’ve got their profile right in front of me, all their work credentials. It was definitely a person here that was not an employee but certainly had touched our technical services.”

On Aug 18, 2015, the Impact Team posted a “Time’s up!” message online, along with links to 60 gigabytes of Ashley Madison user data. The data leak led to the public shaming and extortion of many Ashley Madison users, and to at least two suicides. Many other users lost their jobs or their marriages. To this day, nobody has been charged in the hack, and incredibly Ashley Madison remains a thriving company.

THE CHAOS MAKER

The former employee that Biderman undoubtedly had in mind on July 19, 2015 was William Brewster Harrison, a self-described expert in search engine optimization (SEO) tricks that are designed to help websites increase their rankings for various keywords in Google and other search engines.

It is evident that Harrison was Biderman’s top suspect immediately after the breach became public because — in addition to releasing data on 37 million users a month later in August 2015 — the hackers also dumped three years’ worth of email they stole from Biderman. And Biderman’s inbox was full of hate-filled personal attacks from Harrison.

A native of Northern Virginia, Harrison eventually settled in North Carolina, had a son with his then-wife, and started a fence-building business. ALM hired Harrison in March 2010 to promote its various adult brands online, and it is clear that one of his roles was creating and maintaining female profiles on Ashley Madison, and creating blogs that were made to look like they were written by women who’d just joined Ashley Madison.

A selfie that William B. Harrison posted to his Facebook page in 2013 shows him holding a handgun and wearing a bulletproof vest.

It appears Harrison was working as an affiliate of Ashley Madison prior to his official employment with the company, which suggests that Harrison had already demonstrated he could drive signups to the service and help improve its standing in the search engine rankings.

What is less clear is whether anyone at ALM ever performed a basic background check on Harrison before hiring him. Because if they had, the results almost certainly would have given them pause. Virginia prosecutors had charged a then-20-something Harrison with a series of misdemeanors, including trespassing, unlawful entry, being drunk in public, and making obscene phone calls.

In 2008, North Carolina authorities charged Harrison with criminal extortion, a case that was transferred to South Carolina before ultimately being dismissed. In December 2009, Harrison faced charges of false imprisonment, charges that were also dropped by the local district attorney.

By the time Ashley Madison officially hired him, Harrison’s life was falling apart. His fence business had failed, and he’d just filed for bankruptcy. Also, his marriage had soured, and after a new arrest for driving under the influence, he was in danger of getting divorced, losing access to his son, and/or going to jail.

It also seems likely that nobody at ALM bothered to look at the dozens of domain names registered to Harrison’s various Vistomail.com email addresses, because had they done so they likely would have noticed two things.

One is that Harrison had a history of creating websites to lambaste companies he didn’t like, or that he believed had slighted him or his family in some way. Some of these websites included content that defamed and doxed executives, such as bash-a-business[.]com, google-your-business[.]com, contact-a-ceo[.]com, lowes-is-a-cancer[.]com (according to Harrison, the home improvement chain once employed his wife).

A background check on Harrison’s online footprint also would have revealed he was a self-styled rapper who claimed to be an active menace to corporate America. Harrison’s website lyrical-gangsta[.]com included a number of works, such as “Slim Thug — I Run — Remix Spoof,” which are replete with menacing words for unnamed corporate executives:

[HOOK]
I surf the net all night n day (the web love thug)
cuz I still surf the net all night n day
yuhh I type for my mind, got smart for my ego
still running circles round them, what’s good?
cuz I still surf, the net all night n day,
I cant stay away.

They don’t make to [sic] many hackers like me
bonafide hustler certified G
still pumpin’ the TOP 10 results
if you got the right dough!
think the results are fake? sucka Google ME
smarter than executives, bigger then Wal-Mart
Nelly strugglin’ with the fact that I’m #1 NOW
street boys know me, ain’t nuttin’ new
about to make my mill, with an all new crew
I-95 execs don’t know what to do, or where to go
watchin them stocks evaporate all their dough
I already left the hood, got up off the streets
its in my blood im a gangsta till Im deceased

moving lumber for money or typin’ in a zone
all night hackin’ till 6 in the mornin
that shit im focusin’ on, stronger then cologne
you can prolly smell the jealousy
through your LCD screen
if you still broke– better work for some green
called them Fortune execs on that legal bluff
cuz the Feds busy raidin other stuff
Imma run the Net til im six feet under
I’m a leave my mark — no reason to wonder
(Yea Yea)

Some of the anti-corporate rhymes busted by Harrison’s hacker/rapper alter ego “Chaos Dog.” Image: Archive.org.

The same theme appears in another rap (“The Hacker Backstage”) penned by Harrison’s rapper alter ego — “Chaos Dog:”

…this hacker was born to write
bust off the rhymes and watch em take flight
you know all about them corporate jets
and handing out pinkslips without regrets
oversized companies are the problem

well, I’ve got a solution
It’s called good ol’ fashioned retribution
file bankruptcy, boycott you like Boston colonists
Corporate America cant stop this Eminem style columnist
2pac would have honored my style
Im the next generation of hacker inspiration
Americans don’t want a corporate nation
All that DOW Jones shit is a dying sensation

In addition to pimping Ashley Madison with fake profiles and phony user blogs, it appears Harrison also went after the company’s enemies during the brief time he was an employee. As noted in Part I of this story, Harrison used multiple pseudonymous Vistomail.com email addresses to harass the owners of AshleyMadisonSucks[.]com into selling or shutting down the site.

When the owner of AshleyMadisonSucks[.]com refused to sell the domain, he and his then-girlfriend were subject to an unrelenting campaign of online harassment and blackmail. It now appears those attacks were perpetrated by Harrison, who sent emails from different accounts at the free email service Vistomail pretending to be the domain owner, his then-girlfriend and their friends. Harrison even went after the domain owner’s lawyer and wife, listing them both on his Contact-A-CEO[.]com website.

TURNABOUT IS FAIR PLAY

Things started going sideways for Ashley Madison when Harrison’s employment contract was terminated in November 2011. The leaked emails do not explain why Harrison was fired, but his mercurial temperament likely played a major role. According to Harrison, it was because he had expressed some moral reservations about certain aspects of his duties, although he was not specific on that point and none of this could be confirmed.

Shortly after Harrison was fired, the company’s executives began noticing that Google was auto-completing the words “Jew” and “Jewish” whenever someone searched for Biderman’s name. The results returned when one accepted Google’s recommended search at the time filled the first page with links to Stormfront, a far-right, neo-Nazi hate group. The company strongly suspected someone was using underhanded SEO techniques to slander and attack its CEO.

In July 2022, KrebsOnSecurity published a retrospective on the 2015 Ashley Madison breach which found that Biderman had become the subject of increasing ire from members of Stormfront and other extremist groups in the years leading up to the hack. According to the neo-Nazi groups, Biderman was a worthy target of their harassment not just because he was a successful Jewish CEO, but also because his company was hellbent on destroying Christian morals and families.

Biderman’s leaked emails show that in February 2012 he hired Brian Cuban — the attorney brother of Mark Cuban, the owner of the Dallas Mavericks and one of the main “sharks” on the ABC reality television series Shark Tank. Through Cuban, Ashley Madison appealed its case to both Google and the Anti-Defamation League, but neither was apparently able or willing to help.

Also in early January 2012, Biderman and other Ashley Madison executives found themselves inundated with anonymous Vistomail.com emails that were replete with profanity and slurs against Jews. Although he used fake names and email addresses, Harrison made little effort to hide his identity in several of these nastygrams.

One particularly ugly message from Harrison even included a link to a Youtube video he’d put online of his young son playing basketball for a school team. That Youtube video was included in an email wherein Harrison – then separated from his wife — lamented all the hours he spent working for Ashley Madison up in Canada instead of spending time with his son.

Harrison then turned to making threatening phone calls to Ashley Madison executives. In one incident in March 2012, Harrison called the company’s former director of Human Resources using a caller ID spoofing service to make it look like he was calling from inside the building.

ALM’s lawyers contacted the Toronto police in response to Harrison’s harassment.

“For Will to have disguised his phone number as Mark’s strongly suggest he has hacked my email, legal counsel for the opposing side in a perceived legal dispute,” ALM VP and general counsel Mike Dacks wrote in a letter to a detective at the Toronto Police. “Over the months of his many hundreds of emails he alluded a number of times to undertaking cyberattacks against us and this was noted in my original report to police.”

Based on the exchanges in Biderman’s inbox, it appears those appeals to the Toronto authorities were successful in having Harrison barred from entering Canada.

ALM also contacted a detective in Harrison’s home county in North Carolina. But when the local police paid a visit to Harrison’s home to follow up on the harassment complaints, Harrison fled out his back porch, injuring himself after jumping off the second-story deck.

It is unclear if the police ever succeeded in interviewing Harrison in response to the harassment complaints from ALM. The Raleigh police officer contacted by ALM did not respond to requests for information. But the visit from the local cops only seemed to embolden and anger Harrison even more, and Biderman’s emails indicate the harassment continued after this incident.

HUMAN DECOYS

Then in August 2012, the former sex worker turned blogger and activist Maggie McNeill published screenshots from an internal system that Ashley Madison used called the “Human Decoy Interface,” which was a fancy way of describing a system built to manage phony female accounts on the service.

The screenshots appeared to show that a great many female accounts were in fact bots designed to bring in paying customers. Ashley Madison was always free to join, but users had to pay if they wished to chat directly with other users.

Although Harrison had been fired nearly a year earlier, Biderman’s leaked emails show that Harrison’s access to Ashley Madison’s internal tools wasn’t revoked until after the screenshots were posted online and the company began reviewing which employee accounts had access to the Human Decoy Interface.

“Who or what is asdfdfsda@asdf.com?,” Biderman asked, after being sent a list of nine email addresses.

“It appears to be the email address Will used for his profiles,” the IT director replied.

“And his access was never shut off until today?,” asked the company’s general counsel Mike Dacks.

TRUTH BOMBS

Biderman’s leaked emails suggest that Harrison stopped his harassment campaign sometime after 2012. A decade later, KrebsOnSecurity sought to track down and interview Harrison. Finding nobody at his former addresses and phone numbers in North Carolina, KrebsOnSecurity wound up speaking with Will’s stepmother, Tena Nauheim, who lives with Will’s dad in Northern Virginia.

Nauheim quickly dropped two big truth bombs after patiently listening to my spiel about why I was calling and looking for Mr. Harrison. The first was that Will was brought up Jewish, although he did not practice the faith: A local rabbi and friend of the family gave the service at Will’s funeral in 2014.

Nauheim also shared that her stepson had killed himself in 2014, shooting himself in the head with a handgun. Will’s mother discovered his body.

“Will committed suicide in March 2014,” Nauheim shared. “I’ve heard all those stories you just mentioned. Will was severely mentally ill. He was probably as close to a sociopath as I can imagine anyone being. He was also a paranoid schizophrenic who wouldn’t take his medication.”

William B. Harrison died on March 5, 2014, nearly 16 months before The Impact Team announced they’d hacked Ashley Madison.

Nauheim said she constantly felt physically threatened when Will was around. But she had trouble believing that her stepson was a raging anti-Semite. She also said she thought the timing of Will’s suicide effectively ruled him out as a suspect in the 2015 Ashley Madison hack.

“Considering the date of death, I’m not sure if he’s your guy,” Nauheim offered toward the end of our conversation.

[There is one silver lining to Will Harrison’s otherwise sad tale: His widow has since remarried, and her new husband agreed to adopt their son as his own.]

ANALYSIS

Does Harrison’s untimely death rule him out as a suspect, as his stepmom suggested? This remains an open question. In a parting email to Biderman in late 2012, Harrison signed his real name and said he was leaving, but not going away.

“So good luck, I’m sure we’ll talk again soon, but for now, I’ve got better things in the oven,” Harrison wrote. “Just remember I outsmarted you last time and I will outsmart you and out maneuver you this time too, by keeping myself far far away from the action and just enjoying the sideline view, cheering for the opposition.”

Nothing in the leaked Biderman emails suggests that Ashley Madison did much to revamp the security of its computer systems in the wake of Harrison’s departure and subsequent campaign of harassment — apart from removing an administrator account of his a year after he’d already left the company.

KrebsOnSecurity found nothing in Harrison’s extensive domain history suggesting he had any real malicious hacking skills. But given the clientele that typically employed his skills — the adult entertainment industry — it seems likely Harrison was at least conversant in the dark arts of “Black SEO,” which involves using underhanded or downright illegal methods to game search engine results.

Armed with such experience, it would not have been difficult for Harrison to have worked out a way to maintain access to working administrator accounts at Ashley Madison. If that in fact did happen, it would have been trivial for him to sell or give those credentials to someone else.

Or to something else. Like Nazi groups. As KrebsOnSecurity reported last year, in the six months leading up to the July 2015 hack, Ashley Madison and Biderman became a frequent subject of derision across multiple neo-Nazi websites.

On Jan. 14, 2015, a member of the neo-Nazi forum Stormfront posted a lively thread about Ashley Madison in the general discussion area titled, “Jewish owned dating website promoting adultery.”

On July 3, 2015, Andrew Anglin, the editor of the alt-right publication Daily Stormer, posted excerpts about Biderman from a story titled, “Jewish Hyper-Sexualization of Western Culture,” which referred to Biderman as the “Jewish King of Infidelity.”

On July 10, a mocking montage of Biderman photos with racist captions was posted to the extremist website Vanguard News Network, as part of a thread called “Jews normalize sexual perversion.”

Some readers have suggested that the data leaked by the Impact Team could have originally been stolen by Harrison. But that timeline does not add up given what we know about the hack. For one thing, the financial transaction records leaked from Ashley Madison show charges up until mid-2015. Also, the final message in the archive of Biderman’s stolen emails was dated July 7, 2015 — almost two weeks before the Impact Team would announce their hack.

Whoever hacked Ashley Madison clearly wanted to disrupt the company as a business, and disgrace its CEO as the endgame. The Impact Team’s intrusion struck just as Ashley Madison’s parent was preparing to go public with an initial public offering (IPO) for investors. Also, the hackers stated that while they stole all employee emails, they were only interested in leaking Biderman’s.

Also, the Impact Team had to know that ALM would never comply with their demands to dismantle Ashley Madison and Established Men. In 2014, ALM reported revenues of $115 million. There was little chance the company was going to shut down some of its biggest money machines.

Hence, it appears the Impact Team’s goal all along was to create prodigious amounts of drama and tension by announcing the hack of a major cheating website, and then let that drama play out over the next few months as millions of exposed Ashley Madison users freaked out and became the targets of extortion attacks and public shaming.

After the Impact Team released Biderman’s email archives, several media outlets pounced on salacious exchanges in those messages as supposed proof he had carried on multiple affairs. Biderman resigned as CEO of Ashley Madison on Aug. 28, 2015.

Complicating things further, it appears more than one malicious party may have gained access to Ashley Madison’s network in 2015 or possibly earlier. Cyber intelligence firm Intel 471 recorded a series of posts by a user with the handle “Brutium” on the Russian-language cybercrime forum Antichat between 2014 and 2016.

Brutium routinely advertised the sale of large, hacked databases, and on Jan. 24, 2015, this user posted a thread offering to sell data on 32 million Ashley Madison users. However, there is no indication whether anyone purchased the information. Brutium’s profile has since been removed from the Antichat forum.

I realize this ending may be unsatisfying for many readers, as it is for me. The story I wrote in 2015 about the Ashley Madison hack is still the biggest scoop I’ve published here (in terms of traffic), yet it remains perhaps the single most frustrating investigation I’ve ever pursued. But my hunch is that there is still more to this story that has yet to unfold.

Getting More From Cloud | How to Maximize Business Value Through CloudOps Services

With more businesses relying on cloud computing to streamline operations and improve scalability, enterprise leaders are adopting a cloud-first approach, combining network, performance, security, endpoint management, and support all through cloud operations, or CloudOps. CloudOps combines both IT processes and DevOps principles to ensure the smooth operation, maintenance, and optimization of a cloud-based infrastructure and its applications.

As news headlines are increasingly occupied by data breaches and new security vulnerabilities pose significant risks to businesses, CloudOps has emerged as a crucial component for ensuring the safety and integrity of cloud-based systems. In this post, learn to build and automate a strong cloud-first strategy that can help keep organizations of all sizes safe from potential cloud security risks.

Emerging CloudOps | The Next Phase of Digital Business

According to Gartner, global end-user spending on public cloud services is forecast to grow nearly 22% this year, totalling $597.3 billion. Given that prevalence, security experts view the cloud as the driving force behind the next phase of digital business, serving as a highly strategic platform for digital transformation and services.

While cloud usage is near universal in the modern business landscape, many deployments remain poorly architected or assembled ad hoc. Digital trends for 2023 indicate that cloud infrastructure optimization is among the top priorities for the year. The rise of cloud is clear, and enterprise leaders are working to evolve their IT, engineering, and development teams to build the right cloud security management and operating strategies for their business. This is where CloudOps comes into play.

Many businesses have relied on traditional IT operations (ITOps) practices to manage their cloud deployments. However, as the complexity of cloud infrastructure has grown, forward-thinking enterprises have moved towards specialized expertise and dedicated processes to better handle their cloud-first systems.

CloudOps combines principles from other operational models such as ITOps and DevOps and makes them applicable to a cloud-based architecture. These principles, processes, and best practices help security teams across cloud architecture, security, compliance, and IT operations to manage the overarching infrastructure. Below are some key differences that make CloudOps unique.

  • Managing & Optimizing Cloud – CloudOps focuses specifically on managing and optimizing cloud-based infrastructure, applications, and services. It takes into account the unique characteristics and capabilities of cloud computing, such as elasticity, scalability, and resource virtualization.
  • Underlying Infrastructure – While IT operations primarily deal with physical hardware and data centers, CloudOps is centered around virtualized resources and cloud service providers.
  • Security – Since cloud environments have unique security considerations, CloudOps focuses on implementing robust security measures, such as identity and access management, data encryption, and proactive threat monitoring; all specific to cloud-based systems.
  • Scalability – CloudOps offers greater flexibility compared to traditional IT operations. Cloud environments allow businesses to scale resources up or down based on demand, whereas scaling in traditional IT operations often requires additional hardware procurement and deployment.
  • Automation – CloudOps leverages automation tools and frameworks to streamline resource provisioning, configuration management, and application deployment in the cloud. This level of automation is typically more advanced and efficient compared to traditional IT operations.

Seizing Value Through CloudOps Services

As cloud technology continues to meet changing business needs, migrating from on-prem to the cloud is no longer enough to revolutionize business infrastructure. The rise of hybrid and multi-cloud structures has also multiplied the complexities and criticalities associated with the cloud.

To leverage more value from the cloud, enterprises have focused on driving cost efficiencies through shared services and flexible commercial models. Cloud-based Infrastructure-as-a-Service (IaaS), for example, plays a pivotal role in maintaining superior functionality and operational agility through automation, enabling businesses to optimize their operations.

In this context, CloudOps services have gained significant momentum. By managing software in a cloud computing environment, CloudOps ensures that enterprises are effectively harnessing the benefits of cloud-based systems. Since cloud operations services focus on optimizing performance and capacity, they validate the best practices and processes that enable cloud platforms, applications, and internal data to perform optimally.

Increase Efficiency & Scalability

CloudOps optimizes resource utilization and reduces manual intervention, resulting in enhanced operational efficiency. With proactive monitoring and performance optimization, CloudOps teams can identify and resolve issues promptly, minimizing downtime and maximizing productivity.

CloudOps also enables scalability. When businesses scale their resources up or down based on demand, ensuring optimal performance without unnecessary resource allocation is key. By centralizing management and leveraging standardized practices, CloudOps streamlines operations, reduces complexity, and empowers organizations to focus on core business activities.
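
As an illustration of what codified, demand-based scaling can look like, the hedged sketch below attaches a target-tracking scaling policy to an AWS Auto Scaling group using boto3. The group name, target CPU value, and region are assumptions, and equivalent APIs exist on Azure and GCP.

```python
# Minimal sketch: attach a CPU-based target-tracking scaling policy to an
# existing Auto Scaling group so capacity follows demand automatically.
# The group name, target utilization, and region are illustrative assumptions.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",          # hypothetical group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                      # scale to hold ~50% average CPU
    },
)
print("Target-tracking policy attached; the group now scales with demand.")
```

A policy like this lets capacity follow load automatically, so teams do not have to provision for peak demand around the clock.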

Cost Optimization

Cost optimization is a significant advantage of implementing CloudOps within organizations. CloudOps teams leverage automation, monitoring tools, and advanced analytics to optimize resource allocation and usage. Over time, security teams see cost savings as they continue to monitor resource consumption and apply scaling strategies based on demand. CloudOps also ensures that resources are provisioned efficiently, avoiding unnecessary expenses.
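
As a small, hedged example of that kind of housekeeping, the sketch below uses boto3 to list unattached EBS volumes, a common source of silent spend. The region is an assumption, and a real cost program would pair checks like this with billing and tagging data before deleting anything.

```python
# Minimal sketch: flag unattached (status "available") EBS volumes, which keep
# accruing charges even though no instance is using them. Region is assumed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_volumes")
unattached = []
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    unattached.extend(page["Volumes"])

for vol in unattached:
    print(f"{vol['VolumeId']}: {vol['Size']} GiB, created {vol['CreateTime']:%Y-%m-%d}")

print(f"{len(unattached)} unattached volume(s) found; review before deleting.")
```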

Improved Security

CloudOps plays a critical role in an enterprise’s cybersecurity strategy, keeping business-critical data safe from cloud-based security threats. CloudOps focuses on implementing robust security measures to protect sensitive data and ensure the integrity of cloud-based systems. To do so, security teams continuously monitor the cloud environment for vulnerabilities, proactively detect and mitigate security risks, and ensure compliance with industry regulations.

CloudOps employs various strategies and practices to secure a cloud environment effectively, including:

  • Identity and Access Management (IAM) – CloudOps implements robust IAM policies to manage user identities, roles, and access privileges. This ensures that only authorized individuals can access and modify resources within the cloud environment (a minimal least-privilege sketch follows this list).
  • Encryption – CloudOps utilizes encryption techniques to protect data at rest and in transit. Encryption safeguards sensitive information from unauthorized access, even if a breach occurs.
  • Regular Security Audits – CloudOps teams conduct regular security audits to identify vulnerabilities and weaknesses in the cloud infrastructure. This allows them to proactively address potential security gaps and implement necessary patches or updates.
  • Network Security – CloudOps implements network security measures such as firewalls, intrusion detection systems, and virtual private networks (VPNs) to monitor and protect the cloud environment from unauthorized access and malicious activities.
  • Incident Response Planning – CloudOps develops and maintains robust incident response plans to effectively handle security incidents. This includes defining escalation procedures, conducting drills, and implementing measures to minimize the impact of potential security breaches.
  • Compliance Management – CloudOps ensures compliance with industry-specific regulations and standards. This involves implementing controls and processes to meet data privacy requirements and maintain regulatory compliance.
  • Ongoing Monitoring and Threat Detection – CloudOps teams continuously monitor the cloud environment, employing advanced monitoring tools and technologies to detect and respond to potential security threats promptly. This includes real-time monitoring of logs, network traffic, and system activities.
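
To make the IAM item above concrete, here is a minimal sketch of least privilege expressed in code: an inline policy that lets one application role read a single S3 bucket and nothing else. The role name and bucket ARN are hypothetical, and most teams would manage such policies through their IaC tooling rather than ad hoc scripts.

```python
# Minimal sketch: attach a least-privilege inline policy to a role so an
# application can only read one S3 bucket. Role and bucket names are assumed.
import json
import boto3

iam = boto3.client("iam")

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",        # hypothetical bucket
            "arn:aws:s3:::example-reports-bucket/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="reporting-app-role",                        # hypothetical role
    PolicyName="reports-bucket-read-only",
    PolicyDocument=json.dumps(read_only_policy),
)
```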

Operating In The Cloud | How To Effectively Implement CloudOps

CloudOps enables organizations to focus on their core competencies by streamlining the management of their cloud environments. Security teams can follow the key steps below when implementing CloudOps best practices.

Define Your Cloud Strategy & Objectives

Begin by assessing the existing IT infrastructure, applications, and business processes. Understand the strengths, weaknesses, and limitations of the current environment, as well as all business drivers for adopting cloud technology.

Then, clearly define the objectives and expected outcomes of implementing CloudOps within the organization. Determine the specific areas of focus such as efficiency, scalability, security, or cost optimization. These definitions allow senior leadership to tie the cloud strategy to business goals and narrow down specific requirements when selecting the right cloud service provider.

Design CloudOps Processes

It is also important to develop clear processes for CloudOps that align with the organization’s goals. This can include defining resource provisioning and configuration management procedures, performance monitoring, incident response protocols, and security practices.

Implement Automation and Orchestration

Automation helps the organization to scale and grow. Teams providing CloudOps services can leverage automation and orchestration tools effectively to streamline resource provisioning, configuration, and deployment processes.

Automation increases efficiency and minimizes the risk of human error. Further, automating the provisioning and de-provisioning of cloud resources enables the organization to be more agile in the face of changing business requirements.
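
A minimal sketch of this kind of automation is shown below: a scheduled job that de-provisions (stops) running development instances outside business hours. The tag key, tag value, and region are assumptions; the same pattern applies to provisioning.

```python
# Minimal sketch: automated de-provisioning -- stop running instances tagged as
# development resources, e.g. on a nightly schedule. Tag key/value are assumed.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev"]},       # hypothetical tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

instance_ids = [
    inst["InstanceId"]
    for reservation in response["Reservations"]
    for inst in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} dev instance(s): {instance_ids}")
else:
    print("No running dev instances to stop.")
```

A script like this would normally run from a scheduler such as cron, a serverless function, or a pipeline job rather than by hand.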

Leverage Infrastructure as Code (IaC)

In CloudOps, Infrastructure as Code (IaC) enables the management and provisioning of cloud infrastructure using code-based configurations. With IaC, CloudOps teams can define the desired state of the cloud infrastructure, including virtual machines, networks, storage, security settings, and other resources, using code or configuration files. This code serves as a blueprint that specifies how the infrastructure should be provisioned and configured.

By treating infrastructure as code, CloudOps practitioners can leverage version control systems to track changes, perform code reviews, and collaborate effectively. This ensures consistency and repeatability across deployments and simplifies the process of managing complex cloud environments. IaC allows for rapid infrastructure deployment and scaling, as code-based configurations can be easily replicated and applied across multiple environments.
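
Dedicated IaC tools such as Terraform, CloudFormation, or Pulumi are the usual choice, but the toy sketch below illustrates the underlying idea in plain Python: the desired state lives in version-controlled code, and an apply step converges the cloud toward it. The bucket names and settings are assumptions, not a recommended implementation.

```python
# Minimal, illustrative sketch of the IaC idea: desired state lives in code
# (a version-controlled dict here), and a small apply step makes the cloud
# match it. Real teams would use a dedicated IaC tool; names are assumptions.
import boto3

DESIRED_BUCKETS = {
    "example-app-logs":   {"versioning": "Enabled"},
    "example-app-backup": {"versioning": "Enabled"},
}

s3 = boto3.client("s3")

def apply(desired: dict) -> None:
    existing = {b["Name"] for b in s3.list_buckets()["Buckets"]}
    for name, cfg in desired.items():
        if name not in existing:
            s3.create_bucket(Bucket=name)          # converge toward desired state
        s3.put_bucket_versioning(
            Bucket=name,
            VersioningConfiguration={"Status": cfg["versioning"]},
        )
        print(f"{name}: present, versioning {cfg['versioning']}")

if __name__ == "__main__":
    apply(DESIRED_BUCKETS)
```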

Establish Continuous Integration & Deployment (CI/CD)

Continuous Integration and Deployment (CI/CD) focuses on automating and streamlining the software development and deployment processes within a cloud environment. Within the CloudOps framework, CI/CD pipelines can be configured to trigger automatically whenever code changes are committed to the repository. These pipelines perform a series of predefined actions, such as code compilation, testing, vulnerability scanning, and packaging. Once the code passes all the necessary tests and checks, it is deployed to the cloud environment, making it available to end-users.

This enables security teams to accelerate the delivery of software updates while maintaining high quality and reducing the risk of deployment failures. CI/CD also promotes collaboration among development, operations, and testing teams, facilitates rapid feedback cycles, and enables teams to respond quickly to changing business requirements and customer needs.
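
Hosted CI/CD systems normally express these pipelines declaratively, but as a rough sketch of the flow described above, the script below runs test, scan, build, and deploy stages in order and stops at the first failure. Every command shown is a placeholder assumption for a team’s actual tooling.

```python
# Minimal sketch of a CI/CD pipeline driver: run each stage in order and stop
# on the first failure. Commands are placeholders for real test, scan, build,
# and deploy tooling.
import subprocess
import sys

STAGES = [
    ("unit tests",         ["pytest", "-q"]),
    ("vulnerability scan", ["pip-audit"]),                  # assumed scanner
    ("build image",        ["docker", "build", "-t", "example-app:latest", "."]),
    ("deploy",             ["kubectl", "rollout", "restart", "deployment/example-app"]),
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"--- {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Stage '{name}' failed; aborting pipeline.")
            sys.exit(result.returncode)
    print("All stages passed; change is live.")

if __name__ == "__main__":
    run_pipeline()
```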

Monitor For Optimal Performance

Performance monitoring in CloudOps involves the continuous monitoring and analysis of the cloud environment to assess the performance of various resources and applications. It helps ensure optimal performance, identify potential bottlenecks or issues, and take proactive measures to maintain a high level of efficiency. CloudOps teams typically monitor for the following key areas:

  • Resources – Monitoring for cloud resources such as virtual machines, databases, storage, and network components. Monitoring metrics like CPU usage, memory utilization, network traffic, and disk I/O helps identify resource-intensive processes or potential performance degradation.
  • Application performance – Monitor the performance of applications running in the cloud environment. This involves tracking response times, latency, throughput, and error rates to identify any performance issues or anomalies.
  • Scalability – Assess the capacity of the cloud environment to handle increased workloads. Determine if the resources can scale dynamically to meet demand and identify potential limitations or constraints that may impact performance during peak periods.
  • Alerts, notifications, and historical data – Performance monitoring tools generate alerts and notifications based on predefined thresholds or anomalies. This enables CloudOps teams to receive real-time alerts about performance issues and take immediate action to mitigate potential problems. These tools also collect data over time, allowing for historical analysis and reporting. Teams can use this data to identify trends and recurring performance patterns, enabling those responsible for providing CloudOps services to make informed decisions about resource optimization, capacity planning, and performance improvements (a minimal threshold-based monitoring sketch appears after this list).
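
For illustration, the hedged sketch below queries CloudWatch for one instance’s average CPU over the past hour and flags datapoints above a threshold. The instance ID, threshold, and region are assumptions, and production monitoring would rely on managed alarms and dashboards rather than one-off scripts.

```python
# Minimal sketch: pull the last hour of average CPU for one instance from
# CloudWatch and flag datapoints above a threshold. Instance ID, threshold,
# and region are illustrative assumptions.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                      # 5-minute buckets
    Statistics=["Average"],
)

THRESHOLD = 80.0
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    flag = "  <-- above threshold" if point["Average"] > THRESHOLD else ""
    print(f"{point['Timestamp']:%H:%M} avg CPU {point['Average']:.1f}%{flag}")
```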

Singularity Cloud | SentinelOne’s Strategy for Securing the Cloud

SentinelOne enables organizations to protect their endpoints across all cloud environments, public, private, and hybrid, through Singularity™ Cloud. With thousands of accounts spread across multiple clouds, organizations need the right security in place for their cloud infrastructure. Singularity Cloud works by extending distributed, autonomous endpoint protection, detection, and response to compute workloads running in both public and private clouds, as well as on-prem data centers.

  • Enterprise-Grade EPP & EDR – Get full endpoint detection and response as well as container coverage in one SentinelOne agent. Singularity™ Cloud allows for complete container visibility with one agent per node and without pod instrumentation.
  • Enterprise Management & Deployment – Choose to auto-deploy the Kubernetes Sentinel Agent, a component of Singularity™ Cloud, to EKS, AKS, and GKE clusters, or Linux and Windows Server Sentinel Agents to AWS EC2, Azure VM, and Google Compute Engine.
  • AI-Powered Cloud Workload Protection – Behavioral AI detects unknown threats such as zero-day exploits and indicators of compromise consistent with novel ransomware and then quarantines them in real-time. Singularity Cloud protects runtime containers without container interference for Linux, Windows servers, and VMs.

Conclusion

The scalability and flexibility offered by the cloud come with inherent complexities. CloudOps addresses these challenges by providing organizations with the necessary tools and processes to monitor, manage, and optimize their cloud infrastructure. It enables businesses to streamline operations, reduce costs, and improve resource utilization while ensuring high availability and performance.

As businesses rapidly embrace cloud technology to drive innovation and scalability, leaders are prioritizing CloudOps implementation for its effective management of cloud environments. CloudOps brings a holistic approach to cloud management, combining technical expertise, automation, and best practices to optimize performance and build a long-term security posture.

As cloud technology continues to evolve, CloudOps keeps pace with the latest advancements. It embraces emerging technologies such as serverless computing, containerization, and artificial intelligence to drive innovation and unlock new possibilities. CloudOps also enables organizations to adapt quickly to changing business needs and leverage the full potential of cloud services.

SentinelOne can help organizations improve their cloud security strategy through a combination of endpoint detection and response (EDR) capability, autonomous threat hunting, and runtime solutions that can defeat cloud-based threats without compromising agility or availability. Learn more about Singularity™ Cloud by contacting us for a demo.

Singularity Cloud
Simplifying runtime detection and response of cloud VMs, containers, and Kubernetes clusters for maximum visibility, security, and agility.

Apple & Microsoft Patch Tuesday, July 2023 Edition

Microsoft Corp. today released software updates to quash 130 security bugs in its Windows operating systems and related software, including at least five flaws that are already seeing active exploitation. Meanwhile, Apple customers have their own zero-day woes again this month: On Monday, Apple issued (and then quickly pulled) an emergency update to fix a zero-day vulnerability that is being exploited on MacOS and iOS devices.

On July 10, Apple pushed a “Rapid Security Response” update to fix a code execution flaw in the Webkit browser component built into iOS, iPadOS, and macOS Ventura. Almost as soon as the patch went out, Apple pulled the software because it was reportedly causing problems loading certain websites. MacRumors says Apple will likely re-release the patches when the glitches have been addressed.

Launched in May, Apple’s Rapid Security Response updates are designed to address time-sensitive vulnerabilities, and this is the second month Apple has used it. July marks the sixth month this year that Apple has released updates for zero-day vulnerabilities — those that get exploited by malware or malcontents before there is an official patch available.

If you rely on Apple devices and don’t have automatic updates enabled, please take a moment to check the patch status of your various iDevices. The latest security update that includes the fix for the zero-day bug should be available in iOS/iPadOS 16.5.1, macOS 13.4.1, and Safari 16.5.2.

On the Windows side, there are at least four vulnerabilities patched this month that earned high CVSS (badness) scores and that are already being exploited in active attacks, according to Microsoft. They include CVE-2023-32049, which is a hole in Windows SmartScreen that lets malware bypass security warning prompts; and CVE-2023-35311, which allows attackers to bypass security features in Microsoft Outlook.

The two other zero-day threats this month for Windows are both privilege escalation flaws. CVE-2023-32046 affects a core Windows component called MSHTML, which is used by Windows and other applications, like Office, Outlook and Skype. CVE-2023-36874 is an elevation of privilege bug in the Windows Error Reporting Service.

Many security experts expected Microsoft to address a fifth zero-day flaw — CVE-2023-36884, a remote code execution weakness in Office and Windows — but no patch for it was included this month.

“Surprisingly, there is no patch yet for one of the five zero-day vulnerabilities,” said Adam Barnett, lead software engineer at Rapid7. “Microsoft is actively investigating publicly disclosed vulnerability, and promises to update the advisory as soon as further guidance is available.”

Barnett notes that Microsoft links exploitation of this vulnerability with Storm-0978, the software giant’s name for a cybercriminal group based out of Russia that is identified by the broader security community as RomCom.

“Exploitation of CVE-2023-36884 may lead to installation of the eponymous RomCom trojan or other malware,” Barnett said. “[Microsoft] suggests that RomCom / Storm-0978 is operating in support of Russian intelligence operations. The same threat actor has also been associated with ransomware attacks targeting a wide array of victims.”

Microsoft’s advisory on CVE-2023-36884 is pretty sparse, but it does include a Windows registry hack that should help mitigate attacks on this vulnerability. Microsoft has also published a blog post about phishing campaigns tied to Storm-0978 and to the exploitation of this flaw.

Barnett said that while it’s possible a patch will be issued as part of next month’s Patch Tuesday, Microsoft Office is deployed just about everywhere, and this threat actor is making waves.

“Admins should be ready for an out-of-cycle security update for CVE-2023-36884,” he said.

Microsoft also today released new details about how it plans to address the existential threat of malware that is cryptographically signed by…wait for it….Microsoft.

In late 2022, security experts at Sophos, Trend Micro and Cisco warned that ransomware criminals were using signed, malicious drivers in an attempt to evade antivirus and endpoint detection and response (EDR) tools.

In a blog post today, Sophos’s Andrew Brandt wrote that Sophos identified 133 malicious Windows driver files that were digitally signed since April 2021, and found 100 of those were actually signed by Microsoft. Microsoft said today it is taking steps to ensure those malicious driver files can no longer run on Windows computers.

As KrebsOnSecurity noted in last month’s story on malware signing-as-a-service, code-signing certificates are supposed to help authenticate the identity of software publishers, and provide cryptographic assurance that a signed piece of software has not been altered or tampered with. Both of these qualities make stolen or ill-gotten code-signing certificates attractive to cybercriminal groups, who prize their ability to add stealth and longevity to malicious software.

Dan Goodin at Ars Technica contends that whatever Microsoft may be doing to keep maliciously signed drivers from running on Windows is being bypassed by hackers using open source software that is popular with video game cheaters.

“The software comes in the form of two software tools that are available on GitHub,” Goodin explained. “Cheaters use them to digitally sign malicious system drivers so they can modify video games in ways that give the player an unfair advantage. The drivers clear the considerable hurdle required for the cheat code to run inside the Windows kernel, the fortified layer of the operating system reserved for the most critical and sensitive functions.”

Meanwhile, researchers at Cisco’s Talos security team found multiple Chinese-speaking threat groups have repurposed the tools—one apparently called “HookSignTool” and the other “FuckCertVerifyTimeValidity.”

“Instead of using the kernel access for cheating, the threat actors use it to give their malware capabilities it wouldn’t otherwise have,” Goodin said.

For a closer look at the patches released by Microsoft today, check out the always-thorough Patch Tuesday roundup from the SANS Internet Storm Center. And it’s not a bad idea to hold off updating for a few days until Microsoft works out any kinks in the updates: AskWoody.com usually has the lowdown on any patches that may be causing problems for Windows users.

And as ever, please consider backing up your system or at least your important documents and data before applying system updates. If you encounter any problems with these updates, please drop a note about it here in the comments.

What It Takes to be a Top Gun | GenAI & Cybersecurity

We believe that generative AI has the potential to generate massive value and disrupt existing industries and applications. We are now witnessing generative AI accomplish things on a daily basis that just a short time ago did not seem possible.

Generative AI has a meaningful role to play in cybersecurity, both for the good guys and the bad ones. Our motto at SentinelOne has been to be a “Force for Good”, and we intend to be a Force for Good in bringing advanced AI capabilities into cybersecurity as well.

At SentinelOne, AI has always been at the core of what we do – many years ago we introduced static AI detection models as part of our endpoint security platform, vastly improving the ability to detect potential threats versus the signature-based approaches used in the market up to that point. At RSA 2023, we showcased the power of using generative AI in cyber with our introduction of Purple AI, which turns every security analyst into a super analyst. Purple AI significantly increases analyst productivity by reducing the layers of complexity needed to generate insights and by automating analysts’ ability to take action on threats and other issues surfaced via Purple.

Cybersecurity has a jobs problem: there are simply not enough trained cybersecurity professionals to meet the market’s needs – professionals trained to protect your favorite application, service, financial institution, utility, and so on. Generative AI is thus just what cybersecurity needs – a force multiplier for security experts, for those on the good side of the equation.

What Have We Learned about Generative AI and Cybersecurity?

So what have we learned about generative AI and cybersecurity? We’ve said before that security is a data problem, and as it turns out, generative AI is ultimately a data problem as well. ChatGPT amazed the world because it provided a highly human-compatible interface to the internet’s endless catalog of data. Since then, we’ve seen companies across many sectors – security, CRM, content creation, HR, and more – introduce “co-pilots” on top of their existing applications.

The rapid adoption of generative AI and LLM technologies brings unique challenges and opportunities to the security space that will be felt differently than in other segments. In particular, at the same time that security analysts are gaining access to these productivity and force-multiplying tools, so are hackers and bad actors, which will in time allow them to create and execute attacks like never before. This makes it even more critical for companies to understand and adopt their own AI-based security strategy in order to thwart the more sophisticated generation of attacks they are likely to see in the not-too-distant future.

Today’s co-pilot AI guides often use a combination of models: open source models, independent commercial LLMs, and a vendor’s own proprietary models. These proprietary models can be tuned and trained on data specific to that vendor, often data gathered by the products and services it currently offers. The tuning and training performed on this data is what makes these models relevant and functional within a particular practice. The co-pilot approaches that ultimately graduate to being a Top Gun solution in their space will bring something truly transformative to their segment.

SentinelOne believes that transformational opportunities for generative AI will be driven by offerings that leverage a corpus of data that goes well beyond a single vendor’s product and spans an entire area or set of areas. After all, the data and information that defines a process or practice within an industry often spans multiple products.

For example, within security, it is not uncommon for an enterprise to be using between 50 and 100 different security tools. Today these tools define the customer’s security strategy and operations, and collectively house the majority of the data relevant to them. GenAI’s ability to transform cybersecurity will depend in part on utilizing as much of this data as possible to introduce new capabilities never before possible within a single set of tools.

Our goal with Purple AI is to do just that: provide enterprises a way to leverage the vast amount of security data spread across their security tools to create a new superpower against the evolving threat landscape, made possible through our Singularity Security DataLake.

How Does S Ventures Fit In?

So how does S Ventures fit in? We believe generative AI will give rise to an entirely new tech stack, spanning core infrastructure, tools and services, and applications. LLMs will form a foundational component of the core infrastructure stack. Given the rarity of talent, the funding requirements (for training and tuning new models), and the ability to execute required, there will be only a handful of players that can build a business around LLMs, perhaps most analogous to the emergence of AWS, GCP, and Azure during the cloud era.

While many of these same players are also highly relevant in generative AI and LLMs today, many enterprise customers have a strong desire to avoid vendor lock-in and to work with a best-of-breed independent vendor that can cater to their specific business needs. These customers will want to establish their own unique value proposition with AI and will care about factors such as customization, privacy, security, explainability, and trust.

While some customers – particularly smaller and mid-sized ones – may be able to utilize off-the-shelf models, many larger customers will need highly customized and configured models tuned for their environment. Some customers may also require an enterprise level of support or engagement from their LLM provider, while others will have the sophistication to train and tune a combination of commercial and proprietary models themselves. As in the cloud era, we believe commercial and open source models will coexist alongside each other.

S Ventures is focused on actively investing in the generative AI ecosystem. Within the LLM area, we intend to invest in a select number of independent providers that hold the most promise for bringing the transformative potential of AI to the enterprise.

As part of this, we are excited about our recent investment in Cohere, a leading LLM provider. Cohere’s mission is to help enterprises adopt generative AI through the use of LLMs. The company has proven its ability to deliver top-performing models. It has a world-class technical team that includes experts from Google Brain, DeepMind, and Meta, as well as a great business team (including former operational execs from some of the largest tech companies in the world, such as Google, Amazon, Apple, and Cisco) to help execute on its enterprise strategy.

Cohere understands what enterprises need: customization, data security, quality, a cloud-agnostic approach, and the ability to deploy both on-prem and in the cloud. Cohere’s recent partnership with Oracle is a great early example of this.

We are excited about our investment in Cohere and their strong position to be one of the winners in the generative AI space.

Analyzing Attack Opportunities Against Information Security Practitioners

In partnership with vx-underground, SentinelOne recently ran its first Malware Research Challenge, in which we asked researchers across the cybersecurity community to submit previously unpublished work to showcase their talents and bring their insights to a wider audience.

Today’s post is the second in a series highlighting the best entries. Jared Stroud (@DLL_Cool_J / Arch Cloud Labs) explores the risks security researchers face from APTs and other threat actors who compromise security research tools. The study includes discussion of a novel attack vector through the popular open-source reverse engineering platform Ghidra.

Background

Attacks against the Information Security research community have historically ranged from fake proof-of-concept GitHub repos to modified Visual Studio project data that results in the execution of PowerShell commands. In recent years, threat actors have begun targeting software heavily used within the Information Security community. Observed targeting techniques include directly selecting individuals from the security research community via phishing campaigns and casting a wider opportunistic net by seeding illegal software torrents. Within this industry, the circles of trust researchers rely on create an attack surface that is both unique and wider than one might think.

Sophisticated attacks focus on those who provide the most value to the Information Security community through blog posts, YouTube channels, and other forms of information sharing. This article explores some of these historical attacks, identifies the attack surface of a researcher’s toolkit, and outlines defensive strategies the community can apply against such attacks.

Historical Targeted Attacks Against Software Used by Security Practitioners

Security company ESET reported in 2021 that threat actors linked to the DPRK backdoored IDA Pro torrents via malicious DLLs placed within the IDA Pro installation folder. Arch Cloud Labs does not condone the use of pirated software, but it is likely this software was chosen because of the probability of discovering additional security research on a victim’s machine. After all, why choose IDA Pro, when other actors have backdoored popular resource-intensive video game torrents simply to make use of GPUs for cryptojacking campaigns? Per ESET’s tweets, launching the backdoored IDA Pro creates a scheduled task that downloads an additional DLL named IDA Helper, which fetches and executes a remotely hosted payload for follow-on post-exploitation activity.

Abusing DLL hijacking opportunities within Windows software is nothing new; what stands out here is the threat actor’s deliberate focus on the individuals who use IDA Pro, namely security researchers. Distributing the backdoor through a pirated torrent also gives the threat actor cover: even if an endpoint security product flags the software as malicious, a victim who knowingly downloaded software illegally may assume the alert is simply related to the torrent or its associated crack, and certainly not that they have been targeted by a nation-state adversary.

In 2022, Google’s Threat Analysis Group (TAG) reported a targeted phishing campaign against security researchers in which the attackers, under the guise of collaboration, asked known researchers to help finalize a proof-of-concept exploit. That proof-of-concept ultimately turned out to be a Visual Studio project that executed a PowerShell command to exfiltrate data from a security researcher’s lab environment. Numerous individuals on Twitter came forward saying they had been contacted by puppet accounts requesting help.

Those who seek to mentor and help one another in this industry should be applauded for their efforts, but they should also be wary of how that goodwill can be abused in elaborate phishing campaigns such as those reported by Google TAG. It is unlikely that these types of attacks will slow down in the coming years, and as security practitioners, understanding our tools of the trade and their associated attack surface is critical to protecting ourselves and our research. If one looks hard enough, most reverse engineering or digital forensics tools offer some way to achieve living-off-the-land-style attacks. Arch Cloud Labs analyzed how these techniques could be applied to other widely used software, such as Ghidra, to enable a threat actor to target members of the security community, and demonstrates such an attack later in this article.

Opportunistic Attacks During a CVE Crisis

Academic researchers from Leiden University recently published a paper stating that 10% of proof-of-concept exploit repos on GitHub contain code meant to exfiltrate sensitive data from the environment they run in. If a researcher’s VM is not appropriately air-gapped or isolated to an individual project, the opportunity for sensitive data loss exists. During moments of crisis, such as a high-impact vulnerability (e.g., unauthenticated RCE), defenders seek to quickly understand, assess, and remediate the potential impact the vulnerability may have on their organization. This leads researchers to publish proof-of-concept (PoC) code on GitHub for use by the wider community.

As defenders rush to GitHub, this creates a “watering hole” scenario in which independent or otherwise unknown researchers can name a repo “PoC CVE-XXXX-XXXX” to capture incoming traffic for the latest vulnerability. In 2020, Andy Gill demonstrated a perfect example of this by creating a GitHub repo containing a bash script that would Rick Roll security researchers.

How do you or your research colleagues determine whether a GitHub repo is trustworthy? Do you audit the code, or simply look at the number of stars the repo has and think “this is probably fine”? The concept of trust around PoC exploit code is unique to the Information Security arena: outside of commercial or well-known offensive frameworks, how often is malicious software intentionally executed in your corporate environment? Ideally never, but a framework or risk matrix for assessing a proof-of-concept’s legitimacy is an area the industry has yet to fully explore.
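
As a small step toward the “audit the code” option, the sketch below is a hypothetical triage helper (not an existing tool) that greps a cloned PoC repository for a few crude indicators, such as download cradles, base64 decoding, and reverse-shell idioms, that should prompt a closer manual review before anything is executed. The patterns are illustrative assumptions and are trivially bypassed; this is triage, not detection.

```java
// Naive triage sketch: walk a cloned PoC repo and flag files containing
// indicators that deserve a manual look before the code is ever run.
// The indicator list below is an illustrative assumption, not a complete one.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Stream;

public class PocTriage {

    private static final List<Pattern> INDICATORS = List.of(
            Pattern.compile("Invoke-WebRequest|DownloadString|IEX\\b", Pattern.CASE_INSENSITIVE),
            Pattern.compile("curl\\s+\\S*\\s*http", Pattern.CASE_INSENSITIVE),
            Pattern.compile("base64\\s+-d|FromBase64String", Pattern.CASE_INSENSITIVE),
            Pattern.compile("nc\\s+-e|/dev/tcp/", Pattern.CASE_INSENSITIVE));

    public static void main(String[] args) throws IOException {
        Path repo = Paths.get(args[0]); // path to the cloned PoC repository
        try (Stream<Path> files = Files.walk(repo)) {
            files.filter(Files::isRegularFile).forEach(path -> {
                try {
                    String text = Files.readString(path);
                    for (Pattern indicator : INDICATORS) {
                        if (indicator.matcher(text).find()) {
                            System.out.println(path + " : matches " + indicator.pattern());
                        }
                    }
                } catch (IOException e) {
                    // Binary or unreadable file; skip it.
                }
            });
        }
    }
}
```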

Identifying Attack Surface of Researcher’s Tools: Case Study Ghidra

Complex software such as Visual Studio or IDA Pro contains numerous ways to achieve code execution. Understanding a tool and how its functionality can be abused is critical to understanding the paths an adversary could take to compromise research environments.

Fundamentally, treating the file formats and complex archives of build systems as ways to “live off the land”, much like LOLBINs, can lead to exciting new discoveries. Today, Arch Cloud Labs demonstrates how Ghidra, a popular reverse engineering tool released by the National Security Agency, could be abused in a similar vein to IDA Pro.

Ghidra versions are regularly released as zip files on the official GitHub repo under the releases tab here. To backdoor Ghidra, one simply needs to place a Java .jar file containing a class with the same name as a legitimate, already existing Ghidra class into the ./Ghidra/patch directory of the zip file; that class will then override the original functionality. Per the README within the Ghidra patch directory:

> This directory exists so that Ghidra releases can be patched, or overridden. Classes or jar files placed in this directory will be found and loaded *before* the classes that exist in the release jar files. One exception is that classes in the Utility module can not be patched in this way. The jar files will be sorted by name before being prepended to the classpath in order to have predictable class loading between Ghidra runs. This patch directory will be the very first patch entry on the classpath such that any individual classes will be found before classes in any of the patch jar files. The class files in this directory must be in the standard java package directory structure.

The patch directory gives an adversary a unique and low-effort opportunity to ship a backdoored zip archive to an unsuspecting researcher as part of a phishing campaign, or to have a second-stage payload drop additional payloads into the patch directory. To be clear, this is not an exploit, but rather abuse of default functionality within the Ghidra tool.

When a new version of Ghidra is launched for the first time on a machine and no entry for it exists in the .ghidra folder within the user’s home directory, a user agreement followed by the Help menu is displayed by default. By identifying which Java classes are called in that flow (e.g., Help.java), this default behavior can be abused to load and execute code in a guaranteed fashion: setting the version in the zip file to a non-existent build number (e.g., build id 9999.9999) ensures no prior entry exists, so the first-run path always executes. This combination creates a unique opportunity for a phishing campaign. The workflow is visualized below.

Ghidra attack flow

The modified Help.java file, shown below, contains a simple proof-of-concept modification that echoes “pwn” to /tmp/pwn.txt.

Modified Help.java
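
For readers who cannot view the image, the sketch below illustrates the general shape of such a modification under stated assumptions: the package and class name are placeholders (the actual Ghidra class overridden in the original research may differ), and the only behavior added is the file write described above.

```java
// Hypothetical sketch of a patched class dropped into ./Ghidra/patch.
// The package/class name is a placeholder; to override a real Ghidra class,
// it must match that class's fully-qualified name and remain API-compatible
// so the application keeps working and the user notices nothing unusual.
package help;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class Help {

    // A static initializer runs as soon as the class loader resolves this class,
    // before any of its methods are called.
    static {
        try {
            Files.write(Paths.get("/tmp/pwn.txt"), "pwn\n".getBytes(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            // Fail silently so the host application behaves normally.
        }
    }

    // ... the rest of the class would mirror the legitimate implementation.
}
```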

Upon modifying the Java file and completing the build steps outlined in the Ghidra build documentation, a malicious actor can take the compiled JAR, place it in the patch directory, and ship it with a Ghidra release of their choice. Alternatively, in a post-exploitation scenario, the JAR can be dropped into the patch directory as a means of persistence that triggers whenever Ghidra is launched. A more sophisticated payload is left as an exercise to the reader.

Protection Against Attacks and Validation of PoCs

Nothing listed below is groundbreaking or new, but these are practices not typically applied at the individual researcher level. After all, if these mechanisms are meant to secure large enterprise organizations, why not apply them to the research community as well?

Start by assessing the threat model for your research environment; this will enable you to make appropriate decisions about what additional steps are needed to protect yourself and your organization. Accidentally executing malware never results in a good day, and planning your environment and an associated disaster recovery plan for such events ahead of time helps you take the right protective steps. Beyond a disaster recovery plan for malware analysis environments, how can the community continuously check that these safeguards actually work? Just as the industry has adopted continuous vulnerability scanning for containers, code, and more, auditing custom malware and offensive security research environments is just as critical.

Software bills of materials (SBOMs) have become commonplace in discussions about deploying commercial software. The ability to map given versions of plugins and tools to their associated dependencies and hashes can help the research community catch malicious DLLs, JARs, plugins, and other components slipped into software distributions. And as new C2 frameworks are constantly developed and borrow code from one another, the ability to track the provenance of scripts will help keep maliciously modified scripts off your test systems, while also providing a way to acknowledge the original author.
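
As a minimal illustration of what this kind of auditing could look like for an individual researcher, the hypothetical helper below (not part of any existing SBOM tooling) walks an unpacked tool distribution, hashes every file with SHA-256, and compares the results against a known-good manifest captured at install time. Unexpected files (such as a rogue JAR in Ghidra/patch) and hash mismatches are flagged for review; the manifest format and file names are assumptions.

```java
// Minimal audit sketch: compare a tool distribution on disk against a known-good
// manifest of "<sha256>  <relative-path>" lines captured when it was installed.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Stream;

public class DistroAudit {

    static String sha256(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(Files.readAllBytes(file))) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Path root = Paths.get(args[0]);     // e.g. an unpacked Ghidra release
        Path manifest = Paths.get(args[1]); // known-good hashes captured at install time

        Map<String, String> expected = new HashMap<>();
        for (String line : Files.readAllLines(manifest)) {
            String[] parts = line.trim().split("\\s+", 2);
            if (parts.length == 2) expected.put(parts[1], parts[0]);
        }

        try (Stream<Path> files = Files.walk(root)) {
            files.filter(Files::isRegularFile).forEach(path -> {
                String rel = root.relativize(path).toString();
                try {
                    String known = expected.get(rel);
                    if (known == null) {
                        System.out.println("UNEXPECTED FILE: " + rel); // e.g. a rogue JAR in Ghidra/patch
                    } else if (!known.equals(sha256(path))) {
                        System.out.println("HASH MISMATCH:   " + rel);
                    }
                } catch (Exception e) {
                    System.out.println("ERROR READING:   " + rel);
                }
            });
        }
    }
}
```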

Finally, commercial software is cryptographically signed, so why not PoCs? PoCs hosted on GitHub can have their git commits signed with an individual’s PGP key. This additional level of verification gives trusted researchers a way to show that the tools they publish really came from them, and gives the wider Information Security community a way to check. Additionally, a web-of-trust model for GitHub, in which users vouch for a researcher’s PoCs, presents an interesting possibility to be explored.
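
On the consuming side, checking such a signature can be as simple as running git verify-commit inside the cloned repository. The hypothetical wrapper below sketches that check; it assumes git and gpg are installed and that the author’s public key has already been imported into the local keyring.

```java
// Sketch: shell out to `git verify-commit` to check whether the latest commit
// of a cloned PoC repo carries a valid GPG signature.
import java.io.File;
import java.io.IOException;

public class VerifyPocCommit {
    public static void main(String[] args) throws IOException, InterruptedException {
        File repo = new File(args[0]); // path to the cloned repository
        Process proc = new ProcessBuilder("git", "verify-commit", "HEAD")
                .directory(repo)
                .inheritIO() // print gpg's verification output directly
                .start();
        int exitCode = proc.waitFor();
        System.out.println(exitCode == 0
                ? "HEAD is signed by a key in your local keyring."
                : "HEAD is unsigned or the signature could not be verified.");
    }
}
```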

Conclusion

As attacks become increasingly complex, the tools used to dissect and reveal the inner workings of campaigns will themselves likely be targeted. Understanding how these tools can be used beyond their intended functionality is critical to identifying advanced attacks against a given organization or group. Threat modeling the environment in which security research is conducted should ultimately be done to protect both the researcher and their organization. Just as these practices are applied to enterprise organizations, they should filter down to the individual researcher.

References