What is Mimikatz? (And Why Is It So Dangerous?)

What if we were to tell you that there was a magical tool that could greatly simplify the discovery and pillaging of credentials from Windows-based hosts? This tool would be a welcome addition to any criminal’s toolbelt, as it would be for pentesters, Red Teamers, black hats, white hats, indeed anyone interested in compromising computer security. Now, what if we told you it was FREE and already built into many of your favorite tools and malware campaigns, kits and frameworks? Sounds exciting, right?

But then you probably already know that this is no wish list or some private NSA hacking tool, but the well-established mimikatz post-exploitation tool. In this post, we take a look at what mimikatz is, how it is used, why it still works, and how to successfully protect endpoints against it.

image of what is mimikatz

What is Mimikatz?

The mimikatz tool was first developed in 2007 by Benjamin Delpy. So why are we writing about mimikatz today? Quite simply because it still works. Not only that, but mimikatz has, over the years, become commoditized, expanded and improved upon in a number of ways. 

The official builds are still maintained and hosted on GitHub, with the current version being 2.2.0 20190813 at the time of writing. Aside from those, it is also included in a number of other popular post-exploitation frameworks and tools such as Metasploit, Cobalt Strike, Empire, PowerSploit and similar.  

image of mimikatz resources

These tools greatly simplify the process of obtaining Windows credential sets (and subsequent lateral movement) via RAM, hash dumps, Kerberos exploitation, as well as pass-the-ticket and pass-the-hash techniques.

image of mimikatz in use  

Mimikatz consists of multiple modules, each tailored either to core functionality or to a particular attack vector. Some of the more prevalent and heavily utilized modules include:

  • Crypto
    • Manipulation of CryptoAPI functions. Provides token impersonation and patching of the legacy CryptoAPI
  • Kerberos
    • “Golden Ticket” creation via Microsoft Kerberos API
  • Lsadump
    • Handles manipulation of the SAM (Security Account Manager) database. This can be used against a live system, or “offline” against backup hive copies. The module allows access to passwords as LM or NTLM hashes.
  • Process
    • Lists running processes (can be handy for pivots)
  • Sekurlsa
    • Handles extraction of data from LSASS (Local Security Authority Subsystem Service). This includes tickets, pin codes, keys, and passwords.
  • Standard
    • The main module of the tool. Handles basic commands and operations
  • Token
    • Context discovery and limited manipulation of access tokens
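In practice, these modules are driven from mimikatz’s interactive console. The short session below is illustrative (output elided): `privilege::debug` requests SeDebugPrivilege so that LSASS memory can be read, after which the sekurlsa and lsadump modules do the actual credential pillaging:

```
mimikatz # privilege::debug
Privilege '20' OK

mimikatz # sekurlsa::logonpasswords
[...credential material for each logon session...]

mimikatz # lsadump::sam
[...LM/NTLM hashes from the SAM...]
```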

Does Mimikatz Still Work on Windows 10?

Yes, it does. Attempts by Microsoft to inhibit the usefulness of the tool have been temporary and unsuccessful. The tool has been continually developed and updated to enable its features to plow right through any OS-based band-aid. 

Initially, mimikatz was focused on exploitation of WDigest. Prior to 2013, Windows loaded encrypted passwords into memory, as well as the decryption key for said passwords. Mimikatz simplified the process of extracting these pairs from memory, revealing the credential sets. 

Over time, Microsoft has made adjustments to the OS and corrected some of the flaws that allow mimikatz to do what it does, but the tool continues to stay on top of these changes and adjust accordingly. More recently, mimikatz has repaired modules that were crippled post Windows 10 1809, such as sekurlsa::logonpasswords.
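One concrete example of those adjustments: since the KB2871997 update (and by default from Windows 8.1 / Server 2012 R2 onward), whether WDigest keeps plaintext credentials in LSASS is governed by a single registry value. Because an attacker with admin rights can flip it back on, defenders commonly enforce it explicitly:

```
reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest /v UseLogonCredential /t REG_DWORD /d 0 /f
```

With UseLogonCredential set to 0, logons that occur after the change no longer leave plaintext WDigest passwords in memory for mimikatz to find.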

image of mimikatz run in Powershell ISE

Mimikatz supports both 64-bit x64 and 32-bit x86 architectures with separate builds. One of the reasons mimikatz is so dangerous is its ability to load the mimikatz DLL reflectively into memory. When combined with PowerShell (e.g., Invoke-Mimikatz) or similar methods, the attack can be carried out without anything ever being written to disk.

How Widely Used Is Mimikatz Today?

Many prominent threats bundle mimikatz directly or leverage their own implementations to pull credentials, or simply to spread via the discovered credential sets. NotPetya and BadRabbit are two notable examples; more recently, Trickbot has included its own implementation for basic credential theft and lateral movement.

To get another idea of how prevalent the use of mimikatz is in real-world attacks one need only look as far as MITRE. While this list is by no means complete, it does give a good idea of how many sophisticated attackers (aka APT groups) are using this tool. This list is a true “Who’s Who” of scary threat actors involved in advanced targeted attacks: Oilrig, APT28, Lazarus, Cobalt Group, Turla, Carbanak, FIN6 & APT21 just to name a few.

image of mimikatz techniques
image of attackers that use mimikatz

All these groups develop their own way to invoke/inject mimikatz so as to ensure the success of the attack and evade the endpoint security controls that may stand in the way. 

Cobalt Group, specifically, is a great focus point, as they get their name from the use of the Cobalt Strike tool. Cobalt Strike is a collaborative Red Team and adversary simulation tool. As mentioned above, mimikatz is included as core functionality. Even more concerning is the ability to invoke mimikatz directly in memory from any context-appropriate process into which the Cobalt Strike beacon payload is injected. Again, this kind of ‘fileless’ attack avoids any disk reads/writes, and it can also bypass many modern “next-gen” products that are not able to properly monitor very specific OS events and activities.

Can Mimikatz Defeat Endpoint Security Software?

If the OS cannot keep up, can 3rd party security solutions defend against mimikatz attacks? That depends. The mimikatz tool creates a challenge for traditional endpoint security controls, aka legacy AV and some “next-gen” tools. As noted above, if they are not monitoring behavior in memory, or if they are not monitoring specific behaviors and events, they will simply not see or be able to prevent the attack.

It should also be noted that mimikatz requires Administrator or SYSTEM-level privileges on target hosts. Attackers must therefore inject into a process with an appropriately privileged context, or find a way to elevate privileges, a step that simply bypasses some AV solutions, particularly those prone to whitelisting “trusted” OS processes.

How To Successfully Defend Against Mimikatz

As this in-the-wild case study shows, SentinelOne’s static and behavioral AI approach provides robust prevention and protection against the use of mimikatz. Even when injected directly into memory, regardless of origin, SentinelOne is able to observe, intercept, and prevent the behavior. Even more important, however, is that as a result, we also prevent the damage that mimikatz can cause. That is, the loss of critical credentials, data, and ultimately time and money is avoided as mimikatz cannot evade the SentinelOne on-device agent.

SentinelOne is able to stop mimikatz from scraping credentials from protected devices. In addition to other built-in protection, we have added a mechanism that does not allow the reading of passwords, regardless of the policy settings.  

Conclusion

The bottom line here is that mimikatz is a near-ubiquitous piece of the modern adversary’s toolset. It is used across all sophistication levels and against the full spectrum of target types and categories. Despite being developed over 12 years ago, the toolset continues to work and improve, and likewise, mimikatz continues to provide a challenge to ageing and legacy endpoint protection technologies. 

SentinelOne offers a best-in-class solution to handle all angles of mimikatz-centric attacks with behavioral AI and Active EDR.  There is simply no substitute for autonomous endpoint detection and response in today’s threat landscape. 

MITRE ATT&CK IOCs

Mimikatz {S0002}
Account Manipulation {T1098}
Credential Dumping {T1003}
Pass The Hash {T1075}
Pass The Ticket {T1097}
Private Keys {T1145}
Security Support Provider {T1101}
Cobalt Strike {S0154}


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Workday to acquire online procurement platform Scout RFP for $540M

Workday announced this afternoon that it has entered into an agreement to acquire online procurement platform Scout RFP for $540 million. Scout RFP had raised more than $60 million, on a post-money valuation of $184.5 million, according to PitchBook data.

The acquisition builds on top of Workday’s existing procurement solutions, Workday Procurement and Workday Inventory, but Workday chief product officer Petros Dermetzis wrote in a blog post announcing the deal that Scout gives the company a more complete solution for customers.

“With increased importance around the supplier as a strategic asset, the acquisition of Scout RFP will help accelerate Workday’s ability to deliver a comprehensive source-to-pay solution with a best-in-class strategic sourcing offering, elevating the office of procurement in strategic importance and transforming the procurement function,” he wrote.

Ray Wang, founder and principal analyst at Constellation Research says that Workday has been trying to be the end-to-end cloud back office player. He says, “One of their big gaps has been in procurement.”

Wang says that Workday has been investing with eye toward filling gaps in the product set for some time. In fact, Workday Ventures has been an investor in Scout RFP since 2018, and it’s also an official Workday partner.

“A lot of the Workday investments are in portfolio companies that are complementary to Workday’s larger vision of the future of Cloud ERP. Today’s definition of ERP includes finance, HCM (human capital management), projects, procurement, supply chain and asset management,” Wang told TechCrunch.

As the Scout RFP founders stated in a blog post about today’s announcement, the two companies have worked well together and a deal made sense. “Working closely with the Workday team, we realized how similar our companies’ beliefs and values are. Both companies put user experience at the center of product focus and are committed to customer satisfaction, employee engagement and overall business impact. It was not surprising how easy it was to work together and how quickly we saw success partnering on go-to-market activities. From a culture standpoint, it just worked,” they wrote. A deal eventually came together as a result.

Scout RFP is a fairly substantial business, with 240 customers in 155 countries. There are 300,000 users on the platform, according to data supplied by the company. The company’s 160 employees will be moving to Workday when the deal closes, which is expected by the end of January, pending standard regulatory review.

The 7 most important announcements from Microsoft Ignite

It’s Microsoft Ignite this week, the company’s premier event for IT professionals and decision-makers. But it’s not just about new tools for role-based access. Ignite is also very much a forward-looking conference that keeps the changing role of IT in mind. And while there isn’t a lot of consumer news at the event, the company does tend to make a few announcements for developers, as well.

This year’s Ignite was especially news-heavy. Ahead of the event, the company provided journalists and analysts with an 87-page document that lists all of the news items. If I counted correctly, there were about 175 separate announcements. Here are the top seven you really need to know about.

Azure Arc: you can now use Azure to manage resources anywhere, including on AWS and Google Cloud

What was announced: Microsoft was among the first of the big cloud vendors to bet big on hybrid deployments. With Arc, the company is taking this a step further. It will let enterprises use Azure to manage their resources across clouds — including those of competitors like AWS and Google Cloud. It’ll work for Windows and Linux Servers, as well as Kubernetes clusters, and also allows users to take some limited Azure data services with them to these platforms.

Why it matters: With Azure Stack, Microsoft already allowed businesses to bring many of Azure’s capabilities into their own data centers. But because it’s basically a local version of Azure, it only worked on a limited set of hardware. Arc doesn’t bring all of the Azure Services, but it gives enterprises a single platform to manage all of their resources across the large clouds and their own data centers. Virtually every major enterprise uses multiple clouds. Managing those environments is hard. So if that’s the case, Microsoft is essentially saying, let’s give them a tool to do so — and keep them in the Azure ecosystem. In many ways, that’s similar to Google’s Anthos, yet with an obvious Microsoft flavor, less reliance on Kubernetes and without the managed services piece.

Microsoft launches Project Cortex, a knowledge network for your company

What was announced: Project Cortex creates a knowledge network for your company. It uses machine learning to analyze all of the documents and contracts in your various repositories — including those of third-party partners — and then surfaces them in Microsoft apps like Outlook, Teams and its Office apps when appropriate. It’s the company’s first new commercial service since the launch of Teams.

Why it matters: Enterprises these days generate tons of documents and data, but it’s often spread across numerous repositories and is hard to find. With this new knowledge network, the company aims to surface this information proactively, but it also looks at who the people are who work on them and tries to help you find the subject matter experts when you’re working on a document about a given subject, for example.


Microsoft launched Endpoint Manager to modernize device management

What was announced: Microsoft is combining its ConfigMgr and Intune services that allow enterprises to manage the PCs, laptops, phones and tablets they issue to their employees under the Endpoint Manager brand. With that, it’s also launching a number of tools and recommendations to help companies modernize their deployment strategies. ConfigMgr users will now also get a license to Intune to allow them to move to cloud-based management.

Why it matters: In this world of BYOD, where every employee uses multiple devices, as well as constant attacks against employee machines, effectively managing these devices has become challenging for most IT departments. They often use a mix of different tools (ConfigMgr for PCs, for example, and Intune for cloud-based management of phones). Now, they can get a single view of their deployments with the Endpoint Manager, which Microsoft CEO Satya Nadella described as one of the most important announcements of the event, and ConfigMgr users will get an easy path to move to cloud-based device management thanks to the Intune license they now have access to.

Microsoft’s Chromium-based Edge browser gets new privacy features, will be generally available January 15

What was announced: Microsoft’s Chromium-based version of Edge will be generally available on January 15. The release candidate is available now. That’s the culmination of a lot of work from the Edge team, and, with today’s release, the company is also adding a number of new privacy features to Edge that, in combination with Bing, offers some capabilities that some of Microsoft’s rivals can’t yet match, thanks to its newly enhanced InPrivate browsing mode.

Why it matters: Browsers are interesting again. After years of focusing on speed, the new focus is now privacy, and that’s giving Microsoft a chance to gain users back from Chrome (though maybe not Firefox). At Ignite, Microsoft also stressed that Edge’s business users will get to benefit from a deep integration with its updated Bing engine, which can now surface business documents, too.


You can now try Microsoft’s web-based version of Visual Studio

What was announced: At Build earlier this year, Microsoft announced that it would soon launch a web-based version of its Visual Studio development environment, based on the work it did on the free Visual Studio Code editor. This experience, with deep integrations into the Microsoft-owned GitHub, is now live in a preview.

Why it matters: Microsoft has long said that it wants to meet developers where they are. While Visual Studio Online isn’t likely to replace the desktop-based IDE for most developers, it’s an easy way for them to make quick changes to code that lives in GitHub, for example, without having to set up their IDE locally. As long as they have a browser, developers will be able to get their work done.

Microsoft launches Power Virtual Agents, its no-code bot builder

What was announced: Power Virtual Agents is Microsoft’s new no-code/low-code tool for building chatbots. It leverages a lot of Azure’s machine learning smarts to let you create a chatbot with the help of a visual interface. In case you outgrow that and want to get to the actual code, you can always do so, too.

Why it matters: Chatbots aren’t exactly at the top of the hype cycle, but they do have lots of legitimate uses. Microsoft argues that a lot of early efforts were hampered by the fact that the developers were far removed from the user. With a visual tool, though, anybody can come in and build a chatbot — and a lot of those builders will have a far better understanding of what their users are looking for than a developer who is far removed from that business group.

Cortana wants to be your personal executive assistant and read your emails to you, too

What was announced: Cortana lives — and it now also has a male voice. But more importantly, Microsoft launched a few new focused Cortana-based experiences that show how the company is focusing on its voice assistant as a tool for productivity. In Outlook on iOS (with Android coming later), Cortana can now read you a summary of what’s in your inbox — and you can have a chat with it to flag emails, delete them or dictate answers. Cortana can now also send you a daily summary of your calendar appointments, important emails that need answers and suggest focus time for you to get actual work done that’s not email.

Why it matters: In this world of competing assistants, Microsoft is very much betting on productivity. Cortana didn’t work out as a consumer product, but the company believes there is a large (and lucrative) niche for an assistant that helps you get work done. Because Microsoft doesn’t have a lot of consumer data, but does have lots of data about your work, that’s probably a smart move.


SAN FRANCISCO, CA – APRIL 02: Microsoft CEO Satya Nadella walks in front of the new Cortana logo as he delivers a keynote address during the 2014 Microsoft Build developer conference on April 2, 2014 in San Francisco, California (Photo by Justin Sullivan/Getty Images)

Bonus: Microsoft agrees with you and thinks meetings are broken — and often it’s the broken meeting room that makes meetings even harder. To battle this, the company today launched Managed Meeting Rooms, which for $50 per room/month lets you delegate to Microsoft the monitoring and management of the technical infrastructure of your meeting rooms.

NCR Barred Mint, QuickBooks from Banking Platform During Account Takeover Storm

Banking industry giant NCR Corp. [NYSE: NCR] late last month took the unusual step of temporarily blocking third-party financial data aggregators Mint and QuickBooks Online from accessing Digital Insight, an online banking platform used by hundreds of financial institutions. That ban, which came in response to a series of bank account takeovers in which cybercriminals used aggregation sites to surveil and drain consumer accounts, has since been rescinded. But the incident raises fresh questions about the proper role of digital banking platforms in fighting password abuse.

Part of a communication NCR sent Oct. 25 to banks on its Digital Insight online banking platform.

On Oct. 29, KrebsOnSecurity heard from a chief security officer at a U.S.-based credit union and Digital Insight customer who said his institution just had several dozen customer accounts hacked over the previous week.

My banking source said the attackers appeared to automate the unauthorized logins, which took place over a week in several distinct 12-hour periods in which a new account was accessed every five to ten minutes.

Most concerning, the source said, was that in many cases the aggregator service did not pass through prompts sent by the credit union’s site for multi-factor authentication, meaning the attackers could access customer accounts with nothing more than a username and password.

“The weird part is sometimes the attackers are getting the multi-factor challenge, and sometimes they aren’t,” said the source, who added that he suspected a breach at Mint and/or QuickBooks because NCR had just blocked the two companies from accessing bank Web sites on its platform.

In a statement provided to KrebsOnSecurity, NCR said that on Friday, Oct. 25, the company notified Digital Insight customers “that the aggregation capabilities of certain third-party products were being temporarily suspended.”

“The notification was sent while we investigated a report involving a single user and a third-party product that aggregates bank data,” reads their statement, which was sent to customers on Oct. 29. After confirming that the incident was contained, NCR restored the connectivity that is used for account aggregation. “As we noted, the criminals are getting aggressive and creative in accessing tools to access online information. NCR continues to evaluate and proactively defend against these activities.”

What were these aggressive and creative methods? NCR wouldn’t say, but it seems clear the hacked accounts are tied to customers re-using their online banking passwords at other sites that got hacked.

As I noted earlier this year in The Risk of Weak Online Banking Passwords, if you bank online and choose weak or re-used passwords, there’s a decent chance your account could be pilfered by cyberthieves — even if your bank offers multi-factor authentication as part of its login process.

Crooks are constantly probing bank Web sites for customer accounts protected by weak or recycled passwords. Most often, the attacker will use lists of email addresses and passwords stolen en masse from hacked sites and then try those same credentials to see if they permit online access to accounts at a range of banks.

A screenshot of a password-checking tool that can be used to target Chase Bank customers who re-use passwords. There are tools like this one for just about every other major U.S. bank.

From there, thieves can take the list of successful logins and feed them into apps that rely on application programming interfaces (APIs) from one of several personal financial data aggregators, including Mint, Plaid, QuickBooks, Yodlee, and YNAB.

A number of banks that do offer customers multi-factor authentication — such as a one-time code sent via text message or an app — have chosen to allow these aggregators the ability to view balances and recent transactions without requiring that the aggregator service supply that second factor.

If the thieves are able to access a bank account via an aggregator service or API, they can view the customer’s balance(s) and decide which customers are worthy of further targeting.

But beyond targeting customers for outright account takeovers, the data available via financial aggregators enables a far more insidious type of fraud: The ability to link the target’s bank account(s) to other accounts that the attackers control.

That’s because PayPal, Zelle, and a number of other pure-play online financial institutions allow customers to link accounts by verifying the value of microdeposits. For example, if you wish to be able to transfer funds between PayPal and a bank account, the company will first send a couple of tiny deposits — a few cents, usually — to the account you wish to link. Only after verifying those exact amounts will the account-linking request be granted.

The temporary blocking of data aggregators by NCR brings up a point worthy of discussion by regulators: Namely, in the absence of additional security measures put in place by the aggregators, do the digital banking platform providers like NCR, Fiserv, Jack Henry, and FIS have an obligation to help block or mitigate these large-scale credential exploitation attacks?

KrebsOnSecurity would argue they do, and that the crooks who attacked the customers of my source’s credit union have probably already moved on to using the same attack against one of several thousand other dinky banks across the country.

Intuit Inc., which owns both Mint and QuickBooks, has not responded to requests for comment.

NCR declined to discuss specifics about how it plans to respond to similar attacks going forward.

The Good, the Bad and the Ugly in Cybersecurity – Week 44

Image of The Good, The Bad & The Ugly in CyberSecurity

The Good

The benefits of the DDW (Deep Dark Web) are beginning to shine through. Whether it is a site like SecureDrop (aka DeadDrop) that allows people to anonymously share information with journalists or someone in Iran sharing on Tor’s own website how grateful they are to be able to get news “from the West” with less fear of being persecuted, these rays of light from the DDW are always welcome. So it was great to see that this week, BBC News decided to host news web servers on the DDW.  These are only accessible via Tor so that a user can’t accidentally visit a site without the anonymization protection. Even better, the BBC are hosting translated, regionally-targeted sites in Arabic, Persian, Vietnamese and Russian languages to help people in those censored regions access unfiltered content from the West.

image of tor the onion router

The Bad

Although authorities at the Kudankulam Nuclear Power Plant (KKNPP) in India denied reports on Monday that the power plant had been compromised by malware, there is little doubt amongst the security community that bridging an air gap is entirely feasible. Myriad ways and means have been developed that allow jumping air gaps via thumb drives, compromised laptops, or standing up stealthy ad-hoc wireless networks. See AirHopper, COTTONMOUTH, or USBee as examples.

Citizens in India are demanding an explanation, but instead have been treated to a bland denial by the KKNPP.

“…the plant and other Indian nuclear power plants control systems stand alone and are not connected to outside cyber network and Internet. Hence, any cyberattack on Nuclear Power Plant Control System [is] not possible. Moreover, all the systems had been loaded with home-grown firewalls to check the hackers’ attempts, if any.”

I don’t know about you, but “air-gapped” and “home-grown firewalls” rarely belong in the same description of mission-critical infrastructure.

On Wednesday, plant authorities confirmed the compromise, while still asserting that mission-critical networks were not compromised. This author has learned from decades supporting critical operational environments in the context of military operations that the phrase “isolated” often does not actually imply an air gap, but rather some combination of firewalls, data guards, and/or data diodes that logically separate, rather than physically separate, networks. A physically isolated mission-critical network would indeed be the norm for an operational nuclear facility. So then, what about these “home-grown firewalls” mentioned earlier in the week?

image of Indian Nuclear Power Plant
Image Credit: indiawaterportal.org/The Kudankulam Nuclear Power Plant (KKNPP)/Wikimedia Commons

The Ugly

Ransomware victims are paying upwards of $1 million USD, and the trend is only getting worse. In a twist, some of the campaigns have been targeting the company’s insurance documentation first, prior to holding its data for ransom. Patrick Cannon, head of enterprise risk claims at Tokio Marine Kiln Group Ltd, said he had heard of one incident where:

“…the insured said they couldn’t afford the ransom, so the attacker produced a copy of the insurance policy and said that, actually, their cyber insurance would cover it”

image of ryuk ransomware

A report by Beazley shows a 37% rise in ransomware this quarter compared to last, and a significant focus on IT organizations and MSSPs being hit. This uptick could be related to the recent re-emergence of Emotet-driven campaigns, or it could be the result of last spring’s Fin 9 and related MSSP-targeted campaigns by gift-carding operations having been discovered and “burned”: why not make additional profit on your way out of the MSSPs by targeting both the MSSP and their customers with ransomware? It seems that, for the unprotected at least, the dilemma posed by ransomware is not going away any time soon!



New Relic snags early-stage serverless monitoring startup IOpipe

As we move from a world dominated by virtual machines to one of serverless, it changes the nature of monitoring, and vendors like New Relic certainly recognize that. This morning the company announced it was acquiring IOpipe, a Seattle-based early-stage serverless monitoring startup, to help beef up its serverless monitoring chops. Terms of the deal weren’t disclosed.

New Relic gets what it calls “key members of the team,” which at least includes co-founders Erica Windisch and Adam Johnson, along with the IOpipe technology. The new employees will be moving from Seattle to New Relic’s Portland offices.

“This deal allows us to make immediate investments in onboarding that will make it faster and simpler for customers to integrate their [serverless] functions with New Relic and get the most out of our instrumentation and UIs that allow fast troubleshooting of complex issues across the entire application stack,” the company wrote in a blog post announcing the acquisition.

It adds that initially the IOpipe team will concentrate on moving AWS Lambda features like Lambda Layers into the New Relic platform. Over time, the team will work on increasing support for serverless function monitoring. New Relic is hoping that by combining the IOpipe team and solution with its own, it can accelerate the build-out of its serverless monitoring capabilities.

Eliot Durbin, an investor at Bold Start, which led the company’s $2 million seed round in 2018, says both companies win with this deal. “New Relic has a huge commitment to serverless, so the opportunity to bring IOpipe’s product to their market-leading customer base was attractive to everyone involved,” he told TechCrunch.

The startup has been helping monitor serverless operations for companies running AWS Lambda. It’s important to understand that serverless doesn’t mean there are no servers, but the cloud vendor — in this case AWS — provides the exact resources to complete an operation, and nothing more.

IOpipe co-founders Erica Windisch and Adam Johnson

Photo: New Relic

Once the operation ends, the resources can simply get redeployed elsewhere. That makes building monitoring tools for such ephemeral resources a huge challenge. New Relic has also been working on the problem and released New Relic Serverless for AWS Lambda earlier this year.

As TechCrunch’s Frederic Lardinois pointed out in his article about the company’s $2.5 million seed round in 2017, Windisch and Johnson bring impressive credentials:

IOpipe co-founders Adam Johnson (CEO) and Erica Windisch (CTO), too, are highly experienced in this space, having previously worked at companies like Docker and Midokura (Adam was the first hire at Midokura and Erica founded Docker’s security team). They recently graduated from the Techstars NY program.

IOpipe was founded in 2015, which was just around the time that Amazon was announcing Lambda. At the time of the seed round the company had eight employees. According to PitchBook data, it currently has between 1 and 10 employees, and has raised $7.07 million since its inception.

New Relic was founded in 2008 and has raised more than $214 million, according to Crunchbase, before going public in 2014. Its stock price was $65.42 at the time of publication, up $1.40.

Building A Custom Tool For Shellcode Analysis

The Zero2Hero malware course continues with Daniel Bunce demonstrating how to write a custom tool to load, execute and debug malicious shellcode in memory.

image of shellcode analysis

Recently, FireEye published a blog post detailing malware used by APT 41, specifically the DEADEYE first stage, capable of downloading, dropping or loading payloads on an infected system, and the LOWKEY backdoor. They also described an additional “RC4 Layer”: Position Independent Code (PIC) that RC4-decrypts an embedded payload and loads it into memory using its reflective DLL loader capabilities.

Unlike Windows executables, shellcode doesn’t have any headers, meaning the Windows loader cannot execute standalone shellcode. As a result, debugging is impossible without an external tool to load and execute shellcode for you. Therefore, in this blog post, we will cover how to write a tool in C to load shellcode into memory and wait until a debugger is attached before executing it. But first, why do we need to debug shellcode?

Why Debugging Position Independent Shellcode is Useful

Position Independent Code can be executed anywhere in memory. This means there are no hardcoded addresses and no direct calls to APIs such as GetProcAddress or LoadLibrary; the shellcode must resolve everything it needs at runtime.

As a result, static analysis of the shellcode can take a while, as the shellcode is forced to manually look up addresses that may not be known without debugging. Furthermore, plenty of malware utilizes hashing when looking up APIs: a hash is passed into a function that hashes each export of the DLL in question until a matching pair is found. Whilst the hashing routine can be replicated so that the correct API is found without dynamic analysis, there are many hashing algorithms out there, from CRC hashing to custom algorithms, so there will be plenty of situations where you have to update your script to include an additional algorithm, slowing down analysis further. So why not just load the shellcode into memory yourself and execute it when a debugger is attached?
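
To make the export-hashing idea concrete, here is a minimal sketch in C. The ROR13 algorithm, the function names and the three-entry export list are all illustrative assumptions; real shellcode families each use their own variants.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative ROR13-style export-name hash, a scheme commonly seen in
   shellcode (the exact algorithm varies from family to family). */
uint32_t ror13_hash(const char *name)
{
    uint32_t h = 0;
    while (*name) {
        h = (h >> 13) | (h << 19);   /* rotate right by 13 bits */
        h += (uint8_t)*name++;
    }
    return h;
}

/* Walk a DLL's export names until one hashes to the value carried by
   the shellcode; returns NULL if no export matches. */
const char *resolve_by_hash(uint32_t target, const char **exports, size_t count)
{
    for (size_t i = 0; i < count; i++)
        if (ror13_hash(exports[i]) == target)
            return exports[i];
    return NULL;
}
```

For example, given an export list of `{"LoadLibraryA", "GetProcAddress", "VirtualAlloc"}`, calling `resolve_by_hash(ror13_hash("GetProcAddress"), exports, 3)` walks the list and returns the matching name, just as the shellcode's resolver does against a real export table.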

Well, you can. The guys at OALabs have created an extremely helpful tool called BlobRunner to do exactly that. However, rather than simply use a pre-existing tool, in this post we’re going to be focusing on writing our own, as it’s not too difficult to do so, and shellcode execution isn’t uncommon inside malware, so knowing the internals of how it works will help when it comes to recognizing it inside of a sample.

Overview of the ShellCode Analysis Tool’s Routine

So, the tool we’re writing needs to be able to read the shellcode from a file, allocate a region of memory large enough to accommodate the shellcode, write the shellcode into said region of memory, wait until a debugger has been attached, and then execute it. As this tool is for debugging rather than behavioural analysis, we need to wait until a debugger has been attached, to prevent the shellcode from running before we can step through it. The shellcode is also 64-bit, and as the Visual C++ compiler does not support inline assembly for 64-bit applications, we’ll have to use CreateThread() to execute the shellcode, rather than using a jmp. Anyway, with that covered, let’s take a look at what the main() function will look like.

Writing the main() Function

This function is set up to accept two arguments: the filename and the entry point offset.

image of main function

In the case of the loader, without analyzing what executes it, the entry point seems to be at the offset 0x80. If we don’t pass in a second argument (the offset), the program will attempt to execute the shellcode from the offset 0x00. The offset is converted to an integer using strtol(), and then the function read_into_memory() is called, which, as the name suggests, reads the shellcode into process memory. This returns the base address of the shellcode, which is then passed into the function execution(), along with the entry point offset. As you can see, the layout is fairly simple – unlike loading an executable, we don’t need to map sections into memory; we can simply execute the shellcode once it has been copied in. With that said, let’s move on to the other functions that we need.
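
Since the actual code appears only in the screenshot, here is a sketch in C consistent with the description above. The usage message, the stubbed-out helpers, and the name loader_main (standing in for the tool’s real main() so the sketch can sit alongside other code) are assumptions of this sketch.

```c
#include <stdio.h>
#include <stdlib.h>

/* Stubs so the sketch compiles on its own; the real routines, which use
   VirtualAlloc and CreateThread, are described in the next two sections. */
unsigned char *read_into_memory(const char *filename) { (void)filename; return NULL; }
void execution(unsigned char *base, long entry_offset) { (void)base; (void)entry_offset; }

/* The tool's entry point (named loader_main in this sketch; in the real
   tool this is simply main). */
int loader_main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <shellcode file> [entry offset]\n", argv[0]);
        return 1;
    }

    /* The entry point defaults to offset 0x00; a second argument overrides
       it, e.g. 0x80 for the LOWKEY reflective loader discussed above. */
    long entry_offset = 0;
    if (argc > 2)
        entry_offset = strtol(argv[2], NULL, 16);

    unsigned char *base = read_into_memory(argv[1]);
    if (base == NULL) {
        fprintf(stderr, "failed to read %s\n", argv[1]);
        return 1;
    }

    execution(base, entry_offset);
    return 0;
}
```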

Writing the read_into_memory() Function

This function is responsible for loading the shellcode from a file into an allocated region of memory.

image of read_into_memory function

To do this, we first need the size of the shellcode, which we can get using fseek() and ftell(). This is then passed into VirtualAlloc(), along with PAGE_EXECUTE_READWRITE, allowing us to execute that region of memory without calling VirtualProtect(). The return address is put into the buffer variable, which is then passed into fread(), along with the handle to the open file. After the shellcode has been read into memory, we return the address in buffer back to the calling function, for use in the execution() function, which we will move onto now.
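The routine just described can be sketched as follows. Note this is a portable sketch for illustration: the article’s tool allocates with VirtualAlloc() and PAGE_EXECUTE_READWRITE so the region is executable without a later VirtualProtect() call, while this sketch substitutes malloc().

```c
#include <stdio.h>
#include <stdlib.h>

/* Get the file size with fseek()/ftell(), allocate a buffer, and fread()
   the shellcode into it; returns the base address of the shellcode. */
unsigned char *read_into_memory(const char *filename)
{
    FILE *f = fopen(filename, "rb");
    if (!f)
        return NULL;

    fseek(f, 0, SEEK_END);      /* seek to the end ...          */
    long size = ftell(f);       /* ... to learn the file's size */
    fseek(f, 0, SEEK_SET);      /* rewind before reading        */

    /* The real tool calls VirtualAlloc(..., PAGE_EXECUTE_READWRITE) here. */
    unsigned char *buffer = malloc(size);
    if (buffer && fread(buffer, 1, size, f) != (size_t)size) {
        free(buffer);
        buffer = NULL;
    }

    fclose(f);
    return buffer;
}
```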

Writing the execution() Function

The execution() function is the most important part of the entire tool, as without it we’d still only have static shellcode.

image of execution function

To begin with, we add the entry point offset to the base address of the shellcode, so if the allocated region of memory was at 0x20000, the actual entry point would be 0x20080 for our LOWKEY reflective DLL loader. We then pass this into a call to CreateThread(), along with the value 0x4 (CREATE_SUSPENDED), which indicates we want to create the thread in a suspended state. This prevents any execution of the shellcode until we choose to resume the thread, which we do only once a debugger is attached, detected by a simple while() loop that repeatedly calls IsDebuggerPresent(). From there, the thread is resumed with a call to ResumeThread(), and then we enter another loop that calls WaitForSingleObject() every second to check if the thread has exited. If so, we close the handle and return from the function!
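
The screenshot holds the actual code; the following is a Windows-only sketch consistent with the description above (the Sleep() polling interval and the printf message are assumptions of this sketch).

```c
#include <windows.h>
#include <stdio.h>

/* Create the shellcode thread suspended, spin until a debugger attaches,
   then resume the thread and poll until it exits. */
void execution(unsigned char *base, long entry_offset)
{
    /* Entry point = allocation base + offset, e.g. base + 0x80 for the
       LOWKEY reflective DLL loader. */
    LPTHREAD_START_ROUTINE entry =
        (LPTHREAD_START_ROUTINE)(base + entry_offset);

    /* CREATE_SUSPENDED (0x4) stops the shellcode from running early. */
    HANDLE thread = CreateThread(NULL, 0, entry, NULL, CREATE_SUSPENDED, NULL);
    if (thread == NULL)
        return;

    printf("Shellcode thread created; waiting for a debugger...\n");
    while (!IsDebuggerPresent())
        Sleep(100);             /* spin until a debugger is attached */

    ResumeThread(thread);

    /* Check once a second whether the shellcode thread has finished. */
    while (WaitForSingleObject(thread, 1000) == WAIT_TIMEOUT)
        ;

    CloseHandle(thread);
}
```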

Compile Time!

That pretty much wraps up the tool, so let’s go ahead and compile it to test it out!

animated image of compiled shellcode analysis tool

As you can see, everything runs smoothly and we’re able to see which API each hash represents, a lot faster than if we were writing a script to calculate it!

So, now we know how to read and execute shellcode in memory, recognizing it in malware should be fairly easy! In this case, the DEADEYE payload that executes the shellcode is packed and protected with VMProtect, making it very difficult to locate the function responsible for loading and executing the payload; however, the unpacked payloads can be found on Malpedia here for those that have an account.


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

How you react when your systems fail may define your business

Just around 9:45 a.m. Pacific Time on February 28, 2017, websites like Slack, Business Insider, Quora and other well-known destinations became inaccessible. For millions of people, the internet itself seemed broken.

It turned out that Amazon Web Services was having a massive outage involving S3 storage in its Northern Virginia datacenter, a problem that created a cascading impact and culminated in an outage that lasted four agonizing hours.

Amazon eventually figured it out, but you can only imagine how stressful it might have been for the technical teams who spent hours tracking down the cause of the outage so they could restore service. A few days later, the company issued a public post-mortem explaining what went wrong and which steps they had taken to make sure that particular problem didn’t happen again. Most companies try to anticipate these types of situations and take steps to keep them from ever happening. In fact, Netflix came up with the notion of chaos engineering, where systems are tested for weaknesses before they turn into outages.

Unfortunately, no tool can anticipate every outcome.

It’s highly likely that your company will encounter a problem of immense proportions like the one that Amazon faced in 2017. It’s what every startup founder and Fortune 500 CEO worries about — or at least they should. What will define you as an organization, and how your customers will perceive you moving forward, will be how you handle it and what you learn.

We spoke to a group of highly trained disaster experts to learn more about preventing these types of moments from having a profoundly negative impact on your business.

It’s always about your customers

Reliability and uptime are so essential to today’s digital businesses that enterprise companies developed a new role, the Site Reliability Engineer (SRE), to keep their IT assets up and running.

Tammy Butow, principal SRE at Gremlin, a startup that makes chaos engineering tools, says the primary role of the SRE is keeping customers happy. If the site is up and running, that’s generally the key to happiness. “SRE is generally more focused on the customer impact, especially in terms of availability, uptime and data loss,” she says.

Companies measure uptime according to the so-called “five nines,” or 99.999 percent availability, but software engineer Nora Jones, who most recently led Chaos Engineering and Human Factors at Slack, says there is often too much of an emphasis on this number. According to Jones, the focus should be on the customer and the impact that availability has on their perception of you as a company and your business’s bottom line.
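
As a back-of-the-envelope illustration (the helper function here is purely for this sketch), five nines translates into only a few minutes of allowed downtime per year:

```c
/* Convert an availability figure into allowed downtime per year.
   "Five nines" (99.999%) permits roughly 5.26 minutes of downtime
   over a 365.25-day year. */
double allowed_downtime_minutes(double availability)
{
    const double minutes_per_year = 365.25 * 24.0 * 60.0;   /* 525,960 */
    return minutes_per_year * (1.0 - availability);
}
```

At three nines (99.9%), the same arithmetic allows roughly 526 minutes, nearly nine hours, which is why the jump from each nine to the next is so costly to engineer.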

Someone needs to be calm and just keep asking the right questions.

“It’s money at the end of the day, but also over time, user sentiment can change [if your site is having issues],” she says. “How are they thinking about you, the way they talk about your product when they’re talking to their friends, when they’re talking to their family members. The nines don’t capture any of that.”

Robert Ross, founder and CEO at FireHydrant, an SRE as a Service platform, says it may be time to rethink the idea of the nines. “Maybe we need to change that term. Maybe we can popularize something like ‘happiness level objectives’ or ‘happiness level agreements.’ That way, the focus is on our products.”

When things go wrong

Companies go to great lengths to prevent disasters to avoid disappointing their customers and usually have contingencies for their contingencies, but sometimes, no matter how well they plan, crises can spin out of control. When that happens, SREs need to execute, which takes planning, too; knowing what to do when the going gets tough.

Ransomware Attacks: To Pay or Not To Pay? Let’s Discuss

This year has seen an escalation in the number of ransomware attacks striking organizations, with both private and public sector agencies like local government and education firmly in the firing line of threats such as Ryuk and Robinhood ransomware. Often understaffed and under-resourced, those responsible for delivering critical public services are at the sharp end of the dilemma: to pay or not to pay? It’s a quandary that has technical, ethical, legal, safety and, of course, financial dimensions. In this post, we explore the arguments both for and against, describing the implications and rationale from both angles across a number of different considerations.

Ransomware Attacks: To Pay or Not To Pay? Let's Discuss

Is Paying a Ransom to Stop a Ransomware Attack Illegal?

It may seem odd to some, but it isn’t illegal to pay a ransomware demand, even though the forced encryption of someone else’s data and demand for payment is itself a federal crime under at least the Computer Fraud and Abuse Act and the Electronic Communications Privacy Act, as well as many laws passed by State legislatures.

One might argue that the best way to solve the ransomware epidemic would be to make it illegal for organizations to pay. Criminals are naturally only interested in the payoff, and if that route to the payday was simply proscribed by law, it would very quickly lead both to companies exploring other options to deal with ransomware and, at least in theory, criminals moving toward some other endeavour with an easier payout.

In sum, one could argue that it is the ease with which criminals can be paid and the perceived anonymity of crypto payment that helps foster the continuance of the ransomware threat. 

The idea of outlawing the payment of ransomware demands might seem appealing at first, until you unpack the idea to think how it would work in practice. Publicly traded companies have a legal duty to shareholders; public service companies have legally binding commitments to serve their communities. A law that threatened to fine organizations, or perhaps imprison staff, would be hugely controversial in principle and likely difficult to enforce in practice, quite aside from the ethics of criminalizing the victim of a crime whose sole intent is to coerce that victim into making a payment.

Imagine a prosecutor attempting to convince a court that an employee – whose actions, say, restored a critical public service and saved the taxpayer millions of dollars after authorizing a five figure ransomware payment – should be jailed. How would that, in principle, be different from prosecuting a parent for securing the safety of a child by paying off kidnappers? It doesn’t look like an easy case to win, particularly when the employee (or organization) might cite legitimate extenuating circumstances such as preserving life or other legal obligations. 

Is It Ethical To Pay a Ransomware Demand?

If it’s not illegal to pay a ransomware demand, that still leaves open the separate question as to whether it’s ethical. There are a couple of different angles that can be taken on this one. According to some interpretations of ethics, something is a “good” or “right” decision if it leads to an overall benefit for the community.

On this pragmatic conception of ethics, one might argue that paying a ransomware demand that restores some vital service or unlocks some irreplaceable data outweighs the ‘harm’ of rewarding and encouraging those engaged in criminal behavior. 

On the other hand, it could be argued that what is right, or ethical, is distinct from what is a pragmatic or merely expedient solution. Indulging in a fantastical thought-experiment for a moment, would we consider it ethical if a ransomware author demanded the life of a person, instead of money, to release data that would save the lives of thousands of others? Many would have a strong intuition that it would always be unethical to murder one innocent to protect the lives of others. And that suggests that what is “right” and “wrong” might not revolve around a simple calculation of perceived benefits. 

The real problem with the pragmatic approach, however, is that there’s no agreement on how to objectively calculate the outcome of different ethical choices. More often than not, the weight we give to different ethical choices merely reflects our bias for the choice that we are naturally predisposed to.

If pragmatism can’t help inform us of whether it’s ethical or not to pay ransomware, we could look to a different view of ethics that suggests we should consider actions as “right” or “wrong” insofar as they reflect the values of the kind of society we want to live in. This view is sometimes expressed more simply as a version of the “do unto others as you would have them do unto you” maxim. A more accurate way to parse it might be to ask: Do we want to live in a society where we think it’s right (ethical) to pay those who engage in criminal behavior? Is this a maxim that we would want to teach our children? Put in those terms, many would perhaps say not.

Is It Prudent To Pay a Ransomware Demand?

Even if we might have a clear idea of the legal situation and a particular take on our own ethical stance, the question of whether to pay or not to pay raises other issues. We are not entirely done with the pragmatics of the ransomware dilemma. We may still feel inclined to make an unethical choice in light of other, seemingly more pressing concerns. 

There is a real, tangible pressure on making a choice that could save your organization or your city millions of dollars, or which might spare weeks of downtime of a critical service.

Even if they believe it would be unethical to do so, some people may judge that today’s hard reality takes precedence over loftier principles.

A case in point: recently, three Alabama hospitals paid a ransom in order to resume operations. The hospitals’ spokesperson said: 

“We worked with law enforcement and IT security experts to assess all options in executing the solution we felt was in the best interests of our patients and in alignment with our health system’s mission. This included purchasing a decryption key from the attackers to expedite system recovery and help ensure patient safety.”

This “hard reality” perspective is reflected in recent changes made to the FBI’s official guidance on ransomware threats. 

“…the FBI understands that when businesses are faced with an inability to function, executives will evaluate all options to protect their shareholders, employees, and customers.”

However, the possibility that the criminals will not hold up their side of the bargain must be factored into any decision about paying a ransomware demand. In some cases, decryption keys are not even available, and in others, the ransomware authors simply didn’t respond once they were paid. We saw this to some degree with WannaCry. In the flurry of the WannaCry outbreak, some victims paid and got keys, yet a large number either never heard from the authors, or the key pairs between victim and server were unmatched, making per-user decryption impossible.

A further point to consider when weighing up the prudence of acquiescing to the demand for payment is how this will affect your organization beyond the present attack itself. Will paying harm your reputation or earn you plaudits? Will other – or even the same – attackers now see you as a soft target and look to strike you again? Will your financial support for the criminals’ enterprise lead to further attacks against other companies, or services, that you yourself rely on? In other words, will giving in to the ransomware demand produce worse long-term effects than the immediate ones it seems – if the attackers deliver on their promise – to solve? 

What Happens If I Don’t Pay A Ransom for Ransomware Attacks?

If you choose not to pay the ransom, then of course you are in the very same position the ransomware attacker first put you in by encrypting all your files in order to “twist your arm” into paying. 

Depending on what kind of ransomware infection you have, there is some possibility that a decryptor already exists for that strain; less likely, but not unheard of, is the possibility that an expert analysis team may discover a way to decrypt your files. A lot of ransomware is poorly written and poorly implemented, and it may be that all is not as lost as it might at first seem.

The NoMoreRansom Project is the culmination of effort from global law enforcement agencies and private security industry partners. They host a large repository of stand-alone decryption tools which are constantly updated by industry partners.

This can be a very valuable resource when evaluating your course of action when facing a ransomware attack.

Also consider whether you have inventoried all possible backup and recovery options. Many look no further than the Maersk shipping story during the NotPetya attack to emphasize the importance of being able to rapidly restore one’s entire infrastructure from backup. The most eye-opening realization for Maersk (and indeed the entire industry) was that recovery depended on a happy accident: a single domain controller escaped infection thanks to a local power outage at its location. Without that fortunate, coincidental event, it would have taken exponentially longer to rebuild the entire infrastructure after 50,000 devices and thousands of apps were destroyed all at once.

Some hail this as a success story for backups, but shareholders and operators on board the thousands of ships worldwide are quick to remind us that this incident still cost the company well over a half billion dollars in the 6 months following the incident. While backup and restoration are indeed critical, they are by no means the primary basis for a strategy to address the threat of ransomware.

Finally, there is the worst case scenario, where you have no backups and no recovery software, and you will have to dig yourself out by re-building data, services and, perhaps your reputation, from the ground up. Transparency is undoubtedly your best bet in that kind of scenario. Admit to past mistakes, commit to learning those lessons, and stand tall on your ethical decision not to reward criminal behavior. 

What Happens If I Pay A Ransom for Ransomware Attacks?

There is perhaps more uncertainty in paying than there is in not paying. At least when you choose not to pay a ransomware demand, what happens next is in your hands. In handing over whatever sum the ransomware attacker demands, you remain in their clutches until or unless they provide a working decryption key. 

Before going down the road of paying, look for experienced advisors and consultants to help negotiate with the extortionists. Despite the often taunting ransomware notes, some ransomware groups will engage in negotiating terms if they think it will improve their chances of a payday.

Tactics like asking for ‘proof of life’ to decrypt a portion of the environment up front prior to payment, or to negotiate payment terms like 50% up front, and 50% only after the environment has been decrypted, can work with some groups, albeit not with others.

The vast majority of ransoms are still paid in bitcoin, which is not an anonymous or untraceable currency. If you do feel forced to pay, you can work with the FBI and share wallet and payment details. Global law enforcement is keen to track where the money moves.

And where do you go beyond that? Any sensible organization must realize the need for urgent investment in determining not only the vector of that attack but all other vulnerabilities, as well as rolling out a complete cybersecurity solution that can block and rollback ransomware attacks in future. While these are all costs that need to be borne regardless of whether you pay or do not pay, the temptation to take the quick, easy way out rather than working through the entire problem risks leaving holes that may be exploited in the future. Balance the need for speed of recovery against several risks:

  1. Unknown back doors the attackers leave on systems
  2. Partial data recovery (note some systems will not be recovered at all)
  3. Zero recovery after payment (it is rare, but in some cases the decryption key provided is 100% useless, or worse, one is never sent)

Finally, note that some organizations that get hit successively by the same actors might have actually only been hit once, but encryption payloads may have been triggered in subsequent waves. Experience pays off tremendously in all of these scenarios, and ‘knowing thy enemy’ can make all the difference.

Conclusion

Pay or don’t pay, make sure you notify the proper law enforcement agency:

“Regardless of whether you or your organization have decided to pay the ransom, the FBI urges you to report ransomware incidents to law enforcement. Doing so provides investigators with the critical information they need to track ransomware attackers, hold them accountable under U.S. law, and prevent future attacks”.

At SentinelOne, we concur with the FBI that paying criminals for criminal activity is no way to put an end to criminal behavior. We understand the technical, ethical, and financial impacts that ransomware has on a business. That’s why we offer a ransomware guarantee with a trusted security solution that can both block known and unknown ransomware activity and also roll back your protected devices to a healthy state without recourse to backups or lengthy reinstallation. If you’d like to find out more, contact us today or request a free demo.



Samsung ramps up its B2B partner and developer efforts

Chances are you mostly think of Samsung as a consumer-focused electronics company, but it actually has a very sizable B2B business as well, which serves more than 15,000 large enterprises and hundreds of thousands of SMB entrepreneurs via its partners. At its developer conference this week, it’s putting the spotlight squarely on this side of its business — with a related hardware launch as well. The focus of today’s news, however, is on Knox, Samsung’s mobile security platform, and Project AppStack, which will likely get a different name soon, and which provides B2B customers with a new mechanism to deliver SaaS tools and native apps to their employees’ devices, as well as new tools for developers that make these services more discoverable.

At least in the U.S., Samsung hasn’t really marketed its B2B business all that much. With this event, the company is clearly thinking to change that.

At its core, Samsung is, of course, a hardware company, and as Taher Behbehani, the head of its U.S. mobile B2B division, told me, Samsung’s tablet sales actually doubled in the last year, and most of these were for industrial deployments and business-specific solutions. To better serve this market, the company today announced that it is bringing the rugged Tab Active Pro to the U.S. market. Previously, it was only available in Europe.

The Active Pro, with its 10.1″ display, supports Samsung’s S Pen, as well as Dex for using it on the desktop. It’s got all of the dust and water-resistance you would expect from a rugged device, is rated to easily support drops from about four feet high and promises up to 15 hours of battery life. It also features LTE connectivity and has an NFC reader on the back to allow you to badge into a secure application or take contactless payments (which are quite popular in most of the world but are only very slowly becoming a thing in the U.S.), as well as a programmable button to allow business users and frontline workers to open any application they select (like a barcode scanner).

“The traditional rugged devices out there are relatively expensive, relatively heavy to carry around for a full shift,” Samsung’s Chris Briglin told me. “Samsung is growing that market by serving users that traditionally haven’t been able to afford rugged devices or have had to share them between up to four co-workers.”

Today’s event is less about hardware than software and partnerships, though. At the core of the announcements is the new Knox Partner Program, a new way for partners to create and sell applications on Samsung devices. “We work with about 100,000 developers,” said Behbehani. “Some of these developers are inside companies. Some are outside independent developers and ISVs. And what we hear from these developer communities is when they have a solution or an app, how do I get that to a customer? How do I distribute it more effectively?”

This new partner program is Samsung’s solution for that. It’s a three-tier partner program that’s an evolution of the existing Samsung Enterprise Alliance program. At the most basic level, partners get access to support and marketing assets. At all tiers, partners can also get Knox validation for their applications to highlight that they properly implement all of the Knox APIs.

The free Bronze tier includes access to Knox SDKs and APIs, as well as licensing keys. At the Silver level, partners will get support in their region, while Gold-level members get access to the Samsung Solutions Catalog, as well as the ability to be included in the internal catalog used by Samsung sales teams globally. “This is to enable Samsung teams to find the right solutions to meet customer needs, and promote these solutions to its customers,” the company writes in today’s announcement. Gold-level partners also get access to test devices.

The other new service that will enable developers to reach more enterprises and SMBs is Project AppStack.

“When a new customer buys a Samsung device, no matter if it’s an SMB or an enterprise, depending on the information they provide to us, they get to search for and they get to select a number of different applications specifically designed to help them in their own vertical and for the size of the business,” explained Behbehani. “And once the phone is activated, these apps are downloaded through the ISV or the SaaS player through the back-end delivery mechanism which we are developing.”

For large enterprises, Samsung also runs an algorithm that looks at the size of the business and the vertical it is in to recommend specific applications, too.

Samsung will run a series of hackathons over the course of the next few months to figure out exactly how developers and its customers want to use this service. “It’s a module. It’s a technology backend. It has different components to it,” said Behbehani. “We have a number of tools already in place; we have to fine-tune others, and we also, to be honest, want to make sure that we come up with a POC in the marketplace that accurately reflects the requirements and the creativity of what the demand is in the marketplace.”