It’s Way Too Easy to Get a .gov Domain Name

Many readers probably believe they can trust links and emails coming from U.S. federal government domain names, or else assume there are at least more stringent verification requirements involved in obtaining a .gov domain versus a commercial one ending in .com or .org. But a recent experience suggests this trust may be severely misplaced, and that it is relatively straightforward for anyone to obtain their very own .gov domain.

Earlier this month, KrebsOnSecurity received an email from a researcher who said he got a .gov domain simply by filling out and emailing an online form, grabbing some letterhead off the homepage of a small U.S. town that only has a “.us” domain name, and impersonating the town’s mayor in the application.

“I used a fake Google Voice number and fake Gmail address,” said the source, who asked to remain anonymous for this story but who said he did it mainly as a thought experiment. “The only thing that was real was the mayor’s name.”

The email from this source was sent from exeterri[.]gov, a domain registered on Nov. 14 that at the time displayed the same content as the official .us domain it was impersonating, which belongs to the town of Exeter, Rhode Island (the impostor domain is no longer resolving).

“I had to [fill out] ‘an official authorization form,’ which basically just lists your admin, tech guy, and billing guy,” the source continued. “Also, it needs to be printed on ‘official letterhead,’ which of course can be easily forged just by Googling a document from said municipality. Then you either mail or fax it in. After that, they send account creation links to all the contacts.”

Technically, what my source did was wire fraud (obtaining something of value via the Internet/telephone/fax through false pretenses); had he done it through the U.S. mail, he could be facing mail fraud charges if caught.

But a cybercriminal — particularly a state-sponsored actor operating outside the United States — likely would not hesitate to do so if he thought registering a .gov was worth it to make his malicious website, emails or fake news social media campaign more believable.

“I never said it was legal, just that it was easy,” the source said. “I assumed there would be at least ID verification. The deepest research I needed to do was Yellow Pages records.”

Earlier today, KrebsOnSecurity contacted officials in the real town of Exeter, RI to find out if anyone from the U.S. General Services Administration — the federal agency responsible for managing the .gov domain registration process — had sought to validate the request prior to granting a .gov in their name.

A person who called back from the town clerk’s office but who asked not to be named said someone from the GSA did phone the mayor’s office on Nov. 24 — which was four days after I reached out to the federal agency about the domain in question and approximately 10 days after the GSA had already granted the phony request.


Responding today via email, a GSA spokesperson said the agency doesn’t comment on open investigations.

“GSA is working with the appropriate authorities and has already implemented additional fraud prevention controls,” the agency wrote, without elaborating on what those additional controls might be.

KrebsOnSecurity did get a substantive response from the Cybersecurity and Infrastructure Security Agency (CISA), a division of the U.S. Department of Homeland Security that is leading efforts to protect the .gov domains of civilian government networks [NB: The head of CISA, Christopher C. Krebs, is no relation to this author].

CISA said this matter is so critical to maintaining the security and integrity of the .gov space that DHS is now making a play to assume control over the issuance of all .gov domains.

“The .gov top-level domain (TLD) is critical infrastructure for thousands of federal, state and local government organizations across the country,” reads a statement CISA sent to KrebsOnSecurity. “Its use by these institutions should instill trust. In order to increase the security of all US-based government organizations, CISA is seeking the authority to manage the .gov TLD and assume governance from the General Services Administration.”

The statement continues:

“This transfer would allow CISA to modernize the .gov registrar, enhance the security of individual .gov domains, ensure that only authorized users obtain a .gov domain, proactively validate existing .gov holders, and better secure everyone that relies on .gov. We are appreciative of Congress’ efforts to put forth the DOTGOV bill [link added] that would grant CISA this important authority moving forward. GSA has been an important partner in these efforts and our two agencies will continue to work hand-in-hand to identify and implement near-term security enhancements to the .gov.”

In an era when the nation’s top intelligence agencies continue to warn about ongoing efforts by Russia and other countries to interfere in our elections and democratic processes, it may be difficult to fathom that an attacker could so easily leverage such a simple method for impersonating state and local authorities.

Despite the ease with which apparently anyone can get their own .gov domain, there are plenty of major U.S. cities that currently do not have one, probably because they never realized they could obtain one with very little effort or expense. A review of the Top 10 most populous U.S. cities indicates only half of them have obtained .gov domains: Chicago, Dallas, Phoenix, San Antonio, and San Diego.

Yes, you read that right: the .gov domains for the other five cities in the Top 10 are all still available. That includes the .gov for San Jose, Calif., the economic, cultural and political center of Silicon Valley. No doubt a great number of smaller cities also haven’t figured out they’re eligible to secure their own .gov domains. That said, some of these cities do operate .gov domains under other names, but it’s not clear whether the GSA would allow the same city to have multiple .gov domains.

In addition to being able to convincingly spoof communications from and websites for cities and towns, there are almost certainly myriad other ways that possessing a phony .gov domain could be abused. For example, my source said he was able to register his domain in Facebook’s law enforcement subpoena system, although he says he did not attempt to abuse that access.


Now consider what a well-funded adversary could do on Election Day armed with a handful of .gov domains for some major cities in Democrat strongholds within key swing states: The attackers register their domains a few days in advance of the election, and then on Election Day send out emails from those official-looking .gov addresses informing residents that bombs had gone off at polling stations in Democrat-leaning districts. Such a hoax could well decide the fate of a close national election.

John Levine, a domain name expert, consultant and author of the book The Internet for Dummies, said the .gov domain space wasn’t always as open as it is today.

“Back in the day, everyone not in the federal government was supposed to register in the .us space,” Levine said. “At some point, someone decided .gov is going to be more democratic and let everyone in the states register. But as we see, there’s still no validation.”

Levine, who served three years as mayor of the village of Trumansburg, New York, said it would not be terribly difficult for the GSA to do a better job of validating .gov domain requests, but that some manual verification would probably be required.

“When I was a mayor, I was in frequent contact with the state, and states know who all their municipalities are and how to reach people in charge of them,” Levine said. “Also, every state has a Secretary of State that keeps track of what all the subdivisions are, and including them in the process could help as well.”

Levine said like the Internet itself, this entire debacle is yet another example of an important resource with potentially explosive geopolitical implications that was never designed with security or authentication in mind.

“It turns out that the GSA is pretty good at doing boring clerical stuff,” he said. “But as we keep discovering, what we once thought was a boring clerical thing now actually has real-world security implications.”

macOS Red Team: Spoofing Privileged Helpers (and Others) to Gain Root

As we saw in previous posts, macOS privilege escalation typically occurs by manipulating the user rather than exploiting zero days or unpatched vulnerabilities. Looking at it from the perspective of a red team engagement, one native tool that can be useful in this regard is AppleScript, which has the ability to quickly and easily produce fake authorization requests that can appear quite convincing to the user. Although this in itself is not a new technique, in this post I will explore some novel ways we can (ab)use the abilities of AppleScript to spoof privileged processes the user already trusts on the local system.

image macos red team

What is a Privileged Helper Tool?

Most applications on a Mac don’t require elevated privileges to do their work, and indeed, if an application is sourced from Apple’s App Store, it is – at least technically – not allowed to use them. Despite that, there are times when apps have quite legitimate reasons for needing privileges greater than those possessed by the currently logged-in user. Here’s a short list, from Apple’s own documentation:

  • manipulating file permissions, ownership
  • creating, reading, updating, or deleting files
  • opening privileged ports for TCP and UDP connections
  • opening raw sockets
  • managing processes
  • reading the contents of virtual memory
  • changing system settings
  • loading kernel extensions

Often, programs that need to perform any of these functions only need to do so occasionally, and in that context it makes sense to simply ask the user for authorization at the time. While this may improve security, it is also not the most convenient if the program in question is going to need to perform one or more of these actions more than once in any particular session. Users are not fond of repeated dialog alerts or of repeatedly having to type in a password just to get things done. 

Privilege separation is a technique that developers can use to solve this problem. By creating a separate “helper program” with limited functionality to carry out these tasks, the user need only be asked at install time for permission to install the helper tool. You’ve likely seen permission requests that look something like this:

image of install helper tool

The helper tool always runs with elevated privileges, but it is coded with limited functionality. At least in theory, the tool can only perform specific tasks and only at the behest of the parent program. These privileged helper tools live in a folder in the local domain Library folder:

/Library/PrivilegedHelperTools
Since they are only installed by 3rd party programs sourced from outside of the App Store, you may or may not have some installed on a given system. However, some very popular and widespread macOS software either does or has made use of such tools. Since orphaned Privileged Helper Tools are not removed by the OS itself, there’s a reasonable chance that you’ll find some of these in use if you’re engaging with an organisation with Mac power users. Here are a few from my own system that use Privileged Helper Tools:

  • BBEdit 
  • Carbon Copy Cloner
  • Pacifist

Abuses of this trust mechanism between parent process and privileged helper tool are possible (CVE-2019-13013), but that’s not the route we’re going to take today. Rather, we’re going to exploit the fact that there’s a high chance the user will be familiar with the parent apps of these privileged processes and inherently trust requests for authorization that appear to be coming from them. 

Why Use AppleScript for Spoofing?

Effective social engineering is all about context. Of course, we could just throw a fake user alert at any time, but to make it more effective, we want to:

  1. Make it look as authentic as possible – that means, using an alert with convincing text, an appropriate title and preferably a relevant icon
  2. Trigger it for a convincing reason – apps that have no business or history of asking for privileges are going to raise more suspicion than those that do. Hence, Privileged Helper tools are useful candidates, particularly if we provide enough authentic details to pass user scrutiny.
  3. Trigger it at an appropriate time, such as when the user is currently using the app that we’re attempting to spoof.

All of these tasks are easy to accomplish and combine using AppleScript. Here’s an example of the sort of thing we could create using a bit of AppleScripting. 

image of pacifist spoof dialog box

The actual dialog box is fairly crude. We haven’t got two input fields for both user name and password, for one thing (although as we’ll see in Part 2 that is possible), but even so this dialog box has a lot going for it. It contains a title, an icon, and the name of a process that, if the user were to look it up online, would lead them back to the Privileged Helper tool that they can verify exists in their own /Library/PrivilegedHelperTools folder. The user would have to dig quite a bit deeper in order to actually discover our fraud.

Of course, a suspicious user might just press “Cancel” instead of doing any digging at all. Fortunately, using AppleScript means we can simultaneously make our request look more convincing and discourage our target from cancelling, by wiring up the “Cancel” button to code that will either kill the parent app or simply cause the dialog to repeat indefinitely.

An infinite repeat might raise too many suspicions, however; killing the app and throwing a suitable alert “explaining” why this just happened could look far more legitimate. When the user relaunches the parent app and we trigger our authorization request again, the user is now far more likely to throw in the password and get on with their work.

For good measure, we can also reject the user’s first attempt to type the password and make them type it twice. Since what is typed isn’t shown back to the user, making typos on password entry is a common experience. Forcing double entry (and capturing the input both times) helps ensure that if the first attempt contained a typo or was incorrect, the second one likely will not be (we could also attempt to verify the user’s password directly before accepting it, but I shall leave such details aside here as we’ve already got quite a lot of work to get through!).

Creating the Spoofing Script

If you are unfamiliar with AppleScript or haven’t looked at how it has progressed since Yosemite 10.10, you might be surprised to learn that you can embed Objective-C code in scripts and call Cocoa and Foundation APIs directly. That means we have all the power of native APIs like NSFileManager, NSWorkspace, NSString, NSArray and many others. In the examples below, I am using a commercial AppleScript editor that is also available in a free version and is far more effective as an AppleScript development environment than the unhelpful built-in Script Editor app.

As with any other scripting or programming language, we need to “import” the frameworks that we want to use, which we do in AppleScript with the use keyword. Let’s put the following at the top of our script:

image applescript use frameworks
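The screenshot is not reproduced here, but a typical preamble for an AppleScript-ObjC script of this kind looks something like the following sketch (the exact frameworks imported in the original image are an assumption):

```applescript
use AppleScript version "2.4" -- Yosemite 10.10 or later
use scripting additions
use framework "Foundation"
use framework "AppKit" -- assumed; provides NSWorkspace and image-related APIs
```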

These statements act as shortcuts into the AppleScript-Objective-C bridge and make the named frameworks’ APIs accessible in a convenient manner, as we’ll see below.

Next, let’s write a couple of “handlers” (functions) to enumerate the PrivilegedHelperTools directory. In the image below, the left side shows the handler we will write; on the right side is an example of what it returns on my machine.

image get privileged helper tools
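Since the screenshot itself is not reproduced, here is a hedged sketch of what such a wrapper might look like; the handler name and the hard-coded path are assumptions based on the surrounding text:

```applescript
-- hypothetical wrapper: returns one path per line for everything in the
-- PrivilegedHelperTools folder, via the enumeration handler shown below
on getPrivilegedHelperTools()
	return my enumerateFolderContents:"/Library/PrivilegedHelperTools"
end getPrivilegedHelperTools
```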

As we can see, this handler is just a wrapper for another handler enumerateFolderContents:, which was borrowed from a community forum. Let’s take a look at the code for that, which is a bit more complex:

# adapted from a script by Christopher Stone
on enumerateFolderContents:aFolderPath
	set folderItemList to "" as text
	set nsPath to current application's NSString's stringWithString:aFolderPath
	--- Expand Tilde & Symlinks (if any exist) ---
	set nsPath to nsPath's stringByResolvingSymlinksInPath()
	--- Get the NSURL ---
	set folderNSURL to current application's |NSURL|'s fileURLWithPath:nsPath
	--- Enumerate the folder, skipping hidden files and package contents ---
	set enumOptions to (current application's NSDirectoryEnumerationSkipsPackageDescendants) + (current application's NSDirectoryEnumerationSkipsHiddenFiles)
	set theURLs to (current application's NSFileManager's defaultManager()'s enumeratorAtURL:folderNSURL includingPropertiesForKeys:{} options:enumOptions errorHandler:(missing value))'s allObjects()
	set AppleScript's text item delimiters to linefeed
	try
		set folderItemList to ((theURLs's valueForKey:"path") as list) as text
	end try
	return folderItemList
end enumerateFolderContents:

Now that we have our list of Privileged Helper Tools, we will want to grab the file names separately from the path as we will use these names in our message text to boost our credibility. In addition, we want to find the parent app from the Helper tool’s binary both so that we can show this to the user and because we will also need it to find the app’s icon. 

This is how we do the first task, again with the code on the left and example output shown on the right:

image of paths and names
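The code in the image is not reproduced, but splitting each full path into a bare file name can be sketched with NSString’s path APIs (the variable names here are assumptions):

```applescript
-- helperPaths is assumed to hold the list of full paths gathered earlier
set helperNames to {}
repeat with aPath in helperPaths
	set nsPath to current application's NSString's stringWithString:(aPath as text)
	-- lastPathComponent gives us just the tool's file name for our dialog text
	set end of helperNames to (nsPath's lastPathComponent()) as text
end repeat
```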

Now that we have our targets, all that remains is to find the parent apps. For that, we’ll borrow and adapt from Erik Berglund’s script here.

image of Authorized Clients

In this example, we can see the parent application’s bundle identifier is “com.barebones.bbedit”. There are a number of ways we can extract the identifier substring from the string, such as using command line utilities like awk (as Erik does) or using cut to slice fields, but I’ll stick to Cocoa APIs, both for the sake of speed and to avoid unnecessarily spawning more processes. Be aware that, whatever technique you use, the identifier does not always occur in the same position and may not begin with “com”. In all cases that I’m aware of, though, it does follow immediately after the keyword “identifier”, so I’m going to use that as my primary delimiter. Do ensure your code accounts for edge cases (I’ll omit error checking here for want of space).

image of get bundle Identifiers
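The image’s exact code is not reproduced here, but a sketch of the approach just described might look like this, assuming authString holds the requirement text containing the identifier keyword:

```applescript
-- split around the "identifier " keyword; the bundle ID then sits at the
-- start of the second substring (format assumptions noted in the text above)
set nsAuth to current application's NSString's stringWithString:authString
set tailPart to (nsAuth's componentsSeparatedByString:"identifier ")'s objectAtIndex:1
-- if the identifier is quoted, the ID is the text between the first pair of quotes
set bundleID to ((tailPart's componentsSeparatedByString:"\"")'s objectAtIndex:1) as text
```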

In the code above, I use Cocoa APIs to first split the string around either side of the delimiter. That should leave me with the actual bundle identifier at the beginning of the second substring. Note the as text coercion at the end. One hoop we have to jump through when mixing AppleScript and Objective C is converting back and forth between NSStrings and AppleScript text. 

With the parent application’s bundle identifier to hand, we can find the parent application’s path thanks to NSWorkspace. We’ll also add a loop to do the same for all items in the PrivilegedHelperTools folder.

image of get path to app
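A minimal sketch of the NSWorkspace lookup described here (the surrounding repeat loop shown in the image is omitted):

```applescript
-- bundleID is still an NSString at this point, so we can hand it straight
-- to NSWorkspace; the result is missing value if no matching app is installed
set theWorkspace to current application's NSWorkspace's sharedWorkspace()
set appPath to theWorkspace's absolutePathForAppBundleWithIdentifier:bundleID
if appPath is not missing value then set appPath to appPath as text -- back to AppleScript text
```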

Note how I’ve moved the text conversion away from the bundleID variable because I still need the NSString for the NSWorkspace call that follows it. The text conversion is delayed until I need the string again in an AppleScript call, which occurs at the end of the repeat loop.

At this point, we now have the names of each Privileged Helper tool and its path, as well as the bundle identifier and path to each helper tool’s parent app. With this information, we have nearly everything we need for our authorization request. The last remaining step is to grab the application icon from each parent app.

Grabbing the Parent Application’s Icon Image

Application icons typically live in the application bundle’s Resources folder and have a .icns extension. Since we have the application’s path from above, it should be a simple matter to grab the icon.

Before we go on, we’ll need to add a couple of “helper handlers” for what’s coming next and to keep our code tidy.

image of helper handlers

Also, at the top of our script, we define some constants. For now, we’ll leave these as plain text, but we can obscure them in various ways in our final version.

image of constant strings

Notice the defaultIconStr constant, which provides our default. If you want to see what this looks like, try calling it with the following command:

-- let's get the user name from Foundation framework:
set userName to current application's NSUserName()
display dialog hlprName & my makeChanges & return & my privString & userName & my allowThis default answer "" with title parentName default button "OK" with icon my software_update_icon as «class furl» with hidden answer

image with default software update icon

Hmm, not bad, but not great either. It would look so much better with the app’s actual icon. The icon name is defined in the App’s Info.plist. Let’s add another handler to grab it:

image of get icon for path
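The handler in the image is not reproduced verbatim; a hedged sketch of the idea, assuming the defaultIconStr constant defined earlier, might look like:

```applescript
-- hypothetical handler: resolve an app's icon path from CFBundleIconFile
-- in its Info.plist, falling back to our default icon if anything is missing
on iconPathForApp:appPath
	set plistPath to (appPath as text) & "/Contents/Info.plist"
	set infoDict to current application's NSDictionary's dictionaryWithContentsOfFile:plistPath
	if infoDict is missing value then return my defaultIconStr
	set iconName to infoDict's objectForKey:"CFBundleIconFile"
	if iconName is missing value then return my defaultIconStr
	set iconName to iconName as text
	if iconName does not end with ".icns" then set iconName to iconName & ".icns"
	return (appPath as text) & "/Contents/Resources/" & iconName
end iconPathForApp:
```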

Here’s our code for grabbing the icon tidied up:

image of get app icon

And here’s a few examples of what our script produces now:

image of bbedit spoof dialog
image of carbon copy cloner spoof dialog


Our authorization requests are now looking reasonably convincing. They contain an app name and a process name, both of which will check out as legitimate if the user decides to look into them. We also have a proper password field and we call the user out by name in the message text. And it’s worth reminding ourselves at this point that all this is achieved without triggering any of the traps set in Mojave and Catalina for cracking down on AppleScript, without knowing anything in advance about what is on the victim’s computer and, most importantly, without requiring any privileges at all.

Indeed, it’s privileges that we’re after, and in the next part we’ll continue by looking at the code to capture the password entered by the user as well as how to launch our spoofing script at an appropriate time when the user is actually using one of the apps we found on their system. We’ll see how we can adapt the same techniques to target other privileged apps that use kexts and LaunchDaemons rather than Privileged Helper tools. As a bonus, we’ll also look into advanced AppleScript techniques for building even better, more convincing dialog boxes with two text fields. If you enjoyed this post, please subscribe to the blog and we will let you know when the next part is live!

To avoid any doubt, all the applications mentioned in this post are perfectly legitimate, and to my knowledge none of them contains any vulnerabilities related to the content of this post. The techniques described above are entirely out of the control of individual developers.

Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and 1MB of RAM.

Until now, you needed at least 100MB of RAM and an ARM Cortex-A-class processor. Now, the requirement for Alexa Voice Service integration for AWS IoT Core has come down to 1MB of RAM and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All the device needs to be able to do is detect the wake word to start the Alexa functionality, which is a comparatively simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up the what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solutions for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they want to build a new solution for connection management and data retention policies, etc., but can instead rely on this new functionality to do that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.

Also new for AWS IoT Greengrass are fleet provisioning, which makes it easier for businesses to quickly set up lots of new devices automatically, as well as secure tunneling for AWS IoT Device Management, which makes it easier for developers to remotely access a device and troubleshoot it. In addition, AWS IoT Core now features configurable endpoints.

Hidden Cam Above Bluetooth Pump Skimmer

Tiny hidden spy cameras are a common sight at ATMs that have been tampered with by crooks who specialize in retrofitting the machines with card skimmers. But until this past week I’d never heard of hidden cameras being used at gas pumps in tandem with Bluetooth-based card skimming devices.

Apparently, I’m not alone.

“I believe this is the first time I’ve seen a camera on a gas pump with a Bluetooth card skimmer,” said Detective Matt Jogodka of the Las Vegas Police Department, referring to the compromised fuel pump pictured below.

The fake panel (horizontal) above the “This Sale” display obscures a tiny hidden camera angled toward the gas pump’s PIN pad.

It may be difficult to tell from the angle of the photograph above, but the horizontal bar across the top of the machine (just above the “This Sale $” indicator) contains a hidden pinhole camera angled so as to record debit card users entering their PIN.

Here’s a look at the fake panel removed from the compromised pump:

A front view of the hidden camera panel.

Jogodka said although this pump’s PIN pad is encrypted, the hidden camera sidesteps that security feature.

“The PIN pad is encrypted, so this is a NEW way to capture the PIN,” Jogodka wrote in a message to a mailing list about skimming devices found on Arizona fuel pumps. “The camera was set on Motion, [to] save memory space and battery life. Sad for the suspect, it was recovered 2 hours after it was installed.”

Whoever hacked this fuel pump was able to get inside the machine and install a Bluetooth-based circuit board that connects to the power and can transmit stolen card data wirelessly. This allows the thieves to drive by at any time and download the card data remotely from a mobile device or laptop.

The unauthorized Bluetooth circuit board can be seen at bottom left attached to the pump’s power and card reader.

This kind of fuel pump skimmer, while rare, serves as a reminder that it’s a good idea to choose credit over debit when buying fuel. For starters, there are different legal protections for fraudulent transactions on debit vs. credit cards.

With a credit card, your maximum loss on any transactions you report as fraud is $50; with a debit card, that $50 cap applies only if you report the fraud within two days of the unauthorized transaction. After that, the maximum consumer liability can increase to $500 within 60 days, and to an unlimited amount after 60 days.

In practice, your bank or debit card issuer may still waive additional liabilities, and many do. But even then, having your checking account emptied of cash while your bank sorts out the situation can still be a huge hassle and create secondary problems (bounced checks, for instance).

Interestingly, this advice against using debit cards at the pump often runs counter to the messaging pushed by fuel station owners themselves, many of whom offer lower prices for cash or debit card transactions. That’s because credit card transactions typically are more expensive to process.

Anyone curious how to tell the difference between filling stations that prioritize card security versus those that haven’t should check out How to Avoid Card Skimmers at the Pump.

The compromised pump with the hidden camera bar still attached. Newer, more secure pumps have a horizontal card reader and a raised metallic keypad.

The Good, the Bad and the Ugly in Cybersecurity – Week 47

Image of The Good, The Bad & The Ugly in CyberSecurity

The Good

Not a day goes by when we don’t hear of yet another school district or small-to-medium government entity being attacked with ransomware. This week it was reported that another Ryuk attack hit Louisiana’s Office of Technology, which houses the Department of Motor Vehicles, the Department of Children and Family Services and the Department of Transportation, to name a few. As a result of its ongoing battle against cyberattacks, the state had previously created its own cybersecurity task force, ESF (Emergency Support Function), which this time kicked in and acted aggressively to contain the threat and minimize the impact. There is a great lesson here. When faced with such an attack, the ability to quickly maneuver and take all necessary actions is paramount. The State of Louisiana is still dealing with some minor fallout from the campaign, but it saved itself countless hours and headaches by moving the way it did. Given that this is an official ESF now, the process is repeatable and predictable as well. Louisiana VS. Ransomware? Louisiana Wins!

image of louisiana ransomware attack tweet

Cyber is so hot at the moment that it now has its own electric truck! Tesla’s announcement this week of a supposedly bulletproof, sledgehammer-resistant electric pickup, the Tesla Cybertruck (CybrTrk), may have little to do with endpoint security on the face of it, but it’s a cool vehicle with a great name. And besides, in the absence of a post-apocalyptic cyberpunk world to use it in, it might be just the thing for safely transporting a small server farm from one location to another! We just want to know, does it come in black?

image of tesla cyber

The Bad

Supply chain attacks are always a bitter pill to swallow. Just when you think you’ve covered all your bases, someone else’s opsec lets the side down. This week, an unknown number of Monero cryptocurrency users found themselves out of pocket after hackers compromised the official Monero website. For a short time, the site was serving up compromised binaries of Monero’s CLI wallet. The attackers created malicious versions of the genuine Linux and Windows CLI binaries by adding a function to steal the wallet’s seed from infected users. After exfiltrating the compromised user’s wallet seed to their C2, the hackers would be able to regenerate the wallet’s private keys and access the funds. At least one user on Reddit claims to be out of pocket by $7,000. Although the malicious binaries appear to have only been served from 2:30 AM to 4:30 PM UTC on Monday, Nov. 18, it’s likely the attackers’ haul was somewhat larger than that. Concerned Monero users should verify the hashes of their binaries.

image of monero hack
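Verifying a downloaded binary against the project’s published hash takes only a few lines. This is a generic sketch (the function names are ours, and the expected hash must come from the project’s own signed hashes page):

```python
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Hex SHA-256 of a downloaded binary, read in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def matches_published_hash(path: str, expected_hex: str) -> bool:
    # Compare against the hash published (and ideally PGP-signed) by the project.
    return sha256_of(path) == expected_hex.lower()
```

Had affected users run a check like this before launching the wallet, the tampered binaries would have stood out immediately.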

The Ugly

Any time our personal data gets breached, there’s an expectation that the party responsible for securing our data adequately owns up to the issue and takes corrective action. Likewise, it’s expected that preventative measures are put in place to ensure the same issue does not recur. Recently, users of the video editing and sharing site VEED were impacted with just such a scenario. Researchers from vpnMentor uncovered an unsecured AWS bucket hosting tens of thousands of videos uploaded by users of the platform. The hosted videos were, reportedly, fully exposed to anyone who went looking, regardless of whether the upload had been marked as public or private. Worse, many of the videos were of a sensitive nature, containing private family moments, proprietary information, and “even home-made pornography”. Any way you slice it, it is fair to assume that none of the platform’s users would want that data exposed.

image of veed data breach

The real rub here is that the ethical hackers tried to notify VEED of the issue on multiple occasions, but their reports were ignored. This led them to reach out to Amazon directly, which ultimately disabled access to the unsecured data. To date, the researchers have yet to hear any response from VEED. Not pretty, and no way to build trust, guys!

Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Cloud Foundry’s Kubernetes bet with Project Eirini hits 1.0

Cloud Foundry, the open-source platform-as-a-service that, with the help of lots of commercial backers, is currently in use by the majority of Fortune 500 companies, launched well before containers, and especially the Kubernetes orchestrator, were a thing. Instead, the project built its own container service, but the rise of Kubernetes obviously created a lot of interest in using it for managing Cloud Foundry’s container implementation. To do so, the organization launched Project Eirini last year; today, it’s officially launching version 1.0, which means it’s ready for production usage.

Eirini/Kubernetes doesn’t replace the old architecture. Instead, for the foreseeable future, they will operate side-by-side, with the operators deciding on which one to use.

The team working on this project shipped a first technical preview earlier this year, and a number of commercial vendors have since started building their own products around it, shipping it as a beta.

“It’s one of the things where I think Cloud Foundry sometimes comes at things from a different angle,” IBM’s Julz Friedman told me. “Because it’s not about having a piece of technology that other people can build on in order to build a platform. We’re shipping the end thing that people use. So 1.0 for us — we have to have a thing that ticks all those boxes.”

He also noted that Diego, Cloud Foundry’s existing container management system, had been battle-tested over the years and had always been designed to be scalable to run massive multi-tenant clusters.

“If you look at people doing similar things with Kubernetes at the moment,” said Friedman, “they tend to run lots of Kubernetes clusters to scale to that kind of level. And Kubernetes, although it’s going to get there, right now, there are challenges around multi-tenancy, and super big multi-tenant scale.”

But even without being able to get to this massive scale, Friedman argues that you can already get a lot of value even out of a small Kubernetes cluster. Most companies don’t need to run enormous clusters, after all, and they still get the value of Cloud Foundry with the power of Kubernetes underneath it (all without having to write YAML files for their applications).

As Cloud Foundry CTO Chip Childers also noted, once the transition to Eirini gets to the point where the Cloud Foundry community can start applying less effort to its old container engine, those resources can go back to fulfilling the project’s overall mission, which is about providing the best possible developer experience for enterprise developers.

“We’re in this phase in the industry where Kubernetes is the new infrastructure and [Cloud Foundry] has a very battle-tested developer experience around it,” said Childers. “But there’s also really interesting ideas that are out there that are coming from our community, so one of the things that I’ve suggested to the community writ large is, let’s use this time as an opportunity to not just evolve what we have, but also make sure that we’re paying attention to new workflows, new models, and figure out what’s going to provide benefit to that enterprise developer that we’re so focused on — and bring those types of capabilities in.”

Those new capabilities may be around technologies like functions and serverless, for example, though Friedman at least is more focused on Eirini 1.1 for the time being, which will include closing the gaps with what’s currently available in Cloud Foundry’s old scheduler, like Docker image support and support for the Cloud Foundry v3 API.

Making sense of a multi-cloud, hybrid world at KubeCon

More than 12,000 attendees gathered this week in San Diego to discuss all things containers, Kubernetes and cloud-native at KubeCon.

Kubernetes, the container orchestration tool, turned five this year, and the technology appears to be reaching a maturity phase where it accelerates beyond early adopters to reach a more mainstream group of larger business users.

That’s not to say that there isn’t plenty of work to be done, or that most enterprise companies have completely bought in, but it’s clearly reached a point where containerization is on the table. If you think about it, the whole cloud-native ethos makes sense for the current state of computing and how large companies tend to operate.

If this week’s conference showed us anything, it’s an acknowledgment that it’s a multi-cloud, hybrid world. That means most companies are working with multiple public cloud vendors, while managing a hybrid environment that includes those vendors — as well as existing legacy tools that are probably still on-premises — and they want a single way to manage all of this.

The promise of Kubernetes and cloud-native technologies, in general, is that it gives these companies a way to thread this particular needle, or at least that’s the theory.

Kubernetes to the rescue

Photo: Ron Miller/TechCrunch

If you were to look at the Kubernetes hype cycle, we are probably right about at the peak where many think Kubernetes can solve every computing problem they might have. That’s probably asking too much, but cloud-native approaches have a lot of promise.

Craig McLuckie, VP of R&D for cloud-native apps at VMware, was one of the original developers of Kubernetes at Google in 2014. VMware thought enough of the importance of cloud-native technologies that it bought his former company, Heptio, for $550 million last year.

As we head into this phase of pushing Kubernetes and related tech into larger companies, McLuckie acknowledges it creates a set of new challenges. “We are at this crossing the chasm moment where you look at the way the world is — and you look at the opportunity of what the world might become — and a big part of what motivated me to join VMware is that it’s successfully proven its ability to help enterprise organizations navigate their way through these disruptive changes,” McLuckie told TechCrunch.

He says that Kubernetes does actually solve this fundamental management problem companies face in this multi-cloud, hybrid world. “At the end of the day, Kubernetes is an abstraction. It’s just a way of organizing your infrastructure and making it accessible to the people that need to consume it.

“And I think it’s a fundamentally better abstraction than we have access to today. It has some very nice properties. It is pretty consistent in every environment that you might want to operate, so it really makes your on-prem software feel like it’s operating in the public cloud,” he explained.

Simplifying a complex world

One of the reasons Kubernetes and cloud-native technologies are gaining in popularity is because the technology allows companies to think about hardware differently. There is a big difference between virtual machines and containers, says Joe Fernandes, VP of product for Red Hat cloud platform.

“Sometimes people conflate containers as another form of virtualization, but with virtualization, you’re virtualizing hardware, and the virtual machines that you’re creating are like an actual machine with its own operating system. With containers, you’re virtualizing the process,” he said.

He said this means a container isn’t coupled to the hardware. The only thing it needs to worry about is being able to run Linux, and Linux runs everywhere, which explains how containers make it easier to manage across different types of infrastructure. “It’s more efficient, more affordable, and ultimately, cloud-native allows folks to drive more automation,” he said.

Bringing it into the enterprise

Photo: Ron Miller/TechCrunch

It’s one thing to convince early adopters to change the way they work, but it’s quite another as this technology enters the mainstream. Gabe Monroy, partner program manager at Microsoft, says that to carry this technology to the next level, we have to change the way we talk about it.

110 Nursing Homes Cut Off from Health Records in Ransomware Attack

A ransomware outbreak has besieged a Wisconsin-based IT company that provides cloud data hosting, security and access management to more than 100 nursing homes across the United States. The ongoing attack is preventing these care centers from accessing crucial patient medical records, and the IT company’s owner says she fears this incident could soon lead not only to the closure of her business, but also to the untimely demise of some patients.

Milwaukee, Wisc.-based Virtual Care Provider Inc. (VCPI) provides IT consulting, Internet access, data storage and security services to some 110 nursing homes and acute-care facilities in 45 states. All told, VCPI is responsible for maintaining approximately 80,000 computers and servers that assist those facilities.

At around 1:30 a.m. CT on Nov. 17, unknown attackers launched a ransomware strain known as Ryuk inside VCPI’s networks, encrypting all data the company hosts for its clients and demanding a whopping $14 million ransom in exchange for a digital key needed to unlock access to the files. Ryuk has made a name for itself targeting businesses that supply services to other companies — particularly cloud-data firms — with the ransom demands set according to the victim’s perceived ability to pay.

In an interview with KrebsOnSecurity today, VCPI chief executive and owner Karen Christianson said the attack had affected virtually all of their core offerings, including Internet service and email, access to patient records, client billing and phone systems, and even VCPI’s own payroll operations that serve nearly 150 company employees.

The care facilities that VCPI serves access their records and other systems outsourced to VCPI by using a Citrix-based virtual private networking (VPN) platform, and Christianson said restoring customer access to this functionality is the company’s top priority right now.

“We have employees asking when we’re going to make payroll,” Christianson said. “But right now all we’re dealing with is getting electronic medical records back up and life-threatening situations handled first.”

Christianson said her firm cannot afford to pay the ransom amount being demanded — roughly $14 million worth of Bitcoin — and said some clients will soon be in danger of having to shut their doors if VCPI can’t recover from the attack.

“We’ve got some facilities where the nurses can’t get the drugs updated and the order put in so the drugs can arrive on time,” she said. “In another case, we have this one small assisted living place that is just a single unit that connects to billing. And if they don’t get their billing into Medicaid by December 5, they close their doors. Seniors that don’t have family to go to are then done. We have a lot of [clients] right now who are like, ‘Just give me my data,’ but we can’t.”

The ongoing incident at VCPI is just the latest in a string of ransomware attacks against healthcare organizations, which typically operate on razor-thin profit margins and have comparatively little money to invest in maintaining and securing their IT systems.

Earlier this week, a 1,300-bed hospital in France was hit by ransomware that knocked its computer systems offline, causing “very long delays in care” and forcing staff to resort to pen and paper.

On Nov. 20, Cape Girardeau, Mo.-based Saint Francis Healthcare System began notifying patients about a ransomware attack that left physicians unable to access medical records prior to Jan. 1.

Tragically, there is evidence to suggest that patient outcomes can suffer even after the dust settles from a ransomware infestation at a healthcare provider. New research indicates hospitals and other care facilities that have been hit by a data breach or ransomware attack can expect to see an increase in the death rate among certain patients in the following months or years because of cybersecurity remediation efforts.

Researchers at Vanderbilt University’s Owen Graduate School of Management took the Department of Health and Human Services (HHS) list of healthcare data breaches and used it to drill down on data about patient mortality rates at more than 3,000 Medicare-certified hospitals, about 10 percent of which had experienced a data breach.

Their findings suggest that after data breaches as many as 36 additional deaths per 10,000 heart attacks occurred annually at the hundreds of hospitals examined. The researchers concluded that for care centers that experienced a breach, it took an additional 2.7 minutes for suspected heart attack patients to receive an electrocardiogram.

Companies hit by the Ryuk ransomware all too often are compromised for months or even years before the intruders get around to mapping out the target’s internal networks and compromising key resources and data backup systems. Typically, the initial infection stems from a booby-trapped email attachment that is used to download additional malware — such as Trickbot and Emotet.

This graphic from US-CERT depicts how the Emotet malware is typically used to lay the groundwork for a full-fledged ransomware infestation.

In this case, there is evidence to suggest that VCPI was compromised by one (or both) of these malware strains on multiple occasions over the past year. Alex Holden, founder of Milwaukee-based cyber intelligence firm Hold Security, showed KrebsOnSecurity information obtained from monitoring dark web communications which suggested the initial intrusion may have begun as far back as September 2018.

Holden said the attack was preventable up until the very end when the ransomware was deployed, and that this attack once again shows that even after the initial Trickbot or Emotet infection, companies can still prevent a ransomware attack. That is, of course, assuming they’re in the habit of regularly looking for signs of an intrusion.

“While it is clear that the initial breach occurred 14 months ago, the escalation of the compromise didn’t start until around November 15th of this year,” Holden said. “When we looked at this in retrospect, during these three days the cybercriminals slowly compromised the entire network, disabling antivirus, running customized scripts, and deploying ransomware. They didn’t even succeed at first, but they kept trying.”

VCPI’s CEO said her organization plans to publicly document everything that has happened so far when (and if) this attack is brought under control, but for now the company is fully focused on rebuilding systems and restoring operations, and on keeping clients informed at every step of the way.

“We’re going to make it part of our strategy to share everything we’re going through,” Christianson said, adding that when the company initially tried several efforts to sidestep the intruders their phone systems came under concerted assault. “But we’re still under attack, and as soon as we can open, we’re going to document everything.”

Going Deep | A Guide to Reversing Smoke Loader Malware

Working in infosec and supporting clients and SOCs has always exposed me to a huge number of alerts and incidents. Some of these are more interesting than others. Recently we stumbled across a particular sample of Smoke Loader malware. Smoke Loader has been in-the-wild since circa 2013 and is often used to distribute additional malicious components or artefacts. While the sample is not new, it did prove to be a good opportunity to revisit this threat and walk through some of the internals. 

This alert was raised against a suspicious file, classified as a trojan, that was killed and quarantined. What raised my curiosity was the number of detections over only a few hours, always from the same workstation, and only from the same user.

image of raw data

Knowing that the threat was killed without doing any harm, I decided to dig into it a bit more. Just looking at the SentinelOne console, I was able to see:

  • The full path where the detection was made.
  • The associated risk level (High), which implies a positive detection.
  • The file’s unique hash, which can be checked against any public Indicators of Compromise (IoCs).

image of smoke loader analysis

In order to do a walk-through of malware reverse engineering steps, I downloaded the threat file and started the analysis.

First Layer: A Packed VB Win32 Program

With the downloaded file in my pocket, I quickly fired up an isolated analysis machine equipped with the Flare tools and started to investigate. At first glance, the sample appears to be a Visual Basic program leveraging Win32 APIs.

image of smoke loader PEiD

Let’s see what else we can get from its headers. Looks like pretty standard information, confirming a Visual Basic program due to its import table.

image of smoke loader headers

We can spot some rude and folkloristic words inside the binary strings, some of which make me think of a regional dialect of southern Italy.

image of smoke loader in PE Studio

With a bit of experience, we can safely assume that the file is packed with an external layer of Visual Basic that tries to stop, or at least slow down, static analysis. But what about its runtime behavior?

Watching the sample at runtime, we can observe process injection: this behaviour is common for VB packers and, luckily for us, is often trivial to defeat.

Defeating Visual Basic Packer

We won’t spend too much time on this: there are plenty of resources on how to unpack such packers, and I highly recommend the OALabs YouTube video tutorials. It is enough to set a breakpoint on the CreateProcessInternalW API in the debugger to stop execution at the right time.

image of setting breakpoint

At this point, somewhere in memory, there is a PE file ready to be run. We only need to find it. To do so, we can search the entire memory map for a clue: I decided to search for the “DOS” substring that can usually be found as part of the “This program cannot be run in DOS mode” string within the PE.

image of memory search

We got plenty of results for the string, whose hex is 44 4F 53.
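The same sweep can be reproduced offline over a dumped memory buffer. This is a hedged sketch (the function names are ours): find every b"DOS" hit, then walk back to the nearest preceding "MZ" signature to locate the embedded PE.

```python
def dos_hits(mem: bytes) -> list:
    """Offsets of the b'DOS' substring (hex 44 4F 53) in a memory dump."""
    hits, i = [], mem.find(b"DOS")
    while i != -1:
        hits.append(i)
        i = mem.find(b"DOS", i + 1)
    return hits

def preceding_mz(mem: bytes, hit: int) -> int:
    """Walk back from a hit to the closest preceding 'MZ' signature (-1 if none)."""
    return mem.rfind(b"MZ", 0, hit)
```

The debugger’s memory-map search does exactly this for us, but scripting it is handy when triaging full process dumps.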

image of load address

However, we are particularly interested in just a few locations. Usually the executable is loaded at address 0x00400000, so the result we had at 0x0040006C belongs to our executable itself.

Things become particularly interesting at 0x002F0094, which we can follow in the memory dump.

image of smoke loader memory dump

A memory region with a PE file inside, mapped as Executable, Read, Write. This is definitely our injected file. 

image of smoke loader PE file in memory

We can simply dump out this memory region to file, clean the junk before the MZ header and analyze its headers.

image of memory region

It seems like a legitimate executable, but something is going on: no imports at all. This is interesting!

Second layer: Static Analysis

When we loaded this new executable in IDA Pro, this was the only chunk of code that was disassembled.

image of smoke loader in ida pro 1

Here we recognize a XOR loop that will decode, starting from address 0x00401567, a 0xCD-byte blob of code with a XOR key of 0xCB. At the end of the loop, the same starting address 0x00401567 is pushed onto the stack, and the RET instruction branches the program flow there.

Decoding the Buffer

With a little bit of IDA scripting, we can XOR the encrypted buffer and move forward in the analysis.

image of decoding the buffer
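Outside IDA, the de-xor step itself is a one-liner. This sketch mirrors the loop’s parameters (key 0xCB); the IDA-specific byte read/patch calls are omitted:

```python
KEY = 0xCB  # XOR key observed in the stub

def xor_decode(buf: bytes, key: int = KEY) -> bytes:
    """Byte-wise XOR decode, mirroring the loop over the 0xCD-byte blob."""
    return bytes(b ^ key for b in buf)
```

Because XOR is its own inverse, running the function twice with the same key returns the original bytes, which is a quick sanity check on the key.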

After de-xoring the buffer we are met with a mixture of anti-disassembly and anti-debug techniques. It is now possible to map the purpose of the code blocks.

image of graph overview

Inside this code, we can observe plenty of tricks that try to fool the disassembly flow. A few examples: 

  • Abusing CALL and RET instruction to mess up function boundaries. The CALL instruction will push the return address onto the stack. The RET instruction will then pop off this address into the EIP register, which effectively makes these two instructions useless. However, these few opcodes make IDA think that the function ends there and that the next instruction is the end of another function.
  • Abusing branch instructions that do nothing: a CALL to a label immediately followed, at that label, by a POP. It’s the easiest way to get an address into the EIP register and so to control the program’s flow.
  • Abusing JMP instructions: simply scattering JMP instructions that jump back and forth, only to make the life of the analyst miserable.

Obfuscated with these techniques, the malware checks if it’s being debugged. The code that implements this check is nothing complicated: it reads the BeingDebugged flag of the PEB directly (the same flag the IsDebuggerPresent API consults) in order to spot a debugger.

mov eax, fs:[30h]   ; Process Environment Block
cmp byte [eax+2], 0 ; check BeingDebugged
jne being_debugged

As said, this code is heavily obfuscated with junk jumps and a lot of instructions whose only purpose is to increase the complexity of analysis. As an example, this little chunk of code is the final part of a dozen lines of code used to put the value 0x30 inside the EAX register in order to locate the PEB.

image of locating smoke loader PEB

At the end of this function, we spot another XOR stub decoding routine that will decode another blob of code and, after that, redirect the execution flow. Decoding will start at address 0x004014E8, with a buffer size of 0x7F and the same XOR key 0xCB.

image of XOR stub in smoke loader

As before, we can proceed in the static analysis, manually decoding this buffer with the same script.

But wait! Here we go again, another anti-debugging trick, NtGlobalFlag check:

mov eax, fs:[30h] ; Process Environment Block
mov al, [eax+68h] ; NtGlobalFlag
and al, 70h
cmp al, 70h
je  being_debugged


This chunk of code checks if the process is attached to a debugger and, if the check passes, another XOR decoding stub runs, starting from address 0x00401000, with a buffer size of 0x4E8 and the same XOR key 0xCB.

image of checking the debugger

After decoding the new buffer, we face another anti-disassembly trick; namely, JMP instructions that land one or a few bytes inside another instruction. This is among the most common tricks used by malware to fool static analysis: jumping to a location plus one or a few bytes results in an erroneous interpretation of the opcodes by the disassembler. It’s trivial to defeat but time-intensive.

IAT Resolution at Runtime

At address 0x00401000 there’s a simple call to another address 0x00401049, where it starts to become interesting as the malware appears to dynamically resolve its imports. As we noted before, the binary header analysis showed no imports at all. With this code, from the PEB location found earlier, the malware finds the base address of ntdll.dll.

image of PEB location

But how is this happening? In a 32-bit Windows process, the FS segment register points to a data structure called the Thread Environment Block (TEB). At offset 0x30 of the TEB, there’s a pointer to another data structure, namely the Process Environment Block (PEB) we saw earlier.

image of Thread Environment Block

We can inspect these data structures with the help of Microsoft public symbols and WinDBG.

image of smoke loader in WinDBG 1

With the same tools we can inspect the PEB too:

image of smoke loader in WinDBG 2

With the third instruction, we are following offset 0x0C to the _PEB_LDR_DATA structure. This structure is fairly important because it contains a pointer, InInitializationOrderModuleList, to the head of a doubly linked list holding the NTDLL loader data structures for the loaded modules.

image of PEB LDR Data

Each item in the list is a pointer to an LDR_DATA_TABLE_ENTRY structure. If we inspect this structure, we get the DllBase field.

image of DLLBase

Looking at this inside the debugger helps to shed some light:

image of smoke loader in IDA PRO 2

We got the base address of the ntdll.dll module into the EDX register, because ntdll.dll is the first module loaded into every process in a Windows environment. We have added comments and renamed select functions to clear up some of the observables.

image of resolve imports

After the malware gets the ntdll.dll base address, it loops twice, calling a function we named DecryptionFunction. This function receives a dword as input, which here is a hash. As we’re going to see, it walks the Export Address Table of the module, searching for the function whose name hash matches the passed value. With this first loop, the malware finds two functions: strstr and LdrGetDllHandle.

As an example, in this particular case the DecryptionFunction is walking the kernel32.dll module (just as we explained before for ntdll.dll), retrieving the address of VirtualAlloc, which is returned in the EAX register.

image of Virtual Alloc
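The sample’s exact hash algorithm isn’t shown here, but hash-based export resolution generally follows the shape below. This is a sketch under assumptions: the ROR-13 hash is a common choice used purely for illustration, and `exports` stands in for an already-parsed Export Address Table (name to address):

```python
def ror13(name: str) -> int:
    """Illustrative ROR-13 style hash over the ASCII export name (assumed algorithm)."""
    h = 0
    for c in name.encode("ascii"):
        h = (((h >> 13) | (h << 19)) & 0xFFFFFFFF) + c  # rotate right 13, add char
        h &= 0xFFFFFFFF
    return h

def resolve_by_hash(exports: dict, wanted: int):
    """Walk the (name -> address) export table for a matching name hash."""
    for name, addr in exports.items():
        if ror13(name) == wanted:
            return addr
    return None
```

Storing only 4-byte hashes instead of API names keeps the strings out of the binary, which is exactly why the import table we examined earlier looked empty.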


After fully disassembling the function(s) we have the following: 

image of Decryption Function

The hashes of the resolved and imported functions appear as follows: 

image of imported functions

After using the debugger to step into the loops of the DecryptionFunction, we were able to find what functions the malware uses next.

The rest of this part of the executable works much the same way across libraries and functions. I highly suggest looking at the disassembly line by line to understand the inner workings of the Windows Internal Subsystem and API calls.

Another interesting trick to be even more stealthy is the use of stack strings to build calls to LoadLibraryA. The secret here is that, by definition, the CALL instruction pushes the address of the next instruction onto the stack as its return address; here, that address points at an ASCII null-terminated string, which becomes the argument for the next LoadLibraryA call. Here you can see how it loads two libraries: advapi32 and user32.

image of Decryption Loop
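The trick can be modeled on crafted bytes: the CALL’s rel32 skips over an inline NUL-terminated string, while the pushed return address lands right on it. The layout below is a hypothetical reconstruction, not bytes from the sample:

```python
import struct

def inline_string_after_call(code: bytes, call_off: int) -> str:
    """Recover the NUL-terminated string whose address the CALL pushes as its return."""
    if code[call_off] != 0xE8:  # E8 = near CALL rel32
        raise ValueError("not a CALL rel32")
    start = call_off + 5        # pushed return address = next "instruction"
    end = code.index(0, start)  # find the terminating NUL
    return code[start:end].decode("ascii")

# A CALL whose rel32 (9) jumps over the inline string "advapi32\0"
blob = b"\xe8" + struct.pack("<i", 9) + b"advapi32\x00"
```

To the CPU the string bytes are never executed; to LoadLibraryA the “return address” is a perfectly valid pointer to its argument.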

Immediately after resolving the imports, the malware sleeps for 10 seconds and then retrieves a filename via GetModuleFileNameA.

image of Get Module File Name A

Interestingly, the image above also shows how the code checks whether its own name contains the string “sample” and, if so, terminates itself. You can see how the call to the strstr function is built and how the preceding push supplies the “sample” string to check for.

It’s a simple anti-analysis technique that might easily catch you out. Protip: do not call your sample “sample”. 🙂

image of smoke loader anti-analysis feature

Next, the malware performs another check via GetVolumeInformationA, which is thoroughly documented in MSDN. Let’s look inside this call to understand its purpose.

image of Get Volume Information A

From the above disassembly, we can see that it retrieves the volume serial number and checks it against two hard-coded serial numbers. It then opens a registry key with RegOpenKeyExA, pushing one of the arguments with the same CALL technique. It then obtains the value of the registry key, closes the handle, and converts the value to lowercase before proceeding.

image of convert to lower case

It looks clear when you see it in the debugger.

image of smoke loader in IDA PRO 3

With this string saved somewhere in memory, the code goes on to perform some other checks trying to find any sign of running inside a virtual environment. 

image of Check for virtual environment

As part of the anti-virtual machine checks, it initializes a loop of four iterations; on each pass it calls the strstr function to search the retrieved registry value for any sign of the strings “qemu”, “virtual”, “vmware”, or “xen”. As you can see in the previous debugger screenshot, I’m running the sample inside a VMware machine, so to continue I will need to patch the return value of the strstr calls to zero.
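The four-string sweep boils down to a case-insensitive substring check, which we can mirror in a few lines of Python (a sketch of the logic, not the malware’s code):

```python
# Markers the malware searches for in the disk's registry identifier string
VM_MARKERS = ("qemu", "virtual", "vmware", "xen")

def looks_like_vm(disk_identifier: str) -> bool:
    """Case-insensitive substring sweep, mirroring the four strstr calls."""
    s = disk_identifier.lower()
    return any(marker in s for marker in VM_MARKERS)
```

This also makes the earlier lowercase conversion step obvious: normalizing the registry value once lets each strstr comparison stay case-sensitive.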

Other checks are waiting:

image of Get Module Handle A

As you can see, the malware tries to determine whether it’s being debugged or executed inside a sandbox by attempting to get a handle to the sbiedll and dbghelp modules. If it detects either of these two libraries, it terminates the process and exits.

Finally, The Payload!

Having passed all sorts of anti-analysis and anti-debugging checks, we finally reach the payload! Now, the malware begins to reveal its secrets in memory.

image of payload in memory

We can clearly see it’s a PE file, but it’s scrambled somehow. This code will be decoded and managed in memory with a complex routine.

image of obfuscated payload

Digging into this code will require more time and effort than the analyst will normally want to expend. Instead, we can detonate the malware in our isolated environment and observe its execution. As we will see in the next post, this will reveal that a new instance of svchost.exe is loaded into memory, which suggests some sort of process injection. If you enjoyed this deep dive and would like to know when the next Going Deep post is available, just subscribe to the SentinelOne blog newsletter!


Sample Hash 07e81dfc0a01356fd96f5b75efe3d1b1bc86ade4


Smoke Loader {S0226}
Virtualization/Sandbox Evasion {T1497}

Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Celonis, a leader in big data process mining for enterprises, nabs $290M on a $2.5B valuation

More than $1 trillion is spent by enterprises annually on “digital transformation” — the investments that organizations make to update their IT systems to get more out of them and reduce costs — and today one of the bigger startups that’s built a platform to help get the ball rolling is announcing a huge round of funding.

Celonis, a leader in the area of process mining — which tracks data produced by a company’s software, as well as how the software works, in order to provide guidance on what a company could and should do to improve it — has raised $290 million in a Series C round of funding, giving the startup a post-money valuation of $2.5 billion.

Celonis was founded in 2011 in Munich — an industrial and economic center in Germany that you could say is a veritable Petri dish when it comes to large business in need of digital transformation — and has been cash-flow positive from the start. In fact, Celonis waited until it was nearly six years old to take its first outside funding (prior to this Series C it had picked up less than $80 million, see here and here).

The size and timing of this latest equity injection come down to seizing the moment, and tapping networks of people to do so. The company has already been growing at a triple-digit rate, with customers including Siemens, Cisco, L’Oréal, Deutsche Telekom and Vodafone.

“Our tech has become its own category with a lot of successful customers,” Bastian Nominacher, the co-CEO who co-founded the company with Alexander Rinke and Martin Klenk, said in an interview. “It’s a key driver for sustainable business operations, and we felt that we needed to have the right network of people to keep momentum in this market.”

To that end, this latest round’s participants line up with the company’s strategic goals. It is being led by Arena Holdings, an investment firm headed by Feroz Dewan, with participation from Ryan Smith, co-founder and CEO of Qualtrics, and Tooey Courtemanche, founder and CEO of Procore, alongside previous investors 83North and Accel.

Celonis said Smith will be a special advisor, working alongside another strategic board member, Hybris founder Carsten Thoma. Dewan, meanwhile, used to run hedge funds for Tiger Global (among other roles) and currently sits on the board of directors of Kraft Heinz.

“Celonis is the clear market leader in a category with open-ended potential. It has demonstrated an enviable record of growth and value creation for its customers and partners,” said Dewan in a statement. “Celonis helps companies capitalise on two inexorable trends that cut across geography and industry: the use of data to enable faster, better decision-making and the desire for all businesses to operate at their full potential.”

The core of Celonis’ offering is process mining across an organization’s IT systems. Nominacher said that this could cover anything from 5 to over 100 different pieces of software, with the main idea being that Celonis’ platform monitors a company’s whole solar system of apps, so to speak, in order to produce its insights, providing an “X-ray” view of the situation, in the words of Rinke.

Those insights, in turn, are used either by the company itself, or by consultants engaged by the organization, to make further suggestions, whether that’s to implement something like robotic process automation (RPA) to speed up a specific process, or use a different piece of software to crunch data better, or reconfigure how staff is deployed, and so on. This is not a one-off thing: the idea is continuous monitoring to pick up new patterns or problems.

In recent times, the company has started to expand the system into a wider set of use cases: by providing tools to monitor operations and customer experience, by applying its process mining engine to a wider range of company sizes beyond large enterprises, and by bringing more AI into its basic techniques.

Interestingly, Nominacher said that there are currently no plans to, say, extend into RPA or other “fixing” tools itself, pointing to the kind of laser-focused strategy that has likely helped it grow so well up to now.

“It’s important to focus on the relevant parts of what you provide,” he said. “We are one layer, one that can give the right guidance.”