Tricky Phish Angles for Persistence, Not Passwords

Late last year saw the re-emergence of a nasty phishing tactic that allows the attacker to gain full access to a user’s data stored in the cloud without actually stealing the account password. The phishing lure starts with a link that leads to the real login page for a cloud email and/or file storage service. Anyone who takes the bait will inadvertently forward a digital token to the attackers that gives them indefinite access to the victim’s email, files and contacts — even after the victim has changed their password.

Before delving into the details, it’s important to note two things. First, while the most recent versions of this stealthy phish targeted corporate users of Microsoft’s Office 365 service, the same approach could be leveraged to ensnare users of many other cloud providers. Second, this attack is not exactly new: In 2017, for instance, phishers used a similar technique to plunder accounts at Google’s Gmail service.

Still, this phishing tactic is worth highlighting because recent examples of it received relatively little press coverage. Also, the resulting compromise is quite persistent and sidesteps two-factor authentication, and it seems likely we will see this approach exploited more frequently in the future.

In early December, security experts at PhishLabs detailed a sophisticated phishing scheme targeting Office 365 users that used a malicious link which took people who clicked to an official Office 365 login page — login.microsoftonline.com. Anyone suspicious about the link would have seen nothing immediately amiss in their browser’s address bar, and could quite easily verify that the link indeed took them to Microsoft’s real login page:

This phishing link asks users to log in at Microsoft’s real Office 365 portal (login.microsoftonline.com).

Only by copying and pasting the link or by scrolling far to the right in the URL bar can we detect that something isn’t quite right:

Notice this section of the URL (obscured off-page and visible only by scrolling to the right quite a bit) attempts to grant a malicious app hosted at officesuited.com full access to read the victim’s email and files stored at Microsoft’s Office 365 service.

As we can see from the URL in the image directly above, the link tells Microsoft to forward the authorization token produced by a successful login to the domain officesuited[.]com. From there, the user is presented with a prompt saying that an app is requesting permission to read their email, contacts and OneNote notebooks, access their files, read and write their mailbox settings, sign them in, read their profile, and maintain access to that data.

[Image of the permissions prompt displayed by the malicious app. Image: PhishLabs]
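To make the mechanics concrete, here is a hypothetical reconstruction of how such a link is structured. The client_id value is a placeholder and the exact parameters are illustrative, but the shape follows Azure AD’s standard OAuth2 authorization endpoint. Note that everything after the hostname is merely a query string, which is why the browser’s address bar shows only Microsoft’s legitimate domain:

https://login.microsoftonline.com/common/oauth2/authorize?client_id=<attacker-app-id>&response_type=code&redirect_uri=https%3A%2F%2Fofficesuited.com%2Foffice&resource=https%3A%2F%2Fgraph.microsoft.com

The redirect_uri parameter is what tells Microsoft where to send the authorization code after a successful login.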

According to PhishLabs, the app that generates this request was created using information apparently stolen from a legitimate organization. The domain hosting the malicious app pictured above — officemtr[.]com — is different from the one I saw in late December, but it was hosted at the same Internet address as officesuited[.]com and likely signed using the same legitimate company’s credentials.

PhishLabs says the attackers are exploiting a feature of Outlook known as “add-ins,” which are applications built by third-party developers that can be installed either from a file, a URL, or the Office Store.

“By default, any user can apply add-ins to their outlook application,” wrote PhishLabs’ Michael Tyler. “Additionally, Microsoft allows Office 365 add-ins and apps to be installed via side loading without going through the Office Store, and thereby avoiding any review process.”

In an interview with KrebsOnSecurity, Tyler said he views this attack method more like malware than traditional phishing, which tries to trick someone into giving their password to the scammers.

“The difference here is instead of handing off credentials to someone, they are allowing an outside application to start interacting with their Office 365 environment directly,” he said.

Many readers at this point may be thinking that they would hesitate before approving such powerful permissions as those requested by this malicious application. But Tyler said this assumes the user somehow understands that there is a malicious third-party involved in the transaction.

“We can look at the reason phishing is still around, and it’s because people are making decisions they shouldn’t be making or shouldn’t be able to make,” he said. “Even employees who are trained on security are trained to make sure it’s a legitimate site before entering their credentials. Well, in this attack the site is legitimate, and at that point their guard is down. I look at this and think, would I be more likely to type my password into a box or more likely to click a button that says ‘okay’?”

The scary part about this attack is that once a user grants the malicious app permissions to read their files and emails, the attackers can maintain access to the account even after the user changes their password. What’s more, Tyler said the malicious app they tested was not visible as an add-in at the individual user level; only system administrators responsible for managing user accounts could see that the app had been approved.

Furthermore, even if an organization requires multi-factor authentication at sign-in, recall that this phish’s login process takes place on Microsoft’s own Web site. That means having two-factor enabled for an account would do nothing to prevent a malicious app that has already been approved by the user from accessing their emails or files.

Once given permission to access the user’s email and files, the app will retain that access until one of two things happens: Microsoft discovers and disables the malicious app, or an administrator on the victim user’s domain removes the program from the user’s account.

Expecting swift action from Microsoft may be optimistic: From my testing, Microsoft appears to have disabled the malicious app being served from officesuited[.]com sometime around Dec. 19 — roughly one week after it went live.

In a statement provided to KrebsOnSecurity, Microsoft Senior Director Jeff Jones said the company continues to monitor for potential new variations of this malicious activity and will take action to disable applications as they are identified.

“The technique described relies on a sophisticated phishing campaign that invites users to permit a malicious Azure Active Directory Application,” Jones said. “We’ve notified impacted customers and worked with them to help remediate their environments.”

Microsoft’s instructions for detecting and removing illicit consent grants in Office 365 are here. Microsoft says administrators can enable a setting that blocks users from installing third-party apps into Office 365, but it calls this a “drastic step” that “isn’t strongly recommended as it severely impairs your users’ ability to be productive with third-party applications.”

PhishLabs’ Tyler said he disagrees with Microsoft here, and encourages Office 365 administrators to block users from installing apps altogether — or at the very least restrict them to apps from the official Microsoft store.

Apart from that, he said, it’s important for Office 365 administrators to periodically look for suspicious apps installed on their Office 365 environment.

“If an organization were to fall prey to this, your traditional methods of eradicating things involve activating two-factor authentication, clearing the user’s sessions, and so on, but that won’t do anything here,” he said. “It’s important that response teams know about this tactic so they can look for problems. If you can’t or don’t want to do that, at least make sure you have security logging turned on so it’s generating an alert when people are introducing new software into your infrastructure.”

macOS Security Updates Part 1 | Discovering Changes to XProtect & Friends

Researching threats on macOS involves not only keeping up with what threat actors are doing but also with what Apple is doing in terms of updating built-in tools like XProtect, Gatekeeper, and MRT.app. Apple is renowned (or perhaps notorious…) for its tendency toward security opacity and obfuscation, and it has long had an aversion to publicly sharing its threat intel with the wider security community. Given both its vast resources and its privileged position in vetting developers and signed software, the company has a unique ability to see threats targeting its platform that other security researchers do not. For this reason, it is important that researchers tap into Apple’s security updates to see what new threats Apple may have found, and to check that their own solutions already have such threats covered in the event that Apple’s own tools fail to protect the end user.

In this new series of posts, I want to share some examples of how you can go about staying informed of what Apple updates and when. Although there are some commercial tools and scripts that either do or have tried to address these questions, it’s a simple enough matter to “roll your own” check and notification scripts, which we will look at in this post. Later, we’ll look at how to run diffs on the plists, SQL databases and binaries involved to see what changes have been made. These should serve well enough to help you build your own tools to check on and analyse new Apple security updates.


Discovering macOS Security Updates: the Hard Way

Unlike on some other platforms, Apple’s security updates are invisible to users. They occur transparently in the background without notification or user interaction, unless the user happens to have disabled these updates in System Preferences (not recommended, by the way), in which case they don’t happen at all.

[Image: the Advanced options in System Preferences’ Software Update pane]
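If you’d rather check the status of these background updates from the command line, the relevant preferences live in com.apple.SoftwareUpdate; treat the key name below as an assumption for your particular macOS version. A value of 1 means the system data file and security updates are enabled:

$ defaults read /Library/Preferences/com.apple.SoftwareUpdate.plist ConfigDataInstall
-> 1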

In that case, or if you just want to manually check what’s going on, you can run a software update check from the Terminal with the softwareupdate utility. Although softwareupdate isn’t specifically limited to security updates, there are some useful commands available that allow you to drill down on those in particular. The man page is informative, but there are also some hidden commands, such as background-critical, that it doesn’t mention. If you’re interested, try running strings on the binary and perusing what you find. For example:

$ strings -a $(which softwareupdate) | grep -A10 -B10 critical

You can also check the history of earlier security updates that the machine has received in a couple of ways. From the command line, you could explore system_profiler:

$ system_profiler SPInstallHistoryDataType | grep -i -A2 -B2 xprotect

[Image: system_profiler output showing XProtect entries in the install history]

Alternatively, if you prefer the GUI, hold down the Option key, choose Apple menu > System Information…, and scroll down to “Installations”. Incidentally, I regard it as a bug (one reported many years ago) that these two methods don’t show equivalent timestamps for the same installation, even after accounting for the two methods’ use of different timezones and locales.

[Image: the Installations list in System Information]

Automating Security Update Checks

While those mechanisms are all well and good for their intended purpose, they won’t help us keep informed of changes as they happen, unless we have little else to do than constantly check for updates, and they won’t offer us the fine-grained detail we need to see what has actually changed in the latest update, either. Fortunately, we can do better. We can build a script that will pull Apple’s software update catalog and parse it for the specific kinds of updates that we’re interested in, namely, XProtect, MRT and Gatekeeper.

At this point, some readers might be thinking: why not just monitor the local version of these files for changes? Indeed, several years ago I built and distributed a free tool that did just that, but there’s a problem with that approach. Apple does not roll out these updates evenly. Geographically, users in different regions can see these updates sometimes days apart, and there’s plenty of anecdotal evidence in various user forums of some machines not receiving updates when others do, even on the same network. For those reasons, simply waiting for the updates to arrive on your local machine isn’t a particularly reliable or punctual way to find out what new threats Apple’s security updates are addressing.

Returning to our script, then, once we’ve built and tested it, there are a couple of other tasks to take care of. The first is setting up a schedule to run the script at a chosen interval. The second is implementing some form of notification to alert us that an update has been posted. Beyond that, we will also want some “quick wins” for parsing the differences found in each update. That’ll be the subject of further posts. For now, let’s take a look at how to build a script to look for changes, run it on a schedule and get notifications of updates.

Finding Apple’s Software Update Catalog

To build our script, the first thing we need is the URL of Apple’s software update catalog. To find that, let’s see what the softwareupdate utility can tell us.

Running strings on the utility and grepping it for https turns out not to be very helpful.

[Image: strings output from the softwareupdate binary]

Since we know the utility has to reach out to a download server at some point, perhaps it’s called with code from a shared library. Let’s check to see what shared libraries the utility links against with otool and the -L switch.

[Image: otool -L output listing the frameworks linked by softwareupdate]

That private framework looks promising. Let’s try grepping that for network calls.

[Image: sucatalog URLs found in the private SoftwareUpdate framework]

Bingo! As you can see, there’s more than one, and the URL does change from time to time, so it’s worth knowing how to find this catalog address. The latest one available on this system (a 10.14 macOS install) is the final sucatalog entry shown above. However, if you run the same technique on a 10.15 machine, you’ll find you get:

https://swscan.apple.com/content/catalogs/others/index-10.15-10.14-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog

Fortunately, I don’t need to be running a 10.15 machine to download and parse that 10.15 catalog file, so we’ll use this URL so that we can see changes made for 10.15 as well. To see what the catalog looks like, change to a convenient working directory in Terminal and issue a curl command to download it; it’s simply an ASCII text file of around 6MB.

$ curl -sL https://swscan.apple.com/content/catalogs/others/index-10.15-10.14-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog -o sucatalog_latest

Yes, that’s a pretty large text file, containing some 90,000+ lines!

$ cat sucatalog_latest | wc -l
-> 90151

That’s because the catalog contains entries for a wide variety of software updates going as far back as 2008 and, as the name implies, covering versions as far back as Snow Leopard! Let’s find the date of the last posted update:

$ grep -A1 PostDate sucatalog_latest | tail -n 1
-> 2019-12-18T19:16:33Z

Creating a Script To Check for Apple Security Updates

Great. Now from here on in, better scripters than I will have their own ideas on how best to parse this, and no doubt some will prefer to use python, or perl or whatever their favorite scripting language is. When it comes to this kind of thing, I’m a “quick and dirty” scripter focused on getting the result rather than the niceties or aesthetics of efficient coding. In short, I don’t promise this is the best way of parsing the catalog, so feel free to adapt or improve the ideas here for your own use case.

To make a quick and easy tool that can notify us of changes to the catalog, we’ll do a diff on the latest version and a locally saved one. If you’ve already pulled down the catalog from above, make sure you rename it to sucatalog to be consistent with the script that follows and save it in ~/Documents/Security_Updates/. Inside that folder, create another folder called Changes.

The script will begin by changing the current working directory to the Security_Updates folder. It’ll then pull down the most recent copy of the catalog and diff it against the previous one that you saved earlier. If there are any changes, our script will first write these out to a temporary file called diff.txt.

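A minimal sketch of that first stage might look like the following; treat it as illustrative rather than a canonical implementation, though the file and folder names match those used in this post:

#!/bin/bash

# stage one: fetch the latest catalog and diff it against our saved copy
cd "$HOME/Documents/Security_Updates" || exit 1

catalog="https://swscan.apple.com/content/catalogs/others/index-10.15-10.14-10.13-10.12-10.11-10.10-10.9-mountainlion-lion-snowleopard-leopard.merged-1.sucatalog"

# pull down the most recent copy of the catalog
curl -sL "$catalog" -o sucatalog_new

# write any differences against the previously saved copy to a temp file
diff sucatalog sucatalog_new > diff.txt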

As a quick check, we ensure the diffs, if any, contain a new PostDate (we don’t care if there’s been some other change that wasn’t a new item posted to the catalog). Then, we’ll use some regexes to search for the kind of changes we’re interested in. If we find any, we’ll pull out the URLs and save them to a separate file.

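A sketch of that middle stage might look like this; the regex is illustrative (the catalog entries of interest contain names such as XProtectPlistConfigData, MRTConfigData and GatekeeperConfigData):

# stage two: only proceed if the diff contains a newly posted item
if grep -q "PostDate" diff.txt; then
    # pull out the URLs of the update types we care about
    grep -Eo 'https://[^"< ]*(XProtect|MRT|Gatekeeper)[^"< ]*' diff.txt > Changes/Latest_Changes.txt
fi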

After the conditional has been evaluated, we’ll clean up any of our temp files and replace the local saved copy of the sucatalog with the one we just downloaded, ready for the next time.

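And the final stage, again as a sketch:

# stage three: clean up and keep the new catalog as the baseline for next time
rm -f diff.txt
mv sucatalog_new sucatalog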

Setting Up a Schedule

Setting up a schedule to run the script can be as easy or as hard as you like. The ‘proper’ way is probably to run it as a user Launch Agent. These aren’t that hard to create, but it’s easy to make tiny errors in the XML that can be difficult to debug. You can make your life easier if there are already some existing agents in your ~/Library/LaunchAgents folder. If so, make a copy of any one of them, rename it, and then replace the label and appropriate keys with your own values. Delete any keys you don’t need.
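For reference, here’s a bare-bones sketch of such an agent; the label, username and interval are placeholder values, and writing the plist via a heredoc saves hand-editing the XML:

$ cat > ~/Library/LaunchAgents/com.example.sucatalogcheck.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.example.sucatalogcheck</string>
	<key>ProgramArguments</key>
	<array>
		<string>/Users/yourname/Documents/Security_Updates/suCatalogScript.sh</string>
	</array>
	<key>StartInterval</key>
	<integer>3600</integer>
</dict>
</plist>
EOF
$ launchctl load ~/Library/LaunchAgents/com.example.sucatalogcheck.plist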

A more efficient method, at least to my mind, is to run a cronjob. These are trivially easy to create with a one-liner. Assuming you call the script suCatalogScript.sh and insert your own username as appropriate, the following command will install a cronjob that calls the script 5 minutes past every hour while you’re logged in:

$ echo '05 */1 * * * /Users/<username>/Documents/Security_Updates/suCatalogScript.sh' | crontab -

Note: if you have existing cron jobs, don’t use this method, as it’ll overwrite them! Instead, use the -e switch to edit your crontab. See the crontab man page for more details.

Setting Up Notifications

The final thing we need to set up is a notification system. We have a couple of choices. We could simply add some osascript to the suCatalogScript.sh that will fire off a notification banner. This is certainly the easiest solution.

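A minimal sketch, to be dropped into the script at the point where Latest_Changes.txt gets written (the message text is an assumption; adjust to taste):

osascript -e 'display notification "Apple just updated XProtect, MRT or Gatekeeper" with title "Security Update Posted"'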

Alternatively, you could plump for a dialog alert, replacing the osascript above with that shown below, making sure to escape all those quote marks and the exclamation mark, if you use it.

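Something along these lines would do for the dialog version; wrapping the osascript argument in single quotes sidesteps most of the escaping issues mentioned above:

osascript -e 'display dialog "Apple just updated XProtect, MRT or Gatekeeper. See ~/Documents/Security_Updates/Changes for the URLs." buttons {"OK"} default button "OK" with title "Security Update Posted"'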

The advantage of a dialog alert over a display notification is you can use an arbitrarily long message, perhaps reminding yourself where to look to view the changes, and dialog alerts are not so easy to miss as notifications can be.

Personally, I prefer to keep the script and the notification mechanisms separate, which was the reason for saving the Latest_Changes.txt file to a separate Changes folder. With that folder dedicated to only holding this file, we can set up a Folder Action that will alert us whenever that folder is modified. Not so long ago, Folder Actions were a bit flakey, but they’ve become much more reliable in recent versions of macOS. Folder Actions are also pretty simple to set up and manage. You can learn about them and how to set them up here.

Conclusion

In this post, we’ve “rolled our own” notification system to tell us when Apple has made changes to Gatekeeper, MRT and XProtect. But simply being notified of the changes is only half (or less than half!) of the task. The major work involves finding out what has changed. Deobfuscating and running diffs on XProtect’s property list files, Gatekeeper’s SQL databases and MRT.app’s Mach-O binary involves a variety of different techniques, given the different file structures and formats involved, and that’s exactly what we’ll start getting into in the next post!

If you enjoyed this post on Apple’s macOS Security Updates and would like to be notified when the next one is up, please subscribe to the blog’s weekly newsletter or follow us on social media. I hope to see you next time!



Despite JEDI loss, AWS retains dominant market position

AWS took a hard blow last year when it lost the $10 billion, decade-long JEDI cloud contract to rival Microsoft. Yet even without that mega deal for building out the nation’s Joint Enterprise Defense Infrastructure, the company remains fully in control of the cloud infrastructure market — and it intends to fight that decision.

In fact, AWS still owns almost twice as much cloud infrastructure market share as Microsoft, its closest rival. While the two will battle over the next decade for big contracts like JEDI, for now, AWS doesn’t have much to worry about.

There was a lot more to AWS’s year than simply losing JEDI. Per usual, the company came out with a flurry of announcements and enhancements to its vast product set. Among the more interesting moves were a shift to the edge, the fact that the company is getting more serious about the chip business and a big dose of machine learning product announcements.

The fact is that AWS has such market momentum now, it’s a legitimate question to ask if anyone, even Microsoft, can catch up. The market is continuing to expand though, and the next battle is for that remaining market share. AWS CEO Andy Jassy spent more time than in the past trashing Microsoft at 2019’s re:Invent customer conference in December, imploring customers to move to the cloud faster and showing that his company is preparing for a battle with its rivals in the years ahead.

Numbers, please

AWS closed 2019 on a $36 billion run rate, with quarterly revenue growing from $7.43 billion in its first report of the year in January to $9 billion in its most recent earnings report in October. Believe it or not, according to CNBC, that number failed to meet analysts’ expectations of $9.1 billion, but still accounted for 13% of Amazon’s revenue in the quarter.

Regardless, AWS is a juggernaut, which is fairly amazing when you consider that it started as a side project for Amazon.com in 2006. In fact, if AWS were a stand-alone company, it would be a substantial business. While growth slowed a bit last year, that’s inevitable when you get as large as AWS, says John Dinsdale, VP, chief analyst and general manager at Synergy Research, a firm that follows all aspects of the cloud market.

“This is just math and the law of large numbers. On average over the last four quarters, it has incremented its revenues by well over $500 million per quarter. So it has grown its quarterly revenues by well over $2 billion in a twelve-month period,” he said.

Dinsdale added, “To put that into context, this growth in quarterly revenue is bigger than Google’s total revenues in cloud infrastructure services. In a very large market that is growing at over 35% per year, AWS market share is holding steady.”

Dinsdale says the cloud infrastructure market didn’t quite break $100 billion last year, but even without full Q4 results, his firm’s models project a total of around $95 billion, up 37% over 2018. AWS has more than a third of that. Microsoft is way back at around 17% with Google in third with around 8 or 9%.

While this is from Q1, it illustrates the relative positions of companies in the cloud market. Chart: Synergy Research

JEDI disappointment

It would be hard to do any year-end review of AWS without discussing JEDI. From the moment the Department of Defense announced its decade-long, $10 billion cloud RFP, it has been one big controversy after another.

The Hidden Cost of Ransomware: Wholesale Password Theft

Organizations in the throes of cleaning up after a ransomware outbreak typically will change passwords for all user accounts that have access to any email systems, servers and desktop workstations within their network. But all too often, ransomware victims fail to grasp that the crooks behind these attacks can and frequently do siphon every single password stored on each infected endpoint. The result of this oversight may offer attackers a way back into the affected organization, access to financial and healthcare accounts, or — worse yet — key tools for attacking the victim’s various business partners and clients.

In mid-November 2019, Wisconsin-based Virtual Care Provider Inc. (VCPI) was hit by the Ryuk ransomware strain. VCPI manages the IT systems for some 110 clients that serve approximately 2,400 nursing homes in 45 U.S. states. VCPI declined to pay the multi-million dollar ransom demanded by their extortionists, and the attack cut off many of those elder care facilities from their patient records, email and telephone service for days or weeks while VCPI rebuilt its network.

Just hours after that story was published, VCPI chief executive and owner Karen Christianson reached out to say she hoped I would write a follow-up piece about how they recovered from the incident. My reply was that I’d consider doing so if there was something in their experience that I thought others could learn from their handling of the incident.

I had no inkling at the time of how much I would learn in the days ahead.

EERIE EMAILS

On December 3, I contacted Christianson to schedule a follow-up interview for the next day. On the morning of Dec. 4 (less than two hours before my scheduled call with VCPI and more than two weeks after the start of their ransomware attack) I heard via email from someone claiming to be part of the criminal group that launched the Ryuk ransomware inside VCPI.

That email was unsettling because its timing suggested that whoever sent it somehow knew I was going to speak with VCPI later that day. This person said they wanted me to reiterate a message they’d just sent to the owner of VCPI stating that their offer of a greatly reduced price for a digital key needed to unlock servers and workstations seized by the malware would expire soon if the company continued to ignore them.

“Maybe you chat to them lets see if that works,” the email suggested.

The anonymous individual behind that communication declined to provide proof that they were part of the group that held VCPI’s network for ransom, and after an increasingly combative and personally threatening exchange of messages, they soon stopped responding to requests for more information.

“We were bitten with releasing evidence before hence we have stopped this even in our ransoms,” the anonymous person wrote. “If you want proof we have hacked T-Systems as well. You may confirm this with them. We havent [sic] seen any Media articles on this and as such you should be the first to report it, we are sure they are just keeping it under wraps.” Security news site Bleeping Computer reported on the T-Systems Ryuk ransomware attack on Dec. 3.

In our Dec. 4 interview, VCPI’s acting chief information security officer — Mark Schafer, CISO at Wisconsin-based SVA Consulting — confirmed that the company received a nearly identical message that same morning, and that the wording seemed “very similar” to the original extortion demand the company received.

However, Schafer assured me that VCPI had indeed rebuilt its email network following the intrusion and strictly used a third-party service to discuss remediation efforts and other sensitive topics.

‘LIKE A COMPANY BATTLING A COUNTRY’

Christianson said several factors stopped the painful Ryuk ransomware attack from morphing into a company-ending event. For starters, she said, an employee spotted suspicious activity on their network in the early morning hours of Saturday, Nov. 16. She said that employee then immediately alerted higher-ups within VCPI, who ordered a complete and immediate shutdown of the entire network.

“The bottom line is at 2 a.m. on a Saturday, it was still a human being who saw a bunch of lights and had enough presence of mind to say someone else might want to take a look at this,” she said. “The other guy he called said he didn’t like it either and called the [chief information officer] at 2:30 a.m., who picked up his cell phone and said shut it off from the Internet.”

Schafer said another mitigating factor was that VCPI had contracted with a third-party roughly six months prior to the attack to establish off-site data backups that were not directly connected to the company’s infrastructure.

“The authentication for that was entirely separate, so the lateral movement [of the intruders] didn’t allow them to touch that,” Schafer said.

Schafer said the move to third-party data backups coincided with a comprehensive internal review that identified multiple areas where VCPI could harden its security, but that the attack hit before the company could complete work on some of those action items.

“We did a risk assessment which was pretty much spot-on, we just needed more time to work on it before we got hit,” he said. “We were doing the right things, just not fast enough. If we’d had more time to prepare, it would have gone better. I feel like we were a company battling a country. It’s not a fair fight, and once you’re targeted it’s pretty tough to defend.”

WHOLESALE PASSWORD THEFT

Just after receiving a tip from a reader about the ongoing Ryuk infestation at VCPI, KrebsOnSecurity contacted Milwaukee-based Hold Security to see if its owner Alex Holden had any more information about the attack. Holden and his team had previously intercepted online traffic between and among multiple ransomware gangs and their victims, and I was curious to know if that held true in the VCPI attack as well.

Sure enough, Holden quickly sent over several logs of data suggesting the attackers had breached VCPI’s network on multiple occasions over the previous 14 months.

“While it is clear that the initial breach occurred 14 months ago, the escalation of the compromise didn’t start until around November 15th of this year,” Holden said at the time. “When we looked at this in retrospect, during these three days the cybercriminals slowly compromised the entire network, disabling antivirus, running customized scripts, and deploying ransomware. They didn’t even succeed at first, but they kept trying.”

Holden said it appears the intruders laid the groundwork for the VCPI attack using Emotet, a powerful malware tool typically disseminated via spam.

“Emotet continues to be among the most costly and destructive malware,” reads a July 2018 alert on the malware from the U.S. Department of Homeland Security. “Its worm-like features result in rapidly spreading network-wide infection, which are difficult to combat.”

According to Holden, after using Emotet to prime VCPI’s servers and endpoints for the ransomware attack, the intruders deployed a module of Emotet called Trickbot, which is a banking trojan often used to download other malware and harvest passwords from infected systems.

Indeed, Holden shared records of communications from VCPI’s tormentors suggesting they’d unleashed Trickbot to steal passwords from infected VCPI endpoints that the company used to log in at more than 300 Web sites and services, including:

-Identity and password management platforms Auth0 and LastPass
-Multiple personal and business banking portals
-Microsoft Office 365 accounts
-Direct deposit and Medicaid billing portals
-Cloud-based health insurance management portals
-Numerous online payment processing services
-Cloud-based payroll management services
-Prescription management services
-Commercial phone, Internet and power services
-Medical supply services
-State and local government competitive bidding portals
-Online content distribution networks
-Shipping and postage accounts
-Amazon, Facebook, LinkedIn, Microsoft, Twitter accounts

Toward the end of my follow-up interview with Schafer and VCPI’s Christianson, I shared Holden’s list of sites for which the attackers had apparently stolen internal company credentials. At that point, Christianson abruptly ended the interview and got off the line, saying she had personal matters to attend to. Schafer thanked me for sharing the list, noting that it looked like VCPI probably now had a “few more notifications to do.”

Moral of the story: Companies that experience a ransomware attack — or for that matter any type of equally invasive malware infestation — should assume that all credentials stored anywhere on the local network (including those saved inside Web browsers and password managers) are compromised and need to be changed.

Out of an abundance of caution, this process should be done from a pristine (preferably non-Windows-based) system that does not reside within the network compromised by the attackers. In addition, the new credentials should be secured wherever possible with the strongest available form of multi-factor authentication.

CrowdStrike’s CEO on how to IPO, direct listings and what’s ahead for SaaS startups

A few days before Christmas, TechCrunch caught up with CrowdStrike CEO George Kurtz to chat about his company’s public offering, direct listings and his expectations for the 2020 IPO market. We also spoke about CrowdStrike’s product niche — endpoint security — and a bit more on why he views his company as the Salesforce of security.

The conversation is timely. Of the 2019 IPO cohort, CrowdStrike’s IPO stands out as one of the year’s most successful debuts. As 2020’s IPO cycle is expected to be both busy and inclusive of some of the private market’s biggest names, Kurtz’s views are useful to understand. After all, his SaaS security company enjoyed a strong pricing cycle, a better-than-expected IPO fundraising haul and strong value appreciation after its debut.

Notably, CrowdStrike didn’t opt to pursue a direct listing; after chatting with the CEO of recent IPO Bill.com concerning why his SaaS company also decided on a traditional flotation, we wanted to hear from Kurtz as well. The security CEO called the current conversation around direct listings a “great debate,” before explaining his perspective.

Pulling from a longer conversation, what follows are Kurtz’s four tips for companies gearing up for a public offering, why his company chose a traditional public offering over a more exotic method, comments on endpoint security and where CrowdStrike fits inside its market, and, finally, quick notes on upcoming debuts.

The following interview has been condensed and edited for clarity.

How to go public successfully

Share often

What’s most important is the fact that when we IPO’d in June of 2019, we started the process three years earlier. And that is the number one thing that I can point to. When [CrowdStrike CFO Burt Podbere] and I went on the road show everybody knew us, all the buy side investors we had met with for three years, the sell side analysts knew us. The biggest thing that I would say is you can’t go on a road show and have someone not know your company, or not know you, or your CFO.

And we would share — as a private company, you share less — but we would share tidbits of information. And we built a level of consistency over time, where we would share something, and then they would see it come true. And we would share something else, and they would see it come true. And we did that over three years. So we built, I believe, trust with the street, in anticipation of, at some point in the future, an IPO.

Practice early

We spent a lot of time running the company as if it was public, even when we were private. We had our own earnings call as a private company. We would write it up and we would script it.

You’ve seen other companies out there, if they don’t get their house in order it’s very hard to go [public]. And we believe we had our house in order. We ran it that way [which] allowed us to think and operate like a public company, which you want to get out of the way before you become public. If there’s a takeaway here for folks that are thinking about [going public], run it and act like a public company before you’re public, including simulated earnings calls. And once you become public, you already have that muscle memory.

Raw numbers matter

The third piece is [that] you [have to] look at the numbers. We are in rarified air. At the time of IPO we were the fastest growing SaaS company to IPO ever at scale. So we had the numbers, we had the growth rate, but it really was a combination of preparation beforehand, operating like a public company, […] and then we had the numbers to back it up.

TAM is key, even at scale

One last point, we had the [total addressable market, or TAM] as well. We have the TAM as part of our story; security and where we play is a massive opportunity. So we had that market opportunity as well.


On this topic, Kurtz told TechCrunch two interesting things earlier in the conversation. First, that what many people consider “endpoint security” is too constrained: the category includes “traditional endpoints plus things like mobile, plus things like containers, IoT devices, serverless, ephemeral cloud instances, [and] on and on.” The more things that fit under the umbrella of endpoint security, CrowdStrike’s focus, the bigger its market is.

Kurtz also discussed how the cloud migration — something that builds TAM for his company’s business — is still in “the early innings,” going on to say that in time “you’re going to start to see more critical workloads migrate to the cloud.” That should generate even more TAM for CrowdStrike and its competitors, like Carbon Black and Tanium.


Why CrowdStrike opted for a traditional IPO instead of a direct listing

BigID bags another $50M round as data privacy laws proliferate

Almost exactly 4 months to the day after BigID announced a $50 million Series C, the company was back today with another $50 million round. The Series D came entirely from Tiger Global Management. The company has raised a total of $144 million.

What warrants $100 million in interest from investors in just four months is BigID’s mission to understand the data a company has and manage that in the context of increasing privacy regulation including GDPR in Europe and CCPA in California, which went into effect this month.

BigID CEO and co-founder Dimitri Sirota admits that his company formed at the right moment when it launched in 2016, but says he and his co-founders had an inkling that there would be a shift in how governments view data privacy.

“Fortunately for us, some of the requirements that we said were going to be critical, like being able to understand what data you collect on each individual across your entire data landscape, have come to [pass],” Sirota told TechCrunch. While he understands that there are lots of competing companies going after this market, he believes that being early helped his startup establish a brand identity earlier than most.

Meanwhile, the privacy regulation landscape continues to evolve. Even as California privacy legislation is taking effect, many other states and countries are looking at similar regulations. Canada is looking at overhauling its existing privacy regulations.

Sirota says that he wasn’t actually looking to raise either the C or the D, and in fact still has B money in the bank, but when big investors want to give you money on decent terms, you take it while the money is there. These investors clearly see the data privacy landscape expanding and want to get involved. He recognizes that economic conditions can change quickly, and it can’t hurt to have money in the bank for when that happens.

That said, Sirota says you don’t raise money to keep it in the bank. At some point, you put it to work. The company has big plans to expand beyond its privacy roots and into other areas of security in the coming year. Although he wouldn’t go into too much detail about that, he said to expect some announcements soon.

For a company that is only four years old, it has been amazingly proficient at raising money, with a $14 million Series A and a $30 million Series B in 2018, followed by the $50 million Series C last year and the $50 million round today. And, Sirota said, he didn’t even have to go looking for the latest funding. Investors came to him — no trips to Sand Hill Road, no pitch decks. Sirota wasn’t willing to discuss the company’s valuation, saying only that the investment was minimally dilutive.

BigID, which is based in New York City, already has some employees in Europe and Asia, but he expects additional international expansion in 2020. Overall the company has around 165 employees at the moment and he sees that going up to 200 by mid-year as they make a push into some new adjacencies.

The Good, the Bad and the Ugly in Cybersecurity – Week 1


The Good


The MAZE group actors found themselves lost in their own “maze” on Thursday, Jan 2, 2020, when their public shaming website was taken down along with the entire platform it was hosted on, operated out of Cork, Ireland at https://worldhostingfarm.com.


World Hosting Farm appears to have been a possible front for the bad guys (not a legitimate ISP, in other words) and is host to many known malicious address ranges.


This comes on the heels of news from earlier in the week that one of Maze’s victims from several weeks ago, US firm Southwire, was able to secure an emergency High Court injunction against two likely Polish nationals and the ISP front company that was just taken down. It is great to see a cross-border injunction granted, and even better to see the website and malicious ISP taken down. This development helps remove a portion of the leverage the MAZE extortionists have over past and future victims.

The Bad

The restaurant group Landry’s (owner of over 600 famous eateries like Morton’s, McCormick & Schmick’s, Mastro’s, and Joe’s Crab Shack) had an awkward PCI breach this week. Even though its credit card POS systems used end-to-end encryption to prevent malware from being able to scrape card data, it turns out the Landry’s Select Club reward card swipe systems did not, and employees were sometimes swiping customers’ credit cards with rewards readers instead. Malware found on reward card order entry systems was able to read the credit card data when employees swiped them!


Landry’s alerted customers on its website, letting them know the activity had been ongoing for about a year. No word yet on how many credit cards were compromised.

The Ugly

The City of New Orleans gave an update on its recovery status on the heels of the December 14th ransomware attack. The attack took out all 3,400 systems connected to the network; 2,658 of those have been restored over the last two and a half weeks, but eight of the city’s agencies have yet to be restored from backup.

Manual processes are still being used, and restoring Public Safety systems, including the NOPD’s EPP and body camera footage, remains the top priority. These systems should be restored by Monday, January 6th, after three weeks of downtime. Meanwhile, the city hopes to be able to allow property taxes to be paid no later than January 31, a month and a half after the attack began. To give an idea of the level of effort it’s taken so far, over 75 people have been working full time on the breach since December 14th, representing over 10,000 hours for just those additional 75 people. Additionally, up to 20% of the city’s computing assets will not be usable on the newly-restored network, though the reason for this was not given. Perhaps this is the percentage of assets for which data could not be recovered.

While the city has turned over relevant data to the FBI, it was not willing to name the type of ransomware used in the attack or speculate on who (or what nation state, perhaps) is behind it. Earlier reports based on files uploaded to VirusTotal pointed to Ryuk; strings in one such sample suggest it was taken from a computer owned by the city’s IT Security Manager.

So there you have it, this week’s UGLY: cities, counties, and schools keep getting debilitated by ransomware, and it’s taking weeks and months to recover.



The Best, The Worst and The Ugliest in Cybersecurity, 2019 edition


Earlier this year we started a Friday round-up of the most important cyber news occurring each week, focusing on stories that fell into one of three broadly defined categories: good news that boded well for the industry and users, bad news that reflected the impact of attacks, hacks and vulnerabilities on enterprise and end users, and, of course, the ugly: avoidable failures, controversial decisions and unfortunate circumstances. As we close out the year, let’s take a look at some of the highlights of our Good, the Bad and the Ugly digests. Here then, is the best, the worst, and the ugliest in cybersecurity of 2019.

The Best

There’s been some great news in cybersecurity this year, from the establishment of the offensive Cybersecurity Directorate (Wk30) to botnet (Wk40) and RAT (Wk49) platform takedowns. The identification of two men behind the notorious Dridex malware (Wk49) and the sentencing of criminals behind the Bayrob (Wk50) scam and those responsible for GozNym malware (Wk52) are all good news for cyber security going into 2020.


The good cause should be further enhanced by the uptick in investment in businesses providing cybersecurity solutions (Wk29, Wk32), a trend that looks set to continue this coming year.

But perhaps the best news for enterprise and end users alike has been the expansion of bug bounty programs by the major players, which should lead to improved security across many of the main digital products and services we all use and rely on. Microsoft announced the Edge Insider Bounty Program (Wk34) just a week before Google (Wk35) initiated a new program to reward the reporting of data abuse issues in Android apps. Apple promised to open up its iOS bug bounty program to all (Wk32) as well as offer a macOS bug bounty program for the first time; the company delivered on those promises just in the nick of time (Wk51). Also worth mentioning among the good news was the publication of an interactive Russian APT map (Wk39), which if nothing else is perhaps the prettiest of all the good news we saw in 2019!


The Worst

Inevitably, there was plenty of bad news this past year, too. BlueKeep (Wk31) and a universal Windows privilege escalation (Wk33) had enterprises on high alert. Thankfully, the predictions of another EternalBlue/WannaCry meltdown have not come to pass, thus far. But that isn’t to say the danger is behind us, with so many vulnerable devices still out there.

There’s no doubt that this year we’ve covered more stories regarding ransomware than any other topic (Wk34, Wk35, Wk41, Wk44, Wk45, Wk49, Wk52), and among those Ryuk (Wk36, Wk37, Wk40, Wk47, Wk52) has been the most rampant, with RobinHood (Wk30), Sodinokibi (Wk35) and Maze (Wk46, Wk52) ransomware variants also causing havoc across public and private organizations in the US and abroad. As ransomware as a service continues to spread and make the threat available to a wider, less-technical, criminal audience, it only looks like 2020 will see more of the same.

But perhaps the worst thing we saw this year was the attack on the Kudankulam Nuclear Power Plant in India (Wk44). The sheer recklessness of attacking nuclear power plants, whose safe operation is critical to the safety and health of the entire world, presents the gravest of threats to us all.

[Image: The Kudankulam Nuclear Power Plant (KKNPP). Credit: indiawaterportal.org/Wikimedia Commons]

The Ugliest

Things got ugly for Samsung this year when a bug in the Galaxy S10’s fingerprint reader (Wk42) allowed anyone to bypass it with a clear piece of plastic. NordVPN (Wk43) were also widely criticized, not only for weak security that allowed hackers free rein inside servers belonging to the virtual private network provider but also for failing to disclose the breach to clients for over 18 months. Fortinet, who were targeted by a Chinese APT (Wk36), were also on the receiving end of some harsh comments after it emerged (Wk48) that the company had hardcoded an encryption key into several of its products and then failed to fix the bug for a year and a half.

Perhaps the company that’s had the ugliest of cyber security times in 2019 was Cisco. Awarded a 10/10 for severity on the Common Vulnerability Scoring System (CVSS), CVE-2019-12643 (Wk35) allows malicious HTTP requests to bypass authentication and gives attackers the ability to log in and execute privileged actions. In more bad news for the company, but good news for corporate whistleblower James Glenn, the company were penalized to the tune of $8m (Wk31) by US courts under the False Claims Act after being found guilty of shipping products with known vulnerabilities for several years. Finally, Cisco look set to face challenging times doing business in mainland China after new cybersecurity laws were passed (Wk31) in Beijing putting restrictions on the purchase of US networking equipment, data storage and ‘critical information infrastructure’ hardware. One can only hope, for both Cisco and the millions who rely on their products, that 2020 will be a better year all round.


Coming Next…

That’s it for this year, but of course we’ll be back in 2020 with a new series of the Good, the Bad and the Ugly, starting on Friday, January 3rd. Follow us on LinkedIn, Twitter, YouTube or Facebook, or sign up for our weekly blog newsletter to receive these posts right in your inbox. Until then, from all of us at SentinelOne, have a happy and secure New Year 2020!



The story of why Marc Benioff gifted the AppStore.com domain to Steve Jobs

In Marc Benioff’s book, Trailblazer, he tells the tale of how Steve Jobs planted the seeds of the idea that would become the first enterprise app store, and how Benioff eventually paid Jobs back with the gift of the AppStore.com domain.

While Salesforce did truly help blaze a trail when it launched as an enterprise cloud service in 1999, it took that a step further in 2006 when it became the first SaaS company to distribute related services in an online store.

In an interview last year around Salesforce’s 20th anniversary, company CTO and co-founder Parker Harris told me that the idea for the app store came out of a meeting with Steve Jobs three years before AppExchange would launch. Benioff, Harris and fellow co-founder Dave Moellenhoff took a trip to Cupertino in 2003 to meet with Jobs. At that meeting, the legendary CEO gave the trio some sage advice: to really grow and develop as a company, Salesforce needed to develop a cloud software ecosystem. While that’s something that’s a given for enterprise SaaS companies today, it was new to Benioff and his team in 2003.

As Benioff tells it in his book, he asked Jobs to elaborate on what he meant by an application ecosystem. Jobs replied that how he implemented the idea was up to him. It took some time for that concept to bake, however. Benioff wrote that the notion of an app store eventually came to him as an epiphany at dinner one night a few years after that meeting. He says that he sketched out that original idea on a napkin while sitting in a restaurant:

One evening over dinner in San Francisco, I was struck by an irresistibly simple idea. What if any developer from anywhere in the world could create their own applications for the Salesforce platform? And what if we offered to store these apps in an online directory that allowed any Salesforce user to download them?

Whether it happened like that or not, the app store idea would eventually come to fruition, but it wasn’t originally called the AppExchange, as it is today. Instead, Benioff says he liked the name AppStore.com so much that he had his lawyers register the domain the next day.

When Benioff talked to customers prior to the launch, while they liked the concept, they didn’t like the name he had come up with for his online store. He eventually relented and launched in 2006 with the name AppExchange.com instead. Force.com would follow in 2007, giving programmers a full-fledged development platform to create applications, and then distribute them in AppExchange.

Meanwhile, AppStore.com sat dormant until 2008, when Benioff was invited back to Cupertino for a big announcement around the iPhone. As Benioff wrote, “At the climactic moment, [Jobs] said [five] words that nearly floored me: ‘I give you App Store.’”

Benioff wrote that he and his executives actually gasped when they heard the name. Somehow, even after all the time that had passed since that original meeting, both companies had settled upon the same name. Except Salesforce had rejected it, leaving an opening for Benioff to give a gift to his mentor. He says that he went backstage after the keynote and signed over the domain to Jobs.

In the end, the idea of the web domain wasn’t even all that important to Jobs in the context of an app store concept. After all, he put the App Store on every phone, and it wouldn’t require a website to download apps. Perhaps that’s why today the domain points to the iTunes store, and launches iTunes (or gives you the option of opening it).

Even the App Store page on Apple.com uses the sub-domain “app-store” today, but it’s still a good story of how a conversation between Jobs and Benioff would eventually have a profound impact on how enterprise software was delivered, and how Benioff was able to give something back to Jobs for that advice.

Moving storage in-house helped Dropbox thrive

Back in 2013, Dropbox was scaling fast.

The company had grown quickly by taking advantage of cloud infrastructure from Amazon Web Services (AWS), but when you grow rapidly, infrastructure costs can skyrocket, especially when approaching the scale Dropbox was at the time. The company decided to build its own storage system and network — a move that turned out to be a wise decision.

In a time when going from on-prem to cloud and closing private data centers was typical, Dropbox took a big chance by going the other way. The company still uses AWS for certain services, regional requirements and bursting workloads, but ultimately when it came to the company’s core storage business, it wanted to control its own destiny.

Storage is at the heart of Dropbox’s service, leaving it with scale issues like few other companies, even in an age of massive data storage. With 600 million users and 400,000 teams currently storing more than 3 exabytes of data (and growing), the company might have been squeezed by its growing cloud bills if it hadn’t taken this step.

Controlling infrastructure helped control costs, which improved the company’s key business metrics. A look at historical performance data tells a story about the impact that taking control of storage costs had on Dropbox.

The numbers

In March of 2016, Dropbox announced that it was “storing and serving” more than 90% of user data on its own infrastructure for the first time, completing a 3-year journey to get to this point. To understand what impact the decision had on the company’s financial performance, you have to examine the numbers from 2016 forward.

There is good financial data from Dropbox going back to the first quarter of 2016 thanks to its IPO filing, but not before. So, the view into the impact of bringing storage in-house begins after the project was mostly complete. The company’s 2016 and 2017 financial results make clear that Dropbox’s revenue quality increased dramatically. Even better for the company, its revenue quality improved as its aggregate revenue grew.