The Good, the Bad and the Ugly in Cybersecurity – Week 3

Image of The Good, The Bad & The Ugly in CyberSecurity

The Good

The cybersecurity skills shortage is something we’re all concerned about, so it’s good news this week to hear of the launch of the U.S. National High School Cybersecurity Talent Discovery Program. The online program involves students playing an aptitude game, CyberStart, which assesses a player’s ability in various skills relevant to the cybersecurity industry. The program is open to both boys and girls in every state, although girls participating in GirlsGoCyberStart must first excel there before being admitted to the boys’ program. Despite the gender distinction, previous versions of the program were widely praised by parents for encouraging girls to think of cybersecurity as a potential career path.

image of girls go cyber start

The Bad

There’s no other contender for this week’s bad news: yep, it’s CVE-2020-0601, also going by nicknames like CurveBall and ChainOfFools. The bug in Windows CryptoAPI (Crypt32.dll) allows an attacker to use a fake security certificate to sign malware as trusted code, communicate over HTTPS, and pass off malicious files and emails as benign. According to the NSA, which reported the bug to Microsoft, the flaw affects Windows 10, Windows Server 2016, Windows Server 2019 and applications that rely on the Windows OS to provide trust services. Complicating matters further, reports on Reddit suggest that the patch released by Microsoft on Tuesday is failing to install for some users, leaving those users without a path to mitigation. SentinelOne yesterday reassured its customers that any attempt to exploit the bug will be detected by its behavioral engine.
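At the heart of the flaw is CryptoAPI’s willingness to trust ECC certificates that spell out their curve as explicit parameters rather than referencing a named curve. As a rough, hedged illustration (assuming OpenSSL is installed; the file path is just an example), you can see which form a key encodes:

```shell
# Sketch only, not a detection tool. Generates a P-256 key and prints how the
# curve is encoded: a named-curve OID here, whereas CVE-2020-0601 abuses
# certificates carrying crafted *explicit* curve parameters instead.
# The /tmp path is an arbitrary example.
openssl ecparam -name prime256v1 -genkey -noout -out /tmp/demo-ec.key
openssl ec -in /tmp/demo-ec.key -noout -text 2>/dev/null | grep -E 'OID|CURVE'
```

On a named-curve key this prints the curve’s OID (e.g. `prime256v1`); a certificate abusing explicit parameters would instead show raw field and coefficient values in the `-text` dump.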

image of error 0x80f0982

The Ugly

At the end of last year, we noted that Citrix disclosed CVE-2019-19781, an arbitrary code execution bug in its NetScaler Application Delivery Controller (ADC) networking product, and provided mitigation steps for the vulnerability. Unfortunately, the mitigation steps may have provided the clue needed by attackers to develop an exploit. Even less pretty, it seems that researchers have been squabbling among themselves about responsible disclosure, and dumps first by Project Zero India and then TrustedSec on GitHub have pretty much armed attackers everywhere with the tools needed to exploit an estimated 130,000 vulnerable devices.

image of citrixmash exploit

Social media has unceremoniously hashtagged this flaw as #shitrix, and reports of active exploits in the wild are already coming in. Citrix has still not provided an actual patch for the bug, and the mitigation steps previously reported are said to be unreliable. Urgh!

Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Harvestr gathers user feedback in one place

Meet Harvestr, a software-as-a-service startup that wants to help product managers centralize customer feedback from various places. Product managers can then prioritize outstanding issues and feature requests. Finally, the platform helps you get back to your customers once changes have been implemented.

The company just raised a $650,000 funding round led by Bpifrance, with various business angels also participating, such as 360Learning co-founders Nicolas Hernandez and Guillaume Alary, as well as Station F director Roxanne Varza through the Atomico Angel Programme.

Harvestr integrates directly with Zendesk, Intercom, Salesforce, Freshdesk, Slack and Zapier. For instance, if a user opens a ticket on Zendesk and another user interacts with your support team through an Intercom chat widget, everything ends up in Harvestr.

Once you have everything in the system, Harvestr helps you prioritize tasks that seem more urgent or that are going to have a bigger impact.

When you start working on a feature or when you’re about to ship it, you can contact your users who originally reached out to talk to you about it.

Eventually, Harvestr should help you build a strong community of power users around your product. And there are many advantages in pursuing this strategy.

First, you reward your users by keeping them in the loop. It should lead to higher customer satisfaction and lower churn. Your most engaged customers could also become your best ambassadors to spread the word around.

Harvestr costs $49 per month for five seats and $99 per month for 20 seats. People working for 360Learning, HomeExchange, Dailymotion and other companies are currently using it.

Evil Markets | Selling Access To Breached MSPs To Low-Level Criminals

Over the last few years we have seen a drastic uptick in the compromise of MSPs (managed service providers) and similar providers. In today’s post, we would like to update our readers on just how visible and prevalent the market is for buying and selling access to corporate environments and managed service providers, as well as provide guidelines on how to safeguard against this risk. The threat presented by breached MSPs is not exclusive to commercial or private-sector entities, either. Government infrastructure is just as susceptible to this attack methodology.

image of Evil Markets

We have seen many examples of this play out recently, especially over the last 3 to 4 years. The attackers behind Snatch ransomware, Sodinokibi/REvil, Maze ransomware, and Ryuk all focused their attacks, at various points, on managed service providers.

Why is Access To MSPs Attractive to Hackers?

There are numerous reasons why a criminal would want to purchase access to a target environment rather than establish it themselves. Vendors selling established and functional access to target environments aim to make attacks simpler for their clients. Those who buy these services aim to reduce their overhead and risk, and to leverage their malware and tools more efficiently, without having to worry about how to gain initial entry into an environment.


MSPs are extremely attractive given the potential to reach multiple targets or environments by compromising a single MSP. In other words, the MSP can serve as a stepping stone to a much broader set of targets.

In some situations, targeting MSPs can actually help with persistence and evasion of certain controls like firewalls and intrusion detection systems (IDS). Communication channels between MSPs and their clients often run across trusted and private networks, with the boundaries between them blurring into something of a grey area. The traffic may remain ‘internal’ to the MSP’s infrastructure, and therefore never pass through the traditional controls found at the perimeter (internet-facing IDS, email content filters, and the like).

As a result, MSPs are increasingly targeted at all levels of sophistication, from script kiddies to state-sponsored APT groups.

image of access to MSP

Other reasons attackers buy access to breached MSPs can include their own low skill level or profiting from resale in another market or at a later date. In short, it’s a seller’s market as there are always cybercriminals that desire turnkey access to potential target environments.

How Criminals Trade Access To Compromised MSPs

Within the scope of low-to-mid-level cybercrime, the market for buying and selling access to corporate and MSP environments can be observed in the open. Whether it is a specialized Telegram channel, an obscure forum on the dark web, or a hybrid market/community on the surface web, you can always find activity of this nature if you know where to look.

image of access to tax company

Criminal vendors offer a variety of services, and there will pretty much always be a guaranteed buyer if the price is right. Services range from singular, privileged accounts all the way up to full and persistent root shells and remote consoles. While the buying and selling of access to MSPs is highly problematic, the market around individual environments and accounts is just as active. There are many well-established forums, channels and other communities where these services are bought and sold.

When looking on forums, you will see there is a large variety of offers, prices, and service levels, with sellers advertising specific levels of access. Typically, these public posts only include a few key details; interested buyers are expected to then establish contact to proceed or get more details.

In many cases, you can see prices ranging from as little as ~$1000 to ~$4500 (.5 BTC).

In terms of targets, we’ve seen examples selling access to a Spanish IT Company/MSP, an electronics manufacturer, and a US-based utility company. Types of access vary and include:

  •   Sets of executive-level credentials
  •   Administration of various content management portals (law firms, schools, hospitals)
  •   Mail server access (direct)
  •   Full “root” access to *everything*

If you put just those few listings in the context of recent attacks, the potential for damage starts to become quite clear. Buyers of these types of services have the immediate ability to carry out campaigns similar to the recent East Texas school district ransomware attacks, the attacks on the state of Louisiana, the recent wave of ransomware attacks on Spanish IT and media companies, and others.

The next three examples show a little more variety.  

image of financial data for sale

image of network access for sale

image of access to POS machines for sale

These show vendors selling:

  •   Sensitive database and email access to multiple corporate environments
  •   Direct access to 20+ PoS terminals with potential to pivot further into the environment
  •   More direct access to multiple corporate environments

We see prices for access to the POS systems offered at $2,000 USD apiece. As we know, a criminal stands to extract data worth far more than that with well-crafted, POS-specific malware.

Safeguards to Prevent Breaches in MSPs and Enterprise Environments

First and foremost, invest in a trusted security platform that can prevent compromise in the first place. Whether it’s a phishing attack, malware trojan like Emotet or a rogue device on your network, a modern enterprise can no longer rely on legacy AV software to keep out the range of attacks available today.   

Second, part of securing an environment should include becoming aware of when data relevant to your infrastructure appears in various dark corners of the internet. Brand and IP (intellectual property) monitoring is critical and can head criminals off at the pass when they try to actually use purchased access or accounts. Having the ability to scour various forums, apps and sites (or subscribing to a service that provides this) can be a very powerful control.

Beyond that, there are a number of steps that can be taken to reduce exposure and risk in the event that access to your environment is being sold. Basic steps can reduce low-level criminals’ and APT actors’ ability to maintain access or move laterally.

  •   Use multi-factor authentication where possible
  •   Employ proper separation of networks by resource and/or function
  •   Place properly configured and maintained firewalls and IDS strategically (e.g., at high-risk perimeters, trusted boundaries)
  •   Restrict and carefully monitor access to public sharing and collaboration services. This includes monitoring and restricting flow of data to and from services like Dropbox, Google Drive, ‘Paste’ sites, and similar ‘beachheads’.
  •   Ensure proper logging and review of logs and alerts. Enable additional logging where possible (e.g., PowerShell script-block logging) and put emphasis on a critical review of subsequent logs and alerts.
  •   Restrict where possible the use of well-known adversarial tools and associated communication channels. Examples would include mimikatz, wce, PStools, VNC, net, TeamViewer, WMIC, sdelete, and lazagne.
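As a small illustration of the last point, watching for well-known adversarial tools can be as simple as a pattern match over process names. This is a minimal sketch: the watchlist and the sample input are illustrative, and in practice you would feed it live output such as `ps -eo comm=`.

```shell
# Hypothetical watchlist built from the tools named above; extend to taste.
watchlist='mimikatz|wce|psexec|vnc|teamviewer|wmic|sdelete|lazagne'

# Sample process names stand in for live output of: ps -eo comm=
printf 'sshd\nbash\nteamviewer\ncron\n' | grep -Ei "$watchlist"
```

Any line printed is a hit worth investigating; an empty result (grep exits non-zero) means nothing on the watchlist matched.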

MSPs, specifically, should ensure:

  •   regular and aggressive rotation of VPN authentication certificates (e.g., every 6 months)
  •   communication to client organizations is via dedicated VPN
  •   traffic to and from the MSP (VPNd) is restricted to specific hosts or services that are required, and nothing beyond
  •   MSP accounts are only created for essential purposes, and do not have unnecessary administrative privileges (e.g., Domain Administrator or Enterprise Administrator) to client systems.
  •   MSP accounts are restricted to only those systems that they need direct access to for management purposes. Otherwise, systems should be segregated as described above.
  •   specific service accounts are used for MSP access. Consider disabling interactive logins for said accounts.
  •   access to accounts is monitored and controlled according to time and date. Restricted MSP or third-party accounts should be designed to have access during specific windows only. Any access attempt outside those allowable windows could be anomalous and should be reviewed and scrutinized.
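The access-window rule in the last bullet reduces to a tiny policy check. A hedged sketch, with 08:00–18:00 as a made-up policy window rather than a recommendation:

```shell
# check_window HOUR -> 'allowed' or 'review'; the 8-18 window is a
# hypothetical policy value. Wire the output into your alerting pipeline.
check_window() {
  if [ "$1" -ge 8 ] && [ "$1" -lt 18 ]; then
    echo "allowed"
  else
    echo "review: outside allowed window"
  fi
}

check_window 3    # an 03:00 MSP login attempt should stand out
check_window 12   # midday access falls inside the window
```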


It’s no secret that this economy, and its associated threats, exist in the open. That said, we are seeing more critical and sensitive environments being offered for sale to mid-to-lower-level criminals. As with modern RaaS and MaaS offerings, the potential for unskilled cybercriminals to severely impact advanced enterprise environments is growing. The barrier to entry is far lower than it was 5-10 years ago. For that reason, it is good to keep matters like this on our radar and stay as aware as possible with regard to the behaviors and communities of cybercriminals. We urge those in our industry to ingest and maintain this type of intelligence regularly, in addition to adhering to the safeguards recommended above.


We Got You Covered | How SentinelOne Protects Against CVE-2020-0601

The 14th of January was a busy, exciting, and concerning day for much of the world, as Microsoft’s latest vulnerability, CVE-2020-0601, created quite a stir in the industry due to its critical nature. While there’s no doubt about the seriousness of the flaw, let us offer some practical advice to keep things under control.

We will start by answering the two questions that are at the forefront of most organizations’ concern:

  • Do you have to install the Microsoft security update for this vulnerability?  Yes
  • Does SentinelOne protect you against threats that use this exploit?  Yes

First, as a security vendor and trusted advisor, we recommend that you install the Microsoft security update without delay. While SentinelOne detects and prevents all samples related to this CVE found to date, proper patch management should always be applied.

How We Protect Against Threats That May Exploit Vulnerabilities

SentinelOne’s Endpoint Protection Platform uses multiple detection engines to protect against threats. SentinelOne’s Behavioral AI monitors all running processes and is highly effective at mitigating exploitation attempts and subsequent threats even if the exploit itself cannot be blocked.

If an exploit is successful, attackers typically try one of the following approaches to leverage their toehold on the system:

  • Modify behavior of the service or application that they exploited with an intent to steal data or credentials
  • Live off the land by using PowerShell or other scripting engines to reconnoiter, move laterally, destroy or perform other malicious actions
  • Write a new executable to disk and set it to auto-run in an attempt to gain persistence
  • Attempt to turn off installed AV/EPP using escalated privileges
  • Move laterally to other services, applications or other systems

SentinelOne’s Behavioral AI engine (aka DBT-Executables) monitors all processes and network communications to detect all of the above attack patterns and is able to mitigate these threats automatically. We also recommend that you update to SentinelOne Windows agent version 3.6 (latest GA), but the principles described above hold true for all supported versions.

Here is a video showing how we detect a POC for CVE-2020-0601 using our Behavioral AI engine:

Don’t Forget About Visibility

Finally, SentinelOne’s Deep Visibility Threat Hunting module (part of the Complete package) provides an additional layer of safety by logging all the changes made on the system and automatically correlating these events to a TrueContextID, which groups all the variations of related processes together. In the extreme case of a missed threat, admins can hunt for Indicators of Compromise, mark a TrueContextID as a threat, and roll back all changes with a single click, in addition to using other advanced remediation capabilities. This multi-layered, single-agent approach makes SentinelOne a world-class protection product.

We are here for you. Should you ever have a finding that you do not know how to respond to, reach out to your SentinelOne team and we will provide an immediate response. 

Visa’s Plaid acquisition shows a shifting financial services landscape

When Visa bought Plaid this week for $5.3 billion, a figure that was twice its private valuation, it was a clear signal that traditional financial services companies are looking for ways to modernize their approach to business.

With Plaid, Visa picks up a modern set of developer APIs that work behind the scenes to facilitate the movement of money. Those APIs should help Visa create more streamlined experiences (both at home and inside other companies’ offerings), build on its existing strengths and allow it to do more than it could have before, alone.

But don’t take our word for it. To get under the hood of the Visa-Plaid deal and understand it from a number of perspectives, TechCrunch got in touch with analysts focused on the space and investors who had put money into the erstwhile startup.

DigitalOcean is laying off staff, sources say 30-50 affected

After appointing a new CEO and CFO last summer, cloud infrastructure provider DigitalOcean is embarking on a wider reorganisation: the startup has announced a round of layoffs, with potentially between 30 and 50 people affected.

DigitalOcean has confirmed the news with the following statement:

“DigitalOcean recently announced a restructuring to better align its teams to its go-forward growth strategy. As part of this restructuring, some roles were, unfortunately, eliminated. DigitalOcean continues to be a high-growth business with $275M in [annual recurring revenues] and more than 500,000 customers globally. Under this new organizational structure, we are positioned to accelerate profitable growth by continuing to serve developers and entrepreneurs around the world.”

Before the confirmation was sent to us this morning, a number of footprints began to emerge last night, when the layoffs first hit, with people on Twitter talking about it, some announcing that they are looking for new opportunities, and some offering help to those impacted. Inbound tips that we received estimate the cuts at between 30 and 50 people. With around 500 employees (an estimate on PitchBook) that would work out to up to 10% of staff affected.

It’s not clear what is going on here — we’ll update as and when we hear more — but when Yancey Spruill and Bill Sorenson were respectively appointed CEO and CFO in July 2019 (Spruill replacing someone who was only in the role for a year), the incoming CEO put out a short statement that, in hindsight, hinted at a refocus of the business in the near future.

“My aspiration is for us to continue to provide everything you love about DO now, but to also enhance our offerings in a way that is meaningful, strategic and most helpful for you over time,” he said at the time.

The company provides a range of cloud infrastructure services to developers, including scalable compute services (“Droplets” in DigitalOcean terminology), managed Kubernetes clusters, object storage, managed database services, Cloud Firewalls, Load Balancers and more, with 12 datacenters globally. It says it works with more than 1 million developers across 195 countries. It’s also been expanding the services that it offers to developers, including more enhancements in its managed database services, and a free hosting option for continuous code testing in partnership with GitLab.

All the same, as my colleague Frederic pointed out when DigitalOcean appointed its latest CEO, while developers have generally been happy with the company, it isn’t as hyped as it once was, and is a smallish player nowadays.

And in an area of business where economies of scale are essential for making good margins, it competes against some of the biggest leviathans in tech: Google (with Google Cloud Platform), Amazon (with AWS) and Microsoft (with Azure). That could mean that DigitalOcean is either trimming down as it talks to investors about a new round; or conserving cash as it sizes up how best to compete against these bigger, deep-pocketed players; or perhaps starting to think about another kind of exit.

In that context, it’s notable that the company not only appointed a new CFO last summer, but also a CEO with prior CFO experience. It’s been a while since DigitalOcean has raised capital. According to PitchBook, DigitalOcean last raised money in 2017, an undisclosed amount from Mighty Capital, Glean Capital, Viaduct Ventures, Black River Ventures, Hanaco Venture Capital, Torch Capital and EG Capital Advisors. Before that, it took out $130 million in debt, in 2016. Altogether it has raised $198 million; its last valuation, $683 million, dates from a 2015 round.

It’s been an active week for layoffs among tech startups. Mozilla laid off 70 employees this week; and the weed delivery platform Eaze is also gearing up for more cuts amid an emergency push for funding.

We’ll update this post as we learn more. Best wishes to those affected by the news.

Zendesk launches Sell Marketplace to bring app store to CRM product

Zendesk acquired Base CRM in 2018 to give customers a CRM component to go with its core customer service software. After purchasing the company, it changed the name to Sell, and today the company announced the launch of the new Sell Marketplace.

Officially called The Zendesk Marketplace for Sell, it’s a place where companies can share components that extend the capabilities of the core Sell product. Companies like MailChimp, HubSpot and QuickBooks are available at launch.

App directory in Sell Marketplace. Screenshot: Zendesk

Matt Price, SVP and general manager at Zendesk, sees the marketplace as a way to extend Sell into a platform play, something he thinks could be a “game changer.” He likened it to the impact of app stores on mobile phones.

“It’s that platform that accelerated and really suddenly [transformed smart phones] from being just a product to [launching an] industry. And that’s what the marketplace is doing now, taking Sell from being a really great sales tool to being able to handle anything that you want to throw at it because it’s extensible through apps,” Price explained.

Price says that this ability to extend the product could manifest in several ways. For starters, customers can build private apps with a new application development framework. This enables them to customize Sell for their particular environment, such as connecting to an internal system or building functionality that’s unique to them.

In addition, ISVs can build custom apps, something Price points out they have been doing for some time on the Zendesk customer support side. “Interestingly Zendesk obviously has a very large community of independent developers, hundreds of them, who are [developing apps for] our support product, and now we have another product that they can support,” he said.

Finally, industry partners can add connections to their software. For instance, by installing Dropbox for Sell, it gives sales people a way to save documents to Dropbox and associate them with a deal in Sell.

Of course, what Zendesk is doing here with Sell Marketplace isn’t new. Salesforce introduced this kind of app store concept to the CRM world in 2006 when it launched AppExchange, but the Sell Marketplace still gives Sell users a way to extend the product to meet their unique needs, and that could prove to be a powerful addition.

Microsoft announces global Teams ad push as it combats Slack for the heart of enterprise comms

The long-running contest between Microsoft and its Teams service and Slack’s eponymous application continued this morning, with Redmond announcing what it describes as its first “global” advertising push for its enterprise communication service.

Slack, a recent technology IPO, exploded in the back half of last decade, accreting huge revenues while burrowing into the tech stacks of the startup world. The former startup’s success continued as it increasingly targeted larger companies; it’s easier to stack revenue in enterprise-scale chunks than it is by onboarding upstarts.

Enterprise productivity software, of course, is a large part of Microsoft’s bread and butter. And as Slack rose — and Microsoft decided against buying the then-nascent rival — the larger company invested in its competing Teams service. Notably, today’s ad push is not the first advertising salvo between the two companies. Slack owns that record, having welcomed Microsoft to its niche in a print ad that isn’t aging particularly well.

Slack and Teams are competing through public usage announcements. Most recently, Teams announced that it has 20 million daily active users (DAUs); Slack’s most recent number is 12 million. Slack, however, has touted how active its DAUs are, implying that it isn’t entirely sure that Microsoft’s figures line up with its own. Still, the growing gap between their numbers is notable.

Microsoft’s new ad campaign is yet another chapter in the ongoing Slack vs. Teams contest. The ad push itself is only so important. What matters more is that Microsoft is choosing to spend some of its limited public attention bandwidth on Teams over other options.


While Teams is merely part of the greater Office 365 world that Microsoft has been building for some time, Slack’s product is its business. And since its direct listing, some air has come out of its shares.

Slack’s share price has fallen from the mid-$30s after it debuted to the low-$20s today. I’ve explored that repricing and found that, far from the public markets repudiating Slack’s equity, the company was merely mispriced in its early trading life. The company’s revenue multiple has come down since its first days as a public entity, but remains rich; investors are still pricing Slack like an outstanding company.

Ahead, Slack and Microsoft will continue to trade competing DAU figures. The question becomes how far Slack’s brand can carry it against Microsoft’s enterprise heft.

macOS Security Updates Part 2 | Running Diffs on Apple’s MRT app

In the first part of this series, I looked at how we can keep informed of when Apple make changes to their built-in security tools. In this and the following parts, we’ll look at how to determine what changes have occurred. Our primary methodology is to run diffs against the previous version of the tool and the new one. However, XProtect, Gatekeeper and use a variety of formats (XML, SQL and Mach-O binary, respectively) to store data, and they also use different forms of obfuscation within those formats. In this post, we’ll start by looking at how to extract data and run diffs on Apple’s Malware Removal Tool, aka the The only tools you’ll need to do this are Apple’s free Xcode Command Line Tools.
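The diff methodology itself needs nothing more than two string dumps and `diff`. Here is a toy illustration with placeholder data (the detection names below are stand-ins invented for the example, not taken from MRT’s actual contents):

```shell
# Stand-ins for string dumps of the old and new versions of a binary.
printf 'OSX.Pirrit\nOSX.Genieo\n' > /tmp/mrt_old.txt
printf 'OSX.Pirrit\nOSX.Genieo\nOSX.NewFamily\n' > /tmp/mrt_new.txt

# Lines prefixed '>' exist only in the new version: the interesting changes.
diff /tmp/mrt_old.txt /tmp/mrt_new.txt | grep '^>'
```

The rest of the post is about producing good string dumps to feed into exactly this kind of comparison.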

image running diffs MRT

Why the ‘strings’ Tool Won’t Show All Strings

If you’ve ever done any kind of file analysis, you’re no doubt aware of the strings tool (and if not, start here). While strings is fine for dumping constant strings in a binary, there are other ways of hiding strings that such tools are unable to extract. In a previous post, I described how to run the strings tool on’s binary and how to search for and decode other strings hidden in byte arrays using a disassembler. The problem with that approach, however, is two-fold. First, it’s labor intensive: you could literally spend hours (I have!) trying to manually find and decode the strings stored in byte arrays. Second, it requires some skill in knowing your way around a disassembler, and in extracting and cataloging what you find. Fortunately, as I’ll show below, there’s a better way that’s both fast and doesn’t require manually combing through disassembly.

Before we get to that, it’s also worth reminding ourselves that strings on macOS is not equivalent to the same tool on most Linux distros and lacks the --encoding option for dumping wide-character strings like UTF-16. For that reason, it’s generally better to use something like FireEye’s FLOSS tool for extracting strings, but even that won’t help us with strings stored in byte arrays and only assembled in memory upon execution.

Just Enough Disassembly To Get the Idea!

When it comes to strings in, the issue isn’t Unicode or some other encoding but rather the fact that some strings are stored as single characters in an array. In order to understand the problem and the solution, let’s take a very quick look at what’s going on when we view the byte arrays in the MRT executable in a disassembler.

The following images were taken from Hopper – a commercial tool, simply because it’s prettier and cleaner for illustration purposes – and the open source radare2 (aka r2). However, you don’t need either to follow along, and indeed we’ll look at similar output from otool (which you will need and is included in the Xcode command line tools) a little further on.

The first image below shows part of one of MRT app’s typical byte arrays. The first column represents the address of each line of code, but we won’t be needing that today. The second column of greyed text represents the op codes, or machine instructions, while everything else to the right of that is a sort-of human-readable translation to help us make sense of what instructions and values the op codes represent. Note, in particular, that the last two hex digits of each line of op code are mirrored in the rightmost column, where they are prefixed with 0x to indicate hexadecimal notation.

image of Hopper byte array

Importantly, those hex numbers are ASCII characters, which you can reveal in Hopper by control-clicking and choosing ‘Characters’.

image of Hopper display character

As we can clearly see below, that’s an array of characters that can be concatenated to form a string. When the particular function containing this array executes in memory, the string “EliteKeyloggerAccessibilityStatus” (among others) will be created.

For the sake of comparison, here’s the same in r2, which nicely renders the ASCII by default, and then in Hopper.

image of disassembly in r2

image of Hopper elite keylogger

Let’s remain with the op codes in the greyed column for a moment, because it’s these that are going to prove to be our salvation when trying to automate extracting these strings. You’ll notice that all the lines begin with C6, and all except the first are followed by 40. The byte C6 is an Intel op code that tells the processor to move an immediate byte value into memory addressed via the RAX register. The following instruction simply says “move the hex number 45 into the address held in RAX”.

C6 00 45

The 00 in the middle is the addressing (ModR/M) byte and means “with an offset of zero”; in other words, move the number 45 to exactly the address held in RAX. All the subsequent lines have the C6 followed by 40, the ModR/M byte that encodes an address of RAX plus a one-byte offset, and then an incrementing hexadecimal offset, 1 through to the length of the array.

C6 40 01 45
C6 40 02 6C
C6 40 03 69

Each of these tells the processor to move the byte in the final two places (e.g., 45) to the address given by (the address held in RAX + the index number of bytes). So, for example, the op code

C6 40 09 6F

tells the processor to move the hexadecimal number 6F (which is the ASCII code for the lower case ‘o’ character) into the address that equals (the address held in the RAX register + 9 bytes).
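You can sanity-check this hex-to-ASCII mapping right now at the command line with xxd, the same tool we’ll lean on later (the hex values here are just illustrative):

```shell
# 0x6f is the ASCII code for a lower case 'o'
printf '6f' | xxd -r -p          # prints: o

# a run of hex pairs decodes to a whole string
printf '456c697465' | xxd -r -p  # prints: Elite
```

The -r and -p switches tell xxd to reverse a plain (postscript-style) hexdump back into raw bytes.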

There’s only one other op code we need to take note of before we move on to the fun of extracting these strings, and that’s the op code for when a procedure (aka “function”) returns, C3.

image of end of procedure

The significance of this will come into play below.

Extracting Strings Hidden In Byte Arrays

If you already have the Xcode command line tools installed, we can make use of otool and some heavy piping through other built-in utilities to dump all the strings contained in byte arrays in one go. For those eager to have a go, try this command; then we’ll go through it to explain how it works, make a few tweaks, and suggest saving it as a script.

If you’re on Catalina, try this (triple-click the line below to select all of it):

otool -tvj /Library/Apple/System/Library/CoreServices/MRT.app/Contents/MacOS/MRT | grep movb | grep '(%rax)$' | awk '{ print (NF==7 ? "0x0a" $(NF-1) : $(NF-1)) }' | sed 's/\$//g' | grep -v % | sed 's/0x//g;s/,//g' | grep -v '-' | awk 'length($0)>1' | awk 'length($0)!=3' | xxd -r -p

For Mojave and earlier (triple-click the line below to select all of it):

otool -tvj /System/Library/CoreServices/MRT.app/Contents/MacOS/MRT | grep movb | grep '(%rax)$' | awk '{ print (NF==7 ? "0x0a" $(NF-1) : $(NF-1)) }' | sed 's/\$//g' | grep -v % | sed 's/0x//g;s/,//g' | grep -v '-' | awk 'length($0)>1' | awk 'length($0)!=3' | xxd -r -p

Yes, those are some mammoth one-liners! But if all is well, you should get an output that looks like the image below. I’m running Mojave, so I used the second of the two code blocks above (if you get an error, check you’re using the right code for your version of macOS).

image of output

On the current version of MRT, which is v1.52 at the time of writing, if I repeat that command (press the Up arrow to recall the last command, for those who are not regular command line users) and append

| wc -l

to get a line count, that returns 169 lines.

Here’s an explanation of how the code works. First, otool -tvj essentially produces an output similar to what we saw in Hopper and r2, but in a slightly different format. If we ran that alone in Terminal and gave it the path to the MRT binary, then, using what we learned in the last section, we could easily find the byte arrays manually just by searching for c6 40 01 (that would bring us to index 1; remember, the array begins on the line above, at index 0).

image of otool

Note that otool uses the mnemonic movb – move byte – rather than just mov, which makes it a bit more transparent. Thus, our first grep after dumping the disassembly is to pull out every line that contains a movb instruction. Second, every line in a byte array will end with (%rax), so that’s our second grep.

image of grep after dump

Recall that every array starts with C6 00 XX. Because there’s no need to add an index to store the first character, there are only three bytes in the instruction; in contrast, the remaining lines in the array all have four (review the previous section if you’re not clear why). For this reason, we now use a bit of awk and a ternary operator to test how many columns there are in the line.

image of awk ternary operator

If the line has exactly 7 columns, we know this could be the start of a byte array, so we prepend 0x0a – the hex code for the newline character – to our output, and then print the character’s ASCII code sitting in the penultimate column (NF-1). Prepending 0a is what produces a new line for each decoded string in our output. If the line has more or fewer than 7 columns, then we output whatever is in the penultimate column (NF-1), which mostly (see below) will give us values with the pattern “$0xXX,”.
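To make the ternary concrete, here’s a minimal sketch run on two fabricated otool-style lines (the addresses and byte values are invented for illustration):

```shell
# A 3-byte "start of array" instruction has 7 columns; a 4-byte indexed
# instruction has 8. The awk ternary prepends 0x0a only to the former.
printf '%s\n' \
  '0000000100001000 c6 00 45 movb $0x45, (%rax)' \
  '0000000100001003 c6 40 01 6c movb $0x6c, 0x1(%rax)' |
awk '{ print (NF==7 ? "0x0a" $(NF-1) : $(NF-1)) }'
# prints:
# 0x0a$0x45,
# $0x6c,
```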

From here on, each line is piped through a few more grep, sed and awk commands to strip everything but the actual hex number. We also exclude lines whose output has a length of 1 or 3, as our hex output needs a length divisible by two for the next and final stage.

image of grep sed and awk

That final stage involves piping the output into xxd and using the -r and -p switches (you can also use rax2 -s as an alternative to xxd -r -p, for the radare2 fans!) to turn the hex-encoded ASCII back into human-readable language. That process requires each valid hex numeral to have a length of two characters (XX) to produce one printable ASCII character (hence, no piping anything of length 1 or 3 into it, or else you’ll be treated to a few error beeps!).

image of xxd
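Putting the cleanup and decoding stages together on some fabricated awk output (three bytes spelling “Eli”, with the 0x0a newline marker prepended to the first), the pipeline behaves like this:

```shell
# Strip the $, the 0x prefixes and the commas, drop anything of the
# wrong length, then let xxd turn the hex pairs back into characters.
printf '%s\n' '0x0a$0x45,' '$0x6c,' '$0x69,' |
sed 's/\$//g' | sed 's/0x//g;s/,//g' |
awk 'length($0)>1' | awk 'length($0)!=3' |
xxd -r -p
# prints a newline followed by: Eli
```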

If you’re only interested in finding out what’s new in MRT, our script should be fine as is for doing diffs (see the next section), but for other uses, it needs a tweak. In particular, for threat hunting, it would be useful to see which strings belong together as part of a single detection. This may help us to find samples in public malware repositories like VirusTotal or across our own data collected from endpoints.

At the end of the previous section, I pointed out that the op code for the “end of procedure” instruction was C3. We can now use that to demarcate groups of strings with some nominal delimiter. Here’s a version of the extraction code given above, but which inserts a delimiter character between strings from different functions (note you need to substitute the actual path to MRT binary when you paste this one):

otool -tvj <path to MRT binary> | egrep 'movb|retq' | egrep '\(%rax\)$|retq' | awk '{ print (NF==7 ? "0x0a" $(NF-1) : $(NF-1)) }' | sed 's/^c3/5c/' | sed 's/\$//g' | grep -v % | sed 's/0x//g;s/,//g' | grep -v '-' | awk 'length($0)>1' | awk 'length($0)!=3' | xxd -r -p

image of grep for end of procedure

As you can see, the primary difference is that we’ve now included a grep for lines containing the mnemonic retq (again, note the difference between otool and Hopper). Later, after the awk stage, we substitute the op code c3 with 5c, which represents the backslash character. In the disassembly, when we encounter a retq line, the op code sits at column position NF-1, so it is already captured by the earlier piping. The result of the entire command now lets us visually distinguish the strings contained in one procedure from those in another.

image of delimited string collections
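Here’s a toy illustration of that substitution, using fabricated column values – one string ends (the retq op code c3), another begins, and c3 becomes 5c, the hex code for a backslash:

```shell
# 6f='o', c3 -> 5c='\', 68='h', 69='i'
printf '%s\n' '$0x6f,' 'c3' '$0x68,' '$0x69,' |
sed 's/^c3/5c/' | sed 's/\$//g;s/0x//g;s/,//g' |
xxd -r -p
# prints: o\hi
```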

There are a couple of “ToDos” remaining. One, note that because the grep and sed search-and-replace for c3 is fairly crude, you’ll sometimes get only one backslash between strings from different functions and sometimes multiple backslashes. It would be nice to have a more consistent “end of procedure” marker. Two, we would do well to put these alternative versions (Mojave, Catalina, show/hide “end of procedure” markers) in a script with arguments. This isn’t the place to go into general bash scripting and we’re running out of space, so I’ll deploy the “leave it as an exercise for the reader” evasion here! If you need help with bash scripting, there’s no better place to start than here.

Running Diffs On MRT

In order to do diffs you’re going to need two versions of the MRT binary, so I suggest copying the current one now to a dedicated folder for this kind of work so that when the next iteration arrives, you have the older version safely tucked away for testing. I use a simple nomenclature, MRT_151, MRT_152, for version 1.51, 1.52, etc.

$ cp /System/Library/CoreServices/MRT.app/Contents/MacOS/MRT ~/MRT_152

Note: For Catalina, remember to prepend /Library/Apple to that path.

From our discussion above, we now have two kinds of string dumps we can diff on the executable: the ordinary strings produced by strings or FLOSS, and the output of what I’ll call a “byteStrings” command or script using the code detailed above.

Aside from strings, it’s also good to know what classes have changed in a machO binary, what new symbols may have been imported from libraries and so on. The built-in tools nm and otool have some useful options you can check out in the man pages. I also tend to make heavy use of rabin2 (part of the r2 toolset) for binary analysis as some of the examples below show.

In order to do a fast diff on two MRT binaries, we can make use of a shell feature called process substitution, which allows us to treat the result of commands like a file, and thus pipe them into the diff utility.

Using this syntax,

$ diff -y <([commands] path to older file) <([commands] path to newer file)

the -y switch allows us to view the diff side-by-side for convenience. The process substitutions occur inside <().
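If you’ve not met process substitution before, here’s a toy example that diffs the output of two printf commands without creating any intermediate files (the inputs are invented for illustration):

```shell
# Each <( ... ) behaves like a file containing the command's output.
diff <(printf 'alpha\nbeta\n') <(printf 'alpha\ngamma\n')
# prints:
# 2c2
# < beta
# ---
# > gamma
```

Note that process substitution is a bash/zsh feature; it won’t work in a plain POSIX sh.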

Let’s try it:

$ diff -y <(strings -a MRT_151) <(strings -a MRT_152)

There are three possible outcomes for each line, shown between the two columns. A > indicates the entry on the right has been added at that position in the newer file; a < indicates the entry on the left is missing at that position in the newer file; and a | indicates the line at that position has changed.

image of side by side strings diff

However, in practice, interpretation is a little more nuanced than that because, due to code refactoring, many lines that look like they’re missing will simply appear as added at another point in the diff. The above image shows a good example: note how the pathWithComponents: line has shifted in MRT v1.52 (on the right) compared to MRT v1.51 (on the left).

There are a couple of ways you can deal with this in general. If the output is short, a manual inspection will usually do. If there’s a lot of output, a bit more grepping will help, but you’ll find it easier to manipulate the results in text files. The following two commands sort the additions and omissions into two separate text files, “added.txt” and “missing.txt”, respectively.

$ diff <(strings -a MRT_151) <(strings -a MRT_152) | grep '^>' | sed 's/^> //' > added.txt

$ diff <(strings -a MRT_151) <(strings -a MRT_152) | grep '^<' | sed 's/^< //' > missing.txt

The sed commands serve to remove the leading “greater than” and “less than” markers, which we need to do so that we can then diff those two files against each other:

$ diff missing.txt added.txt | grep '^>' | sort -u
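A toy version of the added-lines extraction shows the moving parts at work (the input strings here are fabricated stand-ins for the strings output):

```shell
# Keep only lines diff marks as additions, then strip the "> " marker.
diff <(printf 'alpha\nbeta\n') <(printf 'alpha\nbeta\ngamma\n') |
grep '^>' | sed 's/^> //'
# prints: gamma
```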

Aside from strings, another important change to look at is the class names. My favorite tool for class dumping is rabin2. We’ll use the -cc option and grep out the additions and omissions, once again making good use of process substitution.

$ diff -y <(rabin2 -cc MRT_151) <(rabin2 -cc MRT_152) | egrep '<|>'

image of diff with rabin cc

You can also play with other rabin2 options in the same pattern, such as -l instead of -cc to do a diff on the linked libraries (no change between MRT v1.51 and MRT v1.52, in this instance). Likewise, diffing the two binaries with nm using process substitution can also sometimes turn up subtle changes.

image of diff with nm

Finally, of course, don’t forget to do a side-by-side diff with our new byteStrings tool!

$ diff -y <(./byteStrings MRT_151) <(./byteStrings MRT_152)

image of some new bytestrings


In this post, we’ve looked at ways to run diffs using different tools that can help us to understand what changes have occurred from one version of a program to another. Along the way, we’ve taken a tour through the basics of disassembly to understand how we can create our own tool to automate part of the process.

Of course, Apple may well decide to choose another method for obfuscating some of their strings in some future version of MRT, and then we’ll have to find another way to decode them. I certainly hope not, not only because Apple’s security team know better than anyone that, in the end, you simply can’t prevent code from being read on macOS, but more importantly because sharing threat intel with other security researchers and exposing threat actor techniques serves to make everyone safer. That, of course, would involve an admission that threat actors are actively targeting macOS, something that history suggests Apple are disinclined to admit.

However that may be, if nothing else, I hope you enjoyed this tour through diffing and disassembling machO binaries! In the next post, we’ll be looking at how to do diffs on XProtect and Gatekeeper, so sign up to our blog newsletter (form to the left) or follow us on social media (links below the line) to be notified when that’s live.
