The Good, the Bad and the Ugly in Cybersecurity – Week 35

Image of The Good, The Bad & The Ugly in CyberSecurity

The Good

Apple, Microsoft and now Google have all made important announcements about new and extended bug bounty programs recently. This week, Google took its turn, announcing a new program to reward the reporting of data abuse issues in Android apps, Chrome extensions and OAuth projects through the HackerOne vulnerability disclosure platform. The program is intended to help find abuses by unscrupulous developers who sell or misuse user data or violate Google’s privacy policies. The company has also expanded its existing Google Play Security Rewards Program to include apps with 100 million or more installs.

image of news of google bug bounty

Another massive botnet bit the dust this week after French police and the FBI remotely removed RETADUP malware from over 850,000 infected devices. The French police replaced the attackers’ C2 server, which was being hosted by a provider in the Île-de-France region, with one of their own. The benign server then leveraged the C2’s original protocol to instruct infected clients to remove the malware. In a tweet, the French cyber cops said:

“The gendarmerie has dismantled one of the largest networks of pirated computers in the world! In collaboration with the FBI, French cyber police managed to “disinfect” more than 850,000 computers remotely. A world first!”

image of french police tweet

The Bad

The rise in ransomware attacks continues unabated, and now even dental practices are feeling the pain, as hundreds this week fell victim to a supply chain attack that delivered Sodinokibi ransomware. A targeted attack on the developers of DDS Safe software resulted in hundreds of dental practices finding themselves locked out of patients’ records. The attack is particularly bruising for the companies behind the software, The Digital Dental Record and PerCSoft. The developers tout their product as a backup service offering customers ransomware protection, but the attack forced them to pay the ransom in order to acquire a decryptor from the criminals and pass it on to their affected customers. It just goes to underline the importance of having real ransomware protection no matter what business you’re in.

image of breach notice

Apple just can’t seem to catch a break at the moment. After last week’s revelation of vulnerabilities affecting almost every version of iOS comes yet more bad news for the company’s security reputation. Google’s Project Zero team has just released details of a sustained, two-year campaign exploiting no fewer than fourteen separate vulnerabilities and five privilege escalation exploit chains in iOS (versions 10 through 12.1). Hackers used the flaws to conduct drive-by attacks, placing implants on iOS devices that visited a small number of malicious websites. The implants, which required no interaction from the user other than landing on one of the malicious sites, were used to steal user files and location data. The malware was capable of receiving commands from the attackers’ C2 server and was able to access the databases used by encrypted chat apps such as Messages, Telegram and WhatsApp.

image of malicious code

The Project Zero team said it would be almost impossible for users to detect the presence of the malware themselves. While Apple patched the bugs on February 7th, a number of crucial questions remain unanswered.

image of tweet about iphone hack

The Ugly

Cisco scored 10 out of 10 this week, but not in a measure that’s good for either the company or any of its enterprise customers running the widely used IOS XE operating system. CVE-2019-12643 has been rated at the maximum severity of 10 on the Common Vulnerability Scoring System (CVSS). The vulnerability lies in a REST API virtual service container which, fortunately, is disabled by default. However, for clients running the service, malicious HTTP requests could bypass authentication and allow an attacker to log in and execute privileged actions.

image of cisco vuln

Remember the days when the idea of cyber warfare was just the stuff of movies, the overactive imagination of teenage hackers, and a highly guarded state secret that you didn’t talk about and never confirmed? This week the US government discarded the notion of ‘plausible deniability’ and publicly discussed details of a cyber attack on Iranian computer systems used to track shipping movements and target oil tankers in the Persian Gulf. The attack has proved controversial with some defense experts worried that it will only serve to help adversaries learn more about US Cyber Command’s capabilities, while at the same time allowing Iran – and others they choose to share information with – the chance to close those vulnerabilities. Stealth, it seems, was less of a priority in this case than being seen to act. Whether a brazen show of strength was a good or bad idea for US defense interests in the longer term is hard to say. File it under ‘ugly’ and watch this space.

Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Marc Benioff will discuss building a socially responsible and successful startup at TechCrunch Disrupt

Salesforce chairman, co-founder and CEO Marc Benioff took a lot of big chances when he launched the company 20 years ago. For starters, his was one of the earliest enterprise SaaS companies, but he wasn’t just developing a company on top of a new platform; he was building one from scratch with social responsibility built in.

Fast-forward 20 years and that company is wildly successful. In its most recent earnings report, it announced a $4 billion quarter, putting it on a $16 billion run rate, and making it by far the most successful SaaS company ever.

But at the heart of the company’s DNA is a charitable streak, and it’s not something they bolted on after becoming successful. Even before the company had a working product, in the earliest planning documents, Salesforce wanted to be a different kind of company. Early on, it designed the 1-1-1 philanthropic model, which set aside 1% of Salesforce’s equity, 1% of its product and 1% of its employees’ time for the community. As the company has grown, that model has developed serious financial teeth, and other startups over the years have adopted the same approach, using Salesforce as a model.

In our coverage of Dreamforce, the company’s enormous annual customer conference, in 2016, Benioff outlined his personal philosophy around giving back:

You are at work, and you have great leadership skills. You can isolate yourselves and say I’m going to put those skills to use in a box at work, or you can say I’m going to have an integrated life. The way I look at the world, I’m going to put those skills to work to make the world a better place.

This year Benioff is coming to TechCrunch Disrupt in San Francisco to discuss with TechCrunch editors how to build a highly successful business, while giving back to the community and the society your business is part of. In fact, he has a book coming out in mid-October called Trailblazer: The Power of Business as the Greatest Platform for Change, in which he writes about how businesses can be a positive social force.

Benioff has received numerous awards over the years for his entrepreneurial and charitable spirit, including Innovator of the Decade from Forbes, one of the World’s 25 Greatest Leaders from Fortune, one of the 10 Best-Performing CEOs from Harvard Business Review, awards from GLAAD and the Billie Jean King Leadership Initiative for his work on equality, and the Variety Magazine EmPOWerment Award.

It’s worth noting that in 2018, a group of 618 Salesforce employees presented Benioff with a petition protesting the company’s contract with Customs and Border Protection (CBP). In public comments, Benioff stated that the tools were being used in recruitment and management, not to help separate families at the border. While Salesforce did not cancel the contract, co-CEO Keith Block stated at the time that the company would donate $1 million to organizations helping separated families, as well as match any internal employee contributions through its charitable arm.

Disrupt SF runs October 2 to October 4 at the Moscone Center in the heart of San Francisco. Tickets are available here.

Did you know Extra Crunch annual members get 20% off all TechCrunch event tickets? Head over here to get your annual pass, and then email to get your 20% discount. Please note that it can take up to 24 hours to issue the discount code.


Ransomware Bites Dental Data Backup Firm

PerCSoft, a Wisconsin-based company that manages a remote data backup service relied upon by hundreds of dental offices across the country, is struggling to restore access to client systems after falling victim to a ransomware attack.

West Allis, Wis.-based PerCSoft is a cloud management provider for Digital Dental Record (DDR), which operates an online data backup service called DDS Safe that archives medical records, charts, insurance documents and other personal information for various dental offices across the United States.

The ransomware attack hit PerCSoft on the morning of Monday, Aug. 26, and encrypted dental records for some — but not all — of the practices that rely on DDS Safe.

PerCSoft did not respond to requests for comment. But Brenna Sadler, director of communications for the Wisconsin Dental Association, said the ransomware encrypted files for approximately 400 dental practices, and that somewhere between 80 and 100 of those clients have now had their files restored.

Sadler said she did not know whether PerCSoft and/or DDR had paid the ransom demand, what ransomware strain was involved, or how much the attackers had demanded.

But updates to PerCSoft’s Facebook page and statements published by both PerCSoft and DDR suggest someone may have paid up: The statements note that both companies worked with a third party software company and were able to obtain a decryptor to help clients regain access to files that were locked by the ransomware.

Update: Several sources are now reporting that PerCSoft did pay the ransom, although it is not clear how much was paid. One member of a private Facebook group dedicated to IT professionals serving the dental industry shared the following screenshot, which is purportedly from a conversation between PerCSoft and an affected dental office, indicating the cloud provider was planning to pay the ransom:

Another image shared by members of that Facebook group indicates the ransomware that attacked PerCSoft is an extremely advanced and fairly recent strain known variously as REvil and Sodinokibi.

Original story:

However, some affected dental offices have reported that the decryptor did not work to unlock at least some of the files encrypted by the ransomware. Meanwhile, several affected dentistry practices said they feared they might be unable to process payroll payments this week as a result of the attack.

Cloud data and backup services are a prime target of cybercriminals who deploy ransomware. In July, attackers hit QuickBooks cloud hosting firm iNSYNQ, holding data hostage for many of the company’s clients. In February, cloud payroll data provider Apex Human Capital Management was knocked offline for three days following a ransomware infestation.

On Christmas Eve 2018, a cloud hosting provider took its systems offline in response to a ransomware outbreak on its internal networks. The company was adamant that it would not pay the ransom demand, but it ended up taking several weeks for customers to fully regain access to their data.

The FBI and multiple security firms have advised victims not to pay any ransom demands, as doing so just encourages the attackers and in any case may not result in actually regaining access to encrypted files. In practice, however, many cybersecurity consulting firms are quietly telling their customers that paying up is the fastest route back to business as usual.

It remains unclear whether PerCSoft or DDR — or perhaps their insurance provider — paid the ransom demand in this attack. But new reporting from independent news outlet ProPublica this week sheds light on another possible explanation why so many victims are simply coughing up the money: Their insurance providers will cover the cost — minus a deductible that is usually far less than the total ransom demanded by the attackers.

More to the point, ProPublica found, such attacks may be great for business if you’re in the insurance industry.

“More often than not, paying the ransom is a lot cheaper for insurers than the loss of revenue they have to cover otherwise,” said Minhee Cho, public relations director of ProPublica, in an email to KrebsOnSecurity. “But, by rewarding hackers, these companies have created a perverted cycle that encourages more ransomware attacks, which in turn frighten more businesses and government agencies into buying policies.”

“In fact, it seems hackers are specifically extorting American companies that they know have cyber insurance,” Cho continued. “After one small insurer highlighted the names of some of its cyber policyholders on its website, three of them were attacked by ransomware.”

Read the full ProPublica piece here. And if you haven’t already done so, check out this outstanding related reporting by ProPublica from earlier this year on how security firms that help companies respond to ransomware attacks also may be enabling and emboldening attackers.

Phishers are Angling for Your Cloud Providers

Many companies are now outsourcing their marketing efforts to cloud-based Customer Relationship Management (CRM) providers. But when accounts at those CRM providers get hacked or phished, the results can be damaging for both the client’s brand and their customers. Here’s a look at a recent CRM-based phishing campaign that targeted customers of Fortune 500 construction equipment vendor United Rentals.

Stamford, Ct.-based United Rentals [NYSE:URI] is the world’s largest equipment rental company, with some 18,000 employees and earnings of approximately $4 billion in 2018. On August 21, multiple United Rental customers reported receiving invoice emails with booby-trapped links that led to a malware download for anyone who clicked.

While phony invoices are a common malware lure, this particular campaign sent users to a page on United Rentals’ own Web site.

A screen shot of the malicious email that spoofed United Rentals.

In a notice to customers, the company said the unauthorized messages were not sent by United Rentals. One source who had at least two employees fall for the scheme forwarded KrebsOnSecurity a response from UR’s privacy division, which blamed the incident on a third-party advertising partner.

“Based on current knowledge, we believe that an unauthorized party gained access to a vendor platform United Rentals uses in connection with designing and executing email campaigns,” the response read.

“The unauthorized party was able to send a phishing email that appears to be from United Rentals through this platform,” the reply continued. “The phishing email contained links to a purported invoice that, if clicked on, could deliver malware to the recipient’s system. While our investigation is continuing, we currently have no reason to believe that there was unauthorized access to the United Rentals systems used by customers, or to any internal United Rentals systems.”

United Rentals told KrebsOnSecurity that its investigation so far reveals no compromise of its internal systems.

“At this point, we believe this to be an email phishing incident in which an unauthorized third party used a third-party system to generate an email campaign to deliver what we believe to be a banking trojan,” said Dan Higgins, UR’s chief information officer.

United Rentals would not name the third-party marketing firm thought to be involved, but passive DNS lookups on the UR subdomain referenced in the phishing email (used by UR for marketing since 2014) point to Pardot, an email marketing division of cloud CRM giant Salesforce.

Companies that use cloud-based CRMs sometimes will dedicate a domain or subdomain they own specifically for use by their CRM provider, allowing the CRM to send emails that appear to come directly from the client’s own domains. However, in such setups the content that gets promoted through the client’s domain is actually hosted on the cloud CRM provider’s systems.

Salesforce did not respond to multiple requests for comment. But it seems likely that someone at Pardot with access to United Rental’s account was phished, hacked, or perhaps guilty of password re-use.

This attack comes on the heels of another targeted phishing campaign leveraging Pardot that was documented earlier this month by Netskope, a cloud security firm. Netskope’s Ashwin Vamshi said users of cloud CRM platforms have a high level of trust in the software because they view the data and associated links as internal, even though they are hosted in the cloud.

“A large number of enterprises provide their vendors and partners access to their CRM for uploading documents such as invoices, purchase orders, etc. (and often these happen as automated workflows),” Vamshi wrote. “The enterprise has no control over the vendor or partner device and, more importantly, over the files being uploaded from them. In many cases, vendor- or partner-uploaded files carry with them a high level of implicit trust.”

Cybercriminals increasingly are targeting cloud CRM providers because compromised accounts on these systems can be leveraged to conduct extremely targeted and convincing phishing attacks. According to the most recent stats (PDF) from the Anti-Phishing Working Group, software-as-a-service providers (including CRM and Webmail providers) were the most-targeted industry sector in the first quarter of 2019, accounting for 36 percent of all phishing attacks.

Image: APWG

Gootkit Banking Trojan | Part 2: Persistence & Other Capabilities

From Zero to Hero: Malware Reverse Engineering & Threat Intelligence is a free, 12-week course by Vitali Kremez and Daniel Bunce. Following on from the previous post, Daniel continues exploring the Gootkit banking trojan, revealing its persistence techniques and other capabilities.

The Gootkit Banking Trojan was discovered back in 2014, and utilizes the Node.js library to perform a range of malicious tasks, from website injections and password grabbing all the way up to video recording and remote VNC capabilities. Since its discovery in 2014, the actors behind Gootkit have continued to update the codebase to slow down analysis and thwart automated sandboxes.

In the previous post, I explored Gootkit’s Anti-Analysis features. In this post, we’ll take a look into the first stage of Gootkit and figure out how it achieves persistence on an infected system, as well as reveal some other tricks it has available.

MD5 of Packed Sample: 0b50ae28e1c6945d23f59dd2e17b5632

Onboard Configuration

Before we get into the persistence and C2 communication routines, let’s first take a look at the onboard configuration, and how it is stored.

image of gootkit onboard configuration

The first time that the configuration is “mentioned” in the sample is immediately after the anti-analysis mechanisms that were covered in the previous post. A quick glance at the code may leave you thinking that Gootkit is decrypting some shellcode to be used by the sample – but running this in a debugger shows otherwise. The decryption routine is fairly simple; a basic XOR loop with a differentiating key based on imul and idiv calculations. The base key value is 0x22, and the idiv and imul values are constant throughout each iteration; 0x85 and 0x03 respectively. A Python script of this decryption routine can be seen in the image below.

image of python script
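The script itself isn't reproduced here, but a minimal Python sketch of such a routine might look like the following. The constants (base key 0x22, multiplier 0x03, divisor 0x85) come from the sample; exactly how they combine into the per-iteration key is an assumption for illustration.

```python
def xor_decrypt(data: bytes) -> bytes:
    """Hypothetical reconstruction of Gootkit's config decryption.

    The base key (0x22) and the imul/idiv constants (0x03, 0x85) are
    taken from the sample; the key schedule below is an assumption.
    """
    key = 0x22
    out = bytearray()
    for b in data:
        out.append(b ^ key)
        key = (key * 0x03) % 0x85  # assumed imul/idiv-based key update
    return bytes(out)
```

Since XOR is symmetric, the same routine both encrypts and decrypts, which makes it easy to verify a reconstruction against the sample's output.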

After decrypting the data manually, we can easily see that this is in fact the configuration used by Gootkit to retrieve the next stage: 2700 svchost.exe

Each value is separated by multiple null bytes, meaning pretty much all of this configuration is null bytes. The first two values are obviously URLs, and the final value is the name of the process that the downloader could inject into. The last two values are also set as environment variables, specifically vendor_id and mainprocessoverride. The vendor_id variable is given the value exe_scheduler_2700, and mainprocessoverride is given the value svchost.exe. These variables are not used in the downloader aside from setup, so it can be assumed they are used in the final stage. Once the environment variables have been created and assigned values, four important threads are kicked off: a C2 Retrieve thread, a Browser Injection thread, a Persistence thread, and a Kill Switch thread. Let’s start with the Persistence thread.

image of persistence thread

Persistence Capabilities

In this sample of Gootkit, there are two persistence options available. First, there is the usual method of achieving persistence through a created service. In this case, Gootkit generates a random filename based on filenames in System32, using the Mersenne Twister, and then proceeds to create a file under the same name in %SystemRoot%. When I tested this function, it created a file called msfearch.exe. A service is then created under the same name and executed. Finally, the original executable cleans up by deleting itself from disk and exiting, leaving the created service running.

image of gootkit service start

The second persistence routine is a lot more interesting, and has been covered quite often before. This routine is most commonly used in Gootkit infections, as creating a service requires administrator privileges – this does not.

It starts by creating a simple .inf file, which is given the same name as the running executable, and placed in the same directory. The contents of the file can be seen below:

image of contents of inf file

Then, the sample creates a registry key under the Internet Explorer Administration Kit’s (IEAK) Pending GPOs path, and creates three values inside this key: Count, Path1, and Section1. Count is assigned the value 0x1, Path1 is assigned the path to the INF file, and Section1 is assigned the string [DefaultInstall], which is also present inside the INF file. With that, the setup is complete.

The way this works is that explorer.exe loads Group Policy Objects (GPOs) at runtime. Gootkit creates a pending GPO for the Internet Explorer Administration Kit (IEAK) that points directly at the INF file. When explorer.exe next loads, it executes the [DefaultInstall] section inside the created file, which in turn executes the Gootkit executable.

image of IEAK Pending GPOs
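As a concrete sketch, the three registry values described above can be modeled like this. The value names and contents follow the analysis; the function itself and the INF path are illustrative (on a live system the values would be written under the IEAK Pending GPOs key with the Windows registry API).

```python
def pending_gpo_values(inf_path: str) -> dict:
    """The three values Gootkit creates for its pending-GPO persistence:
    Count marks one pending entry, Path1 points at the dropped INF file,
    and Section1 names the INF section explorer.exe will execute."""
    return {
        "Count": 0x1,
        "Path1": inf_path,
        "Section1": "[DefaultInstall]",
    }
```

Because this lives under the current user's hive, no administrator privileges are needed, which is exactly why this route is the more common one in the wild.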

Loader Update Thread

With the persistence thread covered, let’s move on to analyzing the C2 Retrieve thread. This was particularly difficult to analyze because the command and control server went offline very quickly. At first glance, the thread looked like it was responsible for downloading the final stage and constantly updating it, but digging deeper proved that incorrect.

image of gootkit get from c2

The function is not extremely complex. To put it simply, Gootkit checks whether a variable is set to 0 or 1; if it is set to 1, it exits the thread. This variable is only activated inside the Kill Switch function, which we will look at soon.

Continuing on, the sample appends /rpersist4/-1531849038 to the URL, where -1531849038 is the CRC32 hash of the binary, converted to decimal. Then, depending on the architecture, rbody32 or rbody64 is appended to the URL.

image of gootkit creating C2 URL
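The URL construction can be sketched in a few lines of Python. The path layout follows the analysis; rendering the CRC32 as a *signed* 32-bit decimal is an assumption that matches the negative value (-1531849038) seen in the sample, and the exact separator before rbody32/rbody64 is also assumed.

```python
import zlib

def build_checkin_url(base: str, binary: bytes, is_64bit: bool) -> str:
    """Sketch of Gootkit's update-check URL construction."""
    crc = zlib.crc32(binary) & 0xFFFFFFFF
    if crc & 0x80000000:          # reinterpret the CRC as signed 32-bit
        crc -= 0x100000000
    arch = "rbody64" if is_64bit else "rbody32"
    return f"{base}/rpersist4/{crc}/{arch}"
```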

Then the actual connection takes place. Interestingly, there are two means of communication as well: it can occur either through WinInet functions such as InternetOpenW, or through WinHTTP functions such as WinHttpOpen, although I have yet to see it call the WinHTTP functions, regardless of privileges.

image of gootkit connection function

Before reaching out to the C2, Gootkit will first add to the headers of the GET request. These additions can be seen below:

X-File-Name:            Filename
X-User-Name:            Username
X-ComputerName:         Computername
X-OSVersion:            6.1.7601|Service Pack 1 1.0|1|0x00000100
X-VendorId:             2700
X-IsTrustedComputer:    1
X-HTTP-Agent:           WININET
X-Proxy-Present:        False
X-Proxy-Used:           False
X-Proxy-AutoDetect:     False

The X-IsTrustedComputer header is only set to 1 if the crackmeololo environment variable is set; otherwise it is set to 0. This could be another anti-analysis/anti-sandbox/anti-VM mechanism, although it’s difficult to say without seeing the backend.

image of gootkit checking environment
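The header-building step can be sketched as a simple dictionary. The header names and the vendor id are those listed above; the function arguments are placeholders, and only a subset of headers is shown.

```python
import os

def checkin_headers(filename: str, username: str, computername: str) -> dict:
    """Headers Gootkit adds to its GET request (names per the analysis;
    the argument values here stand in for real host information)."""
    return {
        "X-File-Name": filename,
        "X-User-Name": username,
        "X-ComputerName": computername,
        "X-VendorId": "2700",
        # "1" only when the crackmeololo environment variable exists
        "X-IsTrustedComputer": "1" if "crackmeololo" in os.environ else "0",
        "X-HTTP-Agent": "WININET",
        "X-Proxy-Present": "False",
        "X-Proxy-Used": "False",
        "X-Proxy-AutoDetect": "False",
    }
```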

If the connection between the sample and the C2 fails, it will attempt to connect to the other C2s found in the configuration. If the connection is successful and the server returns an executable, Gootkit creates a randomly named file in the Temporary directory and executes it with the --reinstall argument, using CreateProcessW. From this we can see that this thread is in fact an “updater” thread, which continuously checks in with the C2 server, waiting for any updates to the loader.

image of gootkit updater thread

Now that this function has been covered, let’s move over to the Kill Switch function briefly, before going onto the Browser Injection function.

Kill Switch

The Kill Switch thread is only triggered if uqjckeguhl.tmp is located in ..\AppData\Local\Temp or ..\Local Settings\Temp. If the file exists, Gootkit begins to clean up after itself: it kills all running threads and restarts the computer. It’s unclear why this is a feature, as persistence is established before the Kill Switch thread is executed, so simply restarting the computer will end up executing the loader again. However, if a loader update is issued and installed on the infected system, forcing a reboot could help prevent several instances from running at once.

image of kill switch

And finally, on to the Browser Injection function.

Browser Injection

The Browser Injection function is quite interesting, as it is responsible for two tasks: executing itself with the --vwxyz argument, and injecting two DLLs into running browsers. We’re going to focus on the second task.

In order to inject a DLL into a browser, there must already be a DLL residing somewhere, and there is. In fact, there are two encrypted DLLs stored in the binary: an x86 DLL and an x64 DLL, which are decrypted with a simple XOR. Also interesting is that there seem to be placeholders in other variants, as this sample checks both DLLs for 0x11223344 and 0x55667788 in order to replace those values with 0x12 and 0x13 respectively.

image of placeholders
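The placeholder replacement described above amounts to a search-and-replace over the decrypted DLL bytes. This sketch assumes the 32-bit markers are stored little-endian, which is typical for x86/x64 binaries but is an assumption here.

```python
import struct

def patch_placeholders(dll: bytes, val1: int = 0x12, val2: int = 0x13) -> bytes:
    """Swap the 32-bit markers 0x11223344 and 0x55667788 for the values
    this sample uses (0x12 and 0x13). Little-endian encoding assumed."""
    dll = dll.replace(struct.pack("<I", 0x11223344), struct.pack("<I", val1))
    return dll.replace(struct.pack("<I", 0x55667788), struct.pack("<I", val2))
```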

With both DLLs decrypted, Gootkit sets the 2500 value to 0x3 under each of the following registry keys:

Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\0
Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1
Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\2
Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3
Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\4
Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\5

image of disabling IE Protected Mode

This results in disabling Internet Explorer Protected Mode for each security zone in use. From there, Gootkit moves on to scanning all running processes until it locates an active browser. In order to do this, it imports and calls NtQuerySystemInformation(), requesting System Process Information, which returns a list of running processes. Using this list, Gootkit opens each process, checks the process architecture using IsWow64Process(), and then CRC-32 hashes the (uppercase) process name. This hash is then passed on to a function responsible for detection and injection. A list of targeted browsers and their corresponding hashes can be seen below.

Microsoft EdgeCP:     0x2993125A
Internet Explorer:    0x922DF04
Firefox:              0x662D9D39
Chrome:               0xC84F40F0
Opera:                0x3D75A3FF
Safari:               0xDCFC6E80 
Unknown:              0xEB71057E

image of browser injection
image of get browser hashes
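The hash-then-lookup step can be sketched as follows. The table values are those listed above; whether the sample hashes the bare name or includes the .exe suffix isn't stated, so treat the hashing detail as illustrative.

```python
import zlib

# CRC-32 values for targeted browsers, as listed in the analysis.
BROWSER_HASHES = {
    0x2993125A: "Microsoft EdgeCP",
    0x0922DF04: "Internet Explorer",
    0x662D9D39: "Firefox",
    0xC84F40F0: "Chrome",
    0x3D75A3FF: "Opera",
    0xDCFC6E80: "Safari",
}

def classify_process(name: str):
    """CRC-32 the upper-cased process name, then look it up in the table.
    Returns the browser name on a match, or None for untargeted processes."""
    h = zlib.crc32(name.upper().encode()) & 0xFFFFFFFF
    return BROWSER_HASHES.get(h)
```

Comparing 32-bit hashes instead of strings keeps the browser names out of the binary, a common obfuscation trick in loaders like this.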

The injection technique used by Gootkit is nothing special, and is quite common. The sample calls NtCreateSection, and then maps that section into the browser using NtMapViewOfSection. Both DLLs seem to be mapped into memory as well, regardless of architecture. Once the files have been injected, the function returns to the process-searching routine until another browser is detected. And that brings an end to the browser injection!

image of call Nt Create Section

MD5 of x86 DLL: 57e2f2b611d400c7e26a15d52e63fd7f
MD5 of x64 DLL: 7e9f9b2d12e55177fa790792c824739a

From a quick glance at the injected DLLs, they seem to contain a few functions that hook CertVerifyCertificateChainPolicy and CertGetCertificateChain, and they may also act as some form of proxy to intercept requests and redirect them based on information from the C2 server or the Node.js payload. My main reason for thinking so is that after infecting a VM with Gootkit, browsing the internet using Internet Explorer fails as if connections were being blocked by a proxy, although this does require further analysis.

image of gootkit pseudocode

In the next post, we will take a look at what happens when Gootkit is called with the --vwxyz argument, and then take a quick peek into the final Node.js payload that is retrieved from the Command and Control server!


Mews grabs $33M Series B to modernize hotel administration

If you think about the traditional hotel business, there hasn’t been a ton of innovation. You mostly still stand in a line to check in, and sometimes even to check out. You let the staff know about your desire for privacy with a sign on the door. Mews believes it’s time to rethink how hotels work in a more modern digital context, especially on the administrative side, and today it announced a $33 million Series B led by Battery Ventures.

When Mews Founder Richard Valtr started his own hotel in Prague in 2012, he wanted to change how hotels have operated traditionally. “I really wanted to change the way that hotel systems are built to make sure that it’s more about the experience that the guest is actually having, rather than facilitating the kind of processes that hotels have built over the last hundred years,” Valtr told TechCrunch.

He said most of the innovation in this space has been in the B2C area, using Airbnb as a prime example. He wants to bring that kind of change to the way hotels operate. “That’s essentially what Mews is trying to do. [We want to shift the focus to] the fundamental things about why we love to travel and why people actually love to stay in hotels, experience hotels, and be cared for by professional staff. We are trying to do that in a way that that actually delivers a really meaningful experience and personalized experience to that one particular customer,” he explained.

For starters, Mews is a cloud-based system that automates many of the manual tasks, like room assignments, that staff at many hotels still have to handle as part of their jobs. Valtr believes that by freeing the staff from these kinds of tedious activities, the system enables them to concentrate more on the guests.

It also offers ways for guests and hotels to customize their stays to get the best experience possible. Valtr says this approach brings a new level of flexibility that allows hotels to create new revenue opportunities, while letting guests choose the kind of stay they want.

From a guest perspective, they could bypass the check-in process altogether, sharing all of their registration details ahead of time and then getting a pass code sent to their phone to get into the room. The system integrates with third-party hotel booking sites like Expedia, as well as other services, through its open hospitality API, which offers lots of opportunities for properties to partner with local businesses.

The company is currently operating at 1,000 properties across 47 countries, but it lacks a presence in the US and wants to use this round to open an office in NYC and expand into this market. “We really want to attack the US market because that’s essentially where most of the decision makers for all of the major chains are. And we’re not going to change the industry if we don’t actually change the thinking of the biggest brands,” Valtr said.

Today, the company has 270 employees spread across 10 offices around the world. Headquarters are in Prague and London, but the company is in the process of opening that NYC office, and the number of employees will expand when that happens.

Our Take: SentinelOne Placed Furthest for Completeness of Vision Within the Visionaries Quadrant

“Keep your eyes on the stars and your feet on the ground.” – Franklin D. Roosevelt

Your Success is the Reason for Our Success

SentinelOne is proud to be recognized by Gartner as a Visionary in the Magic Quadrant for Endpoint Protection Platforms.  This year, we’re placed furthest for completeness of vision within the Visionaries quadrant.  

For our 2,500+ customers, we thank you for your trust in us!  For those who are evaluating solutions, whether replacing legacy AV, legacy EDR, or looking for their very first EDR solution, we encourage you to evaluate us. 

What those who deploy us know, and those who evaluate see, is our core philosophy.  We dare to do things and go places that our peer companies don’t for ONE single purpose: securing our customers and ensuring their success.  As our customers and competitors know, we’re a team committed to action, perseverance, and tireless pursuit of customer success. We dream big and deliver.

We Deliver Innovation Faster. We Move Cybersecurity Forward.

In the past year, we’ve delivered more than 300 product deliverables and released innovations that forced others to follow.  We believe that IoT devices are endpoints. SentinelOne Ranger allows enterprises to discover, map, and mitigate risk from their IoT devices from our same single agent platform.  We know that our customers are moving faster than ever to the cloud, and our native container and datacenter solution helps customers securely take the next step in their digital transformation.  Our eyes are constantly on the stars.

Results Speak for Themselves.

Back on the ground, we’re blocking millions of attacks each and every day.  Deployed in three of the Fortune 10 and hundreds of the Global 2000, SentinelOne is in hypergrowth with win rates north of 70% against each and every one of our competitors.  Today, we’re the fastest-growing endpoint vendor on the market. We believe this is evidenced by our reviews on Gartner Peer Insights, where we were recognized as a Nov. 2018 Gartner Peer Insights Customers’ Choice for EPP and a Jan. 2019 Gartner Peer Insights Customers’ Choice for EDR.*

Our promise is to always deliver:  innovation, quality, and results.  We believe our positioning in the 2019 Magic Quadrant is symbolic of these commitments that we make to our customers and the market as a whole.  Allow us to help you take your endpoint security – and cybersecurity in general – to places it’s never been before. We help you minimize enterprise risk by maximizing your team’s output with a fully autonomous solution.  Empower the people you have to have a deeper, richer product experience and focus on other security problems. Let us show you how to make dwell time a thing of the past with a proactive SOC working for you on every edge of your enterprise.

Speak with our experts and start your demo today!  With vision and execution, the stars are within reach.  Come with us:  we’ll help you get there.

Download a complimentary copy of the 2019 Gartner Magic Quadrant for Endpoint Protection Platforms


Source: Gartner, Magic Quadrant for Endpoint Protection Platforms, Peter Firstbrook, Dionisio Zumerle, Prateek Bhajanka, Lawrence Pingree, Paul Webber, 20 August 2019.

*Customers’ Choice for Endpoint Detection and Response Solutions: Customers’ Choice for Endpoint Protection Platforms: Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

macOS Incident Response | Part 2: User Data, Activity and Behavior

In the previous post in this intro series on macOS Incident Response, we looked at collecting device, file and system data. Today, in this second post, we’re going to take a look at retrieving data on user activity and behavior.

image of incident response part 2

Why Investigate User Behavior?

There are a few reasons why we might be interested in user behavior. First, there’s the possibility of either unintentional or malicious insider threats. What has the user been using the device for, what have they accessed, and who have they communicated with?

Second, ‘user behavior’ isn’t necessarily restricted to the authorized or designated user (or users), but also covers unauthorized users including remote and local attackers. Who has accounts on the device, when have they been accessed, and do those access times correlate with the pattern of behaviour we would expect to see from the authorized users? These are all questions that we would want to be able to answer.

Third, a lot of confidential and personal user data is stored away in hidden or obscure databases on macOS. While Apple have made some efforts recently to lock these down, many are still scrapable by processes running with the user’s privileges, but not necessarily their knowledge. By looking at these databases, what they contain and when they were accessed, we can get a sense of what data the company might have lost in an attack: everything from personal communications to contacts, web history, notes, notifications and more.

A Quick Review of SQLite

Although some of the data we will come across is in Apple’s property list (plist) format and, less commonly, in plain text files, most of the data we’re interested in is saved in SQLite databases. I am certainly no expert with SQL, but we can very quickly extract interesting data with a few simple commands and utilities. You can use the free DB Browser for SQLite if you want a GUI front end, or you can use the command line. I tend to use the command line for quick, broad-brush looks at what a database contains and turn to the SQLite Browser if I really want to dig deep and run fine-grained queries. Here are some very basic commands that serve me well.

sqlite3 /path/to/db .dump

This is my go-to command, which just pumps out everything in one (potentially huge) flood of data. It’s a great way to quickly look at what kind of info the database might contain. You can grep that output, too, if you’re looking for specific kinds of things like filepaths, email or URL addresses, and piping the output to a plain text file can make it easy to save and review if you don’t want to work directly on the command line all the time.
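To make the dump-and-grep pattern concrete without touching a real artifact, here is a sketch against a scratch database (the visits table and its contents are invented for illustration):

```shell
# Build a tiny scratch database so the commands can be tried safely
db=$(mktemp -d)/demo.db
sqlite3 "$db" "CREATE TABLE visits (url TEXT, ts INTEGER);
INSERT INTO visits VALUES ('https://example.com/login', 1567000000);"

# Dump everything, then grep the flood for URLs of interest
sqlite3 "$db" .dump | grep -i 'http'

# Or pipe the full dump to a file for offline review
sqlite3 "$db" .dump > dump.sql
```

The same pattern applies unchanged to any of the macOS databases discussed below; only the path changes.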

sqlite3 /path/to/db .tables

.tables gives you a sense of the different kinds of data stored and which tables might be most interesting to look at in detail.

sqlite3 /path/to/db 'select * from [tablename]'

Another of my go-to commands; this is the equivalent of doing a “dump” on a specific table.

sqlite3 /path/to/db .schema

This command is essential for understanding the structure of the tables in the database. The .schema command shows what columns each table contains and what kind of data they hold. We’ll look at an example of doing this below.
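Putting .tables and .schema together on a scratch database shows the workflow end to end (the Z-prefixed table names here mimic Core Data naming conventions but are invented):

```shell
db=$(mktemp -d)/demo.db
sqlite3 "$db" "CREATE TABLE ZCONTACTS (Z_PK INTEGER PRIMARY KEY, Z_OPT INTEGER, ZADDRESS TEXT);
CREATE TABLE ZKEYVALUE (ZKEY TEXT, ZVALUE TEXT);"

sqlite3 "$db" .tables    # lists both table names
sqlite3 "$db" .schema    # prints the CREATE TABLE statement for each
```

Reading the .schema output first makes the pipe-separated columns of a later 'select * from …' much easier to interpret.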

Finding Interesting Data on macOS

There are a few challenges when investigating user activity on the Mac, and the first is actually finding the databases of interest. Aside from the fact that they are littered all over the user and system folders, they can move around from one version of macOS to another and have been known to change structure from time to time.

In the last post, when we played with sysdiagnose, you may recall that one location the utility scraped logs from was /var/db. There’s user data in there, too. For example, in CoreDuet, you may find the Knowledge/knowledgeC.db and the People/interactionC.db. SANS macOS forensics instructor Sarah Edwards did a great post on mining the knowledgeC.db which I highly recommend. From it, you will be able to discern a great deal of information about the user’s Application usage and activity, browser habits and more. Some of this information we’ll also gather from other sources below, but the more corroborating evidence you can gather to base your interpretations on the better.

The interactionC.db may give you insight into the user’s email activity, something we will return to later in this post.

image of coreduet

In the meantime, let’s use this database for a simple example of how we can interpret the SQL databases in general. Start by changing directory to


and listing the contents with ls -al. You should see interactionC.db in the listing.

If we run .tables on this database, we can see it contains some interesting looking items.

image of interactionC database

Let’s dump everything from the ZCONTACTS table and have a look at the data. Each line has a form like this:


Sure, we can see the email address in plain text, but what does the rest of the data mean? This is where .schema helps us out. After running the .schema command, look for the CREATE TABLE ZCONTACTS schema in the output.

Each line tells us the column name and the kind of data the ZCONTACTS table accepts (e.g., the Z_OPT column takes integers). There are 20 possible columns in this table, and we can match those up with each column of the data extracted from the table earlier, where each column in that output is separated by a |. Here we also used the method of converting Cocoa timestamps to human-readable dates that we discussed in Part One.
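For reference, a Cocoa timestamp counts seconds from 2001-01-01 00:00:00 UTC, so the conversion is just a fixed-offset addition (the timestamp below is an invented example, not taken from the database above):

```shell
cocoa=588700800                  # example value from a date column
epoch=$((cocoa + 978307200))     # 978307200 = seconds between 1970 and 2001 epochs
# GNU date (Linux) takes -d @N; BSD date (macOS) takes -r N
date -u -d "@$epoch" 2>/dev/null || date -u -r "$epoch"
```
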

image of schema example

The data indicates that between 25 June and 28 August the recipient received 17 messages from the email address identified in fields 18 and 19.

However, a word of caution about interpretation. Until you are very familiar with how a given database is populated (and depopulated) over time, do not jump to conclusions about what you think it’s telling you. Could there have been more or fewer than 17 messages during that time? Unless you know what criteria the underlying process uses for including or removing a record in its database, that’s very difficult to say for sure. In a similar vein, note that the timestamps may not always be reliable either. You cannot assume that a single database is sufficient to establish a particular conclusion. That’s why corroborating evidence from other databases and other activity is essential. What we are looking at with these sources of data are indications of particular activity rather than cast-iron proof of it.

Databases in the User Library

A great deal of user data is held in various directories within the ~/Library folder. The following command will pump out an exhaustive list of .db files that can be accessed as the current user (try with sudo to see what extras you can get).

cd ~/Library; find . -type f -name "*.db" 2> /dev/null
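Many of these paths contain spaces, so quoting matters when you feed them to other tools. One way to sketch that safely is to let find invoke sqlite3 itself, printing each database's tables as it goes:

```shell
# Enumerate SQLite stores under ~/Library and list each one's tables;
# -exec hands find's (possibly space-laden) paths over safely quoted
find ~/Library -type f -name "*.db" 2>/dev/null -exec sh -c '
    printf "== %s ==\n" "$1"
    sqlite3 "$1" .tables 2>/dev/null
' _ {} \;
```

This gives you a quick triage map of which databases are worth a closer .schema and dump.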

However, here’s another difficulty if you’re working on Mojave or later. Since Apple brought in enhanced user protections, you may find some files off limits even with sudo. To get around that, you could try taking a trip to System Preferences and adding the Terminal to Full Disk Access. That’s assuming, of course, that there are no concerns about ‘contaminating’ a device with your own activity.

Dumping a list of all the possible databases might look daunting, but here are just a few of the more interesting Apple ones you might want to look at, on top of those associated with 3rd party software, email clients, browsers and so on.

./Application Support/Knowledge/knowledgeC.db
./Application Support/

Let’s look at a few examples. Surprisingly, Messages’ chat.db is entirely unprotected, so you can dump messages in plain text. You might be surprised to find just how unguarded people can be on informal chat platforms like this.

sqlite3 chat.db .tables

image of messages database tables

This user has basically left themselves open to compromise from any process running under their own user name.

sqlite3 chat.db 'select * from message'

image of messages

Mail is also completely readable once you dig down through the hierarchy of folders. Here the messages are not stored in a sqlite database, but use the .emlx format. These encode the email content in base64, which can easily be extracted and decoded.

image of email

You can save yourself a lot of time with emails by reading the snippets.db in the Suggestions folder. This folder contains databases that are meant to speed up predictive suggestions made by the OS in application searches (Contacts, Mail, etc.), as well as in Spotlight and the browser address bar. The snippets.db contains snippets of email conversations and contact information.

Sometimes you’ll get silent ‘permission denied’ issues on these databases, even when running as root with Terminal granted Full Disk Access. For example, in the image below, the file size of queue.db clearly indicates that there’s more data in there than I seem to be getting from sqlite3.

image of harvestqueue

When these kinds of things happen, a ‘quick and dirty’ solution can be to turn to either the strings command or the xxd utility to dump out the raw ASCII text and see if the contents are of interest.
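A self-contained sketch of that fallback (the queue.db built here is a stand-in for the real Suggestions store, and the inserted row is invented):

```shell
tmp=$(mktemp -d); cd "$tmp"
sqlite3 queue.db "CREATE TABLE q (item TEXT);
INSERT INTO q VALUES ('https://example.com/reset-password');"

# Carve printable text straight out of the file, even when the
# sqlite3 query interface comes back empty
strings queue.db | grep -i 'http'

# Sanity-check the header: a healthy SQLite file begins "SQLite format 3"
head -c 15 queue.db
```

If strings surfaces interesting fragments, it is worth pulling the file onto an analysis machine and digging into the raw pages properly.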

image of queue db

Mining the Darwin_User_Dir for Data

Apple hide some databases in an obscure folder inside /var/folders/. When logged in as a given user, you can leverage the DARWIN_USER_DIR configuration value (via getconf) to get there.

cd $(getconf DARWIN_USER_DIR)

Again, you may find that even with Terminal added to Full Disk Access, some directories remain off limits, even for the root user, as the SafariFamily folder appears to be.

image of root denied

In this case, we can’t even dump the strings because we cannot even get permission to list the file.

The only way to get access to these kinds of protected places is to turn off System Integrity Protection, which may or may not be something you are able to do, depending on the case.

image of sip disabled

Reading User Notifications, Blobs & Plists

One of the databases you’ll find in the folder that DARWIN_USER_DIR points to is the database that stores data from Notifications: messages sent from applications like Mail and Slack to the Notification Center, which appear as alerts and banners in the top right of the screen. Fortunately or unfortunately, depending on how you look at it, we don’t need special permissions to read this database.

If you’re logged in as the user whose Notifications you want to look at, the following command will take you to the directory where the sqlite database is located.

cd $(getconf DARWIN_USER_DIR)/

The Notifications database has changed format at some point, and if you list the contents of the directory you may see both a db and a db2 folder. Change directory into each in turn and run the .tables and .schema commands to compare the different structures. Both use blobs for the data, so you will need a couple of tricks to read these.

One way is to open the database in DB Browser for SQLite, click on the blob data and view the source in binary format. You can export that source as blob.bin and then use plutil -p blob.bin to output it in nice human-readable text.
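The same export can be scripted, assuming your sqlite3 build includes the writefile() extension (the stock CLI shell does). The ZRECORD table and ZDATA column below are placeholders standing in for whatever .schema reveals, and the blob is a synthetic stand-in:

```shell
# Stand-in database holding one binary blob
db=$(mktemp -d)/demo.db
sqlite3 "$db" "CREATE TABLE ZRECORD (ZDATA BLOB);
INSERT INTO ZRECORD VALUES (x'68656c6c6f');"    # blob spelling 'hello'

# writefile() dumps the blob to disk; on a real binary-plist blob,
# follow up with:  plutil -p blob.bin   (macOS only)
sqlite3 "$db" "select writefile('blob.bin', ZDATA) from ZRECORD limit 1" >/dev/null
cat blob.bin    # hello
```

This avoids the round trip through the GUI browser when you just want the blob on disk.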

image of blob in DQ SQLite browser

I’m usually in too much of a hurry to do all that. Instead, I’ll do something like

sqlite3 db 'select * from app_info'

to browse through the list of apps that have sent Notifications, then run a few greps on the entire database. For example, if I want to read Slack notifications, I can use something like this:

strings $(getconf DARWIN_USER_DIR)/ | grep -i -A4 slack

And of course I can just change ‘slack’ for ‘mail’ or whatever else looks interesting. I might then use the previous method with the DB Browser and plutil mentioned above to dig deeper.

Reading Data from Notes, More Blob Tricks

There’s plenty of application databases that we haven’t touched on, but one that I want to cover in this overview is Apple’s Notes. Not only might this be a good source of information about user activity, it’s also trickier to deal with than the other databases we’ve looked at.

We can find the Notes database in the ~/Library/Group Containers folder. Let’s quickly review the tables:

sqlite3 ~/Library/Group Containers/ .tables

image of notes

This somewhat byzantine-looking one-liner will dump all the user’s notes to stdout.

for i in $(sqlite3 ~/Library/Group Containers/ "select Z_PK from ZICNOTEDATA;"); do sqlite3 ~/Library/Group Containers/ "select writefile('body1.gz.Z', ZDATA) from ZICNOTEDATA where Z_PK = '$i';"; zcat body1.gz.Z ; done

Let’s take a look at how it works. The first part selects the Z_PK column (the primary keys, or unique identifiers, of the notes in the database) and then iterates over each one. The second part takes the primary key and, for each note in the ZICNOTEDATA table, extracts the ZDATA blob containing the note’s content. Next, writefile writes the blob to a temporary compressed file, and finally zcat decompresses it into plain text!
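Unrolled into a commented script, the same logic is easier to follow. The stand-in store built at the top (one gzip-compressed note) is purely for illustration; in practice you would point NOTES_DB at the real store under ~/Library/Group Containers and skip the setup lines:

```shell
# Demo stand-in for the Notes store: one gzip-compressed note blob
NOTES_DB=$(mktemp -d)/NoteStore.db
printf 'hello from a note' | gzip > note.gz
sqlite3 "$NOTES_DB" "CREATE TABLE ZICNOTEDATA (Z_PK INTEGER PRIMARY KEY, ZDATA BLOB);
INSERT INTO ZICNOTEDATA (ZDATA) VALUES (readfile('note.gz'));"

# The one-liner, unrolled: iterate primary keys, export each gzip
# blob with writefile(), then inflate it to stdout
sqlite3 "$NOTES_DB" "select Z_PK from ZICNOTEDATA;" |
while read -r pk; do
    sqlite3 "$NOTES_DB" \
        "select writefile('body.gz', ZDATA) from ZICNOTEDATA where Z_PK = $pk;" >/dev/null
    gunzip -c body.gz    # prints the note body
done
```

gunzip -c is the portable spelling of zcat; both inflate the gzip stream without deleting the temporary file.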

Finding Other Data Stores

If you are interested in a particular application or process but do not know what it uses for a backing store, if anything, there are a couple of investigative methods you can try. First, see if the process is running in Activity Monitor. If it is, click the Info button, select the ‘Open Files and Ports’ tab, and see where it’s writing to. You could also do the same thing with lsof on the command line.

If that doesn’t work, try running strings on the executable file and grepping for / to search for paths that the program might write to. If you’re still out of luck you may have to do a little more macOS reverse engineering to understand what the program is up to and find where it hides its data.
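Both methods sketched from the command line; the process name and binary path below are examples rather than fixed values, and the trailing || true keeps the pipelines quiet when nothing matches:

```shell
# 1. If the process is running, list its open files and filter for data stores
lsof -p "$(pgrep -x Safari | head -n 1)" 2>/dev/null | grep -Ei '\.db|\.sqlite|\.plist' || true

# 2. If not, carve candidate filesystem paths straight out of the binary
strings /Applications/Safari.app/Contents/MacOS/Safari 2>/dev/null | grep '^/' | sort -u || true
```
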


In this post, we’ve taken a tour of various places where macOS stores data on user activity and user behavior and reviewed some of the main ways that you can locate and extract this data for analysis. From Part 1 and Part 2 we have collected data on the device and on user(s) activity. But we also need to look at our device for evidence of manipulation by an attacker that can leave the system vulnerable to future exploitation. We’ll turn to that in the final part of the series. I hope to see you there!


SAP & Pricefx cover hot topics at TechCrunch’s Sept. 5 Enterprise show in SF

You can’t talk enterprise software without talking SAP, one of the giants in a $500 billion industry. And not only will SAP’s CEO Bill McDermott share insights at TC Sessions: Enterprise 2019 on September 5, but the company will also sponsor two breakout sessions.

The editors will sit down with McDermott and talk about SAP’s quick growth due, in part, to several $1 billion-plus acquisitions. We’re also curious to hear about his approach to acquisitions and his strategy for growing the company in a quickly changing market. No doubt he’ll weigh in on the state of enterprise software in general, too.

Now about those breakout sessions. They run in parallel to our Main Stage set, and we have a total of three do-not-miss presentations lined up. On September 5, you’ll enjoy three breakout sessions – two from SAP and one from Pricefx. You can check out the agenda for TC Sessions: Enterprise, but we want to shine a light on the sponsored sessions to give you a sense of the quality content you can expect:

  • Innovating for a Super-Human Future 
    Martin Wezowski (SAP)
    We talk about change, but what are the mechanics and the dynamics behind it? And how fast is it? The noted futurist will discuss how what it means to be an innovator is transforming faster than ever before, a transformation deeply rooted in the challenges and promises between cutting-edge tech and humanism. The symbiosis between human creativity & empathy and machine intelligence opens new worlds for our imagination in a time when “now” has never been so temporary, and helps us answer the question: “What is human, and what is work in a superhuman future?” (Sponsored by SAP)
  • Pricing from Day One
    Madhavan Ramanujam (Simon-Kucher & Partners), Gabriel Smith and Darius Jakubik (Pricefx) A key ingredient distinguishing top performing companies is clear focus on price. To maximize revenue and profits, pricing should be a C-level / boardroom consideration. To optimize pricing, you should think about price when determining which products and features to bring to market; put the people, process and technology in place to optimize it; and maintain flexibility to adjust strategy and tactics to respond to changing markets. By doing so, companies unlock the single greatest profit lever that exists. (Sponsored by Pricefx)
  • Cracking the Code: From Startup to Scaleup in Enterprise Software 
    Ram Jambunathan (SAP.iO), Lonnie Rae Kurlander (Medal), Caitlin MacGregor (Plum) and Dimitri Sirota (BigID) The startup journey is hard. Data shows that 70% of upstart tech companies fail, while only 1% of these startups will go on to gain unicorn status. Success in enterprise software often requires deep industry experience, strong networks, brutally efficient execution and a bit of luck. This panel brings together three successful SAP.iO Fund-backed enterprise startups for an open discussion on lessons learned, challenges of scaling and why the right strategic investors or partners can be beneficial even at early stages. (Sponsored by SAP)

TC Sessions: Enterprise 2019 takes place in San Francisco on September 5. It’s a jam-packed day (agenda here) filled with interviews, panel discussions and breakouts — from some of the top minds in enterprise software. Buy your ticket today and remember: You receive a free Expo-only pass to TechCrunch Disrupt SF 2019 for every ticket you buy.

How to move from VP of Sales to CRO with leading exec recruiter David Ives

It wasn’t so long ago that sales meant just showing up with a deck and a smile. These days, it seems that sales leaders almost need a PhD in statistics just to get through the typical day managing a sales funnel. From SQLs and MQLs to NDRR and managing overall retention, the roles of VP of Sales and Chief Revenue Officers (CROs) are evolving rapidly in tandem with the best practices of SaaS startups.

Few people know this world better than David Ives, who is a partner at True Search, one of the top executive recruiting firms in the country where he co-leads the go-to-market practice. David has led countless CRO and VP of Sales searches, and in the process, has learned not just what CEOs and boards are looking for, but also the kinds of skills that candidates need to shine in these important career inflection points.

David Ives Photo

David Ives. Image via True Search

In our conversation, we talk about the evolving nature of the sales org, how leaders can best position themselves for future advancement, what companies are looking for today in new executive sales hires, and compensation changes in the industry.

This interview has been extensively edited and condensed for clarity.

Introduction and background

Danny: Why don’t we start with your background — how did you get into recruiting?

David: So my background was definitely unique. I started as an enterprise sales rep of the truest form selling subscription-based data analytics and systems into capital markets, so into investment banks, trading desks, hedge funds, asset managers, portfolio managers — you name it. Then I drifted purposely, intentionally away from capital markets and did about four different growth technology companies. I landed at NewsCred, and it was a neat time — it was really the birth of the startup landscape with the whole Flatiron district in New York.

Later, I was looking for my next CRO opportunity and was networking with some of the investor folks that I knew. I had a friend of mine who was a talent partner at a private equity firm who said to me, “I’ve always thought that you’d be really good at this and we’re starting to push for our search firms to have operators.” I went and met with Brad and Joe [founders of True], and three weeks later I was in the seat.

Danny: That’s great. And what do you do at True?

David: Well, we moved to a specialization model right when I got here. I don’t know if I was the test case or not, but I didn’t know search, so my skillset was that I knew the role. I run our go-to-market practice with another partner, and we have probably 40, 45 people in that group. We focus exclusively on sales, marketing, customer success, we’ll do biz dev. I probably skew more to CRO than anything else, but I do CMO and VP of marketing as well, and then I do a handful of business development, chief client officers, and VPs of customer success a year. That’s my mix basically.

What is the skillset of a modern CRO?

Danny: You’ve been in the sales leadership space for a long time, and you’ve been in the recruiting space for a couple of years. What are some of the changes that you’re seeing today in terms of candidates, skills, and experiences?

David: I think a big change has been from what I call a backend pipeline manager to what I would call a full funnel manager.