Secret Service Investigates Breach at U.S. Govt IT Contractor

The U.S. Secret Service is investigating a breach at a Virginia-based government technology contractor that saw access to several of its systems put up for sale in the cybercrime underground, KrebsOnSecurity has learned. The contractor claims the access being auctioned off was to old test systems that do not have direct connections to its government partner networks.

In mid-August, a member of a popular Russian-language cybercrime forum offered to sell access to the internal network of a U.S. government IT contractor that does business with more than 20 federal agencies, including several branches of the military. The seller bragged that he had access to email correspondence and credentials needed to view databases of the client agencies, and set the opening price at six bitcoins (~USD $60,000).

A review of the screenshots posted to the cybercrime forum as evidence of the unauthorized access revealed several Internet addresses tied to systems at the U.S. Department of Transportation, the National Institutes of Health (NIH), and U.S. Citizenship and Immigration Services (USCIS), a component of the U.S. Department of Homeland Security that manages the nation’s naturalization and immigration system.

Other domains and Internet addresses included in those screenshots pointed to Miracle Systems LLC, an Arlington, Va.-based IT contractor that states on its site that it serves 20+ federal agencies as a prime contractor, including the aforementioned agencies.

In an interview with KrebsOnSecurity, Miracle Systems CEO Sandesh Sharda confirmed that the auction concerned credentials and databases managed by his company, and that an investigating agent from the Secret Service was in his firm’s offices at that very moment looking into the matter.

But he maintained that the purloined data shown in the screenshots was years old and mapped only to internal test systems that were never connected to its government agency clients.

“The Secret Service came to us and said they’re looking into the issue,” Sharda said. “But it was all old stuff [that was] in our own internal test environment, and it is no longer valid.”

Still, Sharda did acknowledge information shared by Hold Security, the Wisconsin-based security firm that alerted KrebsOnSecurity to this incident, indicating that at least eight of Miracle Systems’ internal systems had been compromised on three separate occasions between November 2018 and July 2019 by Emotet, a malware strain usually distributed via malware-laced email attachments and typically used to deploy other malicious software.

The Department of Homeland Security did not respond to requests for comment, nor did the Department of Transportation. A spokesperson for the NIH said the agency had investigated the activity and found it was not compromised by the incident.

“As is the case for all agencies of the Federal Government, the NIH is constantly under threat of cyber-attack,” NIH spokesperson Julius Patterson said. “The NIH has a comprehensive security program that is continuously monitoring and responding to security events, and cyber-related incidents are reported to the Department of Homeland Security through the HHS Computer Security Incident Response Center.”

One of several screenshots offered by the dark web seller as proof of access to a federal IT contractor later identified as Arlington, Va.-based Miracle Systems. Image: Hold Security.

The dust-up involving Miracle Systems comes amid much hand-wringing among U.S. federal agencies about how best to beef up and ensure security at a slew of private companies that manage federal IT contracts and handle government data.

For years, federal agencies had few options to hold private contractors to the same security standards to which they must adhere — beyond perhaps restricting how federal dollars are spent. But recent updates to federal acquisition regulations allow agencies to extend those same rules to vendors, enforce specific security requirements, and even kill contracts that are found to be in violation of specific security clauses.

In July, DHS’s Customs and Border Protection (CBP) suspended all federal contracts with Perceptics, a contractor that sells license-plate scanners and other border control equipment, after data collected by the company was made available for download on the dark web. CBP later said the breach was the result of a federal contractor copying data onto its corporate network, which was subsequently compromised.

For its part, the Department of Defense recently issued long-awaited cybersecurity standards for contractors who work with the Pentagon’s sensitive data.

“This problem is not necessarily a tier-one supply level,” DOD Chief Information Officer Dana Deasy told the Senate Armed Services Committee earlier this year. “It’s down when you get to the tier-three and the tier-four” subcontractors.

The Good, the Bad and the Ugly in Cybersecurity – Week 36


The Good

The hero of the day is the city of New Bedford, Massachusetts. After (another) typical ransomware attack on a US city, this time using the Ryuk ransomware and demanding a $5.3 million ransom, the city negotiated the price down to $400k and eventually moved on to recreate its server infrastructure without paying the extortionists anything. Every paid ransom guarantees another attack: criminals won’t give up on easy sources of money, and until organizations have proper defenses in place and refuse to give in, the attacks will keep coming. So well done, New Bedford! Authorities in Flagstaff have also held out against a ransomware attack so far, temporarily closing schools on Thursday while responding to the incident.


The Bad

Last week it was iOS, this week it’s Android’s turn. Around 50% of all Android smartphones in use (those from vendors Huawei, LG, Samsung and Sony) allow attackers to easily trick users into changing their phone settings to route internet traffic through a man-in-the-middle proxy. Attackers are able to send over-the-air (OTA) provisioning profiles through phishing messages that can appear to come from the user’s network provider. The provisioning profiles, once installed, allow attackers to change settings for the device’s MMS message server, proxy address, email server and browser homepage, among other things. While Huawei, LG and Samsung have either addressed the issue or plan to, the researchers reported that Sony has refused to acknowledge the vulnerability. 


The Ugly

There is nothing worse for a security product than to pave the way for malicious actors to gain entry into enterprise networks. That has happened before: back in 2013, Bit9 (later to become Carbon Black) had a certificate stolen and ended up signing malware used to bypass its own whitelisting product. This week, it was revealed that Fortinet and Pulse Secure have been targeted by the Chinese threat group APT5, which is seeking to exploit vulnerabilities in both companies’ products. A Black Hat presentation last month detailed bugs in their SSL VPN implementations. Despite the vendors having patched the vulnerabilities urgently back in May and issued further reminders since the Black Hat conference, some of the 500,000+ organizations affected have left themselves open to attack by failing to keep up and patch in time.




Battlefield winner Forethought adds tool to automate support ticket routing

Last year at this time, Forethought won the TechCrunch Disrupt Battlefield competition. A $9 million Series A investment followed last December. Today at TechCrunch Sessions: Enterprise in San Francisco, the company introduced the latest addition to its platform, called Agatha Predictions.

Forethought CEO and co-founder Deon Nicholas said that after launching its original product, Agatha Answers (to provide suggested answers to customer queries), customers were asking for help with the routing part of the process, as well. “We learned that there’s a whole front end of that problem before the ticket even gets to the agent,” he said. Forethought developed Agatha Predictions to help sort the tickets and get them to the most qualified agent to solve the problem.

“It’s effectively an entire tool that helps triage and route tickets. So when a ticket is coming in, it can predict whether it’s a high-priority or low-priority ticket and which agent is best qualified to handle this question. And this all happens before the agent even touches the ticket. This really helps drive efficiencies across the organization by helping to reduce triage time,” Nicholas explained.
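As a rough sketch of the triage-and-routing step Nicholas describes (this is not Forethought’s implementation; a simple keyword score stands in for the company’s NLU model, and the agent and term lists are invented for illustration), the logic reduces to scoring a ticket’s priority and matching it to the best-suited agent:

```python
# Toy sketch of ticket triage and routing. Not Forethought's code: the keyword
# scoring below merely stands in for an NLU model that would read real tickets.
from dataclasses import dataclass

HIGH_PRIORITY_TERMS = {"outage", "down", "data loss", "security", "cannot log in"}

@dataclass
class Agent:
    name: str
    skills: set  # topics this agent is qualified to handle

def classify_priority(ticket_text: str) -> str:
    """Label a ticket high or low priority based on trigger phrases."""
    text = ticket_text.lower()
    return "high" if any(term in text for term in HIGH_PRIORITY_TERMS) else "low"

def route_ticket(ticket_text: str, agents: list) -> Agent:
    """Pick the agent whose skill keywords best overlap the ticket text."""
    text = ticket_text.lower()
    return max(agents, key=lambda a: sum(skill in text for skill in a.skills))

agents = [
    Agent("billing-team", {"invoice", "refund", "charge"}),
    Agent("infra-team", {"outage", "latency", "down"}),
]
ticket = "Our dashboard is down and customers cannot log in"
print(classify_priority(ticket), "->", route_ticket(ticket, agents).name)
```

In a production system the scoring would come from trained models, but the surrounding workflow, classifying and routing before any agent touches the ticket, is the part Nicholas is describing.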

The original product (Agatha Answers) is designed to help agents get answers more quickly and reduce the amount of time it takes to resolve an issue. “It’s a tool that integrates into your Help Desk software, indexes your past support tickets, knowledge base articles and other [related content]. Then we give agents suggested answers to help them close questions with reduced handle time,” Nicholas said.

He says that Agatha Predictions is based on the same underlying AI engine as Agatha Answers. Both use Natural Language Understanding (NLU) developed by the company. “We’ve been building out our product, and the Natural Language Understanding engine, the engine behind the system, works in a very similar manner [across our products]. So as a ticket comes in the AI reads it, understands what the customer is asking about, and understands the semantics, the words being used,” he explained. This enables them to automate the routing and supply a likely answer for the issue involved.

Nicholas maintains that winning Battlefield gave his company a jump start and a certain legitimacy it lacked as an early-stage startup. Lots of customers came knocking after the event, as did investors. The company has grown from five employees when it launched last year at TechCrunch Disrupt to 20 today.

Newly renamed Superside raises $3.5M for its outsourced design platform

Superside, a startup aiming to create a premium alternative to the existing crowdsourced design platforms, is announcing that it has raised $3.5 million in new funding.

It’s also adding new features like the ability to work on user interfaces, interaction design and motion graphics. Co-founder and CEO Fredrik Thomassen said this allows the company to offer “a full-service design solution.”

You may have heard about Superside under its old name, Konsus. In a blog post, Thomassen explained the recent change in name and branding, writing, “We changed our name and look to align with what we had become: The world’s top team of international designers and creatives.”

He told me Superside was created to address his own frustrations after trying to use marketplaces like 99designs and Fiverr. He argued that there’s a problem with “adverse selection on those platforms.” In other words, “The best people … don’t remain, because they don’t have a career path — they’re fighting with other freelancers to get the jobs.”

Superside, on the other hand, is picky about the designers it works with — it claims to select 100 designers from the more than 50,000 applications it receives each year. But if they are accepted, they’re guaranteed full-time work.


Thomassen said the platform is built for large enterprises that have their own design and marketing teams but still need additional support. Customers include Uber, LinkedIn, L’Oreal, Cisco, Santander, Amazon, Walmart, Tiffany & Co., Hewlett Packard and Airbus.

In addition to choosing good designers, Superside also built a broader project management platform.

“We’re basically automating everything: Finding people, screening people, on-boarding, on-the-job learning, invoicing of customers, project management, all of the nitty gritty,” Thomassen said. “The only thing not automated is design — that’s where the human element and the creativity come in.”

Plus, Thomassen said Superside can turn around a standard piece of artwork in 12 hours: “Nobody else can do what we’re doing in terms of speed.”

The new funding comes from Freestyle Capital, with participation from High Alpha Ventures, Y Combinator and Alliance Ventures.

“We’re very much a mission-driven company,” Thomassen added. “For me, the reason to go to work in the morning is to help build an online labor market and create equal economic opportunity for everyone in the world.”

Top VCs on the changing landscape for enterprise startups

Yesterday at TechCrunch’s Enterprise event in San Francisco, we sat down with three venture capitalists who spend a lot of their time thinking about enterprise startups. We wanted to ask what trends they are seeing, what concerns they might have about the state of the market and, of course, how startups might persuade them to write out a check.

We covered a lot of ground with the investors — Jason Green of Emergence Capital, Rebecca Lynn of Canvas Ventures and Maha Ibrahim of Canaan Partners — who told us, among other things, that startups shouldn’t expect a big M&A event right now, that there’s no first-mover advantage in the enterprise realm and why grit may be the quality that ends up keeping a startup afloat.

On the growth of enterprise startups:

Jason Green: When we started Emergence 15 years ago, we saw maybe a few hundred startups a year, and we funded about five or six. Today, we see over 1,000 a year; we probably do deep diligence on 25.

APIs are the next big SaaS wave

While the software revolution started out slowly, over the past few years it has exploded, and the fastest-growing segment to date has been the shift toward software as a service, or SaaS.

SaaS has dramatically lowered the intrinsic total cost of ownership for adopting software, solved scaling challenges and taken away the burden of issues with local hardware. In short, it has allowed a business to focus primarily on just that — its business — while simultaneously reducing the burden of IT operations.

Today, SaaS adoption is increasingly ubiquitous. According to IDG’s 2018 Cloud Computing Survey, 73% of organizations have at least one application or a portion of their computing infrastructure already in the cloud. While this software explosion has created a whole range of downstream impacts, it has also caused software developers to become more and more valuable.

The increasing value of developers has meant that, like traditional SaaS buyers before them, they have come to better intuit the value of their time and increasingly prefer businesses that can help alleviate the hassles of procurement, integration, management and operations. Developers’ needs around those hassles, however, are specialized.

They are looking to deeply integrate products into their own applications, and to do so they need access to an Application Programming Interface, or API. Best practices for API onboarding include technical documentation, examples and sandbox environments for testing.

APIs tend to also offer metered billing upfront. For these and other reasons, APIs are a distinct subset of SaaS.
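As a hedged sketch of what that looks like from the developer’s side, the snippet below calls a hypothetical API from its sandbox; the base URL, key and fields are invented for illustration, and each successful call is the unit a metered bill would later count:

```python
# Hypothetical sketch of exercising an API from its documented sandbox.
# The base URL, test key and payload fields are invented for illustration.
import requests

SANDBOX_URL = "https://sandbox.api.example.com/v1/enrichments"  # hypothetical
TEST_KEY = "test_sk_123"  # sandbox keys are typically free and rate-limited

resp = requests.post(
    SANDBOX_URL,
    headers={"Authorization": f"Bearer {TEST_KEY}"},
    json={"email": "jane@example.com"},
    timeout=10,
)
resp.raise_for_status()
# Under metered billing, calls like this one become the billable unit once the
# integration switches from sandbox to production keys.
print(resp.json())
```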

For fast-moving developers building on a global scale, APIs are no longer a stop-gap to the future; they’re a critical part of their strategy. Why would you dedicate precious resources to recreating something in-house that’s done better elsewhere when you can instead focus your efforts on creating a differentiated product?

Thanks to this mindset shift, APIs are on track to create another SaaS-sized impact across all industries and at a much faster pace. By exposing often complex services as simplified code, API-first products are far more extensible, easier for customers to integrate into, and have the ability to foster a greater community around potential use cases.


Graphics courtesy of Accel

Billion-dollar businesses building APIs

Whether you realize it or not, chances are that your favorite consumer and enterprise apps—Uber, Airbnb, PayPal, and countless more—have a number of third-party APIs and developer services running in the background. Just like most modern enterprises have invested in SaaS technologies for all the above reasons, many of today’s multi-billion dollar companies have built their businesses on the backs of these scalable developer services that let them abstract everything from SMS and email to payments, location-based data, search and more.

Simultaneously, the entrepreneurs behind these API-first companies like Twilio, Segment, Scale and many others are building sustainable, independent—and big—businesses.

Valued today at over $22 billion, Stripe is the biggest independent API-first company. Stripe took off because of its initial laser focus on the developer experience of setting up and taking payments. It was even initially known as /dev/payments!

Stripe spent extra time building the right, idiomatic SDKs for each language platform, along with beautiful documentation. But it wasn’t just those things: Stripe rebuilt an entire business process around being API-first.

Companies using Stripe didn’t need to fill out a PDF and set up a separate merchant account before getting started. Once sign-up was complete, users could immediately test the API with a sandbox and integrate it directly into their application. Even pricing was different.
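To give a sense of how low that barrier is, here is a sketch of the kind of test-mode request a developer can send to Stripe’s public REST API shortly after signing up; the secret key below is a placeholder, and the parameters are deliberately minimal:

```python
# Sketch of a Stripe test-mode call made directly against the REST API, the
# sort of request a developer can try minutes after getting sandbox keys.
# The key is a placeholder; test keys start with "sk_test_" and move no money.
import requests

resp = requests.post(
    "https://api.stripe.com/v1/payment_intents",
    auth=("sk_test_PLACEHOLDER", ""),   # secret key as the basic-auth username
    data={
        "amount": 2000,                 # $20.00, expressed in cents
        "currency": "usd",
        "payment_method_types[]": "card",
    },
    timeout=10,
)
print(resp.status_code, resp.json())
```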

Stripe chose to simplify pricing dramatically by starting with a single, simple price for all cards and not breaking out cards by type even though the costs for AmEx cards versus Visa can differ. Stripe also did away with a monthly minimum fee that competitors had.

Many competitors used the monthly minimum to offset the high cost of support for new customers who weren’t necessarily processing payments yet. Stripe flipped that on its head. Developers integrate Stripe earlier than they integrated payments before, and while it costs Stripe a lot in setup and support costs, it pays off in brand and loyalty.

Checkr is another excellent example of an API-first company vastly simplifying a massive yet slow-moving industry. Very little had changed over the last few decades in how businesses ran background checks on their employees and contractors, a process involving manual paperwork and third-party services that spent days verifying an individual.

Checkr’s API gives companies immediate access to a variety of disparate verification sources and allows them to plug Checkr into their existing onboarding and HR workflows. It’s used today by more than 10,000 businesses, including Uber, Instacart and Zenefits.

Like Checkr and Stripe, Plaid provides a similar value proposition to applications in need of banking data and connections, abstracting away banking relationships and the complexities brought on by a lack of technology in a category dominated by hundred-year-old banks. Plaid has shown an incredible ramp over the past three years, from closing a $12 million Series A in 2015 to reaching a valuation of over $2.5 billion this year.

Today the company is fueling an entire generation of financial applications, all on the back of its well-built API.


Graphics courtesy of Accel

Then and now

Accel’s first API investment was in Braintree, a mobile and web payment system for e-commerce companies, in 2011. Braintree eventually sold to, and became an integral part of, PayPal as it spun out from eBay and grew to be worth more than $100 billion. Unsurprisingly, it was shortly thereafter that our team decided it was time to go big on the category. By the end of 2014 we had led the Series As in Segment and Checkr, and we followed those investments with our first APX conference in 2015.

At the time, Plaid, Segment, Auth0 and Checkr had only raised seed or Series A financings. Today, we are even more excited and bullish on the space. To convey just how much API-first businesses have grown in such a short period of time, we thought it would be useful to share some metrics from the past five years, which we’ve broken out in the two visuals included in this article.

While SaaS may have pioneered the idea that the best way to do business isn’t to build everything in-house, today we’re seeing APIs amplify this theme. At Accel, we firmly believe that APIs are the next big SaaS wave, with as much impact as their predecessor, if not more, thanks to developers at today’s fastest-growing startups and their preference for API-first products. We’ve actively continued to invest in the space (in companies like Scale, mentioned above).

And much like how a robust ecosystem developed around SaaS, we believe that one will continue to develop around APIs. Given the amount of progress that has happened in just a few short years, Accel is hosting our second APX conference to once again bring together this remarkable community and continue to facilitate discussion and innovation.


Graphics courtesy of Accel

Gootkit Banking Trojan | Part 3: Retrieving the Final Payload

Gootkit’s final payload contains multiple Node.js scripts. Join Daniel Bunce as he reverse engineers the malware to take a deeper look at what it delivers.

The Gootkit Banking Trojan was discovered back in 2014, and utilizes the Node.js library to perform a range of malicious tasks, from website injections and password grabbing, all the way up to video recording and remote VNC capabilities. Since its discovery in 2014, the actors behind Gootkit have continued to update the codebase to slow down analysis and thwart automated sandboxes.

In Part 1 and Part 2 we looked at Gootkit’s anti-analysis features and its ability to maintain persistence. In this post, we’ll reverse the routine Gootkit performs to download and execute the Node.js final payload. We’ll also see how to extract the JS Scripts from the executable and take a brief look at some interesting scripts.

MD5 of Packed Sample: 0b50ae28e1c6945d23f59dd2e17b5632

The --vwxyz Argument

As covered previously, Gootkit contains several arguments that may or may not influence the execution of the process. The most interesting argument inside this sample is --vwxyz. Upon execution, Gootkit will re-execute itself, passing --vwxyz as an argument. This will kick off the function responsible for retrieving the final Node.js payload from the C2 server, decrypting and decompressing it, and finally, executing it.

The payload retrieval function isn’t unusual; in fact, it uses the same connection function I covered in the previous post. Interestingly, though, it performs two requests to the C2 server, first requesting /rbody320 and then requesting /rbody32. Even though the sample of Gootkit I have been analyzing is fairly recent, its C2 server was shut down quite quickly, so I used ImaginaryC2, a tool developed by Felix Weyne, to simulate the Gootkit C2 server and analyze the network-related pathways. As a result, the Node.js payload may not exactly match this sample; however, it is also fairly new.

Before reaching out to the C2, however, Gootkit will first examine the registry to check whether the payload has already been downloaded. Once Gootkit downloads the final stage, it writes it to the registry, specifically to the Software\AppDataLow key. Instead of storing the whole binary under one value, it splits the payload into several parts and writes each part to a value named bthusrde_x, where x is incremented by 1 for each part of the file. If the registry is already filled with the encrypted payload, Gootkit will decrypt and decompress the payload and then execute it. However, rather than skipping the communications routine, Gootkit will still reach out to the server to check that it is running the latest version of the final stage.

Upon the first request to the C2 server, a CRC-32 hash of the Node.js payload hosted by the server is returned. In this case, the value is 0xB4DC123B, although it will differ between campaigns, as the payload can change. The hex value is first compared to 0xFFFFFFFF, and if the Software\AppDataLow\bthusrde_0 registry value is present, the sample will read the local encrypted payload into memory and call RtlComputeCrc32 on the data. This hash is then compared to the hash sent from the C2, and if it matches, the process will sleep for a randomly generated amount of time before repeating the check.
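For analysts, that freshness check can be approximated with a short script run against an infected host. The sketch below is not Gootkit’s code: the hive is assumed to be HKCU, the chunked values are assumed to be stored as raw bytes, and Python’s standard CRC-32 is assumed to line up with the parameters RtlComputeCrc32 is called with here.

```python
# Sketch: rebuild the cached payload from its chunked registry values and
# compute the CRC-32 that Gootkit compares against the value from /rbody320.
# Assumptions: HKCU hive, REG_BINARY data, standard CRC-32 parameters.
import binascii
import winreg

KEY_PATH = r"Software\AppDataLow"
VALUE_PREFIX = "bthusrde_"

def read_cached_payload() -> bytes:
    """Concatenate the chunked registry values back into one encrypted blob."""
    chunks, index = [], 0
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
        while True:
            try:
                data, _ = winreg.QueryValueEx(key, f"{VALUE_PREFIX}{index}")
            except FileNotFoundError:
                break
            chunks.append(data)
            index += 1
    return b"".join(chunks)

payload = read_cached_payload()
local_crc = binascii.crc32(payload) & 0xFFFFFFFF
print(f"cached payload: {len(payload)} bytes, CRC-32 0x{local_crc:08X}")
```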

If the registry key isn’t present, then the system has not been infected before. Therefore, Gootkit will reach out to the C2 server once more, appending /rbody32 to the URL.

Once the final stage has been successfully downloaded, it will be written to Software\AppDataLow\bthusrde_x in chunks; in this case, a total of 9 registry values were created to hold the entire binary. Once it has been written to the registry, the downloader will decrypt and decompress it in memory. The decryption function is the same one used to decrypt the configuration, and to decompress the payload, Gootkit will load and call RtlDecompressBuffer. After decryption and decompression, the resulting file is very large, roughly 5 megabytes, because it contains a large number of embedded Node.js scripts plus the interpreter required to execute them.
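For anyone rebuilding the payload offline, the decompression half of that step can be reproduced from Python with ctypes. This is a sketch under assumptions: the article does not name the compression format, so LZNT1 (RtlDecompressBuffer’s most common case) is assumed, and the input is the already-decrypted blob recovered from the registry.

```python
# Sketch of replaying Gootkit's decompression step offline via ntdll.
# Assumption: the sample uses the LZNT1 format with RtlDecompressBuffer.
import ctypes
from ctypes import wintypes

COMPRESSION_FORMAT_LZNT1 = 0x0002  # assumed format for this sample

def rtl_decompress(compressed: bytes, max_out: int = 16 * 1024 * 1024) -> bytes:
    """Decompress an LZNT1 buffer using the same API the downloader calls."""
    ntdll = ctypes.WinDLL("ntdll")
    out_buf = ctypes.create_string_buffer(max_out)
    final_size = wintypes.ULONG(0)
    status = ntdll.RtlDecompressBuffer(
        wintypes.USHORT(COMPRESSION_FORMAT_LZNT1),
        out_buf,
        wintypes.ULONG(max_out),
        ctypes.c_char_p(compressed),
        wintypes.ULONG(len(compressed)),
        ctypes.byref(final_size),
    )
    if status != 0:  # anything other than STATUS_SUCCESS
        raise RuntimeError(f"RtlDecompressBuffer failed: 0x{status & 0xFFFFFFFF:08X}")
    return out_buf.raw[: final_size.value]
```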

With the executable now fully decrypted and decompressed, the downloader will copy over the local configuration. To do so, it looks for the placeholder DDDD inside the downloaded executable, and once located, it will copy over the URLs inside the configuration using lstrcpyA().

When it comes to executing the prepared payload, Gootkit takes a rather unusual approach. Instead of injecting it into another process or executing it as its own process, the downloader will allocate memory inside of its own process and map the payload into it, before executing it through a call to eax. If the Node.js payload ever exits, the downloader will simply loop around, decrypting the payload stored in the registry and executing that.

With the downloading function covered, let’s move over to analyzing the final stage of Gootkit.

Node.js Payload

Similar to Python and many other scripting languages that can be compiled, JavaScript executables contain both the scripts created by the developer and an interpreter required to execute them. As a result, it is entirely possible to extract each script used by Gootkit to perform its nefarious tasks; it’s just a matter of finding them. Luckily for us, that isn’t very difficult to do.

As we are looking for fairly large chunks of (possibly encrypted) data rather than machine code, it should only take some quick searching in IDA to locate strings such as spyware, malware, gootkit_crypt and vmx_detection. Cross-referencing these strings leads to what seems to be a large list of arrays in the .data section. Each array contains a string, such as gootkit_crypt, a pointer to an address in the executable containing a chunk of encrypted data, and finally the size of that encrypted data.

Further cross referencing leads us to the function responsible for decrypting the scripts.

Each script is encrypted with RC4 and compressed with zlib. However, the RC4 implementation is slightly different from the norm: Gootkit uses a custom RC4 keystate to scramble a generated array containing the values 0 to 255, where under normal circumstances a key would be used to scramble the array. As a result, this keystate needs to be incorporated into any script-decryption tools that are developed.

After decryption, the first 4 bytes of the data contain the size of the data to decompress, so make sure to discard them before decompression to avoid issues with incorrect headers. You can check out a full example of the Python decryption script here.
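A minimal sketch of that routine is below. The custom keystate bytes are sample-specific and must be dumped from the binary (a placeholder is used here), and the sketch follows the description above by feeding the keystate into the key-scheduling step in place of an ordinary key; confirm against the dumped function before relying on it.

```python
# Minimal sketch of the script-decryption routine described above. The 256-byte
# keystate is a placeholder: dump the real one from the sample before use.
import zlib

CUSTOM_KEYSTATE = bytes(range(256))  # placeholder for the dumped keystate

def rc4(keystate: bytes, data: bytes) -> bytes:
    # Key-scheduling: scramble the 0..255 array using the hardcoded keystate
    # in place of an ordinary key, per the description above.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + keystate[i % len(keystate)]) % 256
        S[i], S[j] = S[j], S[i]
    # Standard PRGA.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decrypt_script(blob: bytes) -> bytes:
    decrypted = rc4(CUSTOM_KEYSTATE, blob)
    # The first 4 bytes are a size field, not part of the zlib stream.
    return zlib.decompress(decrypted[4:])
```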

So, now we have each decrypted JavaScript file and, thanks to the embedded strings, the names of each script! As focusing on each and every script would take forever, I’ll only be taking a look at these few scripts in this post: malware.js, uploader.js, gootkit_crypt.js and vmx_detection.js.

Malware.js

The malware.js script acts as the entry point for the JavaScript section of Gootkit. It is responsible for querying the registry for values added by the previous stage, namely mainprocessoverride and vendor_id, as well as assigning values to global variables such as g_botId, which is set either to 8.666.jet or to a custom value if one is given.

What is particularly interesting about the script is that certain functions have been commented out by the actor behind it, such as a call to IsSuspProcess(), which checks running processes for pythonw.exe and pos_trigger.exe, compares the USERDOMAIN to 7SILVIA, and calls a function responsible for checking for a Virtual Machine. This gives the impression that the actor is focused on infecting as many machines as possible, regardless of whether they are sandboxes or virtual machines; or perhaps the checks were raising too many false positives and removing them was the best option.


This script will also check in with the C2 server, appending /200 to the URL. There is also a lot of logging inside, specifically calls to the logging function dlog() and even console.log(), primarily for debugging purposes. dlog() will check whether the environment variable debug_main is set to true, and if it is, everything will be logged to the console.

Uploader.js

Uploader.js, as the name suggests, is responsible for uploading files to a remote C2 server. The first function, uploadFile(), will upload files to the C2 server, appending /upload to the URL. The headers are all hardcoded, including the value X-Bot-ID, which contains the machine GUID.

The next function, uploadLogFile(), will upload a log file to the server, appending /logfile to the URL. There is also another log file upload function, uploadLogFile2, which will upload a file to the C2 server with /logfile2 appended. The headers are once again hardcoded; however, there is an added header, X-File-Prefix.

Finally, there is a function labelled dope() inside the script which is completely empty and never referenced. Perhaps a placeholder for a future update?

Gootkit_crypt.js

Gootkit_crypt.js contains most of the encryption and encoding algorithms used by Gootkit to encrypt and decrypt data. The first thing you’ll notice in the script is the custom RC4 keystate we saw implemented in the compiled Node.js executable earlier. The three main algorithms used by Gootkit throughout the other scripts are RC4, TEA and Base64. Luckily, both the RC4 keystate and the TEA encryption key are hardcoded into the script, so decrypting network communications or encrypted files should be fairly simple.

What is quite interesting in this script is that a comment is left before every algorithm noting that the function is a prototype, which is quite strange: surely prototypes should be tested before being implemented in a banking trojan?

Moving on, they also have a UTF-8 function inside the script, although this is only used locally inside the TEA and Base64 functions.

With that covered, we can move onto the final – and unused – script, vmx_detection.js.

Vmx_detection.js

The vmx_detection.js script performs all of the Virtual Machine checks that were present in the previous stage; however, it also has a few added checks. 

The first added check looks for IDE and SCSI devices, whose names on Virtual Machines usually contain vendor strings such as VBox and VMware. In this sample, the registry keys seen below are queried and the enumerated values are passed to the function VmCheckVitrualDisks(), which checks whether the strings VMware, vbox and SONI are present.

SYSTEM\CurrentControlSet\Enum\IDE
SYSTEM\CurrentControlSet\Enum\SCSI

If these checks pass successfully, a final check on the CPU is performed. The CPU value is compared to three “goodIds”: GenuineIntel, AuthenticAMD and AMDisbetter!. If the values do not match, the script determines that the system is a Virtual Machine and (if the function weren’t commented out) exits. Otherwise, it returns False, indicating that the system is not a Virtual Machine.
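For reference, the two added checks can be re-implemented in a few lines of Python on an analysis host. This mirrors the logic described above rather than the script’s own code, and it assumes the CPU value comes from the registry’s VendorIdentifier entry, which the article does not specify.

```python
# Re-implementation (for reference) of the two added VM checks described above.
# Assumptions: device names are readable as subkeys of the Enum keys, and the
# CPU vendor is taken from the VendorIdentifier registry value.
import winreg

VM_VENDOR_STRINGS = ("VMware", "vbox", "SONI")
GOOD_CPU_IDS = ("GenuineIntel", "AuthenticAMD", "AMDisbetter!")
ENUM_KEYS = (r"SYSTEM\CurrentControlSet\Enum\IDE",
             r"SYSTEM\CurrentControlSet\Enum\SCSI")

def disk_looks_virtual() -> bool:
    """Return True if any IDE/SCSI device name contains a VM vendor string."""
    for path in ENUM_KEYS:
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
                for i in range(winreg.QueryInfoKey(key)[0]):
                    name = winreg.EnumKey(key, i)
                    if any(v.lower() in name.lower() for v in VM_VENDOR_STRINGS):
                        return True
        except OSError:
            continue
    return False

def cpu_vendor() -> str:
    path = r"HARDWARE\DESCRIPTION\System\CentralProcessor\0"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
        return winreg.QueryValueEx(key, "VendorIdentifier")[0]

print("VM-like disk vendor present:", disk_looks_virtual())
print("CPU vendor in goodIds:", cpu_vendor() in GOOD_CPU_IDS)
```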

And that brings us to the end of the “mini series” on Gootkit! As mentioned, there are many other scripts inside the binary that I haven’t covered, so hopefully with all of the information covered inside these three posts you can start analyzing the main body of Gootkit yourself!



Palo Alto Networks intends to acquire Zingbox for $75M

Palo Alto Networks surely loves to buy security startups. Today it added to its growing collection when it announced its intent to acquire IoT security startup Zingbox for $75 million.

The company had raised $23.5 million, according to Crunchbase data. The three co-founders, Xu Zou, May Wang and Jianlin Zeng, will be joining Palo Alto after the sale is official.

With Zingbox, the company gets IoT security chops, something that is increasingly important as companies deploy internet-connected smart devices and sensors. While these tools can greatly benefit customers, they also often carry a huge security risk.

Zingbox, which was founded in 2014, gives Palo Alto Networks a modern cloud-based solution built on a subscription model along with engineering talent to help build out the solution further. Nikesh Arora, chairman and CEO of Palo Alto Networks, certainly sees this.

“The proliferation of IoT devices in enterprises has left customers facing an enormous gap in protection against cybersecurity attacks. With the proposed acquisition of Zingbox, we will provide a first-of-its-kind subscription for our Next-Generation Firewall and Cortex platforms that gives customers the ability to gain control, visibility and security of their connected devices at scale,” Arora said in a statement.

This is the fourth security startup the company has purchased this year. It acquired two companies, PureSec and Twistlock, on the same day last spring. Earlier this year, it bought Demisto for $560 million. All of these acquisitions are meant to build up the company’s portfolio of modern security offerings without having to build these kinds of tools in-house from scratch.

BigID announces $50M Series C investment as privacy takes center stage

It turns out GDPR was just the tip of the privacy iceberg. With California’s privacy law coming online January 1st and dozens more in various stages of development, it’s clear that governments are taking privacy seriously, which means companies have to as well. New York startup BigID, which has been developing a privacy platform for the last several years, finds itself in a good position to help. Today, the company announced a $50 million Series C.

The round was led by Bessemer Venture Partners with help from SAP.io Fund, Comcast Ventures, Boldstart Ventures, Scale Venture Partners and ClearSky. New investor Salesforce Ventures also participated. Today’s investment brings the total raised to over $96 million, according to Crunchbase.

In addition to the funding, the company is also announcing the formation of a platform of sorts, which will offer a set of privacy services for customers. It includes data discovery, classification and correlation. “We’ve separated the product into some constituent parts. While it’s still sold as a broad-based solution, it’s much more of a platform now in the sense that there’s a core set of capabilities that we heard over and over that customers want,” CEO and co-founder Dimitri Sirota told TechCrunch.

He says that these capabilities really enable customers to see connections in the data across a set of disparate data sources. “There are a lot of products that do the request part, but there’s nobody that’s able to look across your entire data landscape, the hundreds of petabytes, and pick out the data in Salesforce, Workday, AWS, mainframe, and all these places you could have data on [an individual], and show how it’s all tied together,” Sirota explained.

It’s interesting to see the mix of strategic investors and traditional venture capitalists who are investing in the company. The strategics in particular see the privacy landscape as well as anyone, and Sirota says it’s a case of privacy mattering more than ever and his company providing the means to navigate the changing landscape. “Consumers care about privacy, which means legislators care about it, which ultimately means companies have to care about it,” he said. He added, “Strategics, whether they are companies that collect personal data or those that sell to those companies, therefore have an interest in BigID.”

The company has been growing fast and raising money quickly to help it scale to meet demand. Starting in January 2018, it raised $14 million. Just six months later, it raised another $30 million and you can tack on today’s $50 million. Sirota says having money in the bank and seeing these investments helps give enterprise customers confidence that the company is in this for the long haul.

Sirota wouldn’t give an exact valuation, only saying that while the company is not a unicorn, the valuation was a “robust number.” He says the plan now is to keep expanding the platform, and there will be announcements coming soon around partnerships, customers and new capabilities.

Sirota will be appearing at TechCrunch Sessions: Enterprise on September 5th at 11 am on the panel, Cracking the Code: From Startup to Scaleup in Enterprise Software.

Watch TC Sessions: Enterprise live stream right here

TechCrunch is live from the Blue Shield of California Theater at YBCA in San Francisco, where we’re hosting our first event dedicated to the enterprise. Throughout the day, attendees and viewers can expect to hear from industry experts and partake in discussions about the potential of new technologies like quantum computing and AI, how to deal with the onslaught of security threats, investing in early-stage startups and plenty more.

We’ll be joined by some of the biggest names and the smartest and most prescient people in the industry, including Bill McDermott at SAP, Scott Farquhar at Atlassian, Julie Larson-Green at Qualtrics, Wendy Nather at Duo Security, Aaron Levie at Box and Andrew Ng at Landing AI.

Our agenda showcases some of the powerhouses in the space, but also plenty of smaller teams that are building and debunking fundamental technologies in the industry.

AGENDA

Investing with an Eye to the Future
Jason Green (Emergence Capital), Maha Ibrahim (Canaan Partners) and Rebecca Lynn (Canvas Ventures)
9:35 AM – 10:00 AM

In an ever-changing technological landscape, it’s not easy for VCs to know what’s coming next and how to place their bets. Yet, it’s the job of investors to peer around the corner and find the next big thing, whether that’s in AI, serverless, blockchain, edge computing or other emerging technologies. Our panel will look at the challenges of enterprise investing, what they look for in enterprise startups and how they decide where to put their money.


Talking Shop
Scott Farquhar (Atlassian)
10:00 AM – 10:20 AM

With tools like Jira, Bitbucket and Confluence, few companies influence how developers work as much as Atlassian. The company’s co-founder and co-CEO Scott Farquhar will join us to talk about growing his company, how it is bringing its tools to enterprises and what the future of software development in and for the enterprise will look like.


Q&A with Investors 
10:10 AM – 10:40 AM

Your chance to ask questions of some of the greatest investors in enterprise.


Innovation Break: Deliver Innovation to the Enterprise
Monty Gray (Okta), DJ Paoni (SAP), Sanjay Poonen (VMware) and Shruti Tournatory (Sapphire Ventures)
10:20 AM – 10:40 AM

For startups, the appeal of enterprise clients is not surprising — signing even one or two customers can make an entire business, and it can take just a few hundred to build a $1 billion unicorn company. But while corporate counterparts increasingly look to the startup community for partnership opportunities, making the jump to enterprise sales is far more complicated than scaling up the strategy startups already use to sell to SMBs or consumers. Hear from leaders who have experienced successes and pitfalls through the process as they address how startups can adapt their strategy with the needs of the enterprise in mind. Sponsored by SAP.


Apple in the Enterprise
Susan Prescott (Apple)
10:40 AM – 11:00 AM

Apple’s Susan Prescott has been at the company since the early days of the iPhone, and she has seen the company make a strong push into the enterprise, whether through tooling or via strategic partnerships with companies like IBM, SAP and Cisco.


Box’s Enterprise Journey
Aaron Levie (Box)
11:15 AM – 11:35 AM

Box started life as a consumer file-storage company and transformed early on into a successful enterprise SaaS company, focused on content management in the cloud. Levie will talk about what it’s like to travel the entire startup journey — and what the future holds for data platforms.


Bringing the Cloud to the Enterprise
Mark Russinovich (Microsoft)
11:35 AM – 12:00 PM

Cloud computing may now seem like the default, but that’s far from true for most enterprises, which often still have tons of legacy software that runs in their own data centers. What does it mean to be all-in on the cloud, as Capital One recently accomplished? We’ll talk about how companies can make the move to the cloud easier, what not to do and how to develop a cloud strategy with an eye to the future.


Keeping the Enterprise Secure
Martin Casado (Andreessen Horowitz), Emily Heath (United Airlines) and Wendy Nather (Duo Security)
1:00 PM – 1:25 PM

Enterprises face a litany of threats from both inside and outside the firewall. Now more than ever, companies — especially startups — have to put security first. From preventing data from leaking to keeping bad actors out of your network, enterprises have it tough. How can you secure the enterprise without slowing growth? We’ll discuss the role of a modern CSO and how to move fast… without breaking things.


Keeping an Enterprise Behemoth on Course
Bill McDermott (SAP)
1:25 PM – 1:45 PM

With over $166 billion in market cap, Germany-based SAP is one of the most valuable tech companies in the world today. Bill McDermott took over as sole CEO in 2014, becoming the first American to hold the position. Since then, he has quickly grown the company, in part thanks to a number of $1 billion-plus acquisitions. We’ll talk to him about his approach to these acquisitions, his strategy for growing the company in a quickly changing market and the state of enterprise software in general.


How Kubernetes Changed Everything
Brendan Burns (Microsoft), Tim Hockin (Google Cloud), Craig McLuckie (VMware) and Aparna Sinha (Google)
1:45 PM – 2:15 PM

You can’t go to an enterprise conference and not talk about Kubernetes, the incredibly popular open-source container orchestration project that was incubated at Google. For this panel, we brought together three of the founding members of the Kubernetes team and the current director of product management for the project at Google to talk about the past, present and future of the project and how it has changed how enterprises think about moving to the cloud and developing software.


Innovation Break: The Future of Data in an Evolving Landscape
Alisa Bergman (Adobe Systems), Jai Das (Sapphire Ventures) and Sanjay Kumar (Geospatial Media); moderated by Nikki Helmer (SAP)
2:15 PM – 2:35 PM

Companies have historically competed by having data in their toolbox, and gleaning insights to make key business decisions. However, increased regulatory and societal scrutiny is requiring companies to rethink this approach. In this session, we explore the challenges and opportunities that businesses will experience as these conversations evolve. Sponsored by SAP.


AI Stakes its Place in the Enterprise
Marco Casalaina (Salesforce), Jocelyn Goldfein (Zetta Venture Partners) and Bindu Reddy (Reality Engines)
2:35 PM – 3:00 PM

AI is becoming table stakes for enterprise software as companies increasingly build AI into their tools to help process data faster or make more efficient use of resources. Our panel will talk about the growing role of AI in enterprise for companies big and small.


Q&A with Founders
3:00 PM – 3:30 PM

Your chance to ask questions of some of the greatest startup minds in enterprise technology.


The Trials and Tribulations of Experience Management
Amit Ahuja (Adobe), Julie Larson-Green (Qualtrics) and Peter Reinhardt (Segment)
3:15 PM – 3:40 PM

As companies gather more data about their customers, it should theoretically improve the customer experience, but myriad challenges face companies as they try to pull together information from a variety of vendors across disparate systems, both in the cloud and on prem. How do you pull together a coherent picture of your customers while respecting their privacy and overcoming the technical challenges? We’ll ask a team of experts to find out.


Innovation Break: Identifying Overhyped Technology Trends
James Allworth (Cloudflare), George Mathew (Kespry) and Max Wessel (SAP)
3:40 PM – 4:00 PM

For innovation-focused businesses, deciding which technology trends are worth immediate investment, which trends are worth keeping on the radar and which are simply buzzworthy can be a challenging gray area to navigate and may ultimately make or break the future of a business. Hear from these innovation juggernauts as they provide their divergent perspectives on today’s hottest trends, including Blockchain, 5G, AI, VR and more. Sponsored by SAP.


Fireside Chat
Andrew Ng (Landing AI)
4:00 PM – 4:20 PM

Few technologists have been more central to the development of AI in the enterprise than Andrew Ng. With Landing AI and the backing of many top venture firms, Ng has the foundation to develop and launch the AI companies he thinks will be winners. We will talk about where Ng expects to see AI’s biggest impacts across the enterprise.


The Quantum Enterprise
Jim Clarke (Intel), Jay Gambetta (IBM) and Krysta Svore (Microsoft)
4:20 PM – 4:45 PM

While we’re still a few years away from having quantum computers that will fulfill the full promise of this technology, many companies are already starting to experiment with what’s available today. We’ll talk about what startups and enterprises should know about quantum computing today to prepare for tomorrow.


Overcoming the Data Glut
Benoit Dageville (Snowflake), Ali Ghodsi (Databricks) and Murli Thirumale (Portworx)
4:45 PM – 5:10 PM

There is certainly no shortage of data in the enterprise these days. The question is how do you process it and put it in shape to understand it and make better decisions? Our panel will discuss the challenges of data management and visualization in a shifting technological landscape where the term “big data” doesn’t begin to do the growing volume justice.