Top VCs on the changing landscape for enterprise startups

Yesterday at TechCrunch’s Enterprise event in San Francisco, we sat down with three venture capitalists who spend a lot of their time thinking about enterprise startups. We wanted to ask what trends they are seeing, what concerns they might have about the state of the market and, of course, how startups might persuade them to write a check.

We covered a lot of ground with the investors — Jason Green of Emergence Capital, Rebecca Lynn of Canvas Ventures and Maha Ibrahim of Canaan Partners — who told us, among other things, that startups shouldn’t expect a big M&A event right now, that there’s no first-mover advantage in the enterprise realm, and that grit may be the quality that ends up keeping a startup afloat.

On the growth of enterprise startups:

Jason Green: When we started Emergence 15 years ago, we saw maybe a few hundred startups a year, and we funded about five or six. Today, we see over 1,000 a year; we probably do deep diligence on 25.

APIs are the next big SaaS wave

While the software revolution started out slowly, over the past few years it has exploded, and the fastest-growing segment to date has been the shift toward software as a service, or SaaS.

SaaS has dramatically lowered the intrinsic total cost of ownership for adopting software, solved scaling challenges and taken away the burden of issues with local hardware. In short, it has allowed a business to focus primarily on just that — its business — while simultaneously reducing the burden of IT operations.

Today, SaaS adoption is increasingly ubiquitous. According to IDG’s 2018 Cloud Computing Survey, 73% of organizations have at least one application or a portion of their computing infrastructure already in the cloud. While this software explosion has created a whole range of downstream impacts, it has also caused software developers to become more and more valuable.

The increasing value of developers has meant that, like traditional SaaS buyers before them, they better intuit the value of their time and increasingly prefer businesses that can help alleviate the hassles of procurement, integration, management, and operations. Developers’ needs in addressing those hassles are specialized.

They are looking to deeply integrate products into their own applications, and to do so, they need access to an Application Programming Interface, or API. Best practices for API onboarding include technical documentation, examples, and sandbox environments for testing.

APIs tend to also offer metered billing upfront. For these and other reasons, APIs are a distinct subset of SaaS.

For fast-moving developers building at global scale, APIs are no longer a stop-gap to the future—they’re a critical part of their strategy. Why dedicate precious resources to recreating something in-house that’s done better elsewhere when you can instead focus your efforts on creating a differentiated product?

Thanks to this mindset shift, APIs are on track to create another SaaS-sized impact across all industries, and at a much faster pace. By exposing often complex services as simplified code, API-first products are far more extensible, easier for customers to integrate with, and able to foster a greater community around potential use cases.


Graphics courtesy of Accel

Billion-dollar businesses building APIs

Whether you realize it or not, chances are that your favorite consumer and enterprise apps—Uber, Airbnb, PayPal, and countless more—have a number of third-party APIs and developer services running in the background. Just like most modern enterprises have invested in SaaS technologies for all the above reasons, many of today’s multi-billion dollar companies have built their businesses on the backs of these scalable developer services that let them abstract everything from SMS and email to payments, location-based data, search and more.

Simultaneously, the entrepreneurs behind these API-first companies like Twilio, Segment, Scale and many others are building sustainable, independent—and big—businesses.

Valued today at over $22 billion, Stripe is the biggest independent API-first company. Stripe took off because of its initial laser focus on the developer experience of setting up and taking payments. It was even initially known as /dev/payments!

Stripe spent extra time building the right, idiomatic SDKs for each language platform and beautiful documentation. But it wasn’t just those things: the company rebuilt an entire business process around being API-first.

Companies using Stripe didn’t need to fill out a PDF and set up a separate merchant account before getting started. Once sign-up was complete, users could immediately test the API with a sandbox and integrate it directly into their application. Even pricing was different.

Stripe chose to simplify pricing dramatically, starting with a single, simple price for all cards rather than breaking out cards by type, even though the cost of processing an AmEx card can differ from that of a Visa. Stripe also did away with the monthly minimum fee that competitors had.

Many competitors used the monthly minimum to offset the high cost of supporting new customers who weren’t necessarily processing payments yet. Stripe flipped that on its head: developers now integrate Stripe earlier than they ever integrated payments before, and while this costs Stripe a lot in setup and support, it pays off in brand and loyalty.

Checkr is another excellent example of an API-first company vastly simplifying a massive yet slow-moving industry. Very little had changed over the last few decades in how businesses ran background checks on their employees and contractors: the process involved manual paperwork and third-party services that spent days verifying an individual.

Checkr’s API gives companies immediate access to a variety of disparate verification sources and allows these companies to plug Checkr into their existing on-boarding and HR workflows. It’s used today by more than 10,000 businesses including Uber, Instacart, Zenefits and more.

Like Checkr and Stripe, Plaid provides a similar value prop to applications in need of banking data and connections, abstracting away banking relationships and complexities brought on by a lack of technology in a category dominated by hundred-year-old banks. Plaid has shown an incredible ramp these past three years, from closing a $12 million Series A in 2015 to reaching a valuation of over $2.5 billion this year.

Today the company is fueling an entire generation of financial applications, all on the back of their well-built API.


Graphics courtesy of Accel

Then and now

Accel’s first API investment was in Braintree, a mobile and web payment system for e-commerce companies, in 2011. Braintree eventually sold to, and became an integral part of, PayPal as it spun out from eBay and grew to be worth more than $100 billion. Unsurprisingly, it was shortly thereafter that our team decided it was time to go big on the category. By the end of 2014 we had led the Series As in Segment and Checkr and followed those investments with our first APX conference in 2015.

At the time, Plaid, Segment, Auth0, and Checkr had only raised Seed or Series A financings! And we are even more excited and bullish on the space today. To convey just how much API-first businesses have grown in such a short period of time, we thought it would be useful to share some metrics from the past five years, which we’ve broken out in the two visuals included above in this article.

While SaaS may have pioneered the idea that the best way to do business isn’t to actually build everything in-house, today we’re seeing APIs amplify this theme. At Accel, we firmly believe that APIs are the next big SaaS wave — having as much impact as their predecessor, if not more, thanks to developers at today’s fastest-growing startups and their preference for API-first products. We’ve actively continued to invest in the space (in companies like Scale, mentioned above).

And much like how a robust ecosystem developed around SaaS, we believe that one will continue to develop around APIs. Given the amount of progress that has happened in just a few short years, Accel is hosting our second APX conference to once again bring together this remarkable community and continue to facilitate discussion and innovation.


Graphics courtesy of Accel

Gootkit Banking Trojan | Part 3: Retrieving the Final Payload

Gootkit’s final payload contains multiple Node.js scripts. Join Daniel Bunce as he reverse engineers the malware to take a deeper look at what it delivers.

The Gootkit Banking Trojan was discovered back in 2014, and utilizes the Node.js library to perform a range of malicious tasks, from website injections and password grabbing, all the way up to video recording and remote VNC capabilities. Since its discovery in 2014, the actors behind Gootkit have continued to update the codebase to slow down analysis and thwart automated sandboxes.

In Part 1 and Part 2 we looked at Gootkit’s anti-analysis features and its ability to maintain persistence. In this post, we’ll reverse the routine Gootkit performs to download and execute the Node.js final payload. We’ll also see how to extract the JS Scripts from the executable and take a brief look at some interesting scripts.

MD5 of Packed Sample: 0b50ae28e1c6945d23f59dd2e17b5632

The --vwxyz Argument

As covered previously, Gootkit contains several arguments that may or may not influence the execution of the process. The most interesting argument inside this sample is --vwxyz. Upon execution, Gootkit will re-execute itself, passing --vwxyz as an argument. This kicks off the function responsible for retrieving the final Node.js payload from the C2 server, decrypting and decompressing it, and finally, executing it.

The payload retrieval function isn’t irregular; in fact, it uses the same connection function I covered in the previous post. Interestingly, though, it performs two requests to the C2 server, first requesting /rbody320 and then requesting /rbody32. Even though the sample of Gootkit I have been analyzing is fairly recent, the C2 server was shut down quite quickly, so I used ImaginaryC2, a tool developed by Felix Weyne, to simulate the Gootkit C2 server and analyze the network-related pathways. As a result, the Node.js payload may not correspond exactly to this sample; however, it is also fairly new itself.

Before reaching out to the C2, however, Gootkit will first examine the registry to check whether the payload has already been downloaded once before. The reason is that once Gootkit downloads the final stage, it is written to the registry, specifically under the Software\AppDataLow key – however, instead of storing the whole binary under one value, it splits the payload into several parts and writes each part to the value bthusrde_x, where x is incremented by 1 for each part of the file. If the registry is already filled with the encrypted payload, Gootkit will decrypt and decompress the payload, and then execute it. However, rather than skipping the communications routine, Gootkit will still reach out to the server to check it is running the latest version of the final stage.

Upon first request to the C2 server, a CRC-32 hash of the Node.js payload hosted by the server is returned. In this case, the value is 0xB4DC123B, although it will differ for different campaigns, as the payload can change. The hex value is first compared to 0xFFFFFFFF, and if the Software\AppDataLow\bthusrde_0 registry value is present, the sample will read the local encrypted payload into memory and call RtlComputeCrc32 on the data. This hash is then compared to the hash sent from the C2, and if it matches, the process will sleep for a randomly generated amount of time, before repeating the check.
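To make the version check concrete, here is a minimal Python sketch of the same logic – reassembling the bthusrde_x chunks and comparing their CRC-32 against the value returned from /rbody320. The registry location follows this sample’s behaviour as described above, and zlib.crc32 stands in for RtlComputeCrc32:

import winreg
import zlib

def read_cached_payload() -> bytes:
    # Reassemble the encrypted payload from the bthusrde_0..n values
    # (assumption: stored under HKCU\Software\AppDataLow as described above)
    payload = bytearray()
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\AppDataLow") as key:
        n = 0
        while True:
            try:
                chunk, _ = winreg.QueryValueEx(key, "bthusrde_%d" % n)
            except FileNotFoundError:
                break
            payload += chunk
            n += 1
    return bytes(payload)

def payload_is_current(c2_crc: int) -> bool:
    # zlib.crc32 computes the same standard CRC-32 as RtlComputeCrc32
    cached = read_cached_payload()
    return bool(cached) and zlib.crc32(cached) == c2_crc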

If the registry key isn’t present, then the system has not been infected before. Therefore, Gootkit will reach out to the C2 server once more, appending /rbody32 to the URL.

Once the final stage has been successfully downloaded, it is written to Software\AppDataLow\bthusrde_x in chunks. In this case, a total of nine registry values were created to hold the entire binary. Once it has been written to the registry, the downloader will decrypt and decompress it in memory. The decryption function is the same function used to decrypt the configuration, and to decompress the payload, Gootkit loads and calls RtlDecompressBuffer. After decryption and decompression, the resulting file is very large – roughly 5 megabytes – because it contains a large number of embedded Node.js scripts, plus the interpreter required to execute them.

With the executable now fully decrypted and decompressed, the downloader will copy over the local configuration. To do so, it looks for the placeholder DDDD inside the downloaded executable, and once located, it will copy over the URLs inside the configuration using lstrcpyA().
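As a rough illustration of that hand-off (the exact layout of the configuration block is simplified here), patching amounts to finding the placeholder’s offset and overwriting it in place, null-terminated like lstrcpyA():

def patch_config(payload: bytearray, config_urls: bytes) -> bytearray:
    # locate the DDDD placeholder left in the downloaded executable
    offset = payload.find(b"DDDD")
    if offset == -1:
        raise ValueError("configuration placeholder not found")
    # overwrite in place with a null-terminated copy, as lstrcpyA() would
    payload[offset:offset + len(config_urls) + 1] = config_urls + b"\x00"
    return payload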

When it comes to executing the prepared payload, Gootkit takes a rather unusual approach. Instead of injecting it into another process or executing it as its own process, the downloader will allocate memory inside of its own process and map the payload into it, before executing it through a call to eax. If the Node.js payload ever exits, the downloader will simply loop around, decrypting the payload stored in the registry and executing that.

With the downloading function covered, let’s move over to analyzing the final stage of Gootkit.

Node.js Payload

Similarly to Python and many other scripting languages that can be compiled, JavaScript executables contain the JavaScript scripts created by the developer, and an interpreter required to execute the scripts. As a result, it is entirely possible to extract each script used by Gootkit to perform its nefarious tasks – it’s just a matter of finding them. Luckily for us, it isn’t very difficult to do so. 

As we are looking for fairly large chunks of (possibly encrypted) data rather than machine code, it should only take some quick searching in IDA to locate strings such as spyware, malware, gootkit_crypt and vmx_detection. Cross-referencing these strings leads to what seems to be a large list of arrays in the .data section. Each array contains a string, such as gootkit_crypt, a pointer to an address in the executable containing a chunk of encrypted data, and finally the size of the encrypted data.

Further cross referencing leads us to the function responsible for decrypting the scripts.

Each script is encrypted with RC4 and compressed with ZLib compression. However, the RC4 is slightly different from normal: Gootkit uses a custom RC4 keystate to scramble a generated array containing values from 0 to 255 – under normal circumstances, a key would be used to scramble the array. As a result, this keystate needs to be incorporated into any script decryption tools that are developed.

After decrypting the data, the first 4 bytes will contain the size of the data to decompress, so make sure to discard them before decompression to avoid issues with incorrect headers. You can check out a full example of the Python decryption script here.
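A minimal sketch of that decryption routine in Python, assuming the custom keystate is used in place of a normal key during setup (KEYSTATE below is a placeholder, not the sample’s real hardcoded table):

import struct
import zlib

KEYSTATE = bytes(range(256))  # placeholder: substitute the sample's hardcoded keystate

def rc4(key: bytes, data: bytes) -> bytes:
    # KSA: scramble the generated 0..255 array using the custom keystate
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: XOR the keystream over the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def decrypt_script(blob: bytes) -> bytes:
    decrypted = rc4(KEYSTATE, blob)
    size = struct.unpack_from("<I", decrypted)[0]  # first 4 bytes: decompressed size
    return zlib.decompress(decrypted[4:])          # discard them before inflating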

So, now we have each decrypted JavaScript file and, thanks to the embedded strings, the names of each script! As focusing on each and every script would take forever, I’ll only be taking a look at these few scripts in this post: malware.js, uploader.js, gootkit_crypt.js and vmx_detection.js.

Malware.js

The malware.js script acts as the entry point for the JavaScript section of Gootkit. It is responsible for querying the registry for values added by the previous stage – mainprocessoverride and vendor_id – as well as assigning values to global variables such as g_botId, which can be set either to 8.666.jet or to a custom value if one is given.

What is particularly interesting about the script is that certain functions have been commented out by the actor behind it, such as a call to IsSuspProcess(), which checks running processes for pythonw.exe and pos_trigger.exe, compares the USERDOMAIN to 7SILVIA, and calls a function responsible for checking for a Virtual Machine. This gives the impression that the actor is focusing on infecting as many machines as possible, regardless of whether they are sandboxes or virtual machines – or perhaps the check was raising too many false positives and removing it was the best option.


This script will also perform a check-in with the C2 server, appending /200 to the URL. There is also a lot of logging inside, specifically calls to the logging function dlog(), and even console.log() – primarily for debugging purposes. dlog() will check whether the environment variable debug_main is set to true, and if it is, everything will be logged to the console.

Uploader.js

Uploader.js, as the name suggests, is responsible for uploading files to a remote C2 server. The first function, uploadFile(), will upload all files to the C2 server, appending /upload to the URL. The headers are all hardcoded, including the value X-Bot-ID, which contains the machine GUID.

The next function, uploadLogFile(), will upload a log file to the server, appending /logfile to the URL. There is also another log file upload function, uploadLogFile2, which will upload a file to the C2 server with /logfile2 appended. The headers are once again hardcoded; however, there is an added value – X-File-Prefix.

Finally, there is a function labelled dope() inside the script which is completely empty and is never mentioned. Perhaps a placeholder for a future update?

Gootkit_crypt.js

Gootkit_crypt.js contains most of the encryption and encoding algorithms used by Gootkit to encrypt and decrypt data. The first thing you’ll notice in the script is the custom RC4 keystate we saw implemented in the Node.js compiled executable earlier. The three main algorithms used by Gootkit throughout the other scripts are RC4, TEA, and Base64. Luckily, both the RC4 keystate and the TEA encryption key are hardcoded into the script, so decrypting network communications or encrypted files should be fairly simple.

What is quite interesting in this script is that before every algorithm, a comment is left noting that the function is a prototype – which is quite strange, as surely prototypes should be tested before being implemented in a banking trojan?

Moving on, they also have a UTF-8 function inside the script, although this is only used locally inside the TEA and Base64 functions.

With that covered, we can move onto the final – and unused – script, vmx_detection.js.

Vmx_detection.js

The vmx_detection.js script performs all of the Virtual Machine checks that were present in the previous stage; however, it also has a few added checks. 

The first added check looks for IDE or SCSI devices, which are present in Virtual Machines and usually contain vendor names such as VBox and VMWare. In this sample, the registry values seen below are queried and the results are passed to the function VmCheckVitrualDisks(), which checks whether the strings VMware, vbox, or SONI are present.

SYSTEM\CurrentControlSet\Enum\IDE
SYSTEM\CurrentControlSet\Enum\SCSI

If these checks pass successfully, a final check on the CPU will be performed. The CPU value is compared to three “goodIds”, these being GenuineIntel, AuthenticAMD, and AMDisbetter!. If the values do not match, then the script determines that the system is a Virtual Machine and (if the function wasn’t commented out) exits. Otherwise, it will return False, indicating that the system is not a Virtual Machine.

And that brings us to the end of the “mini series” on Gootkit! As mentioned, there are many other scripts inside the binary that I haven’t covered, so hopefully with all of the information covered inside these three posts you can start analyzing the main body of Gootkit yourself!



Palo Alto Networks intends to acquire Zingbox for $75M

Palo Alto Networks surely loves to buy security startups. Today it added to its growing collection when it announced its intent to acquire IoT security startup Zingbox for $75 million.

The company had raised $23.5 million, according to Crunchbase data. The three co-founders, Xu Zou, May Wang and Jianlin Zeng, will be joining Palo Alto after the sale is official.

With Zingbox, the company gets IoT security chops, something that is increasingly important as companies deploy internet-connected smart devices and sensors. While these tools can greatly benefit customers, they also often carry a huge security risk.

Zingbox, which was founded in 2014, gives Palo Alto Networks a modern cloud-based solution built on a subscription model along with engineering talent to help build out the solution further. Nikesh Arora, chairman and CEO of Palo Alto Networks, certainly sees this.

“The proliferation of IoT devices in enterprises has left customers facing an enormous gap in protection against cybersecurity attacks. With the proposed acquisition of Zingbox, we will provide a first-of-its-kind subscription for our Next-Generation Firewall and Cortex platforms that gives customers the ability to gain control, visibility and security of their connected devices at scale,” Arora said in a statement.

This is the fourth security startup the company has purchased this year. It acquired two companies, PureSec and Twistlock, on the same day last spring. Earlier this year, it bought Demisto for $560 million. All of these acquisitions are meant to build up the company’s portfolio of modern security offerings without having to build these kinds of tools in-house from scratch.

BigID announces $50M Series C investment as privacy takes center stage

It turns out GDPR was just the tip of the privacy iceberg. With California’s privacy law coming online January 1st and dozens more in various stages of development, it’s clear that governments are taking privacy seriously, which means companies have to as well. New York startup BigID, which has been developing a privacy platform for the last several years, finds itself in a good position to help. Today, the company announced a $50 million Series C.

The round was led by Bessemer Venture Partners with help from SAP.io Fund, Comcast Ventures, Boldstart Ventures, Scale Venture Partners and ClearSky. New investor Salesforce Ventures also participated. Today’s investment brings the total raised to over $96 million, according to Crunchbase.

In addition to the funding, the company is also announcing the formation of a platform of sorts, which will offer a set of privacy services for customers. It includes data discovery, classification and correlation. “We’ve separated the product into some constituent parts. While it’s still sold as a broad-based solution, it’s much more of a platform now in the sense that there’s a core set of capabilities that we heard over and over that customers want,” CEO and co-founder Dimitri Sirota told TechCrunch.

He says that these capabilities really enable customers to see connections in the data across a set of disparate data sources. “There are a lot of products that do the request part, but there’s nobody that’s able to look across your entire data landscape, the hundreds of petabytes, and pick out the data in Salesforce, Workday, AWS, mainframe, and all these places you could have data on [an individual], and show how it’s all tied together,” Sirota explained.

It’s interesting to see the mix of strategic investors and traditional venture capitalists who are investing in the company. The strategics in particular see the privacy landscape as well as anyone, and Sirota says it’s a case of privacy mattering more than ever and his company providing the means to navigate the changing landscape. “Consumers care about privacy, which means legislators care about it, which ultimately means companies have to care about it,” he said. He added, “Strategics, whether they are companies that collect personal data or those that sell to those companies, therefore have an interest in BigID.”

The company has been growing fast and raising money quickly to help it scale to meet demand. Starting in January 2018, it raised $14 million. Just six months later, it raised another $30 million and you can tack on today’s $50 million. Sirota says having money in the bank and seeing these investments helps give enterprise customers confidence that the company is in this for the long haul.

Sirota wouldn’t give an exact valuation, only saying that while the company is not a unicorn, the valuation was a “robust number.” He says the plan now is to keep expanding the platform, and there will be announcements coming soon around partnerships, customers and new capabilities.

Sirota will be appearing at TechCrunch Sessions: Enterprise on September 5th at 11 am on the panel, Cracking the Code: From Startup to Scaleup in Enterprise Software.

Watch TC Sessions: Enterprise live stream right here

TechCrunch is live from San Francisco’s YBCA’s Blue Shield of California Theater, where we’re hosting our first event dedicated to the enterprise. Throughout the day, attendees and viewers can expect to hear from industry experts and partake in discussions about the potential of new technologies like quantum computing and AI, how to deal with the onslaught of security threats, investing in early-stage startups and plenty more.

We’ll be joined by some of the biggest names and the smartest and most prescient people in the industry, including Bill McDermott at SAP, Scott Farquhar at Atlassian, Julie Larson-Green at Qualtrics, Wendy Nather at Duo Security, Aaron Levie at Box and Andrew Ng at Landing AI.

Our agenda showcases some of the powerhouses in the space, but also plenty of smaller teams that are building and debunking fundamental technologies in the industry.

AGENDA

Investing with an Eye to the Future
Jason Green (Emergence Capital), Maha Ibrahim (Canaan Partners) and Rebecca Lynn (Canvas Ventures)
9:35 AM – 10:00 AM

In an ever-changing technological landscape, it’s not easy for VCs to know what’s coming next and how to place their bets. Yet, it’s the job of investors to peer around the corner and find the next big thing, whether that’s in AI, serverless, blockchain, edge computing or other emerging technologies. Our panel will look at the challenges of enterprise investing, what they look for in enterprise startups and how they decide where to put their money.


Talking Shop
Scott Farquhar (Atlassian)
10:00 AM – 10:20 AM

With tools like Jira, Bitbucket and Confluence, few companies influence how developers work as much as Atlassian. The company’s co-founder and co-CEO Scott Farquhar will join us to talk about growing his company, how it is bringing its tools to enterprises and what the future of software development in and for the enterprise will look like.


Q&A with Investors 
10:10 AM – 10:40 AM

Your chance to ask questions of some of the greatest investors in enterprise.


Innovation Break: Deliver Innovation to the Enterprise
Monty Gray (Okta), DJ Paoni (SAP), Sanjay Poonen (VMware) and Shruti Tournatory (Sapphire Ventures)
10:20 AM – 10:40 AM

For startups, the appeal of enterprise clients is not surprising — signing even one or two customers can make an entire business, and it can take just a few hundred to build a $1 billion unicorn company. But while corporate counterparts increasingly look to the startup community for partnership opportunities, making the jump to enterprise sales is far more complicated than scaling up the strategy startups already use to sell to SMBs or consumers. Hear from leaders who have experienced successes and pitfalls through the process as they address how startups can adapt their strategy with the needs of the enterprise in mind. Sponsored by SAP.


Apple in the Enterprise
Susan Prescott (Apple)
10:40 AM – 11:00 AM

Apple’s Susan Prescott has been at the company since the early days of the iPhone, and she has seen the company make a strong push into the enterprise, whether through tooling or via strategic partnerships with companies like IBM, SAP and Cisco.


Box’s Enterprise Journey
Aaron Levie (Box)
11:15 AM – 11:35 AM

Box started life as a consumer file-storage company and transformed early on into a successful enterprise SaaS company, focused on content management in the cloud. Levie will talk about what it’s like to travel the entire startup journey — and what the future holds for data platforms.


Bringing the Cloud to the Enterprise
Mark Russinovich (Microsoft)
11:35 AM – 12:00 PM

Cloud computing may now seem like the default, but that’s far from true for most enterprises, which often still have tons of legacy software that runs in their own data centers. What does it mean to be all-in on the cloud, as Capital One recently accomplished? We’ll talk about how companies can make the move to the cloud easier, what not to do and how to develop a cloud strategy with an eye to the future.


Keeping the Enterprise Secure
Martin Casado (Andreessen Horowitz), Emily Heath (United Airlines) and Wendy Nather (Duo Security)
1:00 PM – 1:25 PM

Enterprises face a litany of threats from both inside and outside the firewall. Now more than ever, companies — especially startups — have to put security first. From preventing data from leaking to keeping bad actors out of your network, enterprises have it tough. How can you secure the enterprise without slowing growth? We’ll discuss the role of a modern CSO and how to move fast… without breaking things.


Keeping an Enterprise Behemoth on Course
Bill McDermott (SAP)
1:25 PM – 1:45 PM

With over $166 billion in market cap, Germany-based SAP is one of the most valuable tech companies in the world today. Bill McDermott took over leadership in 2014, becoming the first American to hold the position. Since then, he has quickly grown the company, in part thanks to a number of $1 billion-plus acquisitions. We’ll talk to him about his approach to these acquisitions, his strategy for growing the company in a quickly changing market and the state of enterprise software in general.


How Kubernetes Changed Everything
Brendan Burns (Microsoft), Tim Hockin (Google Cloud), Craig McLuckie (VMware) and Aparna Sinha (Google)
1:45 PM – 2:15 PM

You can’t go to an enterprise conference and not talk about Kubernetes, the incredibly popular open-source container orchestration project that was incubated at Google. For this panel, we brought together three of the founding members of the Kubernetes team and the current director of product management for the project at Google to talk about the past, present and future of the project and how it has changed how enterprises think about moving to the cloud and developing software.


Innovation Break: The Future of Data in an Evolving Landscape
Alisa Bergman (Adobe Systems), Jai Das (Sapphire Ventures) and Sanjay Kumar (Geospatial Media); moderated by Nikki Helmer (SAP)
2:15 PM – 2:35 PM

Companies have historically competed by having data in their toolbox, and gleaning insights to make key business decisions. However, increased regulatory and societal scrutiny is requiring companies to rethink this approach. In this session, we explore the challenges and opportunities that businesses will experience as these conversations evolve. Sponsored by SAP.


AI Stakes its Place in the Enterprise
Marco Casalaina (Salesforce), Jocelyn Goldfein (Zetta Venture Partners) and Bindu Reddy (Reality Engines)
2:35 PM – 3:00 PM

AI is becoming table stakes for enterprise software as companies increasingly build AI into their tools to help process data faster or make more efficient use of resources. Our panel will talk about the growing role of AI in enterprise for companies big and small.


Q&A with Founders
3:00 PM – 3:30 PM

Your chance to ask questions of some of the greatest startup minds in enterprise technology.


The Trials and Tribulations of Experience Management
Amit Ahuja (Adobe), Julie Larson-Green (Qualtrics) and Peter Reinhardt (Segment)
3:15 PM – 3:40 PM

As companies gather more data about their customers, it should theoretically improve the customer experience, but myriad challenges face companies as they try to pull together information from a variety of vendors across disparate systems, both in the cloud and on-prem. How do you pull together a coherent picture of your customers while respecting their privacy and overcoming the technical challenges? We’ll ask a team of experts to find out.


Innovation Break: Identifying Overhyped Technology Trends
James Allworth (Cloudflare), George Mathew (Kespry) and Max Wessel (SAP)
3:40 PM – 4:00 PM

For innovation-focused businesses, deciding which technology trends are worth immediate investment, which trends are worth keeping on the radar and which are simply buzzworthy can be a challenging gray area to navigate and may ultimately make or break the future of a business. Hear from these innovation juggernauts as they provide their divergent perspectives on today’s hottest trends, including Blockchain, 5G, AI, VR and more. Sponsored by SAP.


Fireside Chat
Andrew Ng (Landing AI)
4:00 PM – 4:20 PM

Few technologists have been more central to the development of AI in the enterprise than Andrew Ng. With Landing AI and the backing of many top venture firms, Ng has the foundation to develop and launch the AI companies he thinks will be winners. We will talk about where Ng expects to see AI’s biggest impacts across the enterprise.


The Quantum Enterprise
Jim Clarke (Intel), Jay Gambetta (IBM) and Krysta Svore (Microsoft)
4:20 PM – 4:45 PM

While we’re still a few years away from having quantum computers that will fulfill the full promise of this technology, many companies are already starting to experiment with what’s available today. We’ll talk about what startups and enterprises should know about quantum computing today to prepare for tomorrow.


Overcoming the Data Glut
Benoit Dageville (Snowflake), Ali Ghodsi (Databricks) and Murli Thirumale (Portworx)
4:45 PM – 5:10 PM

There is certainly no shortage of data in the enterprise these days. The question is how do you process it and put it in shape to understand it and make better decisions? Our panel will discuss the challenges of data management and visualization in a shifting technological landscape where the term “big data” doesn’t begin to do the growing volume justice.


Atlassian launches free tiers for all its cloud products, extends premium pricing plan

At our TC Sessions: Enterprise event, Atlassian co-CEO Scott Farquhar today announced a number of updates to how the company will sell its cloud-based services. These include the launch of new premium plans for more of its products, as well as the addition of a free tier for all of the company’s services that didn’t already offer one. Atlassian now also offers discounted cloud pricing for academic institutions and nonprofit organizations.

The company previously announced its premium plans for Jira Software Cloud and Confluence Cloud. Now, it is adding Jira Service Desk to this lineup, and chances are it’ll add more of its services over time. The premium plan adds a 99.9% uptime SLA, unlimited storage and additional support. Until now, Atlassian sold these products solely based on the number of users, but didn’t offer a specific enterprise plan.

As Harsh Jawharkar, the head of go-to-market for Cloud Platform at Atlassian, told me, many of its larger customers, who often ran the company’s products on their own servers before, are now looking to move to the cloud and hand over to Atlassian the day-to-day operations of these services. That’s in part because they are more comfortable with the idea of moving to the cloud at this point — and because Atlassian probably knows how to run its own services better than anybody else. 

For these companies, Atlassian is also introducing a number of new features today. Those include soon-to-launch data residency controls for companies that need to ensure that their data stays in a certain geographic region, as well as the ability to run Jira and Confluence Cloud behind customized URLs that align with a company’s brand, which will launch in early access in 2020. Maybe more important, though, are the new features coming to Atlassian Access, the company’s command center that helps enterprises manage their Atlassian cloud products. Access now supports single sign-on with Google Cloud Identity and Microsoft Active Directory Federation Services, for example. The company is also partnering with McAfee and Bitglass to offer additional advanced security features and launch a cross-product audit log. Enterprise admins will also soon get access to a new dashboard that will help them understand how Atlassian’s tools are being used across the organization.

But that’s not all. The company is also launching new tools to make customer migration to its cloud products easier, with initial support for Confluence, and Jira support coming later this year. There are also new extended cloud trial licenses, which a lot of customers have asked for, Jawharkar told me, because the relatively short trial periods the company previously offered weren’t quite long enough for companies to fully understand their needs.

This is a big slew of updates for Atlassian — maybe its biggest enterprise-centric release since the company’s launch. It has clearly reached a point where it had to start offering these enterprise features if it wanted to grow its market and bring more of these large companies on board. In its early days, Atlassian mostly grew by selling directly to teams within a company. These days, it has to focus a bit more on selling to executives as it tries to bring more enterprises on board — and those companies have very specific needs that the company didn’t have to address before. Today’s launches clearly show that it is now doing so — at least for its cloud-based products.

The company isn’t forgetting about other users either, though. It’ll still offer entry-level plans for smaller teams and it’s now adding free tiers to products like Jira Software, Confluence, Jira Service Desk and Jira Core. They’ll join Trello, Bitbucket and Opsgenie, which already feature free versions. Going forward, academic institutions will receive 50% off their cloud subscriptions and nonprofits will receive 75% off.

It’s obvious that Atlassian is putting a lot of emphasis on its cloud services. It’s not doing away with its self-hosted products anytime soon, but its focus is clearly elsewhere. The company started this process a few years ago, and a lot of that work is now coming to fruition. As Anu Bharadwaj, the head of Cloud Platform at Atlassian, told me, the move to a fully cloud-native stack enabled many of today’s announcements, and she expects that it’ll bring a lot of new customers to its cloud-based services.

Shared inbox startup Front adds WhatsApp support

Front, the company that lets you manage your inboxes as a team, is adding one more channel, WhatsApp. Starting today, you can read and reply to people contacting you through WhatsApp.

This feature is specifically targeted at users of WhatsApp Business. You can get a business phone number through Twilio and then hand out that number to your customers.

After that, you can see the messages coming into Front and treat them like any Front message. In particular, you can assign conversations to a specific team member so that your customers get a relevant answer as quickly as possible. If you need more information, Front integrates with popular CRMs, such as Salesforce, Pipedrive and HubSpot.

You can also discuss a conversation with teammates before sending a reply to your customer. It works like any chat interface — you can at-mention your coworkers and start an in-line chat in the middle of a WhatsApp thread. When you’re ready to answer, you can hit reply and send a WhatsApp message.

Front started with generic email addresses, such as sales@yourcompany or jobs@yourcompany. But the company has added more channels over time, such as Facebook, Twitter, website chat and text messages.

If you’ve already been using Front with text messages, you can now easily add WhatsApp and use the same service for that new channel.


macOS Incident Response | Part 3: System Manipulation

In Part 1 and Part 2, we looked at collecting device, file and system data and how to retrieve data on user activity and behavior. In this final part of the series, we’re going to look for evidence of system manipulation that could leave a device or a user vulnerable to further exploitation. Some of that evidence may already have been collected from our earlier work, while other pieces of information will require some extra digging. Let’s go!


Usurping the Sudoers File

One of the first places I want to look for system manipulation is in the /etc/sudoers file. This file can be used to allow users to run processes with elevated privileges without being challenged for a password. To check whether the sudoers file has been modified, we will use the visudo command rather than opening the file directly in vi or another editor. Using visudo is safer as it prevents the file being saved in an invalid format.

$ sudo visudo

Modifications to the sudoers file will typically be seen at the end of the file. In part, that’s because the easiest way for a process to write to it is simply to append to it, but also because later commands in the file take precedence, overriding earlier ones. For that reason, it’s important for attackers that their commands override any others that may target the same users, groups or hosts. In this example, we can see that a malicious process has added a line to allow the user ‘sentinel’ – or more importantly any process running as that user – to run the command at the path shown on any host (ALL) without authenticating.

image of visudo
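For illustration, such an entry takes the following form – the command path here is hypothetical, standing in for the one shown in the screenshot:

sentinel ALL=(ALL) NOPASSWD: /private/tmp/.cache/update.sh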

Cuckoos in the PATH

The $PATH environment variable lists the paths where the shell will search for programs to execute that correspond to a given command name. We can see the user’s path list with

$ echo $PATH

In this example, the user’s path contains the following locations:

image of user Path

We can use a short Bash script to iterate over the paths, and list their contents, sorted by date modified in descending order.

#! /bin/bash

# Iterate over each $PATH entry and list its contents, newest first;
# the subshell ensures a cd never affects later iterations
while IFS=: read -d: -r path; do
    echo "$path"
    ( cd "$path" 2>/dev/null && ls -altR )
done <<< "${PATH:+"${PATH}:"}"

From the results, we can quickly see which files were modified most recently. Pay particular attention to what is at the top of the path, as /usr/local/bin is in the above example. This location will be searched first when a command is issued on the command line, ahead of system paths. A “cuckoo” script named, say, sudo or any other commonly used system utility, inserted at the top of the path would get called before – in other words, instead of – the real utility. A malicious actor could write a fake sudo script which first called the actor’s own routines before passing on the user’s intended actions to the real sudo utility. Done properly, this would be completely transparent to the user, and of course the attacker would have gained elevated privileges along the way.
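To make the technique concrete, here is what a hypothetical cuckoo wrapper dropped at /usr/local/bin/sudo might look like – the .attacker.sh path is invented purely for illustration:

#!/bin/bash
# attacker's own routine runs first, silently
/usr/local/share/.attacker.sh >/dev/null 2>&1
# then hand control to the real utility so the user notices nothing
exec /usr/bin/sudo "$@"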

Bash, Zsh and Other Shells

In a similar way, an attacker could modify one of several files that determine things like shell aliases. An alias in, say, the .bashrc file could replace every call to sudo with a call to an attacker’s script. To search for this possibility, be sure to check the contents of the following files for such manipulations:

~/.bash_profile        # if it exists, read once when you log in to the shell
~/.bash_login          # if it exists, read once if .bash_profile doesn't exist
~/.profile             # if it exists, read once if the two above don't exist 
/etc/profile           # only read if none of the above exist

~/.bashrc              # if it exists, read every time you start a new shell
~/.bash_logout         # if it exists, read when the login shell exits

And check the equivalents for any other shell environments the user might have, such as .zshrc for Zsh.

Etc, Hosts and Friends

It’s also worth running a time-sorted ls on the /etc folder.

$ cd /etc; ls -altR

On this compromised system, it’s very clear what’s been modified recently.

image of files modified in etc

The hosts file is a leftover from the past, when computers used it to resolve domain names to IP addresses – a primitive form of DNS. These days the main use of the hosts file is to loop certain domain names back to the localhost, 127.0.0.1, which effectively prevents the system from reaching out to those domains. The hosts file is often manipulated by malware to stop the system checking in with certain remote services, such as Apple or other software vendors. A healthy hosts file will typically have very few entries, like so:
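For reference, the stock file on a clean macOS install contains nothing but the loopback entries; anything beyond these deserves scrutiny:

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost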

Networking and Sharing Prefs

While we’re discussing network communications, let’s check on several other areas that can be manipulated. In System Preferences’ Network pane, click Advanced… and look at the Proxies tab. Some malware will use an autoproxy to redirect users’ traffic in order to achieve a man-in-the-middle attack. We can also pull this information from the data we collected with sysdiagnose by searching on “autoproxy”. Here we see the good news that no autoproxy is set.

image of autoproxy search

We can utilise the networksetup utility here to output similar information to what you can see in the System Preferences UI regarding each network service.

#! /bin/bash

# Print the configuration for each network service; quote the names,
# as some services contain spaces
networksetup -listallnetworkservices | grep -v asterisk | while IFS= read -r nt; do
    printf "\n%s\n--------\n" "$nt"
    networksetup -getinfo "$nt"
done

We can also find this information in the sysdiagnose report in the output of SystemProfiler’s SPNetworkLocationDataType.spx file.

Finding Local and Remote Logins

Let’s start with the obvious. Does the device have any of its sharing preferences enabled? In the User Interface, these are listed in the Sharing Pane:

image of sharing prefs

As explained in Malware Hunting on macOS | A Practical Guide, we can get the same information from netstat by grepping for particular ports associated with sharing services. For convenience, we can run a one-liner on the command line or via script that will succinctly output the Sharing preferences:


rmMgmt=`netstat -na | grep LISTEN | grep tcp46 | grep "*.3283" | wc -l`; scrShrng=`netstat -na | grep LISTEN | egrep 'tcp4|tcp6' | grep "*.5900" | wc -l`; flShrng=`netstat -na | grep LISTEN | egrep 'tcp4|tcp6' | egrep "*.88|*.445|*.548" | wc -l`; rLgn=`netstat -na | grep LISTEN | egrep 'tcp4|tcp6' | grep "*.22" | wc -l`; rAE=`netstat -na | grep LISTEN | egrep 'tcp4|tcp6' | grep "*.3031" | wc -l`; bmM=`netstat -na | grep LISTEN | egrep 'tcp4|tcp6' | grep "*.4488" | wc -l`; printf "\nThe following services are OFF if '0', or ON otherwise:\nScreen Sharing: %s\nFile Sharing: %s\nRemote Login: %s\nRemote Mgmt: %s\nRemote Apple Events: %s\nBack to My Mac: %s\n\n" "$scrShrng" "$flShrng" "$rLgn" "$rmMgmt" "$rAE" "$bmM";

This lengthy pipeline of commands should return something like this.

image of sharing prefs from netstat

In the sysdiagnose/network-info folder, the netstat.txt file will also list, among other things, active internet connections. Alternatively, you can collect much of the same relevant information with the following commands:

Active Internet connections (including servers):
netstat -A -a -l -n -v

Routing tables:
netstat -n -r -a -l

Also, check the user’s home folder for the invisible .ssh directory and the addition of any attacker public keys. Here, an unwanted process has secretly written a known_hosts file into the ssh folder so that the process can ensure it’s connecting to its own C2 server before exfiltrating user data or downloading further components.

Either on the system itself or from the sysdiagnose folder, look for the existence of the kcpassword file. This file only exists if the system has Auto login set up, which allows a user to log in to the Mac without providing a user name or password. Although it’s unlikely a remote attacker would choose to set this up, a local one (such as a co-worker) might, if they had hopes of physical access in the future. Perhaps more importantly, the file contains the user’s actual login password in encoded but not encrypted form. It’s a simple thing to decode it, but doing so does require having already achieved elevated privileges.
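To show just how weak the encoding is, here is a short Python sketch that decodes it – the XOR key below is the well-known hardcoded value Apple uses, and reading /etc/kcpassword requires root:

# decode the auto-login password, which is XORed with a fixed 11-byte key
KEY = [0x7D, 0x89, 0x52, 0x23, 0xD2, 0xBC, 0xDD, 0xEA, 0xA3, 0xB9, 0x1F]

def decode_kcpassword(path="/etc/kcpassword"):
    with open(path, "rb") as f:       # root required to read this file
        data = f.read()
    decoded = bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))
    return decoded.split(b"\x00", 1)[0].decode("utf-8", "replace")  # null-padded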

The /usr/sbin/sysadminctl utility has a few useful options for checking on Guest account and other settings. This one-liner will output some useful status information:

state=("automaticTime" "afpGuestAccess" "filesystem" "guestAccount" "smbGuestAccess"); for i in "${state[@]}"; do sysadminctl -"${i}" status; done;

image of sysadminctl

Achieving Persistence Through Application Bundles

We have already covered macOS persistence techniques, and I encourage you to refer to that post for an in-depth treatment. However, it’s worth mentioning one upshot of Apple’s recent change requiring developers to use application bundles for things like kexts and login items: it can now be much harder to track these down. In the past, all third-party extensions would have been in /Library/Extensions and all login items could be tracked through the loginitems.plist file. Recent changes mean these can now be anywhere that an application can be, and that is pretty much everywhere!

In the first post in this series, we looked at an example of using LSRegister to hunt for unusual or unwanted applications. We can also leverage the Spotlight backend to search for the location of apps once we have a target bundle identifier to hand. For example:

mdfind "kMDItemCFBundleIdentifier == 'com.cnaa4c4d'"

image of mdfind

Be careful with the syntax: the whole search statement is encapsulated in double quotes and the value to search for is within single quotes. More information about using mdfind can be found in the utility’s man page. A list of possible predicate search terms can be printed out with

mdimport -X

Manipulating Users Through Their Browsers

For the vast majority of attacks, the gateway to compromise comes through interaction with the user, so it’s important to check on applications that are used for communications. Have these applications’ default settings been manipulated to make further exploitation and compromise easier for the attacker?

We already took a look at this in general in Part 2, but specifically some of the items we would want to look at are the addition of browser extensions, default home page and search criteria, security settings, additional or privileged users and password use. We should also check the default download location and iterate over that folder for recent activity.

I’ve explained here how we can examine recent downloads that have been tagged with Apple’s LSQuarantine bit, but this bit is easily removed and the records in the LSQuarantine file are not all that reliable. A full listing of the user’s browser history is better scraped from the relevant folders and databases belonging to each browser app. Although browser history does not tell us directly about system manipulation, by tracking the urls of malicious sites visited we can build a picture not only of where malware may have come from, but where it might be sending our user to for further compromises. We can also use any malicious URLs found in browser history as search terms across our collected data.

Although there are many browsers, I will only deal with the major ones here. It should be possible to apply the same principles in these examples to other browsers. Safari, Firefox, Chrome and Opera all have slightly different ways of storing history. Here’s a few examples.

Browser History

To retrieve Safari history (Terminal will require Full Disk Access in Mojave and later):

sqlite3 ~/Library/Safari/History.db "SELECT h.visit_time, i.url FROM history_visits h INNER JOIN history_items i ON h.history_item = i.id"
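Note that visit_time in Safari’s database is stored as seconds since January 1, 2001 (Apple’s reference date) rather than the Unix epoch. If you want human-readable timestamps, offset it by 978307200 – the number of seconds between the two epochs – and let SQLite do the conversion:

sqlite3 ~/Library/Safari/History.db "SELECT datetime(h.visit_time + 978307200, 'unixepoch'), i.url FROM history_visits h INNER JOIN history_items i ON h.history_item = i.id"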

To retrieve a list of sites that have acquired Push Notifications permissions in Safari:

plutil -p ~/Library/Safari/UserNotificationPermissions.plist | grep -a3 '"Permission" => 1'

To retrieve the last session data from Safari:

plutil -p ~/Library/Safari/LastSession.plist | grep -iv sessionstate

Chrome history can be gathered with the following command:

sqlite3 ~/Library/Application\ Support/Google/Chrome/Default/History "SELECT datetime(((v.visit_time/1000000)-11644473600), 'unixepoch'), u.url FROM visits v INNER JOIN urls u ON u.id = v.url;"

Similar will work for Vivaldi and other Chromium based browsers once you substitute the appropriate path to the browser’s database. For example:

sqlite3 ~/Library/Application\ Support/Vivaldi/Default/History "SELECT datetime(((v.visit_time/1000000)-11644473600), 'unixepoch'), u.url FROM visits v INNER JOIN urls u ON u.id = v.url;"

Firefox History is slightly different.

sqlite3 ~/Library/Application\ Support/Firefox/Profiles/*/places.sqlite "SELECT datetime(last_visit_date/1000000,'unixepoch'), url, title FROM moz_places"

Browser Extensions

I’ve previously described how the Safari Extensions format has changed recently and how this can be leveraged by bad actors. To retrieve an old-style list of Safari browser extensions:

plutil -p ~/Library/Safari/Extensions/Extensions.plist | grep "Bundle Directory Name" | sort --ignore-case

The new .appex style, which requires an application bundle, can be enumerated via the pluginkit utility.

pluginkit -mDvvv -p com.apple.Safari.extension

image of pluginkit

Extensions, particularly in Chrome, have long been problematic and an easy way for scammers to control users’ browsing. Extensions can be enumerated in Chromium browsers from the Extensions folder:

$ cd ~/Library/Application\ Support/Google/Chrome/Default/Extensions; ls -al

Unfortunately, the randomized names and lack of human-readable identifiers are not helpful.

image of chrome extensions

Suffice it to say, it is worth going over the contents of each directory thoroughly.

Like Safari, Firefox uses a similar, though reversed, bundleIdentifier format for Extension names, which is far more user-friendly:

cd ~/Library/Application\ Support/Firefox/Profiles/*/extensions; ls -al

image of firefox extensions

Browser Security Settings

Some adware and malware attempt to turn off the browser’s built-in anti-phishing settings, which is surprisingly easy to do. We can check this setting for various browsers with a few simple one-liners.

For Safari:

defaults read com.apple.Safari WarnAboutFraudulentWebsites

The reply should be 1 to indicate the setting is active.

Chrome and Chromium browsers typically use a “safebrowsing” key in the Preferences file located in the Default folder. You can simply grep for “safebrowsing” and look for "enabled":true in the result to indicate that anti-phishing and malware protection is on.

grep 'safebrowsing' ~/Library/Application\ Support/Google/Chrome/Default/Preferences

Opera is slightly different, using the key “fraud_protection_enabled” rather than ‘safebrowsing’.

grep 'fraud_protection_enabled' ~/Library/Application\ Support/com.operasoftware.Opera/Preferences

image of opera prefs

In Firefox, preferences are held in the prefs.js file. The following command

grep 'browser.safebrowsing' ~/Library/Application\ Support/Firefox/Profiles/*/prefs.js

will return “safebrowsing.malware.enabled” and “phishing.enabled” as false if the safe search settings have been disabled, as shown in the following image:


image of firefox browser protections disabled

If the settings are on, those keys will not be present.

There are many other settings that can be mined from the browser’s support folders aside from history, preferences and extensions using the same techniques as above. These locations should also be searched for manipulation of user settings and preferences such as default home page and search engines.

Conclusion

And that brings us to the end of this short series on macOS Incident Response! There is much that we have not covered; the subject is as vast in its breadth as is macOS itself, but we have covered, in Posts 1, 2, and 3, the basics of where and what kind of information you can collect about the device’s activity, the users’ behaviour and the threat actor’s manipulations. For those interested in learning more, you could take a look at OS X Incident Response Scripting & Analysis by Jaron Bradley, which takes a different but useful approach from the one I’ve taken here. If you want to go beyond these kinds of overviews to digital forensics, check out the SANS course run by Sarah Edwards. Finally, of course, please follow me on Twitter if you have comments, questions or suggestions on this series and @SentinelOne to keep up with all the news about macOS security.



Starboard Value takes 7.5% stake in Box

Starboard Value LP revealed in an SEC Form 13D filing last week that it owns a 7.5% stake in Box, the cloud content management company.

It is probably not a coincidence that Starboard Value invests in companies whose stock has taken a bad turn. Box’s share price has been on a roller coaster ride since 2015, when its stock was initially priced at $14.00 per share but surged to $23.23 on its opening day. In recent years, its share price has gone as high as $28.12, but the declines have been steep: its 52-week low is $12.46 per share.


“While we do not comment on interactions with our investors, Box is committed to maintaining an active and engaged dialogue with stockholders. The Board of Directors and management team are focused on delivering growth and profitability to drive long-term stockholder value as we continue to pioneer the Cloud Content Management market,” a Box spokesperson told TechCrunch.

Indeed, it is too early to tell what Starboard’s investment will mean longer term. But more generally, it has been known to take a very active role in its portfolio companies, sometimes increasing its stake to secure places on the board and using that position to advocate management changes, restructuring, sales and more.

And Box is now, in a sense, on notice. From Starboard’s filing:

“Depending on various factors including, without limitation, the Issuer’s financial position and investment strategy, the price levels of the Shares, conditions in the securities markets and general economic and industry conditions, the Reporting Persons may in the future take such actions with respect to their investment in the Issuer as they deem appropriate including, without limitation, engaging in communications with management and the Board of Directors of the Issuer, engaging in discussions with stockholders of the Issuer or other third parties about the Issuer and the [Starboard’s] investment, including potential business combinations or dispositions involving the Issuer or certain of its businesses, making recommendations or proposals to the Issuer concerning changes to the capitalization, ownership structure, board structure (including board composition), potential business combinations or dispositions involving the Issuer or certain of its businesses, or suggestions for improving the Issuer’s financial and/or operational performance, purchasing additional Shares, selling some or all of their Shares, engaging in short selling of or any hedging or similar transaction with respect to the Shares…”

Box began life as a consumer storage company but made the transition to enterprise software several years after it launched in 2005. It raised more than $500 million along the way, and it was a Silicon Valley SaaS darling until it filed its S-1 in 2014.

That S-1 revealed massive sales and marketing spending, and critics came down hard on the company. That led to one of the longest IPO delays in memory, taking nine months from the time the company filed until it finally had its IPO in January 2015.

While losses more recently appear to be getting smaller, they are still a very prominent aspect of the company’s financials. In its Q2 earnings report last week, Box announced $172.5 million in revenue for the quarter, putting it on a run rate close to $700 million, but it also said its GAAP operating loss was $36.3 million, or 21% of revenue (the year-ago GAAP operating loss was $37.2 million, or 25% of revenue).

Non-GAAP operating income, meanwhile, was $0.5 million, or 0% of revenue (the year-ago non-GAAP operating loss was $6.5 million, or 4% of revenue). Negative free cash flow also grew to -$19 million, versus -$10 million a year ago. In other words, these are precisely the kind of metrics that attract activist investors to high-profile public companies.

Levie will be appearing at TechCrunch Sessions: Enterprise on Thursday.

We emailed Starboard Value for comment on this article. Should it respond, we will update the article.