Coverage and Context: The Key Measures of MITRE ATT&CK 2020

The Quick Read

MITRE has become the common language of EDR and the de facto standard for evaluating a product’s ability to provide actionable information to the SOC. MITRE ATT&CK’s emulation of APT29, the notorious threat actor behind the DNC breach, shows that many of today’s EDR tools fail to cope with advanced techniques. CISOs should carefully evaluate which technologies capture the most information and provide context at each stage of MITRE’s simulation. In this post, we discuss SentinelOne’s performance in MITRE’s ATT&CK Round 2, with the following takeaways.

  • SentinelOne had the lowest number of missed detections. SentinelOne provided the widest coverage of the MITRE ATT&CK framework. An EDR is measured primarily by its ability to see, analyze, and react; SentinelOne saw more, and provided more insight and context, than any other vendor.
  • SentinelOne achieved the highest number of combined high-quality detections and the highest number of correlated detections. SentinelOne delivered the most actionable detections. MITRE has identified the successful attribution of activity to its tactics (good) and its techniques (best) as a significant measure of an EDR’s value and ROI, and SentinelOne delivered better MITRE attribution than any other vendor.
  • SentinelOne automatically grouped hundreds of data points over the 3-day test into 11 correlated console alerts. SentinelOne automatically correlates related activity into unified alerts that provide Campaign Level Insight. This reduces the manual effort required, cuts down alert fatigue, and significantly lowers the skill barrier for responding to alerts. SentinelOne delivers the same visibility in a fraction of the alerts.
  • Human-powered MSSP scores must not be a crutch for failures in a product’s ability to detect. SentinelOne had the highest number of product-only detections and, in parallel, the highest number of human-only “MSSP” detections. Topping both scores means the technology itself is robust and can stand alone without a Managed Detection & Response (MDR) service; for SentinelOne, the world-class Vigilance MDR service is optional, providing verification and action should customers desire or require it.

Great products will catch the eye of CISOs and SecOps professionals if they provide the ability to do more with less, if they make work easier and more interesting for analysts, and if they can be operationalized without adversely affecting the production environment.

Now Read The Full Story…

The latest MITRE ATT&CK results were released Tuesday, April 21, 2020, and, as expected, interpreting them is an exercise in itself. Consulting MITRE testing is one component an organization can employ to evaluate cyberthreat preparedness, specifically how well cybersecurity solutions perform in the face of adversaries.

SentinelOne’s April 20, 2020 blog delves into the rationale and methodology behind MITRE testing and is a helpful compendium for understanding MITRE ATT&CK. It is important to understand that this evaluation does not test everything in the MITRE framework; instead, it focuses on two specific attack flows executed over three days in a lab. All told, there were 135 substeps.

High-level Remarks

We caution you out of the gate: do not believe every claim you will read in relation to this test. Question everything and check the data. What you can expect from SentinelOne is that we will present the data and be as helpful as we can in enabling you to interpret what it all means. We will also articulate our value claims and design principles. By intersecting these two realms, we provide fertile ground from which you may draw your own conclusions.

1. Seeing Is Believing: Coverage is Table Stakes

The foundation of a superior EDR solution lies in its ability to consume and correlate data at scale, economically, by harnessing the power of the cloud. Every piece of pertinent data should be captured to provide breadth of visibility for the operator. Data, specifically the capture of all events, is the building block of EDR.

As the graphs below show, SentinelOne had the fewest misses of all the participants in Round 2. 

2. Context is the Key to Operationalizing Data

We believe our MITRE results clearly demonstrate the SentinelOne platform’s effectiveness in identifying and stopping malicious code early and often, and illustrate the lengths to which SentinelOne goes in solving the data overload problem. In the MITRE evaluation, “Techniques” and “Tactics” are the key measures of data precision.

1. Technique: this is the epitome of relevant and actionable data – fully contextualized data points that tell a story, indicating what happened, why it happened, and, crucially, how it happened.

2. Tactic: this is the next level down in detection precision, representing categories of techniques that tell us the actor’s steps toward their ultimate goals (persistence, data egress, evasion, etc.). In short, the ‘what’ and the ‘why’.

These two detection classifications are the core of the MITRE framework and are of the highest value in creating context. According to MITRE’s published results, out of all participants in the Round 2 evaluation, SentinelOne recorded the highest number of “Techniques” and “Tactics” awards. 
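The tally described above can be sketched in a few lines. This is an illustrative model of how a buyer might count a vendor’s published results by detection category; the category names follow MITRE’s Round 2 terminology, but the code itself is an assumption, not MITRE’s scoring tooling.

```python
from collections import Counter

# Detection categories, roughly from least to most context. "Technique"
# and "Tactic" are the high-quality detections discussed above.
CATEGORIES = ["None", "Telemetry", "General", "Tactic", "Technique"]

def tally(detections):
    """Tally one category name per evaluation substep (Round 2 had 135)."""
    counts = Counter(detections)
    return {
        "misses": counts.get("None", 0),
        "high_quality": counts.get("Tactic", 0) + counts.get("Technique", 0),
        "by_category": {c: counts.get(c, 0) for c in CATEGORIES},
    }

results = tally(["Technique", "Tactic", "None", "Technique", "Telemetry"])
# results["high_quality"] == 3, results["misses"] == 1
```

Reading a vendor’s results this way makes the two headline numbers in this post concrete: misses (no detection at all) and high-quality detections (Tactic plus Technique attributions).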

To paraphrase Paul Webber, Gartner Endpoint Protection analyst, today’s security products must turn weak signals into strong detections. This is exactly what SentinelOne does, as proven by MITRE’s ATT&CK evaluation. SentinelOne as a standalone product is effective at identifying attacks and placing them in actionable context.

As we explained in our MITRE primer post on Monday, correlation is one of the detection modifiers applied to Technique and Tactic detections. Correlation represents the act of building relationships between data, completed at machine speed, so an analyst doesn’t have to manually stitch data together and waste precious time. SentinelOne had the most correlation modifiers in the MITRE ATT&CK Round 2:

Defenders are past the stage where more data is good. Instead, they need context from related data. We believe that the ideal state for a SOC is articulated stories that have all data pre-indexed and assigned to actionable storyline alerts. The entire MITRE ATT&CK Round 2 testing battery is captured in the SentinelOne console with just 11 alerts. 
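A toy version of this correlation step is shown below: raw telemetry events are grouped into one alert per process tree, so related data points collapse into a single storyline-style alert. The field names (`pid`, `ppid`) and the grouping key are illustrative assumptions, not SentinelOne’s actual implementation.

```python
def correlate(events):
    """Group events by the root of their process lineage."""
    parent = {e["pid"]: e.get("ppid") for e in events}

    def root(pid):
        # Walk up the parent chain until the lineage leaves the data set.
        while parent.get(pid) in parent:
            pid = parent[pid]
        return pid

    alerts = {}
    for e in events:
        alerts.setdefault(root(e["pid"]), []).append(e)
    return alerts

events = [
    {"pid": 1, "ppid": None, "action": "spawn powershell"},
    {"pid": 2, "ppid": 1, "action": "registry persistence"},
    {"pid": 3, "ppid": 1, "action": "network beacon"},
    {"pid": 9, "ppid": None, "action": "unrelated login"},
]
alerts = correlate(events)  # 4 events collapse into 2 alerts
```

The same principle, applied at machine speed across hundreds of data points, is what turns a three-day attack simulation into a handful of console alerts instead of a flood of uncorrelated events.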

Our customers don’t want 3,000 pieces of a Do-It-Yourself telemetry jigsaw puzzle, and they don’t want 135 uncontextualized alerts for every attack. What they want is the jigsaw puzzle ready-assembled into a tidy package that makes it easy for analysts to discern what is happening at first glance. 

Our data ingestion capabilities capture data that, coupled with our patented AI engines, is assembled into storylines with vivid pictures and rich context, applying autonomous actions at machine speed.

3. Great Products Solve Complex Problems – Services Should Always Be Optional

For an extra layer of completeness, or for those who seek to outsource SOC operations, SentinelOne offers optional Vigilance Managed Detection and Response (MDR) services. The MITRE data as related to SentinelOne proves that our technology, paired with our global team of expert MDR analysts, provides coverage across the board.

Join our MITRE webinar to see SentinelOne’s victorious performance against APT29.
Wednesday, April 29th @ 9 am PST

To summarize conversations with Josh Zelonis, Forrester’s EDR Wave analyst: buyers should ascertain whether MDR is a crutch for the product or value-additive.

MITRE’s ATT&CK data shows that SentinelOne had the highest level of detections delivered exclusively by the product (without any MDR services); in addition, Vigilance MDR services, operating the SentinelOne product, had the most MSSP detections. This is possible because our agent captures a richer set of events/signals that can be analyzed by our Level 4 Vigilance Ninjas to identify targeted attacks.

The experience for a SentinelOne Vigilance MDR customer would have been a single proactive email or phone call (depending on customer preference) with updates across the 11 alerts. Even in a detection-only policy with the product taking no actions, which is how the MITRE ATT&CK simulation is run, Vigilance would have stopped and remediated the attack in under 20 minutes.

It is important to note that SentinelOne’s platform was scored independently of its MSSP results, proving that the tool stands alone; if extra depth is desired, the tool plus MDR provides a deeper solution.

MITRE 2020 Results Takeaway

Great products, and in this case great security products, will catch the eye of CISOs and SecOps professionals if they do these things:

  • Shift work from mundane brain-killing activities to more interesting initiatives
  • Integrate with other parts of the security stack
  • Automate more of the work
  • Defeat adversaries in real time
  • Include granular remediation capabilities for automated cleanup and recovery
  • Encompass preventative measures that handle everything from garden-variety to advanced attacks

SentinelOne’s performance in MITRE ATT&CK Round 2 is a strong statement that visibility and AI, coupled together, create a powerful EDR solution. As evidenced by the data output of the simulation, SentinelOne excels at detection and, even more importantly, at the autonomous mapping and correlation of data into fully indexed stories. SentinelOne’s Storyline technology, powered by our patented Behavioral AI, sets us apart from every other vendor on the market.

Every CISO we speak with tells us the value they fundamentally seek is a solution that not only sees more but also does more, without adding friction, complexity, or cost. Cyber attackers move quickly, especially advanced adversaries. With SentinelOne, CISOs and their teams can trust a performant, cloud-native EDR platform proven to stay one step ahead, with a product-driven approach to understanding, organizing, and actioning data at machine speed.

To learn more about SentinelOne’s performance in MITRE ATT&CK APT29, join us in the webinar on Wednesday, April 29 at 9AM PST.


She also noted that a lot of Google Cloud’s ecosystem partners endorsed the overall Anthos architecture early on because they, too, wanted to be able to write once and run anywhere — and so do their customers.

Plaid is one of the launch partners for these new capabilities. “Our customers rely on us to be always available and as a result we have very high reliability requirements,” said Naohiko Takemura, Plaid’s head of engineering. “We pursued a multi-cloud strategy to ensure redundancy for our critical KARTE service. Google Cloud’s Anthos works seamlessly across GCP and our other cloud providers preventing any business disruption. Thanks to Anthos, we prevent vendor lock-in, avoid managing cloud-specific infrastructure, and our developers are not constrained by cloud providers.”

With this release, Google Cloud is also bringing deeper support for virtual machines to Anthos, as well as improved policy and configuration management.

Over the next few months, the Anthos Service Mesh will also add support for applications that run in traditional virtual machines. As Lin told me, “a lot of this is about driving better agility and taking the complexity out of it so that we have abstractions that work across any environment, whether it’s legacy or new or on-prem or AWS or GCP.”

AWS launches Amazon AppFlow, its new SaaS integration service

AWS today launched Amazon AppFlow, a new integration service that makes it easier for developers to transfer data between AWS and SaaS applications like Google Analytics, Marketo, Salesforce, ServiceNow, Slack, Snowflake and Zendesk. Like similar services, including Microsoft Azure’s Power Automate, for example, developers can trigger these flows based on specific events, at pre-set times or on-demand.

Unlike some of its competitors, though, AWS is positioning this service more as a data transfer service than a way to automate workflows, and, while the data flow can be bi-directional, AWS’s announcement focuses mostly on moving data from SaaS applications to other AWS services for further analysis. For this, AppFlow also includes a number of tools for transforming the data as it moves through the service.

“Developers spend huge amounts of time writing custom integrations so they can pass data between SaaS applications and AWS services so that it can be analysed; these can be expensive and can often take months to complete,” said AWS principal advocate Martin Beeby in today’s announcement. “If data requirements change, then costly and complicated modifications have to be made to the integrations. Companies that don’t have the luxury of engineering resources might find themselves manually importing and exporting data from applications, which is time-consuming, risks data leakage, and has the potential to introduce human error.”

Every flow (which AWS defines as a call to a source application to transfer data to a destination) costs $0.001 per run, though, in typical AWS fashion, there are also costs associated with data processing (starting at $0.02 per GB).
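Using the figures quoted above, a rough cost model is simple arithmetic (the workload numbers below are illustrative; check AWS's pricing page for current rates and tiers):

```python
# Back-of-the-envelope AppFlow cost from the quoted prices:
# $0.001 per flow run plus data processing from $0.02 per GB.
PRICE_PER_RUN = 0.001  # USD per flow run
PRICE_PER_GB = 0.02    # USD per GB processed (starting tier)

def monthly_cost(runs_per_day: int, gb_per_run: float, days: int = 30) -> float:
    """Estimated monthly bill for a single recurring flow."""
    runs = runs_per_day * days
    return runs * PRICE_PER_RUN + runs * gb_per_run * PRICE_PER_GB

# A hypothetical hourly sync moving 0.5 GB per run:
# 720 runs * $0.001 + 360 GB * $0.02 = $7.92 per month
print(round(monthly_cost(24, 0.5), 2))
```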

“Our customers tell us that they love having the ability to store, process, and analyze their data in AWS. They also use a variety of third-party SaaS applications, and they tell us that it can be difficult to manage the flow of data between AWS and these applications,” said Kurt Kufeld, vice president, AWS. “Amazon AppFlow provides an intuitive and easy way for customers to combine data from AWS and SaaS applications without moving it across the public internet. With Amazon AppFlow, our customers bring together and manage petabytes, even exabytes, of data spread across all of their applications — all without having to develop custom connectors or manage underlying API and network connectivity.”

At this point, the number of supported services remains comparatively low, with only 14 possible sources and four destinations (Amazon Redshift and S3, as well as Salesforce and Snowflake). Sometimes, depending on the source you select, the only possible destination is Amazon’s S3 storage service.
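The practical effect of that support matrix is that a flow has to be validated against its source's allowed destinations before it can be created. A hypothetical sketch (the mapping below is illustrative, not AWS's actual matrix):

```python
# Hypothetical sketch of the constraint described above: a flow is only
# valid if its destination is allowed for the chosen source. This
# mapping is illustrative, NOT AWS's actual support matrix.
ALLOWED_DESTINATIONS = {
    "Salesforce": {"Amazon S3", "Amazon Redshift", "Snowflake"},
    "Google Analytics": {"Amazon S3"},  # some sources can only target S3
    "Zendesk": {"Amazon S3"},
}

def validate_flow(source: str, destination: str) -> bool:
    """Return True if the source/destination pair is supported."""
    return destination in ALLOWED_DESTINATIONS.get(source, set())

print(validate_flow("Google Analytics", "Amazon S3"))   # True
print(validate_flow("Google Analytics", "Snowflake"))   # False
```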

Over time, the number of integrations will surely increase, but for now, it feels like there’s still quite a bit more work to do for the AppFlow team to expand the list of supported services.

AWS has long left this market to competitors, even though it has tools like AWS Step Functions for building serverless workflows across AWS services and EventBridge for connecting applications. Interestingly, EventBridge currently supports a far wider range of third-party sources, but as the name implies, its focus is more on triggering events in AWS than on moving data between applications.

ForgeRock nabs $93.5M for its ID management platform, gears up next for an IPO

For better or worse, digital identity management — the process of identifying and authenticating users on networks so they can access services — has become a ubiquitous part of interacting on the internet, all the more so in recent weeks as we have been asked to carry out ever more of our lives online.

Used correctly, they help ensure that it’s really you logging into your online banking service; used badly, you feel like you can’t innocently watch something silly on YouTube without being watched yourself. Altogether, they are a huge business: worth $16 billion today according to Gartner but growing at upwards of 30% and potentially as big as $30.5 billion by 2024, according to the latest forecasts.

Now, a company called ForgeRock, which has built a platform used to verify that people accessing services really are who they say they are, and to help organizations account for how their services are being used, is announcing a big round of funding to continue expanding its business amid a huge boost in demand.

The company is today announcing that it has raised $93.5 million in funding, a Series E it will use to continue expanding its product and take it to its next step as a business, specifically investing in R&D, cloud services and its ForgeRock Identity Cloud, and general global business development.

The round is being led by Riverwood Capital; Accenture Ventures, as well as previous investors Accel, Meritech Capital, Foundation Capital and KKR Growth, also participated.

Fran Rosch, the startup’s CEO, said in an interview that this will likely be its final round of funding ahead of an IPO, although given the current state of affairs with a lot of M&A, there is no timing set for when that might happen. (Notably, the company had said its last round of funding — $88 million in 2017 — would be its final one ahead of an IPO, although that was under a different CEO.)

This Series E brings the total raised by the company to $230 million. Rosch confirmed it was raised as a material upround, although he declined to give a valuation. For some context, the company’s last post-money valuation was $646.5 million per PitchBook, and so this round values the company at more than $730 million.

ForgeRock has annual recurring revenues of more than $100 million, with annual revenues also at over $100 million, Rosch said. It operates in an industry heavy with competition, with others vying for pole position in the various aspects of identity management including Okta, LastPass, Duo Security and Ping Identity.

But within that list it has amassed some impressive traction. In total it has 1,100 enterprise customers, who in turn collectively manage 2 billion identities through ForgeRock’s platform, with considerably more devices also authenticated and managed on top of that.

Customers include the likes of the BBC — which uses ForgeRock to authenticate and log not just 45 million users but also the devices they use to access its iPlayer on-demand video streaming service — Comcast, a number of major banks, the European Union and several other government organizations. ForgeRock was originally founded in Norway about a decade ago, and while it now has its headquarters in San Francisco, it still has about half its employees and half its customers on the other side of the Atlantic.

Currently, ForgeRock provides businesses with identity management services including password and username creation, identity governance, directory services, and privacy and consent gates, which those businesses in turn offer both to their human customers and to the devices accessing their services. But we’re in a period of change right now when it comes to identity management. The company stays away from direct-to-consumer password management services, and Rosch said there are no plans to move into that area.

These days, we’ve become more aware of privacy and data protection. Sometimes that awareness has come for the wrong reasons, such as giant security breaches that have leaked some aspect of our personal information into a giant database, or news stories that have uncovered how our information has unwittingly been used in ‘legit’ commercial schemes, or in other ways we never imagined it would be.

Those developments, combined with advances in technology, are very likely to lead us to a place over time where identity management will become significantly more shielded from misuse. These could include more ubiquitous use of federated identities, “lockers” that store our authentication credentials that can be used to log into services but remain separate from their control, and potentially even applications of blockchain technology.

All of this means that while a company like ForgeRock will continue to provide its current services, it’s also investing big in what it believes will be the next steps that we’ll take as an industry, and society, when it comes to digital identity management — something that has had a boost of late.

“There are a lot of interesting things going on, and we are working closely behind the scenes to flesh them out,” Rosch said. “For example, we’re looking at how best to break up data links where we control identities to get access for a temporary period of time but then pull back. It’s a powerful trend that is still about four to five years out. But we are preparing for this, a time when our platform can consume decentralised identity, on par with logins from Google or Facebook today. That is an interesting area.”

He notes that the current market, where there has been an overall surge in all online services as people stay home to slow the spread of the coronavirus pandemic, has seen big boosts in specific verticals.

Its largest financial services and banking customers have seen traffic up by 50%, and digital streaming has been up by 300% — with customers like the BBC seeing spikes in usage at 5pm every day (at the time of the government COVID-19 briefing) that are as high as its most popular primetime shows or sporting events — and use of government services has also been surging, in part because many services that hadn’t been online are now developing online presences or seeing much more traffic from digital channels than before. Unsurprisingly, its customers in hotel and travel, as well as retail, have seen drops, he added.

“ForgeRock’s comprehensive platform is very well-positioned to capitalize on the enormous opportunity in the Identity & Access Management market,” said Jeff Parks, co-founder and managing partner of Riverwood Capital, in a statement. “ForgeRock is the leader in solving a wide range of workforce and consumer identity use cases for the Global 2000 and is trusted by some of the largest companies to manage millions of user identities. We have seen the growth acceleration and are thrilled to partner with this leadership team.” Parks is joining the board with this round.