AWS announces new savings plans to reduce complexity of reserved instances

Reserved instances (RIs) have given companies that expect to use a certain level of AWS infrastructure resources a way to get some cost certainty. But as AWS’ Jeff Barr points out, they are on the complex side. To fix that, the company announced a new method called Savings Plans.

“Today we are launching Savings Plans, a new and flexible discount model that provides you with the same discounts as RIs, in exchange for a commitment to use a specific amount (measured in dollars per hour) of compute power over a one or three year period,” Barr wrote in a blog post announcing the new program.

Amazon charges customers in a couple of ways. First, there is an on-demand price, which is basically the equivalent of the rack rate at a hotel. You are going to pay more for this because you’re walking up and ordering it on the fly.

Most organizations know they are going to need a certain level of resources over a period of time, and in these cases, they can save some money by buying in bulk up front. This gives them cost certainty as an organization, and it helps Amazon because it knows it’s going to have a certain level of usage and can plan accordingly.

While Reserved Instances aren’t going away yet, it sounds like Amazon is trying to steer customers to the new savings plans. “We will continue to sell RIs, but Savings Plans are more flexible and I think many of you will prefer them,” Barr wrote.

The Savings Plans come in two flavors. Compute Savings Plans provide up to 66% savings and are similar to RIs in this regard. The aspect that customers should like is that the savings are broadly applicable across AWS products, and you can even move workloads between regions and maintain the same discounted rate.

The other is an EC2 Instance Savings Plan. With this one, which is also similar to reserved instances, you can save up to 72% over the on-demand price, but you are limited to a single region. It does offer a measure of flexibility, though, allowing you to select different sizes of the same instance type or even switch operating systems from Windows to Linux without affecting your discount within your region of choice.
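To make the discount model concrete, here is a rough, illustrative calculation. The hourly rates below are hypothetical placeholders, not actual AWS pricing; the 66% and 72% figures are the maximum discounts mentioned above:

```python
# Illustrative only: the rates here are hypothetical, not real AWS pricing.
HOURS_PER_YEAR = 8760

def annual_cost(hourly_rate, hours=HOURS_PER_YEAR):
    """Total cost of running continuously for a year at a given hourly rate."""
    return hourly_rate * hours

on_demand_rate = 0.10                              # hypothetical on-demand $/hour
compute_plan_rate = on_demand_rate * (1 - 0.66)    # up to 66% off (Compute Savings Plan)
ec2_plan_rate = on_demand_rate * (1 - 0.72)        # up to 72% off (EC2 Instance Savings Plan)

print(f"On-demand:            ${annual_cost(on_demand_rate):,.2f}/yr")
print(f"Compute Savings Plan: ${annual_cost(compute_plan_rate):,.2f}/yr")
print(f"EC2 Instance Plan:    ${annual_cost(ec2_plan_rate):,.2f}/yr")
```

At these placeholder rates, a single always-on instance drops from $876 a year on demand to roughly $298 or $245 a year under the two plans, in exchange for the dollars-per-hour commitment.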

You can sign up today through the AWS Cost Explorer.

Study: Ransomware, Data Breaches at Hospitals Tied to Uptick in Fatal Heart Attacks

Hospitals that have been hit by a data breach or ransomware attack can expect to see an increase in the death rate among heart patients in the following months or years because of cybersecurity remediation efforts, a new study posits. Health industry experts say the findings should prompt a larger review of how security — or the lack thereof — may be impacting patient outcomes.

Researchers at Vanderbilt University’s Owen Graduate School of Management took the Department of Health and Human Services (HHS) list of healthcare data breaches and used it to drill down on data about patient mortality rates at more than 3,000 Medicare-certified hospitals, about 10 percent of which had experienced a data breach.

As PBS noted in its coverage of the Vanderbilt study, in the wake of data breaches as many as 36 additional deaths per 10,000 heart attacks occurred annually at the hundreds of hospitals examined.

The researchers found that for care centers that experienced a breach, it took an additional 2.7 minutes for suspected heart attack patients to receive an electrocardiogram.

“Breach remediation efforts were associated with deterioration in timeliness of care and patient outcomes,” the authors found. “Remediation activity may introduce changes that delay, complicate or disrupt health IT and patient care processes.”

Leo Scanlon, former deputy chief information security officer at HHS, said the findings in this report practically beg for a similar study to be done in the United Kingdom, whose healthcare system was particularly disrupted by WannaCry, a global ransomware outbreak in May 2017 that spread through a Microsoft Windows vulnerability prevalent in older healthcare systems.

“The exploitation of cybersecurity vulnerabilities is killing people,” Scanlon told KrebsOnSecurity. “There is a lot of possible research that might be unleashed by this study. I believe that nothing less than a congressional investigation will give the subject the attention it deserves.”

A post-mortem on the impact of WannaCry found the outbreak cost U.K. hospitals almost £100 million and caused significant disruption to patient care, such as the cancellation of some 19,000 appointments — including operations — and the disruption of IT systems for at least a third of all U.K. National Health Service (NHS) hospitals and eight percent of general practitioners. In several cases, hospitals in the U.K. were forced to divert emergency room visitors to other hospitals.

But what isn’t yet known is how WannaCry affected mortality rates among heart attack and stroke patients whose ambulances were diverted to other hospitals because of IT system outages related to the malware. Or how many hospitals and practices experienced delays in getting back test results needed to make critical healthcare decisions.

Scanlon said he’s asked around quite a bit over the years to see if any researchers have taken up the challenge of finding out, but so far he hasn’t found anyone doing that analysis.

“A colleague who is familiar with large scale healthcare data sets told me that unless you are associated with a research institution, it would be almost impossible to pry that kind of data out of the institutions that have it,” Scanlon said. “The problem is this data is hard to come by — nobody likes to admit that death can be attributable to a non-natural cause like this — and is otherwise considered sensitive at a very high and proprietary level by the institutions that have the facts.”

A study published in the April 2017 edition of The New England Journal of Medicine suggests that applying the Vanderbilt researchers’ approach to patient outcomes at U.K. hospitals in the wake of WannaCry would be worth carrying out.

In the NEJM study, morbidity and mortality data was used to show that there is a measurable impact when ambulances and emergency response teams are removed from normal service and redirected to standby during public events like marathons and other potential targets of terrorism.

The study found that “medicare beneficiaries who were admitted to marathon-affected hospitals with acute myocardial infarction or cardiac arrest on marathon dates had longer ambulance transport times before noon (4.4 minutes longer) and higher 30-day mortality than beneficiaries who were hospitalized on nonmarathon dates.”

“Several colleagues and I are convinced that the same can be shown about WannaCry, on the large scale, and also at the small scale when ransomware attacks impact a regional hospital,” Scanlon said.

In November 2018, I was honored to give the keynote at a conference held by the Health Information Sharing and Analysis Center (H-ISAC), a non-profit that promotes the sharing of cyber threat information and best practices in the healthcare sector.

In the weeks leading up to that speech, I interviewed more than a dozen experts in healthcare security to find out what was top of mind for these folks. Incredibly, one response I heard from multiple healthcare industry experts was that there is currently no data available to support the finding of a negative patient outcome as a result of a cybersecurity vulnerability or attack.

As I kept talking to experts, it occurred to me that if smart people in this industry could say something like that with a straight face, it was probably because not a lot of people were looking too hard for evidence to the contrary.

With this Vanderbilt study, that’s demonstrably no longer true.

A copy of the new study is available here (PDF).

Privilege Escalation | macOS Malware & The Path to Root Part 1

In this two-part series, we take a look at privilege escalation on macOS. In Part 1, we look at some of the vulnerabilities that have been discovered by security researchers in recent versions of Apple’s Desktop OS, focusing on those that have been turned into reliable exploits. We draw conclusions for enterprise and end users alike based on this review. In Part 2, we switch from researchers to attackers and explore both how and why the methodology of macOS threat actors takes quite a different path from that of the research community.

image of privilege escalation macOS

What is Privilege Escalation?

Let’s start by defining our terms. Whenever code executes, it does so within the context of the user who invokes it. Technically, users need not always be people, but for our purposes here we’ll stick to the simple case of a user deciding to launch some application or script. When that happens, the code has access to the same resources that the user has; access to system files, other users’ files and any other protected resources is usually out of scope.

Unless, of course, that user happens to be the root user, or has launched the process by first requesting that it be run as root. Users familiar with the command line will recognize the second scenario from those times when they execute a command prefixed with sudo. In the desktop user interface, the same thing occurs when an installer or other tool asks for permission to make changes that the current user does not have the authority to make, at least not without authentication (i.e., providing the required credentials). This can be anything from installing a new helper tool to moving or deleting a folder outside of the user’s home folder, or executing a script that requires elevated privileges.
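To see what “user context” means in practice, here is a minimal, illustrative Python sketch that inspects the real and effective user IDs the kernel uses for permission checks:

```python
import os

# A process's privileges are determined by its user IDs, not by the code it
# runs. Root is always UID 0; ordinary users get non-zero UIDs.
real_uid = os.getuid()        # the user who launched the process
effective_uid = os.geteuid()  # the identity used for permission checks

print(f"real UID: {real_uid}, effective UID: {effective_uid}")

if effective_uid == 0:
    print("Running as root: system files and other users' files are in scope.")
else:
    print("Running unprivileged: access is limited to this user's resources.")
```

Run normally, both IDs belong to the invoking user; run under sudo, both report 0 (root), which is exactly the elevated context privilege escalation aims to reach without authorization.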

image of elevating privileges

This kind of privilege elevation is all well and good, but privilege escalation occurs when a user or process acquires these same elevated privileges when they are not supposed to. Privilege escalation can occur through software or OS vulnerabilities – largely the subject of this post – but also through social engineering, which we’ll mention more about in Part 2.

How Common is Privilege Escalation on macOS?

If you pay attention to Apple’s product security announcements (you can sign up here for email alerts), you may be surprised at just how many code execution bugs are squashed on each update. That, of course, is great! Apple and a small body of dedicated researchers around the world are constantly exposing and fixing serious security bugs.

image apple security updates

That said, not all of the bugs that Apple fix are privilege escalation bugs. Many are a more general class of bugs known as ‘arbitrary code execution’, meaning only that the flaw could allow an attacker to execute any code they choose in a certain context. Arbitrary code execution can be a precursor of privilege escalation, however. An exploit chain often begins with the ability to execute arbitrary code, but whether the attacker can use that to execute code that will raise privileges is a separate matter.

Even when a vulnerability does allow privilege escalation, not all such vulnerabilities turn into working exploits. The flaw may be mitigated by other factors, or exploitable only under circumstances so rare as to make the bug merely of technical interest. Take, for example, this macOS 18.7.0 Kernel Local Privilege Escalation (18.7.0 being the Darwin kernel version), which the author describes as “pretty much unusable for in-the-wild exploitation” but which is nevertheless still of interest to security researchers:

image of exploit db 1
image of exploit db 2

Despite this, there have still been quite a few serious privilege escalation bugs in macOS for which working exploits exist. Let’s take a look at some of them.

Privilege Escalation on macOS El Capitan 10.11, macOS Sierra 10.12

To keep it relevant, we’ll start with exploits that still affect at least El Capitan, the version where Apple first introduced System Integrity Protection and, arguably, began to take security seriously. As of Oct 2019, El Capitan is estimated to hold around 7% of macOS market share, with Sierra a little above that at 8.99%.

image of macos stats

Physmem is an exploit that works on older OS versions, too. I’ve personally tested it on systems as old as OS X 10.9 Mavericks, and the author of the exploit believes the code probably dates back to OS X 10.5 Leopard. Also affected are all versions of OS X 10.10 Yosemite, El Capitan versions 10.11.5 and earlier, and Sierra 10.12.0 and 10.12.1. Physmem exploits either CVE-2016-1825 or CVE-2016-7617, depending on the target system.

The source code for physmem is publicly available, and thus accessible to attackers. As a local privilege escalation, it requires the user to download and run some third-party software, a fairly regular occurrence, which, unknown to the victim, contains the malicious code. A seemingly benign piece of software (perhaps a fake Adobe Flash installer, media downloader or similar) could contain the physmem binary and allow that process to elevate to root without the user needing to type in an admin or any other kind of password. Once root is achieved, the software can do as it wishes without the user’s interaction. The whole process would be invisible to the user from the point that they launch the supposedly benign software, which might, to avoid raising suspicion, otherwise perform exactly as expected.

macOS 10.13 High Sierra Privilege Escalations

The vulnerability that makes physmem possible was patched from 10.12.2 onwards, but there’s a whole class of other vulnerabilities affecting unpatched versions of El Capitan, Sierra and High Sierra, which were detailed in this blog post.

Although it was far from simple to exploit, that didn’t stop determined researchers from crafting a working exploit out of a flaw in macOS’s WindowServer and achieving arbitrary code execution with system privileges.

Chained with other exploits such as a sandbox escape, CVE-2018-4193 could be used to achieve remote code execution, requiring only that a user click a malicious link. It is estimated that around 18% of macOS installs are still running some version of High Sierra 10.13.

macOS 10.14 Privilege Escalations

Mojave, which accounts for around 42% of macOS installs, is not immune from privilege escalation, either. Product security and vulnerability researcher @CodeColorist has discovered two vulnerabilities, CVE-2019-8565 and CVE-2019-8513 that lead to privilege escalation on macOS Mojave 10.14.3 and earlier. Both have already been incorporated into Metasploit and are available to red teamers.

The first exploits a race condition in a little-known but native macOS application called Feedback Assistant. This app resides in an obscure “Applications” folder within the /System/Library/CoreServices path.

image of feedback assistant

Feedback Assistant is an application used primarily by developers and macOS bug testers to submit problem reports directly to Apple. As CodeColorist discovered, following work by Project Zero’s Ian Beer, Feedback Assistant leverages a privileged XPC connection with the service name “com.apple.appleseed.fbahelperd”. However, since communication with the privileged XPC service is verified only by the caller’s process identifier, it is subject to a race condition whereby a malicious application first spawns the entitled process and then reuses its PID.
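The reason a PID check makes for weak authentication is that a PID is not a stable identity: a process can replace its entire executable image via exec() while keeping the same PID. This hypothetical Python sketch (not the actual exploit code) demonstrates the property the race condition abuses:

```python
import os
import subprocess
import sys

# A child process records its PID, then replaces itself via exec() with a
# completely different program. The PID survives the exec, so any service
# that authorized "the process with this PID" is now trusting different code.
code = (
    "import os, sys\n"
    "print(os.getpid(), flush=True)\n"          # PID before exec
    "os.execv(sys.executable, [sys.executable, '-c',\n"
    "         'import os; print(os.getpid())'])\n"  # PID after exec
)
out = subprocess.run([sys.executable, "-c", code],
                     capture_output=True, text=True).stdout.split()
pid_before, pid_after = out
print("same PID across exec:", pid_before == pid_after)
```

The PIDs printed before and after the exec match, so a privileged service that has already authorized “the process with this PID” can end up talking to entirely different code.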

image of target feedback assistant

In this example, we execute the exploit on a macOS High Sierra instance, but the exploit is quite reliable on macOS versions 10.14.3 and below.

gif of exploit

Attempting to use the same exploit on 10.14.4, however, fails after Apple patched the bug.

image of exploit on mojave

CVE-2019-8513, which also works up to and including Mojave 10.14.3, shows just how sketchy some of macOS’s built-in command line tools can be. The Time Machine utility tmdiagnose is just one example of a whole class of tools with XPC logic bugs.

image of tmdiagnose

The utility is largely just an Objective-C wrapper around a bunch of other command line utilities. Prior to 10.14.4, one utility tmdiagnose called to do a privileged task was awk. Although awk is a full-blown programming language in its own right, most people who are familiar with it will know it as a powerful utility for processing text files. The code in tmdiagnose uses awk to split up text output from diskutil list and then makes a system call constructed from that output.

This is an incredibly unsafe way to execute privileged processes, since the system call can be manipulated by tampering with the output of diskutil list. Cleverly, CodeColorist’s exploit creates a disk image whose name contains the malicious payload. When tmdiagnose runs over the list of disk names, it executes the payload with elevated privileges.
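The underlying flaw is classic command injection: attacker-controlled text (here, a disk name) ends up interpolated into a privileged shell command. This simplified, hypothetical Python sketch (not tmdiagnose’s actual code) contrasts the unsafe pattern with a safe one:

```python
import subprocess

# Pretend this string came from `diskutil list` output: the attacker named
# their disk image so that it carries a shell payload after the semicolon.
disk_name = "Backup; echo INJECTED"

# UNSAFE: interpolating untrusted text into a shell command line. The
# semicolon ends the intended command and the payload runs as a second one.
unsafe = subprocess.run(f"echo Checking disk: {disk_name}",
                        shell=True, capture_output=True, text=True)
print(unsafe.stdout)  # the INJECTED payload executed as its own command

# SAFE: pass arguments as a list, with no shell to reinterpret them.
safe = subprocess.run(["echo", "Checking disk:", disk_name],
                      capture_output=True, text=True)
print(safe.stdout)  # the payload is just inert text in one argument
```

In the unsafe variant the shell executes the payload; in the safe variant the disk name remains an ordinary argument. A privileged tool that takes the unsafe path hands its privileges to whoever controls the input.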

Are There Any Privilege Escalations for macOS Catalina?

And what of post-10.14.4 and 10.15? A quick look at Apple’s most recent security update for macOS Catalina suggests we’ll be seeing quite a few write-ups of further privilege escalation vulnerabilities and exploits in the near future. The most recent security update, for macOS Catalina 10.15.1, lists at least nine bugs targeting the 10.15 release or 10.14.6 whose impact allows arbitrary code execution with system or elevated privileges.

image of 10.15.1 security update

These bugs affect a wide variety of APIs and services, including the new System Extensions (CVE-2019-8805), PluginKit (CVE-2019-8715), the Kernel (CVE-2019-8786), Intel Graphics Driver (CVE-2019-8807), GraphicsDriver (CVE-2019-8784), File System Events (CVE-2019-8798), File Quarantine (CVE-2019-8509), AppleGraphicsControl (CVE-2019-8716) and even manpages (CVE-2019-8802).

image of graphics driver bug Catalina

What Can We Learn From macOS Privilege Escalations?

I regularly come across people who are still running older versions of macOS, both privately and at work. The reasons vary from familiarity – “my trusty old 10.xx install” – and mistrust – “What? Another bodged Apple update?” – to hardware or software incompatibility: users with older Macs that won’t support an update, or who still rely on old software with dependencies like Rosetta, 32-bit apps, or older Java JDKs or JREs. The situation will likely escalate when notarization comes into full force in 10.15 and later, and when the widely expected removal of support for software using kernel extensions (kexts), perhaps in 10.16 or 10.17, kicks in over the next year or two.

One of the common responses I hear when talking to people running older versions of macOS is that security is not an issue, by which they mean that they are confident that the built-in protections ensure the older system is safe. Gatekeeper is on, System Integrity Protection is on, and the system settings are set to automatically receive background updates to XProtect and MRT:

image of software update

More circumspect users may have installed an end-user AV suite, and feel that they are more than adequately covered.

Unfailingly, those users express surprise and concern when I point out that working exploits like those above are commonly available and are not detected by either the OS or many third-party security solutions, unless those solutions happen to be monitoring execution at a deep level, which most are not.

Conclusion

What all this should teach macOS users is that, like any complex piece of software – and they don’t come more complex than an operating system – the OS will always have vulnerabilities that dedicated researchers and enthusiasts will eventually uncover. Relying on built-in security, or even third-party security solutions that don’t have visibility into processes as they execute, is no defense. It’s why updates are essential, as is a more robust security solution.

In Part Two, we’ll shift our focus from researchers to attackers. How does all this translate into what we see in the wild? Are threat actors taking the same path to root and privilege escalation as researchers, or have they found alternative ways to reach the same end? Follow us on any of our social media feeds below and we’ll notify you when Part Two is out!


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Microsoft launches the first public preview of its Fluid Framework for collaborative editing

One of the most interesting (and confusing) news announcements of Microsoft’s Build developer conference earlier this year was the first public demo of the company’s Fluid Framework. Fluid is meant to make building collaborative real-time editing experiences easier for developers, but Microsoft is also building it into some of its own tools, like Office and Outlook. It’s nothing less than a re-imagining of what documents should look and feel like.

Today, at its Ignite conference in Orlando, Fla., Microsoft launched the first public preview of the Fluid Framework end-user experience, as well as a private preview for developers.

As Microsoft notes, the Fluid Framework has three main capabilities: the multi-person co-authoring features, the componentized document model and the ability to plug in intelligent agents that can, for example, translate text in real-time or suggest edits. To some degree, this isn’t all that different from Google Docs or even Microsoft’s own collaboration features in Office. But what’s new is that Microsoft is opening this up to developers and that it is looking at the Fluid Framework as a new way to deconstruct and componentize documents, which can then be used across applications.

Microsoft plans to build the Fluid Framework into lots of experiences across Microsoft 365, including Teams, Outlook, SharePoint, OneNote and Office. If you want to see it in action, you can now try the public preview to see what editing documents with it feels like.

Robocorp announces $5.6M seed to bring open-source option to RPA

Robotic Process Automation (RPA) has been a hot commodity in recent years as it helps automate tedious manual workflows inside large organizations. Robocorp, a San Francisco startup, wants to bring open source and RPA together. Today it announced a $5.6 million seed investment.

Benchmark led the round, with participation from Slow Ventures, firstminute Capital, Bret Taylor (president and chief product officer at Salesforce) and Docker CEO Rob Bearden. In addition, Benchmark’s Peter Fenton will be joining the company’s board.

Robocorp co-founder and CEO Antti Karjalainen has been around open-source projects for years, and he saw an enterprise software category that was lacking in open-source options. “We actually have a unique angle on RPA, where we are introducing open source and cloud native technology into the market and focusing on developer-led technologies,” Karjalainen said.

He sees a market that’s top-down and focused on heavy sales cycles. He wants to bring the focus back to the developers who will be using the tools. “We are all about removing friction from developers. So, we are focused on giving developers tools that they like to use, and want to use for RPA, and doing it in an open-source model where the tools themselves are free to use,” he said.

The company is built on the open-source Robot Framework project, which was originally developed as an open-source software testing environment, but he sees RPA having a lot in common with testing, and his team has been able to take the project and apply it to RPA.

If you’re wondering how the company will make money, they are offering a cloud service to reduce the complexity even further of using the open-source tools, and that includes the kinds of features enterprises tend to demand from these projects, like security, identity and access management, and so forth.

Benchmark’s Peter Fenton, who has invested in several successful open-source startups, including JBoss, SpringSource and Elastic, sees RPA as an area that’s ripe for a developer-focused open-source option. “We’re living in the era of the developer, where cloud-native and open source provide the freedom to innovate without constraint. Robocorp’s RPA approach provides developers the cloud native, open-source tools to bring RPA into their organizations without the burdensome constraints of existing offerings,” Fenton said.

The company intends to use the money to add new employees and continue scaling the cloud product, while working to build the underlying open-source community.

While UiPath, a fast-growing startup with a hefty $7.1 billion valuation, recently announced it was laying off 400 people, Gartner published a study in June showing that RPA is the fastest-growing enterprise software category.

Volterra announces $50M investment to manage apps in hybrid environment

Volterra is an early-stage startup that has been quietly working on a comprehensive solution to help companies manage applications in hybrid environments. The company emerged from stealth today with a $50 million investment and a set of products.

Investors include Khosla Ventures and Mayfield, along with strategic investors M12 (Microsoft’s venture arm), Itochu Technology Ventures and Samsung NEXT. The company, which was founded in 2017, already has 100 employees and more than 30 customers.

What attracted these investors and customers is a full-stack solution that includes both hardware and software to manage applications in the cloud or on-prem. Volterra founder and CEO Ankur Singla says when he was at his previous company, Contrail Systems, which was acquired by Juniper Networks in 2012 for $176 million, he saw first-hand how large companies were struggling with the transition to hybrid.

“The big problem we saw was in building and operating applications that scale is a really hard problem. They were adopting multiple hybrid cloud strategies, and none of them solved the problem of unifying the application and the infrastructure layer, so that the application developers and DevOps teams don’t have to worry about that,” Singla explained.

He says the Volterra solution includes three main products — VoltStack​, VoltMesh and VoltConsole — to help solve this scaling and management problem. As Volterra describes the total solution, “Volterra has innovated a consistent, cloud-native environment that can be deployed across multiple public clouds and edge sites — a distributed cloud platform. Within this SaaS-based offering, Volterra integrates a broad range of services that have normally been siloed across many point products and network or cloud providers.” This includes not only the single management plane, but security, management and operations components.

Diagram: Volterra

The money has come over a couple of rounds, helping to build the solution to this point, and it required a complex combination of hardware and software to do it. They are hoping organizations that have been looking for a cloud-native approach to large-scale applications, such as industrial automation, will adopt this approach.

Coveo raises US$172M at $1B+ valuation for AI-based enterprise search and personalization

Search and personalization services continue to be a major area of investment among enterprises, both to make their products and services more discoverable (and used) by customers, and to help their own workers get their jobs done, with the market estimated to be worth some $100 billion annually. Today, one of the big startups building services in this area raised a large round of growth funding to continue tapping that opportunity.

Coveo, a Canadian company that builds search and personalization services powered by artificial intelligence — used by its enterprise customers by way of cloud-based, software-as-a-service — has closed a C$227 million ($172 million in U.S. dollars) round, which CEO Louis Tetu tells me values the company at “well above” $1 billion, “Canadian or U.S. dollars.”

Specifically, the equity stake of this round is 15.5%, equating to a valuation of C$1.46 billion, or $1.1 billion in U.S. dollars.

The round is being led by Omers Capital Private Growth Equity Group, the investing arm of the Canadian pensions giant that makes large, later-stage bets (the company has been stepping up the pace of investments lately), with participation also from Evergreen Coast Capital, FSTQ and IQ Ventures. Evergreen led the company’s last round of $100 million in April 2018, and in total the company has now raised just over $402 million with this round.

The valuation appears to be a huge leap in the context of Coveo’s funding history: in that last round, it had a post-money valuation of about $370 million, according to PitchBook data.

Part of the reason for that is because of Coveo’s business trajectory, and part is due to the heat of the overall market.

Coveo’s round is coming about two weeks after another company that builds enterprise search solutions, Algolia, raised $110 million. The two aim at slightly different ends of the market, Tetu tells me, not directly competing in terms of target customers, and even services.

“Algolia is in a different ZIP code,” he said. Good thing, too, if that’s the case: Salesforce — which is one of Coveo’s biggest partners and customers — was also a strategic investor in the Algolia round. Even if these two do not compete, there are plenty of others vying for the same end of the enterprise search and personalization continuum — they include Google, Microsoft, Elastic, IBM, Lucidworks and many more. That, again, underscores the size of the market opportunity.

In terms of Coveo’s own business, the company works with some 500 customers today and says SaaS subscription revenues grew more than 55% year-over-year this year. Five hundred may sound like a small number, but it covers a lot of very large enterprises spanning web-facing businesses, commerce-based organizations, service-facing companies and enterprise solutions.

In addition to Salesforce, it includes Visa, Tableau (also Salesforce now!), Honeywell, a Fortune 50 healthcare company (whose name is not getting disclosed) and what Tetu described to me as an Amazon competitor that does $21 billion in sales annually but doesn’t want to be named.

Coveo’s basic selling point is that the better discoverability and personalization that it provides helps its customers avoid as many call-center interactions (reducing operating expenditures), improves sales (boosting conversions and reducing cart abandonment) and helps companies themselves just work faster.

Significantly, the area that Coveo works in is going through a noticeable shift these days.

A swing toward stronger data protection, and consumers’ preference for having more control over how and for what their data is used, has meant that there are at times fewer tools than there used to be for providing the kind of “discoverability” and “personalization” that companies like Coveo build for their clients. That swing was spurred by high-profile revelations detailing how different organizations manipulated user data across social networking sites and other platforms to target people with sneaky political content and advertising to influence voting, revelations that subsequently cracked open the wasp nest of just how much of our data is harvested and used all the time.

Tetu believes there is a way to deliver personalization without compromising how a person wants to exist in the digital world.

“The whole notion is to be able to control data but also have personalization in the future,” he said. But there are two dimensions to this, he added:

“The continued and growing regulatory pressure around privacy [such as GDPR] is good, it’s the will of the people and legislation will go that way. The world is going cookie-less,” he said. “But we can’t ignore the arbitrage between privacy and utility. If I understand what you will do with my data and use it to provide more relevance, that can be excellent, too.”

He calls himself an “Amazon addict” but points out that Amazon highlights the two sides of the data coin: “Is it predatory or excellent in doing the job it does? I can’t decide on an answer. I think they are both.”

All the same, Coveo is working on ways around the “cookie-less” future. The company Coveo acquired in Milan earlier this year, Tetu said, “can do machine learning detection. In five clicks it can detect your propensity to buy and your interest. It means you can’t blame anyone for observing you.”

So, while there are a lot of players out there chasing the same discoverability and personalization market, the attraction here is not just about a company doing it well, but looking to skate to where the puck is going (see what I did there, Canadian startup?).

“We believe that Coveo is the market leader in leveraging data and AI to personalize at scale,” said Mark Shulgan, managing director and head of Growth Equity at Omers, in a statement. “Coveo fits our investment thesis precisely: an A-plus leadership team with deep expertise in enterprise SaaS, a Fortune 1000 customer base who deeply love the product, and a track record of high growth in a market worth over $100 billion. This makes Coveo a highly-coveted asset. We are glad to be partnering to scale this business.”

Alongside business development on its own steam — the company now has around 500 employees — Coveo is going to be using this funding for acquisitions. Tetu notes that Coveo still has a lot of money in the bank from previous rounds.

“We are a real company with real positive economics,” he said. “This round is mostly to have dry powder to invest in a way that is commensurate in the AI space, and within commerce in particular.” To get the ball rolling on that, this past July, Coveo acquired Tooso, a specialist in AI-based digital commerce technology.

Neo4j introduces new cloud service to simplify building a graph database

Neo4j, a popular graph database, is available as an open-source product for anyone to download and use. Its enterprise product aimed at larger organizations is growing fast, but the company recognized there was a big market in between those two extremes, and today it introduced a new managed cloud service called Aura.

They wanted something in the product family for smaller companies, says Emil Eifrem, CEO and co-founder at Neo4j. Aura gives these smaller players a much more manageable offering with flexible pricing options. “To get started with an enterprise project can run hundreds of thousands of dollars per year. Whereas with Aura, you can get started for about 50 bucks a month, and that means that it opens it up to new segments of the market,” Eifrem told TechCrunch. As he points out, even a startup on a shoestring budget can afford $50 a month.

Aura operates on a flexible pricing model and offers the kind of value proposition you would expect from a cloud version of the product: the company handles all of the management, security and updates, and the service scales as needed to meet your data requirements as you grow. The idea is to let developers concentrate on simply building applications while Neo4j deals with the database.

He says over time, he could see larger businesses, which don’t want to deal with the management side of developing a graph database application, also using the cloud product. “Why would you want to operate your own database? You should probably focus on your core business and building applications to support that core business,” he said. But he recognizes change happens slowly in larger organizations, and not every business will be comfortable with a managed service. That’s why they are offering different options to meet different requirements.

Graph databases allow you to see connections between data. It is the underlying technology, for example, in a social networking app, that lets you see the connection between people you know and people your friends know. It is also the technology on an e-commerce site that can offer recommendations based on what you bought before because people who buy a certain product are more likely to purchase other related products.
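The friend-of-friend traversal described above can be sketched in plain Python using a simple adjacency list. This is an illustration only, with made-up names; a real Neo4j application would express the same question as a Cypher query against the database rather than walking an in-memory dict.

```python
# Minimal sketch of friend-of-friend recommendations over an
# adjacency list. In Neo4j the equivalent would be a Cypher query
# along the lines of (sketch, not tested against a live database):
#   MATCH (me:Person {name: $name})-[:KNOWS]->()-[:KNOWS]->(fof)
#   WHERE NOT (me)-[:KNOWS]->(fof) AND fof <> me
#   RETURN DISTINCT fof.name

# Hypothetical social graph: who knows whom.
friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "dave", "erin"},
    "dave": {"bob", "carol"},
    "erin": {"carol"},
}

def friend_suggestions(person: str) -> set[str]:
    """People reachable in two hops who are not already direct friends."""
    direct = friends.get(person, set())
    two_hops = {fof for f in direct for fof in friends.get(f, set())}
    return two_hops - direct - {person}

print(sorted(friend_suggestions("alice")))  # prints ['dave', 'erin']
```

The same two-hop pattern underlies the e-commerce case: replace “knows” edges with “bought” edges between people and products, and the traversal yields “people who bought this also bought” recommendations.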

Cyber-skills platform Immersive Labs raises $40M for North American expansion

Immersive Labs, a cybersecurity skills platform, has raised $40 million in its Series B, the company’s second round of funding this year following an $8 million Series A in January.

Summit Partners led the fundraise, with Goldman Sachs participating, the Bristol, U.K.-based company confirmed.

Immersive, led by former GCHQ cybersecurity instructor James Hadley, helps corporate employees learn new security skills by using real, up-to-date threat intelligence in a “gamified” way. Its cybersecurity learning platform uses a variety of techniques and psychology to build up immersive and engaging cyber war games to help IT and security teams learn. The platform aims to help users better understand cybersecurity threats, like detecting and understanding phishing and malware reverse-engineering.

It’s a new take on cybersecurity education, as the company’s founder and chief executive Hadley said the ever-evolving threat landscape has made traditional classroom training “obsolete.”

“It creates knowledge gaps that increase risk, offer vulnerabilities and present opportunities for attackers,” said Hadley.

The company said it will use the round to expand further into the U.S. and Canadian markets from its North American headquarters in Boston, Mass.

Since its founding in 2017, Immersive already has big customers to its name, including Bank of Montreal and Citigroup, on top of its U.K. customers, including BT, the National Health Service and London’s Metropolitan Police.

Goldman Sachs, an investor and customer, said it was “impressed” by Immersive’s achievements so far.

“The platform is continually evolving as new features are developed to help address the gap in cyber skills that is impacting companies and governments across the globe,” said James Hayward, the bank’s executive director.

Immersive said it has 750% year-over-year growth in annual recurring revenues and more than 100 employees across its offices.

Meet the Client Workshop | What Can We Learn From A Security Executive?

SentinelOne recently hosted a “Meet the Client” workshop with security and marketing professionals, providing them with a rare opportunity to speak with a security executive in a “safe” environment. Our guest for the day was Les Correia, Director of Global Information Security, Architecture, Engineering and Operations at Estée Lauder. Les has a long and distinguished career in IT and security, and he shared with the forum some of his professional insights on security product acquisition.


Increasing Awareness Among CISOs, Executives

Les noted that the executive view of security is changing. Board members and executives are more involved than ever, and the CISO role has become a tricky business position, with competing priorities, such as risk versus availability, and principles dictating many of a CISO’s decisions. CISO turnover is also very rapid, so security executives must take great care to record their decision process in order to be able to justify it in case things go south.

Day to Day Challenges in Cybersecurity

Les reiterated the shortage of skilled staff to cover all of the security engineering, security operations, new product evaluation, and threat and risk management roles. This creates massive workloads for personnel and leaves only limited bandwidth for dealing with current and new technologies.

Future Trends in Enterprise Security

Les noted the current trend of quickly migrating all possible technologies and functions to the cloud, whether applications, storage, or entire data centers. This rapid push demands better controls to manage risk, both on premises and in the cloud. A global, distributed organization must also be able to support a large workforce across numerous locations, which brings identity, privacy, segmentation and other management technologies to the forefront. Les also noted that the risk from unaccounted-for IoT devices continues to be a real concern.

Evaluating Enterprise Security Products

When considering new products, security personnel must first look at how the product contributes to the improvement of overall security or reduction of risk. Other considerations include ease of use, vendor roadmap alignment, interoperability and integration with existing products, automation, and financial drivers. A new security product must not, in any way, interfere with operations or revenue generation of the business.
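To make the trade-offs among such criteria concrete, here is a hypothetical weighted-scoring sketch in Python. The criterion names, weights and vendor scores are all invented for illustration and are not taken from Les’s actual process.

```python
# Illustrative weighted scoring matrix for comparing candidate security
# products. All criteria, weights and scores below are hypothetical.

WEIGHTS = {
    "risk_reduction":    0.35,  # contribution to overall security / risk reduction
    "interoperability":  0.20,  # integration with existing products
    "ease_of_use":       0.15,
    "automation":        0.10,
    "roadmap_alignment": 0.10,
    "cost":              0.10,  # financial drivers
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine 0-10 criterion scores into a single weighted total."""
    return sum(WEIGHTS[c] * scores.get(c, 0.0) for c in WEIGHTS)

vendor_a = {"risk_reduction": 8, "interoperability": 9, "ease_of_use": 7,
            "automation": 5, "roadmap_alignment": 6, "cost": 6}
vendor_b = {"risk_reduction": 9, "interoperability": 6, "ease_of_use": 5,
            "automation": 8, "roadmap_alignment": 7, "cost": 8}

for name, scores in [("Vendor A", vendor_a), ("Vendor B", vendor_b)]:
    print(f"{name}: {weighted_score(scores):.2f}")
```

The point of such a matrix is not the exact numbers but that a hard constraint, like not interfering with business operations or revenue, sits outside the scoring entirely: a product that fails it is disqualified no matter how well it scores.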

How to Win With Security Executives?

The key is consistency and professionalism throughout the buying cycle. From the first approach to the demo, PoC (Proof of Concept) and deployment, responsiveness is highly regarded. Every stage of the sales cycle should deliver a positive experience that moves the process forward, and a good initial demo is key to progressing to a full PoC.

Security Products: The Acquisition Process

Les defines his process as one that starts with identifying needs, defining the business case, then securing the budget. The next stage involves scoping the market for solutions, and reading market reports, sometimes just to draw up a shortlist from the multitude of solutions on the market. From the shortlist, the next stage is to try and define what criteria a demo or PoC must meet in order to be considered successful.

Consulting with peers and considering references like Gartner Peer Reviews can help narrow the shortlist down to the final contenders before a PoC and reaching a go/no-go decision on a particular solution.

As for the PoC process itself, it must be well defined with realistic KPIs and requirements on both ends. No enterprise will allow full security testing on a real network or endpoint, so we need to figure out how to demonstrate the effectiveness of the technology in a lab environment. That means using real malware along with compatibility testing in a pilot production environment.

It is actually better to test in an isolated environment using identical hardware models and software versions. Any divergence between the testing environment and the production environment invites false positives and failures when the product is later rolled out to the client, and that kind of result can linger in the collective memory and taint a vendor’s reputation for years.

Tests don’t have to be perfect, but what does have to be perfect is the responsiveness of the PoC team. Responsiveness shows the vendor’s commitment and professionalism. Clients will use the same test scenarios on all vendors and will quantify and compare the results, but often the differences are not huge and the decision comes down to which vendor offers better service and responsiveness.

[Image: Les Correia and Migo Kedem]

Remember that the PoC is not a full-time job, so it will take time to complete, as the people running it are simultaneously busy with their day-to-day duties. Les also advises security vendors not to neglect documentation. It’s critical for showing the technical depth of the product, and good documentation eases the burden on busy IT and security professionals dealing with your product.

Another driver is of course cost. Some excellent products will cost too much over time, and hence will be passed over for a less capable product with better ROI. One should continue to explore products that lessen the load by combining features that enable consolidation. Even a fantastic product is not enough on its own – it has to fit the organization’s budget and needs, and it has to be compatible with other systems in the organization.

Les has formulated a well-defined process of evaluating security products. He has shared his insights in an e-book, which you can download for free here.

What Not to Do When Approaching Security Executives?

[Image: Les Correia at the SentinelOne meetup]

Cold calls are the worst. Cold email could work, depending on the recipient’s availability and workload. Most executives will visit one of the large industry shows for insights, e.g. Black Hat or RSA, as well as some of the smaller cons.

Aggressive marketing is a turn-off, and so is badmouthing the competition. Similarly, don’t oversell your product or promise to solve all of the customer’s cyber problems. Focus on tangible benefits and provide a roadmap to cover future needs and improvements.

Wrapping Up…

We are grateful to Les for sharing his experience and to all the participants for their contributions. The meetup was a rare opportunity to bring together two distinct populations: security practitioners and the people who market and sell security products to them. We believe that such open, transparent and honest dialogue is crucial to improving the relationships between vendors and clients and, ultimately, will help us all deliver better security.

