Microsoft challenges Twilio with the launch of Azure Communication Services

Microsoft today announced the launch of Azure Communication Services, a new set of features in its cloud that enable developers to add voice and video calling, chat and text messages to their apps, as well as old-school telephony.

The company describes the new set of services as the “first fully managed communication platform offering from a major cloud provider,” and that seems right: Google and AWS offer some of these features (the AWS notification service, for example), but not as part of a cohesive communication service. Indeed, Azure Communication Services looks more like a competitor to the core features of Twilio or the up-and-coming MessageBird.

Over the course of the last few years, Microsoft has built up a lot of experience in this area, in large part thanks to the success of its Teams service. Unsurprisingly, that’s something Microsoft is also playing up in its announcement.

“Azure Communication Services is built natively on top of a global, reliable cloud — Azure. Businesses can confidently build and deploy on the same low latency global communication network used by Microsoft Teams to support over 5 billion meeting minutes in a single day,” writes Scott Van Vliet, corporate vice president for Intelligent Communication at the company.

Microsoft also stresses that it offers a set of additional smart services that developers can tap into to build out their communication services, including its translation tools, for example. The company also notes that its services are encrypted to meet HIPAA and GDPR standards.

Like similar services, developers access the various capabilities through a set of new APIs and SDKs.

As for the core services, the capabilities here are pretty much what you’d expect. There’s voice and video calling (and the ability to shift between them). There’s support for chat and, starting in October, users will also be able to send text messages. Microsoft says developers will be able to send these to users anywhere, positioning the offering as a global service.
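For a rough sense of what this looks like for developers, here is a minimal sketch of sending a text message with the Azure Communication Services Python SDK. This is an illustrative assumption rather than sample code from Microsoft’s announcement: the package, method and parameter names reflect the publicly documented SDK and may differ from the preview available at launch, and the connection string and phone numbers are placeholders.

# Minimal sketch: sending an SMS via Azure Communication Services (Python SDK).
# Assumption: package/method/parameter names follow the documented SDK and may
# differ from the launch preview; connection string and numbers are placeholders.
from azure.communication.sms import SmsClient

# A connection string is issued when you create a Communication Services
# resource in the Azure portal (placeholder value shown here).
connection_string = "endpoint=https://<your-resource>.communication.azure.com/;accesskey=<key>"

sms_client = SmsClient.from_connection_string(connection_string)

# Send a single message from a provisioned number to one recipient.
results = sms_client.send(
    from_="+15551234567",   # a number provisioned through the service
    to=["+15557654321"],    # one or more recipient numbers
    message="Your appointment is confirmed for 3pm tomorrow.",
)
print(results)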

Phone number provisioning is also part of the service: developers will be able to provision numbers for inbound and outbound calls, port existing numbers, request new ones and — most importantly for contact-center users — integrate them with existing on-premises equipment and carrier networks.

“Our goal is to meet businesses where they are and provide solutions to help them be resilient and move their business forward in today’s market,” writes Van Vliet. “We see rich communication experiences – enabled by voice, video, chat, and SMS – continuing to be an integral part in how businesses connect with their customers across devices and platforms.”

Mirakl raises $300 million for its marketplace platform

French startup Mirakl has raised a $300 million funding round at a $1.5 billion valuation — the company is now a unicorn. Mirakl helps you launch and manage a marketplace on your e-commerce website. Many customers also rely on Mirakl-powered marketplaces for B2B transactions.

Permira Advisers is leading the round, with existing investors 83North, Bain Capital Ventures, Elaia Partners and Felix Capital also participating.

“We’ve closed this round in 43 days,” co-founder and U.S. CEO Adrien Nussenbaum told me. But the due diligence process has been intense. “[Permira Advisers] made 250 calls to clients, leads, partners and former employees.”

Many e-commerce companies rely on third-party sellers to increase their offering. Instead of having one seller selling to many customers, marketplaces let you sell products from many sellers to many customers. Mirakl has built a solution to manage the marketplace of your e-commerce platform.

Some 300 companies have been working with Mirakl on their marketplaces, including Best Buy Canada, Carrefour, Darty and Office Depot. More recently, Mirakl has been increasingly working with B2B clients as well.

These industry-specific marketplaces can be used for procurement or bulk selling of parts. In this category, clients include Airbus Helicopters, Toyota Material Handling and Accor’s Astore. 60% of Mirakl’s marketplaces are still consumer-facing, but the company is now adding about as many B2B marketplaces as B2C ones.

“We’ve developed a lot of features that enable platform business models that go further than simple marketplaces,” co-founder and CEO Philippe Corrot told me. “For instance, we’ve invested in services — it lets our clients develop service platforms.”

In France, for instance, Conforama can upsell customers on services when they buy furniture. Mirakl has also launched its own catalog manager so that you can merge listings, add information and so on.

The company is using artificial intelligence to do the heavy lifting on this front. There are other AI-enabled features, such as fraud detection.

Given that Mirakl is a marketplace expert, it’s not surprising that the company has also created a sort of marketplace of marketplaces with Mirakl Connect.

“Mirakl Connect is a platform that is going to be the single entry point for everybody in the marketplace ecosystem, from sellers to operators and partners,” Corrot said.

For sellers, the benefits are quite obvious: you can create a company profile and promote products on multiple marketplaces at once. But Mirakl is also starting to work with payment service providers, fulfillment companies, feed aggregators and other partners, with the aim of becoming a one-stop shop for marketplaces.

Overall, Mirakl-powered marketplaces generated $1.2 billion in gross merchandise volume (GMV) during the first half of 2020. That represents a 111% year-over-year increase, despite the economic crisis.

With today’s funding round, the company plans to expand across all areas — same features, same business model, but with more resources. It plans to hire 500 engineers and scale its sales and customer success teams.

HubSpot’s new end-to-end sales hub aims to simplify CRM for midmarket customers

HubSpot, the Boston firm that made its name by helping to define the inbound marketing concept, sees a pandemic landscape that’s changing the way companies sell, forcing more inside sales. Today, the company announced the HubSpot Sales Hub Enterprise at Inbound, its annual conference, which is being held virtually this year.

While the company has been offering a CRM tool for five years now, one it feels has addressed ease-of-use issues for salespeople, the new tool is about bringing an end-to-end approach that addresses not only the needs of salespeople, but those of management and system admins as well, says Lou Orfanos, GM and VP of Sales Hub at HubSpot.

“So, this is about [providing customers with a more powerful set of tools] and also just making sure that you can run your sales process end-to-end in our platform. We feel really good about being able to offer that out of the box natively and being able to do everything you need to do [in one tool], which is, I think, pretty unique given the state of the market and having to [cobble] a bunch of things together yourself,” Orfanos explained.

While the previous product was aimed more at smaller businesses, Chief Customer Officer Yamini Rangan, who previously worked at Dropbox, Workday and SAP, says the new one is aimed at midmarket companies with more complex sales workflows.

“What we find is that the customer experience for a 500-person company or for a 1,000-person company is quite different and their expectations are quite different than a 10-person small business. What the Sales Hub Enterprise specifically brings is the ease of use, as well as the powerful features [ … ] to a larger midmarket organization,” Rangan said.

HubSpot specifically sees larger companies in this space, like Adobe, Salesforce and SAP, acquiring different pieces of the stack and incorporating them into a solution, or customers pulling together different pieces of the stack themselves. The company believes that by building a single integrated solution itself, the result will be naturally easier to use.

“We also find that that’s the size of the company where the tech stack, the sales stack and the marketing stack gets super complex, and they’re spending a lot of time trying to integrate a lot of different point solutions and what we find is having all of this — marketing, CMS, sales underlined by a CRM platform — that gives them visibility that they need to run their entire go-to-market operations,” she said.

While the lower end of the market that HubSpot is targeting probably won’t interest larger competitors, especially Salesforce, HubSpot expects to compete with those companies as it moves up to larger customers. Rangan says she believes that by providing this new offering, the company is giving customers options they didn’t have before.

But she also sees this as a way into companies as they grow: if HubSpot can catch them earlier in their evolution, it can grow with them and become their vendor of choice, rather than the usual suspects.

“What we find is that companies will start as a 100-person company and grow to become a 500- or a 1,000-person company, and as they grow up on HubSpot we become their growth suite and we become the core platform of record for them to continue to grow,” she said.

Daily Crunch: Microsoft launches Azure Communication Services

Microsoft takes on Twilio, Google launches a work-tracking tool and Mirakl raises $300 million. This is your Daily Crunch for September 22, 2020.

The big story: Microsoft launches Azure Communication Services

Microsoft announced today that it’s ready to compete with Twilio by launching a set of features that allow developers to add voice and video calling, chat, text messages and old-school telephony to their apps.

“Azure Communication Services is built natively on top of a global, reliable cloud — Azure,” wrote Microsoft’s Scott Van Vliet. “Businesses can confidently build and deploy on the same low latency global communication network used by Microsoft Teams to support 5B+ meeting minutes daily.”

This is just one of a number of announcements that Microsoft made at its Ignite conference this morning. Other additions include a platform for detecting biological threats and the Azure Orbital service for satellite operators.

The tech giants

Google launches a work-tracking tool and Airtable rival, Tables — Tables’ bots help users do things like scheduling recurring email reminders when tasks are overdue and messaging a chat room when new form submissions are received.

Amazon adds support for Kannada, Malayalam, Tamil and Telugu in local Indian languages push ahead of Diwali — The company said this move should help it reach an additional 200-300 million users in India.

Pinterest breaks daily download record due to user interest in iOS 14 design ideas — Following the release of iOS 14, the excitement around the ability to customize your iPhone home screen has been paying off for Pinterest.

Startups, funding and venture capital

Mirakl raises $300 million for its marketplace platform — Mirakl helps companies launch and manage a marketplace on their e-commerce websites.

Pure Watercraft ramps up its electric outboard motors with a $23 million series A — Pure Watercraft is building an electric outboard motor that can replace a normal gas one for most boating needs.

Morgan Beller, co-creator of the Libra digital currency, just joined the venture firm NFX — And yes, that means she’s leaving Facebook.

Advice and analysis from Extra Crunch

Despite a rough year for digital media, Blavity and The Shade Room are thriving — A recap of my Disrupt discussion with Morgan DeBaun of Blavity and Angelica Nwandu of The Shade Room.

Big tech has 2 elephants in the room: Privacy and competition — There’s clearly a nervousness among even well-established tech firms to discuss this topic.

How has Corsair Gaming posted such impressive pre-IPO numbers? — The company was founded in 1994, making it more of a mature business than a startup.

(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

TikTok, WeChat and the growing digital divide between the US and China — Catherine Shu discusses the dramatic shift in the relationship between tech companies in both countries.

Tech must radically rethink how it treats independent contractors — Just as COVID-19 has accelerated the move to remote work, our current crisis has accelerated the trend toward hiring independent contractors.

Bose introduces a new pair of sleep-focused earbuds — The timing of the Sleepbuds II could hardly be better.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Revisiting the Pyramid of Pain | Leveraging EDR Data to Improve Cyber Threat Intelligence

Producing and consuming actionable Cyber Threat Intelligence is a large part of a security analyst’s daily work, but threat intelligence comes in many forms. As most experienced analysts know, some forms of threat intel are more useful than others, but that usefulness tends to be inversely proportional to availability. File hashes and IP addresses for the latest campaigns are usually the first to be shared among security researchers, but they are also rapidly changed by attackers, limiting their utility. This relationship between availability and usefulness was nicely illustrated by David J. Bianco’s Pyramid of Pain. The general points of the Pyramid still hold true, but security solutions have not stood still in the intervening years, and with the right technology to hand, producing and consuming high-value indicators like TTPs can be a whole lot easier than it once was. Let’s see how.

Revisiting the Pyramid of Pain

Let’s recall how Bianco’s Pyramid of Pain works. The ‘pain’ here is supposed to be the pain felt by attackers once a particular kind of indicator for their attack becomes known. However, as we’ll see, the pyramid also describes parallel difficulties for defenders in terms of availability of each class of indicator.

At the base, or widest part, of the pyramid, we have file hashes – the kind of IoCs that we are all used to dealing with on a daily basis. These are easy to acquire and widely shared, so availability of these is typically good. The problem, though, is that it is also relatively painless for attackers to change a file’s hash; indeed, much modern malware even does this “autonomously” – so-called polymorphic malware – and it’s comparatively easy to write malicious software that creates copies of itself with a different file hash each time. Thus, as the pyramid graphic suggests, discovery of particular malicious file hashes causes the attacker virtually no pain at all in terms of adapting to and evading solutions that rely on detecting file hashes.

Much the same can be said of IP addresses and even domain names, which nowadays can be changed even more easily and rapidly than back in 2013 when Bianco first developed the Pyramid of Pain concept.

Network and Host artifacts – distinguishing characteristics of network traffic or host activity – increase the pain for attackers somewhat once these become known. Defenders can use technologies such as Suricata rules and Snort to identify known malicious network traffic, and tools like Yara rules and ProcFilter can similarly match malicious patterns in files and processes executing on a device.
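To make that concrete, here is a minimal, illustrative sketch of how a defender might apply a YARA-style host-artifact signature using the yara-python bindings. The rule and the scan directory are toy placeholders, not a real detection.

# Minimal sketch: matching a host artifact with yara-python.
# The rule and the directory below are illustrative placeholders only.
import yara
from pathlib import Path

# A trivial rule that flags files containing a suspicious marker string.
RULE_SOURCE = r'''
rule Example_Host_Artifact
{
    strings:
        $marker = "suspicious-campaign-marker" ascii
    condition:
        $marker
}
'''

rules = yara.compile(source=RULE_SOURCE)

# Scan every file in a directory and report which rules matched.
for path in Path("/tmp/samples").iterdir():
    if path.is_file():
        matches = rules.match(filepath=str(path))
        if matches:
            print(f"{path}: matched {[m.rule for m in matches]}")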

For attackers, getting around these kinds of “signatures” involves some work (aka ‘pain’). First, they have to determine what pattern matching rule or rules are being used; since different security solutions may employ different or multiple rules, that in itself can be a difficult process. Second, once the attackers have determined how they’re being detected, they need to refactor their code in order to avoid the patterns used in the signatures.

Despite that, the pain isn’t that great if you’re a full-time threat actor outfit – it’s just part of the job. A good example of malware that continuously iterates in this way is the script-based Shlayer and ZShlayer malware that targets macOS. Part of the attraction of using scripts, from the malware authors’ point of view, is that it’s much easier and faster to iterate with shell scripts than with compiled binaries. Scripting allows far more flexibility in achieving malware objectives than many compiled programming languages, many binary scanning engines don’t know how to handle scripts anyway, and scripts can be executed in memory with relative ease.

Things get tougher for threat actors when defenders have a good handle on what particular tools are being used to attack them – these can range from custom-made frameworks to publicly available and open-source toolkits. Switching out one tool – or more likely, set of tools – for another increases the burden on cyber threat actors because it is not always easy to find or introduce tools into a victim’s environment that have the desired capabilities.

For example, if the threat actor is making heavy use of LOLBins like PowerShell and CertUtil, or relying on publicly available tools like Cobalt Strike or Mimikatz, it may be a very tough challenge to obtain the same functionality with different tools.

Finally, TTPs, which we’ll discuss in more detail in the next section, cause the greatest pain for threat actors because they home in on the attacker’s actual objectives and seek to block behavior that attempts to execute those objectives.

Intelligent Tools That Produce Actionable Intelligence

From the above discussion, it should be clear that, from a defender’s point of view, developing awareness of attackers’ tools and TTPs (Tactics, Techniques, and Procedures) – the indicators that cause the threat actor the most pain – is where we should focus our efforts for the most gain. The problem is that Bianco’s Pyramid of Pain also paints a picture of how easy each of those threat intelligence indicators typically is to come by for most enterprises: easy at the bottom, tough at the top.

File hashes are widely available in most threat intel reports but are time-consuming to digest and have a short shelf life. At the other end of the scale, TTPs return the most value, but they are not so widely known or distributed.

However, initiatives like MITRE ATT&CK have added a new dimension to cyber threat intelligence, and security tools with features like SentinelOne’s Rapid Threat Hunting put new-found power into the analyst’s hands.

In the image above, we can see how the SentinelOne Threat Center displays all the behavioral indicators associated with a particular detection, with links to MITRE ATT&CK TTPs, for the analyst’s convenience.

Similarly, suppose you have seen a new threat intelligence report indicating a particular TTP. You could immediately search your entire fleet for any process or event with behavioral characteristics that match that TTP simply by entering the MITRE ID in the SentinelOne console’s Deep Visibility query box.

Focusing on TTPs in particular gives you a great advantage when defining rulesets or watchlists for added protection. For example, you can automate hunts using particular behavioral indicators that belong to known attacks seen in your own environment or in the environment of others. Since Bianco published the influential Pyramid of Pain concept, many threat intelligence researchers, including SentinelOne’s own SentinelLabs, include MITRE ATT&CK TTPs at the end of their reports along with other IoCs. With the right tools to hand, you can easily consume this kind of threat intel directly into your solution for both automated detection and rapid threat hunting.

Conclusion

Utilizing detailed, actionable and effective intelligence is key to thwarting cyber attacks. File hashes, IP addresses and domain names have increasingly limited use as both attackers and malware have evolved to produce campaigns in which these traditional IoCs are rapidly discarded. However, by focusing on indicators that are difficult for attackers to change, with technology that can both consume and produce these much sought-after indicators, we can increase the cost of business for attackers while improving our ability to detect and defeat the cyber menace.

If you would like to learn more about how SentinelOne’s Singularity platform can help improve your threat intelligence and protect your business, contact us today or request a free demo.



The Good, the Bad and the Ugly in Cybersecurity – Week 38

The Good

If you’ve been following security news for the last couple of years you may well remember the CCleaner and ASUS ShadowHammer supply chain attacks. Great news this week: five Chinese individuals thought to be responsible for those and more than 100 other hacks have been indicted by the U.S. government. More formally known as APT41, the group has also been behind ransomware attacks and cryptominer infections.

Zhang Haoran, Tan Dailin, Jiang Lizhi, Qian Chuan and Fu Qiang remain at large, almost certainly in China, and the chances of arrest remain slim so long as they eschew international travel. However, two Malaysian businessmen, Wong Ong Hua and Ling Yang Ching, who helped the gang profit from stolen game currencies are facing extradition from Malaysia to the U.S. and could well see jail time.

The identification of the Chinese gang members came along with the seizure of hundreds of accounts, servers, domain names and other internet assets. Neither the indictment nor the seizures are likely to stop the gang from engaging in further operations, but their identification and the insight gained into their close relationship with the Chinese Ministry of Public Security sends a strong signal to such actors that they can no longer be certain of anonymity or immunity from international sanction.

The Bad

Staying with China, CISA issued an advisory this week warning that Chinese-affiliated nation-state actors are targeting U.S. government agencies in a new wave of attacks leveraging OSINT and publicly available tools. The hackers’ toolkits include pentester favorites such as Shodan, Cobalt Strike and Mimikatz.

On top of that, the threat actors have been exploiting well-known but unpatched networking software vulnerabilities such as CVE-2019-11510 (Pulse Secure VPN), CVE-2019-19781 (Citrix VPN), CVE-2020-0688 (MS Exchange Servers) and CVE-2020-5902 (F5 Networks Big-IP TMUI).

Unpatched VPN software has long been a cause for concern, and this isn’t the first time that CISA has warned companies about APTs targeting critical infrastructure sectors.

The latest advisory also notes that:

To conceal the theft of information from victim networks and otherwise evade detection, the defendants typically packaged victim data in encrypted Roshal Archive Compressed files (RAR files), changed RAR file and victim documents’ names and extensions (e.g., from “.rar” to “.jpg”) and system timestamps, and concealed programs and documents at innocuous-seeming locations on victim networks and in victim networks’ “recycle bins”.
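One way defenders can hunt for that particular evasion is to compare file extensions against file headers. The following is a minimal sketch, not taken from the advisory: it walks a directory (the path and extension list are assumptions for illustration) and flags files with image-like extensions whose contents begin with the RAR signature.

# Minimal sketch: spotting RAR archives renamed with innocuous extensions.
# The scan root and extension list are illustrative assumptions.
from pathlib import Path

# RAR4 and RAR5 archives both begin with the bytes "Rar!\x1a\x07".
RAR_MAGIC = b"Rar!\x1a\x07"
SUSPECT_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}

def find_disguised_rars(root):
    # Yield files whose extension claims "image" but whose header says RAR.
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix.lower() in SUSPECT_EXTENSIONS:
            try:
                header = path.open("rb").read(len(RAR_MAGIC))
            except OSError:
                continue  # unreadable file; skip it
            if header == RAR_MAGIC:
                yield path

for hit in find_disguised_rars("/srv/fileshare"):
    print(f"Possible disguised RAR archive: {hit}")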

CISA advises organizations to implement robust configuration and patch management programs to prevent attackers from making easy use of common vulnerabilities and off-the-shelf tools. While that’s certainly a minimum, some robust EDR should be top of your priority list, too.

The Ugly

This week’s Ugly is a sad tale of how the unintended consequences of a cyber attack can end in real-life tragedy. What appears to have been an attempt at a ransomware attack on a German university by inexperienced hackers ended up encrypting 30 servers in a nearby hospital. The malware dropped a ransom note in the usual way, naming the university directly and providing a means of contact to arrange payment.

The operators were no doubt surprised to hear directly from Düsseldorf police rather than the university administrators. The police informed them that they had missed their intended target and had in fact put the lives of patients at the Düsseldorf University Clinic in jeopardy. The ransomware had crashed the hospital’s servers, forcing administrators to redirect emergency admissions to other locations. One patient who needed urgent admission was redirected to a hospital 32km away, delaying treatment of her life-threatening condition by an hour. Sadly, by then there was little the doctors could do, and she passed away.

The hackers did provide the police with a decryption key without payment, but have otherwise remained uncontactable. It appears the compromise targeted a software vulnerability in “commercially available software”, which has since been patched. It is not known which strain of ransomware was used, but reportedly no data was exfiltrated.

The police continue to investigate and hope to bring charges of ‘negligent manslaughter’. If ever there was a lesson to make those who think hacking might be a “fun”, “easy way to make money” that “doesn’t do anyone any harm” step back and think again, then this is surely it.



Salesforce announces 12,000 new jobs in the next year just weeks after laying off 1,000

In a case of bizarre timing, Salesforce announced it was laying off 1,000 employees at the end of last month just a day after announcing a monster quarter with over $5 billion in revenue, putting the company on a $20 billion revenue run rate for the first time. The juxtaposition was hard to miss.

Earlier today, Salesforce CEO and co-founder Marc Benioff announced in a tweet that the company would be hiring 4,000 new employees in the next six months, and 12,000 in the next year. While it seems like a mixed message, it’s probably more about reallocating resources to areas where they are needed more.

While Salesforce wouldn’t comment further on the hirings, the company has obviously been doing well in spite of the pandemic, which has had an impact on its customers. In the prior quarter, the company forecast slower revenue growth because it was giving some customers hit hard by the economic downturn more time to pay their bills.

That’s why it was surprising when the CRM giant announced in August that it had done so well in spite of all that. While the company was laying off those 1,000 people, it did indicate it would give those employees 60 days to find other positions in the company. With these new jobs, assuming they include positions the laid-off employees are qualified for, those workers could have a variety of roles from which to choose.

The company had 54,000 employees when it announced the layoffs, which accounted for 1.9% of the workforce. If it ends up adding the 12,000 new jobs in the next year, that would put the company at approximately 65,000 employees by this time next year.

SaaS Ventures takes the investment road less traveled

Most venture capital firms are based in hubs like Silicon Valley, New York City and Boston. These firms nurture those ecosystems and they’ve done well, but SaaS Ventures decided to go a different route: it went to cities like Chicago; Green Bay, Wisconsin; and Lincoln, Nebraska.

The firm looks for enterprise-focused entrepreneurs who are trying to solve a different set of problems than you might find in these other centers of capital, issues that require digital solutions but might fall outside a typical computer science graduate’s experience.

SaaS Ventures looks at four main investment areas: trucking and logistics, manufacturing, e-commerce enablement for industries that have not typically gone online, and cybersecurity, the latter being the most mainstream of the areas SaaS Ventures covers.

The company’s first fund, which launched in 2017, was worth $20 million, and SaaS Ventures launched a second fund of the same size earlier this month. It tends to stick to small-dollar investments, partnering with larger firms when it contributes funds to a deal.

We talked to Collin Gutman, founder and managing partner at SaaS Ventures, to learn about his investment philosophy, and why he decided to take the road less traveled for his investment thesis.

A different investment approach

Gutman’s journey to find enterprise startups in out-of-the-way places began in 2012 when he worked at an early enterprise startup accelerator called Acceleprise. “We were really the first ones who said enterprise tech companies are wired differently, and need a different set of early-stage resources,” Gutman told TechCrunch.

That experience led him to launch SaaS Ventures in 2017, with several key ideas underpinning the firm’s investment thesis. The first was to concentrate on the enterprise from a slightly different angle than most early-stage VC firms.

Collin Gutman, founder and managing partner at SaaS Ventures (Image Credits: SaaS Ventures)

The second part of his thesis was to concentrate on secondary markets, which meant looking beyond the popular startup ecosystem centers and investing in areas that didn’t typically get much attention. To date, SaaS Ventures has made investments in 23 states and Toronto, seeking startups that others might have overlooked.

“We have really phenomenal coverage in terms of not just geography, but in terms of what’s happening with the underlying businesses, as well as their customers,” Gutman said. He believes that broad second-tier market data gives his firm an upper hand when selecting startups to invest in. More on that later.

How Ransomware Attacks Are Threatening Our Critical Infrastructure

Threat actors are increasingly targeting critical infrastructure with ransomware, according to a number of recent independent reports. In February, a natural gas compression facility was attacked by ransomware, forcing it to shut operations for two days. Healthcare companies and research labs have been aggressively targeted since the onset of the COVID-19 pandemic. And now, a new academic project from Temple University in Philadelphia tracking ransomware attacks on critical infrastructure over the last seven years shows that 2019 and 2020 saw a sharp increase, accounting for more than half of all reported incidents over the entire period. In this post, we look at the latest data and explore how such attacks can be prevented.

What is Critical Infrastructure?

According to CISA (the Cybersecurity & Infrastructure Security Agency), “critical infrastructure” is the “assets, systems, and networks” that are vital to the functioning of the economy, public health and national security. Attacks that affect critical infrastructure risk having “debilitating effects” on the country’s ability to function.

CISA says critical infrastructure is spread over 16 sectors, namely: Chemical; Commercial Facilities; Communications; Critical Manufacturing; Dams; Defense Industrial Base; Emergency Services; Energy; Financial Services; Food and Agriculture; Government Facilities; Healthcare and Public Health; Information Technology; Nuclear Reactors, Materials, and Waste; Transportation Systems; and Water and Wastewater Systems.

That’s a considerable attack surface, made all the more vulnerable by the fact that organizations in many of those sectors are publicly funded and often lack both the budget and the expertise of large, well-resourced private enterprises. The spate of ransomware attacks since 2018 on hospitals, schools and city governments, with Atlanta, Greenville, Baltimore and Riviera Beach City Council among the more high-profile examples, makes the point.

How Frequent Are Ransomware Attacks on Critical Infrastructure?

Ransomware attacks on critical infrastructure have risen dramatically in the last two years, and all the indications are that this trend will continue as ransomware tools and RaaS offerings become increasingly available and lower the barrier to entry for cyber criminals without technical skills of their own.

Over the last seven years, the public data collated by Temple University shows that there have been almost 700 ransomware attacks on critical infrastructure; that’s an average of just under 100 per year, but in fact over half of those have occurred since 2019. The 440 attacks recorded in less than two years (with around four months of 2020 data still to be collected) presently equate to around five critical infrastructure ransomware attacks every week.
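As a quick sanity check on that weekly figure, the arithmetic works out as follows, assuming the 440 attacks span January 2019 through roughly the end of August 2020 (the remaining four months of 2020 being uncollected):

# Back-of-the-envelope check of the "around five attacks per week" figure.
# Assumes the 440 attacks span 1 January 2019 to roughly 31 August 2020.
from datetime import date

attacks_since_2019 = 440
period_weeks = (date(2020, 8, 31) - date(2019, 1, 1)).days / 7  # ~87 weeks

print(round(attacks_since_2019 / period_weeks, 1))  # prints ~5.1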

The attacks cut across all CI sectors, from food and agriculture to manufacturing, public health and even education. The defense sector has also been targeted, and so too, worryingly in one case, has the nuclear industry.

By far the largest share of ransomware attacks on critical infrastructure in recent years has targeted government-run facilities, with 199 reported attacks. Education is not far behind with 106 reports, followed by 61 reported incidents targeting Emergency Services.

Who Is Responsible for Attacks on Critical Infrastructure?

Attacks against critical infrastructure targets have become increasingly frequent with the prevalence of off-the-shelf ransomware tools like Netwalker sold on the darknet. It’s no surprise to see Maze top of the list of ransomware used in such attacks, as Maze has been on something of a rampage over the last 12 months or so, bringing with it the threat not only of encrypting data but also exfiltrating it to use as leverage against victims unwilling to pay.

It’s a tactic that’s been copied by REvil, Snatch, Netwalker, DoppelPaymer, Nemty and other ransomware operators. The general strategy is: don’t rely on your backups or technical solutions to get you out of trouble, because if you do we’ll just sell or publicise your IP and confidential data anyway.

Aside from Maze, which reportedly was used in at least 57 incidents against critical infrastructure, WannaCry’s “15 minutes of fame” led to it accounting for some 33 attacks on businesses in the 16 essential sectors, the same number as each of the more recent and still ubiquitous Ryuk and REvil/Sodinokibi ransomware strains.

Other ransomware strains reportedly involved in critical infrastructure attacks include DoppelPaymer (12), Netwalker (11), BitPaymer (8), CryptoLocker (7) and CryptoWall (5).

How Much Does A Critical Infrastructure Ransomware Attack Cost?

Unlike APTs and nation-state actors who may look for inroads into critical infrastructure for espionage or sabotage, cyber criminals using ransomware are typically interested in one thing: the financial payoff. To that end, the ransom demanded in 13 recorded cases exceeded $5 million, with another 13 recorded as between $1 million and $5 million. Some 31 ransomware incidents demanded $1 million or less, while 66 sought $50,000 or less.

As noted above, the prevalence of ransomware has increased proportionally to its availability to technically low-level, likely “first-time” cyber criminals. This is evidenced by statistics showing that 54 ransomware attacks against critical infrastructure targets demanded $1,000 or less. Possibly, these actors had taken a “shotgun” or “scattergun” approach to infect random targets and were not fully aware of the nature of the organization they had compromised. Also, some RaaS tools set a fairly low ransom limit on first-time buyers and newbies “trying out” the software to entice these actors to pay for “premium services” after getting a taste of success.


How Can We Protect Critical Infrastructure Against Ransomware?

With the nature of modern ransomware attacks now being to exfiltrate data as well as encrypt files, the key to ransomware defense is prevention; in other words, preventing the attackers from getting in where possible, and detecting and blocking them as early as possible in the threat lifecycle where not.

This requires, first and foremost, visibility into your network. What devices are connected, and what are they? Discovery and fingerprinting of devices, through both active and passive techniques, are a prerequisite for defending against intruders. It’s also important to control access, harden configurations and mitigate vulnerabilities through frequent patching. Enforcing VPN connectivity, mandatory disk encryption and port control will also reduce the attack surface for ransomware.


Email and phishing are still the main entry vector for ransomware, so a good and frequent training program with simulations is important. On top of that, ensure that even if users are compromised, they only have access to services and resources necessary for their work.

These are all good measures that should stop opportunistic attacks, but determined threat actors targeting critical infrastructure will find ways around these. That’s why a proven EDR solution that stops attacks early is essential.

Conclusion

The increase in ransomware attacks on critical infrastructure is a major concern. Once the target solely of nation-state actors that would rarely execute “noisy” attacks which could reveal their presence, businesses and organizations within the 16 sectors of critical infrastructure are now seen as prime targets for ransomware operators. Disrupting and potentially damaging vital equipment, networks, assets and services means cyber criminals have a better chance of getting a payout. With data leakage and regulatory fines also a factor, it’s vital that these attacks are stopped in their tracks. If you would like to see how the autonomous SentinelOne platform can help protect your organization against ransomware attacks, contact us today or request a free demo.



Narrator raises $6.2M for a new approach to data modelling that replaces star schema

Snowflake went public this week, and in a mark of the wider ecosystem that is evolving around data warehousing, a startup that has built a completely new concept for modelling warehoused data is announcing funding. Narrator — which uses an 11-column ordering model rather than standard star schema to organise data for modelling and analysis — has picked up a Series A round of $6.2 million, money that it plans to use to help it launch and build up users for a self-serve version of its product.

The funding is being led by Initialized Capital along with continued investment from Flybridge Capital Partners and Y Combinator — where the startup was in a 2019 cohort — as well as new investors, including Paul Buchheit.

Narrator has been around for three years, but its first phase was based around providing modelling and analytics directly to companies as a consultancy, helping companies bring together disparate, structured data sources from marketing, CRM, support desks and internal databases to work as a unified whole. As consultants, using an earlier build of the tool that it’s now launching, the company’s CEO Ahmed Elsamadisi said he and others each juggled queries “for eight big companies single-handedly,” while deep-dive analyses were done by another single person.

Having validated the approach through its consulting work, the company is now launching a self-serve version that aims to give data scientists and analysts a simplified way of ordering data so that queries, described as actionable analyses in a story-like format — or “Narratives,” as the company calls them — can be made across that data quickly (hours rather than weeks) and consistently. (You can see a demo of how it works below, provided by the company’s head of data, Brittany Davis.)

The new data-as-a-service is also priced in SaaS tiers, with a free tier for the first 5 million rows of data, and a sliding scale of pricing after that based on data rows, user numbers and Narratives in use.

Elsamadisi, who co-founded the startup with Matt Star, Cedric Dussud and Michael Nason, said that data analysts have long lived with the problems of star schema modelling (and, by extension, the related snowflake schema format), which he summed up as “layers of dependencies, lack of source of truth, numbers not matching and endless maintenance.”

“At its core, when you have lots of tables built from lots of complex SQL, you end up with a growing house of cards requiring the need to constantly hire more people to help make sure it doesn’t collapse.”

(We)Work Experience

It was while he was working as lead data scientist at WeWork — yes, he told me, maybe it wasn’t actually a tech company, but it had “tech at its core” — that he had a breakthrough moment of realising how to restructure data to get around these issues.

Before that, things were tough on the data front. WeWork had 700 tables that his team was managing using a star schema approach, covering 85 systems and 13,000 objects. The data ranged from information on acquiring buildings, to the flow of customers through those buildings, to how things changed and customers churned, to marketing and activity on social networks, and so on, all growing in line with the company’s own rapidly scaling empire. All of that meant a mess at the data end.

“Data analysts wouldn’t be able to do their jobs,” he said. “It turns out we could barely even answer basic questions about sales numbers. Nothing matched up, and everything took too long.”

The team had 45 people on it, but even so it ended up having to implement a hierarchy for answering questions, as there were so many and not enough time to dig through and answer them all. “And we had every data tool there was,” he added. “My team hated everything they did.”

The single-table column model that Narrator uses, he said, “had been theorised” in the past but hadn’t been figured out.

The spark, he said, was to think of data structured in the same way that we ask questions, where — as he described it — each piece of data can be bridged together and then also used to answer multiple questions.

“The main difference is we’re using a time-series table to replace all your data modelling,” Elsamadisi explained. “This is not a new idea, but it was always considered impossible. In short, we tackle the same problem as most data companies to make it easier to get the data you want but we are the only company that solves it by innovating on the lowest-level data modelling approach. Honestly, that is why our solution works so well. We rebuilt the foundation of data instead of trying to make a faulty foundation better.”

Narrator calls the composite table, which includes all of your data reformatted to fit in its 11-column structure, the Activity Stream.
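To make the idea concrete, here is a small, hypothetical sketch of reshaping events from two source systems into a single time-series activity table. The column names are illustrative stand-ins, not Narrator’s actual 11-column schema.

# Hypothetical illustration of the activity-stream idea: disparate source
# tables are reshaped into one shared time-series table. Column names below
# are stand-ins, not Narrator's actual schema.
import pandas as pd

# Two source systems with different shapes (toy data).
orders = pd.DataFrame({
    "order_id": [101, 102],
    "customer_email": ["a@example.com", "b@example.com"],
    "ordered_at": ["2020-09-01 10:00", "2020-09-02 14:30"],
    "total": [120.0, 45.0],
})
tickets = pd.DataFrame({
    "ticket_id": [9001],
    "requester": ["a@example.com"],
    "opened_at": ["2020-09-03 09:15"],
    "subject": ["Where is my order?"],
})

# Map each source into the same narrow, time-series shape.
order_activities = pd.DataFrame({
    "ts": pd.to_datetime(orders["ordered_at"]),
    "customer": orders["customer_email"],
    "activity": "completed_order",
    "feature_1": orders["order_id"].astype(str),
    "revenue_impact": orders["total"],
})
ticket_activities = pd.DataFrame({
    "ts": pd.to_datetime(tickets["opened_at"]),
    "customer": tickets["requester"],
    "activity": "opened_ticket",
    "feature_1": tickets["subject"],
    "revenue_impact": float("nan"),
})

# The single "activity stream": every event, one row, one shared schema.
activity_stream = (
    pd.concat([order_activities, ticket_activities], ignore_index=True)
    .sort_values("ts")
    .reset_index(drop=True)
)
print(activity_stream)

Queries and “Narratives” then run against this one table, rather than against a web of joined fact and dimension tables.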

Elsamadisi said using Narrator for the first time takes about 30 minutes, and about a month to learn to use it thoroughly. “But you’re not going back to SQL after that, it’s so much faster,” he added.

Narrator’s initial market has been providing services to other tech companies, and specifically startups, but the plan is to open it up to a much wider set of verticals. And in a move that might help with that, longer term, it also plans to open source some of its core components so that third parties can build data products on top of the framework more quickly.

As for competitors, he says that it’s essentially the tools that he and other data scientists have always used, although “we’re going against a ‘best practice’ approach (star schema), not a company.” Airflow, DBT, Looker’s LookML, Chartio’s Visual SQL, Tableau Prep are all ways to create and enable the use of a traditional star schema, he added. “We’re similar to these companies — trying to make it as easy and efficient as possible to generate the tables you need for BI, reporting and analysis — but those companies are limited by the traditional star schema approach.”

So far the proof has been in the data. Narrator says that companies average around 20 transformations (the unit used to answer questions) compared to hundreds in a star schema, and that those transformations average 22 lines compared to 1,000+ lines in traditional modelling. For those that learn how to use it, the average time for generating a report or running some analysis is four minutes, compared to weeks in traditional data modelling. 

“Narrator has the potential to set a new standard in data,” said Jen Wolf, Initialized Capital COO and partner and new Narrator board member, in a statement. “We were amazed to see the quality and speed with which Narrator delivered analyses using their product. We’re confident once the world experiences Narrator this will be how data analysis is taught moving forward.”