Near acquires the location data company formerly known as UberMedia

Data intelligence company Near is announcing the acquisition of another company in the data business — UM.

In some ways, this echoes Near’s acquisition of Teemo last fall. Just as that deal helped Singapore-headquartered Near expand into Europe (with Teemo founder and CEO Benoit Grouchko becoming Near’s chief privacy officer), CEO Anil Mathews said that this new acquisition will help Near build a presence in the United States, turning the company into “a truly global organization,” while also tailoring its product to offer “local flavors” in each country.

The addition of UM’s 60-person team brings Near’s total headcount to around 200, with UM CEO Gladys Kong becoming CEO of Near North America.

At the same time, Mathews suggested that this deal isn’t simply about geography, because the data offered by Near and UM are “very complementary,” allowing both teams to upsell current customers on new offerings. He described Near’s mission as “merging two diverse worlds, the online world and the offline world,” essentially creating a unified profile of consumers for marketers and other businesses. Apparently, UM is particularly strong on the offline side, thanks to its focus on location data.

Near CEO Anil Mathews and UM CEO Gladys Kong. Image Credits: Near

“UM has a very strong understanding of places, they’ve mastered their understanding of footfalls and dwell times,” Mathews added. “As a result, most of the use cases where UM is seeing growth — in tourism, retail, real estate — are in industries struggling due to the pandemic, where they’re using data to figure out, ‘How do we come out of the pandemic?’ ”

TechCrunch readers may be more familiar with UM under its old name, UberMedia, which created social apps like Echofon and UberSocial before pivoting its business to ad attribution and location data. Kong said that contrary to her fears, the company had “an amazing 2020” as businesses realized they needed UM’s data (its customers include RAND Corporation, Hawaii Tourism Authority, Columbia University and Yale University).

And the year was capped by connecting with Near and realizing that the two companies have “a lot of synergies.” In fact, Kong recalled that UM’s rebranding last month was partly at Mathews’ suggestion: “He said, ‘Why do you have media in your name when you don’t do media?’ And we realized that’s probably how the world saw us, so we decided to change [our name] to make it clear what we do.”

Founded in 2010, UM raised a total of $34.6 million in funding, according to Crunchbase. The financial terms of the acquisition were not disclosed.

RapidDeploy raises $29M for a cloud-based dispatch platform aimed at 911 centers

The last year of pandemic living has been real-world, and sometimes harrowing, proof of how important it can be to have efficient and well-equipped emergency response services in place. They can help people remotely if need be, and when they cannot, they make sure that in-person help can be dispatched quickly in medical and other situations. Today, a company that’s building cloud-based tools to help with this process is announcing a round of funding as it continues to grow.

RapidDeploy, which provides computer-aided dispatch technology as a cloud-based service for 911 centers, has closed a $29 million Series B round of funding that will be used both to grow its business and to continue expanding the SaaS tools it provides to its customers. In the startup’s view, the cloud is essential to running emergency response in the most efficient manner.

“911 response would have been called out on a walkie-talkie in the early days,” said Steve Raucher, the co-founder and CEO of RapidDeploy, in an interview. “Now the cloud has become the nexus of signals.”

Washington, DC-based RapidDeploy provides data and analytics to 911 centers — the critical link between people calling for help and connecting those calls with the nearest medical, police or fire assistance — and today it has about 700 customers using its RadiusPlus, Eclipse Analytics and Nimbus CAD products.

That works out to about 10% of all 911 centers in the U.S. (roughly 7,000 in total), covering 35% of the population (there are more centers in cities and other dense areas). Its footprint includes statewide coverage in Arizona, California and Kansas. It also has operations in South Africa, where it was originally founded.

The funding is coming from an interesting mix of financial and strategic investors. Led by Morpheus Ventures, the round also had participation from GreatPoint Ventures, Ericsson Ventures, Samsung Next Ventures, Tao Capital Partners and Tau Ventures, among others. It looks like the company had raised about $30 million before this latest round, according to PitchBook data. Valuation is not being disclosed.

Ericsson and Samsung, as major players in the communications industry, have a big stake in shaping the next generation of communications technology and how it is used for critical services. (And indeed, one of the big leaders in legacy and current 911 communications is Motorola, a would-be competitor of both.) AT&T is also a strategic go-to-market (distribution and sales) partner of RapidDeploy’s, which also has integrations with Apple, Google, Microsoft and OnStar to feed data into its system.

The business of emergency response technology is a fragmented market. Raucher describes these operations as “mom-and-pop” businesses, some 80% of which run four seats or fewer (a testament to the fact that much of the U.S. is significantly less urban than its outsized cities might suggest), and many of them are running on legacy equipment.

However, in the U.S. in the last several years — buoyed by innovations like the Jedi project and FirstNet, a next-generation public safety network — things have been shifting. RapidDeploy’s technology sits alongside (and in some areas competes with) companies like Carbyne and RapidSOS, which have been tapping into the innovations of cell phone technology both to help pinpoint people and improve how to help them.

RapidDeploy’s tech is based around its RadiusPlus mapping platform, which takes data from smartphones, vehicles, home security systems and other connected devices and channels it into its data stream, helping a center determine not just the caller’s location but potentially other aspects of their condition. Its Eclipse Analytics services, meanwhile, act as a kind of assistant to those centers, helping to triage situations and providing insights into how to respond. Nimbus CAD then helps determine whom to dispatch and how to route the response.
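As a rough sketch of how a pipeline like this fits together (the three product names come from the article; every function, field and threshold below is a hypothetical illustration, not RapidDeploy’s actual API):

```python
from dataclasses import dataclass, field

@dataclass
class CallerSignal:
    """A normalized signal ingested by a RadiusPlus-style mapping layer."""
    source: str            # e.g. "smartphone", "vehicle", "home_security"
    lat: float
    lon: float
    extras: dict = field(default_factory=dict)  # battery level, crash flags, etc.

def triage(signals: list[CallerSignal]) -> str:
    """Eclipse-style analytics step (hypothetical): derive a call priority."""
    if any(s.extras.get("crash_detected") for s in signals):
        return "P1"   # highest priority: likely vehicle crash
    return "P2"

def dispatch(priority: str) -> str:
    """Nimbus-CAD-style step (hypothetical): pick a responder type."""
    return {"P1": "ems+fire", "P2": "nearest_unit"}[priority]

signals = [CallerSignal("vehicle", 38.9, -77.04, {"crash_detected": True})]
print(dispatch(triage(signals)))  # ems+fire
```

The point of the sketch is the flow: heterogeneous device data is normalized once at the mapping layer, then reused by both the analytics and the dispatch steps.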

Longer term, the plan will be to leverage cloud architecture to bring in new data sources and ways of communicating between callers, centers and emergency care providers.

“It’s about being more of a triage service rather than a message switch,” Raucher said. “As we see it, the platform will evolve with customers’ needs. Tactical mapping ultimately is not big enough to cover this. We’re thinking about unified communications.” Indeed, that is the direction that many of these services seem to be going, which can only be a good thing for us consumers.

“The future of emergency services is in data, which creates a faster, more responsive 9-1-1 center,” said Mark Dyne, founding partner at Morpheus Ventures, in a statement. “We believe that the platform RapidDeploy has built provides the necessary breadth of capabilities that make the dream of Next-Gen 9-1-1 service a reality for rural and metropolitan communities across the nation and are excited to be investing in this future with Steve and his team.” Dyne has joined the RapidDeploy board with this round.

Vectra AI picks up $130M at a $1.2B valuation for its network approach to threat detection and response

Cybersecurity nightmares like the SolarWinds hack highlight how malicious hackers continue to exploit vulnerabilities in software and apps to do their dirty work. Today a startup that’s built a platform to help organizations protect themselves from this by running threat detection and response at the network level is announcing a big round of funding to continue its growth.

Vectra AI, which provides a cloud-based service that uses artificial intelligence technology to monitor both on-premise and cloud-based networks for intrusions, has closed a round of $130 million at a post-money valuation of $1.2 billion.

The challenge that Vectra is looking to address is that applications — and the people who use them — will continue to be weak links in a company’s security setup, not least because malicious hackers are continually finding new ways to piece together small movements within them to build, lay and finally spring their traps. While there will continue to be an interesting, and mostly effective, game of cat and mouse around those applications, a service that works at the network layer is essential as an alternative line of defense, one that can find those traps before they are sprung.

“Think about where the cloud is. We are in the wild west,” Hitesh Sheth, Vectra’s CEO, said in an interview. “The attack surface is so broad and attacks happen at such a rapid rate that the security concerns have never been higher at the enterprise. That is driving a lot of what we are doing.”

Sheth said that the funding will be used in two areas. First, to continue expanding its technology to meet the demands of an ever-growing threat landscape — it also has a team of researchers who work across the business to detect new activity and build algorithms to respond to it. And second, for acquisitions to bring in new technology and potentially more customers.

(Indeed, there has been a proliferation of AI-based cybersecurity startups in recent years, in areas like digital forensics, application security and specific sectors like SMBs, all of which complement the platform that Vectra has built, so you could imagine a number of interesting targets.)

The funding is being led by funds managed by Blackstone Growth, with unnamed existing investors participating (past backers include Accel, Khosla and TCV, among other financial and strategic investors). Vectra today largely focuses on enterprises, highly demanding ones with a lot to lose. Blackstone was initially a customer of Vectra’s, using the company’s flagship Cognito platform, as Viral Patel, the senior managing director who led the investment for the firm, pointed out to me.

The company has built some specific products that have been prescient in anticipating vulnerabilities in particular applications and services. While sales of its Cognito platform grew 100% last year, sales of Cognito Detect for Microsoft Office 365 (a separate product) grew over 700%. Notably, Microsoft’s cloud apps have faced a wave of malicious threats. Sheth said that implementing Cognito (or indeed other network security protection) “could have prevented the SolarWinds hack” for those using it.

“Through our experience as a client of Vectra, we’ve been highly impressed by their world-class technology and exceptional team,” John Stecher, CTO at Blackstone, said in a statement. “They have exactly the types of tools that technology leaders need to separate the signal from the noise in defending their organizations from increasingly sophisticated cyber threats. We’re excited to back Vectra and Hitesh as a strategic partner in the years ahead supporting their continued growth.”

Looking ahead, Sheth said that endpoint security will not be a focus for the moment because “in cloud there is so much open territory.” Instead, it partners with the likes of CrowdStrike, SentinelOne, Carbon Black and others.

In terms of what is emerging as a stronger entry point, social media is increasingly coming to the fore, he said. “Social media tends to be an effective vector to get in and will remain to be for some time,” he said, with people impersonating others and suggesting conversations over encrypted services like WhatsApp. “The moment you move to encryption and exchange any documents, it’s game over.”

Wasabi scores $112M Series C on $700M valuation to take on cloud storage hyperscalers

Taking on Amazon S3 in the cloud storage game would seem to be a foolhardy proposition, but Wasabi has found a way to build storage cheaply and pass the savings on to customers. Today the Boston-based startup announced a $112 million Series C investment on a $700 million valuation.

Fidelity Management & Research Company led the round, with participation from previous investors. The company reports that it has now raised $219 million in equity, along with additional debt financing; it takes a lot of money to build a storage business.

CEO David Friend says that business is booming and he needed the money to keep it going. “The business has just been exploding. We achieved a roughly $700 million valuation on this round, so you can imagine that business is doing well. We’ve tripled in each of the last three years and we’re ahead of plan for this year,” Friend told me.

He says that demand continues to grow and he’s been getting requests internationally. That was one of the primary reasons he went looking for more capital. What’s more, data sovereignty laws require that certain types of sensitive data, like financial and healthcare records, be stored in-country, so the company needs to build more capacity where it’s needed.

He says the company has nailed down the process of building storage, typically inside co-location facilities, and during the pandemic it actually became more efficient by hiring a firm to assemble the hardware onsite. Wasabi also puts channel partners like managed service providers (MSPs) and value-added resellers (VARs) to work, incentivizing them to sell Wasabi to their customers.

Wasabi storage starts at $5.99 per terabyte per month. That’s a heck of a lot cheaper than Amazon S3, which starts at $0.023 per gigabyte for the first 50 terabytes, or $23.00 a terabyte, considerably more than Wasabi’s offering.
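For a rough sense of the gap, here is the arithmetic as a sketch. It assumes S3 Standard’s published first-tier rate of $0.023 per GB-month and Wasabi’s advertised $5.99 per TB-month, and it ignores volume tiers, egress and per-request fees (which Wasabi says it doesn’t charge):

```python
def s3_standard_monthly(tb: float) -> float:
    # S3 Standard first tier (assumed): $0.023 per GB-month
    return round(tb * 1000 * 0.023, 2)

def wasabi_monthly(tb: float) -> float:
    # Wasabi's advertised flat rate: $5.99 per TB-month
    return round(tb * 5.99, 2)

# Storing 100 TB for one month:
print(s3_standard_monthly(100))  # 2300.0
print(wasabi_monthly(100))       # 599.0
```

At a single terabyte, the same arithmetic yields the $23.00 versus $5.99 figures quoted above.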

But Friend admits that Wasabi still faces headwinds as a startup. No matter how cheap it is, companies want to be sure it’s going to be there for the long haul, and a round this size from an investor with the pedigree of Fidelity will give the company more credibility with large enterprise buyers, without the demands a venture capital firm might bring.

“Fidelity to me was the ideal investor. […] They don’t want a board seat. They don’t want to come in and tell us how to run the company. They are obviously looking toward an IPO or something like that, and they are just interested in being an investor in this business because cloud storage is a virtually unlimited market opportunity,” he said.

He sees his company as a classic market irritant: it has run away from competitors in its part of the market, and the hyperscalers aren’t paying attention because its business remains a fraction of theirs for the time being. While an IPO is far off, he took on an institutional investor this early because he believes one is eventually possible.

“I think this is a big enough market we’re in, and we were lucky to get in at just the right time with the right kind of technology. There’s no doubt in my mind that Wasabi could grow to be a fairly substantial public company doing cloud infrastructure. I think we have a nice niche cut out for ourselves, and I don’t see any reason why we can’t continue to grow,” he said.

IBM is acquiring cloud app and network management firm Turbonomic for up to $2B

IBM today made another acquisition to deepen its reach into providing enterprises with AI-based services to manage their networks and workloads. It announced that it is acquiring Turbonomic, a company that provides tools to manage application performance (specifically resource management), along with Kubernetes and network performance — part of its bigger strategy to bring more AI into IT ops, or as it calls it, AIOps.

Financial terms of the deal were not disclosed, but according to data in PitchBook, Turbonomic was valued at nearly $1 billion — $963 million, to be exact — in its last funding round in September 2019. A Reuters report earlier today on the rumored deal valued it at between $1.5 billion and $2 billion, and a source tells us that figure is accurate.

The Boston-based company’s investors included General Atlantic, Cisco, Bain, Highland Capital Partners, and Red Hat. The last of these, of course, is now a part of IBM (so it was theoretically also an investor), and together Red Hat and IBM have been developing a range of cloud-based tools addressing telco, edge and enterprise use cases.

This latest deal will extend that work further, into an area where IBM has generally been aggressive of late. Last November IBM acquired another company, Instana, to bring application performance management into its stable, and it said today that the Turbonomic deal will complement that acquisition, with the two companies’ tools to be integrated.

Turbonomic’s tools are particularly useful in hybrid cloud architectures, which involve not just on-premise and cloud workloads but workloads extended across multiple cloud environments. Companies may adopt this architecture for resilience, cost, location or other practical reasons, but the fact of the matter is that it can be a challenge to manage. Turbonomic’s tools automate that management, analyze performance and suggest changes for network operations engineers to make to meet usage demands.
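The kind of recommendation such a tool surfaces can be sketched roughly as follows (the utilization band and doubling rule here are illustrative assumptions, not Turbonomic’s actual algorithm):

```python
def rightsize(cpu_util: float, limit_cores: int) -> str:
    """Suggest a resize when sustained CPU utilization drifts outside
    a target band; the 80%/20% band below is an assumed example."""
    if cpu_util > 0.80:
        return f"scale up: {limit_cores} -> {limit_cores * 2} cores"
    if cpu_util < 0.20 and limit_cores > 1:
        return f"scale down: {limit_cores} -> {limit_cores // 2} cores"
    return "no change"

print(rightsize(0.92, 4))  # scale up: 4 -> 8 cores
print(rightsize(0.05, 4))  # scale down: 4 -> 2 cores
print(rightsize(0.50, 4))  # no change
```

The “prescribes actions, but allows customers to take them” framing in Nye’s quote maps to emitting suggestions like these for an operator (or an automation policy) to approve.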

“Businesses are looking for AI-driven software to help them manage the scale and complexity challenges of running applications cross-cloud,” said Ben Nye, CEO, Turbonomic, in a statement. “Turbonomic not only prescribes actions, but allows customers to take them. The combination of IBM and Turbonomic will continuously assure target application response times even during peak demand.”

The bigger picture for IBM is that it’s another sign of how the company is continuing to move away from its legacy business based around servers and deeper into services, and specifically services on the infrastructure of the future, cloud-based networks.

“IBM continues to reshape its future as a hybrid cloud and AI company,” said Rob Thomas, SVP, IBM Cloud and Data Platform, in a statement. “The Turbonomic acquisition is yet another example of our commitment to making the most impactful investments to advance this strategy and ensure customers find the most innovative ways to fuel their digital transformations.”

A large part of the AI promise in the world of network operations and IT ops is how it will allow companies to rely more on automation, another area where IBM has been very active. (In a very different application of this technology — in business services — the company this month acquired MyInvenio in Italy to bring process mining technology in house.)

The promise of automation, meanwhile, is lower operation costs, a critical issue for managing network performance and availability in hybrid cloud deployments.

“We believe that AI-powered automation has become inevitable, helping to make all information-centric jobs more productive,” said Dinesh Nirmal, General Manager, IBM Automation, in a statement. “That’s why IBM continues to invest in providing our customers with a one-stop shop of AI-powered automation capabilities that spans business processes and IT. The addition of Turbonomic now takes our portfolio another major step forward by ensuring customers will have full visibility into what is going on throughout their hybrid cloud infrastructure, and across their entire enterprise.”

Healthcare is the next wave of data liberation

Why can we see all our bank, credit card and brokerage data on our phones instantaneously in one app, yet walk into a doctor’s office blind to our healthcare records, diagnoses and prescriptions? Our health status should be as accessible as our checking account balance.

The liberation of financial data enabled by startups like Plaid is beginning to happen with healthcare data, which will have an even more profound impact on society; it will save and extend lives. This accessibility is quickly approaching.

Having invested early in Quovo and PatientPing, two pioneering companies in financial and healthcare data, respectively, we believe the winners of the healthcare data transformation will look different than they did with financial data, even as we head toward a similar end state.

For over a decade, government agencies and consumers have pushed for this liberation.

In 2009, the Health Information Technology for Economic and Clinical Health Act (HITECH) gave the first big industry push, catalyzing a wave of digitization through electronic health records (EHR). Today, over 98% of medical records are digitized. This market is dominated by multibillion-dollar vendors like Epic, Cerner and Allscripts, which control 70% of patient records. However, these giant vendors have yet to make these records easily accessible.

A second wave of regulation has begun to address the problem of trapped data to make EHRs more interoperable and valuable. Agencies within the Department of Health and Human Services have mandated data sharing among payers and providers using a common standard, the Fast Healthcare Interoperability Resources (FHIR) protocol.
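FHIR itself is a REST-plus-JSON standard: a conforming server exposes typed resources (Patient, Observation, Claim and so on) at predictable URLs. A minimal sketch of a search query and a returned Patient resource (the server base URL below is a hypothetical placeholder):

```python
import json
from urllib.parse import urlencode

FHIR_BASE = "https://fhir.example.org/r4"  # hypothetical FHIR R4 server

def patient_search_url(base: str, family: str, birthdate: str) -> str:
    # FHIR defines RESTful search as GET [base]/[type]?[param]=[value]
    return f"{base}/Patient?" + urlencode({"family": family, "birthdate": birthdate})

url = patient_search_url(FHIR_BASE, "Chalmers", "1974-12-25")
# https://fhir.example.org/r4/Patient?family=Chalmers&birthdate=1974-12-25

# Every FHIR resource is self-describing JSON with a resourceType field.
patient = json.loads("""{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter"]}],
  "birthDate": "1974-12-25"
}""")
print(patient["name"][0]["family"])  # Chalmers
```

A real deployment layers OAuth scopes and patient consent on top; the point is that the resource shapes and query grammar are standardized, which is what makes cross-vendor access tractable.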

Image Credits: F-Prime Capital

This push for greater data liquidity coincides with demand from consumers for better information about cost and quality. Employers have been steadily shifting a greater share of healthcare expenses to consumers through high-deductible health plans – from 30% in 2012 to 51% in 2018. As consumers pay for more of the costs, they care more about the value of different health options, yet are unable to make those decisions without real-time access to cost and clinical data.

Image Credits: F-Prime Capital

Tech startups have an opportunity to ease the transmission of healthcare data and address the push of regulation and consumer demands. The lessons from fintech make it tempting to assume that a Plaid for healthcare data would be enough to address all of the challenges within healthcare, but it is not the right model. Plaid’s aggregator model benefited from a relatively high concentration of banks, a limited number of data types and low barriers to data access.

By contrast, healthcare data is scattered across tens of thousands of healthcare providers, stored in multiple data formats and systems per provider, and is rarely accessed by patients directly. Many people log into their bank apps frequently, but few log into their healthcare provider portals, if they even know one exists.

HIPAA regulations and strict patient consent requirements also meaningfully increase friction to data access and sharing. Financial data serves mostly one-to-one use cases, while healthcare data is a many-to-many problem. A single patient’s data is spread across many doctors and facilities and is needed by just as many for care coordination.

Because of this landscape, winning healthcare technology companies will need to build around four propositions:

Experian API Exposed Credit Scores of Most Americans

Big-three consumer credit bureau Experian just fixed a weakness with a partner website that let anyone look up the credit score of tens of millions of Americans just by supplying their name and mailing address, KrebsOnSecurity has learned. Experian says it has plugged the data leak, but the researcher who reported the finding says he fears the same weakness may be present at countless other lending websites that work with the credit bureau.

Bill Demirkapi, an independent security researcher who’s currently a sophomore at the Rochester Institute of Technology, said he discovered the data exposure while shopping around for student loan vendors online.

Demirkapi encountered one lender’s site that offered to check his loan eligibility by entering his name, address and date of birth. Peering at the code behind this lookup page, he was able to see it invoked an Experian Application Programming Interface or API — a capability that allows lenders to automate queries for FICO credit scores from the credit bureau.

“No one should be able to perform an Experian credit check with only publicly available information,” Demirkapi said. “Experian should mandate non-public information for promotional inquiries, otherwise an attacker who found a single vulnerability in a vendor could easily abuse Experian’s system.”

Demirkapi found the Experian API could be accessed directly without any sort of authentication, and that entering all zeros in the “date of birth” field let him then pull a person’s credit score. He even built a handy command-line tool to automate the lookups, which he dubbed “Bill’s Cool Credit Score Lookup Utility.”
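The class of bug Demirkapi describes can be illustrated with a toy reconstruction (entirely hypothetical; Experian’s actual code is not public): if a backend treats an all-zeros date of birth as “not supplied” and waves the request through instead of rejecting it, publicly known name and address become sufficient to pull a score. A strict check fails closed:

```python
from datetime import date

def dob_check_lenient(on_file: date, y: int, m: int, d: int) -> bool:
    """Hypothetical flawed check: all zeros is treated as 'no DOB given'
    and allowed through, mirroring the reported bypass behavior."""
    if (y, m, d) == (0, 0, 0):
        return True
    return on_file == date(y, m, d)

def dob_check_strict(on_file: date, y: int, m: int, d: int) -> bool:
    """Fail closed: anything that isn't a real calendar date is rejected."""
    try:
        return on_file == date(y, m, d)  # date() raises ValueError on zeros
    except ValueError:
        return False

print(dob_check_lenient(date(1990, 5, 17), 0, 0, 0))  # True  (the hole)
print(dob_check_strict(date(1990, 5, 17), 0, 0, 0))   # False (rejected)
```

This is also why Demirkapi argues for mandating non-public information: any check built only from name, address and an optional DOB degrades to nothing once the optional field can be zeroed out.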

Demirkapi’s Experian credit score lookup tool.

KrebsOnSecurity put that tool to the test, asking permission from a friend to have Demirkapi look up their credit score. The friend agreed and said he would pull his score from Experian (at this point I hadn’t told him that Experian was involved). The score he provided matched the score returned by Demirkapi’s lookup tool.

In addition to credit scores, the Experian API returns for each consumer up to four “risk factors,” indicators that might help explain why a person’s score is not higher.

For example, in my friend’s case Bill’s tool said his mid-700s score could be better if the proportion of balances to credit limits was lower, and if he didn’t owe so much on revolving credit accounts.

“Too many consumer finance company accounts,” the API concluded about my friend’s score.

The reason I could not test Demirkapi’s findings on my own credit score is that we have a security freeze on our files at the three major consumer credit reporting bureaus, and a freeze blocks this particular API from pulling the information.

Demirkapi declined to share with Experian the name of the lender or the website where the API was exposed. He refused because he said he suspects there may be hundreds or even thousands of companies using the same API, and that many of those lenders could be similarly leaking access to Experian’s consumer data.

“If we let them know about the specific endpoint, they can just ban/work with the loan vendor to block these requests on this one case, which doesn’t fix the systemic problem,” he explained.

Nevertheless, after being contacted by this reporter, Experian figured out on its own which lender was exposing the API; Demirkapi said that vendor’s site now indicates API access has been disabled.

“We have been able to confirm a single instance of where this situation has occurred and have taken steps to alert our partner and resolve the matter,” Experian said in a written statement. “While the situation did not implicate or compromise any of Experian’s systems, we take this matter very seriously. Data security has always been, and always will be, our highest priority.”

Demirkapi said he’s disappointed that Experian did exactly what he feared they would do.

“They found one endpoint I was using and sent it into maintenance mode,” he said. “But this doesn’t address the systemic issue at all.”

Leaky and poorly secured APIs like the one Demirkapi found are a source of much mischief in the hands of identity thieves. Earlier this month, auto insurance giant Geico disclosed that fraudsters abused a bug in its site to steal driver’s license numbers from Americans.

Geico said the data was used by thieves involved in fraudulently applying for unemployment insurance benefits. Many states now require driver’s license numbers as a way of verifying an applicant’s identity.

In 2013, KrebsOnSecurity broke the news about an identity theft service in the underground that programmatically pulled sensitive consumer credit data directly from a subsidiary of Experian. That service was run by a Vietnamese hacker who’d told the Experian subsidiary he was a private investigator. The U.S. Secret Service later said the ID theft service “caused more material financial harm to more Americans than any other.”

Additional reading: Experian’s Credit Freeze Security is Still a Joke (Apr. 27, 2021)

Task Force Seeks to Disrupt Ransomware Payments

Some of the world’s top tech firms are backing a new industry task force focused on disrupting cybercriminal ransomware gangs by limiting their ability to get paid, and targeting the individuals and finances of the organized thieves behind these crimes.

In an 81-page report delivered to the Biden administration this week, top executives from Amazon, Cisco, FireEye, McAfee, Microsoft and dozens of other firms joined the U.S. Department of Justice (DOJ), Europol and the U.K. National Crime Agency in calling for an international coalition to combat ransomware criminals, and for a global network of ransomware investigation hubs.

The Ransomware Task Force urged the White House to make finding, frustrating and apprehending ransomware crooks a priority within the U.S. intelligence community, and to designate the current scourge of digital extortion as a national security threat.

The Wall Street Journal recently broke the news that the DOJ was forming its own task force to deal with the “root causes” of ransomware. An internal DOJ memo reportedly “calls for developing a strategy that targets the entire criminal ecosystem around ransomware, including prosecutions, disruptions of ongoing attacks and curbs on services that support the attacks, such as online forums that advertise the sale of ransomware or hosting services that facilitate ransomware campaigns.”

According to security firm Emsisoft, almost 2,400 U.S.-based governments, healthcare facilities and schools were victims of ransomware in 2020.

“The costs of ransomware go far beyond the ransom payments themselves,” the task force report observes. “Cybercrime is typically seen as a white-collar crime, but while ransomware is profit-driven and ‘non-violent’ in the traditional sense, that has not stopped ransomware attackers from routinely imperiling lives.”

A proposed framework for a public-private operational ransomware campaign. Image: IST.

It is difficult to gauge the true cost and size of the ransomware problem because many victims never come forward to report the crimes. As such, a number of the task force’s recommendations focus on ways to encourage more victims to report the crimes to their national authorities, such as requiring victims and incident response firms who pay a ransomware demand to report the matter to law enforcement and possibly regulators at the U.S. Treasury Department.

Last year, Treasury issued a controversial memo warning that ransomware victims who end up sending digital payments to people already sanctioned by the U.S. government for money laundering and other illegal activities could face hefty fines.

Philip Reiner, CEO of the Institute for Security and Technology and executive director of the industry task force, said the reporting recommendations are one of several areas where federal agencies will likely need to dedicate more employees. For example, he said, expecting victims to clear ransomware payments with the Treasury Department first assumes the agency has the staff to respond in any kind of timeframe that might be useful for a victim undergoing a ransomware attack.

“That’s why we were so dead set on putting forward a comprehensive framework,” Reiner said. “That way, the Department of Homeland Security can do what they need to do, the State Department and Treasury get involved, and it all needs to be synchronized for going after the bad guys with the same alacrity.”

Some have argued that making it illegal to pay a ransom is one way to decrease the number of victims who acquiesce to their tormentors’ demands. But the task force report says we’re nowhere near ready for that yet.

“Ransomware attackers require little risk or effort to launch attacks, so a prohibition on ransom payments would not necessarily lead them to move into other areas,” the report observes. “Rather, they would likely continue to mount attacks and test the resolve of both victim organizations and their regulatory authorities. To apply additional pressure, they would target organizations considered more essential to society, such as healthcare providers, local governments, and other custodians of critical infrastructure.”

“As such, any intent to prohibit payments must first consider how to build organizational cybersecurity maturity, and how to provide an appropriate backstop to enable organizations to weather the initial period of extreme testing,” the authors concluded in the report. “Ideally, such an approach would also be coordinated internationally to avoid giving ransomware attackers other avenues to pursue.”

The task force’s report comes as federal agencies have been under increased pressure to respond to a series of ransomware attacks that were mass-deployed as attackers began exploiting four zero-day vulnerabilities in Microsoft Exchange Server email products to install malicious backdoors. Earlier this month, the DOJ announced the FBI had conducted a first-of-its-kind operation to remove those backdoors from hundreds of Exchange servers at state and local government facilities.

Many of the recommendations in the Ransomware Task Force report are what you might expect, such as encouraging voluntary information sharing on ransomware attacks; launching public awareness campaigns on ransomware threats; exerting pressure on countries that operate as safe havens for ransomware operators; and incentivizing the adoption of security best practices through tax breaks.

A few of the more interesting recommendations (at least to me) included:

- Limit legal liability for ISPs that act in good faith trying to help clients secure their systems.

- Create a federal “cyber response and recovery fund” to help state and local governments or critical infrastructure companies respond to ransomware attacks.

- Require cryptocurrency exchanges to follow the same “know your customer” (KYC) and anti-money laundering rules as financial institutions, and aggressively target exchanges that do not.

- Have insurance companies measure and assert their aggregated ransomware losses and establish a common “war chest” subrogation fund “to evaluate and pursue strategies aimed at restitution, recovery, or civil asset seizures, on behalf of victims and in conjunction with law enforcement efforts.”

- Centralize expertise in cryptocurrency seizure, and scale criminal seizure processes.

- Create a standard format for reporting ransomware incidents.

- Establish a ransomware incident response network.
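
A standard reporting format, as the task force recommends, might look like a structured record along these lines. This is a hedged sketch only; the field names below are illustrative assumptions, not a schema from the report:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RansomwareIncidentReport:
    """Illustrative fields only; the task force report does not specify a schema."""
    victim_sector: str
    ransomware_family: str
    initial_access_vector: str
    ransom_demanded_usd: int
    ransom_paid: bool
    payment_address: str  # cryptocurrency address, if a payment was made
    reported_to_law_enforcement: bool

report = RansomwareIncidentReport(
    victim_sector="healthcare",
    ransomware_family="Conti",
    initial_access_vector="phishing",
    ransom_demanded_usd=500_000,
    ransom_paid=False,
    payment_address="",
    reported_to_law_enforcement=True,
)

# A shared machine-readable format would let agencies, insurers and
# incident responders aggregate reports instead of comparing free text.
print(json.dumps(asdict(report), indent=2))
```

The point of standardization is less about any particular field set and more about making incidents comparable across victims, insurers and law enforcement.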

Enterprise Environments, Exposed Endpoints and Operating Systems – The Hunt For The Right Security Solution

As security professionals, one of our primary challenges and responsibilities is understanding how to protect, detect, and respond to cyber attacks across all the operating systems within our enterprise environment. Most organizations today run mixed fleets of Windows, macOS, and Linux. The operational reality is that, in many cases, a sizable portion of these endpoints will not be on the latest release of their respective operating systems.

Pause and think about your own environment: What percentage of endpoints is running Windows 10 20H2? What is the primary operating system version for your servers? Are they really all on Windows Server 2019, or are they predominantly on Windows Server 2012 R2? If you are in the pharmaceutical or manufacturing industry, which operating system runs the production line?
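
One quick way to answer these questions is to summarize an inventory export from your asset-management or MDM tool. A minimal sketch, assuming you can dump hostname/OS pairs to CSV (the data below is hypothetical):

```python
import csv
import io
from collections import Counter

# Hypothetical inventory export; in practice this would come from your
# asset-management or MDM tool rather than an inline string.
inventory_csv = """hostname,os_version
web01,Windows Server 2012 R2
web02,Windows Server 2012 R2
app01,Windows Server 2019
hr-laptop,Windows 10 20H2
dev-mac,macOS 11.2
"""

# Count endpoints per OS version and report each as a share of the fleet.
counts = Counter(row["os_version"] for row in csv.DictReader(io.StringIO(inventory_csv)))
total = sum(counts.values())
for os_version, n in counts.most_common():
    print(f"{os_version}: {n} ({n / total:.0%})")
```

Even a rough breakdown like this makes it obvious which OS versions your security tooling must support before you commit to a product.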

In this post, we discuss why it is essential for security professionals to understand the complexities of different operating systems and endpoint types in their enterprise environment when choosing security products.

Attackers Always Look for the Weakest Link

Attackers do not care where the target is located, which operating system it runs, or whether an endpoint is a known or unknown device. The chances are very high that during an attack, as attackers pivot between endpoints, they will encounter different operating systems, and some of those endpoints may sit in a blind spot for the security team. In the end, attackers want to reach their goal by any means necessary.

Working in IT Security means understanding how to harden the environment and protect, detect, respond, and recover against cyber attacks. When choosing the technology you need to achieve these goals, it is critical to deeply understand operating system differences, their respective attack surfaces, and what capabilities your chosen tools have to perform tasks like Digital Forensics Incident Response (DFIR).

For example, the last thing you want when responding to an incident is to realize that your ability to open a remote shell is only available on 10% of your fleet because your tool only supports that feature on Windows 10 1809 or higher. Faced with such a roadblock at such a critical juncture, you would be forced to collect each endpoint physically before being able to perform some vital forensics tasks. By then, it may be too late.

Understanding Threats in Context is Vital

Dealing with commodity malware should be trivial. Today, SOC analysts shouldn’t need to spend time investigating commodity malware that even a legacy antivirus solution should be able to prevent outright. Instead, SOC analysts should spend most of their time defending against more sophisticated attacks, and for that it is critical to understand the correlations between a chain of activities.

To aid with that, the SentinelOne platform provides ActiveEDR powered by Storyline – a powerful capability that provides real-time actionable correlation and context, allowing SOC analysts to understand the whole storyline of the attack as it’s happening.

Visibility Is Crucial for Effective Threat Hunting

Sometimes, just preventing or detecting threats isn’t enough and we need to unleash the threat hunter in us. Consider the situation when you receive new threat intelligence and decide to sweep the environment for a specific Indicator of Compromise (IOC) or Indicator of Attack (IOA). You can’t just rely on the prevention or detection logs from your security solution, because when your CISO or Head of Security Operations asks “are we impacted by XYZ?” the response can’t be “Not sure, but our tools didn’t alert on it.” They expect you to know, so you must be able to sweep the environment. That’s when you need access to contextualized telemetry data in order to hunt for the unknown.

SentinelOne’s Deep Visibility was built with this kind of situation in mind. You can leverage an intuitive, SQL-like query language to quickly search the entire contextualized dataset of your SentinelOne instance.
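
The core idea of an IOC sweep can be sketched in a few lines. The following is an illustration of the technique over hypothetical telemetry, not Deep Visibility’s actual query syntax or API:

```python
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    hostname: str
    process_name: str
    sha256: str

# Hypothetical telemetry records; a real hunt would query the EDR's
# contextualized data store rather than an in-memory list.
telemetry = [
    ProcessEvent("hr-laptop", "invoice.exe", "ab12cd34"),
    ProcessEvent("web01", "nginx", "ffee0011"),
    ProcessEvent("dev-mac", "invoice.exe", "ab12cd34"),
]

# The IOC from the new threat-intel report: a known-bad file hash.
ioc_sha256 = "ab12cd34"

# Sweep: find every endpoint where the IOC has been observed.
matches = [e for e in telemetry if e.sha256 == ioc_sha256]
affected_hosts = sorted({e.hostname for e in matches})
print(f"IOC seen on {len(affected_hosts)} host(s): {', '.join(affected_hosts)}")
```

Whatever the query language, the answer to “are we impacted?” comes from sweeping the full telemetry set, not from the subset of events that already triggered alerts.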

Of course, when we are investigating a threat, it can sometimes be handy or even a necessity to perform more advanced digital forensics activities. For example, you might need to collect hibernation files, memory dumps or list some of the registry hives. To do so, historically we would need to gain physical access to an endpoint. Nowadays, with employees working from home or on remote sites, this has become exponentially more difficult.

The answer is a secure, remote shell capability. With SentinelOne’s Full Remote Shell capability, you can connect directly to the target endpoint to carry out any necessary tasks across all your SentinelOne-protected endpoints, regardless of what OS your endpoints are running.

Remediation and Mitigation Needs To Be Simple

Once we understand the incident, it’s time to move into the remediation and containment phase. Typically, you are looking for mitigation actions like stopping all processes related to a threat, encrypting and moving the threat and its executables into quarantine, deleting all files and system changes created by a threat and, if required, restoring file configurations that the threat changed.

While SOC analysts often have their hands full resolving incidents, administrators need to manage the security posture of the enterprise environment, including such things as managing device control policies, hardening policies and firewall policies, and accessing vulnerability management information.

Historically, this work required both analysts and security administrators to master a variety of different IT systems depending on the operating system, and sometimes even waste time opening an IT help desk ticket to implement a change.

When choosing a modern security solution, look for tools that simplify the security stack and offer a single and simple interface for managing security controls, mitigation, remediation, threat hunting and all other aspects related to security.

The benefits here include being able to train people faster on a common platform and avoiding institutional knowledge that is baked into a few specialists who know how to handle obscure and often legacy software. A simple, intuitive and easy-to-learn interface also reduces the time it takes to deal with threats.

With SentinelOne, all these functions are available within one console, empowering everyone dealing with security, from the SOC team to security administrators, to manage whichever functions are appropriate to their work across Windows, macOS, and Linux. Features like customizable role-based access control (RBAC) ensure that each person and group in your organization has access only to the features they need to do their work effectively.

We Are All In This Together: One Team

As we know, cybersecurity is a team sport, and not every organization has the resources to staff a dedicated SOC. That’s why SentinelOne offers Vigilance MDR, a Managed Detection and Response (MDR) and Digital Forensics Incident Response (DFIR) service to support your security department. Our analysts are subject matter experts when it comes to detecting and responding to cyber attacks. They monitor and, where needed, react to all suspicious behaviors regardless of the endpoint’s operating system type, version or location.

Gain Visibility Into the Unknown

When looking into an enterprise environment today, do we really know what is connected? Or is it more likely that we have a relatively good understanding of what should be connected, and we might just assume that we prevent unauthorized access to the network?

Today, because our networks often contain not only traditional endpoints but a variety of other devices such as IoT and Operational Technology (OT)-type devices, it can be difficult for admins to know exactly what’s on the network.

SentinelOne’s Ranger capability addresses this problem by turning your SentinelOne agents into a distributed sensor network that combines passive and active reconnaissance techniques to build a map of everything in your environment. With that, you gain critical visibility across managed and unmanaged endpoints regardless of their operating system and endpoint type.

Conclusion

As security professionals, every day brings us fresh challenges. Some of us might be working on responding to an incident; others might be working on identifying ways to improve the security posture of the enterprise environment. In the end, our line of work requires us to have a good understanding of the enterprise environment. Only by having the right tools with the right level of visibility will we be successful in protecting, detecting, responding, and recovering in times of need. Therefore, it is crucial to select your endpoint security vendor based on whether they can provide you with the comprehensive capabilities required across your entire digital estate.

At SentinelOne, we understand that enterprises have different operating system types, versions and endpoint types. Therefore, we provide a comprehensive endpoint security platform that supports all major operating systems and offers feature parity for different versions of Windows, macOS, and Linux.

If you would like to learn more about how SentinelOne can help protect your entire organization, contact us for a free demo.



Atlassian launches a Jira for every team

Atlassian today announced a new edition of its Jira project management tool, Jira Work Management. The company has long been on a journey of bringing Jira to teams beyond the software development groups it started out with. With Jira Service Management, it is successfully doing that with IT teams. With Jira Core, it also moved further in this direction, but Jira Work Management takes this a step further (and will replace Jira Core). The idea here is to offer a version of Jira that enables teams across marketing, HR, finance, design and other groups to manage their work and — if needed — connect it to that of a company’s development teams.

“JIRA Software’s this de-facto standard,” Atlassian’s VP of Product Noah Wasmer told me. “We’re making just huge inroads with Jira Service Management right now, bringing IT teams into that loop. We have over 100,000 customers now on those two products. So it’s really doing incredibly well. But one of the things that CIOs say is that it’s really tough to put JIRA Software in front of an HR team and the legal team. They often ask, what is code? What is a pull request?”

Image Credits: Atlassian

Wasmer also noted that even though Jira Software is specifically meant for developers, about half of its users are already in other teams that work with these development teams. “We think that [Jira Work Management] gives them the more contextually relevant tool — a tool that actually helps them accelerate and move faster,” Wasmer said.

With Jira Work Management, the company is looking at making it easier for any team to track and manage their work in what Wasmer described as a “universal system and family of product.” As companies look at how to do remote and hybrid work, Atlassian believes that they’ll need this kind of core product to keep track of the work that is being done. But it’s also about the simple fact that every business is now a software business, and while every team’s work touches upon this, marketing and design teams often still work in their own silos.

Image Credits: Atlassian

These different teams, though, also have quite different expectations of the user interface they need to manage their work most effectively. So while Jira Work Management features all of the automation features and privacy controls of its brethren, it is based around a slightly different and simplified user interface than Jira Software, for example.

What’s even more important, though, is that Jira Work Management offers a variety of views for teams to enter and manipulate their data. To get new users onboarded quickly, Atlassian built a set of templates for some of the most common use cases it expects, though users are obviously free to customize all these different views to their hearts’ — and business needs’ — content.

Atlassian also changed some of the language around Jira tickets. There are no ‘stories’ and ‘bugs’ in Jira Work Management (unless you add them yourself) and instead, these templates use words like ‘tasks,’ ‘assets’ (for design use cases) or ‘candidates’ (for HR).

Image Credits: Atlassian

Given the fact that spreadsheets are the universal language of business, it’s maybe no surprise that the List view is core here, with an Excel/Airtable-like experience that should immediately feel familiar to any business user. It’s in-line editable and completely abstracts away the usual Jira ticket, even though underneath, it’s the same taxonomy and infrastructure.

“We really wanted people to walk into this product and just understand that there is work that needs to be done,” Chase Wilson, the head of product marketing for Jira Work Management, said. He noted that the team worked on making the experience feel snappy.

Image Credits: Atlassian

The other views available are pretty straightforward: a calendar and a Gantt chart-like timeline view, as well as the traditional Kanban board that has long been at the core of Jira (and Agile in general).

Jira Work Management also lets users build forms, using a drag-and-drop editor that makes it easy for anybody inside an organization to build forms and collect requests that way. Only a few weeks ago, Atlassian announced the acquisition of ThinkTilt, the company behind the popular no-code form builder ProForma, and it looks like it is already putting this acquisition to work here.

As Wasmer stressed, Jira Work Management is meant to help different teams get work done in a way that works best for them. But because Jira is now a family of products, it also enables a lot more cross-team collaboration. That means a development team that is working on implementing a GDPR requirement can now build a workflow that ties in with the project board for a legal team that then allows legal to hold up a software release until it approves this new feature.

“We hear about this all the time today,” he said. “They just stick the legal team into Jira Software — and it over-inundates them with information that’s not relevant to what they’re trying to get done. Now we can expose them. And we also then get that legal team, that marketing team, exposed to different templates for different work. What they’re finding is that once they get used to it for that must-do use case, they start saying: Well, hey, why don’t I use this for contract approvals at the end of the quarter?’”

Image Credits: Atlassian

As for pricing, Atlassian follows its standard template here, offering a free tier for teams of up to 10 users, with paid tiers starting at $5/user/month and discounts for larger teams.

Looking ahead, Atlassian plans to add more reporting capabilities, native approvals for faster signoffs and more advanced functionality across the new work views.

It’s worth noting that Jira Work Management is the first product to come out of Point A, Atlassian’s new innovation program “dedicated to connecting early adopter customers with product teams to build the next generation of teamwork tools.”