SambaNova raises $676M at a $5.1B valuation to double down on cloud-based AI software for enterprises

Artificial intelligence technology holds a huge amount of promise for enterprises — as a tool to process and understand their data more efficiently; as a way to leapfrog into new kinds of services and products; and as a critical stepping stone into whatever the future might hold for their businesses. But the problem for many enterprises is that they are not tech businesses at their cores and so bringing on and using AI will typically involve a lot of heavy lifting. Today, one of the startups building AI services is announcing a big round of funding to help bridge that gap.

SambaNova — a startup building AI hardware and the integrated systems that run on it, which only officially came out of three years in stealth last December — is announcing a huge round of funding today to take its business out into the world. The company has closed $676 million in financing, a Series D that co-founder and CEO Rodrigo Liang has confirmed values the company at $5.1 billion.

The round is being led by SoftBank, which is making the investment via Vision Fund 2. Temasek and the Government of Singapore Investment Corp. (GIC), both new investors, are also participating, along with previous backers BlackRock, Intel Capital, GV (formerly Google Ventures), Walden International and WRVI, among other unnamed investors. (Sidenote: BlackRock and Temasek separately kicked off an investment partnership yesterday, although it’s not clear if this falls into that remit.)

Co-founded by two Stanford professors, Kunle Olukotun and Chris Ré, and Liang, who had been an engineering executive at Oracle, SambaNova has been around since 2017 and has raised more than $1 billion to date — both to build out its AI-focused hardware, which it calls DataScale, and to build out the system that runs on it. (The “Samba” in the name is a reference to Liang’s Brazilian heritage, he said, but also to the Latin music and dance that speaks of constant movement and shifting, not unlike the journey AI data regularly needs to take, which makes it too complicated and too intensive to run on more traditional systems.)

SambaNova on one level competes for enterprise business against companies like Nvidia, Cerebras Systems and Graphcore — another startup in the space which earlier this year also raised a significant round. However, SambaNova has also taken a slightly different approach to the AI challenge.

In December, the startup launched Dataflow-as-a-service as an on-demand, subscription-based way for enterprises to tap into SambaNova’s AI system, with the focus just on the applications that run on it, without needing to focus on maintaining those systems themselves. It’s the latter that SambaNova will be focusing on selling and delivering with this latest tranche of funding, Liang said.

SambaNova’s opportunity, Liang believes, lies in selling software-based AI systems to enterprises that are keen to adopt more AI into their business, but might lack the talent and other resources to do so if it requires running and maintaining large systems.

“The market right now has a lot of interest in AI. They are finding they have to transition to this way of competing, and it’s no longer acceptable not to be considering it,” said Liang in an interview.

The problem, he said, is that most AI companies “want to talk chips,” yet many would-be customers will lack the teams and appetite to essentially become technology companies to run those services. “Rather than you coming in and thinking about how to hire scientists and then deploy an AI service, you can now subscribe, and bring in that technology overnight. We’re very proud that our technology is pushing the envelope on cases in the industry.”

To be clear, a company will still need data scientists, just not the same number, and specifically not the same number dedicating their time to maintaining systems, updating code and other more incremental work that comes with managing an end-to-end process.

SambaNova has not disclosed many customers so far in the work that it has done — the two reference names it provided to me are both research labs, the Argonne National Laboratory and the Lawrence Livermore National Laboratory — but Liang noted some typical use cases.

One was in imaging, such as in the healthcare industry, where the company’s technology is being used to help train systems based on high-resolution imagery, along with other healthcare-related work. The coincidentally named Corona supercomputer at the Livermore Lab (it was named after the 2014 lunar eclipse, not the dark cloud of a pandemic that we’re currently living through) is using SambaNova’s technology to help run calculations related to some Covid-19 therapeutic and antiviral compound research, Marshall Choy, the company’s VP of product, told me.

Another set of applications involves building systems around custom language models, for example in specific industries like finance, to process data quicker. And a third is in recommendation algorithms, something that appears in most digital services and frankly could always stand to work a little better than it does today. I’m guessing that in the coming months it will release more information about where and who is using its technology.

Liang also would not comment on whether Google and Intel were specifically tapping SambaNova as a partner in their own AI services, but he didn’t rule out the prospect of partnering to go to market. Indeed, both have strong enterprise businesses that span well beyond technology companies, so working with a third party that helps make even their own AI cores more accessible could be an interesting prospect. SambaNova’s DataScale and the Dataflow-as-a-service system both work with input from frameworks like PyTorch and TensorFlow, so a level of integration is already there.

“We’re quite comfortable in collaborating with others in this space,” Liang said. “We think the market will be large and will start segmenting. The opportunity for us is in being able to take hold of some of the hardest problems in a much simpler way on their behalf. That is a very valuable proposition.”

The promise of creating a more accessible AI for businesses is one that has eluded quite a few companies to date, so the prospect of finally cracking that nut is one that appeals to investors.

“SambaNova has created a leading systems architecture that is flexible, efficient and scalable. This provides a holistic software and hardware solution for customers and alleviates the additional complexity driven by single technology component solutions,” said Deep Nishar, Senior Managing Partner at SoftBank Investment Advisers, in a statement. “We are excited to partner with Rodrigo and the SambaNova team to support their mission of bringing advanced AI solutions to organizations globally.”

ParkMobile Breach Exposes License Plate Data, Mobile Numbers of 21M Users

Someone is selling account information for 21 million customers of ParkMobile, a mobile parking app that’s popular in North America. The stolen data includes customer email addresses, dates of birth, phone numbers, license plate numbers, hashed passwords and mailing addresses.

KrebsOnSecurity first heard about the breach from Gemini Advisory, a New York City-based threat intelligence firm that keeps a close eye on the cybercrime forums. Gemini shared a new sales thread on a Russian-language crime forum that included my ParkMobile account information in the accompanying screenshot of the stolen data.

Included in the data were my email address and phone number, as well as license plate numbers for four different vehicles we have used over the past decade.

Asked about the sales thread, Atlanta-based ParkMobile said the company published a notification on Mar. 26 about “a cybersecurity incident linked to a vulnerability in a third-party software that we use.”

“In response, we immediately launched an investigation with the assistance of a leading cybersecurity firm to address the incident,” the notice reads. “Out of an abundance of caution, we have also notified the appropriate law enforcement authorities. The investigation is ongoing, and we are limited in the details we can provide at this time.”

The statement continues: “Our investigation indicates that no sensitive data or Payment Card Information, which we encrypt, was affected. Meanwhile, we have taken additional precautionary steps since learning of the incident, including eliminating the third-party vulnerability, maintaining our security, and continuing to monitor our systems.”

Asked for clarification on what the attackers did access, ParkMobile confirmed it included basic account information – license plate numbers and, if provided, email addresses and/or phone numbers, and vehicle nicknames.

“In a small percentage of cases, there may be mailing addresses,” spokesman Jeff Perkins said.

ParkMobile doesn’t store user passwords, but rather it stores the output of a fairly robust one-way password hashing algorithm called bcrypt, which is far more resource-intensive and expensive to crack than common alternatives like MD5. The database stolen from ParkMobile and put up for sale includes each user’s bcrypt hash.
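The work-factor idea that makes bcrypt so much harder to crack than MD5 can be sketched in a few lines of Python. Since bcrypt itself lives in a third-party package, this sketch substitutes the standard library’s PBKDF2 as the slow hash; the principle (a per-user salt plus a tunable iteration count) is the same one protecting the stolen ParkMobile hashes.

```python
import hashlib
import os

def fast_hash(password: bytes) -> str:
    # A single MD5 pass: attackers can test billions of guesses per second.
    return hashlib.md5(password).hexdigest()

def slow_hash(password: bytes, salt: bytes, rounds: int = 100_000) -> str:
    # Key stretching: every guess now costs `rounds` HMAC-SHA256 iterations.
    # bcrypt applies the same work-factor principle (and stores its salt and
    # cost parameter inside the hash string itself).
    return hashlib.pbkdf2_hmac("sha256", password, salt, rounds).hex()

password = b"correct horse battery staple"
salt = os.urandom(16)

# Same password and salt produce the same digest, so logins can be verified...
assert slow_hash(password, salt) == slow_hash(password, salt)
# ...but a different salt produces a different digest, so precomputed
# rainbow tables are useless across a leaked user database.
assert slow_hash(password, salt) != slow_hash(password, os.urandom(16))
```

Because the cost parameter is stored with each hash, a site can ratchet the work factor up over time as hardware gets faster, which is what keeps offline cracking of a stolen database expensive.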

“You are correct that bcrypt hashed and salted passwords were obtained,” Perkins said when asked about the screenshot in the database sales thread.

“Note, we do not keep the salt values in our system,” he said. “Additionally, the compromised data does not include parking history, location history, or any other sensitive information. We do not collect social security numbers or driver’s license numbers from our users.”

ParkMobile says it is finalizing an update to its support site confirming the conclusion of its investigation. But I wonder how many of its users were even aware of this security incident. The Mar. 26 security notice does not appear to be linked to other portions of the ParkMobile site, and it is absent from the company’s list of recent press releases.

It’s also curious that ParkMobile hasn’t asked or forced its users to change their passwords as a precautionary measure. I used the ParkMobile app to reset my password, but there was no messaging in the app that suggested this was a timely thing to do.

So if you’re a ParkMobile user, changing your account password might be a pro move. If it’s any consolation, whoever is selling this data is doing so for an insanely high starting price ($125,000) that is unlikely to be paid by any cybercriminal to a new user with no reputation on the forum.

More importantly, if you used your ParkMobile password at any other site tied to the same email address, it’s time to change those credentials as well (and stop re-using passwords).

The breach comes at a tricky time for ParkMobile. On March 9, the European parking group EasyPark announced its plans to acquire the company, which operates in more than 450 cities in North America.

Why Your macOS EDR Solution Shouldn’t Be Running Under Rosetta 2

Last week, SentinelOne announced the early availability of its v5.0 agent, becoming the first endpoint security agent to natively support Apple’s new M1 (aka Apple silicon, aka arm64 Mac) architecture. With native support, the Sentinel agent is freed from having to run under Apple’s translation software layer, known as Rosetta 2, unlike other macOS EDR/XDR security solutions.

In this post, we explain what all the hot terms being thrown around in this space mean – from ‘Rosetta 2’ and ‘Apple silicon’ to ‘arm64 architecture’ and ‘Universal 2’ binaries – and explain why running security software natively on Apple silicon has clear performance and security benefits.

A New Architecture and…Names, Lots of Names

Apple made big news last year with the announcement that they would be building their own CPUs for Mac devices instead of relying on Intel for their processors. These new devices started shipping in late 2020 and are differentiated from Intel Macs by several designators.

The first term Apple coined to market their new devices was “Apple silicon”, referring to CPUs based on a chip design created by ARM. Apple licenses the base design and produces their own take on it. Evidently, they didn’t want to brand that take as “just an ARM chip”, preferring to take ownership of it through the distinctive brand name of “Apple silicon”.

Apple have, of course, been using their own custom-made ARM chips in iOS devices for years, designated with an ‘A’ and a numerical specifier, such as A10, A11, A12 and so on (current iOS devices ship with the A14 chip). Presumably to maintain some parallelism with this convention, the first Mac ARM chip was designated ‘M1’. We expect to see M2 and M3 and so on over time as Apple iterates on the design. So, as well as being known as ‘Apple silicon Macs’, Apple’s first generation of non-Intel devices are also known as “M1 Macs”.

And that brings us to binaries – the executable file format that underlies both Apple and third-party software that can run on these new M1 chips. These must of course have a format that is compatible with the CPU architecture. On Intel machines, we have x86_64 Mach-O executables; for the M1/Apple silicon Macs, the native binary format is the arm64e Mach-O.

So, we have an ARM-based M1/Apple silicon processor architecture and arm64 binaries that run on it. That seems straightforward enough, but there’s a hitch: what about all the software that was written for Intel Macs over the last 15 years or more? And let’s not overlook the fact that Apple will keep shipping – and keep building – Intel Macs for at least another two years. It would be untenable to have two hardware product lines with two entirely incompatible software catalogs. Apple needed to find a way to allow software built for Intel machines to run on the new M1 machines.

Enter Rosetta 2 and the Universal 2 file format.

Intel, ARM and the Need for Translation Software

The name Rosetta is, of course, derived from the famous ‘Rosetta Stone’ that first allowed us to translate Egyptian hieroglyphics into modern language. Apple’s original Rosetta software helped the company move from an earlier architecture, PowerPC, to Intel in the mid-2000s by translating PowerPC software for the new processors.

At that time, they used a Universal file format that was a ‘FAT’ binary containing both the PowerPC and the Intel binaries within it. Regardless of whether the CPU was PowerPC or Intel, the OS extracted the correct file from the FAT Universal binary and ran it natively on the CPU. However, if an Intel Mac came across software that contained only a PowerPC binary, the OS would instead launch the Rosetta translator and hand the PowerPC binary off to it to execute.

With the M1 Macs, Apple took a similar approach: there’s a Universal 2 binary format that developers (like SentinelOne) use to ship both Intel and arm64 versions of their software in one release. With the Universal 2 binary, the OS checks to see what architecture it’s running on and automatically selects the appropriate arm64 (for M1 Macs) or Intel (for x86_64) slice to execute. However, if an M1 Mac encounters an Intel binary from developers who have not yet made the native transition, it passes the binary off to the Rosetta 2 translation mechanism to deal with.
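That slice-selection logic can be sketched in Python using the FAT header layout and CPU-type constants from Apple’s `<mach-o/fat.h>`. The bytes here are synthetic rather than a real binary, and a real loader also checks CPU subtypes and other fields; this is just the shape of the mechanism:

```python
import struct

FAT_MAGIC = 0xCAFEBABE           # Universal ("FAT") header magic, stored big-endian
CPU_TYPE_X86_64 = 0x01000007     # CPU_TYPE_I386 | CPU_ARCH_ABI64
CPU_TYPE_ARM64 = 0x0100000C      # CPU_TYPE_ARM  | CPU_ARCH_ABI64

def build_fat(slices):
    """Build a synthetic Universal header: fat_header plus one fat_arch per slice."""
    blob = struct.pack(">II", FAT_MAGIC, len(slices))
    for cputype, offset, size in slices:
        # fat_arch fields: cputype, cpusubtype, file offset, slice size, alignment
        blob += struct.pack(">IIIII", cputype, 0, offset, size, 14)
    return blob

def pick_slice(blob, host_cputype):
    """Mimic the loader: scan the fat_arch table for the host's architecture."""
    magic, nfat = struct.unpack_from(">II", blob, 0)
    assert magic == FAT_MAGIC, "not a Universal binary"
    for i in range(nfat):
        cputype, _, offset, size, _ = struct.unpack_from(">IIIII", blob, 8 + 20 * i)
        if cputype == host_cputype:
            return offset, size
    return None  # no native slice: an M1 Mac would hand the binary to Rosetta 2

fat = build_fat([(CPU_TYPE_X86_64, 0x4000, 1234), (CPU_TYPE_ARM64, 0x8000, 5678)])
assert pick_slice(fat, CPU_TYPE_ARM64) == (0x8000, 5678)   # M1 runs this natively
assert pick_slice(fat, CPU_TYPE_X86_64) == (0x4000, 1234)  # Intel slice
```

The `None` branch is exactly the situation described above: an Intel-only binary on an M1 Mac has no arm64 slice to select, so the system falls back to Rosetta 2 translation.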

That entire process should be ‘transparent’ to users, say Apple, where the word ‘transparent’ here means ‘invisible’ rather than ‘obvious’. However, Rosetta 2 doesn’t work in quite the same way as the original Rosetta, and this has both performance and security consequences users should be aware of.

A New Architecture Means New Challenges For Endpoint Security

The primary difference between the original Rosetta and Rosetta 2 is when translation takes place. With Rosetta 2, Apple wanted to avoid performance issues that affected some heavy resource-sapping software under the original Rosetta mechanism (Adobe CS2 was a particular complaint at the time).

The problem with the original Rosetta was that it translated the software each time it was run, taxing the CPU repeatedly through every launch. Apple’s approach with Rosetta 2 was to avoid that as much as possible by performing an ‘Ahead-of-Time’ (AOT) translation the first time the software launches and caching that translation for future launches.

While that is a great way to improve the performance of emulated software, there are downsides, particularly from the perspective of security software.

Native arm64 code has at least two performance advantages over translated code that are particularly relevant to large, complex programs such as EDR offerings.

First, the Ahead-of-Time translation we mentioned above is brokered by a helper program called oahd_helper. This program is responsible for creating the translated AOT binaries the first time the x86_64 binary is run. The larger the x86_64 code to be translated, the longer the launch time. This in turn can result in heavy memory and CPU usage on the device when oahd_helper is required to translate very large Intel executable files.

Secondly, complete AOT translation is not possible for parts of complex program code that need to be resolved at runtime. Exactly what code needs to run sometimes cannot be determined until runtime, due to local environment variables and conditions. While a developer could theoretically compile all possible code branches ahead of time, that is both inefficient and error prone. It is far more efficient and robust to determine, when necessary, what code needs to run and compile it on the fly, a process known as Just-in-Time or JIT compilation.
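As a toy illustration of why some code can only be produced at runtime (hypothetical Python, nothing to do with Rosetta’s actual internals), here a function body is generated and compiled only once a runtime value is known:

```python
def make_formatter(locale: str):
    # Which code branch we need depends on a value known only at runtime,
    # so we generate the source on demand and compile it on the fly:
    # a toy version of Just-in-Time compilation.
    sep = "," if locale == "en_US" else "."
    src = f"def fmt(n):\n    return format(n, ',d').replace(',', '{sep}')"
    namespace = {}
    exec(compile(src, "<jit>", "exec"), namespace)
    return namespace["fmt"]

# The same program compiles different code depending on its environment:
assert make_formatter("en_US")(1234567) == "1,234,567"
assert make_formatter("de_DE")(1234567) == "1.234.567"
```

An AOT translator faces the same dilemma at a lower level: it cannot know every instruction sequence a program will produce until the program actually runs, which is why Rosetta 2 still needs a runtime translation path.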

Other things being equal, JIT compilation is fine when you’re running native code on a native processor, but when that code has to be translated through Rosetta it means that some amount of Just-in-Time compilation has to occur despite the AOT compilation. When this condition occurs, the kernel transfers control to a special Rosetta translation stub that takes care of the work. In short, any sufficiently complex program (such as an EDR solution) is going to need to have at least some of its Intel code translated via Rosetta on the fly, and that translation is going to incur a performance penalty compared to a security solution that’s running native arm64 code.

This fact is noted in Apple’s own documentation: “the translation process takes time, so users might perceive that translated apps launch or run more slowly at times”.

Native M1 Software Is More Secure By Design

But performance isn’t the only thing to worry about. More importantly, native M1 code is simply safer than running Intel code through Rosetta translation. That’s because one of the changes Apple brought in with Big Sur that only applies to Apple silicon Macs is that native arm64 code cannot execute on an M1 Mac unless it has a valid code signature.

Translated x86_64 code, however, is not subject to this restriction: it is permitted to execute through Rosetta with no signature information at all.

You can easily verify this on an M1 Mac with a simple ‘hello world’ program. Compile it as arm64e and leave it unsigned, and the OS kills it when we try to execute it; recompile the same source as x86_64, and the unsigned hello.out runs without hindrance.

This allows for the possibility of software tampering: a piece of software running only as an Intel binary through Rosetta translation could have its code signature removed, its code altered, and the program executed through Rosetta without the valid developer’s code signature.

Although there are other barriers to clear for an attacker trying to carry out such an attack, it nevertheless remains the case that native arm64 code is inherently safer on an M1 Mac than translated Intel code, which can run without any code signing checks at all.

This naturally leads to the question of whether Rosetta itself could be used as an attack vector. Although all the components are locked down via System Integrity Protection, Rosetta is a multi-faceted and complicated mechanism consisting of many interlocking parts, each of which presents a potential attack surface.

Some initial, excellent reverse engineering on some of Rosetta’s components has been done here, but there is still much more to be learned about this translation layer and research is ongoing.

Conclusion

There is no question that Apple has made great strides with Rosetta 2 over the original Rosetta and this may account for why some software developers have yet to make the transition, perhaps not understanding the advantages of native M1 support. Other developers may prefer to leave their solutions running under Rosetta to take advantage of legacy Intel modules that they have not or cannot translate to ARM.

Yet as we’ve explained above, the benefits of running security software natively on Apple silicon are clear for both performance and security reasons. And as we have noted elsewhere, there is some suggestion that Apple may disable Rosetta 2 in some regions with little notice. Finally, it is also inevitable that – just as the original Rosetta reached EOL a few years after Apple had finally transitioned entirely from PowerPC to Intel – Apple will eventually drop support for translated software on the Apple silicon platform. Let’s hope, though, that other security software developers don’t wait that long to bring the performance and security benefits to their users.



Microsoft is acquiring Nuance Communications for $19.7B

Microsoft agreed today to acquire Nuance Communications, a leader in speech-to-text software, for $19.7 billion. Bloomberg broke the story over the weekend that the two companies were in talks.

In a post announcing the deal, the company said this was about increasing its presence in the healthcare vertical, a place where Nuance has done well in recent years. In fact, the company announced the Microsoft Cloud for Healthcare last year, and this deal is about accelerating its presence there. Nuance’s products in this area include Dragon Ambient eXperience, Dragon Medical One and PowerScribe One for radiology reporting.

“Today’s acquisition announcement represents the latest step in Microsoft’s industry-specific cloud strategy,” the company wrote. The acquisition also builds on several integrations and partnerships the two companies have made in the last couple of years.

The company boasts 10,000 healthcare customers, according to information on the website. Those include AthenaHealth, Johns Hopkins, Mass General Brigham and Cleveland Clinic to name but a few, and it was that customer base that attracted Microsoft to pay the price it did to bring Nuance into the fold.

Nuance CEO Mark Benjamin will remain with the company and report to Scott Guthrie, Microsoft’s EVP in charge of the cloud and AI group.

Nuance has a complex history. It went public in 2000 and began buying speech recognition products, including Dragon Dictate from Lernout & Hauspie, in 2001. It merged with a company called ScanSoft in 2005. That company began life in 1992 as Visioneer, a scanning company.

Today, the company has a number of products including Dragon Dictate, a consumer and business speech-to-text product that dates back to the early 1990s. It’s also involved in speech recognition, chat bots and natural language processing, particularly in healthcare and other verticals.

The company has 6,000 employees spread across 27 countries. In its most recent earnings report from November 2020, which was for Q4 2020, the company reported $352.9 million in revenue compared to $387.6 million in the same period a year prior. That’s not the direction a company wants to go in, but it is still a run rate of over $1.4 billion.

At the time of that earnings call, the company also announced it was selling its medical transcription and electronic health record (EHR) Go-Live services to Assured Healthcare Partners and Aeries Technology Group. Company CEO Benjamin said this was about helping the company concentrate on its core speech services.

“With this sale, we will reach an important milestone in our journey towards a more focused strategy of advancing our Conversational AI, natural language understanding and ambient clinical intelligence solutions,” Benjamin said in a statement at the time.

It’s worth noting that Microsoft already has a number of speech recognition and chat bot products of its own, including desktop speech-to-text services in Windows and on Azure, but it took the chance to buy a market leader and go deeper into the healthcare vertical.

The transaction has already been approved by both company boards and Microsoft reports it expects the deal to close by the end of this year, subject to standard regulatory oversight and approval by Nuance shareholders.

This would mark the second largest purchase by Microsoft ever, only surpassed by the $26.2 billion the company paid for LinkedIn in 2016.

Microsoft goes all in on healthcare with $19.7B Nuance acquisition

When Microsoft announced it was acquiring Nuance Communications this morning for $19.7 billion, you could be excused for doing a Monday morning double take at the hefty price tag.

That’s surely a lot of money for a company on a $1.4 billion run rate, but Microsoft, which has already partnered with the speech-to-text market leader on several products over the last couple of years, saw a company firmly embedded in healthcare and it decided to go all in.

And $20 billion is certainly all in, even for a company the size of Microsoft. But 2020 forced us to change the way we do business, from restaurants to retailers to doctors. In fact, the pandemic in particular changed the way we interact with our medical providers. We learned very quickly that you don’t have to drive to an office, wait in a waiting room, then in an exam room, all to see the doctor for a few minutes.

Instead, we can get on the line, have a quick chat and be on our way. It won’t work for every condition of course — there will always be times the physician needs to see you — but for many meetings such as reviewing test results or for talk therapy, telehealth could suffice.

Microsoft CEO Satya Nadella says that Nuance is at the center of this shift, especially with its use of cloud and artificial intelligence, and that’s why the company was willing to pay the amount it did to get it.

“AI is technology’s most important priority, and healthcare is its most urgent application. Together, with our partner ecosystem, we will put advanced AI solutions into the hands of professionals everywhere to drive better decision-making and create more meaningful connections, as we accelerate growth of Microsoft Cloud in Healthcare and Nuance,” Nadella said in a post announcing the deal.

Holger Mueller, an analyst at Constellation Research, says that may be so, but he believes that Microsoft missed the boat with Cortana and this is about helping the company catch up on a crucial technology. “Nuance will not only give Microsoft technology help in regards to neural network-based speech recognition, but also a massive improvement in vertical capabilities, call center functionality and the MSFT IP position in speech,” he said.

Microsoft sees this deal doubling what was already a considerable total addressable market to nearly $500 billion. While TAMs always tend to run high, that is still a substantial number.

It also fits with Gartner data, which found that by 2022, 75% of healthcare organizations will have a formal cloud strategy in place. The AI component only adds to that number and Nuance brings 10,000 existing customers to Microsoft including some of the biggest healthcare organizations in the world.

Brent Leary, founder and principal analyst at CRM Essentials, says the deal could provide Microsoft with a ton of health data to help feed the underlying machine learning models and make them more accurate over time.

“There is going to be a ton of health data being captured by the interactions coming through telemedicine, and this could create a whole new level of health intelligence,” Leary told me.

That of course could drive a lot of privacy concerns where health data is involved, and it will be up to Microsoft, which just experienced a major breach on its Exchange email server products last month, to assure the public that their sensitive health data is being protected.

Leary says that ensuring data privacy is going to be absolutely key to the success of the deal. “The potential this move has is pretty powerful, but it will only be realized if the data and insights that could come from it are protected and secure — not only protected from hackers but also from unethical use. Either could derail what could be a game changing move,” he said.

Microsoft also seemed to recognize that when it wrote, “Nuance and Microsoft will deepen their existing commitments to the extended partner ecosystem, as well as the highest standards of data privacy, security and compliance.”

We are clearly on the edge of a sea change when it comes to how we interact with our medical providers in the future. COVID pushed medicine deeper into the digital realm in 2020 out of simple necessity. It wasn’t safe to go into the office unless absolutely necessary.

The Nuance acquisition, which is expected to close some time later this year, could help Microsoft shift deeper into the market. It could even bring Teams into it as a meeting tool, but it’s all going to depend on the trust level people have with this approach, and it will be up to the company to make sure that both healthcare providers and the people they serve have that trust.

The Good, the Bad and the Ugly in Cybersecurity – Week 15

The Good

It’s that time of year again when security researchers get to show off their skills and reap the rewards in big cash prizes from Pwn2Own.

This year’s event was live streamed across social media sites as well as the event’s own site and featured 23 attempts targeting 10 different products from categories including Web Browsers, Virtualization software, Servers and Enterprise Communications.

So far we’ve seen big cash payouts for three separate exploits that allowed guest-to-host escape in macOS virtualization software Parallels Desktop ($40,000 each for the researchers), a $200,000 payout to two researchers who exploited a bug chain in Zoom messenger to achieve code execution on a target system, and another $200,000 payout to a researcher for exploiting a pair of bugs in Microsoft Teams. Two researchers also received a combined payout of $200,000 for combining an authentication bypass with a local privilege escalation to pwn an MS Exchange server. You can read all the details here.

Given that in this section we regularly highlight the penalties meted out to (sometimes) talented hackers for misusing their interest and skill for financial reward, we hope that highlighting this kind of competition might help to push those tempted by the Dark Side towards a more socially responsible and profitable use of their interests. CTFs, Bug Bounties, and – maybe one day – a high-dollar reward in a future Pwn2Own are all better paths for those with legitimate interest in computer hacking. May the Force be with you!

The Bad

CISA released an alert this week warning businesses running “outdated or misconfigured” SAP applications of malicious cyber activity. The alert comes on the back of research from Onapsis and SAP that found evidence of over 300 automated exploitations leveraging seven SAP-specific attack vectors.

The researchers found evidence that the exploits were in use by multiple threat actors and warned that patched vulnerabilities were being weaponized in less than 72 hours. Where enterprises deployed unpatched SAP applications in IaaS cloud environments, the researchers found evidence that the time to discovery and compromise was as little as 3 hours.


The specific bugs seen exploited were CVE-2010-5326, CVE-2016-3976, CVE-2016-9563, CVE-2018-2380, CVE-2020-6207, and CVE-2020-6287.

Once compromised, the exploited SAP applications could be used to bypass common security and compliance controls, enable the theft of sensitive data and potentially lead to a ransomware attack. CISA similarly warned that the vulnerabilities could lead to financial fraud and disruption of mission-critical business processes.

As is unfortunately so often the case, these attacks rely on organizations failing to apply patches for known vulnerabilities. In some cases, patches have been available to customers for months and even years. Given the turnaround time to exploitation noted in the Onapsis report, organizations failing to keep up with updating their SAP applications are leaving themselves at high risk of compromise. Accordingly, operators of SAP systems are strongly advised to review the Onapsis research and apply all necessary updates and mitigations as a matter of urgency.

The Ugly

Job hunting is stressful enough at the best of times, and certainly that hasn’t been helped for most in the era of COVID-19. But on top of the usual anxieties, there is also the threat of being compromised by malware when opening documents pertaining to potential job offers. Any way that threat actors can lure a phishing target to open a poisoned document is going to prove attractive, and job hunters are among the most likely to be eager to do just that.

It’s thus no surprise to learn that APT34 has been using this tried-and-trusted technique against job hunters, posing as a fictitious U.S.-based consulting firm called ‘Ntiva’ and circulating a booby-trapped Word doc named Job-Details.doc. The document advertises vacancies for a wide variety of positions, including accountants, project managers and IT managers, in locations such as Saudi Arabia, Kuwait and the UAE. The Iranian APT34 threat actor has been known to deliver malicious documents via LinkedIn messages in the past, but it’s not yet clear how this most recent campaign is being distributed.

Analysis by researchers reveals that Job-Details.doc uses malicious macros with DNS tunneling to drop a backdoor dubbed ‘SideTwist’, which includes functionality for downloading payloads and exfiltrating user data. Shell command execution is also available to the attackers. The VBA macros in the document are also responsible for persistence, registering a scheduled task named SystemFailureReporter that executes the second-stage payload at five-minute intervals.
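
Defenders can hunt for this persistence mechanism by checking Windows scheduled tasks against the reported name. Below is a minimal sketch in Python that flags the SystemFailureReporter task in `schtasks /query /fo csv`-style output; the task name comes from the researchers’ report, while the helper function and sample data are illustrative, not part of any published tooling.

```python
import csv
import io

# IOC from the researchers' report: the scheduled task SideTwist registers.
SUSPICIOUS_TASKS = {"SystemFailureReporter"}

def find_suspicious_tasks(schtasks_csv: str) -> list[str]:
    """Return task names from schtasks CSV output that match known IOCs."""
    hits = []
    for row in csv.DictReader(io.StringIO(schtasks_csv)):
        # Strip the leading backslash and any folder path from the task name.
        name = row.get("TaskName", "").strip("\\").split("\\")[-1]
        if name in SUSPICIOUS_TASKS:
            hits.append(name)
    return hits

# Illustrative sample of `schtasks /query /fo csv` output.
sample = r'''"TaskName","Next Run Time","Status"
"\SystemFailureReporter","4/8/2021 5:00:00 PM","Ready"
"\Microsoft\Windows\Defrag\ScheduledDefrag","N/A","Ready"'''

print(find_suspicious_tasks(sample))  # ['SystemFailureReporter']
```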

Based on multiple similarities to previous campaigns as well as the content of the advertised job vacancies, the researchers attribute the campaign to Iran-backed APT34 with high confidence and believe it to be targeting Lebanese and Middle Eastern job hunters in particular.


Industry experts bullish on $500M KKR investment in Box, but stock market remains skeptical

When Box announced it was getting a $500 million investment from private equity firm KKR this morning, it was hard not to see it as a positive move for the company. It has been operating under the shadow of Starboard Value, and this influx of cash could give it a way forward independent of the activist investors.

Industry experts we spoke to were all optimistic about the deal, seeing it as a way for the company to regain control while giving it a bushel of cash to make some moves. However, early returns from the stock market were not as upbeat, with the stock price plunging this morning.

Alan Pelz-Sharpe, principal analyst at Deep Analysis, a firm that follows the content management market closely, says that it’s a significant move for Box and opens up a path to expanding through acquisition.

“The KKR move is probably the most important strategic move Box has made since it IPO’d. KKR doesn’t just bring a lot of money to the deal, it gives Box the ability to shake off some naysayers and invest in further acquisitions,” Pelz-Sharpe told me, adding, “Box is no longer a startup; it’s a rapidly maturing company, and organic growth will only take you so far. Inorganic growth is what will take Box to the next level.”

Dion Hinchcliffe, an analyst at Constellation Research, who covers the work-from-home trend and the digital workplace, sees it similarly, saying the investment allows the company to focus longer term again.

“Box very much needs to expand in new markets beyond its increasingly commoditized core business. The KKR investment will give them the opportunity to realize loftier ambitions long term so they can turn their established market presence into a growth story,” he said.

Pelz-Sharpe says that it also changes the power dynamic after a couple of years of having Starboard pushing the direction of the company.

“In short, as a public company there are investors who want a quick flip and others that want to grow this company substantially before an exit. This move with KKR potentially changes the dynamic at Box and may well put Aaron Levie back in the driver’s seat.”

Josh Stein, a partner at DFJ and early investor in Box, who was a longtime board member, says that it shows that Box is moving in the right direction.

“I think it makes a ton of sense. Management has done a great job growing the business and taking it to profitability. With KKR’s new investment, you have two of the top technology investors in the world putting significant capital into going long on Box,” Stein said.

Perhaps Stein’s optimism is warranted. In its most recent earnings report, released last month, the company announced revenue of $198.9 million, up 8% year-over-year, with FY2021 revenue closing at $771 million, up 11%. What’s more, the company is cash-flow positive and has issued an optimistic outlook.

“As previously announced, Box is committed to achieving a revenue growth rate between 12-16%, with operating margins of between 23-27%, by fiscal 2024,” the company reiterated in a statement this morning.
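
Compounding the FY2021 base at the stated target range gives a rough sense of where revenue would land by fiscal 2024. The sketch below is purely illustrative arithmetic, not company guidance, and assumes the growth rate applies uniformly across the three fiscal years.

```python
# Illustrative projection: FY2021 revenue of $771M compounded through FY2024
# at the low and high ends of Box's stated 12-16% growth target.
BASE_FY2021 = 771  # revenue in $M
YEARS = 3          # FY2022 through FY2024

def project(base: float, rate: float, years: int) -> float:
    """Compound `base` at annual `rate` for `years` years."""
    return base * (1 + rate) ** years

low = project(BASE_FY2021, 0.12, YEARS)
high = project(BASE_FY2021, 0.16, YEARS)
print(f"${low:.0f}M - ${high:.0f}M")  # roughly $1083M - $1203M
```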

Investors remain skeptical, however, with the company’s stock price getting hammered this morning. As of publication the share price was down more than 9%. At this point, market investors may be waiting for the next earnings report to see if the company is headed in the right direction. For now, the $500 million certainly gives the company options, regardless of what Wall Street thinks in the short term.

Quiq acquires Snaps to create a combined customer messaging platform

At first glance, Quiq and Snaps might sound like similar startups — they both help businesses talk to their customers via text messaging and other messaging apps. But Snaps CEO Christian Brucculeri said “there’s almost no overlap in what we do” and that the companies are “almost complete complements.”

That’s why Quiq (based in Bozeman, Montana) is acquiring Snaps (based in New York). The entire Snaps team is joining Quiq, with Brucculeri becoming senior vice president of sales and customer success for the combined organization.

Quiq CEO Mike Myer echoed Brucculeri’s point, comparing the situation to dumping two pieces of a jigsaw puzzle on the floor and discovering that “the two pieces fit perfectly.”

More specifically, he told me that Quiq has generally focused on customer service messaging, with a “do it yourself, toolset approach.” After all, the company was founded by two technical co-founders, and Myer joked, “We can’t understand why [a customer] can’t just call an API.” Snaps, meanwhile, has focused more on marketing conversations, and on a managed service approach where it handles all of the technical work for its customers.

In addition, Myer said that while Quiq has “really focused on the platform aspect from the beginning” — building integrations with more than a dozen messaging channels including Apple Business Chat, Google’s Business Messages, Instagram, Facebook Messenger and WhatsApp — it doesn’t have “a deep natural language or conversational AI capability” the way Snaps does.

Myer said that demand for Quiq’s offering has been growing dramatically, with revenue up 300% year-over-year in the last six months of 2020. At the same time, he suggested that the divisions between marketing and customer service are beginning to dissolve, with service teams increasingly given sales goals, and “at younger, more commerce-focused organizations, they don’t have this differentiation between marketing and customer service” at all.

Apparently the two companies were already working together to create a combined offering for direct messaging on Instagram, which prompted broader discussions about how to bring the two products together. Moving forward, they will offer a combined platform for a variety of customers under the Quiq brand. (Quiq’s customers include Overstock.com, West Elm, Men’s Wearhouse and Brinks Home Security, while Snaps’ include Bryant, Live Nation, General Assembly, Clairol and Nioxin.) Brucculeri said this will give businesses one product to manage their conversations across “the full customer journey.”

“The key term you’re hearing is conversation,” Myer added. “It’s not about a ticket or a case or a question […] it’s an ongoing conversation.”

Snaps had raised $13 million in total funding from investors including Signal Peak Ventures. The financial terms of the acquisition were not disclosed.

Immersion cooling to offset data centers’ massive power demands gains a big booster in Microsoft

LiquidStack does it. So does Submer. They’re both dropping servers carrying sensitive data into goop in an effort to save the planet. Now they’re joined by one of the biggest tech companies in the world in their efforts to improve the energy efficiency of data centers, because Microsoft is getting into the liquid-immersion cooling market.

Microsoft is using a liquid it developed in-house that’s engineered to boil at 122 degrees Fahrenheit (50 degrees Celsius, well below the boiling point of water) to act as a heat sink, reducing the temperature inside the servers so they can operate at full power without any risk of overheating.
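
For reference, the fluid’s 122°F boiling point works out to just 50°C, half of water’s 100°C at sea level. A quick conversion sanity check:

```python
def fahrenheit_to_celsius(f: float) -> float:
    """Standard Fahrenheit-to-Celsius conversion: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9

print(fahrenheit_to_celsius(122))  # 50.0, the engineered fluid's boiling point
print(fahrenheit_to_celsius(212))  # 100.0, water's boiling point at sea level
```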

The vapor from the boiling fluid is converted back into a liquid through contact with a cooled condenser in the lid of the tank that stores the servers.

“We are the first cloud provider that is running two-phase immersion cooling in a production environment,” said Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development in Redmond, Washington, in a statement on the company’s internal blog. 

While that claim may be true, liquid cooling is a well-known approach to dealing with moving heat around to keep systems working. Cars use liquid cooling to keep their motors humming as they head out on the highway.

As technology companies confront the physical limits of Moore’s Law, the demand for faster, higher-performance processors means designing new architectures that can handle more power, the company wrote in a blog post. Power flowing through central processing units has increased from 150 watts to more than 300 watts per chip, and the GPUs responsible for much of bitcoin mining, artificial intelligence applications and high-end graphics each consume more than 700 watts per chip.
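
To put those per-chip figures in context, here is a back-of-the-envelope sketch of chip-level power for a dense server. The server and rack composition are hypothetical, and real systems add memory, storage, fans and power-supply losses on top of the chip draw.

```python
# Per-chip figures from the article; rack composition below is hypothetical.
CPU_WATTS_OLD = 150
CPU_WATTS_NEW = 300
GPU_WATTS = 700

def server_watts(cpus: int, gpus: int, cpu_watts: int = CPU_WATTS_NEW) -> int:
    """Chip-level power draw for one server (ignores memory, fans, PSU losses)."""
    return cpus * cpu_watts + gpus * GPU_WATTS

# A hypothetical 8-GPU server with two modern CPUs:
per_server = server_watts(cpus=2, gpus=8)  # 2*300 + 8*700 = 6200 W
rack = 4 * per_server                      # four such servers per rack
print(per_server, rack)  # 6200 24800
```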

It’s worth noting that Microsoft isn’t the first tech company to apply liquid cooling to data centers and the distinction that the company uses of being the first “cloud provider” is doing a lot of work. That’s because bitcoin mining operations have been using the tech for years. Indeed, LiquidStack was spun out from a bitcoin miner to commercialize its liquid immersion cooling tech and bring it to the masses.

“Air cooling is not enough”

More power flowing through the processors means hotter chips, which means the need for better cooling or the chips will malfunction.

“Air cooling is not enough,” said Christian Belady, vice president of Microsoft’s datacenter advanced development group in Redmond, in an interview for the company’s internal blog. “That’s what’s driving us to immersion cooling, where we can directly boil off the surfaces of the chip.”

For Belady, the use of liquid cooling technology brings the density and compression of Moore’s Law up to the datacenter level.

The results, from an energy consumption perspective, are impressive. Microsoft investigated liquid immersion as a cooling solution for high-performance computing applications such as AI, and found that two-phase immersion cooling reduced a server’s power consumption by anywhere from 5% to 15% (every little bit helps).

Meanwhile, companies like Submer claim they reduce energy consumption by 50%, water use by 99%, and take up 85% less space.

For cloud computing companies, the ability to keep these servers up and running even during spikes in demand, when they’d consume even more power, adds flexibility and ensures uptime even when servers are overtaxed, according to Microsoft.

“[We] know that with Teams when you get to 1 o’clock or 2 o’clock, there is a huge spike because people are joining meetings at the same time,” Marcus Fontoura, a vice president on Microsoft’s Azure team, said on the company’s internal blog. “Immersion cooling gives us more flexibility to deal with these burst-y workloads.”

At this point, data centers are a critical component of the internet infrastructure that much of the world relies on for… well… pretty much every tech-enabled service. That reliance however has come at a significant environmental cost.

“Data centers power human advancement. Their role as a core infrastructure has become more apparent than ever and emerging technologies such as AI and IoT will continue to drive computing needs. However, the environmental footprint of the industry is growing at an alarming rate,” Alexander Danielsson, an investment manager at Norrsken VC noted last year when discussing that firm’s investment in Submer.

Solutions under the sea

If submerging servers in experimental liquids offers one potential solution to the problem — then sinking them in the ocean is another way that companies are trying to cool data centers without expending too much power.

Microsoft has already been operating an undersea data center for the past two years. The company trotted out the tech last year as part of a push to aid in the search for a COVID-19 vaccine.

These pre-packed, shipping container-sized data centers can be spun up on demand and run deep under the ocean’s surface for sustainable, high-efficiency and powerful compute operations, the company said.

The liquid cooling project is most similar to Microsoft’s Project Natick, which is exploring the potential of underwater datacenters that are quick to deploy and can operate for years on the seabed, sealed inside submarine-like tubes, without any onsite maintenance by people.

In those data centers, a nitrogen atmosphere replaces the engineered fluid, and the servers are cooled with fans and a heat exchanger that pumps seawater through a sealed tube.

Startups are also staking claims to cool data centers out on the ocean (the seaweed is always greener in somebody else’s lake).

Nautilus Data Technologies, for instance, has raised over $100 million (according to Crunchbase) to develop data centers dotting the surface of Davy Jones’ locker. The company is currently developing a data center project co-located with a sustainable energy project in a tributary near Stockton, Calif.

With its two-phase immersion cooling tech, Microsoft is hoping to bring the benefits of ocean cooling onto the shore. “We brought the sea to the servers rather than put the datacenter under the sea,” Microsoft’s Alissa said in a company statement.

Ioannis Manousakis, a principal software engineer with Azure (left), and Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development (right), walk past a container at a Microsoft datacenter where computer servers in a two-phase immersion cooling tank are processing workloads. Photo by Gene Twedt for Microsoft.

Daily Crunch: KKR invests $500M into Box

Box gets some financial ammunition against an activist investor, Samsung launches the Galaxy SmartTag+ and we look at the history of CryptoPunks. This is your Daily Crunch for April 8, 2021.

The big story: KKR invests $500M into Box

Private equity firm KKR is making an investment into Box that should help the cloud content management company buy back shares from activist investor Starboard Value, which might otherwise have claimed a majority of board seats and forced a sale.

After the investment, Aaron Levie will remain with Box as its CEO, but independent board member Bethany Mayer will become the chair, while KKR’s John Park is joining the board as well.

“The KKR move is probably the most important strategic move Box has made since it IPO’d,” said Alan Pelz-Sharpe of Deep Analysis. “KKR doesn’t just bring a lot of money to the deal, it gives Box the ability to shake off some naysayers and invest in further acquisitions.”

The tech giants

Samsung’s AirTags rival, the Galaxy SmartTag+, arrives to help you find lost items via AR — This is a version of Samsung’s lost-item finder that supports Bluetooth Low Energy and ultra-wideband technology.

Spotify stays quiet about launch of its voice command ‘Hey Spotify’ on mobile — Access to the “Hey Spotify” voice feature is rolling out more broadly, but Spotify isn’t saying anything officially.

Verizon and Honda want to use 5G and edge computing to make driving safer — The two companies are piloting different safety scenarios at the University of Michigan’s Mcity, a test bed for connected and autonomous vehicles.

Startups, funding and venture capital

Norway’s Kolonial rebrands as Oda, bags $265M on a $900M valuation to grow its online grocery delivery business in Europe — Oda’s aim is to provide “a weekly shop” for prices that compete against those of traditional supermarkets.

Tines raises $26M Series B for its no-code security automation platform — Tines co-founders Eoin Hinchy and Thomas Kinsella were both in senior security roles at DocuSign before they left to start their own company in 2018.

Yext co-founder unveils Dynascore, which dynamically synchronizes music and video — This is the first product from Howard Lerman’s new startup Wonder Inventions.

Advice and analysis from Extra Crunch

Four strategies for getting attention from investors — MaC Venture Capital founder Marlon Nichols joined us at TechCrunch Early Stage to discuss his strategies for early-stage investing, and how those lessons can translate into a successful launch for budding entrepreneurs.

How to get into a startup accelerator — Neal Sáles-Griffin, managing director of Techstars Chicago, explains when and how to apply to a startup accelerator.

Understanding how fundraising terms can affect early-stage startups — Fenwick & West partner Dawn Belt breaks down some of the terms that trip up first-time entrepreneurs.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

The Cult of CryptoPunks — Ethereum’s “oldest NFT project” may not actually be the first, but it’s the wildest.

Biden proposes gun control reforms to go after ‘ghost guns’ and close loopholes — President Joe Biden has announced a new set of initiatives by which he hopes to curb the gun violence he described as “an epidemic” and “an international embarrassment.”

Apply to Startup Battlefield at TechCrunch Disrupt 2021 — All you need is a killer pitch, an MVP, nerves of steel and the drive and determination to take on all comers to claim the coveted Disrupt Cup.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.