InVision deepens integrations with Atlassian

InVision today announced a newly expanded integration and strategic partnership with Atlassian that will let users of Confluence, Trello and Jira see and share InVision prototypes from within those programs.

Atlassian’s product suite is built around making product teams faster and more efficient. These tools streamline and organize communication so developers and designers can focus on getting the job done. Meanwhile, InVision’s collaboration platform has embraced the idea that design is now a team sport, letting designers, engineers, executives and other stakeholders be involved in the design process right from the get-go.

Specifically, the expanded integration allows designers to share InVision Studio designs and prototypes right within Jira, Trello and Confluence. InVision Studio was unveiled late last year, offering designers an alternative to Sketch and Adobe’s design tools.

Given the way design and development teams use both companies’ products, it only makes sense to let the two suites communicate with one another.

As part of the partnership, Atlassian has also made a strategic financial investment in InVision, though the companies declined to share the amount.

Here’s what InVision CEO Clark Valberg had to say about it in a prepared statement:

In today’s digital world creating delightful, highly effective customer experiences has become a central business imperative for every company in the world. InVision and Atlassian represent the essential platforms for organizations looking to unleash the potential of their design and development teams. We’re looking forward to all the opportunities to deepen our relationship on both a product and strategic basis, and build toward a more cohesive digital product operating system that enables every organization to build better products, faster.

InVision has been working to position itself as the Salesforce of the design world. Alongside InVision and InVision Studio, the company has also built out an asset and app store, as well as launched a small fund to invest in design startups. In short, InVision wants the design ecosystem to revolve around it.

Considering that InVision has raised more than $200 million, and serves 4 million users, including 80 percent of the Fortune 500, it would seem that the strategy is paying off.

OpenStack’s latest release focuses on bare metal clouds and easier upgrades

The OpenStack Foundation today released the 18th version of its namesake open-source cloud infrastructure software. The project has had its ups and downs, but it remains the de facto standard for running and managing large private clouds.

What’s been interesting to watch over the years is how the project’s releases have mirrored what’s been happening in the wider world of enterprise software. The core features of the platform (compute, storage, networking) are very much in place at this point, allowing the project to look forward and to add new features that enterprises are now requesting.

The new release, dubbed Rocky, puts an emphasis on bare metal clouds, for example. While the majority of enterprises still run their workloads in virtual machines, a lot of them are now looking at containers as an alternative with less overhead and the promise of faster development cycles. Many of these enterprises want to run those containers on bare metal clouds and the project is reacting to this with its “Ironic” project that offers all of the management and automation features necessary to run these kinds of deployments.

“There’s a couple of big features that landed in Ironic in the Rocky release cycle that we think really set it up well for OpenStack bare metal clouds to be the foundation for both running VMs and containers,” OpenStack Foundation VP of marketing and community Lauren Sell told me. 

Ironic itself isn’t new, but in today’s update, Ironic gets user-managed BIOS settings (to configure power management, for example) and RAM disk support for high-performance computing workloads. Magnum, OpenStack’s service for using container engines like Docker Swarm, Apache Mesos and Kubernetes, is now also a Certified Kubernetes installer, meaning that users can be confident that OpenStack and Kubernetes work together just as they would expect.

Another trend that’s becoming quite apparent is that many enterprises that build their own private clouds do so because they have very specific hardware needs. Often, that includes GPUs and FPGAs, for example, for machine learning workloads. To make it easier for these businesses to use OpenStack, the project now includes a lifecycle management service for these kinds of accelerators.

“Specialized hardware is getting a lot of traction right now,” OpenStack CTO Mark Collier noted. “And what’s interesting is that FPGAs have been around for a long time but people are finding out that they are really useful for certain types of AI, because they’re really good at doing the relatively simple math that you need to repeat over and over again millions of times. It’s kind of interesting to see this kind of resurgence of certain types of hardware that maybe was seen as going to be disrupted by cloud and now it’s making a roaring comeback.”

With this update, the OpenStack project is also enabling easier upgrades, something that was long a daunting process for enterprises. Because it was so hard, many chose to simply not update to the latest releases and often stayed a few releases behind. Now, the so-called Fast Forward Upgrade feature allows these users to get on new releases faster, even if they are well behind the project’s own cycle. Oath, which owns TechCrunch, runs a massive OpenStack cloud, for example, and the team recently upgraded a 20,000-core deployment from Juno (the 10th OpenStack release) to Ocata (the 15th release).

The fact that Vexxhost, a Canadian cloud provider, is already offering support for the Rocky release in its new Silicon Valley cloud today is yet another sign that updates are getting a bit easier (and the whole public cloud side of OpenStack, too, often gets overlooked, but continues to grow).

Amazon is quietly doubling down on cryptographic security

The growth of cloud services — with on-demand access to IT services over the Internet — has become one of the biggest evolutions in enterprise technology, but with it, so has the threat of security breaches and other cybercriminal activity. Now it appears that one of the leading companies in cloud services is looking for more ways to double down and fight the latter. Amazon’s AWS has been working on a range of new cryptographic and AI-based tools to help manage the security around cloud-based enterprise services, and it currently has over 130 vacancies for engineers with cryptography skills to help build and run it all.

One significant part of the work has been within a division of AWS called the Automated Reasoning Group, which focuses on identifying security issues and developing new tools to fix them for AWS and its customers. The group’s work is based on automated reasoning, a branch of artificial intelligence that spans computer science and mathematical logic and aims to let computers reason automatically, with little or no human intervention.

In recent times, Amazon has registered two new trademarks, Quivela and SideTrail, both of which have connections to ARG.

Classified in its trademark application as “computer software for cryptographic protocol specification and verification,” Quivela also has a GitHub repository within AWS Labs’ profile that describes it as a “prototype tool for proving the security of cryptographic protocols,” developed by the AWS Automated Reasoning Group. (The ARG also counts sharing code and ideas with the community as part of its mission.)

SideTrail is not on GitHub, but Byron Cook, an academic who is the founder and director of the AWS Automated Reasoning Group, has co-authored a research paper called “SideTrail: Verifying the Time Balancing of Cryptosystems.” However, the link to the paper describing the work is no longer working.

The trademark application for SideTrail includes a long list of potential applications (as trademark applications often do). The general idea is cryptography-based security services. Among them: “Computer software, namely, software for monitoring, identifying, tracking, logging, analyzing, verifying, and profiling the health and security of cryptosystems; network encryption software; computer network security software,” “Providing access to hosted operating systems and computer applications through the Internet,” and a smattering of consulting potential: “Consultation in the field of cloud computing; research and development in the field of security and encryption for cryptosystems; research and development in the field of software; research and development in the field of information technology; computer systems analysis.”

Added to this, in July, a customer of AWS started testing out two other new cryptographic tools developed by the ARG also for improving an organization’s cybersecurity — with the tools originally released the previous August (2017). Tiros and Zelkova, as the two tools are called, are math-based techniques that variously evaluate access control schemes, security configurations and feedback based on different setups to help troubleshoot and prove the effectiveness of security systems across storage (S3) buckets.
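Zelkova reportedly encodes access-control policies into formulas that automated solvers can reason about. As a loose, hypothetical illustration (not Zelkova’s actual approach, which uses SMT solving over the full IAM policy semantics), the kind of question such a tool answers looks like this: given a bucket policy, can anyone on the internet read objects from it?

```python
# Simplified, hypothetical sketch of the kind of access-control check tools
# like Zelkova automate. The real tool proves properties over the full policy
# language; this toy version only pattern-matches one dangerous configuration.

def allows_public_read(policy: dict) -> bool:
    """Return True if any statement grants s3:GetObject to everyone."""
    for stmt in policy.get("Statement", []):
        if (stmt.get("Effect") == "Allow"
                and stmt.get("Principal") == "*"
                and "s3:GetObject" in stmt.get("Action", [])):
            return True
    return False

open_policy = {
    "Statement": [
        {"Effect": "Allow", "Principal": "*", "Action": ["s3:GetObject"]},
    ]
}
locked_policy = {
    "Statement": [
        {"Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
         "Action": ["s3:GetObject"]},
    ]
}

print(allows_public_read(open_policy))    # True
print(allows_public_read(locked_policy))  # False
```

The value of the real, solver-based approach is that it proves the property for every possible request, rather than spot-checking a few patterns as this sketch does.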

Amazon has not trademarked Tiros and Zelkova. A Zelkova trademark, for financial services, appears to be registered as an LLC called “Zelkova Acquisition” in Las Vegas, while there is no active trademark listed for Tiros.

Amazon declined to respond to our questions about the trademarks. A selection of people we contacted associated with the projects did not respond to requests for comment.

More generally, cryptography is a central part of how IT services are secured: Amazon’s Automated Reasoning Group has been around since 2014 working in this area. But Amazon appears to be doing more now both to ramp up the tools it produces and consider how it can be applied across the wider business. A quick look on open vacancies at the company shows that there are currently 132 openings at Amazon for people with cryptography skills.

“Cloud is the new computer, the Earth is the motherboard and data centers are the cards,” Cook said in a lecture he delivered recently describing AWS and the work that the ARG is doing to help AWS grow. “The challenge is that as [AWS] scales it needs to be ever more secure… How does AWS continue to scale quickly and securely?

“AWS has made a big bet on our community,” he continued, as one answer to that question. That’s led to an expansion of the group’s activities in areas like formal verification and beyond, as a way of working with customers and encouraging them to move more data to the cloud.

Amazon has also made some key acquisitions to build up its cloud security footprint, such as Sqrrl and Harvest.ai, two AI-based security startups whose founding teams both happen to have worked at the NSA.

Amazon’s AWS division pulled in over $6 billion in revenues last quarter with $1.6 billion in operating income, a healthy margin that underscores the shift that businesses and other organizations are making to cloud-based services.

Security is an essential component of how that business will continue to grow for Amazon and the wider industry: more trust in the infrastructure, and more proofs that cloud architectures can work better than using and scaling the legacy systems that businesses use today, will bolster the business. And it’s also essential, given the rise of breaches and ever more sophisticated cyber crimes. Gartner estimates that cloud-based security services will be a $6.9 billion market this year, rising to nearly $9 billion by 2020.

Automated tools that help human security specialists do their jobs better are an area that others, like Microsoft, are also eyeing. Last year, Microsoft acquired Israeli security firm Hexadite, which offers remediation services to complement and bolster the work done by enterprise security specialists.

Can Whitelisting Win over Advanced Persistent Threats?

In recent years, security products utilizing application whitelisting have gained popularity as a cost-effective alternative for fighting malware and advanced persistent threats. The basic idea behind whitelisting is to deny execution permission to any application or process that has not been specifically approved. In this post, we will discuss the limitations of relying on whitelisting to combat both common threats and APTs.
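The deny-by-default model can be sketched in a few lines. This is a minimal illustration, not any vendor’s implementation: a binary runs only if its hash appears in an approved set, and everything unknown is refused.

```python
# Minimal sketch of deny-by-default application whitelisting: execution is
# permitted only if the binary's hash appears in an approved set.
import hashlib

def is_execution_allowed(binary_bytes: bytes, approved: set) -> bool:
    digest = hashlib.sha256(binary_bytes).hexdigest()
    return digest in approved  # anything not explicitly approved is denied

# Illustrative "gold image": hash every vetted binary once, up front.
vetted = b"\x7fELF...contents-of-a-trusted-app"
approved = {hashlib.sha256(vetted).hexdigest()}

print(is_execution_allowed(vetted, approved))                  # True
print(is_execution_allowed(b"unknown-dropper.exe", approved))  # False
```

The maintenance burden discussed below follows directly from this model: every patch or software update changes the hashes, so the approved set must be updated constantly.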

First Generation Whitelisting

Whitelisting mechanisms were first introduced by operating system vendors (e.g., Windows’ AppLocker). They provided limited protection, were cumbersome to maintain and required skilled security administrators to be configured effectively. In addition, creating the initial “gold” image could be quite challenging. Furthermore, typical IT environments are complex and constantly changing due to, among other things, software updates and additional downloads, all of which must be maintained in the whitelisting database. This has led to the emergence of third-party whitelisting solutions offering simplified management. Some examples include Bit9 Parity (later to become Carbon Black) and McAfee Application Protection.

Let’s examine some of the common issues with the whitelisting approach, including why it cannot replace patching or protect against a variety of common exploitation vectors.

Exploitation of Whitelisted Applications and Processes

Modern attacks, and advanced persistent threats in particular, frequently take advantage of unpatched software vulnerabilities, which can allow malicious code to hijack the execution flow of whitelisted software. Once this occurs, an exploit might run additional malicious payloads that perform various activities on the targeted machine, such as data exfiltration and data collection. Modern malware does this by piggybacking valid processes, often to circumvent antivirus scanners that look for anomalies.

Trusted Directories and Certificates

The same evasion techniques used by malware creators against AV products are just as effective against whitelisting engines. In most cases, malware can run without interference if it is able to operate with system-level privileges, whether through a privilege escalation vulnerability or other methods of infection. For example, a common rule used in whitelisting allows binaries in C:\Windows to run freely, which means any payload that is dropped in that location (after obtaining system privileges) can run undetected. Another common rule is to allow digitally signed binaries to run at all times, meaning that all an attacker has to do is obtain a valid private key and sign their own software.
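The weakness of a location-based rule is easy to see in a sketch. This hypothetical rule trusts anything under C:\Windows; it says nothing about what the file actually is, so a payload dropped there (after privilege escalation) passes just like a legitimate system binary.

```python
# Hypothetical sketch of a path-based trust rule and why it is weak: the
# check inspects only the file's location, never its contents, so any
# payload dropped into the trusted directory passes.
from pathlib import PureWindowsPath

TRUSTED_DIRS = [PureWindowsPath(r"C:\Windows")]

def path_rule_allows(exe_path: str) -> bool:
    p = PureWindowsPath(exe_path)
    # PureWindowsPath comparisons are case-insensitive, matching Windows.
    return any(parent == d for d in TRUSTED_DIRS for parent in p.parents)

print(path_rule_allows(r"C:\Windows\notepad.exe"))    # True (legitimate)
print(path_rule_allows(r"C:\Windows\payload.exe"))    # True (malicious drop)
print(path_rule_allows(r"C:\Users\bob\payload.exe"))  # False
```

A signature-trust rule fails the same way: it checks who signed the binary, not what the binary does, so a stolen or fraudulently obtained private key defeats it.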

After the Fact Custom Rules

Increasingly, attackers exploit vulnerabilities in the Windows operating system, often by using specially crafted PowerPoint, Word or Excel files. Since these are loaded and executed by legitimate, whitelisted applications, such attacks can circumvent whitelisting solutions. Some time after the threat is detected and analyzed, whitelist vendors may respond by creating customized rules that prevent a related DLL from being loaded by a specific application. These rules have three major drawbacks:

  1. They are only effective once the exploits have been fully analyzed and the damage has been done.
  2. They are not effective against similar exploits that target other Microsoft applications. For example, since OLE is a Windows component used by numerous Office applications, a PowerPoint-targeted rule does not protect other applications.
  3. They can negatively affect usability, since valid features introduced by packager.dll may no longer be available.

Exploiting OS Functionality

The ability of attacks to bypass whitelisting by exploiting the operating system is well documented by researchers. There has been an abundance of Java and Flash vulnerabilities in recent years, and built-in Windows mechanisms like WMI and PowerShell offer further attack vectors that whitelisting cannot address. Other methods attackers can use to bypass whitelisting include:

  • Executing malicious activities as the system (“root”) user
  • Running kernel code
  • Using one of the numerous built-in Windows IT maintenance mechanisms, such as Task Scheduler, Windows Accessibility tools and others

Conclusion

Whitelisting mechanisms at their core are about establishing trust. They do introduce another layer of protection against malicious binaries on a system, but in most cases, this protection only remains effective against threats that target careless or non-admin users who download unauthorized software. Statistics show that APTs commonly utilize software exploits (whether 0-day or not) in legitimate applications to compromise machines. This implies that in the real world, relying solely on whitelisting against watering hole attacks, spearphishing emails, and many other forms of targeted attacks remains ineffective.

Storage provider Cloudian raises $94M

Cloudian, a company that specializes in helping businesses store petabytes of data, today announced that it has raised a $94 million Series E funding round. Investors in this round, which is one of the largest we have seen for a storage vendor, include Digital Alpha, Fidelity Eight Roads, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures and WS Investments. This round includes a $25 million investment from Digital Alpha, which was first announced earlier this year.

With this, the seven-year-old company has now raised a total of $174 million.

As the company told me, it now has about 160 employees and 240 enterprise customers. Cloudian has found its sweet spot in managing the large video archives of entertainment companies, but its customers also include healthcare companies, automobile manufacturers and Formula One teams.

What’s important to stress here is that Cloudian’s focus is on on-premises storage, not cloud storage, though it does offer support for multi-cloud data management, as well. “Data tends to be most effectively used close to where it is created and close to where it’s being used,” Cloudian VP of worldwide sales Jon Ash told me. “That’s because of latency, because of network traffic. You can almost always get better performance, better control over your data if it is being stored close to where it’s being used.” He also noted that it’s often costly and complex to move that data elsewhere, especially when you’re talking about the large amounts of information that Cloudian’s customers need to manage.

Unsurprisingly, companies that have this much data now want to use it for machine learning, too, so Cloudian is starting to get into this space, as well. As Cloudian CEO and co-founder Michael Tso also told me, companies are now aware that the data they pull in, whether from IoT sensors, cameras or medical imaging devices, will only become more valuable over time as they try to train their models. If they decide to throw the data away, they run the risk of having nothing with which to train their models.

Cloudian plans to use the new funding to expand its global sales and marketing efforts and increase its engineering team. “We have to invest in engineering and our core technology, as well,” Tso noted. “We have to innovate in new areas like AI.”

As Ash also stressed, Cloudian’s business is really data management — not just storage. “Data is coming from everywhere and it’s going everywhere,” he said. “The old-school storage platforms that were siloed just don’t work anywhere.”

Box builds a digital hub to help fight content fragmentation

The interconnectedness of the cloud has allowed us to share content widely with people inside and outside the organization and across different applications, but that ability has created a problem of its own, a kind of digital fragmentation. How do you track how that piece of content is being used across a range of cloud services? It’s a problem Box wants to solve with its latest features, Activity Stream and Recommended Apps.

The company made the announcements at BoxWorks, its annual customer conference being held this week in San Francisco.

Activity Stream provides a way to track your content in real time as it moves through the organization, including who touches it and what applications it’s used in, acting as a kind of digital audit trail. One of the big problems with content in the cloud age is understanding what happened to it after you created it. Did it get used in Salesforce or ServiceNow or Slack? You can now follow the path of your content and see how people have shared it, and this could help remove some of the disconnect people feel in the digital world.
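The audit-trail idea can be made concrete with a small sketch. This is a hypothetical data model, not Box’s actual implementation: each touch of a file in any integrated app becomes an event appended to that file’s stream, and the trail is just the events filtered by file.

```python
# Hypothetical sketch of a cross-app content audit trail: every interaction
# with a file, in whichever integrated app, is recorded as an event, and the
# per-file trail is reconstructed by filtering the event log.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ContentEvent:
    file_id: str
    actor: str
    app: str        # e.g. "Salesforce", "ServiceNow", "Slack"
    action: str     # e.g. "viewed", "shared", "edited"
    timestamp: datetime

class ActivityStream:
    def __init__(self):
        self._events = []

    def record(self, event: ContentEvent) -> None:
        self._events.append(event)

    def trail(self, file_id: str):
        return [e for e in self._events if e.file_id == file_id]

stream = ActivityStream()
now = datetime.now(timezone.utc)
stream.record(ContentEvent("doc-1", "alice", "Slack", "shared", now))
stream.record(ContentEvent("doc-1", "bob", "Salesforce", "viewed", now))
print(len(stream.trail("doc-1")))  # 2
```

The hard part Box is solving isn’t the data model but the plumbing: collecting those events from more than a thousand third-party applications in real time.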

As Jeetu Patel, Box’s Chief Product and Chief Strategy Officer points out, an average large company could have more than a thousand apps and there is no good way to connect the dots when it comes to tracking unstructured content and getting a unified view of the digital trail.

“We integrate with over 1400 applications, and as we integrate with those applications, we thought if we could surface those events, it would be insanely useful to our users,” he said. Patel sees this as the beginning of an important construct, the notion of a content hub where you can see the entire transaction record associated with a piece of content.

Activity Stream sidebar inside Box. Photo: Box

But Box didn’t want to stop with just a laundry list of the connections. It also created deep links into the applications being used, so a user can click a link, open the application and view the content in the context of that other application. “It seems like Box was a logical place to get a bird’s eye view of how content is being used,” Patel said, explaining Box’s thinking in creating this feature.

A related feature is a list of Recommended Apps. Based on the Box Graph and what Box knows about the user, the content they use and how it’s interconnected with other cloud apps, Box displays a list of recommended apps right in its interface. This lets users access those applications in the context of their work, so, for instance, they could share the content in Slack right from the document.

Recommended Apps bar inside Box. Photo: Box

For starters, Recommended Apps integrations include G Suite apps, Slack, Salesforce, DocuSign and NetSuite, but Patel says anyone who is integrated with the web app via the API will start showing up in Activity Stream.

While the products were announced today, Box is still working out the kinks in terms of how this will work. They expect these features to be available early next year. If they can pull this off, it will go a long way toward solving the digital fragmentation problem and making Box the content center for organizations.

Google takes a step back from running the Kubernetes development infrastructure

Google today announced that it is providing the Cloud Native Computing Foundation (CNCF) with $9 million in Google Cloud credits to help further its work on the Kubernetes container orchestrator and that it is handing over operational control of the project to the community. These credits will be split over three years and are meant to cover the infrastructure costs of building, testing and distributing the Kubernetes software.

Why does this matter? Until now, Google hosted virtually all the cloud resources that supported the project, like its CI/CD testing infrastructure, container downloads and DNS services on its cloud. But Google is now taking a step back. With the Kubernetes community reaching a state of maturity, Google is transferring all of this to the community.

Between the testing infrastructure and hosting container downloads, the Kubernetes project regularly runs more than 150,000 containers on 5,000 virtual machines, so the cost of running these systems quickly adds up. The Kubernetes container registry has served almost 130 million downloads since the launch of the project.

It’s also worth noting that the CNCF now includes a wide range of members that typically compete with each other. We’re talking Alibaba Cloud, AWS, Microsoft Azure, Google Cloud, IBM Cloud, Oracle, SAP and VMware, for example. All of these profit from the work of the CNCF and the Kubernetes community. Google doesn’t say so outright, but it’s fair to assume that it wanted others to shoulder some of the burdens of running the Kubernetes infrastructure, too. Similarly, some of the members of the community surely didn’t want to be so closely tied to Google’s infrastructure, either.

“By sharing the operational responsibilities for Kubernetes with contributors to the project, we look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project operations,” Google Kubernetes Engine product manager William Deniss writes in today’s announcement. He also notes that a number of Google employees will still be involved in running the Kubernetes infrastructure.

“Google’s significant financial donation to the Kubernetes community will help ensure that the project’s constant pace of innovation and broad adoption continue unabated,” said Dan Kohn, the executive director of the CNCF. “We’re thrilled to see Google Cloud transfer management of the Kubernetes testing and infrastructure projects into contributors’ hands — making the project not just open source, but openly managed, by an open community.”

It’s unclear whether the project plans to take some of the Google-hosted infrastructure and move it to another cloud, but it could definitely do so — and other cloud providers could step up and offer similar credits, too.

Very Good Security makes data ‘unhackable’ with $8.5M from Andreessen

“You can’t hack what isn’t there,” Very Good Security co-founder Mahmoud Abdelkader tells me. His startup assumes the liability of storing sensitive data for other companies, substituting dummy credit card or Social Security numbers for the real ones. Then when the data needs to be moved or operated on, VGS injects the original info without clients having to change their code.

It’s essentially a data bank that allows businesses to stop storing confidential info under their unsecured mattress. Or you could think of it as Amazon Web Services for data instead of servers. Given all the high-profile breaches of late, it’s clear that many companies can’t be trusted to house sensitive data. Andreessen Horowitz is betting that they’d rather leave it to an expert.

That’s why the famous venture firm is leading an $8.5 million Series A for VGS, and its partner Alex Rampell is joining the board. The round also includes NYCA, Vertex Ventures, Slow Ventures and PayPal mafioso Max Levchin. The cash builds on VGS’ $1.4 million seed round, and will pay for its first big marketing initiative and more salespeople.

“Hey! Stop doing this yourself!” Abdelkader asserts. “Put it on VGS and we’ll let you operate on your data as if you possess it with none of the liability.” While no data is ever 100 percent unhackable, putting it in VGS’ meticulously secured vaults means clients don’t have to become security geniuses themselves and instead can focus on what’s unique to their business.

“Privacy is a part of the UN Declaration of Human Rights. We should be able to build innovative applications without sacrificing our privacy and security,” says Abdelkader. He got his start in the industry by reverse-engineering games like StarCraft to build cheats and trainer software. But after studying discrete mathematics, cryptology and number theory, he craved a headier challenge.

Abdelkader co-founded Y Combinator-backed payment system Balanced in 2010, which also raised cash from Andreessen. But out-muscled by Stripe, Balanced shut down in 2015. While transitioning customers over to fellow YC alum Stripe, Balanced received interest from other companies wanting it to store their data so they could be PCI-compliant.

Very Good Security co-founder and CEO Mahmoud Abdelkader

Now Abdelkader and his VP from Balanced, Marshall Jones, have returned with VGS to sell that as a service. It’s targeting startups that handle data like payment card information, Social Security numbers and medical info, though eventually it could invade the larger enterprise market. It can quickly help these clients achieve compliance certifications for PCI, SOC2, EI3PA, HIPAA and other standards.

VGS’ innovation comes in replacing this data with “format preserving aliases” that are privacy safe. “Your app code doesn’t know the difference between this and actually sensitive data,” Abdelkader explains. In 30 minutes of integration, apps can be reworked to route traffic through VGS without ever talking to a salesperson. VGS locks up the real strings and sends the aliases to you instead, then intercepts those aliases and swaps them with the originals when necessary.
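A toy version of the aliasing idea makes the mechanics clear. This is a simplified sketch and emphatically not VGS’s actual implementation (names and the in-memory “vault” are invented for illustration): the alias keeps the shape of the original, so application code handles it unchanged, while only the vault can map it back.

```python
# Toy sketch (not VGS's actual implementation) of format-preserving aliasing:
# a card number is swapped for a random alias with the same shape, and only
# the vault holds the mapping back to the real value.
import random

class ToyVault:
    def __init__(self, seed=None):
        self._rand = random.Random(seed)
        self._alias_to_real = {}

    def tokenize(self, card_number: str) -> str:
        # Replace each digit with a random digit, keeping separators, so
        # downstream code can't tell the alias from real data.
        alias = "".join(self._rand.choice("0123456789") if ch.isdigit() else ch
                        for ch in card_number)
        self._alias_to_real[alias] = card_number
        return alias

    def detokenize(self, alias: str) -> str:
        # Invoked only at the edge, when the real value must be sent out
        # (e.g. to a payment processor).
        return self._alias_to_real[alias]

vault = ToyVault(seed=42)
alias = vault.tokenize("4111-1111-1111-1111")
print(alias)                    # same dddd-dddd-dddd-dddd shape, random digits
print(vault.detokenize(alias))  # 4111-1111-1111-1111
```

The real service does this swap transparently by proxying traffic, which is why clients can integrate without changing how their application code handles the values.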

“We don’t actually see your data that you vault on VGS,” Abdelkader tells me. “It’s basically modeled after prison. The valuables are stored in isolation.” That means a business’ differentiator is their business logic, not the way they store data.

For example, fintech startup LendUp works with VGS to issue virtual credit card numbers that are replaced with fake numbers in LendUp’s databases. That way if it’s hacked, users don’t get their cards stolen. But when those card numbers are sent to a processor to actually make a payment, the real card numbers are subbed in last-minute.

VGS charges per data record and operation, with the first 500 records and 100,000 sensitive API calls free; $20 a month gets clients double that, and then they pay 4 cents per record and 2 cents per operation. VGS provides access to insurance, too, working with a variety of underwriters. It starts with $1 million policies that can be much larger for Fortune 500s and other big companies, which might want $20 million per incident.
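To make the tiers concrete, here is a worked example under one reading of the reported pricing (the doubled free allowance on the $20 plan, then per-record and per-operation overage; exact billing mechanics are VGS’s to define):

```python
# Worked example of the reported pricing: $20/month doubles the free tier to
# 1,000 records and 200,000 sensitive API calls; overage is billed at
# 4 cents per record and 2 cents per operation. (Interpretation of the
# article's figures, not an official rate card.)
def monthly_cost(records, operations,
                 base_fee=20.0,
                 free_records=1_000, free_ops=200_000,
                 per_record=0.04, per_op=0.02):
    extra_records = max(0, records - free_records)
    extra_ops = max(0, operations - free_ops)
    return base_fee + extra_records * per_record + extra_ops * per_op

# 500 extra records and 50,000 extra calls:
# 20 + 500 * 0.04 + 50,000 * 0.02 = 20 + 20 + 1,000
print(round(monthly_cost(records=1_500, operations=250_000), 2))  # 1040.0

# Fully inside the paid plan's free allowance: just the base fee.
print(round(monthly_cost(records=800, operations=150_000), 2))    # 20.0
```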

Obviously, VGS has to be obsessive about its own security. A breach of its vaults could kill its brand. “I don’t sleep. I worry I’ll miss something. Are we a giant honey pot?” Abdelkader wonders. “We’ve invested a significant amount of our money into 24/7 monitoring for intrusions.”

Beyond the threat of hackers, VGS also has to battle with others picking away at part of its stack or trying to compete with the whole, like TokenEx, HP’s Voltage, Thales’ Vormetric, Oracle and more. But it’s do-it-yourself security that’s the status quo and what VGS is really trying to disrupt.

But VGS has a big accruing advantage. Each time it works with a client’s partners (like Experian or TransUnion for a company doing credit checks), it already has a relationship with them when the next client needs to connect with those partners. Abdelkader hopes that, “Effectively, we become a standard of data security and privacy. All the institutions will just say ‘why don’t you use VGS?’”

That standard only works if it’s constantly evolving to win the cat-and-mouse game versus attackers. While a company is worrying about the particular value it adds to the world, these intelligent human adversaries can find a weak link in their security — costing them a fortune and ruining their relationships. “I’m selling trust,” Abdelkader concludes. That peace of mind is often worth the price.

Microsoft will soon automatically transcribe video files in OneDrive for Office 365 subscribers

Microsoft today announced a couple of AI-centric updates for OneDrive and SharePoint users with an Office 365 subscription that bring more of the company’s machine learning smarts to its file storage services.

All of these features will launch at some point later this year. With the company’s Ignite conference in Orlando coming up next month, it’s probably a fair guess that we’ll see some of these updates make a reappearance there.

The highlight of these announcements is that, starting later this year, both services will get automated transcription for video and audio files. Video is a rich medium, but it’s virtually impossible to find specific information in these files without spending a lot of time watching them, and once you’ve found it, you still have to transcribe it. Microsoft says this new service will handle the transcription automatically and then display the transcript as you’re watching the video. The service can handle over 320 file types, so chances are it’ll work with your files, too.

Other updates the company announced today include a new file view for OneDrive and Office.com that will recommend files to you by looking at what you’ve been working on lately across Microsoft 365 and making an educated guess as to what you’ll likely want to work on next. Microsoft will also soon use a similar set of algorithms to prompt you to share files with your colleagues after you’ve just presented them in a meeting with PowerPoint, for example.

Power users will also soon see access statistics for any file in OneDrive and SharePoint.

VMware acquires CloudHealth Technologies for multi-cloud management

VMware is hosting its VMworld customer conference in Las Vegas this week, and to get things going it announced that it’s acquiring Boston-based CloudHealth Technologies. The companies did not disclose the terms of the deal, but Reuters reports the price is $500 million.

CloudHealth provides VMware with a crucial multi-cloud management platform that works across AWS, Microsoft Azure and Google Cloud Platform, giving customers a way to manage cloud cost, usage, security and performance from a single interface.

Although AWS leads the cloud market by a large margin, it is a vast and growing market, and most companies are not putting all their eggs in a single vendor’s basket. Instead, they are looking at best-of-breed options for different cloud services.

This multi-cloud approach is great for customers in that they are not tied down to any single provider, but it does create a management headache as a consequence. CloudHealth gives multi-cloud users a way to manage their environment from a single tool.

CloudHealth multi-cloud management. Photo: CloudHealth Technologies

VMware’s chief operating officer for products and cloud services, Raghu Raghuram, says CloudHealth solves the multi-cloud operational dilemma. “With the addition of CloudHealth Technologies we are delivering a consistent and actionable view into cost and resource management, security and performance for applications across multiple clouds,” Raghuram said in a statement.

CloudHealth began offering support for Google Cloud Platform just last month. CTO Joe Kinsella told TechCrunch why they had decided to expand their platform to include GCP support: “I think a lot of the initiatives that have been driven since Diane Greene joined Google [at the end of 2015] and began really driving towards the enterprise are bearing fruit. And as a result, we’re starting to see a really substantial uptick in interest.”

It also gave the company a complete solution for managing across the three biggest cloud vendors. That last piece very likely made it an even more attractive target for a company like VMware, which was apparently looking for a solution to buy that would help customers manage across hybrid and multi-cloud environments.

The company had been planning future expansion to manage not just the public cloud, but also private clouds and data centers from one place, a strategy that should fit well with what VMware has been trying to do in recent years to help companies manage a hybrid environment, regardless of where their virtual machines live.

With CloudHealth, VMware not only gets a multi-cloud management solution, it also gains CloudHealth’s 3,000 customers, which include Yelp, Dow Jones, Zendesk and Pinterest.

CloudHealth was founded in 2012 and has raised over $87 million. Its most recent round was a $46 million Series D in June 2017 led by Kleiner Perkins. Other lead investors across earlier rounds have included Sapphire Ventures, Scale Venture Partners and .406 Ventures.