What each cloud company could bring to the Pentagon’s $10B JEDI cloud contract

The Pentagon is going to make one cloud vendor exceedingly happy when it chooses the winner of the $10 billion, ten-year enterprise cloud project dubbed the Joint Enterprise Defense Infrastructure (or JEDI for short). The contract is designed to establish the military’s cloud technology strategy for the next decade as it begins to take advantage of trends like the Internet of Things, artificial intelligence and big data.

Ten billion dollars spread out over ten years may not entirely alter a market that’s expected to reach $100 billion a year very soon, but it is substantial enough to give a lesser vendor much greater visibility, and possibly a deeper entrée into other government and private-sector business. The cloud companies certainly recognize that.

Photo: Glowimages/Getty Images

That could explain why they are tripping over themselves to change the contract dynamics, insisting, maybe rightly, that a multi-vendor approach would make more sense.

One look at the Request for Proposal (RFP) itself, which has dozens of documents outlining various criteria from security to training to the specification of the single award itself, shows the sheer complexity of this proposal. At the heart of it is a package of classified and unclassified infrastructure, platform and support services with other components around portability. Each of the main cloud vendors we’ll explore here offers these services. They are not unusual in themselves, but they do each bring a different set of skills and experiences to bear on a project like this.

It’s worth noting that the DOD isn’t just interested in technical chops; it is also looking closely at pricing and has explicitly asked for specific discounts to be applied to each component. The RFP process closes on October 12th and the winner is expected to be chosen next April.

Amazon

What can you say about Amazon? They are by far the dominant cloud infrastructure vendor, and they have the advantage of having scored a large government contract in the past when they built the CIA’s private cloud in 2013, earning $600 million for their troubles. Amazon also offers GovCloud, the product that came out of that project, designed to host sensitive data.

Jeff Bezos, Chairman and founder of Amazon.com. Photo: Drew Angerer/Getty Images

Many of the other vendors worry that gives Amazon a leg up on this deal. While five years is a long time, especially in technology terms, if anything, Amazon has tightened its control of the market since then. Heck, most of the other players were just beginning to establish their cloud businesses in 2013. Amazon, whose cloud launched in 2006, has a maturity the others lack, and they are still innovating, introducing dozens of new features every year. That makes them difficult to compete with, but even the biggest player can be taken down with the right game plan.

Microsoft

If anyone can take Amazon on, it’s Microsoft. While they were somewhat late to the cloud, they have more than made up for it over the last several years. They are growing fast, yet are still far behind Amazon in terms of pure market share. Still, they have a lot to offer the Pentagon, including a combination of Azure, their cloud platform, and Office 365, the popular business suite that includes Word, PowerPoint, Excel and Outlook email. What’s more, they have a fat $900 million contract with the DOD, signed in 2016, for Windows and related hardware.

Microsoft CEO, Satya Nadella Photo: David Paul Morris/Bloomberg via Getty Images

Azure Stack is particularly well suited to a military scenario. It’s a private cloud platform that lets an organization stand up a mini version of the Azure public cloud in its own data center, fully compatible with the public cloud in terms of APIs and tools. The company also has Azure Government Cloud, which is certified for use by many branches of the U.S. government, including at DOD Level 5. And Microsoft brings years of experience working with large enterprises and government clients, meaning it knows how to manage a large contract like this.

Google

When we talk about the cloud, we tend to think of the Big Three. The third member of that group is Google. They have been working hard to establish their enterprise cloud business since 2015 when they brought in Diane Greene to reorganize the cloud unit and give them some enterprise cred. They still have a relatively small share of the market, but they are taking the long view, knowing that there is plenty of market left to conquer.

Head of Google Cloud, Diane Greene Photo: TechCrunch

They have taken an approach of open sourcing a lot of the tools they used in-house, then offering cloud versions of those same services, arguing that nobody knows better how to manage large-scale operations than they do. They have a point, and that could play well in a bid for this contract, but they also stepped away from an artificial intelligence contract with the DOD called Project Maven when a group of their employees objected. It’s not clear whether that would be held against them in the bidding process here.

IBM

IBM has been using its checkbook to build a broad platform of cloud services since 2013, when it bought SoftLayer to give it infrastructure services, while adding software and development tools over the years and emphasizing AI, big data, security, blockchain and other services. All the while, it has been trying to take full advantage of its artificial intelligence engine, Watson.

IBM Chairman, President and CEO Ginni Rometty Photo: Ethan Miller/Getty Images

As one of the primary technology brands of the 20th century, the company has vast experience working with contracts of this scope and with large enterprise clients and governments. It’s not clear if this translates to its more recently developed cloud services, or if it has the cloud maturity of the others, especially Microsoft and Amazon. In that light, it would have its work cut out for it to win a contract like this.

Oracle

Oracle has been complaining since last spring to anyone who will listen, including reportedly the president, that the JEDI RFP is unfairly written to favor Amazon, a charge that DOD firmly denies. They have even filed a formal protest against the process itself.

That could be a smoke screen because the company was late to the cloud, took years to take it seriously as a concept, and barely registers today in terms of market share. What it does bring to the table is broad enterprise experience over decades and one of the most popular enterprise databases in the last 40 years.


Larry Ellison, chairman of Oracle. Photo: David Paul Morris/Bloomberg via Getty Images

It recently began offering a self-repairing database in the cloud that could prove attractive to DOD, but whether its other offerings are enough to help it win this contract remains to be seen.

Dropbox overhauls internal search to improve speed and accuracy

Over the last several months, Dropbox has been undertaking an overhaul of its internal search engine for the first time since 2015. Today, the company announced that the new version, dubbed Nautilus, is ready for the world. The latest search tool takes advantage of a new architecture powered by machine learning to help pinpoint the exact piece of content a user is looking for.

While an individual user may have a much smaller body of documents to search across than the World Wide Web, the paradox of enterprise search says that the fewer documents you have, the harder it is to locate the correct one. Yet Dropbox faces a host of additional challenges when it comes to search. It has more than 500 million users and hundreds of billions of documents, making finding the correct piece of content for a particular user even more difficult. The company had to take all of this into consideration when it was rebuilding its internal search engine.

One way for the search team to attack a problem of this scale was to bring machine learning to bear on it, but it required more than an underlying layer of intelligence to make this work. It also required completely rethinking the entire search tool at an architectural level.

That meant separating the two main pieces of the system: indexing and serving. The indexing piece is, of course, crucial in any search engine, and a system of this size and scope needs a fast indexing engine to keep up with the sheer number of documents and the constant churn of changing content. This is the piece that’s hidden behind the scenes. The serving side of the equation is what end users see when they query the search engine and the system returns a set of results.

Nautilus Architecture Diagram: Dropbox

Dropbox described the indexing system in a blog post announcing the new search engine: “The role of the indexing pipeline is to process file and user activity, extract content and metadata out of it, and create a search index.” They added that the easiest way to index a corpus of documents would be to just keep checking and iterating, but that couldn’t keep up with a system this large and complex, especially one that is focused on a unique set of content for each user (or group of users in the business tool).

They account for that in a couple of ways. They create offline builds every few days, but they also watch as users interact with their content and try to learn from that. As that happens, Dropbox creates what it calls “index mutations,” which they merge with the running indexes from the offline builds to help provide ever more accurate results.

The indexing process has to take into account the textual content, assuming it’s a document, but it also has to look at the underlying metadata as a clue to the content. They use this information to feed a retrieval engine, whose job is to find as many candidate documents as it can, as fast as it can, and worry about accuracy later.

It has to make sure it checks all of the repositories. For instance, Dropbox Paper is a separate repository, so the answer could be found there. It also has to take into account the access-level security, only displaying content that the person querying has the right to access.

Once it has a set of possible results, it uses machine learning to pinpoint the correct content. “The ranking engine is powered by a [machine learning] model that outputs a score for each document based on a variety of signals. Some signals measure the relevance of the document to the query (e.g., BM25), while others measure the relevance of the document to the user at the current moment in time,” they explained in the blog post.
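Dropbox doesn’t publish the model itself, but BM25, the relevance signal named in the post, is a standard formula. As a rough, self-contained illustration in C (not Dropbox’s code; the parameter names and defaults are the textbook ones), a single-term BM25 score looks like this:

    #include <math.h>
    #include <stdio.h>

    /*
     * Toy BM25 term score: how relevant a document is to one query term,
     * based on term frequency, document length and how rare the term is in
     * the corpus. A production ranker would sum this over query terms and
     * blend it with the user- and time-dependent signals the post mentions.
     */
    double bm25_term_score(double tf,           /* term frequency in the doc */
                           double doc_len,      /* length of this document   */
                           double avg_doc_len,  /* average document length   */
                           double n_docs,       /* documents in the corpus   */
                           double n_docs_with_term,
                           double k1, double b) /* usual defaults: 1.2, 0.75 */
    {
        double idf  = log(1.0 + (n_docs - n_docs_with_term + 0.5) /
                                (n_docs_with_term + 0.5));
        double norm = tf + k1 * (1.0 - b + b * doc_len / avg_doc_len);
        return idf * (tf * (k1 + 1.0)) / norm;
    }

    int main(void)
    {
        /* A term appearing 3 times in a slightly short doc, rare in the corpus. */
        printf("%f\n", bm25_term_score(3, 180, 200, 1e6, 1200, 1.2, 0.75));
        return 0;
    }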

After the system has a list of potential candidates, it ranks them and displays the results for the end user in the search interface, but a lot of work goes into that from the moment the user types the query until it displays a set of potential files. This new system is designed to make that process as fast and accurate as possible.

Alphabet’s Chronicle launches an enterprise version of VirusTotal

VirusTotal, the virus- and malware-scanning service owned by Alphabet’s Chronicle, launched an enterprise-grade version of its service today.

VirusTotal Enterprise offers significantly faster and more customizable malware search, as well as a new feature called Private Graph, which allows enterprises to create their own private visualizations of their infrastructure and malware that affects their machines.

The Private Graph makes it easier for enterprises to create an inventory of their internal infrastructure and users to help security teams investigate incidents (and where they started). In the process of building this graph, VirusTotal also looks at commonalities between different nodes to detect changes that could signal potential issues.

The company stresses that these graphs are obviously kept private. That’s worth noting because VirusTotal already offered a similar tool for its premium users — the VirusTotal Graph. All of the information there, however, was public.

As for the faster and more advanced search tools, VirusTotal notes that its service benefits from Alphabet’s massive infrastructure and search expertise. This allows VirusTotal Enterprise to offer a 100x speed increase, as well as better search accuracy. Using the advanced search, the company notes, a security team could now extract the icon from a fake application, for example, and then return all malware samples that share the same icon.

VirusTotal says that it plans to “continue to leverage the power of Google infrastructure” and expand this enterprise service over time.

Google acquired VirusTotal back in 2012. For the longest time, the service didn’t see too many changes, but earlier this year, Google’s parent company Alphabet moved VirusTotal under the Chronicle brand and the development pace seems to have picked up since.

SKREAM: Kernel-Mode Exploits Mitigations For the Rest of Us

Background

When dealing with kernel exploits, the main goal of an attacker is usually to escalate from low privileges to high or SYSTEM-level privileges. This type of attack is commonly referred to as LPE (local privilege escalation), and can be achieved through a myriad of exploitation techniques targeting different classes of vulnerabilities in kernel code, either in NTOSKRNL itself or in third-party drivers.

While Microsoft does a great job of mitigating many of these vulnerabilities, there’s always more room for improvement. As part of this ongoing effort, we started a new open-source project called SKREAM (SentinelOne’s KeRnel Exploits Advanced Mitigations). This project will host multiple independent features meant to detect or mitigate different types and phases of the kernel exploitation lifecycle. Right now it only contains one such mitigation, but stay tuned for more.

In this blog post we will explore the very first mitigation introduced by SKREAM. It addresses a specific exploitation technique, used mostly when weaponizing pool overflow vulnerabilities, and renders it ineffective on Windows 7 and 8 systems.

Introduction to Kernel Pool Overflows

Kernel pool overflow is a well-known class of vulnerabilities, used extensively by LPE exploits over the past few years. The opportunity arises when a kernel-mode driver copies user-supplied data into a pool allocation without first validating the data’s size. This allows a malicious user to supply data that is longer than the allocation made by the kernel driver and thus overwrite pool memory belonging to the next adjacent allocation.

Figure 1 – Schematic illustration of a pool overflow vulnerability
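To make the pattern concrete, here is a deliberately simplified, hypothetical IOCTL handler in C, not taken from any real driver: the allocation is a fixed 64 bytes, but the copy length comes straight from the caller.

    #include <ntddk.h>

    #define BUF_SIZE 64

    /* Hypothetical vulnerable handler: UserLength is attacker-controlled and
     * is never checked against the fixed-size allocation, so any input longer
     * than 64 bytes overwrites the next pool allocation and its headers.
     * (Probing/capturing the user buffer is omitted to keep the bug visible.) */
    NTSTATUS HandleSetBuffer(PVOID UserBuffer, SIZE_T UserLength)
    {
        PVOID pool = ExAllocatePoolWithTag(NonPagedPool, BUF_SIZE, 'mrkS');
        if (pool == NULL)
            return STATUS_INSUFFICIENT_RESOURCES;

        /* BUG: should fail (or truncate) when UserLength > BUF_SIZE. */
        RtlCopyMemory(pool, UserBuffer, UserLength);
        return STATUS_SUCCESS;
    }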

Combined with kernel pool spraying techniques, this lets an attacker make the contents of the adjacent pool allocation predictable and thus completely engineer the overwrite.

TypeIndex Overwrite

There are various methods for actually exploiting a pool overflow vulnerability. In this blog post, we will concentrate our efforts on one specific technique, which works by overwriting the TypeIndex member found in every OBJECT_HEADER structure.

As documented by numerous sources, every object allocated by the Windows object manager has an object header describing it, which immediately follows the respective pool header in memory. This object header contains a member called “TypeIndex”, which serves as an index into the nt!ObTypeIndexTable array.

Figure 2 – The in-memory layout of an OBJECT_HEADER structure. Highlighted in red is the TypeIndex member.

The nt!ObTypeIndexTable is an array of OBJECT_TYPE structures, each describing one of the many object types available on Windows (e.g. process, event, desktop, etc.). The OBJECT_TYPE structure supplies Windows – among other things – with the information necessary for performing various operations on the object, such as which methods should be called when the object is opened, closed, deleted, etc.

Figure 3 – A few of the methods implemented by every OBJECT_TYPE
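Conceptually, the lookup boils down to an unvalidated array index. The following is an illustrative C sketch, not the actual NT source, with the undocumented structures heavily abbreviated:

    /* Illustrative sketch only: OBJECT_HEADER and OBJECT_TYPE are
     * undocumented structures, shown here in abbreviated form. */
    typedef struct _OBJECT_TYPE OBJECT_TYPE, *POBJECT_TYPE;

    extern POBJECT_TYPE ObTypeIndexTable[];     /* nt!ObTypeIndexTable */

    typedef struct _OBJECT_HEADER {
        /* ... */
        unsigned char TypeIndex;                /* index into the table */
        /* ... the object body follows in memory */
    } OBJECT_HEADER, *POBJECT_HEADER;

    /* On Windows 7/8 the byte is used as-is, with no validation or encoding,
     * so whatever ends up in TypeIndex decides which OBJECT_TYPE (and thus
     * which method pointers) the kernel uses for the object. */
    POBJECT_TYPE LookupObjectType(POBJECT_HEADER Header)
    {
        return ObTypeIndexTable[Header->TypeIndex];
    }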

As it turns out, the first two entries of the nt!ObTypeIndexTable array are actually pseudo-entries which don’t point to actual OBJECT_TYPE structures. The first entry holds a NULL pointer and the second one holds the magic constant 0xbad0b0b0:

Figure 4 – The first 2 pseudo-entries in the nt!ObTypeIndexTable array

Since on x64 both of these values are zero-extended into user-mode addresses, they unintentionally provide a way for attackers to achieve code execution with kernel-level privileges. Given a pool overflow vulnerability, an attacker could:

  1. Allocate the page 0xbad0b0b0 resides in and construct there a fake OBJECT_TYPE structure. This fake object type will contain function pointers pointing to the attacker’s elevation-of-privilege code (on Windows 7 x86, we could also allocate the NULL page for that purpose, but on newer Windows versions this is no longer possible).
  2. Spray the pool with objects of a known type and size. This ensures that the contents of the overflown allocation will be known to the attacker.
  3. Free some of these objects to create “holes” in the pool. Ideally for the attacker, the overflowing allocation will land in one of these “holes”.
  4. Trigger the vulnerability in order to overflow into the next object, smashing its OBJECT_HEADER and changing its TypeIndex member to 1.
  5. Trigger some action on the overflown object (e.g. close a handle to it). This will cause the system to fetch the OBJECT_TYPE from 0xbad0b0b0 and call one of its methods (in our example the CloseProcedure). Since this function pointer was provided by the attacker, it actually runs the attacker’s code with kernel-mode permissions, resulting in elevation of privileges.

Figure 5 – Pool overflow visualization, by Nikita Tarakanov (source)

This technique was found and developed by Nikita Tarakanov and was first published by him here.

The Mitigation

Our proposed mitigation aims to thwart this exploitation technique on a per-process basis by pre-allocating the memory region that contains 0xbad0b0b0, before any exploit gets a chance to abuse it. In order for our mitigation to be as effective as possible we can’t just allocate this page, but have to secure it against malicious attempts to unmap, free or modify it in any way.
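Grabbing the region before an exploit can is the straightforward half. A minimal sketch using the documented ZwAllocateVirtualMemory API might look like the following (the function name is ours; error handling and the per-process plumbing, e.g. a process-creation callback, are omitted). Hardening the resulting VAD is the part described next.

    #include <ntddk.h>

    /* Minimal sketch: reserve and commit the page containing 0xbad0b0b0 in
     * the current process so an exploit can no longer map a fake OBJECT_TYPE
     * there. The page is deliberately left inaccessible. */
    static NTSTATUS SkreamReserveBadPage(VOID)
    {
        PVOID  base = (PVOID)0xbad0b000;   /* page base of 0xbad0b0b0 */
        SIZE_T size = PAGE_SIZE;

        return ZwAllocateVirtualMemory(ZwCurrentProcess(), &base, 0, &size,
                                       MEM_RESERVE | MEM_COMMIT, PAGE_NOACCESS);
    }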

A similar project was conducted by Tarjei Mandt back in 2011, when he demonstrated the ability to secure the NULL page against NULL-page dereference exploits on Windows 7 systems. To do so, he wrote a kernel-mode driver which utilized a number of VAD (Virtual Address Descriptor) manipulation techniques to manually build a VAD entry for the NULL page and then insert it into the VAD tree as a “synthetic” leaf entry.

We needed to do pretty much the same thing but wanted to do so in a more “organic” fashion – i.e. to offload as much work as possible to the Windows virtual memory manager and thus avoid potential complications. The initial creation of the VAD entry as well as the insertion of it into the VAD tree could be easily offloaded to the system by means of simply allocating the region containing 0xbad0b0b0 (since we know every virtual memory allocation ultimately translates into VAD entry creation).

Then we went on to retrieve the VAD entry we had just created so that we could further edit it to meet our needs (i.e. secure the memory range). To do so we borrowed the ‘BBFindVad’ utility function from the Blackbone library, which very conveniently implements this functionality for different Windows versions.

Unfortunately, the VAD entry that was retrieved from the VAD tree was of type MMVAD_SHORT, while the VAD we needed (according to Tarjei’s paper) was of type MMVAD_LONG. It also seemed that the flags we were interested in were located in the MMVAD_LONG structure, and not in the smaller MMVAD_SHORT we had.

Figure 6 – The VAD describing the memory region containing 0xbad0b0b0, before being secured

To overcome this discrepancy, we allocated our own MMVAD_LONG structure and started initializing it. As it turns out, every MMVAD_LONG structure has an MMVAD_SHORT substructure embedded inside it, so we could set up this portion by simply copying the MMVAD_SHORT we retrieved earlier.

The next step was to edit the VAD flags so as to make it secure. According to Tarjei’s paper, the modifications we had to make eventually boiled down to this:
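The original post lists the exact assignments at this point; as a rough reconstruction only (field paths assume the Windows 7 x64 MMVAD_LONG layout, using the flag names cited in this post), the edit marks the VAD as unchangeable and declares the page as a single secured range:

    /* Rough sketch, not the original snippet: field paths assume the
     * Windows 7 x64 MMVAD_LONG layout. NoChange plus a secured sub-range
     * covering the whole page makes the memory manager refuse user-mode
     * attempts to free, unmap or reprotect the region. */
    longVad->u.VadFlags.NoChange     = 1;
    longVad->u2.VadFlags2.OneSecured = 1;
    longVad->u3.Secured.StartVa      = (PVOID)0xbad0b000;
    longVad->u3.Secured.EndVa        = (PVOID)0xbad0bfff;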

These changes made our VAD look like this:

Figure 7 – The VAD describing the memory region containing 0xbad0b0b0, after being secured

Finally, we needed to replace the MMVAD_SHORT in the VAD tree with our newly secured MMVAD_LONG. In a nutshell, this involved a three-phase operation, sketched in code below:

  1. Setting the parent node of the short VAD’s children to point to our MMVAD_LONG entry.
  2. Setting the appropriate child node (either left or right) of the short VAD’s parent to point to our MMVAD_LONG entry.
  3. Freeing the short VAD entry, as it is no longer referenced by the VAD tree.

Figure 8 – Replacing the MMVAD_SHORT in the VAD tree with our MMVAD_LONG
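As a compressed C sketch (assuming the Windows 7 x64 MMVAD_SHORT layout as publicly reversed, with locking omitted), the swap looks roughly like this; treat the field names as assumptions rather than a drop-in implementation:

    /* oldVad was located in the VAD tree earlier; newVad is the secured
     * MMVAD_LONG built above. On Windows 7 the low two bits of u1 hold AVL
     * balance data, so real code must mask them out here and preserve them
     * when rewriting the children's parent pointers. */
    PMMVAD parent = (PMMVAD)((ULONG_PTR)oldVad->u1.Parent & ~(ULONG_PTR)3);

    /* 1. The short VAD's children now point at the new entry as parent. */
    if (oldVad->LeftChild != NULL)
        oldVad->LeftChild->u1.Parent = (PMMVAD)newVad;
    if (oldVad->RightChild != NULL)
        oldVad->RightChild->u1.Parent = (PMMVAD)newVad;

    /* 2. The parent's matching child pointer is switched to the new entry. */
    if (parent->LeftChild == (PMMVAD)oldVad)
        parent->LeftChild = (PMMVAD)newVad;
    else
        parent->RightChild = (PMMVAD)newVad;

    /* 3. Nothing references the old short VAD any more, so free it. */
    ExFreePool(oldVad);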

Video of SKREAM mitigating pool overflow vulnerability in KdExploitMe

Windows 8

On Windows 8 there were a few minor changes to the various VAD structures, which forced us to make slight adjustments to our codebase:

  1. There is no longer an MMVAD_LONG structure, only MMVAD_SHORT and MMVAD.
  2. The flags we set on Windows 7 have either changed their location inside the MMVAD structure (NoChange, StartVA, EndVA), or don’t seem to exist at all (OneSecured).

After taking these into consideration, our code for making the necessary VAD modifications on Windows 8 looked something like this:

Windows 8.1 and Beyond the Infinite

The specific exploitation technique described in this blog post only works on Windows 7 and Windows 8 systems. Starting from Windows 8.1, nt!ObTypeIndexTable[1] no longer points to 0xbad0b0b0, and instead holds the value of nt!MmBadPointer, which is guaranteed to cause an access violation when dereferenced. Additionally, in Windows 10 the TypeIndex value stored in the OBJECT_HEADER structure is encoded using a security cookie, so this technique will no longer work on newer Windows systems without some additional work on the attacker’s part.
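For completeness, public reverse-engineering write-ups describe the Windows 10 decode roughly as follows (a sketch based on those write-ups, not on Microsoft documentation), which is why a blind one-byte overwrite no longer selects a predictable table entry:

    /* Sketch of the Windows 10-era lookup as publicly described: the stored
     * byte is XOR-ed with a global cookie and with a byte of the header's
     * own address, so forging a useful TypeIndex requires knowing both. */
    UCHAR index = ObjectHeader->TypeIndex ^
                  ObHeaderCookie ^
                  (UCHAR)((ULONG_PTR)ObjectHeader >> 8);
    POBJECT_TYPE type = ObTypeIndexTable[index];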

 

LinkedIn steps into business intelligence with the launch of Talent Insights

LinkedIn may be best known as a place where people and organizations keep public pages of their professional profiles, using that as a starting point for networking, recruitment and more — a service that today has racked up more than 575 million users, 20 million companies and 15 million active job listings. But now, under the ownership of Microsoft, the company has increasingly started to build a number of other services; today sees the latest of these, the launch of a new feature called Talent Insights.

Talent Insights is significant in part because it is LinkedIn’s first foray into business intelligence, that branch of enterprise analytics aimed at helping execs and other corporate end users make more informed business decisions.

Talent Insights is also notable because it’s part of a trend in which LinkedIn has been launching a number of other services that take it beyond being a straight social network and more toward being an IT productivity tool. These have included a way for users to look at and plan commutes to potential jobs (or other businesses); several integrations with Microsoft software, including resume building in Word and Outlook integrations; and more CRM tools in its Sales Navigator product.

Interestingly, it has been nearly a year between LinkedIn first announcing Talent Insights and actually launching it today. The company says part of the reason for the gap is that it has been tinkering with the product to get it right: it has been testing it with a number of customers — there are now 100 using Talent Insights — with employees in departments like human resources, recruitment and marketing using it.

The product that’s launching today is largely similar to what the company previewed a year ago: there are two parts to it, one focused on people at a company, called “Talent Pool,” and another focused on data about a company, “Company Report.”

 

The first of these will let businesses run searches across the LinkedIn database to discover talent with characteristics similar to those of people a business might already be hiring, and figure out where those people are at the moment (in terms of location and company affiliation), where they are moving, what skills they might have in common, and how to better spot those who might be on the way up based on all of this.

The second set of data tools (Company Report) provides a similar analytics profile but about your organisation and those that you would like to compare against it in areas like relative education levels and schools of the respective workforces; which skills employees have or don’t have; and so on.

Dan Francis, a senior product manager running Talent Insights, said in an interview that for now the majority of the data being used to power Talent Insights comes from LinkedIn itself, although other data sources are also folded in, such as material from the Bureau of Labor Statistics. (And indeed, even some of LinkedIn’s other data troves, for example its recruitment listings and its news/content play, are populated by material that comes from third parties.)

He also added that letting companies feed in their own data to use that in number crunching — either for their own reports or those of other companies — “is on our roadmap,” an indication that LinkedIn sees some mileage in this product.

Adding in more data sources could also help the company appear more impartial and accurate: although LinkedIn is huge and the biggest repository of information of its kind when it comes to professional profiles, it’s not always accurate and in some cases can be completely out of date or intentionally misleading.

(Related: LinkedIn has yet to launch any “verified”-style profiles for people, such as you get on Facebook or Twitter, to prove they are who they say they are, that they work where they claim to work, and that their backgrounds are what they claim them to be. My guess as to why that has not been rolled out is that it would be very hard, if not impossible, to verify everything in a clear way, and so LinkedIn relies on the power of public scrutiny to keep people mostly honest.)

“We’re pretty transparent about this,” said Francis. “We don’t position this as a product as comprehensive, but as a representative sample. Ensuring data quality is good is something that we are careful about. We know sometimes data is not perfect. In some cases it is directional.”

Chef launches deeper integration with Microsoft Azure

DevOps automation service Chef today announced a number of new integrations with Microsoft Azure. The news, which was announced at the Microsoft Ignite conference in Orlando, Florida, focuses on helping enterprises bring their legacy applications to Azure and ranges from the public preview of Chef Automate Managed Service for Azure to the integration of Chef’s InSpec compliance product with Microsoft’s cloud platform.

With Chef Automate as a managed service on Azure, which provides ops teams with a single tool for managing and monitoring their compliance and infrastructure configurations, developers can now easily deploy and manage Chef Automate and the Chef Server from the Azure Portal. It’s a fully managed service and the company promises that businesses can get started with using it in as little as thirty minutes (though I’d take those numbers with a grain of salt).

When those configurations need to change, Chef users on Azure can also now use the Chef Workstation with Azure Cloud Shell, Azure’s command line interface. Workstation is one of Chef’s newest products and focuses on making ad-hoc configuration changes, no matter whether the node is managed by Chef or not.

And to remain in compliance, Chef is also launching an integration of its InSpec security and compliance tools with Azure. InSpec works hand in hand with Microsoft’s new Azure Policy Guest Configuration (who comes up with these names?) and allows users to automatically audit all of their applications on Azure.

“Chef gives companies the tools they need to confidently migrate to Microsoft Azure so users don’t just move their problems when migrating to the cloud, but have an understanding of the state of their assets before the migration occurs,” said Corey Scobie, the senior vice president of products and engineering at Chef, in today’s announcement. “Being able to detect and correct configuration and security issues to ensure success after migrations gives our customers the power to migrate at the right pace for their organization.”


Salesforce wants to end customer service frustration with Customer 360

How many times have you called into a company, answered a bunch of preliminary questions about the purpose of your call, then found that those answers didn’t make their way to the customer service representative (CSR) who ultimately took your call?

This usually is because System A can’t talk to System B and it’s frustrating for the caller, who is already angry about having to repeat the same information again. Salesforce wants to help bring an end to that problem with their new Customer 360 product announced today at Dreamforce, the company’s customer conference taking place this week in San Francisco.

What’s interesting about Customer 360 from a product development perspective is that Salesforce took the technology from the $6.5 billion MuleSoft acquisition and didn’t just turn it into a product; it also used the same technology internally to pull the various pieces together into a more unified view of the Salesforce product family. This should in theory allow the customer service representative talking to you on the phone to get the total picture of your interactions with the company, reducing the need to repeat yourself because the information wasn’t passed on.

Screenshot: Salesforce

The idea here is to bring all of the different products — sales, service, community, commerce and marketing — into a single unified view of the customer, and the company says you can do this without actually writing any code.

Adding a data source to Customer 360 Gif: Salesforce

This allows anyone who interacts with the customer to see the whole picture, a process that has eluded many companies and upset many customers. The customer record in Salesforce CRM is only part of the story, as are the marketing pitches and the e-commerce records. It all comes together to tell a story about that customer, but the data is often trapped in silos, so nobody can see it. That’s what Customer 360 is supposed to solve.

While Bret Taylor, Salesforce’s president and chief product officer, says there were ways to make this happen in Salesforce before, the company has never offered a product that does so in such a direct way. He says the big brands like Apple, Amazon and Google have changed expectations in terms of how we expect to be treated when we connect with a brand. Customer 360 is focused on helping companies meet that expectation.

“Now, when people don’t get that experience, where the company that you’re interacting with doesn’t know who you are, it’s gone from a pleasant experience to an expectation, and that’s what we hear time and time again from our customers. And that’s why we’re so focused on integration, that single view of the customer is the ultimate value proposition of these experiences,” Taylor explained.

This product is aimed at the Salesforce admins who have been responsible in the past for configuring and customizing Salesforce products for the unique needs of each department or the overall organization. They can configure Customer 360 to pull data from Salesforce and other products, too.

Customer 360 is being piloted in North America right now and is expected to become generally available sometime next year.

With MuleSoft in fold, Salesforce gains access to data wherever it lives

When Salesforce bought MuleSoft last spring for the tidy sum of $6.5 billion, it looked like money well spent for the CRM giant. After all, it was providing a bridge between the cloud and the on-prem data center and that was a huge missing link for a company with big ambitions like Salesforce.

When you want to rule the enterprise, you can’t be limited by where data lives and you need to be able to share information across disparate systems. Partly that’s a simple story of enterprise integration, but on another level it’s purely about data. Salesforce introduced its intelligence layer, dubbed Einstein, at Dreamforce in 2016.

With MuleSoft in the fold, it’s got access to data across systems wherever it lives, in the cloud or on-prem. Data is the fuel of artificial intelligence, and Salesforce has been trying desperately to get more data for Einstein since its inception.

It lost out on LinkedIn to Microsoft, which flexed its financial muscles and reeled in the business social network for $26.5 billion a couple of years ago, an undoubtedly rich source of data that the company longed for. Next, it set its sights on Twitter, but after board and stockholder concerns, the company walked away (and Twitter, of course, was ultimately never sold).

Each of these forays was all about the data; frustrated, Salesforce went back to the drawing board. While MuleSoft did not supply the direct cache of data that a social network would have, it did provide a neat way for them to get at backend data sources — the very type of data that matters most to its enterprise customers.

Today, they have extended that notion beyond pure data access to a graph. You can probably see where this is going. The idea of a graph, the connections between say a buyer and the things they tend to buy or a person on a social network and people they tend to interact with, can be extended even to the network/API level, and that is precisely the story that Salesforce is trying to tell this week at the Dreamforce customer conference in San Francisco.

Visualizing connections in a data integration network in MuleSoft. Screenshot: Salesforce/MuleSoft

Maureen Fleming, program vice president for integration and process automation research at IDC, says that it is imperative that organizations view data as a strategic asset and act accordingly. “Very few companies are getting all the value from their data as they should be, as it is locked up in various applications and systems that aren’t designed to talk to each other. Companies who are truly digitally capable will be able to connect these disparate data sources, pull critical business-level data from these connections, and make informed business decisions in a way that delivers competitive advantage,” Fleming explained in a statement.

Configuring data connections on MuleSoft Anypoint Platform. Gif: Salesforce/MuleSoft

It’s hard to overestimate how valuable this type of data is to Salesforce, which has already put MuleSoft to work internally to help build the new Customer 360 product announced today. It can point to its own product set as an example of the very type of data integration to which Fleming is referring.

Bret Taylor, president and chief product officer at Salesforce, says that for his company all of this is ultimately about enhancing the customer experience. You need to be able to stitch together these different computing environments and data silos to make that happen.

“In the short term, [customer] infrastructure is often fragmented. They often have some legacy applications on premise, they’ll have some cloud applications like Salesforce, but some infrastructure on Amazon or Google and Azure, and to actually transform the customer experience, they need to bring all this data together. And so it’s really a unique time for integration technologies like MuleSoft, because it enables you to create a seamless customer experience, no matter where that data lives, and that means you don’t need to wait for infrastructure to be perfect before you can transform your customer experience.”

Instana raises $30M for its application performance monitoring service

Instana, an application performance monitoring (APM) service with a focus on modern containerized services, today announced that it has raised a $30 million Series C funding round. The round was led by Meritech Capital, with participation from existing investor Accel. This brings Instana’s total funding to $57 million.

The company, which counts the likes of Audi, Edmunds.com, Yahoo Japan and Franklin American Mortgage as its customers, considers itself an APM 3.0 player. It argues that its solution is far lighter than those of older players like New Relic and AppDynamics (which sold to Cisco hours before it was supposed to go public). Those solutions, the company says, weren’t built for modern software organizations (though I’m sure they would dispute that).

What really makes Instana stand out is its ability to automatically discover and monitor the ever-changing infrastructure that makes up a modern application, especially when it comes to running containerized microservices. The service automatically catalogs all of the endpoints that make up a service’s infrastructure, and then monitors them. It’s also worth noting that the company says it can offer far more granular metrics than its competitors.

Instana says that its annual sales grew 600 percent over the course of the last year, something that surely attracted this new investment.

“Monitoring containerized microservice applications has become a critical requirement for today’s digital enterprises,” said Meritech Capital’s Alex Kurland. “Instana is packed with industry veterans who understand the APM industry, as well as the paradigm shifts now occurring in agile software development. Meritech is excited to partner with Instana as they continue to disrupt one of the largest and most important markets with their automated APM experience.”

The company plans to use the new funding to fulfill the demand for its service and expand its product line.

Putting the Pentagon’s $10B JEDI cloud contract into perspective

Sometimes $10 billion isn’t as much as you think.

It’s true that when you look at the bottom line number of the $10 billion Joint Enterprise Defense Infrastructure (JEDI) cloud contract, it’s easy to get lost in the sheer size of it, and the fact that it’s a one-vendor deal. The key thing to remember as you think about this deal is that while it’s obviously a really big number, it’s spread out over a long period of time and involves a huge and growing market.

It’s also important to remember that the Pentagon has given itself lots of out clauses in the way the contract is structured. This could be important for those who are worried about one vendor having too much power in a deal like this. “This is a two-year contract, with three option periods: one for three years, another for three years, and a final one for two years,” Heather Babb, Pentagon spokeswoman told TechCrunch.

The contract itself has been set up to define the department’s cloud strategy for the next decade. The thinking is that by establishing a relationship with a single vendor, the DOD will improve security and simplify overall management of the system. It’s also part of a broader view of setting technology policy for the next decade and preparing the military for more modern requirements like the Internet of Things and artificial intelligence applications.

Many vendors have publicly expressed unhappiness at the winner-take-all, single vendor approach, which they believe might be unfairly tilted toward market leader Amazon. Still, the DOD, which has stated that the process is open and fair, seems determined to take this path, much to the chagrin of most vendors, who believe that a multi-vendor strategy makes more sense.

John Dinsdale, chief analyst at Synergy Research Group, a firm that keeps close tabs on the cloud market, says it’s also important to keep the figure in perspective compared to the potential size of the overall market.

“The current worldwide market run rate is equivalent to approximately $60 billion per year and that will double in less than three years. So in very short order you’re going to see a market that is valued at greater than $100 billion per year – and is continuing to grow rapidly,” he said.

Put in those terms, $10 billion over a decade, while surely a significant figure, isn’t quite market altering if the market size numbers are right. “If the contract is truly worth $10 billion that is clearly a very big number. It would presumably be spread over many years which then puts it at only a very small share of the total market,” he said.

He also acknowledges that it would be a big feather in the cap of whichever company wins the business, and it could open the door for other business in the government and private sector. After all, if you can handle the DOD, chances are you can handle just about any business where a high level of security and governance would be required.

Final proposals are now due on October 12th with a projected award date of April 2019, but even at $10 billion, an astronomical sum of money to be sure, the contract ultimately might not shift the market in the way you think.