
Dasha AI is calling so you don’t have to

While you’d be hard pressed to find any startup not brimming with confidence over the disruptive idea they’re chasing, it’s not often you come across a young company as calmly convinced it’s engineering the future as Dasha AI.

The team is building a platform for designing human-like voice interactions to automate business processes. Put simply, it’s using AI to make machine voices a whole lot less robotic.

“What we definitely know is this will definitely happen,” says CEO and co-founder Vladislav Chernyshov. “Sooner or later the conversational AI/voice AI will replace people everywhere where the technology will allow. And it’s better for us to be the first mover than the last in this field.”

“In 2018 in the US alone there were 30 million people doing some kind of repetitive tasks over the phone. We can automate these jobs now or we are going to be able to automate it in two years,” he goes on. “If you multiply it with Europe and the massive call centers in India, Pakistan and the Philippines you will probably have something like close to 120M people worldwide… and they are all subject for disruption, potentially.”

The New York-based startup has been operating in relative stealth up to now. But it’s breaking cover to talk to TechCrunch — announcing a $2M seed round, led by RTP Ventures and RTP Global, an early-stage investor that’s backed the likes of Datadog and RingCentral. RTP’s venture arm, also based in NY, writes on its website that it prefers engineer-founded companies that “solve big problems with technology”. “We like technology, not gimmicks,” the fund warns with added emphasis.

Dasha’s core tech right now includes what Chernyshov describes as “a human-level, voice-first conversation modelling engine”; a hybrid text-to-speech engine which he says enables it to model speech disfluencies (aka, the ums and ahs, pitch changes etc that characterize human chatter); plus “a fast and accurate” real-time voice activity detection algorithm which detects speech in under 100 milliseconds, meaning the AI can turn-take and handle interruptions in the conversation flow. The platform can also detect a caller’s gender — a feature that can be useful for healthcare use-cases, for example.
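
To give a sense of what a sub-100-millisecond voice activity decision involves, here is a minimal, illustrative sketch of a naive energy-based detector that classifies 100-millisecond frames of audio. It is a hypothetical stand-in for the general technique, not Dasha's (unpublished) algorithm.

```python
# Illustrative only: a naive energy-based voice activity detector that makes a
# speech/no-speech decision for every 100 ms frame of 16 kHz audio. This is a
# generic sketch of the technique, not Dasha's actual detector.
import numpy as np

SAMPLE_RATE = 16_000                          # samples per second
FRAME_MS = 100                                # decision window: 100 milliseconds
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000    # samples per frame

def is_speech(frame: np.ndarray, threshold: float = 0.01) -> bool:
    """Return True if the frame's RMS energy exceeds a fixed threshold."""
    rms = np.sqrt(np.mean(frame.astype(np.float64) ** 2))
    return rms > threshold

def vad_stream(audio: np.ndarray):
    """Yield (start_sample, is_speech) for each consecutive 100 ms frame."""
    for start in range(0, len(audio) - FRAME_LEN + 1, FRAME_LEN):
        yield start, is_speech(audio[start:start + FRAME_LEN])

if __name__ == "__main__":
    # One second of silence followed by one second of synthetic "speech" (noise).
    silence = np.zeros(SAMPLE_RATE)
    noise = np.random.default_rng(0).normal(0, 0.1, SAMPLE_RATE)
    for start, speech in vad_stream(np.concatenate([silence, noise])):
        print(f"{start / SAMPLE_RATE:4.1f}s  speech={speech}")
```

Production systems typically layer smarter models (and smoothing across frames) on top of this kind of framing, but the latency budget is the point: a decision has to fall out of each short window fast enough for the AI to take its turn or yield to an interruption.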

Another component Chernyshov flags is “an end-to-end pipeline for semi-supervised learning” — so it can retrain the models in real time “and fix mistakes as they go” — until Dasha hits the claimed “human-level” conversational capability for each business process niche. (To be clear, the AI cannot adapt its speech to an interlocutor in real-time — as human speakers naturally shift their accents closer to bridge any dialect gap — but Chernyshov suggests it’s on the roadmap.)
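
Dasha hasn't detailed that pipeline, but pseudo-labeling is one common semi-supervised recipe and gives a flavor of the idea: the model's most confident predictions on unlabeled calls are folded back into the training set before retraining. The sketch below uses scikit-learn and synthetic data purely for illustration; none of it reflects Dasha's implementation.

```python
# Illustrative only: one round of pseudo-labeling, a common semi-supervised recipe.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A small hand-labeled set (e.g. reviewed call outcomes) plus a larger unlabeled pool.
X_labeled = rng.normal(size=(50, 8))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(500, 8))

model = LogisticRegression().fit(X_labeled, y_labeled)

# Keep only the unlabeled examples the model is very sure about (>= 95% confidence).
proba = model.predict_proba(X_unlabeled)
confident = proba.max(axis=1) >= 0.95
pseudo_labels = proba.argmax(axis=1)[confident]

# Retrain on the union of real and pseudo-labeled data.
X_new = np.vstack([X_labeled, X_unlabeled[confident]])
y_new = np.concatenate([y_labeled, pseudo_labels])
model = LogisticRegression().fit(X_new, y_new)

print(f"added {int(confident.sum())} pseudo-labeled examples")
```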

“For instance, we can start with 70% correct conversations and then gradually improve the model up to say 95% of correct conversations,” he says of the learning element, though he admits there are a lot of variables that can impact error rates — not least the call environment itself. Even cutting-edge AI is going to struggle with a bad line.

The platform also has an open API so customers can plug the conversation AI into their existing systems — be it telephony, Salesforce software or a developer environment, such as Microsoft Visual Studio.

Currently they’re focused on English, though Chernyshov says the architecture is “basically language agnostic” — but does require “a big amount of data”.

The next step will be to open up the dev platform to enterprise customers, beyond the initial 20 beta testers, which include companies in the banking, healthcare and insurance sectors — with a release slated for later this year or Q1 2020.

Test use-cases so far include banks using the conversation engine for brand loyalty management: running customer satisfaction surveys that can turn around negative feedback by fast-tracking a response to a bad rating — providing (human) customer support agents with an automated categorization of the complaint so they can follow up more quickly. “This usually leads to a wow effect,” says Chernyshov.

Ultimately, he believes there will be two or three major AI platforms globally providing businesses with an automated, customizable conversational layer — sweeping away the patchwork of chatbots currently filling in the gap. And of course Dasha intends their ‘Digital Assistant Super Human Alike’ to be one of those few.

“There is clearly no platform [yet],” he says. “Five years from now this will sound very weird that all companies now are trying to build something. Because in five years it will be obvious — why do you need all this stuff? Just take Dasha and build what you want.”

“This reminds me of the situation in the 1980s when it was obvious that the personal computers are here to stay because they give you an unfair competitive advantage,” he continues. “All large enterprise customers all over the world… were building their own operating systems, they were writing software from scratch, constantly reinventing the wheel just in order to be able to create this spreadsheet for their accountants.

“And then Microsoft with MS-DOS came in… and everything else is history.”

That’s not all they’re building, either. Dasha’s seed financing will be put towards launching a consumer-facing product atop its b2b platform to automate the screening of recorded message robocalls. So, basically, they’re building a robot assistant that can talk to — and put off — other machines on humans’ behalf.

Which does kind of suggest the AI-fuelled future will entail an awful lot of robots talking to each other… 🤖🤖🤖

Chernyshov says this b2c call screening app will most likely be free. But then if your core tech looks set to massively accelerate a non-human caller phenomenon that many consumers already see as a terrible plague on their time and mind, then providing free relief — in the form of a counter AI — seems the very least you should do.

Not that Dasha can be accused of causing the robocaller plague, of course. Recorded messages hooked up to call systems have been spamming people with unsolicited calls for far longer than the startup has existed.

Dasha’s PR notes Americans were hit with 26.3BN robocalls in 2018 alone — up “a whopping” 46% on 2017.

Its conversation engine, meanwhile, has only made some 3M calls to date, clocking its first call with a human in January 2017. But the goal from here on in is to scale fast. “We plan to aggressively grow the company and the technology so we can continue to provide the best voice conversational AI to a market which we estimate to exceed $30BN worldwide,” runs a line from its PR.

After the developer platform launch, Chernyshov says the next step will be to open up access to business process owners by letting them automate existing call workflows without needing to be able to code (they’ll just need an analytic grasp of the process, he says).

Later — pegged for 2022 on the current roadmap — will be the launch of “the platform with zero learning curve”, as he puts it. “You will teach Dasha new models just like typing in a natural language and teaching it like you can teach any new team member on your team,” he explains. “Adding a new case will actually look like a word editor — when you’re just describing how you want this AI to work.”

His prediction is that a majority — circa 60% — of all major cases that businesses face — “like dispatching, like probably upsales, cross sales, some kind of support etc, all those cases” — will be able to be automated “just like typing in a natural language”.

So if Dasha’s AI-fuelled vision of voice-based business process automation comes to fruition, then humans getting orders of magnitude more calls from machines looks inevitable — as machine learning supercharges artificial speech by making it sound slicker, act smarter and seem, well, almost human.

But perhaps a savvier generation of voice AIs will also help manage the ‘robocaller’ plague by offering advanced call screening? And as non-human voice tech marches on from dumb recorded messages to chatbot-style AIs running on scripted rails to — as Dasha pitches it — fully responsive, emoting, even emotion-sensitive conversation engines that can slip right under the human radar, maybe the robocaller problem will eat itself? I mean, if you didn’t even realize you were talking to a robot, how are you going to get annoyed about it?

Dasha claims 96.3% of the people who talk to its AI “think it’s human”, though it’s not clear what sample size the claim is based on. (To my ear there are definite ‘tells’ in the current demos on its website. But in a cold-call scenario it’s not hard to imagine the AI passing, if someone’s not paying much attention.)

The alternative scenario, in a future infested with unsolicited machine calls, is that all smartphone OSes add kill switches, such as the one in iOS 13 — which lets people silence calls from unknown numbers.

And/or more humans simply never pick up phone calls unless they know who’s on the end of the line.

So it’s really doubly savvy of Dasha to create an AI capable of managing robot calls — meaning it’s building its own fallback — a piece of software willing to chat to its AI in future, even if actual humans refuse.

Dasha’s robocall screener app, which is slated for release in early 2020, will also be spammer-agnostic — in that it’ll be able to handle and divert human salespeople too, as well as robots. After all, a spammer is a spammer.

“Probably it is the time for somebody to step in and ‘don’t be evil’,” says Chernyshov, echoing Google’s old motto, albeit perhaps not entirely reassuringly given the phrase’s lapsed history — as we talk about the team’s approach to ecosystem development and how machine-to-machine chat might overtake human voice calls.

“At some point in the future we will be talking to various robots much more than we probably talk to each other — because you will have some kind of human-like robots at your house,” he predicts. “Your doctor, gardener, warehouse worker, they all will be robots at some point.”

The logic at work here is that if resistance to an AI-powered Cambrian Explosion of machine speech is futile, it’s better to be at the cutting edge, building the most human-like robots — and making the robots at least sound like they care.

Dasha’s conversational quirks certainly can’t be called a gimmick. Even if the team’s close attention to mimicking the vocal flourishes of human speech — the disfluencies, the ums and ahs, the pitch and tonal changes for emphasis and emotion — might seem so at first airing.

In one of the demos on its website you can hear a clip of a very chipper-sounding male voice, who identifies himself as “John from Acme Dental”, taking an appointment call from a female (human), and smoothly dealing with multiple interruptions and time/date changes as she changes her mind. Before, finally, dealing with a flat cancellation.

A human receptionist might well have got mad that the caller essentially just wasted their time. Not John, though. Oh no. He ends the call as cheerily as he began, signing off with an emphatic: “Thank you! And have a really nice day. Bye!”

If the ultimate goal is Turing Test levels of realism in artificial speech — i.e. a conversation engine so human-like it can pass as human to a human ear — you do have to be able to reproduce, with precision timing, the verbal baggage that’s wrapped around everything humans say to each other.

This tonal layer does essential emotional labor in the business of communication, shading and highlighting words in a way that can adapt or even entirely transform their meaning. It’s an integral part of how we communicate. And thus a common stumbling block for robots.

So if the mission is to power a revolution in artificial speech that humans won’t hate and reject then engineering full spectrum nuance is just as important a piece of work as having an amazing speech recognition engine. A chatbot that can’t do all that is really the gimmick.

Chernyshov claims Dasha’s conversation engine is “at least several times better and more complex than [Google] Dialogflow, [Amazon] Lex, [Microsoft] Luis or [IBM] Watson”, dropping a laundry list of rival speech engines into the conversation.

He argues none are on a par with what Dasha is being designed to do.

The difference is the “voice-first modelling engine”. “All those [rival engines] were built from scratch with a focus on chatbots — on text,” he says, couching modelling voice conversation “on a human level” as much more complex than the more limited chatbot-approach — and hence what makes Dasha special and superior.

“Imagination is the limit. What we are trying to build is an ultimate voice conversation AI platform so you can model any kind of voice interaction between two or more human beings.”

Google did demo its own stuttering voice AI — Duplex — last year, when it also took flak for a public demo in which it appeared not to have told restaurant staff up front they were going to be talking to a robot.

Chernyshov isn’t worried about Duplex, though, saying it’s a product, not a platform.

“Google recently tried to headhunt one of our developers,” he adds, pausing for effect. “But they failed.”

He says Dasha’s engineering staff make up more than half (28) its total headcount (48), and include two doctorates of science; three PhDs; five PhD students; and ten masters of science in computer science.

It has an R&D office in Russia, which Chernyshov says helps make the funding go further.

“More than 16 people, including myself, are ACM ICPC finalists or semi finalists,” he adds — likening the competition to “an Olympic game but for programmers”. A recent hire — chief research scientist, Dr Alexander Dyakonov — is both a doctor of science professor and former Kaggle No.1 GrandMaster in machine learning. So with in-house AI talent like that you can see why Google, uh, came calling…


But why not have Dasha ID itself as a robot by default? On that Chernyshov says the platform is flexible — which means disclosure can be added. But in markets where it isn’t a legal requirement, the door is being left open for ‘John’ to slip cheerily by. Blade Runner, here we come.

The team’s driving conviction is that emphasis on modelling human-like speech will, down the line, allow their AI to deliver universally fluid and natural machine-human speech interactions which in turn open up all sorts of expansive and powerful possibilities for embeddable next-gen voice interfaces. Ones that are much more interesting than the current crop of gadget talkies.

This is where you could raid sci-fi/pop culture for inspiration. Such as Kitt, the dryly witty talking car from the 1980s TV series Knight Rider. Or, to throw in a British TV reference, Holly the self-deprecating yet sardonic human-faced computer in Red Dwarf. (Or indeed Kryten the guilt-ridden android butler.) Chernyshov’s suggestion is to imagine Dasha embedded in a Boston Dynamics robot. But surely no one wants to hear those crawling nightmares scream…

Dasha’s five-year+ roadmap includes the eyebrow-raising ambition to evolve the technology to achieve “a general conversational AI”. “This is a science fiction at this point. It’s a general conversational AI, and only at this point you will be able to pass the whole Turing Test,” he says of that aim.

“Because we have a human level speech recognition, we have human level speech synthesis, we have generative non-rule based behavior, and this is all the parts of this general conversational AI. And I think that we can we can — and scientific society — we can achieve this together in like 2024 or something like that.

“Then the next step, in 2025, this is like autonomous AI — embeddable in any device or a robot. And hopefully by 2025 these devices will be available on the market.”

Of course the team is still a dreaming distance away from that AI wonderland/dystopia (depending on your perspective) — even if it’s date-stamped on the roadmap.

But if a conversational engine ends up in command of the full range of human speech — quirks, quibbles and all — then designing a voice AI may come to be thought of as akin to designing a TV character or cartoon personality. So very far from what we currently associate with the word ‘robotic’. (And wouldn’t it be funny if the term ‘robotic’ came to mean ‘hyper entertaining’ or even ‘especially empathetic’ thanks to advances in AI.)

Let’s not get carried away though.

In the meanwhile, there are ‘uncanny valley’ pitfalls of speech disconnect to navigate if the tone being (artificially) struck hits a false note. (And, on that front, if you didn’t know ‘John from Acme Dental’ was a robot you’d be forgiven for misreading his chipper sign off to a total time waster as pure sarcasm. But an AI can’t appreciate irony. Not yet anyway.)

Nor can robots appreciate the difference between ethical and unethical verbal communication they’re being instructed to carry out. Sales calls can easily cross the line into spam. And what about even more dystopic uses for a conversation engine that’s so slick it can convince the vast majority of people it’s human — like fraud, identity theft, even election interference… the potential misuses could be terrible and scale endlessly.

Although if you straight out ask Dasha whether it’s a robot Chernyshov says it has been programmed to confess to being artificial. So it won’t tell you a barefaced lie.


How will the team prevent problematic uses of such a powerful technology?

“We have an ethics framework and when we will be releasing the platform we will implement a real-time monitoring system that will monitor potential abuse or scams, and also it will ensure people are not being called too often,” he says. “This is very important. That we understand that this kind of technology can be potentially probably dangerous.”

“At the first stage we are not going to release it to all the public. We are going to release it in a closed alpha or beta. And we will be curating the companies that are going in to explore all the possible problems and prevent them from being massive problems,” he adds. “Our machine learning team are developing those algorithms for detecting abuse, spam and other use cases that we would like to prevent.”

There’s also the issue of verbal ‘deepfakes’ to consider. Especially as Chernyshov suggests the platform will, in time, support cloning a voiceprint for use in the conversation — opening the door to making fake calls in someone else’s voice. Which sounds like a dream come true for scammers of all stripes. Or a way to really supercharge your top performing salesperson.

Safe to say, the counter technologies — and thoughtful regulation — are going to be very important.

There’s little doubt that AI will be regulated. In Europe policymakers have tasked themselves with coming up with a framework for ethical AI. And in the coming years policymakers in many countries will be trying to figure out how to put guardrails on a technology class that, in the consumer sphere, has already demonstrated its wrecking-ball potential — with the automated acceleration of spam, misinformation and political disinformation on social media platforms.

“We have to understand that at some point this kind of technologies will be definitely regulated by the state all over the world. And we as a platform we must comply with all of these requirements,” agrees Chernyshov, suggesting machine learning will also be able to identify whether a speaker is human or not — and that an official caller status could be baked into a telephony protocol so people aren’t left in the dark on the ‘bot or not’ question. 

“It should be human-friendly. Don’t be evil, right?”

Asked whether he considers what will happen to the people working in call centers whose jobs will be disrupted by AI, Chernyshov is quick with the stock answer — that new technologies create jobs too, saying that’s been true right throughout human history. Though he concedes there may be a lag — while the old world catches up to the new.

Time and tide wait for no human, even when the change sounds increasingly like we do.

Prodly announces $3.5M seed to automate low-code cloud deployments

Low-code programming is supposed to make things easier on companies, right? Low-code means you can count on trained administrators instead of more expensive software engineers to handle most tasks, but like any issue solved by technology, there are always unintended consequences. While running his former company, Steelbrick, which he sold to Salesforce in 2015 for $360 million, Max Rudman identified a persistent problem with low-code deployments. He decided to fix it with automation and testing, and the idea for his latest venture, Prodly, was born.

The company announced a $3.5 million seed round today, but more important than the money is the customer momentum. In spite of being a very early-stage startup, the company already has 100 customers using the product, a testament to the fact that other people were probably experiencing that same pain point Rudman was feeling, and there is a clear market for his idea.

As Rudman learned with his former company, going live with the data on a platform like Salesforce is just part of the journey. If you are updating configuration and pricing information on a regular basis, that means updating all the tables associated with that information. Sure, it’s been designed to be point and click, but if you have changes across 48 tables, it becomes a very tedious task, indeed.

The idea behind Prodly is to automate much of the configuration, provide a testing environment to be sure all the information is correct and, finally, automate deployment. For now, the company is just concentrating on configuration, but with the funding it plans to expand the product to solve the other problems, as well.

Rudman is careful to point out that his company’s solution is not built strictly for the Salesforce platform. The startup is taking aim at Salesforce admins for its first go-round, but he sees the same problem with other cloud services that make heavy use of trained administrators to make changes.

“The plan is to start with Salesforce, but this problem actually exists on most cloud platforms — ServiceNow, Workday — none of them have the tools we have focused on for admins, and making the admins more productive and building the tooling that they need to efficiently manage a complex application,” Rudman told TechCrunch.

Customers include Nutanix, Johnson & Johnson, Splunk, Tableau and Verizon (which owns this publication). The $3.5 million round was led by Shasta Ventures, with participation from Norwest Venture Partners.

Amazon acquires flash-based cloud storage startup E8 Storage

Amazon has acquired Israeli storage tech startup E8 Storage, as first reported by Reuters, CNBC and Globes and confirmed by TechCrunch. The acquisition will bring the team and technology from E8 in to Amazon’s existing Amazon Web Services center in Tel Aviv, per reports.

E8 Storage’s particular focus was on building storage hardware that employs flash-based memory to deliver faster performance than competing offerings, according to its own claims. How exactly AWS intends to use the company’s talent or assets isn’t yet known, but it clearly lines up with their primary business.

AWS acquisitions this year include TSO Logic, a Vancouver-based startup that optimizes data center workload operating efficiency, and Israel-based CloudEndure, which provides data recovery services in the event of a disaster.

Save with group discounts and bring your team to TechCrunch’s first-ever Enterprise event Sept. 5 in SF

Get ready to dive into the fiercely competitive waters of enterprise software. Join more than 1,000 attendees for TC Sessions Enterprise 2019 on September 5 to navigate this rapidly evolving category with the industry’s brightest minds, biggest names and exciting startups.

Our $249 early-bird ticket price remains in play, which saves you $100. But one is the loneliest number, so why not take advantage of our group discount, buy in bulk and bring your whole team? Save an extra 20% when you buy four or more tickets at once.

We’ve packed this day-long conference with an outstanding lineup of presentations, interviews, panel discussions, demos, breakout sessions and, of course, networking. Check out the agenda, which includes both industry titans and boundary-pushing startups eager to disrupt the status quo.

We’ll add more surprises along the way, but these sessions provide a taste of what to expect — and why you’ll need your posse to absorb as much intel as possible.

Talking Developer Tools
Scott Farquhar (Atlassian)

With tools like Jira, Bitbucket and Confluence, few companies influence how developers work as much as Atlassian. The company’s co-founder and co-CEO Scott Farquhar will join us to talk about growing his company, how it is bringing its tools to enterprises and what the future of software development in and for the enterprise will look like.

Keeping the Enterprise Secure
Martin Casado (Andreessen Horowitz), Wendy Nather (Duo Security), Emily Heath (United Airlines)

Enterprises face a litany of threats from both inside and outside the firewall. Now more than ever, companies — especially startups — have to put security first. From preventing data from leaking to keeping bad actors out of your network, enterprises have it tough. How can you secure the enterprise without slowing growth? We’ll discuss the role of a modern CSO and how to move fast — without breaking things.

Keeping an Enterprise Behemoth on Course
Bill McDermott (SAP)

With over $166 billion in market cap, Germany-based SAP is one of the most valuable tech companies in the world today. Bill McDermott took the helm in 2014, becoming the first American to hold the position. Since then, he has quickly grown the company, in part thanks to a number of $1 billion-plus acquisitions. We’ll talk to him about his approach to these acquisitions, his strategy for growing the company in a quickly changing market and the state of enterprise software in general.

The Quantum Enterprise
Jim Clarke (Intel), Jay Gambetta (IBM) and Krysta Svore (Microsoft)

While we’re still a few years away from having quantum computers that will fulfill the full promise of this technology, many companies are already starting to experiment with what’s available today. We’ll talk about what startups and enterprises should know about quantum computing today to prepare for tomorrow.

TC Sessions Enterprise 2019 takes place on September 5. You can’t be everywhere at once, so bring your team, cover more ground and increase your ROI. Get your group discount tickets and save.

Calling all hardware startups! Apply to Hardware Battlefield @ TC Shenzhen

Got hardware? Well then, listen up, because our search continues for boundary-pushing, early-stage hardware startups to join us in Shenzhen, China for an epic opportunity: launch your startup on a global stage and compete in Hardware Battlefield at TC Shenzhen on November 11-12.

Apply here to compete in TC Hardware Battlefield 2019. Why? It’s your chance to demo your product to the top investors and technologists in the world. Hardware Battlefield, cousin to Startup Battlefield, focuses exclusively on innovative hardware because, let’s face it, it’s the backbone of technology. From enterprise solutions to agtech advancements, medical devices to consumer goods — hardware startups are in the international spotlight.

If you make the cut, you’ll compete against 15 of the world’s most innovative hardware makers for bragging rights, plenty of investor love, media exposure and $25,000 in equity-free cash. Just participating in a Battlefield can change the whole trajectory of your business in the best way possible.

We chose to bring our fifth Hardware Battlefield to Shenzhen because of its outstanding track record of supporting hardware startups. The city achieves this through a combination of accelerators, rapid prototyping and world-class manufacturing. What’s more, TC Hardware Battlefield 2019 takes place as part of the larger TechCrunch Shenzhen that runs November 9-12.

Creativity and innovation know no boundaries, and that’s why we’re opening this competition to any early-stage hardware startup from any country. While we’ve seen amazing hardware in previous Battlefields — like robotic arms, food testing devices, malaria diagnostic tools, smart socks for diabetics and e-motorcycles — we can’t wait to see the next generation of hardware, so bring it on!

Meet the minimum requirements listed below, and we’ll consider your startup:

Here’s how Hardware Battlefield works. TechCrunch editors vet every qualified application and pick 15 startups to compete. Those startups receive six rigorous weeks of free coaching. Forget stage fright. You’ll be prepped and ready to step into the spotlight.

Teams have six minutes to pitch and demo their products, which is immediately followed by an in-depth Q&A with the judges. If you make it to the final round, you’ll repeat the process in front of a new set of judges.

The judges will name one outstanding startup the Hardware Battlefield champion. Hoist the Battlefield Cup, claim those bragging rights and the $25,000. This nerve-wracking thrill-ride takes place in front of a live audience, and we capture the entire event on video and post it to our global audience on TechCrunch.

Hardware Battlefield at TC Shenzhen takes place on November 11-12. Don’t hide your hardware or miss your chance to show us — and the entire tech world — your startup magic. Apply to compete in TC Hardware Battlefield 2019, and join us in Shenzhen!

Is your company interested in sponsoring or exhibiting at Hardware Battlefield at TC Shenzhen? Contact our sponsorship sales team by filling out this form.

Confluera snags $9M Series A to help stop cyberattacks in real time

Just yesterday, we experienced yet another major breach when Capital One announced it had been hacked and years of credit card application information had been stolen. Another day, another hack, but the question is how can companies protect themselves in the face of an onslaught of attacks. Confluera, a Palo Alto startup, wants to help with a new tool that purports to stop these kinds of attacks in real time.

Today the company, which launched last year, announced a $9 million Series A investment led by Lightspeed Venture Partners. It also has the backing of several influential technology execs, including John W. Thompson, who is chairman of Microsoft and former CEO at Symantec; Frank Slootman, CEO at Snowflake and formerly CEO at ServiceNow; and Lane Bess, former CEO of Palo Alto Networks.

What has attracted this interest is the company’s approach to cybersecurity. “Confluera is a real-time cybersecurity company. We are delivering the industry’s first platform to deterministically stop cyberattacks in real time,” company co-founder and CEO Abhijit Ghosh told TechCrunch.

To do that, Ghosh says, his company’s solution watches across the customer’s infrastructure, finds issues and recommends ways to mitigate the attack. “We see the problem that there are too many solutions which have been used. What is required is a platform that has visibility across the infrastructure, and uses security information from multiple sources to make that determination of where the attacker currently is and how to mitigate that,” he explained.

Microsoft chairman John Thompson, who is also an investor, says this is more than just real-time detection or real-time remediation. “It’s not just the audit trail and telling them what to do. It’s more importantly blocking the attack in real time. And that’s the unique nature of this platform, that you’re able to use the insight that comes from the science of the data to really block the attacks in real time.”

It’s early days for Confluera, as it has 19 employees and three customers using the platform so far. For starters, it will be officially launching next week at Black Hat. After that, it has to continue building out the product and prove that it can work as described to stop the types of attacks we see on a regular basis.

Catalyst raises $15M from Accel to transform data-driven customer success

Managing your customers has changed a lot in the past decade. Out are the steak dinners and ballgame tickets to get a sense of a contract’s chance at renewal, and in are churn analysis and a whole bunch of data science to learn whether a customer and their users like or love your product. That customer experience revolution has been critical to the success of SaaS products, but it can remain wickedly hard to centralize all the data needed to drive top performance in a customer success organization.

That’s where Catalyst comes in. The company, founded in New York City in 2017 and launched April last year, wants to centralize all of your disparate data sources on your customers into one easy-to-digest tool to learn how to approach each of them individually to optimize for the best experience.

The company’s early success has attracted more top investors. It announced today that it has raised a $15 million Series A led by Vas Natarajan of Accel, who previously backed enterprise companies like Frame.io, Segment, InVision, and Blameless. The company had previously raised $3 million from NYC enterprise-focused Work-Bench and $2.4 million from True Ventures. Both firms participated in this new round.

Catalyst CEO Edward Chiu told me that Accel was attractive because of the firm’s recent high-profile success in the enterprise space, including IPOs like Slack, PagerDuty, and CrowdStrike.

When we last spoke with Catalyst a year and a half ago, the firm had just raised its first seed round and consisted of just the company’s co-founders — brothers Edward and Kevin Chiu — and a smattering of employees. Now, the company has 19 employees and is targeting 40 employees by the end of the year.


In that time, the product has continued to evolve as it has worked with its customers. One major feature of Catalyst’s product is a “health score” that determines whether a customer is likely to grow or churn in the coming months based on ingested data around usage. CEO Chiu said that “we’ve gotten our health score to be very very accurate” and “we have the ability to take automated action based on that health score.” Today, the company offers “perfect sync” with Salesforce, Mixpanel and Zendesk, among other services, and will continue to make investments in new integrations.
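
Catalyst hasn't published how the score is computed, but the general shape of such a health score (a weighted blend of ingested usage signals, squashed into a 0-100 range) can be sketched roughly as follows. Every field name and weight here is hypothetical, standing in for whatever a team defines in the formula builder.

```python
# Hypothetical sketch of a customer "health score": a weighted blend of usage
# signals normalized to a 0-100 scale. Catalyst's actual formula, weights and
# field names are not public; these are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class CustomerSignals:
    weekly_active_users: float   # fraction of purchased seats active this week (0-1)
    feature_adoption: float      # fraction of key features in use (0-1)
    open_support_tickets: int    # unresolved support tickets
    days_since_last_login: int

def health_score(s: CustomerSignals) -> float:
    score = (
        45 * s.weekly_active_users
        + 35 * s.feature_adoption
        + 20 * max(0.0, 1 - s.days_since_last_login / 30)
    )
    score -= 5 * min(s.open_support_tickets, 4)   # cap the support-ticket penalty
    return round(max(0.0, min(100.0, score)), 1)

print(health_score(CustomerSignals(0.8, 0.6, 1, 2)))    # healthy account: 70.7
print(health_score(CustomerSignals(0.1, 0.2, 5, 21)))   # likely churn risk: 0.0
```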

One high priority for the company has been increasing the speed of integration when a new customer signs up for Catalyst. Chiu said that new customers can be onboarded in minutes, and they can use the platform’s formula builder to define the exact nuances of their health score for their specific customers. “We mold to your use case,” he said.

One lesson the company has learned is that as success teams increasingly become critical to the lifeblood of companies, other parts of the organization and senior executives are working together to improve their customers’ experiences. Chiu told me that the startup often starts with onboarding a customer success team, only to later find that C-suite and other team leads have also joined and are also interacting together on the platform.

An interesting dynamic for the company is that it does its own customer success on its customer success platform. “We are our own best customer,” Chiu said. “We login every day to see the health of our customers… our product managers login to Catalyst every day to read product feedback.”

Since the last time we checked in, the company has added a slew of senior execs, including Cliff Kim as head of product, Danny Han as head of engineering, and Jessica Marucci as head of people, with whom the two Chius had worked together at cloud infrastructure startup DigitalOcean.

Moving forward, Chiu expects to invest further in data analysis and engineering. “One of the most unique things about us is that we are collecting so much unique data: usage patterns, [customer] spend fluctuations, [customer] health scores,” Chiu said. “It would be a hugely missed opportunity not to analyze that data and work on churn.”

Yes, Slack is down. Update: Slack’s back

Update: It’s baaaaaack. Back to work, erm, slackers. Official word, per Slack:

We’re pleased to report that we have the all clear, and all functionality is now restored. Thanks so much for bearing with us in the meantime.

Are your co-workers ignoring you? Welcome to my world! In your case, however, that is probably because Slack is currently down (as of about 11AM EST). According to its status page, some workspaces are experiencing issues with messages sending and loading.

Slack outage notice

The outage follows a number of recent issues for the popular workplace chat service, including a big one that hit in late June. Interestingly, the company just issued a major update to its underlying infrastructure. The refresh didn’t include any cosmetic changes to the service, but instead presented a large-scale push away from jQuery and other older technologies to a newer stack.

Update: Things appear to be on the upswing now. Here’s what’s been going on, per Slack:

Customers may be running into trouble accessing their Slack workspaces completely. We’re actively looking into this and apologize for the interruption to your day.

Some workspaces might be experiencing issues with messages sending, and slow performance across the board. Our team is on the case and we’ll report back once we have an update to share.

Bindu Reddy, co-founder and CEO at RealityEngines, is coming to TechCrunch Sessions: Enterprise

There is surely no shortage of data in the modern enterprise, and data is the fuel for AI. Yet packaging that data in machine learning models remains a huge challenge for large companies. Without that capability, automating processes with AI underpinnings remains elusive for many companies.

RealityEngines wants to change that by creating research-driven cloud services that can reduce some of the inherent complexity of working with AI tools. We are excited to be including Bindu Reddy, co-founder and CEO at RealityEngines, at TechCrunch Sessions: Enterprise, taking place in San Francisco on September 5.

Reddy will be joining investor Jocelyn Goldfein, a managing director at Zetta Venture Partners, and others. They will be discussing with TechCrunch editors the growing role of AI in the enterprise, as companies try to take advantage of the capabilities machines have over humans to process large amounts of information quickly.

She knows from whence she speaks. Before founding RealityEngines, Reddy helped launch AI Verticals at AWS where she served as general manager. She was responsible for bringing to market Amazon Personalize and Amazon Forecast, two tools that help organizations create machine learning models.

Before that, she was CEO and co-founder at yet another AI startup called Post Intelligence, a company that purported to help social media influencers write AI-driven tweets. She later sold that company to Uber. If that isn’t enough for you, she served as head of Products for Google Apps, where she was in charge of Docs, Sheets, Slides, Sites and Blogger.

Early-bird tickets to see Bindu and our lineup of enterprise influencers at TC Sessions: Enterprise are on sale for just $249 when you book here; but hurry, prices go up by $100 soon! Students, grab your discounted tickets for just $75 here.

Adobe’s latest Customer Experience Platform updates take aim at data scientists

Adobe’s Customer Experience Platform provides a place to process all of the data that will eventually drive customer experience applications in the Adobe Experience Cloud. This involves bringing in vast amounts of transactional and interactional data being created across commerce platforms. This process is complex and involves IT, applications developers and data scientists.

Last fall, the company introduced a couple of tools in beta for the last group. Data scientists need familiar kinds of tools to work with the data as it streams into the platform in order to create meaningful models for the application developers to build upon. Today, it made two of those tools generally available — Query Service and Data Science Workspaces — which should go a long way toward helping data scientists feel comfortable working with data on this platform.

Ronell Hugh, group manager at Adobe Experience Platform, says these tools are about helping data scientists move beyond pure data management and getting into deriving more meaningful insights from it. “Data scientists were just bringing data in and trying to manage and organize it, and now we see that with Experience Platform, they are able to do that in a more seamless way, and can spend more time doing what they really want to do, which is deriving insights from the data to be actionable in the organization,” Hugh told TechCrunch.

Part of that is being able to do queries across the data sets they have brought into the platform. The newly released Query Service will enable data scientists and analysts to write queries to understand the data better and get specific answers based on the data faster.

“With Query Service in Adobe Experience Platform, analysts and data scientists can now poll all of their data sets stored in Experience Platform to answer specific cross-channel and cross-platform questions, faster than ever before. This includes behavioral data, as well as point-of-sale (POS), customer relationship management (CRM) and more,” the company wrote in a blog post announcing the new tool.
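
As a rough illustration of the kind of cross-channel question being described, here is a hypothetical sketch that assumes a PostgreSQL-compatible connection to Query Service; the endpoint, credentials, dataset names and columns are all placeholders rather than Adobe's actual schema.

```python
# Illustrative only: joining behavioral, CRM and point-of-sale datasets with SQL.
# Host, credentials, table and column names are placeholders, not Adobe's schema.
import psycopg2

conn = psycopg2.connect(
    host="example.platform-query.adobe.io",   # placeholder endpoint
    port=5432,
    dbname="prod_all",
    user="YOUR_ORG_ID",
    password="YOUR_ACCESS_TOKEN",
    sslmode="require",
)

QUERY = """
SELECT c.loyalty_tier,
       COUNT(DISTINCT w.visitor_id)      AS web_visitors,
       COUNT(DISTINCT p.transaction_id)  AS in_store_purchases
FROM   web_behavior_events AS w
JOIN   crm_profiles        AS c ON c.customer_id = w.customer_id
LEFT JOIN pos_transactions AS p ON p.customer_id = w.customer_id
WHERE  w.event_date >= CURRENT_DATE - INTERVAL '30 days'
GROUP  BY c.loyalty_tier;
"""

with conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for loyalty_tier, web_visitors, purchases in cur.fetchall():
        print(loyalty_tier, web_visitors, purchases)
```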

In addition, the company made the Data Science Workspace generally available. As the name implies, it provides a place for data scientists to work with the data and build models derived from it. The idea behind this tool is to use artificial intelligence to help automate some of the more mundane aspects of the data science job.

“Data scientists can take advantage of this new AI that fuels deeper data discovery by using Adobe Sensei pre-built models, bringing their existing models or creating custom models from scratch in Experience Platform,” the company wrote in the announcement blog post.

Today, it was the data scientists’ turn, but the platform is designed to help IT manage underlying infrastructure, whether in the cloud or on premises, and for application developers to take advantage of the data models and build customer experience applications on top of that. It’s a complex, yet symbiotic relationship, and Adobe is attempting to pull all of it together in a single platform.