Confluera snags $9M Series A to help stop cyberattacks in real time

Just yesterday, we experienced yet another major breach when Capital One announced it had been hacked and years of credit card application information had been stolen. Another day, another hack, but the question remains: how can companies protect themselves in the face of an onslaught of attacks? Confluera, a Palo Alto startup, wants to help with a new tool that purports to stop these kinds of attacks in real time.

Today the company, which launched last year, announced a $9 million Series A investment led by Lightspeed Venture Partners. It also has the backing of several influential technology execs, including John W. Thompson, who is chairman of Microsoft and former CEO at Symantec; Frank Slootman, CEO at Snowflake and formerly CEO at ServiceNow; and Lane Bess, former CEO of Palo Alto Networks.

What has attracted this interest is the company’s approach to cybersecurity. “Confluera is a real-time cybersecurity company. We are delivering the industry’s first platform to deterministically stop cyberattacks in real time,” company co-founder and CEO Abhijit Ghosh told TechCrunch.

To do that, Ghosh says, his company’s solution watches across the customer’s infrastructure, finds issues and recommends ways to mitigate the attack. “We see the problem that there are too many solutions which have been used. What is required is a platform that has visibility across the infrastructure, and uses security information from multiple sources to make that determination of where the attacker currently is and how to mitigate that,” he explained.

Microsoft chairman John Thompson, who is also an investor, says this is more than just real-time detection or real-time remediation. “It’s not just the audit trail and telling them what to do. It’s more importantly blocking the attack in real time. And that’s the unique nature of this platform, that you’re able to use the insight that comes from the science of the data to really block the attacks in real time.”

It’s early days for Confluera, as it has 19 employees and three customers using the platform so far. For starters, it will be officially launching next week at Black Hat. After that, it has to continue building out the product and prove that it can work as described to stop the types of attacks we see on a regular basis.

Catalyst raises $15M from Accel to transform data-driven customer success

Managing your customers has changed a lot in the past decade. Out are the steak dinners and ballgame tickets to get a sense of a contract’s chance at renewal, and in are churn analysis and a whole bunch of data science to learn whether a customer and their users like or love your product. That customer experience revolution has been critical to the success of SaaS products, but it can remain wickedly hard to centralize all the data needed to drive top performance in a customer success organization.

That’s where Catalyst comes in. The company, founded in New York City in 2017 and launched April last year, wants to centralize all of your disparate data sources on your customers into one easy-to-digest tool to learn how to approach each of them individually to optimize for the best experience.

The company’s early success has attracted more top investors. It announced today that it has raised a $15 million Series A led by Vas Natarajan of Accel, who previously backed enterprise companies like Frame.io, Segment, InVision, and Blameless. The company had previously raised $3 million from NYC enterprise-focused Work-Bench and $2.4 million from True Ventures. Both firms participated in this new round.

Catalyst CEO Edward Chiu told me that Accel was attractive because of the firm’s recent high-profile success in the enterprise space, including IPOs like Slack, PagerDuty, and CrowdStrike.

When we last spoke with Catalyst a year and a half ago, the firm had just raised its first seed round, and the team consisted of just the company’s co-founders — brothers Edward and Kevin Chiu — and a smattering of employees. Now, the company has 19 employees and is targeting 40 employees by the end of the year.


In that time, the product has continued to evolve as it has worked with its customers. One major feature of Catalyst’s product is a “health score” that determines whether a customer is likely to grow or churn in the coming months based on ingested data around usage. CEO Chiu said that “we’ve gotten our health score to be very very accurate” and “we have the ability to take automated action based on that health score.” Today, the company offers “perfect sync” with Salesforce, Mixpanel and Zendesk, among other services, and will continue to invest in new integrations.

One high priority for the company has been increasing the speed of integration when a new customer signs up for Catalyst. Chiu said that new customers can be onboarded in minutes, and they can use the platform’s formula builder to define the exact nuances of their health score for their specific customers. “We mold to your use case,” he said.
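Catalyst hasn’t published how its health score or formula builder work internally, so the sketch below is purely illustrative: a hypothetical weighted blend of normalized usage signals, plus a churn threshold that an automated action could key off. All signal names, weights and thresholds here are invented.

```python
# Hypothetical health-score sketch; Catalyst's real signals, weights and
# formula builder are not public. Each signal is assumed normalized to 0..1.
WEIGHTS = {
    "logins_per_week": 0.4,   # product usage
    "support_tickets": -0.3,  # friction: more tickets drags the score down
    "seats_filled_pct": 0.3,  # adoption within the account
}

def health_score(signals: dict) -> float:
    """Combine weighted signals into a 0-100 score."""
    raw = sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)
    # Raw range is -0.3..0.7 given these weights; shift and scale into 0..100.
    return round((raw + 0.3) * 100, 1)

def likely_to_churn(signals: dict, threshold: float = 40.0) -> bool:
    """Hook for 'automated action': flag accounts scoring below a threshold."""
    return health_score(signals) < threshold

healthy = {"logins_per_week": 0.9, "support_tickets": 0.1, "seats_filled_pct": 0.8}
at_risk = {"logins_per_week": 0.2, "support_tickets": 0.9, "seats_filled_pct": 0.1}
print(health_score(healthy), likely_to_churn(healthy))   # 87.0 False
print(health_score(at_risk), likely_to_churn(at_risk))   # 14.0 True
```

A formula builder of the kind Chiu describes would let each customer swap in their own signals and weights rather than hard-coding them as above.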

One lesson the company has learned is that as success teams increasingly become critical to the lifeblood of companies, other parts of the organization and senior executives are working together to improve their customers’ experiences. Chiu told me that the startup often starts with onboarding a customer success team, only to later find that the C-suite and other team leads have also joined and are interacting together on the platform.

An interesting dynamic for the company is that it does its own customer success on its customer success platform. “We are our own best customer,” Chiu said. “We log in every day to see the health of our customers… our product managers log in to Catalyst every day to read product feedback.”

Since the last time we checked in, the company has added a slew of senior execs, including Cliff Kim as head of product, Danny Han as head of engineering, and Jessica Marucci as head of people, with whom the two Chius had worked at cloud infrastructure startup DigitalOcean.

Moving forward, Chiu expects to invest further in data analysis and engineering. “One of the most unique things about us is that we are collecting so much unique data: usage patterns, [customer] spend fluctuations, [customer] health scores,” Chiu said. “It would be a hugely missed opportunity not to analyze that data and work on churn.”

Capital One Data Theft Impacts 106M People

Federal prosecutors this week charged a Seattle woman with stealing data from more than 100 million credit applications made with Capital One Financial Corp. Incredibly, much of this breach played out publicly over several months on social media and other open online platforms. What follows is a closer look at the accused, and what this incident may mean for consumers and businesses.

Paige “erratic” Thompson, in an undated photo posted to her Slack channel.

On July 29, FBI agents arrested Paige A. Thompson on suspicion of downloading nearly 30 GB of Capital One credit application data from a rented cloud data server. Capital One said the incident affected approximately 100 million people in the United States and six million in Canada.

That data included approximately 140,000 Social Security numbers and approximately 80,000 bank account numbers on U.S. consumers, and roughly 1 million Social Insurance Numbers (SINs) for Canadian credit card customers.

“Importantly, no credit card account numbers or log-in credentials were compromised and over 99 percent of Social Security numbers were not compromised,” Capital One said in a statement posted to its site.

“The largest category of information accessed was information on consumers and small businesses as of the time they applied for one of our credit card products from 2005 through early 2019,” the statement continues. “This information included personal information Capital One routinely collects at the time it receives credit card applications, including names, addresses, zip codes/postal codes, phone numbers, email addresses, dates of birth, and self-reported income.”

The FBI says Capital One learned about the theft from a tip sent via email on July 17, which alerted the company that some of its leaked data was being stored out in the open on the software development platform GitHub. That GitHub account belonged to a user named “Netcrave,” and included the resume and name of one Paige A. Thompson.

The tip that alerted Capital One to its data breach.

The complaint doesn’t explicitly name the cloud hosting provider from which the Capital One credit data was taken, but it does say the accused’s resume states that she worked as a systems engineer at the provider between 2015 and 2016. That resume, available on GitLab here, reveals Thompson’s most recent employer was Amazon Inc.

Further investigation revealed that Thompson used the nickname “erratic” on Twitter, where she spoke openly over several months about finding huge stores of data intended to be secured on various Amazon instances.

The Twitter user “erratic” posting about tools and processes used to access various Amazon cloud instances.

According to the FBI, Thompson also used a public Meetup group under the same alias, where she invited others to join a Slack channel named “Netcrave Communications.”

KrebsOnSecurity was able to join this open Slack channel Monday evening and review many months of postings apparently made by Erratic about her personal life, interests and online explorations. One of the more interesting posts by Erratic on the Slack channel is a June 27 comment listing various databases she found by hacking into improperly secured Amazon cloud instances.

That posting suggests Erratic may also have located tens of gigabytes of data belonging to other major corporations:

According to Erratic’s posts on Slack, the two items in the list above beginning with “ISRM-WAF” belong to Capital One.

Erratic also posted frequently to Slack about her struggles with gender identity, lack of employment, and persistent suicidal thoughts. In several conversations, Erratic makes references to running a botnet of sorts, although it is unclear how serious those claims were. Specifically, Erratic mentions one botnet involved in cryptojacking, which uses snippets of code installed on Web sites — often surreptitiously — designed to mine cryptocurrencies.

None of Erratic’s postings suggest Thompson sought to profit from selling the data taken from various Amazon cloud instances she was able to access. But it seems likely that at least some of that data could have been obtained by others who may have followed her activities on different social media platforms.

Ray Watson, a cybersecurity researcher at cloud security firm Masergy, said the Capital One incident contains the hallmarks of many other modern data breaches.

“The attacker was a former employee of the web hosting company involved, which is what is often referred to as insider threats,” Watson said. “She allegedly used web application firewall credentials to obtain privilege escalation. Also the use of Tor and an offshore VPN for obfuscation are commonly seen in similar data breaches.”

“The good news, however, is that Capital One Incidence Response was able to move quickly once they were informed of a possible breach via their Responsible Disclosure program, which is something a lot of other companies struggle with,” he continued.

In Capital One’s statement about the breach, company chairman and CEO Richard D. Fairbank said the financial institution fixed the configuration vulnerability that led to the data theft and promptly began working with federal law enforcement.

“Based on our analysis to date, we believe it is unlikely that the information was used for fraud or disseminated by this individual,” Fairbank said. “While I am grateful that the perpetrator has been caught, I am deeply sorry for what has happened. I sincerely apologize for the understandable worry this incident must be causing those affected and I am committed to making it right.”

Capital One says it will notify affected individuals via a variety of channels, and make free credit monitoring and identity protection available to everyone affected.

Bloomberg reports that in court on Monday, Thompson broke down and laid her head on the defense table during the hearing. She is charged with a single count of computer fraud and faces a maximum penalty of five years in prison and a $250,000 fine. Thompson will be held in custody until her bail hearing, which is set for August 1.

A copy of the complaint against Thompson is available here.

Update, 3:38 p.m. ET: I’ve reached out to several companies that appear to be listed in the last screenshot above. Infoblox [an advertiser on this site] responded with the following statement:

“Infoblox is aware of the pending investigation of the Capital One hacking attack, and that Infoblox is among the companies referenced in the suspected hacker’s alleged online communications. Infoblox is continuing to investigate the matter, but at this time there is no indication that Infoblox was in any way involved with the reported Capital One breach. Additionally, there is no indication of an intrusion or data breach involving Infoblox causing any customer data to be exposed.”

Can Tricky TxHollower Malware Evade Your AV?

TxHollower is a loader-type malware that has been found to deliver a variety of payloads, including AZORult, FormBook, GandCrab ransomware, LokiBot, NetWire, njRat, Pony, Remcos backdoor and SmokeLoader. Infections attributable to TxHollower have been occurring since early 2018 and have been rising rapidly, thanks in part to TxHollower’s ability to evade some vendors’ security solutions. In this post, we take a look at TxHollower and give it a spin on one of our endpoints.


What is TxHollower?

TxHollower leverages Windows’ deprecated Transactional NTFS APIs (TxF) to perform Process Doppelgänging and Process Hollowing, two related threat actor techniques that make it possible to inject malicious code into memory by replacing that of a legitimate process.

Process Doppelgänging also allows TxHollower to avoid detection by some security solutions that over-rely on monitoring particular system calls known to be abused by malware, such as SetThreadContext and NtUnmapViewOfSection. Instead, TxHollower makes use of lesser-known native calls like NtCreateProcessEx and NtCreateThreadEx to perform tricks like stack pivoting to control the flow of program execution.
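As a rough illustration of why the technique slips past naive monitoring, the sketch below contrasts the API-call sequences involved. The call names follow public write-ups of process hollowing and Process Doppelgänging, but both sequences are simplified and the “monitor” here is a toy, not a real security product.

```python
# Illustrative sketch (not an implementation): the API-call sequences a
# behavioral monitor might compare. Classic process hollowing leans on calls
# many products already watch; doppelganging swaps in the TxF family instead.
CLASSIC_HOLLOWING = [
    "CreateProcess (suspended)",
    "NtUnmapViewOfSection",   # widely monitored
    "WriteProcessMemory",
    "SetThreadContext",       # widely monitored
    "ResumeThread",
]

PROCESS_DOPPELGANGING = [
    "CreateTransaction",      # open an NTFS transaction (TxF)
    "CreateFileTransacted",   # overwrite a clean file inside the transaction
    "NtCreateSection",        # map an image section from the transacted file
    "RollbackTransaction",    # discard the file changes: nothing hits disk
    "NtCreateProcessEx",      # spawn a process from the in-memory section
    "NtCreateThreadEx",
]

# A toy monitor that only watches the two classic hollowing calls.
MONITORED = {"NtUnmapViewOfSection", "SetThreadContext"}

def trips_naive_monitor(call_sequence) -> bool:
    return any(call.split()[0] in MONITORED for call in call_sequence)

print(trips_naive_monitor(CLASSIC_HOLLOWING))      # True
print(trips_naive_monitor(PROCESS_DOPPELGANGING))  # False
```

The point is not the specific call names but the pattern: a detector keyed to a fixed list of “known bad” syscalls is blind to an equivalent technique built from different primitives.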

image of TxHollower in x32dbg

As a loader rather than a dropper, TxHollower carries encrypted versions of second-stage malware within its own executable, removing the need for calling out to a C2 server to obtain the payload. Loading in this way also serves to avoid another possible failure point for a malware infection. Once C2 IPs are seen in the wild, vendors will quickly update their definitions to alert on those known malicious IPs. TxHollower cuts off this avenue of detection by carrying the payload, which may be ransomware like GandCrab or a remote access trojan like Remcos, within the first-stage executable file.
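The loader pattern described above can be sketched in a few harmless lines. XOR stands in for real encryption and a string of code stands in for a payload (which is never executed); the point is only that the second stage ships inside the first-stage binary and is decrypted in memory, so no C2 callout ever happens.

```python
# Toy sketch of the loader pattern: the second stage ships encrypted inside
# the first-stage binary, so there is no C2 callout and no C2 IP to blocklist.
# XOR stands in for real encryption; nothing here is executed as a payload.
KEY = b"not-a-real-key"

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Build time": encrypt the payload and embed it as an opaque byte blob.
SECOND_STAGE = b"print('second stage running in memory')"
EMBEDDED_BLOB = xor(SECOND_STAGE, KEY)

def load() -> bytes:
    # "Run time": decrypt entirely in memory; no network I/O, nothing on disk.
    return xor(EMBEDDED_BLOB, KEY)

print(load().decode())  # prints the recovered second-stage code
```

This is also why the embedded blob frustrates signature matching against the bare payload: until decryption, the second stage never appears in recognizable form.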

Why is TxHollower On the Rise?

Researchers agree that samples of TxHollower found in the wild are unlikely to be from the same origin, in part because of the wide variety of payloads being seen, which suggests different kinds of threat actor objectives and campaigns. Rather, it looks as if the loader malware is being distributed in criminal circles to multiple actors, possibly sold as part of an exploit kit or framework. It’s also been noted that there are a number of variants of TxHollower itself with slightly different configurations and capabilities.

As TxHollower appears to be in use by various actors, that likely explains why no single infection vector has been identified. There has been some suggestion that malicious Word documents exploiting CVE-2017-11882 in Microsoft Office’s Equation Editor may carry the loader malware. As is common with attacks that rely on poisoned documents, these may be associated with fake invoices and receipts.

Can TxHollower Be Detected?

As we’ve noted, TxHollower has been crafted to avoid some security solutions by using lesser known API calls and by carrying an encrypted payload to avoid calling out to a C2 server. Despite these tricks, when we ran a sample on a SentinelOne-protected endpoint, the agent immediately alerted on the threat, both pre-execution and on-execution.

SHA1: f4f56f7830fc71658ddc90f8b39de6fac656682d

image of TxHollower in Sentinel One console

For the purpose of the demo, we set up the agent so that it would alert only in order to observe the malware in action. In a real-life deployment, the Protect policy would alert and block the malware from executing. After testing the malware and observing the detection in the management console, the admin was able to simply remediate the threat remotely to clean up the infected endpoint and return it to a healthy state.

If you’d like to see the full demo in action, check out the video below.

Conclusion

TxHollower is a threat to enterprises that are not sufficiently protected by a security solution that can detect in-memory, fileless malware using process injection techniques. The malware appears to be used as a loader for a variety of threats from ransomware to backdoor trojans and banking malware such as Osiris and Kronos. If your business is not yet protected by SentinelOne’s active EDR solution, contact us for a free demo and see how it can keep your organization safe.



Yes, Slack is down. Update: Slack’s back

Update: It’s baaaaaack. Back to work, erm, slackers. Official word, per Slack:

We’re pleased to report that we have the all clear, and all functionality is now restored. Thanks so much for bearing with us in the meantime.

Are your co-workers ignoring you? Welcome to my world! In your case, however, that is probably because Slack is currently down (as of about 11AM EST). According to its status page, some workspaces are experiencing issues with messages sending and loading.

Slack outage notice


The outage follows a number of recent issues for the popular workplace chat service, including a big one that hit in late June. Interestingly, the company just issued a major update to its underlying infrastructure. The refresh didn’t include any cosmetic changes to the service, but instead represented a large-scale push away from jQuery and other older technologies to a newer stack.

Update: Things appear to be on the upswing now. Here’s what’s been going on, per Slack:

Customers may be running into trouble accessing their Slack workspaces completely. We’re actively looking into this and apologize for the interruption to your day.

Some workspaces might be experiencing issues with messages sending, and slow performance across the board. Our team is on the case and we’ll report back once we have an update to share.

Bindu Reddy, co-founder and CEO at RealityEngines, is coming to TechCrunch Sessions: Enterprise

There is surely no shortage of data in the modern enterprise, and data is the fuel for AI. Yet packaging that data in machine learning models remains a huge challenge for large companies. Without that capability, automating processes with AI underpinnings remains elusive for many companies.

RealityEngines wants to change that by creating research-driven cloud services that can reduce some of the inherent complexity of working with AI tools. We are excited to be including Bindu Reddy, co-founder and CEO at RealityEngines, at TechCrunch Sessions: Enterprise, taking place in San Francisco on September 5.

Reddy will be joining investor Jocelyn Goldfein, a managing director at Zetta Venture Partners, and others. They will be discussing with TechCrunch editors the growing role of AI in the enterprise, as companies try to take advantage of the capabilities machines have over humans to process large amounts of information quickly.

She knows from whence she speaks. Before founding RealityEngines, Reddy helped launch AI Verticals at AWS, where she served as general manager. She was responsible for bringing to market Amazon Personalize and Amazon Forecast, two tools that help organizations create machine learning models.

Before that, she was CEO and co-founder at yet another AI startup called Post Intelligence, a company that purported to help social media influencers write AI-driven tweets. She later sold that company to Uber. If that isn’t enough for you, she served as head of Products for Google Apps, where she was in charge of Docs, Sheets, Slides, Sites and Blogger.

Early-bird tickets to see Bindu and our lineup of enterprise influencers at TC Sessions: Enterprise are on sale for just $249 when you book here; but hurry, prices go up by $100 soon! Students, grab your discounted tickets for just $75 here.

Adobe’s latest Customer Experience Platform updates take aim at data scientists

Adobe’s Customer Experience Platform provides a place to process all of the data that will eventually drive customer experience applications in the Adobe Experience Cloud. This involves bringing in vast amounts of transactional and interactional data being created across commerce platforms. This process is complex and involves IT, applications developers and data scientists.

Last fall, the company introduced a couple of tools in beta for the last group. Data scientists need familiar kinds of tools to work with the data as it streams into the platform in order to create meaningful models for the application developers to build upon. Today, it made two of those tools generally available — Query Service and Data Science Workspaces — which should go a long way toward helping data scientists feel comfortable working with data on this platform.

Ronell Hugh, group manager at Adobe Experience Platform, says these tools are about helping data scientists move beyond pure data management and getting into deriving more meaningful insights from it. “Data scientists were just bringing data in and trying to manage and organize it, and now we see that with Experience Platform, they are able to do that in a more seamless way, and can spend more time doing what they really want to do, which is deriving insights from the data to be actionable in the organization,” Hugh told TechCrunch.

Part of that is being able to do queries across the data sets they have brought into the platform. The newly released Query Service will enable data scientists and analysts to write queries to understand the data better and get specific answers based on the data faster.

“With Query Service in Adobe Experience Platform, analysts and data scientists can now poll all of their data sets stored in Experience Platform to answer specific cross-channel and cross-platform questions, faster than ever before. This includes behavioral data, as well as point-of-sale (POS), customer relationship management (CRM) and more,” the company wrote in a blog post announcing the new tool.
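Adobe’s post doesn’t show Query Service syntax, but the cross-dataset idea maps onto standard SQL. The sketch below uses an in-memory SQLite database with invented table and column names to show the kind of cross-channel question being described: joining behavioral data with CRM data in a single query.

```python
import sqlite3

# Illustrative only: join a behavioral-events dataset with a CRM dataset to
# answer a cross-channel question. Table and column names are invented;
# Adobe's Query Service exposes its own schemas, not these.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE web_events (customer_id TEXT, event TEXT);
CREATE TABLE crm_accounts (customer_id TEXT, segment TEXT);
INSERT INTO web_events VALUES ('c1','page_view'),('c1','purchase'),('c2','page_view');
INSERT INTO crm_accounts VALUES ('c1','enterprise'),('c2','smb');
""")

# "How many purchases came from each CRM segment?" -- a question neither
# dataset can answer alone.
rows = con.execute("""
    SELECT a.segment, COUNT(*) AS purchases
    FROM web_events e JOIN crm_accounts a USING (customer_id)
    WHERE e.event = 'purchase'
    GROUP BY a.segment
""").fetchall()
print(rows)  # [('enterprise', 1)]
```

The value of a service like this is less the SQL itself than having all the disparate sources already ingested into one queryable place.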

In addition, the company made the Data Science Workspace generally available. As the name implies, it provides a place for data scientists to work with the data and build models derived from it. The idea behind this tool is to use artificial intelligence to help automate some of the more mundane aspects of the data science job.

“Data scientists can take advantage of this new AI that fuels deeper data discovery by using Adobe Sensei pre-built models, bringing their existing models or creating custom models from scratch in Experience Platform,” the company wrote in the announcement blog post.

Today, it was the data scientists’ turn, but the platform is designed to help IT manage underlying infrastructure, whether in the cloud or on premises, and for application developers to take advantage of the data models and build customer experience applications on top of that. It’s a complex, yet symbiotic relationship, and Adobe is attempting to pull all of it together in a single platform.

The Exit: The acquisition charting Salesforce’s future

Before Tableau was the $15.7 billion key to Salesforce’s problems, it was a couple of founders arguing with a couple of venture capitalists over lunch about why its Series A valuation should be higher than $12 million pre-money.

Salesforce has generally been one to signal corporate strategy shifts through its acquisitions, so you can understand why the entire tech industry took notice when the cloud CRM giant announced its priciest acquisition ever last month.

The deal to acquire the Seattle-based data visualization powerhouse Tableau was substantial enough that Salesforce CEO Marc Benioff publicly announced it was turning Seattle into its second HQ. Tableau’s acquisition doesn’t just mean big things for Salesforce. With the deal taking place just days after Google announced it was paying $2.6 billion for Looker, the acquisition showcases just how intense the cloud wars are getting for the enterprise tech companies out to win it all.

The Exit is a new series at TechCrunch. It’s an exit interview of sorts with a VC who was in the right place at the right time but made the right call on an investment that paid off. [Have feedback? Shoot me an email at lucas@techcrunch.com]

Scott Sandell, a general partner at NEA (New Enterprise Associates) who has now been at the firm for 25 years, was one of those investors arguing with two of Tableau’s co-founders, Chris Stolte and Christian Chabot. Desperate to close the 2004 deal over their lunch meeting, he agreed to the Tableau founders’ demand for a higher $20 million valuation, though Sandell tells me it still feels like he got a pretty good deal.

NEA invested further in subsequent rounds and held over 38% of the company at the time of its IPO in 2013, according to public financial docs.

I had a long chat with Sandell, who also invested in Salesforce, about the importance of the Tableau deal, his rise from associate to general partner at NEA, who he sees as the biggest challenger to Salesforce, and why he thinks scooter companies are “the worst business in the known universe.”

The interview has been edited for length and clarity. 


Lucas Matney: You’ve been at this investing thing for quite a while, but taking a trip down memory lane, how did you get into VC in the first place? 

Scott Sandell: The way I got into venture capital is a little bit of a circuitous route. I had an opportunity to get into venture capital coming out of Stanford Business School in 1992, but it wasn’t quite the right fit. And so I had an interest, but I didn’t have the right opportunity.

Microsoft acquires data privacy and governance service BlueTalon

Microsoft today announced that it has acquired BlueTalon, a data privacy and governance service that helps enterprises set policies for how their employees can access their data. The service then enforces those policies across most popular data environments and provides tools for auditing policies and access, too.
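BlueTalon’s engine is proprietary, but the centralize-then-enforce pattern described here can be sketched as a small rule evaluator: one policy store consulted before any data environment serves a request, with every decision logged for auditing. The roles, resources and deny-by-default choice below are all invented for illustration.

```python
# Minimal sketch of centralized policy enforcement. One policy store is
# checked before any data environment serves a request; names and the
# deny-by-default behavior are illustrative, not BlueTalon's actual design.
POLICIES = [
    {"role": "analyst",  "resource": "sales_db", "action": "read"},
    {"role": "engineer", "resource": "logs",     "action": "read"},
    {"role": "admin",    "resource": "*",        "action": "*"},
]

def is_allowed(role: str, resource: str, action: str) -> bool:
    for p in POLICIES:
        if (p["role"] == role
                and p["resource"] in (resource, "*")
                and p["action"] in (action, "*")):
            return True
    return False  # deny by default

# Routing every check through one gate also yields a uniform audit trail.
audit_log = []

def enforce(role: str, resource: str, action: str) -> bool:
    decision = is_allowed(role, resource, action)
    audit_log.append((role, resource, action, decision))
    return decision

print(enforce("analyst", "sales_db", "read"))   # True
print(enforce("analyst", "sales_db", "write"))  # False
```

Because each backend defers to the same evaluator, a policy change takes effect everywhere at once, which is the "centralized governance at scale" point Microsoft's announcement emphasizes.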

Neither Microsoft nor BlueTalon disclosed the financial details of the transaction. Ahead of today’s acquisition, BlueTalon had raised about $27.4 million, according to Crunchbase. Investors include Bloomberg Beta, Maverick Ventures, Signia Venture Partners and Stanford’s StartX fund.

BlueTalon Policy Engine: how it works

“The IP and talent acquired through BlueTalon brings a unique expertise at the apex of big data, security and governance,” writes Rohan Kumar, Microsoft’s corporate VP for Azure Data. “This acquisition will enhance our ability to empower enterprises across industries to digitally transform while ensuring right use of data with centralized data governance at scale through Azure.”

Unsurprisingly, the BlueTalon team will become part of the Azure Data Governance group, where the team will work on enhancing Microsoft’s capabilities around data privacy and governance. Microsoft already offers access and governance control tools for Azure, of course. As virtually all businesses become more data-centric, though, the need for centralized access controls that work across systems is only going to increase, and new data privacy laws aren’t making this process easier.

“As we began exploring partnership opportunities with various hyperscale cloud providers to better serve our customers, Microsoft deeply impressed us,” BlueTalon CEO Eric Tilenius, who has clearly read his share of “our incredible journey” blog posts, explains in today’s announcement. “The Azure Data team was uniquely thoughtful and visionary when it came to data governance. We found them to be the perfect fit for us in both mission and culture. So when Microsoft asked us to join forces, we jumped at the opportunity.”

Google teams up with VMware to bring more enterprises to its cloud

Google today announced a new partnership with VMware that will make it easier for enterprises to run their VMware workloads on Google Cloud. Specifically, Google Cloud will now support VMware Cloud Foundation, the company’s system for deploying and running hybrid clouds. The solution was developed by CloudSimple, not VMware or Google, and Google will offer first-line support, working together with CloudSimple.

While Google would surely love for all enterprises to move to containers and utilize its Anthos hybrid cloud service, most large companies currently use VMware. They may want to move those workloads to a public cloud, but they aren’t ready to give up a tool that has long worked for them. With this new capability, Google isn’t offering anything that is especially new or innovative, but that’s not what this is about. Instead, Google is simply giving enterprises fewer reasons to opt for a competitor without even taking its offerings into account.

“Customers have asked us to provide broad support for VMware, and now with Google Cloud VMware Solution by CloudSimple, our customers will be able to run VMware vSphere-based workloads in GCP,” the company notes in the announcement, which we got an early copy of but which for reasons unknown to us will only go live on the company’s blog tomorrow. “This brings customers a wide breadth of choices for how to run their VMware workloads in a hybrid deployment, from modern containerized applications with Anthos to VM-based applications with VMware in GCP.”

The new solution will offer support for the full VMware stack, including the likes of vCenter, vSAN and NSX-T.

“Our partnership with Google Cloud has always been about addressing customers’ needs, and we’re excited to extend the partnership to enable our mutual customers to run VMware workloads on VMware Cloud Foundation in Google Cloud Platform,” said Sanjay Poonen, chief operating officer, customer operations at VMware. “With VMware on Google Cloud Platform, customers will be able to leverage all of the familiarity and investment protection of VMware tools and training as they execute on their cloud strategies, and rapidly bring new services to market and operate them seamlessly and more securely across a hybrid cloud environment.”

While Google’s announcement highlights that the company has a long history of working with VMware, it’s interesting to note that at least the technical aspects of this partnership are more about CloudSimple than VMware. It’s also worth noting that VMware has long had a close relationship with Google’s cloud competitor AWS, and Microsoft Azure, too, offers tools for running VMware-based workloads on its cloud.