Build Your Own Ransomware (Project Root) | Behind Enemy Lines Part 2

A short while back, we highlighted a recent addition to the Ransomware as a Service (RaaS) universe. Project Root didn’t so much burst onto the scene in October of this year as sputter onto it, generating non-functional binaries at its initial launch. By around October 15th, however, we started to intercept working payloads generated by the service.

At the time, their TOR-based .onion portal was advertising both a “standard” and a “Pro” version of the service. The major differentiator between the two is that anyone who purchases the “Pro” version gets their own copy of the source code, along with a stand-alone builder app. This allows malicious actors to generate their own payloads from the source, fully independent of the established RaaS portal. We have seen this model before with ATOM, Shark, and similar offerings.

image of root_pro_ver1

Even more interesting, though, is the ability to modify the source, allowing a malicious actor to adapt the threat to suit their specific needs. The offering is all the more attractive given that the threat is written in Golang, which is relatively simple to read, understand, and modify, even for those who are not true coders, and which is becoming increasingly popular in both crimeware and APT malware.

We have continued to monitor Project Root and its related activities. Since our previous post on this subject, Project Root has also launched an Android version of their toolset.

image of Proj_Root_Android_edit

The Android version is offered in a similar way to the Windows and Linux versions. For $500 USD (as of this writing) you get access to the Android source code and associated portal and management tools. According to Project Root’s portal, any ne’er-do-wells who purchased the Windows or Linux offerings will have to pay again if they want the Android packages. Previous “Pro” buyers are not grandfathered into the new offering.

Circling back to the Windows and Linux offerings, our research team has recently intercepted additional artefacts associated with the “Pro” package. Specifically, we have uncovered functional versions of the offline builder as well as portions of the actual source code. It appears as though, with the “Pro” purchase, buyers are provided with source code for the following (in Golang):

  •       Windows payloads (x86 & x64)
  •       Linux payloads (64-bit)
  •       Windows decryption/recovery tool
  •       Wallpaper: Cross-platform Golang library to get/set desktop backgrounds.
  •       FileEncryption: Cross-platform Golang library for encrypting large files using stream encryption
  •       Control Panel and DB setup binaries (hosting and management)
  •       Requests file (req.py) for hosting setup
  •       A terse, yet helpful “guide”

image of Pro_Root_Guide_Excerpt_1

For our journey today, we will take a high-level look at the offline Project Root builder as well as some of the source code. 

Project Root: The Offline Builder

The Project Root builder is a relatively simple tool, based on Chrome.

image of Builder_main_blank_1

Before running the tool, users must ensure that their system is properly set up for Golang development. Essentially, this boils down to installing Golang (and related tools) along with the two required open-source components (Wallpaper, FileEncryption). With these pieces in place, along with the appropriate system configuration for general Golang development, the builder will function. There are no connectivity requirements for simply running the builder.

The options available in the builder tool are identical to those in the online RaaS portal. You can configure the following items:

  •       OS
  •       Architecture
  •       Anti-VM (sandbox / analysis evasion)
  •       Extension (extension used for encrypted files)
  •       Wallpaper link (link to the attacker’s desired desktop wallpaper)
  •       Server URL (link to the attacker’s server)
  •       Note Text (raw text for the “Ransom Note”)
  •       Recovery Code (32-bit key used by the victim for recovery purposes)

The built-in options break down as follows (a rough sketch of the corresponding configuration follows the list):

  • OS (Windows / Linux)
  • Architecture (32 or 64 bit)
  • Anti-VM (Enabled or Disabled)
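
To make these inputs concrete, below is a minimal sketch of how the builder’s options could be modeled in Go. The struct and field names are our own illustration and are not taken from the Project Root source.

```go
// BuildConfig is a hypothetical model of the options exposed by the
// Project Root builder GUI; names and types are illustrative only.
type BuildConfig struct {
	OS           string // "windows" or "linux"
	Arch         string // "386" (32-bit) or "amd64" (64-bit)
	AntiVM       bool   // enable the sandbox / analysis evasion checks
	Extension    string // extension appended to encrypted files
	WallpaperURL string // link to the attacker's desired desktop wallpaper
	ServerURL    string // link to the attacker-controlled server
	NoteText     string // raw text for the "Ransom Note"
	RecoveryCode string // key used by the victim for recovery purposes
}
```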

image of Builder_main_filled_out_2

Building new ransomware payloads is as simple as configuring these various options, then clicking “BUILD”. At that point, a quick PowerShell script is executed, which builds the new payload and deposits the file into the same directory as the builder application.

If we enable full PowerShell scriptblock logging, we can see how the tool is utilizing PowerShell to invoke the build and write the binary to disk with the desired configuration options.
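
For context, cross-compiling Go source for a chosen OS and architecture is ordinary Go tooling: it amounts to running go build with the GOOS and GOARCH environment variables set. The sketch below shows, in generic terms and with placeholder names, how a Go front end might drive that step through PowerShell; it is an assumption for illustration, not Project Root’s actual builder code.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// crossCompile is a generic sketch: it shells out to PowerShell to run
// "go build" for the requested GOOS/GOARCH. File names are placeholders.
func crossCompile(goos, goarch, srcFile, outFile string) error {
	cmd := exec.Command("powershell", "-Command",
		fmt.Sprintf("go build -o %s %s", outFile, srcFile))
	// Cross-compilation in Go is controlled via the GOOS and GOARCH
	// environment variables, inherited here by the PowerShell child process.
	cmd.Env = append(os.Environ(), "GOOS="+goos, "GOARCH="+goarch)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := crossCompile("windows", "amd64", "windows.go", "out.exe"); err != nil {
		fmt.Fprintln(os.Stderr, "build failed:", err)
	}
}
```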

image of powershell_log_pr_1

From that point forward, the process is purely up to the threat actor: they need to stage the malware, fully configure their server with the supplied panel and database, and proceed from there.

The control panel provided is a duplicate of that seen at the live RaaS portal.   

image of panel_dash_template1

As with any other managed malware, all the required assets are provided to allow the actor to setup a full instance, however they see fit.

image of panel_dash_filelist_1

Also, as a side note, many of the template images provided are related to Assassin’s Creed.

image of panel_images_1

Project Root: The Golang Source Code

As noted previously, there are three primary source files (.go) provided for the “Pro” version, along with the required open-source items (Wallpaper / FileEncryption).

The FileEncryption component is bundled as Lulz.go:

image of Lulz_go_1
image of Lulz_go_2

The modules for manipulating the desktop wallpaper are provided for both Windows and Linux (windows.go and linux.go, respectively).

image of wallpaper_go_1
image of wallpaper_go_2
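
For a sense of what these wallpaper modules do, cross-platform Go wallpaper libraries expose a very small get/set API. The snippet below uses github.com/reujab/wallpaper purely as a representative example; the post does not confirm which library Project Root actually bundles, and the URL is a placeholder.

```go
package main

import (
	"fmt"
	"log"

	"github.com/reujab/wallpaper"
)

func main() {
	// Read the current desktop background (cross-platform).
	current, err := wallpaper.Get()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("current wallpaper:", current)

	// Replace it with an image downloaded from a URL (placeholder address).
	if err := wallpaper.SetFromURL("https://example.com/background.png"); err != nil {
		log.Fatal(err)
	}
}
```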

The source code behind the actual ransomware payloads (generated via the builder) is quite simple and straightforward.

The primary source files for the ransomware binaries are:

  •      Linux.go
  •      Windows.go
  •      Nowindows.go

The difference between the two Windows-related source files is slight and relates to the Anti-VM functionality available in the builder GUI. The Windows.go source file contains the ANTI-VM (sandbox/analysis evasion) checks. That is, if you opt to enable the ANTI-VM feature in the GUI, the tool will use the Windows.go source; if you disable ANTI-VM, the Nowindows.go source will be used instead.

The ANTI-VM feature does some simple checks to determine if the ransomware is running in either VMware or VirtualBox. If either is detected, it will exit or fail to fully execute beyond that point.

The ANTI-VM checks in windows.go can be seen in the following screen capture:

image of code

These checks are fairly trivial: the tool calls Systeminfo directly and then parses the output for specific strings indicative of running on either VMware or VirtualBox. In the sample above, the VMware check keys on the presence of a VMware virtual NIC.

Conclusion

It is important to understand where these types of evil tools exist, and how they work, for a variety of reasons. Perhaps most importantly, it serves as a reminder of just how easy it can be for a malicious actor of ANY skill level to critically impact a target. Wide-scale ransomware attacks do NOT require a robust and heavily orchestrated plan with genius programmers in tow. With tools like “Project Root”, just about anyone off the street could severely impact the target of their choice, be it the administration department of a school district, the central computing system of a hospital, or a Fortune 100 enterprise. In that context, understanding how to choose, utilize, and maintain proper and modern endpoint security controls is paramount (as it always has been), and we hope that this blog series serves as a good reminder of that. Stay aware and stay safe.


