Cloud Foundry’s Kubernetes bet with Project Eirini hits 1.0

Cloud Foundry, the open-source platform-as-a-service that, with the help of lots of commercial backers, is currently in use by the majority of Fortune 500 companies, launched well before containers, and especially the Kubernetes orchestrator, were a thing. Instead, the project built its own container service, but the rise of Kubernetes obviously created a lot of interest in using it for managing Cloud Foundry’s container implementation. To do so, the organization launched Project Eirini last year; today, it’s officially launching version 1.0, which means it’s ready for production usage.

Eirini/Kubernetes doesn’t replace the old architecture. Instead, for the foreseeable future, the two will operate side by side, with operators deciding which one to use.

The team working on this project shipped a first technical preview earlier this year, and a number of commercial vendors have since built their own products around it and shipped them as betas.

“It’s one of the things where I think Cloud Foundry sometimes comes at things from a different angle,” IBM’s Julz Friedman told me. “Because it’s not about having a piece of technology that other people can build on in order to build a platform. We’re shipping the end thing that people use. So 1.0 for us — we have to have a thing that ticks all those boxes.”

He also noted that Diego, Cloud Foundry’s existing container management system, had been battle-tested over the years and had always been designed to be scalable to run massive multi-tenant clusters.

“If you look at people doing similar things with Kubernetes at the moment,” said Friedman, “they tend to run lots of Kubernetes clusters to scale to that kind of level. And Kubernetes, although it’s going to get there, right now there are challenges around multi-tenancy and super big multi-tenant scale.”

But even without being able to get to this massive scale, Friedman argues that you can already get a lot of value even out of a small Kubernetes cluster. Most companies don’t need to run enormous clusters, after all, and they still get the value of Cloud Foundry with the power of Kubernetes underneath it (all without having to write YAML files for their applications).

As Cloud Foundry CTO Chip Childers also noted, once the transition to Eirini gets to the point where the Cloud Foundry community can start applying less effort to its old container engine, those resources can go back to fulfilling the project’s overall mission, which is about providing the best possible developer experience for enterprise developers.

“We’re in this phase in the industry where Kubernetes is the new infrastructure and [Cloud Foundry] has a very battle-tested developer experience around it,” said Childers. “But there’s also really interesting ideas that are out there that are coming from our community, so one of the things that I’ve suggested to the community writ large is, let’s use this time as an opportunity to not just evolve what we have, but also make sure that we’re paying attention to new workflows, new models, and figure out what’s going to provide benefit to that enterprise developer that we’re so focused on — and bring those types of capabilities in.”

Those new capabilities may be around technologies like functions and serverless, for example, though Friedman at least is more focused on Eirini 1.1 for the time being, which will include closing the gaps with what’s currently available in Cloud Foundry’s old scheduler, like Docker image support and support for the Cloud Foundry v3 API.

Making sense of a multi-cloud, hybrid world at KubeCon

More than 12,000 attendees gathered this week in San Diego to discuss all things containers, Kubernetes and cloud-native at KubeCon.

Kubernetes, the container orchestration tool, turned five this year, and the technology appears to be reaching a maturity phase where it accelerates beyond early adopters to reach a more mainstream group of larger business users.

That’s not to say that there isn’t plenty of work to be done, or that most enterprise companies have completely bought in, but it’s clearly reached a point where containerization is on the table. If you think about it, the whole cloud-native ethos makes sense for the current state of computing and how large companies tend to operate.

If this week’s conference showed us anything, it’s an acknowledgment that it’s a multi-cloud, hybrid world. That means most companies are working with multiple public cloud vendors, while managing a hybrid environment that includes those vendors — as well as existing legacy tools that are probably still on-premises — and they want a single way to manage all of this.

The promise of Kubernetes and cloud-native technologies, in general, is that it gives these companies a way to thread this particular needle, or at least that’s the theory.

Kubernetes to the rescue

Photo: Ron Miller/TechCrunch

If you were to look at the Kubernetes hype cycle, we are probably right about at the peak where many think Kubernetes can solve every computing problem they might have. That’s probably asking too much, but cloud-native approaches have a lot of promise.

Craig McLuckie, VP of R&D for cloud-native apps at VMware, was one of the original developers of Kubernetes at Google in 2014. VMware thought enough of the importance of cloud-native technologies that it bought his former company, Heptio, for $550 million last year.

As we head into this phase of pushing Kubernetes and related tech into larger companies, McLuckie acknowledges it creates a set of new challenges. “We are at this crossing the chasm moment where you look at the way the world is — and you look at the opportunity of what the world might become — and a big part of what motivated me to join VMware is that it’s successfully proven its ability to help enterprise organizations navigate their way through these disruptive changes,” McLuckie told TechCrunch.

He says that Kubernetes does actually solve this fundamental management problem companies face in this multi-cloud, hybrid world. “At the end of the day, Kubernetes is an abstraction. It’s just a way of organizing your infrastructure and making it accessible to the people that need to consume it.

“And I think it’s a fundamentally better abstraction than we have access to today. It has some very nice properties. It is pretty consistent in every environment that you might want to operate, so it really makes your on-prem software feel like it’s operating in the public cloud,” he explained.

Simplifying a complex world

One of the reasons Kubernetes and cloud-native technologies are gaining in popularity is that they allow companies to think about hardware differently. There is a big difference between virtual machines and containers, says Joe Fernandes, VP of product for Red Hat’s cloud platform.

“Sometimes people conflate containers as another form of virtualization, but with virtualization, you’re virtualizing hardware, and the virtual machines that you’re creating are like an actual machine with its own operating system. With containers, you’re virtualizing the process,” he said.

He said this means a container isn’t coupled to the hardware. The only thing it needs to worry about is making sure it can run Linux, and Linux runs everywhere, which explains why containers make it easier to manage across different types of infrastructure. “It’s more efficient, more affordable, and ultimately, cloud-native allows folks to drive more automation,” he said.
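
To make that point concrete, here is a minimal sketch (not Red Hat tooling; it assumes a local Docker daemon and the docker Python SDK, installed with pip install docker) showing that a running container is just an isolated process on the host’s kernel rather than a booted guest operating system:

```python
# Illustrative only: a container is an isolated host process, not a VM.
# Assumes Docker is running locally and the docker SDK is installed.
import docker

client = docker.from_env()

# Start a throwaway container that just sleeps for a minute.
container = client.containers.run("alpine", "sleep 60", detach=True)

# There is no guest OS to boot: the container shows up as an ordinary
# process on the host, wrapped in its own namespaces and cgroups.
container.reload()
host_pid = container.attrs["State"]["Pid"]
print(f"container {container.short_id} runs as host process PID {host_pid}")

container.stop()
container.remove()
```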

Bringing it into the enterprise

Photo: Ron Miller/TechCrunch

It’s one thing to convince early adopters to change the way they work; it’s another as this technology enters the mainstream. Gabe Monroy, partner program manager at Microsoft, says that to carry this technology to the next level, we have to change the way we talk about it.

Celonis, a leader in big data process mining for enterprises, nabs $290M on a $2.5B valuation

More than $1 trillion is spent by enterprises annually on “digital transformation” — the investments that organizations make to update their IT systems to get more out of them and reduce costs — and today one of the bigger startups that’s built a platform to help get the ball rolling is announcing a huge round of funding.

Celonis, a leader in the area of process mining — which tracks data produced by a company’s software, as well as how the software works, in order to provide guidance on what a company could and should do to improve it — has raised $290 million in a Series C round of funding, giving the startup a post-money valuation of $2.5 billion.

Celonis was founded in 2011 in Munich — an industrial and economic center in Germany that you could say is a veritable Petri dish when it comes to large business in need of digital transformation — and has been cash-flow positive from the start. In fact, Celonis waited until it was nearly six years old to take its first outside funding (prior to this Series C it had picked up less than $80 million, see here and here).

The size and timing of this latest equity injection come down to seizing the moment, and tapping networks of people to do so. The company has already been growing at a triple-digit rate, with customers including Siemens, Cisco, L’Oréal, Deutsche Telekom and Vodafone.

“Our tech has become its own category with a lot of successful customers,” Bastian Nominacher, the co-CEO who co-founded the company with Alexander Rinke and Martin Klenk, said in an interview. “It’s a key driver for sustainable business operations, and we felt that we needed to have the right network of people to keep momentum in this market.”

To that end, the list of participants in this latest round lines up with the company’s strategic goals. It is being led by Arena Holdings — an investment firm headed by Feroz Dewan — with Ryan Smith, co-founder and CEO of Qualtrics, and Tooey Courtemanche, founder and CEO of Procore, also participating, alongside previous investors 83North and Accel.

Celonis said Smith will be a special advisor, working alongside another strategic board member, Hybris founder Carsten Thoma. Dewan, meanwhile, used to run hedge funds for Tiger Global (among other roles) and currently sits on the board of directors of Kraft Heinz.

“Celonis is the clear market leader in a category with open-ended potential. It has demonstrated an enviable record of growth and value creation for its customers and partners,” said Dewan in a statement. “Celonis helps companies capitalise on two inexorable trends that cut across geography and industry: the use of data to enable faster, better decision-making and the desire for all businesses to operate at their full potential.”

The core of Celonis’s offering is to provide process mining around an organization’s IT systems. Nominacher said that this could include anything from 5 to over 100 different pieces of software, with the main idea being that Celonis’s platform monitors a company’s whole solar system of apps, so to speak, in order to produce its insights — providing an “X-ray” view of the situation, in the words of Rinke.

Those insights, in turn, are used either by the company itself, or by consultants engaged by the organization, to make further suggestions, whether that’s to implement something like robotic process automation (RPA) to speed up a specific process, or use a different piece of software to crunch data better, or reconfigure how staff is deployed, and so on. This is not a one-off thing: the idea is continuous monitoring to pick up new patterns or problems.
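
For readers unfamiliar with the technique, here is a toy sketch of the general idea behind process mining; it is purely illustrative Python, not Celonis’s engine or data model. It reconstructs how a purchase-order process actually flows from the kind of event log that business software already records:

```python
# Toy process-mining sketch: derive the real process flow from an event log.
from collections import Counter
from datetime import datetime

# Hypothetical event log: (case id, activity, timestamp), as an ERP might record it.
events = [
    ("PO-1", "create order",  "2019-11-01 09:00"),
    ("PO-1", "approve order", "2019-11-01 10:30"),
    ("PO-1", "ship",          "2019-11-02 16:00"),
    ("PO-2", "create order",  "2019-11-01 11:00"),
    ("PO-2", "change price",  "2019-11-03 09:15"),
    ("PO-2", "approve order", "2019-11-04 14:00"),
    ("PO-2", "ship",          "2019-11-05 18:20"),
]

# Group events into per-case traces, ordered by time.
traces = {}
for case, activity, ts in events:
    traces.setdefault(case, []).append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), activity))

variants = Counter()
for steps in traces.values():
    steps.sort()
    variants[tuple(activity for _, activity in steps)] += 1

# The "X-ray" view: which paths the process actually takes, and how often.
for path, count in variants.most_common():
    print(count, " -> ".join(path))
```

In a real deployment the log would span dozens of systems and millions of cases, but the principle is the same: derive the as-is process from data, then look for the slow or deviant paths worth fixing.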

More recently, the company has started to expand the system into a wider set of use cases: providing tools to monitor operations and customer experience, applying its process mining engine to a wider range of company sizes beyond large enterprises, and bringing more AI into its basic techniques.

Interestingly, Nominacher said that there are currently no plans to, say, extend into RPA or other “fixing” tools itself, pointing to the kind of laser focus that is likely part of what has helped it grow so well up to now.

“It’s important to focus on the relevant parts of what you provide,” he said. “We are one layer, one that can give the right guidance.”

Linear takes $4.2M led by Sequoia to build a better bug tracker and more

Software will eat the world, as the saying goes, but in doing so, some developers are likely to get a little indigestion. That is to say, building products requires working with disparate and distributed teams, and while developers may have an ever-growing array of algorithms, APIs and technology at their disposal to do this, ironically the platforms to track it all haven’t evolved with the times. Now three developers have taken their own experience of that disconnect to create a new kind of platform, Linear, which they believe addresses the needs of software developers better by being faster and more intuitive. It’s bug tracking you actually want to use.

Today, Linear is announcing a seed round of $4.2 million led by Sequoia, with participation from Index Ventures and a number of investors, startup founders and others who will also advise Linear as it grows. They include Dylan Field (Founder and CEO, Figma), Emily Choi (COO, Coinbase), Charlie Cheever (Co-Founder of Expo & Quora), Gustaf Alströmer (Partner, Y Combinator), Tikhon Bernstam (Co-Founder, Parse), Larry Gadea (CEO, Envoy), Jude Gomila (CEO, Golden), James Smith (CEO, Bugsnag), Fred Stevens-Smith (CEO, Rainforest), Bobby Goodlatte, Marc McGabe, Julia DeWahl and others.

Cofounders Karri Saarinen, Tuomas Artman, and Jori Lallo — all Finnish but now based in the Bay Area — know something first-hand about software development and the trials and tribulations of working with disparate and distributed teams. Saarinen was previously the principal designer at Airbnb, as well as the first designer at Coinbase; Artman had been a staff engineer and architect at Uber; and Lallo had also been at Coinbase, as a senior engineer building its API and front end.

“When we worked at many startups and growth companies, we felt that the tools weren’t matching the way we’re thinking or operating,” Saarinen said in an email interview. “It also seemed that no one had taken a fresh look at this as a design problem. We believe there is a much better, modern workflow waiting to be discovered. We believe creators should focus on the work they create, not tracking or reporting what they are doing. Managers should spend their time prioritizing and giving direction, not bugging their teams for updates. Running the process shouldn’t sap your team’s energy and come in the way of creating.”

Linear cofounders (from left): Karri Saarinen, Jori Lallo, and Tuomas Artman

All of that translates to, first and foremost, speed and a platform whose main purpose is to help you work faster. “While some say speed is not really a feature, we believe it’s the core foundation for tools you use daily,” Saarinen noted.

A ⌘K command calls up a menu to edit an issue’s status, assign a task, and more, so that everything can be handled with keyboard shortcuts. Pages load quickly and synchronize in real time (and search updates alongside that). Users can work offline if they need to. And of course there is also a dark mode for night owls.

The platform is still very much in its early stages. It currently has three integrations based on some of the most common tools used by developers — GitHub (where you can link Pull Requests and close Linear issues on merge), Figma designs (where you can get image previews and embeds of Figma designs), and Slack (you can create issues from Slack and then get notifications on updates). There are plans to add more over time.
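
To give a flavor of how an integration like the GitHub one tends to work under the hood, here is a rough sketch of the pattern: a webhook listener that closes a tracked issue when a linked pull request is merged. It is written against GitHub’s standard pull_request webhook payload, and the close_linked_issue helper is a hypothetical stand-in, not Linear’s actual API:

```python
# Illustrative webhook pattern: close a tracked issue when a linked PR merges.
# Not Linear's actual implementation; close_linked_issue is hypothetical.
from flask import Flask, request

app = Flask(__name__)

def close_linked_issue(branch_name: str) -> None:
    # Hypothetical: look up the issue referenced in the branch name
    # (e.g. "eng-123-fix-login") and mark it done in the tracker.
    print(f"closing issue referenced by branch {branch_name}")

@app.route("/webhooks/github", methods=["POST"])
def github_webhook():
    payload = request.get_json(silent=True) or {}
    pr = payload.get("pull_request", {})
    # GitHub sends a pull_request event with action == "closed" and
    # merged == true when a pull request is merged.
    if payload.get("action") == "closed" and pr.get("merged"):
        close_linked_issue(pr.get("head", {}).get("ref", ""))
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```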

“We started solving the problem from the end-user perspective, the contributor, like an engineer or a designer, and started to address things that are important for them and can help them and their teams,” Saarinen said. “We aim to also bring clarity for the teams by making the concepts simple, clear but powerful. For example, instead of talking about epics, we have Projects that help track larger feature work or tracks of work.”

Indeed, speed is not the only aim with Linear. Saarinen also said another area they hope to address is general work practices, with a take that seems to echo a turn away from time spent on manual management and more focus on automating that process.

“Right now at many companies you have to manually move things around, schedule sprints, and all kinds of other minor things,” he said. “We think that next generation tools should have built in automated workflows that help teams and companies operate much more effectively. Teams shouldn’t spend a third or more of their time a week just for running the process.”

The last objective Linear is hoping to tackle is one that we’re often sorely lacking in the wider world, too: context.

“Companies are setting their high-level goals, roadmaps and teams work on projects,” he said. “Often leadership doesn’t have good visibility into what is actually happening and how projects are tracking. Teams and contributors don’t always have the context or understanding of why they are working on the things, since you cannot follow the chain from your task to the company goal. We think that there are ways to build Linear to be a real-time picture of what is happening in the company when it comes to building products, and give the necessary context to everyone.”

Linear is a late entrant in a world filled with collaboration apps, and specifically workflow and collaboration apps targeting the developer community. These include not just Slack and GitHub, but Atlassian’s Trello and Jira, as well as Asana, Basecamp and many more.

Saarinen would not be drawn on which of these (or others) Linear sees as direct competition, noting that none addresses developer issues of speed, ease of use and context as well as Linear does.

“There are many tools in the market and many companies are talking about making ‘work better,’” he said. “And while there are many issue tracking and project management tools, they are not supporting the workflow of the individual and team. A lot of the value these tools sell is around tracking work that happens, not actually helping people to be more effective. Since our focus is on the individual contributor and intelligent integration with their workflow, we can support them better and as a side effect makes the information in the system more up to date.”

Stephanie Zhan, the partner at Sequoia whose specialty is seed and Series A investments and who led this round, said that Linear first came on her radar when it launched its private beta. (It’s still in private beta and has been running a waitlist to bring on new users; in that time it has picked up hundreds of companies, including Pitch, Render, Albert, Curology, Spoke, Compound and YC startups such as Middesk, Catch and Visly.) The company had also been flagged by one of Sequoia’s Scouts, who invested earlier this year.

Although Linear is based out of San Francisco, the three founders’ roots are in Finland (Saarinen was in Helsinki this week to speak at the Slush event), which points to an emerging trend of Silicon Valley VCs looking at founders from further afield than just their own backyard.

“The interesting thing about Linear is that as they’re building a software company around the future of work, they’re also building a remote and distributed team themselves,” Zhan said. The company currently has only four employees.

In that vein, we (and others, it seems) had heard that Sequoia — which today invests in several Europe-based startups, including Tessian, Graphcore, Klarna, Tourlane, Evervault and CEGX — has been considering establishing a more permanent presence in this part of the world, specifically in London.

Sources familiar with the firm, however, tell us that while it has been sounding out VCs at other firms, saying a London office is on the horizon might be premature, as there are as yet no plans to set up shop here. Still, with more companies and European founders entering its portfolio, and as more conversations with VCs turn into decisions to make the leap to help Sequoia source more startups, we could see this stance changing quickly.

Reimagine inside sales to ramp up B2B customer acquisition

Slack makes customer acquisition look easy.

The day we acquired our first Highspot customer, it was raining hard in Seattle. I was on my way to a startup event when I answered my cell phone and our prospect said, “We’re going with Highspot.” Relief, then excitement, hit me harder than the downpour outside. It was a milestone moment – one that came after a long journey of establishing product-market fit, developing a sustainable competitive advantage, and iterating repeatedly based on prospect feedback. In other words, it was anything but easy.

User-first products are driving rapid company growth in an era where individuals discover, adopt, and share software they like throughout their organizations. This is great if you’re a Slack, Shopify, or Dropbox, but what if your company doesn’t fit that profile?

Product-led growth is a strategy that works for the right technologies, but it’s not the end-all, be-all for B2B customer acquisition. For sophisticated enterprise software platforms designed to drive company-wide value, such as Marketo, ServiceNow and Workday, that value is realized when the product is adopted en masse by one or more large segments.

If you’re selling broad account value, rather than individual user or team value, acquisition boils down to two things: elevating account-based selling and revolutionizing the inside sales model. Done correctly, you lay a foundation capable of delivering year-over-year doubling of revenue growth, 95 percent company-wide retention, and more than 100 percent annual growth in new customer logos. Here are the steps you can take to build a model that achieves comparable results.

Work the account, not the deal

Account-based selling is not a new concept, but the availability of data today changes the game. Advanced analytics enable teams to develop comprehensive and personalized approaches that meet modern customers’ heightened expectations. And when 77 percent of business buyers feel that technology has significantly changed how companies should interact with them, you have no choice but to deliver.

Despite the multitude of products created to help sellers be more productive and personal, billions of cookie-cutter emails are still flooding the inboxes of a few decision makers. The market is loud. Competition is cutthroat. It’s no wonder 40 percent of sales reps say getting a response from a prospect is more difficult than ever before. Even pioneers of sales engagement are recognizing the need for evolution – yesterday’s one-size-fits-all approach to outreach only widens the gap between today’s sellers and buyers.

Companies must radically change their approach to account-based selling by building trusted relationships over time from the first touch onward. This requires that your entire sales force – from account development representatives to your head of sales – add tailored, tangible value at every stage of the journey. Modern buyers don’t want to be sold. They want to be advised. But the majority of companies are still missing the mark, favoring spray-and-pray tactics over personalized guidance.

One reason spamming remains prevalent, despite growing awareness of the need for quality over quantity, is that implementing a tailored approach is hard work. However, companies can make great strides by doing just three things:

  • Invest in personalization: Sales reps have quotas, and sales leaders carry revenue targets. The pressure is as real as the numbers. But high-velocity outreach tactics simply don’t work consistently. New research from Monetate and WBR Research found that 93% of businesses with advanced personalization strategies increased their revenue last year. And while scaling personalization may sound like an oxymoron, we now have artificial intelligence (AI) technology capable of doing just that. Of course, not all AI is created equal, so take the time to discern AI-powered platforms that deliver real value from the imposters. With a little research, you’ll find sales tools that discard rinse-and-repeat prospecting methods in favor of intelligent guidance and actionable analytics.

Google makes converting VMs to containers easier with the GA of Migrate for Anthos

At its Cloud Next event in London, Google today announced a number of product updates around its managed Anthos platform, as well as Apigee and its Cloud Code tools for building modern applications that can then be deployed to Google Cloud or any Kubernetes cluster.

Anthos is one of the most important recent launches for Google, as it expands the company’s reach outside of Google Cloud and into its customers’ data centers and, increasingly, edge deployments. At today’s event, the company announced that it is taking Anthos Migrate out of beta and into general availability. The overall idea behind Migrate is that it allows enterprises to take their existing, VM-based workloads and convert them into containers. Those machines could come from on-prem environments, AWS, Azure or Google’s Compute Engine, and — once converted — can then run in Anthos GKE, the Kubernetes service that’s part of the platform.

“That really helps customers think about a leapfrog strategy, where they can maintain the existing VMs but benefit from the operational model of Kubernetes,” Google VP of product management Jennifer Lin told me. “So even though you may not get all of the benefits of a cloud-native container day one, what you do get is consistency in the operational paradigm.”

As for Anthos itself, Lin tells me that Google is seeing some good momentum. The company is highlighting a number of customers at today’s event, including Germany’s Kaeser Kompressoren and Turkey’s Denizbank.

Lin noted that a lot of financial institutions are interested in Anthos. “A lot of the need to do data-driven applications, that’s where Kubernetes has really hit that sweet spot because now you have a number of distributed datasets and you need to put a web or mobile front end on [them],” she explained. “You can’t do it as a monolithic app, you really do need to tap into a number of datasets — you need to do real-time analytics and then present it through a web or mobile front end. This really is a sweet spot for us.”

Also new today is the general availability of Cloud Code, Google’s set of extensions for IDEs like Visual Studio Code and IntelliJ that helps developers build, deploy and debug their cloud-native applications more quickly. The idea, here, of course, is to remove friction from building containers and deploying them to Kubernetes.

In addition, Apigee hybrid is now also generally available. This tool makes it easier for developers and operators to manage their APIs across hybrid and multi-cloud environments, a challenge that is becoming increasingly common for enterprises. Teams can deploy Apigee’s API runtimes in hybrid environments and still get the benefits of Apigee’s monitoring and analytics tools in the cloud. Apigee hybrid, of course, can also be deployed to Anthos.

Google Cloud launches Bare Metal Solution

Google Cloud today announced the launch of a new bare metal service, dubbed the Bare Metal Solution. We aren’t talking about bare metal servers offered directly by Google Cloud here, though. Instead, we’re talking about a solution that enterprises can use to run their specialized workloads on certified hardware that’s co-located in the Google Cloud data centers and directly connect them to Google Cloud’s suite of other services. The main workload that makes sense for this kind of setup is databases, Google notes, and specifically Oracle Database.

Bare Metal Solution is, as the name implies, a fully integrated and fully managed solution for setting up this kind of infrastructure. It involves completely managed hardware, including servers and the rest of the data center facilities like power and cooling; support contracts and billing are handled through Google’s systems, and the service comes with an SLA. The software that’s deployed on those machines is managed by the customer — not Google.

The overall idea, though, is clearly to make it easier for enterprises with specialized workloads that can’t easily be migrated to the cloud to still benefit from the cloud-based services that need access to the data from these systems. Machine learning is an obvious example, but Google also notes that this provides these companies with a bridge to slowly modernize their tech infrastructure in general (where “modernize” tends to mean “move to the cloud”).

“These specialized workloads often require certified hardware and complicated licensing and support agreements,” Google writes. “This solution provides a path to modernize your application infrastructure landscape, while maintaining your existing investments and architecture. With Bare Metal Solution, you can bring your specialized workloads to Google Cloud, allowing you access and integration with GCP services with minimal latency.”

Because this service is co-located with Google Cloud, there are no separate ingress and egress charges for data that moves between Bare Metal Solution and Google Cloud in the same region.

The servers for this solution, which are certified to run a wide range of applications (including Oracle Database), range from dual-socket 16-core systems with 384 GB of RAM to quad-socket servers with 112 cores and 3072 GB of RAM. Pricing is on a monthly basis, with a preferred term length of 36 months.

Obviously, this isn’t the kind of solution that you self-provision, so the only way to get started — and get pricing information — is to talk to Google’s sales team. But this is clearly the kind of service that we should expect from Google Cloud, which is heavily focused on providing as many enterprise-ready services as possible.

The Cerebras CS-1 computes deep learning AI problems by being bigger, bigger, and bigger than any other chip

Deep learning is all the rage these days in enterprise circles, and it isn’t hard to understand why. Whether it is optimizing ad spend, finding new drugs to cure cancer, or just offering better, more intelligent products to customers, machine learning — and particularly deep learning — has the potential to massively improve a range of products and applications.

The key word though is ‘potential.’ While we have heard oodles of words sprayed across enterprise conferences the last few years about deep learning, there remain huge roadblocks to making these techniques widely available. Deep learning models are highly networked, with dense graphs of nodes that don’t “fit” well with the traditional ways computers process information. Plus, holding all of the information required for a deep learning model can take petabytes of storage and racks upon racks of processors in order to be usable.

There are lots of approaches underway right now to solve this next-generation compute problem, and Cerebras has to be among the most interesting.

As we talked about in August with the announcement of the company’s “Wafer Scale Engine” — the world’s largest silicon chip according to the company — Cerebras’ theory is that the way forward for deep learning is to essentially just get the entire machine learning model to fit on one massive chip. And so the company aimed to go big — really big.

Today, the company announced the launch of its end-user compute product, the Cerebras CS-1, and also announced its first customer, Argonne National Laboratory.

The CS-1 is a “complete solution” product designed to be added to a data center to handle AI workflows. It includes the Wafer Scale Engine (or WSE, i.e. the actual processing core) plus all the cooling, networking, storage, and other equipment required to operate and integrate the processor into the data center. It’s 26.25 inches tall (15 rack units), and includes 400,000 processing cores, 18 gigabytes of on-chip memory, 9 petabytes per second of on-die memory bandwidth, 12 gigabit ethernet connections to move data in and out of the CS-1 system, and sucks just 20 kilowatts of power.

A cross-section look at the CS-1. Photo via Cerebras

Cerebras claims that the CS-1 delivers the performance of more than 1,000 leading GPUs combined — a claim that TechCrunch hasn’t verified, although we are intently waiting for industry-standard benchmarks in the coming months when testers get their hands on these units.

In addition to the hardware itself, Cerebras also announced the release of a comprehensive software platform that allows developers to use popular ML libraries like TensorFlow and PyTorch to integrate their AI workflows with the CS-1 system.
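
Cerebras’ own APIs aren’t detailed in the announcement, but the pitch is that teams keep writing ordinary TensorFlow or PyTorch code. The sketch below shows the kind of stock PyTorch training step such a platform would need to ingest; the Cerebras-specific compilation and device placement are deliberately left out, since those details aren’t covered here:

```python
# A standard PyTorch training step, the kind of framework-level code
# Cerebras says its software stack can take as-is. Nothing here is
# Cerebras-specific; mapping the graph onto the wafer is not shown.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def train_step(inputs: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch, just to show the step runs end to end.
print(train_step(torch.randn(32, 784), torch.randint(0, 10, (32,))))
```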

In designing the system, CEO and co-founder Andrew Feldman said that “We’ve talked to more than 100 customers over the past year and a bit,” in order to determine the needs for a new AI system and the software layer that should go on top of it. “What we’ve learned over the years is that you want to meet the software community where they are rather than asking them to move to you.”

I asked Feldman why the company was rebuilding so much of the hardware to power their system, rather than using already existing components. “If you were to build a Ferrari engine and put it in a Toyota, you cannot make a race car,” Feldman analogized. “Putting fast chips in Dell or [other] servers does not make fast compute. What it does is it moves the bottleneck.” Feldman explained that the CS-1 was meant to take the underlying WSE chip and give it the infrastructure required to allow it to perform to its full capability.

A diagram of the Cerebras CS-1 cooling system. Photo via Cerebras.

That infrastructure includes a high-performance water cooling system to keep this massive chip and platform operating at the right temperatures. I asked Feldman why Cerebras chose water, given that water cooling has traditionally been complicated in the data center. He said, “We looked at other technologies — freon. We looked at immersive solutions, we looked at phase-change solutions. And what we found was that water is extraordinary at moving heat.”

A side view of the CS-1 with its water and air cooling systems visible. Photo via Cerebras.

Why, then, make such a massive chip, which, as we discussed back in August, has huge engineering requirements to operate compared to smaller chips that get better yields from wafers? Feldman said that “it massively reduces communication time by using locality.”

In computer science, locality means placing data and compute in the right places within, say, a cloud, so as to minimize delays and processing friction. By having a chip that can theoretically host an entire ML model on it, there’s no need for data to flow through multiple storage clusters or ethernet cables — everything that the chip needs to work with is available almost immediately.
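
As a rough, general illustration of why locality matters (nothing Cerebras-specific), compare summing values that sit contiguously in memory with gathering the same values from scattered locations:

```python
# General locality illustration: contiguous access vs. scattered access.
import time
import numpy as np

data = np.arange(50_000_000, dtype=np.float32)
scattered = np.random.permutation(len(data))  # a random access pattern

t0 = time.perf_counter()
sequential_sum = data.sum()            # contiguous, cache-friendly
t1 = time.perf_counter()
gathered_sum = data[scattered].sum()   # same values, poor locality
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.3f}s, scattered: {t2 - t1:.3f}s")
```

Cerebras’ bet is the same idea taken to the extreme: keep the whole model’s weights and activations on one die so nothing has to cross ethernet links or hop between separate memory pools mid-computation.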

According to a statement from Cerebras and Argonne National Laboratory, Cerebras is helping to power research in “cancer, traumatic brain injury and many other areas important to society today” at the lab. Feldman said that “It was very satisfying that right away customers were using this for things that are important and not for 17-year-old girls to find each other on Instagram or some shit like that.”

(Of course, one hopes that cancer research pays as well as influencer marketing when it comes to the value of deep learning models).

Cerebras itself has grown rapidly, reaching 181 engineers today, according to the company. Feldman says that the company is heads down on customer sales and additional product development.

It has certainly been a busy time for startups in the next-generation artificial intelligence workflow space. Graphcore just announced this weekend that it was being installed in Microsoft’s Azure cloud, while I covered the funding of NUVIA, a startup led by the former lead chip designers from Apple who hope to apply their mobile backgrounds to solve the extreme power requirements these AI chips force on data centers.

Expect ever more announcements and activity in this space as deep learning continues to find new adherents in the enterprise.

Eden office management platform raises $25 million Series B

Eden, the workplace management platform that connects office managers with service providers, today announced the close of a $25 million Series B round led by Reshape. Participants in the round also include Fifth Wall Ventures, Mitsui Fudosan, RXR Realty, Thor Equities, Bessemer Venture Partners, Alate Partners, Quiet Capital, S28 Capital, Canvas Ventures, Comcast Ventures, Upshift Partners, Impala Ventures, ENIAC Ventures, and Crystal Towers, among others.

Eden was founded by Joe Du Bey and Kyle Wilkinson back in 2015 and launched out of Y Combinator as an on-demand tech repair and support service, sending IT specialists to consumers’ homes to help set up a printer or repair a cracked phone screen. Within the first year, Eden had pivoted its business entirely to the enterprise, helping B2B clients with their IT issues at much cheaper cost than employing an IT specialist full time.

By 2017, Eden had expanded well beyond IT support into other office management categories, like inventory management around supplies, cleaning, handiwork and more. Indeed, revenue shifted dramatically from Eden’s W2 wizards toward third-party vendors and service providers, with around 75 percent coming from third parties.

Today, 100 percent of Eden’s revenue comes from connecting offices with third-party providers. The company is live in 25 markets, including a few international cities like Berlin and London. Eden now has more than 2,000 service providers on the platform.

The next phase of the company, according to Du Bey, is to focus on the full spectrum of property management, zooming out to landlords and property managers.

“The broader vision we have is that everyone in the workplace will use Eden to have a better day at work, from the landlord of the building to the software engineer to the office manager, who is our primary client,” said Du Bey. “One thing we’ve learned is that there is a meaningful part of the world you can serve by working directly with the business or the office or facilities manager. But it might be the majority of our category where you really need to build a relationship with the landlord and the property manager to really be successful.”

To that end, Eden is currently in beta with software aimed at landlords and property managers that could facilitate registered guests and check-ins, as well as building-related maintenance and service issues.

Eden has raised just over $40 million in funding since inception.

SocialRank sells biz to Trufan, pivots to a mobile LinkedIn

What do you do when your startup idea doesn’t prove big enough? Run it as a scrawny but profitable lifestyle business? Or sell it to a competitor and take another swing at the fences? Social audience analytics startup SocialRank chose the latter and is going for glory.

Today, SocialRank announced it’s sold its business, brand, assets, and customers to influencer marketing campaign composer and distributor Trufan, which will run it as a standalone product. But SocialRank’s team isn’t joining up. Instead, the full six-person staff is sticking together to work on a mobile-first professional social network called Upstream, aiming to nip at LinkedIn.

SocialRank co-founder and CEO Alex Taub

Started in 2014 amidst a flurry of marketing analytics tools, SocialRank had raised $2.1 million from Rainfall Ventures and others before hitting profitability in 2017. But as the business plateaued, the team saw potential to use data science about people’s identity to get them better jobs.

“A few months ago we decided to start building a new product (what has become Upstream). And when we came to the conclusion to go all-in on Upstream, we knew we couldn’t run two businesses at the same time,” SocialRank co-founder and CEO Alex Taub tells me. “We decided then to run a bit of a process. We ended up with a few offers but ultimately felt like Trufan was the best one to continue the business into the future.”

The move lets SocialRank avoid stranding its existing customers like the NFL, Netflix, and Samsung that rely on its audience segmentation software. Instead, they’ll continue to be supported by Trufan where Taub and fellow co-founder Michael Schonfeld will become advisors.

“While we built a sustainable business, we essentially knew that if we wanted to go real big, we would need to go to the drawing board,” Taub explains.

Two-year-old Trufan has raised $1.8 million Canadian from Round13 Capital, local Toronto startup Clearbanc’s founders, and several NBA players. Trufan helps brands like Western Union and Kay Jewellers design marketing initiatives that engage their customer communities through social media. It’s raising an extra $400,000 USD in venture debt from Round13 to finance the acquisition, which should make Trufan cash-flow positive by the end of the year.

Why isn’t the SocialRank team going along for the ride? Taub said LinkedIn was leaving too much opportunity on the table. While it’s good for putting resumes online and searching for people, “all the social stuff are sort of bolt-ons that came after Facebook and Twitter arrived. People forget, but LinkedIn is the oldest active social network out there,” Taub tells me, meaning it’s a bit outdated.

Trufan’s team

Rather than attack head-on, the newly forged Upstream plans to pick the Microsoft-owned professional network apart with better approaches to certain features. “I love the idea of ‘the unbundling of LinkedIn,’ à la what’s been happening with Craigslist for the past few years,” says Taub. “The first foundational piece we are building is a social professional network around giving and getting help. We’ll also be focused on the unbundling of the groups aspect of LinkedIn.”

Taub concludes that entrepreneurs can shackle themselves to impossible goals if they take too much venture capital for the wrong business. As we’ve seen with SoftBank, investors demand huge returns that can require pursuing risky and unsustainable expansion strategies.

“We realized that SocialRank had the potential to be a few-hundred-million-dollar revenue business, but venture growth wasn’t exactly the model for it,” Taub says. “You need the potential of billions in revenue and a steep growth curve.” A professional network for the smartphone age has that kind of addressable market. And the team might feel better getting out of bed each day knowing they’re unlocking career paths for people instead of just getting them to click ads.