IBM is moving OpenPower Foundation to The Linux Foundation

IBM makes the Power Series chips, and as part of that has open-sourced some of the underlying technologies to encourage wider use of these chips. The open-source pieces have been part of the OpenPower Foundation. Today, the company announced it was moving the foundation under The Linux Foundation, and while it was at it, announced it was open-sourcing several other important bits.

Ken King, general manager for OpenPower at IBM, says that at this point in his organization’s evolution, they wanted to move it under the auspices of The Linux Foundation. “We are taking the OpenPower Foundation, and we are putting it as an entity or project underneath The Linux Foundation with the mindset that we are now bringing more of an open governance approach and open governance principles to the foundation,” King told TechCrunch.

But IBM didn’t stop there. It also announced that it was open-sourcing some of the technical underpinnings of the Power Series chip to make it easier for developers and engineers to build on top of the technology. Perhaps most importantly, the company is open-sourcing the Power Instruction Set Architecture (ISA). These are “the definitions developers use for ensuring hardware and software work together on Power,” the company explained.

King sees open-sourcing this technology as an important step for a number of reasons around licensing and governance. “The first thing is that we are taking the ability to be able to implement what we’re licensing, the ISA instruction set architecture, for others to be able to implement on top of that instruction set royalty free with patent rights,” he explained.

The company is also putting this under an open governance workgroup at the OpenPower Foundation. This matters to open-source community members because it provides a layer of transparency that might otherwise be lacking. What that means in practice is that any changes will be subject to a majority vote, so long as the changes meet compatibility requirements, King said.

Jim Zemlin, executive director at the Linux Foundation, says that making all of this part of the Linux Foundation open-source community could drive more innovation. “Instead of a very, very long cycle of building an application and working separately with hardware and chip designers, because all of this is open, you’re able to quickly build your application, prototype it with hardware folks, and then work with a service provider or a company like IBM to take it to market. So there’s not tons of layers in between the actual innovation and value captured by industry in that cycle,” Zemlin explained.

In addition, IBM made several other announcements around open-sourcing other Power Chip technologies designed to help developers and engineers customize and control their implementations of Power chip technology. “IBM will also contribute multiple other technologies including a softcore implementation of the Power ISA, as well as reference designs for the architecture-agnostic Open Coherent Accelerator Processor Interface (OpenCAPI) and the Open Memory Interface (OMI). The OpenCAPI and OMI technologies help maximize memory bandwidth between processors and attached devices, critical to overcoming performance bottlenecks for emerging workloads like AI,” the company said in a statement.

The softcore implementation of the Power ISA, in particular, should give developers more control and even enable them to build their own instruction sets, Hugh Blemings, executive director of the OpenPower Foundation, explained. “They can now actually try crafting their own instruction sets, and try out new ways of the accelerated data processes and so forth at a lower level than previously possible,” he said.

The company is announcing all of this today at The Linux Foundation Open Source Summit and OpenPower Summit in San Diego.

Box introduces Box Shield with increased security controls and threat protection

Box has always had to balance sharing content broadly with protecting it as it moves through the world, but the more you share, the more likely it is that something will go wrong, as with the misconfigured shared links that surfaced earlier this year. In an effort to make the system more secure, the company today announced Box Shield in beta, a set of tools that helps employees sharing Box content better understand who they are sharing with, while helping the security team see when content is being misused.

Link sharing is a natural part of what companies do with Box, and as chief product and chief strategy officer Jeetu Patel says, you don’t want to change the way people use Box. Instead, he says, it’s his job to make it easier to keep that sharing secure, and that is the goal of today’s announcement.

“We’ve introduced Box Shield, which embeds these content controls and protects the content in a way that doesn’t compromise user experience, while ensuring safety for the administrator and the company, so their intellectual property is protected,” Patel explained.

He says this involves two components. The first is about raising user awareness and helping users understand what they’re sharing. Sometimes companies deliberately use Box as a content management backend to distribute files like documentation on the internet, and want them indexed by Google. Other times, however, content gets exposed through misuse of the file sharing component, and Box wants to fix that with this release by making it clear to users who they are sharing with and what that means.

Box has updated the experience in its web and mobile products to make it much clearer, through messaging and interface design, what the sharing level a user has chosen means. Of course, some users will ignore all these messages, so there is a second component that gives administrators more control.

Box Shield Smart Access

Box Shield access controls. Photo: Box

This involves helping customers build guardrails into the product to prevent the leakage of entire categories of documents that should never be shared, like internal business plans, salary lists or financial documents, and even to protect particular files or folders at a granular level. “The second thing we’re trying to do is make sure that Box itself has some built-in security guardrails and boundary conditions that can help people reduce the risk around employee negligence or inadvertent disclosures, and then make sure that you have some very precision-based, granular security controls that can be applied to classifications that you’ve set on content,” he explained.

In addition, the company wants to help customers detect when employees are abusing content, perhaps sharing sensitive data like customer lists with a personal account, and flag this for the security team. This involves flagging anomalous downloads, suspicious sessions or unusual locations inside Box.

The tool can also work with existing security products already in place, so that whatever classification has been applied in Box travels with the file, and anomalies or misuse can be captured by the company’s security apparatus before the file leaves the company’s boundaries.

While Patel acknowledges there is no way to prevent user misuse or abuse in all cases, he says that by implementing Box Shield the company is attempting to give customers a set of tools to reduce the chance of it going undetected. Box Shield is in private beta today and will be released in the fall.

Forced Password Reset? Check Your Assumptions

Almost weekly now I hear from an indignant reader who suspects a data breach at a Web site they frequent that has just asked the reader to reset their password. Further investigation almost invariably reveals that the password reset demand was not the result of a breach but rather the site’s efforts to identify customers who are reusing passwords from other sites that have already been hacked.

But ironically, many companies taking these proactive steps soon discover that their explanation as to why they’re doing it can get misinterpreted as more evidence of lax security. This post attempts to unravel what’s going on here.

Over the weekend, a follower on Twitter included me in a tweet sent to California-based job search site Glassdoor, which had just sent him the following notice:

The Twitter follower expressed concern about this message, because it suggested to him that in order for Glassdoor to have done what it described, the company would have had to be storing its users’ passwords in plain text. I replied that this was in fact not an indication of storing passwords in plain text, and that many companies are now testing their users’ credentials against lists of hacked credentials that have been leaked and made available online.

The reality is that Facebook, Netflix and a number of other big-name companies regularly comb through huge data leak troves for credentials that match those of their customers, and then force a password reset for those users. Some are even checking for password re-use on all new account signups.

The idea here is to stymie a massively pervasive problem facing all companies that do business online today: Namely, “credential-stuffing attacks,” in which attackers take millions or even billions of email addresses and corresponding cracked passwords from compromised databases and see how many of them work at other online properties.

So how does the defense against this daily deluge of credential stuffing work? A company employing this strategy will first extract from these leaked credential lists any email addresses that correspond to their current user base.

From there, the corresponding cracked (plain text) passwords are fed into the same process that the company relies upon when users log in: That is, the company feeds those plain text passwords through its own password “hashing” or scrambling routine.

Password hashing is designed to be a one-way function which scrambles a plain text password so that it produces a long string of numbers and letters. Not all hashing methods are created equal, and some of the most commonly used methods — MD5 and SHA-1, for example — can be far less secure than others, depending on how they’re implemented (more on that in a moment). Whatever the hashing method used, it’s the hashed output that gets stored, not the password itself.

Back to the process: If a user’s plain text password from a hacked database matches the output of what a company would expect to see after running it through their own internal hashing process, that user is then prompted to change their password to something truly unique.

Now, password hashing methods can be made more secure by amending the password with what’s known as a “salt” — or random data added to the input of a hash function to guarantee a unique output. And many readers of the Twitter thread on Glassdoor’s approach reasoned that the company couldn’t have been doing what it described without also forgoing this additional layer of security.

My tweeted explanatory reply as to why Glassdoor was doing this was (in hindsight) incomplete and in any case not as clear as it should have been. Fortunately, Glassdoor’s chief information officer Anthony Moisant chimed in to the Twitter thread to explain that the salt is in fact added as part of the password testing procedure.

“In our [user] database, we’ve got three columns — username, salt value and scrypt hash,” Moisant explained in an interview with KrebsOnSecurity. “We apply the salt that’s stored in the database and the hash [function] to the plain text password, and that resulting value is then checked against the hash in the database we store. For whatever reason, some people have gotten it into their heads that there’s no possible way to do these checks if you salt, but that’s not true.”
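As a sketch of how such a check can work with salted hashes, here is a minimal Python example using the standard library’s scrypt implementation. The record layout follows Moisant’s description (salt plus scrypt hash), but the function names, cost parameters and sample passwords are illustrative, not Glassdoor’s actual code.

```python
import hashlib
import secrets

# Illustrative scrypt cost parameters; a real site tunes n, r, p to its
# hardware budget. maxmem is raised so the ~16 MB working set fits.
def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, maxmem=2**26)

def is_breached(stored_salt: bytes, stored_hash: bytes,
                leaked_passwords: list[str]) -> bool:
    """True if any leaked plain text password hashes to the stored value."""
    for candidate in leaked_passwords:
        if secrets.compare_digest(hash_password(candidate, stored_salt),
                                  stored_hash):
            return True
    return False

# Simulate one user record: salt and scrypt hash (plus a username in practice).
salt = secrets.token_bytes(16)
stored = hash_password("hunter2", salt)

print(is_breached(salt, stored, ["letmein", "hunter2"]))  # True: reuse found
print(is_breached(salt, stored, ["letmein", "qwerty"]))   # False
```

Because the stored per-user salt is applied before hashing, the check runs exactly as it does at login time, and no plain text password ever needs to be stored.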

CHECK YOUR ASSUMPTIONS

You — the user — can’t be expected to know or control what password hashing methods a given site uses, if indeed they use them at all. But you can control the quality of the passwords you pick.

I can’t stress this enough: Do not re-use passwords. And don’t recycle them either. Recycling involves rather lame attempts to make a reused password unique by simply adding a digit or changing the capitalization of certain characters. Crooks who specialize in password attacks are wise to this approach as well.

If you have trouble remembering complex passwords (and this describes most people), consider relying instead on password length, which is a far more important determinant of whether a given password can be cracked by available tools in any timeframe that might be reasonably useful to an attacker.

In that vein, it’s safer and wiser to focus on picking passphrases instead of passwords. Passphrases are collections of multiple (ideally unrelated) words mushed together. Passphrases are not only generally more secure, they also have the added benefit of being easier to remember.
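To put rough numbers on that, here is a back-of-the-envelope comparison, assuming passwords are chosen uniformly at random (which actually favors the short complex password):

```python
import math

def search_space_bits(alphabet_size: int, length: int) -> float:
    """Bits an attacker must brute-force for a uniformly random password."""
    return length * math.log2(alphabet_size)

# An 8-character password drawn from all ~95 printable ASCII characters
# versus a 16-character passphrase using only lowercase letters.
print(round(search_space_bits(95, 8)))   # 53 bits
print(round(search_space_bits(26, 16)))  # 75 bits: length wins
```

Every extra bit doubles the brute-force work, so the longer, simpler passphrase is dramatically harder to crack despite its smaller character set.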

According to a recent blog entry by Microsoft group program manager Alex Weinert, none of the above advice about password complexity amounts to a hill of beans from the attacker’s standpoint.

Weinert’s post makes a compelling argument that as long as we’re stuck with passwords, taking full advantage of the most robust form of multi-factor authentication (MFA) offered by a site you frequent is the best way to deter attackers. Twofactorauth.org has a handy list of your options here, broken down by industry.

“Your password doesn’t matter, but MFA does,” Weinert wrote. “Based on our studies, your account is more than 99.9% less likely to be compromised if you use MFA.”

Glassdoor’s Moisant said the company doesn’t currently offer MFA for its users, but that it is planning to roll that out later this year to both consumer and business users.

Password managers also can be useful for those who feel encumbered by having to come up with passphrases or complex passwords. If you’re uncomfortable with entrusting a third-party service or application to handle this process for you, there’s absolutely nothing wrong with writing down your passwords, provided a) you do not store them in a file on your computer or taped to your laptop or screen or whatever, and b) that your password notebook is stored somewhere relatively secure, i.e., not in your purse or car, but in something like a locked drawer or safe.

Although many readers will no doubt take me to task on that last bit of advice, as in all things security related it’s important not to let the perfect become the enemy of the good. Many people (think moms/dads/grandparents) can’t be bothered to use password managers — even when you go through the trouble of setting them up on their behalf. Instead, without an easier, non-technical method they will simply revert to reusing or recycling passwords.

What’s New With Bluekeep? Are Your Devices Vulnerable?

On May 14th, Microsoft released its May 2019 Patch Tuesday updates. Among several security fixes, the release included a fix for a high-severity security flaw in Microsoft Windows’ RDP (Remote Desktop) component. The vulnerability, dubbed “Bluekeep” and cataloged as CVE-2019-0708, allows attackers to gain remote code execution on machines without being authenticated. The vulnerable versions of Windows are Windows XP, Windows Server 2003, Windows Server 2008 R2 and Windows 7.

The vulnerability is so severe that Microsoft released exceptional patches for no-longer-supported versions of the OS: Windows XP and Windows Server 2003.

Are Your Devices Vulnerable To Bluekeep?

Recently, Shodan added a new dashboard for tracking Bluekeep, Eternalblue and Heartbleed exposure per country. In the US alone, for example, there are currently 101,744 unpatched servers vulnerable to Bluekeep.

image of shodan

According to BinaryEdge, there are almost 1 million exposed vulnerable machines worldwide.

This is why the security community has marked “Bluekeep” as the next “WannaCry”: it has the potential for mass damage on the scale of that infamous malware’s outbreak back in 2017.

Has Bluekeep Been Seen In The Wild?

Despite the vulnerability’s high profile and the fact that several researchers have tweeted about having a working POC (proof of concept), there is still no evidence of an active campaign using this exploit.

Although the security community has shown respectable signs of maturity by not sharing working POCs (for all the understandable reasons), one can’t help but wonder why the “bad guys” haven’t abused this vulnerability so far. Aren’t they “talented” enough to build a stable exploit? Is it a matter of ROI, with email phishing being a much easier way to infect machines? Or maybe the “bad guys” are keeping their exploits for the right time and for targeted attacks only?

On July 22nd @MalwareTech, a popular Twitter account focused on hacking and security, tweeted about a published Chinese slide deck “explaining how to turn the crash POC into RCE.”

image of malwaretech tweet

On July 23rd, security researcher @polarply tweeted the following:

image of polarply tweet

On the same day, a researcher going by the name 0xeb-bp published a Bluekeep POC aimed at Windows XP. Trained hackers could expand this POC relatively easily into a working malicious exploit, and not only for Windows XP. As 0xeb-bp himself put it in his published analysis:

image of BlueKeep poc

Several cybersecurity companies have started to use Bluekeep exploits as part of their pentest services. Among them are Immunity Inc., which added a Bluekeep exploit to Canvas, its pentest framework, and NCC Group, which announced at the beginning of August that its consultants are now “armed” with a Bluekeep exploit.

On August 7th, Metasploit added a new DoS exploit to its existing Bluekeep module.

image of metasploit bluekeep module

The point is clear: winter is coming, and it’s getting closer…

SentinelOne Has You Covered

As an NG-AV solution vendor, SentinelOne deploys various honeypots all around the internet. We constantly observe thousands of RDP scans per day; these scans try to brute-force RDP credentials. In the past few weeks, however, our honeypots have started to detect scans searching for machines vulnerable to Bluekeep. At this stage, we haven’t seen exploitation attempts, but the enumeration of such machines indicates that there are “players” preparing for the right time to attack (waiting for an exploit to become public?).

Analyzing 20 days of monitored data revealed a pattern of three scans per day for Bluekeep. Following is a list of the source IPs that have scanned our honeypots for that vulnerability.

IOC – source IPs of the Bluekeep Scanners:

142.93.153.141
169.197.108.6
173.255.204.83
184.105.139.68
184.105.139.70
185.156.177.219
185.209.0.70
185.230.127.229
207.154.245.162
209.126.230.71
212.83.191.95
212.92.112.81
212.92.122.96
216.218.206.68
45.32.64.125
5.45.73.53
54.39.134.24
54.39.134.36
54.39.134.39
66.36.230.17
74.82.47.2
74.82.47.4
78.128.112.70
80.82.77.240
87.236.212.183

Almost all of the above IPs are well-known botnet IPs that scan the internet and try to brute-force the credentials of exposed protocols like FTP, Telnet, SSH and RDP in order to log in to those machines. It seems that these botnets are expanding their business to also scan the internet for machines vulnerable to Bluekeep.
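For defenders who want to act on the IOC list above, a trivial sketch is to load it into a set and flag matching source IPs while triaging firewall or RDP logs. The helper name here is ours, and a real deployment would pull a regularly updated threat feed rather than a hardcoded list.

```python
# The 25 scanner IPs from the IOC list above.
BLUEKEEP_SCANNERS = {
    "142.93.153.141", "169.197.108.6", "173.255.204.83", "184.105.139.68",
    "184.105.139.70", "185.156.177.219", "185.209.0.70", "185.230.127.229",
    "207.154.245.162", "209.126.230.71", "212.83.191.95", "212.92.112.81",
    "212.92.122.96", "216.218.206.68", "45.32.64.125", "5.45.73.53",
    "54.39.134.24", "54.39.134.36", "54.39.134.39", "66.36.230.17",
    "74.82.47.2", "74.82.47.4", "78.128.112.70", "80.82.77.240",
    "87.236.212.183",
}

def is_known_scanner(source_ip: str) -> bool:
    """Flag a log line's source IP if it matches the published IOCs."""
    return source_ip in BLUEKEEP_SCANNERS

print(is_known_scanner("74.82.47.2"))  # True
print(is_known_scanner("192.0.2.1"))   # False (TEST-NET example address)
```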

SentinelOne Protects Enterprises From Bluekeep

SentinelOne Agent introduces detection and blocking of exploits targeting the Bluekeep vulnerability, including the POC mentioned above.

If you have any questions about Bluekeep and how it might affect your organization, or if you would like to try a free demo to see how SentinelOne can protect your business from various cyberthreats from ransomware and cryptominers to targeted attacks and APT groups, don’t hesitate to contact us today.



Five great reasons to attend TechCrunch’s Enterprise show Sept. 5 in SF

The vast enterprise tech category is Silicon Valley’s richest, and today it’s poised to change faster than ever before. That’s probably the biggest reason to come to TechCrunch’s first-ever show focused entirely on enterprise. But here are five more reasons to commit to joining TechCrunch’s editors on September 5 at San Francisco’s Yerba Buena Center for an outstanding day (agenda here) addressing the tech tsunami sweeping through enterprise. 

No. 1: Artificial intelligence
AI is at once the most consequential and the most hyped technology, and no one doubts it will change business software and increase productivity like few, if any, technologies before it. To peek ahead into that future, TechCrunch will interview Andrew Ng, arguably the world’s most experienced AI practitioner at huge companies (Baidu, Google) as well as at startups. AI will be a theme across every session, but we’ll address it again head-on in a panel with investor Jocelyn Goldfein (Zetta), founder Bindu Reddy (Reality Engines) and executive John Ball (Salesforce / Einstein).

No. 2: Data, the cloud and Kubernetes
If AI is at the dawn of tomorrow, cloud transformation is the high noon of today. Indeed, 90% of the world’s data was created in the past two years, and no enterprise can keep its data hoard on-prem forever. Azure CTO Mark Russinovich will discuss Microsoft’s vision for the cloud. Leaders in the open-source Kubernetes revolution — Joe Beda (VMware), Aparna Sinha (Google) and others — will dig into what Kubernetes means to companies making the move to the cloud. And last, there is the question of how to find signal in all that data, which will bring three visionary founders to the stage: Benoit Dageville (Snowflake), Ali Ghodsi (Databricks) and Murli Thirumale (Portworx).

No. 3: Everything else on the main stage!
Let’s start with a fireside chat with SAP CEO Bill McDermott and Qualtrics chief experience officer Julie Larson-Green. We have top investors talking about where they are making their bets, and security experts talking data and privacy. And then there is quantum computing, the technology revolution waiting on the other side of AI: Jay Gambetta, the principal theoretical scientist behind IBM’s quantum computing effort; Jim Clarke, the director of quantum hardware at Intel Labs; and Krysta Svore, who leads Microsoft’s quantum effort.

All told, there are 21 programming sessions.

No. 4: Network and get your questions answered
There will be two Q&A breakout sessions with top enterprise investors; this is for founders (and anyone else) to query investors directly. Plus, TechCrunch’s unbeatable CrunchMatch app makes it really easy to set up meetings with the other attendees, an incredible array of folks, plus the 20 early-stage startups exhibiting on the expo floor.

No. 5: SAP
Enterprise giant SAP is our sponsor for the show, and they are not only bringing a squad of top executives, they are producing four parallel track sessions featuring SAP chief innovation officer Max Wessel, SAP chief designer and futurist Martin Wezowski and SAP.iO managing director Ram Jambunathan, in sessions including how to scale up an enterprise startup, how startups win large enterprise customers, and what the enterprise future looks like.

Check out the complete agenda. Don’t miss this show! This line-up is a view into the future like none other. 

Grab your $349 tickets today, and don’t wait until the day of the show to book, because prices go up at the door!

We still have two Startup Demo Tables left. Each table comes with four tickets and a prime location to demo your startup on the expo floor. Book your demo table now before they’re all gone!

Ally raises $8M Series A for its OKR solution

OKRs, or Objectives and Key Results, are a popular planning method in Silicon Valley. As with most methods that make you fill in some form once every quarter, I’m pretty sure employees find them rather annoying and a waste of their time. Ally wants to change that and make the process more useful. The company today announced that it has raised an $8 million Series A round led by Accel Partners, with participation from Vulcan Capital, Founders Co-op and Lee Fixel. The company, which launched in 2018, previously raised a $3 million seed round.

Ally founder and CEO Vetri Vellore tells me that he learned his management lessons and the value of OKRs at his last startup, Chronus. After years of managing large teams at enterprises like Microsoft, he found himself challenged to manage a small team at a startup. “I went and looked for new models of running a business execution. And OKRs were one of those things I stumbled upon. And it worked phenomenally well for us,” Vellore said. That’s where the idea for Ally was born; Vellore pursued it after selling his last startup.

Most companies that adopt this methodology, though, tend to work with spreadsheets and Google Docs. Over time, that simply doesn’t work, especially as companies get larger. Ally, then, is meant to replace these other tools. The service is currently in use at “hundreds” of companies in more than 70 countries, Vellore tells me.

One of its early adopters was Remitly. “We began by using shared documents to align around OKRs at Remitly. When it came time to roll out OKRs to everyone in the company, Ally was by far the best tool we evaluated. OKRs deployed using Ally have helped our teams align around the right goals and have ultimately driven growth,” said Josh Hug, COO of Remitly.

Desktop Team OKRs Screenshot

Vellore tells me that he has seen teams go from annual or bi-annual OKRs to more frequently updated goals, too, which is something that’s easier to do when you have a more accessible tool for it. Nobody wants to use yet another tool, though, so Ally features deep integrations into Slack, with other integrations in the works (something Ally will use this new funding for).

Since adopting OKRs isn’t always easy for companies that previously used other methodologies (or nothing at all), Ally also offers training and consulting services with online and on-site coaching.

Pricing for Ally starts at $7 per month per user for a basic plan, but the company also offers a flat $29 per month plan for teams with up to 10 users, as well as an enterprise plan, which includes some more advanced features and single sign-on integrations.

Join The New Stack for Pancake & Podcast with Q&A at TC Sessions: Enterprise

Popular enterprise news and research site The New Stack is coming to TechCrunch Sessions: Enterprise on September 5 for a special Pancake & Podcast session with live Q&A, featuring, you guessed it, delicious pancakes and awesome panelists!

Here’s the “short stack” of what’s going to happen:

  • Pancake buffet opens at 7:45 am on Thursday, September 5 at TC Sessions: Enterprise
  • At 8:15 am the panel discussion/podcast kicks off; the topic, “The People and Technology You Need to Build a Modern Enterprise”
  • After the discussion, the moderators will host a live audience Q&A session with the panelists
  • Once the Q&A is done, attendees will get the chance to win some amazing raffle prizes

You can only take part in this fun pancake-breakfast podcast if you register for a ticket to TC Sessions: Enterprise. Use the code TNS30 to get 30% off the conference registration price!

Here’s the longer version of what’s going to happen:

At 8:15 a.m., The New Stack founder and publisher Alex Williams takes the stage as the moderator and host of the panel discussion. Our topic: “The People and Technology You Need to Build a Modern Enterprise.” We’ll start with intros of our panelists and then dive into the topic with Sid Sijbrandij, founder and CEO at GitLab, and Frederic Lardinois, enterprise reporter and editor at TechCrunch, as our initial panelists. More panelists to come!

Then it’s time for questions. Questions we could see getting asked (hint, hint): Who’s on your team? What makes a great technical team for the enterprise startup? What are the observations a journalist has about how the enterprise is changing? What about when the time comes for AI? Who will I need on my team?

And just before 9 a.m., we’ll pick a ticket out of the hat and announce our raffle winner. It’s the perfect way to start the day.

On a side note, the pancake breakfast discussion will be published as a podcast on The New Stack Analysts.

But there’s only one way to get a prize and network with fellow attendees, and that’s by registering for TC Sessions: Enterprise and joining us for a short stack with The New Stack. Tickets are now $349, but you can save 30% with code TNS30.

The five technical challenges Cerebras overcame in building the first trillion-transistor chip

Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today — and it is a doozy. The “Wafer Scale Engine” has 1.2 trillion transistors (the most ever), measures 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative).

CS Wafer Keyboard Comparison

Cerebras’ Wafer Scale Engine is larger than a typical Mac keyboard (via Cerebras Systems).

It’s made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry’s big confabs for product introductions and roadmaps, drawing various levels of oohs and aahs from attendees. You can read more about the chip from Tiernan Ray at Fortune, and in the white paper from Cerebras itself.

Superlatives aside, though, the technical challenges that Cerebras had to overcome to reach this milestone are, I think, the more interesting story here. I sat down with founder and CEO Andrew Feldman this afternoon to discuss what his 173 engineers have been building quietly just down the street these past few years, with $112 million in venture capital funding from Benchmark and others.

Going big means nothing but challenges

First, a quick background on how the chips that power your phones and computers get made. Fabs like TSMC take standard-sized silicon wafers and divide them into individual chips by using light to etch the transistors into the chip. Wafers are circles and chips are squares, and so there is some basic geometry involved in subdividing that circle into a clear array of individual chips.

One big challenge in this lithography process is that errors can creep into the manufacturing process, requiring extensive testing to verify quality and forcing fabs to throw away poorly performing chips. The smaller and more compact the chip, the less likely any individual chip will be inoperative, and the higher the yield for the fab. Higher yield equals higher profits.
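The yield-versus-area trade-off can be sketched with the classic Poisson yield model, in which yield ≈ e^(−D·A) for defect density D and die area A. The defect density below is purely illustrative (real fab numbers are proprietary), but the trend is the point:

```python
import math

def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Probability a die has zero defects under a Poisson defect model."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

D0 = 0.1  # hypothetical defects per cm^2

print(f"{poisson_yield(D0, 1.0):.1%}")    # small die: ~90% good
print(f"{poisson_yield(D0, 8.0):.1%}")    # large GPU-sized die: ~45% good
print(f"{poisson_yield(D0, 462.0):.1e}")  # full 46,225 mm^2 wafer: ~0
```

Under any realistic defect density, a defect-free whole wafer is essentially impossible, which is the yield problem whole-wafer designs must engineer around.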

Cerebras throws out the idea of etching a bunch of individual chips onto a single wafer in favor of using the whole wafer itself as one gigantic chip. That allows all of those individual cores to connect with one another directly — vastly speeding up the critical feedback loops used in deep learning algorithms — but comes at the cost of huge manufacturing and design challenges to create and manage these chips.

CS Wafer Sean

Cerebras’ technical architecture and design was led by co-founder Sean Lie. Feldman and Lie worked together on a previous startup called SeaMicro, which sold to AMD in 2012 for $334 million (via Cerebras Systems).

The first challenge the team ran into, according to Feldman, was handling communication across the “scribe lines.” While Cerebras’ chip encompasses a full wafer, today’s lithography equipment still has to act like there are individual chips being etched into the silicon wafer. So the company had to invent new techniques to allow each of those individual chips to communicate with each other across the whole wafer. Working with TSMC, they not only invented new channels for communication, but also had to write new software to handle chips with trillion-plus transistors.

The second challenge was yield. With a chip covering an entire silicon wafer, a single imperfection in the etching of that wafer could render the entire chip inoperative. This has been the blocker on whole-wafer technology for decades: due to the laws of physics, it is essentially impossible to etch a trillion transistors with perfect accuracy repeatedly.

Cerebras approached the problem using redundancy by adding extra cores throughout the chip that would be used as backup in the event that an error appeared in that core’s neighborhood on the wafer. “You have to hold only 1%, 1.5% of these guys aside,” Feldman explained to me. Leaving extra cores allows the chip to essentially self-heal, routing around the lithography error and making a whole-wafer silicon chip viable.
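The spare-core idea can be illustrated with a toy sketch (this is not Cerebras' actual routing design, and the core counts are invented): fabricate slightly more physical cores than the logical count you promise, then map each logical core to the next working physical core, skipping any that failed on the wafer.

```python
# Toy sketch of defect tolerance via spare cores (hypothetical design).

def map_logical_cores(num_logical: int, physical_defects: set[int],
                      spare_fraction: float = 0.015) -> list[int]:
    """Map logical core IDs to working physical core IDs, skipping defects."""
    num_physical = int(num_logical * (1 + spare_fraction))
    mapping = []
    for phys in range(num_physical):
        if phys in physical_defects:
            continue  # route around the lithography error
        mapping.append(phys)
        if len(mapping) == num_logical:
            return mapping
    raise RuntimeError("more defects than spares; wafer not salvageable")

# 1,000 logical cores with 1.5% spares can absorb up to 15 defects.
table = map_logical_cores(1000, physical_defects={3, 42, 999})
```

Because only the mapping table changes, software sees a fully working grid of cores even when the physical wafer has flaws.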

Entering uncharted territory in chip design

Those first two challenges — communicating across the scribe lines between chips and handling yield — have flummoxed chip designers studying whole-wafer chips for decades. But they were known problems, and Feldman said that they were actually easier to solve than expected by re-approaching them using modern tools.

He likens the challenge to climbing Mount Everest. “It’s like the first set of guys failed to climb Mount Everest, they said, ‘Shit, that first part is really hard.’ And then the next set came along and said ‘That shit was nothing. That last hundred yards, that’s a problem.’ ”

And indeed, according to Feldman, the toughest challenges for Cerebras were the next three, since no other chip designer had gotten past the scribe-line communication and yield challenges to discover what came next.

The third challenge Cerebras confronted was handling thermal expansion. Chips get extremely hot in operation, but different materials expand at different rates. That means the connectors tethering a chip to its motherboard also need to thermally expand at precisely the same rate, lest cracks develop between the two.

As Feldman explained, “How do you get a connector that can withstand [that]? Nobody had ever done that before, [and so] we had to invent a material. So we have PhDs in material science, [and] we had to invent a material that could absorb some of that difference.”

Once a chip is manufactured, it needs to be tested and packaged for shipment to original equipment manufacturers (OEMs) who add the chips into the products used by end customers (whether data centers or consumer laptops). There is a challenge though: Absolutely nothing on the market is designed to handle a whole-wafer chip.


Cerebras designed its own testing and packaging system to handle its chip (via Cerebras Systems).

“How on earth do you package it? Well, the answer is you invent a lot of shit. That is the truth. Nobody had a printed circuit board this size. Nobody had connectors. Nobody had a cold plate. Nobody had tools. Nobody had tools to align them. Nobody had tools to handle them. Nobody had any software to test,” Feldman explained. “And so we have designed this whole manufacturing flow, because nobody has ever done it.” Cerebras’ technology is much more than just the chip it sells — it also includes all of the associated machinery required to actually manufacture and package those chips.

Finally, all that processing power in one chip requires immense power and cooling. Cerebras’ chip uses 15 kilowatts of power to operate — a prodigious amount of power for an individual chip, although relatively comparable to a modern-sized AI cluster. All that power also needs to be cooled, and Cerebras had to design a new way to deliver both for such a large chip.

It essentially approached the problem by turning the chip on its side, in what Feldman called “using the Z-dimension.” The idea was that rather than trying to move power and cooling horizontally across the chip as is traditional, power and cooling are delivered vertically at all points across the chip, ensuring even and consistent access to both.

And so those were the next three challenges — thermal expansion, packaging and power/cooling — that the company has worked around the clock to solve these past few years.

From theory to reality

Cerebras has a demo chip (I saw one, and yes, it is roughly the size of my head), and it has started to deliver prototypes to customers, according to reports. The big challenge, though, as with all new chips, is scaling production to meet customer demand.

For Cerebras, the situation is a bit unusual. Because it places so much computing power on one wafer, customers don’t necessarily need to buy dozens or hundreds of chips and stitch them together to create a compute cluster. Instead, they may only need a handful of Cerebras chips for their deep-learning needs. The company’s next major phase is to reach scale and ensure a steady delivery of its chips, which it packages as a whole system “appliance” that also includes its proprietary cooling technology.

Expect to hear more details of Cerebras technology in the coming months, particularly as the fight over the future of deep learning processing workflows continues to heat up.

Reputation.com nabs $30M more to help enterprises manage their profiles online

In an era when endorsements from influential personalities online can make or break a product, a startup that helps companies harness all the long-tail firepower they can muster to get their name out there in a good way has raised funding to expand deeper into feedback and customer-experience territory. Reputation.com, which works with big enterprises in areas like automotive and healthcare to improve their visibility online and give businesses more accurate reports on how customers and others perceive their brands, has raised $30 million in equity financing. CEO Joe Fuca said the company will use the money to continue expanding its tech platform to source more feedback and to future-proof it for further global expansion.

The funding — led by Ascension Ventures, with participation also from new backers Akkadian Ventures, Industry Ventures and River City Ventures and returning investors Kleiner Perkins, August Capital, Bessemer Venture Partners, Heritage Group and Icon Ventures — is the second round Reputation.com has raised since its pivot away from services aimed at individuals. Fuca said the company's valuation is tripling with this round, and while he wouldn't go into the details, from what I understand from sources (supported by data in PitchBook), it had been valued at around $120-130 million in its last round, making it worth between $360 million and $390 million now.

Part of the reason the company's valuation has tripled is its growth. The company doesn't disclose many customer names (for possibly obvious reasons), but said that three of the top five automotive OEMs, as well as over 10,000 auto dealerships in the U.S., use it, with those numbers now also growing in Europe. Among healthcare providers, it now has 250 customers — including three of the top five — and in the world of property management, more than 100 companies are using Reputation.com. Other verticals that use the company include financial services, hospitality and retail.

The company competes with other firms that provide services like SEO and online profile management, and sees its big challenge as convincing businesses that there is more to having a strong profile than just an NPS score (providers of which are also competitors). So, in addition to the metrics usually used to compile that figure (typically based on customer feedback surveys), Reputation.com uses unstructured data as well (for example, sentiment analysis from social media) and applies algorithms to this data to calculate a Reputation Score.
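As a purely hypothetical illustration — Reputation.com has not published its scoring formula, and the weights and scales below are invented — blending a survey-derived NPS with sentiment from unstructured data might look something like this:

```python
# Hypothetical sketch of a composite reputation score; the blend weights
# and normalization are invented for illustration, not Reputation.com's.

def nps(promoters: int, passives: int, detractors: int) -> float:
    """Net Promoter Score: % promoters minus % detractors (range -100..100)."""
    total = promoters + passives + detractors
    return 100.0 * (promoters - detractors) / total

def composite_score(nps_value: float, avg_sentiment: float,
                    w_nps: float = 0.6, w_sentiment: float = 0.4) -> float:
    """Blend NPS (-100..100) and average sentiment (-1..1) onto a 0..100 scale."""
    nps_norm = (nps_value + 100.0) / 200.0   # normalize to 0..1
    sent_norm = (avg_sentiment + 1.0) / 2.0  # normalize to 0..1
    return 100.0 * (w_nps * nps_norm + w_sentiment * sent_norm)

# 100 survey responses plus a mildly positive social-media sentiment.
score = composite_score(nps(60, 25, 15), avg_sentiment=0.3)
```

The point of such a blend is that a single survey metric and a stream of unstructured feedback end up on one comparable scale.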

Reputation.com has actually been around since 2006, with its original concept being managing individuals' online reputations — not exactly in the Klout or PR-management sense, but with a (now very prescient-sounding) intention of providing a way for people to better control their personal information online. Originally named ReputationDefender and founded by Michael Fertik, it was a pioneer in what came to be called personal information management.

The company proposed an idea of a “vault” for your information, which could still be used and appropriated by so-called data brokers (which help feed the wider ad-tech and marketing tech machines that underpin a large part of the internet economy), but would be done with user consent and compensation.

The idea was hard to scale, however. "I think it was an addressable market issue," said Fuca, who took over as CEO last year as the company was reorienting itself to enterprise services (it sold off the consumer/individual business at the same time to a private equity firm), with Fertik taking the role of executive chairman, among other projects. "Individuals seeking reputation defending is only a certain market size."

Not so in the world of enterprise, the area the startup (and I think you can call Reputation.com a startup, given its pivot and restructure and venture backing) has been focusing on exclusively for the better part of a year.

The company today integrates closely with Google — which is not only a major platform for disseminating information in the form of SEO management, but a data source as a repository of user reviews — but despite the fact that Google holds so many cards in the stack, Fuca (who had previously been an exec at DocuSign before coming to Reputation.com) said he doesn’t see it as a potential threat or competitor.

A recent survey from the company about reputation management for the automotive sector underscores just how big a role Google plays.

“We don’t worry about Google as a competitor,” Fuca said. “It is super attracted to working with partners like us because we drive domain activity, and they love it when people like us explain to customers how to optimize on Google. For Google, it’s almost like we are an optimization partner, and so it helps their entire ecosystem, and so I don’t see them being a competitor or wanting to be.”

Nevertheless, the fact that the bulk of Reputation.com’s data sources are essentially secondary — that is, publicly available information that is already online and collected by others — will drive some of the company’s next stage of development. The plan is to start adding more of its own primary-source data gathering in the form of customer surveys and feedback forms. That will also open the door to more questions of how the company will handle privacy and personal data longer term.

“Ascension Ventures is excited to deepen its partnership with Reputation.com as it enters its next critical stage of growth,” said John Kuelper, Managing Director at Ascension Ventures, in a statement. “We’ve watched Reputation.com’s industry leading reputation management offering grow into an even more expansive CX platform. We’re seeing some of the world’s largest brands and service providers achieve terrific results by partnering with Reputation.com to analyze and take action on customer feedback — wherever it originates — at scale and in real-time. We’re excited to make this additional investment in Reputation.com as it continues to grow and expand its market leadership.”

H2O.ai announces $72.5M Series D led by Goldman Sachs

H2O.ai‘s mission is to democratize AI by providing a set of tools that frees companies from relying on teams of data scientists. Today it got a bushel of money to help. The company announced a $72.5 million Series D round led by Goldman Sachs and Ping An Global Voyager Fund.

Previous investors Wells Fargo, NVIDIA and Nexus Venture Partners also participated. Under the terms of the deal, Jade Mandel from Goldman Sachs will be joining the H2O.ai Board. Today’s investment brings the total raised to $147 million.

It’s worth noting that Goldman Sachs isn’t just an investor. It’s also a customer. Company CEO and co-founder Sri Ambati says the fact that customers, Wells Fargo and Goldman Sachs, have led the last two rounds is a validation for him and his company. “Customers have risen up from the ranks for two consecutive rounds for us. Last time the Series C was led by Wells Fargo where we were their platform of choice. Today’s round was led by Goldman Sachs, which has been a strong customer for us and strong supporters of our technology,” Ambati told TechCrunch.

The company’s main product, H2O Driverless AI, introduced in 2017, gets its name from the fact that it provides a way for people who aren’t AI experts to take advantage of AI without a team of data scientists. “Driverless AI is automatic machine learning, which brings the power of world-class data scientists into the hands of everyone. It builds models automatically using machine learning algorithms of every kind,” Ambati explained.

The company also introduced a new recipe concept today that provides all of the AI ingredients and instructions for building models for different business requirements. H2O.ai’s team of data scientists has created and open-sourced 100 recipes for things like credit risk scoring, anomaly detection and property valuation.

The company has been growing since its Series C round in 2017 when it had 70 employees. Today it has 175 and has tripled the number of customers since the prior round, although Ambati didn’t discuss an exact number.  The company has its roots in open source and has 20,000 users of its open source products, according to Ambati.

He didn’t want to discuss valuation and wouldn’t say when the company might go public, saying it’s early days for AI and they are working hard to build a company for the long haul.