Canonical’s Mark Shuttleworth on dueling open-source foundations

At the Open Infrastructure Summit, which was previously known as the OpenStack Summit, Canonical founder Mark Shuttleworth used his keynote to talk about the state of open-source foundations — and what often feels like the increasing competition between them. “I know for a fact that nobody asked to replace dueling vendors with dueling foundations,” he said. “Nobody asked for that.”

He then put a finer point on this, saying: “What’s the difference between a vendor that only promotes the ideas that are in its own interest and a foundation that does the same thing? Or worse, a foundation that will only represent projects that it’s paid to represent?”

Somewhat uncharacteristically, Shuttleworth didn’t say which foundations he was talking about, but since there are really only two foundations that fit the bill here, it’s pretty clear that he was talking about the OpenStack Foundation and the Linux Foundation — and maybe more precisely the Cloud Native Computing Foundation, the home of the incredibly popular Kubernetes project.

It turns out, that’s only part of his misgivings about the current state of open-source foundations, though. I sat down with Shuttleworth after his keynote to discuss his comments, as well as Canonical’s announcements around open infrastructure.

One thing worth noting at the outset is that the OpenStack Foundation is using this event to highlight the fact that it has now brought in new open infrastructure projects outside of the core OpenStack software, with two of them graduating from their pilot phase. Shuttleworth, who has made big bets on OpenStack in the past and is seeing a lot of interest from customers, is not a fan; he believes the foundation should focus on the core OpenStack project. Canonical, for what it's worth, is also a major sponsor of the OpenStack Foundation.

“We’re busy deploying 27 OpenStack clouds — that’s more than double the run rate last year,” he said. “OpenStack is important. It’s very complicated and hard. And a lot of our focus has been on making it simpler and cleaner, despite the efforts of those around us in this community. But I believe in it. I think that if you need large-scale, multi-tenant virtualization infrastructure, it’s the best game in town. But it has problems. It needs focus. I’m super committed to that. And I worry about people losing their focus because something newer and shinier has shown up.”

To clarify that, I asked him if he essentially believes that the OpenStack Foundation is making a mistake by trying to be all things infrastructure. “Yes, absolutely,” he said. “At the end of the day, I think there are some projects that this community is famous for. They need focus, they need attention, right? It’s very hard to argue that they will get focus and attention when you’re launching a ton of other things that nobody’s ever heard of, right? Why are you launching those things? Who is behind those decisions? Is it a money question as well? Those are all fair questions to ask.”

He doesn’t believe all of the blame should fall on the Foundation’s leadership, though. “I think these guys are trying really hard. I think the common characterization that it was hapless isn’t helpful and isn’t accurate. We’re trying to figure stuff out.” Shuttleworth stressed that he doesn’t consider the leadership hapless, but he clearly isn’t all that happy with the current path the OpenStack Foundation is on either.

The Foundation, of course, doesn’t agree. As OpenStack Foundation COO Mark Collier told me, the organization remains as committed to OpenStack as ever. “The Foundation, the board, the community, the staff — we’ve never been more committed to OpenStack,” he said. “If you look at the state of OpenStack, it’s one of the top-three most active open-source projects in the world right now […] There’s no wavering in our commitment to OpenStack.” He also noted that the other projects that are now part of the foundation are the kind of software that is helpful to OpenStack users. “These are efforts which are good for OpenStack,” he said. In addition, he stressed that the process of opening up the Foundation has been going on for more than two years, with the vast majority of the community (roughly 97 percent) voting in favor.

OpenStack board member Allison Randal echoed this. “Over the past few years, and a long series of strategic conversations, we realized that OpenStack doesn’t exist in a vacuum. OpenStack’s success depends on the success of a whole network of other open-source projects, including Linux distributions and dependencies like Python and hypervisors, but also on the success of other open infrastructure projects which our users are deploying together. The OpenStack community has learned a few things about successful open collaboration over the years, and we hope that sharing those lessons and offering a little support can help other open infrastructure projects succeed too. The rising tide of open source lifts all boats.”

As for open-source foundations in general, he doesn’t believe it’s a good thing to have numerous foundations competing over projects either. He argues that we’re still figuring out the role of open-source foundations, and that we’re currently in a slightly awkward position because we haven’t yet determined how best to organize them. “Open source in society is really interesting. And how we organize that in society is really interesting,” he said. “How we lead that, how we organize that is really interesting and there will be steps forward and steps backward. Foundations tweeting angrily at each other is not very presidential.”

He also challenged the notion that if you just put a project into a foundation, “everything gets better.” That’s too simplistic, he argues, because so much depends on the leadership of the foundation and how they define being open. “When you see foundations as nonprofit entities effectively arguing over who controls the more important toys, I don’t think that’s serving users.”

When I asked him whether he thinks some foundations are doing a better job than others, he essentially declined to comment. But he did say that he thinks the Linux Foundation is doing a good job with Linux, in large part because it employs Linus Torvalds. “I think the technical leadership of a complex project that serves the needs of many organizations is best served that way and something that the OpenStack Foundation could learn from the Linux Foundation. I’d be much happier with my membership fees actually paying for thoughtful, independent leadership of the complexity of OpenStack rather than the sort of bizarre bun fights and stuffed ballots that we see today. For all the kumbaya, it flatly doesn’t work.” He believes that projects should have independent leaders who can make long-term plans. “Linus’ finger is a damn useful tool and it’s hard when everybody tries to get reelected. It’s easy to get outraged at Linus, but he’s doing a fucking good job, right?”

OpenStack, he believes, often lacks that kind of decisiveness because it tries to please everybody and attract more sponsors. “That’s perhaps the root cause,” he said, and it leads to too much “behind-the-scenes puppet mastering.”

In addition to our talk about foundations, Shuttleworth also noted that he believes the company is still on the path to an IPO. He’s obviously not committing to a time frame, but after a year of resetting in 2018, he argues that Canonical’s business is looking up. “We want to be north of $200 million in revenue and a decent growth rate and the right set of stories around the data center, around public cloud and IoT.” First, though, Canonical will do a growth equity round.

UiPath nabs $568M at a $7B valuation to bring robotic process automation to the front office

Companies are on the hunt for ways to reduce the time and money their employees spend on repetitive tasks, so today a startup that has built a business capitalizing on this is announcing a huge round of funding to double down on the opportunity.

UiPath — a robotic process automation startup originally founded in Romania that uses artificial intelligence and sophisticated scripts to build software to run these tasks — today confirmed that it has closed a Series D round of $568 million at a post-money valuation of $7 billion.

From what we understand, the startup is “close to profitability” and plans to keep growing as a private company, with an IPO within the next 12-24 months as the “medium term” plan.

“We are at the tipping point. Business leaders everywhere are augmenting their workforces with software robots, rapidly accelerating the digital transformation of their entire business and freeing employees to spend time on more impactful work,” said Daniel Dines, UiPath co-founder and CEO, in a statement. “UiPath is leading this workforce revolution, driven by our core determination to democratize RPA and deliver on our vision of a robot helping every person.”

This latest round of funding is being led by Coatue, with participation from Dragoneer, Wellington, Sands Capital, and funds and accounts advised by T. Rowe Price Associates, Accel, Alphabet’s CapitalG, Sequoia, IVP and Madrona Venture Group.

CFO Marie Myers said in an interview in London that the plan will be to use this funding to expand UiPath’s focus into more front-office and customer-facing areas, such as customer support and sales.

“We want to move automation into new levels,” she said. “We’re advancing quickly into AI and the cloud, with plans to launch a new AI product in the second half of the year that we believe will demystify it for our users.” The product, she added, will be focused around “drag and drop” architecture and will work for both attended and unattended bots — that is, those that work as assistants to humans, and those that work completely on their own. “Robotics has moved out of the back office and into the front office, and the time is right to move into intelligent automation.”

Today’s news confirms Kate’s report from last month noting that the round was in progress: in the end, the amount UiPath raised was higher than the target amount we’d heard ($400 million), with the valuation on the more “conservative” side (we’d said the valuation would be higher than $7 billion).

“Conservative” is a relative term here. The company has been on a funding tear over the last year, raising $418 million ($153 million at Series B and $265 million at Series C) in the space of 12 months, and seeing its valuation go from a modest $110 million in April 2017 to $7 billion today, just two years later.

Up to now, UiPath has focused on internal and back-office tasks in areas like accounting, human resources paperwork, and claims processing — a booming business that has seen UiPath expand its annual run rate to more than $200 million (versus $150 million six months ago) and its customer base to more than 400,000 people.

Customers today include American Fidelity, BankUnited, CWT (formerly known as Carlson Wagonlit Travel), Duracell, Google, Japan Exchange Group (JPX), LogMeIn, McDonald’s, NHS Shared Business Services, Nippon Life Insurance Company, NTT Communications, Orange, Ricoh Company, Ltd., Rogers Communications, Shinsei Bank, Quest Diagnostics, Uber, the US Navy, Voya Financial, Virgin Media, and World Fuel Services.

Moving into more front-office tasks is an ambitious but not surprising leap for UiPath. Looking at that customer list, it’s notable that many of these organizations have customer-facing operations, often with their own sets of repetitive processes that are ripe for improvement by tapping into the many facets of AI — from computer vision to natural language processing and voice recognition, through to machine learning — alongside other technology.

It also raises the question of what UiPath might look to tackle next. Having customer-facing tools and services is one short leap from building consumer services, an area where the likes of Amazon, Google, Apple and Microsoft are all pushing hard with devices and personal assistant services. (That would indeed open up the competitive landscape quite a lot for UiPath, beyond the list of RPA companies like Automation Anywhere, Kofax and Blue Prism that are its competitors today.)

Robots have gotten a somewhat bad rap in the world of work. Critics worry that they are “taking over all the jobs,” removing humans and their own need to be industrious from the equation; and in the worst-case scenarios, the work of a robot lacks the nuance and sophistication you get from the human touch.

UiPath and the bigger area of RPA are interesting in this regard. The aim (the stated aim, at least) isn’t to replace people, but to take tasks out of their hands to make it easier for them to focus on the non-repetitive work that “robots” — and in the case of UiPath, software scripts and robots — cannot do.

Indeed, that “future of work” angle is precisely what has attracted investors.

“UiPath is enabling the critical capabilities necessary to advance how companies perform and how employees better spend their time,” said Greg Dunham, vice president at T. Rowe Price Associates, Inc., in a statement. “The industry has achieved rapid growth in such a short time, with UiPath at the head of it, largely due to the fact that RPA is becoming recognized as the paradigm shift needed to drive digital transformation through virtually every single industry in the world.”

As we’ve written before, the company has been a big hit with investors because of the rapid traction it has seen with enterprise customers.

There is an interesting side story to the funding that speaks to that traction: Myers, the CFO, came to UiPath by way of one of those engagements. She had been a senior finance executive at HP tasked with figuring out how to make some of its accounting more efficient. She issued an RFP for the work, and the only company she thought really addressed the task with a truly tech-first solution, at a very competitive price, was an unlikely startup out of Romania, which turned out to be UiPath. She became one of the company’s first customers, and eventually Dines offered her a job helping build his company to the next level, which she leaped to take.

“UiPath is improving business performance, efficiency and operation in a way we’ve never seen before,” said Philippe Laffont, founder of Coatue Management, in a statement. “The Company’s rapid growth over the last two years is a testament to the fact that UiPath is transforming how companies manage their resources. RPA presents an enormous opportunity for companies around the world who are embracing artificial intelligence, driving a new era of productivity, efficiency and workplace satisfaction.” 

Docker updates focus on simplifying containerization for developers

Over the last five years, Docker has become synonymous with software containers, but that doesn’t mean every developer understands the technical details of building, managing and deploying them. At DockerCon this week, the company’s customer conference taking place in San Francisco, it announced new tools designed to make it easier for developers who might not be Docker experts to work with containers.

As the technology has matured, the company has seen the market broaden, but to take advantage of that, it needs to provide a set of tools that make the technology easier to work with. “We’ve found that customers typically have a small cadre of Docker experts, but there are hundreds, if not thousands, of developers who also want to use Docker. And we reasoned, how can we help them get productive very, very quickly, without them having to become Docker experts,” Scott Johnston, chief product officer at Docker, told TechCrunch.

To that end, it announced a beta of Docker Enterprise 3.0, which includes several key components. For starters, Docker Desktop Enterprise lets IT set up a Docker environment with the kind of security and deployment templates that make sense for each customer. Developers can then pick the templates that fit their implementations while conforming to the company’s compliance and governance rules.

“These templates already have IT-approved container images, and have IT-approved configuration settings. And what that means is that IT can provide these templates through these visual tools that allow developers to move fast and choose the ones they want without having to go back for approval,” Johnston explained.

The idea is to let the developers concentrate on building applications, and the templates provide all the Docker tooling pre-built and ready to go, so they don’t have to worry about all of that.

Another piece of this is Docker Applications, which lets developers build complex containerized applications as a single package and deploy them to any infrastructure they wish — on-prem or in the cloud. Five years ago, when Docker really got started, containers were a simpler idea, often involving just a single container per application, but as developers broke those larger applications down into microservices, it created a new level of difficulty, especially for operations teams that had to deploy these increasingly large sets of application containers.

“Operations can now programmatically change the parameters for the containers, depending on the environments, without having to go in and change the application. So you can imagine that ability lowers the friction of having to manage all these files in the first place,” he said.

The final piece of that is the orchestration layer, and the popular way to handle that today is with Kubernetes. Docker has created its own flavor of Kubernetes, based on the open-source tool. Johnston says, as with the other two pieces, the goal here is to take a powerful tool like Kubernetes and reduce the overall complexity associated with running it, while making it fully compatible with a Docker environment.

For that, Docker announced Docker Kubernetes Service (DKS), which has been designed with Docker users in mind, including support for Docker Compose, a scripting tool that has been popular with Docker users. While you are free to use any flavor of Kubernetes you wish, Docker is offering DKS as a Docker-friendly version for developers.

All of these components have one thing in common besides being part of Docker Enterprise 3.0: they are trying to reduce the complexity of deploying and managing containers and to abstract away the most difficult parts, so that developers can concentrate on developing without having to worry about the technical underpinnings of building and deploying containers. At the same time, Docker is trying to make it easier for the operations team to manage it all. That is the goal, at least. In the end, DevOps teams will be the final judges of how well Docker has done, once these tools become generally available later this year.

The Docker Enterprise 3.0 beta will be available later this quarter.

Docker looks to partners and packages to ease container implementation

Docker appears to be searching for ways to simplify the core value proposition of the company — creating, deploying and managing containers. While most would agree it has revolutionized software development, like many technology solutions, it takes a certain level of expertise and staffing to pull off. At DockerCon, the company’s customer conference taking place this week in San Francisco, Docker announced several ways it could help customers with the tough parts of implementing a containerized solution.

For starters, the company announced a beta of Docker Enterprise 3.0 this morning. That update is all about making life simpler for developers. As companies move to containerized environments, it’s a challenge for all but the largest organizations like Google, Amazon and Facebook, all of whom have massive resource requirements and correspondingly large engineering teams.

Most companies don’t have that luxury though, and Docker recognizes if it wants to bring containerization to a larger number of customers, it has to create packages and programs that make it easier to implement.

Docker Enterprise 3.0 is a step toward providing a solution that lets developers concentrate on the development aspects, while working with templates and other tools to simplify the deployment and management side of things.

The company sees customers struggling with implementation and how to configure and build a containerized workflow, so it is working with systems integrators to help smooth out the difficult parts. Today, the company announced Docker Enterprise as a Service, with the goal of helping companies through the process of setting up and managing a containerized environment, using the Docker stack and adjacent tooling like Kubernetes.

The service provider will take care of operational details like managing upgrades, rolling out patches, doing backups and undertaking capacity planning — all of those operational tasks that require a high level of knowledge around enterprise container stacks.

Capgemini will be the first go-to-market partner. “Capgemini has a combination of automation, technology tools, as well as services on the back end that can manage the installation, provisioning and management of the enterprise platform itself in cases where customers don’t want to do that, and they want to pay someone to do that for them,” Scott Johnston, chief product officer at Docker, told TechCrunch.

The company has released tools in the past to help customers move legacy applications into containers without a lot of fuss. Today, the company announced a solution bundle called Accelerate Greenfield, a set of tools designed to help customers get up and running as container-first development companies.

“This is for those organizations that may be a little further along. They’ve gone all-in on containers, committing to taking a container-first approach to new application development,” Johnston explained. He says this could be cloud-native microservices or even a LAMP stack application, but the point is that they want to put everything in containers on a container platform.

Accelerate Greenfield is designed to do that. “They get the benefits where they know that from the developer to the production end point, it’s secure. They have a single way to define it all the way through the life cycle. They can make sure that it’s moving quickly, and they have that portability built into the container format, so they can deploy [wherever they wish],” he said.

These programs and products are all about providing a level of hand-holding, either by playing a direct consultative role, working with a systems integrator or providing a set of tools and technologies to walk the customer through the containerization life cycle. Whether they provide a sufficient level of help that customers require is something we will learn over time as these programs mature.

Facebook Messenger will get desktop apps, co-watching, emoji status

To win chat, Facebook Messenger must be as accessible as SMS, yet more entertaining than Snapchat. Today, Messenger pushes on both fronts with a series of announcements at Facebook’s F8 conference: it will launch Mac and PC desktop apps, a faster and smaller mobile app, simultaneous video co-watching and a revamped Friends tab, where friends can use an emoji to tell you what they’re up to or down for.

Facebook is also beefing up its tools for the 40 million active businesses and 300,000 businesses on Messenger, up from 200,000 a year ago. Merchants will be able to let users book appointments with salons and masseuses, collect information with new lead-generation chatbot templates and provide customer service to verified customers through authenticated m.me links. Facebook hopes this will boost the app beyond the 20 billion messages sent between people and businesses each month, up 10X from December 2017.

“We believe you can build practically any utility on top of messaging,” says Facebook’s head of Messenger Stan Chudnovsky. But he stresses that “all of the engineering behind it has been redone” to make it more reliable, and to comply with CEO Mark Zuckerberg’s directive to unite the backends of Messenger, WhatsApp and Instagram Direct. “Of course, if we didn’t have to do all that, we’d be able to invest more in utilities. But we feel that utilities will be less functional if we don’t do that work. They need to go hand-in-hand together. Utilities will be more powerful, more functional and more desired if built on top of a system that’s interoperable and end-to-end encrypted.”

Here’s a look at the major Messenger announcements and why they’re important:

Messenger Desktop – A stripped-down version of Messenger focused on chat, audio and video calls will debut later this year. Chudnovsky says it will remove the need to juggle and resize browser tabs by giving you an always-accessible version of Messenger that can replace some of the unofficial knock-offs. Especially as Messenger focuses more on businesses, giving them a dedicated desktop interface could convince them to invest more in lead generation and customer service through Messenger.

Facebook Messenger’s upcoming desktop app

Project Lightspeed – Messenger is reengineering its app to cut 70 MB off its download size so people with low-storage phones don’t have to delete as many photos to install it. In testing, the app can cold start in merely 1.3 seconds, which Chudnovsky says is just 25 percent of where Messenger and many other apps are today. While Facebook already offers Messenger Lite for the developing world, making the main app faster for everyone else could help Messenger swoop in and steal users from the status quo of SMS. The Lightspeed update will roll out later this year.

Video Co-Watching – TechCrunch reported in November that Messenger was building a Facebook Watch Party-style experience that would let users pick videos to watch at the same time as a friend, with reaction cams of their faces shown below the video. Now in testing before rolling out later this year, users can pick any Facebook video, invite one or multiple friends and laugh together. Unique capabilities like this could make Messenger more entertaining between utilitarian chat threads and appeal to a younger audience Facebook is at risk of losing.

Watch Videos Together on Messenger

Business Tools – After a rough start to its chatbot program a few years ago, when bots couldn’t figure out users’ open-ended responses, Chudnovsky says the platform is now picking up steam, with 300,000 developers on board. One option that’s worked especially well is lead-generation templates, which teach bots to ask people standardized questions to collect contact info or business intent, so Messenger is adding more of those templates, with completion reminders and seamless hand-off to a live agent.

To let users interact with appointment-based businesses through a platform they’re already familiar with, Messenger launched a beta program for barbers, dentists and more that will soon open to let any business handle appointment booking through the app. And with new authenticated m.me links, a business can take a logged-in user on their website and pass them to Messenger while still knowing their order history and other info. Getting more businesses hooked on Messenger customer service could be very lucrative down the line.

Appointment booking on Messenger

Close Friends and Emoji Status – Perhaps the most interesting update to Messenger, though, is its upcoming effort to help you make offline plans. Messenger is in the early stages of rebuilding its Friends tab into “Close Friends,” which will host big previews of friends’ Stories and photos shared in your chats, and let people overlay an emoji on their profile pic to show friends what they’re doing. We first reported that this “Your Emoji” status feature was being built a year ago, and it quietly cropped up in the video for Messenger Close Friends. This iteration lets you add an emoji like a home, barbell, low battery or beer mug, plus a short text description, to let friends know you’re back from work, at the gym, might not respond or are interested in getting a drink. These will show up atop the Close Friends tab, as well as on location-sharing maps and more, once this eventually rolls out.

Messenger’s upcoming Close Friends tab with Your Emoji status

Facebook Messenger is the app best poised to solve the loneliness problem. We often end up by ourselves because we’re not sure which of our friends are free to hang out, and we’re embarrassed to look desperate by constantly reaching out. But with emoji status, Messenger users could quietly signal their intentions without seeming needy. This “what are you doing offline” feature could be a whole social network of its own, as apps like Down To Lunch have tried to build. But with 1.3 billion users and built-in chat, Messenger has the ubiquity and utility to turn a hope into a hangout.


Privacy 2019: Fixing a 16-year-old privacy problem in TLS with ESNI

This post is the second in a series covering privacy, anonymity and security on the internet in recent times, with a focus on real issues affecting people in the real world. Censorship and pervasive state-sponsored surveillance are a daily reality for hundreds of millions of people around the world.

This post picks up where the previous one left off, discussing a proposed solution to the SNI problems described there.


SNI – What Is It Good For?

In our previous post, we discussed Server Name Indication (SNI), an extension to SSL/TLS that dates back to 2003 and that has since become mandatory in TLS 1.3.

So what problem does SNI actually solve and why is it necessary?

SNI lets the server handling SSL/TLS termination know which host you’re connecting to; this is important because the server needs to hand you a certificate with a Subject Name (SN) or Subject Alternative Name (SAN) that is valid for the hostname being accessed.

This becomes a problem when you have more than one host on one IP address, which is very common today and is also crucial for conserving the dwindling IPv4 address space.
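You can see the client side of this with nothing more than Python’s standard ssl module; the snippet below is a minimal sketch (example.com stands in for any TLS-enabled site), and the server_hostname argument is exactly what ends up, in cleartext, in the SNI extension.

```python
import socket
import ssl

# server_hostname is sent in the clear as the SNI extension during the
# handshake; the server uses it to pick the right certificate among all
# the sites that may share this IP address.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
        cert = tls_sock.getpeercert()
        print(cert["subject"])  # the certificate the server chose for this name
```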

Today, with cloud providers such as Google and Amazon and edge-network providers such as Cloudflare or Akamai, a single IP can serve as a load balancer or front-facing server for an arbitrarily large number of hosts. All of the services mentioned offer some sort of load-balancing and TLS termination service. All in all, SNI is an important and necessary feature of the internet as it is today.

As you probably know from our last post, SNI has unfortunately also been used and abused for censorship and monitoring since its inception, and the most commonly known way of side-stepping that, domain fronting, has been closed off by most cloud providers.

[Image: TLS and SNI]

This all makes SNI a necessity despite the major info-leak it causes. Luckily, a fix is on the way, and it is being shaped in the IETF draft we’ll be discussing for the rest of this post: Encrypted Server Name Indication for TLS 1.3.

So How Does ESNI Fix SNI?

The following explanation is based on the latest working draft [4f3ce56]. The latest published draft is currently #3; the main differences between these versions so far revolve around the DNS lookup and resolution of ESNIKeys. Because draft #2 is the version implemented and already deployed by Cloudflare and Firefox Nightly, we will attempt to address the differences.

To encrypt our SNI, we first need to obtain an encryption key, which is done by querying an ESNI-type record via DNS. In previous draft versions (up to draft #3), this was done by querying a TXT record called _esni. For example, to query ESNI information for sentinelone.com, one would query the TXT record _esni.sentinelone.com.
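As a concrete illustration, this is roughly how one could fetch such a record with the dnspython package (assuming the domain actually publishes an _esni TXT record; on dnspython versions before 2.0, dns.resolver.query takes the place of resolve):

```python
import dns.resolver  # pip install dnspython

# Draft -02/-03 style lookup: the ESNI keys are published as a TXT
# record under the _esni label of the target domain.
answers = dns.resolver.resolve("_esni.sentinelone.com", "TXT")
for rdata in answers:
    esni_keys_b64 = b"".join(rdata.strings).decode()
    print(esni_keys_b64)  # base64-encoded ESNIKeys structure
```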

The current draft asks IANA to update the RR registry with a new record type, ESNI. It’s not a huge difference, but it is important to note, since there are deployed implementations (Cloudflare and Firefox) that use the TXT-record version.

[Image: querying ESNI keys via DNS over TLS]

The data retrieved by DNS is a struct called ESNIKeys.

The most important information in ESNIKeys is a list of named (EC)DHE groups and their matching public key share components. These are used to derive a symmetric key, and that key is used to encrypt and authenticate (AEAD) the associated SNI data.

[Image: creating the encrypted SNI]

To enable the server to decrypt and verify the data as well, the client sends its key share entry along with the encrypted SNI. Using this, the server is able to derive the same symmetric key via the (EC)DHE exchange.

A random nonce is also included inside the encrypted SNI. The server’s response to the ClientHello (the ServerHello) includes an encrypted list of the accepted extensions. If ESNI was accepted (esni_accept), the server will include the random nonce originally sent in the ClientHello, and the client must verify it is indeed the same nonce.
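To make the pattern concrete, here is a deliberately simplified Python sketch using the pyca/cryptography library: an X25519 key share on each side, HKDF to derive the symmetric key, and AES-GCM as the AEAD protecting the SNI and nonce. The HKDF label and hostname are made up for illustration, and the actual draft binds far more handshake context into its key schedule, so treat this as the shape of the mechanism rather than the spec:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def derive_key(shared_secret: bytes) -> bytes:
    # Illustrative HKDF invocation; the real draft derives the key from
    # considerably more handshake context than this label.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"esni sketch").derive(shared_secret)


# Server side: the key share that would be published in ESNIKeys via DNS.
server_priv = X25519PrivateKey.generate()

# Client side: generate an ephemeral key share, derive the symmetric key,
# and AEAD-encrypt the real SNI together with a random nonce.
client_priv = X25519PrivateKey.generate()
key = derive_key(client_priv.exchange(server_priv.public_key()))
nonce = os.urandom(16)  # echoed back by the server if ESNI is accepted
iv = os.urandom(12)
encrypted_sni = AESGCM(key).encrypt(iv, nonce + b"hidden.example.com", None)

# Server side: the same DH computation with the key share the client sent,
# then decrypt; the recovered nonce is later echoed so the client can verify it.
server_key = derive_key(server_priv.exchange(client_priv.public_key()))
plaintext = AESGCM(server_key).decrypt(iv, encrypted_sni, None)
recovered_nonce, sni = plaintext[:16], plaintext[16:]
assert recovered_nonce == nonce and sni == b"hidden.example.com"
```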

The server can also request that the client switch ESNI keys in this phase, which is useful to prevent an ESNI key rotation from causing a denial of service.

This, in a nutshell, is how ESNI works in the latest draft. Some details obviously had to be left out to keep this summary readable, but if you’re interested in learning more, the full draft is available on GitHub.

[Image: TLS with ESNI]

DNS Considerations

Evil DNS

A compromised DNS server undermines the anonymity that ESNI can offer, because it lets an attacker switch the ESNI keys. If attackers gain control of the ESNI record, they can substitute their own keys and decrypt the encrypted_server_name in the ClientHello, learning which website the user attempted to access.

If an attacker controls a major DNS resolver, such as an ISP’s, and can perform this action on a large number of domains, the attack might be effective. In that case, though, the attacker most likely also controls the A/AAAA records and has already won the battle, since they can simply send users to an IP address that marks them as having accessed some domain.

Another potential issue with this attack is that it may cause a denial of service against those sites: assuming a correct implementation of the protocol, a failed decryption of the ESNI should result in a connection abort with “decrypt_error” (this is marked as MUST in the draft).

tl;dr — ESNI is not TOFU. You’re trusting keys given to you by DNS.

Replay Attacks

Protection against replay attacks is partial.

The TLS ClientHello contains a 32-byte random value. The server’s response (the ServerHello) carries its own random, and the two randoms are combined with the pre-master secret to derive the session keys.

This does provide protection against a full TLS session replay: the server will respond with a different random, a different master secret will be derived, and the rest of the recorded session becomes useless.
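For the curious, this is roughly what that mixing looks like in TLS 1.2 (RFC 5246), where the master secret is derived from the pre-master secret and both randoms via the PRF; TLS 1.3 replaces this with an HKDF-based key schedule, but the randoms play the same replay-defeating role. A minimal sketch:

```python
import hashlib
import hmac


def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    """The TLS 1.2 P_SHA256 expansion function (RFC 5246, section 5)."""
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()            # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()  # HMAC(secret, A(i) + seed)
    return out[:length]


def master_secret(pre_master: bytes, client_random: bytes,
                  server_random: bytes) -> bytes:
    # Both randoms enter the derivation, so a replayed recording meets a
    # fresh server random and ends up with useless keys.
    seed = b"master secret" + client_random + server_random
    return p_sha256(pre_master, seed, 48)
```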

This, however, doesn’t prevent an attacker from replaying the same ClientHello and seeing whether there is a response, which makes it possible to presence-check whether a given name is still being served by a host. An example is shown in the image below, where a ClientHello with a valid ESNI is replayed; all of these connections received a ServerHello, meaning the SNI is still being served by the host.

This means that if you recorded a ClientHello to a hidden service, you can presence-check whether it is still being served, at least until a public key rotation occurs in ESNIKeys.

That case can be distinguished from the service being gone: once the keys rotate, decryption fails and causes a TLS alert (“decrypt_error”) instead of a ServerHello.
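A rough sketch of such a presence check, assuming you have already captured a raw ClientHello (with its encrypted SNI) to replay, is to classify the first TLS record that comes back: a handshake record (type 22) means a ServerHello and that the name is still served, while an alert record (type 21, e.g. decrypt_error) suggests the keys have rotated:

```python
import socket

HANDSHAKE, ALERT = 22, 21  # TLS record content types


def presence_check(host: str, recorded_client_hello: bytes,
                   port: int = 443) -> str:
    """Replay a captured ClientHello and classify the server's first record."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(recorded_client_hello)
        header = sock.recv(5)  # record header: type(1) + version(2) + length(2)
        if len(header) < 5:
            return "no response"
        if header[0] == HANDSHAKE:
            return "ServerHello: the hidden name is still being served"
        if header[0] == ALERT:
            return "alert (likely decrypt_error): keys probably rotated"
        return f"unexpected record type {header[0]}"
```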

[Image: TLS alert]

There is currently a suggestion on GitHub to add a timestamp inside the ESNI, limiting the length of its validity.

Conclusion

As we wrap up the coverage of SNI that we began in our previous post, we hope you now have a good understanding of what SNI actually is, why we need it and why we need to fix it properly. We saw the rise and demise of domain fronting, which attempted both to sidestep the SNI issue and to use it as leverage for cloaking at-risk users (and some malicious activities).

Since this post covers a protocol that is still in flux, upcoming changes may alter some aspects of the protocol described here. Once the final document is released, we will review it, and if there are major changes, we will update this post as necessary.

In Our Next Episode…

We will be discussing one of the core aspects of trust on the internet: certificates. We will look through the lens of multiple technologies, such as HSTS, HPKP and Certificate Transparency, at how they attempt to solve the rogue-CA problem, and at some of the downsides and strings attached. Stay tuned!



Tray.io hauls in $37 million Series B to keep expanding enterprise automation tool

Tray.io, the startup that wants to put automated workflows within reach of line-of-business users, announced a $37 million Series B investment today.

Spark Capital led the round, with help from Meritech Capital along with existing investors GGV Capital, True Ventures and Mosaic Ventures. Under the terms of the deal, Spark’s Alex Clayton will join Tray’s board of directors. The company has now raised over $59 million.

Rich Waldron, CEO at Tray, says the company looked around at the automation space, saw tools designed for engineers and IT pros, and wanted to build something for less technical business users.

“We set about building a visual platform that would enable folks to essentially become programmers without needing to have an engineering background, and enabling them to be able to build out automation for their day-to-day role.”

He added, “As a result, we now have a service that can be used in departments across an organization, including IT, whereby they can build extremely powerful and flexible workflows that gather data from all these disparate sources, and carry out automation as per their desire.”

Alex Clayton of lead investor Spark Capital sees Tray filling a big need in the automation space, in a spot between high-end tools like MuleSoft, which Salesforce bought last year for $6.5 billion, and simpler tools like Zapier. The problem, he says, is that companies have a huge shortage of time and resources to manage and truly integrate all the different SaaS applications they use today.

“So you really need something like Tray, because the problem with the current status quo, [particularly] in marketing and sales operations, is that they don’t have the time or the resources to staff engineering for building integrations on disparate or bespoke applications or workflows,” he said.

Tray is a seven-year-old company, but it started slowly, taking the first four years to build out the product. It raised a $14 million Series A 12 months ago and has been taking off ever since. The company’s annual recurring revenue (ARR) is growing over 450 percent year over year, with customers growing by 400 percent, according to data from the company. It already has over 200 customers, including Lyft, Intercom, IBM and SAP.

The company’s R&D operation is in London, with headquarters in San Francisco. It currently has 85 employees, but expects to have 100 by the end of the quarter as it begins to put the investment to work.

With Kata Containers and Zuul, OpenStack graduates its first infrastructure projects

Over the course of the last year and a half, the OpenStack Foundation made the switch from purely focusing on the core OpenStack project to opening itself up to other infrastructure-related projects as well. The first two of these projects, Kata Containers and the Zuul project gating system, have now exited their pilot phase and have become the first top-level Open Infrastructure Projects at the OpenStack Foundation.

The Foundation made the announcement at its Open Infrastructure Summit (previously known as the OpenStack Summit) in Denver today, after the organization’s board voted to graduate them ahead of this week’s conference. “It’s an awesome milestone for the projects themselves,” OpenStack Foundation executive director Jonathan Bryce told me. “It’s a validation of the fact that in the last 18 months, they have created sustainable and productive communities.”

It’s also a milestone for the OpenStack Foundation itself, though, which is still in the process of reinventing itself in many ways. It can now point at two successful projects that are under its stewardship, which will surely help it as it goes out and tries to attract others who are looking to bring their open-source projects under the aegis of a foundation.

In addition to graduating these first two projects, Airship — a collection of open-source tools for provisioning private clouds that is currently a pilot project — hit version 1.0 today. “Airship originated within AT&T,” Bryce said. “They built it from their need to bring a bunch of open-source tools together to deliver on their use case. And that’s why, from the beginning, it’s been really well-aligned with what we would love to see more of in the open-source world and why we’ve been super excited to be able to support their efforts there.”

With Airship, developers use YAML documents to describe what the final environment should look like and the result of that is a production-ready Kubernetes cluster that was deployed by OpenStack’s Helm tool — though without any other dependencies on OpenStack.

AT&T’s assistant vice president, Network Cloud Software Engineering, Ryan van Wyk, told me that a lot of enterprises want to use certain open-source components, but that the interplay between them is often difficult and that while it’s relatively easy to manage the life cycle of a single tool, it’s hard to do so when you bring in multiple open-source tools, all with their own life cycles. “What we found over the last five years working in this space is that you can go and get all the different open-source solutions that you need,” he said. “But then the operator has to invest a lot of engineering time and build extensions and wrappers and perhaps some orchestration to manage the life cycle of the various pieces of software required to deliver the infrastructure.”

It’s worth noting that nothing about Airship is specific to telcos, though it’s no secret that OpenStack is quite popular in that world, and unsurprisingly, the Foundation is using this week’s event to highlight the OpenStack project’s role in the upcoming 5G rollouts of various carriers.

In addition, the event will showcase OpenStack’s bare-metal capabilities, an area the project has also focused on in recent releases. Indeed, the Foundation today announced that its bare-metal tools now manage more than a million cores of compute. To codify these efforts, the Foundation also today launched the OpenStack Ironic Bare Metal program, which brings together some of the project’s biggest users, like Verizon Media (home of TechCrunch, though we don’t run on the Verizon cloud), 99Cloud, China Mobile, China Telecom, China Unicom, Mirantis, OVH, Red Hat, SUSE, Vexxhost and ZTE.

Mirantis makes configuring on-premises clouds easier

Mirantis, the company you may still remember as one of the biggest players in the early days of OpenStack, launched an interesting new hosted SaaS service today that makes it easier for enterprises to build and deploy their on-premises clouds. The new Mirantis Model Designer, which is available for free, lets operators easily customize their clouds — starting with OpenStack clouds next month and Kubernetes clusters in the coming months — and build the configurations to deploy them.

Typically, doing so involves writing lots of YAML files by hand, something that’s error-prone and that few developers love. Yet that’s exactly what’s at the core of the infrastructure-as-code model. Model Designer, on the other hand, takes what Mirantis learned from its highly popular Fuel installer for OpenStack a step further. The tool, which Mirantis co-founder and CMO Boris Renski demoed for me ahead of today’s announcement, presents users with a GUI that walks them through the configuration steps. What’s smart here is that every step has a difficulty level (modeled after Doom’s levels, ranging from “I’m too young to die” to “ultraviolence” — though it’s missing Doom’s “nightmare” setting), which you can choose based on how much you want to customize the settings.

Model Designer is an opinionated tool, but it gives users quite a bit of freedom, too. Once the configuration step is done, Mirantis takes the settings and runs them through its Jenkins automation server to validate the configuration. As Renski pointed out, that step can’t take into account all of the idiosyncrasies of every platform, but it can ensure that the files are correct. After this, the tool provides the user with the configuration files, and actually deploying the OpenStack cloud is then simply a matter of taking the files, together with the core binaries that Mirantis makes available for download, to the on-premises cloud and executing a command-line script. Ideally, that’s all there is to the process. At that point, Mirantis’ DriveTrain tools take over and provision the cloud. For upgrades, users simply repeat the process.

Mirantis’ monetization strategy is to offer support, which ranges from basic support to fully managing a customer’s cloud. Model Designer is yet another way for the company to make more users aware of itself and then offer them support as they start using more of the company’s tools.

Slack files to go public, reports $138.9M in losses on revenue of $400.6M

Slack has filed to go public via a direct listing. As Spotify did last year, the company will skip a traditional IPO and instead allow existing shareholders to sell their stock to investors.

The company’s S-1 filing says it plans to make $100 million worth of shares available, but that’s probably a placeholder figure.

The S-1 offers data about the company’s financial performance, reporting a net loss of $138.9 million and revenue of $400.6 million in the fiscal year ending January 31, 2019. That’s compared to a loss of $140.1 million on revenue of $220.5 million for the year before.

The company attributes these losses to its decision “to invest in growing our business to capitalize on our market opportunity,” and notes that they’re shrinking as a percentage of revenue.

Slack also says that in the three months ending on January 31, it had more than 10 million daily active users across more than 600,000 organizations — 88,000 on the paid plan and 550,000 on the free plan.

In the filing, the company says the Slack team created the product to meet its own collaboration needs.

“Since our public launch in 2014, it has become apparent that organizations worldwide have similar needs, and are now finding the solution with Slack,” it says. “Our growth is largely due to word-of-mouth recommendations. Slack usage inside organizations of all kinds is typically initially driven bottoms-up, by end users. Despite this, we (and the rest of the world) still have a hard time explaining Slack. It’s been called an operating system for teams, a hub for collaboration, a connective tissue across the organization, and much else. Fundamentally, it is a new layer of the business technology stack in a category that is still being defined.”

The company suggests that the total market opportunity for Slack and other makers of workplace collaboration software is $28 billion, and it plans to grow through strategies like expanding its footprint within organizations already using Slack, investing in more enterprise features, expanding internationally and growing the developer ecosystem.

The risk factors mentioned in the filing sound pretty boilerplate and/or similar to other internet companies going public, like the aforementioned net losses and the fact that its current growth rate might not be sustainable, as well as new compliance risks under Europe’s GDPR.

Slack has previously raised a total of $1.2 billion in funding, according to Crunchbase, from investors including Accel, Andreessen Horowitz, Social Capital, SoftBank, Google Ventures and Kleiner Perkins.