Env0 announces $3.3M seed to bring more control to Infrastructure as Code

Env0, a startup that wants to help companies bring some order to the delivery of Infrastructure as Code, announced a $3.3 million seed investment today, along with the beta release of the company’s first product.

Boldstart Ventures and Grove Ventures co-led the round with participation from several angel investors including Guy Podjarny of Snyk.

Company co-founder and CEO Ohad Maislish says the ability of developers to deliver code quickly is a blessing and a curse, and his company wants to give IT some control over how and when code gets committed.

“The challenge companies have is how to balance between self-service and oversight of cloud resources in a cloud native kind of way, and to balance this with visibility, predictability, and most importantly, governance around cloud security and costs,” Maislish said.

The product lets companies define when it’s OK for developers to deliver code and how much they can spend, rather than letting them deliver anything, at any time, at any cost. Overall control of the process goes to an administrator, who defines templates and projects: templates specify which repositories and products can be used with a given cloud vendor, and projects map those templates to the users allowed to access them.
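Env0 hasn’t published the exact shape of those templates and projects, but as a purely illustrative sketch, the mapping could be pictured as something like the following; the names and fields below are hypothetical, not the product’s actual schema.

```python
# Hypothetical sketch of the admin-defined model described above.
# Field names and values are illustrative; they are not Env0's actual schema.
templates = {
    "aws-staging": {
        "cloud_vendor": "aws",
        "allowed_repositories": ["git@github.com:acme/infra-live.git"],
        "allowed_products": ["terraform"],
        "max_monthly_spend_usd": 500,
    },
}

projects = {
    "web-team": {
        "members": ["dev1@acme.example", "dev2@acme.example"],
        "templates": ["aws-staging"],  # which templates these users may deploy
    },
}

def can_deploy(user: str, template_name: str) -> bool:
    """Return True if some project grants the user access to the template."""
    return any(
        user in project["members"] and template_name in project["templates"]
        for project in projects.values()
    )

print(can_deploy("dev1@acme.example", "aws-staging"))  # True
```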


Ed Sim, founder and managing partner at Boldstart, says the startup has been able to find a good balance between governance and the speed that today’s developers require in a continuous delivery environment. “Env0 is the first SaaS solution that meets all of those needs by offering self-service cloud environments with centralized governance,” Sim said in a statement.

It’s not easy launching an early-stage company in the middle of the current economic situation, but Maislish believes his company is in a decent position as it provides a way to control self-service development, something that is even more important when your developers are working from home outside of the purview of IT and security.

The company launched 18 months ago and has been in private beta for some time. Today marks the launch of the public beta. It currently has 10 employees.

Confluent lands another big round with $250M Series E on $4.5B valuation

The pandemic may feel all-encompassing at the moment, but Confluent announced a $250 million Series E today, showing that major investment continues in spite of the dire economic situation. The company is now valued at $4.5 billion.

Today’s round follows last year’s $125 million Series D. At that point the company was valued at a mere $2.5 billion. Investors obviously see a lot of potential here.

Coatue Management led the round, with help from Altimeter Capital and Franklin Templeton. Existing investors Index Ventures and Sequoia Capital also participated. Today’s investment brings the total raised to $456 million.

The company is based on Apache Kafka, the open-source streaming data project that emerged from LinkedIn in 2011. Confluent launched in 2014 and has gained steam, funding and gaudy valuations along the way.

CEO and co-founder Jay Kreps reports that growth continued last year, with sales up 100% over the previous year. A big part of that is the cloud product the company launched in 2017. It added a free tier last September, which feels pretty prescient right about now.

But the company isn’t making money by giving stuff away so much as attracting users, who can become paying customers as they make their way through the sales funnel. The beauty of the cloud product is that customers can buy by the sip.

The company has big plans for the product this year. Although Kreps was loath to go into detail, he says that there will be a series of changes coming up this year that will add significantly to the product’s capabilities.

“As part of this we’re going to have a major new set of capabilities for our cloud service, and for open-source Kafka, and for our product that we’re going to announce every month for the rest of the year,” Kreps told TechCrunch. These will start rolling out the first week in May.

While he wouldn’t get specific, he says that it relates to the changing nature of cloud infrastructure deployment. “This whole infrastructure area is really evolving as it moves to the cloud. And so it has to become much, much more elastic and scalable as it really changes how it works. And we’re going to have announcements around what we think are the core capabilities of event streaming in the cloud,” he said.

While a round this big with a valuation this high and an institutional investor like Franklin Templeton involved typically means an IPO could be the next step, Kreps was not ready to talk about that, except to say the company does plan to begin behaving in the cadence of a public company with a set of quarterly earnings, just not for public consumption yet.

The company has 1,000 employees and plans to continue hiring and expanding the product. Kreps sees plenty of opportunity here in spite of the current economic climate.

“I don’t think you want to just turtle up and hang on to your existing customers and not expand if you’re in a market that’s really growing. What really got this round of investors excited is the fact that we’re onto something that has a huge market, and we want to continue to advance, even in these really weird uncertain times,” he said.

And then there was one: Co-CEO Jennifer Morgan to depart SAP

In a surprising move, SAP ended its co-CEO experiment yesterday when the company announced Jennifer Morgan will be exiting stage left on April 30th, leaving Christian Klein as the lone CEO.

The pair took over at the end of last year when Bill McDermott left the company to become CEO at ServiceNow, and it looked like SAP was following Oracle’s model of co-CEOs, which had Safra Catz and Mark Hurd sharing the job for several years before Hurd passed away last year.

SAP indicated that Morgan and the board came to a mutual decision, and that it felt it would be better moving forward with a single person at the helm. The company made it sound like going with a single CEO was always in the plans and they were just speeding up the timetable, but it feels like it might have been a bit more of a board decision and a bit less Morgan’s, as these things tend to go.

“More than ever, the current environment requires companies to take swift, determined action which is best supported by a very clear leadership structure. Therefore, the decision to transfer from Co-CEO to sole CEO model was taken earlier than planned to ensure strong, unambiguous steering in times of an unprecedented crisis,” the company wrote in a statement announcing the change.

The move also means that the company is moving away from having a woman at the helm, something that’s unfortunately still rare in tech. Why the company decided to move on from the shared role isn’t clear, beyond using the current economic situation as cover. Neither is it clear why they chose to go with Klein over Morgan, but it seems awfully soon to be making a move like this when the two took over so recently.

AWS and Facebook launch an open-source model server for PyTorch

AWS and Facebook today announced two new open-source projects around PyTorch, the popular open-source machine learning framework. The first of these is TorchServe, a model-serving framework for PyTorch that will make it easier for developers to put their models into production. The other is TorchElastic, a library that makes it easier for developers to build fault-tolerant training jobs on Kubernetes clusters, including AWS’s EC2 spot instances and Elastic Kubernetes Service.

In many ways, the two companies are taking what they have learned from running their own machine learning systems at scale and putting those lessons into these projects. For AWS, that’s mostly SageMaker, the company’s machine learning platform, but as Bratin Saha, AWS VP and GM for Machine Learning Services, told me, the work on PyTorch was mostly motivated by requests from the community. And while there are obviously other model servers available today, like TensorFlow Serving and the Multi Model Server, Saha argues that it would be hard to optimize those for PyTorch.

“If we tried to take some other model server, we would not be able to quote optimize it as much, as well as create it within the nuances of how PyTorch developers like to see this,” he said. AWS has lots of experience in running its own model servers for SageMaker that can handle multiple frameworks, but the community was asking for a model server that was tailored toward how they work. That also meant adapting the server’s API to what PyTorch developers expect from their framework of choice, for example.

As Saha told me, the server that AWS and Facebook are now launching as open source is similar to what AWS is using internally. “It’s quite close,” he said. “We actually started with what we had internally for one of our model servers and then put it out to the community, worked closely with Facebook, to iterate and get feedback — and then modified it so it’s quite close.”
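For readers who haven’t used it, the general TorchServe workflow looks roughly like the sketch below; the model name, file paths and port are placeholders, and exact CLI flags can vary between TorchServe releases, so treat this as an approximation rather than official usage.

```python
# A minimal sketch of the TorchServe workflow; model names, paths and ports
# are placeholders, and CLI flags may vary between TorchServe releases.
#
# 1) Package a trained model into a .mar archive (run in a shell):
#    torch-model-archiver --model-name mymodel --version 1.0 \
#        --serialized-file mymodel.pt --handler image_classifier \
#        --export-path model_store
#
# 2) Start the server (run in a shell):
#    torchserve --start --model-store model_store --models mymodel=mymodel.mar
#
# 3) Query the inference API over HTTP:
import requests

with open("kitten.jpg", "rb") as f:
    # TorchServe exposes predictions at /predictions/<model_name> (port 8080 by default).
    resp = requests.post("http://localhost:8080/predictions/mymodel", data=f)

print(resp.status_code, resp.json())  # e.g. class probabilities from the handler
```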

Bill Jia, Facebook’s VP of AI Infrastructure, also told me he’s very happy about how his team and the community have pushed PyTorch forward in recent years. “If you look at the entire industry community — a large number of researchers and enterprise users are using AWS,” he said. “And then we figured out if we can collaborate with AWS and push PyTorch together, then Facebook and AWS can get a lot of benefits, but more so, all the users can get a lot of benefits from PyTorch. That’s our reason for why we wanted to collaborate with AWS.”

As for TorchElastic, the focus here is on allowing developers to create training systems that can work on large distributed Kubernetes clusters where you might want to use cheaper spot instances. Those are preemptible, though, so your system has to be able to handle that, while traditionally, machine learning training frameworks often expect a system where the number of instances stays the same throughout the process. That, too, is something AWS originally built for SageMaker. There, it’s fully managed by AWS, though, so developers never have to think about it. For developers who want more control over their dynamic training systems or to stay very close to the metal, TorchElastic now allows them to recreate this experience on their own Kubernetes clusters.
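TorchElastic’s launcher isn’t shown in the announcement, so the sketch below illustrates only the underlying idea in plain PyTorch: checkpoint frequently and resume from the last checkpoint, so a training job can survive a spot-instance preemption and pick up on whatever nodes replace it. This is the general pattern, not TorchElastic’s actual API.

```python
# A minimal sketch of the fault-tolerance idea behind elastic training:
# checkpoint often, and resume from the latest checkpoint if the worker is
# restarted (e.g. after a spot-instance preemption). General pattern only,
# not TorchElastic's launcher API.
import os
import torch
import torch.nn as nn

CKPT = "checkpoint.pt"
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
start_epoch = 0

if os.path.exists(CKPT):  # a restarted worker picks up where it left off
    state = torch.load(CKPT)
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    start_epoch = state["epoch"] + 1

for epoch in range(start_epoch, 100):
    x, y = torch.randn(32, 10), torch.randn(32, 1)  # stand-in for a real data loader
    loss = nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Persist state every epoch so a preempted node loses at most one epoch of work.
    torch.save({"model": model.state_dict(),
                "optimizer": optimizer.state_dict(),
                "epoch": epoch}, CKPT)
```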

AWS has a bit of a reputation when it comes to open source and its engagement with the open-source community. In this case, though, it’s nice to see AWS lead the way to bring some of its own work on building model servers, for example, to the PyTorch community. In the machine learning ecosystem, that’s very much expected, and Saha stressed that AWS has long engaged with the community as one of the main contributors to MXNet and through its contributions to projects like Jupyter, TensorFlow and libraries like NumPy.

Pulumi brings support for more languages to its infrastructure-as-code platform

Seattle-based Pulumi has quickly made a name for itself as a modern platform that lets developers specify their infrastructure by writing code in their preferred programming language — and not YAML. With the launch of Pulumi 2.0, those languages now include JavaScript, TypeScript, Go and .NET, in addition to its original support for Python. It’s also now extending its reach beyond its core infrastructure features to include deeper support for policy enforcement, testing and more.
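For a flavor of what “infrastructure in your preferred language” looks like in practice, here is a minimal Pulumi program in Python that declares a single S3 bucket; it assumes the pulumi and pulumi_aws packages are installed and AWS credentials are configured, and the resource name and tags are arbitrary.

```python
# A minimal Pulumi program in Python (assumes `pulumi` and `pulumi_aws`
# are installed and AWS credentials are configured). Run with `pulumi up`.
import pulumi
import pulumi_aws as aws

# Declare a private S3 bucket as ordinary Python code.
bucket = aws.s3.Bucket("app-assets", acl="private",
                       tags={"team": "web", "env": "staging"})

# Export the bucket name so other tools (or stacks) can consume it.
pulumi.export("bucket_name", bucket.id)
```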

As the company also announced today, it now has over 10,000 users and more than 100 paying customers. With that, it’s seeing a 10x year-over-year increase in its annual run rate, though without knowing the exact numbers, it’s obviously hard to know what to make of that figure. Current customers include the likes of Cockroach Labs, Mercedes-Benz and Tableau.

When the company first launched, its messaging was very much around containers and serverless. But as Pulumi founder and CEO Joe Duffy told me, today the company is often directly engaging with infrastructure teams that are building the platforms for the engineers in their respective companies.

As for Pulumi 2.0, Duffy says that “this is really taking the original Pulumi vision of infrastructure as code — using your favorite language — and augmenting it with what we’re calling superpowers.” That means expanding the product’s capabilities from infrastructure provisioning into adjacent problem spaces, including continuous delivery and policy-as-code. This extends the original Pulumi vision beyond just infrastructure, letting developers encapsulate their various infrastructure policies as code as well.
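Pulumi’s policy-as-code feature (CrossGuard) lets those policies be written in the same languages; the sketch below follows the Python policy SDK as I understand it, so treat the exact class names and resource type token as an approximation rather than canonical usage.

```python
# Sketch of a policy pack that forbids publicly readable S3 buckets.
# Class and field names follow the pulumi_policy Python SDK as understood here;
# treat them as an approximation.
from pulumi_policy import EnforcementLevel, PolicyPack, ResourceValidationPolicy

def no_public_s3(args, report_violation):
    # args.resource_type is the Pulumi type token; args.props are the resource inputs.
    if args.resource_type == "aws:s3/bucket:Bucket":
        if args.props.get("acl") in ("public-read", "public-read-write"):
            report_violation("S3 buckets may not be publicly readable.")

PolicyPack(
    name="org-policies",
    enforcement_level=EnforcementLevel.MANDATORY,
    policies=[
        ResourceValidationPolicy(
            name="s3-no-public-acl",
            description="Disallow public ACLs on S3 buckets.",
            validate=no_public_s3,
        ),
    ],
)
```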

Another area is testing. Because Pulumi allows developers to use “real” programming languages, they can apply the same testing techniques they know from application development to the code that builds their underlying infrastructure, catching mistakes before they go into production. They can also use all of their usual development tools to write the code that defines the infrastructure their applications will run on.
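To keep the testing example independent of any particular Pulumi testing helper, the sketch below simply treats infrastructure-defining logic as an ordinary Python function and checks it with pytest; the helper and tests are made up for illustration.

```python
# Ordinary application-style unit testing applied to infrastructure code.
# The helper and tests below are illustrative, not part of Pulumi's SDK.

def bucket_tags(team: str, env: str) -> dict:
    """Build the tag set every bucket must carry."""
    if env not in ("dev", "staging", "prod"):
        raise ValueError(f"unknown environment: {env}")
    return {"team": team, "env": env, "managed-by": "pulumi"}

def test_bucket_tags_include_required_keys():
    tags = bucket_tags("web", "staging")
    assert tags["managed-by"] == "pulumi"
    assert set(tags) == {"team", "env", "managed-by"}

def test_bucket_tags_reject_unknown_environment():
    import pytest
    with pytest.raises(ValueError):
        bucket_tags("web", "sandbox")
```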

“The underlying philosophy is taking our heritage of using the best of what we know and love about programming languages — and really applying that to the entire spectrum of challenges people face when it comes to cloud infrastructure, from development to infrastructure teams to security engineers, really helping the entire organization be more productive working together,” said Duffy. “I think that’s the key: moving from infrastructure provisioning to something that works for the whole organization.”

Duffy also highlighted that many of the company’s larger enterprise users are relying on Pulumi to encode their own internal architectures as code and then roll them out across the company.

“We still embrace what makes each of the clouds special. AWS, Azure, Google Cloud and Kubernetes,” Duffy said. “We’re not trying to be a PaaS that abstracts over all. We’re just helping to be the consistent workflow across the entire team to help people adopt the modern approaches.”

Will China’s coronavirus-related trends shape the future for American VCs?

For the past month, VC investment pace seems to have slacked off in the U.S., but deal activities in China are picking up following a slowdown prompted by the COVID-19 outbreak.

According to PitchBook, “Chinese firms recorded 66 venture capital deals for the week ended March 28, the most of any week in 2020 and just below figures from the same time last year,” (although 2019 was a slow year). There is a natural lag between when deals are made and when they are announced, but still, there are some interesting trends that I couldn’t help noticing.

While many U.S.-based VCs haven’t had a chance to focus on new deals, recent investment trends coming out of China may indicate which shifts might persist after the crisis and what it could mean for the U.S. investor community.


The Complete Guide To Understanding MITRE’s 2020 ATT&CK Evaluation

What is MITRE ATT&CK and Why Does It Matter?

The latest MITRE ATT&CK evaluation has been underway since last summer, and the results will soon be released. The work MITRE is doing to bring a common language to cybersecurity is of monumental value to defenders everywhere. MITRE’s innovative approach to tool effectiveness evaluation has been broadly welcomed in the industry – both among vendors and enterprise customers. At SentinelOne, we have fully embraced our experiences participating in each MITRE ATT&CK evaluation, deeply integrating MITRE’s framework into the design and ongoing innovation of our solution. But what does the evaluation mean to your business, and how can you use it to better understand and use the security tools at your disposal?

In this post, we explain everything you need to know about the latest MITRE evaluation and the current round of tests to help you make the most of the upcoming results.

What is the MITRE ATT&CK Framework?

MITRE describes its framework as “a curated knowledge base and model for cyber adversary behavior, reflecting the various phases of an adversary’s attack lifecycle and the platforms they are known to target.” 

The key words here are phases and behavior. When an adversary has a strategic objective – think data exfiltration or establishing long-term command and control – they will use multiple tactics in phases. Each phase consists of behaviors, which are simply sets of techniques. Techniques, in turn, have varying sets of procedures. Therefore, the end goal comprises an initial tactic with one or more techniques, followed by another tactic with its techniques, and so on until the adversary’s objective is met. This layering of general tactics down to specific procedures is where we get TTP: Tactic, Technique, Procedure.

In MITRE’s ATT&CK framework matrix, tactics are represented in the column headers, techniques in the items listed in each column, and procedures – the detailed implementation of a technique – are described in each entry’s listing.
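Rendered as a data structure, that tactic, technique and procedure layering looks roughly like the sketch below; the entries are illustrative examples drawn from public ATT&CK content, not a complete mapping.

```python
# A rough rendering of the TTP hierarchy described above. The entries are
# illustrative examples from public ATT&CK content, not a complete mapping.
attack_model = {
    "Discovery": {                       # tactic: the adversary's intermediate goal
        "Process Discovery (T1057)": [   # technique: how the goal is pursued
            "run `tasklist` on Windows",                 # procedures: concrete implementations
            "run `ps aux` on Linux or macOS",
        ],
        "System Information Discovery (T1082)": [
            "run `systeminfo` to collect the OS version",
        ],
    },
    "Exfiltration": {
        "Exfiltration Over C2 Channel (T1041)": [
            "bundle stolen files and send them over the existing implant connection",
        ],
    },
}

# Walking the model top-down mirrors how an attack unfolds: tactic, then
# technique, then the specific procedure observed.
for tactic, techniques in attack_model.items():
    for technique, procedures in techniques.items():
        print(tactic, "->", technique, "->", procedures[0])
```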

A Common Language For Threat Actor Behavior

The purpose of MITRE’s ATT&CK framework is to create a modular, common language to describe how threat actors operate so that we, as defenders, can use our detective security controls more effectively. 

To illustrate the point, think of a baker who creates an array of desserts and breads. What kind of baked goods does he tend to produce? How does he go about making them? We might describe a given product in terms of the recipe needed to produce it, which essentially must detail the many techniques, step by step, needed to achieve the desired result. 

For each item the baker produces, there will be a different recipe, but these recipes often have many steps in common. And insofar as the techniques vary here and there, the end results will differ to varying degrees. British scones are very similar to American biscuits, but both are quite different from a French baguette. However, all three have similar preparation steps and ingredients applied in subtly different ways to create differentiated end products. 

By predefining how recipes tend to flow at a high level (tactics) and the baking techniques and procedures used across that flow, we can define a baking model that can be used to define the factors and actions necessary for the creation of all sorts of treats, and maybe even attribute which baker made which particular treat.

MITRE points out that ATT&CK is a “mid-level adversary model”, meaning that it is not too generalized and not too specific. High-level models like the Lockheed Martin Cyber Kill Chain® illustrate adversary goals but aren’t specific about how the goals are achieved. Conversely, exploit and malware databases very specifically define one or two jigsaw pieces in a large puzzle but aren’t necessarily connected to how the bad guys use them or to identifying who the bad guys are. MITRE’s TTP model is the happy medium in between: tactics are intermediate goals and the “why” behind a technique, while techniques and procedures represent how each tactic is achieved.

So How Can MITRE’s Framework Help Defenders?

MITRE’s model represents the attacker’s perspective. It is a representation of how I, as the attacker, go through my process to exfiltrate data from you, the victim. Crucially – and herein lies the real power of integrating the MITRE ATT&CK framework into a security solution like SentinelOne – tactics do not exist in isolation. 


In an attack, each tactic is related to those preceding it and those that follow it. Understanding TTPs in context allows us to create a story that we can tell about a campaign, and consequently offers defenders a far more powerful means of detecting attacks. Integrating the MITRE ATT&CK framework into our detection capabilities means that we can recognize events in our environment which alone may be insignificant – think about the problem of distinguishing Living off the Land techniques from false positives. However, when several “LOL” binaries are executed in a particular sequence, they can be seen as related to each other and understood as a tactic to achieve an adversarial aim. 
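To make the LOLBin example concrete, here is a toy sketch of that kind of sequence correlation: process events that look benign in isolation get flagged when several living-off-the-land binaries run on the same host within a short window. It is a simplified illustration of the idea, not SentinelOne’s detection logic.

```python
# Toy illustration of correlating individually benign events into a suspicious
# sequence. This is a simplified sketch of the idea, not any vendor's logic.
from datetime import datetime, timedelta

LOLBINS = {"certutil.exe", "mshta.exe", "regsvr32.exe", "rundll32.exe", "bitsadmin.exe"}
WINDOW = timedelta(minutes=5)
THRESHOLD = 3  # this many distinct LOLBins within the window looks suspicious

def suspicious_hosts(events):
    """events: iterable of (timestamp, host, process_name), assumed time-sorted."""
    recent = {}   # host -> list of (timestamp, process_name) within the window
    flagged = set()
    for ts, host, proc in events:
        if proc.lower() not in LOLBINS:
            continue
        hits = [h for h in recent.get(host, []) if ts - h[0] <= WINDOW]
        hits.append((ts, proc.lower()))
        recent[host] = hits
        if len({p for _, p in hits}) >= THRESHOLD:
            flagged.add(host)
    return flagged

events = [
    (datetime(2020, 4, 20, 9, 0), "host-1", "certutil.exe"),
    (datetime(2020, 4, 20, 9, 1), "host-1", "mshta.exe"),
    (datetime(2020, 4, 20, 9, 3), "host-1", "regsvr32.exe"),
]
print(suspicious_hosts(events))  # {'host-1'}
```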

How Does MITRE ATT&CK Evaluate Security Products?

Now that we have a clear understanding of the framework and its relevance, let’s look at how the MITRE ATT&CK evaluation tests security vendors’ products. 

The evaluation sets out to emulate an attack from a known real-world APT group. In Round 1, MITRE chose to emulate attacks used by APT3. In this year’s Round 2, they chose APT29. 

Attack emulation sets out to chain together a set of techniques that have been publicly attributed to the adversary in question. For example, if the adversary has been seen using certain privilege escalation and persistence techniques in their campaigns, the emulation may chain those together in the test, even though they may not have been used together in actual real-world attacks. The aim is to put together a complete, logical attack that moves through all the stages of a comprehensive, successful attack from initial compromise to persistence, lateral movement, data exfiltration, and so on. In other words, the emulation doesn’t necessarily follow the actual logic used by the adversary in the wild; it is a constructed logic based on the adversary’s known TTPs. 

The MITRE ATT&CK emulation does not aim to test each and every TTP in the framework; only known TTPs of the chosen adversary are tested.

The environment for the attack emulation involves providing vendors with a “lab” of several virtual machines, protected by the vendor’s products. The evaluation then sets out to penetrate these virtual machines using MITRE ATT&CK framework TTPs that have been seen in the wild used by (in the case of Round 2) APT29. Vendor solutions are awarded various “detections” (such as whether they produced an alert, or logged telemetry) for each MITRE TTP in the test. In the Round 2 evaluation, two attacks were performed over two days, with each attack having 10 stages comprising 70 sub-steps. In total, 140 sub-steps were used in the test.

For example, an adversary may aim to achieve discovery on a system by enumerating process IDs (T1057), gathering the OS version (T1082) and looking for AV and firewall software (T1063). For each sub-step, a vendor may receive one or more detection awards, depending on what information was presented as part of the detection that was recorded in the vendor product. 

The detection awards have the potential to cause the most confusion for anyone trying to consume the MITRE evaluation results. In Round 2, MITRE is using a hierarchical award system for detections, which can roughly be understood as describing detections from “the richest” (at the top) to the “least rich” at the bottom. There are 13 possible categories, split into two types: “Main detection category” and “Detection Modifiers”. The full list appears on MITRE’s website here. However, the most important categories in terms of minimizing dwell time – the time between an attack taking place and a detection occurring – are the main categories Technique, Tactic and General, and the modifier categories “Alert” and “Correlated”. From a defender’s point of view, “Alerts” (priority notifications) are crucial as they can decrease the dwell time rapidly, particularly if the solution has automated response capabilities.

The Technique, Tactic and General categories not only indicate a tool’s ability to detect an attack autonomously (and without human analysis delay) but also serve to indicate how ‘enriched’ the data is. MITRE awards the ‘Technique’ category to a tool when it provides rich data that answers the question of precisely what was done and why. The category of ‘Tactic’ is awarded when the tool provides sufficient information to answer the question of why the detection took place (e.g., a process set up persistence), and ‘General’ is awarded if the tool identifies malicious or abnormal behavior but without sufficient enrichment to answer either the ‘what’ or the ‘why’ questions.
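One practical way to read results against that hierarchy is to rank the main categories from richest to least rich, as in the sketch below; only the categories named in this post are included, and MITRE’s full taxonomy has more.

```python
# One way to read the hierarchy described above when comparing results:
# rank each main detection category from richest to least rich. Only the
# categories named in this post are listed; MITRE's full taxonomy has more.
MAIN_CATEGORY_RICHNESS = {
    "Technique": 3,  # precisely what was done, with full context
    "Tactic": 2,     # why the behavior occurred (e.g. persistence)
    "General": 1,    # malicious/abnormal behavior, but little enrichment
}
MODIFIERS_OF_INTEREST = {"Alert", "Correlated"}  # most relevant to reducing dwell time

def richer(detection_a: str, detection_b: str) -> str:
    """Return whichever main-category detection is richer."""
    return max(detection_a, detection_b, key=lambda c: MAIN_CATEGORY_RICHNESS.get(c, 0))

print(richer("Tactic", "General"))  # Tactic
```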

What’s New in the MITRE ATT&CK 2020 Evaluation?

As noted above, for Round 2 MITRE has refined the detection categories since Round 1 and also chosen TTPs associated with APT29, a Russian state-sponsored threat actor with a history of targeting Western, Asian, African and Middle Eastern governments and organizations. Their recent activity has tended to fall into two categories: large-scale “smash-and-grab” spear-phishing campaigns that attempt to exfiltrate as much data as possible, and smaller targeted campaigns with a focus on stealth and persistence. The MITRE attack emulation aims to represent the first kind of campaign on Day One and the second on Day Two.

In terms of the actual TTPs that are in scope for Round 2, there is some overlap from Round 1. In the image below, yellow represents TTPs that are in scope for the first time with Round 2, purple those that were also in scope during Round 1, and red are TTPs from the earlier round that are no longer in scope. Being ‘in scope’ doesn’t mean the TTP will actually be used, only that it may be included in the attack by the testers.

Why Does the MITRE ATT&CK Evaluation Matter?

There are two general problems for enterprises when evaluating any security solution or tool. First: how can you be confident that it will work during a real attack? Second: how will it work during an attack? What responses will your SOC or IT team see? What will they need to do, and what should they be looking out for? 

Testing security solutions has long been problematic and ill-suited to determining real-world capability. From the original EICAR test to the dedicated third-party testing labs that have been around for some years now, there’s always been a strong disconnect between the artificial test and real-world efficacy. Vendors themselves have long been aware that their customers need both reassurance and training with their products, and they naturally set out to showcase their solutions in situations that best suit their own strengths.

What MITRE brings to the table is unique. First, the evaluation provides independent, non-partisan, and open test criteria and results. Importantly, the test does not seek to rank or judge vendor products against one another. The aim is to show how the product responds to specific stages of an attack. This helps enterprise users understand how the product they have adopted or may be considering adopting is likely to perform in the real world.

Second, with some caveats that we’ll note in a moment, it’s as close to a real-world experience as anything else currently available. By chaining together observed, in-the-wild TTPs and applying these in phases that emulate the behavior of an entire attack lifecycle, consumers get a far richer insight into how a product will perform than they can from testing against a compendium of known and unknown malware samples.

That said, it must be understood that the MITRE ATT&CK evaluation is still an emulation of an attack in artificial conditions. Note, for example, that the lab environment used in the test has no real (or simulated) user activity. The attack is the only ‘noise’ in the environment, and that often makes a big difference as to how a security solution really performs in action. Secondly, the attack only emulates a limited number of TTPs from MITRE’s ATT&CK framework (see the previous section for which TTPs are in scope) and for that reason cannot be considered a way to measure a particular tool’s depth of coverage or behavior against TTPs that were not in scope for this particular test.

Conclusion

With those caveats in mind, however, we eagerly await MITRE’s publication of the Round 2 results in the near future. It is our hope that this blog post will not only enrich your understanding of MITRE’s ATT&CK framework but also highlight how you can use its evaluation results to inform your understanding of the vendors’ products included in this year’s round. Our foundational pointer for absorbing Round 2: follow the hierarchy of detections to understand each product’s capabilities.




Leverice is a team messenger app that’s taking aim at information overload

Meet Leverice: a team messenger and collaboration platform that’s aiming to compete with B2B giants like Slack by tackling an issue that continues to plague real-time messaging — “always-on” information overload. That overload can make these tools feel like they’re eating into productivity as much as aiding it, or leave users stressed and overwhelmed about how to stay on top of the work comms firehose. 

Leverice’s pitch is that it’s been built from the ground up to offer a better triage structure, so vital bits of info aren’t lost in the rushing rivers of chatter that flow across less structured chat platforms.

It does this by giving users the ability to organize chat content into nested subchannels. The theory is that hyper-structured topic channels will let users better direct and navigate info flow, freeing them from the need to check everything or perform lots of searches in order to find key intel. Instead they can just drill down to specific subchannels, tuning out the noise.

The overarching aim is to bring a little asynchronicity to the world of real-time collaboration platforms, per co-founder and COO Daniel Velton.

“Most messaging and collaboration tools are designed for and built around synchronous communications, instant back-and-forth. But most members of remote teams communicate at their own pace — and there was no go-to messaging tool built around asynchronous communications,” he tells TechCrunch.

“We set out to solve that problem, to build a messenger and collaboration platform that breaks rivers down into rivulets. To do that, we needed a tech stack and unique architecture that would allow teams to efficiently work with hundreds of channels and subchannels distributed between scores of channel branches of varying depths. Having that granularity ensures that each little shelf maintains topical integrity.

“We’re not discussing Feature 2.1.1 and 2.1.2 and 2.1.3 and 2.1.4 inside a single ‘Features’ channel, where the discussions would blend together. Each has its own little home.”

Of course Slack isn’t blind to the info-overload issues its platform can generate. Last month it announced “a simpler, more organized Slack”, which includes the ability for users to organize channels, messages and apps into “custom, collapsible sections”. Aka folders.

So how is Leverice’s subchannel architecture a great leap forward on the latest version of Slack — which does let users organize themselves (and is now in the process of being rolled out across its user-base)?

“All structuring (including folders) on other popular messengers is essentially an individual preference setting,” says Velton. “It does not reflect on a teamwide channel tree. It’s definitely a step in the right direction but it’s about each user adding a tiny bit of structure to their own private interface, not having a structure that affects and improves the way an entire team communicates.

“Leverice architecture is based on structuring of channels and subchannels into branches of unlimited depth. This kind of deep structuring is not something you can simply ‘overlay’ on top of an existing messenger that was designed around a single layer of channels. A tremendous number of issues arise when you work with a directory-like structure of infinite depth, and these aren’t easily solved or addressed unless the architecture is built around it.”

“Sure, in Leverice you can build the ‘6-lane autobahns’,” he adds, using an analogy of vehicle traffic on roads to illustrate the concept of a hierarchy of topic channels. “But we are the only messenger where you can also construct a structured network of ‘country roads’. It’s more ‘places’ but each ‘place’ is so narrow and topical that working through it all becomes more manageable, quick and pleasant, and it’s something you can do at your own pace without fear of missing important kernels of information as they fly by on the autobahn.”

To be clear, while Slack has now started letting users self-organize — by creating a visual channel hierarchy that suits them — Leverice’s structure means the same structured tree of channels/subchannels applies for the whole team.

“At the end of the day, for communications to work, somebody on a team needs to be organized,” argues Velton. “What we allow is structuring that affects the channel tree for an entire team, not just an individual preference that reflects only on a user’s local device.”
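Leverice hasn’t published its internals, but the team-wide channel tree Velton describes can be pictured as an ordinary nested structure of arbitrary depth; the sketch below is purely illustrative and is not Leverice’s actual data model.

```python
# Purely illustrative sketch of a team-wide channel tree of arbitrary depth,
# the kind of structure Velton describes. Not Leverice's actual data model.
class Channel:
    def __init__(self, name):
        self.name = name
        self.subchannels = {}

    def add(self, path):
        """Create (or reuse) nested subchannels along a path like 'Features/2.1/2.1.1'."""
        node = self
        for part in path.split("/"):
            node = node.subchannels.setdefault(part, Channel(part))
        return node

    def walk(self, depth=0):
        print("  " * depth + "#" + self.name)
        for child in self.subchannels.values():
            child.walk(depth + 1)

root = Channel("acme-team")
root.add("Features/2.1/2.1.1")
root.add("Features/2.1/2.1.2")
root.add("Releases/Q2")
root.walk()
```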

Leverice has other features in the pipeline which it reckons will further help users cut through the noise — with a plan to apply AI-powered prioritization to surface the most pressing inbound comms.

There will also be automated alerts for conversation forks when new subchannels are created. (Though generating lots of subchannel alerts doesn’t sound exactly noise-free…)

“We have features coming that alert users to forks in a conversation and nudge the user toward those new subchannels. At this stage those forks are created manually, although our upcoming AI module will have nudges based on those forks,” says Velton.

“The architecture (deep structuring) also opens the door to scripting of automated workflows and open source plug-ins,” he adds.

Leverice officially launched towards the end of February after a month-long beta which coincided with the coronavirus-induced spike in remote work.

At this stage they have “members of almost 400 teams” registered on the platform, per Velton, with initial traction coming from mid-size tech companies — who he says are either unhappy with the costs of their current messaging platform or with the distraction and burnout caused by “channel fatigue”, or who are facing info fragmentation because internal teams use different p2p/messaging tools and lack a universal choice.

“We have nothing but love and respect for our competitors,” he adds. “Slack, Teams, WhatsApp, Telegram, Skype, Viber, etc.: each have their own benefits and many teams are perfectly content to use them. Our product is for teams looking for more focus and structure than existing solutions offer. Leverice’s architecture is unique on the market, and it opens the door to powerful features that are neither technically nor practically feasible in a messenger with a single layer containing a dozen or two dozen channels.”

Other differentiating features he highlights as bringing something fresh to the team messaging platform conversation are a whiteboard that lets users collaborate in the app to brainstorm or list ideas and priorities; and a Jira integration for managing and discussing tasks in the project- and issue-tracking tool. The team is planning further integrations, including with Zoom, Google Docs and “other services you use most”.

The startup — which was founded by CEO Rodion Zhitomirsky in Minsk but is now headquartered in San Jose, California, with additional offices in Munich, Germany — has been bootstrapping development for around two years, taking in angel investment of around $600,000.

“We are three friends who managed complex project-based teams and personally felt the pains of all the popular messengers out there,” says Velton, discussing how they came to set up the business. “We used all the usual suspects, and even tried using p2p messengers as substitutes. They all led us and our teams to the same place: we couldn’t track large amounts of communications unless we were in “always-on” mode. We knew there had to be a better way, so we set out to build Leverice.”

The third co-founder is Dennis Dokutchitz.

Leverice’s business model is freemium, with a free tier, a premium tier, and a custom enterprise tier. As well as offering the platform as SaaS via the cloud, they do on-premise installations — for what Velton describes as “the highest level of security and privacy”.

On the security front the product is not end-to-end encrypted but he says the team is developing e2e encrypted channels to supplement the client-server encryption it applies as standard.

Velton notes these forthcoming channels would not support the usual search features, while AI analysis would be limited to “meta-information analysis”, i.e. excluding posts’ content.

“We don’t process customer or message data for commercial purposes, only for internal analytics and features to improve the product for users,” he adds when asked about any additional uses made of customer data. (Leverice’s Privacy Policy can be found here.)

With remote work the order of the day across most of the globe because of the COVID-19 pandemic, it seems likely there will be a new influx of collaboration tools being unboxed to help home workers navigate a new ‘professionally distant’ normal.

“We’ve only been on the market for 6 weeks and have no meaningful revenue to speak of as of yet,” adds Velton.

Verizon’s BlueJeans acquisition is about more than the work-from-home trend

It would be easy to assume that Verizon’s purchase last week of video-conferencing tool BlueJeans was an opportunistic move to capitalize on the sudden shift to remote work, but the ball began rolling last June and has implications far beyond current work-from-home requirements.

The video-chat darling of the moment is Zoom, but BlueJeans is considered by many to be the enterprise tool of choice. The problem, it seems, is that it had grown as far as it could on its own and went looking for a larger partner to help it reach the next level.

BlueJeans started working with Verizon (which owns this publication) as an authorized reseller before the talks turned toward a deeper relationship that culminated in the acquisition. Assuming the deal passes regulatory scrutiny, Verizon will use its emerging 5G technology to produce much more advanced video-conferencing scenarios.

We spoke to the principals involved in this deal and several industry experts to get a sense of where this could lead. As with any large company buying a startup, outcomes are uncertain; sometimes the acquired company gets lost in the larger corporate bureaucracy, and sometimes additional resources will help grow the company much faster than it could have on its own.

What is BlueJeans?

Who’s Behind the “Reopen” Domain Surge?

The past few weeks have seen a large number of new domain registrations beginning with the word “reopen” and ending with U.S. city or state names. The largest number of them were created just hours after President Trump sent a series of all-caps tweets urging citizens to “liberate” themselves from new gun control measures and state leaders who’ve enacted strict social distancing restrictions in the face of the COVID-19 pandemic. Here’s a closer look at who and what appear to be behind these domains.

A series of inciteful tweets sent by President Trump on April 17, the same day dozens of state-themed “reopen” domains were registered — mostly by conservative groups and gun rights advocates.

KrebsOnSecurity began this research after reading a fascinating Reddit thread over the weekend on several “reopen” sites that seemed to be engaged in astroturfing, which involves masking the sponsors of a message or organization to make it appear as though it originates from and is supported by grassroots participants.

The Reddit discussion focused on a handful of new domains — including reopenmn.com, reopenpa.com, and reopenva.com — that appeared to be tied to various gun rights groups in those states. Their registrations have roughly coincided with contemporaneous demonstrations in Minnesota, California and Tennessee where people showed up to protest quarantine restrictions over the past few days.

A “reopen California” protest over the weekend in Huntington Beach, Calif. Image: Reddit.

Suspecting that these were but a subset of a larger corpus of similar domains registered for every state in the union, KrebsOnSecurity ran a domain search report at DomainTools [an advertiser on this site], requesting any and all domains registered in the past month that begin with “reopen” and end in “.com.”

That lookup returned approximately 150 domains; in addition to those named after the individual 50 states, some of the domains refer to large American cities or counties, and others to more general concepts, such as “reopeningchurch.com” or “reopenamericanbusiness.com.”

Many of the domains are still dormant, leading to parked pages and registration records obscured behind privacy protection services. But a review of other details about these domains suggests a majority of them are tied to various gun rights groups, state Republican Party organizations, and conservative think tanks, religious and advocacy groups.

For example, reopenmn.com forwards to minnesotagunrights.org, but the site’s WHOIS registration records (obscured since the Reddit thread went viral) point to an individual living in Florida. That same Florida resident registered reopenpa.com, a site that forwards to the Pennsylvania Firearms Association, and urges the state’s residents to contact their governor about easing the COVID-19 restrictions.

Reopenpa.com is tied to a Facebook page called Pennsylvanians Against Excessive Quarantine, which sought to organize an “Operation Gridlock” protest at noon today in Pennsylvania among its 68,000 members.

Both the Minnesota and Pennsylvania gun advocacy sites include the same Google Analytics tracker in their source code: UA-60996284. A cursory Internet search on that code shows it also is present on reopentexasnow.com, reopenwi.com and reopeniowa.com.

More importantly, the same code shows up on a number of other anti-gun control sites registered by the Dorr Brothers, real-life brothers who have created nonprofits (in name only) across dozens of states that are so extreme in their stance they make the National Rifle Association look like a liberal group by comparison.
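Checking whether a set of sites shares an analytics ID is easy to reproduce; the sketch below is illustrative only, using the tracker ID and domains named in this story.

```python
# Illustrative sketch of checking sites for a shared Google Analytics ID.
# The tracker ID and domains come from the story; the fetching code is generic.
import requests

TRACKER = "UA-60996284"
domains = ["reopenmn.com", "reopenpa.com", "reopentexasnow.com", "reopenwi.com", "reopeniowa.com"]

for domain in domains:
    try:
        html = requests.get(f"http://{domain}", timeout=10).text
    except requests.RequestException as exc:
        print(domain, "unreachable:", exc)
        continue
    print(domain, "shares tracker" if TRACKER in html else "no match")
```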

This 2019 article at cleveland.com quotes several 2nd Amendment advocates saying the Dorr brothers simply seek “to stir the pot and make as much animosity as they can, and then raise money off that animosity.” The site dorrbrotherscams.com also is instructive here.

A number of other sites — such as reopennc.com — seem to exist merely to sell t-shirts, decals and yard signs with such slogans as “Know Your Rights,” “Live Free or Die,” and “Facts not Fear.” WHOIS records show the same Florida resident who registered this North Carolina site also registered one for New York — reopenny.com — just a few minutes later.

Merchandise available from reopennc.com.

Some of the concept reopen domains — including reopenoureconomy.com (registered Apr. 15) and reopensociety.com (Apr. 16) — trace back to FreedomWorks, a conservative group that the Associated Press says has been holding weekly virtual town halls with members of Congress, “igniting an activist base of thousands of supporters across the nation to back up the effort.”

Reopenoc.com — which advocates for lifting social restrictions in Orange County, Calif. — links to a Facebook page for Orange County Republicans, and has been chronicling the street protests there. The messaging on Reopensc.com — urging visitors to digitally sign a reopen petition to the state governor — is identical to the message on the Facebook page of the Horry County, SC Conservative Republicans.

Reopenmississippi.com was registered on April 16 to In Pursuit of LLC, an Arlington, Va.-based conservative group with a number of former employees who currently work at the White House or in cabinet agencies. A 2016 story from USA Today says In Pursuit Of LLC is a for-profit communications agency launched by billionaire industrialist Charles Koch.

Many of the reopen sites that have redacted names and other information about their registrants nevertheless hold other clues, mainly based on precisely when they were registered. Each domain registration record includes a date and timestamp down to the second that the domain was registered. By grouping the timestamps for domains that have obfuscated registration details and comparing them to domains that do include ownership data, we can infer more information.
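That grouping step can be approximated in a few lines of code: sort registrations by creation timestamp and bucket together domains created within a short gap of each other. The sketch below assumes you already have (domain, timestamp) pairs from a WHOIS or DomainTools export; the timestamps shown are placeholders, not the actual registration times.

```python
# A sketch of the grouping described above: cluster domains whose WHOIS
# creation timestamps fall within a short gap of each other. Assumes
# (domain, timestamp) pairs already exported from a WHOIS/DomainTools report.
# Timestamps below are placeholders, not actual registration times.
from datetime import datetime, timedelta

MAX_GAP = timedelta(minutes=5)  # registrations this close together likely share an origin

def cluster_by_registration_time(records):
    if not records:
        return []
    records = sorted(records, key=lambda r: r[1])
    clusters, current = [], [records[0]]
    for domain, ts in records[1:]:
        if ts - current[-1][1] <= MAX_GAP:
            current.append((domain, ts))
        else:
            clusters.append(current)
            current = [(domain, ts)]
    clusters.append(current)
    return clusters

records = [
    ("reopenmn.com", datetime(2020, 4, 8, 14, 0, 5)),
    ("reopenpa.com", datetime(2020, 4, 8, 14, 0, 9)),
    ("reopenoureconomy.com", datetime(2020, 4, 15, 10, 22, 0)),
]
for group in cluster_by_registration_time(records):
    print([d for d, _ in group])
```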

For example, more than 50 reopen domains were registered within an hour of each other on April 17 — between 3:25 p.m. ET and 4:43 p.m. ET. Most of these lack registration details, but a handful of them did (until the Reddit post went viral) include the registrant name Michael Murphy, the same name tied to the aforementioned Minnesota and Pennsylvania gun rights domains (reopenmn.com and reopenpa.com) that were registered within seconds of each other on April 8.

A large number of “reopen” domains were registered within the same one-hour period on April 17, and tie back to the same name used in the various reopen domains connected to gun rights groups. A link to the spreadsheet where this screen shot is drawn from is included below.

A Google spreadsheet documenting much of the domain information sourced in this story is available here.

No one responded to the email addresses and phone numbers tied to Mr. Murphy, who may or may not have been involved in this domain registration scheme. Those contact details suggest he runs a store in Florida that makes art out of reclaimed or discarded items, and that he operates a Web site design company in Florida.

However, various social media profiles tied to Mr. Murphy’s contact details suggest this persona may not present a complete picture. A Twitter account tied to Murphy’s email address promoted nothing but spammy paid surveys for years. And a Skype lookup on his phone number curiously returns a Russian profile under the name валентина сынах (translated as “Valentine Sons”).

As much as President Trump likes to refer to stories critical of him and his administration as “fake news,” this type of astroturfing is not only dangerous to public health, but it’s reminiscent of the playbook used by Russia to sow discord, create phony protest events, and spread disinformation across America in the lead-up to the 2016 election.

This entire astroturfing campaign also brings to mind a “local news” network called Local Government Information Services (LGIS), an organization founded in 2018 which operates a huge network of hundreds of sites that purport to be local news sites in various states. However, most of the content is generated by automated computer algorithms that consume data from reports released by U.S. executive branch federal agencies.

The relatively scarce actual bylined content on these LGIS sites is authored by freelancers who are in most cases nowhere near the localities they cover. Other content not drawn from government reports often repurposes press releases from conservative Web sites, including gunrightswatch.com, taxfoundation.org, and The Heritage Foundation. For more on LGIS, check out the 2018 coverage from The Chicago Tribune and the Columbia Journalism Review.