Facebook’s Workplace hits 3M paying users, launches Portal app in a wider push for video

The rapid rise of Slack — which has recently broken the 100,000 mark for paying businesses using its service — has ushered in a rush of competition from other companies across the worlds of social media and enterprise software, all aiming to become the go-to conversation layer for businesses. Today, Workplace, Facebook’s effort in that race, announced a milestone in its growth, along with a bigger push into video services and other improvements.

The service — which starts at $1.50 per user per month for front-line workers, with higher tiers at $4 and $8 — has now passed 3 million paying users, adding 1 million workers, mostly from enterprise businesses, in the last eight months.

And to capitalize on Facebook’s growing focus on video in its consumer service, Workplace is announcing several steps of its own into video. It’s releasing a dedicated app for Portal, Facebook’s video screen; and alongside that, it’s announcing new video features: automatic captions at the bottom of videos; auto-translation, initially in 14 languages; and a new P2P architecture that will speed up video delivery for those watching Workplace videos in places where bandwidth is constrained.

The features and milestone number are all being announced today at Flock, the Workplace user conference that Facebook puts on each year. Alongside all these, Facebook also announced several other features for its enterprise app (more on the other new features below).

The push to video comes at an interesting time for Workplace on the competitive front. Karandeep Anand, who came to Facebook from Microsoft and currently heads up Workplace, with Julien Codorniou managing business development, has made a point of differentiating Workplace from others in the field of workplace collaboration by emphasizing how very large enterprises like Walmart (the world’s largest single employer) use it to bring not just white-collar knowledge workers but also frontline workers onto a single communication platform.

The company says that today, its customers include 150 companies with over 10,000 active users apiece, with other names on its books including Starbucks, Spotify, AstraZeneca, Deliveroo and Kering.

The push to video follows that trajectory: it’s a way for Workplace (and Facebook) to differentiate the experience and use cases of the product for businesses that might already be using Slack but might consider buying this as well, if not migrating away from the other product altogether. (Microsoft Teams is a different ballgame, of course, as it has a strong video component of its own and also positions itself as a product for all kinds of employees.)

Workplace’s video efforts mark the first time that Facebook is positioning Portal as a product for businesses. This is notable when you consider that there has been some adoption of Amazon’s Alexa in workplace scenarios, too, and that there has been some pushback from consumers about the prospect of having a Facebook video device sitting in their homes. This gives Facebook’s $179 hardware (which will be sold at the same price to businesses) a new avenue for sales.

Video has been a cornerstone of how Workplace has been developing for a while now, with companies using it as a way for, say, the big boss to send out more personalised communications to workers, and for people in workgroups to create video chats with each other. A dedicated screen for video chats takes this idea to the next level, and plays on the fact that video conferencing services like Zoom have caught on like wildfire in modern offices, where people who work together often work in disparate locations.

There is another way that Portal could find some traction with businesses: videoconferencing solutions tend to be very expensive, in part because of the hefty hardware investments that need to be made. Offering a device at $179 drastically undercuts that investment. Codorniou declined to comment on whether Facebook might make a more concerted effort to push this as a cost-effective videoconferencing alternative down the line, but he did point out that today Facebook and Zoom have a close relationship.

The other video features that Workplace is announcing today will further enhance the experience: Facebook will now give users the option to include automatic captions at the bottoms of videos, with the bonus of translation, initially in 14 languages. And the improved video quality for those with limited bandwidth is, notably, not something that Facebook has rolled out in its consumer app: the aim is to improve broadcast quality in scenarios where bandwidth might not be strong but many people are watching the same event simultaneously — something you could imagine applying, say, at a company all-hands or town hall with remote participants.
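Facebook hasn’t published details of the new P2P architecture, but the general idea behind peer-assisted delivery is easy to sketch. The numbers and topology below are illustrative assumptions, not Workplace’s actual design: the point is simply that if nearby viewers share video chunks with each other, the origin server (and the office’s internet link) carries far less traffic.

```python
# Toy model of peer-assisted (P2P) video delivery on a constrained network.
# Illustrative only: the real Workplace architecture is not public.

def origin_load_unicast(viewers: int, chunk_kb: int) -> int:
    """Every viewer pulls each chunk straight from the origin server."""
    return viewers * chunk_kb

def origin_load_peer_assisted(viewers: int, chunk_kb: int, seeds: int) -> int:
    """Origin sends each chunk only to a few 'seed' viewers on the local
    network; the remaining viewers fetch it from those peers instead."""
    return min(viewers, seeds) * chunk_kb

viewers, chunk_kb, seeds = 200, 512, 4  # e.g. an all-hands stream in one office
unicast = origin_load_unicast(viewers, chunk_kb)
p2p = origin_load_peer_assisted(viewers, chunk_kb, seeds)
print(f"origin egress per chunk: {unicast} KB unicast vs {p2p} KB peer-assisted")
```

In this hypothetical office of 200 viewers, the origin serves each chunk four times instead of 200 — the remaining transfers stay on the local network, which is exactly where bandwidth to the outside world is the bottleneck.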

Alongside all of these video features, Workplace is adding in a host of other features to expand the use cases for the product beyond basic chatting:

  • New learning product. This is not about e-learning per se; rather, Workplace now offers a way for HR teams to add onboarding tutorials and videos to Workplace for new employees or new services at the company. There are no plans right now to expand this to educational content, Codorniou said.
  • Surveys are also coming to Workplace. These will be set by administrators — not by any worker at any time — and for now there will be no anonymity, which makes it unlikely that they will cover sensitive topics and may in any case have a chilling effect on how freely people respond.
  • Frontline access is getting overhauled in Workplace, where people who do not use company email addresses will now be able to create accounts using generated codes.
  • Admins trying to track how well Workplace is actually working for them will also be able to follow engagement and other metrics on the platform.

Workplace is also adding gamification features to the platform: people can publicly thank colleagues, set and follow workplace goals, and award badges to individuals for achievements in areas like sales, anniversaries or other positive milestones.

As with the video features, the idea is to bring services to Workplace that you don’t necessarily get in Slack and other competing products. That holds even when the features replicate ones you might have seen elsewhere — just not all in one consolidated place.

Asked what he thought about the claims that Facebook is too much of a “copycat” when it comes to building new features, Codorniou was defensive. “I think Workplace itself is getting to a market that has been untouched before. When it comes to badges or goals, for example, yes, people have built these before, but the difference is that we are offering them to a wide network of people. If you have to use a separate app, it’s not a great experience.”

And, he added, “everything that we ship is the result of customer feedback and requests. If they tell us they want these, it means they’re not finding what they needed on the market.”

Nadella warns government conference not to betray user trust

Microsoft CEO Satya Nadella, delivering the keynote at the Microsoft Government Leaders Summit in Washington, DC today, had a message for attendees: maintain user trust in their tools and technologies above all else.

He said it is essential to earn user trust, regardless of your business. “Now, of course, the power law here is all around trust because one of the keys for us, as providers of platforms and tools, trust is everything,” he said today. But he says it doesn’t stop with the platform providers like Microsoft. Institutions using those tools also have to keep trust top of mind or risk alienating their users.

“That means you need to also ensure that there is trust in the technology that you adopt, and the technology that you create, and that’s what’s going to really define the power law on this equation. If you have trust, you will have exponential benefit. If you erode trust it will exponentially decay,” he said.

He says Microsoft sees trust along three dimensions: privacy, security and ethical use of artificial intelligence. All of these come together in his view to build a basis of trust with your customers.

Nadella said he sees privacy as a human right, pure and simple, and it’s up to vendors to ensure that privacy or lose the trust of their customers. “The investments around data governance [are] what’s going to define whether you’re serious about privacy or not,” he said. For Microsoft, that means looking at how transparent it is about how it uses data, its terms of service, and how it uses technology to ensure that’s being carried out at runtime.

He reiterated the call he made last year for a federal privacy law. With GDPR in Europe and California’s CCPA coming online in January, he sees a centralized federal law as a way to streamline regulations for business.

As for security, as you might expect, he defined it in terms of how Microsoft was implementing it, but the message was clear that you needed security as part of your approach to trust, regardless of how you implement that. He asked several key questions of attendees.

“Cyber is the second area where we not only have to do our work, but you have to [ask], what’s your operational security posture, how have you thought about having the best security technology deployed across the entire chain, whether it’s on the application side, the infrastructure side or on the endpoint side, and most importantly, around identity,” Nadella said.

The final piece, one which he said was just coming into play, was how you use artificial intelligence ethically, a sensitive topic for a government audience, but one he wasn’t afraid to broach. “One of the things people say is, ‘Oh, this AI thing is so unexplainable, especially deep learning.’ But guess what, you created that deep learning [model]. In fact, the data on top of which you train the model, the parameters and the number of parameters you use — a lot of things are in your control. So we should not abdicate our responsibility when creating AI,” he said.

Whether Microsoft or the U.S. government can adhere to these lofty goals is unclear, but Nadella was careful to outline them both for his company’s benefit and this particular audience. It’s up to both of them to follow through.

Arm brings custom instructions to its embedded CPUs

At its annual TechCon event in San Jose, Arm today announced Custom Instructions, a new feature of its Armv8-M architecture for embedded CPUs that, as the name implies, enables its customers to write their own custom instructions to accelerate their specific use cases for embedded and IoT applications.

“We already have ways to add acceleration, but not as deep and down to the heart of the CPU. What we’re giving [our customers] here is the flexibility to program your own instructions, to define your own instructions — and have them executed by the CPU,” Thomas Ensergueix, Arm’s senior director for its automotive and IoT business, told me ahead of today’s announcement.

He noted that Arm has always offered a continuum of options for acceleration, starting with its memory-mapped architecture for connecting GPUs — and, today, neural processing units — over a bus. This allows the CPU and the accelerator to run in parallel, but with the bus as the bottleneck. Customers can also opt for a co-processor that’s directly connected to the CPU, but today’s news essentially allows Arm customers to create their own accelerated algorithms that then run directly on the CPU. That means latency is low, but the work doesn’t run in parallel, as it does with the memory-mapped solution.
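The trade-off Ensergueix describes — parallelism at the cost of bus latency, versus low-latency serial execution inside the pipeline — can be made concrete with a back-of-the-envelope model. The cycle counts below are invented for illustration; real figures depend entirely on the silicon and the workload.

```python
# Back-of-the-envelope comparison of the two acceleration paths described
# above. All cycle counts are assumed values for illustration only.

def memory_mapped(ops: int, bus_round_trip: int, accel_cycles_per_op: int) -> int:
    # CPU and accelerator can overlap work, but the operands must cross
    # the bus, so the round-trip cost dominates small workloads.
    return bus_round_trip + ops * accel_cycles_per_op

def custom_instruction(ops: int, cycles_per_op: int) -> int:
    # The operation executes inside the CPU pipeline: no bus hop,
    # but nothing runs in parallel with the CPU either.
    return ops * cycles_per_op

for ops in (4, 100_000):
    mm = memory_mapped(ops, bus_round_trip=200, accel_cycles_per_op=1)
    ci = custom_instruction(ops, cycles_per_op=2)
    winner = "memory-mapped" if mm < ci else "custom instruction"
    print(f"{ops} ops: {winner} wins ({mm} vs {ci} cycles)")
```

With these made-up numbers, the custom instruction wins easily on a handful of operations (no 200-cycle bus round trip to amortize), while the memory-mapped accelerator wins on a bulk workload — which is why Arm frames custom instructions as one point on a continuum rather than a replacement for the other paths.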


As Arm argues, this setup allows for the lowest-cost (and risk) path for integrating customer workload acceleration, as there are no disruptions to the existing CPU features and it still allows its customers to use the existing standard tools with which they are already familiar.

For now, custom instructions will only be available to be implemented in the Arm Cortex-M33 CPUs, starting in the first half of 2020. By default, it’ll also be available for all future Cortex-M processors. There are no additional costs or new licenses to buy for Arm’s customers.

Ensergueix noted that as we move to a world with more and more connected devices, more of Arm’s customers will want to optimize their processors for their often very specific use cases — and often they’ll want to do so because, by creating custom instructions, they can get a bit more battery life out of these devices, for example.

Arm has already lined up a number of partners to support Custom Instructions, including IAR Systems, NXP, Silicon Labs and STMicroelectronics.

“Arm’s new Custom Instructions capabilities allow silicon suppliers like NXP to offer their customers a new degree of application-specific instruction optimizations to improve performance, power dissipation and static code size for new and emerging embedded applications,” writes NXP’s Geoff Lees, SVP and GM of Microcontrollers. “Additionally, all these improvements are enabled within the extensive Cortex-M ecosystem, so customers’ existing software investments are maximized.”

In related embedded news, Arm also today announced that it is setting up a governance model for Mbed OS, its open-source operating system for embedded devices that run an Arm Cortex-M chip. Mbed OS has always been open source, but the Mbed OS Partner Governance model will allow Arm’s Mbed silicon partners to have more of a say in how the OS is developed through tools like a monthly Product Working Group meeting. Partners like Analog Devices, Cypress, Nuvoton, NXP, Renesas, Realtek, Samsung and u-blox are already participating in this group.

What is Deepfake? (And Should You Be Worried?)

We hear a lot about Artificial Intelligence and Machine Learning being used for good in the Cybersecurity world – detecting and responding to malicious and suspicious behavior to safeguard the enterprise – but like many technologies, AI can be used for bad as well as for good. One area which has received increasing attention in the last couple of years is the ability to create ‘Deepfakes’ of audio and video content using generative adversarial networks or GANs. In this post, we take a look at Deepfake and ask whether we should be worried about it.


What is Deepfake?

A Deepfake is the use of machine (“deep”) learning to produce fake media content – typically a video with or without audio – that has been doctored or fabricated to make it appear that some person or persons did or said something that in fact they did not.

Going beyond older techniques of achieving the same effect such as advanced audio-video editing and splicing, Deepfake takes advantage of computer processing and machine learning to produce realistic video and audio content depicting events that never happened. Currently, this is more or less limited to “face swapping”: placing the head of one or more persons onto other people’s bodies and lip-syncing the desired audio. Nevertheless, the effects can be quite stunning, as seen in this Deepfake of Steve Buscemi faceswapped onto the body of Jennifer Lawrence.


It all started in 2017, when Reddit user ‘deepfakes’ produced sleazy Deep Fakes of celebrities engaged in sex acts by transposing the faces of famous people onto the bodies of actors in adult movies. Before long, many people were posting similar videos, until Reddit banned such content and the subreddit entirely.


Of course, that was only the beginning. Once the technology had been pried out of the hands of academics and turned into usable code that didn’t require a deep understanding of Artificial Intelligence concepts and techniques, many others were able to start playing around with it. Political figures, actors and other public figures soon found themselves ‘faceswapped’ and appearing all over YouTube and other video platforms. Reddit itself contains other, non-pornographic Deep Fakes, such as the ‘Safe For Work’ Deepfakes subreddit r/SFWdeepfakes.

How Easy Is It To Create Deep Fakes?

While not trivial, it’s not that difficult either for anyone with average computer skills. As we have seen, where once it would have required vast resources and skills available only to a few specialists, there are now tools available on GitHub with which anyone can easily experiment and create their own Deep Fakes using off-the-shelf computer equipment.

At a minimum, you can do it with a few YouTube URLs. As Gaurav Oberoi explained in this post last year, anyone can use an automated tool that takes a list of half a dozen or so YouTube videos of a few minutes each and extracts the frames containing the faces of the target and the substitute.

The software will inspect various attributes of the original such as skin tone, the subject’s facial expression, the angle or tilt of the head and the ambient lighting, and then try to reconstruct the image with the substitute’s face in that context. Early versions of the software could take days or even weeks to churn through all these details, but in Oberoi’s experiment, it only took around 72 hours with an off-the-shelf GPU (an NVIDIA GTX 1080 Ti) to produce a realistic swap.

Although less sophisticated techniques can be just as devastating — as witnessed recently in the doctored video of CNN journalist Jim Acosta and the fake “drunk” Nancy Pelosi video — real Deep Fakes use a method called generative adversarial networks (GANs). This involves two competing machine learning algorithms: one produces the fake image, while the other tries to detect it. When a detection is made, the first AI improves its output to get past the detection; the second AI must then improve its decision-making to spot the fakery. This process is iterated many times, until the Deep Fake-producing AI beats the detection AI or produces an image that the creator feels is good enough to fool human viewers.
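The adversarial loop can be sketched in miniature. This is a deliberately toy illustration — real GANs pit two neural networks against each other and train them with gradient descent — but the back-and-forth dynamic is the same: the generator only improves when the discriminator catches it, and the discriminator only tightens when it gets fooled.

```python
# Toy adversarial loop in the spirit of a GAN. The "generator" here is a
# single number and the "discriminator" a single threshold, purely to show
# the iterative contest; no real machine learning is involved.

REAL_VALUE = 10.0  # stand-in for "the statistics of genuine footage"

def discriminator(sample: float, threshold: float) -> bool:
    """Flags a sample as fake if it deviates too far from the real data."""
    return abs(sample - REAL_VALUE) > threshold

fake, threshold = 0.0, 5.0
for _ in range(50):
    if discriminator(fake, threshold):
        fake += (REAL_VALUE - fake) * 0.5   # caught: generator improves its output
    else:
        threshold *= 0.8                    # fooled: discriminator tightens its test

print(f"final fake={fake:.4f}, detection threshold={threshold:.4f}")
```

After a few dozen rounds the fake sample sits almost on top of the real one and the detector’s tolerance has shrunk to nearly nothing — the arms-race dynamic that makes GAN output so hard to distinguish from the real thing.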

Access to GANs is no longer restricted to those with huge budgets and supercomputers.


How Do Deep Fakes Affect Cybersecurity?

Deep Fakes are a new twist on an old ploy: media manipulation. From the old days of splicing audio and video tapes to using Photoshop and other editing suites, GANs offer us a new way to play with media. Even so, we’re not convinced that they open up a specifically new channel to threat actors, and articles like this one strain credibility when they exaggerate the connection between Deep Fakes and conventional phishing threats.

Of course, Deep Fakes do have the potential to catch a lot of attention, circulate and go viral as people marvel at fake media portraying some unlikely turn of events: a politician with slurred speech, a celebrity in a compromising position, controversial quotes from a public figure and the like. By creating content with the ability to attract large amounts of shares, it’s certainly possible that hackers could utilize Deep Fakes in the same way as other phishing content by luring people into clicking on something that has a malicious component hidden inside, or stealthily redirects users to malicious websites while displaying the content. But as we already noted above with the Jim Acosta and Nancy Pelosi fakes, you don’t really need to go deep to achieve that effect.

The one thing we know about criminals is that they are not fond of wasting time and effort on complicated methods when perfectly good and easier ones abound. There’s no shortage of people falling for all the usual, far simpler phishing bait that’s been circulating for years and is still clearly highly effective. For that reason, we don’t see Deep Fakes as a particularly severe threat for this kind of cybercrime for the time being.

That said, be aware that there have been a small number of reported cases of Deep Fake voice fraud in attempts to convince company employees to wire money to fraudulent accounts. This appears to be a new twist on the business email compromise phishing tactic, with the fraudsters using Deep Fake audio of a senior employee issuing instructions for a payment to be made. It just shows that criminals will always experiment with new tactics in the hope of a payday and you can never be too careful.

Perhaps of greater concern are uses of Deepfake content in personal defamation attacks, attempts to discredit the reputations of individuals, whether in the workplace or personal life, and the widespread use of fake pornographic content. So-called “revenge porn” can be deeply distressing even when it is widely acknowledged as fake. The possibility of Deep Fakes being used to discredit executives or businesses by competitors is also not beyond the realms of possibility. Perhaps the likeliest threat, though, comes from information warfare during times of national emergency and political elections – here comes 2020! – with such events widely thought to be ripe for disinformation campaigns using Deepfake content.

How Can You Spot A Deep Fake?

Depending on the sophistication of the GAN used and the quality of the final image, it may be possible to spot flaws in a Deep Fake in the same way that close inspection can often reveal sharp contrasts, odd lighting or other disjunctions in “photoshopped” images. However, generative adversarial networks have the capacity to produce extremely high quality images that perhaps only another, generative adversarial network might be able to detect. Since the whole idea of using a GAN is to ultimately defeat detection by another generative adversarial network, that too may not always be possible.


By far the best judge of fake content, however, is our ability to look at things in context. Individual events or artefacts like video and audio recordings may be – or become – indistinguishable from the real thing in isolation, but detection is a matter of judging something in light of other evidence. To take a trivial example, it would take more than a video of a flying horse to convince us that such animals really exist. We should want not only independent verification (such as a video from another source) but also corroborating evidence. Who witnessed it take off? Where did it land? Where is it currently located? And so on.

We need to be equally circumspect when viewing consumable media, particularly when it makes surprising or outlandish claims. Judging veracity in context is not a new approach. It’s the same idea behind multi-factor authentication; it’s the standard for criminal investigations; it underpins the scientific method and, indeed, it’s at the heart of contextual malware detection as used by SentinelOne.

What is new, perhaps, is that we may have to start applying more rigour in our judgement of media content that depicts far-fetched or controversial actions and events. That’s arguably something we should be doing anyway. Perhaps the rise of Deepfake will encourage us all to get better at it.

Conclusion

With accessible tools for creating Deep Fakes now available to anyone, it’s understandable that there should be concern about the possibility of this technology being used for nefarious purposes. But that’s true of pretty much all technological innovation; there will always be some people who will find ways to use it to the detriment of others. Nonetheless, Deepfake technology comes out of the same advancements as other machine learning tools that improve our lives in immeasurable ways, including in the detection of malware and malicious actors.

While creating a fake video for the purposes of information warfare is not beyond the realms of possibility or even likelihood, it is not beyond our means to recognize disinformation by judging it in the context of other things that we know to be true, reasonable or probable. Should we be worried about Deep Fakes? As with all critical reasoning, we should be worried about taking on trust extraordinary claims that are not supported by an extraordinary amount of other credible evidence.


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

83North closes $300M fifth fund focused on Europe, Israel

83North has closed its fifth fund, completing an oversubscribed $300 million raise and bringing its total capital under management to more than $1.1 billion.

The VC firm, which spun out from Silicon Valley giant Greylock Partners in 2015 — and invests in startups in Europe and Israel, out of offices in London and Tel Aviv — last closed a $250M fourth fund back in 2017.

It invests in early and growth stage startups in consumer and enterprise sectors across a broad range of tech areas including fintech, data centre & cloud, enterprise software and marketplaces.

General partner Laurel Bowden, who leads the fund, says the latest close represents business as usual on the investment side, with no notable changes to the mix of LPs investing in this fifth fund.

“As a fund we’re really focused on keeping our fund size down. We think that for just the investment opportunity in Europe and Israel… these are good-sized funds to raise and then return and make good multiples on,” she tells TechCrunch. “If you go back in the history of our fundraising we’re always somewhere between $200M-$300M. And that’s the size we like to keep.”

“Of course we do think there’s great opportunities in Europe and Israel but not significantly different than we’ve thought over the last 15 years or so,” she adds.

83North has made around 70 investments to date — which means its five partners are usually making just one investment apiece per year.

The fund typically invests around $1M at seed; between $4M and $8M at Series A; and up to $20M at Series B. Bowden says around a quarter of its investments go into seed (primarily startups out of Israel), ~40% into Series A and ~30% into Series B.

“It’s somewhat evenly mixed between seed, Series A, Series B — but Series A is probably bigger than everything,” she adds.

It invests roughly half and half in its two regions of focus.

The firm has had 15 exits of portfolio companies (three of which it claims as unicorns). Recent multi-billion-dollar exits for Bowden include Just Eat, Hybris (acquired by SAP), iZettle (acquired by PayPal) and Qlik.

While 83North already has a pretty broad investment canvas, it remains open to new areas — moving into IoT (with recent investments in Wiliot and VDOO), and taking what it couches as a “growing interest” in healthtech and vertical SaaS.

“Some of my colleagues… are looking at areas like lidar, in-vehicle automation, looking at some of the drone technologies, looking at some even healthtech AI,” says Bowden. “We’ve looked at a couple of those in Europe as well. I’ve looked, actually, at some healthtech AI. I haven’t done anything but looked.

“And also all things related to data. Of course the market evolves and the technology evolves, but we’ve done things related to BI, to process automation, through to just management of data ops, management of data. We always look at that area. And think we’ll carry on for a number of years.”

“In venture you have to expand,” she adds. “You can’t just stay investing in exactly the same things but it’s more small additional add-ons as the market evolves, as opposed to fundamental shifts of investment thesis.”

Discussing startup valuations, Bowden says European startups are not insulated from wider investment dynamics that have been pushing startup valuations higher — and even, arguably, warping the market — as a consequence of more capital being raised generally (not only at the end of the pipe).

“Definitely valuations are getting pushed up,” she says. “Definitely things are getting more competitive but that comes back to exactly why we’re focused on raising smaller funds. Because we just think then we have less pressure to invest if we feel that valuations have got too high or there’s just a level… where startups just feel the inclination to raise way more money than they probably need — and that’s a big reason why we like to keep our fund size relatively small.”

CyberSecurity Breakthrough Awards Name SentinelOne Overall Antivirus Solution Provider of 2019

We are thrilled to share that our platform has been selected as the winner of the Overall Antivirus Solution Provider of the Year Award by the CyberSecurity Breakthrough Awards.

The CyberSecurity Breakthrough Awards run one of the deepest evaluations of the information security industry each year to select and highlight “breakthrough” cybersecurity solutions and companies; this year the program drew more than 3,500 nominations from all over the world.

The program is extremely competitive, and the win highlights SentinelOne’s ability to protect cloud workloads as well as other attack surfaces — servers, data centers, endpoints, and IoT devices — with autonomous, cloud-native technology powered by static and behavioral artificial intelligence models. Our solution, harnessing the scale and compute power of the cloud, is the fastest-growing enterprise security product on the market today.

The challenge? Endpoints are everywhere, from classic laptops and desktops to workloads in the cloud and the datacenter, and all manner of IoT devices – the network edge, including the cloud, is the real perimeter. Traditional on-premises, signature-database protection models are ineffective, reactive, and lack administrator visibility. With the constantly evolving threat landscape, enterprises too often fall prey to ransomware and fileless attacks, which commonly run undetected in enterprises of all sizes. The tools of yesterday simply can’t keep up with adversaries.

Converging threat prevention, detection, response, and hunting into a proprietary single platform architecture, SentinelOne is the first to take AI-based device protection from the cloud to the edge, covering IoT endpoints, and workloads in the cloud with a completely autonomous solution. Our platform has zero reliance on humans, services, or even cloud connectivity to deploy and operate the solution. We are the only cybersecurity platform that protects every endpoint in the enterprise regardless of its physical location, across any cloud environment (public, private or hybrid).

Our solution is the only one that autonomously defends every endpoint against every type of attack, at every stage in the threat lifecycle – truly a breakthrough in cybersecurity, and we now have the hardware to prove it.

Interested in learning more? Schedule your free demo today.




The Good, the Bad and the Ugly in Cybersecurity – Week 40


The Good

What do Mirai, Hakai, Fbot, and Tsunami have in common? Answer: they are all primarily IoT-based malware variants, and all were part of a large-scale DDoS botnet takedown this week spearheaded by Dutch police.


The primary target of the operation was KV Solutions BV, a “bulletproof” hosting provider. Current reports indicate that for at least two years, KV Solutions was harboring criminal activity, playing host to numerous DDoS attack operations and facilitating their ongoing success. A majority of the activity was Mirai-based, targeting devices from ZTE, MikroTik, JAWS, Huawei, GPON, ASUS, Netgear and others. Outside of Mirai and close derivatives, additional malware families were tracked as well, including Tsunami, Gafgyt, Fbot, Moobot, Yowai, Hakai, and Handymanny. At any given time, some of these operations were supported by tens of thousands of hosts (bots). The activity tracked within KV’s infrastructure was not limited to “private” use: many of the actors involved would sell access to their botnets as DDoS-for-hire services. During the raid, two individuals — “Angelo K.” and “Marco B.”, the registered owners of KV Solutions BV — were arrested, and all associated sites and servers have now been seized and taken down. Takedowns like these are a combined effort of individuals across the law enforcement and security industry communities, and they require a great deal of careful work, time and dedication from all involved.


This week the NSA launched its Cybersecurity Directorate. We previewed this program in a previous post when it was originally announced in July 2019. According to the NSA,

“The Cybersecurity Directorate will reinvigorate NSA’s white hat mission by sharing critical threat information and collaborating with partners and customers to better equip them to defend against malicious cyber activity. The new directorate will also better position NSA to operationalize its threat intelligence, vulnerability assessments, and cyberdefense expertise by integrating these efforts to deliver prioritized outcomes.”

This effort should result in improved and more secure methods of sharing information between the NSA and its trusted partners, both public and private.

The Bad

Early this week, multiple hospitals, from Alabama to Australia, were targeted by crippling ransomware attacks. Perhaps hardest hit were three DCH Health System hospitals in Alabama, all of which were forced to turn away new patients as a result of the attack. First responders and private citizens alike were directed to alternate facilities after the ransomware took hold, and it is reported that affected systems were still locked well past the 24-hour mark post-infection. Current intelligence indicates that the malware involved was Ryuk, which has played a role in several similar attacks recently; other highly prevalent malware families (Emotet, Trickbot) are also known to be used as delivery mechanisms for Ryuk. DCH was quick to alert the public once the attack was underway. Its first public statement noted that “A criminal is limiting our ability to use our computer systems in exchange for an as-yet unknown payment.”


On October 2nd, the following update was issued:

[Image: DCH ransomware attack update]

This whole episode serves as yet another reminder of how critical a proper protection strategy is. 

The Ugly

It has been an interesting week for the infosec industry as a whole. For starters, there are reports of FireEye considering a possible sale to a private equity firm. It’s too early to speculate on an outcome, but it does make a certain “statement” about FireEye’s goals and strategy going forward. Is the goal to protect customers, or to ensure ongoing financial windfalls for the executive staff?

And in another corner we have CrowdStrike, whose latest issue is separate from its entanglement in recent political controversies. CrowdStrike was recently forced to revoke Windows sensor version 5.19.10101 from its Falcon update servers. Numerous customers worldwide reported Blue Screen of Death (BSOD) errors after updating to the new version. While the root cause has not been fully identified as of this writing, some reports indicate a conflict with another vendor’s DLP product.


When issues like this occur, we are all reminded of the importance of tiered or staged rollouts for critical software updates. Better yet, in the case of security products, embrace technology that does not require frequent updates that negatively impact business.
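To make the staged-rollout idea concrete, here is a minimal, purely illustrative sketch (not any vendor’s actual mechanism): each host is deterministically hashed into a bucket from 0–99, and an update is offered only to hosts whose bucket falls below the current rollout percentage, so an operator can halt at 1% or 10% before a bad build reaches the whole fleet. All names here (`host_id`, `rollout_pct`) are hypothetical.

```python
import hashlib

def rollout_bucket(host_id: str) -> int:
    """Map a host ID to a stable bucket in [0, 100)."""
    digest = hashlib.sha256(host_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100

def should_receive_update(host_id: str, rollout_pct: int) -> bool:
    """True if this host falls inside the current rollout tier."""
    return rollout_bucket(host_id) < rollout_pct

# Widening the tier only ever adds hosts; a host's bucket never changes,
# so a rollout paused at 10% affects the same machines on every check.
hosts = [f"host-{i}" for i in range(1000)]
for pct in (1, 10, 50, 100):
    tier = sum(should_receive_update(h, pct) for h in hosts)
    print(f"{pct}% rollout -> {tier} of {len(hosts)} hosts")
```

Because the bucket is derived from a hash rather than a random draw, the gate is stateless and reproducible, which is what lets a vendor stop a rollout before a faulty sensor update BSODs every customer at once.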



A startup factory? $1.2B-exit team launches $65M super{set}

Think Jack Dorsey’s jobs are tough? Well, Tom Chavez is running six startups. He thinks building businesses can be boiled down to science, so today he’s unveiling his laboratory for founding, funding and operating companies. He and his team have already proven they can do it themselves after selling their startups Rapt to Microsoft and Krux to Salesforce for a combined $1.2 billion. Now they’ve raised a $65 million fund for “super{set}”, an enterprise startup studio with a half-dozen companies currently in motion.

The idea is that super{set} either conceptualizes a company or brings in founders whose dream it can make a reality. The studio provides early funding and expertise while the startup works from its shared space in San Francisco, plus future ones in New York and Boston. The secret sauce is the “super{set} Code,” an execution playbook plus technological tools and building blocks that guide the strategy and eliminate redundant work. “Our belief is that we can make the companies 10x faster and increase capital efficiency by 5x,” says Chavez of his partnership with super{set} co-founders Vivek Vaidya, who acts as CTO, and Jae Lim, who manages the fund.


The super{set} team (from left): Tom Chavez, Jae Lim, Jen Elena and Vivek Vaidya

Perhaps the question isn’t whether the portfolio startups can scale, but whether the humans behind them can without breaking. It’s stressful running a single company, let alone six. Even with the order of operations nailed down, each startup encounters unique challenges, and no plan is one-size-fits-all. But after delivering 17.5x returns to their past investors, Chavez and his partners have proven they can repeatedly recognize what enterprises need and build admittedly boring but bountiful products in customer data management and advertising yield.

The studio’s playbooks cover business plan formation, pitch strategies, go-to-market, revenue, machine learning, management principles, HR processes, sales methods, pipeline measurement, product sequencing, finance, legal and more. The studio also provides shared engineering code, so each startup doesn’t have to reinvent the wheel. “I don’t think you can systemize it, but I do think you can accelerate and de-risk the path,” Chavez explains.


super{set} Code

Today, the first super{set} company is coming out of stealth. Eskalera helps enterprises retain top talent by tracking diversity and inclusion stats of employees and engaging them with career growth and community programs. Chavez is the CEO, but plans to install a new one shortly so he can focus more time on founding more startups. There are 55 employees across the first six companies; two are already generating revenue, and most are ready to emerge in the next nine months.

The funding for Eskalera and the other super{set} companies comes with unique terms. Because Chavez and the team aren’t just board members you hear from once a quarter but are “shoulder to shoulder with the entrepreneurs,” as he repeats several times in our interview, the startups pay more equity for the cash.

The hope is that having seasoned leadership aboard is worth it. “We’re product people first and foremost,” Chavez tells me. “What are you going to build? Who’s going to buy it? Why? What’s the technical moat? We’re not people doing jazz hands.” The super{set} team has plenty of skin in the game, though, given that Chavez himself put in a big chunk of the $65 million, and the fund sticks to a standard management fee.


To supercharge the companies, super{set} brings in expert staffers in artificial intelligence, data science and more, who then align with the most relevant companies in the portfolio. They get equity grants to incentivize them to work hard on the startups’ behalf. “The worry I have about these larger funds is that they have an incentive disconnect where they work for the fees,” Chavez says. His fund hopes to win through follow-on funding of its winners.


super{set} co-founder Tom Chavez

If portfolio companies hit hard times, Chavez says super{set} will stick with them. “My first company had multiple layoffs and a major pivot. We had an entrepreneur who walked away. They lost conviction, but we brought that company to a $180 million exit after people said there was no effing way, and that felt really good,” Chavez says of staying the course. “The good entrepreneurs have that demonic energy.” But if everyone involved agrees a project isn’t working, they’ll shutter it. “It comes back to the opportunity cost of people’s time.”

Chavez has respect for studios taking different approaches, like Atomic in consumer startups, Science in e-commerce and Pioneer Square Labs, which maintains a larger fund staff. “What excites me is moving entrepreneurship a step forward. Why couldn’t we franchise this in other cities?” He hopes super{set} can attract top talent who “just want to work on cool shit” rather than getting sucked into a single company.

Can super{set} keep all the plates spinning and really lower the risk? “If we’re wrong there will be a giant orange plume streak across the sky. The early returns are promising but we have to prove it,” Chavez says. But after accruing plenty of wealth for himself, he says the thrill that keeps him in the startup game is seeing life-changing outcomes for his teams. “I have spreadsheets showing the wealth generated by employees of companies I’ve built and nothing makes me happier than seeing them pay for tuitions, property, or retiring.”

T2D3 Software Update: Embracing the Founder to CEO (F2C) Journey

It’s been four years since TechCrunch published my blog post The SaaS Adventure, which introduced the concept of a “T2D3” roadmap to help SaaS companies scale — and, as an aside, explored how well my mom understood my job as an “adventure capitalist.” The piece detailed seven distinct stages that enterprise cloud startups must navigate to achieve $100 million in annualized revenue. Specifically, the post encouraged companies to “triple, triple, double, double, double” their revenue as they hit certain milestones.
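The “triple, triple, double, double, double” cadence is easy to check with a quick bit of arithmetic; the starting ARR of $2 million below is an assumption for illustration (the framework doesn’t prescribe a starting point):

```python
# Illustrative T2D3 math: apply the five growth steps to a hypothetical
# starting annualized revenue (in $M) and watch it pass $100M.

def t2d3_milestones(starting_arr_m: float) -> list:
    multipliers = [3, 3, 2, 2, 2]  # triple, triple, double, double, double
    arr = starting_arr_m
    milestones = []
    for m in multipliers:
        arr *= m
        milestones.append(arr)
    return milestones

print(t2d3_milestones(2.0))  # → [6.0, 18.0, 36.0, 72.0, 144.0]
```

Five steps compound to a 72x multiple, which is why a company that reaches roughly $2 million in ARR before starting the sequence lands comfortably past the $100 million mark.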

I was blown away by the response to the piece and gratified that so many founders and investors found the T2D3 framework helpful. Looking back now, I think a lot of the advice has stood the test of time. But plenty has also changed in the broader tech and software markets since 2015, and I wanted to update this advice for founders of hyper-growth companies in light of the market shifts that have occurred.

Perhaps the most notable change in the last four years is that the number of playbooks for companies to follow as they sell software has expanded. Today, more companies are embracing product-led growth and a less formal, bottom-up model — employees are swiping credit cards to buy a product, and not necessarily interacting with a human salesperson.

Many of the highest-profile recent software IPOs structure their go-to-market operations this way. T2D3’s stages, by contrast, focus quite a bit on scaling a company’s internal sales function to grow. Indeed, both product-led and sales-led approaches are viable in today’s growing B2B-tech market.

What’s more, the revenue needed for a software company to go public has increased dramatically in the last four years. This means that software founders need to focus not only on building a scalable product and finding scalable go-to-market channels, but also on building a scalable org chart. These days, what is scarce for software founders isn’t money from investors; it’s great human talent.

So in addition to T2D3, my firm and I are now focusing on another founder journey: F2C, or the transition from founder/CEO to CEO/founder. This journey can take many paths, but ideally it starts with the traditional hustle to find early product/market fit.

SnapLogic raises $72M more for its enterprise data integration platform

Cloud services and the apps that rely on them continue to grow in popularity, but a persistent theme in enterprise technology is that many organizations still run legacy software and architectures, for reasons of cost, migration headaches and simply because sometimes, if it ain’t broke, don’t fix it. That doesn’t mean they couldn’t benefit from a better way of integrating some of those workflows and better leveraging the data coming out of those different apps, and today a startup that has built a service to help them do just that has raised a growth round of funding.

SnapLogic, which has built an integration platform that lets enterprises bring in and integrate both legacy and cloud apps to better monitor them and let them work together, has closed $72 million in growth financing, money that it will be using to expand its business globally. According to analysis from PitchBook, this latest funding comes at a $260 million pre-money valuation, which would work out to about $332 million post-money. We are checking with SnapLogic to see if it can confirm those numbers directly.

This latest round, which brings the total raised by SnapLogic to $208 million, is being led by growth equity VC Arrowroot Capital, with participation also from Golub Capital and existing investors. Past investors are an illustrious group that has included a mix of financial and strategic backers such as Andreessen Horowitz, Vitruvian (which led its previous round), Capital One, Ignition Partners, Microsoft and a number of others.

The company is not disclosing how big its customer base is currently. In its last round in 2016, it had grown to 700 enterprise customers, adding 300 in just one year, an especially rapid pace of growth. Current customers include a number of big names like Adobe, Verizon (which owns TechCrunch), AstraZeneca, Bristol-Myers Squibb, Emirates, Schneider Electric, Siemens, Sony and Wendy’s. It describes the bigger integration market as a $30 billion opportunity.

The defining characteristic of that list is that these are businesses that pre-date the big cloud revolution, so they are more likely than not grappling with a mix of new and legacy apps that need to be balanced against one another, in some instances brought together to work in concert, and harnessed for their data to support a company’s wider big-data efforts in areas like application integration, data integration, API management, B2B integration and data engineering.

“This is an exciting time for SnapLogic,” said Gaurav Dhillon, CEO at SnapLogic, in a statement. “We’re extremely proud to have built a modern and innovative solution that is solving really hard problems for our enterprise customers. This latest investment is a testament to the hard work and ongoing support of our customers, partners, and employees around the world. Together, we’ll continue to chart the way forward, making integration even faster and easier so enterprises can realize their data-driven ambitions.”

There has been an interesting wave of startups that have emerged specifically to give businesses still running older kit and software the tools to take advantage of new innovations in computing and to make better use of their growing pools of data. Others include Workato (which itself has raised money in the last year), MuleSoft (now part of Salesforce) and Microsoft itself, and in that context, SnapLogic has been taking a very measured approach to how it raises capital and expands.

“Our approach is to do successive up rounds with straightforward terms rather than chase a big slug with onerous terms,” Dhillon once told TechCrunch. He’s a repeat entrepreneur with a track record of conservative but sound growth. “We built Informatica with just $13.5 million, so my approach is to raise funds as needed.”

It’s an approach that is resonating with investors. “SnapLogic is attacking a huge and surging market opportunity with a uniquely modern and powerful platform,” said Matthew Safaii, founder and managing partner at Arrowroot Capital, in a statement. “They’ve built an amazing product, work with an impressive roster of customers, and are led by an experienced executive team. As SnapLogic sets its sights on continued product leadership and global expansion, we look forward to partnering with them to help get their pioneering integration platform into the hands of even more enterprises around the globe.”

“SnapLogic is reinventing application and data integration for the modern era,” added Robert Sverbilov, director at Golub Capital. “We are excited to support SnapLogic’s next generation SaaS application integration platform and to help secure its footing as a leader in the iPaaS (Integration Platform as a Service) vertical.”