Security Megatrends | Cyber Security in 2019

Threats abound, but people are out there trying to deal with them. Organizations continue to fall behind, finding it increasingly difficult to identify and respond to threats in a timely manner. Security Megatrends 2019, a SentinelOne-commissioned report, delves into several areas of concern today, including cloud security issues, SecOps frustrations and tools, the Internet of Things, data sharing and leakage, DDoS, endpoint security, and artificial intelligence.

Here are some of the challenges and perceptions that enterprises, midmarket companies, and SMBs are facing as we move into 2019.


Budgets

On the whole, IT budgets have been increasing, but manufacturing and healthcare/pharma/medical lag significantly, with budget growth in these two verticals at around half the average.

IT budget increases from 2017-2018 by industry

The issue with manufacturing and healthcare/pharma/medical is a significant one. Those verticals have consistently lagged in IT and security, and hackers now target them. Personal health records (PHR) are the most sought-after records, and they continue to command the highest price on the black market because they can be used for the broadest range of fraud, from opening new credit accounts to making purchases to full identity theft. Manufacturing is a target due to the rise of industrialization in developing countries. The theft of cutting-edge manufacturing techniques is huge business, especially for competing companies in places like China and India.

SecOps Frustrations

One of the key areas of frustration among security professionals is alert fatigue. Alert fatigue stems from the sheer volume of alerts that analysts are required to validate, determining whether each one is genuinely high severity or, at the other extreme, a false positive that is nothing to worry about. In many environments, the systems have insufficient context to judge criticality properly, so over 95 percent of the tickets that come in are classified as the highest priority.

Comparison of severe tickets to overall tickets

A major area of frustration covered in this report is inter-team handoffs. Seventy-six percent of respondents identified some level of impediment when trying to resolve an incident requiring inter-team handoffs or support.

Impediments experienced during incident investigation

When trying to investigate and resolve an incident, security analysts are often required to engage members of other teams for one or more phases of the investigation before the incident can be closed. These frustrations are encountered at some level daily, which leads to job dissatisfaction; after enough frustration, personnel leave.

Consolidation and Integration

There are over 1,400 different vendors that supply cyber security tools, and SecOps teams typically have between 10 and 22 management interfaces to work with to get the security job done. As a result, SecOps teams are actively trying to reduce the number of interfaces they deal with. When queried about the most important security management features for meeting their business requirements, the majority of respondents put integration with other IT management products first.

Most important is integration with other IT mgmt. products

Endpoint Protection

The battle for the endpoint is raging. Across antivirus, detection, prevention, and all combinations thereof, there are approximately 50 companies operating in the endpoint defense space. Seventy-three percent of respondents have been affected by some form of endpoint attack, and only 58 percent of organizations are highly confident they could detect an important security incident before it caused significant impact.

Among the salient points indicated in the endpoint research were these:

  • Ninety percent of respondents that experienced an attack causing significant to severe impact believed an advanced endpoint solution would have performed better than traditional AV.
  • Moreover, all of the respondents who experienced severe impacts from a malware attack indicated they now intend to replace their traditional AV product with an advanced endpoint solution.

Endpoint attacks bypassing current endpoint solutions that required six or more hours to resolve

Conclusion

Security Megatrends 2019 is a comprehensive SentinelOne-sponsored report covering security issues and challenges facing organizations of all sizes and industry verticals today. The report looks across SMBs, midmarket companies, and enterprises, as well as multiple industry verticals, to understand the commonalities and divergences in trends. Ultimately, the report will help readers understand how to handle threats better, no matter where they stand now.

Get the Whole Report


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Suggested Posts:

Google’s Cloud Firestore NoSQL database hits general availability

Google today announced that Cloud Firestore, its serverless NoSQL document database for mobile, web and IoT apps, is now generally available. In addition, Google is also introducing a few new features and bringing the service to 10 new regions.

With this launch, Google is giving developers the option to run their databases in a single region. During the beta, developers had to use multi-region instances, and, while that obviously has some advantages with regard to resilience, it’s also more expensive and not every app needs to run in multiple regions.

“Some people don’t need the added reliability and durability of a multi-region application,” Google product manager Dan McGrath told me. “So for them, having a more cost-effective regional instance is very attractive, as well as data locality and being able to place a Cloud Firestore database as close as possible to their user base.”

The new regional instance pricing is up to 50 percent cheaper than the current multi-region instance prices. Which option you pick does influence the SLA guarantee Google gives you, though. While regional instances are still replicated within multiple zones inside the region, all of the data stays within a limited geographic area. Hence, Google promises 99.999 percent availability for multi-region instances and 99.99 percent availability for regional instances.
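
For perspective, those SLA figures imply very different downtime budgets. Here’s a quick back-of-the-envelope sketch (my arithmetic, not a figure from the announcement):

```python
# Allowed downtime per year implied by each SLA tier.
for label, sla in (("multi-region", 0.99999), ("regional", 0.9999)):
    minutes = (1 - sla) * 365 * 24 * 60
    print(f"{label}: ~{minutes:.1f} minutes of downtime per year")
# multi-region: ~5.3 minutes; regional: ~52.6 minutes
```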

And talking about regions, Cloud Firestore is now available in 10 new regions around the world. Firestore debuted with a single location and added two more during the beta. With this launch, Firestore is now available in 13 locations (including the North America and Europe multi-region offerings). McGrath tells me Google is still in the planning stage for deciding the next phase of locations, but he stressed that the current set provides pretty good coverage across the globe.

Also new in this release is deeper integration with Stackdriver, the Google Cloud monitoring service, which can now monitor read, write and delete operations in near-real time. McGrath also noted that Google plans to add the ability to query documents across collections and increment database values without needing a transaction.

It’s worth noting that while Cloud Firestore falls under the Google Firebase brand, which typically focuses on mobile developers, Firestore offers all of the usual client-side libraries for Compute Engine or Kubernetes Engine applications, too.
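
As a rough illustration of what that looks like from a server-side application, here is a minimal sketch using the Python client library (the collection, document and field names are hypothetical):

```python
# pip install google-cloud-firestore
# Assumes GOOGLE_APPLICATION_CREDENTIALS points at a service account key.
from google.cloud import firestore

db = firestore.Client()

# Write a document to a hypothetical "devices" collection...
db.collection("devices").document("sensor-42").set({
    "location": "warehouse-7",
    "last_reading": 21.5,
})

# ...and read it back.
snapshot = db.collection("devices").document("sensor-42").get()
print(snapshot.to_dict())
```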

“If you’re looking for a more traditional NoSQL document database, then Cloud Firestore gives you a great solution that has all the benefits of not needing to manage the database at all,” McGrath said. “And then, through the Firebase SDK, you can use it as a more comprehensive back-end as a service that takes care of things like authentication for you.”

One of the advantages of Firestore is its extensive offline support, which makes it ideal not only for mobile developers but also for IoT solutions. Maybe it’s no surprise, then, that Google is positioning it as a tool for both Google Cloud and Firebase users.

How Secure Is Open Source Software?

Who is responsible for the security of your open source software dependencies, and what are the risks? Find out the surprising answer here.

A couple of years back when the Equifax breach occurred, there was a lot of talk about open source code and how secure it is, or isn’t. The issue hasn’t gone away, either, with both real and imagined hacks frighteningly easy to pull off. A recent report suggests that more than 60 of the Fortune 100 companies may still be using code containing the same vulnerability that led to the Equifax breach. In this post, we take a look at open source security and how it can impact the enterprise.

Dependencies Everywhere

The Equifax breach was the result of a bug in Apache Struts, but that was neither unique nor extraordinary. OpenSSL, an open source implementation of SSL and TLS used in web servers, contained the Heartbleed flaw that affected at least half a million websites. Heartbleed didn’t just affect servers, but also applications that relied on the affected versions of OpenSSL, including offerings from Oracle, McAfee and VMware.

By one estimate, over 5,000 vulnerabilities have been discovered in open source software since 2017.

A screenshot of Software Dependencies

If you’re not actually a developer, you might be surprised at just how much of your organization’s software relies on open source components. Using community-produced software saves development time and cost, and allows organizations to essentially outsource maintenance to a worldwide community of organizations and volunteer developers. These wins have led to suggestions that there’s more open source code than proprietary code in the majority of organizational codebases, with a single codebase containing, on average, over 250 open source components. From package managers like Node.js’s npm and Python’s PyPI and pip, to RubyGems and plug-ins for build tools like Maven or development tools like Visual Studio, free, shared code is everywhere.
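
If you want a concrete sense of the scale, you can enumerate the open source components in a single environment. Here’s a minimal sketch for a Python environment (standard library only, Python 3.8+):

```python
# Count the third-party distributions installed in the current environment.
from importlib.metadata import distributions

names = sorted({d.metadata["Name"] for d in distributions() if d.metadata["Name"]})
print(f"{len(names)} installed distributions, for example:")
for name in names[:10]:
    print(" -", name)
```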

How Many Eyes, Really?

As far as security is concerned, the big win in using open source software is supposed to be transparency. Open source projects mean that everyone and anyone can inspect the source code. At least in theory, the fact that there are “many eyes” on the code should mean that bugs and flaws are spotted and fixed quickly.

But there are two “gotchas” about the “many eyes” theory. First, by far the majority of projects are maintained by either a solitary developer or a small team of volunteers. How often they actually have the time and resources to review and update their code is a complete unknown, and certainly not subject to any formal process. In other cases, the software may not be maintained at all. Those who create and contribute free software are under no obligation to maintain it. Indeed, most such software comes with some kind of “AS IS” disclaimer:

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE

The reverse side of that disclaimer is that if the developer isn’t responsible for the code, then it is clearly the responsibility of the consumer to make sure the code is safe and “fit for purpose”.

Second, even in the case of a large project with many volunteers, the reality is that when there are “many eyes” it’s easy for everyone to think someone else is looking. That no one may in fact be looking was evident with the Heartbleed bug, which was introduced to the public repository in 2012 and wasn’t spotted until April 2014.

Owning Your Sources

There’s no doubt that open source code is a boon for both businesses and consumers. But it’s important to recognize that free code still comes at a cost: the cost of responsibility. It is up to businesses to ensure that their codebase is secure, because it is the business that will bear the brunt of any losses, both financial and reputational.

In practical terms, that means you need visibility both into what code you are dependent on and what that code is doing on your system. There are tools that can be used to audit open source code for known vulnerabilities and databases that can be manually searched.

A screenshot of Vulnerability databases
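
As a sketch of what such an audit boils down to, the snippet below checks installed packages against a vulnerability list. The ADVISORIES entries here are invented for illustration; a real audit would draw on a maintained feed such as the NVD:

```python
# A minimal dependency-audit sketch. The ADVISORIES mapping is hypothetical.
from importlib.metadata import version, PackageNotFoundError

ADVISORIES = {
    # package name -> (affected version, advisory summary) -- made up
    "requests": ("2.19.0", "EXAMPLE-2018-0001: credentials leaked on redirect"),
    "flask": ("0.12.2", "EXAMPLE-2018-0002: denial of service via JSON parsing"),
}

for package, (affected, summary) in ADVISORIES.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        continue  # dependency not present in this environment
    if installed == affected:
        print(f"VULNERABLE: {package} {installed} -- {summary}")
```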

However, you also need automated security software capable of detecting behavioural anomalies and linking vulnerabilities back to known tactics and techniques, such as those catalogued by the MITRE ATT&CK framework. Being able to see the attack storyline and putting it into context helps you to understand how the attack occurred and to close down vulnerable gaps across your entire network.

SentinelOne Mitre indicators

Conclusion

Open source code is just another part of your supply chain, and an attack that leverages vulnerabilities in an open source library, package or application is just another kind of supply-chain attack. Therefore, no matter what dependencies you have, whether they are open-source or proprietary, you need to treat all code on your network with the same suspicion and monitor not just where it came from but also what it is doing. An EPP and EDR solution like SentinelOne can help protect your organization from issues arising from bad code, whatever its source.



Timescale announces $15M investment and new enterprise version of TimescaleDB

It’s a big day for Timescale, makers of the open-source time-series database, TimescaleDB. The company announced a $15 million investment and a new enterprise version of the product.

The investment is technically an extension of the $12.4 million Series A it raised last January, which it’s referring to as A1. Today’s round is led by Icon Ventures, with existing investors Benchmark, NEA and Two Sigma Ventures also participating. With today’s funding, the startup has raised $31 million.

Timescale makes a time-series database. That means it can ingest large amounts of data and measure how it changes over time. This comes in handy for a variety of use cases, from financial services to smart homes to self-driving cars — or any data-intensive activity you want to measure over time.

While there are a number of time-series database offerings on the market, Timescale co-founder and CEO Ajay Kulkarni says that what makes his company’s approach unique is that it uses SQL, one of the most popular languages in the world. Timescale wanted to take advantage of that penetration and build its product on top of Postgres, the popular open-source SQL database. This gave it an offering that is based on SQL and is highly scalable.
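
To make the “SQL plus scale” point concrete, here is a minimal sketch of setting up a TimescaleDB table through an ordinary Postgres driver (the table and column names are hypothetical, and the timescaledb extension is assumed to be installed):

```python
# pip install psycopg2-binary
import psycopg2

conn = psycopg2.connect("dbname=metrics user=postgres")
cur = conn.cursor()

# Ordinary Postgres DDL...
cur.execute("""
    CREATE TABLE IF NOT EXISTS conditions (
        time        TIMESTAMPTZ NOT NULL,
        device_id   TEXT,
        temperature DOUBLE PRECISION
    );
""")

# ...plus TimescaleDB's one extra call, which turns the table into a
# time-partitioned "hypertable" that scales as data grows.
cur.execute("SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);")
conn.commit()
```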

Timescale admittedly came late to the market in 2017, but by offering a unique approach and making it open source, it has been able to gain traction quickly. “Despite entering into what is a very crowded database market, we’ve seen quite a bit of community growth because of this message of SQL and scale for time series,” Kulkarni told TechCrunch.

In just over 22 months, the company has racked up more than a million downloads and a range of users, from old-guard companies like Charter, Comcast and Hexagon Mining to more modern companies like Nutanix and TransferWise.

With a strong base community in place, the company believes that it’s now time to commercialize its offering, and in addition to an open-source license, it’s introducing a commercial license. “Up until today, our main business model has been through support and deployment assistance. With this new release, we also will have enterprise features that are available with a commercial license,” Kulkarni explained.

The commercial version will offer a more sophisticated automation layer for larger companies with greater scale requirements. It will also provide better lifecycle management, so companies can get rid of older data or move it to cheaper long-term storage to reduce costs. It’s also offering the ability to reorder data in an automated fashion when that’s required, and, finally, it’s making it easier to turn the time series data into a series of data points for analytics purposes. The company also hinted that a managed cloud version is on the road map for later this year.

The new money should help Timescale continue fueling the growth and development of the product, especially as it builds out the commercial offering. Timescale, which was founded in 2015 in NYC, currently has 30 employees. With the new influx of cash, it expects to double that over the next year.

SAP job cuts prove harsh realities of enterprise transformation

As traditional enterprise companies like IBM, Oracle and SAP try to transform into more modern cloud companies, they are finding that making that transition, while absolutely necessary, could require difficult adjustments along the way. Just this morning, SAP announced that it was restructuring in order to save between €750 million and €800 million (between approximately $856 million and $914 million).

While the company tried to put as positive a spin on the announcement as possible, it could involve up to 4,000 job cuts as SAP shifts into more modern technologies. “We are going to move our people and our focus to the areas where the new economy needs SAP the most: artificial intelligence, deep machine learning, IoT, blockchain and quantum computing,” CEO Bill McDermott told a post-earnings press conference.

If that sounds familiar, it should. It is precisely the areas on which IBM has been trying to concentrate its transformation over the last several years. IBM has struggled to make this change and has also framed workforce reduction as moving to modern skill sets. It’s worth pointing out that SAP’s financial picture has been more positive than IBM’s.

CFO Luka Mucic tried to stress that this was not about cost-cutting so much as ensuring the long-term health of the company, but admitted that it did involve job cuts. These could include early retirement and other incentives to leave the company voluntarily. “We still expect that there will be a number probably slightly higher than what we saw in the 2015 program, where we had around 3,000 employees leave the company, where at the end of this process will leave SAP,” he said.

The company believes that in spite of these cuts, it will actually have more employees by this time next year than it has now, but they will be shifted to these new technology areas. “This is a growth company move, not a cost-cutting move; every dollar that we gain from a restructuring initiative will be invested back into headcount and more jobs,” McDermott said. SAP also stressed that cloud revenue will reach $35 billion by 2023.

Holger Mueller, an analyst who watches enterprise companies like SAP for Constellation Research, says the company is doing what it has to do in terms of transformation. “SAP is in the midst of upgrading its product portfolio to the 21st century demands of its customer base,” Mueller told TechCrunch. He added that this is not easy to pull off, and it requires new skill sets to build, operate and sell the new technologies.

McDermott stressed that the company would be offering a generous severance package to any employee leaving the company as a result of today’s announcement.

Today’s announcement comes after the company made two multi-billion-dollar acquisitions to help in this transition in 2018, paying $8 billion for Qualtrics and $2.4 billion for CallidusCloud.

Figma’s design and prototyping tool gets new enterprise collaboration features

Figma, the design and prototyping tool that aims to offer a web-based alternative to similar tools from the likes of Adobe, is launching a few new features today that will make the service easier to use to collaborate across teams in large organizations. Figma Organization, as the company calls this new feature set, is the company’s first enterprise-grade service that features the kind of controls and security tools that large companies expect. To develop and test these tools, the company partnered with companies like Rakuten, Square, Volvo and Uber, and introduced features like unified billing and audit reports for the admins and shared fonts, browsable teams and organization-wide design systems for the designers.

For designers, one of the most important new features here is probably organization-wide design systems. Figma already had tools to create design systems, of course, but this enterprise version now makes it easier for teams to share libraries and fonts with each other to ensure that the same styles are applied to products and services across a company.

Businesses can now also create as many teams as they would like, and admins will get more control over how files are shared and with whom. That may not seem like an especially interesting feature, but because many larger organizations work with customers outside of the company, it’s something that will make Figma more appealing to these large companies.

After working with Figma on these new tools, Uber, for example, moved all of its company over to the service and 90 percent of its product design work now happens on the platform. “We needed a way to get people in the right place at the right time — in the right team with the right assets,” said Jeff Jura, staff product designer who focuses on Uber’s design systems. “Figma does that.”

Other new enterprise features that matter in this context are single sign-on support, activity logs for tracking activities across users, teams, projects and files, and draft ownership to ensure that all the files that have been created in an organization can be recovered after an employee leaves the company.

Figma still offers free and professional tiers (at $12/editor/month). Unsurprisingly, the new Organization tier is a bit more expensive and will cost $45/editor/month.

5 Common Cyber Security Threats That Bypass Legacy AV

Traditional antivirus software is designed to block file-based malware. It works by scanning files on the hard drive and quarantining any malicious executables it finds. This approach was fine in the early days of security software, but attacks have evolved to bypass this kind of protection in a number of ways. In this post, we look at the five most common cyber security threats that can bypass traditional AV solutions.

5 Common Cyber Security Threats That Bypass Legacy AV with SentinelOne

1. Polymorphic Malware – Same Same, But Different

Take a look at any public malware database like VirusTotal and you’ll see the same old threats being uploaded on a daily basis. A lot of common malware is re-generated – sometimes as often as every few hours – with a completely different file hash. Some malware even changes its content based on local device parameters, resulting in a fresh hash every time it runs.

What’s a hash? Let’s take a quick dive into how this works. With a known malicious file, defenders can generate a unique checksum that will identify a copy of that file, regardless of its name or location, on any system, using common command-line utilities like shasum and md5:

Screenshot of example of a file hash

It used to be a great idea, but the rapid recycling of old samples into what legacy AV would see as a “new” threat has reached such epic proportions that it’s impossible for hash-based solutions to keep up.

This is easy for attackers to do: a single byte added to a file will change the resulting hash. The purpose of such tinkering is to defeat signature-based AV tools that rely on checking a file’s hash against a known database of malware hashes.
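
A minimal demonstration of the problem, using Python’s standard hashlib (the sample bytes are a stand-in for a real executable):

```python
# Why hash-based detection is brittle: appending a single byte to a
# file produces a completely different digest.
import hashlib

sample = b"...imagine the bytes of a known malicious executable..."
tweaked = sample + b"\x00"  # one extra byte

print(hashlib.sha256(sample).hexdigest())
print(hashlib.sha256(tweaked).hexdigest())  # bears no resemblance to the first
```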

2. Advanced Threats – If It Ain’t Known, It Ain’t Shown!

Many AV tools have recognized the inadequacy of just checking for signatures, and have moved to using a rule-based scanning system as well, typically the YARA tool invented by Victor Alvarez. YARA offers an improvement over simple file-hashing because it allows a scanner to conduct several tests on a file’s contents. For example, a rule could be created that looks not only for certain fixed strings in the malware but also searches for regex patterns:

Screenshot of an example of a YARA rule
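
To make the idea concrete, here is a rough Python sketch of what such a rule expresses – this is an illustration of the concept, not YARA syntax, and the indicator strings are invented:

```python
# The idea a rule encodes: match a payload against fixed strings and
# regex patterns, combined by a condition. Indicators below are made up.
import re

FIXED_STRING = b"EVIL_CONFIG_v2"
PATTERN = re.compile(rb"http://[a-z0-9]{8}\.example\.com/gate\.php")

def rule_matches(payload: bytes) -> bool:
    # A real rule's "condition" section combines tests like these.
    return FIXED_STRING in payload or PATTERN.search(payload) is not None
```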

YARA rules were a great step forward and are deployed by many AV solutions, but there are two problems that make it easy for malware to avoid detection by such rules.

First, as with file hashes, malware authors can figure out which strings a given engine is using to detect their malware and change the strings to avoid detection. Here’s what a real YARA rule looks like:

Screenshot of a real YARA rule

In the above example, which might look quite obscure to the untrained eye, the rules are just plain text strings written in hexadecimal. We – and attackers, of course – can easily convert them back to see what strings are actually being detected. For instance, the highlighted line in the image above is:

Screenshot of decoding hex to string
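
Decoding such a string takes one line; for example (the hex value below is invented for illustration):

```python
# Recovering the plain text behind a hex-encoded rule string.
hex_string = "68656c6c6f2d63322d736572766572"
print(bytes.fromhex(hex_string).decode("ascii"))  # -> hello-c2-server
```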

Second, and more problematic, this technique relies on the engine having already seen the malware at least once in order to analyze it and develop a rule for its detection. That means the defender is always one step behind the attacker, and sometimes a window of a few days is enough for attackers to get in and out of their targets without detection.

3. Malicious Documents – When Is A Doc Not A Doc?

We tend to think of documents as harmless collections of formatted data, a very different species of file from executables or binaries, which are able to run code on our machines. This distinction, though, is blurred when documents contain dynamic elements like JavaScript in PDF files or code-execution functionality like macros and DDE in MS Office document types. Simply opening a file that contains such functionality can lead to a compromise, as the embedded code executes as soon as the document is loaded.

Sometimes a maliciously-formatted document is used to exploit vulnerabilities in the opening application to achieve code execution, rather than relying on functions like macros. Such documents depend on coding errors in the application that can lead to a buffer overflow or heap spraying, a technique whereby embedded shellcode is written to multiple memory locations in the hope that one or more will allow execution of the attacker’s code. Adobe Reader and Microsoft Office are popular targets for these kinds of malicious documents, both because of their ubiquity and – at least in the former’s case – a history of repeated vulnerabilities.

For legacy AV solutions that rely on signatures, detecting these kinds of malicious documents can be difficult for two reasons. File hashes can easily be changed just by creating a document with different “normal content”, and even scanning via YARA rules can be defeated with simple code obfuscation as in this example:

Screenshot of avoiding YARA rule detection

4. Fileless Malware – It Doesn’t Have To Be Seen To Be Real

When most people think of malware, they typically think of some kind of malicious file that gets downloaded onto their computer and starts to cause damage or steal personal data. In the last few years, however, attackers have realised that traditional AV solutions have a gaping blind spot: malicious processes can be executed in-memory without dropping telltale files for AV scanners to find.

Fileless malware attacks have become increasingly common over the last few years, with notable examples including Angler, Duqu, Poweliks and WannaCry. The key to the rise of fileless malware has largely been the advent and widespread adoption of PowerShell, although JavaScript, PDFs, macros and DDE (as mentioned above) have also been used in fileless attacks.

What makes fileless attacks so difficult for traditional antivirus software to spot is the fact that they typically subvert trusted processes, such as PowerShell and rundll32.exe – an essential Windows executable that loads dynamic libraries of shared code for other programs.

Screenshot of rundll32 process
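
As a simplified sketch of how behavioral detection approaches this, a heuristic might flag trusted binaries loading code from unusual, user-writable locations. The pattern below is a toy example, not a production rule:

```python
# A toy behavioral heuristic (hypothetical): flag rundll32 loading a
# DLL from a user-writable directory, a common living-off-the-land pattern.
import re

SUSPICIOUS_RUNDLL32 = re.compile(
    r"rundll32(\.exe)?\s+.*\\(temp|appdata|downloads)\\.*\.dll",
    re.IGNORECASE,
)

def is_suspicious(cmdline: str) -> bool:
    return SUSPICIOUS_RUNDLL32.search(cmdline) is not None

print(is_suspicious(r"rundll32.exe C:\Users\bob\AppData\evil.dll,Start"))  # True
print(is_suspicious(r"rundll32.exe C:\Windows\System32\shell32.dll,Options_RunDLL"))  # False
```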

5. Encrypted Traffic – Hiding The Threat Pretty Securely

Another blind spot for legacy AV is encrypted traffic, which, thanks to pressure from Google and others, has now become the norm for most websites. While HTTPS and SSL certificates are a great way to help secure your communications with a trusted website, they just as “helpfully” protect attackers’ communications, too.

Malicious actors can hide their activities from inspection by ensuring, just like regular websites, that traffic between the victim and the attacker’s command-and-control (C2) server is protected by end-to-end encryption.

Recent figures also suggest that nearly half of all phishing sites now use the secure HTTPS protocol to mask their activities from both users and much security software.

How SentinelOne Can Help

At SentinelOne, we understand that attackers have not stood still and never will, and that they will continue to evolve their techniques. That’s why we have built a product that doesn’t rely on traditional approaches but takes the fight to the attackers, using active EDR that predicts whether a process is malicious regardless of where it comes from. Whether it’s polymorphic or novel malware, a malicious document or a fileless attack, our single-agent solution leverages behavioural AI among other engines to detect attacks both pre-execution and on-execution. We also provide deep visibility into encrypted traffic for post-execution threat-hunting. We were the first security product to introduce a ransomware guarantee, three years ago, as proof of confidence in our AI technology.

Conclusion

Malware and malware authors haven’t abandoned their old techniques, but they’ve added significant new ones to counter the moves made by traditional AV software some years ago. Once upon a time, signature detections and YARA rules might have provided “good enough” defense, but these days any solution that’s not deploying a behavioural AI engine with machine learning is going to be outwitted by today’s attackers. With phishing, ransomware and cryptomining all on the increase, the modern enterprise needs a modern solution. If you haven’t tried out the SentinelOne offering yet, click the Free Demo button above and see the difference our easy-to-deploy solution can make to the security of your business.



Dropbox snares HelloSign for $230M, gets workflow and e-signature

Dropbox announced today that it intends to purchase HelloSign, a company that provides lightweight document workflow and e-signature services. The company paid a hefty $230 million for the privilege.

Dropbox’s SVP of engineering, Quentin Clark, sees this as more than simply bolting on electronic signature functionality to the Dropbox solution. For him, the workflow capabilities that HelloSign added in 2017 were really key to the purchase.

“What is unique about HelloSign is that the investment they’ve made in APIs and the workflow products is really so aligned with our long-term direction,” Clark told TechCrunch. “It’s not just a thing to do one more activity with Dropbox, it’s really going to help us pursue that broader vision,” he added. That vision involves extending the storage capabilities at the core of the Dropbox solution.

This can also be seen in the context of the Extensions capability that Dropbox added last year. HelloSign was actually one of the companies involved at launch. While Clark says the company will continue to encourage companies to extend the Dropbox solution, today’s acquisition gives it a capability of its own that doesn’t require a partnership and is already connected to Dropbox via Extensions.

Fast integration

Alan Pelz-Sharpe, founder and principal analyst at Deep Analysis, who has been following this market for many years, says the fact that it’s an Extensions partner should allow much faster integration than would normally happen in an acquisition like this. “Simple document processes that relate to small and medium business are still largely manual. The fact that HelloSign has solutions for things like real estate, insurance and customer/employee onboarding, plus the existing extension to Dropbox, means it can be leveraged quickly for revenue growth by Dropbox,” Pelz-Sharpe explained.

He added that the size of the deal shows there is high demand for these kinds of capabilities. “It is a very high multiple, but in such a fast growth area not an unreasonable one to demand for a startup showing such growth potential. The price suggests that there were almost certainly other highly motivated bidders for the deal,” he said.

HelloSign CEO Joseph Walla says being part of Dropbox gives HelloSign access to resources of a much larger public company, which should allow it to reach a broader market than it could on its own. “Together with Dropbox, we can bring more seamless document workflows to even more customers and dramatically accelerate our impact,” Walla said in a blog post announcing the deal.

HelloSign remains standalone

Whitney Bouck, COO at HelloSign, who previously held stints at Box and EMC Documentum, said the company will remain an independent entity. That means it will continue to operate with its current management structure as part of the Dropbox family. In fact, Clark indicated that all of the HelloSign employees will be offered employment at Dropbox as part of the deal.

“We’re going to remain effectively a standalone business within the Dropbox family, so that we can continue to focus on developing the great products that we have and delivering value. So the good news is that our customers won’t really experience any massive change. They just get more opportunity,” Bouck said.

Alan Lepofsky, an analyst at Constellation Research who specializes in enterprise workflow, sees HelloSign giving Dropbox an enterprise-class workflow tool, but adds that the addition of Bouck and her background in enterprise content management is also a nice bonus for Dropbox in this deal. “While this is not an acqui-hire, Dropbox does end up with Whitney Bouck, a proven leader in expanding offerings into enterprise scale accounts. I believe she could have a large impact in Dropbox’s battle with her former employer Box,” Lepofsky told TechCrunch.

Clark said that it was too soon to say exactly how it will bundle and incorporate HelloSign functionality beyond the Extensions. But he expects that the company will find ways to integrate the two products where it makes sense, even while HelloSign operates as a separate company with its own customers.

When you consider that HelloSign, a Bay Area startup that launched in 2011, raised just $16 million, this appears to be an impressive return for investors and a solid exit for the company.

The deal is expected to close in Q1 and is, per usual, dependent on regulatory approval.

Has the fight over privacy changed at all in 2019?

Few issues divide the tech community quite like privacy. Much of Silicon Valley’s wealth has been built on data-driven advertising platforms, and yet, there remain constant concerns about the invasiveness of those platforms.

Such concerns have intensified in just the last few weeks as France’s privacy regulator imposed a record fine on Google under Europe’s General Data Protection Regulation (GDPR) rules, which the company now plans to appeal. Yet with global platform usage and service sales continuing to tick up, we asked a panel of eight privacy experts: “Has anything fundamentally changed around privacy in tech in 2019? What is the state of privacy and has the outlook changed?”

This week’s participants include:

TechCrunch is experimenting with new content forms. Consider this a recurring venue for debate, where leading experts – with a diverse range of vantage points and opinions – provide us with thoughts on some of the biggest issues currently in tech, startups and venture. If you have any feedback, please reach out: Arman.Tabatabai@techcrunch.com.


Thoughts & Responses:


Albert Gidari

Albert Gidari is the Consulting Director of Privacy at the Stanford Center for Internet and Society. He was a partner for over 20 years at Perkins Coie LLP, achieving a top-ranking in privacy law by Chambers, before retiring to consult with CIS on its privacy program. He negotiated the first-ever “privacy by design” consent decree with the Federal Trade Commission. A recognized expert on electronic surveillance law, he brought the first public lawsuit before the Foreign Intelligence Surveillance Court, seeking the right of providers to disclose the volume of national security demands received and the number of affected user accounts, ultimately resulting in greater public disclosure of such requests.

There is no doubt that the privacy environment changed in 2018 with the passage of California’s Consumer Privacy Act (CCPA), implementation of the European Union’s General Data Protection Regulation (GDPR), and new privacy laws enacted around the globe.

“While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.””

For one thing, large tech companies have grown huge privacy compliance organizations to meet their new regulatory obligations. For another, the major platforms now are lobbying for passage of a federal privacy law in the U.S. This is not surprising after a year of privacy miscues, breaches and negative privacy news. But does all of this mean a fundamental change is in store for privacy? I think not.

The fundamental model sustaining the Internet is based upon the exchange of user data for free service. As long as advertising dollars drive the growth of the Internet, regulation simply will tinker around the edges, setting sideboards to dictate the terms of the exchange. The tech companies may be more accountable for how they handle data and to whom they disclose it, but the fact is that data will continue to be collected from all manner of people, places and things.

Indeed, if the past year has shown anything it is that two rules are fundamental: (1) everything that can be connected to the Internet will be connected; and (2) everything that can be collected, will be collected, analyzed, used and monetized. It is inexorable.

While privacy regulation seeks to make tech companies better stewards of the data they collect and their practices more transparent, in the end, it is a deception to think that users will have more “privacy.” No one even knows what “more privacy” means. If it means that users will have more control over the data they share, that is laudable but not achievable in a world where people have no idea how many times or with whom they have shared their information already. Can you name all the places over your lifetime where you provided your SSN and other identifying information? And given that the largest data collector (and likely least secure) is government, what does control really mean?

All this is not to say that privacy regulation is futile. But it is to recognize that nothing proposed today will result in a fundamental shift in privacy policy or provide a panacea of consumer protection. Better privacy hygiene and more accountability on the part of tech companies is a good thing, but it doesn’t solve the privacy paradox that those same users who want more privacy broadly share their information with others who are less trustworthy on social media (ask Jeff Bezos), or that the government hoovers up data at a rate that makes tech companies look like pikers (visit a smart city near you).

Many years ago, I used to practice environmental law. I watched companies strive to comply with new laws intended to control pollution by creating compliance infrastructures and teams aimed at preventing, detecting and deterring violations. Today, I see the same thing at the large tech companies – hundreds of employees have been hired to do “privacy” compliance. The language is the same too: cradle to grave privacy documentation of data flows for a product or service; audits and assessments of privacy practices; data mapping; sustainable privacy practices. In short, privacy has become corporatized and industrialized.

True, we have cleaner air and cleaner water as a result of environmental law, but we also have made it lawful and built businesses around acceptable levels of pollution. Companies still lawfully dump arsenic in the water and belch volatile organic compounds in the air. And we still get environmental catastrophes. So don’t expect today’s “Clean Privacy Law” to eliminate data breaches or profiling or abuses.

The privacy world is complicated and few people truly understand the number and variety of companies involved in data collection and processing, and none of them are in Congress. The power to fundamentally change the privacy equation is in the hands of the people who use the technology (or choose not to) and in the hands of those who design it, and maybe that’s where it should be.


Gabriel Weinberg

Gabriel Weinberg is the Founder and CEO of privacy-focused search engine DuckDuckGo.

Coming into 2019, interest in privacy solutions is truly mainstream. There are signs of this everywhere (media, politics, books, etc.) and also in DuckDuckGo’s growth, which has never been faster. With solid majorities now seeking out private alternatives and other ways to be tracked less online, we expect governments to continue to step up their regulatory scrutiny and for privacy companies like DuckDuckGo to continue to help more people take back their privacy.

“Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information.”

We’re also seeing companies take action beyond mere regulatory compliance, reflecting this new majority will of the people and its tangible effect on the market. Just this month we’ve seen Apple’s Tim Cook call for stronger privacy regulation and the New York Times report strong ad revenue in Europe after stopping the use of ad exchanges and behavioral targeting.

At its core, this groundswell is driven by the negative effects that stem from the surveillance business model. The percentage of people who have noticed ads following them around the Internet, or who have had their data exposed in a breach, or who have had a family member or friend experience some kind of credit card fraud or identity theft issue, reached a boiling point in 2018. On top of that, people learned of the extent to which the big platforms like Google and Facebook that collect the most data are used to propagate misinformation, discrimination, and polarization. Consumers don’t necessarily feel they have anything to hide – but they just don’t want corporations to profit off their personal information, or be manipulated, or unfairly treated through misuse of that information. Fortunately, there are alternatives to the surveillance business model and more companies are setting a new standard of trust online by showcasing alternative models.


Melika Carroll

Melika Carroll is Senior Vice President, Global Government Affairs at Internet Association, which represents over 45 of the world’s leading internet companies, including Google, Facebook, Amazon, Twitter, Uber, Airbnb and others.

We support a modern, national privacy law that provides people meaningful control over the data they provide to companies so they can make the most informed choices about how that data is used, seen, and shared.

“Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.”

Internet companies believe all Americans should have the ability to access, correct, delete, and download the data they provide to companies.

Americans will benefit most from a federal approach to privacy – as opposed to a patchwork of state laws – that protects their privacy regardless of where they live. If someone in New York is video chatting with their grandmother in Florida, they should both benefit from the same privacy protections.

It’s also important to consider that all companies – both online and offline – use and collect data. Any national privacy framework should provide the same protections for people’s data across industries, regardless of whether it is gathered offline or online.

Two other important pieces of any federal privacy law include user expectations and the context in which data is shared with third parties. Expectations may vary based on a person’s relationship with a company, the service they expect to receive, and the sensitivity of the data they’re sharing. For example, you expect a car rental company to be able to track the location of the rented vehicle that doesn’t get returned. You don’t expect the car rental company to track your real-time location and sell that data to the highest bidder. Additionally, the same piece of data can have different sensitivities depending on the context in which it’s used or shared. For example, your name on a business card may not be as sensitive as your name on the sign-in sheet at an addiction support group meeting.

This is a unique time in Washington as there is bipartisan support in both chambers of Congress as well as in the administration for a federal privacy law. Our industry is committed to working with policymakers and other stakeholders to find an American approach to privacy that protects individuals’ privacy and allows companies to innovate and develop products people love.


Johnny Ryan

Dr. Johnny Ryan FRHistS is Chief Policy & Industry Relations Officer at Brave. His previous roles include Head of Ecosystem at PageFair, and Chief Innovation Officer of The Irish Times. He has a PhD from the University of Cambridge, and is a Fellow of the Royal Historical Society.

Tech companies will probably have to adapt to two privacy trends.

“As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.”

First, the GDPR is emerging as a de facto international standard.

In the coming years, the application of GDPR-like laws for commercial use of consumers’ personal data in the EU, Britain (post-EU), Japan, India, Brazil, South Korea, Malaysia, Argentina, and China will bring more than half of global GDP under a similar standard.

Whether this emerging standard helps or harms United States firms will be determined by whether the United States enacts and actively enforces robust federal privacy laws. Unless there is a federal GDPR-like law in the United States, there may be a degree of friction and the potential of isolation for United States companies.

However, there is an opportunity in this trend. The United States can assume the global lead by doing two things. First, enact a federal law that borrows from the GDPR, including a comprehensive definition of “personal data”, and robust “purpose specification”. Second, invest in world-leading regulation that pursues test cases, and defines practical standards. Cutting edge enforcement of common principles-based standards is de facto leadership.

Second, privacy and antitrust law are moving closer to each other, and might squeeze big tech companies very tightly indeed.

Big tech companies “cross-use” user data from one part of their business to prop up others. The result is that a company can leverage all the personal information accumulated from its users in one line of business, and for one purpose, to dominate other lines of business too.

This is likely to have anti-competitive effects. Rather than competing on the merits, the company can enjoy the unfair advantage of massive network effects even though it may be starting from scratch in a new line of business. This stifles competition and hurts innovation and consumer choice.

Antitrust authorities in other jurisdictions have addressed this. In 2015, the Belgian National Lottery was fined for re-using personal information acquired through its monopoly for a different, and incompatible, line of business.

As lawmakers and regulators in Europe and in the United States start to think of “purpose specification” as a tool for anti-trust enforcement, tech giants should beware.


John Miller

John Miller is the VP for Global Policy and Law at the Information Technology Industry Council (ITI), a D.C.-based advocacy group for the high tech sector. Miller leads ITI’s work on cybersecurity, privacy, surveillance, and other technology and digital policy issues.

Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike. However, as times change and innovation progresses at a rapid rate, it’s clear the laws protecting consumers’ data and privacy must evolve as well.

“Data has long been the lifeblood of innovation. And protecting that data remains a priority for individuals, companies and governments alike.”

As the global regulatory landscape shifts, there is now widespread agreement among business, government, and consumers that we must modernize our privacy laws, and create an approach to protecting consumer privacy that works in today’s data-driven reality, while still delivering the innovations consumers and businesses demand.

More and more, lawmakers and stakeholders acknowledge that an effective privacy regime provides meaningful privacy protections for consumers regardless of where they live. Approaches, like the framework ITI released last fall, must offer an interoperable solution that can serve as a model for governments worldwide, providing an alternative to a patchwork of laws that could create confusion and uncertainty over what protections individuals have.

Companies are also increasingly aware of the critical role they play in protecting privacy. Looking ahead, the tech industry will continue to develop mechanisms to hold us accountable, including recommendations that any privacy law mandate companies identify, monitor, and document uses of known personal data, while ensuring the existence of meaningful enforcement mechanisms.


Nuala O’Connor

Nuala O’Connor is president and CEO of the Center for Democracy & Technology, a global nonprofit committed to the advancement of digital human rights and civil liberties, including privacy, freedom of expression, and human agency. O’Connor has served in a number of presidentially appointed positions, including as the first statutorily mandated chief privacy officer in U.S. federal government when she served at the U.S. Department of Homeland Security. O’Connor has held senior corporate leadership positions on privacy, data, and customer trust at Amazon, General Electric, and DoubleClick. She has practiced at several global law firms including Sidley Austin and Venable. She is an advocate for the use of data and internet-enabled technologies to improve equity and amplify marginalized voices.

For too long, Americans’ digital privacy has varied widely, depending on the technologies and services we use, the companies that provide those services, and our capacity to navigate confusing notices and settings.

“Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away.”

We are burdened with trying to make informed choices that align with our personal privacy preferences on hundreds of devices and thousands of apps, and reading and parsing as many different policies and settings. No individual has the time nor capacity to manage their privacy in this way, nor is it a good use of time in our increasingly busy lives. These notices and choices and checkboxes have become privacy theater, but not privacy reality.

In 2019, the legal landscape for data privacy is changing, and so is the public perception of how companies handle data. As more information comes to light about the effects of companies’ data practices and myriad stewardship missteps, Americans are surprised and shocked about what they’re learning. They’re increasingly paying attention, and questioning why they are still overburdened and unprotected. And with intensifying scrutiny by the media, as well as state and local lawmakers, companies are recognizing the need for a clear and nationally consistent set of rules.

Personal privacy is the cornerstone of the digital future people want. Americans deserve comprehensive protections for personal information – protections that can’t be signed, or check-boxed, away. The Center for Democracy & Technology wants to help craft those legal principles to solidify Americans’ digital privacy rights for the first time.


Chris Baker

Chris Baker is Senior Vice President and General Manager of EMEA at Box.

Last year saw data privacy hit the headlines as businesses and consumers alike were forced to navigate the implementation of GDPR. But it’s far from over.

“…customers will have trust in a business when they are given more control over how their data is used and processed”

2019 will be the year that the rest of the world catches up to the legislative example set by Europe, as similar data regulations come to the forefront. Organizations must ensure they are compliant with regional data privacy regulations, and more GDPR-like policies will start to have an impact. This can present a headache when it comes to data management, especially if you’re operating internationally. However, customers will have trust in a business when they are given more control over how their data is used and processed, and customers can rest assured knowing that no matter where they are in the world, businesses must meet the highest bar possible when it comes to data security.

Starting with the U.S., 2019 will see larger corporations opt in to GDPR to support global business practices. At the same time, local data regulators will lift large sections of the EU legislative framework and implement these rules in their own countries. 2018 was the year of GDPR in Europe, and 2019 will be the year of GDPR globally.


Christopher Wolf

Christopher Wolf is the Founder and Chair of the Future of Privacy Forum think tank, and is senior counsel at Hogan Lovells focusing on internet law, privacy and data protection policy.

“Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.”

With the EU GDPR in effect since last May (setting a standard other nations are emulating), with the adoption of a highly-regulatory and broadly-applicable state privacy law in California last summer (and similar laws adopted or proposed in other states), and with intense focus on the data collection and sharing practices of large tech companies, the time may have come when Congress will adopt a comprehensive federal privacy law. Complicating the adoption of a federal law will be the issue of preemption of state laws and what to do with highly-developed sectoral laws like HIPAA and Gramm-Leach-Bliley. Also to be determined is the expansion of FTC regulatory powers. Regardless of the outcome of the debate over a new federal privacy law, the issue of the privacy and protection of personal data is unlikely to recede.

5 Ways a CISO Can Tackle the Cyber Security Skills Shortage Now

Every CISO knows that finding skilled security staff these days is not only hard but getting harder. The proportion of organizations reporting a cybersecurity skills shortage has risen every year, from 42% in 2015 to 53% last year. Estimates suggest this will translate into a shortfall of around 2 million unfilled cybersecurity positions in 2019, rising to 3.5 million by 2021.

There’s no shortage of people talking about the problem either, with increasing demands for more cooperation between universities, private organizations and government to boost training opportunities and encourage more diverse applicants into the field. But if your organization is facing a shortage problem today, you can’t wait for a talent pipeline to emerge in three or five years’ time. What practical steps can you take now?

cybersecurity skills shortage

1. Lower the Skill Level

It might sound revolutionary, but one obvious way to match the skills that are available to the skills that are required is to lower the skill requirements. How? Look for and invest in tools that provide the functionality you need in a simpler, more intuitive way. In other words, tools that require less specialist human skill because they use machine smarts to automate complex tasks.

Many CISOs today understand the need to move away from ineffective, labor-intensive legacy AV security products. Security leaders in many organizations are reducing hiring problems by moving toward automated endpoint detection and response solutions that use machine learning to process vastly more data than human “agents” can.

Beware, though: there are many “next-gen” products on the market, but they are not all created equal. You don’t want a solution that merely changes the work your staff have to do, or that leaves them struggling to stand still rather than move forward. Look for next-gen AV products like SentinelOne that are manageable by the staff you have now, with the training they have now, and that make onboarding for new employees straightforward.

At the very minimum, reducing the workload means a single-agent solution that integrates across all your platforms and lets you manage sites from a simple, intuitive console. You want a solution that won’t bury your staff in multiple alerts for each suspected attack—that is just going to make your staff work more, not less—but instead provides a single alert and a contextual attack storyline that can be understood by any competent IT staff without the need for long training courses or specialist, expensive certification.

2. Spread the Load For Your Security Professionals

Imagine if every one of your employees – not just your IT team – were a part-time, volunteer “security officer”. Where would your skills shortage be then? Of course, that’s not going to be a realistic proposition in many cases, but the idea of getting more of your current staff involved goes hand-in-hand with having streamlined security solutions that are easy to learn. Consider rotating staff from other departments into your IT or security teams on a regular basis so that as many employees as possible know the basics of how your security team (and its tools!) function.

Aside from helping you to spot potential talent from unexpected areas of your organization, having transparency into what your Security Operations Center (SOC) or security team does, what it deals with and how it handles it, will increase understanding and vigilance across your business.

3. Raise Awareness About Cyber Attacks

You might not be able to give every member of your staff a taste of “a day in the life of a security engineer”, but for those that you can’t, education is a powerful weapon that will reduce your SOC’s workload. Increase the conversations in your workplace that concern security with more than just occasional “Security awareness” seminars (although don’t forget to run those, too!). Increasing awareness creates more vigilant staff, and more vigilance means less chance of attacks ever getting past your weakest line of defense: the people on your network. That, in turn, will help lessen the burden on your SOC or IT security team.

With phishing and spear-phishing campaigns the primary vector of credential theft, consider running regular phishing awareness and phishing simulation campaigns on your staff to make them aware of just how convincing phishing attacks can be.

On top of that, if you’re not employing some kind of media or device control on your endpoints, raise awareness about the dangers of infected USBs and just how easy it is for employees to unwittingly compromise the firm’s security. You could even consider replicating the famous USB key dropping test carried out at the University of Illinois. Social engineering keys are the easiest to create as they use simple HTML files and phish users for credentials.

A screenshot of dropping USB keys

The point is to think creatively about how to engage staff with security issues that intersect with their everyday work and practices. Whether it’s the folk in Marketing and Sales, Finance and Accounting, or R&D and Engineering, cybersecurity comes into contact with them all. However you do it, aim to integrate all your staff as “security partners” and avoid isolating your IT team. If your security team is hiding away under the stairs or in a small back office, you’re insulating your staff and the knowledge they hold not just from each other but also from the security issues that face your entire business.

4. Increase Network Visibility

Your cybersecurity skills shortage is related to the complexity of your network. The greater the variety of devices that connect to your network – whether they run Windows, Linux, or macOS, and whether they are desktop, notebook, mobile or smart “IoT” devices – the greater the attack surface presented to adversaries, and the more work you have to do to monitor and protect them.

Then there’s the supply chain to consider. How well-protected is the development cycle of your third-party vendors? For those that have access to your network, how well do they protect the keys to your kingdom? It’s a lot of bases to cover, especially if your security team is lean.

The answer to network complexity is network visibility. If you can’t see what devices on your network are doing, you can’t protect your network against them. Automated AI solutions can help bring visibility to your network so that you can see who is traversing it and what they are doing. However, make sure the next-gen AV product you choose has the ability to inspect encrypted traffic, as bad actors are increasingly operating with SSL certificates and communicating via https. This is still a blind spot for many next-gen security products.

5. Plan for Tomorrow

Sure, that talent pipeline may be a few years away, but your organization isn’t going anywhere and neither is the demand for a certain amount of skilled staff. Ensure that you’re building for your cybersecurity needs not only for today but also for tomorrow. You’ll attract better candidates if you can offer an organization that has industry-leading tools and an enlightened, company-wide approach to managing security.

While interviewers like to ask candidates “where do you see yourself in five years’ time?”, it’s worth asking the same question of your current cybersecurity defenses and strategies. Are they going to keep up with an increase in automated attacks, new devices and new working practices? Will they help simplify the security tasks you have to tackle today and tomorrow, or will they just burden you with an ever-increasing need to hire an army of experts to protect your customers, data and reputation?

