CrowdStrike’s CEO on how to IPO, direct listings and what’s ahead for SaaS startups

A few days before Christmas, TechCrunch caught up with CrowdStrike CEO George Kurtz to chat about his company’s public offering, direct listings and his expectations for the 2020 IPO market. We also spoke about CrowdStrike’s product niche — endpoint security — and a bit more on why he views his company as the Salesforce of security.

The conversation is timely. Of the 2019 IPO cohort, CrowdStrike’s IPO stands out as one of the year’s most successful debuts. As 2020’s IPO cycle is expected to be both busy and inclusive of some of the private market’s biggest names, Kurtz’s views are useful to understand. After all, his SaaS security company enjoyed a strong pricing cycle, a better-than-expected IPO fundraising haul and strong value appreciation after its debut.

Notably, CrowdStrike didn’t opt to pursue a direct listing; after chatting with the CEO of recent IPO Bill.com concerning why his SaaS company also decided on a traditional flotation, we wanted to hear from Kurtz as well. The security CEO called the current conversation around direct listings a “great debate,” before explaining his perspective.

Pulling from a longer conversation, what follows are Kurtz’s four tips for companies gearing up for a public offering, why his company chose a traditional public offering over a more exotic method, comments on endpoint security and where CrowdStrike fits inside its market, and, finally, quick notes on upcoming debuts.

The following interview has been condensed and edited for clarity.

How to go public successfully

Share often

What’s most important is the fact that when we IPO’d in June of 2019, we started the process three years earlier. And that is the number one thing that I can point to. When [CrowdStrike CFO Burt Podbere] and I went on the road show everybody knew us, all the buy side investors we had met with for three years, the sell side analysts knew us. The biggest thing that I would say is you can’t go on a road show and have someone not know your company, or not know you, or your CFO.

And we would share — as a private company, you share less — but we would share tidbits of information. And we built a level of consistency over time, where we would share something, and then they would see it come true. And we would share something else, and they would see it come true. And we did that over three years. So we built, I believe, trust with the street, in anticipation of, at some point in the future, an IPO.

Practice early

We spent a lot of time running the company as if it was public, even when we were private. We had our own earnings call as a private company. We would write it up and we would script it.

You’ve seen other companies out there, if they don’t get their house in order it’s very hard to go [public]. And we believe we had our house in order. We ran it that way [which] allowed us to think and operate like a public company, which you want to get out of the way before you become public. If there’s a takeaway here for folks that are thinking about [going public], run it and act like a public company before you’re public, including simulated earnings calls. And once you become public, you already have that muscle memory.

Raw numbers matter

The third piece is [that] you [have to] look at the numbers. We are in rarefied air. At the time of IPO we were the fastest-growing SaaS company ever to IPO at scale. So we had the numbers, we had the growth rate, but it really was a combination of preparation beforehand, operating like a public company, […] and then we had the numbers to back it up.

TAM is key, even at scale

One last point: we had the [total addressable market, or TAM] as well. We have the TAM as part of our story; security and where we play is a massive opportunity. So we had that market opportunity as well.


On this topic, Kurtz told TechCrunch two interesting things earlier in the conversation. First, that what many people consider “endpoint security” is too constrained; the category includes “traditional endpoints plus things like mobile, plus things like containers, IoT devices, serverless, ephemeral cloud instances, [and] on and on.” The more things that fit under the umbrella of endpoint security, CrowdStrike’s focus, the bigger its market is.

Kurtz also discussed how the cloud migration — something that builds TAM for his company’s business — is still in “the early innings,” going on to say that in time “you’re going to start to see more critical workloads migrate to the cloud.” That should generate even more TAM for CrowdStrike and its competitors, like Carbon Black and Tanium.


Why CrowdStrike opted for a traditional IPO instead of a direct listing

BigID bags another $50M round as data privacy laws proliferate

Almost exactly four months after BigID announced a $50 million Series C, the company was back today with another $50 million round. The Series D came entirely from Tiger Global Management, bringing the company’s total raised to $144 million.

What warrants $100 million in investor interest in just four months is BigID’s mission: to help companies understand the data they hold and manage it in the context of increasing privacy regulation, including GDPR in Europe and CCPA in California, which went into effect this month.

BigID CEO and co-founder Dimitri Sirota admits that his company formed at the right moment when it launched in 2016, but says he and his co-founders had an inkling that there would be a shift in how governments view data privacy.

“Fortunately for us, some of the requirements that we said were going to be critical, like being able to understand what data you collect on each individual across your entire data landscape, have come to [pass],” Sirota told TechCrunch. While he understands that there are lots of competing companies going after this market, he believes that being early helped his startup establish a brand identity earlier than most.

Meanwhile, the privacy regulation landscape continues to evolve. Even as California privacy legislation is taking effect, many other states and countries are looking at similar regulations. Canada is looking at overhauling its existing privacy regulations.

Sirota says that he wasn’t actually looking to raise either the C or the D, and in fact still has B money in the bank, but when big investors want to give you money on decent terms, you take it while the money is there. These investors clearly see the data privacy landscape expanding and want to get involved. He recognizes that economic conditions can change quickly, and it can’t hurt to have money in the bank for when that happens.

That said, Sirota says you don’t raise money to keep it in the bank. At some point, you put it to work. The company has big plans to expand beyond its privacy roots and into other areas of security in the coming year. Although he wouldn’t go into too much detail about that, he said to expect some announcements soon.

For a company that is only four years old, it has been amazingly proficient at raising money: a $14 million Series A and a $30 million Series B in 2018, followed by the $50 million Series C last year and the $50 million round today. And Sirota said he didn’t even have to go looking for the latest funding. Investors came to him; there were no trips to Sand Hill Road, no pitch decks. Sirota wasn’t willing to discuss the company’s valuation, saying only that the investment was minimally dilutive.

BigID, which is based in New York City, already has some employees in Europe and Asia, but Sirota expects additional international expansion in 2020. Overall the company has around 165 employees at the moment, and he sees that going up to 200 by mid-year as the company pushes into some new adjacencies.

The Good, the Bad and the Ugly in Cybersecurity – Week 1


The Good


The MAZE group actors found themselves lost in their own “maze” yesterday, Thursday, Jan 2, 2020. Their public shaming website was taken down along with the entire platform it was hosted on in Cork, Ireland, at https://worldhostingfarm.com.


World Hosting Farm appears to have been a possible front for the bad guys (not a legitimate ISP in other words) and is host to many known malicious address ranges:


This comes on the heels of news from earlier in the week, when one of Maze’s victims from several weeks ago, US firm Southwire, was able to secure an emergency High Court injunction against two likely Polish nationals and the ISP front company that was just taken down. It is great to see a cross-border injunction granted, and even better to see the website and malicious ISP taken down. This development removes a portion of the leverage the MAZE extortionists hold over past and future victims.

The Bad

The restaurant group Landry’s (over 600 famous eateries, including Morton’s, McCormick & Schmick’s, Mastro’s and Joe’s Crab Shack) had an awkward PCI breach this week. Even though its credit card POS systems used end-to-end encryption to prevent malware from scraping card data, it turns out Landry’s Select Club reward card swipe systems did not, and employees were sometimes swiping customers’ credit cards with rewards readers instead. Malware found on reward card order entry systems was able to read the credit card data when employees swiped them.


Landry’s alerted customers on its website, letting them know the activity had been ongoing for about a year. No word yet on how many credit cards were compromised.

The Ugly

The City of New Orleans gave an update on its recovery status on the heels of the December 14th ransomware attack. The attack took out all 3,400 systems that were connected to the network. 2,658 systems have been restored over the last two and a half weeks; however, eight of the city’s agencies have yet to be restored from backup.

Manual processes are still being used, and restoring Public Safety systems, including the NOPD’s EPP and body camera footage, remains the top priority. These systems should be restored by next Monday, January 6th, after three weeks of downtime. Meanwhile, the city hopes to be able to allow property taxes to be paid no later than January 31, a month and a half after the attack began. To give an idea of the level of effort so far, over 75 people have been working full time on the breach since December 14th. That represents over 10,000 hours for those 75 people alone. Additionally, up to 20% of the city’s computing assets will not be usable on the newly-restored network, though the reason for this was not mentioned. Perhaps this is the percentage of assets for which data could not be recovered.

While the city has turned over relevant data to the FBI, it was not willing to name the type of ransomware used in the attack or speculate about who (or what nation state, perhaps) is behind it. Earlier reports based on files that were uploaded to VirusTotal pointed to Ryuk; based on strings from one sample, it was seemingly taken from a computer owned by the city’s IT Security Manager.

So there you have it, this week’s UGLY: cities, counties and schools keep getting debilitated by ransomware, and it’s taking weeks and months to recover.


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

The Best, The Worst and The Ugliest in Cybersecurity, 2019 edition


Earlier this year we started a Friday round-up of the most important cyber news occurring each week, focusing on stories that fell into one of three broadly defined categories: good news that boded well for the industry and users, bad news that reflected the impact of attacks, hacks and vulnerabilities on enterprise and end users, and, of course, the ugly: avoidable failures, controversial decisions and unfortunate circumstances. As we close out the year, let’s take a look at some of the highlights of our Good, the Bad and the Ugly digests. Here then, is the best, the worst, and the ugliest in cybersecurity of 2019.

The Best

There’s been some great news in cybersecurity this year, from the establishment of the offensive Cybersecurity Directorate (Wk30) to botnet (Wk40) and RAT (Wk49) platform takedowns. The identification of two men behind the notorious Dridex malware (Wk49) and the sentencing of criminals behind the Bayrob (Wk50) scam and those responsible for GozNym malware (Wk52) are all good news for cyber security going into 2020.


The good cause should be further enhanced by the uptick in investment in businesses providing cybersecurity solutions (Wk29, Wk32), a trend that looks set to continue this coming year.

But perhaps the best news for enterprise and end users alike has been the expansion of bug bounty programs by the major players, which should lead to improved security across many of the main digital products and services we all use and rely on. Microsoft announced the Edge Insider Bounty Program (Wk34) just a week before Google (Wk35) initiated a new program to reward the reporting of data abuse issues in Android apps. Apple promised to open up its iOS bug bounty program to all (Wk32) as well as offer a macOS bug bounty program for the first time; the company delivered on those promises just in the nick of time (Wk51). Also worth mentioning among the good news was the publication of an interactive Russian APT map (Wk39), which if nothing else is perhaps the prettiest of all the good news we saw in 2019!


The Worst

Inevitably, there was plenty of bad news this past year, too. BlueKeep (Wk31) and a universal Windows privilege escalation (Wk33) had enterprises on high alert. Thankfully, the predictions of another EternalBlue/WannaCry meltdown have not come to pass, thus far. But that isn’t to say the danger is behind us, with so many vulnerable devices still out there.

There’s no doubt that this year we’ve covered more stories regarding ransomware than any other topic (Wk34, Wk35, Wk41, Wk44, Wk45, Wk49, Wk52), and among those Ryuk (Wk36, Wk37, Wk40, Wk47, Wk52) has been the most rampant, with RobinHood (Wk30), Sodinokibi (Wk35) and Maze (Wk46, Wk52) ransomware variants also causing havoc across public and private organizations in the US and abroad. As ransomware as a service continues to spread and make the threat available to a wider, less-technical, criminal audience, it only looks like 2020 will see more of the same.

But perhaps the worst thing we saw this year was the attack on the Kudankulam Nuclear Power Plant in India (Wk44). The sheer recklessness of attacking nuclear power plants, whose safe operation is critical to the safety and health of the entire world, presents the gravest of threats to us all.

Image Credit: indiawaterportal.org/The Kudankulam Nuclear Power Plant (KKNPP)/Wikimedia Commons

The Ugliest

Things got ugly for Samsung this year when a bug in the fingerprint reader of the Galaxy S10 (Wk42) allowed anyone to bypass it with a clear piece of plastic. NordVPN (Wk43) was also widely criticized, not only for weak security that allowed hackers free rein inside servers belonging to the virtual private network provider, but also for failing to disclose the breach to clients for over 18 months. Fortinet, which was targeted by a Chinese APT (Wk36), was also on the receiving end of some harsh comments after it emerged (Wk48) that the company had hardcoded an encryption key into several of its products and then failed to fix the bug for a year and a half.

Perhaps the company that had the ugliest cybersecurity year in 2019 was Cisco. Awarded a 10/10 for severity on the Common Vulnerability Scoring System (CVSS), CVE-2019-12643 (Wk35) allows malicious HTTP requests to bypass authentication, giving attackers the ability to log in and execute privileged actions. In more bad news for the company, but good news for corporate whistleblower James Glenn, Cisco was penalized to the tune of $8m (Wk31) by US courts under the False Claims Act after being found guilty of shipping products with known vulnerabilities for several years. Finally, Cisco looks set to face challenging times doing business in mainland China after new cybersecurity laws were passed (Wk31) in Beijing putting restrictions on the purchase of US networking equipment, data storage and ‘critical information infrastructure’ hardware. One can only hope, for both Cisco and the millions who rely on its products, that 2020 will be a better year all round.


Coming Next…

That’s it for this year, but of course, we’ll be back in 2020 with a new series of the Good, the Bad and the Ugly, starting on Friday, January 3rd. Follow us on LinkedIn, Twitter, YouTube or Facebook or sign up for our weekly Blog newsletter and receive these posts right in your inbox. Until then, from all of us at SentinelOne, have a happy and secure New Year 2020!



The story of why Marc Benioff gifted the AppStore.com domain to Steve Jobs

In Marc Benioff’s book, Trailblazer, he tells the tale of how Steve Jobs planted the seeds of the idea that would become the first enterprise app store, and how Benioff eventually paid Jobs back with the gift of the AppStore.com domain.

While Salesforce did truly help blaze a trail when it launched as an enterprise cloud service in 1999, it took that a step further in 2006 when it became the first SaaS company to distribute related services in an online store.

In an interview last year around Salesforce’s 20th anniversary, company CTO and co-founder Parker Harris told me that the idea for the app store came out of a meeting with Steve Jobs three years before AppExchange would launch. Benioff, Harris and fellow co-founder Dave Moellenhoff took a trip to Cupertino in 2003 to meet with Jobs. At that meeting, the legendary CEO gave the trio some sage advice: to really grow and develop as a company, Salesforce needed to develop a cloud software ecosystem. While that’s something that’s a given for enterprise SaaS companies today, it was new to Benioff and his team in 2003.

As Benioff tells it in his book, he asked Jobs to elaborate on what he meant by an application ecosystem. Jobs replied that how he implemented the idea was up to him. It took some time for that concept to bake, however. Benioff wrote that the notion of an app store eventually came to him as an epiphany at dinner one night a few years after that meeting. He says that he sketched out the original idea on a napkin while sitting in a restaurant:

One evening over dinner in San Francisco, I was struck by an irresistibly simple idea. What if any developer from anywhere in the world could create their own applications for the Salesforce platform? And what if we offered to store these apps in an online directory that allowed any Salesforce user to download them?

Whether it happened like that or not, the app store idea would eventually come to fruition, but it wasn’t originally called the AppExchange, as it is today. Instead, Benioff says he liked the name AppStore.com so much that he had his lawyers register the domain the next day.

When Benioff talked to customers prior to the launch, while they liked the concept, they didn’t like the name he had come up with for his online store. He eventually relented and launched in 2006 with the name AppExchange.com instead. Force.com would follow in 2007, giving programmers a full-fledged development platform to create applications, and then distribute them in AppExchange.

Meanwhile, AppStore.com sat dormant until 2008, when Benioff was invited back to Cupertino for a big announcement around the iPhone. As Benioff wrote, “At the climactic moment, [Jobs] said [five] words that nearly floored me: ‘I give you App Store.’”

Benioff wrote that he and his executives actually gasped when they heard the name. Somehow, even after all the time that had passed since that original meeting, both companies had settled upon the same name. Except Salesforce had rejected it, leaving an opening for Benioff to give a gift to his mentor. He says that he went backstage after the keynote and signed over the domain to Jobs.

In the end, the idea of the web domain wasn’t even all that important to Jobs in the context of an app store concept. After all, he put the App Store on every phone, and it wouldn’t require a website to download apps. Perhaps that’s why today the domain points to the iTunes store, and launches iTunes (or gives you the option of opening it).

Even the App Store page on Apple.com uses the sub-domain “app-store” today, but it’s still a good story of how a conversation between Jobs and Benioff would eventually have a profound impact on how enterprise software was delivered, and how Benioff was able to give something back to Jobs for that advice.

Moving storage in-house helped Dropbox thrive

Back in 2013, Dropbox was scaling fast.

The company had grown quickly by taking advantage of cloud infrastructure from Amazon Web Services (AWS), but when you grow rapidly, infrastructure costs can skyrocket, especially when approaching the scale Dropbox was at the time. The company decided to build its own storage system and network — a move that turned out to be a wise decision.

In a time when going from on-prem to cloud and closing private data centers was typical, Dropbox took a big chance by going the other way. The company still uses AWS for certain services, regional requirements and bursting workloads, but ultimately when it came to the company’s core storage business, it wanted to control its own destiny.

Storage is at the heart of Dropbox’s service, leaving it with scale issues like few other companies, even in an age of massive data storage. With 600 million users and 400,000 teams currently storing more than 3 exabytes of data (and growing), if it hadn’t taken this step, the company might have been squeezed by its growing cloud bills.

Controlling infrastructure helped control costs, which improved the company’s key business metrics. A look at historical performance data tells a story about the impact that taking control of storage costs had on Dropbox.

The numbers

In March of 2016, Dropbox announced that it was “storing and serving” more than 90% of user data on its own infrastructure for the first time, completing a 3-year journey to get to this point. To understand what impact the decision had on the company’s financial performance, you have to examine the numbers from 2016 forward.

There is good financial data from Dropbox going back to the first quarter of 2016 thanks to its IPO filing, but not before. So, the view into the impact of bringing storage in-house begins after the project was initially mostly completed. By examining the company’s 2016 and 2017 financial results, it’s clear that Dropbox’s revenue quality increased dramatically. Even better for the company, its revenue quality improved as its aggregate revenue grew.

These ten enterprise M&A deals totaled over $40B in 2019

It would be hard to top the 2018 enterprise M&A total of a whopping $87 billion, and predictably this year didn’t come close. In fact, the top 10 enterprise M&A deals in 2019 added up to less than half of last year’s total, at $40.6 billion.

This year’s biggest purchase was Salesforce buying Tableau for $15.7 billion, which would have been good for third place last year behind IBM’s mega deal plucking Red Hat for $34 billion and Broadcom grabbing CA Technologies for $18.8 billion.

Contributing to this year’s quieter activity was the fact that several typically acquisitive companies — Adobe, Oracle and IBM — stayed mostly on the sidelines after big investments last year. It’s not unusual for companies to take a go-slow approach after a big expenditure year. Adobe and Oracle bought just two companies each with neither revealing the prices. IBM didn’t buy any.

Microsoft didn’t show up on this year’s list either, but still managed to pick up eight new companies. It was just that none was large enough to make the list (or even for them to publicly reveal the prices). When a publicly traded company doesn’t reveal the price, it usually means that it didn’t reach the threshold of being material to the company’s results.

As always, just because you buy it doesn’t mean it’s always going to integrate smoothly or well, and we won’t know about the success or failure of these transactions for some years to come. For now, we can only look at the deals themselves.

It’s Beginning to Look a IoT Like Christmas

’Tis the season, and as I look at what gifts are trending on various sites, I am amazed at the number and variety of connected IoT devices. I also had a personal epiphany when asking family what they were hoping to receive from Santa Claus. Each of my parents wanted a smart photo frame so they could cycle through up-to-date photos of family, and especially the grandchildren. They, for some reason, needed a Wi-Fi enabled display device with the ability to link to the latest cloud photos and social media to display on their desks at work.

This led me to think about how many devices like these will end up in the office in January. Devices to help find keys, wallets and cars, devices to better organize our days and remind us of our loved ones during tough days. Not to mention facilities managers unveiling the new smart coffee machine, vending machine or smart fridge added to the corporate network.

How Do Security Professionals Securely Maintain IT Hygiene with All These Devices?

As we head for this post-holiday IoT apocalypse, how do security professionals support the advance in productivity, engagement and enjoyment in the workplace whilst safely maintaining IT hygiene and control so as not to expose the enterprise to vulnerabilities and disaster?

The answer lies in three things:

  1. Constantly know what devices are on your network, where and why
  2. Immediately tell the difference between IT, IoT and OT devices
  3. Have confidence that your cyber hygiene process can accommodate this influx

IoT devices, smart devices and industrial control systems deliver business growth and profitability, but there is no real way to secure them using traditional means. Because these devices often fly under the radar of traditional controls for device security, vulnerability management and IT hygiene, “point in time” scans will no longer suffice, and gaining this awareness and inventory through manual processes is simply impossible.

It is also important to note that existing scanning methods may be too heavy for some devices and may harm them; some of these devices may actually turn out to be critical OT systems.

To avoid mistakes like these, you need the ability to fingerprint the different devices on your network to tell whether they are smartphones, IP-enabled cameras or critical industrial control devices. This also allows you to understand the risk associated with their capabilities and whether you can bring them into management, add security software and scan for significant vulnerabilities ripe for exploiting.
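As a rough illustration of the kind of fingerprinting described above, a passive approach can classify devices from attributes already visible on the network, such as the vendor prefix (OUI) of a MAC address. This is a simplified sketch, not SentinelOne’s implementation; the OUI table and classification rules here are invented for the example:

```python
# Simplified sketch of passive device fingerprinting.
# NOTE: the OUI-to-vendor table below is an illustrative assumption,
# not a real vendor database.

# The first three octets of a MAC address (the OUI) identify the vendor.
HYPOTHETICAL_OUI_TABLE = {
    "00:1A:2B": ("Acme Cameras", "ip-camera"),
    "3C:4D:5E": ("PhoneCo", "smartphone"),
    "AA:BB:CC": ("PLC Systems", "industrial-control"),
}

def fingerprint(mac: str) -> tuple[str, str]:
    """Classify a device by its MAC OUI; unknown OUIs need deeper inspection."""
    oui = mac.upper()[:8]
    return HYPOTHETICAL_OUI_TABLE.get(oui, ("unknown vendor", "unclassified"))

def scan_policy(device_type: str) -> str:
    """Decide how aggressively a device may be probed."""
    if device_type == "industrial-control":
        return "passive-only"  # active scans could harm critical OT devices
    if device_type == "unclassified":
        return "quarantine-and-review"
    return "full-scan"

vendor, kind = fingerprint("aa:bb:cc:01:02:03")
print(kind, scan_policy(kind))  # industrial-control passive-only
```

In practice this is only one signal; real products combine it with traffic profiling and protocol analysis, and the policy step shows why fingerprinting must come before scanning: OT devices get flagged for passive monitoring rather than active probes.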

Once you have visibility of your network, you know who’s who and what’s what. You can now assess the potential risks associated with decisions and you can create or easily review good cyber hygiene policies. You understand your estate and you can implement and adjust network segmentation. 

What if this could include AI automation to further reduce manual processes? What if this could be achieved without buying extra equipment or agents?

Introducing SentinelOne’s Ranger

Introducing SentinelOne’s Ranger, the industry’s first solution that allows machines to autonomously protect each other and notify security teams of vulnerabilities, rogue devices, and anomalous behaviour.

SentinelOne Ranger uses your managed endpoints to discover and protect other devices.

Your endpoints become environmentally aware and fend off attacks from one another, without human intervention. The technology enables constant environment visibility with fingerprinting, profiling and categorization of devices at discovery. It uses AI to monitor and control the access of every IoT device and enable immediate action, ultimately solving a problem that has been previously impossible to address at scale.

Want to learn how? Read our datasheet



InsightFinder gets a $2M seed to automate outage prevention

InsightFinder, a startup from North Carolina based on 15 years of academic research, wants to bring machine learning to system monitoring to automatically identify and fix common issues. Today, the company announced a $2 million seed round.

IDEA Fund Partners, a VC out of Durham, N.C., led the round, with participation from Eight Roads Ventures and Acadia Woods Partners. The company was founded by North Carolina State University professor Helen Gu, who spent 15 years researching this problem before launching the startup in 2015.

Gu also announced that she had brought on former Distil Networks co-founder and CEO Rami Essaid to be chief operating officer. Essaid, who sold his company earlier this year, says his new company focuses on taking a proactive approach to application and infrastructure monitoring.

“We found that these problems happen to be repeatable, and the signals are there. We use artificial intelligence to predict and get out ahead of these issues,” he said. He adds that it’s about using technology to be proactive, and he says that today the software can prevent about half of the issues before they even become problems.

If you’re thinking that this sounds a lot like what Splunk, New Relic and Datadog are doing, you wouldn’t be wrong, but Essaid says that these products take a siloed look at one part of the company technology stack, whereas InsightFinder can act as a layer on top of these solutions to help companies reduce alert noise, track a problem when there are multiple alerts flashing and completely automate issue resolution when possible.

“It’s the only company that can actually take a lot of signals and use them to predict when something’s going to go bad. It doesn’t just help you reduce the alerts and help you find the problem faster, it actually takes all of that data and can crunch it using artificial intelligence to predict and prevent [problems], which nobody else right now is able to do,” Essaid said.

For now, the software is installed on-prem at its current set of customers, but the startup plans to create a SaaS version of the product in 2020 to make it accessible to more customers.

The company launched in 2015, and has been building out the product using a couple of National Science Foundation grants before this investment. Essaid says the product is in use today in 10 large companies (which he can’t name yet), but it doesn’t have any true go-to-market motion. The startup intends to use this investment to begin to develop that in 2020.

The Millennium Bug 20 Years On | How Safe is Cyber in 2020?

Anyone who’s over 30 years old must remember that 20 years ago the world was going to end. The Millennium dawned, and among many apocalyptic prophecies one stood above all others: the Millennium Bug, aka the Year 2000 problem, the Y2K problem, the Y2K bug or the Y2K glitch. These names refer to a class of computer bugs related to the formatting and storage of calendar data for dates beginning in the year 2000. Problems were anticipated because many programs represented four-digit years with only the last two digits, making the year 2000 indistinguishable from 1900.

The assumption of a 20th century date in such programs could cause various errors, such as the incorrect display of dates and the inaccurate ordering of automated dated records or real-time events. All sorts of doomsday scenarios were heard all over the globe from nuclear meltdown (due to the computer system’s failure) and planes falling from the skies to communication breakdown and global shutdown.

Fortunately, none of these disaster scenarios materialized. The Y2K bug is now remembered as a moment of hysteria, a curious anecdote in time, much like the panic 1,000 years before it, when many were certain that the new millennium would spell doom for them all (spoiler: that also didn’t happen!).

But the importance of The Millennium Bug was that for the first time in history, decision makers and ordinary citizens alike were considering cyber as a serious threat to our way of life. Fast forward 20 years and the internet is everywhere; we all use smartphones and order stuff online that is delivered to us overnight (and very soon by drones and electric vehicles). But with everything that has changed since the dawn of the millennium, are we more or less vulnerable than we have been before? Let’s examine some factors.

Connectivity: Power With a Fatal Weakness

The most notable difference between now and then is just how connected the world has become. In this sense, we are much more vulnerable today than we were before. We cannot imagine our lives without constant connectivity and all its benefits: online shopping, social media and online journalism. If this were taken away from us, even for a short while, panic might well ensue. Just remember the Dyn attacks of 2016 that resulted in “internet blackout” across the US east coast.

Connectivity is the backbone that enables the modern data economy and global commerce, but because we have become almost entirely reliant on it, anything that prevented us from using it would have grave results.

Open Source: Free Software, Free Vulnerabilities

20 years ago many companies were still selling perpetual software licenses, and it was impossible to imagine that free, open source software, developed by a community of hobbyists, would help many organizations run their businesses. But now open source software is an important component of almost every technology stack.

However convenient and cheap, open source also carries risks. For instance, a recent study found that the most-copied Stack Overflow Java code snippet of all time contains a bug. A Java developer from big-data software company Palantir submitted the snippet back in 2010, and since then it has been copied and embedded in more than 6,000 GitHub Java projects, more than any other Stack Overflow Java snippet.

Utilizing someone else’s software has never been easier, but in doing so, we’re exposing our products to dependencies that may contain flaws and vulnerabilities as well as risking the possibility of a hard-to-detect supply-chain attack.
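One common mitigation for that supply-chain risk is checksum pinning: verifying a downloaded artifact against a known-good hash before trusting it. A minimal Python sketch, where the file contents and hashes are placeholders for a real package archive and its published checksum:

```python
import hashlib
import os
import tempfile

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded dependency against a pinned SHA-256 checksum."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large archives don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Demo with a throwaway file standing in for a downloaded package.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"pretend this is a package archive")
    path = tmp.name

good = hashlib.sha256(b"pretend this is a package archive").hexdigest()
print(verify_artifact(path, good))      # True  -- contents match the pin
print(verify_artifact(path, "0" * 64))  # False -- tampered or wrong file
os.unlink(path)
```

Pinning only defends against artifacts changing after you vetted them; it does nothing about a flaw that was already in the code when you pinned it, which is why audits and dependency scanning matter too.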

Mobiles: Universal Trackers, Universal Attackers

We had mobile phones back then. I mean, there were phones, and they were mobile: you could carry a phone in your pocket and talk to and text people and, well, that’s about all you could do with a phone in the year 2000. Fast forward to today, and it is hard to imagine how we could pass a single day without our smartphones, glued to the screen or broadcasting every aspect of our lives to the entire world in words, pictures and videos.

Unfortunately, this reliance on mobile technology makes us all more vulnerable. Cyber criminals know this and exploit it in myriad ways for fraud, theft and other schemes. In addition, since the mobile phone has become everyone’s “mobile command center”, it has become the target of choice for reconnaissance and espionage efforts, which target users with crafted spearphishing and smishing attacks and tailored exploits for Android and iOS.

With mobile devices increasingly used on corporate networks, loaded with apps that are rarely evaluated for vulnerabilities, backdoors or data scraping and with a history of having been connected to a variety of external, possibly insecure networks, they present a rising threat to both personal and enterprise security.

The Cloud of Uncertainty: Who Has My Data?

The cloud represents an even bigger revolution than the smartphone. It was obvious to anyone back in 1999 that mobile phones would become more powerful and would be used to consume and create media. But very few people believed back then that we would all be storing our data on someone else’s Linux server, sitting quietly in some remote location completely unknown to us.

Moreover, no one would have believed that enterprises and governments would also utilize this same infrastructure to host data and run applications. And yet, thanks to Amazon and Microsoft, the traditional IT infrastructure (which required a chilled data center at every physical location) has been replaced by a virtual infrastructure hosted somewhere in a huge data center on the other side of the world.

Our dependency on cloud services is complete. We cannot operate global commerce and the knowledge economy without them, but when an outage occurs, as happened to Microsoft Azure in November (taking down several Microsoft services, including Office 365, the Xbox app, Xbox Live and Skype) or to AWS in September, the impact on individuals, businesses and governments is tremendous.

When mission critical services rely on data held outside our own immediate control, the notion of ‘security’ becomes an article of faith. Who is to say if those remote servers won’t lock us out unexpectedly? How are we to know who else has access to our data or whether the devices holding it have been compromised without our knowledge?

The Internet of Things: Network Entry Points, Everywhere

The cloud is also the enabler of the next revolution: that of connected ‘smart’ devices, aka ‘Machine to Machine’ (M2M) or ‘Internet of Things’ (IoT) devices. This connectivity bridges the divide between the physical and the online world and enables devices to “sense” their environment and then “talk” to other devices or, through the cloud, to their owners.

This kind of connectivity is being brought to everything from garbage cans to street lights to autonomous vehicles and aviation. However, it also enables nefarious cyber activities on a scale we’ve never seen before, like the Mirai botnet that generated the largest DDoS attack the world had seen to that point, and other huge botnets, sometimes comprising as many as 850,000 computers, that are then used for cryptocurrency mining.

IoT devices bring both security and privacy risks. Increasingly, connected ‘smart’ devices are being recruited by botnets to gain entry into networks, and many devices leak personally identifiable information.

Meet Cybercrime: The New ‘Cost of Doing Business’

As we’ve seen, the changes that have taken place over the last 20 years have given various threat actors fertile ground on which to flourish, and flourish they have. Cybercrime has become a truly global phenomenon which impacts most industries and is expected to cost the world over $6 trillion annually by 2021.

On the defenders’ side, cybersecurity-related spending is predicted to reach $133 billion in 2022, and the market has grown more than 30x in the last 20 years. This adds to the overall financial burden on companies and governments, most of which view the money invested in cyber as a loss or “cost of doing business”, an expense that does not yield profit or generate revenue.

However, this ‘new’ cost of doing business is a reality that no modern enterprise can afford to ignore. From script kiddies with ransomware projects to sophisticated attackers targeting universities, the only way to do business in 2020 is with cybersecurity firmly factored into the operational budget.

From the smallest business to the largest multinational organization, being part of the connected world in 2020 exposes you to risks that simply didn’t exist in the year 2000.

Final Thoughts…

December 1999 feels like a long time ago. Indeed, it belongs more to the previous century, even the previous millennium, than to our time today. It is highly unlikely that a single point of failure (like the Millennium Bug) could lead to the “end of the world”. On the other hand, our hyper-connected environment makes us more vulnerable on many levels: in our offices, our cars and even our homes. Luckily, technology hasn’t stood still either, and modern security mechanisms now exist that can deal with these threats across platforms, including IoT, leveraging the latest in our tech arsenal, including AI and machine learning. The Y2K bug didn’t take us back to the analogue world, and if we continue to safeguard our connected way of living, neither will the hackers.

