SAP’s Bill McDermott on stepping down as CEO

SAP’s CEO Bill McDermott today announced that he wouldn’t seek to renew his contract for the next year and would step down immediately after nine years at the helm of the German enterprise giant.

Shortly after the announcement, I talked to McDermott, as well as SAP’s new co-CEOs Jennifer Morgan and Christian Klein. During the call, McDermott stressed that his decision to step down was very much a personal one, and that while he’s not ready to retire just yet, he simply believes that now is the right time for him to pass on the reins of the company.

To say that today’s news came as a surprise is a bit of an understatement, though it seems to be something McDermott has been thinking about for a while. Still, after talking to McDermott, Morgan and Klein, I can’t help but think that the actual decision came rather recently.

I last spoke to McDermott about a month ago, during a fireside chat at our TechCrunch Sessions: Enterprise event. At the time, I didn’t come away with the impression that this was a CEO on his way out (though McDermott reminded me that if he had already made his decision a month ago, he probably wouldn’t have given it away).

Keeping an Enterprise Behemoth on Course with Bill McDermott

“I’m not afraid to make decisions. That’s one of the things I’m known for,” he told me when I asked him about how the process unfolded. “This one, I did a lot of deep soul searching. I really did think about it very heavily — and I know that it’s the right time and that’s why I’m so happy. When you can make decisions from a position of strength, you’re always happy.”

He also noted that he has been with SAP for 17 years, with almost 10 years as CEO, and that he recently spent some time talking to fellow high-level CEOs.

“The consensus was 10 years is about the right amount of time for a CEO because you’ve accomplished a lot of things if you did the job well, but you certainly didn’t stay too long. And if you did really well, you had a fantastic success plan,” he said.

In “the recent past,” McDermott met with SAP chairman and co-founder Hasso Plattner to explain to him that he wouldn’t renew his contract. According to McDermott, both of them agreed that the company is currently at “maximum strength” and that this would be the best time to put the succession plan into action.

SAP co-CEO Jennifer Morgan

“With the continuity of Jennifer and Christian obviously already serving on the board and doing an unbelievable job, we said let’s control our destiny. I’m not going to renew, and these are the two best people for the job without question. Then they’ll get a chance to go to Capital Markets Day [in November]. Set that next phase of our growth story. Kick off the New Year — and do so with a clean slate and a clean run to the finish line.

“Very rarely do CEOs get the joy of handing over a company at maximum strength. And today is a great day for SAP. It’s a great day for me personally and Hasso Plattner, the chairman and [co-]founder of SAP. And also — and most importantly — a great day for Jennifer Morgan and Christian Klein.”

Don’t expect McDermott to just fade into the background, though, now that he is leaving SAP. If you’ve ever met or seen McDermott speak, you know that he’s unlikely to simply retire. “I’m busy. I’m passionate and I’m just getting warmed up,” he said.

As for the new leadership, Morgan and Klein noted that they hadn’t had a lot of time to think about the strategy going forward. Both previously held executive positions in the company and served on SAP’s board together for the last few years. For now, it seems, they are planning to continue on a similar path as McDermott.

“We’re excited about creating a renewed focus on the engineering DNA of SAP, combining the amazing strength and heritage of SAP — and many of the folks who have built the products that so many customers around the world run today — with a new DNA that’s come in from many of the cloud acquisitions that we’ve made,” Morgan said, noting that both she and Klein spent a lot of time over the last few months bringing their teams together in new ways. “So I think for us, that tapestry of talent and that real sense of urgency and support of our customers and innovation is top of mind for us.”

SAP co-CEO Christian Klein

Klein also stressed that he believes SAP’s current strategy is the right one. “We had unbelievable deals again in Q3 where we actually combined our latest innovations — where we combined Qualtrics with SuccessFactors with S/4 [Hana] to drive unbelievable business value for our customers. This is the way to go. The business case is there. I see a huge shift now towards S/4, and the core and business case is there, supporting new business models, driving automation, steering the company in real time. All of these assets are now coming together with our great cloud assets, so for me, the strategy works.”

Having co-CEOs can be a recipe for conflict, but McDermott himself started out as co-CEO, so the company does have some experience with the arrangement. Morgan and Klein noted that they have worked together on the SAP board for the last few years and know each other quite well.

What’s next for the new CEOs? “There has to be a huge focus on Q4,” Klein said. “And then, of course, we will continue like we did in the past. I’ve known Jen now for quite a while — there was a lot of trust there in the past and I’m really now excited to really move forward together with her and driving huge business outcomes for our customers. And let’s not forget our employees. Our employee morale is at an all-time high. And we know how important that is to our employees. We definitely want that to continue.”

It’s hard to imagine SAP without McDermott, but we’ve clearly not seen the last of him. I wouldn’t be surprised if we saw him pop up as the CEO of another company soon.

Below is my interview with McDermott from TechCrunch Sessions: Enterprise.

Why it might have been time for new leadership at SAP

SAP CEO Bill McDermott announced last night that he is stepping down after a decade at the helm, news that shocked many. It’s always tough to measure the performance of an enterprise leader when he or she leaves. Some people look at the stock price over their tenure. Some at culture. Some at the acquisitions made. Whatever the measure, it will be up to the new co-CEOs, Jennifer Morgan and Christian Klein, to put their own mark on the company.

What form that will take remains to be seen. McDermott’s tenure ended without much warning, but it also happened against a wider backdrop that includes other top executives and board members leaving the company over the last year, an activist investor coming on board and some controversial licensing changes in recent years.

Why now?

The timing certainly felt sudden. McDermott, who was interviewed at TechCrunch Sessions: Enterprise last month, sounded like a man fully engaged in the job, not one ready to leave. Yet a month later, he’s gone.

But as McDermott told our own Frederic Lardinois last night, after 10 years, it seemed like the right time to leave. “The consensus was 10 years is about the right amount of time for a CEO because you’ve accomplished a lot of things if you did the job well, but you certainly didn’t stay too long. And if you did really well, you had a fantastic success plan,” he said in the interview.

There is no reason to doubt that, but you should at least look at the context and get a sense of what has been going on in the company. As the new co-CEOs take over for McDermott, it’s worth noting that several other executives have left this year: SAP SuccessFactors COO Brigette McInnis-Day; Robert Enslin, president of its cloud business and a board member; CTO Björn Goerke; and Bernd Leukert, a member of the executive board.

Descartes Labs snaps up $20M more for its AI-based geospatial imagery analytics platform

Satellite imagery holds a wealth of information that could be useful for industries, science and humanitarian causes, but one big and persistent challenge with it has been a lack of effective ways to tap that disparate data for specific ends.

That’s created a demand for better analytics, and now, one of the startups that has been building solutions to do just that is announcing a round of funding as it gears up for expansion. Descartes Labs, a geospatial imagery analytics startup out of Santa Fe, New Mexico, is today announcing that it has closed a $20 million round of funding, money that CEO and founder Mark Johnson described to me as a bridge round ahead of the startup closing and announcing a larger growth round.

The funding is being led by Union Grove Venture Partners, with Ajax Strategies, Crosslink Capital, and March Capital Partners (which led its previous round) also participating. It brings the total raised by Descartes Labs to $60 million, and while Johnson said the startup would not be disclosing its valuation, PitchBook notes that it is $220 million ($200 million pre-money in this round).

As a point of comparison, another startup in the area of geospatial analytics, Orbital Insight, is reportedly now raising money at a $430 million valuation (that data is from January of this year, and we’ve contacted the company to see if it ever closed).

Santa Fe — a city popular with retirees that counts tourism as its biggest industry — is an unlikely place to find a tech startup. Descartes Labs’ presence there is a result of the fact that it is a spinoff from the Los Alamos National Laboratory near the city.

Johnson — who had lived in San Francisco before coming to Santa Fe to help create Descartes Labs (his previous experience building Zite for media, he said, led the Los Alamos scientists to first conceive of the Descartes Labs IP as the basis of a kind of search engine) — admitted that he never thought the company would stay headquartered there beyond an initial six-month phase of growth.

However, it turned out that the trends around more distributed workforces (and cloud computing to enable that), engineers looking for employment alternatives to living in pricey San Francisco, plus the heated competition for talent you get in the Valley all came together in a perfect storm that helped Descartes Labs establish itself and thrive on its home turf.

Descartes Labs — named after the seminal philosopher/mathematician Rene Descartes — describes itself as a “data refinery”. By this, it means it ingests a lot of imagery and unstructured data related to the earth that is picked up primarily by satellites but also other sensors (Johnson notes that its sources include data from publicly available satellites; data from NASA and the European Space Agency, and data from the companies themselves); applies AI-based techniques including computer vision analysis and machine learning to make sense of the sometimes-grainy imagery; and distills and orders it to create insights into what is going on down below, and how that is likely to evolve.

Image: Descartes Labs methane detection analysis

This includes not just what is happening on the surface of the earth, but also in the air above it: Descartes Labs has worked on projects to detect levels of methane gas in oil fields, track the spread of wildfires, model how crops might grow in a particular area, and assess the impact of weather patterns on it all.

It has produced work for a range of clients that have included governments (the methane detection, pictured above, was commissioned as part of New Mexico’s effort to reduce greenhouse gas emissions), energy giants and industrial agribusiness, and traders.

“The idea is to help them take advantage of all the new data going online,” Johnson said, noting that this can help, for example, bankers forecast how much a commodity will trade for, or the effect of a change in soil composition on a crop.

The fact that Descartes Labs’ work has connected it with the energy industry gives an interesting twist to the use of the phrase “data refinery”. But in case you were wondering, Johnson said that the company goes through a process of vetting potential customers to determine if the data Descartes Labs provides to them is for a positive end, or not.

“We have a deep belief that we can help them become more efficient,” he said. “Those looking at earth data are doing so because they care about the planet and are working to try to become more sustainable.”

Johnson also said (in answer to my question about it) that so far, there haven’t been any instances where the startup has been prohibited from working with any customers or countries, but you could imagine how — in this day of data being ‘the new oil’ and the fulcrum of power — that could potentially be an issue. (Related to this: Orbital Insight counts In-Q-Tel, the CIA’s venture arm, as one of its backers.)

Looking ahead, the company is building what it describes as a “digital twin” of the earth, the idea being that in doing so it can better model the imagery that it ingests and link up data from different regions more seamlessly (since, after all, a climatic event in one part of the world inevitably impacts another). Notably, “digital twinning” is a common concept that we see applied in other AI-based enterprises to better predict activity: this is the approach that, for example, Forward Networks takes when building models of an enterprise’s network to determine how apps will behave and identify the reasons behind an outage.

In addition to the funding round, Descartes Labs named Phil Fraher its new CFO and announced Veery Maxwell, Director for Energy Innovation, and Patrick Cairns, co-founder of UGVP, as new board observers.

Writing Malware Configuration Extractors for ISFB/Ursnif

The Zero2Hero malware course continues with Daniel Bunce demonstrating how to automate IOC extraction using Python scripts, with a recent ISFB/Ursnif sample as the example.

image of writing ursnif extractor

For many AV companies, Threat Intelligence companies, and Blue teams in general, automation is key. When analyzing a widespread sample for the first time, such as in the case of ISFB (also known as Ursnif), it is crucial that some form of automated IOC extraction is set up. This could range from basic information such as a Bot ID and hardcoded Command and Control (C2) server addresses, all the way up to what functions are enabled or disabled. This information can be transmitted to the malware in the initial stages of infection; however, it is most commonly stored inside the binary, typically as an encrypted or compressed blob of data – this is what is referred to as the “configuration” of the malware. Immediate and successful extraction of these IOCs can assist in blacklisting, taking down the malicious C2 servers, and even emulation in order to query and download information from the live C2 servers such as webinjects or additional modules. So, with that covered, let’s take a look at a recent sample of ISFB v2 and see how we can write a fairly simple Python script to extract the configuration.

SHA 256: 2aba7530b4cfdad5bd36d94ff32a0bd93dbf8b9599e0fb00701d58a29922c75f

image of config_parser

I won’t be analyzing the config_parser() function in-depth, so this post assumes knowledge of how the configuration is stored in the binary and any encryption/compression algorithms used to secure it.

The config_parser() function is called twice in this sample, so we can guess that there are at least 2 different blobs of data containing configuration information. We can also determine what each blob of data is, by taking a look at the hash pushed as the third argument. Looking at the image below, the variable cookie (generated during the BSS decryption function – 0x78646F86) is XOR’ed with a value before each call. The first call to config_parser() has the cookie XOR’ed with the value 0x3711A121, resulting in the value 0x4F75CEA7. This value corresponds to the name CRC_CLIENT32. This indicates that the first blob of data contains an executable, which is likely the next stage. Even though no URLs or RSA keys are stored in these blobs, the config_parser() remains the same, so once we have a working extraction script, we can extract the next stage file, and then pass that into the script to extract any additional blocks of data.

image of config_parser, main

Stepping into the parsing function, we can see the cookie being XOR’ed with 0x25CC, resulting in the value 0x5DA84A4A. The last 2 bytes of this value are then moved into ECX, overwriting the first 2 bytes with zeros. This leaves the hex value 0x4A4A, which as a string is “JJ” – this is the header string of the configuration.
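As a quick sanity check, the masking step can be reproduced in a couple of lines of Python. This is only an illustration of the arithmetic described above; the constant is the value observed in this sample, and only the low word matters.

import struct

# Value produced when the cookie is XOR'ed inside the parsing function (per the
# analysis above); keeping only the low word, as the malware does when it moves
# it into ECX, yields the two-byte header string.
value = 0x5DA84A4A
magic = struct.pack('<H', value & 0xFFFF)
print(magic)  # b'JJ' -- the marker each configuration structure starts with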

image of config_parser 3

The sample then begins to traverse the executable header to locate the configuration. Here, the pointer to the address of the configuration is stored inside [esi+edx+0x40]. This address points just underneath the section table, where we can see a block of data about 60 bytes in size. 

image of hex dump

This block of data comprises 3 different sets of data, based on the fact that the JJ header appears three times. Each set is 20 bytes in length and follows the structure seen below.

image of config_lookup

Now that we know where the lookup table is located and understand its structure, we can begin to write our extraction script. At the moment, we need three functions: one to locate the JJ structures, one to parse the structures, and one to extract the blobs of data.

image of extract blob

Writing the locate_structs() Function

This function is responsible for locating the offset of the JJ structures inside the binary, and then calculating how many structures are actually stored, returning the position of the start of each structure. In order to do all this, we need to import two modules: pefile and re. Using pefile, we can traverse the executable header until we reach the offset of the JJ structures. From there, we can use re to iterate through the blob of structures to determine how many structures are inside the blob, and the starting offset of each. This is then returned to the main() function.
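Since the original function is only shown as an image, here is a minimal re-implementation sketch. It assumes the ‘JJ’ structures sit in the slack space directly after the PE section table, as seen in this sample; the 0x100-byte search window is an arbitrary choice for illustration.

import re
import pefile

def locate_structs(filename):
    """Return the file offsets of the 'JJ' structures stored after the section table."""
    pe = pefile.PE(filename)
    with open(filename, 'rb') as f:
        data = f.read()

    # End of the section table: PE header offset + signature (4 bytes) +
    # file header (20 bytes) + optional header + 40 bytes per section header.
    section_table_end = (pe.DOS_HEADER.e_lfanew + 4 + 20 +
                         pe.FILE_HEADER.SizeOfOptionalHeader +
                         pe.FILE_HEADER.NumberOfSections * 40)

    # Search the slack space underneath the section table for the 'JJ' marker
    # (0x100 bytes is more than enough for the ~60-byte blob in this sample).
    window = data[section_table_end:section_table_end + 0x100]
    return [section_table_end + m.start() for m in re.finditer(b'JJ', window)]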

image of locate_structs function

With that complete, we now need to move onto parsing these structures.

Writing the parse_structs() Function

This function is fairly short and simple. First, it will split the structure blob into the individual structures, which are stored in a list. Next, the three important values (XOR key, blob offset, blob size) are stored inside a final list, which is then returned to the main() function.
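A sketch of how this might look is below. The field offsets (XOR key at +4, blob offset at +12, blob size at +16 within each 20-byte structure) are placeholders chosen for illustration; substitute the layout you recover from the lookup table during your own analysis. The raw little-endian bytes are kept as-is here and converted later, mirroring the description of extract_blob() below.

STRUCT_SIZE = 20  # each 'JJ' structure in this sample is 20 bytes long

def parse_structs(filename, offsets):
    """Split the lookup table into individual structures and keep the fields we need."""
    with open(filename, 'rb') as f:
        data = f.read()

    parsed = []
    for off in offsets:
        s = data[off:off + STRUCT_SIZE]
        # Hypothetical field layout, for illustration only:
        xor_key   = s[4:8]     # 4-byte XOR key
        blob_off  = s[12:16]   # 4-byte offset of the compressed blob
        blob_size = s[16:20]   # 4-byte size of the compressed blob
        parsed.append((xor_key, blob_off, blob_size))
    return parsed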

Now that we are able to parse the structures, let’s finally look at extracting the blobs from the executable.

image of parse_structs

Writing the extract_blob() Function

Once again, this function is fairly short and simple. First, we read the data from the designated file, and then enter a loop that takes the offset and blob size from each parsed structure, converts the endianness of those values, and uses them to locate each blob, which is stored in a list that is returned to main().
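Here is a matching sketch, continuing from the parse_structs() example above; it treats the stored offset as a raw file offset, which holds for this sample but may need adjusting for others.

def extract_blob(filename, parsed):
    """Carve each configuration blob out of the file using the parsed offsets and sizes."""
    with open(filename, 'rb') as f:
        data = f.read()

    blobs = []
    for xor_key, blob_off, blob_size in parsed:
        # The values are stored little-endian in the structure, so convert them
        # to integers before using them to slice the file data.
        off = int.from_bytes(blob_off, 'little')
        size = int.from_bytes(blob_size, 'little')
        blobs.append(data[off:off + size])
    return blobs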

image of extract blob function

Now that we have successfully extracted the blobs, we need to decompress them. ISFB/Ursnif uses APLib to compress its configurations, so we need to use APLib to decompress them. Rather than reinvent the wheel and write our own APLib decompression routine, we can utilize existing open-source scripts, such as this script by Sandor Nemes. To decompress the blobs, we can simply pass them to the decompress function like so: decompress(blob).
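Tying the pieces together, a main() routine might look like the sketch below. It assumes Sandor Nemes’ aplib.py sits next to our script and exposes a decompress() function as used above (check the version you download, as the exact interface may differ); write_to_file() is described next.

import sys
from aplib import decompress  # Sandor Nemes' pure-Python APLib implementation

def main():
    filename = sys.argv[1]
    offsets = locate_structs(filename)         # find the 'JJ' structures
    parsed = parse_structs(filename, offsets)  # pull out XOR key, offset and size
    blobs = extract_blob(filename, parsed)     # carve the compressed blobs
    for blob in blobs:
        write_to_file(decompress(blob))        # decompress and dump each one

if __name__ == '__main__':
    main()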

image of main python function

We can also implement a write_to_file() function that uses the UUID module, allowing us to generate random names for each blob we extract and dump.
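A possible implementation is below; uuid4() gives each dump a random, effectively unique name, and the .bin extension is just a convenience.

import uuid

def write_to_file(blob):
    """Dump a decompressed blob to disk under a randomly generated file name."""
    filename = '{}.bin'.format(uuid.uuid4())
    with open(filename, 'wb') as f:
        f.write(blob)
    return filename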

image of write to file function

Wrapping Up…

And that is pretty much all! There are a few things to add, such as a function that will XOR the first 4 bytes of the extracted executables with the XOR key in the structure in order to get the correct MZ header, or you can just fix that manually. Once you have extracted the executables, you can simply rerun the script against them to extract even more IOCs for further analysis. Obviously this only works on this particular version of ISFB, but that doesn’t stop you from going ahead and repurposing it for other variants, such as version 3, which is more effective at storing the config.



Bill McDermott steps down as SAP’s CEO

SAP today announced that Bill McDermott, its CEO for the last nine years, is stepping down immediately. The company says he decided not to renew his contract. SAP Executive Board members Jennifer Morgan and Christian Klein have been appointed co-CEOs.

McDermott, who started his business career as a deli owner in Amityville, Long Island, and recently spoke at our TechCrunch Sessions: Enterprise event, joined SAP in 2002 as the head of SAP North America. He became co-CEO, together with Jim Hagemann Snabe, in 2010 and the company’s sole CEO in 2014, making him the first American to take on this role at the German enterprise giant. Under his guidance, SAP’s annual revenue and stock price continued to increase. He’ll remain with the company in an advisory role for the next two months.

It’s unclear why McDermott decided to step down at this point. Activist investor Elliott Management recently disclosed a $1.35 billion stake in SAP, but when asked for a comment about today’s news, an Elliott spokesperson told us that it didn’t have any “immediate comment.”

It’s also worth noting that the company saw a number of defections among its executive ranks in recent months, with both SAP SuccessFactors COO Brigette McInnis-Day and Robert Enslin, the president of its cloud business and a board member, leaving the company for Google Cloud.

Keeping an Enterprise Behemoth on Course with Bill McDermott

“SAP would not be what it is today without Bill McDermott,” said Plattner in today’s announcement. “Bill made invaluable contributions to this company and he was a main driver of SAP’s transition to the cloud, which will fuel our growth for many years to come. We thank him for everything he has done for SAP. We also congratulate Jennifer and Christian for this opportunity to build on the strong foundation we have for the future of SAP. Bill and I made the decision over a year ago to expand Jennifer and Christian’s roles as part of a long-term process to develop them as our next generation of leaders. We are confident in their vision and capabilities as we take SAP to its next phase of growth and innovation.”

McDermott’s biggest bet in recent years came with the acquisition of Qualtrics for $8 billion. At our event last month, McDermott compared this acquisition to Apple’s acquisition of NeXT and Facebook’s acquisition of Instagram. “Qualtrics is to SAP what those M&A moves were to those wonderful companies,” he said. Under his leadership, SAP also acquired corporate expense and travel management company Concur for $8.3 billion and SuccessFactors for $3.4 billion.

“Now is the moment for everyone to begin an exciting new chapter, and I am confident that Jennifer and Christian will do an outstanding job,” McDermott said in today’s announcement. “I look forward to supporting them as they finish 2019 and lay the foundation for 2020 and beyond. To every customer, partner, shareholder and colleague who invested their trust in SAP, I can only relay my heartfelt gratitude and enduring respect.”


macOS Catalina | The Big Upgrade, Don’t Get Caught Out!

Tuesday saw Apple drop the first public release of macOS Catalina, a move which has caught out a number of developers, including some offering security solutions, as well as organizations and ordinary macOS users. While SentinelOne is already Catalina-compatible (more details below), Apple’s unannounced release date has left some scrambling to catch up as macOS 10.15 introduces some major changes under the hood, undoubtedly the biggest we’ve seen in some time. Anyone considering a Catalina upgrade should be aware of how these changes could affect current enterprise workflows, whether further updates for dependency code are required and are available, and whether the new version of macOS is going to necessitate a shift to new software or working practices. In this post, we cover the major changes and challenges that Catalina brings to enterprise macOS fleets.

feature image catalina

Does SentinelOne Work With macOS Catalina?

First things first: Yes, it does. SentinelOne macOS Agent version 3.2.1.2800 was rolled out on the same day that Apple released macOS 10.15 Catalina. This Agent is supported with Management Consoles Grand Canyon & Houston. Ideally, you should update your SentinelOne Agent version before updating to Catalina to ensure the smoothest upgrade flow.

Developers Play Catalina Catch-up

image of catalina upgrade

Contrary to popular (mis)belief, kexts or kernel extensions are still alive and well in Catalina, and the move to a new “kextless” future with Apple’s SystemExtensions framework remains optional, at least for the time being. However, that doesn’t mean your current array of kernel extensions from other developers is necessarily going to be unproblematic during an upgrade.

New rules for kexts mean developers at a minimum have to notarize them, and users will have to restart the Mac after approving them. On top of that, developers – particularly those distributing security software – will need to update their kexts and solutions to be compatible with Catalina’s new TCC and user privacy rules, changes in partition architecture and discontinued support for 32-bit apps (see below), among other things.

Upgrading a Mac to 10.15 with incompatible kexts already installed could lead to one or more kernel panics.

image of kernel panic

The safest bet is to contact vendors to check on their Catalina support before you pull the trigger on the Catalina upgrade. If for some reason that’s not possible or you have legacy kexts installed which are out of support, the best advice is to remove those before you upgrade a test machine, then immediately test for compatibility as part of your post-install routine.

Bye Bye, 32-Bit Applications

Apple called time on 32-bit applications several releases ago, offering increasingly urgent warnings of their impending doom through High Sierra and Mojave. In macOS Mojave, these would still run after users dismissed the one-time warning alert, but Catalina finally drops the axe on 32-bit applications.

Before upgrading, check what legacy applications you have installed. From the command line, you can output a report with:

system_profiler SPLegacySoftwareDataType

GUI users can take a trip to Apple > About This Mac and click the System Report… button.

image of system information

Scroll down the sidebar to “Legacy Apps” and click on it. Here you’ll see a list of all the apps that won’t run on Catalina. macOS 10.15 itself will also list any legacy apps during the upgrade process, but it’s wise to be prepared before you get that far.
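If you need to run this check across a fleet rather than on one machine, the same report can be scripted. The sketch below simply wraps the system_profiler command shown above and prints whatever it reports; the output handling is deliberately minimal.

import subprocess

def list_legacy_software():
    """Run the legacy (32-bit) software report and return its raw text output."""
    result = subprocess.run(
        ['system_profiler', 'SPLegacySoftwareDataType'],
        capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == '__main__':
    report = list_legacy_software().strip()
    if report:
        print('Legacy (32-bit) software found:')
        print(report)
    else:
        print('No legacy software reported.')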

image of legacy apps

VPP & Apple School/Business Manager Support

Catalina continues to allow various enterprise upgrade paths through its Mobile Device Management (MDM) framework, Device Enrollment Program (DEP) and Apple Configurator. For organizations enrolled in Apple’s Volume Purchase Program or with Apple Business Manager or Apple School Manager licensing, Catalina is supported right out of the gate, saving you the bother of having to manually download, package and then install multiple instances of 10.15.

New in Catalina are Managed Apple IDs for Business, which attempt to separate the user’s work identity from their personal identity, allowing them to use separate accounts for things like iCloud Notes, iCloud Drive, Mail, Contacts and other services.

There is a plus here for user privacy, but admins used to having total control over managed endpoints should be aware that a device with an enrollment profile and managed Apple ID means the business loses power over things like remote wipe and access to certain user data. Effectively, the device is separated into “personal” and “managed” (i.e., business use), with a separate APFS volume for the managed accounts, apps and data.

Privacy Controls Reach New Heights

That’s not the only thing to be aware of with regard to user data. The biggest change that end users are going to notice as they get to work on a newly upgraded macOS 10.15 Catalina install is Apple’s extended privacy control policies, which will manifest themselves in a number of ways.

In the earlier macOS 10.14 Mojave, there are 12 items listed in the Privacy tab of the Security & Privacy pane in System Preferences. Catalina adds five more: Speech Recognition, Input Monitoring, Files and Folders, Screen Recording, and Developer Tools.

image of catalina sys prefs

Here’s what the first three control:

Speech Recognition: Apps that have requested access to speech recognition.
Input Monitoring: Apps that have requested access to monitor input from your keyboard.
Screen Recording: Apps that have requested access to record the contents of the screen.

Importantly, the three items above can only be allowed at the specific time when applications try to touch any of these services. Although applications can be pre-denied by MDM provisioning and configuration profiles, they cannot be pre-allowed. That has important implications for your workflows since any software in the enterprise that requires these permissions must obtain user approval in the UI in order to function correctly, or indeed at all. Be aware that Catalina’s implementation of Transparency, Consent and Control is not particularly forthcoming with feedback. Applications may simply silently fail when permission is denied.

The most obvious, but certainly not the only, place where privacy controls are going to cause issues is with video meeting/conferencing software like Zoom, Skype and similar. Prompts from the OS that suggest applications must be restarted after permission has been granted for certain services like Screen Recording have raised fears that clicking ‘Allow’ during a meeting might kick users out of the conference while the app re-launches. Conversely, users who inadvertently click ‘Don’t Allow’ may wonder why later attempts to use the software continue to fail.

What all this means is that with macOS Catalina, there is a greater onus on sysadmins to engage in user education to preempt these kinds of issues before they arise. Thoroughly test how the apps you rely on are going to behave and what workflow users need to follow to ensure minimal interruption to their daily activities.

The remaining two additional items are:

Files and Folders: Allow the apps in the list to access files and folders.
Developer Tools: Allow the apps in the list to run software locally that does not meet the system's security policy.

These last two can both be pre-approved. The first grants access to user files in places like Desktop, Downloads, and Documents folders. The second allows developers to run their own software that isn’t yet notarized, signed or ready to be distributed (and thus subject to macOS’s full system policy).

And New Lows…

Here’s a good example of what all this might mean in practice. Let’s take as the destination a user’s machine on which File Sharing, Remote Management (which allows Screen Sharing) and Remote Login (for SSH) have been enabled.

Suppose, as admin, I choose to both Screen Share and File Share from my source machine into this user’s computer. These two services require only the same credentials – user name and password for a registered user on the destination device – entered a single time per session to enable both simultaneously, yet they have confusingly different restrictions.

Trying to navigate to the destination’s Desktop folder via File Sharing in the Finder from the source indicates that the user’s Desktop folder is empty rather than inaccessible.

image of file sharing in catalina

If I persist in trying to access any of these protected folders, the misleading Finder display is eventually replaced with a permission denied alert.

image of file sharing permission denied

While Screen Sharing in the same session, however, I can see the Desktop folder’s contents without a problem; in fact, in this case it contains 17 items. Indeed, via Screen Sharing, I can move these items from the Desktop folder to any other folder that is accessible through File Sharing, such as the ~/Public folder. That, in a roundabout and inconvenient way, means I can get past the permission denial thrown above. Further, because I can enable other services in the Privacy pane from my Screen Sharing session, such as Full Disk Access, I can also use those to grant myself SSH access, with which I am similarly also able to work around the File Share permission denied problem.

This kind of inconsistency and complexity is unfortunate. Aside from making legitimate users jump through these hoops for no security pay-off, it raises this question: what does a legitimate user need to do to make File Sharing work properly? It seems we should go to the Files and Folders pane in System Preferences and add the required process. But what process needs to be added? There’s simply no help here for those trying to figure out how to manage Apple’s user privacy controls. As it turns out, there also appears to be a bug in the UI that prevents anything at all being added to Files and Folders, so at the moment we can’t answer that question for you either.

Catalina’s Vista of Alerts: Cancel or Allow?

This expansion of user privacy controls has one very significant and obvious consequence for everyone using macOS 10.15 Catalina, graphically portrayed in this tweet by Tyler Hall.

image of twitter user alert fatigue

The spectacle of numerous alerts has made some liken Apple’s investment in user privacy through consent to Microsoft’s much-maligned Windows Vista release, which had a similarly poor reputation for irritating users with an array of constant popups and dialogs, many of which seemed quite unnecessary.

Yes, your macOS users are going to be hit by a plethora of authorization requests, alerts and notifications. While Tyler Hall’s image was undoubtedly designed to illustrate the effect in dramatic fashion, there’s no doubt that Catalina’s insistence on popping alerts is going to cause a certain amount of irritation among many users after they upgrade, and who then try getting down to some work only to be interrupted multiple times. However, if the trade-off for a bit of disruption to workflows is improved security, then that’s surely not such a bad thing?

The question is whether security is improved in this way or not. Experience has taught malware authors that users are easily manipulated, a well-recognized phenomenon that led to the coining of the phrase “social engineering” and the prevalence of phishing and spearphishing attacks as the key to business compromise.

On the one hand, some will feel that these kinds of alerts and notifications help educate users about what applications are doing – or attempting to do – behind the scenes, and user education is always a net positive in terms of security.

On the other hand, the reality is that most users are simply trying to use a device to get work done. Outside of admins, IT and security folk, the overwhelming majority of users have no interest in how devices work or what applications are doing, as much as we ‘tech people’ would like it to be otherwise. What users want is to be productive, and they expect technology and policy to ensure that they are productive in a safe environment rather than harangued by lots of operating system noise.

image of complex alert

The alert shown above illustrates the point. How informative would that really be to most users, who are unlikely to have even heard of System Events.app or understand the consequences adumbrated in the message text?

Critically, consent dialogs rely on the user making an immediate decision about security for which they are not sufficiently informed, at a time when it’s not convenient, and by an “actor” – the application that’s driving the alert and whose developer writes the alert message text – whose interests lie in the user choosing to allow.

As the user has opened the application with the intent to do something productive, their own interests lie in responding quickly and taking the path that will cause least further interruption. In that context, it seems that users are overwhelmingly likely to choose to allow the request regardless of whether that’s the most secure thing to do or not.

The urgency of time, the paucity of information and the combined interests of the user and the developer to get the app up and running conspire to make these kinds of controls a poor choice for a security mechanism. We talk a lot about “defense in depth”, but when a certain layer of that security posture relies on annoying users with numerous alerts, it could be argued that technology is failing the user. Security needs to be handled in a better way that leaves users to get on with their work and lets automated security solutions take care of the slog of deciding what’s malicious and what’s not.

Conclusion

If you are an enterprise invested in a Mac fleet, then upgrading to Catalina is a question of “when” rather than “if”. Given the massive changes presented by Catalina – from dropping support for 32-bit apps and compatibility issues with existing kernel extensions to new restrictions on critical business software like meeting apps and user consent alerts – there’s no doubt that that’s a decision not to be rushed into. Test your workflows, look at your current dependencies and roll out your upgrades with caution.



Suse’s OpenStack Cloud dissipates

Suse, the newly independent open-source company behind the eponymous Linux distribution and an increasingly large set of managed enterprise services, today announced a bit of a new strategy as it looks to stay on top of the changing trends in the enterprise developer space. Over the course of the last few years, Suse put a strong emphasis on the OpenStack platform, an open-source project that essentially allows big enterprises to build something in their own data centers akin to the core services of a public cloud like AWS or Azure. With this new strategy, Suse is transitioning away from OpenStack. It’s ceasing both production of new versions of its OpenStack Cloud and sales of its existing OpenStack product.

“As Suse embarks on the next stage of our growth and evolution as the world’s largest independent open source company, we will grow the business by aligning our strategy to meet the current and future needs of our enterprise customers as they move to increasingly dynamic hybrid and multi-cloud application landscapes and DevOps processes,” the company said in a statement. “We are ideally positioned to execute on this strategy and help our customers embrace the full spectrum of computing environments, from edge to core to cloud.”

What Suse will focus on going forward are its Cloud Application Platform (which is based on the open-source Cloud Foundry platform) and its Kubernetes-based container platform.

Chances are, Suse wouldn’t shut down its OpenStack services if it saw growing sales in this segment. But while the hype around OpenStack died down in recent years, it’s still among the world’s most active open-source projects and runs the production environments of some of the world’s largest companies, including some very large telcos. It took a while for the project to position itself in a space where all of the mindshare went to containers — and especially Kubernetes — for the last few years. At the same time, though, containers are also opening up new opportunities for OpenStack, as you still need some way to manage those containers and the rest of your infrastructure.

The OpenStack Foundation, the umbrella organization that helps guide the project, remains upbeat.

“The market for OpenStack distributions is settling on a core group of highly supported, well-adopted players, just as has happened with Linux and other large-scale, open-source projects,” said OpenStack Foundation COO Mark Collier in a statement. “All companies adjust strategic priorities from time to time, and for those distro providers that continue to focus on providing open-source infrastructure products for containers, VMs and bare metal in private cloud, OpenStack is the market’s leading choice.”

He also notes that analyst firm 451 Research believes there is a combined Kubernetes and OpenStack market of about $11 billion, with $7.7 billion of that focused on OpenStack. “As the overall open-source cloud market continues its march toward eight figures in revenue and beyond — most of it concentrated in OpenStack products and services — it’s clear that the natural consolidation of distros is having no impact on adoption,” Collier argues.

For Suse, though, this marks the end of its OpenStack products. As of now, though, the company remains a top-level Platinum sponsor of the OpenStack Foundation and Suse’s Alan Clark remains on the Foundation’s board. Suse is involved in some of the other projects under the OpenStack brand, so the company will likely remain a sponsor, but it’s probably a fair guess that it won’t continue to do so at the highest level.

Patch Tuesday Lowdown, October 2019 Edition

On Tuesday Microsoft issued software updates to fix almost five dozen security problems in Windows and software designed to run on top of it. By most accounts, it’s a relatively light patch batch this month. Here’s a look at the highlights.

Happily, only about 15 percent of the bugs patched this week earned Microsoft’s most dire “critical” rating. Microsoft labels flaws critical when they could be exploited by miscreants or malware to seize control over a vulnerable system without any help from the user.

Also, Adobe has kindly granted us another month’s respite from patching security holes in its Flash Player browser plugin.

Included in this month’s roundup is something Microsoft actually first started shipping in the third week of September, when it released an emergency update to fix a critical Internet Explorer zero-day flaw (CVE-2019-1367) that was being exploited in the wild.

That out-of-band security update for IE caused printer errors for many Microsoft users whose computers applied the emergency update early on, according to Windows update expert Woody Leonhard. Apparently, the fix available through this month’s roundup addresses those issues.

Security firm Ivanti notes that the patch for the IE zero-day flaw was released prior to today for Windows 10 through cumulative updates, but that an IE rollup for any pre-Windows 10 systems needs to be manually downloaded and installed.

Once again, Microsoft is fixing dangerous bugs in its Remote Desktop Client, the Windows feature that lets a user interact with a remote desktop as if they were sitting in front of the other PC. On the bright side, this critical bug can only be exploited by tricking a user into connecting to a malicious Remote Desktop server — not exactly the most likely attack scenario.

Other notable vulnerabilities addressed this month include a pair of critical security holes in Microsoft Excel versions 2010-2019 for Mac and Windows, as well as Office 365. These flaws would allow an attacker to install malware just by getting a user to open a booby-trapped Office file.

Windows 10 likes to install patches all in one go and reboot your computer on its own schedule. Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update. To get there, press the Windows key on your keyboard and type “windows update” into the box that pops up.

Staying up-to-date on Windows patches is good. Updating only after you’ve backed up your important data and files is even better. A reliable backup means you’re not pulling your hair out if the odd buggy patch causes problems booting the system. So do yourself a favor and back up your files before installing any patches.

As always, if you experience any problems installing any of the patches this month, please feel free to leave a comment about it below; there’s a decent chance other readers have experienced the same and may even chime in here with some helpful tips.

You Thought Ransomware Was Declining? Think Again!

Two years have passed since the outbreak of the Petya and WannaCry ransomware attacks, which had devastating effects across the world. In 2018, there was a slight decline in their frequency and impact (especially towards the end of the year) as cryptojacking briefly became more attractive to unsophisticated cyber criminals, so you might have thought the worst was behind us as far as the ransomware threat was concerned. Unfortunately, it looks like 2019 could quite accurately be labeled The Year of the Ransomware Comeback.

Ransomware Now A Matter of National Security

Ransomware attacks worldwide have more than doubled over the past 12 months, with attacks in the U.S. responsible for more than half of all incidents recorded.

The situation has become so dire that it is now considered a threat to U.S. national security. Anne Neuberger, director of the NSA’s cybersecurity division, said that ransomware, along with advanced cyber threats from nation states such as Russia, China, Iran and North Korea, is now among the top cyber threats. U.S. officials have stated their concerns that there is a high probability that ransomware attacks will interfere with the upcoming 2020 U.S. elections, either through voter database encryption or the disabling of voting machines through any one of an increasing number of ransomware strains.

The Ongoing Evolution of Ransomware

As malware variants become more and more sophisticated, their use is also becoming more diverse. Over the past year, the number of different types of ransomware discovered by security researchers around the world has doubled, and their sophistication and maliciousness have intensified.

The creators of the MegaCortex ransomware have combined a variety of evasive features that prevent legacy defense mechanisms from identifying and blocking this attack. Other types of ransomware increase the psychological pressure on victims in order to secure and hasten payment – such as the Jigsaw ransomware, which not only encrypts user files but also gradually deletes them over time. This means victims need to respond quickly – they only have 24 hours to pay the $150 ransom before the malware begins its slow but sure process of destruction, deleting the victim’s files with no possibility of recovery.

Hackers are also becoming more creative in their infection methods. In addition to traditional infection via phishing and spearphishing methods, the use of Remote Desktop Protocols (RDP) is increasing, leveraging stolen RDP certificates to obtain permissions for the distribution and execution of malicious activities on corporate networks.

Ransomware Authors Are Getting More Creative

Extremely creative hackers also use Managed Security Service Providers (MSSPs) as intrusion channels into organizations – in one case, hackers broke into a provider of such services and used its remotely managed security products to infect the provider’s clients with Sodinokibi ransomware.

Hackers are also testing new targets in addition to Windows-based systems. Ransomware is now a cross-platform threat: thousands of Linux servers have been infected and their files have been encrypted by a new breed of ransomware called Lilu that only attacks Linux-based systems, and there have even been examples of ransomware attacks targeting macOS users in the past.

The watershed moment for the ransomware attacks of 2019 was the attack against the Baltimore city computer systems that occurred in May 2019. The attack left the city offline for weeks, resulting in a costly recovery. The city of Baltimore estimates the cost of the attack at $18.2 million – its Information Technology Office has spent $4.6 million on recovery efforts since the attack and expects to spend another $5.4 million by the end of the year. Another $8.2 million comes from potential lost or delayed revenue, such as money from property taxes, real estate fees and fine collection.

After this attack, there were many (sometimes coordinated) attacks on cities and municipalities. Most notable was a series of attacks against 22 cities and agencies in Texas.

After the municipalities, the education sector has been hit hardest by ransomware. Since the beginning of 2019, there have been about 533 attacks against U.S. public schools – more than the total number of attacks in 2018. Ransomware attacks have delayed the start of the school year and cost educational institutions a small fortune. Some school districts have been paralyzed for months because of such attacks.

Cascading Costs of Ransomware Attacks

In addition to the cost of restoring a damaged reputation, the direct monetary damages caused by ransomware are on the rise. A Long Island school paid $100,000 to release its systems in August, and a New York state school paid $88,000 the same month. The Ryuk ransomware is largely responsible for the massive increase in ransomware payments. Its operators demand an average of $288,000 for the release of systems, compared to the $10,000 price demanded by other criminal gangs. At times, the demands have been outrageous – the city of Riviera Beach, Florida paid a $600,000 ransom in June 2019 to recover files following an attack, and another cybercrime gang demanded $5.3 million from New Bedford, but the city offered to pay “only” $400,000. Fortunately, in that case, city officials were able to recover their data from a backup and escaped without paying anything.

Summary

The current ransomware epidemic is the latest in a series of cyberattacks that have hit organizations and posed a significant challenge to our modern way of life. Unlike some forms of cyber threats such as those conducted by nation states, the motive for ransomware attacks is purely financial. As such, this kind of threat can only be addressed through economic metrics, namely the post-incident cost of an attack (downtime + damage to reputation + insurance premiums + fees and other indirect expenses) versus the cost of investment in a strong preventive security solution.

Fortunately, a modern endpoint detection and response solution will provide an almost hermetic seal against the data-destructive rampage of ransomware and should be the first thing to consider when facing this challenge. It is rare enough in cyber security that the solution is simple, effective and readily available. Any organization that has not suffered from a ransomware attack should take advantage of this fact and deploy a robust endpoint security solution throughout the enterprise and avoid becoming the next sorry victim in a long line of organizational casualties.



Satya Nadella looks to the future with edge computing

Speaking today at the Microsoft Government Leaders Summit in Washington, DC, Microsoft CEO Satya Nadella made the case for edge computing, even while pushing the Azure cloud as what he called “the world’s computer.”

While Amazon, Google and other competitors may have something to say about that, marketing hype aside, many companies are still in the midst of transitioning to the cloud. Nadella says the future of computing could actually be at the edge, where computing is done locally before data is then transferred to the cloud for AI and machine learning purposes. What goes around, comes around.

But as Nadella sees it, this is not going to be about either edge or cloud. It’s going to be the two technologies working in tandem. “Now, all this is being driven by this new tech paradigm that we describe as the intelligent cloud and the intelligent edge,” he said today.

He said that to truly understand the impact the edge is going to have on computing, you have to look at research, which predicts there will be 50 billion connected devices in the world by 2030, a number even he finds astonishing. “I mean this is pretty stunning. We think about a billion Windows machines or a couple of billion smartphones. This is 50 billion [devices], and that’s the scope,” he said.

The key here is that these 50 billion devices, whether you call them edge devices or the Internet of Things, will be generating tons of data. That means you will have to develop entirely new ways of thinking about how all this flows together. “The capacity at the edge, that ubiquity is going to be transformative in how we think about computation in any business process of ours,” he said. As we generate ever-increasing amounts of data, whether we are talking about public sector use cases or any business need, it’s going to be the fuel for artificial intelligence, and he sees the sheer amount of that data driving new AI use cases.

“Of course when you have that rich computational fabric, one of the things that you can do is create this new asset, which is data and AI. There is not going to be a single application, a single experience that you are going to build, that is not going to be driven by AI, and that means you have to really have the ability to reason over large amounts of data to create that AI,” he said.

Nadella would be more than happy to have his audience take care of all that using Microsoft products, whether Azure compute, database, AI tools or edge computers like the Data Box Edge it introduced in 2018. While Nadella is probably right about the future of computing, all of this could apply to any cloud, not just Microsoft.

As computing shifts to the edge, it’s going to have a profound impact on the way we think about technology in general, but it’s probably not going to involve being tied to a single vendor, regardless of how comprehensive their offerings may be.