macOS Catalina | The Big Upgrade, Don’t Get Caught Out!

Tuesday saw Apple drop the first public release of macOS Catalina, a move which has caught out a number of developers, including some offering security solutions, as well as organizations and ordinary macOS users. While SentinelOne is already Catalina-compatible (more details below), Apple’s unannounced release date has left some scrambling to catch up as macOS 10.15 introduces some major changes under the hood, undoubtedly the biggest we’ve seen in some time. Anyone considering a Catalina upgrade should be aware of how these changes could affect current enterprise workflows, whether further updates for dependency code are required and are available, and whether the new version of macOS is going to necessitate a shift to new software or working practices. In this post, we cover the major changes and challenges that Catalina brings to enterprise macOS fleets.

feature image catalina

Does SentinelOne Work With macOS Catalina?

First things first: Yes, it does. SentinelOne macOS Agent version 3.2.1.2800 was rolled out on the same day that Apple released macOS 10.15 Catalina. This Agent is supported with Management Consoles Grand Canyon & Houston. Ideally, you should update your SentinelOne Agent version before updating to Catalina to ensure the smoothest upgrade flow.

Developers Play Catalina Catch-up

image of catalina upgrade

Contrary to popular (mis)belief, kexts or kernel extensions are still alive and well in Catalina, and the move to a new “kextless” future with Apple’s SystemExtensions framework remains optional, at least for the time being. However, that doesn’t mean your current array of kernel extensions from other developers is necessarily going to survive an upgrade without problems.

New rules for kexts mean developers at a minimum have to notarize them, and users will have to restart the Mac after approving them. On top of that, developers – particularly those distributing security software – will need to update their kexts and solutions to be compatible with Catalina’s new TCC and user privacy rules, changes in partition architecture and discontinued support for 32-bit apps (see below), among other things.

Upgrading a Mac to 10.15 with incompatible kexts already installed could lead to one or more kernel panics.

image of kernel panic

The safest bet is to contact vendors to check on their Catalina support before you pull the trigger on the Catalina upgrade. If for some reason that’s not possible or you have legacy kexts installed which are out of support, the best advice is to remove those before you upgrade a test machine, then immediately test for compatibility as part of your post-install routine.
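On a test machine, you can quickly see which third-party kexts are present and dry-run a specific bundle against the system’s validation checks before upgrading. Here is a minimal sketch using the stock kextstat and kextutil tools; the /Library/Extensions path and the ExampleVendor.kext name are placeholders for whatever your vendor actually ships:

# List loaded kernel extensions, filtering out Apple's own.
kextstat | grep -v com.apple

# Dry-run validation of a third-party kext without loading it (-n = don't load, -t = print diagnostics).
sudo kextutil -nt /Library/Extensions/ExampleVendor.kext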

Bye Bye, 32-Bit Applications

Apple called time on 32-bit applications several releases ago, offering increasingly urgent warnings of their impending doom through High Sierra and Mojave. In macOS Mojave these would still run once users dismissed a one-time warning alert, but Catalina finally drops the axe on 32-bit applications.

Before upgrading, check what legacy applications you have installed. From the command line, you can output a report with:

system_profiler SPLegacySoftwareDataType
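If you would rather scan a particular location yourself, the stock file and defaults tools can do the job. The following is a minimal sketch that flags apps in /Applications containing only 32-bit (i386) code; adjust the path for other install locations:

# Flag application bundles in /Applications whose main executable is 32-bit only.
for app in /Applications/*.app; do
  exe=$(defaults read "$app/Contents/Info" CFBundleExecutable 2>/dev/null)
  bin="$app/Contents/MacOS/$exe"
  [ -f "$bin" ] || continue
  arches=$(file "$bin")
  if echo "$arches" | grep -q i386 && ! echo "$arches" | grep -q x86_64; then
    echo "32-bit only: $app"
  fi
done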

For GUI users, you can take a trip to Apple > About This Mac and click the System Report… button.

image of system information

Scroll down the sidebar to “Legacy Apps” and click on it. Here you’ll see a list of all the apps that won’t run on Catalina. macOS 10.15 itself will also list any legacy apps during the upgrade process, but it’s wise to be prepared before you get that far.

image of legacy apps

VPP & Apple School/Business Manager Support

Catalina continues to allow various enterprise upgrade paths through its Mobile Device Management (MDM) framework, Device Enrollment Program (DEP) and Apple Configurator. For organizations enrolled in Apple’s Volume Purchase Program or with Apple Business Manager or Apple School Manager licensing, Catalina is supported right out of the gate, saving you the bother of having to manually download, package and then install multiple instances of 10.15.
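For fleets that script their own upgrades rather than driving them entirely through MDM, the full installer’s bundled startosinstall binary can run an unattended, in-place upgrade. A minimal sketch, assuming the “Install macOS Catalina” app has already been downloaded to /Applications on the target machine:

# Kick off an unattended upgrade to Catalina from the full installer already on disk.
sudo "/Applications/Install macOS Catalina.app/Contents/Resources/startosinstall" --agreetolicense --nointeraction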

New in Catalina are Managed Apple IDs for Business, which attempt to separate the user’s work identity from their personal identity, allowing them to use separate accounts for things like iCloud Notes, iCloud Drive, Mail, Contacts and other services.

There is a plus here for user privacy, but admins used to having total control over managed endpoints should be aware that a device with an enrollment profile and managed Apple ID means the business loses power over things like remote wipe and access to certain user data. Effectively, the device is separated into “personal” and “managed” (i.e., business use), with a separate APFS volume for the managed accounts, apps and data.

Privacy Controls Reach New Heights

That’s not the only thing to be aware of with regard to user data. The biggest change that end users are going to notice as they get to work on a newly upgraded macOS 10.15 Catalina install is Apple’s extended privacy control policies, which will manifest themselves in a number of ways.

In the previous release, macOS 10.14 Mojave, there are 12 items listed in the Privacy tab of the Security & Privacy pane in System Preferences. Catalina adds five more: Speech Recognition, Input Monitoring, Files and Folders, Screen Recording, and Developer Tools.

image of catalina sys prefs

Here’s what the first three control:

Speech Recognition: Apps that have requested access to speech recognition.
Input Monitoring: Apps that have requested access to monitor input from your keyboard.
Screen Recording: Apps that have requested access to record the contents of the screen.

Importantly, the three items above can only be allowed at the specific time when applications try to touch any of these services. Although applications can be pre-denied by MDM provisioning and configuration profiles, they cannot be pre-allowed. That has important implications for your workflows since any software in the enterprise that requires these permissions must obtain user approval in the UI in order to function correctly, or indeed at all. Be aware that Catalina’s implementation of Transparency, Consent and Control is not particularly forthcoming with feedback. Applications may simply silently fail when permission is denied.
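There is no supported way to pre-allow these three services, but on a test machine you can at least audit what users have already approved or denied, and reset a decision so the consent prompt fires again. Here is a minimal sketch, assuming Terminal itself has been granted Full Disk Access (otherwise TCC will block the read), using the access table layout found in Catalina’s TCC database, and with a hypothetical bundle identifier:

# Show which apps have been allowed (1) or denied (0) each privacy-protected service for the current user.
sqlite3 ~/Library/Application\ Support/com.apple.TCC/TCC.db "SELECT service, client, allowed FROM access ORDER BY service;"

# Clear a single decision for a single app so the consent dialog is shown again on next use.
tccutil reset ScreenCapture com.example.videoconf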

The most obvious, but certainly not the only, place where privacy controls are going to cause issues is with video meeting/conferencing software like Zoom, Skype and similar. Prompts from the OS that suggest applications must be restarted after permission has been granted for certain services like Screen Recording have raised fears that clicking ‘Allow’ during a meeting might kick users out of the conference while the app re-launches. Conversely, users who inadvertently click ‘Don’t Allow’ may wonder why later attempts to use the software continue to fail.

What all this means is that with macOS Catalina, there is a greater onus on sysadmins to engage in user education to preempt these kinds of issues before they arise. Thoroughly test how the apps you rely on are going to behave and what workflow users need to follow to ensure minimal interruption to their daily activities.

The remaining two additional items are:

Files and Folders: Allow the apps in the list to access files and folders.
Developer Tools: Allow the apps in the list to run software locally that does not meet the system's security policy.

These last two can both be pre-approved. The first grants access to user files in places like Desktop, Downloads, and Documents folders. The second allows developers to run their own software that isn’t yet notarized, signed or ready to be distributed (and thus subject to macOS’s full system policy).
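Of the two, Developer Tools can also be driven from the command line via spctl. The following is a minimal sketch of our understanding of the Catalina behaviour; try it on a test machine first:

# Add Terminal to the Developer Tools list so it can run software that hasn't yet passed the system's notarization checks.
sudo spctl developer-mode enable-terminal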

And New Lows…

Here’s a good example of what all this might mean in practice. Let’s take as the destination a user’s machine on which File Sharing, Remote Management (which allows Screen Sharing) and Remote Login (for SSH) have been enabled.
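If you want to reproduce that destination setup on a lab machine, two of the three services can be switched on from the command line (File Sharing itself is easiest to toggle in the Sharing pane of System Preferences). The kickstart path below is the standard location of the Remote Desktop agent, and the options shown are deliberately permissive, so keep this to test machines:

# Enable Remote Login (SSH).
sudo systemsetup -setremotelogin on

# Enable Remote Management (which also provides Screen Sharing) for all users with full privileges.
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart -activate -configure -allowAccessFor -allUsers -privs -all -restart -agent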

Suppose, as admin, I choose to both Screen Share and File Share from my source machine into this user’s computer. Both services require the same credentials – the user name and password of a registered user on the destination device – entered only once per session to enable them simultaneously, yet they come with confusingly different restrictions.

Trying to navigate to the destination’s Desktop folder via File Sharing in the Finder from the source indicates that the user’s Desktop folder is empty rather than inaccessible.

image of file sharing in catalina

If I persist in trying to access any of these protected folders, the misleading Finder display is eventually replaced with a permission denied alert.

image of file sharing permission denied

While Screen Sharing in the same session, however, I can see the Desktop folder’s contents without a problem; in fact, in this case it contains 17 items. Indeed, via Screen Sharing, I can move these items from the Desktop folder to any other folder that is accessible through File Sharing, such as the ~/Public folder. That, in a roundabout and inconvenient way, means I can get past the permission denial thrown above. Further, because I can enable other services in the Privacy pane from my Screen Sharing session, such as Full Disk Access, I can also use those to grant myself SSH access, with which I am similarly able to work around the File Share permission denied problem.

This kind of inconsistency and complexity is unfortunate. Aside from making legitimate users jump through these hoops for no security pay-off, it raises this question: what does a legitimate user need to do to make File Sharing work properly? It seems we should go to the Files and Folders pane in System Preferences and add the required process. But what process needs to be added? There’s simply no help here for those trying to figure out how to manage Apple’s user privacy controls. As it turns out, there also appears to be a bug in the UI that prevents anything at all being added to Files and Folders, so at the moment we can’t answer that question for you either.

Catalina’s Vista of Alerts: Cancel or Allow?

This expansion of user privacy controls has one very significant and obvious consequence for everyone using macOS 10.15 Catalina, graphically portrayed in this tweet by Tyler Hall.

image of twitter user alert fatigue

The spectacle of numerous alerts has made some liken Apple’s investment in user privacy through consent to Microsoft’s much-maligned Windows Vista release, which had a similarly poor reputation for irritating users with an array of constant popups and dialogs, many of which seemed quite unnecessary.

Yes, your macOS users are going to be hit by a plethora of authorization requests, alerts and notifications. While Tyler Hall’s image was undoubtedly designed to illustrate the effect in dramatic fashion, there’s no doubt that Catalina’s insistence on popping alerts is going to cause a certain amount of irritation among many users who upgrade and then try to get down to some work, only to be interrupted multiple times. However, if the trade-off for a bit of disruption to workflows is improved security, then that’s surely not such a bad thing?

The question is whether security is improved in this way or not. Experience has taught malware authors that users are easily manipulated, a well-recognized phenomenon that led to the coining of the phrase “social engineering” and the prevalence of phishing and spearphishing attacks as the key to business compromise.

On the one hand, some will feel that these kinds of alerts and notifications help educate users about what applications are doing – or attempting to do – behind the scenes, and user education is always a net positive in terms of security.

On the other hand, the reality is that most users are simply trying to use a device to get work done. Outside of admins, IT and security folk, the overwhelming majority of users have no interest in how devices work or what applications are doing, as much as we ‘tech people’ would like it to be otherwise. What users want is to be productive, and they expect technology and policy to ensure that they are productive in a safe environment rather than harangued by lots of operating system noise.

image of complex alert

The alert shown above illustrates the point. How informative would that really be to most users, who are unlikely to have even heard of System Events.app or understand the consequences adumbrated in the message text?

Critically, consent dialogs rely on the user making an immediate decision about security for which they are not sufficiently informed, at a time when it’s not convenient, and by an “actor” – the application that’s driving the alert and whose developer writes the alert message text – whose interests lie in the user choosing to allow.

As the user has opened the application with the intent to do something productive, their own interests lie in responding quickly and taking the path that will cause least further interruption. In that context, it seems that users are overwhelmingly likely to choose to allow the request regardless of whether that’s the most secure thing to do or not.

The urgency of time, the paucity of information and the combined interests of the user and the developer to get the app up and running conspire to make these kinds of controls a poor choice for a security mechanism. We talk a lot about “defense in depth”, but when a certain layer of that security posture relies on annoying users with numerous alerts, it could be argued that technology is failing the user. Security needs to be handled in a better way that leaves users to get on with their work and lets automated security solutions take care of the slog of deciding what’s malicious and what’s not.

Conclusion

If you are an enterprise invested in a Mac fleet, then upgrading to Catalina is a question of “when” rather than “if”. Given the massive changes presented by Catalina – from dropping support for 32-bit apps and compatibility issues with existing kernel extensions to new restrictions on critical business software like meeting apps and user consent alerts – there’s no doubt that that’s a decision not to be rushed into. Test your workflows, look at your current dependencies and roll out your upgrades with caution.



Read more about Cyber Security

Suse’s OpenStack Cloud dissipates

Suse, the newly independent open-source company behind the eponymous Linux distribution and an increasingly large set of managed enterprise services, today announced a bit of a new strategy as it looks to stay on top of the changing trends in the enterprise developer space. Over the course of the last few years, Suse put a strong emphasis on the OpenStack platform, an open-source project that essentially allows big enterprises to build something in their own data centers akin to the core services of a public cloud like AWS or Azure. With this new strategy, Suse is transitioning away from OpenStack. It’s ceasing both production of new versions of its OpenStack Cloud and sales of its existing OpenStack product.

“As Suse embarks on the next stage of our growth and evolution as the world’s largest independent open source company, we will grow the business by aligning our strategy to meet the current and future needs of our enterprise customers as they move to increasingly dynamic hybrid and multi-cloud application landscapes and DevOps processes,” the company said in a statement. “We are ideally positioned to execute on this strategy and help our customers embrace the full spectrum of computing environments, from edge to core to cloud.”

What Suse will focus on going forward are its Cloud Application Platform (which is based on the open-source Cloud Foundry platform) and Kubernetes-based container platform.

Chances are, Suse wouldn’t shut down its OpenStack services if it saw growing sales in this segment. But while the hype around OpenStack died down in recent years, it’s still among the world’s most active open-source projects and runs the production environments of some of the world’s largest companies, including some very large telcos. It has also taken the project a while to position itself in a space where, for the last few years, all of the mindshare has gone to containers — and especially Kubernetes. At the same time, though, containers are also opening up new opportunities for OpenStack, as you still need some way to manage those containers and the rest of your infrastructure.

The OpenStack Foundation, the umbrella organization that helps guide the project, remains upbeat.

“The market for OpenStack distributions is settling on a core group of highly supported, well-adopted players, just as has happened with Linux and other large-scale, open-source projects,” said OpenStack Foundation COO Mark Collier in a statement. “All companies adjust strategic priorities from time to time, and for those distro providers that continue to focus on providing open-source infrastructure products for containers, VMs and bare metal in private cloud, OpenStack is the market’s leading choice.”

He also notes that analyst firm 451 Research believes there is a combined Kubernetes and OpenStack market of about $11 billion, with $7.7 billion of that focused on OpenStack. “As the overall open-source cloud market continues its march toward eight figures in revenue and beyond — most of it concentrated in OpenStack products and services — it’s clear that the natural consolidation of distros is having no impact on adoption,” Collier argues.

For Suse, though, this marks the end of its OpenStack products. As of now, though, the company remains a top-level Platinum sponsor of the OpenStack Foundation and Suse’s Alan Clark remains on the Foundation’s board. Suse is involved in some of the other projects under the OpenStack brand, so the company will likely remain a sponsor, but it’s probably a fair guess that it won’t continue to do so at the highest level.

Patch Tuesday Lowdown, October 2019 Edition

On Tuesday Microsoft issued software updates to fix almost five dozen security problems in Windows and software designed to run on top of it. By most accounts, it’s a relatively light patch batch this month. Here’s a look at the highlights.

Happily, only about 15 percent of the bugs patched this week earned Microsoft’s most dire “critical” rating. Microsoft labels flaws critical when they could be exploited by miscreants or malware to seize control over a vulnerable system without any help from the user.

Also, Adobe has kindly granted us another month’s respite from patching security holes in its Flash Player browser plugin.

Included in this month’s roundup is something Microsoft actually first started shipping in the third week of September, when it released an emergency update to fix a critical Internet Explorer zero-day flaw (CVE-2019-1367) that was being exploited in the wild.

That out-of-band security update for IE caused printer errors for many Microsoft users whose computers applied the emergency update early on, according to Windows update expert Woody Leonhard. Apparently, the fix available through this month’s roundup addresses those issues.

Security firm Ivanti notes that the patch for the IE zero day flaw was released prior to today for Windows 10 through cumulative updates, but that an IE rollup for any pre-Windows 10 systems needs to be manually downloaded and installed.

Once again, Microsoft is fixing dangerous bugs in its Remote Desktop Client, the Windows feature that lets a user interact with a remote desktop as if they were sitting in front of the other PC. On the bright side, this critical bug can only be exploited by tricking a user into connecting to a malicious Remote Desktop server — not exactly the most likely attack scenario.

Other notable vulnerabilities addressed this month include a pair of critical security holes in Microsoft Excel versions 2010-2019 for Mac and Windows, as well as Office 365. These flaws would allow an attacker to install malware just by getting a user to open a booby-trapped Office file.

Windows 10 likes to install patches all in one go and reboot your computer on its own schedule. Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update. To get there, click the Windows key on your keyboard and type “windows update” into the box that pops up.

Staying up-to-date on Windows patches is good. Updating only after you’ve backed up your important data and files is even better. A reliable backup means you’re not pulling your hair out if the odd buggy patch causes problems booting the system. So do yourself a favor and back up your files before installing any patches.

As always, if you experience any problems installing any of the patches this month, please feel free to leave a comment about it below; there’s a decent chance other readers have experienced the same and may even chime in here with some helpful tips.

You Thought Ransomware Was Declining? Think Again!

Two years have passed since the outbreak of the Petya and WannaCry ransomware attacks, which had devastating effects across the world. In 2018, there was a slight decline in ransomware’s frequency and impact (especially towards the end of the year) as cryptojacking briefly became more attractive to unsophisticated cyber criminals, so you might have thought the worst was behind us as far as the ransomware threat was concerned. Unfortunately, it looks like 2019 could quite accurately be labeled The Year of the Ransomware Comeback.

Ransomware Now A Matter of National Security

Ransomware attacks worldwide have more than doubled over the past 12 months, with attacks in the U.S. responsible for more than half of all incidents recorded.

The situation has become so dire that it is now considered a threat to U.S. national security. Ann Neuberger, director of the NSA’s cybersecurity division, said that ransomware, along with advanced cyber threats from nation states such as Russia, China, Iran and North Korea, is now among the top cyber threats. US officials have stated their concerns that there is a high probability that ransomware attacks will interfere with the upcoming 2020 U.S. elections, either through voter database encryption or the disabling of voting machines through any one of an increasing number of ransomware strains.

The Ongoing Evolution of Ransomware

As malware variants become more and more sophisticated, their use is also becoming more diverse. Over the past year, the number of different types of ransomware discovered by security researchers around the world has doubled, and their sophistication and maliciousness have intensified.

The creators of the MegaCortex ransomware have combined a variety of elusive features that prevent legacy defense mechanisms from identifying and blocking the attack. Other types of ransomware increase the psychological pressure on victims in order to secure and hasten payment – such as Jigsaw, which not only encrypts user files but also gradually deletes them over time. This means victims need to respond quickly – they have only 24 hours to pay the $150 ransom before the malware begins its slow but sure process of destruction, deleting the victim’s files with no possibility of recovery.

Hackers are also becoming more creative in their infection methods. In addition to traditional infection via phishing and spearphishing, the use of Remote Desktop Protocol (RDP) is increasing, with attackers leveraging stolen RDP credentials to obtain the permissions needed to distribute and execute malicious payloads on corporate networks.

Ransomware Authors Are Getting More Creative

Extremely creative hackers also use Managed Security Service Providers (MSSPs) as intrusion channels into organizations – in one case, hackers broke into such a provider and used its remotely managed security products to infect clients with the Sodinokibi ransomware.

Hackers are also testing new targets in addition to Windows-based systems. Ransomware is now a cross-platform threat: thousands of Linux servers have been infected and their files have been encrypted by a new breed of ransomware called Lilu that only attacks Linux-based systems, and there have even been examples of ransomware attacks targeting macOS users in the past.

The watershed moment for the ransomware attacks of 2019 was the attack against Baltimore City computer systems in May 2019. The attack left the city offline for weeks, resulting in a costly recovery. The Baltimore municipality estimates the financial cost of the attack at $18.2 million – the city’s Information Technology Office has spent $4.6 million on recovery efforts since the attack and expects to spend another $5.4 million by the end of the year. Another $8.2 million comes from potential lost or delayed revenue, such as money from property taxes, real estate fees and fine collection.

After this attack, there were many (sometimes coordinated) attacks on cities and municipalities. Most notable was a series of attacks against 22 cities and agencies in Texas.

Alongside municipalities, the education sector has been among the hardest hit by ransomware. Since the beginning of 2019, there have been some 533 attacks against US public schools – more than the total number of attacks in 2018. Ransomware attacks have delayed the start of the school year and cost educational institutions a small fortune. Some school districts have been paralyzed for months because of such attacks.

Cascading Costs of Ransomware Attacks

Beyond the cost of repairing a damaged reputation, the direct monetary damages caused by ransomware are on the rise. A Long Island school paid $100,000 to release its systems in August, and a New York state school paid $88,000 the same month. The Ryuk ransomware is largely responsible for the massive increase in ransomware payments; its operators demand an average of $288,000 for the release of systems, compared to the $10,000 price demanded by other criminal gangs. At times, the demands have been outrageous – the city of Riviera Beach, Florida, paid a $600,000 ransom in June 2019 to recover files following an attack, and another cybercrime gang demanded $5.3 million from New Bedford, but the city offered to pay “only” $400,000. Fortunately, in that case, city officials were able to recover their data from a backup and escaped without paying anything.

Summary

The current ransomware epidemic is the latest in a series of cyberattacks that have hit organizations and posed a significant challenge to our modern way of life. Unlike some forms of cyber threats such as those conducted by nation states, the motive for ransomware attacks is purely financial. As such, this kind of threat can only be addressed through economic metrics, namely the post-incident cost of an attack (downtime + damage to reputation + insurance premiums + fees and other indirect expenses) versus the cost of investment in a strong preventive security solution.

Fortunately, a modern endpoint detection and response solution will provide an almost hermetic seal against the data-destructive rampage of ransomware and should be the first thing to consider when facing this challenge. It is rare enough in cyber security that the solution is simple, effective and readily available. Any organization that has not suffered from a ransomware attack should take advantage of this fact and deploy a robust endpoint security solution throughout the enterprise and avoid becoming the next sorry victim in a long line of organizational casualties.



Read more about Cyber Security

Satya Nadella looks to the future with edge computing

Speaking today at the Microsoft Government Leaders Summit in Washington, DC, Microsoft CEO Satya Nadella made the case for edge computing, even while pushing the Azure cloud as what he called “the world’s computer.”

While Amazon, Google and other competitors may have something to say about that, marketing hype aside, many companies are still in the midst of transitioning to the cloud. Nadella says the future of computing could actually be at the edge, where computing is done locally before data is then transferred to the cloud for AI and machine learning purposes. What goes around, comes around.

But as Nadella sees it, this is not going to be about either edge or cloud. It’s going to be the two technologies working in tandem. “Now, all this is being driven by this new tech paradigm that we describe as the intelligent cloud and the intelligent edge,” he said today.

He said that to truly understand the impact the edge is going to have on computing, you have to look at research, which predicts there will be 50 billion connected devices in the world by 2030, a number even he finds astonishing. “I mean this is pretty stunning. We think about a billion Windows machines or a couple of billion smartphones. This is 50 billion [devices], and that’s the scope,” he said.

The key here is that these 50 billion devices, whether you call them edge devices or the Internet of Things, will be generating tons of data. That means you will have to develop entirely new ways of thinking about how all this flows together. “The capacity at the edge, that ubiquity is going to be transformative in how we think about computation in any business process of ours,” he said. As we generate ever-increasing amounts of data, whether for public sector use cases or any business need, it’s going to be the fuel for artificial intelligence, and he sees the sheer amount of that data driving new AI use cases.

“Of course when you have that rich computational fabric, one of the things that you can do is create this new asset, which is data and AI. There is not going to be a single application, a single experience that you are going to build, that is not going to be driven by AI, and that means you have to really have the ability to reason over large amounts of data to create that AI,” he said.

Nadella would be more than happy to have his audience take care of all that using Microsoft products, whether Azure compute, database, AI tools or edge computers like the Data Box Edge it introduced in 2018. While Nadella is probably right about the future of computing, all of this could apply to any cloud, not just Microsoft.

As computing shifts to the edge, it’s going to have a profound impact on the way we think about technology in general, but it’s probably not going to involve being tied to a single vendor, regardless of how comprehensive their offerings may be.

Forward Networks raises $35M to help enterprises map, track and predict their networks’ behavior

Security breaches and other activities that cause network surges and outages are all on the rise in the enterprise. Today, a startup called Forward Networks, which has built a clever way to help businesses monitor their network traffic and identify when things are going wrong, has raised a round of $35 million to continue expanding its business to meet that demand.

The money, a Series C, is being led by Goldman Sachs, which in this case is both a strategic and financial investor. David Erickson, the startup’s co-founder and CEO, said the investment bank started out as a customer, and Joshua Matheus, MD for technology at Goldman Sachs, was so pleased with the results that he recommended the bank also invest in the company. Others participating in this round include Andreessen Horowitz, Threshold Ventures (previously DFJ Venture) and A. Capital, the three investors that were behind Forward Networks’ previous round of $16 million in 2017.

Erickson and his co-founders, Nikhil Handigol, Brandon Heller and Peyman Kazemian, are all Stanford PhDs, and the company’s technology is based on work they did there around mathematical modeling. Here, that concept is applied to create essentially a replica of a company’s network architecture, which is in turn used to simulate individual processes and apps running on the network to figure out how they interact and what represents “normal” versus “abnormal” behavior; that model is then applied in real time to monitor the network and predict what might happen on it. This is not a fixing platform per se, but in developer operations there is a fundamental need, and a gap in the market, for products that help engineers identify what is not working in order to know what to try to fix.

If you are familiar with Honeycomb.io — a DevOps platform for running apps to determine when and where bugs or conflicts might arise (which itself recently raised funding) — this seems to be taking a similar approach, but on a network scale.

Considered together, it seems that we’re starting to see a new wave of services and platforms designed to provide more granular and intelligent pictures of how apps and networks behave in our modern technology landscapes.

Erickson tells me that today, the vast majority of Forward Networks’ customers are using the product to monitor on-premises rather than cloud architectures.

“We launched a public cloud product for AWS towards the end of last year, which today is in use by customers, but the dominant use case for us is on-prem,” said Erickson, who added that while the media (ahem) loves to talk about cloud, in many cases large enterprises have actually been slower to migrate processes where legacy services still work well, and they still harbour distrust of public cloud security and reliability. “We see growth towards the cloud but it’s baby steps.”

The company has been growing steadily; today its network monitoring covers some 75,000 devices. In that context, Goldman Sachs is a significant client, with some 15,000 devices in its network alone.

Looking ahead, Erickson said that the funding would be used in part for R&D and in part to continue its business development. There are a number of other solutions and services out there that have identified the opportunity of providing better network management as a route to identifying security threats and other risks, so that also presents an opportunity for M&A for Forward, although Erickson declined to comment further on that.

“We continue to see the value that Forward Networks’ platform brings to large enterprises running complex networks,” said Bill Krause, board partner at Andreessen Horowitz. “They have solved a critical business problem, which presents a real growth opportunity.”

Nadella warns government conference not to betray user trust

Microsoft CEO Satya Nadella, delivering the keynote at the Microsoft Government Leaders Summit in Washington, DC today, had a message for attendees: maintain user trust in your tools and technologies above all else.

He said it is essential to earn user trust, regardless of your business. “Now, of course, the power law here is all around trust because one of the keys for us, as providers of platforms and tools, trust is everything,” he said today. But he says it doesn’t stop with the platform providers like Microsoft. Institutions using those tools also have to keep trust top of mind or risk alienating their users.

“That means you need to also ensure that there is trust in the technology that you adopt, and the technology that you create, and that’s what’s going to really define the power law on this equation. If you have trust, you will have exponential benefit. If you erode trust it will exponentially decay,” he said.

He says Microsoft sees trust along three dimensions: privacy, security and ethical use of artificial intelligence. All of these come together in his view to build a basis of trust with your customers.

Nadella said he sees privacy as a human right, pure and simple, and it’s up to vendors to ensure that privacy or lose the trust of their customers. “The investments around data governance is what’s going to define whether you’re serious about privacy or not,” he said. For Microsoft, that means looking at how transparent it is about how it uses data, its terms of service, and how it uses technology to ensure that’s being carried out at runtime.

He reiterated the call he made last year for a federal privacy law. With GDPR in Europe and California’s CCPA coming online in January, he sees a centralized federal law as a way to streamline regulations for business.

As for security, as you might expect, he defined it in terms of how Microsoft was implementing it, but the message was clear that you needed security as part of your approach to trust, regardless of how you implement that. He asked several key questions of attendees.

“Cyber is the second area where we not only have to do our work, but you have to [ask], what’s your operational security posture, how have you thought about having the best security technology deployed across the entire chain, whether it’s on the application side, the infrastructure side or on the endpoint side, and most importantly, around identity,” Nadella said.

The final piece, one which he said was just coming into play, was how you use artificial intelligence ethically, a sensitive topic for a government audience, but one he wasn’t afraid to broach. “One of the things people say is, ‘Oh, this AI thing is so unexplainable, especially deep learning.’ But guess what, you created that deep learning [model]. In fact, the data on top of which you train the model, the parameters and the number of parameters you use — a lot of things are in your control. So we should not abdicate our responsibility when creating AI,” he said.

Whether Microsoft or the U.S. government can adhere to these lofty goals is unclear, but Nadella was careful to outline them both for his company’s benefit and this particular audience. It’s up to both of them to follow through.

Facebook’s Workplace hits 3M paying users, launches Portal app in a wider push for video

The rapid rise of Slack — which has recently broken the 100,000 mark for paying businesses using its service — has ushered in a rush of competition from other companies across the worlds of social media and enterprise software, all aiming to become the go-to conversation layer for businesses. Today, Workplace, Facebook’s effort in that race, announced a milestone in its growth, along with a bigger push into video services and other improvements.

The service — which starts at $1.50 per month per front-line worker and then has tiers of $4 and $8 — now has passed 3 million paying users, adding 1 million workers from mostly enterprise businesses in the last eight months.

And to capitalize on Facebook’s growing focus on video in its consumer service, Workplace is announcing several steps of its own into video. It’s releasing a special app that can be used on the Portal, Facebook’s video screen; and alongside that, it’s announcing new video features: captioning at the bottom of videos; auto-translating starting with 14 languages; and a new P2P architecture that will speed up video transmission for those who might be watching videos on Workplace in places where bandwidth is constrained.

The features and milestone number are all being announced today at Flock, the Workplace user conference that Facebook puts on each year. Alongside all these, Facebook also announced several other features for its enterprise app (more on the other new features below).

The push to video comes at an interesting time for Workplace on the competitive front. Karandeep Anand, who came to Facebook from Microsoft and currently heads up Workplace with Julien Codorniou managing business development, has made a point of differentiating Workplace from others in the field of workplace collaboration by emphasizing how it’s used by very large enterprises like Walmart (the world’s largest single employer) to bring together on to a single communication platform not just white-collar knowledge workers but also frontline workers.

The company says that today, its customers include 150 companies with over 10,000 active users apiece, with other names on its books including Starbucks, Spotify, AstraZeneca, Deliveroo and Kering.

The push to video follows that trajectory: it’s a way for Workplace (and Facebook) to differentiate the experience and use cases for the product to businesses, which might already be using Slack but might consider buying this as well, if not migrating away from the other product altogether. (Teams is a different ballgame, of course, as it has a strong video component of its own and also likes to position itself as a product for all kinds of employees, too.)

Workplace’s video efforts here will mark the first time that Facebook is positioning Portal as a product for businesses. This is notable, when you consider there has been some adoption of Amazon’s Alexa in workplace scenarios, too; and that there has been some pushback from consumers about the prospect of having a Facebook video device sitting in their homes. This gives Facebook’s $179 hardware (which will be sold at the same price to businesses) a new avenue for sales.

Video has been a cornerstone of how Workplace has been developing for a while now, with companies using it as a way for, say, the big boss to send out more personalised communications to workers, and for people in workgroups to create video chats with each other. A dedicated screen for video chats takes this idea to the next level, and plays on the fact that video conferencing services like Zoom have caught on like wildfire in modern offices, where people who work together often work in disparate locations.

There is another way that Portal could find some traction with businesses: videoconferencing solutions tend to be very expensive, in part because of the hefty hardware investments that need to be made. Offering a device at $179 drastically undercuts that investment. Codorniou declined to comment on whether Facebook might make a more concerted effort to push this as a cost-effective videoconferencing alternative down the line, but he did point out that today Facebook and Zoom have a close relationship.

The other video features that Workplace is announcing today will further enhance the experience: Facebook will now give users the option to include automatic captions at the bottom of videos, with the bonus of translation, initially in 14 languages. And the improved video quality for those with limited bandwidth is, notably, not something that Facebook has rolled out in its consumer app: the aim is to improve the quality of broadcasting in scenarios where bandwidth might not be as strong but there are simultaneous people watching the same event — something you could imagine applying, say, at a company all-hands or town hall event with remote participants.

Alongside all of these video features, Workplace is adding in a host of other features to expand the use cases for the product beyond basic chatting:

  • New learning product. This is not about e-learning per se, but Workplace is now offering a way for HR to add onboarding teaching and videos into Workplace for new employees or new services at the company. There are no plans right now to expand this to educational content, Codorniou said.
  • Surveys are also coming to Workplace. These will be set by administrators — not any worker at any time — and it seems that for now there will be no anonymity, so that will mean it’s unlikely that these will cover any sensitive topics, and might in any case see a chilling effect in how people feel they can respond.
  • Frontline access is getting overhauled in Workplace, where people who do not use company email addresses will now be able to create accounts using generated codes.
  • Those admins that are trying to track how well Workplace is actually working for them will also be able to track engagement and other metrics on the platform.

Workplace is also adding in some gamification features to the platform, where people can publicly thank people, set and follow workplace goals and award badges to individuals who have achieved something in areas like sales, anniversaries or other positive milestones.

As with the video features, the idea is to bring services to Workplace that you are not necessarily getting in Slack and other competitive products. That’s the thinking even when the features are replicas of ones you might have seen elsewhere, just not all in one consolidated place.

Asked what he thought about the claims that Facebook is too much of a “copycat” when it came to building new features, Codorniou was defensive. “I think Workplace itself is getting to a market that has been untouched before. When it comes to badges or goals, for example, yes, people have built these before, but the difference is that we are offering them to a wide network of people. If you have to use a separate app, it’s not a great experience.”

And, he added, “everything that we ship is the result of customer feedback and requests. If they tell us they want these, it means they’re not finding what they needed on the market.”

Arm brings custom instructions to its embedded CPUs

At its annual TechCon event in San Jose, Arm today announced Custom Instructions, a new feature of its Armv8-M architecture for embedded CPUs that, as the name implies, enables its customers to write their own custom instructions to accelerate their specific use cases for embedded and IoT applications.

“We already have ways to add acceleration, but not as deep and down to the heart of the CPU. What we’re giving [our customers] here is the flexibility to program your own instructions, to define your own instructions — and have them executed by the CPU,” ARM senior director for its automotive and IoT business, Thomas Ensergueix, told me ahead of today’s announcement.

He noted that Arm has always had a continuum of options for acceleration, starting with its memory-mapped architecture for connecting GPUs and today’s neural processor units over a bus. This allows the CPU and the accelerator to run in parallel, but with the bus being the bottleneck. Customers also can opt for a co-processor that’s directly connected to the CPU, but today’s news essentially allows Arm customers to create their own accelerated algorithms that then run directly on the CPU. That means the latency is low, but it’s not running in parallel, as with the memory-mapped solution.

arm instructions

As Arm argues, this setup allows for the lowest-cost (and risk) path for integrating customer workload acceleration, as there are no disruptions to the existing CPU features and it still allows its customers to use the existing standard tools with which they are already familiar.

custom assembler

For now, custom instructions will only be available to be implemented in the Arm Cortex-M33 CPUs, starting in the first half of 2020. By default, they’ll also be available for all future Cortex-M processors. There are no additional costs or new licenses to buy for Arm’s customers.

Ensergueix noted that as we’re moving to a world with more and more connected devices, more of Arm’s customers will want to optimize their processors for their often very specific use cases — and often they’ll want to do so because by creating custom instructions, they can get a bit more battery life out of these devices, for example.

Arm has already lined up a number of partners to support Custom Instructions, including IAR Systems, NXP, Silicon Labs and STMicroelectronics.

“Arm’s new Custom Instructions capabilities allow silicon suppliers like NXP to offer their customers a new degree of application-specific instruction optimizations to improve performance, power dissipation and static code size for new and emerging embedded applications,” writes NXP’s Geoff Lees, SVP and GM of Microcontrollers. “Additionally, all these improvements are enabled within the extensive Cortex-M ecosystem, so customers’ existing software investments are maximized.”

In related embedded news, Arm also today announced that it is setting up a governance model for Mbed OS, its open-source operating system for embedded devices that run an Arm Cortex-M chip. Mbed OS has always been open source, but the Mbed OS Partner Governance model will allow Arm’s Mbed silicon partners to have more of a say in how the OS is developed through tools like a monthly Product Working Group meeting. Partners like Analog Devices, Cypress, Nuvoton, NXP, Renesas, Realtek, Samsung and u-blox are already participating in this group.

What is Deepfake? (And Should You Be Worried?)

We hear a lot about Artificial Intelligence and Machine Learning being used for good in the Cybersecurity world – detecting and responding to malicious and suspicious behavior to safeguard the enterprise – but like many technologies, AI can be used for bad as well as for good. One area which has received increasing attention in the last couple of years is the ability to create ‘Deepfakes’ of audio and video content using generative adversarial networks or GANs. In this post, we take a look at Deepfake and ask whether we should be worried about it.

image of deep fake

What is Deepfake?

A Deepfake is the use of machine (“deep”) learning to produce a kind of fake media content – typically a video with or without audio – that has been ‘doctored’ or fabricated to make it appear that some person or persons did or said something that in fact they did not.

Going beyond older techniques of achieving the same effect such as advanced audio-video editing and splicing, Deepfake takes advantage of computer processing and machine learning to produce realistic video and audio content depicting events that never happened. Currently, this is more or less limited to “face swapping”: placing the head of one or more persons onto other people’s bodies and lip-syncing the desired audio. Nevertheless, the effects can be quite stunning, as seen in this Deepfake of Steve Buscemi faceswapped onto the body of Jennifer Lawrence.

deep fake steve buscemi
real jennifer lawrence
Source

It all started in 2017 when Reddit user ‘deepfakes’ produced sleazy Deep Fakes of celebs engaged in sex acts by transposing the faces of famous people onto the bodies of actors in adult movies. Before long, many people began posting similar kinds of videos, until Reddit banned the practice and shut down the subreddit entirely.

image of deepfakes banned from reddit

Of course, that was only the beginning. Once the technology had been prized out of the hands of academics and turned into usable code that didn’t require a deep understanding of Artificial Intelligence concepts and techniques, many others were able to start playing around with it. Political figures, actors and other public figures soon found themselves ‘faceswapped’ and appearing all over YouTube and other video platforms. Reddit itself contains other, non-pornographic Deep Fakes such as the ‘Safe For Work’ Deepfakes subreddit r/SFWdeepfakes.

How Easy Is It To Create Deep Fakes?

While not trivial, it’s not that difficult either for anyone with average computer skills. As we have seen, where once it would have required vast resources and skills only available to a few specialists, there are now tools available on github with which anyone can easily experiment and create their own Deep Fakes using off-the-shelf computer equipment.

At a minimum, you can do it with a few YouTube URLs. As Gaurav Oberoi explained in this post last year, anyone can use an automated tool that takes a list of half a dozen or so YouTube videos of a few minutes each and then extracts the frames containing the faces of the target and the substitute.

The software will inspect various attributes of the original such as skin tone, the subject’s facial expression, the angle or tilt of the head and the ambient lighting, and then try to reconstruct the image with the substitute’s face in that context. Early versions of the software could take days or even weeks to churn through all these details, but in Oberoi’s experiment, it took only around 72 hours with an off-the-shelf GPU (an NVIDIA GTX 1080 Ti) to produce a realistic swap.
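To give a concrete sense of how low the barrier now is, here is roughly what that workflow looks like with the open-source faceswap project on GitHub. This is an illustrative sketch only: the directory names are placeholders and the exact sub-commands and flags vary between versions of the tool:

# Extract face images from folders of still frames of person A and person B.
python faceswap.py extract -i frames_person_a/ -o faces_person_a/
python faceswap.py extract -i frames_person_b/ -o faces_person_b/

# Train the swap model on the two face sets (the step that ties up a GPU for hours or days).
python faceswap.py train -A faces_person_a/ -B faces_person_b/ -m model/

# Apply the trained model to new frames of person A, substituting person B's face.
python faceswap.py convert -i frames_person_a/ -o swapped_frames/ -m model/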

Although less sophisticated techniques can be just as devastating, as witnessed recently in the doctored video of CNN journalist Jim Acosta and the faked “drunk” Nancy Pelosi video, real Deep Fakes use a method called generative adversarial networks (GANs). This pits two competing machine learning algorithms against each other: one produces the fake image and the other tries to detect it. When a detection is made, the first AI improves its output to get past the detection; the second AI then has to improve its decision-making to spot the fakery. This process is iterated multiple times until the Deep Fake-producing AI either beats the detection AI or produces an image that the creator feels is good enough to fool human viewers.

Access to GANs is no longer restricted to those with huge budgets and supercomputers.

image of faceswap deep learning tool

How Do Deep Fakes Affect Cybersecurity?

Deep Fakes are a new twist on an old ploy: media manipulation. From the old days of splicing audio and video tapes to using photoshop and other editing suites, GANs offer us a new way to play with media. Even so, we’re not convinced that they open up a specifically new channel to threat actors, and articles like this one seem to be stretching their credibility when they try to exaggerate the connection between Deep Fakes and conventional phishing threats.

Of course, Deep Fakes do have the potential to catch a lot of attention, circulate and go viral as people marvel at fake media portraying some unlikely turn of events: a politician with slurred speech, a celebrity in a compromising position, controversial quotes from a public figure and the like. By creating content with the ability to attract large amounts of shares, it’s certainly possible that hackers could utilize Deep Fakes in the same way as other phishing content by luring people into clicking on something that has a malicious component hidden inside, or stealthily redirects users to malicious websites while displaying the content. But as we already noted above with the Jim Acosta and Nancy Pelosi fakes, you don’t really need to go deep to achieve that effect.

The one thing we know about criminals is that they are not fond of wasting time and effort on complicated methods when perfectly good and easier ones abound. There’s no shortage of people falling for all the usual, far simpler phishing bait that’s been circulating for years and is still clearly highly effective. For that reason, we don’t see Deep Fakes as a particularly severe threat for this kind of cybercrime for the time being.

That said, be aware that there have been a small number of reported cases of Deep Fake voice fraud in attempts to convince company employees to wire money to fraudulent accounts. This appears to be a new twist on the business email compromise phishing tactic, with the fraudsters using Deep Fake audio of a senior employee issuing instructions for a payment to be made. It just shows that criminals will always experiment with new tactics in the hope of a payday and you can never be too careful.

Perhaps of greater concern are uses of Deepfake content in personal defamation attacks, attempts to discredit the reputations of individuals, whether in the workplace or personal life, and the widespread use of fake pornographic content. So-called “revenge porn” can be deeply distressing even when it is widely acknowledged as fake. The possibility of Deep Fakes being used to discredit executives or businesses by competitors is also not beyond the realms of possibility. Perhaps the likeliest threat, though, comes from information warfare during times of national emergency and political elections – here comes 2020! – with such events widely thought to be ripe for disinformation campaigns using Deepfake content.

How Can You Spot A Deep Fake?

Depending on the sophistication of the GAN used and the quality of the final image, it may be possible to spot flaws in a Deep Fake in the same way that close inspection can often reveal sharp contrasts, odd lighting or other disjunctions in “photoshopped” images. However, generative adversarial networks have the capacity to produce extremely high quality images that perhaps only another, generative adversarial network might be able to detect. Since the whole idea of using a GAN is to ultimately defeat detection by another generative adversarial network, that too may not always be possible.

image of faceswap algorithm
Source

By far the best judge of fake content, however, is our ability to look at things in context. Individual events or artefacts like video and audio recordings may be – or become – indistinguishable from the real thing in isolation, but detection is a matter of judging something in light of other evidence. To take a trivial example, it would take more than a video of a flying horse to convince us that such animals really exist. We should want not only independent verification (such as a video from another source) but also corroborating evidence. Who witnessed it take off? Where did it land? Where is it currently located? and so on.

We need to be equally circumspect when viewing consumable media, particularly when it makes surprising or outlandish claims. Judging veracity in context is not a new approach. It’s the same idea behind multi-factor authentication; it’s the standard for criminal investigations; it underpins the scientific method and, indeed, it’s at the heart of contextual malware detection as used in SentinelOne.

What is new, perhaps, is that we may have to start applying more rigour in our judgement of media content that depicts far-fetched or controversial actions and events. That’s arguably something we should be doing anyway. Perhaps the rise of Deepfake will encourage us all to get better at it.

Conclusion

With accessible tools for creating Deep Fakes now available to anyone, it’s understandable that there should be concern about the possibility of this technology being used for nefarious purposes. But that’s true of pretty much all technological innovation; there will always be some people that will find ways to use it to the detriment of others. Nonetheless, Deepfake technology comes out of the same advancements as other machine learning tools that improve our lives in immeasurable ways, including in the detection of malware and malicious actors.

While creating a fake video for the purposes of information warfare is not beyond the realms of possibility or even likelihood, it is not beyond our means to recognize disinformation by judging it in the context of other things that we know to be true, reasonable or probable. Should we be worried about Deep Fakes? As with all critical reasoning, we should be worried about taking on trust extraordinary claims that are not supported by an extraordinary amount of other credible evidence.

