SmartDrive snaps up $90M for in-truck video telematics solutions for safety and fuel efficiency

Trucks and other large commercial vehicles are the biggest whales on the road today — and, by virtue of that size, they are also some of the most dangerous and inefficient when driven badly. Today, a startup that has built a platform aimed at improving both of those areas has raised a large round of funding to continue fuelling (so to speak) its own growth: SmartDrive, a San Diego-based provider of video-based telematics and transportation insights, has snapped up a round of $90 million.

The company is not disclosing its valuation but according to PitchBook, it was last valued (in 2017) at $290 million, which would put the valuation now around $380 million. But given that the company has been growing well — it says that in the first half of this year, its contracted units were up 48%, while sales were up by 44% — that figure may well be higher. (We are asking.)

The funding comes at an interesting time for fleet management and the trucking industry. A lot of the big stories about automotive technology at the moment seem to be focused on autonomous vehicles for private usage, but that leaves a large — and largely legacy — market in the form of fleet management and commercial vehicles.

That’s not to say it’s been completely ignored, however. Bigger companies like Uber, Tesla and Volvo, and startups like Nikola, are all building smarter trucks, and just yesterday Samsara, which makes an industrial IoT platform that works, in part, to provide fleet management to the trucking industry, raised $300 million on a $6.3 billion valuation.

The telematics market was estimated to be worth $25.5 billion in 2018 and is forecast to grow to some $98 billion by 2026.

The round was led by TPG Sixth Street Partners, a division of investment giant TPG (which backs the likes of Spotify and many others), which earlier this year was raising a $2 billion fund for growth-stage investments. Unnamed existing investors also participated. Prior to this round, the company had raised $230 million, with other backers including Founders Fund, NewView Capital, Oak Investment Partners, Michelin and more. (NEA had also been an investor but has more recently sold its stake.)

SmartDrive has been around since 2005 and focuses on a couple of key areas. Tapping data from the many sensors that you have today in commercial vehicles, it builds up a picture of how specific truckers are handling their vehicles, from their control on tricky roads to what gears and speed they are using as they go up inclines, and how long they idle their engines. The resulting data is used both to provide a better picture to fleet managers of that performance, and to highlight specific areas where the trucker can improve his or her performance, and how.
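The mechanics here can be sketched in a few lines. The following is an illustrative toy, not SmartDrive's actual pipeline; the event shape and thresholds are assumptions, but it shows the general pattern of reducing raw vehicle sensor samples to a per-driver metric such as engine idle time:

```python
# Illustrative toy only, not SmartDrive's actual pipeline. It reduces raw
# per-vehicle sensor samples to a per-driver metric (total engine idle time)
# of the kind a fleet manager could act on.

from collections import defaultdict

def idle_seconds_by_driver(events):
    """events: iterable of (driver_id, engine_rpm, speed_kph, duration_s) samples.
    A sample counts as idling when the engine is running but the truck is stopped."""
    totals = defaultdict(float)
    for driver_id, rpm, speed, duration in events:
        if rpm > 0 and speed == 0:  # engine on, vehicle not moving
            totals[driver_id] += duration
    return dict(totals)

samples = [
    ("driver_a", 700, 0, 120),    # idling at a depot for two minutes
    ("driver_a", 1500, 80, 300),  # cruising
    ("driver_b", 650, 0, 600),    # idling for ten minutes
]
print(idle_seconds_by_driver(samples))  # {'driver_a': 120.0, 'driver_b': 600.0}
```

In practice a platform like this would fold in many more signals (grades, gear selection, braking behavior) and compare each driver against fleet baselines rather than reporting raw totals.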

Analytics and data provided to customers include multi-camera 360-degree views, extended recording and U-turn triggering, along with diagnostics on specific driver performance. The company claims that the information has led to more satisfaction among drivers and customers, with driver retention rates of 70% or higher and improvements to 9 miles per gallon (mpg) on trips, versus industry averages of 20% driver retention and 6 mpg.

“This is an exciting time at SmartDrive and in the transportation sector overall as adoption of video-based telematics continues to accelerate,” stated Steve Mitgang, SmartDrive CEO, in a statement. “Building on our pioneering video-based safety program, our vision of an open platform powering best-of-breed video, compliance and telematics applications is garnering significant traction across a diverse range of fleets given the benefits of choice, flexibility and a lower total cost of ownership. The investment from TPG Sixth Street Partners and our existing investors will fuel continued innovation in areas such as computer vision and AI, while also enhancing sales and marketing initiatives and further international expansion.”

The focus for SmartDrive seems to be on how drivers are doing in specific circumstances: it doesn’t appear to address whether there could have been better routes, or whether better fleet management could have resulted in improved performance. (That could be one area where it grows, or fits into a bigger platform, however.)

“SmartDrive is a market leader in the large and expanding transportation safety and intelligence sector and we are pleased to be investing in a growing company led by such a talented team,” noted Bo Stanley, partner and co-head of the Capital Solutions business at TPG Sixth Street Partners, in a statement. “SmartDrive’s proprietary data analytics platform and strong subscriber base put it in a great position to continue to capitalize on its track record of innovation and the broader secular trend of higher demand for safer and smarter transportation.”

IBM brings Cloud Foundry and Red Hat OpenShift together

At the Cloud Foundry Summit in The Hague, IBM today showcased its Cloud Foundry Enterprise Environment on Red Hat’s OpenShift container platform.

For the longest time, the open-source Cloud Foundry Platform-as-a-Service ecosystem and Red Hat’s Kubernetes-centric OpenShift were mostly seen as competitors, with both tools vying for enterprise customers who want to modernize their application development and delivery platforms. But a lot of things have changed in recent times. On the technical side, Cloud Foundry started adopting Kubernetes as an option for application deployments and as a way of containerizing and running Cloud Foundry itself.

On the business side, IBM’s acquisition of Red Hat has brought along some change, too. IBM long backed Cloud Foundry as a top-level foundation member, while Red Hat bet on its own platform instead. Now that the acquisition has closed, it’s maybe no surprise that IBM is working on bringing Cloud Foundry to Red Hat’s platform.

For now, this work is still officially a technology experiment, but our understanding is that IBM plans to turn it into a fully supported project that will give Cloud Foundry users the option to deploy their applications right to OpenShift, while OpenShift customers will be able to offer their developers the Cloud Foundry experience.

“It’s another proof point that these things really work well together,” Cloud Foundry Foundation CTO Chip Childers told me ahead of today’s announcement. “That’s the developer experience that the CF community brings and in the case of IBM, that’s a great commercialization story for them.”

While Cloud Foundry isn’t seeing the same hype as in some of its earlier years, it remains one of the most widely used development platforms in large enterprises. According to the Cloud Foundry Foundation’s latest user survey, the companies that are already using it continue to move more of their development work onto the platform, and according to code analysis from source{d}, the project continues to see over 50,000 commits per month.

“As businesses navigate digital transformation and developers drive innovation across cloud native environments, one thing is very clear: they are turning to Cloud Foundry as a proven, agile, and flexible platform — not to mention fast — for building into the future,” said Abby Kearns, executive director at the Cloud Foundry Foundation. “The survey also underscores the anchor Cloud Foundry provides across the enterprise, enabling developers to build, support, and maximize emerging technologies.”

Also at this week’s Summit, Pivotal (which is in the process of being acquired by VMware) is launching the alpha version of the Pivotal Application Service (PAS) on Kubernetes, while Swisscom, an early Cloud Foundry backer, is launching a major update to its Cloud Foundry-based Application Cloud.

The mainframe business is alive and well, as IBM announces new z15

It’s easy to think about mainframes as some technology dinosaur, but the fact is these machines remain a key component of many large organizations’ computing strategies. Today, IBM announced the latest in its line of mainframe computers, the z15.

For starters, as you would probably expect, these are big and powerful machines capable of handling enormous workloads. For example, this baby can process up to 1 trillion web transactions a day and handle 2.4 million Docker containers, while offering unparalleled security to go with that performance. That includes the ability to encrypt data once and have it stay encrypted even when it leaves the system, a huge advantage for companies with a hybrid strategy.

Speaking of which, you may recall that IBM bought Red Hat last year for $34 billion. That deal closed in July and the companies have been working to incorporate Red Hat technology across the IBM business including the z line of mainframes.

IBM announced last month that it was making OpenShift, Red Hat’s Kubernetes-based cloud-native tools, available on the mainframe running Linux. This should enable developers, who have been working on OpenShift on other systems, to move seamlessly to the mainframe without special training.

IBM sees the mainframe as a bridge for hybrid computing environments, offering a highly secure place for data that when combined with Red Hat’s tools, can enable companies to have a single control plane for applications and data wherever it lives.

While it could be tough to justify the cost of these machines in the age of cloud computing, Ray Wang, founder and principal analyst at Constellation Research, says it could be more cost-effective than the cloud for certain customers. “If you are a new customer, and currently in the cloud and develop on Linux, then in the long run the economics are there to be cheaper than public cloud if you have a lot of IO, and need to get to a high degree of encryption and security,” he said.

He added, “The main point is that if you are worried about being held hostage by public cloud vendors on pricing, in the long run the z is a cost-effective and secure option for owning compute power and working in a multi-cloud, hybrid cloud world.”

Companies like airlines and financial services companies continue to use mainframes, and while they need the power these massive machines provide, they need to do so in a more modern context. The z15 is designed to provide that link to the future, while giving these companies the power they need.

macOS Notarization: Security Hardening or Security Theater?

Earlier this year, Apple introduced a new security measure called ‘Notarization’ to complement their existing strategies like Gatekeeper, XProtect, and the Malware Removal Tool. It would be fair to say that there’s been a bit of confusion and not a little pushback from developers about this new security measure. In this post, we’ll look at what Notarization is, why Apple have introduced it, and why it’s proving controversial among developers. We’ll also address the most important question for macOS users: will Apple’s Notarization requirement make your Mac more secure from malware? Let’s find out.

image of notarization

What is Notarization?

That should be a simple question to answer, but as we’ll see, Apple have been shifting the ground on this and a lack of clarity over what Notarization is and what it requires has caused some upset among the developer community.

image of apple news

According to Apple, Notarization is a process whereby all 3rd party software distributed outside of the Mac App Store must be uploaded to Apple’s servers and checked for malware. If the software passes Apple’s malware scan check, its details are added to Apple’s database of “safe” or at least “allowed” software. Developers in return receive an electronic “ticket” which can be attached to the software by the developer when they distribute it. Where Notarization is not yet compulsory (i.e., macOS Mojave), Notarized apps present a slightly different warning from Gatekeeper than unnotarized apps, which supposedly tells users that the software they are about to launch has passed Apple’s checks.

Put like that, Notarization sounds good for both users and developers. Users get extra confidence that apps they download are malware-free, and developers get a way to show users that their apps are safe. Sounds great, but there’s been a few wrinkles not only in implementation but also in design that have, shall we say, upset the Apple cart.

Enter The Hardened Runtime…Or Not

When Apple first introduced Notarization as an optional requirement in Mojave 10.14.5, it laid out very clear rules about what developers would have to do. One of the requirements was that developers build their software using something called the Hardened Runtime. This requires developers to jump through a few hoops to ensure they won’t lose necessary functionality, and it is signified by a flag in the code signature which tells the OS to treat the executable somewhat like Apple’s own SIP-protected executables. The existence of the flag will prevent other processes (like a debugger or decompiler) from attaching to it, and will prevent code injection, dylib hijacking and several other things.

As explained in Howard Oakley’s post “Notarization devalued?”, Apple later decided to encourage developers to have all their software uploaded and scanned by their Notary service, even software that would never be used on newer versions of macOS or which could not, for various reasons, be built with the new hardened runtime.

More recently, in yet a third change, Apple have temporarily dropped the stricter requirements that they originally insisted on.

image of third notarization change

In short, it now turns out that the only thing required for Notarization is the malware check itself. There will be a fourth change in January 2020. At that time, Apple’s current intention is to revert to the strict requirements initially announced.

Who’s Got A Ticket To Ride?

As mentioned above, when an application is successfully notarized, Apple also issues the developer with a “ticket” which they can then attach to their software prior to distribution. The ticket is basically there to take care of the situation when a user tries to launch a Notarized app but has no connection to the internet or Apple’s Notarization servers are down. The ticket says “No need to check, I’m notarized.” This obviously has great benefit to Apple, reducing the burden on their servers, but many developers have balked at the clumsy tooling and inevitable delay between building, notarizing and receiving the ticket. The delay is currently only a few minutes when everything is working properly, but if it’s not, developers may be stuck in limbo waiting for Apple’s servers to respond.

If you’re running Mojave 10.14.5 or later and you have the Xcode Command Line tools installed, you can easily do a quick check of which apps are currently notarized and stapled with a ticket in your /Applications folder like this:

for i in /Applications/*; do stapler validate "${i}" | grep -B 1 worked; done

image of checking for notarized apps

Tickets are, technically, optional. Developers don’t have to staple them to their software when they distribute their app, and doing so is cumbersome for automated workflows as indicated above. However, by not stapling the ticket, developers run the real risk that their app might fail to run on launch if the user’s device cannot connect to Apple’s servers at that time. Given that possibility, very few developers would consider that a genuine “option,” and most feel forced into adjusting their workflow to accommodate Apple’s changes.

Why Did Apple Introduce Notarization?

Given the extra burden on developers, it’s worth stopping to reflect on why Apple are keen on the Notarization technology. The company recognizes the need for a ‘defense in depth’ strategy, and Notarization is ostensibly another layer of added security for end users. One might hope that Notarization is an attempt to stem the flood of adware, PUPs (potentially unwanted programs) and PPI (pay-per-install) “badware” that has been afflicting macOS users for some time now. Apple have been struggling to keep up with revoking developer signatures, and the XProtect and MRT tools are simply not built to recognize the simple, automated changes that rogue developers can use to make their software invisible to Apple’s tools. Since Notarization on the forthcoming Catalina and beyond will prevent the launch of any software that Apple has not already received a copy of, presumably Apple will be able to add more fine-grained signatures to their Notarization malware scan over time. In effect, Notarization should serve as a kind of ‘XProtect in the Cloud’ service, allowing Apple to refine their detections and knock out larger classes of unwanted software globally simply by refusing or revoking Notarization status.

Will Notarization Defeat Malware?

That sounds plausible, and worthy, but Apple have been historically unwilling to revoke the signatures of a number of developers who distribute PUPs and who manage to stay within the letter of the law while infuriating users. As far as out-and-out malware goes, the real test of whether Notarization will make any difference will depend on what exactly the “malware scan” involves. As ever with the Cupertino outfit, that remains opaque.

Interestingly, we’ve already seen malware authors adopting the hardened runtime themselves, although to what end it’s not clear.

image of malware with hardened runtime

It fails as an anti-analysis strategy since it’s a simple matter for researchers to bypass the hardened runtime flag. More likely, I suspect, is that malware authors have been testing Apple’s Notarization service to see where and how their creations are caught. Apple helpfully provides a full error log on failed Notarization requests which may be useful to that end, so it’s at least possible if not likely that bad actors will figure out through trial and error what they can get away with.

Moreover, while bundles, packages and disk images require Notarization, it remains the case that not all file formats can be notarized or stapled with a ticket. In particular, this new technology is not going to affect any malware that simply runs a script or stand-alone executable. Scripts are increasingly common in adware and malware installers like Shlayer. Although these face the usual Gatekeeper checks, they are not subject to Notarization.

image of player command

New Technology Bypassed By Old Tricks

A large part of why some developers feel that Notarization is more security theater than true security hardening is that while it requires the good guys to adapt to these new requirements, bad actors already have the tools and techniques they need to bypass Notarization checks.

As mentioned above, scripts and standalone binaries are not subject to Notarization, since Apple’s “stapling” technology cannot be applied to these. Notarization checks also only apply to quarantined and “gatekeepered” apps. As we’ve discussed before, the simplest bypass for all of Apple’s security technologies is to remove the quarantine bit.

Here’s a simple demonstration. In this example, I’ve chosen a free 3rd party application called “Slimjet.app”. I must stress I have absolutely no knowledge of this application and I am not suggesting there is anything wrong with it. I chose it because it was the first unnotarized application I came across on a 3rd party distribution site. If we download and try to install this application on a system where Notarization is enforced, Gatekeeper will block it because, although it is correctly codesigned, it is not notarized. So far so good.

image of unnotarized app

But there are (at least) two simple ways around this, both of which are old tricks. First, we can simply remove the quarantine bit with xattr -d com.apple.quarantine. It is still the case that this can be done by a process – such as a malicious installer script – without elevating privileges. The app will now run without complaint from Gatekeeper.

image of slimjet browser

Second, we can build the app into an installer package with pkgbuild and use that to install the app without setting the quarantine bit.

pkgbuild --component "/Volumes/FlashPeak Slimjet/FlashPeak Slimjet.app" --install-location /Applications ~/Desktop/slimejet.pkg

Apple have said that installer packages signed with Developer IDs must also be notarized, but that does not stop us from creating an unsigned installer package with something like the code above and then using simple social engineering tricks to convince the user to right-click ‘open’ it.

If that strikes you as unlikely, reflect on the fact that we see infections on a daily basis for unsigned code that use this exact trick. Here, for example, is a malicious application on a disk image that helpfully provides the victim with simple images showing exactly what they should do to launch it:

image of malware installer

The proof that this is a tried-and-trusted technique lies in the simple fact that these kinds of installers are rampant in the wild, and attackers don’t stick with unsuccessful strategies.

Notarization doesn’t raise the bar here for unsigned packages like the one we created above because the unsigned installer isn’t subject to a Notarization check. The app the package drops is, as we mentioned, free of the quarantine bit and so it too will not have to pass Notarization or any other Gatekeeper checks.

Conclusion

It’s hard to knock Apple for effort. They are clearly paying a lot of attention to security issues on macOS, but the Notarization requirements suffer the same flaw as other built-in technologies; namely, relying on the easily-removed and not always respected com.apple.quarantine bit. Meanwhile, developers and end users alike are being forced through a succession of hoops and dialogs. Given the simplicity of bypassing all this extra effort, don’t be surprised if Notarization makes very little impact on the current state of adware, PUPs and malware on macOS.



ScyllaDB takes on Amazon with new DynamoDB migration tool

There are a lot of open source databases out there, and ScyllaDB, a NoSQL variety, is looking to differentiate itself by attracting none other than Amazon users. Today, it announced a DynamoDB migration tool to help Amazon customers move to its product.

It’s a bold move, but Scylla, which has a free open source product along with paid versions, has always had a penchant for going after bigger players. It has had a tool to help move Cassandra users to ScyllaDB for some time.

CEO Dor Laor says DynamoDB customers can now also migrate existing code with little modification. “If you’re using DynamoDB today, you will still be using the same drivers and the same client code. In fact, you don’t need to modify your client code one bit. You just need to redirect access to a different IP address running Scylla,” Laor told TechCrunch.
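A minimal sketch of what that redirection looks like on the client side (the hostname and port are placeholders, not real Scylla defaults; the kwargs mirror what you would hand to a DynamoDB SDK client such as boto3's):

```python
# Sketch of the client-side change Laor describes: application code that calls
# DynamoDB operations is unchanged; only the endpoint the SDK targets moves.
# The hostname and port below are placeholders, not real Scylla defaults.

def dynamodb_client_kwargs(endpoint_url=None):
    """Build kwargs for a DynamoDB SDK client, e.g. boto3.client('dynamodb', **kwargs).
    With no endpoint_url the SDK talks to AWS; with one, it talks to Scylla."""
    kwargs = {"region_name": "us-east-1"}  # SDKs still require a region
    if endpoint_url is not None:
        kwargs["endpoint_url"] = endpoint_url  # redirect all requests here
    return kwargs

aws_kwargs = dynamodb_client_kwargs()
scylla_kwargs = dynamodb_client_kwargs("http://scylla-node.example.com:8000")
```

Everything else (table names, queries, the driver itself) stays untouched, which is the substance of the compatibility claim.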

He says that the reason customers would want to switch to Scylla is because it offers a faster and cheaper experience by utilizing the hardware more efficiently. That means companies can run the same workloads on fewer machines, and do it faster, which ultimately should translate to lower costs.

The company also announced a $25 million Series C extension led by Eight Roads Ventures. Existing investors Bessemer Venture Partners, Magma Venture Partners, Qualcomm Ventures and TLV Partners also participated. Scylla has raised a total of $60 million, according to the company.

The startup has been around for 6 years and customers include Comcast, GE, IBM and Samsung. Laor says that Comcast went from running Cassandra on 400 machines to running the same workloads with Scylla on just 60.

Laor is playing the long game in the database market, and it’s not about taking on Cassandra, DynamoDB or any other individual product. “Our main goal is to be the default NoSQL database where if someone has big data, real-time workloads, they’ll think about us first, and we will become the default.”

Explorium reveals $19.1M in total funding for machine learning data discovery platform

Explorium, a data discovery platform for machine learning models, received a couple of unannounced funding rounds over the last year — a $3.6 million seed round last September and a $15.5 million Series A round in March. Today, it made both of these rounds public.

The seed round was led by Emerge with participation from F2 Capital. The Series A was led by Zeev Ventures with participation from the seed investors. The total raised is $19.1 million.

The company founders, who have a data science background, found that it was problematic to find the right data to build a machine learning model. Like most good startup founders confronted with a problem, they decided to solve it themselves by building a data discovery platform for data scientists.

CEO and co-founder Maor Shlomo says that the company wanted to focus on the quality of the data because not much work has been done there. “A lot of work has been invested on the algorithmic part of machine learning, but the algorithms themselves have very much become commodities. The challenge now is really finding the right data to feed into those algorithms,” Shlomo told TechCrunch.

It’s a hard problem to solve, so they built a kind of search engine that can go out and find the best data wherever it happens to live, whether that’s internal, in an open data set, public data or premium databases. The company has partnered with thousands of data sources, according to Shlomo, to help data scientist customers find the best data for their particular model.

“We developed a new type of search engine that’s capable of looking at the customer’s data, connecting and enriching it with literally thousands of data sources, while automatically selecting what are the best pieces of data, and what are the best variables or features, which could actually generate the best performing machine learning model,” he explained.
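Explorium hasn't disclosed how that selection actually works, so the following is only a toy illustration of the general idea of ranking candidate enrichment features by predictive value; the scoring rule here (absolute Pearson correlation with the target) is a deliberate oversimplification of scoring against real model performance:

```python
# Toy sketch of automatic feature selection, NOT Explorium's proprietary method.
# Rank candidate features by absolute correlation with the target and keep the
# top k; a real system would score candidates by actual model lift instead.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(candidates, target, k=2):
    """candidates: dict of feature_name -> values, row-aligned with target."""
    ranked = sorted(candidates,
                    key=lambda name: abs(pearson(candidates[name], target)),
                    reverse=True)
    return ranked[:k]

target = [1.0, 2.0, 3.0, 4.0]
candidates = {
    "noise":   [5.0, 1.0, 4.0, 2.0],   # weakly related to the target
    "signal":  [2.0, 4.0, 6.0, 8.0],   # perfectly correlated
    "inverse": [4.0, 3.5, 2.0, 1.0],   # strongly anti-correlated, still predictive
}
print(select_features(candidates, target))  # ['signal', 'inverse']
```

Note that anti-correlated features rank highly too, since the absolute value treats negative relationships as equally predictive.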

Shlomo sees a big role for partnerships, whether that involves data sources or consulting firms, who can help push Explorium into more companies.

Explorium has 63 employees spread across offices in Tel Aviv, Kiev and San Francisco. It’s still early days, but Shlomo reports “tens of customers.” As more customers try to bring data science to their companies, especially with a shortage of data scientists, having a tool like Explorium could help fill that gap.

Kubernetes co-founder Craig McLuckie is as tired of talking about Kubernetes as you are

“I’m so tired of talking about Kubernetes. I want to talk about something else,” joked Kubernetes co-founder and VP of R&D at VMware Craig McLuckie during a keynote interview at this week’s Cloud Foundry Summit in The Hague. “I feel like that ’80s band that had like one hit song — Cherry Pie.”

He doesn’t quite mean it that way, of course (though it makes for a good headline, see above), but the underlying theme of the conversation he had with Cloud Foundry executive director Abby Kearns was that infrastructure should be boring and fade into the background, while enabling developers to do their best work.

“We still have a lot of work to do as an industry to make the infrastructure technology fade into the background and bring forwards the technologies that developers interface with, that enable them to develop the code that drives the business, etc. […] Let’s make that infrastructure technology really, really boring.”


What McLuckie wants to talk about is developer experience, and with VMware’s intent to acquire Pivotal, it’s placing a strong bet on Cloud Foundry as one of the premier development platforms for cloud-native applications. For the longest time, the Cloud Foundry and Kubernetes ecosystems, which both share an organizational parent in the Linux Foundation, have been getting closer, but that move has accelerated in recent months as the Cloud Foundry ecosystem has finished work on some of its Kubernetes integrations.

McLuckie argues that the Cloud Native Computing Foundation, the home of Kubernetes and other cloud-native, open-source projects, was always meant to be a kind of open-ended organization that focuses on driving innovation. And that created a large set of technologies that vendors can choose from.

“But when you start to assemble that, I tend to think about you building up this cake which is your development stack, you discover that some of those layers of the cake, like Kubernetes, have a really good bake. They are done to perfection,” said McLuckie, who is clearly a fan of the Great British Baking show. “And other layers, you look at it and you think, wow, that could use a little more bake, it’s not quite ready yet. […] And we haven’t done a great job of pulling it all together and providing a recipe that delivers an entirely consumable experience for everyday developers.”


He argues that Cloud Foundry, on the other hand, has always focused on building that highly opinionated, consistent developer experience. “Bringing those two communities together, I think, is going to have incredibly powerful results for both communities as we start to bring these technologies together,” he said.

With the Pivotal acquisition still in the works, McLuckie didn’t really comment on what exactly this means for the path forward for Cloud Foundry and Kubernetes (which he still talked about with a lot of energy, despite being tired of it). But it’s clear that he’s looking to Cloud Foundry to enable that developer experience on top of Kubernetes that abstracts all of the infrastructure away for developers and makes deploying an application a matter of a single CLI command.

Bonus: Cherry Pie.

Patch Tuesday, September 2019 Edition

Microsoft today issued security updates to plug some 80 security holes in various flavors of its Windows operating systems and related software. The software giant assigned a “critical” rating to almost a quarter of those vulnerabilities, meaning they could be used by malware or miscreants to hijack vulnerable systems with little or no interaction on the part of the user.

Two of the bugs quashed in this month’s patch batch (CVE-2019-1214 and CVE-2019-1215) involve vulnerabilities in all supported versions of Windows that have already been exploited in the wild. Both are known as “privilege escalation” flaws in that they allow an attacker to assume the all-powerful administrator status on a targeted system. Exploits for these types of weaknesses are often deployed along with other attacks that don’t require administrative rights.

September also marks the fourth time this year Microsoft has fixed critical bugs in its Remote Desktop Protocol (RDP) feature, with four critical flaws being patched in the service. According to security vendor Qualys, these Remote Desktop flaws were discovered in a code review by Microsoft, and in order to exploit them an attacker would have to trick a user into connecting to a malicious or hacked RDP server.

Microsoft also fixed another critical vulnerability in the way Windows handles link files ending in “.lnk” that could be used to launch malware on a vulnerable system if a user were to open a removable drive or access a shared folder with a booby-trapped .lnk file on it.

Shortcut files — or those ending in the “.lnk” extension — are Windows files that link easy-to-recognize icons to specific executable programs, and are typically placed on the user’s Desktop or Start Menu. It’s perhaps worth noting that poisoned .lnk files were one of the four known exploits bundled with Stuxnet, a multi-million dollar cyber weapon that American and Israeli intelligence services used to derail Iran’s nuclear enrichment plans roughly a decade ago.

In last month’s Microsoft patch dispatch, I ruefully lamented the utter hose job inflicted on my Windows 10 system by the July round of security updates from Redmond. Many readers responded by saying one or another updates released by Microsoft in August similarly caused reboot loops or issues with Windows repeatedly crashing.

As there do not appear to be any patch-now-or-be-compromised-tomorrow flaws in the September patch rollup, it’s probably safe to say most Windows end-users would benefit from waiting a few days to apply these fixes. 

Very often fixes released on Patch Tuesday have glitches that cause problems for an indeterminate number of Windows systems. When this happens, Microsoft then patches their patches to minimize the same problems for users who haven’t yet applied the updates, but it sometimes takes a few days for Redmond to iron out the kinks.

The trouble is, Windows 10 by default will install patches and reboot your computer whenever it likes. Here’s a tutorial on how to undo that. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Most importantly, please have some kind of system for backing up your files before applying any updates. You can use third-party software to do this, or just rely on the options built into Windows 10. At some level, it doesn’t matter. Just make sure you’re backing up your files, preferably following the 3-2-1 backup rule.
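The 3-2-1 rule means keeping three copies of your data, on two different types of media, with one copy off-site. It can be as simple as a short script. Here is a minimal, illustrative Python sketch (the paths and function name are made up for this example, not part of any particular backup product):

```python
import shutil
from pathlib import Path

def backup(source: str, destinations: list[str]) -> list[str]:
    """Copy `source` to each destination directory.

    With two destinations (say, an external drive and a cloud-synced
    folder), the original plus both copies give you the "3 copies,
    2 media" part of the 3-2-1 rule; at least one destination should
    live off-site.
    """
    made = []
    for dest in destinations:
        target = Path(dest) / Path(source).name
        # dirs_exist_ok lets repeated runs refresh an existing copy.
        shutil.copytree(source, target, dirs_exist_ok=True)
        made.append(str(target))
    return made
```

Purpose-built backup software adds versioning and verification on top of this, but even a crude copy beats having no backup at all when a patch goes sideways.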

Finally, Adobe fixed two critical bugs in its Flash Player browser plugin, which is bundled in Microsoft’s IE/Edge and Chrome (although now hobbled by default in Chrome). Firefox forces users with the Flash add-on installed to click in order to play Flash content; instructions for disabling or removing Flash from Firefox are here. Adobe will stop supporting Flash at the end of 2020.

As always, if you experience any problems installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips.

NY Payroll Company Vanishes With $35 Million

MyPayrollHR, a now defunct cloud-based payroll processing firm based in upstate New York, abruptly ceased operations this past week after stiffing employees at thousands of companies. The ongoing debacle, which allegedly involves malfeasance on the part of the payroll company’s CEO, resulted in countless people having money drained from their bank accounts and has left nearly $35 million worth of payroll and tax payments in legal limbo.

Unlike many stories here about cloud service providers being extorted by hackers for ransomware payouts, this snafu appears to have been something of an inside job. Nevertheless, it is a story worth telling, in part because much of the media coverage of this incident so far has been somewhat disjointed, but also because it should serve as a warning to other payroll providers about how quickly and massively things can go wrong when a trusted partner unexpectedly turns rogue.

Clifton Park, NY-based MyPayrollHR — a subsidiary of ValueWise Corp. — disclosed last week in a rather unceremonious message to some 4,000 clients that it would be shutting its virtual doors and that companies which relied upon it to process payroll payments should kindly look elsewhere for such services going forward.

This communique came after employees at companies that depend on MyPayrollHR to receive direct deposits of their bi-weekly payroll payments discovered their bank accounts were instead debited for the amounts they would normally expect to accrue in a given pay period.

To make matters worse, many of those employees found their accounts had been dinged for two payroll periods — a month’s worth of wages — leaving their bank accounts dangerously in the red.

The remainder of this post is a deep-dive into what we know so far about what transpired, and how such an occurrence might be prevented in the future for other payroll processing firms.

A $26 MILLION TEXT FILE

To understand what’s at stake here requires a basic primer on how most of us get paid, which is a surprisingly convoluted process. In a typical scenario, our employer works with at least one third party company to make sure that on every other Friday what we’re owed gets deposited into our bank account.

The company that handled that process for MyPayrollHR is a California firm called Cachet Financial Services. Every other week for more than 12 years, MyPayrollHR had submitted a file to Cachet that told it which employee accounts at which banks should be credited and by how much.

According to interviews with Cachet, the process worked something like this: MyPayrollHR would send a digital file documenting deposits made by each of its client companies, laying out the amounts owed to each client’s employees. Those funds from MyPayrollHR’s client firms would then be deposited into a settlement or holding account maintained by Cachet.

From there, Cachet would take those sums and disburse them into the bank accounts of people whose employers used MyPayrollHR to manage their bi-weekly payroll payments.

But according to Cachet, something odd happened with the instructions file MyPayrollHR submitted on the afternoon of Wednesday, Sept. 4 that had never before transpired: MyPayrollHR requested that all of its clients’ payroll dollars be sent not to Cachet’s holding account but instead to an account at Pioneer Savings Bank that was operated and controlled by MyPayrollHR.

The total amount of this mass payroll deposit was approximately $26 million. Wendy Slavkin, general counsel for Cachet, told KrebsOnSecurity that her client then inquired with Pioneer Savings about the wayward deposit and was told MyPayrollHR’s bank account had been frozen.

Nevertheless, the payroll file submitted by MyPayrollHR instructed financial institutions for its various clients to pull $26 million from Cachet’s holding account — even though the usual deposits from MyPayrollHR’s client banks had not been made.
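To see why that left Cachet exposed, it helps to model the settlement account as a simple ledger: in a normal cycle, client funds flow in before payroll payments flow out, so the account nets to zero. The Sept. 4 file triggered the outflow without the matching inflow. A simplified, hypothetical Python sketch of the mechanics (the function and figures are illustrative only):

```python
def settle(ledger_balance: float, client_deposits: float,
           payroll_instructions: float) -> float:
    """Credit client funds to the settlement account, then disburse payroll.

    In the normal case client_deposits equals payroll_instructions and
    the balance is unchanged. If the deposits are diverted elsewhere
    (client_deposits of zero) while the payroll file still debits the
    account, the processor absorbs the entire shortfall.
    """
    ledger_balance += client_deposits       # inflow from client companies
    ledger_balance -= payroll_instructions  # outflow to employees' banks
    return ledger_balance
```

Run with $26 million instructed out and nothing deposited in, the account ends up $26 million in the hole, which is precisely the position Cachet found itself in.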

REVERSING THE REVERSAL

In response, Cachet submitted a request to reverse that transaction. But according to Slavkin, that initial reversal request was improperly formatted, and so Cachet soon after submitted a correctly coded reversal request.

Financial institutions are supposed to ignore or reject payment instructions that don’t comport with precise formatting required by the National Automated Clearinghouse Association (NACHA), the not-for-profit organization that provides the backbone for the electronic movement of money in the United States. But Slavkin said a number of financial institutions ended up processing both reversal requests, meaning a fair number of employees at companies that use MyPayrollHR suddenly saw a month’s worth of payroll payments withdrawn from their bank accounts.
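The Nacha specification defines ACH files as fixed-width, 94-character records, each beginning with a one-digit record type code. An institution can reject an obviously malformed instruction with a check as simple as the following sketch (this is a minimal illustration, not a real NACHA parser; real validation covers field alignment, check digits, and batch totals):

```python
# Record type codes from the ACH file layout:
# 1 = file header, 5 = batch header, 6 = entry detail,
# 7 = addenda, 8 = batch control, 9 = file control.
VALID_RECORD_TYPES = {"1", "5", "6", "7", "8", "9"}

def is_well_formed(record: str) -> bool:
    """Minimal structural check for a single ACH record.

    Every record must be exactly 94 characters and start with a known
    record type code. This is the kind of gatekeeping that should stop
    a malformed reversal request before it is ever processed.
    """
    return len(record) == 94 and record[:1] in VALID_RECORD_TYPES
```

Had every receiving institution enforced the format strictly, the improperly coded first reversal would have been dropped everywhere rather than processed alongside the corrected one.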

Dan L’Abbe, CEO of the San Francisco-based consultancy Granite Solutions Groupe, said the mix-up has been massively disruptive for his 250 employees.

“This caused a lot of chaos for employers, but employees were the ones really affected,” L’Abbe said. “This is all very unusual because we don’t even have the ability to take money out of our employee accounts.”

Slavkin said Cachet managed to reach the CEO of MyPayrollHR — Michael T. Mann — via phone on the evening of Sept. 4, and that Mann said he would call back in a few minutes. According to Slavkin, Mann never returned the call. Not long after that, MyPayrollHR told clients that it was going out of business and that they should find someone else to handle their payroll.

In short order, many people hit by one or both payroll reversals took to Twitter and Facebook to vent their anger and bewilderment at Cachet and at MyPayrollHR. But Slavkin said Cachet ultimately decided to cancel the previous payment reversals, leaving Cachet on the hook for $26 million.

“What we have since done is reached out to 100+ receiving banks to have them reject both reversals,” Slavkin said. “So most — if not all — employees affected by this will in the next day or two have all their money back.”

THE VANISHING MANN

Cachet has since been in touch with the FBI and with federal prosecutors in New York, and Slavkin said both are now investigating MyPayrollHR and its CEO. On Monday, New York Governor Andrew Cuomo called on the state’s Department of Financial Services to investigate the company’s “sudden and disturbing shutdown.”

A tweet sent Sept. 11 by the FBI’s Albany field office.

The $26 million hit against Cachet wasn’t the only fraud apparently perpetrated by MyPayrollHR and/or its parent firm: According to Slavkin, the now defunct New York company also stiffed National Payment Corporation (NatPay) — the Florida-based firm which handles tax withholdings for MyPayrollHR clients — to the tune of more than $9 million.

In a statement provided to KrebsOnSecurity, NatPay said it was alerted late last week that the bank accounts of MyPayrollHR and one of its affiliated companies were frozen, and that the notification came after payment files were processed.

“NatPay was provided information that MyPayrollHR and Cloud Payroll may have been the victims of fraud committed by their holding company ValueWise, whose CEO and owner is Michael Mann,” NatPay said. “NatPay immediately put in place steps to manage the orderly process of recovering funds [and] has more than sufficient insurance to cover actions of attempted or real fraud.”

Requests for comment from different executives at both MyPayrollHR and its parent firm ValueWise Corp. went unanswered, and the latter’s Web site is now offline. Several erstwhile MyPayrollHR employees reached via LinkedIn said none of them had seen or heard from Mr. Mann in days.

Meanwhile, Granite Solutions Groupe CEO L’Abbe said some of his employees have seen their bank accounts credited back the money that was taken, while others are still waiting for those reversals to come through.

“It varies widely,” L’Abbe said. “Every bank processes differently, and everyone’s relationship with the bank is different. Others have absolutely no money right now and are having a helluva time with their bank believing this is all the result of fraud. Things are starting to settle down now, but a lot of employees are still in limbo with their bank.”

For its part, Cachet Financial says it will be looking at solutions to better detect when and if instructions from clients for funding its settlement accounts suddenly change.
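One simple safeguard along those lines: flag any instructions file whose destination account differs from the accounts used in prior cycles, and hold it for manual review instead of processing it automatically. A hedged, illustrative Python sketch (the account identifiers are invented for the example):

```python
def needs_review(history: list[str], instructed_account: str) -> bool:
    """Return True if the destination account deviates from past cycles.

    `history` is the list of settlement accounts used in previous
    payroll runs. Any instruction routing funds to a never-before-seen
    account gets held for human review, the kind of check that would
    catch a sudden switch from the processor's holding account to a
    client-controlled account. With no history, there is nothing to
    compare against, so the instruction passes through.
    """
    return bool(history) and instructed_account not in set(history)
```

A rule this crude would have tripped on the Sept. 4 file, which named a destination account that had never appeared in more than 12 years of submissions.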

“Our system is excellent at protecting against outside hackers,” Slavkin said. “But when it comes to something like this it takes everyone by complete surprise.”

With its Kubernetes bet paying off, Cloud Foundry doubles down on developer experience

More than 50% of the Fortune 500 companies are now using the open-source Cloud Foundry Platform-as-a-Service project — either directly or through vendors like Pivotal — to build, test and deploy their applications. Like so many other projects, including the likes of OpenStack, Cloud Foundry went through a bit of a transition in recent years as more and more developers started looking to containers — and especially the Kubernetes project — as a platform on which to develop. Now, however, the project is ready to focus on what always differentiated it from its closed- and open-source competitors: the developer experience.

Long before Docker popularized containers for application deployment, Cloud Foundry had already bet on containers and written its own container orchestration service. With all of the momentum behind Kubernetes, though, it’s no surprise that many in the Cloud Foundry community started to look at this newer project to replace the platform’s existing container technology.