The Good, the Bad and the Ugly in Cybersecurity – Week 44

Image of The Good, The Bad & The Ugly in CyberSecurity

The Good

The benefits of the DDW (Deep Dark Web) are beginning to shine through. Whether it is a site like SecureDrop (aka DeadDrop) that allows people to share information with journalists anonymously, or someone in Iran writing on Tor’s own website about how grateful they are to be able to get news “from the West” with less fear of persecution, these rays of light from the DDW are always welcome. So it was great to see that this week BBC News decided to host news web servers on the DDW. These are only accessible via Tor, so a user can’t accidentally visit a site without the protection of anonymization. Even better, the BBC is hosting translated, regionally-targeted sites in Arabic, Persian, Vietnamese and Russian to help people in those censored regions access unfiltered content from the West.

image of tor the onion router

The Bad

Although authorities at the Kudankulam Nuclear Power Plant (KKNPP) in India denied reports on Monday that the power plant had been compromised by malware, there is little doubt amongst the security community that bridging an air gap is entirely feasible. Myriad ways and means have been developed to jump air gaps via thumb drives, compromised laptops, or stealthy ad-hoc wireless networks. See AirHopper, COTTONMOUTH, or USBee as examples.

Citizens in India demanded an explanation, but were instead treated to a bland denial by the KKNPP.

“…the plant and other Indian nuclear power plants control systems stand alone and are not connected to outside cyber network and Internet. Hence, any cyberattack on Nuclear Power Plant Control System [is] not possible. Moreover, all the systems had been loaded with home-grown firewalls to check the hackers’ attempts, if any.”

I don’t know about you, but “air-gapped” and “home-grown firewalls” rarely belong in the same description of mission-critical infrastructure.

On Wednesday, plant authorities confirmed the compromise, while still asserting that mission-critical networks were not affected. This author has learned from decades supporting critical operational environments in the context of military operations that the phrase “isolated” often does not actually imply an air gap, but rather some combination of firewalls, data guards, and/or data diodes that logically, rather than physically, separate networks. A physically isolated mission-critical network would indeed be the norm for an operational nuclear facility. So then, what about those “home-grown firewalls” mentioned earlier in the week?

image of Indian Nuclear Power Plant
Image Credit: indiawaterportal.org/The Kudankulam Nuclear Power Plant (KKNPP)/Wikimedia Commons

The Ugly

Ransomware victims are paying upwards of $1m USD, and the trend is only getting worse. In a twist, some campaigns have first targeted the victim’s insurance documentation before holding their data for ransom. Patrick Cannon, head of enterprise risk claims at Tokio Marine Kiln Group Ltd, said he had heard of one incident where:

“…the insured said they couldn’t afford the ransom, so the attacker produced a copy of the insurance policy and said that, actually, their cyber insurance would cover it”

image of ryuk ransomware

A report by Beazley shows a 37% rise in ransomware this quarter compared to last, with a significant focus on IT organizations and MSSPs being hit. This uptick could be related to the recent re-emergence of Emotet-driven campaigns, or it could be the result of last spring’s Fin 9 and related MSSP-targeted campaigns by gift-carding operations having been discovered and “burned”: why not make additional profit on your way out by targeting both the MSSPs and their customers with ransomware? It seems that, for the unprotected at least, the dilemma posed by ransomware is not going away any time soon!


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

New Relic snags early-stage serverless monitoring startup IOpipe

As we move from a world dominated by virtual machines to one of serverless, it changes the nature of monitoring, and vendors like New Relic certainly recognize that. This morning the company announced it was acquiring IOpipe, a Seattle-based early-stage serverless monitoring startup, to help beef up its serverless monitoring chops. Terms of the deal weren’t disclosed.

New Relic gets what it calls “key members of the team,” which at least includes co-founders Erica Windisch and Adam Johnson, along with the IOpipe technology. The new employees will be moving from Seattle to New Relic’s Portland offices.

“This deal allows us to make immediate investments in onboarding that will make it faster and simpler for customers to integrate their [serverless] functions with New Relic and get the most out of our instrumentation and UIs that allow fast troubleshooting of complex issues across the entire application stack,” the company wrote in a blog post announcing the acquisition.

It adds that initially the IOpipe team will concentrate on moving AWS Lambda features like Lambda Layers into the New Relic platform. Over time, the team will work on increasing support for serverless function monitoring. New Relic is hoping that by combining the IOpipe team and solution with its own, it can accelerate its serverless monitoring capabilities.

Eliot Durbin, an investor at Bold Start, which led the company’s $2 million seed round in 2018, says both companies win with this deal. “New Relic has a huge commitment to serverless, so the opportunity to bring IOpipe’s product to their market-leading customer base was attractive to everyone involved,” he told TechCrunch.

The startup has been helping monitor serverless operations for companies running AWS Lambda. It’s important to understand that serverless doesn’t mean there are no servers, but the cloud vendor — in this case AWS — provides the exact resources to complete an operation, and nothing more.

IOpipe co-founders Erica Windisch and Adam Johnson

Photo: New Relic

Once the operation ends, the resources can simply get redeployed elsewhere. That makes building monitoring tools for such ephemeral resources a huge challenge. New Relic has also been working on the problem and released New Relic Serverless for AWS Lambda earlier this year.

As TechCrunch’s Frederic Lardinois pointed out in his article about the company’s $2.5 million seed round in 2017, Windisch and Johnson bring impressive credentials:

IOpipe co-founders Adam Johnson (CEO) and Erica Windisch (CTO), too, are highly experienced in this space, having previously worked at companies like Docker and Midokura (Adam was the first hire at Midokura and Erica founded Docker’s security team). They recently graduated from the Techstars NY program.

IOpipe was founded in 2015, which was just around the time that Amazon was announcing Lambda. At the time of the seed round the company had eight employees. According to PitchBook data, it currently has between 1 and 10 employees, and has raised $7.07 million since its inception.

New Relic was founded in 2008 and has raised more than $214 million, according to Crunchbase, before going public in 2014. Its stock price was $65.42 at the time of publication, up $1.40.

Building A Custom Tool For Shellcode Analysis

The Zero2Hero malware course continues with Daniel Bunce demonstrating how to write a custom tool to load, execute and debug malicious shellcode in memory.

image of shellcode analysis

Recently, FireEye published a blog post detailing malware used by APT41, specifically the DEADEYE first stage, capable of downloading, dropping or loading payloads on an infected system, and the LOWKEY backdoor. They also described an “RC4 Layer”: Position Independent Code (PIC) that RC4-decrypts an embedded payload and loads it into memory using its reflective DLL loader capabilities.

Unlike Windows executables, shellcode doesn’t have any headers, meaning the Windows loader cannot execute standalone shellcode. As a result, debugging is impossible without an external tool to load and execute shellcode for you. Therefore, in this blog post, we will cover how to write a tool in C to load shellcode into memory and wait until a debugger is attached before executing it. But first, why do we need to debug shellcode?

Why Debugging Position Independent Shellcode is Useful

Position Independent Code can be executed anywhere in memory without any issues. This means there are no hardcoded addresses and no import table, so APIs such as GetProcAddress or LoadLibrary cannot simply be called directly and must instead be resolved at runtime, not to mention a few other complications.

As a result, static analysis of the shellcode can take a while to fully understand, as the shellcode is forced to manually look up addresses that may not be known without debugging. Furthermore, plenty of malware utilizes hashing when looking up APIs, so a hash will be passed into a function that hashes each export of the DLL in question until a matching pair is found. Whilst the hashing routine can be replicated so that the correct API can be identified statically, there are many hashing algorithms out there, from CRC hashing up to custom hashing algorithms. This means there will be plenty of situations where you have to update your script to include an additional algorithm, slowing down analysis further (a minimal sketch of this kind of hash-based export lookup is shown at the end of this section). So why not just load it into memory yourself and execute when a debugger is attached?

Well, you can. The guys at OALabs have created an extremely helpful tool called BlobRunner to do exactly that. However, rather than simply use a pre-existing tool, in this post we’re going to be focusing on writing our own, as it’s not too difficult to do so, and shellcode execution isn’t uncommon inside malware, so knowing the internals of how it works will help when it comes to recognizing it inside of a sample.
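
To make the hashing discussion above more concrete, here is a minimal, hypothetical sketch in C of how a hash-based export lookup works. The ROR13 hash and the WinExec example are illustrative choices only; real samples (including LOWKEY) use their own algorithms and typically find module bases by walking the PEB rather than calling GetModuleHandleA().

    // A hypothetical ROR13-based export lookup, for illustration only.
    #include <windows.h>
    #include <stdio.h>
    #include <stdint.h>

    // Hash an ASCII export name by rotating right 13 bits per character.
    static uint32_t ror13_hash(const char *name)
    {
        uint32_t hash = 0;
        while (*name) {
            hash = (hash >> 13) | (hash << 19); // rotate right by 13
            hash += (uint8_t)*name++;
        }
        return hash;
    }

    // Walk a loaded module's export table and return the export whose
    // hashed name matches target_hash, or NULL if none does.
    static FARPROC resolve_by_hash(HMODULE module, uint32_t target_hash)
    {
        BYTE *base = (BYTE *)module;
        IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
        IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
        DWORD exp_rva = nt->OptionalHeader
                          .DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT]
                          .VirtualAddress;
        IMAGE_EXPORT_DIRECTORY *exports = (IMAGE_EXPORT_DIRECTORY *)(base + exp_rva);

        DWORD *names = (DWORD *)(base + exports->AddressOfNames);
        WORD *ordinals = (WORD *)(base + exports->AddressOfNameOrdinals);
        DWORD *functions = (DWORD *)(base + exports->AddressOfFunctions);

        for (DWORD i = 0; i < exports->NumberOfNames; i++) {
            const char *name = (const char *)(base + names[i]);
            if (ror13_hash(name) == target_hash)
                return (FARPROC)(base + functions[ordinals[i]]);
        }
        return NULL;
    }

    int main(void)
    {
        // Resolve kernel32!WinExec purely by hash as a demonstration.
        HMODULE k32 = GetModuleHandleA("kernel32.dll");
        uint32_t hash = ror13_hash("WinExec");
        printf("WinExec resolved to %p\n", (void *)resolve_by_hash(k32, hash));
        return 0;
    }

Replicating a routine like this for every new hashing algorithm is exactly the per-sample busywork that debugging the shellcode directly lets you avoid.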

Overview of the Shellcode Analysis Tool’s Routine

So, the tool we’re writing needs to be able to read the shellcode from a file, allocate a region of memory large enough to accommodate the shellcode, write the shellcode into said region of memory, wait until a debugger has been attached, and then execute it. As this is for debugging and not behavioural analysis, we need to wait until a debugger has been attached to prevent the shellcode from running before we attach to it. The shellcode is also 64-bit, and as the Visual C++ compiler does not support inline assembly for 64-bit applications, we’ll have to use CreateThread() to execute the shellcode, rather than using a jmp. Anyway, with that covered, let’s take a look at what the main() function will look like.

Writing the main() Function

This function is set up to accept two arguments: the filename and the entry point offset.

image of main function

In the case of the loader, without analyzing what executes it, the entry point appears to be at offset 0x80. If we don’t pass in a second argument (the offset), the program will attempt to execute the shellcode from offset 0x00. The offset is converted to an integer using strtol(), and then the function read_into_memory() is called, which, as the name suggests, reads the shellcode into process memory. This returns the base address of the shellcode, which is then passed into the function execution(), along with the entry point offset. As you can see, the layout is fairly simple – unlike loading an executable into process memory, we don’t need to map the sections of the shellcode into memory, and can simply execute it once it has been copied in. With that said, let’s move on to the other functions we need.
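
Before doing that, here is a minimal reconstruction of what the described main() might look like. The screenshot above shows the original code; the argument handling, strtol() base and messages below are assumptions rather than a copy of the author’s version.

    // A sketch of the described main(): take a shellcode file and an optional
    // entry point offset, load the file into memory, then hand off to execution().
    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    LPVOID read_into_memory(const char *path, SIZE_T *size); // defined below
    void execution(LPVOID base, long entry_offset);          // defined below

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            printf("Usage: %s <shellcode file> [entry point offset]\n", argv[0]);
            return 1;
        }

        // Default to offset 0x00; a second argument such as "0x80" overrides it.
        long entry_offset = 0;
        if (argc > 2)
            entry_offset = strtol(argv[2], NULL, 0); // base 0 accepts "0x" prefixes

        SIZE_T size = 0;
        LPVOID base = read_into_memory(argv[1], &size);
        if (base == NULL) {
            printf("[!] Failed to read shellcode into memory\n");
            return 1;
        }

        printf("[+] Shellcode loaded at %p (%llu bytes)\n",
               base, (unsigned long long)size);
        execution(base, entry_offset);
        return 0;
    }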

Writing the read_into_memory() Function

This function is responsible for loading the shellcode from a file into an allocated region of memory.

image of read_into_memory function

To do this, we first need the size of the shellcode, which we can get using fseek() and ftell(). This is then passed into VirtualAlloc(), along with PAGE_EXECUTE_READWRITE, allowing us to execute that region of memory without calling VirtualProtect(). The returned address is stored in the buffer variable, which is then passed into fread(), along with the handle to the open file. After the shellcode has been read into memory, we return the address in buffer back to the calling function for use in the execution() function, which we will move on to now.
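
First, though, a sketch of read_into_memory() along the lines just described. It reuses the includes from the main() sketch above; the size out-parameter is my own addition for the status message and is not part of the original design.

    // Read the shellcode file into an RWX region: size it with fseek()/ftell(),
    // allocate with VirtualAlloc(), then copy it in with fread().
    LPVOID read_into_memory(const char *path, SIZE_T *size)
    {
        FILE *fp = fopen(path, "rb");
        if (fp == NULL)
            return NULL;

        // Determine the size of the shellcode.
        fseek(fp, 0, SEEK_END);
        long file_size = ftell(fp);
        fseek(fp, 0, SEEK_SET);

        // PAGE_EXECUTE_READWRITE means we never need to call VirtualProtect().
        LPVOID buffer = VirtualAlloc(NULL, file_size,
                                     MEM_COMMIT | MEM_RESERVE,
                                     PAGE_EXECUTE_READWRITE);
        if (buffer == NULL) {
            fclose(fp);
            return NULL;
        }

        // Copy the shellcode into the allocated region.
        fread(buffer, 1, file_size, fp);
        fclose(fp);

        if (size != NULL)
            *size = (SIZE_T)file_size;
        return buffer;
    }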

Writing the execution() Function

The execution() function is the most important part of the entire tool, as without it we’d still only have static shellcode.

image of execution function

To begin with, we add the entry point offset to the base address of the shellcode, so if the allocated region of memory is at 0x20000, the actual entry point would be 0x20080 for our LOWKEY reflective DLL loader. We then pass this into a call to CreateThread(), along with the value 0x4 (CREATE_SUSPENDED), which indicates we want to create the thread in a suspended state. This prevents any execution of the shellcode until we resume the thread once a debugger is attached, which is handled by a simple while() loop that repeatedly calls IsDebuggerPresent() until a debugger is detected. From there, the thread is resumed with a call to ResumeThread(), and then we enter another loop that calls WaitForSingleObject() every second to check whether the thread has exited. If so, we close the handle and return from the function!
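
Here is a sketch of execution() along those lines, again reconstructed from the description rather than copied from the screenshot (it uses the same includes as the main() sketch above):

    // Create the shellcode thread suspended (CREATE_SUSPENDED == 0x4), wait for
    // a debugger to attach, resume the thread, then poll until it exits.
    void execution(LPVOID base, long entry_offset)
    {
        // Entry point = base address + offset, e.g. 0x20000 + 0x80 = 0x20080.
        LPTHREAD_START_ROUTINE entry =
            (LPTHREAD_START_ROUTINE)((BYTE *)base + entry_offset);

        HANDLE thread = CreateThread(NULL, 0, entry, NULL, CREATE_SUSPENDED, NULL);
        if (thread == NULL) {
            printf("[!] CreateThread failed: %lu\n", GetLastError());
            return;
        }

        printf("[*] Suspended thread created at %p - attach your debugger now\n",
               (void *)entry);

        // Spin until a debugger is attached to this process.
        while (!IsDebuggerPresent())
            Sleep(100);

        printf("[+] Debugger detected, resuming thread\n");
        ResumeThread(thread);

        // Check once a second whether the shellcode thread has finished.
        while (WaitForSingleObject(thread, 1000) == WAIT_TIMEOUT)
            ;

        CloseHandle(thread);
    }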

Compile Time!

That pretty much wraps up the tool, so let’s go ahead and compile it to test it out!

animated image of compiled shellcode analysis tool

As you can see, everything runs smoothly and we’re able to see which API each hash represents, a lot faster than if we were writing a script to calculate it!

So, now that we know how to read and execute shellcode in memory, recognizing it in malware should be fairly easy! In this case, the DEADEYE payload that executes the shellcode is packed and protected with VMProtect, making it very difficult to locate the function responsible for loading and executing the payload; however, the unpacked payloads can be found on Malpedia here for those that have an account.



How you react when your systems fail may define your business

Just around 9:45 a.m. Pacific Time on February 28, 2017, websites like Slack, Business Insider, Quora and other well-known destinations became inaccessible. For millions of people, the internet itself seemed broken.

It turned out that Amazon Web Services was having a massive outage involving S3 storage in its Northern Virginia datacenter, a problem that created a cascading impact and culminated in an outage that lasted four agonizing hours.

Amazon eventually figured it out, but you can only imagine how stressful it might have been for the technical teams who spent hours tracking down the cause of the outage so they could restore service. A few days later, the company issued a public post-mortem explaining what went wrong and which steps they had taken to make sure that particular problem didn’t happen again. Most companies try to anticipate these types of situations and take steps to keep them from ever happening. In fact, Netflix came up with the notion of chaos engineering, where systems are tested for weaknesses before they turn into outages.

Unfortunately, no tool can anticipate every outcome.

It’s highly likely that your company will encounter a problem of immense proportions like the one that Amazon faced in 2017. It’s what every startup founder and Fortune 500 CEO worries about — or at least they should. What will define you as an organization, and how your customers will perceive you moving forward, will be how you handle it and what you learn.

We spoke to a group of highly trained disaster experts to learn more about preventing these types of moments from having a profoundly negative impact on your business.

It’s always about your customers

Reliability and uptime are so essential to today’s digital businesses that enterprise companies developed a new role, the Site Reliability Engineer (SRE), to keep their IT assets up and running.

Tammy Butow, principal SRE at Gremlin, a startup that makes chaos engineering tools, says the primary role of the SRE is keeping customers happy. If the site is up and running, that’s generally the key to happiness. “SRE is generally more focused on the customer impact, especially in terms of availability, uptime and data loss,” she says.

Companies measure uptime according to the so-called “five nines,” or 99.999 percent availability (a little over five minutes of downtime per year), but software engineer Nora Jones, who most recently led Chaos Engineering and Human Factors at Slack, says there is often too much emphasis on this number. According to Jones, the focus should be on the customer and the impact that availability has on their perception of you as a company and on your business’s bottom line.

Someone needs to be calm and just keep asking the right questions.

“It’s money at the end of the day, but also over time, user sentiment can change [if your site is having issues],” she says. “How are they thinking about you, the way they talk about your product when they’re talking to their friends, when they’re talking to their family members. The nines don’t capture any of that.”

Robert Ross, founder and CEO at FireHydrant, an SRE as a Service platform, says it may be time to rethink the idea of the nines. “Maybe we need to change that term. Maybe we can popularize something like ‘happiness level objectives’ or ‘happiness level agreements.’ That way, the focus is on our products.”

When things go wrong

Companies go to great lengths to prevent disasters and avoid disappointing their customers, and usually have contingencies for their contingencies, but sometimes, no matter how well they plan, crises can spin out of control. When that happens, SREs need to execute, and that takes planning too: knowing what to do when the going gets tough.

Ransomware Attacks: To Pay or Not To Pay? Let’s Discuss

This year has seen an escalation in the number of ransomware attacks striking organizations, with both private and public sector agencies like local government and education firmly in the firing line of threats such as Ryuk and RobbinHood ransomware. Often understaffed and under-resourced, those responsible for delivering critical public services are at the sharp end of the dilemma: to pay or not to pay? It’s a quandary that has technical, ethical, legal, safety and, of course, financial dimensions. In this post, we explore the arguments both for and against. Our aim here is to describe the implications and rationale from both angles across a number of different considerations.


Is Paying a Ransom to Stop a Ransomware Attack Illegal?

It may seem odd to some, but it isn’t illegal to pay a ransomware demand, even though the forced encryption of someone else’s data and demand for payment is itself a federal crime under at least the Computer Fraud and Abuse Act and the Electronic Communications Privacy Act, as well as many laws passed by State legislatures.

One might argue that the best way to solve the ransomware epidemic would be to make it illegal for organizations to pay. Criminals are naturally only interested in the payoff, and if that route to the payday were simply proscribed by law, it would very quickly lead both to companies exploring other options to deal with ransomware and, at least in theory, to criminals moving toward some other endeavour with an easier payout.

In sum, one could argue that it is the ease with which criminals can be paid and the perceived anonymity of crypto payment that helps foster the continuance of the ransomware threat. 

The idea of outlawing the payment of ransomware demands might seem appealing at first, until you unpack the idea to think how it would work in practice. Publicly traded companies have a legal duty to shareholders; public service companies have legally binding commitments to serve their communities. A law that threatened to fine organizations, or perhaps imprison staff, would be hugely controversial in principle and likely difficult to enforce in practice, quite aside from the ethics of criminalizing the victim of a crime whose sole intent is to coerce that victim into making a payment.

Imagine a prosecutor attempting to convince a court that an employee – whose actions, say, restored a critical public service and saved the taxpayer millions of dollars after authorizing a five figure ransomware payment – should be jailed. How would that, in principle, be different from prosecuting a parent for securing the safety of a child by paying off kidnappers? It doesn’t look like an easy case to win, particularly when the employee (or organization) might cite legitimate extenuating circumstances such as preserving life or other legal obligations. 

Is It Ethical To Pay a Ransomware Demand?

If it’s not illegal to pay a ransomware demand, that still leaves open the separate question of whether it’s ethical. There are a couple of different angles that can be taken on this one. According to some interpretations of ethics, something is a “good” or “right” decision if it leads to an overall benefit for the community.

On this pragmatic conception of ethics, one might argue that paying a ransomware demand that restores some vital service or unlocks some irreplaceable data outweighs the ‘harm’ of rewarding and encouraging those engaged in criminal behavior. 

On the other hand, it could be argued that what is right, or ethical, is distinct from what is a pragmatic or merely expedient solution. Indulging in a fantastical thought-experiment for a moment, would we consider it ethical if a ransomware author demanded the life of a person, instead of money, to release data that would save the lives of thousands of others? Many would have a strong intuition that it would always be unethical to murder one innocent to protect the lives of others. And that suggests that what is “right” and “wrong” might not revolve around a simple calculation of perceived benefits. 

The real problem with the pragmatic approach, however, is that there’s no agreement on how to objectively calculate the outcome of different ethical choices. More often than not, the weight we give to different ethical choices merely reflects our bias toward the choice we are naturally predisposed to make.

If pragmatism can’t help inform us of whether it’s ethical or not to pay ransomware, we could look to a different view of ethics that suggests we should consider actions “right” or “wrong” insofar as they reflect the values of the kind of society we want to live in. This view is sometimes expressed more simply as a version of the “do unto others as you would have them do unto you” maxim. A more accurate way to parse it might be to ask: do we want to live in a society where we think it’s right (ethical) to pay those who engage in criminal behavior? Is this a maxim that we would want to teach our children? Put in those terms, many would perhaps say not.

Is It Prudent To Pay a Ransomware Demand?

Even if we might have a clear idea of the legal situation and a particular take on our own ethical stance, the question of whether to pay or not to pay raises other issues. We are not entirely done with the pragmatics of the ransomware dilemma. We may still feel inclined to make an unethical choice in light of other, seemingly more pressing concerns. 

There is a real, tangible pressure on making a choice that could save your organization or your city millions of dollars, or which might spare weeks of downtime of a critical service.

Even if they believe it would be technically unethical to do so, some people may judge that today’s hard reality simply takes precedence over loftier principles.

A case in point: recently, three Alabama hospitals paid a ransom in order to resume operations. The hospitals’ spokesperson said: 

“We worked with law enforcement and IT security experts to assess all options in executing the solution we felt was in the best interests of our patients and in alignment with our health system’s mission. This included purchasing a decryption key from the attackers to expedite system recovery and help ensure patient safety.”

This “hard reality” perspective is reflected in recent changes made to the FBI’s official guidance on ransomware threats. 

“…the FBI understands that when businesses are faced with an inability to function, executives will evaluate all options to protect their shareholders, employees, and customers.”

However, the possibility that the criminals will not hold up their side of the bargain must be factored into any decision about paying a ransomware demand. In some cases, decryption keys are not even available, and in others, the ransomware authors simply didn’t respond once they were paid. We saw this to some degree with WannaCry: in the flurry of the outbreak, some victims paid and got keys, yet a large number either never heard from the authors or found that the key pairs between victim and server were unmatched, making per-user decryption impossible.

A further point to consider when weighing up the prudence of acquiescing to the demand for payment is how this will affect your organization beyond the present attack itself. Will paying harm your reputation or earn you plaudits? Will other – or even the same – attackers now see you as a soft target and look to strike you again? Will your financial support for the criminals’ enterprise lead to further attacks against other companies, or services, that you yourself rely on? In other words, will giving in to the ransomware demand produce worse long-term effects than the immediate ones it seems – if the attackers deliver on their promise – to solve? 

What Happens If I Don’t Pay A Ransom for Ransomware Attacks?

If you choose not to pay the ransom, then of course you are in the very same position the ransomware attacker first put you in by encrypting all your files in order to “twist your arm” into paying. 

Depending on what kind of ransomware infection you have, there is some possibility that a decryptor already exists for that strain; less likely, but not unheard of, is the possibility that an expert analysis team may discover a way to decrypt your files. A lot of ransomware is poorly written and poorly implemented, and it may be that all is not as lost as it at first seems.

The NoMoreRansom Project is a collaborative effort between global law enforcement agencies and private security industry partners. It hosts a large repository of stand-alone decryption tools, which are constantly updated by industry partners.

This can be a very valuable resource when evaluating your course of action when facing a ransomware attack.

Also consider whether you have inventoried all possible backup and recovery options. Many look no further than the Maersk shipping story during the NotPetya attack to emphasize the importance of being able to rapidly restore one’s entire infrastructure from backup. The most eye-opening realization for Maersk (and indeed the entire industry) was that recovery depended on a happy accident: a single domain controller escaped infection thanks to a local power outage at the site where it resided. Without that fortunate coincidence, it would have taken far longer to rebuild the entire infrastructure after 50,000 devices and thousands of apps were destroyed all at once.

Some hail this as a success story for backups, but shareholders and operators on board the thousands of ships worldwide are quick to remind us that the incident still cost the company well over half a billion dollars in the six months that followed. While backup and restoration are indeed critical, they are by no means the primary basis of a strategy to address the threat of ransomware.

Finally, there is the worst-case scenario, where you have no backups and no recovery software, and you will have to dig yourself out by rebuilding data, services and, perhaps, your reputation from the ground up. Transparency is undoubtedly your best bet in that kind of scenario. Admit to past mistakes, commit to learning the lessons, and stand tall on your ethical decision not to reward criminal behavior.

What Happens If I Pay A Ransom for Ransomware Attacks?

There is perhaps more uncertainty in paying than there is in not paying. At least when you choose not to pay a ransomware demand, what happens next is in your hands. In handing over whatever sum the ransomware attacker demands, you remain in their clutches until or unless they provide a working decryption key. 

Before going down the road of paying, look for experienced advisors and consultants to help negotiate with the extortionists. Despite the often taunting ransomware notes, some ransomware groups will engage in negotiating terms if they think it will improve their chances of a payday.

Tactics like asking for ‘proof of life’ to decrypt a portion of the environment up front prior to payment, or to negotiate payment terms like 50% up front, and 50% only after the environment has been decrypted, can work with some groups, albeit not with others.

The vast majority of ransoms are still paid in bitcoin, which is not an anonymous or untraceable currency. If you do feel forced to pay, you can work with the FBI and share wallet and payment details. Global law enforcement is keen to track where the money moves.

And where do you go beyond that? Any sensible organization must realize the need for urgent investment in determining not only the vector of the attack but all other vulnerabilities, as well as rolling out a complete cybersecurity solution that can block and roll back ransomware attacks in the future. These are all costs that need to be borne regardless of whether or not you pay, and the temptation to take the quick, easy way out rather than working through the entire problem risks leaving holes that may be exploited in the future. Balance the need for speed of recovery against several risks:

  1. Unknown back doors the attackers leave on systems
  2. Partial data recovery (note some systems will not be recovered at all)
  3. Zero recovery after payment (it is rare, but in some cases the decryption key provided is completely useless or, worse, never sent)

Finally, note that some organizations that appear to get hit successively by the same actors may actually have been compromised only once, with encryption payloads triggered in subsequent waves. Experience pays off tremendously in all of these scenarios, and ‘knowing thy enemy’ can make all the difference.

Conclusion

Pay or don’t pay, make sure you notify the proper law enforcement agency:

“Regardless of whether you or your organization have decided to pay the ransom, the FBI urges you to report ransomware incidents to law enforcement. Doing so provides investigators with the critical information they need to track ransomware attackers, hold them accountable under U.S. law, and prevent future attacks”.

At SentinelOne, we concur with the FBI that paying criminals for criminal activity is no way to put an end to criminal behavior. We understand the technical, ethical, and financial impacts that ransomware has on a business. It’s why we offer a ransomware guarantee with a trusted security solution that can both block known and unknown ransomware activity and roll back your protected devices to a healthy state without recourse to backups or lengthy reinstallation. If you’d like to find out more, contact us today or request a free demo.



Samsung ramps up its B2B partner and developer efforts

Chances are you mostly think of Samsung as a consumer-focused electronics company, but it actually has a very sizable B2B business as well, which serves more than 15,000 large enterprises and hundreds of thousands of SMB entrepreneurs via its partners. At its developer conference this week, it’s putting the spotlight squarely on this side of its business — with a related hardware launch as well. The focus of today’s news, however, is on Knox, Samsung’s mobile security platform, and Project AppStack (which will likely get a different name soon), which provides B2B customers with a new mechanism to deliver SaaS tools and native apps to their employees’ devices, as well as new tools for developers that make these services more discoverable.

At least in the U.S., Samsung hasn’t really marketed its B2B business all that much. With this event, the company is clearly looking to change that.

At its core, Samsung is, of course, a hardware company, and as Taher Behbehani, the head of its U.S. mobile B2B division, told me, Samsung’s tablet sales actually doubled in the last year, and most of these were for industrial deployments and business-specific solutions. To better serve this market, the company today announced that it is bringing the rugged Tab Active Pro to the U.S. market. Previously, it was only available in Europe.

The Active Pro, with its 10.1″ display, supports Samsung’s S Pen, as well as DeX for using it on the desktop. It’s got all of the dust- and water-resistance you would expect from a rugged device, is rated to easily survive drops from about four feet high and promises up to 15 hours of battery life. It also features LTE connectivity and has an NFC reader on the back to allow you to badge into a secure application or take contactless payments (which are quite popular in most of the world but only very slowly becoming a thing in the U.S.), as well as a programmable button that lets business users and frontline workers open any application they select (like a barcode scanner).

“The traditional rugged devices out there are relatively expensive, relatively heavy to carry around for a full shift,” Samsung’s Chris Briglin told me. “Samsung is growing that market by serving users that traditionally haven’t been able to afford rugged devices or have had to share them between up to four co-workers.”

Today’s event is less about hardware than software and partnerships, though. At the core of the announcements is the new Knox Partner Program, a new way for partners to create and sell applications on Samsung devices. “We work with about 100,000 developers,” said Behbehani. “Some of these developers are inside companies. Some are outside independent developers and ISVs. And what we hear from these developer communities is when they have a solution or an app, how do I get that to a customer? How do I distribute it more effectively?”

This new partner program is Samsung’s solution for that. It’s a three-tier partner program that’s an evolution of the existing Samsung Enterprise Alliance program. At the most basic level, partners get access to support and marketing assets. At all tiers, partners can also get Knox validation for their applications to highlight that they properly implement all of the Knox APIs.

The free Bronze tier includes access to Knox SDKs and APIs, as well as licensing keys. At the Silver level, partners will get support in their region, while Gold-level members get access to the Samsung Solutions Catalog, as well as the ability to be included in the internal catalog used by Samsung sales teams globally. “This is to enable Samsung teams to find the right solutions to meet customer needs, and promote these solutions to its customers,” the company writes in today’s announcement. Gold-level partners also get access to test devices.

The other new service that will enable developers to reach more enterprises and SMBs is Project AppStack.

“When a new customer buys a Samsung device, no matter if it’s an SMB or an enterprise, depending on the information they provide to us, they get to search for and they get to select a number of different applications specifically designed to help them in their own vertical and for the size of the business,” explained Behbehani. “And once the phone is activated, these apps are downloaded through the ISV or the SaaS player through the back-end delivery mechanism which we are developing.”

For large enterprises, Samsung also runs an algorithm that looks at the size of the business and the vertical it is in to recommend specific applications, too.

Samsung will run a series of hackathons over the course of the next few months to figure out exactly how developers and its customers want to use this service. “It’s a module. It’s a technology backend. It has different components to it,” said Behbehani. “We have a number of tools already in place, we have to fine-tune others and we also, to be honest, want to make sure that we come up with a POC in the marketplace that accurately reflects the requirements and the creativity of what the demand is in the marketplace.”

Google launches TensorFlow Enterprise with long-term support and managed services

Google open-sourced its TensorFlow machine learning framework back in 2015 and it quickly became one of the most popular platforms of its kind. Enterprises that wanted to use it, however, had to either work with third parties or do it themselves. To help these companies — and capture some of this lucrative market itself — Google is launching TensorFlow Enterprise, which includes hands-on, enterprise-grade support and optimized managed services on Google Cloud.

One of the most important features of TensorFlow Enterprise is that it will offer long-term support. For some versions of the framework, Google will offer patches for up to three years. For what looks to be an additional fee, Google will also offer engineering assistance from its Google Cloud and TensorFlow teams to companies that are building AI models.

All of this, of course, is deeply integrated with Google’s own cloud services. “Because Google created and open-sourced TensorFlow, Google Cloud is uniquely positioned to offer support and insights directly from the TensorFlow team itself,” the company writes in today’s announcement. “Combined with our deep expertise in AI and machine learning, this makes TensorFlow Enterprise the best way to run TensorFlow.”

Google also includes Deep Learning VMs and Deep Learning Containers to make getting started with TensorFlow easier, and the company has optimized the enterprise version for Nvidia GPUs and Google’s own Cloud TPUs.

Today’s launch is yet another example of Google Cloud’s focus on enterprises, a move the company accelerated when it hired Thomas Kurian to run its cloud business. After years of mostly ignoring the enterprise, the company is now clearly looking at what enterprises are struggling with and how it can adapt its products for them.

Breaches at NetworkSolutions, Register.com, and Web.com

Top domain name registrars NetworkSolutions.com, Register.com and Web.com are asking customers to reset their passwords after discovering an intrusion in August 2019 in which customer account information was accessed.

A notice to customers at notice.web.com.

“On October 16, 2019, Web.com determined that a third-party gained unauthorized access to a limited number of its computer systems in late August 2019, and as a result, account information may have been accessed,” Web.com said in a written statement. “No credit card data was compromised as a result of this incident.”

The Jacksonville, Fla.-based Web.com said the information exposed includes “contact details such as name, address, phone numbers, email address and information about the services that we offer to a given account holder.”

The “such as” wording made me ask whether the company has any reason to believe passwords — scrambled or otherwise — were accessed.

A spokesperson for Web.com later clarified that the company does not believe customer passwords were accessed.

“We encrypt account passwords and do not believe this information is vulnerable as a specific result of this incident. As an added precautionary measure, customers will be required to reset passwords the next time they log in to their accounts. As with any online service or platform, it is also good security practice to change passwords often and use a unique password for each service.”

Both Network Solutions and Register.com are owned by Web.com. Network Solutions is now the world’s fifth-largest domain name registrar, with almost seven million domains in its stable, according to domainstate.com; Register.com is listed at #17, with 1.7 million domains.

Neither Web.com nor NetworkSolutions.com currently appears to link to any information about the incident on its homepage. To get to the advisory, one needs to visit notice.web.com.

Web.com said it has reported the incident to law enforcement and hired an outside security firm to investigate further, and is in the process of notifying affected customers through email and via its website.

The company says it plans to circle back with customers when it learns the results of its investigation, but I wonder whether we’ll ever hear more about this breach.

Web.com wasn’t clear how long the intrusion lasted, but if the breach wasn’t detected until mid-October that means the intruders potentially had about six weeks inside unnoticed. That’s a long time for an adversary to wander about one’s network, and plenty of time to steal a great deal more information than just names, addresses and phone numbers.

H/T to domaininvesting.com’s Elliot Silver for the heads up on this notification.

WeFarm rakes in $13M to grow its marketplace and network for independent farmers

Networks like Facebook and LinkedIn exert a huge gravitational force in the world of social media — the size of their audiences makes them important platforms for advertising and for those who want information (for better or worse) to reach as many people as possible. But alongside their growth, we’re seeing a lasting role for platforms and networks focused on narrower special interests, and today one of them — focused on farmers, of all communities — is picking up a round of funding to propel its growth.

WeFarm, a marketplace and networking site for small-holder farmers (that is, those whose farms are not controlled by large agribusinesses), has raised $13 million in a Series A round of funding, with plans to use the money to continue adding more users — farmers — and more services geared to their needs.

The round, which brings the total raised by the company to a modest $20 million, is being led by True Ventures, with AgFunder, June Fund, previous investors LocalGlobe, ADV and Norrsken Foundation, and others also participating.

WeFarm today has around 1.9 million registered users, and its early moves into providing a marketplace — helping to put farmers in touch with local suppliers of goods and gear such as seed and fertilizers — generated $1 million in sales in its first eight months of operations, a sign that there is business to be had here. The startup points out that this growth has been, in fact, “faster… than both Amazon and eBay in their early stages.”

WeFarm is based out of London, but while the startup does have users out of the U.K. and the rest of Europe, Kenny Ewan, the company’s founder and CEO, said in an interview that it is seeing much more robust activity and growth out of developing economies, where small-scale agriculture reigns supreme, but those working the farms have been massively underserved when it comes to new, digital services.

“We are building an ecosystem for global small-scale agriculture, on behalf of farmers,” Ewan said, noting that there are roughly 500 million small-scale farms globally, with some 1 billion people working those holdings, which typically extend 1.5-2 hectares and often are focused around staple commercial crops like rice, coffee, cattle or vegetables. “This is probably the biggest industry on Earth, accounting for some 75-80% of the global supply chain, and yet no one has built anything for them. This is significant on many levels.”

The service that WeFarm provides, in turn, is two-fold. The network, which is free to join, first of all serves as a sounding board, where farmers — who might live in a community with other farmers, but might also be quite solitary — can ask each other questions or get advice on agricultural or small-holding matters. Think less Facebook and more Stack Exchange here.

That provided a natural progression to WeFarm’s second utility track: a marketplace. Initially Ewan said that it’s been working with — and importantly, vetting — local suppliers to help them connect with farmers and the wider ecosystem for goods and services that they might need.

Longer term, the aim will be to provide a place where small-holding farmers might be able to exchange goods with each other, or sell on what they are producing.

In addition to providing access to goods for sale, WeFarm is helping to manage the e-commerce process behind it. For example, in regions like Africa, mobile wallets have become de facto bank accounts and proxies for payment cards, so one of the key ways that people can pay for items is via SMS.

“For 90% of our users, we are the only digital service they use, so we have to make sure we can fulfill their trust,” Ewan said. “This is a network of trust for the biggest industry on earth and we have to make sure it works well.”

For True and other investors, this is a long-term play, where financial returns might not be as obvious as moral ones.

“We are enormously inspired by how Kenny and the Wefarm team have empowered the world’s farmers, and we see great potential for their future,” said Jon Callaghan, co-founder of True Ventures, in a statement. “The company is not only impact-driven, but the impressive growth of the Wefarm Marketplace demonstrates exciting commercial opportunities that will connect those farmers to more of what they need to the benefit of all, across the food supply chain. This is a big, global business.”

Still, given the bigger size of the long tail, the company that can consolidate and manage that community potentially has a very valuable business on its hands, too.

Datameer announces $40M investment as it pivots away from Hadoop roots

Datameer, the company that was born as a data prep startup on top of the open-source Hadoop project, announced a $40 million investment and a big pivot away from Hadoop, while staying true to its big data roots.

The investment was led by existing investor ST Telemedia. Existing investors Redpoint Ventures, Kleiner Perkins, Nextworld Capital, Citi Ventures and Top Tier Capital Partners also participated. Today’s investment brings the total raised to almost $140 million, according to Crunchbase data.

Company CEO Christian Rodatus says the company’s original mission was about making Hadoop easier to use for data scientists, business analysts and engineers. In the last year, the three biggest commercial Hadoop vendors — Cloudera, Hortonworks and MapR — fell on hard times. Cloudera and Hortonworks merged and MapR was sold to HPE in a fire sale.

Starting almost two years ago, Datameer recognized that against this backdrop it was time for a change, and it began developing a couple of new products. It didn’t want to abandon its existing customer base entirely, of course, so it began rebuilding its Hadoop product, now called Datameer X. It is a modern, cloud-native product built to run on Kubernetes, the popular open-source container orchestration tool. Instead of Hadoop, it will be based on Spark. Rodatus reports they are about two-thirds of the way through this pivot, and the product is already in the hands of customers.

The company also announced Neebo, an entirely new SaaS tool to give data scientists the ability to process data in whatever form it takes. Rodatus sees a world coming where data will take many forms, from traditional data to Python code from data analysts or data scientists to SaaS vendor dashboards. He sees Neebo bringing all of this together in a managed service with the hope that it will free data scientists to concentrate on getting insight from the data. It will work with data visualization tools like Tableau and Looker, and should be generally available in the coming weeks.

The money should help them get through this pivot, hire more engineers to continue the process and build a go-to-market team for the new products. It’s never easy pivoting like this, but the investors are likely hoping that the company can build on its existing customer base, while taking advantage of the market need for data science processing tools. Time will tell if it works.