Intel Inside – SentinelOne Cryptominer Detection

Cryptominers are used illegally for cryptojacking, the process by which an attacker secretly launches cryptocurrency mining software on a target system. The software consumes processor cycles to process cryptocurrency transactions, earning the attacker a commission, usually in the form of the Monero cryptocurrency. Cryptomining attacks increased dramatically in 2018 and emerged as one of the top threats facing organizations. According to reports, cryptomining attacks have become so popular that they are estimated to consume almost half a percent of the world’s electricity.

What Does SentinelOne Offer to Mitigate the Risk?

At SentinelOne, we have identified this emerging threat and decided to investigate and build a solution to detect and mitigate cryptojacking.

With Windows Agent 3.0, SentinelOne is introducing for the first time new capabilities to detect and mitigate in-browser cryptominers. As part of the solution, we make use of Intel’s Accelerated Memory Scanning (AMS) library, which enables fast memory scanning offloaded to the Graphics Processing Unit (GPU).

In this new version of the Agent, in-browser cryptominer detection will focus on Cryptonight-based cryptocurrencies. This family includes popular and profitable cryptocurrencies such as Monero.

This new capability extends the existing detection of command line-based cryptominers, making protection against cryptominers much broader.

How Does the Detection Work?

To detect cryptominers, it’s important to understand which attributes distinguish them from other processes.

In preliminary research conducted by SentinelOne, we identified various characteristics that are unique to cryptominers and relate to their execution behavior. Once these characteristics are observed by the SentinelOne Agent, it starts to scan the potential cryptominer’s memory using the Intel AMS library in order to find unique patterns. If these patterns are found, the threat is classified as a cryptominer.
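The two-stage flow described above can be sketched as follows. This is an illustrative sketch only: the behavioral traits, pattern bytes, and function names here are hypothetical stand-ins, since SentinelOne's actual heuristics and the AMS signatures are not public, and the real scan runs on the GPU rather than as a byte search.

```python
# Hypothetical behavioral traits that would trigger a memory scan.
SUSPICIOUS_SIGNS = {"sustained_high_cpu", "worker_thread_pool", "wasm_module_loaded"}

# Hypothetical byte patterns standing in for Cryptonight-related signatures.
KNOWN_PATTERNS = [b"cryptonight", b"\xde\xad\xbe\xef"]

def looks_suspicious(observed_behaviors):
    """Stage 1: cheap behavioral trigger - only scan memory once enough
    cryptominer-like traits have been observed."""
    return len(SUSPICIOUS_SIGNS & set(observed_behaviors)) >= 2

def scan_memory(memory, patterns=KNOWN_PATTERNS):
    """Stage 2: pattern scan of the candidate process's memory
    (offloaded to the GPU via Intel AMS in the real Agent;
    a plain byte search here)."""
    return any(p in memory for p in patterns)

def classify(observed_behaviors, memory):
    """Classify as a cryptominer only if both stages fire."""
    if looks_suspicious(observed_behaviors) and scan_memory(memory):
        return "cryptominer"
    return "benign"
```

The point of the staging is cost: the behavioral check is cheap and continuous, while the memory scan is comparatively expensive and runs only on flagged candidates.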

If the SentinelOne endpoint policy is set to “Protect” (auto-mitigate), then the Agent will kill the cryptominer. The user on the endpoint may experience the mitigation as a closed iframe or a closed browser tab.

Partnership with Intel

Intel and SentinelOne Integration

SentinelOne has partnered with Intel to integrate Intel’s Accelerated Memory Scanning capability into the Agent. By leveraging this capability, SentinelOne now offloads the processing power needed to scan for cryptomining attacks from the CPU to the GPU, dramatically increasing the speed of cryptominer detection without latency or degradation of endpoint performance. This creates a much more efficient way to capture memory-based cyber attacks at the OS level.

Independent benchmark testing by PassMark Software validated that SentinelOne’s hardware-based approach of using Intel’s silicon to power threat scanning significantly increases detection rates of memory-based attacks such as cryptominers, while providing a 10x improvement in scanning time with no increase in CPU usage.


Threads emerges from stealth with $10.5M from Sequoia for a new take on enabling work conversations

The rapid rise of Slack has ushered in a new wave of apps, all aiming to solve one challenge: creating a user-friendly platform where coworkers can have productive conversations. Many of these are based around real-time notifications and “instant” messaging, but today a new startup called Threads is coming out of stealth to address the other side of the coin: a platform for asynchronous communication that is less time-sensitive, and that creates coherent narratives out of those conversations.

Armed with $10.5 million in funding led by Sequoia, the company is launching a beta of its service today.

Rousseau Kazi, the startup’s CEO, co-founded Threads with Jon McCord, Mark Rich and Suman Venkataswamy. He cut his social teeth working for six years at Facebook (with a resulting number of patents to his name around the mechanics of social networking) and says that the mission of Threads is to make online conversations more inclusive.

“After a certain number of people get involved in an online discussion, conversations just break and messaging becomes chaotic,” he said. (McCord and Rich are also Facebook engineering alums, while Venkataswamy is a Bright Roll alum.)

And if you have ever used Twitter, or even been in a popular channel in Slack, you will understand what he is talking about. When too many people begin to talk, the conversation gets very noisy and it can mean losing the “thread” of what is being discussed, and seeing conversation lurch from one topic to another, often losing track of important information in the process.

There is an argument to be made about whether a platform that was built for real-time information is capable of handling a different kind of cadence. Twitter, as it happens, is trying to figure that out right now. Slack, meanwhile, has itself introduced threaded comments to try to address this too — although the practical application of its own threading feature is not actually very user friendly.

Threads’ answer is to view its purpose as addressing the benefit of “asynchronous” conversation.

To start, those who want to start threads first register as organizations on the platform. Then, those who are working on a project or in a specific team create a “space” for themselves within that org. You can then start threads within those spaces. And when a problem has been solved or the conversation has come to a conclusion, the last comment gets marked as the conclusion.

The idea is to host conversations around specific topics that can stretch out over hours, days or even longer. Threads doesn’t want to be the place you go for red alerts or urgent requests, but where you go when you have thoughts about a work-related subject and how to tackle it.

These resulting threads, when completed or when in progress, can in turn be looked at as straight conversations, or as annotated narratives.

For now, it’s up to users themselves to annotate what might be important to highlight for readers, although when I asked him, Kazi told me he would like to incorporate over time more features that might use natural language processing to summarize and pull out what might be worth following up on or looking at if you only want to skim a longer conversation. Ditto the ability to search threads: right now it’s all based around keywords, but you can imagine a time when more sophisticated and nuanced searches will surface conversations relevant to what you might be looking for.

Indeed, in this initial launch, the focus is all about what you want to say on Threads itself — not lots of bells and whistles, and not trying to compete against the likes of Slack, or Workplace (Facebook’s effort in this space), or Yammer or Teams from Microsoft, or any of the others in the messaging mix.

There are no integrations to bring data into Threads from other programs, but there is a Slack integration in the other direction: you can create an alert there so that you know when someone has updated a Thread.

“We don’t view ourselves as a competitor to Slack,” Kazi said. “Slack is great for transactional conversation but for asynchronous chats, we thought there was a need for this in the market. We wanted something to address that.”

It may not be a stated competitor, but Threads actually has something in common with Slack: the latter launched with the purpose of enabling a certain kind of conversation between co-workers in a way that was easier to consume and engage with than email.

You could argue that Threads has the same intention: email chains, especially those with multiple parties, can also be hard to follow and are in any case often very messy to look at: something that the conversations in Threads also attempt to clear up.

But email is not the only kind of conversation medium that Threads thinks it can replace.

“With in-person meetings there is a constant tension between keeping the room small for efficiency and including more people for transparency,” said Sequoia partner Mike Vernal in a statement. “When we first started chatting with the team about what is now Threads, we saw an opportunity to get rid of this false dichotomy by making decision-making both more efficient and more inclusive. We’re thrilled to be partnering with Threads to make work more inclusive.” Others in the round include Eventbrite CEO Julia Hartz, GV’s Jessica Verrilli, Minted CEO Mariam Naficy, and TaskRabbit CEO Stacy Brown-Philpot.

The startup was actually formed in 2017, and for months now it has been running a closed, private version of the service to test it out with a small number of users. So far, the company sizes have ranged between 5 and 60 employees, Kazi tells me.

“By using Threads as our primary communications platform, we’ve seen incredible progress streamlining our operations,” said one of the testers, Perfect Keto & Equip Foods Founder and CEO, Anthony Gustin. “Internal meetings have reduced by at least 80 percent, we’ve seen an increase in participation in discussion and speed of decision making, and noticed an adherence and reinforcement of company culture that we thought was impossible before. Our employees are feeling more ownership and autonomy, with less work and time that needs to be spent — something we didn’t even know was possible before Threads.”

Kazi said that the intention is ultimately to target companies of any size, although it will be worth watching what features it will have to introduce to help handle the noise, and continue to provide coherent discussions, when and if they do start to tackle that end of the market.

Compass acquires Contactually, a CRM provider to the real estate industry

Compass, the real estate tech platform that is now worth $4.4 billion, has made an acquisition to give its agents a boost when it comes to looking for good leads on properties to sell. It is acquiring Contactually, an AI-based CRM platform designed specifically for the industry, which includes features like linking up a list of homes sold by a brokerage with records of sales in the area and other property indexes to determine which properties might be good targets to tap for future listings.

Contactually had already been powering Compass’s own CRM service that it launched last year, so there is already a degree of integration between the two.

Terms of the deal are not being disclosed. Crunchbase notes that Contactually had raised around $18 million from VCs that included Rally Ventures, Grotech and Point Nine Capital, and it was last valued at around $30 million in 2016, according to PitchBook. From what I understand, the startup had strong penetration in the market, so it’s likely that the price was a bit higher than this previous valuation.

The plan is to bring over all of Contactually’s team of 32 employees, led by Zvi Band, the co-founder and CEO, to integrate the company’s product into Compass’s platform completely. They will report to CTO Joseph Sirosh and head of product Eytan Seidman. It will also mean a bigger operation for Compass in Washington, DC, which is where Contactually had been based.

“The Contactually team has worked for the past 8 years to build a best-in-class CRM that aggregates relationships and automatically documents every touchpoint,” said Band in a statement. “We are proud that our investment into machine learning has resulted in new features like Best Time to Email and other data-driven, follow-up recommendations which help agents be more effective in their day-to-day. After working extensively with the Compass team, it was apparent that joining forces would accelerate our missions of building the future of the industry.”

For the time being, customers who are already using the product — and a large number of real estate brokers and agents in the U.S. already were, at prices that ranged from $59/month to $399/month depending on the level of service — will continue their contracts as before.

I suspect that the longer-term plan, however, will be a little different: you have to wonder whether agents who compete against Compass would be happy to use a service that processes their data on Compass’s behalf. And for Compass itself, I would suspect that keeping this tech to itself would give it an edge over the others.

Compass, I understand from sources, is on track to make $2 billion in revenues in 2019 (its 2018 targets were $1 billion on $34 billion in property sales, and it had previously said it would be doubling that this year). Now in 100 cities, it’s come a long way from its founding in 2012 by Ori Allon and Robert Reffkin.

The bigger picture beyond real estate is that, as with many other analog industries, those who are tackling them with tech-first approaches are sweeping up not only existing business, but in many cases helping the whole market to expand. Contactually, as a tool that can help source potential properties for sale that owners hadn’t previously considered putting on the market, could end up serving that very end for Compass.

The focus on using tech to storm into a legacy industry is also coming at an interesting time. As we’ve pointed out before, the housing market is predicted to cool this year, and that will put the squeeze on agents who do not have strong networks of clients and the tools to maximise whatever opportunities there are out there to list and sell properties.

The likes of Opendoor — which appears to be raising money and inching closer to Compass in terms of valuation — is also trying out a different model, which essentially involves becoming a middle part in the chain, buying properties from sellers and selling them on to buyers, to speed up the process and cut out some of the expenses for the end users. That approach underscores the fact that, while the infusion of technology is an inevitable trend, there will be multiple ways of applying that.

This appears to be Compass’s first full acquisition of a tech startup, although it has made partial acqui-hires in the past.

Box fourth quarter revenue up 20 percent, but stock down 22 percent after hours

By most common sense measurements, Box had a pretty good earnings report today, reporting revenue up 20 percent year over year to $163.7 million. That doesn’t sound bad, yet Wall Street was not happy, and the stock got whacked, down more than 22 percent after hours as we went to press. It appears investors were unhappy with the company’s guidance.

Part of the problem, says Alan Pelz-Sharpe, principal analyst at Deep Analysis, a firm that watches the content management space, is that the company failed to hit its projections and paired that with weaker guidance — a tough combination. Still, he points out, the future does look bright for the company.

“Box did miss its estimates and got dinged pretty hard today; however, the bigger picture is still of solid growth. As Box moves more and more into the enterprise space, the deal cycle takes longer to close and I think that has played a large part in this shift. The onus is on Box to close those bigger deals over the next couple of quarters, but if it does, then that will be a real warning shot to the legacy enterprise vendors as Box starts taking a chunk out of their addressable market,” Pelz-Sharpe told TechCrunch.

This fits with what company CEO Aaron Levie was saying. “Wall Street did have higher expectations with our revenue guidance for next year, and I think that’s totally fair, but we’re very focused as a company right now on driving reacceleration in our growth rate and the way that we’re going to do that is by really bringing the full suite of Box’s capabilities to more of our customers,” Levie told TechCrunch.

Holger Mueller, an analyst with Constellation Research says failing to hit guidance is always going to hurt a company with Wall Street. “It’s all about hitting the guidance, and Box struggled with this. At the end of the day, investors don’t care for the reasons, but making the number is what matters. But a booming economy and the push to AI will help Box as enterprises need document automation solutions,” Mueller said.

On the positive side, Levie pointed out that the company achieved a positive non-GAAP growth rate for the first time in its 14-year history, with projections of its first full year of non-GAAP profitability for FY20, which it just kicked off.

The company reported a loss of 14 cents a share for the most recent quarter, but even that was a smaller loss than the 24 cents a share from the previous fiscal year. It would seem that revenue is heading generally in the right direction, but Wall Street did not see it that way, flogging the cloud content management company.


Wall Street tends to try to project future performance. What a company has done this quarter is not as important to investors, who are apparently not happy with the projections, but Levie pointed out the opportunity here is huge. “We’re going after a 40-plus billion dollar market, so if you think about the entirety of spend on content management, collaboration, storage infrastructure — as all of that moves to the cloud, we see that as the full market opportunity that we’re going out and serving,” Levie explained.

Pelz-Sharpe also thinks Wall Street could be missing the longer-range picture here. “The move to true enterprise started a couple of years back at Box, but it has taken time to bring on the right partners and infrastructure to deal with these bigger and more complex migrations and implementations,” Pelz-Sharpe explained. Should that happen, Box could begin capturing much larger chunks of that $40 billion addressable cloud content management market, and the numbers could ultimately be much more to investors’ liking. For now though, they are clearly not happy with what they are seeing.

Open-source communities fight over telco market

When you think of MWC Barcelona, chances are you’re thinking about the newest smartphones and other mobile gadgets, but that’s only half the story. Actually, it’s probably far less than half the story because the majority of the business that’s done at MWC is enterprise telco business. Not too long ago, that business was all about selling expensive proprietary hardware. Today, it’s about moving all of that into software — and a lot of that software is open source.

It’s maybe no surprise then that this year, the Linux Foundation (LF) has its own booth at MWC. It’s not massive, but it’s big enough to have its own meeting space. The booth is shared by three LF projects: the Cloud Native Computing Foundation (CNCF), Hyperledger and Linux Foundation Networking, the home of many foundational projects like ONAP and the Open Platform for NFV (OPNFV) that power many a modern network. And with the advent of 5G, there’s a lot of new market share to grab here.

To discuss the CNCF’s role at the event, I sat down with Dan Kohn, the executive director of the CNCF.

At MWC, the CNCF launched its testbed for comparing the performance of virtual network functions on OpenStack and what the CNCF calls cloud-native network functions, using Kubernetes (with the help of bare-metal host Packet). The project’s results — at least so far — show that the cloud-native container-based stack can handle far more network functions per second than the competing OpenStack code.

“The message that we are sending is that Kubernetes as a universal platform that runs on top of bare metal or any cloud, most of your virtual network functions can be ported over to cloud-native network functions,” Kohn said. “All of your operating support system, all of your business support system software can also run on Kubernetes on the same cluster.”

OpenStack, in case you are not familiar with it, is another massive open-source project that helps enterprises manage their own data center software infrastructure. One of OpenStack’s biggest markets has long been the telco industry. There has always been a bit of friction between the two foundations, especially now that the OpenStack Foundation has opened up its organizations to projects that aren’t directly related to the core OpenStack projects.

I asked Kohn if he is explicitly positioning the CNCF/Kubernetes stack as an OpenStack competitor. “Yes, our view is that people should be running Kubernetes on bare metal and that there’s no need for a middle layer,” he said — and that’s something the CNCF has never stated quite as explicitly before but that was always playing in the background. He also acknowledged that some of this friction stems from the fact that the CNCF and the OpenStack foundation now compete for projects.

OpenStack Foundation, unsurprisingly, doesn’t agree. “Pitting Kubernetes against OpenStack is extremely counterproductive and ignores the fact that OpenStack is already powering 5G networks, in many cases in combination with Kubernetes,” OpenStack COO Mark Collier told me. “It also reflects a lack of understanding about what OpenStack actually does, by suggesting that it’s simply a virtual machine orchestrator. That description is several years out of date. Moving away from VMs, which makes sense for many workloads, does not mean moving away from OpenStack, which manages bare metal, networking and authentication in these environments through the Ironic, Neutron and Keystone services.”

Similarly, ex-OpenStack Foundation board member (and Mirantis co-founder) Boris Renski told me that “just because containers can replace VMs, this doesn’t mean that Kubernetes replaces OpenStack. Kubernetes’ fundamental design assumes that something else is there that abstracts away low-level infrastructure, and is meant to be an application-aware container scheduler. OpenStack, on the other hand, is specifically designed to abstract away low-level infrastructure constructs like bare metal, storage, etc.”

This overall theme continued with Kohn and the CNCF taking a swipe at Kata Containers, the first project the OpenStack Foundation took on after it opened itself up to other projects. Kata Containers promises to offer a combination of the flexibility of containers with the additional security of traditional virtual machines.

“We’ve got this FUD out there around Kata saying: telcos will need to use Kata, a) because of the noisy neighbor problem and b) because of the security,” said Kohn. “First of all, that’s FUD and second, micro-VMs are a really interesting space.”

He believes it’s an interesting space for situations where you are running third-party code (think AWS Lambda running Firecracker) — but telcos don’t typically run that kind of code. He also argues that Kubernetes handles noisy neighbors just fine because you can constrain how many resources each container gets.
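Kohn's noisy-neighbor point rests on Kubernetes' per-container resource constraints. The sketch below shows a minimal Pod spec with CPU and memory requests and limits, written as a Python dict for illustration (in practice this would be YAML applied with `kubectl`); the workload name and image are placeholders.

```python
# Minimal Pod spec illustrating per-container resource constraints.
# With "limits" set, the kubelet and container runtime cap this container's
# CPU and memory, so one workload cannot starve its neighbors on the node.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "cnf-worker"},  # hypothetical workload name
    "spec": {
        "containers": [
            {
                "name": "worker",
                "image": "example.com/cnf-worker:latest",  # placeholder image
                "resources": {
                    # "requests" guide scheduling; "limits" enforce the cap.
                    "requests": {"cpu": "500m", "memory": "256Mi"},
                    "limits": {"cpu": "1", "memory": "512Mi"},
                },
            }
        ]
    },
}
```

Requests determine where the scheduler places the Pod; limits are what actually constrain a noisy neighbor at runtime.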

It seems both organizations have a fair argument here. On the one hand, Kubernetes may be able to handle some use cases better and provide higher throughput than OpenStack. On the other hand, OpenStack handles plenty of other use cases, too, and this is a very specific use case. What’s clear, though, is that there’s quite a bit of friction here, which is a shame.

Crypto Mining Service Coinhive to Call it Quits

Roughly one year ago, KrebsOnSecurity published a lengthy investigation into the individuals behind Coinhive[.]com, a cryptocurrency mining service that has been heavily abused to force hacked Web sites to mine virtual currency. On Tuesday, Coinhive announced plans to pull the plug on the project early next month.

A message posted to the Coinhive blog on Tuesday, Feb. 26, 2019.

In March 2018, Coinhive was listed by many security firms as the top malicious threat to Internet users, thanks to the tendency for Coinhive’s computer code to be surreptitiously deployed on hacked Web sites to steal the computer processing power of its visitors’ devices.

Coinhive took a whopping 30 percent cut of all Monero currency mined by its code, and this presented something of a conflict of interest when it came to stopping the rampant abuse of its platform. At the time, Coinhive was only responding to abuse reports when contacted by a hacked site’s owner. Moreover, when it would respond, it did so by invalidating the cryptographic key tied to the abuse.

Trouble was, killing the key did nothing to stop Coinhive’s code from continuing to mine Monero on a hacked site. Once a key was invalidated, Coinhive would simply cut out the middleman and proceed to keep 100 percent of the cryptocurrency mined by sites tied to that account from then on.

In response to that investigation, Coinhive made structural changes to its platform to ensure it was no longer profiting from this shady practice.

Troy Mursch is chief research officer at Bad Packets LLC, a company that has closely chronicled a number of high-profile Web sites that were hacked and seeded with Coinhive mining code over the years. Mursch said that after those changes by Coinhive, the mining service became far less attractive to cybercriminals.

“After that, it was not exactly enticing for miscreants to use their platform,” Mursch said. “Most of those guys just took their business elsewhere to other mining pools that don’t charge anywhere near such high fees.”

As Coinhive noted in the statement about its closure, a severe and widespread drop in the value of most major cryptocurrencies weighed heavily on its decision. At the time of my March 2018 piece on Coinhive, Monero was trading at an all-time high of USD $342 per coin, according to charts maintained by coinmarketcap.com. Today, a single Monero is worth less than $50.

In the announcement about its pending closure, Coinhive said the mining service would cease to operate on March 8, 2019, but that users would still be able to access their earnings dashboards until the end of April. However, Coinhive noted that only those users who had earned above the company’s minimum payout threshold would be able to cash out their earnings.

Mursch said it is likely that a great many people using Coinhive — legitimately on their own sites or otherwise — are going to lose some money as a result. That’s because Coinhive’s minimum payout is 0.05 Monero, which equals roughly USD $2.35.

“That means Coinhive is going to keep all the virtual currency from user accounts that have mined something below that threshold,” he said. “Maybe that’s just a few dollars or a few pennies here or there, but that’s kind of been their business model all along. They have made a lot of money through their platform.”
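The threshold figure quoted above is simple arithmetic on the Monero spot price; a back-of-the-envelope check, assuming a price of roughly $47 per coin (consistent with the article's "less than $50"):

```python
# Coinhive's minimum payout threshold, converted to USD.
MIN_PAYOUT_XMR = 0.05
XMR_PRICE_USD = 47.0  # assumed spot price; Monero fluctuates daily

min_payout_usd = MIN_PAYOUT_XMR * XMR_PRICE_USD
print(f"Minimum payout: ${min_payout_usd:.2f}")  # -> Minimum payout: $2.35
```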

KrebsOnSecurity’s March 2018 Coinhive story traced the origins of the mining service back to Dominic Szablewski, a programmer who founded the German-language image board pr0gramm[.]com (not safe for work). The story noted that Coinhive began as a money-making experiment that was first debuted on the pr0gramm Web site.

The Coinhive story prompted an unusual fundraising campaign from the pr0gramm[.]com user community, which expressed alarm over the publication of details related to the service’s founders (even though all of the details included in that piece were drawn from publicly searchable records). In an expression of solidarity to protest that publication, the pr0gramm board members collectively donated hundreds of thousands of euros to various charities that support curing cancer (Krebs is German for “cancer” or “crab”).

After that piece ran, Coinhive added to its Web site the contact information for Badges2Go UG, a limited liability company established in 2017 and headed by a Slyvia Klein from Frankfurt who is also head of an entity called Blockchain Future. Klein did not respond to requests for comment.

New VMware Kubernetes product comes courtesy of Heptio acquisition

VMware announced a new Kubernetes product today called VMware Essential PKS, which has been created from its acquisition of Heptio for $550 million at the end of last year.

VMware already had two flavors of Kubernetes, a fully managed cloud product and an enterprise version with all of the components, such as registry and network, pre-selected by VMware. What this new version does is provide a completely open version of Kubernetes where the customer can choose all the components, giving a flexible option for those who want it, according to Scott Buchanan, senior director of product marketing for cloud-native apps at VMware.

Buchanan said the new product comes directly from the approach that Heptio had taken to selling Kubernetes prior to the acquisition. “We’re introducing a new offering called VMware Essential PKS, and that offering is a packaging of the approach that Heptio took to market and that gained a lot of traction, and that approach is a natural complement to the other Kubernetes products in the VMware portfolio,” he explained.

Buchanan acknowledged that a large part of the market is going to go for the fully managed or fully configured approaches, but there is a subset of buyers that will want more choice in their Kubernetes implementation.

“Larger enterprises with more complex infrastructure want to have a very customized approach to how they build out their architecture. They don’t want to be integrated. They just want a foundation on which to build because the organizations are larger and more complex and they’re also more likely to have an internal DevOps or SREOps team to operate the platform on a day-to-day basis,” he explained.

While these organizations want flexibility, they also require more of a consultative approach to the sale. Heptio had a 40-person field service engineering team that came over in the acquisition, and VMware is in the process of scaling that team. These folks consult with the customer and help them select the different components that make up a Kubernetes installation to fit the needs of each organization.

Buchanan, who also came over in the acquisition, says that being part of VMware (which is part of the Dell family of companies) means they have several layers of sales with VMware, Pivotal and Dell all selling the product.

Heptio is the Kubernetes startup founded by Craig McLuckie and Joe Beda, the two men who helped develop the technology while they were at Google. Heptio was founded in 2016 and raised $33.5 million prior to the acquisition, according to Crunchbase data.

Former Russian Cybersecurity Chief Sentenced to 22 Years in Prison

A Russian court has handed down lengthy prison terms for two men convicted on treason charges for allegedly sharing information about Russian cybercriminals with U.S. law enforcement officials. The men — a former Russian cyber intelligence official and an executive at Russian security firm Kaspersky Lab — were reportedly prosecuted for their part in an investigation into Pavel Vrublevsky, a convicted cybercriminal who ran one of the world’s biggest spam networks and was a major focus of my 2014 book, Spam Nation.

Sergei Mikhailov, formerly deputy chief of Russia’s top anti-cybercrime unit, was sentenced today to 22 years in prison. The court also levied a 14-year sentence against Ruslan Stoyanov, a senior employee at Kaspersky Lab. Both men maintained their innocence throughout the trial.

Following their dramatic arrests in 2016, many news media outlets reported that the men were suspected of having tipped off American intelligence officials about those responsible for Russian hacking activities tied to the 2016 U.S. presidential election.

That’s because two others arrested for treason at the same time — Mikhailov subordinates Georgi Fomchenkov and Dmitry Dokuchaev — were reported by Russian media to have helped the FBI investigate Russian servers linked to the 2016 hacking of the Democratic National Committee. The case against Fomchenkov and Dokuchaev has not yet gone to trial.

What exactly was revealed during the trial of Mikhailov and Stoyanov is not clear, as the details surrounding it were classified. But according to information first reported by KrebsOnSecurity in January 2017, the most likely explanation for their prosecution stemmed from a long-running grudge held by Pavel Vrublevsky, a Russian businessman who ran a payment firm called ChronoPay and for years paid most of the world’s top spammers and virus writers to pump malware and hundreds of billions of junk emails into U.S. inboxes.

In 2013, Vrublevsky was convicted of hiring his most-trusted spammer and malware writer to launch a crippling distributed denial-of-service (DDoS) attack against one of his company’s chief competitors.

Prior to Vrublevsky’s conviction, massive amounts of files and emails were taken from Vrublevsky’s company and shared with this author. Those included spreadsheets chock full of bank account details tied to some of the world’s most active cybercriminals, and to a vast network of shell corporations created by Vrublevsky and his co-workers to help launder the proceeds from their various online pharmacy, spam and fake antivirus operations.

In a telephone interview with this author in 2011, Vrublevsky said he was convinced that Mikhailov was taking information gathered by Russian government cybercrime investigators and feeding it to U.S. law enforcement and intelligence agencies. Vrublevsky told me then that if ever he could prove for certain Mikhailov was involved in leaking incriminating data on ChronoPay, he would have someone “tear him a new asshole.”

An email that Vrublevsky wrote to a ChronoPay employee in 2010 eerily presages the arrests of Mikhailov and Stoyanov, voicing Vrublevsky’s suspicion that the two were closely involved in leaking ChronoPay emails and documents that were seized by Mikhailov’s own division. A copy of that email is shown in Russian in the screen shot below. A translated version of the message text is available here (PDF).

A copy of an email Vrublevsky sent to a ChronoPay co-worker about his suspicions that Mikhailov and Stoyanov were leaking government secrets.

Predictably, Vrublevsky has taken to gloating on Facebook about today’s prison sentences, calling them “good news.” He told the Associated Press that Mikhailov had abused his position at the FSB to go after Internet entrepreneurs like him and “turn them into cybercriminals,” thus “whipping up cyber hysteria around the world.”

This is a rather rich quote, as Vrublevsky was already a well-known and established cybercriminal long before Mikhailov came into his life. Also, I would not put it past Vrublevsky to have somehow greased the wheels of this prosecution.

As I noted in Spam Nation, emails leaked from ChronoPay suggest that Vrublevsky funneled as much as $1 million to corrupt Russian political leaders for the purpose of initiating a criminal investigation into Igor Gusev, a former co-founder of ChronoPay who went on to create a pharmacy spam operation that closely rivaled Vrublevsky’s own pharmacy spam operation — Rx Promotion.

Vrublevsky crowing on Facebook about the sentencing of Mikhailov (left) and Stoyanov.

Feature Spotlight – Full Remote Shell

Full Remote Shell gives your security team a rapid way to investigate attacks, collect forensic data, and remediate breaches no matter where the compromised endpoints are located, eliminating uncertainty and greatly reducing any downtime that results from an attack.

Today we are glad to announce another useful feature of the SentinelOne platform, the Full Remote Shell. As the name indicates, it allows an authorized administrator to securely access their managed endpoints directly from the SentinelOne console. This way, sysadmins and SecOps can quickly establish a full remote shell session to troubleshoot end-user issues without having to go on-site.

Savvy admins know that having the capability to see and interact with any device on the network can make the difference between “problem solved” and “late for dinner”. With so much noise on the network today, the remote shell is an essential tool for better endpoint management.

Why You Need Remote Shell Capability

The endpoint landscape is constantly changing. Users install a wide variety of software to perform their jobs more efficiently, and that makes it challenging to keep up with best practices for security and risk management. In such a demanding environment, admins will find a vast number of use cases for a powerful, full remote shell capability.

Here are a few suggestions to get you started:

  1. Troubleshoot faster, as admins don’t have to be physically next to an endpoint device to solve problems
  2. Support remote users better, as there’s no need to ask them to come by the office or visit the IT department
  3. Easily change local configuration without leaving the premises
  4. Initiate remote control in a secure way
  5. Go deeper into forensic investigation with a memory dump and other advanced tools
  6. Terminate an unwanted application or process running on an endpoint
  7. Query any device on the network as if you were logged in locally
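Most of these use cases come down to ordinary shell commands run on the remote endpoint. As a minimal sketch, here are the kinds of triage commands an admin might issue in a remote session on a Linux endpoint; these are plain Unix tools, not SentinelOne-specific commands, and the process name "badapp" is purely illustrative:

```shell
#!/bin/sh
# Illustrative endpoint-triage commands for a remote shell session.

# 1. Inspect running processes sorted by CPU to spot a runaway application.
ps aux --sort=-%cpu | head -n 5

# 2. Terminate an unwanted process by name (hypothetical name "badapp").
pkill -f badapp 2>/dev/null || echo "no such process"

# 3. Check listening sockets as part of a quick forensic look at the endpoint.
ss -tln 2>/dev/null || netstat -tln

# 4. Capture basic system state to a file for later collection and analysis.
uname -a > /tmp/triage.txt
date >> /tmp/triage.txt
```

Because the session is a real shell, commands like these compose with pipes and redirection exactly as they would at a local terminal.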

Is it Secure?

For a security product, opening a remote shell into every device runs somewhat against its DNA. On the other hand, we want to ensure that SentinelOne meets the security needs of the enterprise. We listened to you, our customers, and here’s how we met the dual needs of usability and security:

  1. Full Remote Shell access must be specifically enabled in the management policy
  2. For each session, the administrator is required to choose a dedicated password to encrypt the session
  3. Prior to allowing access, the admin must enable two-factor authentication
  4. Full auditing – every session is logged, including every access, usage and session history

How SentinelOne Full Remote Shell is Different

When we came to design the Full Remote Shell, we interviewed admins about their experience using similar capabilities in other products. The main pain these users reported was the limited set of commands they could execute; if they needed another command, it meant filing a feature request with the vendor and other lengthy processes. To avoid that pain, we use native shell capabilities. In other words, anything you can do with PowerShell or Bash, you can also do with SentinelOne Full Remote Shell, and that provides a lot of options. We even support command completion and other conveniences to simplify the sysadmin’s life.
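Because the session exposes the native shell rather than a fixed command menu, standard Bash constructs such as pipes, loops, and command substitution work unchanged. A small, generic illustration (the directory under inspection is arbitrary and not tied to any SentinelOne workflow):

```shell
#!/bin/sh
# Native-shell constructs work as-is in the remote session.
# Example: report the three largest entries under /var, using an
# ordinary pipeline plus a while-loop over its output.
du -s /var/* 2>/dev/null | sort -rn | head -n 3 | while read size dir; do
  echo "$dir uses ${size} KB"
done
```

Anything expressible in the endpoint’s own shell, from one-liners like this to full scripts, is available without waiting on a vendor to add a command.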

Conclusion

SentinelOne’s Full Remote Shell helps your business avoid the “IT nightmare” of managing a distributed network securely. It allows IT personnel to respond quickly and efficiently when employees experience technical problems, without having to go to each device. The remote shell capability has all the power you would expect in a regular shell session, implemented in the simple and secure way that is the hallmark of the SentinelOne offering. Full Remote Shell is available starting with Agent version 3.0.



Say hello to Microsoft’s new $3,500 HoloLens with twice the field of view

Microsoft unveiled the latest version of its HoloLens ‘mixed reality’ headset at MWC Barcelona today. The new HoloLens 2 features a significantly larger field of view, higher resolution and a device that’s more comfortable to wear. Indeed, Microsoft says the device is three times as comfortable to wear (though it’s unclear how Microsoft measured this).

Later this year, HoloLens 2 will be available in the United States, Japan, China, Germany, Canada, United Kingdom, Ireland, France, Australia and New Zealand for $3,500.

One of the knocks against the original HoloLens was its limited field of view. When whatever you wanted to look at was small and straight ahead of you, the effect was striking. But when you moved your head a little bit or looked at a larger object, it suddenly felt like you were looking through a stamp-sized screen. HoloLens 2 features a field of view that’s twice as large as the original.

“Kinect was the first intelligent device to enter our homes,” HoloLens chief Alex Kipman said in today’s keynote, looking back at the device’s history. “It drove us to create Microsoft HoloLens. […] Over the last few years, individual developers, large enterprises, brand new startups have been dreaming up beautiful things, helpful things.”

The HoloLens was always just as much about the software as the hardware, though. For HoloLens, Microsoft developed a special version of Windows, together with a new way of interacting with the AR objects through gestures like air tap and bloom. In this new version, the interaction is far more natural and lets you tap objects. The device also tracks your gaze more accurately to allow the software to adjust to where you are looking.

“HoloLens 2 adapts to you,” Kipman stressed. “HoloLens 2 evolves the interaction model by significantly advancing how people engage with holograms.”

In its demos, the company clearly emphasized how much faster and more fluid the interaction with HoloLens applications becomes when you can use sliders, for example, by simply grabbing a slider and moving it, or by tapping on a button with one or two fingers or with your full hand. Microsoft even built a virtual piano that you can play with ten fingers to show off how well the HoloLens can track movement. The company calls this ‘instinctual interaction.’

Microsoft first unveiled the HoloLens concept at a surprise event on its Redmond campus back in 2015. After a limited, invite-only release that started days after the end of MWC 2016, the device went on sale to everybody in August 2016. Four years is a long time between hardware releases, but the company clearly wanted to seed the market and give developers a chance to build the first set of HoloLens applications on a stable platform.

To support developers, Microsoft is also launching a number of Azure services for HoloLens today. These include spatial anchors and remote rendering to help developers stream high-polygon content to HoloLens.

It’s worth noting that Microsoft never positioned the device as consumer hardware. It may have shown off the occasional game, but its focus was always on business applications, with a few educational applications thrown in, too. That trend continued today. Microsoft showed off the ability to have multiple people collaborate around a single hologram, for example. That’s not new, of course, but it goes to show how Microsoft is positioning this technology.

For these enterprises, Microsoft will also offer the ability to customize the device.

“When you change the way you see the world, you change the world you see,” Microsoft CEO Satya Nadella said, repeating a line from the company’s first HoloLens announcement four years ago. He noted that he believes connecting the physical world with the virtual world will transform the way we work.