You Can Now Search Google From iMessage With App Download

There’s a fun, new update for Apple’s iMessage app that will probably make lots of power users happy.  As long as you also have Google’s iOS app installed, you’ll be able to perform Google searches from within iMessage itself.

In order to make it work, you’ll have to go into the app drawer (App Store icon) and enable the iMessage extension.  Once enabled, all you have to do to use Google search is to tap on the Google shortcut icon to get the search box.  Even better, the update includes shortcuts for watching trending YouTube videos, scoping out nearby restaurants, checking out local weather conditions, and even a handy GIF search.  There’s also a quick news search option.

If you search for restaurant recommendations, the new app makes the results easy to share in the conversation you’re having. Although curiously, this feature doesn’t carry over to YouTube videos or news.  Even so, it can be situationally useful.

In a similar vein, Google’s keyboard app, Gboard, also now has search built into it.  In fact, you don’t even have to have Gboard installed: as long as you’ve got the core Google app, the search extension will appear in iMessage’s app drawer.

These are small changes, but if you spend a lot of time texting, you’ll find them invaluable.  Think back to prior text conversations you’ve had.  There have probably been a number of occasions when you found yourself wishing you could do a quick search on whatever topic you were talking about.

It’s great to see these kinds of changes as the cellphone market continues to grow.  When the iPhone first burst onto the scene, apps were few and far between.  Now, not only are there untold thousands of apps on the market, but they are becoming increasingly integrated.  That’s very good to see.

Used with permission from Article Aggregator

IoT devices could be next customer data frontier

At the Adobe Summit this week in Las Vegas, the company introduced what could be the ultimate customer experience construct, a customer experience system of record that pulls in information, not just from Adobe tools, but wherever it lives. In many ways it marked a new period in the notion of customer experience management, putting it front and center of the marketing strategy.

Adobe was not alone, of course. Salesforce, with its three-headed monster of sales, marketing and service clouds, was also thinking along similar lines. In fact, it spent $6.5 billion last week to buy MuleSoft to act as a data integration layer for accessing customer information from across the enterprise software stack, whether on prem, in the cloud, or inside or outside of Salesforce. And it announced the Salesforce Integration Cloud this week to make use of its newest company.

As data collection takes center stage, we actually could be on the edge of yet another data revolution, one that could be more profound than even the web and mobile were before it. That is…the Internet of Things.

Here comes IoT

There are three main pieces to that IoT revolution at the moment from a consumer perspective. First of all, there is the smart speaker like the Amazon Echo or Google Home. These provide a way for humans to interact verbally with machines, a notion that is only now possible through the marriage of all this data, sheer (and cheap) compute power and the AI algorithms that fuel all of it.

Next, we have the idea of a connected car, one separate from the self-driving car. Much like the smart speaker, humans can interact with the car to find directions and recommendations, and that leaves a data trail in its wake. Finally, we have sensors like iBeacons sitting in stores, providing retailers with a world of information about a customer’s journey through the store — what they like or don’t like, what they pick up, what they try on and so forth.

There are very likely a host of other categories too, and all of this information is data that needs to be processed and understood just like any other signals coming from customers. But it also has unique characteristics around volume and velocity — it is truly big data, with all of the issues inherent in processing data at that scale.

That means it needs to be ingested, digested and incorporated into that central customer record-keeping system to drive the content and experiences you need to create to keep your customers happy — or so the marketing software companies tell us, at least. (We also need to consider the privacy implications of such a record, but that is the subject for another article.)

Building a better relationship

Regardless of the vendor, all of this is about understanding the customer better to provide a central data gathering system with the hope of giving people exactly what they want. We are no longer a generic mass of consumers. We are instead individuals with different needs, desires and requirements, and the best way to please us, they say, is to understand us so well that the brand can deliver the perfect experience at exactly the right moment.

Photo: Ron Miller

That involves listening to the digital signals we give off without even thinking about it. We carry mobile, connected computers in our pockets and they send out a variety of information about our whereabouts and what we are doing. Social media acts as a broadcast system that brands can tap into to better understand us (or so the story goes).

Part of what Adobe, Salesforce and others can deliver is a way to gather that information, pull it together into this uber record-keeping system and apply a layer of machine learning and intelligence to help further the brand’s ultimate goals of serving a customer of one and delivering an efficient (and perhaps even pleasurable) experience.

Getting on board

At an Adobe Summit session this week on IoT (which I moderated), the audience was polled a couple of times. In one show of hands, they were asked how many owned a smart speaker and about three quarters indicated they owned at least one, but when asked how many were developing applications for these same devices, only a handful of hands went up. This was in a room full of marketers, mind you.

Photo: Ron Miller

That suggests that there is a disconnect between usage and tools to take advantage of them. The same could be said for the other IoT data sources, the car and sensor tech, or any other connected consumer device. Just as we created a set of tools to capture and understand the data coming from mobile apps and the web, we need to create the same thing for all of these IoT sources.

That means coming up with creative ways to take advantage of another interaction (and data collection) point. This is an entirely new frontier with all of the opportunity involved in that, and that suggests startups and established companies alike need to be thinking about solutions to help companies do just that.

Asana introduces Timeline, lays groundwork for AI-based monitoring as the “team brain” for productivity

When workflow management platform Asana announced a $75 million round of funding in January led by former Vice President Al Gore’s Generation Investment Management, the startup didn’t give much of an indication of what it planned to do with the money, or what it was that won over investors to a new $900 million valuation (a figure we’ve now confirmed with the company).

Now, Asana is taking off the wraps on the next phase of its strategy. This week, the company announced a new feature it’s calling Timeline — composite, visual, and interactive maps of the various projects assigned to different people within a team, giving the group a wider view of all the work that needs to be completed, and how the projects fit together, mapped out in a timeline format.

Timeline is a premium feature: Asana’s 35,000 paying customers will be able to access it at no extra charge, while those among Asana’s millions of free users will have to upgrade to the premium tier to get it.

Asana intends Timeline to be used in scenarios like product launches, marketing campaigns and event planning. It isn’t a new piece of software where you have to duplicate work: each project automatically becomes a new segment on a team’s Timeline. Viewing projects through the Timeline allows users to identify if different segments are overlapping and adjust them accordingly.

Perhaps one of the most interesting aspects of the Timeline, however, is that it’s the first instalment of a bigger strategy that Asana plans to tackle over the next year to supercharge and evolve its service, making it the go-to platform for helping keep you focused on work, when you’re at work.

While Asana started out as a place where people go to manage the progress of projects, its ambition going forward is to become a platform that, with a machine-learning engine at the back end, will aim to manage a team’s and a company’s wider productivity and workload, regardless of whether they are actively in the Asana app or not.

“The long term vision is to marry computer intelligence with human intelligence to run entire companies,” Asana co-founder Justin Rosenstein said in an interview. “This is the vision that got investors excited.”

The bigger product — the name has not been revealed — will include a number of different features. Some that Rosenstein has let me see in preview include the ability for people to have conversations about specific projects — think messaging channels but less dynamic and more contained. And it seems that Asana also has designs to move into the area of employee monitoring: it has been working on a widget of sorts that installs on your computer and watches you work, with the aim of making you more efficient.

“Asana becomes a team brain to keep everyone focused,” said Rosenstein.

Given that Asana’s two co-founders, Dustin Moskovitz and Rosenstein, previously had close ties to Facebook — Moskovitz as a co-founder and Rosenstein as its early engineering lead — you might wonder if Timeline and the rest of its new company productivity engine might be bringing more social elements to the table (or desk, as the case may be).

In fact, it’s quite the opposite.

Rosenstein may have to his credit the creation of the “like” button and other iconic parts of the world’s biggest social network, but he has in more recent times become a very outspoken critic of the distracting effects of services like Facebook’s. It’s part of a bigger trend hitting Silicon Valley, where a number of leading players have, in a wave of mea culpa, turned against some of the bigger innovations particularly in social media.

Some have even clubbed together to form a new organization called the Center for Humane Technology, whose motto is “Reversing the digital attention crisis and realigning technology with humanity’s best interests.” Rosenstein is an advisor, although when I tried to raise the issue of the backlash that has hit Facebook on multiple fronts, I got a flat response. “It’s not something I want to talk about right now,” he said. (That’s what keeping focussed is all about, I guess.)

Asana, essentially, has taken the belief that social can become counterproductive when you have to get something done, and applied it to the enterprise environment.

This is an interesting twist, given that one of the bigger themes in enterprise IT over the last several years has been how to turn business apps and software more “social” — tapping into some of the mechanics and popularity of social networking to encourage employees to collaborate and communicate more with each other even when (as is often the case) they are not in the same physical space.

But social working might not be for everyone, all the time. Slack, the wildly popular workplace chat platform that interconnects users with each other and just about every enterprise and business app, is notable for producing “a gazillion notifications”, in Rosenstein’s words, leading to distraction from actually getting things done.

“I’m not saying services like Slack can’t be useful,” he explained. (Slack is also an integration partner of Asana’s.) “But companies are realising that, to collaborate effectively, they need more than communication. They need content and work management. I think that Slack has a lot of useful purposes but I don’t know if all of it is good all the time.”

The “team brain” role that Asana envisions may be all about boosting productivity by learning about you and reducing distraction — you will get alerts, but you (and presumably the brain) prioritise which ones you get, if any at all — but interestingly it has kept another feature characteristic of a lot of social networking services: amassing data about your activities and using that to optimise engagement. As Rosenstein described it, Asana will soon be able to track what you are working on, and how you work on it, to figure out your working patterns.

The idea is that, by using machine learning algorithms, the system can learn what a person does quickly and what takes them longer, to help plan that person’s tasks better and ultimately make them more productive. Eventually, the system will be able to suggest what you should be working on and when.

All of that might sound like music to managers’ ears, but for some, employee monitoring programs sound a little alarming for how closely they track your every move. Given the recent wave of attention that social media services have had for all the data they collect, it will be interesting to see how enterprise services like this get adopted and viewed. It’s also not at all clear how these sorts of programs will sit with new regulations like GDPR in Europe, which puts in place a new set of rules requiring any provider of an internet service to inform users of how their data is used, and to have a clear business purpose for any data it collects.

Still, with a different aim in mind — helping you work better — the end could justify the means, not just for bosses, but for people who might feel overwhelmed with what is on their work plate every day. “When you come in in the morning, you might have a list of [many] things to do today,” Rosenstein said. “We take over your desktop to show the one thing you need to do.” Checking your Instagram feed be damned.

Azure’s availability zones are now generally available

No matter what cloud you build on, if you want to build something that’s highly available, you’re always going to opt to put your applications and data in at least two physically separated regions. Otherwise, if a region goes down, your app goes down, too. All of the big clouds also offer a concept called ‘availability zones’ in their regions to give developers the option of hosting their applications in two separate data centers within the same region for a bit of extra resilience. All big clouds, that is, except for Azure, which is only launching its availability zones feature into general availability today after first announcing a beta last September.

Ahead of today’s launch, Julia White, Microsoft’s corporate VP for Azure, told me that the company’s design philosophy behind its data center network was always about servicing commercial customers with the widest possible range of regions to allow them to be close to their customers and to comply with local data sovereignty and privacy laws. That’s one of the reasons why Azure today offers more regions than any of its competitors, with 38 generally available regions and 12 announced ones.

“Microsoft started its infrastructure approach focused on enterprise organizations and built lots of regions because of that,” White said. “We didn’t pick this regional approach because it’s easy or because it’s simple, but because we believe this is what our customers really want.”

Every availability zone has its own network connection and power backup, so if one zone in a region goes down, the others should remain unaffected. A regional disaster could shut down all of the zones in a single region, though, so most businesses will surely want to keep their data in at least one additional region.

As marketing data proliferates, consumers should have more control

At the Adobe Summit in Las Vegas this week, privacy was on the minds of many people. It was no wonder with social media data abuse dominating the headlines, GDPR just around the corner, and Adobe announcing the concept of a centralized customer experience record.

With so many high profile breaches in recent years, putting your customer data in a central record-keeping system would seem to be a dangerous proposition, yet Adobe sees so many positives for marketers, it likely believes this to be a worthy trade-off.

Which is not to say that the company doesn’t see the risks. Executives speaking at the conference continually insisted that privacy is always part of the conversation at Adobe as they build tools — and they have built security and privacy safeguards into the customer experience record.

Ben Kepes, an independent analyst, says this kind of data collection raises ethical questions about how to use it. “This new central repository of data about individuals is going to be incredibly attractive to Adobe’s customers. The company is doing what big brands and corporations ask for. But in these post-Cambridge Analytica days, I wonder how much of a moral obligation Adobe and the other vendors have to ensure their tools are used for good purposes,” Kepes said.

Offering better experiences

It’s worth pointing out that the goal of this exercise isn’t simply to collect data for data’s sake. It’s to offer consumers a more customized and streamlined experience. How does that work? There was a demo in the keynote illustrating a woman’s experience with a hotel brand.

Brad Rencher, EVP and GM at Adobe Experience Cloud explains Adobe’s Cloud offerings. Photo: Jeff Bottari/Invision for Adobe/AP Images

The mythical woman started a reservation for a trip to New York City, got distracted in the middle and was later “reminded” to return to it via Facebook ad. She completed the reservation and was later issued a digital key to her room, allowing her to bypass the front desk check-in process. Further, there was a personal greeting on the television in her room with a custom message and suggestions for entertainment based on her known preferences.

As one journalist pointed out at the press event, this level of detail from the hotel is not something that would thrill him (beyond the electronic check-in). Yet there doesn’t seem to be a way to opt out of that data collection (unless you live in the EU and will be subject to GDPR rules).

Consumers may want more control

As it turns out, that reporter wasn’t alone. According to a survey conducted last year by The Economist Intelligence Unit in conjunction with ForgeRock, an identity management company, consumers are not the willing sheep that tech companies may think they are.

The survey was conducted last October with 1,629 consumers participating from eight countries including Australia, China, France, Germany, Japan, South Korea, the UK and the US. It’s worth noting that survey questions were asked in the context of Internet of Things data, but it seems that the results could be more broadly applied to any types of data collection activities by brands.

There are a couple of interesting data points that brands should perhaps heed as they collect customer data in the fashion outlined by Adobe, particularly as it relates to the central customer profile that Adobe and other marketing software companies are trying to build. When asked to rate the statement “I am uncomfortable with companies building a ‘profile’ of me to predict my consumer behaviour,” 39 percent strongly agreed and another 35 percent somewhat agreed. That would suggest that consumers aren’t necessarily thrilled with this idea.

When presented with the statement “Providing my personal information may have more drawbacks than benefits,” 32 percent strongly agreed and 41 percent somewhat agreed.

That would suggest that it is on the brand to make it clearer to consumers that they are collecting that data to provide a better overall experience, because it appears that consumers who answered this survey are not necessarily making that connection.

Perhaps it wasn’t a coincidence that at a press conference after the Day One keynote announcing the unified customer experience record, many questions from analysts and journalists focused on notions of privacy. If Adobe is helping companies gather and organize customer data, what role does it have in how its customers use that data, what role does the brand have, and how much control should consumers have over their own data?

These are questions we seem to be answering on the fly. The technology is here now or very soon will be, and wherever the data comes from, whether the web, mobile devices or the Internet of Things, we need to get a grip on the privacy implications — and we need to do it quickly. If consumers want more control as this survey suggests, maybe it’s time for companies to give it to them.

How Deep Visibility Saves You Time

In September 2017, we announced a new module – Deep Visibility – to search for Indicators of Compromise (IoCs) and hunt threats.  The feedback from our early adopters has been very positive and we would like to share some thoughts on how Deep Visibility saves time.

EPP+EDR in a Single Agent

Deep Visibility (DV) is now a built-in component of agent version 2.5 and can be enabled through a policy configuration, without requiring the installation of another agent.  DV collects information of various types, and the types collected can also be controlled through the policy.

The combination of EPP and EDR in a single, purpose-built agent results in significant time savings from a deployment, management, and capability standpoint.

Always On and On All Platforms

DV collects and streams information from agents into the SentinelOne Management Console.  The protocol uses compression and optimization to reduce bandwidth costs. More importantly, the information is available for threat hunting even when a compromised device is not.  DV is also available on all platforms – Windows, Mac and Linux.  Many customers who were previously using osquery for threat hunting on Linux are now switching to DV, as it provides cross-platform support with better manageability and a better user interface.  By offering a single-pane view into IoCs and equivalent capabilities on all platforms, DV saves time for our customers – they do not have to deploy different tools for different platforms.

Encrypted Traffic Visibility Directly from the Endpoint

With 70%+ of traffic now encrypted, existing tools fall short, allowing only unencrypted traffic to be visible and searchable.  The DV module enables visibility into all network traffic – even encrypted traffic – without requiring any changes to network topology.  This lets you track users compromised by a phishing attack, lateral movement within the network, and data exfiltration attempts.  In the example below, you can see the full URL that I visited after receiving an email with an account activation link.

You save time and money by not having to deploy additional third-party hardware or certificates.

Extended Storage

The 2017 Trustwave Global Security report claims an average dwell time of 49 days.  Deep Visibility data is kept indexed and available for search for 90 days to cover even such an extended time period.  After 90 days, the data is retired from the indices, but stored for 12 months.  It is also available for customers to export into their own security tools and data lakes.  We’re proud to offer our customers such a lengthy repository to enable maximum forensic value of the module.   With other tools that offer shorter retention periods, you would have to re-load older data from your repository (if you have one) or re-construct the data using forensics tools like EnCase or eCat.

 

File Integrity Monitoring

The data collected by Deep Visibility can also be used for meeting file integrity needs, as every file change is tracked. We will soon be introducing a watchlist that alerts you whenever a file is modified by an unauthorized user.   We save you the hassle of deploying a File Integrity tool like Tripwire.

I close by inviting our customers and security professionals to try Deep Visibility.  We look forward to working with you to make the world a safer place – and giving you industry-first real-time visibility of this commitment in the modules and features we constantly ship. We will be hosting a webinar on Deep Visibility on the 5th of April at 10am PT. This will feature Jim Jaeger, former Director of Operations at the NSA, as well as a demo on SentinelOne’s Deep Visibility capabilities. Register here.

 


How Deep Visibility helps you against Phishing

As of today, most network traffic is encrypted. This improves privacy but eliminates the option for network products to see the traffic. Google has played a significant role, pressuring websites to adopt HTTPS, and its Jigsaw unit recently announced a tool that allows anyone to set up and run their own ‘homebrew’ VPN. According to Google:

  • Over 68% of Chrome traffic on both Android and Windows is now protected
  • Over 78% of Chrome traffic on both Chrome OS and Mac is now protected
  • 81 of the top 100 sites on the web use HTTPS by default

Source: Google

The bad guys know network security products cannot see encrypted traffic

Despite being one of the oldest tricks on the web, phishing continues to be a significant problem for organizations. Your users are your assets, but also part of the security problem.

Our study of 500 business leaders across the US, UK, Germany, and France, which looked at how ransomware affects their businesses, uncovered several trends:

  1. 66% of enterprises experienced ransomware originating from phishing via email or social networks.
  2. 44% experienced a drive-by download caused by clicking on a compromised website.

Phishing sites try to trick users into entering credentials, personal information, and so on. To do this, they want to avoid the browser’s “not secure” indication.

SentinelOne will automatically mitigate malicious attempts incident by incident, while Deep Visibility gets to the root of them. By looking into encrypted traffic, it reveals the chain of events leading up to a compromise attempt in a way no other solution can.

Demo

In the following video, you can see how to identify phishing attempts on your users.




Step by step

The starting point would be a tweet stating:

Now, you might want to check whether there is any evidence of this campaign inside your network. A simple search would show you 21 entries for this encrypted URL.

Conclusion

SentinelOne Deep Visibility is a simple way to gain visibility into your assets, including the growing blind spot of encrypted traffic. It allows your team to better understand security incidents, monitor phishing attempts against your users, and identify data leakage across assets, all through a simple, straightforward interface that lets you automate and connect it to the other products in your portfolio.

We will be hosting a webinar on Deep Visibility on the 5th of April at 10am PT. This will feature Jim Jaeger, former Director of Operations at the NSA, as well as a demo on SentinelOne’s Deep Visibility capabilities. Register here.


Deep Hooks: Monitoring native execution in WoW64 applications – Part 3

Introduction

Last time (part 1, part 2) we demonstrated several different methods for injecting 64-bit modules into WoW64 processes. This post will pick up where we left off and describe how the ability to execute 64-bit code in such processes can be leveraged to hook native x64 APIs. To accomplish this task, the injected DLL must host a hooking engine capable of operating in the native region of a WoW64 process. Unfortunately, none of the hooking engines we inspected can manage this out of the box, so we had to modify one of them to suit our needs.

Creating the injected DLL

 

Choosing the hooking engine to adapt

Hooking is a well-established technique in the field of computer security, used extensively by defenders and attackers alike. Since the publication of the seminal article “Detours: Binary Interception of Win32 Functions” back in 1999, many different hooking libraries have been developed. Most of them exercise concepts similar to the ones presented in that article, but differ in other aspects such as their support for various CPU architectures, support for transactions, etc. Out of these libraries, we had to choose the one which best fit our requirements:

  1. Support inline hooking of x64 functions.
  2. Open-sourced and free-licensed – so that we could modify it legally.
  3. Preferably, the hooking engine should be relatively minimal so as to require as few modifications as possible.

After taking all of these requirements into consideration, we chose MinHook as our go-to engine. What eventually tipped the scales in favor of it was its small codebase, making it relatively easy to use in a PoC. All of the modifications presented later were done on top of it, and might be slightly different if another hooking engine is used instead.

The complete source code of our modified hooking engine is available here.

Look ma’, no dependencies!

In part 1 we briefly mentioned that not every 64-bit module can be easily loaded into a WoW64 process. Most DLLs tend to use (both implicitly and explicitly) various functions found in common Win32 subsystem DLLs, such as kernel32.dll, user32.dll, etc. However, the 64-bit versions of these modules are not loaded by default into WoW64 processes, as they are not required for the WoW64 subsystem to operate. Furthermore, due to some limitations imposed by the address space layout, forcing the process to load any of them is somewhat difficult and unreliable.

To avoid unnecessary complications, we chose to modify our hooking engine and the DLL that hosts it so that they would only rely on native 64-bit modules found normally in WoW64 processes. Basically, this left us with just the native NTDLL, as the DLLs comprising the WoW64 environment don’t usually contain functions which are beneficial to us.

In a more practical sense, to force the build environment to only link against NTDLL, we specify the /NODEFAULTLIB flag in the linker settings, and explicitly add “ntdll.lib” to the list of additional dependencies:

Figure 19 – linker configuration for the host DLL
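For reference, here is a minimal sketch of what that setup might look like with an MSVC toolchain (an assumption on our part; the figure itself shows the project settings). Note that /NODEFAULTLIB has to go in the project’s linker options or on the link.exe command line, while the ntdll.lib dependency can also be expressed in source:

```cpp
// Sketch of the host DLL's build setup (assumption: MSVC toolchain).

// Pull in ntdll.lib explicitly, since it is the only import library we allow.
#pragma comment(lib, "ntdll.lib")

// Equivalent link.exe invocation (illustrative only):
//   link.exe /DLL /NODEFAULTLIB /OUT:injected64.dll *.obj ntdll.lib
```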

API re-implementation

The first and most noticeable effect of this change is that higher-level Win32 API functions are no longer at our disposal and have to be re-implemented using their NTDLL counterparts. As demonstrated in figure 20, for every Win32 API used by MinHook, we introduced a replacement function which has the same public interface and implements the same core functionality, while internally using only NTDLL facilities.

Most of the time these “translations” were rather straightforward (for example calls to VirtualProtect() can be replaced almost directly with calls to NtProtectVirtualMemory()). In other, more complex cases the mapping between the Win32 API functions and their native counterparts is not as clear, so we had to resort to some reverse engineering or peeking inside the ReactOS sources.


Figure 20 – private implementation of VirtualProtect()
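To make this concrete, here is a minimal sketch of such a replacement (not the exact code shown in the figure): a VirtualProtect() stand-in built on NtProtectVirtualMemory(). The prototype is declared manually, on the assumption that the ntdll.lib we link against (for example, the one shipped with the WDK) exports the symbol.

```cpp
#include <windows.h>
#include <winternl.h>

// Not in the public SDK headers; declared manually. Assumption: the ntdll.lib
// we link against exports NtProtectVirtualMemory.
extern "C" NTSTATUS NTAPI NtProtectVirtualMemory(
    HANDLE ProcessHandle, PVOID* BaseAddress, PSIZE_T RegionSize,
    ULONG NewProtect, PULONG OldProtect);

#ifndef NT_SUCCESS
#define NT_SUCCESS(s)       (((NTSTATUS)(s)) >= 0)
#endif
#define NtCurrentProcess()  ((HANDLE)(LONG_PTR)-1)   // pseudo-handle, same as GetCurrentProcess()

// Drop-in replacement for VirtualProtect() that depends only on NTDLL.
static BOOL MyVirtualProtect(LPVOID lpAddress, SIZE_T dwSize,
                             DWORD flNewProtect, PDWORD lpflOldProtect)
{
    PVOID  base = lpAddress;
    SIZE_T size = dwSize;
    NTSTATUS status = NtProtectVirtualMemory(NtCurrentProcess(), &base, &size,
                                             flNewProtect, (PULONG)lpflOldProtect);
    return NT_SUCCESS(status) ? TRUE : FALSE;
}
```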

Project configuration

After re-implementing all Win32 API calls in MinHook, we were still left with a bunch of errors:

Figure 21 – a bunch of errors

Luckily, solving most of these errors only requires slight configuration changes to the project. As can be seen in the figure, most of the errors take the form of an unresolved external symbol normally exported from the CRT (which is not available). They can be settled by changing a few compiler and linker settings:

  • Disable basic runtime checks (remove the /RTC flag from the command line)
  • Disable buffer security checks (/GS- flag)
  • The entry point must be explicitly specified as DllMain since DllMainCRTStartup is not linked in.
  • Additionally, memcpy() and memset() have to be implemented manually, or replaced with calls to RtlCopyMemory() and RtlFillMemory() exported from NTDLL.
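As an illustration of the last point, here is one minimal, CRT-free way these two routines might be provided (a sketch, not the code used in the original project):

```cpp
#include <string.h>   // declarations of memcpy/memset and size_t

// Tell MSVC to treat these as ordinary functions so we are allowed to define
// them ourselves (under /Oi they are otherwise compiler intrinsics).
#pragma function(memcpy, memset)

extern "C" void* memcpy(void* dst, const void* src, size_t n)
{
    unsigned char* d = (unsigned char*)dst;
    const unsigned char* s = (const unsigned char*)src;
    while (n--) *d++ = *s++;          // simple byte-wise copy, no CRT needed
    return dst;
}

extern "C" void* memset(void* dst, int value, size_t n)
{
    unsigned char* d = (unsigned char*)dst;
    while (n--) *d++ = (unsigned char)value;
    return dst;
}
```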

After applying all of these changes, we successfully created a custom 64-bit DLL that contains no dependencies other than NTDLL:

Figure 22 – a minimalistic DLL with only a single import descriptor (NTDLL)

Hooking the native NTDLL

Having modified our hooking engine to accommodate all the aforementioned limitations, we can take a closer look at the hooking mechanism itself. The hooking technique employed by MinHook, as well as by the vast majority of such libraries, is dubbed “inline hooking”. The inner workings of this technique are rather well-documented, but here is a simplified description of the steps involved:

  1. Allocate a “trampoline” in the process’ address-space and copy into it the prolog of the function that will eventually be hooked.
  2. Place a JMP instruction in the trampoline, right after the copied prolog. This JMP should point back to the instruction following the prolog of the original function.
  3. Place another JMP instruction in the trampoline, right before the copied prolog. This JMP should point into a detour function (usually found in the DLL we previously injected into the process).
  4. Overwrite the hooked function prolog with a JMP instruction pointing to the trampoline.

Figure 23.1 – a general illustration of what an inline hook looks like.

Figure 23.2 – a view of the trampoline. Marked in red is the jump to the detour function, and marked in green are the instructions copied from the hooked function and the jump back into it.

This hooking method works by modifying the prolog of the hooked function, so whenever it is called by the application, the detour function will be called instead. The detour function can then execute any code before, after or instead of the original function.
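To ground this, here is roughly how a hook might be installed through the modified engine once it is running inside the target’s native region. This is a sketch: the MinHook calls (MH_Initialize, MH_CreateHook, MH_EnableHook) are the library’s public API, while the way the target address is resolved (e.g., by walking the native NTDLL’s export table) is assumed and left out.

```cpp
#include <windows.h>
#include <winternl.h>
#include "MinHook.h"   // our modified, NTDLL-only build of MinHook

// Prototype of the native API we want to intercept (matches figure 27).
typedef NTSTATUS (NTAPI *NtAllocateVirtualMemory_t)(
    HANDLE ProcessHandle, PVOID* BaseAddress, ULONG_PTR ZeroBits,
    PSIZE_T RegionSize, ULONG AllocationType, ULONG Protect);

static NtAllocateVirtualMemory_t g_origNtAllocateVirtualMemory = nullptr;

// Detour: runs instead of the original, then forwards through the trampoline.
static NTSTATUS NTAPI HookedNtAllocateVirtualMemory(
    HANDLE ProcessHandle, PVOID* BaseAddress, ULONG_PTR ZeroBits,
    PSIZE_T RegionSize, ULONG AllocationType, ULONG Protect)
{
    // ... inspect or log the allocation request here ...
    return g_origNtAllocateVirtualMemory(ProcessHandle, BaseAddress, ZeroBits,
                                         RegionSize, AllocationType, Protect);
}

// pTarget is the address of ntdll!NtAllocateVirtualMemory, resolved elsewhere.
bool InstallHook(void* pTarget)
{
    if (MH_Initialize() != MH_OK)
        return false;
    if (MH_CreateHook(pTarget, (LPVOID)HookedNtAllocateVirtualMemory,
                      (LPVOID*)&g_origNtAllocateVirtualMemory) != MH_OK)
        return false;
    return MH_EnableHook(pTarget) == MH_OK;
}
```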

In 64-bit mode, most hooking engines use two different types of jumps for the hooked function and the trampoline:

  1. The jump from the hooked function to the trampoline is a relative jump encoded as “E9 <rel32>”. Since this instruction operates on a DWORD-sized operand, the trampoline must be no more than 2GB away from the hooked function. This form of jump is usually chosen for this step since it only takes up 5 bytes, so it’s compact enough to fit neatly inside the function’s prolog.
  2. The jumps from the trampoline into the detour function and back to the hooked function, shown in figure 23.2, are indirect, RIP-relative jumps encoded as “FF 25 <offset>” (mnemonic form: JMP qword ptr [rip+offset]). This instruction jumps to a 64-bit absolute address stored in the location pointed to by RIP plus the offset.

When running in native 64-bit processes, hooking engines employing this technique work just fine. As can be expected, the trampoline is allocated a short distance from the target function (up to 2GB away), thus allowing for successful binary instrumentation.

However, some recent changes to the memory layout of WoW64 processes guarantee that this technique cannot be applied to the native NTDLL as-is. As Alex Ionescu demonstrated in his blog, in recent Windows versions (starting from Windows 8.1 Update 3), the native NTDLL has been relocated: instead of being loaded into the lower 4GB of the address space together with the rest of the process’ modules, it is now loaded at a much higher address.

Figure 24 – base address of 64-bit NTDLL on Windows 10 (left) and Windows 7 (right).

The rest of the address space above the 4GB boundary (with the exception of the native NTDLL and the native CFG bitmap) is protected with a SEC_NO_CHANGE VAD and thus cannot be accessed, allocated or freed by anyone. This means that the trampoline will always be allocated in the lower 4GB of the address space. Since the total user-mode address space on 64-bit systems is 128TB, the distance between the native NTDLL and the trampoline is bound to be much greater than 2GB. That makes the relative JMP emitted by most hooking engines inadequate.

Figure 25 – illustration showing the control transfers required for inline hooks in WoW64 processes on Windows 8.1 and up. Note that in Windows 10 RS4 preview (build 17115) the SEC_NO_CHANGE VADs don’t seem to exist anymore and memory can be allocated anywhere in the process address space.

An alternative form of JMP

To overcome this issue, we had to replace the relative JMP with a different instruction, capable of passing a distance of up to 128TB. When searching for alternatives, we bumped into a post by Gil Dabah listing a few possible options. After disqualifying every option that “dirties” a register, we were left with only a couple of viable choices. Initially, we attempted to replace the relative JMP with an indirect, RIP-relative JMP similar to the ones used in the trampoline:
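The listing itself appears as an image in the original post; as a sketch, the 14-byte sequence in question is an RIP-relative indirect jump whose 8-byte operand sits immediately after the instruction:

```cpp
#include <cstdint>
#include <cstring>

// One way to emit the 14-byte absolute jump discussed above:
//   FF 25 00 00 00 00        jmp qword ptr [rip+0]
//   <8-byte absolute target>
// The operand immediately follows the instruction, so the RIP-relative
// displacement is zero. (Sketch only; a real engine also fixes up page
// protections and flushes the instruction cache.)
static void EmitAbsoluteJmp14(uint8_t* where, uint64_t target)
{
    static const uint8_t opcode[6] = { 0xFF, 0x25, 0x00, 0x00, 0x00, 0x00 };
    std::memcpy(where, opcode, sizeof(opcode));
    std::memcpy(where + 6, &target, sizeof(target));
}
```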


This instruction performed well on Windows 10, and provided us with a way to instrument various native API functions in WoW64 processes. But when testing the modified code on earlier Windows versions such as Windows 8.1 and Windows 7, it failed to create the hooks entirely. As it turns out, NTDLL functions in these Windows versions are shorter than their counterparts in Windows 10, and usually do not contain enough space to accommodate our chosen JMP instruction, which takes up 14 bytes.

Figure 26 – implementation of ZwAllocateVirtualMemory() in Windows 10 RS2 (left) and Windows 8.1 (right).

To make our DLL universal across all Windows versions, we had to find a shorter instruction, still capable of branching to the trampoline. Eventually, we came up with a solution that actually takes advantage of the trampoline’s location: since the trampoline must be allocated in the lower 4GB of the address space, the upper 4 bytes of its 8-byte address are zeroed out. This lets us use the following option, which only takes up 6 bytes:
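Again, the original listing is an image; sketched out, the 6-byte sequence is a PUSH of the trampoline’s 32-bit address followed by a RET:

```cpp
#include <cstdint>
#include <cstring>

// The 6-byte alternative:
//   68 <addr32>   push imm32   ; pushed as a sign-extended 64-bit value
//   C3            ret          ; pops that address and jumps to it
// Valid only because the trampoline lives below 2GB, so sign extension
// leaves the upper 4 bytes zero.
static void EmitPushRetJmp(uint8_t* where, uint32_t trampolineLow32)
{
    where[0] = 0x68;                                        // push imm32
    std::memcpy(where + 1, &trampolineLow32, sizeof(uint32_t));
    where[5] = 0xC3;                                        // ret
}
```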

This method works because, in x64 code, the PUSH instruction, when supplied with a 4-byte operand, actually pushes an 8-byte value onto the stack. The upper 4 bytes are filled by sign extension, meaning that as long as the 4-byte address is not greater than 2GB, they will be zeroed.

We then use a RET instruction, which pops an 8-byte address off the stack and jumps to it. Since we have just pushed the address of the trampoline onto the stack, that will be our return address.

Figure 27 – NtAllocateVirtualMemory() containing our modified hook. Notice the first two instructions, which push the address of the trampoline into the stack and immediately “return” to it.

There is only one problem left with this method, caused by CFG. As mentioned in part 2 of this series, all private memory allocations in WoW64 processes – including the trampolines used for our hooks – are marked exclusively in the WoW64 CFG bitmap.

Whenever we wish to execute the original API function from the detour, we first need to call the trampoline in order to run that function’s prolog. But, if our DLL is compiled with CFG, it will attempt to validate the trampoline’s address against the native CFG bitmap before calling it. Due to this mismatch, the validation will fail, resulting in the termination of the process.

The solution to this problem is rather straightforward – having control over the DLL’s configuration, we can simply compile it without enabling CFG. This is done by removing the /guard:cf flag from the compiler’s command line.

Preventing infinite recursions

The last issue to consider when adapting a hooking engine is infinite recursion. After the hooks are placed, whenever a call is made to a hooked function, that call will reach our detour instead. But our detour functions also execute their own code, which might itself make calls to hooked functions, leading us back to the detour. Unless handled carefully, this can lead to infinite recursion.

Figure 28 – infinite recursions when the hook on LdrLoadDll attempts to load another DLL

Normally there exists a simple solution for this problem: declaring a thread local variable, which counts the “depth” of the recursion we’re in, and only executing the code inside the detour function the first time (counter == 1):


Figure 29 – a thread local variable counting the depth of the recursion
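For illustration, here is a sketch of that conventional guard as it would look if implicit TLS were available, wrapped around a hypothetical LdrLoadDll detour (the prototype follows the commonly published, undocumented signature):

```cpp
#include <windows.h>
#include <winternl.h>

// Prototype per the commonly published (undocumented) LdrLoadDll signature.
typedef NTSTATUS (NTAPI *LdrLoadDll_t)(PWSTR SearchPath, PULONG DllCharacteristics,
                                       PUNICODE_STRING DllName, PVOID* BaseAddress);
static LdrLoadDll_t g_origLdrLoadDll = nullptr;   // filled in when the hook is created

// The conventional per-thread recursion counter (requires the CRT).
__declspec(thread) static int t_hookDepth = 0;

static NTSTATUS NTAPI HookedLdrLoadDll(PWSTR SearchPath, PULONG DllCharacteristics,
                                       PUNICODE_STRING DllName, PVOID* BaseAddress)
{
    ++t_hookDepth;
    if (t_hookDepth == 1)
    {
        // ... detour logic runs only at the outermost call ...
    }
    NTSTATUS status = g_origLdrLoadDll(SearchPath, DllCharacteristics,
                                       DllName, BaseAddress);
    --t_hookDepth;
    return status;
}
```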

Unfortunately, we cannot use thread local variables in our DLL, for two reasons:

  1. Implicit TLS (__declspec(thread)) relies heavily on the CRT, which is not available to us.
  2. Explicit TLS APIs (TlsAlloc() / TlsFree(), etc.) are implemented in their entirety in kernel32.dll, whose 64-bit version is not loaded into WoW64 processes.

In spite of these constraints, wow64.dll does use TLS storage, as can be verified by looking at the output of the “!wow64exts.info” command:

Figure 30 – TLS variables used by the WoW64 DLLs

As it turns out, wow64.dll does not dynamically allocate TLS slots at runtime but rather uses hardcoded locations in the TlsSlots array accessible directly from the TEB (already instantiated on a per-thread basis).

Figure 31 – Wow64SystemServiceEx writes a thread local variable into a hardcoded location in the TlsSlots array

After some empirical testing, we discovered that most TLS slots in the 64-bit TEB are never used by WoW64 DLLs, so for the purpose of this PoC we can just pre-allocate one of them to store our counter. There is no guarantee that this slot will remain unused in future Windows versions, so production-grade solutions would probably look into some other available members of the TEB.

Figure 32 – using an unused member of the TEB to count the “depth” of our recursion.
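A sketch of that approach is below. Both the TlsSlots offset and the particular slot index are illustrative assumptions, not values taken from the original post, and neither is guaranteed to hold across Windows versions:

```cpp
#include <windows.h>

// Assumptions for this sketch: TlsSlots[] sits at offset 0x1480 in the 64-bit
// TEB, and slot 63 is unused by the WoW64 DLLs. Verify before relying on either.
static const size_t kTlsSlotsOffset = 0x1480;   // TEB64.TlsSlots (assumed)
static const size_t kSpareSlotIndex = 63;       // assumed unused slot

static LONG_PTR* RecursionCounter()
{
    BYTE*  teb   = (BYTE*)NtCurrentTeb();
    PVOID* slots = (PVOID*)(teb + kTlsSlotsOffset);
    return (LONG_PTR*)&slots[kSpareSlotIndex];
}

// Usage inside a detour:
//   LONG_PTR* depth = RecursionCounter();
//   ++*depth;
//   if (*depth == 1) { /* run detour logic */ }
//   ... call the original via the trampoline ...
//   --*depth;
```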


Conclusion

This concludes the third and final part of our “Deep Hooks” series. In these three posts we presented several different ways to inject a 64-bit DLL into a WoW64 process and then use it to hook API functions in the 64-bit NTDLL. Hopefully, this capability will benefit security products by allowing them to gain better visibility into WoW64 processes and making them more resilient to bypasses such as “Heaven’s Gate”.

The methods presented throughout this series still have their limitations, found in the form of new mitigation options such as dynamic code restrictions, CFG export suppression and code integrity guard. When enabled, these might prevent us from creating our hooks or thwart our injection altogether, but more on that in some future post.


New Freemium Offer Mines Cryptocurrency

Freemium software is certainly nothing new.  These are free apps whose premium features come at the cost of ads displayed while you use them, or of a small fee to have the ads removed.  At least one company is trying a new business model on for size, albeit with limited success.

The company is Qbix, and its freemium app is called “Calendar 2.”  It’s a solid calendar app with more features than Apple’s default app, and Qbix offers its users premium features if they’re willing to let the company use their CPU cycles to mine cryptocurrency.

Hackers around the world have been enslaving the computers of unsuspecting users and using their processing power to mine cryptocurrency, all while making millions in the process. However, this is the first instance we’ve seen of a company attempting to bring the business model mainstream.

Unfortunately, there were two issues with the release of the latest version.  First, a bug in the way the mining feature was implemented kept it running even when users opted out of the default setting (which is, of course, to accept the arrangement).

Second, and even more disturbing, the mining software consumed twice as much processing power as the calendar app claimed that it would.  Both flaws were discovered by Calendar 2 users, who did not have nice things to say about the app and expressed their concern that Apple had allowed the app on the App Store in the first place.

For Apple’s part, the company seems to have no problem with the revenue scheme, provided that the offering company gets the consent of the user. Although given Calendar 2’s less-than-spectacular success with the idea, the company may well change its Terms of Service to forbid it going forward.

 

 

Used with permission from Article Aggregator

Salesforce introduces Integration Cloud on heels of MuleSoft acquisition

Salesforce hasn’t wasted any time turning the MuleSoft acquisition into a product of its own, announcing the Salesforce Integration Cloud this morning.

While in reality it’s too soon to really take advantage of the MuleSoft product set, the company is laying the groundwork for its eventual integration into the Salesforce family with this announcement, which really showcases why Salesforce was so interested in MuleSoft that it was willing to fork over $6.5 billion.

The company has decided to put its shiny new bauble front and center in the Integration Cloud announcement, so that once MuleSoft is in the fold, it will have a place to hit the ground running.

The Integration Cloud itself consists of three broad pieces: the Integration Platform, which will eventually be based on MuleSoft; Integration Builder, a tool that lets you bring together a complete picture of a customer from Salesforce tools as well as other enterprise data repositories; and finally Integration Experiences, which is designed to help brands build customized experiences based on all the information you’ve learned from the other tools.

For now, it involves a few pieces that are independent of MuleSoft including a workflow tool called Lightning Flow, a new service that is designed to let Salesforce customers build workflows using the customer data in Salesforce CRM.

It also includes a dash of Einstein, Salesforce’s catch-all brand for the intelligence layer that underlies the platform, to build Einstein intelligence into any app.

Salesforce also threw in some Trailhead education components to help customers understand how to best make use of these tools.

But make no mistake, this is a typical Salesforce launch. It is probably earlier than it should be, but it puts the idea of integration out there in the minds of its customers and lays a foundation for a much deeper set of products and services down the road when MuleSoft is more fully integrated into the Salesforce toolset.

For now, it’s important to understand that this deal is about using data to fuel the various pieces of the Salesforce platform and provide the Einstein intelligence layer with information from across the enterprise wherever it happens to live, whether that’s in Salesforce, another cloud application or some on-prem legacy systems.

This should sound familiar to folks attending the Adobe Summit this week in Las Vegas, since it’s eerily similar to what Adobe announced on stage yesterday at the Summit keynote. Adobe is calling it a customer experience system of record, but the end game is pretty much the same: bringing together data about a customer from a variety of sources, building a single view of that customer, and then turning that insight into a customized experience.

That they chose to make this announcement during the Adobe Summit, where Adobe has announced some data integration components of its own, could be a coincidence, but probably not.