How Malware Persists on macOS

Whether it’s a cryptominer looking for low-risk money-making opportunities, adware hijacking browser sessions to inject unwanted search results, or malware designed to spy on a user, steal data or traverse an enterprise network, there’s one thing all threats have in common: the need for a persistent presence on the endpoint. On Apple’s macOS platform, attackers have a number of different ways to persist from one login or reboot to another. 

In this post, we review macOS malware persistence techniques seen in the wild and highlight other persistence mechanisms attackers could use if defenders leave the door open. Have your IT team and security solution got them all covered? Let’s take a look.

image of how malware persists on macOS

How To Persist Using a LaunchAgent

By far the most common way malware persists on macOS is via a LaunchAgent. Each user on a Mac can have a LaunchAgents folder in their own Library folder to specify code that should be run every time that user logs in. In addition, a LaunchAgents folder exists at the computer level which can run code for all users that log in. There is also a LaunchAgents folder reserved for the System’s own use. However, since this folder is now managed by macOS itself (since 10.11), malware is locked out of this location by default so long as System Integrity Protection has not been disabled or bypassed. 
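As a quick manual check, all three LaunchAgents locations can be enumerated from the Terminal. This is a minimal sketch: on a clean system the per-user folder may not exist at all, and the /System copy is included only for completeness since SIP protects it on 10.11 and later.

```shell
# List the contents of each LaunchAgents location; the /System copy is
# SIP-protected on 10.11+ and is shown here only for completeness.
for dir in "$HOME/Library/LaunchAgents" \
           "/Library/LaunchAgents" \
           "/System/Library/LaunchAgents"; do
  echo "== $dir =="
  ls -la "$dir" 2>/dev/null || echo "(folder not present)"
done
```

Comparing this listing over time, or against a known-good machine, makes newly written agents stand out.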

image of LaunchAgents folder

LaunchAgents take the form of property list files, which can either specify a file to execute or can contain their own commands to execute directly. 
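To make that concrete, here is a minimal, hypothetical LaunchAgent — the label `com.example.agent` and its echo payload are invented for illustration. The two fields that matter for persistence are `ProgramArguments` (what runs) and `RunAtLoad` (run it at every login):

```shell
# Write a minimal example LaunchAgent plist to /tmp for inspection.
# (A real agent would live in ~/Library/LaunchAgents/.)
cat > /tmp/com.example.agent.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.agent</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/sh</string>
        <string>-c</string>
        <string>echo hello</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
EOF
grep -c "RunAtLoad" /tmp/com.example.agent.plist   # prints: 1
```

On a real system, `plutil -lint` validates the syntax and launchd would pick the agent up at the next login (or immediately via `launchctl load`).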

image of plist template in installer
Since user LaunchAgents require no privileges to install, these are by far the easiest and most common form of persistence seen in the wild. Unfortunately, Apple took the controversial step of hiding the parent Library folder from users by default all the way back in OS X 10.7 Lion, making it easier for threat actors to hide these agents from unsavvy users.

Users can unhide this Library folder in a couple of different ways for manual checks, but enterprise security solutions should monitor the contents of this folder and block or alert on malicious processes that write to this location, as shown in this example from the SentinelOne console. The threat is autonomously blocked and the IT team is alerted to the IOCs, with references to the MITRE ATT&CK framework and convenient links to Recorded Future and VirusTotal detections.

image of sentinel one detects plist write

Persistence By LaunchDaemon

LaunchDaemons only exist at the computer and system level, and are technically reserved for persistent code that does not interact with the user – perfect for malware. The bar is raised for attackers, as writing a daemon to /Library/LaunchDaemons requires administrator-level privileges. However, since most Mac users are also admin users and habitually provide authorisation for software to install components whenever asked, the bar is not all that high and is regularly cleared by infections we see in the wild. In this image, the computer has been infected by three separate malicious LaunchDaemons.

image of infectious LaunchDaemons

Because LaunchDaemons run on startup and for every user even before a user logs in, it is essential that your security software is aware of what daemons are running and when any new daemons are written. As with System LaunchAgents, the System LaunchDaemons are protected by SIP so the primary location to monitor is /Library/LaunchDaemons.
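One way to keep an eye on that location is to print the program each daemon launches, so unexpected paths stand out. This is a rough sketch using a crude grep; on macOS, `plutil -p` on each plist gives much cleaner output, and some files may need admin rights to read.

```shell
# Print the target of every LaunchDaemon plist so unexpected
# executables stand out at a glance.
for plist in /Library/LaunchDaemons/*.plist; do
  [ -e "$plist" ] || continue   # glob matched nothing
  echo "== $plist =="
  grep -A 3 -E "<key>Program(Arguments)?</key>" "$plist" 2>/dev/null \
    | grep -o "<string>[^<]*</string>" | head -3
done
```

Any daemon whose program path points into a user-writable location deserves a close look.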

Don’t just assume labels you recognize are benign either. Some legitimate LaunchDaemons point to unsigned code that could itself be replaced by something malicious. For example, the popular networking program Wireshark uses a LaunchDaemon,


that executes unsigned code at the path:

/Library/Application Support/Wireshark/ChmodBPF/ChmodBPF

Even Apple itself uses a LaunchDaemon that isn’t always cleaned up immediately such as


This points to an executable in the /macOS Install Data folder that could be replaced by malicious code.

image of macOS install data

Remember that with privileges, an attacker can modify either the program arguments of these property lists or the executables that they point to in order to achieve stealthy persistence. Since these programs will run with root privileges, it’s important that you or your security solution isn’t just blanket whitelisting code because it looks like it comes from a legitimate vendor.

Persistence with Profiles

Profiles are intended for organizational use to allow IT admins to manage machines for their users, but their potential for misuse has already been spotted by malware authors. As profiles can be distributed via email or a website, tricking users into inadvertently installing them is just another element of social engineering.

image of profiles man page

Configuration profiles can force a user to use certain browser settings, DNS proxy settings, or VPN settings. Many other payloads are possible which make them ripe for abuse. 

image of profiles pane
Profiles can be viewed by users in the Profiles pane of System Preferences, and by administrators by enumerating the /Library/Managed Preferences folder. Be aware that neither the pane nor the folder will be present on a system where profiles have never been installed.
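A command-line check is also possible. Note the hedge: the `profiles` tool’s flags have changed across macOS versions, and device-level queries require admin rights, so this sketch falls back gracefully where the tool or folder is absent.

```shell
# Enumerate installed profiles from the command line, with fallbacks
# for systems where the tool or folder does not exist.
if command -v profiles >/dev/null 2>&1; then
  profiles list 2>/dev/null || echo "no user-level profiles reported"
else
  echo "profiles tool not available (not macOS?)"
fi
ls "/Library/Managed Preferences" 2>/dev/null \
  || echo "no Managed Preferences folder (no profiles ever installed?)"
```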

image of Profiles infection

Cron Still Persists on macOS

The venerable old cron job has not been overlooked by malware authors. Although Apple has announced that new cron jobs will require user interaction to install in 10.15 Catalina, it’s unlikely that this will do much to hinder attackers using it as a persistence method. As we’ve noted before, user prompts are not an effective security measure when the user has already been tricked into installing the malicious software under the guise of something else. There’s overwhelming evidence to suggest that users escape ‘death by dialog’ by simply clicking everything without paying attention to what the dialog alert actually says.

Malicious cron jobs are used by AdLoad and Mughthesec malware, among others, to achieve persistence.
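Checking for cron persistence is straightforward. A sketch: the spool path below is where macOS keeps per-user crontabs, and reading other users’ tabs requires root.

```shell
# The current user's crontab, if any:
crontab -l 2>/dev/null || echo "no crontab for $USER"
# All users' crontabs live in the cron spool on macOS:
ls /usr/lib/cron/tabs 2>/dev/null || echo "cron spool not readable (needs root?)"
```

Any entry that launches a script from a hidden or user-writable location is worth investigating.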

image of cron infection

Kexts for Persistence

Kernel extensions are widely used by legitimate software for persistent behavior, and we’ve seen them also used by so-called PUP software like MacKeeper. An open-source keylogger, logkext, has also been around for some years, but in general kexts are not a favoured trick among malware authors as they are comparatively difficult to create, lack stealth, and can be easily removed.

How to Find Persistent Login Items 

Changes made by Apple to Login Items have, on the other hand, resulted in more attractive opportunities for malware persistence. Once upon a time, Login Items were easily enumerated through the System Preferences utility, but a newer mechanism makes it possible for any installed application to launch itself at login time simply by including a Login Item in its own bundle. While the intention of this mechanism is for legitimate developers to offer control of the login item through the app’s user interface, unscrupulous developers of commodity adware and PUP software have been abusing this as a persistence trick as it’s very difficult for users to reliably enumerate which applications actually contain a bundled login item. 

While it’s not a simple matter for users to enumerate all the Login Items, admins can do so with a little extra work by parsing the following file, if it exists:

~/Library/Application Support/

A method of doing so was first written up by security researcher Patrick Wardle, but that still requires some programming skill to implement. A more user-friendly AppleScript version that can be cut and pasted into the macOS Script Editor utility and run more conveniently is available here.

AppleScript & Friends

While on the subject of AppleScript, Apple’s most useful “Swiss Army knife” tool somewhat unsurprisingly also has some persistence mechanisms to offer. The first leverages Folder Actions and allows an attacker to execute code – which could even be read into memory remotely – every time a particular folder is written to. This remarkably clever way of enabling a fileless malware attack by repurposing an old macOS convenience tool was first written up by Cody Thomas.

image of folder actions

Admins with security solutions that do not have behavioral AI detection should monitor processes executing with osascript and ScriptMonitor in the command arguments to watch out for this kind of threat.
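A crude manual approximation of that monitoring is a one-shot process listing — a sketch only, since a single snapshot is no substitute for continuous behavioral monitoring:

```shell
# One-shot look for AppleScript execution: any process whose command
# line mentions osascript or ScriptMonitor.
ps axo pid,command | grep -E "osascript|ScriptMonitor" | grep -v grep \
  || echo "no osascript/ScriptMonitor processes right now"
```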

An even more wily trick leverages Mail rules, either local or iCloud-based, to achieve persistence by triggering code after sending the victim an email with a specially-crafted subject line. This method is particularly stealthy and will evade many detection tools.

Defenders can manually check for the presence of suspicious Mail rules by parsing the ubiquitous_SyncedRules.plist file and the SyncedRules.plist file for iCloud and local Mail rules, respectively. A quick bash script such as 

grep -A1 "AppleScript" ~/Library/Mail/V6/MailData/SyncedRules.plist

will enumerate any Mail rules that are calling AppleScripts. If any are found, those will then need to be examined closely to ensure they are not malicious.
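Note that the `V6` folder name in the path above is specific to Mojave-era Mail; the version number differs across macOS releases, so a slightly more robust sweep globs it and checks both rule files:

```shell
# Check both local (SyncedRules.plist) and iCloud
# (ubiquitous_SyncedRules.plist) Mail rules, whatever the Mail data
# folder version happens to be on this system.
for f in ~/Library/Mail/V*/MailData/SyncedRules.plist \
         ~/Library/Mail/V*/MailData/ubiquitous_SyncedRules.plist; do
  [ -e "$f" ] || continue   # glob matched nothing
  echo "== $f =="
  grep -A1 "AppleScript" "$f" || echo "(no AppleScript rules)"
done
```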

Also Ran: Forgotten Persistence Tricks 

For those who remember them, rc.common and launchd.conf no longer work on macOS, and support for StartupItems also appears to have been removed after 10.9 Mavericks.

Even so, other old *nix tricks do still work, and while we’ve yet to see any of the following persistence mechanisms used in the wild, they are worth keeping an eye on. These tricks include using periodics, LoginHooks, at jobs, and the emond service.

Periodics As a Means of Persistence

Periodics are system scripts that are generally used for maintenance and run on a daily, weekly, and monthly schedule. Periodics live in similarly titled subfolders of the /etc/periodic folder.

image of periodic list

Listing the contents of each of the subfolders should reveal the standard set of periodics, unless your admins are using their own custom periodic scripts. If they are not, anything additional found in there should be treated as suspicious and inspected. Notice the unusual “uptime” script here, which will run on a daily basis without user interaction or notification.

image of periodic daily script

Also, be sure to check both /etc/defaults/periodic.conf and /etc/periodic.conf for system and local overrides to the default periodic configuration.
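Both checks can be scripted in a few lines. A sketch — compare the listing against a known-good machine or your configuration management manifest to spot additions:

```shell
# List every periodic script by schedule, then note any override files.
for d in /etc/periodic/daily /etc/periodic/weekly /etc/periodic/monthly; do
  echo "== $d =="
  ls "$d" 2>/dev/null || echo "(not present)"
done
for f in /etc/defaults/periodic.conf /etc/periodic.conf; do
  if [ -e "$f" ]; then echo "override file present: $f"; fi
done
```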

LoginHooks and LogoutHooks

LoginHooks and LogoutHooks have been around for years and are rarely used these days, but are still a perfectly viable way of running a persistence script on macOS Mojave. As the names suggest, these mechanisms run code when the user either logs in or logs out.

It’s a simple matter to write these hooks, but fortunately it’s also quite easy to check for their existence. The following command should return a result that doesn’t have either LoginHook or LogoutHook values:

sudo defaults read com.apple.loginwindow

If, on the other hand, it reveals a command or path to a script, then consider those worthy of investigation.
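A slightly more targeted version of that check reads just the two keys. This sketch queries the user-level domain; as noted above, the root-owned domain requires `sudo`, and the `defaults` tool only exists on macOS, so a fallback message is printed elsewhere.

```shell
# Query the loginwindow preferences for hook keys; a set key prints
# its value, an unset key prints a "not set" line instead.
if command -v defaults >/dev/null 2>&1; then
  for key in LoginHook LogoutHook; do
    val=$(defaults read com.apple.loginwindow "$key" 2>/dev/null) \
      && echo "SUSPICIOUS: $key = $val" \
      || echo "$key not set"
  done
else
  echo "defaults tool not available (not macOS?)"
fi
```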

image of loginhook

At Jobs: Run Once, Persist Forever

A much less well-known mechanism is at jobs. While these only run once and are not enabled by default on macOS, they are a sneaky way to run some code on restart. Running only once isn’t really a problem, since the at job can simply be re-written each time the persistence mechanism fires, and these jobs are very unlikely to be noticed by most users or indeed many less-experienced admins.

You can check whether any at jobs are scheduled by enumerating the /var/at/jobs directory. Jobs are prefixed with the letter a and have a hex-style name.
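In script form — a sketch, noting that on macOS the jobs directory needs root to read, and `atq` gives a friendlier listing wherever the at system is enabled:

```shell
# at jobs are files with names beginning "a" in the jobs directory.
ls /var/at/jobs 2>/dev/null | grep "^a" || echo "no at jobs found (or not readable)"
atq 2>/dev/null || echo "atq not available or queue empty"
```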

image of at jobs

Emond – The Forgotten Event Monitor

Sometime around OS X 10.5 Leopard, Apple introduced an event-monitoring mechanism called emond. It appears emond was never fully developed, and Apple may have abandoned it in favour of other mechanisms, but it remains available even on macOS 10.14 Mojave.

In 2016, James Reynolds provided the most comprehensive analysis to date of emond and its capabilities. Reynolds was not interested in emond from a security angle, but rather was documenting a little-known daemon from the perspective of an admin wanting to implement their own log scanner. Reynolds concludes his analysis with an interesting comment, though:

Considering how easy it is to log, run a command, or send an email in a perl script, I can’t see why I’d want to use emond instead of a script.

This little-known service may not be much use to a Mac admin, but to a threat actor one very good reason to use it would be as a persistence mechanism that most macOS admins probably wouldn’t know to look for.

Detecting malicious use of emond shouldn’t be difficult, as the System LaunchDaemon for the service looks for scripts to run in only one place:


Admins can easily check to see if a threat actor has placed anything in that location. 

image of emond clients folder

As emond is almost certainly not used in your environment for any legitimate reason, anything found in the emondClients directory should be treated as suspicious.


As the above mechanisms show, there are plenty of ways for attackers to persist on macOS. While some of the older ways are now defunct, the onus is still very much on defenders to keep an eye on the many possible avenues by which code execution can survive a reboot. To lessen that burden, it’s recommended that you move to a next-gen behavioral solution that can autonomously detect and block malware from achieving persistence. If you are not already protected by SentinelOne’s ActiveEDR solution, contact us for a free demo and see how it can work for you in practice.


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Zero Day Survival Guide | Everything You Need to Know Before Day One

Zero day. Perhaps the most frightening words for any IT leader to hear. For security researchers, zero days are one of the more fascinating topics, the crown jewel of hacking: a capability that can bypass traditional security measures, that might allow an attacker to run any code they want, or to penetrate any device. In this post, we will demystify what a zero day really is, how many are actually seen in the wild, what their impact has been, and how you can stay protected.


What is a Zero Day, Really?

The term “zero day” has come to describe one thing: a vulnerability or an attack vector that is known only to the attackers, so it can work without interruption from the defenders. You can think of it as a flaw in a piece of software, or sometimes even hardware. Here’s a typical lifecycle of an attack utilizing zero days to compromise devices:

  1. A vulnerability or new attack vector is discovered by a malware author.
  2. The capability is weaponized and proven to work.
  3. The zero day is kept secret and utilized by cyber criminals.
  4. The vulnerability is discovered by defenders.
  5. The OS or application vendor delivers a patch.
  6. The zero day is no longer a zero day.

With that said, here is a better scenario, based on responsible disclosure:

  1. A vulnerability or new attack vector is discovered by a hacker or a security researcher.
  2. The author reports it to the OS or application vendor.
  3. A patch is created, and released.
  4. The zero day is then published, crediting the hacker for the contribution, and sometimes even paying them for the responsible disclosure.

While the technical ability to discover a zero day (some would call it the ability to break things) is quite similar in both scenarios, the former is a crime that can cause huge damage, both financially and to a brand, while the latter is the right path to choose.

What is Not a Zero Day?

It is not uncommon to see the term ‘zero day’ used in marketing campaigns, to spread fear or just to demonstrate the risk associated with cyber attacks. The risk is definitely real, but the term is often used loosely. Here’s what a zero day is not.

1. Malware with an unknown hash or reputation
It is very easy to change existing malware to evade signature-based solutions. In fact, there is a lot of malware out there that uses this technique to evade legacy AV.

2. Malware that evades legacy AV string-based scans (e.g., ‘Yara’ rules)
The same goes for packers – compressing executables without changing the software – yet another common way to infect devices and avoid legacy AV, but not a zero day.

3. Attacks against unpatched vulnerabilities
If a patch is available, but you did not patch and you got infected, then it was not a zero day, and it means you need to reconsider your security program. The day Microsoft patched EternalBlue and other RCE exploits (14 March 2017), those vulnerabilities ceased to be zero day vulnerabilities. WannaCry, first detected on 12 May that year, arrived around Day 59 after the patch, not Day 0.
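That day count is easy to verify with a little date arithmetic (GNU `date` syntax shown; BSD/macOS `date` would use `-j -f '%Y-%m-%d'` instead):

```shell
# Days between the EternalBlue patch and WannaCry's first detection,
# computed in UTC to avoid daylight-saving off-by-one errors.
patch=$(date -u -d 2017-03-14 +%s)
outbreak=$(date -u -d 2017-05-12 +%s)
echo $(( (outbreak - patch) / 86400 ))   # prints: 59
```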

In-the-Wild, Zero Day Attacks

Thanks to a recently-shared dataset collated by Google’s Project Zero team spanning the years from mid-2014 to the present day, it’s possible to shed some light on how knowledge of actual zero days can help improve your security posture.

The dataset includes zero day exploits that were either detected in the wild or were found in circumstances where in-the-wild use is a reasonable inference. For example, it includes exploits developed by the Equation Group and leaked by the ShadowBrokers. Similarly, it includes tools leaked in the hack of the now-defunct Italian private intelligence firm Hacking Team.

In total, 108 zero day exploits were discovered between July 2014 and June 2019. On average, then, around 20 zero day exploits are detected in the wild each year, which naturally leads to the question: how many go undetected, and what percentage of the total do the detected ones represent?

Unfortunately, that will always remain an unknown. Assuming that attackers are not suffering 100% failure rate, however, defenders should think about their security solution in terms of where attacks might be getting through. Where do you lack visibility in your network? What are the bottlenecks in your response times that could be hiding an alert that was lost in the noise?

The data we have shows that the year 2015, with 28 discovered exploits, had by a small margin the highest number of attacks that leveraged zero day vulnerabilities. The lowest, 2018, only saw 12 detected zero days: a number almost equalled already in the first 6 months of 2019 with 10 detections.

image of in the wild zero day discoveries timeline

Attribution: Fundamental & Almost Impossible!

Knowing who is behind an attack is one of the most important mysteries to solve for a truly robust defensive strategy. Whether you are being targeted or just a victim of an indiscriminate attack on computer networks at large can play a crucial role in how your organization responds and allocates resources. 

And yet, attribution is probably the most difficult of all the tasks involved in defending against cybercrime. The entirety of the evidence will likely not lie only in artefacts and forensics on your particular network, and interpretation may equally demand knowledge of context that goes beyond your own organisation, particularly when thinking about nation state actors and APTs.

Of the 108 zero days, there are 44 for which no attribution has been claimed at all. Of the other 64, claims of attribution should be largely taken as ‘best guess’ for the reasons just noted.

With that in mind, the largest number of zero day exploits over the last 5 years appear to be from Russian and American nation state actors, respectively. APT 28, also known as Fancy Bear, Sofacy and several other names, were believed to be behind 10 of the zero day exploits detected in the wild. The Equation Group, widely believed to be a unit within the United States National Security Agency, were suspected of being behind 8 of the exploits.

Interestingly, 11 of the exploits discovered were attributed to two private intelligence firms, whose business relies on discovering or buying zero day exploits from other hackers and selling them on to third parties for profit. While their intended customers may be law enforcement or government organisations, the fact that one of these private firms, Hacking Team, were themselves hacked and had their exploits leaked online makes attribution even more difficult.

image of claimed attributions

What Products Have Been Affected By Zero Days?

Essential to analysing your risk is gauging just how vulnerable your own software stack is. As already noted, detected exploits tell us nothing about vulnerabilities being leveraged right now that remain undetected, but they can at least shine a light on areas you absolutely must be sure to cover.

As the next graph shows, Microsoft products are by far and away the largest vectors for zero day exploits, with Windows, Office, Internet Explorer and Windows Kernel making up four of the top five affected products. Combined, they account for 62 of the 108 exploits discovered. It won’t be a surprise to many to see Adobe’s Flash holding up second place, with 23 zero day vulnerabilities found in the multimedia platform. 

That has important implications. With such a large percentage of the vulnerabilities found in products from just two vendors, it’s clear that use of those vendors’ products should be effectively monitored in your environment as a priority.

image of affected products

What Vulnerabilities Have Led to Zero Days?

By far, most of the zero day vulnerabilities uncovered were due to memory corruption issues. These result in exploits based on buffer overflows and out-of-bounds read/writes, among others. 

Another 14 vulnerabilities were due to logic and/or design flaws such as improper validation. These allowed exploits such as sandbox escapes and remote privilege escalations. 

image of vulnerability types

How Can You Protect Against Zero Day Exploits?

With 108 zero days discovered over a period of 1,825 days, that works out at an average of a new zero day exploit in the wild every 17 days. And while that kind of statistic can be misleading – we know the reality is that many have been leaked in a single day – it does suggest that zero day exploits are not rare occurrences you can afford to ignore until the next research article or media headline. 

Start by ensuring you have a comprehensive approach to network security. Your defensive strategy needs to be proactively searching out weakpoints and blindspots. That means making sure all endpoints have protection, that admins have the ability to see into all network traffic, including encrypted traffic, and knowing exactly what is connected to your network, including Linux-powered IoT machines. 

Choose a security solution that does not just whitelist code from trusted sources or, equally as bad, put a blanket network-wide block on tools your employees need in their daily work, killing their productivity. Instead, look for an endpoint security tool that actively monitors for and autonomously responds to chains of anomalous code execution, and which can provide contextualized alerts for an entire attack chain. A solution like SentinelOne allows your employees to use the tools they need to get their work done while at the same time autonomously taking action against malicious code execution, whatever its source.

Finally, prepare for the next news headline in advance. When a zero day attack is next detected, be sure you have tools in place that can retrohunt across your entire network, and that can help you patch quickly and easily.


If there’s one thing we can learn from the last 5 years of zero day exploits, it is that zero days are a constant that you need to have a coordinated strategy to deal with. When the next news headline has everyone buzzing, be sure you have the ability to check, patch and defend against any attacker trying to leverage it against your network. If you’d like to see how the SentinelOne solution can help you do just that, we’d love to show you with a free demo.


Crane, a new early-stage London VC focused on ‘intelligent’ enterprise startups, raises $90M fund

Crane Venture Partners, a newish London-based early-stage VC targeting what it calls “intelligent” enterprise startups, is officially outing today.

Founded by Scott Sage and Krishna Visvanathan, who were both previously at DFJ Esprit, “Crane I” has had a second closing totalling $90 million, money the firm is investing in enterprise companies that are data-driven. Sage and Visvanathan are joined by Crane Partner Andy Leaver.

Specifically, Crane is seeking pre-Series A startups based in Europe, with a willingness to write the first institutional cheque. The firm is particularly bullish about London, noting that 90% of cloud and enterprise software companies that went public in the last 8-10 years opened their first international office in London. Investments already made from the fund include Aire, Avora, Stratio Automotive and Tessian.

Crane’s anchor LPs are MassMutual Ventures, the venture capital arm of Massachusetts Mutual Life Insurance Company (MassMutual), and the U.K. taxpayer funded British Patient Capital (BPC), along with other institutions, founders and VCs spanning the U.S., Europe and Asia. In addition, Crane has formed a strategic partnership with MassMutual Ventures to give Crane and its portfolio companies “deep access” to new markets and networks as they expand internationally.

Below follows an email Q&A with Crane founders Scott Sage and Krishna Visvanathan, where we discuss the new fund’s remit, why Crane is so bullish on the enterprise, London after Brexit, and why the enterprise isn’t so boring after all!

TC: Why does London and/or the world need a new enterprise focused VC?

SS: Just to correct you Steve, we’re an enterprise only seed fund 🙂 – which does make us somewhat unique. We back founders who have a differentiated product vision but who haven’t demonstrated the commercial metrics that our counterparts typically look for. We see opportunity and not just risk.

TC: It feels like years since I first heard you were both raising a fund together and of course I know that Crane has already made 20+ investments. So why did it take you so long to close and why are you only just officially announcing now?

KV: It was definitely a humbling experience and took us 12 months longer than we would have hoped! We held our first close for Crane I, our institutional fund, in July 2018, two and a half years from when we started raising. We had previously established a pre-cursor fund and started investing in Q1 2016, quietly building up our portfolio and presence. We had to hold off on discussing the fund until we concluded the final close a few weeks ago for regulatory and compliance reasons.

TC: You say that Crane is broadly targeting early-stage “intelligent” enterprise startups — as opposed to unintelligent ones! — but can you be more specific with regards to cheque size and stage and any particular verticals, themes or technologies you plan to invest in?

SS: Data is central to our thesis – the entire enterprise stack will need to be rebuilt to understand and learn from data, which is what we mean by intelligence. The majority of installed enterprise applications today are workflow tools and don’t do anything intelligent for the user or the organisation. We’re also excited about entirely new products for new markets that didn’t previously exist.

Our first cheques range from $750k to $3m, with sizeable follow on reserves to support our companies through Series B. We view our sweet spot as helping companies build their go-to market strategies and are happy to invest pre-revenue (approximately half of our portfolio at the time of investment), although we prefer to invest post-product.

TC: Given that you typically invest pre-Series A, where an enterprise startup may be pre-revenue and not yet have anything like definitive market fit, what are the standout qualities you look for in founding teams or the assumptions they are betting on?

KV: You mean apart from the obvious ones that every VC would say about passion, vision, hunger etc (mea culpa!)? We love highly technical teams who have a visceral understanding of the problem they are solving – usually because they lived through it previously. Many of the founders we’ve backed are reimagining the market segments they are addressing.

TC: Almost every new fund these days is talking about its operational support for portfolio companies. What does Crane do to actively support the very early-stage companies you back?

SS: Our sole focus is on supporting founders with their go-to-market strategy which encompasses everything from product positioning and generating marketing leads to building a high performing sales team, renewing and upselling customers. We have formal modules we run behind the scenes with a new company once we’ve invested and we’re also building out a stable of venture partners who are specialists in these areas. We believe that there is a multiplier effect in creating a community of similar staged businesses with parallels in their business models.

TC: Although Crane is pan-European, I know you are especially bullish on London as a leader in creating and adopting enterprise technology, why is that?

KV: We believe London has a great concentration of customers, data science and software talent, commercial and go-to-market talent. 90% of cloud and enterprise software companies that went public in the last 8-10 years opened their first international office in London. And, we’ve also seen a newfound boldness amongst young first-time founders who are not bound by the limits of their imaginations. Look at Onfido, Tessian and Senseon – all first-time founding teams we have backed who are building category-defining businesses.

TC: Which brings us to Brexit. How does Crane view the U.K. exiting the EU and the challenges this will undoubtedly create for tech and enterprise companies, in particular relating to hiring?

SS: We are believers in a global economy and the UK being a major contributor to it. The reason London is still the startup capital of Europe is because of its diversity and openness. The UK exiting the EU is counter to this which we believe will have a negative impact on our ability to attract talent and remain at the forefront of European tech.

TC: Lastly, enterprise tech is often viewed as “unsexy” and something many journalists (myself included) yawn at, even though it is a huge market and arguably the hidden software that the engine rooms of the world economy run on. Tell me something I might not already know about enterprise tech that I can repeat at a dinner party without sending everyone else to sleep?

KV: Imagine a world where you turn on your laptop and your day is pre-organised for you, your email self protects against catastrophic mistakes, your digital identity is portable, your physical workspace syncs with your calendar and auto reserves meeting rooms, and your creditworthiness is something you control, leaving you to focus on channelling your creativity as a journalist and not deal with pfaff. That’s the intelligent enterprise right there in the guise of Tessian, Onfido, OpenSensors and Aire, a selection of the companies in our portfolio. It may start with the enterprise, but ultimately, the products and businesses that are being built are all for people.

TC: Scott, Krishna, thanks for talking to TechCrunch!

Alyce picks up $11.5 million Series A to help companies give better corporate gifts

Alyce, an AI-powered platform that helps sales people, marketers and event planners give better corporate gifts, has today announced the close of an $11.5 million Series A funding. The round was led by Manifest, with participation from General Catalyst, Boston Seed Capital, Golden Ventures, Morningside and Victress Capital.

According to Alyce, $120 billion is spent each year (just in the United States) on corporate gifts, swag, etc. Unfortunately, the impact of these gifts isn’t usually worth the hassle. No matter how thoughtful or clever a gift is, each recipient is a unique individual with their own preferences and style. It’s nearly impossible for marketers and event planners to find a one-size-fits-all gift for their recipients.

Alyce, however, has a solution. The company asks the admin to upload a list of recipients. The platform then scours the internet for any publicly available information on each individual recipient, combing through their Instagram, Twitter, Facebook, LinkedIn, videos and podcasts in which they appear, etc.

Alyce then matches each individual recipient with their own personalized gift, as chosen from one of the company’s merchant partners. The platform sends out an invitation to that recipient to either accept the gift, exchange the gift for something else on the platform, or donate the dollar value to the charity of their choice.

This allows Alyce to ensure marketers and sales people always end up giving the right gift, even when their initial pick misses the mark. For charity donations, the donation is made in the name of the corporate entity that gave the gift, not the recipient, meaning that all donations act as a write-off for the gifting company.

The best marketers and sales people know how impactful a great gift, at the right time, can be. But the work involved in figuring out what a person actually wants to receive can be overwhelming. Hell, I struggle to find the right gifts for my close friends and loved ones.

Alyce takes all the heavy lifting out of the equation.

The company also has integrations with Salesforce, so users can send an Alyce gift from directly within Salesforce.

Alyce charges a subscription to businesses who use the software, and also takes a small cut of gifts accepted on the platform. The company also offers to send physical boxes with cards and information about the gift as another revenue channel.

Alyce founder and CEO Greg Segall says the company is growing 30 percent month-over-month and has clients such as InVision, Lenovo, Marketo and Verizon.

GitHub hires former Bitnami co-founder Erica Brescia as COO

It’s been just over a year since Microsoft bought GitHub for $7.5 billion, but the company has grown in that time, and today it announced that it has hired former Bitnami COO and co-founder Erica Brescia to be its COO.

Brescia handled COO duties at Bitnami from its founding in 2011 until it was sold to VMware last month. In a case of good timing, GitHub was looking to fill its COO role and after speaking to CEO Nat Friedman, she believed it was going to be a good fit. The GitHub mission to provide a place for developers to contribute to various projects fits in well with what she was doing at Bitnami, which provided a way to deliver software to developers in the form of packages such as containers or Kubernetes Helm charts.

New GitHub COO Erica Brescia

She sees that experience of building a company, of digging in and taking on whatever roles the situation required, translating well as she takes over as COO at a company that is growing as quickly as GitHub. “I was really shocked to see how quickly GitHub is still growing, and I think bringing that kind of founder mentality, understanding where the challenges are and working with a team to come up with solutions, is something that’s going to translate really well and help the company to successfully scale,” Brescia told TechCrunch.

She admits that it’s going to be a different kind of challenge working with a company she didn’t help build, but she sees a lot of similarities that will help her as she moves into this new position. Right after selling a company, she obviously didn’t have to take a job right away, but this one was particularly compelling to her, too much so to leave on the table.

“I think there were a number of different directions that I could have gone coming out of Bitnami, and GitHub was really exciting to me because of the scale of the opportunity and the fact that it’s so focused on developers and helping developers around the world, both open source and enterprise, collaborate on the software that really powers the world moving forward,” she said.

She says as COO at a growing company, it will fall on her to find more efficient ways to run things as the company continues to scale. “When you have a company that’s growing that quickly, there are inevitably things that probably could be done more efficiently at the scale, and so one of the first things that I plan on spending time in on is just understanding from the team is where the pain points are, and what can we do to help the organization run like a more well-oiled machine.”

IBM, KPMG, Merck, Walmart team up for drug supply chain blockchain pilot

IBM announced its latest blockchain initiative today. This one is in partnership with KPMG, Merck and Walmart to build a drug supply chain blockchain pilot.

These four companies are coming together to develop a solution to track certain drugs as they move through a supply chain. IBM is acting as the technology partner, KPMG brings a deep understanding of the compliance issues, Merck is of course a drug company, and Walmart would be a drug distributor through its pharmacies and care clinics.

The idea is to give each drug package a unique identifier that you can track through the supply chain from manufacturer to pharmacy to consumer. Seems simple enough, but the fact is that companies are loathe to share any data with one another. The blockchain would provide an irrefutable record of each transaction as the drug moved along the supply chain, giving authorities and participants an easy audit trail.
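The hand-off trail described above can be sketched in a few lines: each record for a package embeds the hash of the previous record, so altering any earlier hand-off breaks the chain and is detectable on audit. This is an illustrative, hypothetical model only, not IBM's actual platform; the class and field names are invented for the sketch.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AuditTrail:
    """Tamper-evident hand-off log for one drug package."""

    def __init__(self, package_id: str):
        self.package_id = package_id
        self.records = []

    def add_handoff(self, holder_from: str, holder_to: str) -> None:
        # Each record chains to the previous one via its hash.
        prev = self.records[-1]["hash"] if self.records else "0" * 64
        record = {
            "package_id": self.package_id,
            "from": holder_from,
            "to": holder_to,
            "prev_hash": prev,
        }
        record["hash"] = record_hash(record)
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute every hash; any edit anywhere breaks the chain."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if r["prev_hash"] != prev or r["hash"] != record_hash(body):
                return False
            prev = r["hash"]
        return True
```

A real blockchain adds consensus and distribution across parties, but the audit property that matters here (an irrefutable record of each transaction) comes from exactly this kind of hash chaining.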

The pilot is part of a set of programs being conducted by various stakeholders at the request of the FDA. The end goal is to find solutions to help comply with the U.S. Drug Supply Chain Security Act. According to the FDA Pilot Program website, “FDA’s DSCSA Pilot Project Program is intended to assist drug supply chain stakeholders, including FDA, in developing the electronic, interoperable system that will identify and trace certain prescription drugs as they are distributed within the United States.”

IBM hopes that this blockchain pilot will show it can build a blockchain platform or network on top of which other companies can build applications. “The network in this case, would have the ability to exchange information about these pharmaceutical shipments in a way that ensures privacy, but that is validated,” Mark Treshock, global blockchain solutions leader for healthcare and life sciences at IBM told TechCrunch.

He believes that this would help bring companies on board that might be concerned about the privacy of their information in a public system like this, something that drug companies in particular worry about. Trying to build an interoperable system is a challenge, but Treshock sees the blockchain as a tidy solution for this issue.

Some people have said that blockchain is a solution looking for a problem, but IBM has been looking at it more practically, with several real-world projects in production, including one to track leafy greens from field to store with Walmart and a shipping supply chain with Maersk to track shipping containers as they move throughout the world.

Treshock believes the Walmart food blockchain is particularly applicable here and could be used as a template of sorts to build the drug supply blockchain. “It’s very similar, tracking food to tracking drugs, and we are leveraging or adopting the assets that we built for food trust to this problem. We’re taking that platform and adapting it to track pharmaceuticals,” he explained.

VMware announces intent to buy Avi Networks, startup that raised $115M

VMware has been trying to reinvent itself from a company that helps you build and manage virtual machines in your data center to one that helps you manage your virtual machines wherever they live, whether that’s on prem or the public cloud. Today, the company announced it was buying Avi Networks, a six-year-old startup that helps companies balance application delivery in the cloud or on prem in an acquisition that sounds like a pretty good match. The companies did not reveal the purchase price.

Avi claims to be the modern alternative to load balancing appliances designed for another age, when applications didn’t change much and lived on prem in the company data center. As companies move more workloads to public clouds like AWS, Azure and Google Cloud Platform, Avi is providing a more modern load-balancing tool that not only balances software resource requirements based on location or need, but also tracks the data behind these requirements.

Diagram: Avi Networks

VMware has been trying to find ways to help companies manage their infrastructure, whether it is in the cloud or on prem, in a consistent way, and Avi is another step in helping them do that on the monitoring and load-balancing side of things, at least.

Tom Gillis, senior vice president and general manager for the networking and security business unit at VMware, sees this acquisition as fitting nicely into that vision. “This acquisition will further advance our Virtual Cloud Network vision, where a software-defined distributed network architecture spans all infrastructure and ties all pieces together with the automation and programmability found in the public cloud. Combining Avi Networks with VMware NSX will further enable organizations to respond to new opportunities and threats, create new business models, and deliver services to all applications and data, wherever they are located,” Gillis explained in a statement.

In a blog post, Avi’s co-founders expressed a similar sentiment, seeing a company where it would fit well moving forward. “The decision to join forces with VMware represents a perfect alignment of vision, products, technology, go-to-market, and culture. We will continue to deliver on our mission to help our customers modernize application services by accelerating multi-cloud deployments with automation and self-service,” they wrote. Whether that’s the case, time will tell.

Among Avi’s customers, which will now become part of VMware, are Deutsche Bank, Telegraph Media Group, Hulu and Cisco. The company was founded in 2012 and raised $115 million, according to Crunchbase data. Investors included Greylock, Lightspeed Venture Partners and Menlo Ventures, among others.

7 Tips to Protect Against Your Growing Remote Workforce


Richard Melick, Sr. Technology Product Marketing Manager at Automox.

For years, security teams have developed strategies around a confined, controlled environment, putting infrastructure into place to keep the proverbial vault shut and data secured — yet we continue to read about breaches, infections and data loss. The number of remote American workers continues to increase with no signs of a slowdown, so it’s only a matter of time before that data breach or malware exploit happens to you (if it hasn’t already)[1].

None of this should be news to security professionals. According to a 2018 OpenVPN survey, 90 percent of IT professionals that responded reported that they believe their remote workforce poses a security risk. And, 36 percent reported that a remote employee was the cause of a security incident. Apricorn reports similar results in a recent survey of IT decision-makers in the United Kingdom. Let’s face it, the data doesn’t lie.

The legacy approaches developed to support this growing trend, such as requiring VPN connections back to the corporate office or using a cloud-based content control tool, can work, but they are limited to Internet-connected devices. To let workers roam with confidence, we offer some tips to help you develop a step-by-step security plan to keep your data, intellectual property and access secure.

Security Awareness Training

Security awareness training should be a part of every company’s employee onboarding, no matter the employee’s position within the company. Whether you sign up for training services from a third-party company or develop a program internally, security training can help get every employee into the mindset of what is expected of them to protect the company, its assets and its equipment.

Setting the ground rules from the very start such as establishing strong password rules, knowing how to detect phishing attempts, and even the core concept of backing up data will only help with future efforts.

Helping your users care about the security of their devices and the assets they are in charge of maintaining also sets clear expectations for them. It gets them into the mindset of securing not only their work computer but also the other devices they use daily, such as home Wi-Fi and phones.

Trust No One

Data falling into the wrong hands, even if that person is an employee, can lead to a disaster no matter the kind of data. Designing your networks with a zero-trust model keeps that data under lock and key, except for those authorized users that require access and use.

Not all users on your network need access to every component. Remember, it’s always easier to grant permissions after verification than to take away access after a potential compromise. Users who connect to your network do not need visibility or access to assets past what they need to complete their jobs.
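The least-privilege rule above boils down to a deny-by-default check: unless a role has been explicitly granted a resource, access is refused. A minimal sketch, with hypothetical role and resource names chosen purely for illustration:

```python
# Hypothetical role -> resources mapping; anything absent is implicitly denied.
PERMISSIONS = {
    "sales": {"crm"},
    "engineering": {"source_repo", "build_server"},
    "it_admin": {"crm", "source_repo", "build_server", "hr_records"},
}

def can_access(role: str, resource: str) -> bool:
    """Deny by default: unknown roles or unlisted resources get no access."""
    return resource in PERMISSIONS.get(role, set())
```

The important design choice is the `.get(role, set())` fallback: a role that was never provisioned sees nothing, which mirrors the "grant after verification" posture rather than "revoke after compromise."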

Two-Factor Authentication

Time and time again, we have seen that a secure password isn’t enough these days. While security plans might call for quarterly password updates, users often take the path of least resistance when choosing their next password.

This predictable behavior can be safeguarded against through a simple two-factor authentication (2FA) setup. If a password is compromised, the second factor keeps the attempted access at bay while notifying both the user and the IT admin of the suspicious activity.

Whether you use a token key or a mobile phone setup, both forms of 2FA provide an additional level of security to often-compromised passwords.
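Both token keys and mobile-app setups typically rest on the same primitive: a time-based one-time password (TOTP, RFC 6238), an HMAC over the current 30-second time window. A self-contained sketch using only the standard library — the secret used in the test is the RFC's published test key, not anything production-ready:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA-1, 30 s step)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of time steps since the Unix epoch.
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: low nibble of last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides derive the code from a shared secret plus the clock, a stolen password alone is useless without the device holding the secret.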

Patch Your Shit

Patch Tuesday is the standard delivery day for required software updates that many IT admins rely on to fix the gaps in their infrastructure software and critical programs. If you don’t patch, you’re not protected.

While many organizations have a patch strategy, a recent ServiceNow study conducted by the Ponemon Institute concluded that over 62% of companies can’t tell if vulnerabilities are patched quickly, and 74% of these companies cannot patch in a timely manner due to the cybersecurity staffing shortage[2]. The lack of visibility into systems and their vulnerability, along with this ever-growing shortage of employees, is leaving endpoints vulnerable to attack. This forces organizations to rely on endpoint protection platforms to do more of the heavy lifting to keep environments secure.

Automated Services

Take some stress off of maintaining your roaming network and apply focused automation solutions.

An effective and advanced Endpoint Protection Platform (EPP) like SentinelOne is capable of delivering automation services to handle the necessary steps of security prevention and protection. By automating endpoint protection processes, you are setting your network up for a preventive approach and letting your IT and security teams focus on more significant risks to the infrastructure.

Good Log Intelligence

Data is king here, especially when it comes to the use and connectivity of your devices by employees.

Did that machine connect to a new Wi-Fi? You should know that. Are large amounts of data being transferred out or copied to portable devices? Also good to know. This includes contextual awareness of authentication, focusing on geolocation and time/date connectivity. Did an in-office worker log in from a machine in another country? You need to know.

Logging how the endpoints are used can help your security team stop issues before they get out of hand and keep data secure, as well as provide reliable data to threat hunting teams in case of an incident.
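The contextual checks described above — unusual country, odd hours — can be sketched as a simple rule-based flagger. The `flag_login` helper and its event fields are hypothetical; real products weigh many more signals (device fingerprint, travel feasibility, baselined behavior):

```python
from datetime import datetime

def flag_login(event, usual_countries, work_hours=(7, 20)):
    """Return a list of reasons a login event looks suspicious.

    event: dict with 'country' (ISO code) and 'time' (ISO 8601 string).
    """
    reasons = []
    if event["country"] not in usual_countries:
        reasons.append("login from unusual country: " + event["country"])
    hour = datetime.fromisoformat(event["time"]).hour
    if not (work_hours[0] <= hour < work_hours[1]):
        reasons.append("login outside working hours")
    return reasons
```

Even rules this crude catch the "in-office worker logs in from another country at 3 a.m." case; the point is that none of it works without the logs being collected in the first place.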

Contingency Plans

A device is going to get lost. Someone is going to connect to that free coffee shop Wi-Fi or use a thumb drive. And all of this will happen despite the hours of education and training. Convenience and curiosity will always override security training, and you need a contingency plan in place. Deploy solutions that provide the capability to quickly lock out machines, remove cloud access, and even remotely wipe stolen devices.

In the end, no perfect security plan exists. The workforce will continue to expand beyond the office walls, and you must accept that your security approach must evolve with your roaming employees. Building a foundation with the above-mentioned steps can set your organization up for a successful growth strategy as workforces change.




Apollo raises $22M for its GraphQL platform

Apollo, a San Francisco-based startup that provides a number of developer and operator tools and services around the GraphQL query language, today announced that it has raised a $22 million growth funding round co-led by Andreessen Horowitz and Matrix Partners. Existing investors Trinity Ventures and Webb Investment Network also participated in this round.

Today, Apollo is probably the biggest player in the GraphQL ecosystem. At its core, the company’s services allow businesses to use the Facebook-incubated GraphQL technology to shield their developers from the patchwork of legacy APIs and databases as they look to modernize their technology stacks. The team argues that while REST APIs that talked directly to other services and databases still made sense a few years ago, that approach doesn’t anymore now that the number of API endpoints keeps increasing rapidly.

Apollo replaces this with what it calls the Data Graph. “There is basically a missing piece where we think about how people build apps today, which is the piece that connects the billions of devices out there,” Apollo co-founder and CEO Geoff Schmidt told me. “You probably don’t just have one app anymore, you probably have three, for the web, iOS and Android. Or maybe six. And if you’re a two-sided marketplace you’ve got one for buyers, one for sellers and another for your ops team.”

Managing the interfaces between all of these apps quickly becomes complicated and means you have to write a lot of custom code for every new feature. The promise of the Data Graph is that developers can use GraphQL to query the data in the graph and move on, all without having to write the boilerplate code that typically slows them down. At the same time, the ops teams can use the Graph to enforce access policies and implement other security features.
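The idea can be illustrated with a toy resolver: the client issues one logical query, and the graph layer fans out to the underlying backends and stitches the results into a single shape. The service names and data below are invented for the sketch; Apollo's real tooling is GraphQL/JavaScript-based, and this only mimics the pattern:

```python
# Stand-ins for two legacy backends a Data Graph would sit in front of.
USERS = {1: {"id": 1, "name": "Ada"}}            # e.g. a REST user service
ORDERS = {1: [{"order_id": 17, "total": 42.0}]}  # e.g. a SQL order store

def resolve_user_with_orders(user_id):
    """One logical query; the graph layer fans out to both backends
    and returns a single joined result, so the client writes no glue code."""
    user = dict(USERS[user_id])
    user["orders"] = ORDERS.get(user_id, [])
    return user
```

In real GraphQL the client would write `{ user(id: 1) { name orders { total } } }` and the server's resolvers would do this fan-out; the boilerplate the quote refers to is exactly the joining logic hidden inside the resolver.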

“If you think about it, there’s a lot of analogies to what happened with relational databases in the ’80s,” Schmidt said. “There is a need for a new layer in the stack. Previously, your query planner was a human being, not a piece of software, and a relational database is a piece of software that would just give you a database. And you needed a way to query that database, and that syntax was called SQL.”

Geoff Schmidt, Apollo CEO, and Matt DeBergalis, CTO

GraphQL itself, of course, is open source. Apollo is now building a lot of the proprietary tools around this idea of the Data Graph that make it useful for businesses. There’s a cloud-hosted graph manager, for example, that lets you track your schema, a dashboard to track performance, and integrations with continuous integration services. “It’s basically a set of services that keep track of the metadata about your graph and help you manage the configuration of your graph and all the workflows and processes around it,” Schmidt said.

The development of Apollo didn’t come out of nowhere. The founders previously launched Meteor, a framework and set of hosted services that allowed developers to write their apps in JavaScript, both on the front-end and back-end. Meteor was tightly coupled to MongoDB, though, which worked well for some use cases but also held the platform back in the long run. With Apollo, the team decided to go in the opposite direction and instead build a platform that makes being database agnostic the core of its value proposition.

The company also recently launched Apollo Federation, which makes it easier for businesses to work with a distributed graph. Sometimes, after all, your data lives in lots of different places. Federation allows for a distributed architecture that combines all of the different data sources into a single schema that developers can then query.

Schmidt tells me the company started to get some serious traction last year and by December, it was getting calls from VCs that heard from their portfolio companies that they were using Apollo.

The company plans to use the new funding to build out its technology and scale its field team to support the enterprises that bet on it, including the open-source technologies that power its services.

“I see the Data Graph as a core new layer of the stack, just like we as an industry invested in the relational database for decades, making it better and better,” Schmidt said. “We’re still finding new uses for SQL and that relational database model. I think the Data Graph is going to be the same way.”

Helium launches $51M-funded ‘LongFi’ IoT alternative to cellular

With 200X the range of Wi-Fi at 1/1000th of the cost of a cellular modem, Helium’s “LongFi” wireless network debuts today. Its transmitters can help track stolen scooters, find missing dogs via IoT collars and collect data from infrastructure sensors. The catch is that Helium’s tiny, extremely low-power, low-data transmission chips rely on connecting to P2P Helium Hotspots people can now buy for $495. Operating those hotspots earns owners a cryptocurrency token Helium promises will be valuable in the future…

The potential of a new wireless standard has allowed Helium to raise $51 million over the past few years from GV, Khosla Ventures and Marc Benioff, including a new $15 million Series C round co-led by Union Square Ventures and Multicoin Capital. That’s in part because one of Helium’s co-founders is Napster inventor Shawn Fanning. Investors are betting that he can change the tech world again, this time with a wireless protocol that like Wi-Fi and Bluetooth before it could unlock unique business opportunities.

Helium already has some big partners lined up, including Lime, which will test it for tracking its lost and stolen scooters and bikes when they’re brought indoors, obscuring other connectivity, or their battery is pulled out, deactivating GPS. “It’s an ultra low-cost version of a LoJack,” Helium CEO Amir Haleem says.

InvisiLeash will partner with it to build more trackable pet collars. Agulus will pull data from irrigation valves and pumps for its agriculture tech business. Nestle will track when it’s time to refill water in its ReadyRefresh coolers at offices, and Stay Alfred will use it to track occupancy status and air quality in buildings. Haleem also imagines the tech being useful for tracking wildfires or radiation.

Haleem met Fanning playing video games in the 2000s. They teamed up with Fanning and Sproutling baby monitor (sold to Mattel) founder Chris Bruce in 2013 to start work on Helium. They foresaw a version of Tile’s trackers that could function anywhere while replacing expensive cell connections for devices that don’t need high bandwidth. Helium’s 5 kilobit per second connections will compete with SigFox, another low-power IoT protocol, though Haleem claims its more centralized infrastructure makes its costs prohibitive. It’s also facing off against Nodle, which piggybacks on devices’ Bluetooth hardware. Lucky for Helium, on-demand rental bikes and scooters that are perfect for its network have reached mainstream popularity just as Helium launches, six years after its start.

Helium says it already pre-sold 80% of its Helium Hotspots for its first market in Austin, Texas. People connect them to their Wi-Fi and put them in a window so the devices can pull in data from Helium’s IoT sensors over its open-source LongFi protocol. The hotspots then encrypt and send the data to the company’s cloud, which clients can plug into to track and collect info from their devices. The Helium Hotspots only require as much energy as a 12-watt LED light bulb to run, but that $495 price tag is steep. The lack of a concrete return on investment could deter later adopters from buying the expensive device.
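To see why a 5 kbit/s link can still be useful, consider how little a sensor actually needs to send. Helium's real LongFi wire format isn't described here; the following is a generic, hypothetical sketch of a compact authenticated sensor frame — a few packed bytes of payload plus a truncated MAC so the hotspot can reject forged readings:

```python
import hashlib
import hmac
import struct

def pack_reading(device_id, temp_c, key):
    """Pack a 16-byte frame: 4-byte id + 4-byte float + 8-byte truncated MAC."""
    body = struct.pack(">If", device_id, temp_c)
    tag = hmac.new(key, body, hashlib.sha256).digest()[:8]
    return body + tag

def unpack_reading(frame, key):
    """Verify the tag, then decode; raises ValueError on tampering."""
    body, tag = frame[:8], frame[8:]
    expected = hmac.new(key, body, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(expected, tag):
        raise ValueError("bad authentication tag")
    return struct.unpack(">If", body)
```

At 16 bytes per reading, even a 5 kbit/s channel carries dozens of such frames per second — more than enough for a scooter tracker or an irrigation valve reporting every few minutes.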

Only 150-200 hotspots are necessary to blanket a city in connectivity, Haleem tells me. But because they need to be distributed across the landscape, a client can’t just fill their warehouse with the hotspots, and because the upfront price is expensive for individuals, Helium might need to sign up some retail chains as partners for deployment. As Haleem admits, “The hard part is the education.” Making hotspot buyers understand the potential (and risks) while demonstrating the opportunities for clients will require a ton of outreach and slick marketing.

Without enough Helium Hotspots, the Helium network won’t function. That means this startup will have to simultaneously win at telecom technology, enterprise sales and cryptocurrency for the network to pan out. As if one of those wasn’t hard enough.