Zendesk launches Sell Marketplace to bring app store to CRM product

Zendesk acquired Base CRM in 2018 to give customers a CRM component to go with its core customer service software. After purchasing the company, it changed the name to Sell, and today the company announced the launch of the new Sell Marketplace.

Officially called The Zendesk Marketplace for Sell, it’s a place where companies can share components that extend the capabilities of the core Sell product. Apps from companies like MailChimp, HubSpot and QuickBooks are available at launch.

App directory in Sell Marketplace. Screenshot: Zendesk

Matt Price, SVP and general manager at Zendesk, sees the marketplace as a way to extend Sell into a platform play, something he thinks could be a “game changer.” He likened it to the impact of app stores on mobile phones.

“It’s that platform that accelerated and really suddenly [transformed smart phones] from being just a product to [launching an] industry. And that’s what the marketplace is doing now, taking Sell from being a really great sales tool to being able to handle anything that you want to throw at it because it’s extensible through apps,” Price explained.

Price says that this ability to extend the product could manifest in several ways. For starters, customers can build private apps with a new application development framework. This enables them to customize Sell for their particular environment, such as connecting to an internal system or building functionality that’s unique to them.

In addition, ISVs can build custom apps, something Price points out they have been doing for some time on the Zendesk customer support side. “Interestingly Zendesk obviously has a very large community of independent developers, hundreds of them, who are [developing apps for] our support product, and now we have another product that they can support,” he said.

Finally, industry partners can add connections to their software. For instance, installing Dropbox for Sell gives salespeople a way to save documents to Dropbox and associate them with a deal in Sell.

Of course, what Zendesk is doing here with Sell Marketplace isn’t new. Salesforce introduced this kind of app store concept to the CRM world in 2006 when it launched AppExchange, but the Sell Marketplace still gives Sell users a way to extend the product to meet their unique needs, and that could prove to be a powerful addition.

Microsoft announces global Teams ad push as it combats Slack for the heart of enterprise comms

The long-running contest between Microsoft and its Teams service and Slack’s eponymous application continued this morning, with Redmond announcing what it describes as its first “global” advertising push for its enterprise communication service.

Slack, a recent technology IPO, exploded in the back half of last decade, accreting huge revenues while burrowing into the tech stacks of the startup world. The former startup’s success continued as it increasingly targeted larger companies; it’s easier to stack revenue in enterprise-scale chunks than it is by onboarding upstarts.

Enterprise productivity software, of course, is a large percentage of Microsoft’s bread and butter. And as Slack rose — and Microsoft decided against buying the then-nascent rival — the larger company invested in its competing Teams service. Notably, today’s ad push is not the first advertising salvo between the two companies. Slack owns that record, having welcomed Microsoft to its niche in a print ad that isn’t aging particularly well.

Slack and Teams are competing through public usage announcements. Most recently, Teams announced that it has 20 million daily active users (DAUs); Slack’s most recent number is 12 million. Slack, however, has touted how active its DAUs are, implying that it isn’t entirely sure that Microsoft’s figures line up with its own. Still, the widening gap between their numbers is notable.

Microsoft’s new ad campaign is yet another chapter in the ongoing Slack vs. Teams contest. The ad push itself is only so important. What matters more is that Microsoft is choosing to expend some of its limited public attention bandwidth on Teams over other options.

Stock

While Teams is merely part of the greater Office 365 world that Microsoft has been building for some time, Slack’s product is its business. And since its direct listing, some air has come out of its shares.

Slack’s share price has fallen from the mid-$30s after it debuted to the low-$20s today. I’ve explored that repricing and found that, far from the public markets repudiating Slack’s equity, the company was merely mispriced in its early trading life. The company’s revenue multiple has come down since its first days as a public entity, but remains rich; investors are still pricing Slack like an outstanding company.

Ahead, Slack and Microsoft will continue to trade competing DAU figures. The question becomes how far Slack’s brand can carry it against Microsoft’s enterprise heft.

macOS Security Updates Part 2 | Running Diffs on Apple’s MRT app

In the first part of this series, I looked at how we can keep informed of when Apple make changes to their built-in security tools. In this and the following parts, we’ll look at how to determine what changes have occurred. Our primary methodology is to run diffs against the previous version of the tool and the new one. However, XProtect, Gatekeeper and MRT.app use a variety of formats (XML, SQL and machO binary, respectively) to store data, and they also use different forms of obfuscation within those formats. In this post, we’ll start by looking at how to extract data and run diffs on Apple’s Malware Removal Tool, aka the MRT.app. The only tools you’ll need to do this are Apple’s free Xcode Command Line tools.

image running diffs MRT

Why the ‘strings’ Tool Won’t Show All Strings

If you’ve ever done any kind of file analysis, you’re no doubt aware of the strings tool (and if not, start here). While strings is fine for dumping constant strings in a binary, there are other ways of hiding strings that such tools are unable to extract. In a previous post, I described how to run the strings tool on MRT.app’s binary and how to search for and decode other strings hidden in byte arrays using a disassembler. The problem with that approach, however, is twofold. First, it’s labor intensive: you could literally spend hours (I have!) trying to manually find and decode the strings stored in byte arrays. Second, it requires some skill in knowing your way around a disassembler, extracting and cataloging what you find. Fortunately, as I’ll show below, there’s a better way that’s both fast and doesn’t require manually combing through disassembly.

Before we get to that, it’s also worth reminding ourselves that strings on macOS is not equivalent to the same tool on most Linux distros and lacks the --encoding option for dumping wide-character strings like Unicode UTF-16. For that reason, in general it’s better to use something like FireEye’s FLOSS tool for extracting strings, but that still won’t help us with those strings stored in byte arrays and only assembled in memory upon execution.

Just Enough Disassembly To Get the Idea!

When it comes to strings in MRT.app, the issue isn’t Unicode or some other encoding but rather the fact that some strings are stored as single characters in an array. In order to understand the problem and the solution, let’s take a very quick look at what’s going on when we view the byte arrays in the MRT executable in a disassembler.

The following images were taken from Hopper – a commercial tool, simply because it’s prettier and cleaner for illustration purposes – and the open source radare2 (aka r2). However, you don’t need either to follow along, and indeed we’ll look at similar output from otool (which you will need and is included in the Xcode command line tools) a little further on.

The first image below shows part of one of MRT app’s typical byte arrays. The first column represents the address of each line of code, but we won’t be needing that today. The second column of greyed text represents the op codes or machine instructions while everything else to the right of that is a sort-of human readable translation to help us make sense of what instructions and values the op codes represent. Note, in particular, that the last two places of each line of op code are mirrored in the rightmost column, where they are prefixed with 0x to indicate hexadecimal notation.

image of Hopper byte array

Importantly, those hex numbers are ASCII characters, which you can reveal in Hopper by control-clicking and choosing ‘Characters’.

image of Hopper display character

As we can clearly see below, that’s an array of characters that can be concatenated to form a string. When the particular function containing this array executes in memory, the string “EliteKeyloggerAccessibilityStatus” (among others) will be created.

For the sake of comparison, here’s the same in r2, which nicely renders the ASCII by default, and then in Hopper.

image of disassembly in r2

image of Hopper elite keylogger

Let’s remain with the op codes in the greyed column for a moment, because it’s these that are going to prove to be our salvation when trying to automate extracting these strings. You’ll notice that all the lines begin with C6, and all except the first are followed by 40. The byte C6 is the Intel opcode for moving an immediate byte into a memory location; here, that location is the address held in the RAX register. The first instruction simply says “move the hex number 45 to the address held in RAX”.

C6 00 45

The 00 in the middle is the ModR/M byte meaning “with an offset of zero”; in other words, move the number 45 to exactly the address held in RAX. All the subsequent lines have the C6 followed by 40, the ModR/M byte that means “the address held in RAX plus an 8-bit offset”, and then an incrementing hexadecimal index, 1 through to the length of the array.

C6 40 01 6C
C6 40 02 69
C6 40 03 74

This tells the processor to move the number in the final two places to the address held in RAX plus the index number of bytes. So, for example, the op code

C6 40 09 6F

tells the processor to move the hexadecimal number 6F (which is the ASCII code for the lowercase ‘o’ character) into the address held in RAX plus 9 bytes.
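We can check this byte-to-character mapping for ourselves from the shell. Feeding a run of immediate byte values into xxd in reverse plain-hex mode turns them straight back into text; the bytes here are illustrative, taken from the “Elite” prefix of the string above.

```shell
# The immediate bytes 45 6c 69 74 65 are the ASCII codes for 'E' 'l' 'i' 't' 'e'.
# xxd -r -p reverses a plain hex dump back into raw characters.
printf '456c697465' | xxd -r -p
# prints: Elite
```

This is exactly the conversion our extraction pipeline will perform in bulk below.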

There’s only one other op code we need to take note of before we move on to the fun of extracting these strings, and that’s the op code for when a procedure (aka “function”) returns, C3.

image of end of procedure

The significance of this will come into play below.

Extracting Strings Hidden In Byte Arrays

If you already have the Xcode command line tools installed, we can make use of otool and some heavy piping through other provided utilities to dump all the strings contained in byte arrays in one go. For those eager to have a go, try this command, then we’ll go through it to explain how it works, make a few tweaks and suggest saving it as a script.

If you’re on Catalina, try this (triple-click the line below to select all of it):

otool -tvj /Library/Apple/System/Library/CoreServices/MRT.app/Contents/MacOS/MRT | grep movb | grep '(%rax)$' | awk '{ print ((NF==7) ? "0x0a"$(NF-1) : $(NF-1)) }' | sed 's/\$//g' | grep -v % | sed 's/0x//g' | sed 's/,//g' | grep -v '-' | awk 'length($0)>1' | awk 'length($0)!=3' | xxd -r -p

For Mojave and earlier (triple-click the line below to select all of it):

otool -tvj /System/Library/CoreServices/MRT.app/Contents/MacOS/MRT | grep movb | grep '(%rax)$' | awk '{ print ((NF==7) ? "0x0a"$(NF-1) : $(NF-1)) }' | sed 's/\$//g' | grep -v % | sed 's/0x//g' | sed 's/,//g' | grep -v '-' | awk 'length($0)>1' | awk 'length($0)!=3' | xxd -r -p

Yes, those are some mammoth one-liners! But if all is well, you should get an output that looks like the image below. I’m running Mojave, so I used the second of the two code blocks above (if you get an error, check you’re using the right code for your version of macOS).

image of output

On the current version of MRT.app, which is v1.52 at the time of writing, if I repeat that command (arrow up to recall last command, for those who are not regular command line users) and append

| wc -l

to get a line count, that returns 169 lines.

Here’s an explanation of how the code works. First, otool -tvj essentially produces an output similar to what we saw in Hopper and r2, but with a slightly different format. If we ran that alone in Terminal and gave it the path to the MRT binary, using what we learned in the last section, we could easily find the byte arrays manually just by searching for c6 40 01 (that would bring us to index 1; remember, the array begins on the line above, at index 0).

image of otool

Note that otool uses the mnemonic movb – move byte – rather than just mov, which makes it a bit more transparent. Thus, our first grep after dumping the disassembly is to pull out every line that contains a movb instruction. Second, every line in a byte array will end with (%rax), so that’s our second grep.

image of grep after dump

Recall that every array starts with C6 00 XX. Because there’s no need to add an index to store the first character, there are only three bytes in the instruction; in contrast, the remaining lines in the array all have four (review the previous section if you’re not clear why). For this reason, we now use a bit of awk and a ternary operator to test how many columns there are in the line.

image of awk ternary operator

If the line has exactly 7 columns, we know this could be the start of a byte array, so we prepend 0x0a – the newline character – to our output, and then print the character’s ASCII code (prepending 0a is what produces new lines for each decoded string in our output) sitting in the penultimate column (NF-1). If the line has more or fewer than 7 columns, then we output whatever is in the penultimate column (NF-1), which mostly (see below) will give us values with the pattern “0xXX,”.
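To make the column counting concrete, here’s the same ternary run over two fabricated otool-style lines: an array start (7 fields) and a subsequent element (8 fields, because of the extra index byte). The addresses and byte values are made up for illustration, and the ternary is fully parenthesized for portability across awk versions.

```shell
# A fake 7-field array-start line and an 8-field element line, run through
# the same NF test: the first gets 0x0a prepended, the second doesn't.
printf '%s\n' \
  '0000000100001000 c6 00 45 movb $0x45, (%rax)' \
  '0000000100001003 c6 40 01 6c movb $0x6c, 0x1(%rax)' |
awk '{ print ((NF==7) ? "0x0a"$(NF-1) : $(NF-1)) }'
# prints: 0x0a$0x45,
#         $0x6c,
```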

From here on, each line is piped through a few grep, sed and awk commands to remove everything but the actual stripped hex number. We also exclude lines whose output has a length of 1 or 3, as our hex output needs to be divisible by two for the next and final stage.

image of grep sed and awk

That final stage involves piping the output into xxd and using the -r and -p switches (you can also use rax2 -s as an alternative to xxd -r -p for you radare2 fans!) to turn the hex-encoded ASCII back into human-readable language. That process requires each valid hex numeral to have a length of two characters (XX) to produce one printable ASCII character (hence, no piping anything of length 1 or 3 into it, or else you’ll be treated to a few error beeps!).

image of xxd

If you’re only interested in finding out what’s new in MRT, our script should be fine as is for doing diffs (see the next section), but for other uses, it needs a tweak. In particular, for threat hunting, it would be useful to see which strings belong together as part of a single detection. This may help us to find samples in public malware repositories like VirusTotal or across our own data collected from endpoints.

At the end of the previous section, I pointed out that the op code for the “end of procedure” instruction was C3. We can now use that to demarcate groups of strings with some nominal delimiter. Here’s a version of the extraction code given above, but which inserts a delimiter character between strings from different functions (note you need to substitute the actual path to MRT binary when you paste this one):

otool -tvj <path to MRT binary> | egrep 'movb|retq' | egrep '\(%rax\)$|retq' | awk '{ print ((NF==7) ? "0x0a"$(NF-1) : $(NF-1)) }' | sed 's/^c3/5c/' | sed 's/\$//g' | grep -v % | sed 's/0x//g' | sed 's/,//g' | grep -v '-' | awk 'length($0)>1' | awk 'length($0)!=3' | xxd -r -p

image of grep for end of procedure

As you can see, the primary difference is we’ve now included a grep for lines containing the mnemonic retq (again, note the difference between otool and Hopper). Later, after the first bunch of awk commands, we substitute the opcode c3 with 5c, which represents the backslash character. In the disassembly, when we encounter this line, the instruction will actually be at the column position NF-1, so it is already captured by the earlier piping. The result of the entire command now helps us to visually see the difference between strings contained in one procedure and another.
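You can watch the delimiter appear by pushing a few fabricated disassembly lines through the pipeline: two one-character “arrays” separated by a retq. The addresses and byte values are invented for the demonstration.

```shell
# Two fake one-byte arrays ('E' and 'K') separated by a retq: the c3 opcode
# byte is rewritten to 5c, which decodes to a backslash between the strings.
printf '%s\n' \
  '0000000100001000 c6 00 45 movb $0x45, (%rax)' \
  '0000000100001003 c3 retq' \
  '0000000100001010 c6 00 4b movb $0x4b, (%rax)' |
  egrep 'movb|retq' | egrep '\(%rax\)$|retq' |
  awk '{ print ((NF==7) ? "0x0a"$(NF-1) : $(NF-1)) }' |
  sed 's/^c3/5c/' | sed 's/\$//g' | grep -v % |
  sed 's/0x//g' | sed 's/,//g' |
  grep -v '-' | awk 'length($0)>1' | awk 'length($0)!=3' | xxd -r -p
# prints (after a leading blank line):
#   E\
#   K
```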

image of delimited string collections

There are a couple of “ToDos” remaining. One, note that because the grep and sed search-and-replace for c3 is fairly crude, you’ll sometimes get only one backslash between strings from different functions and sometimes multiple backslashes. It would be nice to have a more consistent “end of procedure” marker. Two, we would do well to put these alternative versions (Mojave, Catalina, show/hide “end of procedure” markings) in a script with arguments. This isn’t the place to go into general bash scripting and we’re running out of space, so I’ll deploy the “leave it as an exercise for the reader” evasion here! If you need help with bash scripting, there’s no better place to start than here.
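As a head start on that exercise, here is one way such a wrapper might look. It’s a sketch, not a polished tool: the pipeline developed above is wrapped in a function (so the disassembly source is easy to swap or test), the binary path is taken as an argument, and a -d flag toggles the end-of-procedure delimiters. The script name matches how we’ll invoke it in the diffing section below; the flag name and structure are our own choices.

```shell
#!/bin/bash
# byteStrings -- decode strings hidden in the byte arrays of a machO binary.
# Usage: byteStrings [-d] /path/to/MRT
#   -d   separate strings from different procedures with a backslash
# A sketch only: it simply wraps the one-liners developed above.

decode() {  # reads `otool -tvj` disassembly on stdin
  if [ "$1" = "-d" ]; then
    egrep 'movb|retq' | egrep '\(%rax\)$|retq'
  else
    grep movb | grep '(%rax)$'
  fi |
  awk '{ print ((NF==7) ? "0x0a"$(NF-1) : $(NF-1)) }' |
  sed 's/^c3/5c/' | sed 's/\$//g' | grep -v % |
  sed 's/0x//g' | sed 's/,//g' |
  grep -v '-' | awk 'length($0)>1' | awk 'length($0)!=3' | xxd -r -p
}

delim=""
if [ "$1" = "-d" ]; then delim="-d"; shift; fi

# Only run otool when a binary path was actually supplied.
if [ $# -gt 0 ]; then
  otool -tvj "$1" | decode "$delim"
fi
```

Save it as byteStrings, make it executable with chmod +x, and call it with the MRT path appropriate to your version of macOS.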

Running Diffs On MRT.app

In order to do diffs you’re going to need two versions of the MRT binary, so I suggest copying the current one now to a dedicated folder for this kind of work so that when the next iteration arrives, you have the older version safely tucked away for testing. I use a simple nomenclature, MRT_151, MRT_152, for version 1.51, 1.52, etc.

$ cp /System/Library/CoreServices/MRT.app/Contents/MacOS/MRT ~/MRT_152

Note: For Catalina, remember to prepend /Library/Apple to that path.

From our discussion above, we now have two kinds of string differences we can run on the MRT.app executable: the ordinary strings with strings or FLOSS and what I’ll call a “byteStrings” command or script using the code detailed above.

Aside from strings, it’s also good to know what classes have changed in a machO binary, what new symbols may have been imported from libraries and so on. The built-in tools nm and otool have some useful options you can check out in the man pages. I also tend to make heavy use of rabin2 (part of the r2 toolset) for binary analysis as some of the examples below show.

In order to do a fast diff on two MRT binaries, we can make use of a shell feature called process substitution, which allows us to treat the result of commands like a file, and thus pipe them into the diff utility.

Using this syntax,

$ diff -y <([commands] path to older file) <([commands] path to newer file)

the -y switch allows us to view the diff side-by-side for convenience. The process substitutions occur inside <().
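As a toy illustration, with printf standing in for our real commands:

```shell
# diff treats each <( ) as a file; -y prints the two streams side by side.
# diff exits non-zero when the inputs differ, hence the trailing || true.
diff -y <(printf 'alpha\nbeta\n') <(printf 'alpha\ngamma\n') || true
# 'beta' and 'gamma' land on the same row, flagged with | in the middle column
```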

Let’s try it:

$ diff -y <(strings -a MRT_151) <(strings -a MRT_152)

There are three possible outcomes for each line, shown between the two columns. A > indicates the entry to the right has been added at that position in the newer file; a < indicates the entry to the left is missing at that position in the newer file; and a | indicates the line at that position has changed.

image of side by side strings diff

However, in practice, interpretation is a little more nuanced than that because, due to code refactoring, many lines that may look like they’re missing will appear as added at another point in the diff. The above image shows a good example. Note how the pathWithComponents: line has shifted in MRT v1.52 (on the right) compared to MRT v1.51 (on the left).

There are a couple of ways you can deal with this in general. If the output is short, a manual inspection will usually do. If there’s a lot of output, a bit more grepping should tell you how much content you’re looking at, but you’ll find it easier to manipulate the results in text files. The following two commands sort the additions and omissions into two separate text files, “added.txt” and “missing.txt”, respectively.

$ diff <(strings -a MRT_151) <(strings -a MRT_152) | grep '>' | sed 's/> //g' > added.txt

$ diff <(strings -a MRT_151) <(strings -a MRT_152) | grep '<' | sed 's/< //g' > missing.txt

The sed command serves to remove the “greater than” and “less than” symbols, which we need to do so that we can then diff those two files against each other:

$ diff missing.txt added.txt | grep '>' | sort -u
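To see why this bookkeeping works, here’s the same kind of added/missing split run on a pair of toy string lists (stand-ins for the two strings dumps):

```shell
# old.txt and new.txt stand in for strings dumps of two MRT versions.
printf 'foo\nbar\nbaz\n' > old.txt
printf 'foo\nbaz\nqux\n' > new.txt
diff old.txt new.txt | grep '>' | sed 's/> //g' > added.txt
diff old.txt new.txt | grep '<' | sed 's/< //g' > missing.txt
cat added.txt     # qux
cat missing.txt   # bar
```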

Aside from strings, another important change to look at is the class names. My favorite tool for class dumping is rabin2. We’ll use the -cc option and grep out the additions and omissions, once again making good use of process substitution.

$ diff -y <(rabin2 -cc MRT_151) <(rabin2 -cc MRT_152) | egrep '<|>'

image of diff with rabin cc

You can also play with other rabin2 options in the same pattern, such as -l instead of -cc to do a diff on the linked libraries (no change between MRT v1.51 and MRT v1.52, in this instance). Likewise, diffing the two binaries with nm using process substitution can also sometimes turn up subtle changes.

image of diff with nm

Finally, of course, don’t forget to do a side-by-side diff with our new byteStrings tool!

$ diff -y <(./byteStrings MRT_151) <(./byteStrings MRT_152)

image of some new bytestrings

Conclusion

In this post, we’ve looked at ways to run diffs using different tools that can help us to understand what changes have occurred from one version of a program to another. Along the way, we’ve taken a tour through the basics of disassembly to understand how we can create our own tool to automate part of the process.

Of course, Apple may well decide to choose another method for obfuscating some of their strings in some future version of MRT.app, and then we’ll have to find another way to decode them. I certainly hope not, not only because Apple’s security team know better than anyone that in the end, you simply can’t prevent code being read on macOS, but more importantly because sharing threat intel with other security researchers and exposing threat actor techniques serves to make everyone safer. That, of course, would involve an admission that threat actors are actively targeting macOS, something that history suggests Apple are disinclined to admit.

However that may be, if nothing else, I hope you enjoyed this tour through diffing and disassembling machO binaries! In the next post, we’ll be looking at how to do diffs on XProtect and Gatekeeper, so sign up to our blog newsletter or follow us on social media to be notified when that’s live.



Google Cloud gets a premium support plan with 15-minute response times

Google Cloud today announced the launch of its premium support plans for enterprise and mission-critical needs. This new plan brings Google’s support offerings for the Google Cloud Platform (GCP) in line with its premium G Suite support options.

“Premium Support has been designed to better meet the needs of our customers running modern cloud technology,” writes Google’s VP of Cloud Support, Atul Nanda. “And we’ve made investments to improve the customer experience, with an updated support model that is proactive, unified, centered around the customer, and flexible to meet the differing needs of their businesses.”

The premium plan, which Google will charge for based on your monthly GCP spend (with a minimum cost of what looks to be about $12,500 per month), promises a 15-minute response time for P1 cases. Those are situations when an application or infrastructure is unusable in production. Other features include training and new product reviews, as well as support for troubleshooting third-party systems.

Google stresses that the team that will answer a company’s calls will consist of “context-aware experts” who know your application stack and architecture. As with similar premium plans from other vendors, enterprises will have a technical account manager who works through these issues with them. Companies with global operations can opt to have (and pay for) technical account managers available during business hours in multiple regions.

The idea here, however, is also to give GCP users more proactive support, which will soon include a site reliability engineering engagement, for example, that is meant to help customers “design a wrapper of supportability around the Google Cloud customer projects that have the highest sensitivity to downtime.” The Support team will also work with customers to get them ready for special events like Black Friday or other peak events in their industry. Over time, the company plans to add more features and additional support plans.

As with virtually all of Google’s recent cloud moves, today’s announcement is part of the company’s efforts to get more enterprises to move to its cloud. Earlier this week, for example, it launched support for IBM’s Power Systems architecture, as well as new infrastructure solutions for retailers. In addition, it also acquired no-code service AppSheet.

Cloudinary passes $60M ARR without VC money

Hello and welcome back to our regular morning look at private companies, public markets and the gray space in between.

Today we’re continuing our exploration of companies that have reached material scale, usually viewed through the lens of annual recurring revenue (ARR). We’ve looked at companies that have reached the $100 million ARR mark and a few that haven’t quite yet, but are on the way.

Today, a special entry. We’re looking at a company that isn’t yet at the $100 million ARR mark. It’s 60% of the way there, but with a twist. The company is bootstrapped. Yep, from pre-life as a consultancy that built a product to fit its own needs, Cloudinary is cruising toward nine-figure recurring revenue and an IPO under its own steam.

Cyral announces $11M Series A to help protect data in cloud

Cyral, an early-stage startup that helps protect data stored in cloud repositories, announced an $11 million Series A today. The company also revealed a previously undisclosed $4.1 million angel investment, making the total $15.1 million.

The Series A was led by Redpoint Ventures. A.Capital Ventures, Costanoa VC, Firebolt, SV Angel and Trifecta Capital also participated in the round.

Cyral co-founder and CEO Manav Mital says the company’s product acts as a security layer on top of cloud data repositories — whether databases, data lakes, data warehouses or other data repositories — helping identify issues like faulty configurations or anomalous activity.

Mital says that unlike most data security products of this ilk, Cyral doesn’t use an agent or watch points to try to detect signals that indicate something is happening to the data. Instead, he says that Cyral is a security layer attached directly to the data.

“The core innovation of Cyral is to put a layer of visibility attached right to the data endpoint, right to the interface where application services and users talk to the data endpoint, and in real time see the communication,” Mital explained.

As an example, he says that Cyral could detect that someone has suddenly started scanning rows of credit card data, or that someone was trying to connect to a database on an unencrypted connection. In each of these cases, Cyral would detect the problem, and depending on the configuration, send an alert to the customer’s security team to deal with the problem, or automatically shut down access to the database before informing the security team.

It’s still early days for Cyral, with 15 employees and a handful of early access customers. Mital says for this round he’s focused on bringing to market a product that’s well designed and easy to use.

He says that people get the problem he’s trying to solve. “We could walk into any company and they are all worried about this problem. So for us getting people interested has not been an issue. We just want to make sure we build an amazing product,” he said.

Epsagon scores $16M Series A to monitor modern development environments

Epsagon, an Israeli startup that wants to help monitor modern development environments like serverless and containers, announced a $16 million Series A today.

U.S. Venture Partners (USVP), a new investor, led the round. Previous investors Lightspeed Venture Partners and StageOne Ventures also participated. Today’s investment brings the total raised to $20 million, according to the company.

CEO and co-founder Nitzan Shapira says that the company has been expanding its product offerings in the last year to cover not just its serverless roots, but also giving deeper insights into a number of forms of modern development.

“So we spoke around May when we launched our platform for microservices in the cloud products, and that includes containers, serverless and really any kind of workload to build microservices apps. Since then we have had several significant announcements,” Shapira told TechCrunch.

For starters, the company announced support for tracing and metrics for Kubernetes workloads, including native Kubernetes along with managed Kubernetes services like AWS EKS and Google GKE. “A few months ago, we announced our Kubernetes integration. So, if you’re running any Kubernetes workload, you can integrate with Epsagon in one click, and from there you get all the metrics out of the box, then you can set up a tracing in a matter of minutes. So that opens up a very big number of use cases for us,” he said.

The company also announced support for AWS AppSync, Amazon’s managed GraphQL service. “We are the only provider today to introduce tracing for AppSync and that’s [an area] where people really struggle with the monitoring and troubleshooting of it,” he said.

The company hopes to use the money from today’s investment to expand the product offering further with support for Microsoft Azure and Google Cloud Platform in the coming year. He also wants to expand the automation of some tasks that have to be manually configured today.

“Our intention is to make the product as automated as possible, so the user will get an amazing experience in a matter of minutes, including advanced monitoring, identifying different problems and troubleshooting,” he said.

Shapira says the company has around 25 employees today, and plans to double headcount in the next year.

Adobe Experience Manager now offered as cloud-native SaaS application

Adobe announced today that Adobe Experience Manager (AEM) is now available as a cloud-native SaaS application. Prior to this, it was available on premises or as a managed service, but it wasn’t pure cloud-native.

Obviously being available as a cloud service makes sense for customers, and offers all of the value you would get from any cloud service. Customers can now access all of the tools in AEM without having to worry about maintaining, managing or updating it, giving the marketing team more flexibility, agility and ongoing access to the latest updates.

This value proposition did not escape Loni Stark, Adobe’s senior director of strategy and product marketing. “It creates a compelling offer for mid-size companies and enterprises that are increasingly transforming to adopt advanced digital tools but need more simplicity and flexibility to support their changing business models,” Stark said in a statement.

AEM provides a number of capabilities, including managing the customer experience in real time. Having real-time access to data means you can deliver the products, services and experiences that make sense based on what you know about the customer in any given moment.

What’s more, you can meet customers wherever they happen to be. Today, it could be the company website, mobile app or other channel. Companies need to be flexible and tailor content to the specific channel, as well as what they know about the customer.

It’s interesting to note that AEM is based on the purchase of Day Software in 2010. That company originally developed a web content management product, but over time it evolved to become Adobe Experience Manager, and Adobe has been layering on functionality to meet an experience platform’s requirements ever since. Today, the product includes tools for content management, asset management and digital forms.

The company made the announcement today at NRF 2020, a huge retail conference taking place in New York City this week.

Google acquires AppSheet to bring no-code development to Google Cloud

Google announced today that it is buying AppSheet, an eight-year-old no-code mobile-application-building platform. The company had raised more than $17 million on a $60 million valuation, according to PitchBook data. The companies did not share the purchase price.

With AppSheet, Google gets a simple way for companies to build mobile apps without having to write a line of code. It works by pulling data from a spreadsheet, database or form, and using the field or column names as the basis for building an app.
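The core no-code idea described above can be sketched in a few lines: treat the first row of a spreadsheet as headers, and derive an app’s data-entry fields from the column names. This is an illustrative sketch only; the function and field names are hypothetical and do not reflect AppSheet’s actual implementation or API.

```python
# Illustrative sketch of the no-code concept: a spreadsheet's column
# headers become form fields in a generated app. Names are hypothetical.

def form_fields_from_sheet(rows):
    """Treat the first row as column headers and turn each header
    into a simple form-field definition."""
    headers = rows[0]
    return [
        {"name": h, "label": h.replace("_", " ").title(), "type": "text"}
        for h in headers
    ]

# A toy spreadsheet: one header row, one data row.
sheet = [
    ["customer_name", "order_date", "amount"],
    ["Acme Corp", "2020-01-15", "1200"],
]

for field in form_fields_from_sheet(sheet):
    print(field["label"])  # Customer Name / Order Date / Amount
```

A real platform would also infer field types (dates, numbers) from the data rows, but the mapping from columns to app structure is the essential trick.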

It is already integrated with Google Cloud through Google Sheets and Google Forms, but it also works with other tools, including AWS DynamoDB, Salesforce, Office 365 and Box. Google says it will continue to support these other platforms, even after the deal closes.

As Amit Zavery wrote in a blog post announcing the acquisition, it’s about giving everyone a chance to build mobile applications, even companies lacking traditional developer resources to build a mobile presence. “This acquisition helps enterprises empower millions of citizen developers to more easily create and extend applications without the need for professional coding skills,” he wrote.

In a story we hear repeatedly from startup founders, Praveen Seshadri, co-founder and CEO at AppSheet, sees an opportunity to expand his platform and market reach under Google in ways he couldn’t as an independent company.

“There is great potential to leverage and integrate more deeply with many of Google’s amazing assets like G Suite and Android to improve the functionality, scale, and performance of AppSheet. Moving forward, we expect to combine AppSheet’s core strengths with Google Cloud’s deep industry expertise in verticals like financial services, retail, and media and entertainment,” he wrote.

Google sees this acquisition as extending its development philosophy with no-code working alongside workflow automation, application integration and API management.

No-code tools like AppSheet are not going to replace sophisticated development environments, but they will give companies that might not otherwise have a mobile app the ability to put something decent out there.

The crypto rich find security in Anchorage

Not the city, the $57 million-funded cryptocurrency custodian startup. When someone wants to keep tens or hundreds of millions of dollars in Bitcoin, Ethereum, or other coins safe, they put them in Anchorage’s vault. And now they can trade straight from custody so they never have to worry about getting robbed mid-transaction.

With backing from Visa, Andreessen Horowitz, and Blockchain Capital, Anchorage has emerged as the darling of the cryptocurrency security startup scene. Today it’s flexing its muscle and war chest by announcing its first acquisition, crypto risk modeling company Merkle Data.

Anchorage Security

Anchorage founders

Anchorage has already integrated Merkle’s technology and team to power today’s launch of its new trading feature. It eliminates the need for big crypto owners to manually move assets in and out of custody to buy or sell, or to set up their own in-house trading. Instead of grabbing some undisclosed spread between the spot price and the price Anchorage quotes its clients, it charges a transparent per-transaction fee of a tenth of a percent.
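To make the pricing concrete, a tenth of a percent is ten basis points on the trade’s notional value. The figures below are hypothetical, purely to illustrate the stated rate:

```python
# Back-of-the-envelope illustration of a flat 0.1% (ten basis points)
# per-transaction fee, as opposed to an undisclosed spread.
# Dollar amounts are hypothetical.

FEE_RATE = 0.001  # a tenth of a percent

def trading_fee(notional_usd):
    """Fee charged on a trade of the given notional size."""
    return notional_usd * FEE_RATE

print(trading_fee(10_000_000))  # a $10M trade costs $10,000 in fees
```

The appeal of a flat fee is that the client can compute the cost up front, whereas a spread is only visible if you know the true spot price at execution time.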

It’s stressful enough trading around digital fortunes. Anchorage gives institutions and token moguls peace of mind throughout the process while letting them stake and vote while their riches are in custody. Anchorage CEO Nathan McCauley tells me, “Our clients want to be able to fund a bank account with USD and have it seamlessly converted into crypto, securely held in their custody accounts. Shockingly, that’s not yet the norm, but we’re changing that.”

Buy and sell safely

Founded in 2017 by leaders behind Docker and Square, Anchorage’s core business is its omnimetric security system that takes passwords that can be lost or stolen out of the equation. Instead, it uses humans and AI to review scans of your biometrics, nearby networks, and other data for identity confirmation. Then it requires consensus approval for transactions from a set of trusted managers you’ve whitelisted.
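The consensus-approval idea is essentially a quorum check over a whitelist: a transaction proceeds only if enough distinct trusted managers sign off. A minimal sketch, assuming a simple threshold scheme; the names, quorum size, and structure are illustrative, not Anchorage’s actual design:

```python
# Minimal sketch of quorum approval from a whitelisted set of managers.
# Whitelist, quorum, and approver names are hypothetical.

WHITELIST = {"alice", "bob", "carol", "dave"}
QUORUM = 3  # distinct approvals required

def transaction_approved(approvals):
    """Approve only if a quorum of whitelisted managers signed off.
    Duplicate or non-whitelisted approvals don't count."""
    valid = set(approvals) & WHITELIST
    return len(valid) >= QUORUM

print(transaction_approved(["alice", "bob", "carol"]))      # True
print(transaction_approved(["alice", "alice", "mallory"]))  # False
```

Deduplicating with a set matters: one compromised approver repeating their vote, or an outsider not on the whitelist, cannot move the tally toward the quorum.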

With Anchorage Trading, the startup promises efficient order routing, transparent pricing, and multi-venue liquidity from OTC desks, exchanges, and market makers. “Because trading and custody are directly integrated, we’re able to buy and sell crypto from custody, without having to make risky external transfers or deal with multiple accounts from different providers,” says Bart Stephens, founder and managing partner of Blockchain Capital.

Trading isn’t Anchorage’s primary business, so it doesn’t have to squeeze clients on their transactions and can instead try to keep them happy for the long term. That also sets up Anchorage to be a foundational part of the cryptocurrency stack. It wouldn’t disclose the terms of the Merkle Data acquisition, but the Pantera Capital-backed company brings quantitative analysts to Anchorage to keep its trading safe and smart.

“Unlike most traditional financial assets, crypto assets are bearer assets: in order to do anything with them, you need to hold the underlying private keys. This means crypto custodians like Anchorage must play a much larger role than custodians do in traditional finance,” says McCauley. “Services like trading, settlement, posting collateral, lending, and all other financial activities surrounding the assets rely on the custodian’s involvement, and in our view are best performed by the custodian directly.”

Anchorage will be competing with Coinbase, which offers integrated custody and institutional brokerage through its agency-only OTC desk. Fidelity Digital Assets combines trading and brokerage, but for Bitcoin only. BitGo offers brokerage from custody through a partnership with Genesis Global Trading. But Anchorage hopes its experience handling huge sums, clear pricing, and credentials like membership in Facebook’s Libra Association will win it clients.

McCauley says the biggest threat to Anchorage isn’t competitors, though, but hazy regulation. Anchorage is building a core piece of the blockchain economy’s infrastructure. But for the biggest financial institutions to be comfortable getting involved, lawmakers need to make it clear what’s legal.