CIO Cynthia Stoddard explains Adobe’s journey from boxes to the cloud

Up until 2013, Adobe sold its software in cardboard boxes that were distributed mostly by third-party vendors.

In time, the company realized there were a number of problems with that approach. For starters, updates took months or years to ship, and Adobe software was so costly that much of its user base didn't upgrade. But perhaps even more important than the revenue/development gap was the fact that Adobe had no direct connection to the people who purchased its products.

By ceding sales to third-party resellers, Adobe had made those resellers its de facto customers, but changing the distribution system also meant transforming the way the company developed and sold its most lucrative products.

The shift was a bold move that has paid off handsomely, with the company surpassing an $11 billion annual run rate in December, but it was an enormous risk at the time. We spoke to Adobe CIO Cynthia Stoddard to learn more about what it took to completely transform the way the company did business.

Understanding the customer

Before Adobe could make the switch to selling software as a cloud service subscription, it needed a mechanism for doing that, and that involved completely repurposing its website, Adobe.com, which at the time was a purely informational site.

“So when you think about transformation the first transformation was how do we connect and sell and how do we transition from this large network of third parties into selling direct to consumer with a commerce site that needed to be up 24×7,” Stoddard explained.

The work didn't stop there, though, because Adobe wasn't simply abandoning the entire distribution network that was in place. In the new cloud model, the company still has a healthy network of partners, and it had to set up the new system to accommodate them alongside individual and business customers.

She says one of the keys to managing a set of changes this immense was that they didn’t try to do everything at once. “One of the things we didn’t do was say, ‘We’re going to move to the cloud, let’s throw everything away.’ What we actually did is say we’re going to move to the cloud, so let’s iterate and figure out what’s working and not working. Then we could change how we interact with customers, and then we could change the reporting, back office systems and everything else in a very agile manner,” she said.

New Charges, Sentencing in Satori IoT Botnet Conspiracy

The U.S. Justice Department today charged a Canadian and a Northern Ireland man for allegedly conspiring to build botnets that enslaved hundreds of thousands of routers and other Internet of Things (IoT) devices for use in large-scale distributed denial-of-service (DDoS) attacks. In addition, a defendant in the United States was sentenced today to drug treatment and 18 months community confinement for his admitted role in the botnet conspiracy.

Indictments unsealed by a federal court in Alaska today allege 20-year-old Aaron Sterritt from Larne, Northern Ireland, and 21-year-old Logan Shwydiuk of Saskatoon, Canada, conspired to build, operate and improve their IoT crime machines over several years.

Prosecutors say Sterritt, using the hacker aliases “Vamp” and “Viktor,” was the brains behind the computer code that powered several potent and increasingly complex IoT botnet strains that became known by exotic names such as “Masuta,” “Satori,” “Okiru” and “Fbot.”

Shwydiuk, a.k.a. “Drake,” “Dingle,” and “Chickenmelon,” is alleged to have taken the lead in managing sales and customer support for people who leased access to the IoT botnets to conduct their own DDoS attacks.

A third member of the botnet conspiracy — 22-year-old Kenneth Currin Schuchman of Vancouver, Wash. — pleaded guilty in September 2019 to aiding and abetting computer intrusions. Schuchman, whose role was to acquire software exploits that could be used to infect new IoT devices, was sentenced today by a judge in Alaska to 18 months of community confinement and drug treatment, followed by three years of supervised release.

Kenneth “Nexus-Zeta” Schuchman, in an undated photo.

The government says the defendants built and maintained their IoT botnets by constantly scanning the Web for insecure devices. That scanning primarily targeted devices that were placed online with weak, factory default settings and/or passwords. But the group also seized upon a series of newly-discovered security vulnerabilities in these IoT systems — commandeering devices that hadn’t yet been updated with the latest software patches.

Some of the IoT botnets enslaved hundreds of thousands of hacked devices. For example, by November 2017, Masuta had infected an estimated 700,000 systems, allegedly allowing the defendants to launch crippling DDoS attacks capable of hurling 100 gigabits of junk data per second at targets — enough firepower to take down many large websites.

In 2015, then 15-year-old Sterritt was involved in the high-profile hack against U.K. telecommunications provider TalkTalk. Sterritt later pleaded guilty to his part in the intrusion, and at his sentencing in 2018 was ordered to complete 50 hours of community service.

The indictments against Sterritt and Shwydiuk (PDF) do not mention specific DDoS attacks thought to have been carried out with the IoT botnets. In an interview today with KrebsOnSecurity, prosecutors in Alaska declined to discuss any of their alleged offenses beyond building, maintaining and selling the above-mentioned IoT botnets.

But multiple sources tell KrebsOnSecurity that Vamp was principally responsible for the massive 2016 denial-of-service attack that swamped Dyn — a company that provides core Internet services for a host of big-name Web sites. On October 21, 2016, an attack by a Mirai-based IoT botnet variant overwhelmed Dyn’s infrastructure, causing outages at a number of top Internet destinations, including Twitter, Spotify, Reddit and others.

In 2018, authorities with the U.K.’s National Crime Agency (NCA) interviewed a suspect in connection with the Dyn attack, but ultimately filed no charges against the youth because all of his digital devices had been encrypted.

“The principal suspect of this investigation is a UK national resident in Northern Ireland,” reads a June 2018 NCA brief on their investigation into the Dyn attack (PDF), dubbed Operation Midmonth. “In 2018 the subject returned for interview, however there was insufficient evidence against him to provide a realistic prospect of conviction.”

The login prompt for Nexus Zeta’s IoT botnet included the message “Masuta is powered and hosted on Brian Kreb’s [sic] 4head.” To be precise, it’s a 5head.

The unsealing of the indictments against Sterritt and Shwydiuk came just minutes after Schuchman was sentenced today. Schuchman has been confined to an Alaskan jail for the past 13 months, and Chief U.S. District Judge Timothy Burgess today ordered the sentence of 18 months community confinement to begin Aug. 1.

Community confinement in Schuchman’s case means he will spend most or all of that time in a drug treatment program. In a memo (PDF) released prior to Schuchman’s sentencing today, prosecutors detailed the defendant’s ongoing struggle with narcotics, noting that on multiple occasions he was discharged from treatment programs after testing positive for Suboxone — which is used to treat opiate addiction and is sometimes abused by addicts — and for possessing drug contraband.

The government’s sentencing memo also says Schuchman on multiple occasions absconded from pretrial supervision, and went right back to committing the botnet crimes for which he’d been arrested — even communicating with Sterritt about the details of the ongoing FBI investigation.

“Defendant’s performance on pretrial supervision has been spectacularly poor,” prosecutors explained. “Even after being interviewed by the FBI and put on restrictions, he continued to create and operate a DDoS botnet.”

Prosecutors told the judge that when he was ultimately re-arrested by U.S. Marshals, Schuchman was found at a computer in violation of the terms of his release. In that incident, Schuchman allegedly told his dad to trash his computer, before successfully encrypting his hard drive (which the Marshals service is still trying to decrypt). According to the memo, the defendant admitted to marshals that he had received and viewed videos of “juveniles engaged in sex acts with other juveniles.”

“The circumstances surrounding the defendant’s most recent re-arrest are troubling,” the memo recounts. “The management staff at the defendant’s father’s apartment complex, where the defendant was residing while on abscond status, reported numerous complaints against the defendant, including invitations to underage children to swim naked in the pool.”

Adam Alexander, assistant U.S. attorney for the District of Alaska, declined to say whether the DOJ would seek extradition of Sterritt and Shwydiuk. Alexander said the success of these prosecutions is highly dependent on the assistance of domestic and international law enforcement partners, as well as a list of private and public entities named at the conclusion of the DOJ’s press release on the Schuchman sentencing (PDF).

However, a DOJ motion (PDF) to seal the case records filed back in September 2019 says the government is in fact seeking to extradite the defendants.

Chief Judge Burgess was the same judge who presided over the 2018 sentencing of the co-authors of Mirai, a highly disruptive IoT botnet strain whose source code was leaked online in 2016 and was built upon by the defendants in this case. Both Mirai co-authors were sentenced to community service and home confinement thanks to their considerable cooperation with the government’s ongoing IoT botnet investigations.

Asked whether he was satisfied with the sentence handed down against Schuchman, Alexander maintained it was more than just another slap on the wrist, noting that Schuchman has waived his right to appeal the conviction and faces additional confinement of two years if he absconds again or fails to complete his treatment.

“In every case the statutory factors have to do with the history of the defendants, who in these crimes tend to be extremely youthful offenders,” Alexander said. “In this case, we had a young man who struggles with mental health and really pronounced substance abuse issues. Contrary to what many people might think, the goal of the DOJ in cases like this is not to put people in jail for as long as possible but to try to achieve the best balance of safeguarding communities and affording the defendant the best possible chance of rehabilitation.”

William Walton, supervisory special agent for the FBI’s cybercrime investigation division in Anchorage, Alaska, said he hopes today’s indictments and sentencing send a clear message to what he described as a relatively insular and small group of individuals who are still building, running and leasing IoT-based botnets to further a range of cybercrimes.

“One of the things we hope in our efforts here and in our partnerships with our international partners is when we identify these people, we want very much to hold them to account in a just but appropriate way,” Walton said. “Hopefully, any associates who are aspiring to fill the vacuum once we take some players off the board realize that there are going to be real consequences for doing that.”

Ransomware – A Complex Attack Needs a Sophisticated Defense

A guest post from Arete, by

  • Jim Jaeger, President and Chief Cyber Strategist, Arete Incident Response
  • Larry Wescott, CISSP, Cyber Strategist, Arete Incident Response
  • Rae Jewell, Director, MDR & IR Security Operations, Arete Incident Response, contributed to this article.

A ransomware attack is not a simple infection from malware. It is a complex series of actions in which the initial infection is only the first step. A successful ransomware attack almost always involves a variety of attack vectors, frequently guided by human intervention. Successfully resisting a ransomware attack requires a solution that can neutralize the full range of threats from these vectors.

Microsoft recently issued a detailed report describing how complex ransomware variants, which are manually guided by their issuers, operate. Here are some of the characteristics of these human-guided ransomware variants:

  • They begin with unsophisticated types of malware which can trigger multiple alerts, but tend to be triaged as unimportant and not thoroughly investigated
  • They may drop multiple variants of malware until a variant is not caught by antivirus software
  • They can attack servers which have Remote Desktop Protocol (RDP) configured as open to the internet, and use brute force attacks to gain access to the corporate network
  • Once inside, they surveil the network
  • They can use other utilities to steal credentials to gain administrative privileges
    • They can then stop services such as antivirus protection or other services which may lead to their detection
    • Other tools are downloaded to enable persistence of the malware, elevation of privileges and clearing of event logs
  • They can execute PowerShell scripts connecting to a command and control server, allowing persistent control over other machines
  • They will stop Exchange Server, SQL Server and other similar services which can lock certain files so they cannot be encrypted
  • They can introduce legitimate binaries and use Alternate Data Streams to masquerade the execution of ransomware code as legitimate code

Ransomware Techniques Seen in the Wild

Execution of the ransomware payload at the highest privilege level with the fewest obstacles is the ultimate goal of the attacker. Often the attackers will also disable or encrypt on-line back-up systems so they cannot be used to recover data encrypted in the ransomware attack. We are now seeing some ransomware variants exfiltrating sensitive data with the additional goal of threatening the victim with exposure of the data if the ransom is not paid.

Other pernicious ransomware techniques include polymorphism, or code which constantly changes itself to avoid detection, and the use of fileless strategies to infect machines without dropping files onto the target machine. Some have noted the use of artificial intelligence tactics to take over some of the human-guided techniques described above, such as reconnaissance and scaling attacks.

AV Signatures Are Failing to Block Ransomware

Defensive antivirus systems which are signature-based are totally insufficient to repel attacks from this wide variety of potential attack vectors.

We respond to hundreds of ransomware attacks a year. In every case where the victim was using signature-based antivirus defenses, those defenses did NOT detect the ransomware, allowing it to execute and encrypt critical data.

The National Institute of Standards and Technology (NIST) describes the limitations of signature-based detection systems this way:

Signature-based detection is very effective at detecting known threats but largely ineffective at detecting previously unknown threats, threats disguised using evasion techniques, and many variants of known threats. For example, if an attacker modified the malware to use a filename of “freepics2.exe”, a signature-based defense looking for “freepics.exe” would not match it.

Signature-based detection is the simplest detection method because it just compares the current unit of activity, such as a packet or a log entry, to a list of signatures using string comparison operations. Signature-based detection technologies have little understanding of many network or application protocols and cannot track and understand the state of complex communications. They also lack the ability to remember previous requests when processing the current request. This limitation prevents signature-based detection methods from detecting attacks that comprise multiple events if none of the events contains a clear indication of an attack.
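To make NIST’s point concrete, here is a toy sketch of pure string matching, ours rather than NIST’s or any vendor’s actual engine, using invented filenames. The renamed file sails straight past the signature:

#!/bin/bash
# Toy signature check: flag a file only if its name exactly matches a known-bad string.
blocklist="freepics.exe"
for candidate in "freepics.exe" "freepics2.exe"; do
    if [[ "$candidate" == "$blocklist" ]]; then
        echo "$candidate: BLOCKED (signature match)"
    else
        # freepics2.exe lands here: a one-character rename defeats the signature
        echo "$candidate: allowed"
    fi
done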

The Next Step in Evolution: EPP

An Endpoint Protection Platform (EPP) is a step up the protection ladder. An EPP system is a

“set of software tools and technologies that enable the securing of endpoint devices. It is a unified security solution that combines antivirus, antispyware, intrusion detection/prevention, a personal firewall and other endpoint protection solutions.”

Although some EPP solutions include threat intelligence and data analytics, they sometimes lack capabilities such as memory analysis, which would allow detection of memory-resident attacks, or monitoring of existing operating system binaries and capabilities (such as PowerShell), which could detect “living-off-the-land” (LOL) attacks that hijack these operating system functions.

An EPP is an important step in the right direction, as a correctly deployed solution provides a defensive perimeter around the organization, on all of the endpoints which represent potential access channels for malware. Even one unmonitored access point may be all that is needed for an intruder to get inside and start the processes which could culminate in a successful ransomware attack.

A consequence of a fully deployed EPP solution, however, is a potentially massive amount of data generated by the endpoints, which must be analyzed in order to detect the hits that even a signature-based detection process would generate. That assumes the malware can be detected by the signature-based system at all: that constantly evolving polymorphic malware is not involved, and that the malware matches the signatures stored by the system, which is not a given. Further, as the size of the business increases, the volume of data generated grows dramatically with it.

Problem Solved: EDR

But perhaps more importantly, as the NIST comment pointed out, a signature-based system will not be able to analyze the context of an attack and trigger an alert when a pattern emerges, such as repeated login attempts, especially across a number of endpoints, which may indicate a brute-force attack. To the extent that an EPP solution contains threat intelligence or data analytics, it may be able to detect these kinds of attacks.
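As a toy illustration of the cross-event correlation that pure signature matching cannot perform, consider tallying failed logins per source address across collected endpoint logs. The log location, format (timestamp, hostname, source IP, event) and threshold here are all invented for illustration:

#!/bin/bash
# Count LOGIN_FAILED events per source IP (field 3 in our hypothetical log format)
# across all collected endpoint logs; flag any source above a brute-force threshold.
threshold=50
grep -h "LOGIN_FAILED" /var/log/collected/*.log \
    | awk '{print $3}' \
    | sort | uniq -c \
    | awk -v t="$threshold" '$1 > t {print "possible brute force from " $2 " (" $1 " failures)"}'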

But as attacks grow more sophisticated, how those solutions implement their analytical capabilities may become an issue. In terms of sheer volume, a University of Maryland study estimated in 2007 that attacks occur every 39 seconds, a volume which has undoubtedly increased. Both cloud-based solutions and those involving a central database can present bottlenecks and delays in triggering alerts, which could provide attackers with critical advantages in establishing themselves inside networks. Attacks are also increasing in sophistication, with some seeing indications that attackers are beginning to incorporate artificial intelligence into their malware.

Endpoint detection and response (EDR) technology incorporates data analytics and threat intelligence into a package which can respond to the threat by killing or quarantining the malicious process. The most effective and advanced solutions are active EDR solutions, which incorporate artificial intelligence and machine learning (AI/ML) into behavioral analysis of system activity. These solutions apply data science at the endpoint in real time, with minimal performance overhead. Another advantage of active EDR is autonomous response – the ability to respond to threats at machine speed. The use of AI allows active EDR to respond to a ransomware attack before the malware can encrypt the data – much more quickly than a human could respond to an alert.


Conclusion

By focusing on behavior rather than conformance to a signature, active EDR can detect patterns at variance with the system baseline, whether from new (or evolved) variants or from activities occurring within the network which are at odds with the norm. Processes indicating suspicious activity can be killed or isolated before they can spread.

Active EDR also automates analysis of the activity to provide context for the human analyst, reducing by orders of magnitude the data generated by an EPP solution. This additional context reduces the amount of time required for human analysis, allowing analysts to keep up with the anomalies the system generates, or reducing the number of analysts required compared with running an EPP solution alone.

We routinely employ active EDR technology on every ransomware incident that we respond to and find it to be 100% effective in containing and neutralizing the malware. Active EDR enables us to confidently recover encrypted systems into a clean environment, whether we are restoring from (off-line) back-ups or employing decryption keys. Active EDR technology has proven to be effective against the most persistent ransomware variants being employed by attackers. Once our clients see how effective the active EDR tools we employ during our incident response operations are, they frequently purchase these systems for long term use on their networks.

Cape Privacy launches data science collaboration platform with $5.06M seed investment

Cape Privacy emerged from stealth today after spending two years building a platform for data scientists to privately share encrypted data. The startup also announced $2.95 million in new funding and $2.11 million in funding it got when the business launched in 2018, for a total of $5.06 million raised.

Boldstart Ventures and Version One led the round, with participation from Haystack, Radical Ventures and Faktory Ventures.

Company CEO Ché Wijesinghe says that data science teams often have to work with data sets that contain sensitive information, and to share that data internally or externally for collaboration purposes. That creates a legal and regulatory data privacy conundrum that Cape Privacy is trying to solve.

“Cape Privacy is a collaboration platform designed to help focus on data privacy for data scientists. So the biggest challenge that people have today from a business perspective is managing privacy policies for machine learning and data science,” Wijesinghe told TechCrunch.

The product breaks down that problem into a couple of key areas. First of all it can take language from lawyers and compliance teams and convert that into code that automatically generates policies about who can see the different types of data in a given data set. What’s more, it has machine learning underpinnings so it also learns about company rules and preferences over time.

It also has a cryptographic privacy component. By wrapping the data with a cryptographic cipher, it lets teams share sensitive data in a safe way without exposing it to people who shouldn’t be seeing it for legal or regulatory compliance reasons.

“You can send something to a competitor as an example that’s encrypted, and they’re able to process that encrypted data without decrypting it, so they can train their model on encrypted data,” company co-founder and CTO Gavin Uhma explained.

The company closed the new round in April, which means they were raising in the middle of a pandemic, but it didn’t hurt that they had built the product already and were ready to go to market, and that Uhma and his co-founders had already built a successful startup, GoInstant, which was acquired by Salesforce in 2012. (It’s worth noting that GoInstant debuted at TechCrunch Disrupt in 2011.)

Uhma and his team brought Wijesinghe on board to build the sales and marketing team because, as a technical team, they wanted someone with go-to-market experience running the company so they could concentrate on building product.

The company has 14 employees and was already an all-remote team, so it didn’t have to adjust at all when the pandemic hit. While it plans to keep hiring fairly limited for the foreseeable future, the company has had a diversity and inclusion plan from the start.

“You have to be intentional about seeking diversity, so it’s something that when we sit down and map out our hiring and work with recruiters in terms of our pipeline, we really make sure that diversity is one of our objectives. You just have it as a goal, as part of your culture, and it’s something that when we see the picture of the team, we want to see diversity,” he said.

Wijesinghe adds, “As a person of color myself, I’m very sensitive to making sure that we have a very diverse team, not just from a color perspective, but a gender perspective as well.”

The company is gearing up to sell the product and has paid pilots starting in the coming weeks.

Dell’s debt hangover from $67B EMC deal could put VMware stock in play

When Dell bought EMC in 2016 for $67 billion it was one of the biggest acquisitions in tech history, and it brought with it a boatload of debt. Since then Dell has been working on ways to mitigate that debt by selling off various pieces of the corporate empire and going public again, but one of its most valuable assets remains VMware, a company that came over as part of the huge EMC deal.

The Wall Street Journal reported yesterday that Dell is considering selling part of its stake in VMware. The news sent the stock of both companies soaring.

It’s important to understand that even though VMware is part of the Dell family, it runs as a separate company, with its own stock and operations, just as it did when it was part of EMC. Still, Dell owns 81% of that stock, so it could sell a substantial stake and still retain a majority of the company, sell its stake entirely, fold VMware fully into Dell, or, of course, do nothing at all.

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, thinks this might just be about floating a trial balloon. “Companies do things like this all the time to gauge value, together and apart, and my hunch is this is one of those pieces of research,” Moorhead told TechCrunch.

But as Holger Mueller, an analyst with Constellation Research, points out, it’s an idea that could make sense. “It’s plausible. VMware is more valuable than Dell, and their innovation track record is better than Dell’s over the last few years,” he said.

Mueller added that Dell has been juggling its debts since the EMC acquisition, and it will struggle to innovate its way out of that situation. What’s more, Dell has to wait on any decision until September 2021 when it can move some or all of VMware tax-free, five years after the EMC acquisition closed.

“While Dell can juggle finances, it cannot master innovation. The company’s cloud strategy is only working on a shrinking market and that ain’t easy to execute and grow on. So yeah, next year makes sense after the five-year tax-free thing kicks in,” he said.

In between the spreadsheets

VMware is worth $63.9 billion today, while Dell is valued at a far more modest $38.9 billion, according to Yahoo Finance data. But beyond the fact that the companies’ market caps differ, they are also quite different in terms of their ability to generate profit.

Looking at their most recent quarters each ending May 1, 2020, Dell turned $21.9 billion in revenue into just $143 million in net income after all expenses were counted. In contrast, VMware generated just $2.73 billion in revenue, but managed to turn that top line into $386 million worth of net income.

So, VMware is far more profitable than Dell from a far smaller revenue base: those figures work out to a net margin of roughly 14% for VMware versus less than 1% for Dell. VMware also grew year over year (from $2.45 billion to $2.73 billion in quarterly revenue), while Dell shrank from $21.91 billion in Q1 FY2020 revenue to $21.90 billion in its own most recent three-month period.

VMware also has growing subscription software (SaaS) revenues. Investors love that top line varietal in 2020, having pushed the valuation of SaaS companies to new heights. VMware grew its SaaS revenues from $411 million in the year-ago period to $572 million in its most recent quarter, an increase of about 39%. That’s not rocketship growth, mind you, but the business category was VMware’s fastest growing segment in both percentage and gross dollar terms.

So VMware is worth more than Dell, and there are some understandable reasons for the situation. Why wouldn’t Dell sell some VMware to lower its debts if the market is willing to price the virtualization company so strongly? Heck, with less debt perhaps Dell’s own market value would rise.

It’s all about that debt

Almost four years after the deal closed, Dell is still struggling to figure out how to handle all the debt, and in a weak economy, that’s an even bigger challenge now. At some point, it would make sense for Dell to cash in some of its valuable chips, and its most valuable one is clearly VMware.

Nothing is imminent because of the five-year tax break business, but could something happen? September 2021 is a long time away, and a lot could change between now and then, but on its face, VMware offers a good avenue to erase a bunch of that outstanding debt very quickly and get Dell on much firmer financial ground. Time will tell if that’s what happens.

AWS launches Amazon Honeycode, a no-code mobile and web app builder

AWS today announced the beta launch of Amazon Honeycode, a new, fully managed low-code/no-code development tool that aims to make it easy for anybody in a company to build their own applications. All of this, of course, is backed by a database in AWS and a web-based, drag-and-drop interface builder.

Developers can build applications for up to 20 users for free. After that, they pay per user and for the storage their applications take up.

Image Credits: Amazon/AWS

“Customers have told us that the need for custom applications far outstrips the capacity of developers to create them,” said AWS VP Larry Augustin in the announcement. “Now with Amazon Honeycode, almost anyone can create powerful custom mobile and web applications without the need to write code.”

Like similar tools, Honeycode provides users with a set of templates for common use cases like to-do list applications, customer trackers, surveys, schedules and inventory management. Traditionally, AWS argues, a lot of businesses have relied on shared spreadsheets to do these things.

“Customers try to solve for the static nature of spreadsheets by emailing them back and forth, but all of the emailing just compounds the inefficiency because email is slow, doesn’t scale, and introduces versioning and data syncing errors,” the company notes in today’s announcement. “As a result, people often prefer having custom applications built, but the demand for custom programming often outstrips developer capacity, creating a situation where teams either need to wait for developers to free up or have to hire expensive consultants to build applications.”

It’s no surprise, then, that Honeycode uses a spreadsheet view as its core data interface, given how familiar virtually every potential user is with the concept. To manipulate data, users can work with standard spreadsheet-style formulas, which seems to be about the closest the service gets to actual programming. “Builders,” as AWS calls Honeycode users, can also set up notifications, reminders and approval workflows within the service.

AWS says these databases can easily scale up to 100,000 rows per workbook. With this, AWS argues, users can then focus on building their applications without having to worry about the underlying infrastructure.

As of now, it doesn’t look like users will be able to bring in any outside data sources, though that may still be on the company’s roadmap. On the other hand, these kinds of integrations would also complicate the process of building an app and it looks like AWS is trying to keep things simple for now.

Honeycode currently only runs in the AWS US West region in Oregon but is coming to other regions soon.

Among Honeycode’s first customers are SmugMug and Slack.

“We’re excited about the opportunity that Amazon Honeycode creates for teams to build apps to drive and adapt to today’s ever-changing business landscape,” said Brad Armstrong, VP of Business and Corporate Development at Slack in today’s release. “We see Amazon Honeycode as a great complement and extension to Slack and are excited about the opportunity to work together to create ways for our joint customers to work more efficiently and to do more with their data than ever before.”

Why AWS built a no-code tool

AWS today launched Amazon Honeycode, a no-code environment built around a spreadsheet-like interface that is a bit of a detour for Amazon’s cloud service. Typically, after all, AWS is all about giving developers all of the tools to build their applications — but they then have to put all of the pieces together. Honeycode, on the other hand, is meant to appeal to non-coders who want to build basic line-of-business applications. If you know how to work a spreadsheet and want to turn that into an app, Honeycode is all you need.

To understand AWS’s motivation behind the service, I talked to AWS VP Larry Augustin and Meera Vaidyanathan, a general manager at AWS.

“For us, it was about extending the power of AWS to more and more users across our customers,” explained Augustin. “We consistently hear from customers that there are problems they want to solve, they would love to have their IT teams or other teams — even outsourced help — build applications to solve some of those problems. But there’s just more demand for some kind of custom application than there are available developers to solve it.”

Image Credits: Amazon

In that respect, then, the motivation behind Honeycode isn’t all that different from what Microsoft is doing with its PowerApps low-code tool. That, too, after all, opens up the Azure platform to users who aren’t necessarily full-time developers. AWS is taking a slightly different approach here, though, by emphasizing the no-code part of Honeycode.

“Our goal with Honeycode was to enable the people in the line of business, the business analysts, project managers, program managers who are right there in the midst, to easily create a custom application that can solve some of the problems for them without the need to write any code,” said Augustin. “And that was a key piece. There’s no coding required. And we chose to do that by giving them a spreadsheet-like interface that we felt many people would be familiar with as a good starting point.”

A lot of low-code/no-code tools also allow developers to then “escape the code,” as Augustin called it, but that’s not the intent here, and there’s no real mechanism for exporting code from Honeycode and taking it elsewhere, for example. “One of the tenets we thought about as we were building Honeycode was, gee, if there are things that people want to do and we would want to answer that by letting them escape the code — we kept coming back and trying to answer the question, ‘Well, okay, how can we enable that without forcing them to escape the code?’ So we really tried to force ourselves into the mindset of wanting to give people a great deal of power without escaping to code,” he noted.

Image Credits: Amazon

There are, however, APIs that would allow experienced developers to pull in data from elsewhere. Augustin and Vaidyanathan expect that companies may do this for their users on the platform or that AWS partners may create these integrations, too.

Even with these limitations, though, the team argues that you can build some pretty complex applications.

“We’ve been talking to lots of people internally at Amazon who have been building different apps and even within our team and I can honestly say that we haven’t yet come across something that is impossible,” Vaidyanathan said. “I think the level of complexity really depends on how expert of a builder you are. You can get very complicated with the expressions [in the spreadsheet] that you write to display data in a specific way in the app. And I’ve seen people write — and I’m not making this up — 30-line expressions that are just nested and nested and nested. So I really think that it depends on the skills of the builder and I’ve also noticed that once people start building on Honeycode — myself included — I start with something simple and then I get ambitious and I want to add this layer to it — and I want to do this. That’s really how I’ve seen the journey of builders progress. You start with something that’s maybe just one table and a couple of screens, and very quickly, before you know, it’s a far more robust app that continues to evolve with your needs.”

Another feature that sets Honeycode apart is that a spreadsheet sits at the center of its user interface. In that respect, the service may seem a bit like Airtable, but I don’t think that comparison holds up, given that the two take these spreadsheets in very different directions. I’ve also seen it compared to Retool, which may be a better comparison, but Retool is going after a more advanced developer and doesn’t hide the code. There is a reason, though, why these services were built around spreadsheets, and that is simply that everybody is familiar with how to use them.

“People have been using spreadsheets for decades,” noted Augustin. “They’re very familiar. And you can write some very complicated, deep, very powerful expressions and build some very powerful spreadsheets. You can do the same with Honeycode. We felt people were familiar enough with that metaphor that we could give them that full power along with the ability to turn that into an app.”

The team itself used the service to manage the launch of Honeycode, Vaidyanathan stressed — and to vote on the name for the product (though Vaidyanathan and Augustin wouldn’t say which other names they considered).

“I think we have really, in some ways, a revolutionary product in terms of bringing the power of AWS and putting it in the hands of people who are not coders,” said Augustin.

Zoom founder and CEO Eric Yuan will speak at Disrupt 2020

The coronavirus pandemic has bruised and battered many technology startups, but it has also boosted a small few. One such company is Zoom, which has shouldered the task of keeping us connected to one another in the midst of remote work and social distancing.

So, of course, we’re absolutely thrilled to have the chance to chat with Zoom founder and CEO Eric Yuan at Disrupt 2020 online.

Yuan moved to Silicon Valley in 1997 after being rejected for a work visa nine times. He got a job at WebEx and, upon its acquisition by Cisco, became a VP of Engineering at Cisco. He pitched an idea for a mobile-friendly video conferencing system that was rejected by his higher-ups.

And thus, Zoom was born.

Zoom launched in 2011 and quickly became one of the biggest teleconferencing platforms in the world, competing with the likes of Google and Cisco. The company has investors like Emergence, Horizon Ventures, and Sequoia, and ultimately filed to go public in 2019.

With some of the most reliable video conferencing software on the market, a tiered pricing structure that’s friendly to average users and massive enterprises alike, and a lively ecosystem of apps and bots on the Zoom App Marketplace, Zoom was well poised to be a public company. In fact, Zoom popped 81 percent in its first day of trading on the Nasdaq, garnering a valuation of $16 billion at the time.

But few could have prepared the company for the explosive growth it would see in 2020.

The coronavirus pandemic necessitated access to reliable and user-friendly video conferencing software for everyone, not just companies moving to remote work. People used Zoom for family dinners, cocktail hours with friends, first dates, and religious gatherings.

In fact, Zoom reported 300 million daily active participants in April.

But that growth led to increased scrutiny of the business and the product. The company was beset by security issues and had to pause product innovation to focus its energy on resolving those issues.

We’ll talk to Yuan about the growing pains the company went through, his plans for Zoom’s future, the acceleration in changing user behavior, and more.

It’ll be a conversation you won’t want to miss.

Disrupt 2020 runs from September 14 to September 18, and the show will be completely virtual. That means it’s easier than ever to attend and engage with the show. There are just a few Digital Pro Passes left at the $245 price – once they are gone, prices will increase. Discounts are available for current students and non-profit/government employees. Or if you are a founder you can exhibit and be able to generate leads even before the event kicks off at your virtual booth for $445. Get your tickets today.

Salesforce announces a new mobile collaboration tool for sales called Anywhere

Even before the pandemic pushed most employees to work from home, salespeople often worked outside of the office. Salesforce introduced a new tool today at its TrailheaDX conference called Salesforce Anywhere that’s designed to let teams collaborate and share data wherever they happen to be.

Salesforce VP of product Michael Machado says the company began thinking about the themes of working from anywhere pre-COVID. “We were really thinking across the board what a mobile experience would be for the end users that’s extremely opinionated, really focuses on the jobs to be done and is optimized for what workers need and how that user experience can be transformed,” Machado explained.

As the pandemic took hold and the company saw how important collaboration was becoming in a digital context, the idea of an app like this took on a new sense of urgency. “When COVID happened, it really added fuel to the fire as we looked around the market and saw that this is a huge need with our customers going through a major transformation, and we wanted to be there to support them in Salesforce with kind of a native experience,” he said.

The idea is to move beyond the database and help surface the information that matters most to individual sales people based on their pipelines. “So we’re going to provide real time alerts so users are able to subscribe to their own alerts that they want to be notified about, whether it’s based on a list they use or a report that they work off of [in Salesforce], but also at the granularity of a single field in Salesforce,” he said.

Employees can then share information across a team, and have chats related to that information. While there are other chat tools out there, Machado says that this tool is focused on sharing Salesforce data, rather than being general purpose like Slack or any other business chat tool.

Image Credit: Salesforce

Salesforce sees this as another way to remove the complexity of working in CRM. It’s not a secret that sales people don’t love entering customer information into CRM tools, so the company is attempting to leverage that information to make it worth their while. If the tool isn’t creating a layer of work just for record keeping’s sake, but actually taking advantage of that information to give the sales person key information about their pipeline when it matters most, that makes the record keeping piece more attractive. Being able to share and communicate around that information is another advantage.

This also creates a new collaboration layer that is increasingly essential with workers spread out and working from home. Even when we return to some semblance of normal, sales people on the road can use Anywhere to collaborate, communicate and stay on top of their tasks.

The new tool will be available in beta in July. The company expects to make it generally available some time in the fourth quarter this year.

macOS Big Sur | 9 Big Surprises for Enterprise Security

A little later in June than usual, Apple’s Worldwide Developer Conference (WWDC) 2020 kicked off this week under, of course, most unusual circumstances. With COVID-19 still very much an issue as we hit the mid-year mark, Apple’s signature event has turned entirely virtual, allowing us all a front-row seat to take in the news, announcements and forthcoming developments across Apple’s platforms. Among those, the beta release of the next version of macOS is of major interest to enterprises and security teams. In this post, we round up the most significant changes we’ve seen announced so far affecting macOS security. Let’s take a look!

1. Will Your Hardware Support macOS 11.0 Big Sur?

In order to take advantage of the changes brought to macOS in Big Sur, you will of course need compatible Apple hardware. There are seven supported product lines for macOS Big Sur, with the earliest supported models going back to 2013:

  • MacBook (2015 and later)
  • MacBook Air (2013 and later)
  • MacBook Pro (Late 2013 and later)
  • Mac mini (2014 and later)
  • iMac (2014 and later)
  • iMac Pro (2017 and later)
  • Mac Pro (2013 and later)

What does this mean for enterprise?

Ageing hardware in your Mac fleet is probably already feeling the heat from the resource-intensive Mojave and Catalina. The only real question is how much of your macOS hardware you want to update now before the ARM chip hardware becomes available at the end of 2020 and through 2021.

2. Big Sur Version Number: Is it macOS 10.16 or 11.0?

There were a couple of big shocks with the first release of the macOS Big Sur beta, neither of which was explicitly called out in Apple’s Keynote on Monday. The first of these, spotted by some eagle-eyed watchers, was that macOS 10.15 isn’t being superseded by macOS 10.16! Instead, Apple have finally put the nail in the coffin of 20 years of Mac OS X, now not just in name but in version number, too: Big Sur is to be the first macOS 11.0!

What does this mean for enterprise?

Such a small change, but it will have consequences for many enterprises, as Apple themselves are finding out. So many enterprise workflows rely on scripts that check for version numbers in the 10.x range that many of these are going to be instantly broken.

if [[ ${osvers_major} -ne 10 ]]; then

Even Apple’s own software update check is probably counting in 10s, and the first beta was delivered as 10.16 (and possibly the rest, too; indeed, there’s plenty of internal documentation referencing “10.16”).
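The fix is to key off the major version with a range comparison rather than an exact match on 10. Here is a minimal sketch; the variable names are ours, and it allows for early betas that still identify as 10.16:

#!/bin/bash
# Derive major and minor version numbers from sw_vers output.
osvers_major=$(sw_vers -productVersion | awk -F. '{print $1}')
osvers_minor=$(sw_vers -productVersion | awk -F. '{print $2}')
# Treat both 11.x and a beta reporting 10.16 as Big Sur or later.
if [[ ${osvers_major} -ge 11 || ( ${osvers_major} -eq 10 && ${osvers_minor} -ge 16 ) ]]; then
    echo "macOS Big Sur or later"
else
    echo "macOS Catalina or earlier"
fi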

3. Kexts Get a Temporary Stay of Execution

Another big surprise turned out to be that the much-anticipated demise of kernel extensions did not occur, although Apple have certainly put in enough roadblocks to make developers and users want to transition away from kexts as fast as possible and migrate to using System Extensions and DriverKit instead.

Nonetheless, kexts remain an open possibility for organizations that have critical dependencies, and there’s a new tool kmutil to manage loading, unloading and diagnosing kexts and “kext collections” on Big Sur.

What does this mean for enterprise?

Organizations will be able to upgrade to macOS Big Sur without fear of losing functionality from software that depends on kernel extensions. However, it should be noted that kextutil and kextload have now been replaced by kmutil and there are changes and restrictions on how these work (see the man page for details).
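For example, where admins might previously have reached for kextstat to see what’s loaded, the Big Sur equivalent (to the best of our reading of the new tooling) is:

kmutil showloaded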

Apple have made it clear that there’s no let up in the drive to move developers away from kexts, advising that:

“IT administrators who use device drivers, cloud storage solutions, networking, and security apps that require kernel extensions are encouraged to move to newer versions – even if that means switching vendors – that are built on System Extensions.”

SentinelOne customers can be assured that our forthcoming macOS 4.4 Agent does not use kexts and will be compatible with macOS 10.15 Catalina and macOS Big Sur.

4. Compatibility with Rosetta 2, Apple silicon and Universal Binaries

Of course, one huge change that was mentioned at the end of the Keynote was the one that has been widely anticipated in the media: Apple’s move to an ARM-based chipset for macOS, dubbed “Apple silicon” at WWDC 2020. Although there is no ARM-based hardware available at the moment, a Developer Transition Kit is being made available, and Apple have said they expect to start shipping ARM Macs in late 2020.

To facilitate this transition, Apple have resurrected an old friend familiar to those that remember the PowerPC-to-Intel transition: Rosetta. Re-invented as Rosetta 2, this software-layer technology will allow certain classes of software compiled on Intel architecture to run on ARM-powered devices.

However, there are a couple of gotchas with Rosetta 2. First, the translation inevitably takes time, and no amount of clever optimization will alter the fact that some applications relying on Rosetta 2 will launch or run slower than they would if run natively on the Intel architecture they were compiled for. To help avoid this problem, Apple have also introduced a new multi-architecture, Universal (aka ‘Fat’) binary format. This allows developers to port existing macOS apps to run natively on Apple silicon. With Xcode 12 or later, developers can build a “fat” binary that contains architectures for both machines: an ARM-based Mac isn’t required to compile these universal binaries.

The second problem with Rosetta 2 is that it won’t translate all software compiled on Intel architecture into something that will run on Apple silicon. In particular, Windows virtualization software and kernel extensions are not supported. Those hoping Rosetta 2 might give them a ‘get out of jail free’ card for their kernel extensions will have to think again. It remains unclear what future there is for Windows virtual machines on macOS, though VMware at least seem to still be holding out some hope that all is not lost.

What does this mean for enterprise?

The widely-anticipated transition to ARM, which Apple expect to happen over 2 years, will not have any immediate effects on enterprise, but in the mid-to-long term IT teams will need to inventory which apps can run natively with universal binary, which are labouring under Rosetta 2 translation, and which are just incompatible.

Perhaps the biggest choice affecting organizations here, as mentioned earlier in this post, is deciding when to upgrade hardware, and whether it’s in your business’s best interest to await ARM-powered Macs before making further purchases.

5. Keep Out! A Cryptographically Signed System Volume

Aside from the big changes we’ve already covered, there are some important changes that have received less attention. Among these is that macOS 11 brings cryptographic signing to the System volume. In Catalina, Apple already made an architectural change to split the root volume into two: a read-only System volume and Data volume for everything else.

Now, the System volume receives extra protection by being cryptographically validated to prevent offline attacks and malicious tampering.

What does this mean for enterprise?

Since messing with the System has been a “no-no” since Apple introduced System Integrity Protection way back in OS X El Capitan 10.11, this hardening of the System volume shouldn’t present too many challenges to IT and security teams.

There are a couple of upshots, however. One is that FileVault no longer needs to encrypt the System volume at rest on macOS 11. The Data volume will still be protected by FileVault encryption, if it is turned on.

Secondly, attempting to boot a “live” version of the OS (i.e., a writable filesystem) by turning off System Integrity Protection will no longer work. It is possible to disable the protections and make modifications while it is not booted, but “live” modifications are now off the table in macOS 11.
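For those who genuinely need a writable System volume, the route now runs through Recovery: disable the new authenticated-root protection there, then modify the volume while it is not the running OS. A sketch of the Recovery-side step, based on our understanding of the updated csrutil options:

csrutil authenticated-root disable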

6. Networking Ins and Outs

There are a number of small changes to networking and internet use that could affect your security or workflows, which we will lump together here.

The first, publicly reported by MacRumors, is that the long-standing Network Utility app has been “deprecated”, although from what we can tell it’s actually now entirely non-functional.

The Network Utility contained a bunch of useful tools like Netstat, Ping, Traceroute and Port Scanning, among others.

Apple have also added a useful privacy tracking report and blocker to Safari that allows users to see what tracking cookies a site is using and which ones Safari has blocked. Publicly reported here, the Privacy toolbar button provides detailed insight through a popup, with a number of further pop-outs accessible from within.

Given the extensive malicious use of extensions in all popular browsers, security teams will want to take note of the new Web Extensions feature in macOS Big Sur. This allows existing Chrome and Firefox extensions to be easily converted for use with Safari 14 using the xcrun tool. Apple hope to ameliorate security concerns around this by continuing to insist that browser extensions must be distributed through the App Store. There are also new user controls for managing extensions in the browser.

Safari 14 will also reportedly support HTTP/3. According to Apple’s release notes, Big Sur enables experimental HTTP/3 support in Safari via Experimental Features in the Developer menu. It can be enabled system-wide using the following Terminal command:

defaults write -g CFNetworkHTTP3Override -int 3

And since we’re discussing the command line, another networking change in macOS 11 pertains to the networksetup utility, /usr/sbin/networksetup. As of Big Sur, this tool will no longer allow standard users to change network settings. Standard users will be able to toggle Wi-Fi on and off and read the network settings, but modifications will require an administrator user name and password.
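In practice the split looks like this; the service name “Wi-Fi” and device en0 are typical defaults and may vary across machines:

networksetup -getinfo "Wi-Fi"
networksetup -setairportpower en0 off
networksetup -setdnsservers "Wi-Fi" 1.1.1.1

The first two (reading settings and toggling Wi-Fi) still work for standard users; the third (modifying settings) will now demand an administrator user name and password.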

What does this mean for enterprise?

Overall, these changes should help to enhance network security. The change to the networksetup command line tool is in line with permissions that standard users have in the System Preferences.app, and it’s nice to see Apple get consistent across tools.

We also don’t envisage a significantly greater attack surface arising from an increase in browser extensions so long as Apple’s App Store does the work of monitoring and removing these for malicious or abusive behaviour in a timely and effective manner. The onus there, of course, is on Apple, and enterprise security teams may well feel that locking down the installation of browser extensions through MDM or similar configuration tools is something worth considering given both the history of malicious extensions and how much of our private and sensitive data goes through browser applications. On that, the new privacy reporting tool is a welcome addition to Safari 14 in Big Sur.

Finally, on networking, the removal of the Network Utility app shouldn’t cause much consternation among IT teams. You probably are already using the command line equivalents of the tools it provided a GUI wrapper for, so we don’t expect this to be missed by too many.

7. Certificate Trust: Root Is Not Enough

With macOS Catalina and earlier, the command line security tool can be used to change certificate trust settings if the effective user is running as root, via the add-trusted-cert flag (as documented in the tool’s man page on a Catalina install).

In macOS Big Sur, simply running with UID 0 will no longer be sufficient to make this change: confirmation will be required with an administrator password.
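For reference, a typical invocation affected by the change looks something like the following, with a hypothetical certificate path. On Catalina, root suffices; on Big Sur, the same command will additionally prompt for an administrator password:

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain /tmp/example-root-ca.pem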

In a fortunate nod to managed enterprise environments, Apple will allow the change to take place without confirmation if the certificate payload is deployed with the root certificate using a configuration profile.

What does this mean for enterprise?

This welcome hardening of certificate trust settings may affect your workflow if you use the security command-line tool to change trust settings, or if you rely on a privileged process calling the SecTrustSettingsSetTrustSettings function.

8. Configuration Profiles Get Preferential Treatment

Configuration profiles, managed through the command-line profiles tool or the Profiles System Preferences pane, have been abused by macOS malware for some time now. Chief among the perpetrators are adware infections seeking to manipulate browser home page and search settings, although others have been seen that configure things like malicious DNS settings, too. Profiles became a preferred mechanism for adware after Apple took steps to lock down Safari browser preferences in previous OS releases.

In macOS Big Sur, Apple have acknowledged this abusive use of profiles and now raised the bar for their installation. Unless the device is enrolled in an MDM program, installing a profile will now require the user to manually finish the profile installation in the System Preferences app, where a window will also describe the profile’s actual behaviour.

Somewhat oddly, there is an 8-minute timeout on this action: if the user does not complete the installation within that timeframe, macOS Big Sur will remove it from System Preferences.

What does this mean for enterprise?

Your enterprise is likely only using profiles through an MDM solution, so this change shouldn’t affect most. If for some reason you are manually scripting the installation of profiles, you will need to educate users on how to complete the steps (in less than 8 minutes!).

While malicious use of profiles through social engineering will undoubtedly be tried by adware and other macOS malware vendors, this change should have a welcome impact on reducing some of the worst offenders.

9. Surprisingly, No Surprises in App Security

Although WWDC 2020 is far from over, we haven’t seen any announcements or suggestions that Apple will change their approach to App security with macOS 11. With so many under-the-hood changes focused on security, enterprise security teams might have been hoping for some major developments, but as yet that remains to be seen.

In macOS 11, it seems that Apple will continue to rely on the model that’s been slowly evolving through the macOS 10.x years but which is still somewhat behind the rest of the industry in relying on static signatures for blocking and detection. The app security model continues to look something like this:

  • Protect: codesigning, notarization, and system policy checks via Gatekeeper
  • Detect: malware blocking via static Yara rules in XProtect
  • Remove: removal of known malware via static detection signatures in MRT.app

What does this mean for enterprise?

While Apple admirably places lots of focus on security – and some of the changes in Big Sur are more than welcome – it does seem oddly out-of-touch in some respects. Even Windows now has some rudimentary behavioral detection, and Apple’s reliance on its triumvirate of Gatekeeper, XProtect and MRT.app is starting to look and feel dated.

Gatekeeper and Notarization are weakened by simple social engineering ploys that attackers began using with 10.14 and have only got better at. XProtect’s static YARA rules rely on malware already being known to Apple (i.e., some user or organization has to get infected before Apple can update the signatures), and MRT.app only runs when it is updated, when the user logs in, or when the Mac reboots – meaning it is impotent for most of the time the Mac is actually in use.

Moreover, for enterprise security teams, there’s no visibility as to what any of these tools are doing or have done. Even with Apple’s latest iteration of macOS, with its new name, new version number and many new features, there’s still every need to keep your Macs protected by a behavioral security platform that can detect, protect and report on known and unknown malicious activity.

Conclusion

WWDC 2020 still has a few days to run, and we’ll update this post in light of any new announcements in the rest of the week. From what we’ve seen so far, we like the look of macOS 11 and welcome the changes that have been reported. Most of these should help your security team to improve the security posture of your macOS fleet with only a few changes needed to most workflows.

