Luther.AI is a new AI tool that acts like Google for personal conversations

When it comes to pop culture, company executives or history questions, most of us use Google as a memory crutch to recall information we can’t always keep in our heads. But Google can’t help you remember the name of your client’s spouse or the great idea you came up with at a meeting the other day.

Enter Luther.AI, which purports to be Google for your memory: it captures and transcribes audio recordings, then uses AI to surface the right information from your virtual memory bank in the moment, whether mid-conversation online or via search.
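The article doesn't describe Luther.AI's internals, but the core idea — store transcribed utterances, then search them later — can be illustrated with a toy sketch. Everything here (the `MemoryBank` class, naive keyword matching) is hypothetical; a real system would presumably use semantic NLP search rather than substring matching.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:
    speaker: str
    text: str
    timestamp: str  # e.g. "2020-09-14T09:00:00"

@dataclass
class MemoryBank:
    """Toy 'searchable memory' over transcribed conversations."""
    utterances: list = field(default_factory=list)

    def record(self, speaker, text, timestamp):
        # In the real product this would come from live audio transcription.
        self.utterances.append(Utterance(speaker, text, timestamp))

    def recall(self, query):
        # Naive keyword match for illustration only.
        terms = query.lower().split()
        return [u for u in self.utterances
                if any(t in u.text.lower() for t in terms)]

bank = MemoryBank()
bank.record("Dana", "Her spouse's name is Alex, by the way", "2020-09-14T09:00:00")
bank.record("You", "Let's revisit the pricing idea next week", "2020-09-14T09:05:00")

hits = bank.recall("spouse name")  # finds Dana's remark
```

The point of the sketch is only the shape of the workflow: record during a conversation, query at the moment of recall.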

The company is releasing an initial browser-based version of its product this week at TechCrunch Disrupt, where it’s competing for the $100,000 Battlefield prize.

Luther.AI’s founders say the company is built on the premise that human memory is fallible, and that weakness limits our individual intelligence. The idea behind Luther.AI is to provide a tool to retain, recall and even augment our own brains.

It’s a tall order, but the company’s founders believe it’s possible through the growing power of artificial intelligence and other technologies.

“It’s made possible through a convergence of neuroscience, NLP and blockchain to deliver seamless in-the-moment recall. GPT-3 is built on the memories of the public internet, while Luther is built on the memories of your private self,” company founder and CEO Suman Kanuganti told TechCrunch.

It starts by recording your interactions throughout the day. For starters, that means online meetings in a browser, since that is how most of us interact right now. Over time, though, the founders envision a high-quality 5G recording device you wear throughout your workday to capture your interactions.

If that is worrisome to you from a privacy perspective, Luther is building in a few safeguards starting with high-end encryption. Further, you can only save other parties’ parts of a conversation with their explicit permission. “Technologically, we make users the owner of what they are speaking. So for example, if you and I are having a conversation in the physical world unless you provide explicit permission, your memories are not shared from this particular conversation with me,” Kanuganti explained.
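That consent model — you own your own words, and other parties' words are saved only with their explicit permission — can be sketched as a simple filter. The function name and data shapes below are illustrative assumptions, not Luther.AI's actual implementation.

```python
def saveable_utterances(conversation, owner, consents):
    """Return the parts of a conversation that `owner` may keep:
    their own words, plus other speakers' words only when those
    speakers appear in the `consents` set (explicit permission)."""
    return [(speaker, text) for speaker, text in conversation
            if speaker == owner or speaker in consents]

convo = [
    ("Alice", "Here is my idea"),
    ("Bob", "I like it"),
    ("Carol", "Me too"),
]
# Bob granted Alice permission to save his side; Carol did not.
kept = saveable_utterances(convo, owner="Alice", consents={"Bob"})
```

Under this model Carol's remark simply never enters Alice's memory bank, which matches Kanuganti's description of users owning what they speak.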

Finally, each person owns their own data in Luther and nobody else can access or use these conversations either from Luther or any other individual. They will eventually enforce this ownership using blockchain technology, although Kanuganti says that will be added in a future version of the product.

Luther.AI search results recalling what a person said at a meeting the other day about customer feedback.

Image Credits: Luther.ai

Kanuganti says the product will have utility even for an individual looking to improve recall, but its true power won’t be realized with just a few people using it inside a company; it comes from the network effect of dozens or hundreds of people using it.

While they are releasing the browser-based product this week, they will eventually have a stand-alone app, and can also envision other applications taking advantage of the technology in the future via an API where developers can build Luther functionality into other apps.

The company was founded at the beginning of this year by Kanuganti and three co-founders: CTO Sharon Zhang, design director Kristie Kaiser and scientist Marc Ettlinger. It has raised $500,000 and currently has 14 employees, including the founders.

Two Russians Charged in $17M Cryptocurrency Phishing Spree

U.S. authorities today announced criminal charges and financial sanctions against two Russian men accused of stealing nearly $17 million worth of virtual currencies in a series of phishing attacks throughout 2017 and 2018 that spoofed websites for some of the most popular cryptocurrency exchanges.


The Justice Department unsealed indictments against Russian nationals Danil Potekhin and Dmitrii Karasavidi, alleging the duo was responsible for a sophisticated phishing and money laundering campaign that resulted in the theft of $16.8 million in cryptocurrencies and fiat money from victims.

Separately, the U.S. Treasury Department announced economic sanctions against Potekhin and Karasavidi, effectively freezing all property and interests of these persons (subject to U.S. jurisdiction) and making it a crime to transact with them.

According to the indictments, the two men set up fake websites that spoofed login pages for the currency exchanges Binance, Gemini and Poloniex. Armed with stolen login credentials, the men allegedly stole more than $10 million from 142 Binance victims, $5.24 million from 158 Poloniex users, and $1.17 million from 42 Gemini customers.

Prosecutors say the men then laundered the stolen funds through an array of intermediary cryptocurrency accounts — including compromised and fictitiously created accounts — on the targeted cryptocurrency exchange platforms. In addition, the two are alleged to have artificially inflated the value of their ill-gotten gains by engaging in cryptocurrency price manipulation using some of the stolen funds.

For example, investigators alleged Potekhin and Karasavidi used compromised Poloniex accounts to place orders to purchase large volumes of “GAS,” the digital currency token used to pay the cost of executing transactions on the NEO blockchain — China’s first open source blockchain platform.

“Using digital currency in one victim Poloniex account, they placed an order to purchase approximately 8,000 GAS, thereby immediately increasing the market price of GAS from approximately $18 to $2,400,” the indictment explains.

Potekhin and others then converted the artificially inflated GAS in their own fictitious Poloniex accounts into other cryptocurrencies, including Ethereum (ETH) and Bitcoin (BTC). From the complaint:

“Before the Eight Fictitious Poloniex Accounts were frozen, POTEKHIN and others transferred approximately 759 ETH to nine digital currency addresses. Through a sophisticated and layered manner, the ETH from these nine digital currency addresses was sent through multiple intermediary accounts, before ultimately being deposited into a Bitfinex account controlled by Karasavidi.”

The Treasury’s action today lists several of the cryptocurrency accounts thought to have been used by the defendants. Searching on some of those accounts at various cryptocurrency transaction tracking sites points to a number of phishing victims.

“I would like to blow your bitch ass away, if you even had the balls to show yourself,” exclaimed one victim, posting in a comment on the Etherscan lookup service.

One victim said he contemplated suicide after being robbed of his ETH holdings in a 2017 phishing attack. Another said he’d been relieved of funds needed to pay for his 3-year-old daughter’s medical treatment.

“You and your team will leave a trail and will be found,” wrote one victim, using the handle ‘Illfindyou.’ “You’ll only be able to hide behind the facade for a short while. Go steal from whales you piece of shit.”

There is potentially some good news for victims of these phishing attacks. According to the Treasury Department, millions of dollars in virtual currency and U.S. dollars traced to Karasavidi’s account were seized in a forfeiture action by the United States Secret Service.

Whether any of those funds can be returned to victims of this phishing spree remains to be seen. And assuming that does happen, it could take years. In February 2020, KrebsOnSecurity wrote about being contacted by an Internal Revenue Service investigator seeking to return funds seized seven years earlier as part of the government’s 2013 seizure of Liberty Reserve, a virtual currency service that acted as a $6 billion hub for the cybercrime world.

Today’s action is the latest indication that the Treasury Department is increasingly willing to use its authority to restrict the financial resources tied to various cybercrime activities. Earlier this month, the agency’s Office of Foreign Asset Control (OFAC) added three Russian nationals and a host of cryptocurrency addresses to its sanctions lists in a case involving efforts by Russian online troll farms to influence the 2018 mid-term elections.

In June, OFAC took action against six Nigerian nationals suspected of stealing $6 million from U.S. businesses and individuals through Business Email Compromise fraud and romance scams.

And in 2019, OFAC sanctioned 17 members allegedly associated with “Evil Corp.,” an Eastern European cybercrime syndicate that has stolen more than $100 million from small businesses via malicious software over the past decade.

A copy of the indictments against Potekhin and Karasavidi is available here (PDF).

Verkada adds environmental sensors to cloud-based building operations toolkit

As we go deeper into the pandemic, many buildings sit empty or have limited capacity. During times like these, having visibility into the state of the building can give building operations peace of mind. Today, Verkada, a startup that helps operations manage buildings via the cloud, announced a new set of environmental sensors to give customers even greater insight into building conditions.

The company had previously developed cloud-based video cameras and access control systems. Verkada CEO and co-founder Filip Kaliszan says today’s announcement is about building on those two earlier products.

“What we do today is cameras and access control — cameras, of course, provide the eyes and the view into buildings and spaces, while access control controls how you get in and out of these spaces,” Kaliszan told TechCrunch. Operations teams can manage these devices from the cloud on any device.

The sensor pack that the company is announcing today layers on a multi-function view into the state of the environment inside a building. “The first product that we’re launching along this environmental sensor line is the SV11, which is a very powerful unit with multiple sensors on board, all of which can be managed in the cloud through our Verkada command platform. The sensors will give customers insight into things like air quality, temperature, humidity, motion and occupancy of the space, as well as the noise level,” he said.

There is a clear strategy behind the company’s product road map. The idea is to give building operations staff a growing picture of what’s going on inside the space. “You can think of all the data being combined with the other aspects of our platform, and then begin delivering a truly integrated building and setting the standard for enterprise building security,” Kaliszan said.

These tools, and the ability to access all the data about a building remotely in the cloud, obviously have even more utility during the pandemic. “I think we’re fortunate that our products can help customers mitigate some of the effects of the pandemic. So we’ve seen a lot of customers use our tools to help them manage through the pandemic, which is great. But when we were originally designing this environmental sensor, the rationale behind it were these core use cases like monitoring server rooms for environmental changes.”
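A server-room monitoring use case like the one Kaliszan describes typically boils down to threshold alerting on sensor readings. The sketch below is a hypothetical illustration; the SV11's actual metrics, safe ranges and API are not described in the article.

```python
# Hypothetical safe bands for a server-room rule: (low, high) per metric.
THRESHOLDS = {
    "temperature_c": (10, 35),
    "humidity_pct": (20, 80),
    "noise_db": (0, 85),
}

def check_reading(reading):
    """Return alert strings for any metric outside its safe band.
    `reading` is a dict of metric name -> measured value."""
    alerts = []
    for metric, (low, high) in THRESHOLDS.items():
        value = reading.get(metric)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{metric}={value} outside [{low}, {high}]")
    return alerts

# A hot server room trips only the temperature check.
alerts = check_reading({"temperature_c": 41.5, "humidity_pct": 45, "noise_db": 60})
```

In a cloud-managed platform, a rule like this would run server-side against the stream of readings and trigger notifications to the operations team.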

The company, which was founded in 2016, has been doing well. It has 4,200 customers and roughly 400 employees; it is still growing, actively hiring and expects to reach 500 employees by the end of the year. It has raised $138.9 million, most recently an $80 million Series C in January of this year, led by Felicis Ventures at a $1.6 billion valuation.

Data virtualization service Varada raises $12M

Varada, a Tel Aviv-based startup that focuses on making it easier for businesses to query data across services, today announced that it has raised a $12 million Series A round led by Israeli early-stage fund MizMaa Ventures, with participation by Gefen Capital.

“If you look at the storage aspect for big data, there’s always innovation, but we can put a lot of data in one place,” Varada CEO and co-founder Eran Vanounou told me. “But translating data into insight? It’s so hard. It’s costly. It’s slow. It’s complicated.”

That’s a lesson he learned during his time as CTO of LivePerson, which he described as a classic big data company. And just like at LivePerson, where the team had to reinvent the wheel to solve its data problems, again and again, every company — and not just the large enterprises — now struggles with managing their data and getting insights out of it, Vanounou argued.

Varada architecture diagram

Image Credits: Varada

The rest of the founding team, David Krakov, Roman Vainbrand and Tal Ben-Moshe, already had a lot of experience dealing with these problems, with Ben-Moshe having served as the chief software architect of Dell EMC’s XtremIO flash array unit, for example. They built the system for indexing big data that’s at the core of Varada’s platform (with the open-source Presto SQL query engine being one of the other cornerstones).


Essentially, Varada embraces the idea of data lakes and enriches it with its indexing capabilities. And those indexing capabilities are where Varada’s smarts can be found. As Vanounou explained, the company uses a machine learning system to understand when users tend to run certain workloads, then caches the data ahead of time, making the system far faster than its competitors.
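The pattern Vanounou describes — learn which data is queried at which times, then warm the cache just before those times — can be illustrated with a minimal sketch. The `WorkloadPredictor` class and its frequency-counting approach are assumptions for illustration; Varada's actual system uses machine learning over far richer signals.

```python
from collections import Counter

class WorkloadPredictor:
    """Toy usage-pattern-driven cache warming: count which datasets
    are queried in each hour of the day, then pre-cache the most
    frequently used ones just before that hour arrives."""
    def __init__(self):
        self.counts = {hour: Counter() for hour in range(24)}

    def observe(self, hour, dataset):
        # Called whenever a query runs, to learn the daily rhythm.
        self.counts[hour][dataset] += 1

    def prewarm(self, hour, top_n=1):
        # Datasets worth indexing/caching ahead of this hour.
        return [ds for ds, _ in self.counts[hour].most_common(top_n)]

p = WorkloadPredictor()
for _ in range(5):
    p.observe(9, "sales_report")   # analysts hit this every morning
p.observe(9, "ad_hoc_query")
p.observe(21, "nightly_etl")

morning = p.prewarm(9)
```

This captures the quote that follows: morning workloads differ from evening ones, so the index is built "when it is needed" rather than uniformly.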

“If you think about big organizations and think about the workloads and the queries, what happens during the morning time is different from evening time. What happened yesterday is not what happened today. What happened on a rainy day is not what happened on a shiny day. […] We listen to what’s going on and we optimize. We leverage the indexing technology. We index what is needed when it is needed.”

That helps speed up queries, but it also means less data has to be replicated, which also brings down the cost. As MizMaa’s Aaron Applbaum noted, since Varada is not a SaaS solution, the buyers still get all of the discounts from their cloud providers, too.

In addition, the system can allocate resources intelligently so that different users can tap into different amounts of bandwidth. You can tell it to give customers more bandwidth than your financial analysts, for example.

“Data is growing like crazy: in volume, in scale, in complexity, in who requires it and what the business intelligence uses are, what the API uses are,” Applbaum said when I asked him why he decided to invest. “And compute is getting slightly cheaper, but not really, and storage is getting cheaper. So if you can make the trade-off to store more stuff, and access things more intelligently, more quickly, more agile — that was the basis of our thesis, as long as you can do it without compromising performance.”

Varada, with its team of experienced executives, architects and engineers, ticked a lot of the company’s boxes in this regard, but he also noted that unlike some other Israeli startups, the team understood that it had to listen to customers and understand their needs, too.

“In Israel, you have a history — and it’s become less and less the case — but historically, there’s a joke that it’s ‘ready, fire, aim.’ You build a technology, you’ve got this beautiful thing and you’re like, ‘alright, we did it,’ but without listening to the needs of the customer,” he explained.

The Varada team is not afraid to compare itself to Snowflake, which at least at first glance seems to make similar promises. Vanounou praised the company for opening up the data warehousing market and proving that people are willing to pay for good analytics. But he argues that Varada’s approach is fundamentally different.

“We embrace the data lake. So if you are Mr. Customer, your data is your data. We’re not going to take it, move it, copy it. This is your single source of truth,” he said. And in addition, the data can stay in the company’s virtual private cloud. He also argues that Varada isn’t so much focused on the business users but the technologists inside a company.


Latent AI makes edge AI workloads more efficient

Latent AI, a startup that was spun out of SRI International, makes it easier to run AI workloads at the edge by dynamically managing workloads as necessary.

Using its proprietary compression and compilation process, Latent AI promises to compress library files by 10x and run them with 5x lower latency than other systems, all while using less power thanks to its new adaptive AI technology, which the company is launching as part of its appearance in the TechCrunch Disrupt Battlefield competition today.

Founded by CEO Jags Kandasamy and CTO Sek Chai, the company has already raised a $6.5 million seed round led by Steve Jurvetson of Future Ventures and followed by Autotech Ventures.

Before starting Latent AI, Kandasamy sold his previous startup OtoSense to Analog Devices (in addition to managing HPE Mid-Market Security business before that). OtoSense used data from sound and vibration sensors for predictive maintenance use cases. Before its sale, the company worked with the likes of Delta Airlines and Airbus.

Image Credits: Latent AI

In some ways, Latent AI picks up some of this work and marries it with IP from SRI International.

“With OtoSense, I had already done some edge work,” Kandasamy said. “We had moved the audio recognition part out of the cloud. We did the learning in the cloud, but the recognition was done in the edge device and we had to convert quickly and get it down. Our bill in the first few months made us move that way. You couldn’t be streaming data over LTE or 3G for too long.”

At SRI, Chai worked on a project that looked at how to best manage power for flying objects where, if you have a single source of power, the system could intelligently allocate resources for either powering the flight or running the onboard compute workloads, mostly for surveillance, and then switch between them as needed. Most of the time, in a surveillance use case, nothing happens. And while that’s the case, you don’t need to compute every frame you see.

“We took that and we made it into a tool and a platform so that you can apply it to all sorts of use cases, from voice to vision to segmentation to time series stuff,” Kandasamy explained.

What’s important to note here is that the company offers the various components of what it calls the Latent AI Efficient Inference Platform (LEIP) as standalone modules or as a fully integrated system. The compressor and compiler are the first two of these and what the company is launching today is LEIP Adapt, the part of the system that manages the dynamic AI workloads Kandasamy described above.

In practical terms, the use case for LEIP Adapt is that your battery-powered smart doorbell, for example, can run in a low-powered mode for a long time, waiting for something to happen. Then, when somebody arrives at your door, the camera wakes up to run a larger model — maybe even on the doorbell’s base station that is plugged into power — to do image recognition. And if a whole group of people arrives at once (which isn’t likely right now, but maybe next year, after the pandemic is under control), the system can offload the workload to the cloud as needed.
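The doorbell scenario is essentially a tiered inference dispatcher: pick the cheapest tier that can handle the current load. The tier names and decision rules below are hypothetical; LEIP Adapt's real API is not described in the article.

```python
def choose_tier(motion_detected, faces_in_frame, cloud_available):
    """Pick where and what to run: a tiny on-device model while idle,
    a larger base-station model when someone arrives, and cloud
    offload when a crowd exceeds local capacity."""
    if not motion_detected:
        return "low_power_watcher"       # sip battery, wait for events
    if faces_in_frame <= 2:
        return "base_station_model"      # wake the bigger local model
    # Crowd: offload if we can, otherwise do our best locally.
    return "cloud_offload" if cloud_available else "base_station_model"

idle = choose_tier(motion_detected=False, faces_in_frame=0, cloud_available=True)
visitor = choose_tier(motion_detected=True, faces_in_frame=1, cloud_available=True)
crowd = choose_tier(motion_detected=True, faces_in_frame=6, cloud_available=True)
```

The design point mirrors Chai's SRI work on power management: most of the time nothing happens, so the expensive path should only run when the cheap path detects it is needed.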

Kandasamy tells me that the interest in the technology has been “tremendous.” Given his previous experience and the network of SRI International, it’s maybe no surprise that Latent AI is getting a lot of interest from the automotive industry, but Kandasamy also noted that the company is working with consumer companies, including a camera and a hearing aid maker.

The company is also working with a major telco company that is looking at Latent AI as part of its AI orchestration platform and a large CDN provider to help them run AI workloads on a JavaScript backend.

In 2020, Warsaw’s startup ecosystem is ‘a place to observe carefully’

If you listed the trends that have captured the attention of 20 Warsaw-focused investors who replied to our recent surveys, automation/AI, enterprise SaaS, cleantech, health, remote work and the sharing economy would top the list. These VCs said they are seeking opportunities in the “digital twin” space, proptech and expanded blockchain tokenization inside industries.

Investors in Central and Eastern Europe are generally looking for the same things as VCs based elsewhere: startups that have a unique value proposition, capital efficiency, motivated teams, post-revenue and a well-defined market niche.

Out of the cohort we interviewed, several told us that COVID-19 had not yet substantially transformed how they do business. As Michał Papuga, a partner at Flashpoint VC put it, “the situation since March hasn’t changed a lot, but we went from extreme panic to extreme bullishness. Neither of these is good and I would recommend to stick to the long-term goals and not to be pressured.”

Said Pawel Lipkowski of RBL_VC, “Warsaw is at its pivotal point — think Berlin in the ‘90s. It’s a place to observe carefully.”


For the conclusion, we spoke to the following investors:

Karol Szubstarski, partner, OTB Ventures

What trends are you most excited about investing in, generally?
A gradual shift of enterprises toward increased use of automation and AI, which enables dramatic improvements in efficiency, cost reduction and the transfer of enterprise resources from tedious, repeatable and mundane tasks to more exciting, value-added opportunities.

What’s your latest, most exciting investment?
One of the most exciting opportunities is ICEYE. The company is a leader and first mover in synthetic-aperture radar (SAR) technology for microsatellites. It is building and operating its own commercial constellation of SAR microsatellites capable of providing satellite imagery regardless of the cloud cover, weather conditions and time of the day and night (comparable resolution to traditional SAR satellites with 100x lower cost factor), which is disrupting the multibillion dollar satellite imagery market.

Are there startups that you wish you would see in the industry but don’t? What are some overlooked opportunities right now?
I would love to see more startups in the digital twin space; technology that enables creation of an exact digital replica/copy of something in physical space — a product, process or even the whole ecosystem. This kind of solution enables experiments and [the implementation of] changes that otherwise could be extremely costly or risky – it can provide immense value added for customers.

What are you looking for in your next investment, in general?
A company with unique value proposition to its customers, deep tech component that provides competitive edge over other players in the market and a founder with global vision and focus on execution of that vision.

Which areas are either oversaturated or would be too hard to compete in at this point for a new startup? What other types of products/services are you wary or concerned about?
No market or sector is so saturated that it has no room for innovation. Some markets seem more challenging than others due to an immense competitive landscape (e.g., food delivery, language-learning apps), but even those can be disrupted by a new entrant with a unique value proposition.

How much are you focused on investing in your local ecosystem versus other startup hubs (or everywhere) in general? More than 50%? Less?
OTB is focused on opportunities with links to Central Eastern European talent (with no bias toward any hub in the region), meaning companies that leverage local engineering/entrepreneurial talent in order to build world-class products to compete globally (usually HQ outside CEE).

Which industries in your city and region seem well-positioned to thrive, or not, long term? What are companies you are excited about (your portfolio or not), which founders?
CEE region is recognized for its sizable and highly skilled talent pool in the fields of engineering and software development. The region is well-positioned to build up solutions that leverage deep, unique tech regardless of vertical (especially B2B). Historically, the region was especially strong in AI/ML, voice/speech/NLP technologies, cybersecurity, data analytics, etc.

How should investors in other cities think about the overall investment climate and opportunities in your city?
CEE (including Poland and Warsaw) has always been recognized as an exceptionally strong region in terms of engineering/IT talent. For years, entrepreneurs’ inherent risk aversion drove a more “copycat,” local-market approach and held back more ambitious, deep tech opportunities. In recent years we are witnessing a paradigm shift, with a new generation of entrepreneurs tackling problems with unique, deep tech solutions, putting emphasis on global expansion and bypassing shallow local markets. As such, the quality of deals has been steadily growing and currently reflects top quality on a global scale, especially on the tech level. The CEE market also demonstrates a growing total number of startups, mostly driven by an abundance of early-stage capital and success stories in the region (e.g., DataRobot, Bolt, UiPath) that are successfully evangelizing entrepreneurship among corporates and engineers.

Do you expect to see a surge in more founders coming from geographies outside major cities in the years to come, with startup hubs losing people due to the pandemic and lingering concerns, plus the attraction of remote work?
I believe that local hubs will hold their dominant position in the ecosystem. The remote/digital workforce will grow in numbers but proximity to capital, human resources and markets still will remain the prevalent force in shaping local startup communities.

Which industry segments that you invest in look weaker or more exposed to potential shifts in consumer and business behavior because of COVID-19? What are the opportunities startups may be able to tap into during these unprecedented times?
OTB invests in general in companies with clearly defined technological advantage, making quantifiable and near-term difference to their customers (usually in the B2B sector), which is a value-add regardless of the market cycle. The economic downturn works generally in favor of technological solutions enabling enterprise clients to increase efficiency, cut costs, bring optimization and replace manual labour with automation — and the vast majority of OTB portfolio fits that description. As such, the majority of the OTB portfolio has not been heavily impacted by the COVID pandemic.

How has COVID-19 impacted your investment strategy? What are the biggest worries of the founders in your portfolio? What is your advice to startups in your portfolio right now?
The COVID pandemic has not impacted our investment strategy in any way. OTB still pursues unique tech opportunities that can provide its customers with immediate value added. This kind of approach provides a relatively high level of resilience against economic downturns (obviously, sales cycles are extending but in general sales pipeline/prospects/retention remains intact). Liquidity in portfolio is always the number one concern in uncertain, challenging times. Lean approach needs to be reintroduced, companies need to preserve cash and keep optimizing — that’s the only way to get through the crisis.

Are you seeing “green shoots” regarding revenue growth, retention or other momentum in your portfolio as they adapt to the pandemic?
A good example in our portfolio is Segron, a provider of an automated testing platform for applications, databases and enterprise network infrastructure. Software development, deployment and maintenance in enterprise IT ecosystem requires continuous and rigorous testing protocols and as such a lot of manual heavy lifting with highly skilled engineering talent being involved (which can be used in a more productive way elsewhere). The COVID pandemic has kept engineers home (with no ability for remote testing) while driving demand for digital services (and as such demand for a reliable IT ecosystem). The Segron automated framework enables full automation of enterprise testing leading to increased efficiency, cutting operating costs and giving enterprise customers peace of mind and a good night’s sleep regarding their IT infrastructure in the challenging economic environment.

What is a moment that has given you hope in the last month or so? This can be professional, personal or a mix of the two.
I remain impressed by the unshakeable determination of multiple founders and their teams to overcome all the challenges of the unfavorable economic ecosystem.

Fiverr Business helps teams manage freelance projects

Freelance marketplace Fiverr launched a new service today designed to help teams at larger companies manage their work with freelancers.

CEO Micha Kaufman told me via email that Fiverr had already begun working with larger clients, but that Fiverr Business is better-designed to meet their needs.

“Organizations require tools to manage their team accounts, defining projects, assigning budgets, tracking progress and collaborating internally,” Kaufman wrote. “Fiverr Business provides all of that and much more, including exclusive access to Fiverr’s personal executive assistants which are always available to Fiverr Business customers to help with administrative account tasks, general project management, talent matching, and more.”

He also suggested that with the pandemic forcing companies to adopt remote work and placing pressure on their bottom lines, many of them are increasingly turning to freelancers, and he claimed, “2020 marks the beginning of a decade where businesses will invest and learn how to truly integrate freelancers into their workflows.”

Projects dashboard

Image Credits: Fiverr

Fiverr Group Product Manager Meidad Hinkis walked me through the new service, showing me how users can create projects, assign team members and set freelance budgets, then actually hire freelancers, as well as offer internal and external feedback on the work that comes in.

He also noted there’s a special pool of curated freelancers available through Fiverr Business, and like Kaufman, emphasized that customers will also have access to assistants to help them find freelancers and manage projects. (On the freelancer side, payments and the rest of the experience should be pretty similar.)

On top of the freelancer fees, Fiverr Business will cost $149 per year for teams of up to 50 users, and Hinkis said the company is offering the first year for free.

“We so believe in the product and the direction that we want people to get real value before they decide,” he said.

Dropbox CEO Drew Houston says the pandemic forced the company to reevaluate what work means

Dropbox CEO and co-founder Drew Houston, appearing at TechCrunch Disrupt today, said that COVID has accelerated a shift to distributed work that we have been talking about for some time, and these new ways of working will not simply go away when the pandemic is over.

“When you think more broadly about the effects of the shift to distributed work, it will be felt well beyond when we go back to the office. So we’ve gone through a one-way door. This is maybe one of the biggest changes to knowledge work since that term was invented in 1959,” Houston told TechCrunch Editor-In-Chief Matthew Panzarino.

That change has prompted Dropbox to completely rethink its product set over the last six months, as the company has watched the way people work change so dramatically. Houston said that even though Dropbox is a cloud service, no SaaS tool, in his view, was purpose-built for this new way of working, and that we have to reevaluate what work means in this new context.

“Back in March we started thinking about this, and how [the rapid shift to distributed work] just kind of happened. It wasn’t really designed. What if you did design it? How would you design this experience to be really great? And so starting in March we reoriented our whole product road map around distributed work,” he said.

He also broadly hinted that the fruits of that redesign are coming down the pike. “We’ll have a lot more to share about our upcoming launches in the future,” he said.

Houston said that his company has adjusted well to working from home, but when they had to shut down the office, he was in the same boat as every other CEO when it came to running his company during a pandemic. Nobody had a blueprint on what to do.

“When it first happened, I mean there’s no playbook for running a company during a global pandemic so you have to start with making sure you’re taking care of your customers, taking care of your employees, I mean there’s so many people whose lives have been turned upside down in so many ways,” he said.

But as he checked in on the customers, he saw them asking for new workflows and ways of working, and he recognized there could be an opportunity to design tools to meet these needs.

“I mean this transition was about as abrupt and dramatic and unplanned as you can possibly imagine, and being able to kind of shape it and be intentional is a huge opportunity,” Houston said.

Houston debuted Dropbox in 2008 at the precursor to TechCrunch Disrupt, then called the TechCrunch 50. He mentioned that the Wi-Fi went out during his demo, proving the hazards of live demos, but offered words of encouragement to this week’s TechCrunch Disrupt Battlefield participants.

Although Dropbox is now a public company on a $1.8 billion run rate, Houston went through all the stages of a startup, from raising funding to eventually going public, and even today, as a mature public company, Dropbox is still evolving and changing as it adapts to shifting requirements in the marketplace.

What Is Cloud Security (and How Do You Secure the Cloud Today)?

According to Gartner, most enterprises these days are using more than one public cloud service, and in many cases organizations are increasingly invested in using container-based and cloud-native applications. The move to cloud computing brings significant benefits to the enterprise, but it also adds new risk factors that may be outside the scope of your usual cybersecurity practices. Securing virtual machines, containers, Kubernetes and serverless workloads, whether in public, private or hybrid clouds, means developing an effective understanding of the security issues affecting cloud workloads. In this post, we explain the challenges and solutions to effective cloud security.

What Is Cloud Security?

Cloud security can be thought of as an element of cybersecurity that pertains specifically to maintaining the confidentiality, integrity and availability of data, applications and services located on servers controlled partially or entirely by one or more third parties.

In order to secure your data, services and business continuity when using cloud providers and/or cloud services, it is vital to understand that the nature of the risk is quite unlike managing data that is held on a traditional, on-premises server, which is typically maintained, controlled and secured by the business owner.

With cloud computing, crucially, the main difference is that whenever using any kind of cloud architecture that contains some element of a public cloud service, the data owner has neither physical control of, nor physical access to, the underlying hardware. In addition, the data owner typically needs to communicate with the cloud provider or service across the public internet, rather than having traffic protected within the perimeter of a local intranet and firewall. Finally, the containers themselves, Docker images, Kubernetes clusters and such, also present unique security challenges.

Given these architectural differences, organizations need to think about a range of issues that broadly fall into four categories of security concern, namely those affecting:

  1. the cloud provider
  2. the data owner (or client/user)
  3. communication and transmission of data between 1 and 2
  4. the container software stack

What Are the Challenges of Cloud Security?

We can start to get an idea of the challenges that cloud security presents by looking at some recent examples of what happens when things go wrong.

Let’s suppose your data is hosted on multiple servers outside of your control. Typically, public cloud providers host multiple “tenants” on the same server. Although you can expect any reputable provider to maintain good data isolation between tenants, a vulnerability in the computing stack used by your cloud provider, such as CVE-2020-3056, may allow an attacker to compromise private clouds managed by that provider.

Suppose such a bug were exploited on a remote server belonging to your cloud provider. The questions for your security team in such a scenario are these: What visibility do you currently have into what is happening on your workloads? What tools do you have in place that would alert you to unauthorized access or allow you to threat-hunt across your containers after such a vulnerability came to light?

Other considerations for cloud security are raised by the sustained hacking campaign dubbed Cloud Hopper. Multiple top-tier organizations, including big names such as Philips and Rio Tinto, lost intellectual property to the Chinese-backed APT actor APT10, which penetrated at least a dozen different cloud service providers, including Hewlett-Packard and IBM. The hackers dropped “bespoke malware”, leveraged dynamic DNS and exfiltrated large amounts of data. Deploying EPP solutions designed primarily for protecting end-user devices like laptops and desktop computers isn’t going to help here: in fact, using solutions designed for endpoints on cloud instances is said to put enterprise data and applications at even greater risk.

Cloud security issues don’t end there, either. Gartner has stated that perhaps the biggest threat to cloud security in the next few years is likely to come from “mismanagement of identities, access and privilege”, with at least half of all cloud security incidents stemming from such problems by 2023. Weak access controls due to misconfiguration can be exploited by external actors or insiders, or can lead to unintentional but nevertheless damaging data leaks.

Finally, vulnerabilities or misconfiguration in the container stack itself, such as container escapes, represent a challenging technical problem for security teams whose members may have limited experience in Docker and Kubernetes technology.

Solving the Cloud Security Problem

The kinds of problems we noted in the previous section with bugs, misconfigured Docker images and attacks on MSPs—i.e., provider-side security issues—can only be managed through proper visibility and control over your containerized workloads.

Pre-runtime protections that scan the host and the container image to ensure both are infection-free are important, but they are not enough: they will not protect the container against attacks once it is in use, nor offer your SOC team any ability to threat-hunt or do incident response.

A better solution is to use something like an Application Control Engine, which does away with the need for “Allow-Lists” (aka “Whitelists”) and protects cloud-native workloads with advanced “lockdown” capabilities that guarantee the immutable state of containerized workloads, protecting your workloads against the unauthorized installation and subsequent abuse of legitimate tools like Weave Scope.
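The “lockdown” idea can be illustrated with a toy sketch (this is a simplified illustration of the immutability principle, not any vendor’s actual detection logic): compare the processes observed in a running container against the baseline declared by its image, and flag anything unexpected, such as a dropped-in tool like Weave Scope.

```python
# Toy illustration of workload "lockdown": flag any process running in a
# container that was not part of the image's declared baseline.
# The process names here are hypothetical examples.

def find_drift(baseline: set[str], observed: set[str]) -> set[str]:
    """Return processes running in the workload that the image never declared."""
    return observed - baseline

# Baseline: what the immutable container image is expected to run.
baseline = {"nginx", "nginx-worker"}

# Observed at runtime: an attacker has installed an unauthorized tool.
observed = {"nginx", "nginx-worker", "scope"}  # "scope" standing in for Weave Scope

drift = find_drift(baseline, observed)
print(sorted(drift))  # → ['scope']
```

A real engine would, of course, watch file systems, binaries and network behavior rather than bare process names, but the decision rule is the same: anything outside the immutable baseline is denied or flagged.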

Bugs that allow Linux container escapes are best addressed by deploying behavioral detection capabilities on the workloads themselves. To that end, Workload Protection that can provide EDR and runtime protection for your cloud servers is essential.

Such a solution needs to be lightweight so that it does not impact performance, and ideally it should offer functionality such as a secure remote shell, node firewall control, network isolation and file fetching. With a capable Workload Protection solution, you can gain both visibility and control over your containerized workloads.

In terms of securing the client side of the equation—i.e., the data owner—apart from having trusted endpoint security on communicating devices, it is essential that user access is properly managed and locked down. Allowing admins or other users excessive access to critical data on cloud platforms is a data breach waiting to happen. Identity and access management (IAM) will help you define and manage the correct roles and access privileges of individual users, and role-based access control (RBAC) should be implemented on Kubernetes clusters. Having Workload Protection with EDR will help your SOC team hunt for abuses of user privileges, be they insider threats or external attacks conducted through credential theft.
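As a minimal sketch of the RBAC principle (the role names and permission sets below are hypothetical, not a real cloud provider’s or Kubernetes policy), the core idea is that access decisions flow from a user’s role rather than from ad hoc individual grants, with anything not explicitly permitted denied by default:

```python
# Minimal sketch of role-based access control (RBAC).
# Roles and permissions are illustrative placeholders.

ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "developer": {"read", "deploy"},
    "admin": {"read", "deploy", "delete", "manage-users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant an action only if the user's role explicitly includes it."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("developer", "deploy")
assert not is_allowed("viewer", "delete")      # least privilege in action
assert not is_allowed("unknown-role", "read")  # default deny
```

In Kubernetes, the same pattern is expressed declaratively with Role and RoleBinding objects; the point is that audit and review happen at the level of a handful of roles rather than thousands of per-user grants.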

With regard to protecting communication between the cloud and the client, there are at least two considerations to bear in mind. First, ensure that all data is encrypted both at rest and in transit; that way, even if a data leak occurs, the information should be unusable to the attackers. Second, in the event of a denial-of-service attack, it is vital that you have a business continuity plan in place. This might include redundant capacity to cope with extra network traffic (easier in public or hybrid cloud situations) or engaging a DDoS mitigation service, both of which your cloud provider may offer.
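On the in-transit side, a minimal Python sketch using only the standard library’s ssl module (the endpoint name is hypothetical) shows the kind of client configuration that refuses unencrypted or unverified connections to a cloud API:

```python
import ssl

# Build a TLS context for talking to a cloud API over the public internet.
# ssl.create_default_context() enables certificate verification and hostname
# checking by default; we additionally pin a minimum protocol version.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject obsolete protocols

# Sanity checks: the context will verify the server's certificate chain
# and its hostname before any application data is sent.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# Usage (hypothetical endpoint), wrapping a plain socket in TLS:
#   with socket.create_connection(("api.example-cloud.test", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="api.example-cloud.test") as tls:
#           ...  # all traffic on `tls` is now encrypted in transit
```

The key design point is to rely on the library’s secure defaults rather than hand-rolling verification logic, and never to disable certificate or hostname checks to “make it work”.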

Conclusion

Cloud security requires a different approach from endpoint security, as you are faced with the added burden of protecting both the devices you control and those that you don’t. Servers outside of your control can be running a software stack with vulnerabilities that you cannot see or patch, and these servers may be managed by an unknown number of people who are equally outside of your control.

Of course, you can expect any reputable cloud provider to take their own security responsibilities seriously, but the core of the issue is that your threat surface is increased once you are dealing with third-party devices and staff. Moreover, the containers that you deploy can themselves contain bugs or misconfigurations. In order to ensure the security of your cloud workloads, apprise your security team of the points noted above and deploy a capable Workload Protection solution to your container workloads. For more information about SentinelOne’s Workload Protection offering, contact us or request a free demo.



Airtable raises $185M and launches new low-code and automation features

The spreadsheet-centric database and no-code platform Airtable today announced that it has raised a $185 million Series D funding round, putting the company at a $2.585 billion post-money valuation.

Thrive Capital led the round, with additional funding by existing investors Benchmark, Coatue, Caffeinated Capital and CRV, as well as new investor D1 Capital. With this, Airtable, which says it now has 200,000 companies using its service, has raised a total of about $350 million. Current customers include Netflix, HBO, Condé Nast Entertainment, TIME, City of Los Angeles, MIT Media Lab and IBM.

In addition, the company is launching one of its largest feature updates today, which starts to execute on Airtable’s overall platform vision: going beyond its current no-code capabilities to bring more low-code features to the service, as well as new automation (think IFTTT for Airtable) and data management tools.

As Airtable founder and CEO Howie Liu told me, a number of investors have approached the company since it raised its Series C round in 2018, in part because the market clearly realized the potential size of the low-code/no-code market.

“I think there’s this increasing market recognition that the space is real, and the space is very large […],” he told me. “While we didn’t strictly need the funding, it allowed us to continue to invest aggressively into furthering our platform, vision and really executing aggressively, […] without having to worry about, ‘well, what happens with COVID?’ There’s a lot of uncertainty, right? And I think even today there’s still a lot of uncertainty about what the next year will bear.”

The company started opening the round a couple of months after the first shelter-in-place orders in California, and for most investors, this was a purely digital process.

Liu has always been open about the fact that he wants to build this company for the long haul — especially after he sold his last company to Salesforce at an early stage. As a founder, that likely means he is trying to keep his stake in the company high, even as Airtable continues to raise more money. He argues, though, that more so than the legal and structural controls, being aligned with his investors is what matters most.

“I think actually, what’s more important in my view, is having philosophical alignment and expectations alignment with the investors,” he said. “Because I don’t want to be in a position where it comes down to a legal right or structural debate over the future of the company. That almost feels to me like the last resort where it’s already gotten to a place where things are ugly. I’d much rather be in a position where all the investors around the table, whether they have legal say or not, are fully aligned with what we’re trying to do with this business.”

Just as important as the new funding though, are the various new features the company is launching today. Maybe the most important of these is Airtable Apps. Previously, Airtable users could use pre-built blocks to add maps, Gantt charts and other features to their tables. But while being a no-code service surely helped Airtable’s users get started, there’s always an inevitable point where the pre-built functionality just isn’t enough and users need more custom tools (Liu calls this an escape valve). So with Airtable Apps, more sophisticated users can now build additional functionality in JavaScript — and if they choose to do so, they can then share those new capabilities with other users in the new Airtable Marketplace.

“You may or may not need an escape valve and obviously, we’ve gotten this far with 200,000 organizations using Airtable without that kind of escape valve,” he noted. “But I think that we open up a lot more use cases when you can say, well, Airtable by itself is 99% there, but that last 1% is make or break. You need it. And then, just having that outlet and making it much more leveraged to build that use case on Airtable with 1% effort, rather than building the full-stack application as a custom built application is all the difference.”

The other major new feature is Airtable Automations. With this, you can build custom, automated workflows to generate reports or perform other repetitive steps. You can do a lot of that through the service’s graphical interface or use JavaScript to build your own custom flows and integrations, too. For now, this feature is available for free, but the team is looking into how to charge for it over time, given that these automated flows may become costly if you run them often.

The last new feature is Airtable Sync. With this, teams can more easily share data across an organization, while also providing controls for who can see what. “The goal is to enable people who built software with Airtable to make that software interconnected and to be able to share a source of truth table between different instances of our tables,” Liu explained.
