Tag Archive for: IT

AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers that supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes Service.

Like spot instances for the EC2 compute platform, Fargate Spot is significantly cheaper than regular Fargate pricing, for both compute and memory. In return, though, you have to accept that your tasks may be terminated when AWS needs the capacity back. While that means Fargate Spot may not be right for every workload, there are plenty of applications that can easily handle an interruption.

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here: an hour of CPU time on Fargate Spot will only cost $0.01245364 (yes, AWS is pretty precise there), compared to $0.04048 for the on-demand price.
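
For a rough sense of the discount, a quick back-of-the-envelope check on the two quoted per-hour CPU prices shows Fargate Spot coming in at roughly 69% below on-demand:

on_demand = 0.04048  # quoted on-demand price per CPU hour
spot = 0.01245364    # quoted Fargate Spot price per CPU hour
print(f"Fargate Spot discount: {(1 - spot / on_demand) * 100:.1f}%")  # ~69.2%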

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot instances, after all. You simply write a config file that, for example, says you want to run 70% of your capacity on EC2 and the rest on spot instances. The scheduler will then keep that capacity on spot as instances come and go, and if there are no spot instances available, it will move it to on-demand instances and back to spot once instances are available again.
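
As a rough illustration of how such a weighted split is expressed, here is a minimal boto3 sketch that spreads an ECS service across the built-in FARGATE and FARGATE_SPOT capacity providers; the cluster, service, task definition and network IDs are hypothetical, and an EC2-based split works the same way with capacity providers backed by Auto Scaling groups.

import boto3

ecs = boto3.client("ecs")

# Roughly 70% of tasks on regular Fargate, the rest on cheaper,
# interruptible Fargate Spot capacity.
ecs.create_service(
    cluster="demo-cluster",              # hypothetical cluster
    serviceName="web",                   # hypothetical service
    taskDefinition="web-task:1",         # hypothetical task definition
    desiredCount=10,
    capacityProviderStrategy=[
        {"capacityProvider": "FARGATE", "weight": 7},
        {"capacityProvider": "FARGATE_SPOT", "weight": 3},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)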

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outposts, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.

New Amazon tool simplifies delivery of containerized machine learning models

As part of the flurry of announcements coming this week out of AWS re:Invent, Amazon announced the release of Amazon SageMaker Operators for Kubernetes, a way for data scientists and developers to simplify training, tuning and deploying containerized machine learning models.

Packaging machine learning models in containers can help put them to work inside organizations faster, but getting there often requires a lot of extra management to make it all work. Amazon SageMaker Operators for Kubernetes is supposed to make it easier to run and manage those containers, the underlying infrastructure needed to run the models and the workflows associated with all of it.

“While Kubernetes gives customers control and portability, running ML workloads on a Kubernetes cluster brings unique challenges. For example, the underlying infrastructure requires additional management such as optimizing for utilization, cost and performance; complying with appropriate security and regulatory requirements; and ensuring high availability and reliability,” AWS’ Aditya Bindal wrote in a blog post introducing the new feature.

When you combine that with the workflows associated with delivering a machine learning model inside an organization at scale, it becomes part of a much bigger delivery pipeline, one that is challenging to manage across departments and a variety of resource requirements.

This is precisely what Amazon SageMaker Operators for Kubernetes has been designed to help DevOps teams do. “Amazon SageMaker Operators for Kubernetes bridges this gap, and customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale,” Bindal wrote.

The promise of Kubernetes is that it can orchestrate the delivery of containers at the right moment, but if you haven’t automated delivery of the underlying infrastructure, you can over (or under) provision and not provide the correct amount of resources required to run the job. That’s where this new tool, combined with SageMaker, can help.

“With workflows in Amazon SageMaker, compute resources are pre-configured and optimized, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete, offering near 100% utilization,” Bindal wrote.
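
For a flavor of what this looks like in practice, here is a hedged sketch of submitting a SageMaker training job as a Kubernetes custom resource with the official Python client; the API group, version and spec fields are assumptions based on the operator’s TrainingJob custom resource, and all names, ARNs and S3 paths are placeholders.

from kubernetes import client, config

config.load_kube_config()
custom_api = client.CustomObjectsApi()

training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",  # assumed API group/version
    "kind": "TrainingJob",
    "metadata": {"name": "xgboost-demo"},         # hypothetical job name
    "spec": {
        "roleArn": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
        "region": "us-east-1",
        "algorithmSpecification": {
            "trainingImage": "<algorithm-image-uri>",  # placeholder
            "trainingInputMode": "File",
        },
        "resourceConfig": {
            "instanceType": "ml.m5.xlarge",
            "instanceCount": 1,
            "volumeSizeInGB": 5,
        },
        "outputDataConfig": {"s3OutputPath": "s3://my-bucket/output"},  # placeholder
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
    },
}

# The operator watches for TrainingJob resources and launches the corresponding
# SageMaker training job, so Kubernetes remains the single control plane.
custom_api.create_namespaced_custom_object(
    group="sagemaker.aws.amazon.com",  # assumed
    version="v1",
    namespace="default",
    plural="trainingjobs",             # assumed plural
    body=training_job,
)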

Amazon SageMaker Operators for Kubernetes are available today in select AWS regions.

CircleCI launches improved AWS support

For about a year now, continuous integration and delivery service CircleCI has offered Orbs, a way to easily reuse commands and integrations with third-party services. Unsurprisingly, some of the most popular Orbs focus on AWS, as that’s where most of the company’s developers are either testing their code or deploying it. Today, right in time for AWS’s annual re:Invent developer conference in Las Vegas, the company announced that it has now added Orb support for the AWS Serverless Application Model (SAM), which makes setting up automated CI/CD platforms for testing and deploying to AWS Lambda significantly easier.

In total, the company says, more than 11,000 organizations have started using Orbs since the feature launched a year ago. Among the AWS-centric Orbs are those for building and updating images for the Amazon Elastic Container Service and the Elastic Container Service for Kubernetes (EKS), for example, as well as AWS CodeDeploy support, an Orb for installing and configuring the AWS command line interface, an Orb for working with the S3 storage service and more.

“We’re just seeing a momentum of more and more companies being ready to adopt [managed services like Lambda, ECS and EKS], so this became really the ideal time to do most of the work with the product team at AWS that manages their serverless ecosystem and to add in this capability to leverage that serverless application model and really have this out-of-the-box CI/CD flow ready for users who wanted to start adding these into Lambda,” CircleCI VP of business development Tom Trahan told me. “I think when Lambda was in its earlier days, a lot of people would use it and not necessarily follow the same software patterns and delivery flow that they might have with their traditional software. As they put more and more into Lambda and are really putting a lot more of what I would call ‘production quality code’ out there, they realize they do want to have that same software delivery capability and discipline for Lambda as well.”

Trahan stressed that he’s still talking about early adopters and companies that started out as cloud-native companies, but these days, this group includes a lot of traditional companies, as well, that are now rapidly going through their own digital transformations.

Box looks to balance growth and profitability as it matures

Prevailing wisdom states that as an enterprise SaaS company evolves, there’s a tendency to sacrifice profitability for growth — understandably so, especially in the early days of the company. At some point, however, a company needs to become profitable.

Box has struggled to reach that goal since going public in 2015, but yesterday, it delivered a mostly positive earnings report. Wall Street seemed to approve, with the stock up 6.75% as we published this article.

Box CEO Aaron Levie says the goal moving forward is to find better balance between growth and profitability. In his post-report call with analysts, Levie pointed to some positive numbers.

“As we shared in October [at BoxWorks], we are focused on driving a balance of long-term growth and improved profitability as measured by the combination of revenue growth plus free cash flow margin. On this combined metric, we expect to deliver a significant increase in FY ’21 to at least 25% and eventually reaching at least 35% in FY ’23,” Levie said.

Growing the platform

Part of the maturation and drive to profitability is spurred by the fact that Box now has a more complete product platform. While many struggle to understand the company’s business model, it provides content management in the cloud, modernizing that aspect of enterprise software. As a result, there are few pure-play content management vendors that can do what Box does in a cloud context.

Coralogix announces $10M Series A to bring more intelligence to logging

Coralogix, a startup that wants to bring automation and intelligence to logging, announced a $10 million Series A investment today.

The round was led by Aleph with participation from StageOne Ventures, Janvest Capital Partners and 2B Angels. Today’s investment brings the total raised to $16.2 million, according to the company.

CEO and co-founder Ariel Assaraf says his company focuses on two main areas: logging and analysis. The startup has been doing traditional application performance monitoring up until now, but today it also announced it was getting into security logging, where it tracks logs for anomalies and shares this information with security information and event management (SIEM) tools.

“We do standard log analytics in terms of ingesting, parsing, visualizing, alerting and searching for log data at scale using scaled, secure infrastructure,” Assaraf said. In addition, the company has developed a set of algorithms to analyze the data, and begin to understand patterns of expected behavior, and how to make use of that data to recognize and solve problems in an automated fashion.

“So the idea is to generally monitor a system automatically for customers plus giving them the tools to quickly drill down into data, understand how it behaves and get context to the issues that they see,” he said.

For instance, the tool could recognize a certain sequence of events, like a user logging in, being authenticated and then being redirected to the application or website. All of those events happen in that order every time, so if something is different, the system will recognize it and alert the DevOps team that something is amiss.
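
To make that concrete, here is a toy illustration of the idea in Python; it is not Coralogix’s actual algorithm, just the general pattern of learning an expected flow and flagging sessions that deviate from it.

EXPECTED_LOGIN_FLOW = ["login_requested", "user_authenticated", "redirected_to_app"]

def is_anomalous(events):
    # Flag any session whose event sequence deviates from the expected flow.
    return events != EXPECTED_LOGIN_FLOW

sessions = {
    "abc123": ["login_requested", "user_authenticated", "redirected_to_app"],
    "def456": ["login_requested", "redirected_to_app"],  # authentication step missing
}

for session_id, events in sessions.items():
    if is_anomalous(events):
        print(f"alert DevOps: unexpected login flow in session {session_id}")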

The company, which has offices in Tel Aviv, San Francisco and Kiev, was founded in 2015. It already has 1,500 customers, including Postman, Fiverr, KFC and Caesars Palace. It has been able to build the business with just 30 people to this point, but wants to expand the sales and marketing team to help build out the customer base further. The new money should help in that regard.

Vivun snags $3M seed round to bring order to pre-sales

Vivun, a startup that wants to help companies keep better track of pre-sales data, announced a $3 million seed round today led by Unusual Ventures, the venture firm run by Harness CEO Jyoti Bansal.

Vivun founder and CEO Matt Darrow says that the pre-sales team works more closely with the customer than anyone else, delivering demos and proofs of concept, and generally helping sales get over the finish line. While sales has CRM software to store knowledge about the customer, pre-sales has been lacking a tool to track info about its interactions with customers, and that’s what his company built.

“The main problem that we solve is we give technology to those pre-sales leaders to run and operate their teams, but then take those insights from the group that knows more about the technology and the customer than anybody else, and we deliver that across the organization to the product team, sales team and executive staff,” Darrow explained.

Darrow is a Zuora alumnus, and his story is similar to that of the company’s founder, Tien Tzuo, who built the first billing system for Salesforce, then founded Zuora to build a subscription billing system for everyone else. Similarly, Darrow built a pre-sales tool for Zuora after finding there wasn’t anything else out there devoted specifically to tracking that kind of information.

“At Zuora, I had to build everything from scratch. After the IPO, I realized that this is something that every tech company can take advantage of because every technology company will really need this role to be of high value and impact,” he said.

The company not only tracks information via a mobile app and browser tool, it also has a reporting dashboard to help companies understand and share what the pre-sales team is hearing from the customer. For example, they might know that x number of customers have been asking for a certain feature, and this information can be organized and passed on to other parts of the company.

Screenshot: Vivun

Bansal, who was previously CEO and co-founder at AppDynamics, a company he sold to Cisco for $3.7 billion just before its IPO in 2017, saw a company filling a big hole in the enterprise software ecosystem. He is not just an investor, he’s also a customer.

“To be successful, a technology company needs to understand three things: where it will be in five years, what its customers need right now, and what the market wants that it’s not currently providing. Pre-sales has answers to all three questions and is a strategically important department that needs management, analytics, and tools for accelerating deals. Yet, no one was making software for this critical department until Vivun,” he said in a statement.

The company was founded in 2018 and has been bootstrapped until now. It spent the first year building out the product. Today, the company has 20 customers including SignalFx (acquired by Splunk in August for $1.05 billion) and Harness.

Instagram founders join $30M raise for Loom work video messenger

Why are we all trapped in enterprise chat apps if we talk 6X faster than we type, and our brain processes visual info 60,000X faster than text? Thanks to Instagram, we’re not as camera-shy anymore. And everyone’s trying to remain in flow instead of being distracted by multi-tasking.

That’s why now is the time for Loom. It’s an enterprise collaboration video messaging service that lets you send quick clips of yourself so you can get your point across and get back to work. Talk through a problem, explain your solution, or narrate a screenshare. Some engineering hocus pocus sees videos start uploading before you finish recording so you can share instantly viewable links as soon as you’re done.
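
Loom hasn’t published how its upload pipeline works, but the general “start uploading before the recording ends” trick can be sketched with a streaming HTTP upload; the endpoint and chunk source below are hypothetical stand-ins.

import requests

def recording_chunks():
    # Stand-in for a real capture pipeline: yield encoded video chunks
    # as the recorder produces them, instead of waiting for a finished file.
    for chunk in (b"chunk-1", b"chunk-2", b"chunk-3"):
        yield chunk

# Passing a generator makes requests use chunked transfer encoding,
# so bytes go over the wire while recording is still in progress.
resp = requests.post("https://example.com/upload", data=recording_chunks())
print(resp.status_code)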

Loom video messaging on mobile

“What we felt was that more visual communication could be translated into the workplace and deliver disproportionate value,” co-founder and CEO Joe Thomas tells me. He actually conducted our whole interview over Loom, responding to emailed questions with video clips.

Launched in 2016, Loom is finally hitting its growth spurt. It’s up from 1.1 million users and 18,000 companies in February to 1.8 million people at 50,000 businesses sharing 15 million minutes of Loom videos per month. Remote workers are especially keen on Loom, since it gives them face-to-face time with colleagues without the annoyance of scheduling synchronous video calls. “80% of our professional power users had primarily said that they were communicating with people that they didn’t share office space with,” Thomas notes.

A smart product, swift traction, and a shot at riding the consumerization-of-enterprise trend have secured Loom a $30 million Series B. The round, which is being announced later today, was led by prestigious SaaS investor Sequoia and joined by Kleiner Perkins, Figma CEO Dylan Field, Front CEO Mathilde Collin, and Instagram co-founders Kevin Systrom and Mike Krieger.

“At Instagram, one of the biggest things we did was focus on extreme performance and extreme ease of use, and that meant optimizing every screen, doing really creative things about when we started uploading, optimizing everything from video codec to networking,” Krieger says. “Since then I feel like some products have managed to try to capture some of that, but few as much as Loom did. When I first used Loom I turned to Kevin, who was my Instagram co-founder, and said, ‘oh my god, how did they do that? This feels impossibly fast.’”

Systrom concurs about the similarities, saying, “I’m most excited because I see how they’re tackling the problem of visual communication in the same way that we tried to tackle that at Instagram.” Loom is looking to double down there, potentially adding the ability to Like and follow videos from your favorite productivity gurus or sharpest co-workers.

Loom is also prepping some of its most requested features. The startup is launching an iOS app next month, with Android coming in the first half of 2020, and is improving its video editor with blurring for hiding your bad hair day and stitching to connect multiple takes. New branding options will help external sales pitches and presentations look right. What I’m most excited for is transcription, which is also slated for the first half of next year through a partnership with another provider, so you can skim or search a Loom. Sometimes even watching at 2X speed is too slow.

But the point of raising a massive $30 million Series B just a year after Loom’s $11 million Kleiner-led Series A is to nail the enterprise product and sales process. To date, Loom has focused on a bottom-up distribution strategy similar to Dropbox. It tries to get so many individual employees to use Loom that it becomes a team’s default collaboration software. Now it needs to grow up so it can offer the security and permissions features IT managers demand. Loom for teams is rolling out in beta access this year before officially launching in early 2020.

Loom’s bid to become essential to the enterprise, though, is its team video library. This will let employees organize their Looms into folders of a knowledge base so they can explain something once on camera, and everyone else can watch whenever they need to learn that skill. No more redundant one-off messages begging for a team’s best employees to stop and re-teach something. The Loom dashboard offers analytics on who’s actually watching your videos. And integration directly into popular enterprise software suites will let recipients watch without stopping what they’re doing.

To build out these features, Loom has already grown to a headcount of 45, though co-founder Shahed Khan is stepping back from the company. For new leadership, it has hired away Dropbox’s former head of web growth, Nicole Obst; Slack’s head of design, Joshua Goldenberg; and Intercom’s VP of commercial product strategy, Matt Hodges.

Still, the elephants in the room remain Slack and Microsoft Teams. Right now, they’re mainly focused on text messaging, with some additional screensharing and video chat integrations. They’re not building Loom-style asynchronous video messaging…yet. “We want to be clear about the fact that we don’t think we’re in competition with Slack or Microsoft Teams at all. We are a complementary tool to chat,” Thomas insists. But given the similar productivity and communication ethos, those incumbents could certainly opt to compete. Slack already has 12 million daily users it could provide with video tools.

Loom co-founder and CEO Joe Thomas

Hodges, Loom’s head of marketing, tells me, “I agree Slack and Microsoft could choose to get into this territory, but what’s the opportunity cost for them in doing so? It’s the classic build vs. buy vs. integrate argument.” Slack bought screensharing tool Screenhero, but partners with Zoom and Google for video chat. Loom will focus on being easy to integrate so it can plug into would-be competitors. And Hodges notes that “Delivering asynchronous video recording and sharing at scale is non-trivial. Loom holds a patent on its streaming, transcoding, and storage technology, which has proven to provide a competitive advantage to this day.”

The tea leaves point to video invading more and more of our communication, so I expect rival startups and copycat features will crop up. Vidyard and Wistia’s Soapbox are already pushing into the space. As long as it has the head start, Loom needs to move as fast as it can. “It’s really hard to maintain focus to deliver on the core product experience that we set out to deliver versus spreading ourselves too thin. And this is absolutely critical,” Thomas tells me.

One thing that could set Loom apart? A commitment to financial fundamentals. “When you grow really fast, you can sometimes lose sight of what is the core reason for a business entity to exist, which is to become profitable. . . Even in a really bold market where cash can be cheap, we’re trying to keep profitability at the top of our minds.”

Xerox tells HP it will bring takeover bid directly to shareholders

Xerox fired the latest volley in the Xerox-HP merger letter wars today. Xerox CEO John Visentin wrote to the HP board that his company planned to take its $33.5 billion offer directly to HP shareholders.

He began his letter with a tone befitting a hostile takeover attempt, stating that HP’s refusal to negotiate defied logic. “We have put forth a compelling proposal – one that would allow HP shareholders to both realize immediate cash value and enjoy equal participation in the substantial upside expected to result from a combination. Our offer is neither ‘highly conditional’ nor ‘uncertain’ as you claim,” Visentin wrote in his letter.

He added, “We plan to engage directly with HP shareholders to solicit their support in urging the HP Board to do the right thing and pursue this compelling opportunity.”

The letter was in response to one yesterday from HP in which it turned down Xerox’s latest overture, stating that the deal seemed beyond Xerox’s ability to afford it. It called into question Xerox’s current financial situation, citing Xerox’s own financial reports, and took exception to the way in which Xerox was courting the company.

“It is clear in your aggressive words and actions that Xerox is intent on forcing a potential combination on opportunistic terms and without providing adequate information,” the company wrote.

Visentin fired back in his letter, “While you may not appreciate our ‘aggressive’ tactics, we will not apologize for them. The most efficient way to prove out the scope of this opportunity with certainty is through mutual due diligence, which you continue to refuse, and we are obligated to require.”

He further pulled no punches, writing that he believes the deal is good for both companies and good for the shareholders. “The potential benefits of a combination between HP and Xerox are self-evident. Together, we could create an industry leader – with enhanced scale and best-in-class offerings across a complete product portfolio – that will be positioned to invest more in innovation and generate greater returns for shareholders.”

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, thinks HP ultimately has the upper hand in this situation. “I feel like we have seen this movie before, when Carl Icahn meddled with Dell in a similar way. Xerox is a third of the size of HP Inc., has been steadily declining in revenue, is running out of options, and needs HP more than HP needs it.”

It would seem Xerox has chosen a no-holds-barred approach to the situation. The pen is now in HP’s hands as we await the next letter and see how the printing giant intends to respond to the latest missive from Xerox.

New Amazon capabilities put machine learning in reach of more developers

Today, Amazon announced a new approach that it says will put machine learning technology in reach of more developers and line of business users. Amazon has been making a flurry of announcements ahead of its re:Invent customer conference next week in Las Vegas.

While the company offers plenty of tools for data scientists to build machine learning models and to process, store and visualize data, it wants to put that capability directly in the hands of developers with the help of the popular database query language, SQL.

By taking advantage of tools like Amazon QuickSight, Aurora and Athena in combination with SQL queries, developers can have much more direct access to machine learning models and underlying data without any additional coding, says Matt Wood, VP of artificial intelligence at AWS.

“This announcement is all about making it easier for developers to add machine learning predictions to their products and their processes by integrating those predictions directly with their databases,” Wood told TechCrunch.

For starters, Wood says developers can take advantage of Aurora, the company’s MySQL (and Postgres)-compatible database to build a simple SQL query into an application, which will automatically pull the data into the application and run whatever machine learning model the developer associates with it.

The second piece involves Athena, the company’s serverless query service. As with Aurora, developers can write a SQL query — in this case, against any data store — and based on a machine learning model they choose, return a set of data for use in an application.
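
As a rough sketch of what the Athena flavor of this might look like, the query below calls a SageMaker endpoint from SQL via Athena’s external-function syntax and is kicked off with boto3; the endpoint name, database, table, columns and S3 output location are all hypothetical.

import boto3

athena = boto3.client("athena")

# SQL that scores rows with a deployed SageMaker endpoint (all names are placeholders).
query = """
USING EXTERNAL FUNCTION predict_churn(monthly_spend DOUBLE, tenure_months INT)
    RETURNS DOUBLE
    SAGEMAKER 'churn-model-endpoint'
SELECT customer_id, predict_churn(monthly_spend, tenure_months) AS churn_score
FROM customers
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics"},                           # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},  # placeholder
)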

The final piece is QuickSight, which is Amazon’s data visualization tool. Using one of the other tools to return some set of data, developers can use that data to create visualizations based on it inside whatever application they are creating.

“By making sophisticated ML predictions more easily available through SQL queries and dashboards, the changes we’re announcing today help to make ML more usable and accessible to database developers and business analysts. Now anyone who can write SQL can make — and importantly use — predictions in their applications without any custom code,” Amazon’s Matt Asay wrote in a blog post announcing these new capabilities.

Asay added that this approach is far easier than what developers had to do in the past to achieve this. “There is often a large amount of fiddly, manual work required to take these predictions and make them part of a broader application, process or analytics dashboard,” he wrote.

As an example, Wood offers a lead-scoring model you might use to pick the most likely sales targets to convert. “Today, in order to do lead scoring you have to go off and wire up all these pieces together in order to be able to get the predictions into the application,” he said. With this new capability, you can get there much faster.

“Now, as a developer I can just say that I have this lead scoring model which is deployed in SageMaker, and all I have to do is write literally one SQL statement that I do all day long into Aurora, and I can start getting back that lead scoring information. And then I just display it in my application and away I go,” Wood explained.

As for the machine learning models, these can come pre-built from Amazon, be developed by an in-house data science team or purchased in a machine learning model marketplace on Amazon, says Wood.

Today’s announcements from Amazon are designed to simplify machine learning and data access, and reduce the amount of coding to get from query to answer faster.

AWS expands its IoT services, brings Alexa to devices with only 1MB of RAM

AWS today announced a number of IoT-related updates that, for the most part, aim to make getting started with its IoT services easier, especially for companies that are trying to deploy a large fleet of devices. The marquee announcement, however, is about the Alexa Voice Service, which makes Amazon’s Alexa voice assistant available to hardware manufacturers who want to build it into their devices. These manufacturers can now create “Alexa built-in” devices with very low-powered chips and 1MB of RAM.

Until now, you needed at least 100MB of RAM and an ARM Cortex A-class processor. Now, the requirement for Alexa Voice Service integration for AWS IoT Core has come down to 1MB of RAM and a cheaper Cortex-M processor. With that, chances are you’ll see even more lightbulbs, light switches and other simple, single-purpose devices with Alexa functionality. You obviously can’t run a complex voice-recognition model and decision engine on a device like this, so all of the media retrieval, audio decoding, etc. is done in the cloud. All the device needs to be able to do is detect the wake word that starts the Alexa functionality, which is a comparatively simple model.

“We now offload the vast majority of all of this to the cloud,” AWS IoT VP Dirk Didascalou told me. “So the device can be ultra dumb. The only thing that the device still needs to do is wake word detection. That still needs to be covered on the device.” Didascalou noted that with new, lower-powered processors from NXP and Qualcomm, OEMs can reduce their engineering bill of materials by up to 50 percent, which will only make this capability more attractive to many companies.
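
As a purely illustrative toy of the split Didascalou describes (this is not the AVS for AWS IoT SDK), the device-side logic reduces to gating on a wake word and handing everything heavier to a hypothetical cloud call:

def wake_word_detected(audio_frame):
    # Stand-in for the tiny on-device keyword-spotting model.
    return audio_frame.startswith(b"ALEXA")

def send_to_cloud(audio_stream):
    # Stand-in for streaming the utterance to the Alexa Voice Service, which
    # handles recognition, decision-making and media retrieval in the cloud.
    return "cloud response"

for frame in (b"background noise", b"ALEXA turn on the lights"):
    if wake_word_detected(frame):
        print(send_to_cloud(frame))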

Didascalou believes we’ll see manufacturers in all kinds of areas use this new functionality, but most of it will likely be in the consumer space. “It just opens up what we call the real ambient intelligence and ambient computing space,” he said. “Because now you don’t need to identify where’s my hub — you just speak to your environment and your environment can interact with you. I think that’s a massive step towards this ambient intelligence via Alexa.”

No cloud computing announcement these days would be complete without talking about containers. Today’s container announcement for AWS’ IoT services is that IoT Greengrass, the company’s main platform for extending AWS to edge devices, now offers support for Docker containers. The reason for this is pretty straightforward. The early idea of Greengrass was to have developers write Lambda functions for it. But as Didascalou told me, a lot of companies also wanted to bring legacy and third-party applications to Greengrass devices, as well as those written in languages that are not currently supported by Greengrass. Didascalou noted that this also means you can bring any container from the Docker Hub or any other Docker container registry to Greengrass now, too.

“The idea of Greengrass was, you build an application once. And whether you deploy it to the cloud or at the edge or hybrid, it doesn’t matter, because it’s the same programming model,” he explained. “But very many older applications use containers. And then, of course, you’re saying, okay, as a company, I don’t necessarily want to rewrite something that works.”

Another notable new feature is Stream Manager for Greengrass. Until now, developers had to cobble together their own solutions for managing data streams from edge devices, using Lambda functions. Now, with this new feature, they don’t have to reinvent the wheel every time they want to build a new solution for connection management and data retention policies, etc., but can instead rely on this new functionality to do that for them. It’s pre-integrated with AWS Kinesis and IoT Analytics, too.

Also new for AWS IoT Greengrass are fleet provisioning, which makes it easier for businesses to quickly set up lots of new devices automatically, as well as secure tunneling for AWS IoT Device Management, which makes it easier for developers to remotely access a device and troubleshoot it. In addition, AWS IoT Core now features configurable endpoints.