Google Cloud goes after the telco business with Anthos for Telecom and its Global Mobile Edge Cloud

Google Cloud today announced a new solution for its telecom customers: Anthos for Telecom. You can think of this as a specialized edition of Google’s container-based Anthos multi-cloud platform for both modernizing existing applications and building new ones on top of Kubernetes. The announcement, which was originally slated for MWC, doesn’t come as a major surprise, given Google Cloud’s focus on offering very targeted services to its enterprise customers in a number of different verticals.

Given the rise of edge computing and, in the telco business, 5G, Anthos for Telecom makes for an interesting play in what could potentially be a very lucrative market for Google. This is also the market where the open-source OpenStack project has remained the strongest.

What’s maybe even more important here is that Google is also launching a new service called the Global Mobile Edge Cloud (GMEC). With this, telco companies will be able to run their applications not just in Google’s 20+ data center regions, but also in Google’s more than 130 edge locations around the world.

“We’re basically giving you compute power on our edge, where previously it was only for Google use, through the Anthos platform,” explained Eyal Manor, the VP of Engineering for Anthos. “The edge is very powerful and I think we will now see substantially more innovation happening for applications that are latency-sensitive. We’ve been investing in edge compute and edge networking for a long time in Google over the years for the internal services. And we think it’s a fairly unique capability now to open it up for third-party customers.”

For now, Google is only making this available to its telco partners, with AT&T being the launch customer, but over time, Manor said, it’ll likely open its edge cloud to other verticals as well. Google also expects to be able to announce other partners in the near future.

As for Anthos for Telecom, Manor notes that this is very much what its customers are asking for, especially now that so many of their new applications are containerized.

“[Anthos] brings the best of cloud-as-a-service to our customers, wherever they are, in multiple environments and provide the lock-in free environment with the latest cloud tools,” explained Manor. “The goal is really to empower developers and operators to move faster in a consistent way, so regardless of where you are, you don’t have to train your technical staff. It works on-premise, it works on GCP and on other clouds. And that’s what we hear from customers — customers really like choice.”

In the telecom industry, those customers also want to get higher up the stack and get consistency between their data centers and the edge — and all of that, of course, is meant to bring down the cost of running these networks and services.

“We don’t want to manage the [technology] we previously invested in for many years because the upgrades were terribly expensive and slow for that. I hear that consistently. And please Google, make this seem like a service in the cloud for us,” Manor said.

For developers, Anthos also promises to provide the same development experience, no matter where the application is deployed — and Google now has an established network of partners around Anthos that provides solutions to both developers and operators. To this effect, Google is also launching new partnerships with the Amdocs customer experience platform and Netcracker today.

“We’re excited to unveil a new strategy today to help telecommunications companies innovate and accelerate their digital transformation through Google Cloud,” said Thomas Kurian, CEO of Google Cloud, in today’s announcement. “By collaborating closely with leading telecoms, partners and customers, we can transform the industry together and create better overall experiences for our users globally.”

Etsy’s 2-year migration to the cloud brought flexibility to the online marketplace

Founded in 2005, Etsy was born before cloud infrastructure was even a thing.

As the company expanded, it managed all of its operations in the same way startups did in those days — using private data centers. But a couple of years ago, the online marketplace for crafts and vintage items decided to modernize and began its journey to the cloud.

That decision coincided with the arrival of CTO Mike Fisher in July 2017. He was originally brought in as a consultant to look at the impact of running data centers on Etsy’s ability to innovate. As you might expect, he concluded that it was having an adverse impact and began a process that would lead to him being hired to lead a long-term migration to the cloud.

That process concluded last month. This is the story of how a company born in data centers made the switch to the cloud, and the lessons it offers.

Stuck in a hardware refresh loop

When Fisher walked through the door, Etsy operated out of private data centers. It was not even taking advantage of a virtualization layer to maximize the capacity of each machine. The approach meant IT spent an inordinate amount of time on resource planning.

YC-backed Turing uses AI to help speed up the formulation of new consumer packaged goods

One of the more interesting and useful applications of artificial intelligence has been in biotechnology and medicine, where more than 220 startups (not to mention universities and bigger pharma companies) are now using AI to accelerate drug discovery, playing out the many permutations that result from drug and chemical combinations, DNA and other factors.

Now, a startup called Turing — which is part of the current cohort at Y Combinator due to present in the next Demo Day on March 22 — is taking a similar principle but applying it to the world of building (and “discovering”) new consumer packaged goods products.

Turing uses machine learning to simulate different combinations of ingredients and desired outcomes in order to figure out optimal formulations for different goods (hence the “Turing” name, a reference to Alan Turing’s mathematical model, the Turing machine). It is initially addressing the creation of products in home care (e.g. detergents), beauty, and food and beverage.

Turing’s founders claim it can save companies millions of dollars by reducing the time it takes to formulate and test new products from an average of 12 to 24 months down to a matter of weeks.

Specifically, the aim is to reduce all the time it takes to test combinations, giving R&D teams more time to be creative.

“Right now, they are spending more time managing experiments than they are innovating,” Manmit Shrimali, Turing’s co-founder and CEO, said.

Turing is in theory coming out of stealth today, but in fact it has already amassed an impressive customer list. It is already generating revenues by working with eight brands owned by one of the world’s biggest CPG companies, and it is also being trialed by another major CPG behemoth (Turing is not disclosing their names publicly, but suffice it to say, they and their brands are household names).

“Turing aims to become the industry norm for formulation development and we are here to play the long game,” Shrimali said. “This requires creating an ecosystem that can help at each stage of growing and scaling the company, and YC just does this exceptionally well.”

Turing was co-founded by Shrimali and Ajith Govind, two data science specialists who worked together on a previous startup called Dextro Analytics. Dextro set out to help businesses use AI and other kinds of analytics to identify trends and support decision making around marketing, business strategy and other operational areas.

While there, they identified a very specific use case for the same principles that was perhaps even more acute: the research and development divisions of CPG companies, which have (ironically, given their focus on the future) often been behind the curve when it comes to the “digital transformation” that has swept up a lot of other corporate departments.

“We were consulting for product companies and realised that they were struggling,” Shrimali said. Add to that the fact that CPG is precisely the kind of legacy industry that is not natively technological but can most definitely benefit from better technology, and that spells out an interesting opportunity for how (and where) to introduce artificial intelligence into the mix.

R&D labs play a specific and critical role in the world of CPG.

This is where products are discovered; tested; tweaked in response to input from customers, marketing, budgetary and manufacturing departments and others; then tested again; then tweaked again; and so on, before eventually being shipped into production. One of the big clients that Turing works with spends close to $400 million on testing alone.

But R&D is under a lot of pressure these days. While these departments are seeing their budgets cut, the demands on them keep growing. They are still expected to meet timelines in producing new products (or, more often, extensions of existing products) to keep consumers interested. A new host of environmental and health concerns around goods with huge lists of unintelligible ingredients means they have to figure out how to simplify and improve the composition of mass-market products. And smaller direct-to-consumer brands are undercutting their larger competitors by getting to market faster with competitive offerings that meet new consumer tastes and preferences.

“In the CPG world, everyone was focused on marketing, and R&D was a blind spot,” Shrimali said, referring to the extensive investments that CPG companies have made into figuring out how to use digital to track and connect with users, and also how better to distribute their products. “To address how to use technology better in R&D, people need strong domain knowledge, and we are the first in the market to do that.”

Turing’s focus is to speed up the formulation and testing that go into product creation, cutting down some of the extensive overhead involved in bringing new products to market.

Part of the reason it can take years to create a new product is the sheer number of permutations that go into building something and making sure it works as consistently as a consumer would expect it to (while still being consistent in production and coming in within budget).

“If just one ingredient is changed in a formulation, it can change everything,” Shrimali noted. And so in the case of something like a laundry detergent, this means running hundreds of tests on hundreds of loads of laundry to make sure that it works as it should.

The Turing platform brings in historical data from across a number of past permutations and tests to essentially virtualise all of this: It suggests optimal mixes and outcomes from them without the need to run the costly physical tests, and in turn this teaches the Turing platform to address future tests and formulations. Shrimali said that the Turing platform has already saved one of the brands some $7 million in testing costs.
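
To make the idea concrete, here is a deliberately minimal sketch of what that kind of “virtual testing” loop looks like in practice: a surrogate model trained on historical experiments that then scores thousands of candidate mixes without a single physical test. This is illustrative only, not Turing’s actual system; the ingredients, data and model choice are all assumptions.

```python
# Illustrative sketch only -- not Turing's platform. A surrogate model is fit on
# historical formulation experiments, then used to rank candidate mixes "virtually".
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical history: ingredient fractions (surfactant, enzyme, builder) -> cleaning score.
X_history = rng.random((200, 3))
y_history = (0.5 * X_history[:, 0] + 0.3 * X_history[:, 1] + 0.2 * X_history[:, 2]
             + rng.normal(0, 0.02, 200))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_history, y_history)

# Score a large batch of candidate formulations instead of running physical tests,
# then hand the most promising ones back to the lab for confirmation.
candidates = rng.random((10_000, 3))
predicted = model.predict(candidates)
best = candidates[np.argmax(predicted)]
print("Most promising formulation (ingredient fractions):", best.round(3))
```

In a setup like this, the lab would physically confirm only the handful of top-ranked candidates, which is where the claimed time savings come from.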

Turing’s place in working with R&D gives the company some interesting insights into the shifts the wider industry is undergoing. Currently, Shrimali said, one of the biggest priorities for CPG giants is addressing the demand for more traceable, natural and organic formulations.

While no single DTC brand will ever fully eat into the market share of any CPG brand, collectively their presence and resonance with consumers is clearly causing a shift. Sometimes that will lead to acquisitions of the smaller brands, but more generally it reflects a change in consumer demands that the CPG companies are trying to meet. 

Longer term, the plan is for Turing to apply its platform to other aspects that are touched by R&D beyond the formulations of products. The thinking is that changing consumer preferences will also lead to a demand for better “formulations” for the wider product, including more sustainable production and packaging. And that, in turn, represents two areas into which Turing can expand, introducing potentially other kinds of AI technology (such as computer vision) into the mix to help optimise how companies build their next generation of consumer goods.

Nvidia acquires data storage and management platform SwiftStack

Nvidia today announced that it has acquired SwiftStack, a software-centric data storage and management platform that supports public cloud, on-premises and edge deployments.

SwiftStack’s recent launches focused on improving its support for AI, high-performance computing and accelerated computing workloads, which is surely what Nvidia is most interested in here.

“Building AI supercomputers is exciting to the entire SwiftStack team,” says the company’s co-founder and CPO Joe Arnold in today’s announcement. “We couldn’t be more thrilled to work with the talented folks at NVIDIA and look forward to contributing to its world-leading accelerated computing solutions.”

The two companies did not disclose the price of the acquisition, but SwiftStack had previously raised about $23.6 million in Series A and B rounds led by Mayfield Fund and OpenView Venture Partners. Other investors include Storm Ventures and UMC Capital.

SwiftStack, which was founded in 2011, placed an early bet on OpenStack, the massive open-source project that aimed to give enterprises an AWS-like management experience in their own data centers. The company was one of the largest contributors to OpenStack’s Swift object storage platform and offered a number of services around this, though it seems like in recent years it has downplayed the OpenStack relationship as that platform’s popularity has fizzled in many verticals.

SwiftStack lists the likes of PayPal, Rogers, data center provider DC Blox, Snapfish and Verizon (TechCrunch’s parent company) on its customer page. Nvidia, too, is a customer.

SwiftStack notes that its team will continue to maintain an existing set of open-source tools like Swift, ProxyFS, 1space and Controller.

“SwiftStack’s technology is already a key part of NVIDIA’s GPU-powered AI infrastructure, and this acquisition will strengthen what we do for you,” says Arnold.

Google Cloud announces four new regions as it expands its global footprint

Google Cloud today announced plans to open four new data center regions. These regions will be in Delhi (India), Doha (Qatar), Melbourne (Australia) and Toronto (Canada), bringing Google Cloud’s total footprint to 26 regions. The company previously announced that it would open regions in Jakarta, Las Vegas, Salt Lake City, Seoul and Warsaw over the course of the next year. The announcement also comes only a few days after Google opened its Salt Lake City data center.

GCP already had a data center presence in India, Australia and Canada before this announcement, but with these newly announced regions, it will offer two geographically separate regions in each of those countries, enabling options like in-country disaster recovery.

Google notes that the Doha region marks the company’s first strategic collaboration agreement to launch a region in the Middle East, signed with the Qatar Free Zones Authority. One of the launch customers there is Bespin Global, a major managed services provider in Asia.

“We work with some of the largest Korean enterprises, helping to drive their digital transformation initiatives. One of the key requirements that we have is that we need to deliver the same quality of service to all of our customers around the globe,” said John Lee, CEO, Bespin Global. “Google Cloud’s continuous investments in expanding their own infrastructure to areas like the Middle East make it possible for us to meet our customers where they are.”

Datastax acquires The Last Pickle

Data management company Datastax, one of the largest contributors to the Apache Cassandra project, today announced that it has acquired The Last Pickle (and no, I don’t know what’s up with that name either), a New Zealand-based Cassandra consulting and services firm that’s behind a number of popular open-source tools for the distributed NoSQL database.

As Datastax Chief Strategy Officer Sam Ramji, who you may remember from his recent tenure at Apigee, the Cloud Foundry Foundation, Google and Autodesk, told me, The Last Pickle is one of the premier Apache Cassandra consulting and services companies. The team there has been building Cassandra-based open-source solutions for the likes of Spotify, T-Mobile and AT&T since it was founded back in 2012. And while The Last Pickle is based in New Zealand, the company has engineers all over the world who do the heavy lifting and help these companies successfully implement the Cassandra database technology.

It’s worth mentioning that Last Pickle CEO Aaron Morton first discovered Cassandra when he worked for WETA Digital on the special effects for Avatar, where the team used Cassandra to allow the VFX artists to store their data.

“There’s two parts to what they do,” Ramji explained. “One is the very visible consulting, which has led them to become world experts in the operation of Cassandra. So as we automate Cassandra and as we improve the operability of the project with enterprises, their embodied wisdom about how to operate and scale Apache Cassandra is as good as it gets — the best in the world.” And The Last Pickle’s experience in building systems with tens of thousands of nodes — and the challenges that its customers face — is something Datastax can then offer to its customers as well.

And Datastax, of course, also plans to productize The Last Pickle’s open-source tools like the automated repair tool Reaper and the Medusa backup and restore system.

As both Ramji and Datastax VP of Engineering Josh McKenzie stressed, Cassandra has seen a lot of commercial development in recent years, with the likes of AWS now offering a managed Cassandra service, for example, but there isn’t all that much hype around the project anymore. They argue that’s a good thing, though. Now that it is over ten years old, Cassandra has been battle-hardened. For the last ten years, Ramji argues, the industry tried to figure out what the de facto standard for scale-out computing should be. By 2019, it became clear that Kubernetes was the answer.

“This next decade is about what is the de facto standard for scale-out data? We think that’s got certain affordances, certain structural needs, and we think that the decades that Cassandra has spent getting hardened puts it in a position to be data for that wave.”

McKenzie also noted that Cassandra’s built-in features — like support for multiple data centers and geo-replication, rolling updates and live scaling, as well as wide support across programming languages — give it a number of advantages over competing databases.

“It’s easy to forget how much Cassandra gives you for free just based on its architecture,” he said. “Losing the power in an entire datacenter, upgrading the version of the database, hardware failing every day? No problem. The cluster is 100 percent always still up and available. The tooling and expertise of The Last Pickle really help bring all this distributed and resilient power into the hands of the masses.”
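
As a concrete example of the kind of “for free” resilience McKenzie describes, Cassandra’s multi-datacenter replication is declared at the schema level. The snippet below is a minimal sketch using the open-source Python driver; the node address and datacenter names are hypothetical and would need to match a cluster’s actual topology.

```python
# Minimal sketch of Cassandra's built-in multi-datacenter (geo) replication,
# using the open-source Python driver (cassandra-driver). The address and
# datacenter names are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1"])  # any reachable node in the cluster
session = cluster.connect()

# NetworkTopologyStrategy keeps the requested number of replicas in each
# datacenter, which is what provides geo-replication at the schema level.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS orders
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'us_east': 3,
        'eu_west': 3
    }
""")

cluster.shutdown()
```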

The two companies did not disclose the price of the acquisition.

Honeywell says it will soon launch the world’s most powerful quantum computer

“The best-kept secret in quantum computing.” That’s what Cambridge Quantum Computing (CQC) CEO Ilyas Khan called Honeywell’s efforts in building the world’s most powerful quantum computer. In a race where most of the major players are vying for attention, Honeywell has quietly worked on its efforts for the last few years (and under strict NDAs, it seems). But today, the company announced a major breakthrough that it claims will allow it to launch the world’s most powerful quantum computer within the next three months.

In addition, Honeywell also today announced that it has made strategic investments in CQC and Zapata Computing, both of which focus on the software side of quantum computing. The company has also partnered with JPMorgan Chase to develop quantum algorithms using Honeywell’s quantum computer. The company also recently announced a partnership with Microsoft.

Honeywell has long built the kind of complex control systems that power many of the world’s largest industrial sites. It’s that kind of experience that has now allowed it to build an advanced ion trap that is at the core of its efforts.

This ion trap, the company claims in a paper that accompanies today’s announcement, has allowed the team to achieve decoherence times that are significantly longer than those of its competitors.

“It starts really with the heritage that Honeywell had to work from,” Tony Uttley, the president of Honeywell Quantum Solutions, told me. “And we, because of our businesses within aerospace and defense and our business in oil and gas — with solutions that have to do with the integration of complex control systems because of our chemicals and materials businesses — we had all of the underlying pieces for quantum computing, which are just fabulously different from classical computing. You need to have ultra-high vacuum system capabilities. You need to have cryogenic capabilities. You need to have precision control. You need to have lasers and photonic capabilities. You have to have magnetic and vibrational stability capabilities. And for us, we had our own foundry and so we are able to literally design our architecture from the trap up.”

The result of this is a quantum computer that promises to achieve a Quantum Volume of 64. Quantum Volume (QV), it’s worth mentioning, is a metric that takes into account both the number of qubits in a system and their decoherence times. IBM and others have championed this metric as a way to, at least for now, compare the power of various quantum computers.

So far, IBM’s own machines have achieved QV 32, which would make Honeywell’s machine significantly more powerful.
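
For context, Quantum Volume is usually defined (in IBM’s formulation) on a log scale: a machine earns QV 2^n if it can reliably run random “square” circuits that are n qubits wide and n layers deep. Roughly:

```latex
% Rough sketch of the IBM-style Quantum Volume definition (for context only).
\[
  \log_2 \mathrm{QV} \;=\; \max\Bigl\{\, n \;:\; \text{random width-$n$, depth-$n$ circuits pass the heavy-output test (probability} > \tfrac{2}{3}\text{)} \Bigr\}
\]
% QV 32 thus corresponds to reliable 5-qubit, depth-5 circuits; QV 64 to 6-qubit, depth-6 circuits.
```

On that scale, going from QV 32 to QV 64 means handling circuits one qubit wider and one layer deeper, a meaningful step given how quickly errors compound with circuit size.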

Khan, whose company provides software tools for quantum computing and was one of the first to work with Honeywell on this project, also noted that the focus on the ion trap is giving Honeywell a bit of an advantage. “I think that the choice of the ion trap approach by Honeywell is a reflection of a very deliberate focus on the quality of qubit rather than the number of qubits, which I think is fairly sophisticated,” he said. “Until recently, the headline was always growth, the number of qubits running.”

The Honeywell team noted that many of its current customers are also likely users of its quantum solutions. These customers, after all, are working on exactly the kind of problems in chemistry or material science that quantum computing, at least in its earliest forms, is uniquely suited for.

Currently, Honeywell has about 100 scientists, engineers and developers dedicated to its quantum project.

Stack Overflow expands its Teams service with new integrations

Most developers think of Stack Overflow as a question and answer site for their programming questions. But over the last few years, the company has also built a successful business in its Stack Overflow for Teams product, which essentially offers companies a private version of its Q&A product. Indeed, the Teams product now brings in a significant amount of revenue for the company and the new executive team at Stack Overflow is betting that it can help the company grow rapidly in the years to come.

To make Teams even more attractive to businesses, the company today launched a number of new integrations with Jira (Enterprise and Business), GitHub (Enterprise and Business) and Microsoft Teams (Enterprise). These join existing integrations with Slack, Okta and the Business tier of Microsoft Teams.

“I think the integrations that we have been building are reflective of that developer workflow and all of the number of tools that someone who is building and leveraging technology has to interact with,” Stack Overflow Chief Product Officer Teresa Dietrich told me. “When we think about integrations, we think about the vertical right, and I think that ‘developer workflow’ is one of those industry verticals that we’re thinking about. ChatOps is obviously another one, as you can see from our Slack and Teams integration. And the JIRA and GitHub [integrations] that we’re building are really at the core of a developer workflow.”

Current Stack Overflow for Teams customers include the likes of Microsoft, Expensify and Wix. As the company noted, 65 percent of its existing Teams customers use GitHub, so it’s no surprise that it is building out this integration.

Ampere launches new chip built from ground up for cloud workloads

Ampere, the chip startup run by former Intel President Renee James, announced a new chip today that she says is designed specifically to optimize for cloud workloads.

Ampere VP of product Jeff Wittich says the new chip is called the Ampere Altra, and it’s been designed with features that should make it attractive to cloud providers, with three main focuses: high performance, scalability and power efficiency — all elements that are important to cloud vendors operating at scale.

The Altra is an ARM chip with some big features. “It’s [80] 64-bit ARM cores, or 160 cores in a two-socket platform — we support both one-socket and two-socket [configurations]. We are running at 3 GHz turbo, and that’s 3 GHz across all of the cores, because of the way that cloud delivers compute: you’re utilizing all the cores as much of the time as possible. So our turbo performance was optimized for all of the cores being able to sustain it all the time,” Wittich explained.

The company sees this chip as a kind of workhorse for the cloud. “We’ve really looked at this as we’re designing a general purpose CPU that is built for the cloud environment, so you can utilize that compute the way the cloud utilizes that type of compute. So it supports the vast array of all of the workloads that run in the cloud,” he said.

Founder and CEO James says the company has been working with its cloud customers to give them the kind of information they need to optimize the chip for their individual workloads at a granular configuration level, something the hyperscalers in particular really require.

“Let’s go do what we can to build the platform that delivers the raw power and performance, the kind of environment that you’re looking for, and then have a design approach that enables them to work with us on what’s important and the kind of control, that kind of feature set that’s unique because each one of them have their own software environment,” James explained.

Companies working with Ampere early on include Oracle (an investor, according to Crunchbase) and Microsoft, among others.

James says one of the unforeseen challenges of delivering this chip is possible disruption to the supply chain due to the COVID-19 coronavirus outbreak and its impact in Asia, where many of the parts come from and the chips are assembled.

She says the company has taken that into consideration and has built up a worldwide supply chain that she hopes will help absorb any hiccups caused by supply chain slowdowns.

The Case for Limiting Your Browser Extensions

Last week, KrebsOnSecurity reported to health insurance provider Blue Shield of California that its Web site was flagged by multiple security products as serving malicious content. Blue Shield quickly removed the unauthorized code. An investigation determined it was injected by a browser extension installed on the computer of a Blue Shield employee who’d edited the Web site in the past month.

The incident is a reminder that browser extensions — however useful or fun they may seem when you install them — typically have a great deal of power and can effectively read and/or write all data in your browsing sessions. And as we’ll see, it’s not uncommon for extension makers to sell or lease their user base to shady advertising firms, or in some cases abandon them to outright cybercriminals.

The health insurance site was compromised after an employee at the company edited content on the site while using a Web browser equipped with a once-benign but now-compromised extension which quietly injected code into the page.

The extension in question was Page Ruler, a Chrome addition with some 400,000 downloads. Page Ruler lets users measure the inch/pixel width of images and other objects on a Web page. But the extension was sold by the original developer a few years back, and for some reason it’s still available from the Google Chrome store despite multiple recent reports from people blaming it for spreading malicious code.

How did a browser extension lead to a malicious link being added to the health insurance company Web site? This compromised extension tries to determine if the person using it is typing content into specific Web forms, such as a blog post editing system like WordPress or Joomla.

In that case, the extension silently adds a request for a javascript link to the end of whatever the user types and saves on the page. When that altered HTML content is saved and published to the Web, the hidden javascript code causes a visitor’s browser to display ads under certain conditions.

Who exactly gets paid when those ads are shown or clicked is not clear, but there are a few clues about who’s facilitating this. The malicious link that set off antivirus alarm bells when people tried to visit Blue Shield California downloaded javascript content from a domain called linkojager[.]org.

The file it attempted to download — 212b3d4039ab5319ec.js — appears to be named after an affiliate identification number designating a specific account that should get credited for serving advertisements. A simple Internet search shows this same javascript code is present on hundreds of other Web sites, no doubt inadvertently published by site owners who happened to be editing their sites with this Page Ruler extension installed.
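
For site owners worried they may have inadvertently published this kind of payload, the check is conceptually simple: scan the rendered HTML for script tags pointing at hosts you don’t recognize. Below is a minimal, hypothetical sketch of such an audit in Python; the allow-list and the sample page are made up, and the defanged domain stands in for the one described above.

```python
# Hypothetical sketch: audit published HTML for injected third-party <script> tags.
# The allowed hosts and sample page below are illustrative only.
from html.parser import HTMLParser
from urllib.parse import urlparse

ALLOWED_SCRIPT_HOSTS = {"www.example.com", "cdn.example.com"}  # hosts you actually use

class ScriptAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if src and urlparse(src).hostname not in ALLOWED_SCRIPT_HOSTS:
            self.suspicious.append(src)

page_html = '<p>Post body</p><script src="https://linkojager.example/212b3d4039ab5319ec.js"></script>'
auditor = ScriptAuditor()
auditor.feed(page_html)
print("Unexpected script sources:", auditor.suspicious)
```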

If we download a copy of that javascript file and view it in a text editor, we can see the following message toward the end of the file:

[NAME OF EXTENSION HERE]’s development is supported by advertisements that are added to some of the websites you visit. During the development of this extension, I’ve put in thousands of hours adding features, fixing bugs and making things better, not mentioning the support of all the users who ask for help.

Ads support most of the internet we all use and love; without them, the internet we have today would simply not exist. Similarly, without revenue, this extension (and the upcoming new ones) would not be possible.

You can disable these ads now or later in the settings page. You can also minimize the ads appearance by clicking on partial support button. Both of these options are available by clicking ’x’ button in the corner of each ad. In both cases, your choice will remain in effect unless you reinstall or reset the extension.

This appears to be boilerplate text used by one or more affiliate programs that pay developers to add a few lines of code to their extensions. The opt-out feature referenced in the text above doesn’t actually work because it points to a domain that no longer resolves — thisadsfor[.]us. But that domain is still useful for getting a better idea of what we’re dealing with here.

Registration records maintained by DomainTools [an advertiser on this site] say it was originally registered to someone using the email address frankomedison1020@gmail.com. A reverse WHOIS search on that unusual name turns up several other interesting domains, including icontent[.]us.

icontent[.]us is currently not resolving either, but a cached version of it at Archive.org shows it once belonged to an advertising network called Metrext, which marketed itself as an analytics platform that let extension makers track users in real time.

An archived copy of the content once served at icontent[.]us promises “plag’n’play” capability.

“Three lines into your product and it’s in live,” iContent enthused. “High revenue per user.”

Another domain tied to Frank Medison is cdnpps[.]us, which currently redirects to the domain “monetizus[.]com.” Like its competitors, Monetizus’ site is full of grammar and spelling errors: “Use Monetizus Solutions to bring an extra value to your toolbars, addons and extensions, without loosing an audience,” the company says in a banner at the top of its site.

Be sure not to “loose” out on sketchy moneymaking activities!

Contacted by KrebsOnSecurity, Page Ruler’s original developer Peter Newnham confirmed he sold his extension to MonetizUs in 2017.

“They didn’t say what they were going to do with it but I assumed they were going to try to monetize it somehow, probably with the scripts their website mentions,” Newnham said.

“I could have probably made a lot more running ad code myself but I didn’t want the hassle of managing all of that and Google seemed to be making noises at the time about cracking down on that kind of behaviour so the one off payment suited me fine,” Newnham said. “Especially as I hadn’t updated the extension for about 3 years and work and family life meant I was unlikely to do anything with it in the future as well.”

Monetizus did not respond to requests for comment.

Newnham declined to say how much he was paid for surrendering his extension. But it’s not difficult to see why developers might sell or lease their creation to a marketing company: Many of these entities offer the promise of a hefty payday for extensions with decent followings. For example, one competing extension monetization platform called AddonJet claims it can offer revenues of up to $2,500 per day for every 100,000 users in the United States.

I hope it’s obvious by this point, but readers should be extremely cautious about installing extensions — sticking mainly to those that are actively supported and respond to user concerns. Personally, I do not make much use of browser extensions. In almost every case I’ve considered installing one I’ve been sufficiently spooked by the permissions requested that I ultimately decided it wasn’t worth the risk.

If you’re the type of person who uses multiple extensions, it may be wise to adopt a risk-based approach going forward. Given the high stakes that typically come with installing an extension, consider carefully whether having the extension is truly worth it. This applies equally to plug-ins designed for Web site content management systems like WordPress and Joomla.

Do not agree to update an extension if it suddenly requests more permissions than a previous version. This should be a giant red flag that something is not right. If this happens with an extension you trust, you’d be well advised to remove it entirely.

Also, never download and install an extension just because some Web site says you need it to view some type of content. Doing otherwise is almost always a high-risk proposition. Here, Rule #1 from KrebsOnSecurity’s Three Rules of Online Safety comes into play: “If you didn’t go looking for it, don’t install it.” Finally, in the event you do wish to install something, make sure you’re getting it directly from the entity that produced the software.

Google Chrome users can see any extensions they have installed by clicking the three dots to the right of the address bar, selecting “More tools” in the resulting drop-down menu, then “Extensions.” In Firefox, click the three horizontal bars next to the address bar and select “Add-ons,” then click the “Extensions” link on the resulting page to view any installed extensions.