The Istio service mesh hits version 1.0

Istio, the service mesh for microservices from Google, IBM, Lyft, Red Hat and many other players in the open-source community, launched version 1.0 of its tools today.

If you’re not into service meshes, that’s understandable. Few people are. But Istio is probably one of the most important new open-source projects out there right now. It sits at the intersection of a number of industry trends, like containers, microservices and serverless computing, and makes it easier for enterprises to embrace them. Istio now has more than 200 contributors and the code has seen more than 4,000 check-ins since the launch of version 0.1.

Istio, at its core, handles the routing, load balancing, flow control and security needs of microservices. It sits on top of existing distributed applications and basically helps them talk to each other securely, while also providing logging, telemetry and the necessary policies that keep things under control (and secure). It also features support for canary releases, which allow developers to test updates with a few users before launching them to a wider audience, something that Google and other webscale companies have long done internally.
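
To make the canary idea concrete, here is a minimal sketch of weighted traffic splitting in Python. It is purely illustrative: Istio expresses this kind of rule declaratively in mesh configuration rather than in application code, and the service names and weights below are invented.

```python
import random

# Hypothetical routing table: send 95% of traffic to the stable
# release and 5% to the canary, mirroring a mesh-level route rule.
ROUTES = [
    ("reviews-v1", 95),  # stable version
    ("reviews-v2", 5),   # canary version
]

def pick_backend(routes):
    """Choose a backend version in proportion to its weight."""
    total = sum(weight for _, weight in routes)
    roll = random.uniform(0, total)
    cumulative = 0
    for backend, weight in routes:
        cumulative += weight
        if roll < cumulative:
            return backend
    return routes[-1][0]  # guard against floating-point edge cases

if __name__ == "__main__":
    sample = [pick_backend(ROUTES) for _ in range(10_000)]
    print("canary share:", sample.count("reviews-v2") / len(sample))
```

The point of a mesh is that this split lives in infrastructure rather than in the services themselves, so moving the canary from 5 percent to 50 percent is a configuration change, not an application change.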

“In the area of microservices, things are moving so quickly,” Google product manager Jennifer Lin told me. “And with the success of Kubernetes and the abstraction around container orchestration, Istio was formed as an open-source project to really take the next step in terms of a substrate for microservice development as well as a path for VM-based workloads to move into more of a service management layer. So it’s really focused around the right level of abstractions for services and creating a consistent environment for managing that.”

Even before the 1.0 release, a number of companies had already adopted Istio in production, including the likes of eBay and Auto Trader UK. Lin argues that this is a sign that Istio solves a problem that a lot of businesses are facing today as they adopt microservices. “A number of more sophisticated customers tried to build their own service management layer and while we hadn’t yet declared 1.0, we had a number of customers — including a surprising number of large enterprise customers — say, ‘you know, even though you’re not 1.0, I’m very comfortable putting this in production because what I’m comparing it to is much more raw.’”

IBM Fellow and VP of Cloud Jason McGee agrees with this and notes that “our mission since Istio’s launch has been to enable everyone to succeed with microservices, especially in the enterprise. This is why we’ve focused the community around improving security and scale, and heavily leaned our contributions on what we’ve learned from building agile cloud architectures for companies of all sizes.”

A lot of the large cloud players now support Istio directly, too. IBM supports it on top of its Kubernetes Service, for example, and Google even announced a managed Istio service for its Google Cloud users, as well as some additional open-source tooling for serverless applications built on top of Kubernetes and Istio.

Two names missing from today’s party are Microsoft and Amazon. I think that’ll change over time, though, assuming the project keeps its momentum.

Istio also isn’t part of any major open-source foundation yet. The Cloud Native Computing Foundation (CNCF), the home of Kubernetes, is backing linkerd, a project that isn’t all that dissimilar from Istio. Once a 1.0 release of these kinds of projects rolls around, the maintainers often start looking for a foundation that can shepherd the development of the project over time. I’m guessing it’s only a matter of time before we hear more about where Istio will land.

Now On Stage! Deep Hooks: Monitoring Native Execution In WOW64 Applications

Four months ago we published a 3-part series of blog posts presenting research carried out by two of our researchers, Yarden Shafir and Assaf Carlsbad. The research introduced a new way of monitoring parts of WoW64 processes that are usually ignored by security products. This blind spot is often exploited by sophisticated malware. The technique presented in the research can be used to better monitor WoW64 processes, making detection harder to bypass. The research is the basis for some new SentinelOne detection capabilities, for example detecting the notorious “Heaven’s Gate” bypass technique.

Following its publication, the research drew a lot of interest from the community. The researchers were invited to various cyber security conferences to share their insights.

For those of you who are intrigued by WOW64 applications and how to protect them, here is their talk from BSidesTLV. You can also meet them on August 30, at the HITB GSEC 2018 Conference.


Google Calendar makes rescheduling meetings easier

Nobody really likes meetings — and the few people who do like them are the ones with whom you probably don’t want to have meetings. So when you’ve had your fill and decide to reschedule some of those obligations, the usual process of trying to find a new meeting time begins. Thankfully, the Google Calendar team has heard your sighs of frustration and built a new tool that makes rescheduling meetings much easier.

Starting in two weeks, on August 13th, every guest will be able to propose a new meeting time and attach a message to the organizer explaining the change. The organizer can then review and accept or reject the proposed time slot. If the other guests have made their calendars public, the organizer can also see the attendees’ availability in a new side-by-side view to find a new time.
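
Under the hood, finding a replacement slot is an interval-intersection problem. Here is a minimal sketch, with invented attendees and busy times, of how a side-by-side availability view might surface the first slot that works for everyone:

```python
from datetime import datetime, timedelta

# Invented busy intervals (start, end) for three attendees on one day.
BUSY = {
    "alice": [(datetime(2018, 8, 13, 9), datetime(2018, 8, 13, 11))],
    "bob":   [(datetime(2018, 8, 13, 10), datetime(2018, 8, 13, 12))],
    "carol": [(datetime(2018, 8, 13, 14), datetime(2018, 8, 13, 15))],
}

def first_common_slot(busy, day_start, day_end, duration):
    """Scan the day in 30-minute steps and return the first slot
    that overlaps nobody's busy intervals."""
    step = timedelta(minutes=30)
    t = day_start
    while t + duration <= day_end:
        candidate = (t, t + duration)
        if all(
            candidate[1] <= start or candidate[0] >= end
            for intervals in busy.values()
            for start, end in intervals
        ):
            return candidate
        t += step
    return None

slot = first_common_slot(
    BUSY,
    datetime(2018, 8, 13, 9),
    datetime(2018, 8, 13, 17),
    timedelta(hours=1),
)
print(slot)  # -> 12:00 to 13:00, the first hour free for all three
```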

What’s a bit odd here is that this is still a mostly manual feature. To find meeting slots to begin with, Google already employs some of its machine learning smarts to find the best times. This new feature doesn’t seem to employ the same algorithms to propose dates and times for rescheduled meetings.

This new feature will work across G Suite domains and also with Microsoft Exchange. It’s worth noting, though, that this new option won’t be available for meetings with more than 200 attendees or for all-day events.

A pickaxe for the AI gold rush, Labelbox sells training data software

Every artificial intelligence startup or corporate R&D lab has to reinvent the wheel when it comes to how humans annotate training data to teach algorithms what to look for. Whether it’s doctors assessing the size of cancer from a scan or drivers circling street signs in self-driving car footage, all this labeling has to happen somewhere. Often that means wasting six months and as much as a million dollars just developing a training data system. With nearly every type of business racing to adopt AI, that spend in cash and time adds up.

Labelbox builds artificial intelligence training data labeling software so nobody else has to. What Salesforce is to a sales team, Labelbox is to an AI engineering team. The software-as-a-service acts as the interface for human experts or crowdsourced labor to instruct computers how to spot relevant signals in data by themselves and continuously improve their algorithms’ accuracy.

Today, Labelbox is emerging from six months in stealth with a $3.9 million seed round led by Kleiner Perkins and joined by First Round and Google’s Gradient Ventures.

“There haven’t been seamless tools to allow AI teams to transfer institutional knowledge from their brains to software,” says co-founder Manu Sharma. “Now we have over 5,000 customers, and many big companies have replaced their own internal tools with Labelbox.”

Kleiner’s Ilya Fushman explains that “If you have these tools, you can ramp up to the AI curve much faster, allowing companies to realize the dream of AI.”

Inventing the best wheel

Sharma knew how annoying it was to try to forge training data systems from scratch because he’d seen it done before at Planet Labs, a satellite imaging startup. “One of the things that I observed was that Planet Labs has a superb AI team, but that team had spent over six months building labeling and training tools. Is this really how teams around the world are approaching building AI?” he wondered.

Before that, he’d worked at DroneDeploy alongside Labelbox co-founder and CTO Daniel Rasmuson, who was leading the aerial data startup’s developer platform. “Many drone analytics companies that were also building AI were going through the same pain point,” Sharma tells me. In September, the two began to explore the idea and found that 20 other companies big and small were also burning talent and capital on the problem. “We thought we could make that much smarter so AI teams can focus on algorithms,” Sharma decided.

Labelbox’s team, with co-founders Ysiad Ferreiras (third from left), Manu Sharma (fourth from left), Brian Rieger (sixth from left), and Daniel Rasmuson (seventh from left)

Labelbox launched its early alpha in January and saw swift pickup from the AI community, which immediately asked for additional features. With time, the tool expanded with more and more ways to manually annotate data, from gradation levels (such as how sick a cow looks, for judging its milk production) to matching judgments (such as whether a dress fits a fashion brand’s aesthetic). Rigorous data science is applied to weed out discrepancies between reviewers’ decisions and identify edge cases that don’t fit the models.
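
The article doesn’t detail Labelbox’s reconciliation method, but the basic pattern is straightforward to sketch: accept the majority label when reviewers agree strongly, and escalate low-consensus items as edge cases. A hypothetical example in Python:

```python
from collections import Counter

def reconcile(labels, min_agreement=0.75):
    """Return (label, None) when reviewers agree strongly,
    or (None, 'edge_case') when consensus is too weak."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(labels)
    if agreement >= min_agreement:
        return label, None
    return None, "edge_case"  # disagreement: escalate for expert review

# Three reviewers agree, one dissents: 75% agreement passes.
print(reconcile(["polyp", "polyp", "polyp", "clear"]))  # ('polyp', None)
# A 50/50 split gets flagged as an edge case instead.
print(reconcile(["polyp", "clear", "polyp", "clear"]))  # (None, 'edge_case')
```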

“There are all these research studies about how to make training data” that Labelbox analyzes and applies, says co-founder and COO Ysiad Ferreiras, who’d led all of sales and revenue at fast-rising grassroots campaign texting startup Hustle. “We can let people tweak different settings so they can run their own machine learning program the way they want to, instead of being limited by what they can build really quickly.” When Norway mandated colon cancer screenings for all citizens, it needed AI for recognizing polyps. Instead of spending half a year creating the training tool, it simply signed all the doctors up on Labelbox.

Any organization can try Labelbox for free, and Ferreiras claims hundreds have. Once they hit a usage threshold, the startup works with them on appropriate SaaS pricing related to the revenue the client’s AI will generate. One customer, Lytx, makes DriveCam, a system installed on half a million trucks with cameras that use AI to detect unsafe driver behavior so drivers can be coached to improve. Condé Nast is using Labelbox to match runway fashion to related items in its archive of content.

Eliminating redundancy, and jobs?

The big challenge is convincing companies that they’re better off leaving the training software to the experts instead of building it in-house, where they’re intimately, though perhaps inefficiently, involved in every step of development. Some turn to crowdsourcing agencies like CrowdFlower, which has its own training data interface, but it works only with generalist labor, not the experts required for many fields. Labelbox wants to cooperate rather than compete here, serving as the management software that treats outsourcers as just another data input.

Long-term, the risk for Labelbox is that it’s arrived too early for the AI revolution. Most potential corporate customers are still in the R&D phase around AI, not at scaled deployment into real-world products. The big business isn’t selling the labeling software. That’s just the start. Labelbox wants to continuously manage the fine-tuning data to help optimize an algorithm through its entire life cycle. That requires AI being part of the actual engineering process. Right now it’s often stuck as an experiment in the lab. “We’re not concerned about our ability to build the tool to do that. Our concern is ‘will the industry get there fast enough?’” Ferreiras declares.

Their investor agrees. Last year’s big joke in venture capital was that suddenly you couldn’t hear a startup pitch without “AI” being referenced. “There was a big wave where everything was AI. I think at this point it’s almost a bit implied,” says Fushman. But it’s corporations that already have plenty of data, and plenty of human jobs to obviate, that are Labelbox’s opportunity. “The bigger question is ‘when does that [AI] reality reach consumers, not just from the Googles and Amazons of the world, but the mainstream corporations?’”

Labelbox is willing to wait it out, or better yet, accelerate that arrival — even if it means eliminating jobs. That’s because the team believes the benefits to humanity will outweigh the transition troubles.

“For a colonoscopy or mammogram, you only have a certain number of people in the world who can do that. That limits how many of those can be performed. In the future, that could only be limited by the computational power provided, so it could be exponentially cheaper,” says co-founder Brian Rieger. With Labelbox, tens of thousands of radiology exams can be quickly ingested to produce cancer-spotting algorithms that he says studies show can become more accurate than humans. Employment might get tougher to find, but hopefully life will get easier and cheaper too. Meanwhile, improving underwater pipeline inspections could protect the environment from its biggest threat: us.

“AI can solve such important problems in our society,” Sharma concludes. “We want to accelerate that by helping companies tell AI what to learn.”

The Making of 90 Days | A CISO’s Journey to Impact

In the course of delivering provocative endpoint technology to thousands of customers over the past 2.5 years, we’ve seen a variety of styles, challenges, successes, and failures across the global CISO population. No doubt the role and responsibility of the CISO is one of the most distinctive charges in today’s enterprise. Rather than keep these observations and interactions internal to SentinelOne, our team decided to serve as a platform for allowing CISOs to share perspectives, experiences, and lessons learned with the broader community.

Our ebook series was born. While certainly near and dear to us, the topic of endpoint protection, detection, and response is but a part (a very important part!) of the CISO and cybersecurity leader experience – this publication is NOT an endpoint bible or propaganda piece. It’s not an “Idiot’s Guide” to being a CISO. Rather, the design principle of this piece is simple: a laconic read that provides practical advice CISOs can use to make an instant and lasting impact in their organizations.

In our first publication, “90 Days: A CISO’s Journey to Impact – Know Your Role,” we profile some of the world’s leading enterprise cybersecurity leaders. They share their views on the role, along with advice for creating rapid impact in a reasonable amount of time – 90 days. It is our hope that readers, whether existing or future CISOs, gain insights to continuously improve and be their most effective selves.

At SentinelOne, continuous hustle, improvement, and effectiveness define the purpose that each of us brings to work every day: improving the state of enterprise cybersecurity and returning time to our customers through cutting-edge technology. Our approach of a converged EPP + EDR offering that operates autonomously and packages some of the most advanced response, remediation, and hunting tools in an automated fashion wasn’t born from complacency and acceptance of the status quo. Rather, it was born from the desire, drive, and passion to create impact and know/define our role in the ecosystem of endpoint security.

This is the underlying spirit that we hope you take away from reading the first volume of our ebook. On behalf of SentinelOne, here’s to your impact and success – happy reading,

Migo

Enterprises that do not implement automation in discovery, and especially in response and remediation of attacks, will simply be left behind.

Download Ebook

Announcing Central Park and 2.6 Release

We are thrilled to announce the General Availability of our 2.6 Agents and Central Park Management Console.

Central Park brings SentinelOne’s unmatched detection capabilities into the multi-tenancy world. With this release, large enterprises with sites around the globe can manage their security with ease. It’s also great news for MSPs and MSSPs, who can now build their business on top of the SentinelOne offering and provide more value to their customers. Central Park includes a complete rewrite of our Management Console, built with the newest and most secure technologies available.

In this post, I will review the most significant changes in the release.

Manageability

Starting with the Central Park release, security managers can view assets in a global view or a per-site view, depending on the permissions associated with each administrator. This allows for both the autonomy and the control the enterprise needs.
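
The console’s internals aren’t public, but the global-versus-site model can be pictured as a permission scope applied to every query. A hypothetical sketch:

```python
# Hypothetical inventory: assets tagged by site.
ASSETS = [
    {"host": "ny-laptop-01", "site": "new-york"},
    {"host": "ldn-srv-07", "site": "london"},
    {"host": "ny-srv-03", "site": "new-york"},
]

ADMINS = {
    "global-admin": {"scope": "global"},                 # sees every site
    "london-admin": {"scope": "site", "site": "london"}, # sees one site only
}

def visible_assets(admin_name):
    """Filter the asset list by the administrator's permission scope."""
    admin = ADMINS[admin_name]
    if admin["scope"] == "global":
        return ASSETS
    return [a for a in ASSETS if a["site"] == admin["site"]]

print(len(visible_assets("global-admin")))  # 3
print(visible_assets("london-admin"))       # only ldn-srv-07
```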

Site and License Management

Global admins have better control over licensing consumption and visibility, directly from the SentinelOne console.

Improved Analyze View

The Analyze view redesign makes it easier to grasp the overall security status at a glance. We have added a wide range of filters to search for many different attributes of a threat. Free text search in combination with the filters is now supported.

Classification makes it simpler to prioritize security incidents and resolve them faster. Examples of classifications are Exploit, Ransomware, Trojan, Backdoor, Keylogger, and many others.

Improved Reports View

Report creation is now more comfortable, with more interval options to choose from.

Seamless Active Directory Support

We have removed the need to connect the SentinelOne console to your AD domain controller or to configure anything. Starting with 2.6, the SentinelOne Agent queries the endpoint for its AD membership and sends that data to the Management Console.
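
As a rough illustration of the agent-side approach (not SentinelOne’s actual implementation): on a domain-joined Windows endpoint, the current user’s group membership can be read locally with the built-in whoami utility, with no console-to-domain-controller connection required:

```python
import subprocess

def local_ad_groups():
    """Ask the endpoint itself for the current user's group
    membership; 'whoami /groups' ships with Windows."""
    result = subprocess.run(
        ["whoami", "/groups", "/fo", "csv"],
        capture_output=True, text=True, check=True,
    )
    # Skip the CSV header; the group name is the first quoted field.
    groups = []
    for line in result.stdout.splitlines()[1:]:
        if line.startswith('"'):
            groups.append(line.split('","')[0].strip('"'))
    return groups

if __name__ == "__main__":
    for group in local_ad_groups():
        print(group)  # e.g. DOMAIN\\Domain Users, then reported upstream
```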

Deep Visibility Enhancements

Deep Visibility allows the IR team and administrators to look into every activity on their endpoints, regardless of whether it is on Windows, macOS, or Linux. Beyond UX and UI improvements, we have added the ability to create a watchlist. Just create your query, save it as a watchlist, select who needs to be notified, and you are done.
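
Conceptually, a watchlist is just a saved query plus a notification list, re-evaluated as new events arrive. A hypothetical model (not the SentinelOne API; the event fields and query are invented):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Watchlist:
    name: str
    query: Callable[[Dict], bool]  # the saved predicate over events
    notify: List[str]              # who gets alerted on a match

# Invented example: watch for PowerShell spawned by Office apps.
watchlist = Watchlist(
    name="office-spawns-powershell",
    query=lambda e: e.get("parent") in {"winword.exe", "excel.exe"}
                    and e.get("process") == "powershell.exe",
    notify=["soc@example.com"],
)

def ingest(event, watchlists):
    """Evaluate every saved watchlist against an incoming event."""
    for wl in watchlists:
        if wl.query(event):
            for recipient in wl.notify:
                print(f"alert {recipient}: {wl.name} matched {event}")

ingest({"parent": "winword.exe", "process": "powershell.exe"}, [watchlist])
```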

Flexible Configuration

We take pride in the simplicity of the policy options in SentinelOne. In fact, since our last significant change in policy (the 2.0 release), we have found that most of our customers adapt quickly, without any need for a walkthrough. That said, the underlying configuration allows you much greater flexibility, including the ability to change even a single parameter on a single device. This may be needed for a particular user or group within your environment. To make these scenarios easy, we expanded our configuration, and we now allow you to change anything you might need directly from the Console itself, saving you time and IT overhead.
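
The mechanics behind changing a single parameter on a single device are essentially layered overrides: a device-level setting wins over its group, which wins over the site and global defaults. A minimal sketch of that resolution order, with invented parameter names:

```python
# Invented configuration layers, most general first.
GLOBAL_DEFAULTS = {"scan_on_write": True, "log_level": "info"}
SITE_OVERRIDES = {"log_level": "warn"}
GROUP_OVERRIDES = {}
DEVICE_OVERRIDES = {"scan_on_write": False}  # one parameter, one device

def effective_config(*layers):
    """Merge layers left to right; later (more specific) layers win."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

print(effective_config(
    GLOBAL_DEFAULTS, SITE_OVERRIDES, GROUP_OVERRIDES, DEVICE_OVERRIDES
))
# -> {'scan_on_write': False, 'log_level': 'warn'}
```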

Token-Based Installation

Until 2.6, we shipped a dedicated Agent for each Console, embedded with the right URL and configuration. We found this to be the most secure and straightforward way to deploy our technology. As we experience significant growth in the number and size of our customers, we’ve taken another step to give IT professionals more control over the deployment process by using token-based deployment. With tokens, you do not need SentinelOne support at any stage of your initial deployment, and you are entirely in control.
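
One plausible way such a token could work (an assumption on our part, not a documented format) is a base64-encoded payload bundling the console URL and a site key, so a single string tells the installer where and how to register:

```python
import base64
import json

def make_site_token(console_url, site_key):
    """Pack registration details into one copy-pasteable string.
    (Hypothetical format; the real token layout is not documented here.)"""
    payload = {"url": console_url, "site_key": site_key}
    return base64.b64encode(json.dumps(payload).encode()).decode()

def read_site_token(token):
    """What an installer would do: decode the token it was handed."""
    return json.loads(base64.b64decode(token))

token = make_site_token("https://console.example.com", "site-key-123")
print(read_site_token(token))
# -> {'url': 'https://console.example.com', 'site_key': 'site-key-123'}
```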

Protection

WSL

WSL provides a Linux-compatible kernel interface developed by Microsoft (containing no Linux kernel code) that can run a GNU userland on top of it. This userland can contain a Bash shell and command language, with native Linux command-line tools (sed, awk) and programming-language interpreters (Ruby, Python, etc.). WSL is excellent news for sysadmins and IT professionals, but it also provides a bypass around traditional defenses. As such, we developed a way to inspect WSL and to distinguish benign uses from malicious ones.
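
SentinelOne’s inspection technique isn’t spelled out here. As a deliberately naive illustration of the visibility problem, the sketch below merely flags WSL-hosted processes in a process listing using the third-party psutil package; real inspection has to go much deeper than a name check:

```python
import psutil  # third-party: pip install psutil

# Process names associated with WSL sessions on Windows.
WSL_NAMES = {"bash.exe", "wsl.exe", "wslhost.exe"}

def wsl_processes():
    """Yield running processes whose image name suggests WSL.
    A name check alone distinguishes nothing benign from malicious;
    it only shows where the blind spot lives."""
    for proc in psutil.process_iter(["pid", "name", "cmdline"]):
        if (proc.info["name"] or "").lower() in WSL_NAMES:
            yield proc.info

for info in wsl_processes():
    print(info["pid"], info["name"], info["cmdline"])
```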

DFI (Static AI)

We are releasing a new version of DFI, SentinelOne’s static AI module, capable of preventing the execution of malicious files. We have also improved the correlation between our engines to ensure mitigation of the entire malicious chain, enriching the behavioral AI as well.

New Logo, New Branding

We felt the time was right to introduce a new logo and brand message to convey who we are and the value we deliver to our customers.  A message about time: “it’s about time.” It’s about time because we save our customers’ valuable time by preventing and catching threats at machine speed, and it’s about time that the market demanded a solution that converges EPP & EDR into ONE purpose-built agent.

Enhanced Agent Performance

We optimized the Agent-Management protocol for lower bandwidth consumption and reduced bit-rate peaks.

Conclusion

The unspoken risks of security today include complexity and management overhead. To keep your assets secure, you need prevention, detection, and response at scale, but you also need a solution that is easy to manage. At SentinelOne, we solve this problem by making our technology accessible and making AI and automation work in your favor.

Central Park is already in use for new deployments, including managed services, at scale.

Existing customers will be upgraded to this codebase as part of the Denali release in August 2018.

We continue to improve all aspects of the product: simplifying both the user and management experience, adding greater prevention and detection capabilities, and improving performance.

From SentinelOne’s perspective, this release sets a new benchmark for enterprise readiness and supports the growth that SentinelOne expects in 2018.

Get in touch with the SentinelOne experts

Request a demo

7 Best Reasons to Visit SentinelOne at BlackHat

BlackHat 2018 is just a few weeks away, and the SentinelOne team could not be more excited! We will be located in a prime area of the vendor pavilion, booth #212. Below are the 7 best reasons to visit us:

  1. Best Product – Autonomous endpoint protection with EPP + EDR through a single agent that saves you time.
  2. Best Location – Booth #212.  You will see us right when you enter the expo doors.
  3. Best Swag – Grab a cool t-shirt, hoodie, or a pair of socks!
  4. Best Presentations – Experts from our partner & customer ecosystem will present a variety of topics.
  5. Best Demos – The smart guys (and gals) will be showing live demonstrations of our industry-leading solution.
  6. Best Research – Learn how SentinelOne is on the cutting edge of stopping today’s most advanced attacks.
  7. Best Drink – Sip a sparkling lavender lemonade specially poured from our S1 ice sculpture.

See why autonomous endpoint protection is the ONLY answer to diverse modes of attack. Visit Booth #212 at Black Hat, the world’s leading information security event promoting the latest in research, development, and trends.

Looking forward to meeting you!

Ready to schedule a meeting? Register for a demo? Join our happy hour?

Find out more


Google is rolling out a version of Google Voice for enterprise G Suite customers

Google today said it will be rolling out an enterprise version of its Google Voice service for G Suite users, potentially tapping a new demand source for Google that could help attract a whole host of new users.

Google Voice has been a long-enjoyed service for everyday consumers, and it offers a lot of benefits beyond just having a normal phone number. The enterprise version of Google Voice appears to give companies a way to offer those kinds of tools, including AI-powered features like voicemail transcription, that employees may already be using on their own, potentially skirting company guidelines. Administrators can provision and port phone numbers, get detailed reports and set up call routing functionality. They can also deploy phone numbers to departments or employees, giving them a sort of universal number that isn’t tied to a device — and making it easier to get in touch with someone where necessary.

All of this is an effort to spread adoption of G Suite among larger enterprises, as it offers Google a nice, consistent business. While its advertising business continues to grow, the company is investing in cloud products as another revenue stream. That division offers a lot of headroom while Google figures out the actual ceiling of its advertising business and works on other projects like its hardware, Google Home, and others.

While Google didn’t explicitly talk about it ahead of the conference today, there’s another potential opportunity for something like this: call centers. An enterprise version of Google Voice could give companies a way to provision out certain phone numbers to employees to handle customer service requests and get a lot of information about those calls. Google yesterday announced that it was rolling out a more robust set of call center tools that lean on its expertise in machine learning and artificial intelligence, and getting control of the actual numbers those calls come in on is one part of that equation.

There’s also a spam filtering feature, which will probably be useful in handling waves of robo-calls of various kinds. It’s another product that Google is porting over to its enterprise customers, with somewhat better controls for CTOs and CIOs, after years of understanding how ordinary consumers use it and rigorously testing parts of the product. That time also gives Google an opportunity to thoroughly research the gaps in the product that enterprise customers might need filled in order to sell them on it.

Google Voice enterprise is going to be available as an early adopter product.

Google Cloud introduces shielded virtual machines for additional security

While we might like to think all of our applications are equal in our eyes, in reality some are more important than others and require an additional level of security. To meet those requirements, Google introduced shielded virtual machines at Google Next today.

As Google describes it, “Shielded VMs leverage advanced platform security capabilities to help ensure your VMs have not been tampered with. With Shielded VMs, you can monitor and react to any changes in the VM baseline as well as its current runtime state.”

These specialized VMs run on GCP and come with a set of partner security controls to defend against things like rootkits and bootkits, according to Google. There are a whole bunch of things that happen even before an application launches inside a VM, and each step in that process is vulnerable to attack.

That’s because as the machine starts up, before you even get to your security application, it launches the firmware, the boot sequence, the kernel, then the operating system — and then and only then, does your security application launch.

That time between startup and the security application launching could leave you vulnerable to certain exploits that take advantage of those openings. The shielded VMs strip out as much of that process as possible to reduce the risk.

“What we’re doing here is we are stripping out any of the binary that doesn’t absolutely have to be there. We’re ensuring that every binary that is there is signed, that it’s signed by the right party, and that they load in the proper sequence,” a Google spokesperson explained. All of these steps should reduce overall risk.
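
The signing discipline described above amounts to a chain of trust: each boot stage is verified before it is allowed to run, and a single tampered stage halts the sequence. A conceptual sketch with toy signatures, not real firmware code:

```python
import hashlib
import hmac

ROOT_KEY = b"platform-root-of-trust"  # toy stand-in for keys anchored in hardware

def sign(blob):
    """Toy signature: an HMAC over the stage image."""
    return hmac.new(ROOT_KEY, blob, hashlib.sha256).hexdigest()

# Each boot stage ships with a signature produced at build time.
BOOT_CHAIN = [
    ("firmware", b"firmware-image"),
    ("bootloader", b"bootloader-image"),
    ("kernel", b"kernel-image"),
    ("os", b"os-image"),
]
SIGNATURES = {name: sign(blob) for name, blob in BOOT_CHAIN}

def boot(chain, signatures):
    """Verify every stage before 'executing' it; halt on first mismatch."""
    for name, blob in chain:
        if not hmac.compare_digest(sign(blob), signatures[name]):
            raise SystemExit(f"tampered stage: {name}")
        print(f"verified, loading: {name}")

boot(BOOT_CHAIN, SIGNATURES)  # a tampered blob would halt the sequence here
```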

Shielded VMs are available in beta now.

Snark AI looks to help companies get on-demand access to idle GPUs

Riding the wave of the explosion in the use of machine learning to power, well, just about everything is the emergence of GPUs as one of the go-to processors for handling all that computation.

But getting access to those GPUs — whether using the cards themselves or possibly through something like AWS — might still be too difficult or too expensive for some companies or research teams. So Davit Buniatyan and his co-founders decided to start Snark AI, which helps companies rent GPUs that aren’t in use across a distributed network of companies that just have them sitting there, rather than through a service like Amazon. While the larger cloud providers offer similar access to GPUs, Buniatyan’s hope is that it’ll be attractive enough to companies and developers to tap a different network if they can lower that barrier to entry. The company is launching out of Y Combinator’s Summer 2018 class.

“We bet that there will always be a gap between mining and AWS or Google Cloud prices,” Buniatyan said. “If the mining will be [more profitable than the cost of running a GPU], anyone can get into AWS and do mining and be profitable. We’re building a distributed cloud computing platform for clients that can easily access the resources that are there but are not being used.”

The startup works with companies with a lot of spare GPUs that aren’t in use, such as gaming cloud companies or crypto mining companies. Teams that need GPUs for training their machine learning models get access to the raw hardware, while teams that just need those GPUs to handle inference get access to them through a set of APIs. There’s a distinction between the two because they are two sides to machine learning — the former building the model that the latter uses to execute some task, like image or speech recognition. When the GPUs are idle, they run mining to pay the hardware providers, and Snark AI also offers the capability to both mine and run deep learning inference on a piece of hardware simultaneously, Buniatyan said.
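
The juggling act Buniatyan describes, mining when idle and serving inference when paying work arrives, is at heart a preemption policy. A toy sketch of that scheduling decision, with invented GPU and job names:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GPU:
    name: str
    job: str = "mining"  # idle GPUs default to mining for the hardware owner

@dataclass
class Fleet:
    gpus: List[GPU] = field(default_factory=list)

    def assign_inference(self, n_needed):
        """Preempt mining on up to n_needed GPUs; inference pays better."""
        granted = []
        for gpu in self.gpus:
            if len(granted) == n_needed:
                break
            if gpu.job == "mining":
                gpu.job = "inference"
                granted.append(gpu.name)
        return granted

fleet = Fleet([GPU("dc1-gpu0"), GPU("dc1-gpu1"), GPU("dc2-gpu0")])
print(fleet.assign_inference(2))              # ['dc1-gpu0', 'dc1-gpu1']
print([(g.name, g.job) for g in fleet.gpus])  # dc2-gpu0 keeps mining
```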

Snark AI matches the proper amount of GPU power to whatever a team needs, and then deploys it across a network of distributed idle cards that companies have in various data centers. It’s one way to recoup the cost of a GPU, which may be a substantial initial investment, by earning a return while it would otherwise sit idle. If that’s the case, it may also encourage more companies to sign up with a network like this — Snark AI or otherwise — and deploy similar cards.

There’s also an emerging trend of specialized chips that focus on machine learning or inference, which aim to reduce the cost, power consumption or space requirements of machine learning tasks. That ecosystem of startups, including Cerebras Systems, Mythic, Graphcore and other well-funded entrants, potentially has a shot at unseating GPUs for machine learning tasks. There’s also the emergence of ASICs, customized chips that are better suited to tasks like crypto mining, which could fracture an ecosystem like this — especially if the larger cloud providers decide to build or deploy something similar (such as Google’s TPU). But this also means there’s room to create a new interface layer that can snap up all the leftovers for tasks that companies might need but that don’t necessarily require bleeding-edge technology from those startups.

There’s always going to be the same argument that was made for Dropbox prior to its significant focus on enterprises and collaboration: the price falls dramatically as it becomes more commoditized. That might be especially true for companies like Amazon and Google, which have already run that playbook, and could leverage their dominance in cloud computing to put a significant amount of pressure on a third-party network like Snark AI. Google also has the ability to build proprietary hardware like the TPU for specialized operations. But Buniatyan said the company’s focus on being able to juggle inference and mining, in addition to keeping that cost low for idle GPUs of companies that are just looking to deploy, should keep it viable, even amid a changing ecosystem that’s focusing on machine learning.