PinnacleOne Alert | Russian Space-Based Nuclear Anti-Satellite Weapon

Key Takeaways

  • Russia is likely developing, but has not fully deployed, a nuclear-weapon based anti-satellite system (which would be a treaty violation).
  • This system would threaten to destroy wide swaths of military and commercial systems in low and medium earth orbit, create ground effects, and complicate strategic deterrence.
  • The USG is peeling back the curtain, slightly, on the increasing weaponization of space to alert the broader public to the consequences for economic and geopolitical stability.
  • Firms should incorporate assessments of space-based threats (human and natural) to the core capabilities they rely on to conduct their operations, especially satellite communications, position/navigation/timing, and ground-based systems with space-based dependencies.

Recommendations

Organizations that have to this point assumed the reliability and availability of space systems should conduct scenario-based planning and exercises to improve enterprise resilience.

Executives should ask:

  • How would my business function if satellite communications, GPS, or regional electrical infrastructure were interrupted or degraded for weeks or months?
  • What would we expect to be a collateral consequence of the overt weaponization of space or outright space conflict for terrestrial geopolitical and economic affairs?
  • How do we attempt to better understand and assess the implications of these normally highly classified and deeply shrouded government activities on my commercial objectives?
  • How do we as an organization build more resilience and redundancy into our service providers for essential communications and PNT capabilities?

What Happened?

Multiple sources reported Russia is in the process of developing a space-based anti-satellite (ASAT) nuclear weapons system that would be strategically destabilizing if made operational. Sources told the NYT that Russia “does not appear close to deploying” a nuclear weapon itself, and it is not considered an “urgent threat”. However, there is a “limited window of time, which they did not define, to prevent its deployment.”

While NPR’s sources were not certain whether the capability was based on a nuclear weapon or merely nuclear powered, the Washington Post’s reporting squared with the NYT’s, quoting officials familiar with the matter as saying it involved “damaging critical intelligence or communications satellites with a nuclear weapon.”

Why Does it Matter?

The objective of such a system is to create a form of “area-denial” via electromagnetic interference and radiation effects on satellites in orbit. Deploying nuclear weapons into space is prohibited by the 1967 Outer Space Treaty (negotiated between the U.S. and the Soviet Union).

A nuclear weapon detonated in space can function as an “orbital area denial” system. (Source)

This intelligence comes in the context of Russia’s known heavy investments in superoruzhie (“superweapons”) as an asymmetric strategy to counter U.S. technical overmatch. From the Status-6 megaton-class nuclear torpedo to the Avangard hypersonic glide vehicle and the 9M730 Burevestnik nuclear-powered cruise missile, this new nuclear ASAT system continues a pattern of developing weapons that threaten catastrophic destruction in order to maintain a strategic balance.

It is noteworthy that these reports come soon after the Russians launched Kosmos-2575 (a national security related payload) on 09FEB24 in a way that made it coplanar with Kosmos-2574, launched on 27DEC23. The US Space Force confirmed these satellites have identical orbital parameters. These launches are similar to those Russia conducted in 2022 of “inspector satellites” that can maneuver in orbit and “exhibited characteristics of a space weapon,” but they are not alleged to be nuclear in nature.

What Is the U.S. Saying?

Attentive observers would have seen, on 13JAN24, the Deputy Secretary of Defense post to X: “The United States is committed to leading with restraint and responsibility in the space domain — and in every domain. We do our part to avoid escalation. We strive to prevent miscommunication. And we work with like-minded nations to keep the space domain peaceful.”

The next day on 14JAN24, Deputy Secretary of Defense Hicks posted, “Space can be a domain of unpredictability, chaos, and destruction… or a domain of stability, tranquility, and possibility. For the good of all mankind, the United States emphatically chooses the latter – and we strongly encourage all nations to do the same.” In retrospect, this seems like a clear expression of displeasure directed at the Russians for their moves to covertly weaponize and destabilize the space domain.

It comes soon after Hicks signed an order rewriting the DoD’s classification policy for space to downgrade information previously locked up in Special Access Programs for broader dissemination to allies and partners. Back in 2021, General Hyten (then Vice Chairman of the Joint Chiefs of Staff) told the National Security Space Association: “In space, we over-classify everything… Deterrence does not happen in the classified world. Deterrence does not happen in the black; deterrence happens in the white.”

The Washington Examiner reported that “Russian physicists have openly theorized plans to develop a space-based nuclear warhead system that would vaporize a second target-facing element to create a plasma wave that strikes targets in space at range.”

Technical Background

With this disclosure, the U.S. will now have to determine how to respond and how much of this response it needs to make public. Rep. Turner’s statement and presentation of relevant intelligence to the entire House have put his colleagues in the Senate and the White House in a difficult position as they deliberate on a response that preserves geopolitical and strategic flexibility.

A Defense Threat Reduction Agency report found that “one low-yield (10-20 kt), high altitude (125-300 km) nuclear explosion could disable — in weeks to months — all LEO [low earth orbit] satellites not specifically hardened to withstand radiation generated by that explosion.” They note a strategic objective of such an attack would be a “deliberate effort to cause economic damage with lower likelihood of nuclear retaliation… by [a] rogue state facing economic strangulation or imminent military defeat or [p]ose economic threat to the industrial world without causing human casualties or visible damage to economic infrastructure.”

Prompt/direct nuclear effects against ground and air infrastructure via a high altitude electromagnetic pulse (HEMP) would be immediate, though somewhat localized under the detonation. Within weeks, however, the accumulating degradation against LEO assets would cause global disruption to most orbital telecommunications assets.

Also, the Pentagon highlighted in its annual report on the People’s Republic of China military that “the PRC is developing other sophisticated space-based capabilities, such as satellite inspection and repair. At least some of these capabilities could also function as a weapon.”

Given that, for now, Russia does not appear to have deployed nuclear weapons themselves into orbit, there is time for crisis management, strategic signaling, and deterrence options. However, if Russia comes close to putting, or actually puts, a nuclear “package” into space in violation of the OST, the U.S. would face an immediate strategic crisis on par with the Cuban Missile Crisis, at a time when leaders already have their hands full with international crises and geopolitical flashpoints.

High Altitude Nuclear Detonations (HANDs) generate strong belt-pumping effects that dramatically reduce the lifetime of LEO satellite constellations. (Source: DTRA)

What Should Executives Know?

While this remains in the strategic and orbital sphere for now, the implications for terrestrial communications systems, the emerging space economy, and global peace are deeply concerning. Further, this action by Russia is meant to serve as a cross-domain deterrent to shape conventional conflicts and diplomacy on the ground, casting the dynamics of the Ukrainian war in the light of its implicit threat to make a mess of things in space.

This development sheds public light on an issue that those in the intelligence and defense communities have been concerned with for years – namely, the fragility of global communications and position, navigation, and timing (PNT) systems to disruption or destruction in space.

More now recognize that space is the source of novel risks for a global economy dependent on orbital systems: space weather and solar flares, proliferating directed-energy and electromagnetic warfare systems (both ground-based and space-based), cyberattacks on space systems, and the worrying prospect of orbital nuclearization together create an escalating and complex risk environment.

Kryptina RaaS | From Underground Commodity to Open Source Threat

One of the key drivers behind the explosion in ransomware attacks over the last five or more years has been the development and proliferation of the ransomware-as-a-service (RaaS) model, a means of providing cybercriminals with easy-to-use, low-cost tools with which to undertake and manage ransomware campaigns. Developers benefit from a steady stream of income from subscription sales while avoiding direct engagement in criminal acts. The recently observed Kryptina RaaS, a dedicated Linux attack framework, has added a new twist to this model: moving from a paid service to an openly available tool.

In this post, we explore the development, technicalities and implications of Kryptina RaaS and its move into open-source crimeware. We dive into what defenders need to know to protect against this latest Linux ransomware and the dangers that open source threats pose to organizations.

The Development of Kryptina RaaS

The Kryptina RaaS first surfaced in December 2023 on underground forums, marketed as a lightweight, fast, and highly customizable ransomware solution for Linux systems. Authored in C, it offered an attractive proposition for cybercriminals looking for efficient ways to target the Linux servers and cloud workloads that form the backbone of many organizations’ networks.

Initially, two purchase options were available: a standalone build (encryptor and decryptor) for $20, and a complete package including source code, builder, and documentation for $500. The developer quickly added new features in January including support for both 32 and 64-bit targets, an updated web interface and support for Monero (XMR) and Bitcoin (BTC) payments. The complete package price went up to $800 with the addition of these new features. This pricing strategy was indicative of the creator’s intention to cater to a wide range of actors within the cybercriminal ecosystem.

February saw a surprising turn of events as the creator, known as ‘Corlys’, published the entire source code on BreachForums, effectively removing any financial barrier to entry.

The developer’s stated reasons for releasing the source code of Kryptina were that it had failed to attract buyers. Given the short period of time between its first appearance as a paid offering and release of the open source code, some may not find this credible. Other motivations could include an attempt to build kudos within the cybercrime community, feuds with other criminals and/or fear of attention from law enforcement.

Kryptina 2.2 source code posting in BreachForums

Whatever the motivation, the release of the RaaS source code, complete with extensive documentation, could have significant implications for the spread and impact of ransomware attacks against Linux systems. It is likely to increase the ransomware builder’s attractiveness and usability, drawing in yet more low-skilled participants to the cybercrime ecosystem. There is also significant risk that it will lead to the development of multiple spin-offs and an increase in attacks, an effect previously observed after the leak of Babuk ransomware’s source code.

Kryptina Payload | Technical Details

As noted, Kryptina is a Linux-only ransomware offering payloads for either elf64 or elf32 architectures. Upon execution, the ransomware targets the directories and files specified in the builder during the configuration stage.

The encryption process uses multiple parallel threads and depends on OpenSSL’s libcrypto library. It uses the AES-256 algorithm in CBC mode. The keys and configuration data are obfuscated via XOR using a custom value defined at build time, and then base64 encoded.
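
For defenders writing decoders or triaging samples, the practical takeaway is that recovering the embedded configuration amounts to a base64 decode followed by a single-byte XOR. The minimal C sketch below is our own illustration of that pattern, not Kryptina’s source: the decode_config() name is ours, and it assumes OpenSSL’s EVP_DecodeBlock() together with the builder’s default XOR key of 155 (see the --xor_key option listed below).

/* Illustrative sketch only -- not taken from Kryptina's source.
 * Recovers data stored as base64(XOR(config, key)) using OpenSSL's
 * EVP_DecodeBlock() and the builder's default XOR key of 155. */
#include <openssl/evp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static unsigned char *decode_config(const char *b64, unsigned char xor_key, int *out_len)
{
    int in_len = (int)strlen(b64);
    unsigned char *buf = malloc((size_t)(3 * (in_len / 4) + 1));
    if (buf == NULL)
        return NULL;

    /* EVP_DecodeBlock returns the decoded length (including any bytes
     * contributed by trailing '=' padding), or -1 on malformed input. */
    int n = EVP_DecodeBlock(buf, (const unsigned char *)b64, in_len);
    if (n < 0) {
        free(buf);
        return NULL;
    }

    for (int i = 0; i < n; i++)
        buf[i] ^= xor_key;   /* reverse the single-byte XOR obfuscation */

    *out_len = n;
    return buf;
}

int main(void)
{
    /* Placeholder: substitute a base64 blob extracted from a sample. */
    const char *blob = "REPLACE_WITH_SAMPLE_CONFIG_BLOB=";
    int len = 0;
    unsigned char *cfg = decode_config(blob, 155, &len);
    if (cfg != NULL) {
        fwrite(cfg, 1, (size_t)len, stdout);
        free(cfg);
    }
    return 0;
}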

File encryption is handled by the krptna_process_file() function. This initializes an OpenSSL cipher context via EVP_CIPHER_CTX_new() before processing file streams with EVP_CipherUpdate(), which transforms unencrypted file data into encrypted data in the output buffer. EVP_CipherFinal() then finalizes the process and handles any required CBC padding.
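
For readers less familiar with the OpenSSL EVP interface, the sketch below shows the generic AES-256-CBC pattern those calls implement. It is a simplified illustration of the API flow described above, not Kryptina’s krptna_process_file() code; the encrypt_stream() name, fixed demo key/IV, and file handling are our own assumptions.

/* Generic illustration of the EVP calls named above -- NOT Kryptina's
 * krptna_process_file(). Key/IV handling and error paths are simplified. */
#include <openssl/evp.h>
#include <stdio.h>

static int encrypt_stream(FILE *in, FILE *out,
                          const unsigned char key[32], const unsigned char iv[16])
{
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();     /* allocate cipher context */
    if (ctx == NULL)
        return -1;

    /* Final argument enc=1 selects encryption; AES-256 in CBC mode. */
    if (EVP_CipherInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv, 1) != 1)
        goto err;

    unsigned char inbuf[4096], outbuf[4096 + EVP_MAX_BLOCK_LENGTH];
    int inlen, outlen;

    while ((inlen = (int)fread(inbuf, 1, sizeof(inbuf), in)) > 0) {
        /* Transform plaintext chunks into ciphertext in the output buffer. */
        if (EVP_CipherUpdate(ctx, outbuf, &outlen, inbuf, inlen) != 1)
            goto err;
        fwrite(outbuf, 1, (size_t)outlen, out);
    }

    /* Flush the final block and apply CBC padding. */
    if (EVP_CipherFinal_ex(ctx, outbuf, &outlen) != 1)
        goto err;
    fwrite(outbuf, 1, (size_t)outlen, out);

    EVP_CIPHER_CTX_free(ctx);
    return 0;
err:
    EVP_CIPHER_CTX_free(ctx);
    return -1;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <input file> <output file>\n", argv[0]);
        return 1;
    }
    unsigned char key[32] = {0}, iv[16] = {0};  /* demo values only */
    FILE *in = fopen(argv[1], "rb"), *out = fopen(argv[2], "wb");
    if (in == NULL || out == NULL)
        return 1;
    int rc = encrypt_stream(in, out, key, iv);
    fclose(in);
    fclose(out);
    return rc == 0 ? 0 : 1;
}

Both sketches would link against libcrypto (for example, cc sketch.c -lcrypto), consistent with the libcrypto dependency noted above.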

Payloads can be configured to securely delete files before encryption, further hampering any data recovery efforts. When enabled, multiple steps are taken to overwrite individual file data and inhibit recovery. The secure_delete_file() function determines the file size using the stat function, then creates a buffer filled with random bytes. The file is opened in write mode, and the buffer of random bytes is written into it repeatedly, completely overwriting the original content, until the write matches the file’s original size. Once this is achieved, the file is permanently removed using the unlink function.

“Secure deletion” in Kryptina

The secure_delete_file() function utilizes a single-pass method, overwriting each byte of the file just once and avoiding the multiple overwrites with varying patterns seen in other ransomware. A single pass is generally adequate for rendering the original data unrecoverable and keeps the overall encryption run fast.
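
As a rough sketch of that single-pass overwrite-then-unlink flow (our own illustration with a hypothetical single_pass_delete() name, not the actual secure_delete_file() implementation), the logic reduces to:

/* Illustrative single-pass wipe mirroring the stat() -> overwrite ->
 * unlink() flow described above. This is our sketch, not Kryptina's
 * secure_delete_file(); chunk size and error handling are simplified. */
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int single_pass_delete(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0)            /* determine the file size */
        return -1;

    FILE *f = fopen(path, "r+b");        /* open for in-place overwrite */
    FILE *rnd = fopen("/dev/urandom", "rb");
    if (f == NULL || rnd == NULL) {
        if (f) fclose(f);
        if (rnd) fclose(rnd);
        return -1;
    }

    unsigned char buf[4096];
    off_t remaining = st.st_size;
    while (remaining > 0) {
        size_t chunk = remaining > (off_t)sizeof(buf) ? sizeof(buf) : (size_t)remaining;
        size_t got = fread(buf, 1, chunk, rnd);   /* fill buffer with random bytes */
        if (got == 0)
            break;
        fwrite(buf, 1, got, f);                   /* overwrite the original content */
        remaining -= (off_t)got;
    }
    fflush(f);
    fclose(rnd);
    fclose(f);

    return unlink(path);                 /* finally remove the file */
}

int main(int argc, char **argv)
{
    return (argc == 2 && single_pass_delete(argv[1]) == 0) ? 0 : 1;
}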

Kryptina Web Interface & Builder

Kryptina’s architecture is built on a foundation of Python scripts for the payload builder and web server components, requiring dependencies like pycrypto, termcolor, flask, and others for full functionality. The tool’s source code is well-documented, reflecting an intent to provide Kryptina as a turnkey solution.

As noted, Kryptina has undergone several rapid revisions since its appearance, with version 2.0 introducing significant enhancements, including a web interface. The web server, powered by Flask, allows the user to easily view and manage campaigns, build encryptors and decryptors, and communicate with victims via the ‘Chat’ option. If the operator enables “Enable Public View” for the campaign, victims can initiate contact with the attacker by following the instructions provided in the ransom note.

Within the interface, the ‘Dashboard’ displays a quick view of attack campaigns

The builder supports a wide range of command-line arguments for specifying target names, descriptions, encryption keys, directories or files to target, and more. This level of customization underscores Kryptina’s versatility and the granular control it offers to operators.

The builder can also be scripted with Python and supports the following command-line parameters.

Arg Description
-n Name of your target
-a About, a short description of your target
-k Base64-encoded 256-bit key to use (default: random)
-t Directories or files to target (comma-separated)
-i Files/extensions to ignore (comma-separated)
-e Custom extension to use (default: .krptna)
-j Max number of jobs (threads) to use (default: 20)
--arch32 Create a 32-bit binary (x86)
--xor_key The XOR key to use for encoding encrypted config data (default: 155)
--note The local file containing the encryption notice text (default: note/template.txt)
--note_name The absolute/relative filename to write the encryption notice to on the target
--nonote Don’t write the encryption note on the target
--bitcoin Bitcoin wallet address for receiving payment
--monero Monero wallet address for receiving payment
--amount The amount to ask for in USD (default: 100.0)
--deadline The payment deadline in hours (default: 72.0)
--tox The Tox chat ID you wish to be contacted on
--session The Session chat ID you wish to be contacted on
--debug Enable debug output
--demo Create a demo payload that doesn’t actually encrypt/decrypt files
--symbols Build binary with debug symbols (-ggdb3)
--nolog Don’t log payload config to the output/ directory
--persist Disable payload self-delete
--secdel Enable secure delete when encrypting files (very slow, but makes recovery much harder)
--maxsize Maximum size of file (in megabytes) to target (default: unlimited)
--recommend Auto-set recommended values for undefined parameters
--static Build the payloads as static binaries
--verbose Print compiler commands and output

SentinelOne Protects Against Kryptina Ransomware

SentinelOne Singularity detects Kryptina payloads and protects Linux systems against Kryptina ransomware. When allowed to execute in ‘Detect Only’ mode for observation purposes, Kryptina’s malicious behavior along with indicators can be viewed in the Management console.

SentinelOne protects against Kryptina ransomware

Conclusion | Navigating the Kryptina Threat to Linux Systems

The journey of Kryptina RaaS from a paid underground tool to a freely available open-source project illustrates the complexity of threats facing network defenders. As other actors iterate on the provided code, which provides everything from customizable ransomware payloads to campaign management and victim communication, it is likely that a host of Kryptina variants will proliferate in much the same way as we saw Babuk variants multiply and diversify.

As the move to cloud workloads and containers continues apace, the attractiveness of Linux as a target for cybercriminals grows with it. Powering everything from edge devices to servers, orchestration technologies like Kubernetes, and cloud infrastructure like AWS, Azure and Google Cloud, Linux systems are at the heart of modern enterprise environments, and securing them is essential.

To learn how SentinelOne can help protect the Linux systems in your organization from ransomware and other threats, contact us or request a free demo.

Indicators of Compromise

Source files

03bbfdbad1d1fd93d6c76de9a61e9cfc49e7e319
095538ff7643b0c142335c978bfe83d32a68cdac
1f08d9d0fe90d572a1bb0488ffe60e9f20c11002
226aea1e37bc2d809115ceb6ac5ea99e62d759c9
2aa6a1019c16f4142888278098f0c3263e95e446
33306b854770f95d0a164932d72bec1f78de54bf
51acdb8f29726fe7d5b6207f106e7138b564fd39
5413adf32129d50c4984e406d5a3804435d1cfc1
60b5beffaf738f5112233ed9b36975822c1f7bfc
6f3c3129fc2ac56b61fa4df21e723f3dd2aceb70
8ec866aa48a9bb8d6df7fbbe1a073390f4b0098c
d0231ce29ea7a63bea7451c42d69e93c83babb48
d41b8a7bc9bc444372e06e67585a8086d6ae8cfc
d46fbc4a57dce813574ee312001eaad0aa4e52de
ddcf4a6bc32afe94e3ea955eead9db179d5394c2
e3e8ed6ac01e6edb8d8848b1472882afb0b36f0b
f84ffe172f9d6db18320ad69fc9eade46c41e9da

Payload Samples

355d70ffe98e6f22b6c3ad8d045e025a5ff78260
63580c4b49d350cf1701fb906c94318a683ae668
63ff8359da29c3ba8352ceb4939f2a3e64987ab6
dd495839a4f4db0331c72a4483071a1cef8da17e

MITRE ATT&CK

T1014  Defense Evasion
T1059.006  Command and Scripting Interpreter: Python
T1068  Privilege Escalation
T1070.003  Indicator Removal: Clear Command History
T1070.004  Indicator Removal: File Deletion
T1070.002  Indicator Removal: Clear Linux or Mac System Logs
T1140  Deobfuscate/Decode Files or Information
T1222.002  File and Directory Permissions Modification: Linux and Mac File and Directory Permissions Modification
T1485  Data Destruction
T1486  Data Encrypted for Impact
T1562.001  Impair Defenses: Disable or Modify Tools
T1562.012  Impair Defenses: Disable or Modify Linux Audit System
T1573.002  Encrypted Channel: Asymmetric Cryptography

U.S. Internet Leaked Years of Internal, Customer Emails

The Minnesota-based Internet provider U.S. Internet Corp. has a business unit called Securence, which specializes in providing filtered, secure email services to businesses, educational institutions and government agencies worldwide. But until it was notified last week, U.S. Internet was publishing more than a decade’s worth of its internal email — and that of thousands of Securence clients — in plain text out on the Internet and just a click away for anyone with a Web browser.

Headquartered in Minnetonka, Minn., U.S. Internet is a regional ISP that provides fiber and wireless Internet service. The ISP’s Securence division bills itself as “a leading provider of email filtering and management software that includes email protection and security services for small business, enterprise, educational and government institutions worldwide.”

U.S. Internet/Securence says your email is secure. Nothing could be further from the truth.

Roughly a week ago, KrebsOnSecurity was contacted by Hold Security, a Milwaukee-based cybersecurity firm. Hold Security founder Alex Holden said his researchers had unearthed a public link to a U.S. Internet email server listing more than 6,500 domain names, each with its own clickable link.

A tiny portion of the more than 6,500 customers who trusted U.S. Internet with their email.

Drilling down into those individual domain links revealed inboxes for each employee or user of these exposed host names. Some of the emails dated back to 2008; others were as recent as the present day.

Securence counts among its customers dozens of state and local governments, including: nc.gov — the official website of North Carolina; stillwatermn.gov, the website for the city of Stillwater, Minn.; and cityoffrederickmd.gov, the website for the government of Frederick, Md.

Incredibly, included in this giant index of U.S. Internet customer emails were the internal messages for every current and former employee of U.S. Internet and its subsidiary USI Wireless. Since that index also included the messages of U.S. Internet’s CEO Travis Carter, KrebsOnSecurity forwarded one of Mr. Carter’s own recent emails to him, along with a request to understand how exactly the company managed to screw things up so spectacularly.

Individual inboxes of U.S. Wireless employees were published in clear text on the Internet.

Within minutes of that notification, U.S. Internet pulled all of the published inboxes offline. Mr. Carter responded and said his team was investigating how it happened. In the same breath, the CEO asked if KrebsOnSecurity does security consulting for hire (I do not).

[Author’s note: Perhaps Mr. Carter was frantically casting about for any expertise he could find in a tough moment. But I found the request personally offensive, because I couldn’t shake the notion that maybe the company was hoping it could buy my silence.]

Earlier this week, Mr. Carter replied with a highly technical explanation that ultimately did little to explain why or how so many internal and customer inboxes were published in plain text on the Internet.

“The feedback from my team was a issue with the Ansible playbook that controls the Nginx configuration for our IMAP servers,” Carter said, noting that this incorrect configuration was put in place by a former employee and never caught. U.S. Internet has not shared how long these messages were exposed.

“The rest of the platform and other backend services are being audited to verify the Ansible playbooks are correct,” Carter said.

Holden said he also discovered that hackers have been abusing a Securence link scrubbing and anti-spam service called Url-Shield to create links that look benign but instead redirect visitors to hacked and malicious websites.

“The bad guys modify the malicious link reporting into redirects to their own malicious sites,” Holden said. “That’s how the bad guys drive traffic to their sites and increase search engine rankings.”

For example, clicking the Securence link shown in the screenshot directly above leads one to a website that tries to trick visitors into allowing site notifications by couching the request as a CAPTCHA request designed to separate humans from bots. After approving the deceptive CAPTCHA/notification request, the link forwards the visitor to a Russian internationalized domain name (рпроаг[.]рф).

The link to this malicious and deceptive website was created using Securence’s link-scrubbing service. Notification pop-ups were blocked when this site tried to disguise a prompt for accepting notifications as a form of CAPTCHA.

U.S. Internet has not responded to questions about how long it has been exposing all of its internal and customer emails, or when the errant configuration changes were made. The company also still has not disclosed the incident on its website. The last press release on the site dates back to March 2020.

KrebsOnSecurity has been writing about data breaches for nearly two decades, but this one easily takes the cake in terms of the level of incompetence needed to make such a huge mistake and leave it unnoticed. I’m not sure what the proper response from authorities or regulators should be to this incident, but it’s clear that U.S. Internet should not be allowed to manage anyone’s email unless and until it can demonstrate more transparency, and prove that it has radically revamped its security.

Fat Patch Tuesday, February 2024 Edition

Microsoft Corp. today pushed software updates to plug more than 70 security holes in its Windows operating systems and related products, including two zero-day vulnerabilities that are already being exploited in active attacks.

Top of the heap on this Fat Patch Tuesday is CVE-2024-21412, a “security feature bypass” in the way Windows handles Internet Shortcut Files that Microsoft says is being targeted in active exploits. Redmond’s advisory for this bug says an attacker would need to convince or trick a user into opening a malicious shortcut file.

Researchers at Trend Micro have tied the ongoing exploitation of CVE-2024-21412 to an advanced persistent threat group dubbed “Water Hydra,” which they say has been using the vulnerability to execute a malicious Microsoft Installer File (.msi) that in turn unloads a remote access trojan (RAT) onto infected Windows systems.

The other zero-day flaw is CVE-2024-21351, another security feature bypass — this one in the built-in Windows SmartScreen component that tries to screen out potentially malicious files downloaded from the Web. Kevin Breen at Immersive Labs says it’s important to note that this vulnerability alone is not enough for an attacker to compromise a user’s workstation, and instead would likely be used in conjunction with something like a spear phishing attack that delivers a malicious file.

Satnam Narang, senior staff research engineer at Tenable, said this is the fifth vulnerability in Windows SmartScreen patched since 2022 and all five have been exploited in the wild as zero-days. They include CVE-2022-44698 in December 2022, CVE-2023-24880 in March 2023, CVE-2023-32049 in July 2023 and CVE-2023-36025 in November 2023.

Narang called special attention to CVE-2024-21410, an “elevation of privilege” bug in Microsoft Exchange Server that Microsoft says is likely to be exploited by attackers. Attacks on this flaw would lead to the disclosure of NTLM hashes, which could be leveraged as part of an NTLM relay or “pass the hash” attack, which lets an attacker masquerade as a legitimate user without ever having to log in.

“We know that flaws that can disclose sensitive information like NTLM hashes are very valuable to attackers,” Narang said. “A Russian-based threat actor leveraged a similar vulnerability to carry out attacks – CVE-2023-23397 is an Elevation of Privilege vulnerability in Microsoft Outlook patched in March 2023.”

Microsoft notes that prior to its Exchange Server 2019 Cumulative Update 14 (CU14), a security feature called Extended Protection for Authentication (EPA), which provides NTLM credential relay protections, was not enabled by default.

“Going forward, CU14 enables this by default on Exchange servers, which is why it is important to upgrade,” Narang said.

Rapid7’s lead software engineer Adam Barnett highlighted CVE-2024-21413, a critical remote code execution bug in Microsoft Office that could be exploited just by viewing a specially-crafted message in the Outlook Preview pane.

“Microsoft Office typically shields users from a variety of attacks by opening files with Mark of the Web in Protected View, which means Office will render the document without fetching potentially malicious external resources,” Barnett said. “CVE-2024-21413 is a critical RCE vulnerability in Office which allows an attacker to cause a file to open in editing mode as though the user had agreed to trust the file.”

Barnett stressed that administrators responsible for Office 2016 installations who apply patches outside of Microsoft Update should note the advisory lists no fewer than five separate patches which must be installed to achieve remediation of CVE-2024-21413; individual update knowledge base (KB) articles further note that partially-patched Office installations will be blocked from starting until the correct combination of patches has been installed.

It’s a good idea for Windows end-users to stay current with security updates from Microsoft, which can quickly pile up otherwise. That doesn’t mean you have to install them on Patch Tuesday. Indeed, waiting a day or three before updating is a sane response, given that sometimes updates go awry and usually within a few days Microsoft has fixed any issues with its patches. It’s also smart to back up your data and/or image your Windows drive before applying new updates.

For a more detailed breakdown of the individual flaws addressed by Microsoft today, check out the SANS Internet Storm Center’s list. For those admins responsible for maintaining larger Windows environments, it often pays to keep an eye on Askwoody.com, which frequently points out when specific Microsoft updates are creating problems for a number of users.

PinnacleOne ExecBrief | Safe, Secure, and Trustworthy AI

Welcome back to the re-launched PinnacleOne Executive Brief. Intended for corporate executives and senior leadership in risk, strategy, and security roles, the P1 ExecBrief provides actionable insights on critical developments spanning geopolitics, cybersecurity, strategic technology, and related policy dynamics.

For our second post, we summarize SentinelOne’s response to the National Institute of Standards and Technology (NIST) Request for Information on its responsibilities under the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence issued on October 30, 2023.

Please subscribe to read future issues and forward this newsletter to your colleagues to get them to sign up as well.

Feel free to contact us directly with any comments or questions: pinnacleone-info@sentinelone.com

Insight Focus | Safe, Secure, and Trustworthy AI

Artificial Intelligence permeates conversations in all sectors, offering promises innovators cannot resist and perils that have analysts nervous. SentinelOne has observed firsthand how both attackers and network defenders are increasingly using AI to improve their respective cyber capabilities.

In October, the White House issued an Executive Order to exert federal leadership on ensuring responsible AI use. The EO tasked NIST with creating AI evaluation capabilities and development guidelines. SentinelOne submitted a response to their Request For Information informed by our own expertise and experience using AI to secure our clients’ systems.

Summary

In our response, we provided our assessment of the impact of emerging AI technologies on cybersecurity for both offensive and defensive purposes. While current effects are nascent, we expect these technologies to become increasingly used by both malign actors and network defenders – effectively we are entering the era of AI vs. AI.

It is critical that federal policy enable rather than hamstring research and development efforts that help American firms and government agencies keep pace with rapidly moving threats. We describe one example of AI-enabled cybersecurity technology, Purple AI, that we developed to drive industry innovation and stay ahead of the risk curve.

We then provided our observations of AI risk management approaches across the client industries we serve. Here we summarize three different stances being adopted by firms with various risk tolerances, market incentives, and industry considerations. We also provide a summary of the analytic framework we developed to advise firms on establishing and maintaining enterprise-wide AI risk management processes and tools.

Finally, we offered three specific recommendations that encourage NIST to emulate its successful approach with the Cybersecurity Framework. Our recommendations emphasize the importance and value of common ground truths and lexicon, industry-specific framework profiles, and a focus on voluntary, risk-based guidance.

Impact of AI on Cybersecurity

AI systems are force enablers for both attackers and defenders. SentinelOne assesses that threat actors, including state and non-state groups, are using AI to augment existing tactics, techniques, and procedures (TTPs) and improve their offensive effectiveness. We also see the rapid emergence of industry technologies that leverage AI to automate detection and response and improve defensive capabilities, including our own Purple AI.

Cyber Attacker Use of AI: Raising the Floor

Given the persistent fallibility of humans, social engineering has become a component of most cyber attacks. Even before ChatGPT, attackers used generative AI to win over the trust of unsuspecting victims. In 2019, attackers spoofed the voice of a CEO using AI voice technology to scam another higher-up out of $243,000, and by 2022 two thirds of cybersecurity professionals reported that deepfakes were a component of attacks they had investigated the previous year. The proliferation of these technologies will enable lesser skilled, opportunistic hackers to conduct advanced social engineering attacks that combine text, voice, and video to increase the scale and frequency of access operations.

Meanwhile, highly capable state threat actors are best placed to fully leverage the AI frontier for advanced cyber operations, but these effects will remain hard to discern and attribute. The UK’s National Cyber Security Centre predicts that AI may assist with malware and exploit development, but that in the near term, human expertise will continue to drive this innovation.

It is important to note that many uses of AI for vulnerability detection and exploitation may not have any indicators of AI’s role in enabling the attack.

Cyber Defender Use of AI: Increase the Signal, Reduce the Noise

SentinelOne has learned lessons in developing our own AI-enabled cyber defense capabilities. Increasingly, we recognize that analysts can get overwhelmed by alert fatigue, forcing responders to spend valuable time synthesizing complex and ambiguous information. Security problems are becoming data problems. Given this, we designed our Purple AI system to take in large quantities of data and use a generative model that accepts natural language inputs, rather than code, to help human analysts accelerate threat hunting, analysis, and response.

By combining the power of AI for data analytics and conversational chats, an analyst could use a prompt such as “Is my environment infected with SmoothOperator?”, or “Do I have any indicators of SmoothOperator on my endpoints?” to hunt for a specific named threat. In response, these tools will deliver results along with context-aware insights based on the observed behavior and identified anomalies within the returned data. Suggested follow up questions and best next actions are also provided. With a single click of a button the analyst can then trigger one or multiple actions, while continuing the conversation and analysis.

Purple AI is an example of how the cybersecurity industry is integrating generative AI into solutions that allow defenders, like threat hunters and SOC team analysts, to leverage the power of large language models to identify and respond to attacks faster and easier.

Using natural language conversational prompts and responses, even less-experienced or under-resourced security teams can rapidly expose suspicious and malicious behaviors that hitherto were only possible to discover with highly-trained analysts dedicating many hours of effort. These tools will allow threat hunters to ask questions about specific, known threats and get fast answers without needing to create manual queries around indicators of compromise.

How PinnacleOne Advises Firms on AI Risk Management

We tell firms to focus on six areas of AI risk management, with specific considerations for each:

(1) Regulatory & Compliance
(2) Technology & Security
(3) Data & Privacy
(4) Reputational
(5) Legal
(6) Operational Disruption

We have found that those organizations that already have effective cross-functional teams to coordinate infosec, legal, and enterprise technology responsibilities are well positioned to manage AI integration. An area of common challenge, however, is the relationship between AI development engineers, product managers, trust & safety, and infosec teams. In many firms, for example, it is not clear who owns model poisoning/injection attacks, prompt abuse, or corporate data controls, among other emerging AI security challenges. Further, the shifting landscape of third-party platform integrations, open-source proliferation, and DIY capability sets hinders technology planning, security roadmaps, and budgets.

AI Safety and Security

While specific tools and technical processes will differ across industry and use-case, we observe common approaches to AI safety and security taking shape. These approaches combine traditional cybersecurity red teaming methodologies with evolving AI-specific techniques that ensure system outputs and performance are trustworthy, safe, and secure.

AI security and safety assurance involves a much broader approach than conventional information security practices (e.g., penetration testing) and must incorporate assessments of model fairness/bias, harmful content, and misuse. Any AI risk mitigation framework should encourage firms to deploy an integrated suite of tools and processes that address:

  • Cybersecurity (e.g., compromised confidentiality, integrity, or availability);
  • Model security (e.g., poisoning, evasion, inference, and extraction attacks); and
  • Ethical practice (e.g., bias, misuse, harmful content, and social impact).

This requires security practices that emulate not only malicious threat actors but also how normal users may unintentionally trigger problematic outputs or leak sensitive data. To do this effectively, an AI safety and security team requires a mix of cybersecurity practitioners, AI/ML engineers, and policy/legal experts to ensure compliance and user trust.

There will be a need for specific practices and tools for specialized use-cases. For example, the production of synthetic media may require embedded digital watermarking to demonstrate provable provenance and traceability of training data to avoid copyright liability.

Also, as AI agents become more powerful and prevalent, a much larger set of legal, ethical, and security considerations will be raised regarding what controls are in place to govern the behavior of such agents and constrain their ability to take independent action in the real world (e.g., access cloud computing resources, make financial transactions, register as a business, impersonate a human, etc.).

Further, the implications for geopolitical competition and national security will become increasingly important as great powers race to capture strategic advantage. Working at the frontier of these technologies will involve inherent risk and U.S. adversaries may accept a higher risk tolerance in order to leap ahead. International standard setting and trust-building measures will be necessary to prevent a race-to-the-bottom competitive dynamic and security spiral.

To manage and mitigate these risks, we will need common and broad guardrails but also specific best practices and security tools calibrated to different industries. These should be based on the nature of the use-case, operational scope, scale of potential externalities, effectiveness of controls, and take into account a cost-benefit balance between innovation and risk. Given the pace of change, maintaining this balance will be an ever evolving effort.

Policy Recommendations

Our recommendations to NIST emphasize the importance and value of common ground truths and lexicon, industry-specific framework profiles, and a focus on voluntary, risk-based guidance.

  1. NIST should align and cross-walk the Cybersecurity Framework (CSF) and AI Risk Management Framework to ensure a set of common ground truths. The firms we advise often look towards industry practices at competing firms to model their cybersecurity. Having a common set of terms and standards will enable companies to understand and strive towards industry benchmarks that both set the floor and raise the ceiling across diverse sectors.
  2. NIST should follow the CSF model and develop framework profiles for various sectors and subsectors, including profiles that range from small and medium-sized businesses to large enterprises. These Framework Profiles would help organizations in these diverse sectors align their organizational requirements and objectives, risk appetite, and resources against the desired outcomes of the RMF and its core components.
  3. In keeping with the successful CSF approach, NIST should seek to maximize voluntary adoption of its guidance that addresses the societal impacts and negative externalities from AI that pose the greatest risk – prescriptive regulation is a domain best suited for congressional and executive action, given the larger national security and economic considerations at issue.

We appreciated the opportunity to share with NIST our perspective on both AI risk and opportunity. We encourage our industry peers to maintain close policy engagement to ensure the U.S. keeps its innovative edge in AI-enabled cybersecurity to stay ahead of increasingly capable and malign threats.

The Good, the Bad and the Ugly in Cybersecurity – Week 6

The Good | Public-Private Partnership to Crack Down on Commercial Spyware

The private and public sectors have done a rare thing this week: they agreed that something must be done about the proliferation and abuse of commercial spyware, a problem that has exploded in recent years.

On Monday, the State Department announced a new policy to impose visa restrictions on individuals involved with the misuse of spyware. The restrictions, which extend to spouses and children, can be imposed on anyone believed to be involved in the targeting, unlawful surveillance, harassment, suppression or intimidation of others including journalists, activists and dissidents.

This was quickly followed on Tuesday by the announcement of an international agreement dubbed The Pall Mall Process, in which a range of public and private entities agreed to tackle the threat posed by the proliferation and abuse of commercial cyber intrusion tools and services. The agreement included tech companies like Google and Microsoft and a host of government agencies from the US, UK, Europe, Japan and Singapore.

The explosion in spyware and spyware services has led to a rapid expansion in the pool of state and non-state actors, tracked as private sector offensive actors (PSOAs) by cyber defenders, who can access and deploy sophisticated cyber intrusion tools to commit cyber crimes against nations, organizations and individuals.

Recognizing the need for greater oversight of the development and distribution of commercial spyware, the Pall Mall Process will aim to establish guiding principles and policy options for governments, industry and civil society regarding the development, purchase, and use of commercially available cyber intrusion capabilities.

The Bad | Volt Typhoon Prepares Attacks on US Critical Infrastructure

CISA, the NSA and the FBI have warned this week that China-backed state-sponsored actor Volt Typhoon has been conducting a long-running campaign to infiltrate and hide within the networks of US critical infrastructure organizations. In a detailed joint advisory, they warned that the campaign avoids typical tactics, techniques and procedures of other threat actors and will easily evade simple security solutions.

The threat actor’s MO involves initial access through n-day and zero-day vulnerabilities in network gear such as Ivanti, Citrix, Cisco and Fortinet appliances (see below). Volt Typhoon operators then use VPN sessions to maintain persistent access and blend in with regular traffic. Notably, they tend to avoid dropping malware on the victim’s network to help avoid discovery, preferring direct control via command line sessions and LOLBins.

Top among these is heavy use of PowerShell to perform targeted queries on Windows security event logs, and of vssadmin to access the sensitive Active Directory database file NTDS.dit from a volume shadow copy, a technique which allows the attacker to bypass the file locking mechanism that protects the file in a live Windows environment. NTDS.dit contains hashed versions of passwords, which are then subjected to offline brute-force attacks to reveal clear text credentials.

Following credential dumping, Volt Typhoon remains silent on the network. CISA says the threat actor is pre-positioning itself in preparation for a future disruptive or destructive cyber attack on US critical infrastructure. Security teams, particularly those defending critical infrastructure entities, can review the detailed detection and hunting recommendations here.

The Ugly | Host of New Bugs Disclosed for Ivanti, Cisco and Fortinet

While Volt Typhoon operators may feel down about this week’s exposure, the tranche of new bugs in the very appliances that they target will no doubt be cheering them up. Ivanti, Fortinet, and Cisco have all disclosed new serious vulnerabilities this week.

A critical security bug in Fortinet’s FortiOS, rated 9.6 on the CVSSv3 scale, was disclosed this Thursday. The flaw, CVE-2024-21762, could allow a remote unauthenticated attacker to execute arbitrary code.

Fortinet says that the vulnerability is potentially being exploited in the wild, though few other details are available at this time. The bug affects versions prior to FortiOS 7.4.3. Users that cannot upgrade may disable SSL VPN, but the company explicitly warns that simply disabling webmode is not a valid workaround. The latest bug comes on the back of further updates to address vulnerabilities previously patched as CVE-2023-34992 (CVSSv3 9.7) in the FortiSIEM supervisor.


Meanwhile, Cisco Expressway Series devices were found to have multiple vulnerabilities that could allow unauthenticated remote attackers to perform CSRF (cross-site request forgery) attacks. CVE-2024-20254 and CVE-2024-20255 affect Cisco Expressway Series devices in the default configuration; CVE-2024-20252 affects devices only if the cluster database (CDB) API feature has been enabled. It is disabled by default.

To cap off a worrying week for network admins, Ivanti this week disclosed yet another flaw affecting Ivanti Connect Secure. CVE-2024-22024 is an 8.3 CVSSv3 rated bug that could allow an attacker to access a subset of restricted resources without authentication. It is not known to be currently exploited in the wild, but patch now, while threat actors are busy elsewhere.

Decrypting SentinelOne Cloud Detection | The STAR™ Rules Engine in Real-Time CWPP

In this, the fifth installment of our Detection Engine blog series, we examine the STAR Rules Engine and its role as one of five detection engines that work together as part of our cloud workload protection platform (CWPP) to detect, block, and respond to runtime threats impacting cloud workloads. (The first, second, third, and fourth posts in the series discuss the Static AI, Behavioral AI, Application Control, and Cloud Threat Intelligence Engines, respectively.)

STAR Rules Engine 101

Cloud workloads can create millions of security telemetry events daily. Security teams need an automated means of finding indicators of compromise (IOCs) lurking deep within that security data lake.

The STAR Rules Engine is a rules-based engine which enables users to transform queries of cloud workload telemetry into automated threat hunting rules. Whenever a match is made, these custom rules trigger alerts and, optionally, automated response actions. In this way, STAR rules become a force multiplier for security teams, equipping them to take swift action at scale in response to an ever-evolving threat landscape.

How Does It Work?

STAR rules apply custom detection logic which is immediately pushed either to every agent in the customer’s fleet or to a subset: it’s the customer’s choice. Each rule can be customized to suit the organization’s specific requirements. Users have the flexibility to choose between receiving alerts only or taking a mitigating response action, such as killing the matching process, network quarantine, and more. SentinelOne offers automatic mitigation options based on policy settings for suspicious or malicious threat confidence levels.

When a STAR rule matches inbound telemetry from the CWPP agent to the Singularity Data Lake, an alert is issued in near-real time. If there is a response action tied to the rule, the agent carries out that response on the Storyline™ tied to the telemetry event that made the match. STAR alerts can be found in the management console under Alerts and in the Activity log, ensuring that security practitioners remain informed. Threats are prominently displayed in the Threats section, providing a comprehensive overview of detected risks.

Creating a STAR Rule

STAR rules can be created based upon any number of 200+ telemetry attributes. While that may sound daunting, creating a rule is done in 4 easy steps and with profound impact on SOC productivity.

STEP 1: Write a Query

The first step is to write a Singularity Data Lake query. This part represents the art of your threat hunting expertise. Early customers of Purple AI will find this step especially straightforward, as the GenAI allows natural language queries that are automatically translated to the appropriate SDL syntax.

Copy this query syntax to your paste buffer:

event.type == "File Modification" AND endpoint.os == "linux" AND tgt.file.path == "/etc/passwd"

STEP 2: Create a New STAR Rule

Click ‘Star Custom Rules’ and then ‘New Rule’. Give the rule a unique name, description, and severity. Click ‘Next’ to go to the rule condition.

Set an appropriate scope, and then paste the query syntax from Step 1 into the rule. Click ‘Next’.

STEP 3: Add Response Actions

In short, what do you want to do when your rule matches a threat query? Do you wish to treat the detection as a suspicious or a malicious threat? If so, tick the box ‘Treat as a threat’ and select the appropriate choices, ‘Suspicious Threat Policy’ or ‘Malicious Threat Policy’. The automated response action is governed by the policy defined for the agent’s scope. Click ‘Next’.

STEP 4: Save the Rule

In the Summary window, review the rule details. If you want the rule to be immediately active, tick the ‘Active rule immediately after saving’ box. If everything looks right, click ‘Submit’.

If creating a rule seems simple, great. That is the intention. A list of your custom rules can be found within the SentinelOne management console by navigating to Sentinels on the left navigation pane, and then selecting the STAR CUSTOM RULES tab as shown below. Here, we find the newly created rule, “JM-Detect passwd changes”. We will reference this rule later in the blog when we examine an example detection.

Best Practices for STAR Rules

Customers of Singularity Complete are entitled to use up to 100 STAR rules with the option to purchase additional rules in packs of 300 up to a maximum of 1000 STAR rules per customer.

To maximize the effectiveness of STAR rules, organizations should follow these best practices:

  1. Tune Queries for Accuracy: Firstly, be sure to fine-tune queries to generate a narrow and relevant list of true-positive matches. This ensures that alerts are focused on actionable threats. For example, if a query creates hundreds of results, narrow the time frame or add more search parameters to the rules to create more tightly focused results.
  2. Save Queries as Custom Rules: Convert successful queries into custom rules, allowing continuous monitoring and automated response.
  3. Iterative Rule Refinement: Let a STAR rule run for a period and then investigate its generated alerts (i.e., matches). Refine the rule to further narrow the results based on feedback and analysis so that the final match is laser-focused on the specific condition you desire. This iterative approach ensures optimal threat detection.
  4. Automated Mitigation: Once your rule is finely tuned and its results meet expectations, select an Auto Response to automatically mitigate identified threats. This proactive approach achieves bespoke, rapid cloud threat detection and response for your unique use case.

By following these best practices, organizations can harness the full potential of SentinelOne’s STAR rules.

Example: STAR Rule Detecting a Change to /etc/passwd

Earlier we created a STAR rule, “JM-Detect passwd changes”, which triggers a ‘Suspicious’ alert any time a file modification is made to a Linux VM’s /etc/passwd file path. Now that the rule has been created, its custom detection logic is pushed to all CWPP agents within the scope.

Looking at the Incidents panel, we see two detections labelled ‘Suspicious’, one of which was made by the STAR rule and relates to a bash script.

By clicking on the incident, we see expanded details. Here, a threat actor ran a bash script which began by escalating privileges via the sudo command, the originating process. The STAR rule detected a change to the VM’s passwd file.

Keep in mind that the rule was configured to use the Suspicious Threat Policy, which is set to Detect Mode; this is why the Incident Status is marked Unresolved or NOT MITIGATED. Had the rule been invoked under the Malicious Threat Policy, with the policy set to take a mitigating action (such as process kill), the response would have been automated, offloading work from the overburdened security analyst.

Here again, we take a moment to emphasize the careful decision making behind the choice of automated response action. The flexibility and control are yours. The combination of agent and agentless cloud security capabilities is a powerful, transformative mechanism for creating better cloud security outcomes.

As a next step, the security analyst would surely initiate a mitigation action, which is easily done directly in the SentinelOne management console via the ‘Actions’ button in the upper right of the Incidents pane. Quite likely they would also open a security ticket, document their findings and actions, and inform the DevOps owner for this Linux VM.

Conclusion

One of five engines in SentinelOne’s real-time CWPP solution, the STAR Rules Engine triggers alerts on custom logic matching of security telemetry, as well as optionally automating prescriptive response action. In this way, the STAR Rules Engine becomes a force multiplier for the cloud security operations team. It works alongside the other local engines to deliver real-time cloud threat detection and response that far exceeds the limited abilities of agentless CWPP.

To learn more about the value of real-time, AI-powered CWPP in your cloud security stack, head over to the solution homepage, or see how Singularity Cloud Workload Security works with a 2-minute guided walk-through here. Whenever you are ready, connect with one of our cloud security experts for a personalized demo.

From Cybercrime Saul Goodman to the Russian GRU

In 2021, the exclusive Russian cybercrime forum Mazafaka was hacked. The leaked user database shows one of the forum’s founders was an attorney who advised Russia’s top hackers on the legal risks of their work, and what to do if they got caught. A review of this user’s hacker identities shows that during his time on the forums he served as an officer in the special forces of the GRU, the foreign military intelligence agency of the Russian Federation.

Launched in 2001 under the tagline “Network terrorism,” Mazafaka would evolve into one of the most guarded Russian-language cybercrime communities. The forum’s member roster includes a Who’s Who of top Russian cybercriminals, and it featured sub-forums for a wide range of cybercrime specialities, including malware, spam, coding and identity theft.

One representation of the leaked Mazafaka database.

In almost any database leak, the first accounts listed are usually the administrators and early core members. But the Mazafaka user information posted online was not a database file per se, and it was clearly edited, redacted and restructured by whoever released it. As a result, it can be difficult to tell which members are the earliest users.

The original Mazafaka is known to have been launched by a hacker using the nickname “Stalker.” However, the lowest numbered (non-admin) user ID in the Mazafaka database belongs to another individual who used the handle “Djamix,” and the email address djamix@mazafaka[.]ru.

From the forum’s inception until around 2008, Djamix was one of its most active and eloquent contributors. Djamix told forum members he was a lawyer, and nearly all of his posts included legal analyses of various public cases involving hackers arrested and charged with cybercrimes in Russia and abroad.

“Hiding with purely technical parameters will not help in a serious matter,” Djamix advised Maza members in September 2007. “In order to ESCAPE the law, you need to KNOW the law. This is the most important thing. Technical capabilities cannot overcome intelligence and cunning.”

Stalker himself credited Djamix with keeping Mazafaka online for so many years. In a retrospective post published to Livejournal in 2014 titled, “Mazafaka, from conception to the present day,” Stalker said Djamix had become a core member of the community.

“This guy is everywhere,” Stalker said of Djamix. “There’s not a thing on [Mazafaka] that he doesn’t take part in. For me, he is a stimulus-irritant and thanks to him, Maza is still alive. Our rallying force!”

Djamix told other forum denizens he was a licensed attorney who could be hired for remote or in-person consultations, and his posts on Mazafaka and other Russian boards show several hackers facing legal jeopardy likely took him up on this offer.

“I have the right to represent your interests in court,” Djamix said on the Russian-language cybercrime forum Verified in Jan. 2011. “Remotely (in the form of constant support and consultations), or in person – this is discussed separately. As well as the cost of my services.”

WHO IS DJAMIX?

A search on djamix@mazafaka[.]ru at DomainTools.com reveals this address has been used to register at least 10 domain names since 2008. Those include several websites about life in and around Sochi, Russia, the site of the 2014 Winter Olympics, as well as a nearby coastal town called Adler. All of those sites say they were registered to an Aleksei Safronov from Sochi who also lists Adler as a hometown.

The breach tracking service Constella Intelligence finds that the phone number associated with those domains — +7.9676442212 — is tied to a Facebook account for an Aleksei Valerievich Safronov from Sochi. Mr. Safronov’s Facebook profile, which was last updated in October 2022, says his ICQ instant messenger number is 53765. This is the same ICQ number assigned to Djamix in the Mazafaka user database.

The Facebook account for Aleksey Safronov.

A “Djamix” account on the forum privetsochi[.]ru (“Hello Sochi”) says this user was born Oct. 2, 1970, and that his website is uposter[.]ru. This Russian language news site’s tagline is, “We Create Communication,” and it focuses heavily on news about Sochi, Adler, Russia and the war in Ukraine, with a strong pro-Kremlin bent.

Safronov’s Facebook profile also gives his Skype username as “Djamixadler,” and it includes dozens of photos of him dressed in military fatigues along with a regiment of soldiers deploying in fairly remote areas of Russia. Some of those photos date back to 2008.

In several of the images, we can see a patch on the arm of Safronov’s jacket that bears the logo of the Spetsnaz GRU, a special forces unit of the Russian military. According to a 2020 report from the Congressional Research Service, the GRU operates both as an intelligence agency — collecting human, cyber, and signals intelligence — and as a military organization responsible for battlefield reconnaissance and the operation of Russia’s Spetsnaz military commando units.

Mr. Safronov posted this image of himself on Facebook in 2016. The insignia of the GRU can be seen on his sleeve.

“In recent years, reports have linked the GRU to some of Russia’s most aggressive and public intelligence operations,” the CRS report explains. “Reportedly, the GRU played a key role in Russia’s occupation of Ukraine’s Crimea region and invasion of eastern Ukraine, the attempted assassination of former Russian intelligence officer Sergei Skripal in the United Kingdom, interference in the 2016 U.S. presidential elections, disinformation and propaganda operations, and some of the world’s most damaging cyberattacks.”

According to the Russia-focused investigative news outlet Meduza, in 2014 the Russian Defense Ministry created its “information-operation troops” for action in “cyber-confrontations with potential adversaries.”

“Later, sources in the Defense Ministry explained that these new troops were meant to ‘disrupt the potential adversary’s information networks,’” Meduza reported in 2018. “Recruiters reportedly went looking for ‘hackers who have had problems with the law.’”

Mr. Safronov did not respond to multiple requests for comment. A 2018 treatise written by Aleksei Valerievich Safronov titled “One Hundred Years of GRU Military Intelligence” explains the significance of the bat in the seal of the GRU.

“One way or another, the bat is an emblem that unites all active and retired intelligence officers; it is a symbol of unity and exclusivity,” Safronov wrote. “And, in general, it doesn’t matter who we’re talking about – a secret GRU agent somewhere in the army or a sniper in any of the special forces brigades. They all did and are doing one very important and responsible thing.”

It’s unclear what role Mr. Safronov plays or played in the GRU, but it seems likely the military intelligence agency would have exploited his considerable technical skills, knowledge and connections on the Russian cybercrime forums.

Searching on Safronov’s domain uposter[.]ru in Constella Intelligence reveals that this domain was used in 2022 to register an account at a popular Spanish-language discussion forum dedicated to helping applicants prepare for a career in the Guardia Civil, one of Spain’s two national police forces. Pivoting in Constella on the Russian IP address used to register that account shows three other accounts were created at the same Spanish user forum around the same date.

Mark Rasch, a former cybercrime prosecutor for the U.S. Department of Justice, said there has always been a close relationship between the GRU and the Russian hacker community. Rasch noted that in the early 2000s, the GRU was soliciting hackers with the skills necessary to hack US banks in order to procure funds to help finance Russia’s war in Chechnya.

“The guy is heavily hooked into the Russian cyber community, and that’s useful for intelligence services,” Rasch said. “He could have been infiltrating the community to monitor it for the GRU. Or he could just be a guy wearing a military uniform.”

Blocking Attacks with an AI-powered CNAPP | Welcome to Cloud Native Security!

Today is an exciting day. We announced that our acquisition of PingSafe is now complete. PingSafe represents an important part of SentinelOne’s cloud security future, and I wanted to take this opportunity to outline our journey ahead. More broadly, I will address how and why cloud security often represents the most challenging aspect of enterprise security and is in dire need of an upgrade.


Cloud As Change

The power of cloud computing and containers is that they can drive business transformation and innovation by enabling greater scale, speed, and reliability in the development and deployment of applications and data pipelines.

However, the use of new technologies can also introduce security concerns and an expanded attack surface. A tension exists between security and innovation, and for many companies, that balance is currently not tipping in security’s favor.

One of the most interesting trends in cyber attacks during 2023 was the pivot many threat actors made to the cloud. This pivot is partly due to the inability of many of their targets to prevent and detect threat activity in their cloud environments.

Some of last year’s high-profile breaches included customized cloud attack techniques to enable ransomware deployments. Notably, Roasted 0ktapus (also known as Scattered Spider) performed sophisticated discovery, lateral movement, persistence, and defense evasion across cloud environments in its attacks.

Beyond targeting cloud infrastructure, threat actors are modifying and disabling cloud services and identities as a part of their attacks.

Additionally, the level of automation within cloud breaches is rising, contributing to increased compromise speeds. Even readily available open-source toolsets like AlienFox (which SentinelOne Labs reported on here) and Androxgh0st (for which the FBI and CISA recently released a TLP:CLEAR advisory) now include automated scripts: in these two instances, for persistence and privilege escalation within cloud identity, and for the manipulation of cloud service provider email services.

These developments in how threat actors approach cloud environments necessitate a shift in mindset toward cloud security. Take cloud misconfigurations, for example: with Gartner warning that by 2025, 99% of cloud breaches will be due to preventable human error, there has been a rightful focus on identifying, assessing, and remediating cloud misconfigurations.

While Cloud Security Posture Management (CSPM) capabilities have been key for cloud security practitioners, and the Well-Architected Frameworks from cloud service providers have been invaluable, the work of comparing ever-changing deployment realities to compliance standards remains a noisy affair with little prioritization.

With threat actors now seen causing misconfigurations themselves, alerts such as excessive or changing permissions in Azure AD or AWS IAM may be compromise artifacts within a greater attack chain. A misconfiguration that results from threat actor activity can be indicative of malicious actions already performed, which means misconfigurations now need to be treated as potential compromise artifacts.

Security operations center analysts may now have to work alongside cloud security and infrastructure teams to identify when misconfigurations represent malicious activity.

We should also not forget that before cloud environments are deployed into production, there are increasing supply chain risks within the build pipeline from the use of open-source components. Our colleagues at ReversingLabs recently compiled some excellent reporting on trends specific to this space.

When taken all together, it’s clear that current cloud security tooling can be improved to assist infrastructure and development teams tasked with keeping the cloud safe.

Three Realities of Cloud Security for Today and Tomorrow

As we look at what customers need today and will require tomorrow, three realities quickly emerge. The first reality is that organizations require security and visibility controls across the breadth of their environments, from build pipelines, deployment services, identity and endpoints to cloud infrastructure.

Ideally, these capabilities are provided by a single platform, not only for ease of use and the benefits of using fewer vendors but also because the crucial correlation of security data among these environments is what is needed to drive security innovation to where it needs to be.

The second reality is that while visibility is paramount to security, it cannot be its sole component. Effective cloud security must go further and include the power to block at machine speed and respond. By nature, this necessitates a protection solution that combines the stopping and forensics power of an agent with the speed and breadth of agentless security.

The third and most important reality is that security must be sustainably achieved and maintained. Nobody needs, or has time for, noise! With limited resources and constant time pressure, cloud security insights must be prioritized, actionable, and free of false positives, and security practitioners require AI assistance to help speed their workflows.

Our focus in building a modern, comprehensive approach to cloud security is to deliver a single platform, purpose-built to empower users and deliver visibility, protection, response, and remediation capabilities throughout the cloud lifecycle in real time.

We are approaching this by bringing together our existing agent-based Cloud Workload Security (aka Cloud Workload Protection Platform or CWPP) and Cloud Data Security threat protection products along with our new capabilities from our PingSafe acquisition, which from here on shall be referred to as Cloud Native Security.

Combined, they form our comprehensive CNAPP.

For the mandatory definition after a potentially new acronym, we’ll rely on Gartner:

“…a unified and tightly integrated set of security and compliance capabilities designed to secure and protect cloud-native applications across development and production”

To understand the outcomes of our comprehensive CNAPP, it is worthwhile using a framework of the four guiding principles it revolves around:

Visibility Across the Entire Environment

Before anything can be achieved, you need visibility. As the saying goes: “you can’t protect what you can’t see”. SentinelOne ensures security teams can manage cloud sprawl with agentless visibility, easily deployed within minutes, combined with integrated endpoint and identity founded on a single platform.

Many attacks start on the endpoint, escalate privileges via identity-related attacks, and then move into the cloud. Our customers instantly gain centralized visibility and discovery across multi-cloud environments, with an asset inventory that includes mapped relationships between resources.

With views into areas typically reserved for development and infrastructure teams, security can now oversee the health of container images, Infrastructure as Code (IaC) templates, Kubernetes clusters, and diverse cloud services. This enables security teams to identify and report on misconfigurations, secrets, and malware, and to hunt down unmanaged instances.

Visibility throughout the cloud lifecycle matters because it allows security to be embedded into the build process, in a shift-left style, as well as across crucial production environments.

However, SentinelOne’s comprehensive CNAPP goes a step further, enabling customers not just to shift left but to Respond Left, using IaC template scanning and our partnership with Snyk. This is the ability for a security team to work alongside development and infrastructure teams to apply insights post-incident.

Remediation and response should include pivoting back to the build pipeline to ensure the possibility of re-infection is thoroughly reduced at its root cause.

Action What Matters by Defending with an Attacker’s Perspective

An unfortunate reality is that noise is now a defining feature of cloud security tooling. A focus on compliance and best practices, while important, leaves many security teams unable to differentiate between what is misconfigured or vulnerable according to best practices and which weaknesses represent likely exploit potential.

SentinelOne’s CNAPP has a revolutionary approach to cutting through this noise – the incorporation of an automated attacker’s perspective. Our Offensive Security Engine simulates attacker methods safely, and validates where your cloud environment is actually exploitable.

This is false-positive-free, evidence-based reporting of Exploit Paths that instantly identifies the most crucial risks requiring immediate attention. It is the difference between a theoretical CVE list that asks “Where do I start with all these areas I need to fix?” and an answer to “What needs to be fixed now?”, and that distinction makes a world of difference in risk management.

Moving Beyond Detection to Protection

In a world of advanced threats and shortened time frames to breach, visibility alone quite clearly is insufficient. By combining agent and agentless capabilities, SentinelOne is able to pair our in-depth understanding of cloud threats with robust runtime protection. This enables advanced cloud detection and response that includes protection, by blocking threats at machine speed.

For response, rather than relying on time-costly human intervention or self-created automated workflows with limited scope, SentinelOne identifies and kills malicious processes and quarantines files as they run.

This allows security teams to more quickly and efficiently move into the remediation phase of an incident, and perform remedial actions via SentinelOne.

Being AI-driven

AI is everywhere today, but not all AI is equivalent. Layering AI over the top of legacy, siloed tooling will bring some value, but infusing AI and automation into a cohesive platform will yield faster, better results. No one wants 68 agents, 14 consoles showing alert queues, or 3 query languages.

We’ve built the Singularity Platform as the AI-infused foundation that underpins and brings together all of our security solutions, so when we say ONE agent and ONE platform, we mean it. Many vendors will offer you a portfolio; we will deliver the platform.

Our cloud security solutions are built on this single platform, meaning across identity, endpoint, and cloud we can correlate activities in the Singularity Data Lake and use our AI security analyst Purple AI to spot and stop nefarious activities.

What Happens Now?

Now that we have closed the PingSafe acquisition, we are rapidly moving to integrate its capabilities into our platform and offer the following:

  • Cloud Security Posture Management (CSPM): Automatically identify misconfigurations, ensuring compliance with industry benchmarks such as NIST, MITRE, CIS, PCI-DSS, and more.
  • Agentless Vulnerability Scanning: Discover vulnerabilities across your entire cloud infrastructure without installing any agent.
  • Infrastructure as Code (IaC) Scanning: Shift left to identify pre-production issues in IaC templates and container configuration files.
  • Kubernetes Security Posture Management (KSPM): Secure containers from code to runtime.
  • Secrets Scanning: Detect and prevent cloud credential leakage in public repositories.
  • Offensive Security Engine: Use an attacker’s mindset to simulate attacks, verify actual exploit paths and prioritize those issues with breach potential assessments.

We expect to launch Cloud Native Security in mid-2024 as we begin the global rollout phases, and we will immediately offer customers the ability to run the existing PingSafe solution.

If you would like to see how Cloud Native Security provides rapid visibility with prioritized, false-positive-free, evidence-based reporting and actionable insights, or if you would like to ensure your critical cloud workloads are secured with machine-speed protection from our Cloud Workload Security, contact us or request a free demo.

Customer Guidance on Emerging AnyDesk Cybersecurity Incident

AnyDesk, a maker of remote desktop software, has recently confirmed a cyberattack in which hackers were able to access the company’s production environment. AnyDesk stated that no authentication tokens were stolen during the attack, as these tokens only exist on the end user’s device and are associated with the device’s fingerprint. However, out of caution, the company has revoked all passwords to its web portal and recommends that users change their passwords, especially if they are used on other sites. Further, AnyDesk will be revoking all previous code signing certificates.

It is strongly recommended that all users install the latest version of the software (version 8.0.8 for Windows; binaries for other platforms are still signed with the old certificate), as the old code signing certificate will soon be revoked. Furthermore, despite AnyDesk’s assurance that passwords were not stolen in the attack, it is strongly advised that all AnyDesk users change their passwords, especially if they use their AnyDesk password at other sites.

The following query can be used to identify executables in your environment that have been signed with the older, soon-to-be-revoked certificate (including prior versions of the AnyDesk client):

((src.process.publisher in:anycase ('PHILANDRO SOFTWARE GMBH')) OR (tgt.process.publisher in:anycase ('PHILANDRO SOFTWARE GMBH')))
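
To focus specifically on the AnyDesk client rather than on anything signed by that publisher, a narrowed, illustrative variant of the same query adds a process-name filter, in line with the query-tuning best practice discussed earlier in this post. The process name used below is an assumption for illustration; adjust it to the binary names actually present in your environment.

((src.process.publisher in:anycase ('PHILANDRO SOFTWARE GMBH')) OR (tgt.process.publisher in:anycase ('PHILANDRO SOFTWARE GMBH'))) AND (tgt.process.name in:anycase ('AnyDesk.exe'))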

We will continue to share additional context and insight as the situation unfolds so that we can provide more exact guidance to help mitigate risk in your environment.

SentinelOne Vigilance Team