Firebolt raises $127M more for its new approach to cheaper and more efficient Big Data analytics

Snowflake changed the conversation for many companies when it comes to the potential of data warehousing. Now one of the startups hoping to disrupt the disruptor is announcing a big round of funding to expand its own business.

Firebolt, which has built a new kind of cloud data warehouse that promises more efficient and cheaper analytics on the data stored within it, is announcing a major Series B of $127 million on the heels of strong demand for its services.

The company, which only came out of stealth mode in December, is not disclosing its valuation with this round, which brings the total raised by the Israeli company to $164 million. New backers Dawn Capital and K5 Global are in this round, alongside previous backers Zeev Ventures, TLV Partners, Bessemer Venture Partners and Angular Ventures.

Nor is it disclosing many details about its customers at the moment. CEO and co-founder Eldad Farkash told me in an interview that most of them are U.S.-based, and that the numbers have grown from the dozen or so that were using Firebolt when it was still in stealth mode (it worked quietly for a couple of years building its product and onboarding customers before finally launching six months ago). They are all migrating from existing data warehousing solutions like Snowflake or BigQuery. In other words, its customers are already cloud-native, Big Data companies: it’s not trying to proselytize on the basic concept but work with those who are already in a specific place as a business.

“If you’re not using Snowflake or BigQuery already, we prefer you come back to us later,” he said. Judging by the size and quick succession of the round, that focus is paying off.

The challenge that Firebolt set out to tackle is that while data warehousing has become a key way for enterprises to analyze, update and manage their big data stores — after all, your data is only as good as the tools you have to parse it and keep it secure — typically data warehousing solutions are not efficient, and they can cost a lot of money to maintain.

The three founders of Firebolt — Farkash (CEO), Saar Bitner (COO) and Ariel Yaroshevich (CTO) — saw the challenge firsthand at a previous company, the business intelligence powerhouse Sisense, where Farkash was a co-founder and Bitner and Yaroshevich were members of the founding team. At Sisense, the company continually came up against the same issue: when you are dealing in terabytes of data, cloud data warehouses strain to deliver the performance needed to power analytics and other tools, and the only way to mitigate that was to pile on more cloud capacity. That started to become very expensive.

Firebolt set out to fix that by taking a different approach, rearchitecting the concept. As Farkash sees it, while data warehousing has indeed been a big breakthrough in Big Data, it has started to feel like a dated solution as data troves have grown.

“Data warehouses are solving yesterday’s problem, which was, ‘How do I migrate to the cloud and deal with scale?’” he told me back in December. Google’s BigQuery, Amazon’s RedShift and Snowflake are fitting answers for that issue, he believes, but “we see Firebolt as the new entrant in that space, with a new take on design on technology. We change the discussion from one of scale to one of speed and efficiency.”

The startup claims performance up to 182 times faster than other data warehouses, via a SQL-based system built on academic research — previously unapplied in industry — into handling data in a lighter way, using new techniques for compression and parsing. Data lakes can in turn be connected to a wider data ecosystem, and what this translates to is a much smaller cloud capacity requirement, and lower costs.
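To make the efficiency argument concrete, here is a generic sketch of one such technique — run-length encoding, a textbook columnar compression scheme, not Firebolt's actual implementation — showing how a query engine can answer an aggregate by touching compressed runs instead of raw rows:

```python
# Generic illustration of columnar compression; not Firebolt's actual techniques.
def rle_encode(values):
    """Run-length encode a low-cardinality column into [[value, count], ...]."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([v, 1])  # start a new run
    return runs

def count_equal(runs, target):
    """Answer `SELECT COUNT(*) WHERE col = target` from the runs, not the rows."""
    return sum(count for value, count in runs if value == target)

column = ["US"] * 4 + ["DE"] * 2 + ["US"]  # 7 rows collapse to 3 runs
runs = rle_encode(column)
print(runs)                      # [['US', 4], ['DE', 2], ['US', 1]]
print(count_equal(runs, "US"))   # 5
```

The engine scans 3 runs instead of 7 rows here; on billions of rows with skewed values, that gap is what translates into less cloud capacity for the same query.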

Fast-forward to today, and the company says the concept is gaining a lot of traction with engineers and developers in areas like business intelligence, customer-facing services that need to parse a lot of data to serve users in real time, and back-end data applications. That is proving out what investors suspected would be a shift before the startup even launched, stealthily or otherwise.

“I’ve been an investor at Firebolt since their Series A round and before they had any paying customers,” said Oren Zeev of Zeev Ventures. “What had me invest in Firebolt is mostly the team. A group of highly experienced executives mostly from the big data space who understand the market very well, and the pain organizations are experiencing. In addition, after speaking to a few of my portfolio companies and Firebolt’s initial design partners, it was clear that Firebolt is solving a major pain, so all in all, it was a fairly easy decision. The market in which Firebolt operates is huge if you consider the valuations of Snowflake and Databricks. Even more importantly, it is growing rapidly as the migration from on-premise data warehouse platforms to the cloud is gaining momentum, and as more and more companies rely on data for their operations and are building data applications.”

Feature Spotlight: Data-Driven Threat Intelligence with Singularity Signal

Many organizations today have adopted cyber threat intelligence (CTI) programs with the goal of using attacker insights to bolster their defenses. The reality is that most teams struggle to gain full value from their threat intelligence platforms because of their limited scalability to large datasets and lack of actionability. Singularity Signal is an open threat intelligence platform from SentinelOne that harnesses data and analyzes it at unmatched scale to address the threat intelligence data volume challenge. Singularity Signal combines artificial- and human-based intelligence to provide context, enrichment, and actionability to cyber data, empowering organizations to stay a step ahead with unparalleled insight into the attacker mindset.

What is Cyber Threat Intelligence?

The primary goal of a CTI provider is to gather intelligence on the tactics, techniques, and procedures (TTPs) of adversaries so organizations can make more informed, data-driven decisions about their cybersecurity programs. These decisions ultimately drive more effective protection, detection, and response against modern cyber-attacks. Gartner defines threat intelligence as:

“Evidence-based knowledge, including context, mechanisms, indicators, implications and actionable advice about an existing or emerging menace or hazard to assets that can be used to inform decisions regarding the subject’s response to that menace or hazard.”

As a result, CTI can help organizations discover blind spots, provide decision-makers with informed insights into the threat landscape, and ultimately mitigate risk.

Effectively applying threat intelligence empowers security analysts to identify and understand the relationship between adversaries and their TTPs and take proactive steps in their environment accordingly.

Today’s Threat Intelligence Challenges

The cyber threat landscape continues to evolve in complexity and stakes, with recent examples being the DarkSide ransomware campaign against Colonial Pipeline, SUNBURST, the malware variant behind the SolarWinds corporate attack, and the Microsoft Exchange zero-day vulnerabilities that were rapidly exploited by HAFNIUM. And that’s just the tip of the iceberg.

In response, many organizations have implemented cyber threat intelligence over the past several years as an integral part of their information security programs. By integrating CTI, they hope to better prepare for emergent threats and take informed action against cyber risk.

Trouble is, many of these teams are sold on the promise of threat intelligence but rarely see its tangible value in practice. According to research from the Information Security Forum (ISF), 82% of its members have a cyber threat intelligence capability, with the remaining 18% planning to implement one in the next twelve months. However, only 25% of those members believe their current capability delivers the expected objectives. In other words, teams are making significant investments but seeing dubious returns.

Many of the common pitfalls of modern threat intelligence stem from the inability to effectively process, correlate, and analyze data, given the exponential growth of available telemetry and signals. Most threat intelligence solutions available today depend heavily on human analysts to consolidate, parse, enrich, and validate data, and their analyses focus too deeply on attribution and backstory versus remediation and action.

In addition, threat intelligence sources often exist in vacuums, and teams lack the right technology and processes to connect and correlate their data for a more complete picture. As a result, it has become costly and highly time-consuming to operationalize CTI, and threat researchers struggle to weed out meaningful insight from the noise.

At SentinelOne, we believe the key to modernizing CTI and maximizing its value is in combining the best of artificial intelligence (AI) with human intelligence. By doing so, organizations resolve two primary pain points: the amount of data that requires manual processing, and the time it takes to manually correlate and contextualize it.

How Can Singularity Signal Help My Organization?

Singularity Signal combines artificial intelligence (AI) and machine learning models with human-enriched intelligence and context to help you preempt even the most advanced attacks and derive tangible value from your threat intelligence investments.

This is achieved through the Singularity Signal AI engine, designed to process billions of signals in real time. The Signal AI engine analyzes data gathered from the SentinelOne Singularity user base, as well as a global dataset of open-source, commercial, and SIGINT feeds. This provides our researchers with unique insights into the probability of attacks and enables them to perform continuous threat modeling to predict adversaries’ next moves.

With Singularity Signal, you gain a complete, tailored picture of how you are impacted by advanced persistent threats (APTs), nation-state groups, and emergent attacks such as zero-days through real-time enrichment of tactics, techniques, and procedures (TTPs), ongoing threat intelligence reporting curated by our experts, and easy integration of custom intelligence sources through the Singularity Marketplace.

Singularity Signal addresses the data problem in CTI and empowers human threat researchers and security analysts to make informed, data-backed decisions. This helps you take a more proactive, more automated, and more informed approach to your defenses.


Summary

SentinelOne is committed to helping customers to become proactive with their cybersecurity programs. Recent attacks have demonstrated the importance of understanding adversaries and how they operate in order to reduce their attack surface. Singularity Signal empowers modern security teams to break down the common barriers to running a CTI program by optimizing both artificial- and human-based intelligence, and mastering swathes of cyber data at scale.

For more information, join the Singularity Signal webinar or request a demo.




Pequity, a compensation platform designed for more equitable pay, raises $19M

Diversity and inclusion have become central topics in the world of work. At its best, improving them is a holistic effort, involving not just conceiving of products with this in mind, but also hiring and managing talent in a diverse and inclusive way. A new startup called Pequity, which has built a product to help with the latter — specifically, equitable compensation — has now raised funding: a sign of the demand in the market, as well as of how tech is being harnessed to help meet it.

The San Francisco-based startup has raised $19 million in a Series A led by Norwest Venture Partners. First Round Capital, Designer Fund, and Scribble Ventures also participated in the fundraise, which will be used to continue investing in product and also hiring: the company has 20 on its own books now and will aim to double that by the end of this year, on the heels of positive reception in the market.

Since launching officially last year, Pequity has picked up over 100 customers, with an initial focus on fast-scaling companies in its own backyard, a mark of how D&I have come into focus in the tech industry in particular. Those using Pequity to compare and figure out compensation include Instacart, Scale.ai and ClearCo, and the company said that in the last four months, the platform’s been used to make more than 5,000 job offers.

Kaitlyn Knopp, the CEO, who co-founded the company with Warren Lebovics, came up with the idea for Pequity in much the same way that many innovations in the world of enterprise IT come to market: through her own first-hand experience.

She spent a decade working in employment compensation in the Bay Area, with previous roles at Google, Instacart, and Cruise. In that time, she found the tools that many companies used were lacking and simply “clunky” when it came to compensation analysis.

“The way the market has worked so far is that platforms had compensation as an element but not the focus,” she said. “It was the end of the tagline, the final part of a ‘CRM for candidates.’ But you still have to fill in all the gaps, you have to set the architecture the right way. And with compensation, you have to bake in your own analytics, which implies that you have to have some expertise.”

Indeed, as with other aspects of enterprise software, she added that the very biggest tech companies sometimes build their own tools, but that leaves smaller or differently focused businesses without better calculation tools, and it also means those tools are siloed and miss out on being shaped by a bigger picture of the world of work. “We wanted to take that process and own it.”

The Pequity product essentially works by plugging into all of the other tools that an HR professional might be using — HRIS, ATS, and payroll products — to manage salaries across the whole of the organization in order to analyse and compare how compensation could look for existing and prospective employees. It combines a company’s own data and then compares it to data from the wider market, including typical industry ranges and market trends, to provide insights to HR teams.
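The comparison step described above can be illustrated with a minimal sketch. To be clear, the field names, roles, and band values below are all invented for illustration — this is not Pequity's schema or logic — but it shows the basic shape of checking internal salaries against market bands:

```python
# Hypothetical illustration of a compensation-band check; not Pequity's actual logic.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    role: str
    salary: int  # annual, USD

# Assumed market bands per role: (low, high), e.g. drawn from survey data.
MARKET_BANDS = {
    "engineer": (120_000, 180_000),
    "designer": (100_000, 150_000),
}

def flag_outliers(employees):
    """Return (name, reason) for employees paid outside the band for their role."""
    flags = []
    for emp in employees:
        low, high = MARKET_BANDS[emp.role]
        if emp.salary < low:
            flags.append((emp.name, "below band"))
        elif emp.salary > high:
            flags.append((emp.name, "above band"))
    return flags

team = [
    Employee("A", "engineer", 110_000),
    Employee("B", "engineer", 150_000),
    Employee("C", "designer", 160_000),
]
print(flag_outliers(team))  # [('A', 'below band'), ('C', 'above band')]
```

A real platform would pull the employee rows from HRIS/ATS/payroll integrations and the bands from live market data, but the comparison itself reduces to this kind of check.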

All of this means that HR teams can make more informed decisions — the first step toward being more transparent and equitable — and it is something Pequity is optimized to cover specifically in how it measures compensation across a team.

In line with that, Knopp wanted to address another aspect of the compensation mindset in a standalone product: the idea of building a tool with a mission, a platform that can bring in data to make transparent and equitable decisions.

“A lot of the comp tools that I’ve interacted with are reactive,” she said. “You may have to do, say, a pay equity test, you do your promotion and merit cycles, and then you find all these issues that you have to solve. We’re flagging those things proactively with our analytics, because we’re plugging into those systems, which will give you those alerts before the decisions need to be made.”

As an added step in that direction, Knopp said that ultimately she believes the tool should be something that those outside of HR, such as managers and employees themselves, should be able to access to better understand the logic of their own compensation and have more information going into any kind of negotiation.

Ultimately, it will be interesting to see whether modernized products like Pequity, which are tackling old problems with a new approach and point of view, find traction in the wider market. If one purpose in HR is to address diversity and inclusion, and part of the problem has been that the tools are just not fit for that purpose, then it seems a no-brainer that we’ll see more organizations trying out new things to see if they can help them in their own race to secure talent.

“Compensation reflects a company’s values, affects its ability to hire talent, and is the biggest expense on its P&L. And yet, most comp teams run on spreadsheets and emails,” said Parker Barrile, Partner at Norwest, in a statement. “Pequity empowers comp teams to design and manage equitable compensation programs with modern software designed by comp professionals, for comp professionals.”

 

Salesforce, AWS announce extended partnership with further two-way integration

Salesforce and AWS represent the two most successful cloud companies in their respective categories. Over the last few years the two cloud giants have had an evolving partnership. Today they announced plans for a new set of integration capabilities to make it easier to share data and build applications that cross the two platforms.

Patrick Stokes, EVP and GM for Platform at Salesforce, points out that the companies have worked together in the past to provide features like secure sharing between the two services, but they were hearing from customers that they wanted to take it further and today’s announcement is the first step towards making that happen.

“[The initial phases of the partnership] have really been massively successful. We’re learning a lot from each other and from our mutual customers about the types of things that they want to try to accomplish, both within the Salesforce portfolio of products, as well as all the Amazon products, so that the two solutions complement each other really nicely. And customers are asking us for more, and so we’re excited to enter into this next phase of our partnership,” Stokes explained.

He added, “The goal really is to unify our platforms, so bring [together] all the power of the Amazon services with all of the power of the Salesforce platform.” These capabilities could be the next step in accomplishing that.

This involves a couple of new features the companies are working on to help developers on both the platform and application side of the equation. For starters that includes enabling developers to virtualize Amazon data inside Salesforce without having to do all the coding to make that happen manually.

“More specifically, we’re going to virtualize Amazon data within the Salesforce platform, so whether you’re working with an S3 bucket, Amazon RDS or whatever it is, we’re going to make it so that the data is virtualized and appears just like it’s native data on the Salesforce platform,” he said.

Similarly, developers building applications on Amazon will be able to access Salesforce data and have it appear natively in Amazon. This involves providing connectors between the two systems to make the data flow smoothly without a lot of coding to make that happen.

The companies are also announcing event sharing capabilities, which makes it easier for both Amazon and Salesforce customers to build microservices-based applications that cross both platforms.

“You can build microservices-oriented architecture that spans the services of Salesforce and Amazon platforms, again without having to write any code. To do that, [we’re developing] out of the box connectors so you can click and drag the events that you want.”

The companies are also announcing plans to make it easier from an identity and access management perspective to access the platforms with a guided setup. Finally, the companies are working on applications to build Amazon Chime communications tooling into Service Cloud and other Salesforce services to build things like virtual call centers using AWS machine learning technology.

Amazon VP of Global Marketing Rachel Thornton says that having the two cloud giants work together in this way should make it easier for developers to create solutions that span the two platforms. “I just think it unlocks such possibilities for developers, and the faster and more innovative developers can be, it just unlocks opportunities for businesses, and creates better customer experiences,” Thornton said.

It’s worth noting that Salesforce also has extensive partnerships with other cloud providers including Microsoft Azure and Google Cloud Platform.

As is typically the case with Salesforce announcements, while all of these capabilities are being announced today, they are still in the development stage and won’t go into beta testing until later this year with GA expected sometime next year. The companies are expected to release more details about the partnership at Dreamforce and re:Invent, their respective customer conferences later this year.

How Cyber Sleuths Cracked an ATM Shimmer Gang

In 2015, police departments worldwide started finding ATMs compromised with advanced new “shimming” devices made to steal data from chip card transactions. Authorities in the United States and abroad had seized many of these shimmers, but for years couldn’t decrypt the data on the devices. This is a story of ingenuity and happenstance, and how one former Secret Service agent helped crack a code that revealed the contours of a global organized crime ring.

Jeffrey Dant was a special agent at the U.S. Secret Service for 12 years until 2015. After that, Dant served as the global lead for the fraud fusion center at Citi, one of the largest financial institutions in the United States.

Not long after joining Citi, Dant heard from industry colleagues at a bank in Mexico who reported finding one of these shimming devices inside the card acceptance slot of a local ATM. As it happens, KrebsOnSecurity wrote about that particular shimmer back in August 2015.

This card ‘shimming’ device is made to read chip-enabled cards and can be inserted directly into the ATM’s card acceptance slot.

The shimmers were an innovation that caused concern on multiple levels. For starters, chip-based payment cards were supposed to make it far more expensive and difficult for thieves to copy and clone cards. But these shimmers took advantage of weaknesses in the way many banks at the time implemented the new chip card standard.

Also, unlike traditional ATM skimmers that run on hidden cell phone batteries, the ATM shimmers found in Mexico did not require any external power source, and thus could remain in operation collecting card data until the device was removed.

When a chip card is inserted, a chip-capable ATM reads the data stored on the smart card by sending an electric current through the chip. Incredibly, these shimmers were able to siphon a small amount of that power (a few milliamps) to record any data transmitted by the card. When the ATM is no longer in use, the skimming device remains dormant, storing the stolen data in an encrypted format.

Dant and other investigators looking into the shimmers didn’t know at the time how the thieves who planted the devices went about gathering the stolen data. Traditional ATM skimmers are either retrieved manually, or they are programmed to transmit the stolen data wirelessly, such as via text message or Bluetooth.

But recall that these shimmers don’t have anywhere near the power needed to transmit data wirelessly, and the flexible shimmers themselves tend to rip apart when retrieved from the mouth of a compromised ATM. So how were the crooks collecting the loot?

“We didn’t know how they were getting the PINs at the time, either,” Dant recalled. “We found out later they were combining the skimmers with old school cameras hidden in fake overhead and side panels on the ATMs.”

Investigators wanted to look at the data stored on the shimmer, but it was encrypted. So they sent it to MasterCard’s forensics lab in the United Kingdom, and to the Secret Service.

“The Secret Service didn’t have any luck with it,” Dant said. “MasterCard in the U.K. was able to understand a little bit at a high level what it was doing, and they confirmed that it was powered by the chip. But the data dump from the shimmer was just encrypted gibberish.”

Organized crime gangs that specialize in deploying skimmers very often will encrypt stolen card data as a way to remove the possibility that any gang members might try to personally siphon and sell the card data in underground markets.

THE DOWNLOAD CARDS

Then in 2017, Dant got a lucky break: Investigators had found a shimming device inside an ATM in New York City, and that device appeared identical to the shimmers found in Mexico two years earlier.

“That was the first one that had showed up in the U.S. at that point,” Dant said.

The Citi team suspected that if they could work backwards from the card data that was known to have been recorded by the skimmers, they might be able to crack the encryption.

“We knew when the shimmer went into the ATM, thanks to closed-circuit television footage,” Dant said. “And we know when that shimmer was discovered. So between that time period of a couple of days, these are the cards that interacted with the skimmer, and so these card numbers are most likely on this device.”

Based on that hunch, MasterCard’s eggheads had success decoding the encrypted gibberish. But they already knew which payment cards had been compromised, so what did investigators stand to gain from breaking the encryption?
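The investigators' approach is essentially a known-plaintext attack: if you know certain card numbers must appear in the dump, you can test candidate keys against them. Here is a deliberately toy sketch — assuming a single-byte XOR cipher, which is vastly weaker than whatever the real shimmers used, and purely to show the shape of the idea:

```python
# Toy known-plaintext key recovery; real shimmer encryption was far stronger.
def xor_bytes(data: bytes, key: int) -> bytes:
    """Encrypt/decrypt by XORing every byte with a single-byte key."""
    return bytes(b ^ key for b in data)

def recover_key(ciphertext: bytes, known_plaintexts):
    """Try every single-byte key; accept one under which a known card number appears."""
    for key in range(256):
        decrypted = xor_bytes(ciphertext, key)
        if any(pan in decrypted for pan in known_plaintexts):
            return key
    return None

# Simulated dump: a card number known to have hit the ATM, encrypted with key 0x5A.
pan = b"4111111111111111"
dump = xor_bytes(b"header" + pan + b"trailer", 0x5A)
print(recover_key(dump, [pan]))  # 90 (0x5A)
```

The principle scales: knowing which cards interacted with the compromised ATM in the window between install and discovery gives cryptanalysts exactly this kind of foothold, whatever the actual cipher.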

According to Dant, this is where things got interesting: They found that the same primary account number (unique 16 digits of the card) was present on the download card and on the shimmers from both New York City and Mexican ATMs.

Further research revealed that account number was tied to a payment card issued years prior by an Austrian bank to a customer who reported never receiving the card in the mail.

“So why is this Austrian bank card number on the download card and two different shimming devices in two different countries, years apart?” Dant said he wondered at the time.

He didn’t have to wait long for an answer. Soon enough, the NYPD brought a case against a group of Romanian men suspected of planting the same shimming devices in both the U.S. and Mexico. Search warrants served against the Romanian defendants turned up multiple copies of the shimmer they’d seized from the compromised ATMs.

“They found an entire ATM skimming lab that had different versions of that shimmer in untrimmed squares of sheet metal,” Dant said. “But what stood out the most was this unique device — the download card.”

The download card (right, in blue) opens an encrypted session with the shimmer, and then transmits the stolen card data to the attached white plastic device. Image: KrebsOnSecurity.com.

The download card consisted of two pieces of plastic about the width of a debit card but a bit longer. The blue plastic part — made to be inserted into a card reader — features the same contacts as a chip card. The blue plastic was attached via a ribbon cable to a white plastic card with a green LED and other electronic components.

Sticking the blue download card into a chip reader revealed the same Austrian card number seen on the shimming devices. It then became very clear what was happening.

“The download card was hard coded with chip card data on it, so that it could open up an encrypted session with the shimmer,” which also had the same card data, Dant said.

The download card, up close. Image: KrebsOnSecurity.com.

Once inserted into the mouth of an ATM card acceptance slot that’s already been retrofitted with one of these shimmers, the download card causes an encrypted data exchange between it and the shimmer. Once that two-way handshake is confirmed and the data transfer is complete, the white device lights up a green LED.

THE MASTER KEY

Dant said when the Romanian crew mass-produced their shimming devices, they did so using the same stolen Austrian bank card number. What this meant was that now the Secret Service and Citi had a master key to discover the same shimming devices installed in other ATMs.

That’s because every time the gang compromised a new ATM, that Austrian account number would traverse the global payment card networks — telling them exactly which ATM had just been hacked.

“We gave that number to the card networks, and they were able to see all the places that card had been used on their networks before,” Dant said. “We also set things up so we got alerts anytime that card number popped up, and we started getting tons of alerts and finding these shimmers all over the world.”
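The alerting workflow Dant describes can be sketched as a simple stream filter. The field names below are invented for illustration — real card networks operate on far richer transaction records — but the core is just matching a flagged account number against the transaction feed:

```python
# Hypothetical monitor: alert wherever a flagged account number appears in a feed.
FLAGGED_PANS = {"4111111111111111"}  # the "master key" card number (example value)

def alerts(transactions):
    """Yield the terminal IDs where a flagged card number was used."""
    for txn in transactions:
        if txn["pan"] in FLAGGED_PANS:
            yield txn["terminal_id"]

feed = [
    {"pan": "5500000000000004", "terminal_id": "ATM-001"},
    {"pan": "4111111111111111", "terminal_id": "ATM-042"},
]
print(list(alerts(feed)))  # ['ATM-042']
```

Because the gang hard-coded one account number into every shimmer, a single entry in the flagged set was enough to light up every newly compromised ATM worldwide.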

For all their sleuthing, Dant and his colleagues never really saw shimming take off in the United States, at least nowhere near as prevalently as in Mexico, he said.

The problem was that many banks in Mexico and other parts of Latin America had not properly implemented the chip card standard, which meant thieves could use shimmed chip card data to make the equivalent of old magnetic stripe-based card transactions.

By the time the Romanian gang’s shimmers started showing up in New York City, the vast majority of U.S. banks had already properly implemented chip card processing in such a way that the same phony chip card transactions which sailed through Mexican banks would simply fail every time they were tried against U.S. institutions.

“It never took off in the U.S., but this kind of activity went on like wildfire for years in Mexico,” Dant said.

The other reason shimming never emerged as a major threat for U.S. financial institutions is that many ATMs have been upgraded over the past decade so that their card acceptance slots are far slimmer, Dant observed.

“That download card is thicker than a lot of debit cards, so a number of institutions were quick to replace the older card slots with newer hardware that reduced the height of a card slot so that you could maybe get a shimmer and a debit card, but definitely not a shimmer and one of these download cards,” he said.

Shortly after ATM shimmers started showing up at banks in Mexico, KrebsOnSecurity spent four days in Mexico tracing the activities of a Romanian organized crime gang that had very recently started its own ATM company there called Intacash.

Sources told KrebsOnSecurity that the Romanian gang also was paying technicians from competing ATM providers to retrofit cash machines with Bluetooth-based skimmers that hooked directly up to the electronics on the inside. Hooked up to the ATM’s internal power, those skimmers could collect card data indefinitely, and the data could be collected wirelessly with a smart phone.

Follow-up reporting last year by the Organized Crime and Corruption Reporting Project (OCCRP) found Intacash and its associates compromised more than 100 ATMs across Mexico using skimmers that were able to remain in place undetected for years. The OCCRP, which dubbed the Romanian group “The Riviera Maya Gang,” estimates the crime syndicate used cloned card data and stolen PINs to steal more than $1.2 billion from bank accounts of tourists visiting the region.

Last month, Mexican authorities arrested Florian “The Shark” Tudor, Intacash’s boss and the reputed ringleader of the Romanian skimming syndicate. Authorities charged that Tudor’s group also specialized in human trafficking, which allowed them to send gang members to compromise ATMs across the border in the United States.

Singularity XDR – From Vision to Reality

These past quarters have been incredibly exciting for SentinelOne as a whole, and specifically for Singularity XDR, capping a terrific year of success for everyone involved, from Engineering to every person in our Global Sales team.

Over the past few months, Singularity XDR has outperformed every other vendor in the MITRE ATT&CK evaluations (in more ways than one), has had unrivaled success in the Gartner Critical Capabilities report, and positioned itself firmly in the Leaders quadrant of the 2021 Gartner EPP Magic Quadrant. Every external data point is important to us, and we pride ourselves on being a data-driven company, but perhaps no data point is as telling as how your industry sees you and how much your rivals emulate your choices (they are arguably your strongest critics), from Palo Alto Networks’ CEO to other security vendors like Check Point in their commentary on the MITRE report.

From its launch, Singularity XDR has remained a means to an end, as opposed to a shiny goal in and of itself. Every investment we’ve made has been 100% tied to a need our customers – from the Fortune 10 to enterprises – have expressed: how can we help accelerate detection and response workflows with fewer clicks, agents, screens, and people? Data and automation are a huge part of that, but they are not the goal; they are a way for us to achieve what our customers need (more on that later).

Furthermore, as Resha Chedda mentioned in a previous post (“5 Questions to Consider Before Choosing the Right XDR Solution”), our approach to XDR is rooted in our unique approach to EDR. Challenges such as data normalization, contextualization, and correlation are not new. Neither is the need to maximize automation and maintain a strong offering across all protected surfaces. It is no coincidence that this market is called XDR rather than XIEM or XOAR. Had our EDR not been best in class, our XDR strategy would be inherently flawed.

That does not mean that EDR is enough. While the EDR fundamentals are critical, they must be combined with net-new innovation and execution at an exponentially larger scale. XDR sits somewhere between evolution and revolution. On one hand, XDR is clearly not just a patchwork of capabilities; it calls for an organic evolution from EPP and EDR. On the other hand, the realities facing an XDR solution – from the threat landscape to the abundance of vulnerable surfaces and the sheer quantity of data – all serve as a catalyst for revolutionary, force-multiplying technological advancement.

From E to XDR

Both EDR and XDR are essentially a Venn diagram of Data, Detection, Automation and Action, with the sweet spot, unsurprisingly, in the middle. There are several foundational EDR product capabilities that SentinelOne has spent years perfecting, and they have been a huge part of our success. It is exactly that sweet spot which: A) is so hard to replicate; and B) is critical to XDR (and why so many “XDR” vendors are failing).

Storyline™ + MITRE

  • The 2021 MITRE ATT&CK evaluation (like those before it) has become, for all intents and purposes, the benchmark for EDR. While the results are open to interpretation and almost every vendor can claim to be the best, we are actually proudest of a metric that often gets overlooked and is probably the only one that translates directly to MTTR (Mean Time to Respond): despite being the only vendor with 100% visibility, and despite recording more observed behaviors, SentinelOne created fewer distinct incidents than any other vendor.
  • This is thanks to Storyline: the ability to correlate and aggregate hundreds of suspicious activities across multiple entities (files, processes, users, domains, IPs and more) into a single campaign-level insight. This means that analysts using SentinelOne can triage, investigate and respond to large, complex attacks with exponentially fewer clicks and, more importantly, in less time.

1 Click Remediation – Across Every Platform

  • One of the most impactful derivatives of Storyline is that an entire campaign can be mitigated, remediated and, in some cases, entirely rolled back with a single click. Because Storyline correlates all of the different activities and entities, remediating the storyline translates into remediating an entire campaign. SentinelOne’s approach to response has always been focused on automation and cross-platform parity, and it will continue to be. For example, we recently launched a Remote Scripting Orchestration capability that enables our customers to automatically execute multiple scripts across their entire environment, regardless of OS, with a single, easy-to-use workflow.

STAR™

  • Storyline Active Response (STAR)™ is our cloud-based automated hunting, detection, and response engine. Balancing our own built-in detections with a powerful mechanism for partners and customers to define their own was not an easy task, but that is what we have achieved with STAR. Every query that can be run in Singularity can also be defined as a rule that monitors every incoming data point in near real-time. Upon triggering, these rules can initiate anything from a simple alert to a complex playbook of actions. STAR is already a key part of our EDR and will evolve with Singularity XDR in terms of both the data it monitors and the actions it can initiate.

365 Days of Live Hunting Data

  • The term “data retention” gets thrown around far too frequently in this industry. Many vendors claim to offer long-term data retention, but on closer inspection it is either only partial data or only partially accessible. Within Singularity, we offer our customers the ability to perform live hunting across 365 days (or more if needed) of data. For context, SentinelOne was deemed the only vendor with 100% MITRE ATT&CK visibility, so that is an unparalleled amount of data. Since SolarWinds and SUNBURST, many of our customers have used our ability to query 365 days of data to build a picture of what occurred in their environment over a whole year.

It is the combination and balance of foundations like these, alongside many others, that enables us to deliver on XDR.

Data & Automation – As Guiding Principles

Despite everything said above, we are far from done. We continue to build, integrate and expand on all fronts.

The recent acquisition of Scalyr is probably the best example of our strategy becoming a reality, as well as of the inseparable bond between being the best EDR and the best XDR. With data being a cornerstone of every security problem, not just XDR, it was clear to us that we needed to establish Singularity on a foundation that can optimize for the “holy trinity” of data: performance, scale and cost. Scalyr’s technology complements everything we want to achieve with both EDR and XDR: unparalleled speed, efficiency and flexibility, and a perfect fit for rapid ingestion of data from an increasing number of new sources, both endpoint and beyond. Additionally, we are now seeing just how critical it is to choose a technology designed to address questions such as “How easy is it to ingest new types of data?”, “How efficient is the data normalization process?” and “Does the technology support the creation of abstraction and analysis layers on top of the data?”, instead of just buying a “SIEM” or creating a “Graph”.


Marketplace and Integrating into existing Security Stacks

The recently launched Singularity Marketplace is another huge part of our XDR strategy. SentinelOne has a great track record of launching new products, whether covering new surfaces or addressing new use cases, but we recognize that many of our customers have diverse security stacks. With the Singularity Marketplace we facilitate the integration and orchestration of data and response across those stacks, in a way that alienates nobody and optimizes for speed, simplicity and, above all else, impact. The list and scope of XDR applications we support for data ingestion, correlation and response is growing by the week, prioritized by a single voice: that of our customers. We make sure that every source we ingest and every API we expose actually makes an impact: does it enhance context, does it improve root-cause analysis, does it accelerate remediation?

Conclusion

Today’s marketing and positioning around the need for XDR can be confusing. Different technologies make similar claims, leaving buyers with too many options and the burden of researching what each actually means. For us at SentinelOne, it is these initiatives, together with the EDR foundations mentioned above and several other ground-breaking projects, from covering new surfaces to introducing new workflows, that will help us keep delivering what our customers need. Singularity XDR will not compromise: we are not a SOAR, not a SIEM, but the world’s best XDR. And we’re only just getting started.

If you would like to learn more about STAR and the SentinelOne XDR platform, contact us for more information or request a free demo.



Memory.ai, the startup behind time-tracking app Timely, raises $14M to build more AI-based productivity apps

Time is your most valuable asset — as the saying goes — and today a startup called Memory.ai, which is building AI-based productivity tools to help you with your own time management, is announcing some funding to double down on its ambitions: It wants not only to help manage your time, but to, essentially, provide ways to use it better in the future.

The startup, based out of Oslo, Norway, initially made its name with an app called Timely, a tool for tracking time spent on different tasks. It is aimed not just at quantified-self geeks, but at those who need to track time for practical reasons, such as consultants and others who work on billable hours. Timely has racked up 500,000 users since 2014, including more than 5,000 paying businesses in 160 countries.

Now, Memory.ai has raised $14 million as it gears up to launch its next apps, Dewo (pronounced “De-Voh”), an app that is meant to help people do more “deep work” by learning about what they are working on and filtering out distractions to focus better; and Glue, described as a knowledge hub to help in the creative process. Both are due to be released later in the year.

The funding is being led by local investors Melesio and Sanden, with participation from Investinor, Concentric and SNÖ Ventures, who backed Memory.ai previously.

“Productivity apps” has always been something of a nebulous category in the world of connected work. They can variously cover any kind of collaboration management software ranging from Asana and Jira through to Slack and Notion; or software that makes doing an existing work task more efficient than you did it before (e.g. Microsoft has described all of what goes into Microsoft 365 — Excel, Word, PowerPoint, etc. — as “productivity apps”); or, yes, apps like those from Memory.ai that aim to improve your concentration or time management.

These days, however, it feels like the worlds of AI and advances in mobile computing are increasingly coming together to evolve that concept once again.

If the first wave of smartphone communications and the apps that are run on smartphone devices — social, gaming, productivity, media, information, etc. — have led to us getting pinged by a huge amount of data from lots of different places, all of the time, then could it be that the second wave is quite possibly going to usher in a newer wave of tools to handle all that better, built on the premise that not everything is of equal importance? No-mo FOMO? We’ll see.

In any case, some bigger platform players are also helping to push the agenda of what productivity means in this day and age.

For example, in Apple’s recent preview of iOS 15 (due to come out later this year) the company gave a supercharge to its existing “do not disturb” feature on its phones, where it showed off a new Focus mode, letting users customize how and when they want to receive notifications from which apps, and even which apps they want to have displayed, all organized by different times of day (e.g. work time), place, calendar items and so on.

“Today, iPhone plays so many roles in our lives. It’s where we get information, how people reach us, and where we get things done. This is great, but it means our attention is being pulled in so many different directions and finding that balance between work and life can be tricky,” said Apple’s Craig Federighi in the WWDC keynote earlier this month. “We want to free up space to focus and help you be in the moment.” How well that gets used, and how much other platforms like Google follow suit, will be interesting to see play out. It feels, in any case, like it could be the start of something.

And, serendipitously — or maybe because this is some kind of zeitgeist — this is also playing into what Memory.ai has built and is building. 

Mathias Mikkelsen, the Oslo-based founder of Memory.ai, first came up with his idea for Timely (which had also been the original name of the whole startup) when he was working as a designer in the ad industry, one of those jobs that needed to track what he was working on, and for how long, in order to get paid.

He said he knew the whole system as it existed was inefficient: “I just thought it was insane how cumbersome and old it was. But at the same time how important it was for the task,” he said.

Mikkelsen had an entrepreneurial itch he was keen to scratch, and this idea would become the salve. He was so taken with building a startup around time management that he sold his apartment in Oslo and moved to San Francisco, which he believed was the epicenter of startup innovation. He tells me he lived off the proceeds of his flat for two years “in a closet” in a hacker house, bootstrapping Timely, until eventually getting into an accelerator (500 Startups) and subsequently starting to raise money. He eventually moved back to Oslo after two years to continue growing the business, as well as to live somewhere a little more spacious.

The startup’s big technical breakthrough with Timely was to figure out an efficient way of tracking time for different tasks, not just time worked on anything, without people having to go through a lot of data entry.

The solution: to integrate with a person’s computer, plus a basic to-do schedule for a day or week, and then match up which files are open when to determine how long one works for one client or another. Phone or messaging conversations, for the moment, are not included, and neither are the contents of documents — just the titles of them. Nor is data coming from wearable devices, although you could see how that, too, might prove useful.

The basic premise is to be personalised, so managers and others cannot use Timely to track exactly what people are doing, although they can track and bill for those billable hours. All this is important, as it also will feed into how Dewo and Glue will work.

The startup’s big conceptual breakthrough came around the same time: Getting time tracking or any productivity right “has never been a UI problem,” Mikkelsen said. “It’s a human nature problem.” This is where the AI comes in, to nudge people towards something they identify as important, and nudge them away from work that might not contribute to that. Tackling bigger issues beyond time are essential to improving productivity overall, which is why Memory.ai now wants to extend to apps for carving out time for deep thinking and creative thinking.

While it might seem to be a threat that a company like Apple has identified the same time management predicament that Memory.ai has, and is looking to solve that itself, Mikkelsen is not fazed. He said he thinks of Focus as not unlike Apple’s work on Health: there will be ways of feeding information into Apple’s tool to make it work better for the user, and so that will be Memory.ai’s opportunity to hopefully grow, not cannibalize, its own audience with Timely and its two new apps. It is, in a sense, a timely disruption.

“Memory’s proven software is already redefining how businesses around the world track, plan and manage their time. We look forward to working with the team to help new markets profit from the efficiencies, insights and transparency of a Memory-enabled workforce,” said Arild Engh, a partner at Melesio, in a statement.

Kjartan Rist, a partner at Concentric, added: “We continue to be impressed with Memory’s vision to build and launch best-in-class products for the global marketplace. The company is well on its way to becoming a world leader in workplace productivity and collaboration, particularly in light of the remote and hybrid working revolution of the last 12 months. We look forward to supporting Mathias and the team in this exciting new chapter.”

Vantage raises $4M to help businesses understand their AWS costs

Vantage, a service that helps businesses analyze and reduce their AWS costs, today announced that it has raised a $4 million seed round led by Andreessen Horowitz. A number of angel investors, including Brianne Kimmel, Julia Lipton, Stephanie Friedman, Calvin French Owen, Ben and Moisey Uretsky, Mitch Wainer and Justin Gage, also participated in this round.

Vantage started out with a focus on making the AWS console a bit easier to use — and helping businesses figure out what they are spending their cloud infrastructure budgets on in the process. But as Vantage co-founder and CEO Ben Schaechter told me, it was the cost transparency features that really caught on with users.

“We were advertising ourselves as being an alternative AWS console with a focus on developer experience and cost transparency,” he said. “What was interesting is — even in the early days of early access before the formal GA launch in January — I would say more than 95% of the feedback that we were getting from customers was entirely around the cost features that we had in Vantage.”

Image Credits: Vantage

Like any good startup, the Vantage team looked at this and decided to double down on these features and highlight them in its marketing, though it kept the existing AWS Console-related tools as well. The reason the other tools didn’t quite take off, Schaechter believes, is because more and more, AWS users have become accustomed to infrastructure-as-code to do their own automatic provisioning. And with that, they spend a lot less time in the AWS Console anyway.

“But one consistent thing — across the board — was that people were having a really, really hard time 12 times a year, where they would get a shocking AWS bill and had to figure out what happened. What Vantage is doing today is providing a lot of value on the transparency front there,” he said.

Over the course of the last few months, the team added a number of new features to its cost transparency tools, including machine learning-driven predictions (both on the overall account level and service level) and the ability to share reports across teams.

Image Credits: Vantage

While Vantage expects to add support for other clouds in the future, likely starting with Azure and then GCP, that’s actually not what the team is focused on right now. Instead, Schaechter noted, the team plans to add support for bringing in data from third-party cloud services.

“The number one line item for companies tends to be AWS, GCP, Azure,” he said. “But then, after that, it’s Datadog, Cloudflare, Sumo Logic, things along those lines. Right now, there’s no way to see a P&L or an ROI from a cloud usage-based perspective. Vantage can be the tool that shows you, essentially, all of your cloud costs in one space.”

That is likely the vision the investors bought into, as well, and even though Vantage is now going up against enterprise tools like Apptio’s Cloudability and VMware’s CloudHealth, Schaechter doesn’t seem to be all that worried about the competition. He argues that these are tools that were born in a time when AWS had only a handful of services and only a few ways of interacting with those. He believes that Vantage, as a modern self-service platform, will have quite a few advantages over these older services.

“You can get up and running in a few clicks. You don’t have to talk to a sales team. We’re helping a large number of startups at this stage all the way up to the enterprise, whereas Cloudability and CloudHealth are, in my mind, kind of antiquated enterprise offerings. No startup is choosing to use those at this point, as far as I know,” he said.

The team, which until now mostly consisted of Schaechter and his co-founder and CTO Brooke McKim, bootstrapped the company to this point. Now they plan to use the new capital to build out the team (the company is actively hiring right now), on both the development and go-to-market sides.

The company offers a free starter plan for businesses that track up to $2,500 in monthly AWS cost, with paid plans starting at $30 per month for those who need to track larger accounts.

DarkRadiation | Abusing Bash For Linux and Docker Container Ransomware

While new ransomware families are a common occurrence these days, a recently discovered ransomware dubbed ‘DarkRadiation’ is especially noteworthy for defenders. First, it targets Linux and Docker cloud containers, making it of particular concern to enterprises. Secondly, DarkRadiation is written entirely in Bash, a feature that can make it difficult for some security solutions to identify as a threat. In this post, we’ll take a look at the DarkRadiation Bash scripts and show how this novel ransomware can be detected.

What is DarkRadiation Ransomware?

DarkRadiation appears to have been first noticed in late May by Twitter user @r3dbU7z and was later reported on by researchers at Trend Micro. It seems to have come to light on VirusTotal as part of a set of hacker tools.

At this time, we have no information on delivery methods or evidence of in-the-wild attacks. However, analysis of its various components suggests that the actors behind its development intend to use it in a campaign targeting Linux installs and Docker containers.

The ransomware uses a complex collection of Bash scripts and at least half a dozen C2s, all of which appear to be currently offline, to communicate with Telegram bots via hardcoded API keys.

DarkRadiation is part of a larger collection of hacking scripts

The DarkRadiation scripts have a number of dependencies, including wget, curl, sshpass, pssh and openssl. If any of these are not available on the infected device, the malware attempts to download the required tools using YUM (Yellowdog Updater, Modified), a Python-based package manager widely adopted by popular Linux distributions such as Red Hat and CentOS.

DarkRadiation checks for and installs dependencies
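The check-then-install logic described above can be sketched in a few lines of Bash. This is an illustrative reconstruction, not the actual DarkRadiation code; the function name and messages are invented for this example.

```shell
# Sketch of a dependency check like the one DarkRadiation performs
# (function name and messages are our own, not the malware's).
ensure_tool() {
  tool="$1"
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    # On Red Hat/CentOS-style hosts, the malware falls back to YUM here.
    echo "missing: $tool (would attempt: yum install -y $tool)"
  fi
}

for dep in wget curl sshpass pssh openssl; do
  ensure_tool "$dep"
done
```

Note that `command -v` is the portable way to test for a binary on `PATH`, which is why this pattern shows up so often in shell-script malware and installers alike.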

Code artifacts in the same script show the ransomware attempting to stop, disable and delete the /var/lib/docker directory, used by Docker to store images, containers, and local named volumes. Despite the name of the function, docker_stop_and_encrypt, it appears that at least in its current form it acts purely as a wiper for Docker images. However, as other researchers have noted, several versions of these scripts were found on the threat actor’s infrastructure, suggesting that they may be in nascent development and not yet ready for full deployment.

The ransomware appears to wipe the main Docker directory

In order to facilitate communication, the ransomware relies on another script, bt_install.sh, to set up and test a Telegram bot, written to the local file path at "/usr/share/man/man8/mon.8.gz". Fans of the popular science fiction trilogy “The Matrix” may recognize the test message, “Knock, knock, Neo.” included in the bt_install shell script.

DarkRadiation threat actors appear to be fans of The Matrix Trilogy

The same script also installs and enables a service called “griphon” to gain persistence. If the malware has been run with admin rights, the service is installed as “griphon.service” at the default "/etc/systemd/system/" path and ensures the Telegram bot is brought up each time the device is rebooted.

A systemd service called ‘Griphon’ is installed for persistence

The ExecStart command ensures that the bot is started either on system boot or by manual invocation of the service via systemctl.

Bash Ransomware Script and Obfuscation

DarkRadiation embedded ransomware note

The ransomware script exists in several versions called supermicro_cr and crypt. An obfuscated version in the attacker’s repository uses a simple technique that we’ve seen before in shell script-based malware, which has been common on macOS for a while. The technique involves assigning random variables to “chunks” of script code.

The ransomware script is obfuscated with node-bash-obfuscate

Comments left in the code in Russian suggest the author used an npm package called ‘node-bash-obfuscate’.

Translated comments reveal the hacker’s choice of obfuscation tool

Despite the apparent complexity of the obfuscated script, all such scripts can be easily translated back to plain text simply by replacing the eval command with echo, which prints the script to stdout without executing it.
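As a toy illustration of the chunk-and-eval pattern and why the echo swap defeats it (variable names and payload are invented for this example; the real scripts assign many more chunks to randomized variables):

```shell
# Toy example of node-bash-obfuscate-style chunking: the script is split
# into string fragments assigned to throwaway variables.
_x1='echo Knock,'
_x2=' knock,'
_x3=' Neo.'

# The obfuscated script would reassemble and execute the command:
#   eval "${_x1}${_x2}${_x3}"
# An analyst instead prints the reassembled script without running it:
decoded="${_x1}${_x2}${_x3}"
echo "$decoded"
```

Because the obfuscation only hides the text of the script, not its behavior, the single `eval`-to-`echo` substitution recovers the full plain-text payload in one pass.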

On execution, the script creates a new user with the name “ferrum”. In some versions, the password is downloaded from the attacker’s C2 via curl and in others it is hardcoded with strings such as “$MeGaPass123#”.

The ransomware script creates a new user and password

To avoid accidental discovery, the ransomware writes itself to "/usr/share/man/man8/", a folder typically reserved for the man pages covering system administration commands: in other words, a directory unlikely to be traversed by chance, even by admin users. Moreover, to facilitate privilege escalation, the script uses a blunt but often wildly effective ‘social engineering’ technique: simply asking the user for the required privileges.

The execution chain is caught by the SentinelOne agent and reflected in the Management console:

The chain of execution as seen in the SentinelOne console

If allowed to execute, the ransomware script uses openssl (one of the dependencies we noted earlier) to encrypt files enumerated via the grep and xargs utilities. Encrypted files are appended with the extension .☢, and the encryption key is sent to the attacker’s C2 via the Telegram bot.

openssl is used for file encryption
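For context, the openssl primitive involved works along these lines. This is a benign, self-contained sketch of symmetric file encryption and decryption; it does not reproduce the malware’s actual command line, cipher options, or key handling.

```shell
# Benign demonstration of openssl symmetric file encryption (a sketch;
# not DarkRadiation's actual invocation or key handling).
printf 'sample data\n' > demo.txt

# Encrypt the file with a throwaway password...
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:demo-key \
    -in demo.txt -out demo.txt.enc

# ...then decrypt it to confirm round-tripping works.
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-key \
    -in demo.txt.enc -out demo.dec.txt
```

In the malware’s case, the password is generated per victim and exfiltrated via the Telegram bot, which is what makes recovery without the attacker’s key impractical.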

How SentinelOne Deals With DarkRadiation

For endpoints protected by SentinelOne, DarkRadiation is blocked from the outset, so there’s no risk of any data being encrypted by the malware. As always, it’s safest to have your SentinelOne endpoints use the ‘Protect’ policy to ensure that threats are killed and quarantined automatically. When this occurs, the Management console gives a full report of what processes were killed and quarantined, and shows associated MITRE TTPs in the Threat Indicators panel.

DarkRadiation MITRE TTPs shown in SentinelOne console

In the demo video below, we show how SentinelOne deals with DarkRadiation using the Detect-only policy.

SentinelOne vs DarkRadiation (Bash Ransomware)

Conclusion

Malware written in shell script languages allows attackers to be more versatile and to avoid some common detection methods. As scripts do not need to be recompiled, they can be iterated upon more rapidly. Moreover, since some security software relies on static file signatures, these can easily be evaded through rapid iteration and the use of simple obfuscator tools to generate completely different script files.

However, no amount of iteration or obfuscation changes the nature of what the malware actually does on execution. Hence, security teams are advised to use a trusted behavioral detection engine such as SentinelOne Singularity that can detect malicious behavior before it does harm to your Linux systems, servers or Docker containers.

If you would like to learn more about how SentinelOne can help secure your organization, contact us for more information or request a free demo.

Indicators of Compromise

SHA256/SHA1
supermicro_cr
d0d3743384e400568587d1bd4b768f7555cc13ad163f5b0c3ed66fdc2d29b810
e437221542112affc30e036921e4395b72fe6504

supermicro_bt
652ee7b470c393c1de1dfdcd8cb834ff0dd23c93646739f1f475f71a6c138edd
5b231b4d834220bf378d1a64c15cc04eca6ddaf6

supermicro_cr_third (obfuscated)
9f99cf2bdf2e5dbd2ccc3c09ddcc2b4cba11a860b7e74c17a1cdea6910737b11
1bea1c2715f44fbfe38c80d333dfa5a28921cefb

supermicro_cr_third (deobfuscated)
654d19620d48ff1f00a4d91566e705912d515c17d7615d0625f6b4ace80f8e3a
83881c44a41f35a054513a4fa68306183100e73b

crypt3.sh
0243ac9f6148098de0b5f215c6e9802663284432492d29f7443a5dc36cb9aab5
919b574a4d000161e52d57b827976b6d9388b33f

crypt2_first.sh
e380c4b48cec730db1e32cc6a5bea752549bf0b1fb5e7d4a20776ef4f39a8842
215d777140728b748fc264ef203ebd27b2388666

bt_install.sh
fdd8c27495fbaa855603df4f774fe86bbc21743f59fd039f734feb07704805bd
45b57869e3857b50c1d794baba6ceca2641a7cfa

MITRE ATT&CK
T1027 Obfuscated Files or Information
T1202 Indirect Command Execution
T1082 System Information Discovery
T1083 File and Directory Discovery (System Object Enumeration)
T1486 Data Encrypted for Impact
T1059.004 Command and Scripting Interpreter: Unix Shell
T1059 Command and Scripting Interpreter
T1014 Rootkits
T1548 Abuse Elevation Control Mechanism
T1543.002 Create or Modify System Process: Systemd Service



DataRails books $25M more to build better financial reporting tools for SMBs

As enterprise startups continue to target interesting gaps in the market, we’re seeing increasingly sophisticated tools getting built for small and medium businesses — traditionally a tricky segment to sell to, too small for large enterprise tools, and too advanced in their needs for consumer products. In the latest development of that trend, an Israeli startup called DataRails has raised $25 million to continue building out a platform that lets SMBs use Excel to run financial planning and analytics like their larger counterparts.

The funding closes out the company’s Series A at $43.5 million, after the company initially raised $18.5 million in April (some at the time reported this as its Series A, but it seems the round had yet to be completed). The full round includes Zeev Ventures, Vertex Ventures Israel and Innovation Endeavors, with Vintage Investment Partners added in this most recent tranche. DataRails is not disclosing its valuation, except to note that it has doubled in the last four months, with hundreds of customers and on target to cross 1,000 this year, with a focus on the North American market. It has raised $55 million in total. 

The challenge that DataRails has identified is that on one hand, SMBs have started to adopt a lot more apps, including software delivered as a service, to help them manage their businesses — a trend that has been accelerated in the last year with the pandemic and the knock-on effect that has had for remote working and bringing more virtual elements to replace face-to-face interactions. Those apps can include Salesforce, NetSuite, Sage, SAP, QuickBooks, Zuora, Xero, ADP and more.

But on the other hand, those in the business who manage finances and financial reporting are lacking the tools to look at the data from these different apps in a holistic way. While Excel is a default application for many of them, they are simply reading lots of individual spreadsheets rather than integrated data analytics based on the numbers.

DataRails has built a platform that can read the reported information, which typically already lives in Excel spreadsheets, and automatically translate it into a bigger picture view of the company.

For SMEs, Excel is such a central piece of software, yet such a pain point for its lack of extensibility and function, that this predicament was actually the germ of DataRails in the first place.

Didi Gurfinkel, the CEO, who co-founded the company with Eyal Cohen (the CPO), said that DataRails initially set out to create a more general-purpose product that could help analyze and visualize anything from Excel.


“We started the company with a vision to save the world from Excel spreadsheets,” he said, by taking them and helping to connect the data contained within them to a structured database. “The core of our technology knows how to take unstructured data and map that to a central database.” Before 2020, DataRails (which was founded in 2015) applied this to a variety of areas with a focus on banks, insurance companies, compliance and data integrity.

Over time, it could see a very specific application emerging for SMEs: a platform for FP&A (financial planning and analysis), a segment that didn't really have a solution addressing it at the time. "So we enabled that to beat the market."

“They’re already investing so much time and money in their software, but they still don’t have analytics and insight,” said Gurfinkel.

That turned out to be fortunate timing: "digital transformation" and getting more out of one's data were gaining real traction in the world of business, specifically among SMEs, and CFOs and others who oversaw finances were already looking for something like this.

The typical DataRails customer might have as few as 50 employees or as many as 1,000: too small for enterprise solutions, "which can cost tens of thousands of dollars to implement and use," added Cohen, among other hurdles. But as with so many of the apps being built today for Excel users, the idea with DataRails is low-code, or more specifically no-code, meaning "no IT in the loop," he said.

“That’s why we are so successful,” he said. “We are crossing the barrier and making our solution easy to use.”

The company doesn't have a huge number of competitors today, although companies like Cube (which also recently raised money) are among them. Others, like Stripe, are not currently focused on FP&A but have definitely been expanding the tools they provide to businesses as part of a bigger play to manage payments and, subsequently, other processes related to financial activity; so they, or others like them, might become competitors in this space at some point as well.

In the meantime, Gurfinkel said that other areas DataRails is likely to expand into alongside FP&A include HR, inventory and "planning for anything": any process you have running in Excel. Another interesting turn would be whether and how DataRails decides to look beyond Excel to other spreadsheet applications, or to bypass spreadsheets altogether.

The scope of the opportunity — in the U.S. alone there are more than 30 million small businesses — is what’s attracting the investment here.

“We’re thrilled to reinvest in DataRails and continue working with the team to help them navigate their recent explosive and rapid growth,” said Yanai Oron, general partner at Vertex Ventures, in a statement. “With innovative yet accessible technology and a tremendous untapped market opportunity, DataRails is primed to scale and become the leading FP&A solution for SMEs everywhere.”

“Businesses are constantly about to start, in the midst of, or have just finished a round of financial reporting — it’s a never-ending cycle,” added Oren Zeev, founding partner at Zeev Ventures. “But with DataRails, FP&A can be simple, streamlined, and effective, and that’s a vision we’ll back again and again.”