Gauging growth in the most challenging environment in decades

Traditionally, measuring business success requires a thorough understanding of your company’s go-to-market lifecycle, how customers engage with your product and the macro-dynamics of your market. But in the most challenging environment in decades, those metrics are out the window.

Enterprise application and SaaS companies are changing their approach to measuring performance and preparing to grow when the economy begins to recover. While no blanket rule or piece of guidance applies to every business, company leaders need to focus on a few critical metrics to understand their performance and maximize their opportunities: their burn rate, the real overall market opportunity, how much cash they have on hand and their access to capital. Analyzing the health of the company through these lenses will help leaders make the right decisions on how to move forward.

Play the game with the hand you were dealt. Earlier this year, our company closed a $40 million Series C round of funding, which left us in a strong cash position as we entered the market slowdown in March. Nonetheless, as the impact of COVID-19 became apparent, one of our board members suggested that we quickly develop a business plan that assumed we were running out of money. This would enable us to get on top of the tough decisions we might need to make on our resource allocation and the size of our staff.

While I understood the logic of his exercise, it is important that companies develop and execute against plans that reflect their actual situation. The reality is, we did raise the money, so we revised our plan to balance ultra-conservative forecasting (and as a trained accountant, this is no stretch for me!) with new ideas for how to best utilize our resources based on the market situation.

Burn rate matters, but not at the expense of your culture and your talent. For most companies, talent is both their most important resource and their largest expense. Therefore, it’s usually the first area that goes under the knife in order to reduce monthly spend and optimize efficiency. Fortunately, heading into the pandemic, we had not yet ramped up hiring to support our rapid growth, so we were spared from having to make enormously difficult decisions. We knew, however, that we would not hit our 2020 forecast, which required us to make new projections and reevaluate how we were deploying our talent.

Email Reply Chain Attacks | What Are They & How Can You Stay Safe?

As recent data confirms, email phishing remains the number one vector for enterprise malware infections, and Business Email Compromise (BEC) the number one cause of financial loss due to internet crime in organizations. While typical phishing and spearphishing attacks attempt to spoof the sender with a forged address, a more sophisticated attack hijacks legitimate email correspondence chains to insert a phishing email into an existing email conversation. The technique, known variously as a ‘reply chain attack’, ‘hijacked email reply chain’ and ‘thread hijack spamming’ was observed by SentinelLabs researchers in their recent analysis of Valak malware. In this post, we dig into how email reply chain attacks work and explain how you can protect yourself and your business from this adversary tactic.

How Do Email Reply Chain Attacks Work?

Hijacking an email reply chain begins with an email account takeover. Either through an earlier compromise and credentials dumping or techniques such as credential stuffing and password-spraying, hackers gain access to one or more email accounts and then begin monitoring conversation threads for opportunities to send malware or poisoned links to one or more of the participants in an ongoing chain of correspondence.

The technique is particularly effective because a bond of trust has already been established between the recipients. The threat actor neither inserts themselves as a new correspondent nor attempts to spoof someone else’s email address. Rather, the attacker sends their malicious email from the genuine account of one of the participants.

Since the attacker has access to the whole thread, they can tailor their malspam message to fit the context of an ongoing conversation. This, on top of the fact that the recipient already trusts the sender, massively increases the chance of the victim opening the malicious attachment or clicking a dangerous link.

To see how this works, suppose an account belonging to “Sam” has been compromised, and the attacker sees that Sam and “Georgie” (and perhaps others) have been discussing a new sales campaign. The attacker can use this context to send Georgie a malicious document that appears related to the conversation they are currently having.

In order to keep the owner of the compromised account ignorant of the attacker’s behaviour, hackers will often use an alternate Inbox to receive messages.

This involves using the email client’s rules to route particular messages away from the usual Inbox and into a folder that the genuine account holder is unlikely to inspect, such as the Trash folder. With this technique, if Georgie in our example replies to Sam’s phishing email, the reply can be diverted so the real Sam never sees it.

Alternatively, when a hacker successfully achieves an account takeover, they may use the email client’s settings to forward mail from certain recipients to another account.

Another trick that can help keep an account holder in the dark is to create an email rule that scans incoming messages for keywords such as “phish”, “phishing”, “hack” and “hacked” and deletes them or auto-replies to them with a canned message. This prevents any suspicious or concerned colleagues from alerting the account holder with emails like “Have you been hacked?” and so on.

Which Malware Families Have Used Reply Chain Attacks?

Email reply chain attacks began appearing in 2017. In 2018 Gozi ISFB/Ursnif banking trojan campaigns also started using the technique, although in some cases the chain of correspondence itself was faked simply to add legitimacy; in others, the attackers compromised legitimate accounts and used them both to hijack existing threads and to spam other recipients.

Malicious attachments may leverage VBScript and PowerShell through Office Macros to deliver payloads such as Emotet, Ursnif and other loader or banking trojan malware.

SentinelLabs researchers have shown how Valak malware uses specialized plugins designed to steal credentials specifically for use in email reply chain attacks.

As the researchers point out:

“If you are going to leverage reply chain attacks for your spamming campaigns, then you obviously need some email data. It’s interesting to see that when campaigns shifted more towards Valak and away from Gozi, the addition of a plugin surrounding the theft of exchange data showed up.”

Why Are Email Reply Chain Attacks So Effective?

Although spearphishing and even blanket spam phishing campaigns are still a tried-and-trusted method of attack for threat actors, email reply chain attacks raise the bar for defenders considerably.

In ordinary phishing attacks, it is common to see tell-tale grammar and spelling errors.

Also, mass spoofing emails are often sent with subjects or body messages that carry little meaningful context for most recipients, immediately raising suspicion.

Even with more targeted spearphishing attacks, awareness training and safe email practices, such as not clicking links, not opening attachments from unknown senders and not replying to unsolicited emails, can help reduce risk. However, with email reply chain attacks, the usual kind of warning indicators may be missing.

Email reply chain attacks are often carefully-crafted with no language errors, and the leap in credibility gained by inserting a reply to an existing thread from a legitimate sender means that even the most cautious and well-trained staff are at risk of falling victim to this kind of tactic.

How Can You Prevent a Reply Chain Attack?

Given their trusted, legitimate point of origin and the fact that the attacker has email history and conversational context, it can be difficult to spot a well-crafted reply chain attack, particularly if it appears in (or appears to be part of) a long thread with multiple, trusted participants.

However, there are several recommendations that you can follow to avoid becoming a victim of this type of fraud.

First, since reply chain attacks rely on account compromises, ensure that all your enterprise email accounts follow best practices for security. That must include two-factor or multi-factor authentication, unique passwords on every account, and passwords that are 16 characters in length or longer. Users should be encouraged to regularly inspect their own email client settings and mail rules to make sure that messages are not unknowingly being diverted or deleted.
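To make that last point concrete, here is a minimal sketch of auditing a mailbox’s inbox rules through the Microsoft Graph API, flagging the suspicious patterns described earlier (rules that delete, divert or forward mail, or that match keywords like “phish” or “hack”). It assumes you already hold a valid OAuth access token with mail-read permissions; token acquisition, error handling and the exact rule properties you care about will vary by environment.

# Minimal sketch: audit Outlook inbox rules via Microsoft Graph and flag
# rules that delete, divert or forward mail, or match phishing keywords.
# Assumes a valid OAuth access token with Mail.Read permissions.
import requests

GRAPH_RULES_URL = "https://graph.microsoft.com/v1.0/me/mailFolders/inbox/messageRules"
SUSPICIOUS_KEYWORDS = {"phish", "phishing", "hack", "hacked"}

def audit_inbox_rules(access_token: str) -> None:
    resp = requests.get(
        GRAPH_RULES_URL,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    for rule in resp.json().get("value", []):
        actions = rule.get("actions") or {}
        conditions = rule.get("conditions") or {}
        keywords = [
            kw.lower()
            for kw in (conditions.get("subjectContains") or []) + (conditions.get("bodyContains") or [])
        ]
        flags = []
        if actions.get("delete"):
            flags.append("deletes messages")
        if actions.get("moveToFolder"):
            flags.append("moves messages to another folder")
        if actions.get("forwardTo"):
            flags.append("forwards messages")
        if any(any(s in kw for s in SUSPICIOUS_KEYWORDS) for kw in keywords):
            flags.append("matches phishing-related keywords")
        if flags:
            print(f"Review rule '{rule.get('displayName')}': {', '.join(flags)}")

Administrators of Exchange Online can run a similar check across mailboxes with the Get-InboxRule PowerShell cmdlet.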

Second, lock down or entirely forbid use of Office Macros wherever possible. Although these are not the only means by which malicious attachments can compromise a device, Macros remain a common attack vector.
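As a purely illustrative example, the sketch below (Windows-only Python) tightens per-user macro settings in the registry for Word and Excel. The registry paths assume Office version 16.0 (Office 2016/2019/Microsoft 365), and in practice you would enforce these settings centrally through Group Policy rather than with a script.

# Illustrative, Windows-only sketch of hardening per-user Office macro
# settings in the registry. Paths assume Office version 16.0; deploy via
# Group Policy in production. Shown for Word and Excel only.
import winreg

APPS = ["Word", "Excel"]
SETTINGS = {
    "VBAWarnings": 4,                        # 4 = disable all macros without notification
    "blockcontentexecutionfrominternet": 1,  # block macros in files downloaded from the internet
}

def harden_macro_settings() -> None:
    for app in APPS:
        key_path = rf"Software\Microsoft\Office\16.0\{app}\Security"
        with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, key_path, 0, winreg.KEY_SET_VALUE) as key:
            for name, value in SETTINGS.items():
                winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
        print(f"Hardened macro settings for {app}")

if __name__ == "__main__":
    harden_macro_settings()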

Third, knowledge is power, so expand your user awareness training to include discussion of email reply chain attacks and how they work by referring staff to articles such as this one. Email users need to raise their awareness of how phishing attacks work and how attackers are evolving their techniques. Crucially, they need to understand why it’s important to treat all requests to open attachments or click links with a certain amount of caution, no matter what the source.

Fourth, and most importantly, ensure that your endpoints are protected with a modern, trusted EDR security solution that can stop the execution of malicious code hidden in attachments or links before it does any damage. Legacy AV suites that rely on reputation and YARA rules were not built to handle modern, fileless and polymorphic attacks. A next-gen, automated AI-powered platform is the bare minimum in today’s cyber security threatscape.

Conclusion

Email reply chain attacks are yet another form of social engineering deployed by threat actors to achieve their aims. Unlike the physical world with its hardcoded laws of nature, there are no rules in the cyber world that cannot be changed either by manipulating the hardware, the software or the user. This, however, is just as true for defenders as it is for attackers. By keeping control of all aspects of our cyber environment, we can defeat attacks before they occur or create lasting damage to the organization. Secure your devices, educate your users, train your staff, and let the criminals find another target.



Next-Gen AV and The Challenge of Optimizing Big Data at Scale

At SentinelOne, we provide full visibility into managed endpoint data for our customers. Over time, the number of events we need to store, search and retrieve has become huge, and we currently handle around 200 billion events per day. While collection and storage are easy enough, querying the data quickly and efficiently is the main challenge. In this post, we will share how we overcame this challenge and achieved the ability to quickly query tremendous amounts of data.

Architectural Overview

Our event stream results in a lot of small files, which are written to S3 and HDFS. We register our partitions in the Hive Metastore and query the data with Presto. Files are partitioned by date, with a new partition created automatically every day.
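To make the layout concrete, here is a minimal sketch of how a date-partitioned object key could be built for each flushed batch of events. Bucket and path names are hypothetical; each dt=YYYY-MM-DD prefix corresponds to one Hive partition.

# Minimal sketch of the date-partitioned layout: each flushed batch of events
# becomes one small ORC object under a dt=YYYY-MM-DD prefix, and each prefix
# maps to one Hive partition. Bucket and path names are hypothetical.
import uuid
from datetime import datetime, timezone

BUCKET = "example-events-bucket"

def object_key_for_batch(event_day: datetime) -> str:
    partition = event_day.strftime("dt=%Y-%m-%d")
    return f"events/{partition}/part-{uuid.uuid4().hex}.orc"

# e.g. s3://example-events-bucket/events/dt=2020-06-10/part-3f2a....orc
print(f"s3://{BUCKET}/{object_key_for_batch(datetime.now(timezone.utc))}")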


We started in a naive way, by simply aggregating events in the pipeline into files that were periodically pushed to S3. While this worked well at the beginning, as scale surged, a serious problem emerged.

To allow near real-time search across our managed endpoints, we wrote many small files, rather than attempting to combine them into larger files. We also support the arrival of late events, so data might arrive with an old timestamp after a few days.

Unfortunately, Presto doesn’t work well with many small files. We were working with tens of thousands of files, ranging from hundreds of kilobytes to tens of megabytes each. Leaving the data in many small files made queries very slow, so we faced the challenge of solving the common “small files” problem.

Attempt 1 — Reduce Frequency of Writing

Our first thought was simply to write less often. While this reduced the number of files, it conflicted with our business constraint of having events searchable within a few minutes. Data is flushed frequently to allow queries on recent events, and that generates millions of files.

Our files are written in ORC format. Appending ORC files is possible, but not effective: appending ORC stripes without decompressing the data and restructuring the stripes results in big files that are queried really slowly.

Attempt 2 — In-place Compaction

Next, we tried to compact files on the fly in S3. Since our data volumes were small, we were able to compact the files in-memory with Java code. We maintained Hive Metastore for partition locations. It was quite a simple solution, but it turned out to be a headache.


Compacting files in-place is challenging since there’s no way to make atomic writes in S3. We had to take care of deleting the small files that were replaced by compacted files, and because the Hive Metastore partition kept pointing to the same S3 location during the swap, we ended up with duplicate or missing data for a while.

S3 listing is eventually consistent:

“A process writes a new object to Amazon S3 and immediately lists keys within its bucket. Until the change is fully propagated, the object might not appear in the list.”

Although a file would be uploaded successfully, it might not appear in the listing until perhaps half an hour later. Those issues were unacceptable, so we returned to the small files problem.

Attempt 3 — Write files to HDFS, Then Copy to S3

To mitigate S3’s eventual consistency, we decided to move to HDFS, which Presto supports natively, so the transition required virtually no work.

The small files problem is a known issue in HDFS. The HDFS NameNode holds all file system metadata in memory, and each entry takes about 1 KB, so as the number of files grows, the NameNode requires a lot of RAM.
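As a rough back-of-the-envelope illustration of that constraint, using the ~1 KB-per-entry figure above and a hypothetical file count:

# Back-of-the-envelope NameNode memory estimate, using the ~1 KB-per-entry
# figure mentioned above. The file count is hypothetical.
BYTES_PER_ENTRY = 1024          # ~1 KB of NameNode heap per file system object
num_files = 200_000_000         # e.g. 200 million small files

heap_bytes = num_files * BYTES_PER_ENTRY
print(f"~{heap_bytes / 1024**3:.0f} GiB of NameNode heap")   # ~191 GiB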

We experienced even worse degradation when trying to save our real-time small files in S3:

  • When querying the data, Presto retrieves the partition mapping from the Hive Metastore and lists the files in each partition. As mentioned above, S3 listing is eventually consistent, so in real time we sometimes missed a few files in the list response. Listing in HDFS is deterministic and immediately returns all files.
  • The S3 list API returns at most 1,000 entries per call. When listing a directory with a large number of files, Hive executes several API requests, which takes time (a pagination sketch follows this list).
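To illustrate the listing cost, here is a minimal boto3 sketch that pages through one partition’s objects; bucket and prefix names are hypothetical, and every additional 1,000 keys costs another sequential round trip to S3.

# Minimal boto3 sketch of listing one partition's files. Each page holds at
# most 1,000 keys, so a partition with tens of thousands of small files costs
# dozens of sequential API round trips. Bucket/prefix names are hypothetical.
import boto3

s3 = boto3.client("s3")

def list_partition_keys(bucket: str, prefix: str) -> list[str]:
    keys, pages = [], 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        pages += 1
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    print(f"{len(keys)} keys across {pages} list calls")
    return keys

if __name__ == "__main__":
    list_partition_keys("example-events-bucket", "events/dt=2020-06-10/")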

In the Hive Metastore, we stored a different location for each partition: the current day’s partition pointed to HDFS, while partitions for older data pointed to S3.


This solution solved our consistency issues, but still, small files are problematic! How can we avoid that? Let’s try compaction again.

Attempt 4 — Compaction with Presto Clusters

Spawning a Presto cluster and running compaction with it is straightforward.

At the end of each day, we created EMR clusters that handled compaction of the previous day’s files. Our clusters had hundreds of memory-optimized nodes, with the compaction done in-memory.


When you set up a Presto cluster for compaction, you need to do these steps (a consolidated sketch follows this list):

  • Set the parameters that optimize the compacted files’ output.
  • Create two external tables: a source table over the raw data and a destination table for the compacted data.
  • Add partitions to the Hive Metastore so that the source and destination tables point to the correct locations.
  • Finally, run the magical INSERT command that performs the compaction.
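Below is a consolidated sketch of those steps, kept as SQL templates in Python strings. All schema, table and bucket names are hypothetical; the table DDL and ADD PARTITION statements are standard Hive DDL run against the metastore, the INSERT runs on the Presto cluster, and the session properties used to tune output file size are omitted because they vary between Presto versions.

# Sketch of the compaction setup as SQL templates (hypothetical names).

# 1 + 2. Two external tables over the same schema: one over the raw
#        small files, one for the compacted output (Hive DDL).
CREATE_SOURCE = """
CREATE EXTERNAL TABLE IF NOT EXISTS events_raw (
  event_id STRING,
  agent_id STRING,
  payload  STRING
)
PARTITIONED BY (dt STRING)
STORED AS ORC
LOCATION 's3://example-events-bucket/events/'
"""

CREATE_DEST = """
CREATE EXTERNAL TABLE IF NOT EXISTS events_compacted (
  event_id STRING,
  agent_id STRING,
  payload  STRING
)
PARTITIONED BY (dt STRING)
STORED AS ORC
LOCATION 's3://example-events-bucket/events_compacted/'
"""

# 3. Point a table at the partition being compacted (Hive DDL).
ADD_PARTITION = """
ALTER TABLE {table} ADD IF NOT EXISTS PARTITION (dt='{dt}')
LOCATION '{location}'
"""

# 4. The "magical" INSERT, run on the Presto cluster: it reads the many
#    small files and writes them back out as a handful of large ORC files.
COMPACT = """
INSERT INTO events_compacted
SELECT event_id, agent_id, payload, dt
FROM events_raw
WHERE dt = '{dt}'
"""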

The rapid growth in SentinelOne’s data made this system infeasible from a cost and maintenance perspective. We encountered several problems as our data grew:

  • Each big partition held about 200 GB on disk, which in memory is in fact roughly 2 TB of raw, uncompressed data every day. Compaction is done on uncompressed data, so holding it in memory through the entire process required huge clusters.
  • Running a few clusters with hundreds of nodes is quite expensive. At first, we ran on Spot Instances to reduce costs, but as our clusters grew, it became hard to get that many big nodes for several hours. At peak, one cluster ran for three hours to compact a single big partition. When we moved to On-Demand machines, costs increased dramatically.
  • Presto has no built-in fault-tolerance mechanism, which is very disruptive when running on Spot Instances. If even one Spot Instance failed, the whole operation failed and we had to run it all over again. This caused delays in switching to compacted data, which resulted in slower queries.

As files were compacted, the Hive Metastore locations had to be switched from the current-day, small-file data to the compacted data; at the end of the compaction process, a job performed this switch of the Hive Metastore partitioning.

Attempt 5 — Compact the Files Take 2: Custom-Made Compaction

At this point, we decided to take control. We built a custom-made solution for compaction, and we named it Compaction Manager.

  • When small files are written to our S3 bucket from our event stream (1), AWS event notifications are sent from S3 to SQS on object-creation events (2).
  • Our service, the Compaction Manager, reads messages from SQS (3) and inserts the S3 paths into a database (4); a minimal sketch of this step follows the list.
  • The Compaction Manager aggregates files ready to be compacted according to internal logic (5) and assigns tasks to worker processes (6).
  • Workers compact the files according to internal logic and write big files as output (8).
  • The workers update the Compaction Manager on success or failure (7).
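Here is a minimal sketch of steps (3) and (4): the Compaction Manager draining the SQS queue that receives S3 “object created” notifications and recording each new small file for later batching. The queue URL and the record_file() helper are hypothetical stand-ins for the real service and its database layer.

# Minimal sketch of steps (3)-(4): drain the SQS queue of S3 object-created
# notifications and record each new small file for later batching.
# QUEUE_URL and record_file() are hypothetical stand-ins.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-new-files"

def record_file(bucket: str, key: str, size: int) -> None:
    # Stand-in for inserting the path into the compaction database (step 4).
    print(f"queued for compaction: s3://{bucket}/{key} ({size} bytes)")

def drain_queue_once() -> None:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        for record in json.loads(msg["Body"]).get("Records", []):
            s3_info = record["s3"]
            record_file(s3_info["bucket"]["name"], s3_info["object"]["key"], s3_info["object"].get("size", 0))
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    drain_queue_once()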

What Did We Gain from the Next Generation Solution?

  • We control EVERYTHING. We own the logic of compaction, the size of output files and handle retries.
  • Our compaction is done continuously, allowing us fine-grained control over the number of workers we trigger. Due to the seasonality of the data, resources are utilized effectively, and our worker cluster is autoscaled over time according to the load.
  • Our new approach is fault-tolerant. Failure is not a deal breaker any more; the Manager can easily retry the failed batch without restarting the whole process.
  • Continuous compaction means that late files are handled as regular files, without special treatment.
  • We wrote the entire flow as a continuous compaction process that happens all the time, and thus requires less computation power and is much more robust to failures. We choose the batches of files to compact, so we control memory requirements (as opposed to Presto, where we load all data with a simple select query). We can use Spots instead of On-Demand machines and reduce costs dramatically.
  • This approach introduced new opportunities to implement internal logic for compacted files. We choose what files to compact and when. Thus, we can aggregate files by specific criteria, improving queries directly.

Conclusion

We orchestrated our own custom solution to handle huge amounts of data and allow our customers to query it very quickly, utilizing a combination of S3 and HDFS for storage. For the most recent day’s data, we enjoy the advantages of HDFS, and for the rest of the data we rely on S3 because it is a managed service.

Compaction with Presto is nice, but as we learned, it is not enough when you are handling a lot of data. Instead, we solved the challenge with a custom solution that both improved our performance and cut our costs by 80% relative to the original approach.

This post was written by Ariela Shmidt, Big Data SW Engineer and Benny Yashinovsky, Big Data SW Architect at SentinelOne.

If you want to join the team, check out this open position: Big Data Engineer



API platform Postman delivers $150M Series C on $2B valuation

APIs provide a way to build connections to a set of disparate applications and data sources, and can help simplify a lot of the complex integration issues companies face. Postman has built an enterprise API platform and today it got rewarded with a $150 million Series C investment on a whopping $2 billion valuation — all during a pandemic.

Insight Partners led the round with help from existing investors CRV and Nexus Venture Partners. Today’s investment brings the total raised to $207 million, according to the company. That includes a $50 million Series B from a year ago, making it $200 million raised in just a year. That’s a lot of cash.

Abhinav Asthana, CEO and co-founder at Postman, says that what’s attracting all that dough is an end-to-end platform for building APIs. “We help developers, QA, DevOps — anybody who is in the business of building APIs — work on the same platform. They can use our tools for designing, documentation, testing and monitoring to build high-quality APIs, and they do that faster,” Asthana told TechCrunch.

He says that he was not actively looking for funding before this round came together. In fact, he says that investors approached him after the pandemic shut everything down in California in March, and he sees it as a form of validation for the startup.

“We think it shows the strength of the company. We have phenomenal adoption across developers and enterprises and the pandemic has [not had much of an impact on us]. The company has been receiving crazy inbound interest [from investors],” he said.

He didn’t want to touch the question of going public just yet, but he feels the hefty valuation sends a message to the market that this is a solid company that is going to be around for the long term.

Jeff Horing, co-founder and managing director at lead investor Insight Partners certainly sees it that way. “The combination of the market opportunity, the management team and Postman’s proven track record of success shows that they are ready to become the software industry’s next great success,” he said in a statement.

Today the company has around 250 employees divided between the U.S. and Bangalore, India, and Asthana expects to double that number in the next year. One thing the pandemic has shown him is that his employees can work from anywhere, and he intends to hire people across the world to take advantage of the most diverse talent pool possible.

“Looking for diverse talent as part of our large community as we build this workforce up is going to be a key way in which we want to solve this. Along with that, we are bringing people from diverse communities into our events and making sure that we are constantly in touch with those communities, which should help us build up a very strong diverse kind of hiring function,” he said.

He added, “We want to be deliberate about that, and over the coming months we will also shed more light on what specifically we are doing.”

Tulsa is trying to build a startup ecosystem from scratch

When you think about startup hubs, Tulsa, Oklahoma is probably not the first city that comes to mind.

A coalition of business, education, government and philanthropic leaders is working to foster a startup ecosystem in a city that’s better known for its aerospace and energy companies. These community leaders recognized that raising the standard of living for a wide cross-section of citizens required a new generation of companies and jobs — which takes commitment from a broad set of interested parties.

In Tulsa, that effort began with the George Kaiser Family Foundation (GKFF), a philanthropic organization, and led to the creation of Tulsa Innovation Labs (TIL), a partnership between GKFF, Israeli cybersecurity venture firm Team8, several area colleges and local government.

Why Tulsa?

Tulsa is a city of more than 650,000 people, with a median household income of $53,902 and a median house price of $150,500. Glassdoor reports that the average salary for a software engineer in Tulsa is $66,629; in San Francisco, the median home price is over $1.1 million, household income comes in at $112,376 and Glassdoor’s average software engineer salary is $115,822.

Home to several universities and a slew of cultural attractions, the city has a lot to offer. To sweeten the deal, GKFF spun up “Tulsa Remote,” an initiative that offers $10,000 to remote workers who will relocate and make the city their home base. The goal: draw in new, high-tech workers who will help build a more vibrant economy.

Tulsa is the second-largest city in the state of Oklahoma and the 47th-most populous city in the United States.

Local colleges are educating the next generation of workers; Tulsa Innovation Labs is working with the University of Tulsa in partnership with Team8 through the university’s Cyber Fellows program. There are also ongoing discussions with Oklahoma State University-Tulsa and the University of Oklahoma-Tulsa about building a similar relationship.

These constituencies are trying to grow a startup ecosystem from the ground up. It takes a sense of cooperation and hard work and it will probably take some luck, but they are starting with $50 million, announced just this week from GKFF, for startup investments through TIL.

InVision adds new features to Freehand, a virtual whiteboard tool, as user demand surges

No business is immune to the effects of the coronavirus pandemic. We’ve seen Airbnb — a company particularly susceptible to this black swan event — go through an insane design sprint. Even enterprise collaboration tools have felt it, with Box readjusting its product road map to focus on how the tool worked for remote employees.

InVision has also seen the change in its users’ behavior and adapted accordingly. Freehand, the company’s collaborative whiteboarding tool, has seen a huge surge in users, and the startup has added a handful of new features to the product.

The company says that Freehand has seen 130% growth in weekly active users since March.

New features include sticky notes that come in multiple color, size and text options, as well as templates to give teams a jumping-off point for their whiteboarding exercise. Freehand has six new templates to start — brainstorming, wireframing, retrospectives, standups, diagrams and ice breakers — with more to be added soon.

InVision has also added a “presenting” mode to Freehand.

Because this virtual whiteboard has no space constraints, it can literally zoom out to infinity and is restricted only by the imagination of the team working on it. In “presenting” mode, a team leader can take over the view of the virtual whiteboard to guide their team through one part of the content at a time.

Freehand has an integration with Microsoft Teams and Slack, and also has a new shortcut where users can type “freehand.new” into any browser to start on a fresh whiteboard.

Interestingly, the user growth around Freehand doesn’t just come from the usual suspects of design, product and engineering teams. Departments across organizations, including HR, marketing and IT teams, are coming to Freehand to collaborate on projects and tasks. More than 60 percent of Freehand users are not coming from the design team.

InVision has also added some fine-tuning features, such as a brand new toolbar to allow for easier drawing of shapes, alignment, color and opacity features, and better controls for turning lines into precise arrows or end-points for diagrams.

One of the most interesting things about Freehand is that it allows for democratized access to the whiteboard itself. With no restraints on time or space, and with no one gatekeeping up at the front of the room holding the marker, all members of a team can go in and add their thoughts and ideas to the whiteboard before, during or after a meeting.

“One of the nice things about a whiteboard or a virtual whiteboard like this one is it removes the aspects of the restrictions of time and space, so teams can have more efficient meetings where they get the benefit of democratic input without the cost of having only one person at a time being able to speak or add,” said David Fraga, InVision President. “It offers a synchronous collision of collaboration.”

InVision has raised a total of $350 million from investors like FirstMark, Spark, Battery, Accel and Tiger Global Management. The company now boasts more than 7 million total registered users, with 100 of the Fortune 100 companies using the product. InVision is also part of the $100 million ARR club.

Quolum announces $2.75M seed investment to track SaaS spending

As companies struggle to find ways to control costs in today’s economy, understanding what you are spending on SaaS tools is paramount. That’s precisely what early stage startup Quolum is attempting to do, and today it announced a $2.75 million seed round.

Surge Ventures and Nexus Venture Partners led the round with help from a dozen unnamed angel investors.

Company founder Indus Khatian says that he launched the company last summer pre-COVID, when he recognized that companies were spending tons of money on SaaS subscriptions and he wanted to build a product to give greater visibility into that spending.

This tool is aimed at finance teams, which might not know about the utility of a specific SaaS tool like PagerDuty, but which see the bills every month. The idea is to give them data about usage as well as cost, to make sure they aren’t paying for something they aren’t using.

“Our goal is to give finance a better set of tools, not just to put a dollar amount on [the subscription costs], but also the utilization, as in who’s using it, how much are they using it and is it effective? Do I need to know more about it? Those are the questions that we are helping finance answer,” Khatian explained.

Eventually, he says he also wants to give that data directly to lines of business, but for starters he is focusing on finance. The product works by connecting to the billing or expense software to give insight into the costs of the services. It takes that data and combines it with usage data in a dashboard to give a single view of the SaaS spending in one place.

While Khatian acknowledges there are other similar tools in the marketplace such as Blissfully, Intello and others, he believes the problem is big enough for multiple vendors to do well. “Our differentiator is being end-to-end. We are not just looking at the dollars, or stopping at how many times you’ve logged in, but we’re going deep into consumption. So for every dollar that you’ve spent, how many units of that software you have consumed,” he said.

He says that he raised the money last fall and admits that it probably would have been tougher today, and he would have likely raised on a lower valuation.

Today the company consists of a six-person development team in Bangalore, India, plus Khatian in the U.S. After the company generates some revenue, he will be hiring a few people to help with marketing, sales and engineering.

When it comes to building a diverse company, he points out that he himself is an immigrant founder, and he sees the ability to work from anywhere, an idea amplified by COVID-19, helping to create a more diverse workforce. As he builds his company and adds employees, he can hire people across the world, regardless of location.

IBM Cloud suffers prolonged outage

The IBM Cloud is currently suffering a major outage, and with that, multiple services that are hosted on the platform are also down, including everybody’s favorite tech news aggregator, Techmeme.

It looks like the problems started around 2:30pm PT and spread from there. Best we can tell, this is a worldwide problem and involves a networking issue, but IBM’s own status page isn’t actually loading anymore and returns an internal server error, so we don’t quite know the extent of the outage or what triggered it. IBM Cloud’s Twitter account has also remained silent, though we found a status page for IBM Aspera hosted on a third-party server, which seems to confirm that this is likely a worldwide networking issue.

IBM Cloud, which published a paper about ensuring zero downtime in April, also suffered a minor outage in its Dallas data center in March.

We’ve reached out to IBM’s PR team and will update this post once we get more information.

Update #1 (5:06pm PT): We are seeing some reports that IBM Cloud is slowly coming back online. The company’s status page now seems to be functioning again, and it still shows that the cloud outage continues for the time being.

Update #2 (5:25pm PT): IBM keeps adding information to its status page; networking issues seem to be at the core of the problem.

Hear from Figma founder and CEO Dylan Field at TC Early Stage in July

Figma is one of the fastest-growing companies in the world of design and in the broader SaaS category. So it goes without saying that we’re absolutely thrilled to have Figma CEO Dylan Field join us at Early Stage, our virtual two-day conference on July 21 and 22, as a speaker. You can pick up a ticket to the event here!

Early Stage is all about giving entrepreneurs the tools they need to be successful. Experts across a wide variety of core competencies, including fundraising, growth marketing, media management, recruiting, legal and tech development will offer their insights and answer questions from the audience.

Field joins an outstanding speaker list that includes Lo Toney, Ann Miura-Ko, Dalton Caldwell, Charles Hudson, Cyan Banister and more.

Field founded Figma in 2012 after becoming a Thiel fellow. The company spent four years in development before launching, working painstakingly on the technology and design of a product that aimed to be the Google Docs of design.

Figma is a web-based design product that allows people to design collaboratively on the same project in real time.

The design space is, in many respects, up for grabs as it goes through a transformation, with designers receiving more influence within organizations and other departments growing more closely involved with the design process overall.

This also means that there is fierce competition in this industry, with behemoths like Adobe iterating their products and growing startups like InVision and Canva sprinting hard to capture as much market as possible.

Figma, with $130 million+ in total funding, has lured investors like Index, A16Z, Sequoia, Greylock, and KPCB.

At Early Stage, we’ll talk to Field about staying patient during the product development process and then transitioning into an insane growth sprint. We’ll also chat about the fundraising process, how he built a team from scratch, how he took the team remote in the midst of a pandemic, and the product development strategy behind Figma.

How to take your time as fast as you can

Figma spent four years in stealth before ever launching a product. But when it finally did come to market, its industry was in the midst of a paradigm shift. Entire organizations started participating in the design process, and conversely, designers became empowered, asserting more influence over the direction of the company and the products they built. We’ll hear from Figma founder and CEO Dylan Field on how he stayed patient with product development and sprinted towards growth.

Get your pass to Early Stage for access to over 50 small-group workshops along with world-class networking with CrunchMatch. Passes start at just $199, but prices increase in a few days, so grab yours today.


Flatfile scores $7.6M seed investment to simplify data onboarding

One of the huge challenges companies like enterprise SaaS vendors face with new customers is getting customer data into their service. It’s a problem that Flatfile founders faced first hand in their jobs, and they decided to solve it. Today, the company announced a healthy $7.6 million seed investment to expand on that vision.

The company also announced the release of its latest product called Concierge.

Two Sigma Ventures led the investment with participation from previous investors Afore Capital, Designer Fund and Gradient Ventures (Google’s AI- focused venture fund).

Company CEO David Boskovic says he and co-founder Eric Crane recognized that this is a problem just about every company faces. Let’s say you sign up for a CRM tool like Hubspot (which is a Flatfile customer). Your first step is to get your customer data into the new service.

As Boskovic points out, if you have thousands of existing customers that can be a real problem, often involving days or even weeks to prepare the data, depending on the size of your customer base. It typically includes importing your data from an existing source, then manually moving it to an Excel spreadsheet.

“What we’re trying to solve for at Flatfile is automating that entire process. You can drop in any data that you have and get it into a new product, and what that solves from a market perspective is the speed of adopting new software,” Boskovic told TechCrunch.


He says they have automated the process to the point that it usually takes just a few minutes to process the data. If there are problems that Flatfile can’t solve, it presents the issue to the user, who can fix it and move on.

The founders realized that not every use case is going to involve a simple one-to-one data transfer, so they created their new product, Concierge, to help companies manage more complex data integration scenarios for their customers.

“What we do is we provide a bridge between disparate data formats that are a little bit more complex and let our customers collaborate with their new customers that they are onboarding to bring the data to the right state to use it in the new system,” Boskovic explained.

Whatever they are doing it seems to be working. The company launched in 2018 and today it has 160 customers with 300 sitting on a waiting list. It has increased that customer count by 5x since the beginning of the year in the middle of a pandemic.

Any product that reduces labor and increases efficiency and collaboration in a digital context is going to get the attention of customers right now, and Flatfile is seeing a huge spike in interest in spite of the current economy. “We’re helping onboard customers quickly and more efficiently. And our Concierge service can also help reduce in-person touch points by reducing this long, typical data onboarding process,” Boskovic said.

The company has not had to change the way it’s worked because of the pandemic as it has been a distributed workforce from day one. In fact, Boskovic is in Denver and co-founder Eric Crane is based in Atlanta. The startup currently has 14 employees, but plans to fill at least 10 roles this year.

“We’ve got a pretty aggressive hiring map. Our pipeline is bigger than we can handle from a sales perspective,” he said. That means they will be looking to fill sales, marketing and product jobs.