DoD reaffirms Microsoft has won JEDI cloud contract, but Amazon legal complaints still pending

We have seen a lot of action this week as the DoD tries to settle, once and for all, the winner of the $10 billion, decade-long JEDI cloud contract. Today, the DoD released a statement saying that after re-reviewing the proposals from finalists Microsoft and Amazon, it has reiterated that Microsoft is the winner of the contract.

“The Department has completed its comprehensive re-evaluation of the JEDI Cloud proposals and determined that Microsoft’s proposal continues to represent the best value to the Government. The JEDI Cloud contract is a firm-fixed-price, indefinite-delivery/indefinite-quantity contract that will make a full range of cloud computing services available to the DoD,” the DoD said in a statement.

This comes on the heels of yesterday’s Court of Appeals decision denying Oracle’s argument that the procurement process was flawed and that there was a conflict of interest because a former Amazon employee helped write the requirements for the RFP.

While the DoD has determined that Microsoft, which it originally selected last October, should still get the contract, that doesn’t mean this is the end of the line for this long-running saga. In fact, a federal judge halted work on the project in February pending a hearing on an ongoing protest from Amazon, which believes it should have won on merit, and that the president interfered with the procurement process to prevent Jeff Bezos, who owns The Washington Post, from winning the lucrative contract.

The DoD confirmed that the project could not begin until the legal wrangling was settled. “While contract performance will not begin immediately due to the Preliminary Injunction Order issued by the Court of Federal Claims on February 13, 2020, DoD is eager to begin delivering this capability to our men and women in uniform,” the DoD reported in a statement.

A Microsoft spokesperson said the company was ready to get to work on the project as soon as it got the OK to proceed. “We appreciate that after careful review, the DoD confirmed that we offered the right technology and the best value. We’re ready to get to work and make sure that those who serve our country have access to this much needed technology,” the spokesperson told TechCrunch.

Meanwhile, in a blog post published late this afternoon, Amazon made it clear that it was unhappy with today’s outcome and would continue to pursue legal remedies for what it believes to be presidential interference that has threatened the integrity of the procurement process. Here’s how the company concluded the blog post:

We strongly disagree with the DoD’s flawed evaluation and believe it’s critical for our country that the government and its elected leaders administer procurements objectively and in a manner that is free from political influence. The question we continue to ask ourselves is whether the President of the United States should be allowed to use the budget of the Department of Defense to pursue his own personal and political ends? Throughout our protest, we’ve been clear that we won’t allow blatant political interference, or inferior technology, to become an acceptable standard. Although these are not easy decisions to make, and we do not take them lightly, we will not back down in the face of targeted political cronyism or illusory corrective actions, and we will continue pursuing a fair, objective, and impartial review.

While today’s statement from the DoD appears to take us one step closer to the end of the road for this long-running drama, it won’t be over until the court rules on Amazon’s arguments. It’s clear from today’s blog post that Amazon has no intention of standing down.

Note: We have updated this story with content from an Amazon blog post responding to this news.

Avo raises $3M for its analytics governance platform

Avo, a startup that helps businesses better manage their data quality across teams, today announced that it has raised a $3 million seed round led by GGV Capital, with participation from Heavybit, Y Combinator and others.

The company’s founder, Stefania Olafsdóttir, who is currently based in Iceland, was previously the head of data science at QuizUp, which at some point had 100 million users around the world. “I had the opportunity to build up the Data Science Division, and that meant the cultural aspect of helping people ask and answer the right questions — and get them curious about data — but it also meant the technical part of setting up the infrastructure and tools and pipelines, so people can get the right answers when they need it,” she told me. “We were early adopters of self-serve product analytics and culture — and we struggled immensely with data reliability and data trust.”

Image Credits: Avo

As companies collect more data across products and teams, the process tends to become unwieldy, and different teams end up using different methods (or simply different tags), which creates inefficiencies and issues across the data pipeline.

“At first, that unreliable data just slowed down decision making, because people were just like, didn’t understand the data and needed to ask questions,” Olafsdóttir said about her time at QuizUp. “But then it caused us to actually launch bad product updates based on incorrect data.” Over time, that problem only became more apparent.

“Once organizations realize how big this issue is — that they’re effectively flying blind because of unreliable data, while their competition might be like taking the lead on the market — the default is to patch together a bunch of clunky processes and tools that partially increase the level of liability,” she said. And that clunky process typically involves a product manager and a spreadsheet today.

At its core, the Avo team set out to build a better process around this, and after a few detours and other product ideas, Olafsdóttir and her co-founders regrouped to focus on exactly this problem during their time in the Y Combinator program.

Avo gives developers, data scientists and product managers a shared workspace to develop and optimize their data pipelines. “Good product analytics is the product of collaboration between these cross-functional groups of stakeholders,” Olafsdóttir argues, and the goal of Avo is to give these groups a platform for their analytics planning and governance — and to set company-wide standards for how they create their analytics events.

Once that is done, Avo provides developers with typesafe analytics code and debuggers that allow them to take those snippets and add them to their code within minutes. For some companies, this new process can cut the time spent fixing a specific analytics issue from 10 hours to an hour or less.

Most companies, the team argues, know — deep down — that they can’t fully trust their data. But they also often don’t know how to fix this problem. To help them with this, Avo also today released its Inspector product. This tool processes event streams for a company, visualizes them and then highlights potential errors. These could be type mismatches, missing properties or other discrepancies. In many ways, that’s obviously a great sales tool for a service that aims to avoid exactly these problems.
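Inspector’s internals aren’t public, but as a rough illustration of the kind of check such a tool performs, here is a minimal sketch that validates a stream of analytics events against an expected schema, flagging exactly the discrepancies mentioned above. The event name and schema are invented for the example:

```python
# Hypothetical illustration of schema-vs-stream validation, the kind of
# check a tool like Avo's Inspector performs. The event name and schema
# below are invented for this example.
SCHEMA = {
    "signup_completed": {"user_id": str, "plan": str, "seats": int},
}

def inspect(events):
    """Return a list of human-readable discrepancies found in the stream."""
    issues = []
    for event in events:
        expected = SCHEMA.get(event["name"])
        if expected is None:
            issues.append(f"unknown event: {event['name']}")
            continue
        props = event.get("properties", {})
        for key, expected_type in expected.items():
            if key not in props:
                issues.append(f"{event['name']}: missing property '{key}'")
            elif not isinstance(props[key], expected_type):
                issues.append(f"{event['name']}: '{key}' should be "
                              f"{expected_type.__name__}, got "
                              f"{type(props[key]).__name__}")
    return issues

issues = inspect([
    {"name": "signup_completed",
     "properties": {"user_id": "u42", "plan": "pro", "seats": "3"}},
])
print(issues)  # flags 'seats' arriving as a string instead of an int
```

A real product does this continuously over live event streams, of course, but the core idea is the same: compare what actually arrives against the tracking plan the teams agreed on.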

One of Avo’s early customers is Rappi, the Latin American delivery service. “This year we scaled to meet the demand of 100,000 new customers digitizing their deliveries and curbside pickups. The problem with every new software release was that we’d break analytics. It represented 25% of our Jira tickets,” said Rappi’s head of Engineering, Damian Sima. “With Avo we create analytics schemas upfront, identify analytics issues fast, add consistency over time and ensure data reliability as we help customers serve the 12+ million monthly users their businesses attract.”

Like most startups at this stage, Avo plans to use the new funding to build out its team and continue developing its product.

“The next trillion-dollar software market will be driven from the ground up, with developers deciding the tools they use to create digital transformation across every industry. Avo offers engineers ease of implementation while still retaining schemas and analytics governance for product leaders,” said GGV Capital Managing Partner Glenn Solomon. “Our investment in Avo is an investment in software developers as the new kingmakers and product leaders as the new oracles.”

Optimizely acquired by content management company Episerver

Episerver is announcing that it has reached an agreement to acquire Optimizely for an undisclosed sum.

Optimizely was founded in 2009 by Dan Siroker and Pete Koomen. It became synonymous with A/B testing, subsequently building a broader suite of tools for marketers to experiment with and personalize their websites and apps, with more than 1,000 customers, including Gap, StubHub, IBM and The Wall Street Journal.

The company had raised more than $200 million in funding from Goldman Sachs, Index Ventures, Andreessen Horowitz, GV and others. Earlier this year, it laid off 15% of its staff, citing the impact of COVID-19.

Episerver, meanwhile, was founded in Stockholm back in 1994 and offers tools for marketers to manage their digital content. Accel-KKR sold the company to Insight Partners for $1.1 billion in 2018. (Today’s announcement describes Insight as a “strategic advisor and sponsor” in the acquisition.)

In a statement, Episerver CEO Alex Atzberger said this is “the most significant transformation in our company’s history – one that will set a new industry standard for digital experience platforms.” It sounds like the idea is to extend Episerver’s capabilities around content and commerce with Optimizely’s experimentation tools.

“The breakthrough combination of Episerver and Optimizely will transform digital experience creation and optimization, enabling digital teams to replace guesswork with evidence-based outcomes,” Atzberger said. “This, along with our shared mission to empower growing companies to compete digitally, makes me thrilled to welcome the Optimizely team to Episerver, as we prove there are no extraordinary experiences without experimentation.”

A company spokesperson said the deal is for a mix of cash and stock. The acquisition is expected to close in the fourth quarter of this year, with the companies remaining fully staffed and independent until then.

“Winning in today’s digital world requires delivering the best and most personalized digital experiences,” said Jay Larson, who replaced Siroker as Optimizely CEO in 2017, in a statement. “Episerver and Optimizely have a shared vision to optimize every customer touchpoint through the use of experimentation. Together, we will enable our customers to do more testing, in more places, with greater ease than ever before.”

Oracle loses $10B JEDI cloud contract appeal yet again

Oracle was never fond of the JEDI cloud contract process, the massive $10 billion, decade-long Department of Defense cloud contract that went to a single vendor. It forever argued to anyone who would listen that the process was faulty and favored Amazon.

Yesterday it lost another round in court when the U.S. Court of Appeals rejected the database giant’s argument that the procurement process was flawed because it went to a single vendor. It also didn’t buy that there was a conflict of interest because a former Amazon employee was involved in writing the DoD’s request for proposal criteria.

On the latter point, the court wrote, “The court addressed the question whether the contracting officer had properly assessed the impact of the conflicts on the procurement and found that she had.”

Further, the court found that Oracle’s case didn’t have merit in some cases because it failed to meet certain basic contractual criteria. In other cases, it didn’t find that the DoD violated any specific procurement rules with this bidding process.

This represents the third time the company has tried to appeal the process in some way, four if you count taking its complaints directly to the president. In fact, in April 2018, even before the RFP had been released, CEO Safra Catz brought complaints to the president that the bid favored Amazon.

In November 2018, the Government Accountability Office (GAO) denied Oracle’s protest, rejecting its claim that the process favored Amazon along with the other points in its complaint. The following month, the company filed a $10 billion lawsuit in federal court, which was denied last August. Yesterday’s ruling is on the appeal of that decision.

It’s worth noting that for all of Oracle’s complaints that the deal favored Amazon, Microsoft actually won the bid. Even with that determination, the deal remains tied up in litigation, as Amazon has filed multiple complaints alleging that the president interfered with the deal and that it should have won on merit.

As with all things related to this contract, the drama has never stopped.

Feature Spotlight: Introducing Cloud Funnel, the EDR Data Lake

Next generation EDR solutions create a wealth of endpoint telemetry data, operating autonomously to provide real-time endpoint protection, detection, and response, with or without a cloud connection. When such a cloud connection is available, this telemetry is securely streamed up to the cloud data lake. For many enterprises, a cloud data lake is the preferred option. For some, however, and for any number of reasons, storage of their EDR telemetry in their own enterprise data lake is required. To address this need, SentinelOne created Cloud Funnel.

Data Retention in the Cloud Data Lake

Let’s first consider the cloud data lake case. With SentinelOne Complete, an autonomous, lightweight SentinelOne agent is deployed to protect each of your Windows, macOS, and Linux endpoints. Even when offline (i.e., not connected to the cloud), the SentinelOne agent can detect and autonomously respond to security threats at the endpoint, using behavioral AI to identify processes gone wild, correlate related activity, and automatically assemble these events into a comprehensive event Storyline™ as shown in the figure below. When a cloud connection becomes available, each endpoint’s telemetry is uploaded to the SentinelOne Deep Visibility™ cloud, where aggregated analysis and threat hunting operations are managed from the SentinelOne management console.

Figure 1: Correlated Storyline™ telemetry viewed within the SentinelOne console

A wide variety of data retention options are available. And while this EDR data retention in the cloud fits most customer use cases, some organizations prefer to maintain a copy of their telemetry data within their own on-prem data lake. This is where the optional SentinelOne Cloud Funnel capability comes into focus.

Create Your Own Data Lake with Cloud Funnel

Cloud Funnel is a data subscription that enables you to store your organization’s EDR data locally in your own data lake. From there, security teams may take any number of actions on their EDR data.

Figure 2: What is Cloud Funnel?

Typical use cases for this data include:

  • Extended Retention. SentinelOne currently offers various retention options, starting at 14 days and extending up to a full year. However, you may want additional retention beyond a year, or you may want to be in direct control of your retention policy.
  • Regulatory Compliance and Audit Considerations. The various procedures and regulations that govern your business may require you to have custody of your EDR event data, and direct access to retrieve specific event data.
  • Correlation to Other Data Sources. SentinelOne’s Deep Visibility cloud equips you to powerfully and intuitively analyze the entire scope of EDR data. Even so, cross-correlating endpoint telemetry with different data sources from across your enterprise might reveal further insight on potential threats. For example, combining firewall logs or Active Directory logs with EDR data from the SentinelOne agents could potentially reveal new findings. Cloud Funnel empowers you to achieve this in your own data lake.
  • Integration with Security Tools. The SentinelOne console provides a rich set of capabilities for managing your endpoint fleet, analyzing threats, configuring firewall and device policies, and more. You may also have investments in other components of a security stack, such as a SIEM, and wish to consolidate all your security operations in a single infrastructure. Cloud Funnel affords you the option to do exactly that.
  • SOAR Workflows. You may have an existing Security Orchestration, Automation and Response (SOAR) solution, with bespoke workflows that are aligned with your existing security processes. Cloud Funnel integrates SentinelOne EDR events directly with these workflows. For example, you may wish to automatically open a support tracking ticket whenever you encounter an EDR event that is associated with a specific set of conditions.

How Does Cloud Funnel Work?

Considering the sheer volume of EDR data generated by SentinelOne endpoints, we chose to build the solution based on Apache Kafka, an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.

We chose Kafka for the following reasons:

  • Proven. Kafka is a tried and tested open-source messaging solution capable of sustaining throughput of thousands of messages per second. It handles these messages with very low latency, in the range of milliseconds, as demanded by most EDR use cases. It is also durable, fault-tolerant, and scalable, allowing us to expand the solution gradually as we onboard more and more customers.
  • Consumer-friendly. It is possible to integrate with a variety of consumers using Kafka, in different languages and based on different behaviours that match the consumption use-case. SentinelOne provides a code sample in Java via its Knowledge Base platform. Samples in additional languages can be provided upon request.
  • Secure. Kafka supports Salted Challenge Response Authentication Mechanism (SCRAM), an authentication solution from the SASL (Simple Authentication and Security Layer) family that addresses the security concerns of traditional username/password authentication mechanisms. We create a separate topic per customer account, and communication is encrypted over a TLS 1.2+ connection.
Figure 3: Sample Cloud Funnel Output translated to JSON

We chose Protobuf as the wire format for the DV (Deep Visibility) event schema because it is language-independent, interoperable, extensible, and backward compatible.

Once you subscribe to Cloud Funnel, you will receive, from our support agents, the export schema, the topic name, the Kafka Broker address and the consumer credentials (user and password). You will also get a link to our Knowledge Base article with instructions on how to connect.
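Plugged into a Kafka client, those pieces map onto a configuration along these lines. This is only an illustrative sketch, not the official Java sample from the Knowledge Base, and every value shown is a placeholder for what support sends you:

```python
# A minimal sketch of a Cloud Funnel consumer configuration, in the
# property style used by librdkafka-based clients. This is NOT
# SentinelOne's official sample; the broker address, topic name, and
# credentials are placeholders for the values SentinelOne support
# provides on subscription.
consumer_config = {
    "bootstrap.servers": "BROKER_ADDRESS:9093",  # Kafka broker from support
    "security.protocol": "SASL_SSL",             # SCRAM auth over TLS 1.2+
    "sasl.mechanism": "SCRAM-SHA-512",           # exact mechanism may differ
    "sasl.username": "CONSUMER_USER",            # consumer credentials
    "sasl.password": "CONSUMER_PASSWORD",
    "group.id": "my-data-lake-ingest",           # your consumer group
    "auto.offset.reset": "earliest",             # start from retained history
}
topic = "PER_ACCOUNT_TOPIC"  # one topic is created per customer account
```

With any SCRAM-capable Kafka client library, a configuration of this shape is enough to subscribe to the per-account topic and stream events into a local data lake.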

Conclusion

We believe Cloud Funnel is a powerful complement to the existing console-based Deep Visibility offering, and the perfect solution for customers interested in maintaining their own EDR data repository.

To learn more about how Cloud Funnel can work for you, contact us for more information or request a free demo.


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

A SonicWall cloud bug exposed corporate networks to hackers

A newly discovered bug in a cloud system used to manage SonicWall firewalls could have allowed hackers to break into thousands of corporate networks.

Enterprise firewalls and virtual private network appliances are vital gatekeepers tasked with protecting corporate networks from hackers and cyberattacks while still letting in employees working from home during the pandemic. Even though most offices are empty, hackers frequently look for bugs in critical network gear in order to break into company networks to steal data or plant malware.

Vangelis Stykas, a researcher at security firm Pen Test Partners, found the new bug in SonicWall’s Global Management System (GMS), a web app that lets IT departments remotely configure their SonicWall devices across the network.

But the bug, if exploited, meant any existing user with access to SonicWall’s GMS could create a user account with access to any other company’s network without permission.

From there, the newly created account could remotely manage the SonicWall gear of that company.

In a blog post shared with TechCrunch, Stykas said there were two barriers to entry. Firstly, a would-be attacker would need an existing SonicWall GMS user account. The easiest way — and what Stykas did to independently test the bug — was to buy a SonicWall device.

The second issue was that the would-be attacker would also need to guess a unique seven-digit number associated with another company’s network. But Stykas said that this number appeared to be sequential and could be easily enumerated, one after the other.
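To see why a sequential identifier offers so little protection, consider a toy sketch (not the researcher’s actual proof-of-concept): from one known ID, the neighbors are immediately guessable, and the whole seven-digit space holds at most ten million values, which is trivial for a script to walk.

```python
# Toy illustration (not the researcher's actual proof-of-concept) of why
# sequential seven-digit IDs are easy to enumerate: starting from one
# known, valid ID, adjacent IDs are immediately guessable.
def neighboring_ids(known_id: int, spread: int = 5, width: int = 7):
    """Yield zero-padded IDs adjacent to a known, valid one."""
    for offset in range(1, spread + 1):
        for candidate in (known_id - offset, known_id + offset):
            if 0 <= candidate < 10 ** width:  # stay inside the ID space
                yield f"{candidate:0{width}d}"

guesses = list(neighboring_ids(1234567))
print(guesses[:2])  # the two IDs immediately adjacent to the known one
```

This is why non-guessable identifiers (long random tokens rather than counters) are the usual defense when an ID doubles as an access-control boundary.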

Once inside a company’s network, the attacker could deliver ransomware directly to the internal systems of their victims, an increasingly popular tactic for financially driven hackers.

SonicWall confirmed the bug is now fixed. But Stykas criticized the company for taking more than two weeks to patch the vulnerability, which he described as “trivial” to exploit.

“Even car alarm vendors have fixed similar issues inside three days of us reporting,” he wrote.

A SonicWall spokesperson defended the decision to subject the fix to a “full” quality check before it was rolled out, and said it is “not aware” of any exploitation of the vulnerability.

Transposit scores $35M to build data-driven runbooks for faster disaster recovery

Transposit is a company built by engineers to help engineers, and one big way to help them is to get systems up and running faster when things go wrong — as they always will at some point. Transposit has come up with a way to build runbooks for faster disaster recovery, while using data to update them in an automated fashion.

Today, the company announced a $35 million Series B investment led by Altimeter Capital, with participation from existing investors Sutter Hill Ventures, SignalFire and Unusual Ventures. Today’s investment brings the total raised to $50.4 million, according to the company.

Company CEO Divanny Lamas and CTO and founder Tina Huang see technology issues less as an engineering problem and more as a human problem, because it’s humans who have to clean up the messes when things go wrong. Huang says forgetting the human side of things is where she thinks technology has gone astray.

“We know that the real superpower of the product is that we focus on the human and the user side of things. And as a result, we’re building an engineering culture that I think is somewhat differentiated,” Huang told TechCrunch.

Transposit is a platform that, at its core, helps manage APIs, the connections between programs, so it starts with a basic understanding of how the various underlying technologies inside a company work together. This is essential for a tool that is trying to help engineers, in a moment of panic, figure out how to get back to a working state.

When it comes to disaster recovery, there are essentially two pieces: getting the systems working again, then figuring out what happened. For the first piece, the company is building data-driven runbooks. By being data-driven, they aren’t static documents. Instead, the underlying machine learning algorithms can look at how the engineers recovered and adjust accordingly.

Transposit disaster recovery dashboard

Image Credits: Transposit

“We realized that no one was focusing on what we realize is the root problem here, which is how do I have access to the right set of data to make it easier to reconstruct that timeline, and understand what happened? We took those two pieces together, this notion that runbooks are a critical piece of how you spread knowledge and spread process, and this other piece, which is the data, is critical,” Huang said.

Today the company has 26 employees, including Huang and Lamas, whom Huang brought on board from Splunk last year to be CEO. The company is somewhat unusual in having two women running the organization, and they are trying to build a diverse workforce as they grow the company to 50 people over the next 12 months.

The current make-up is 47% female engineers, and the goal is to remain diverse as they build the company, something that Lamas admits is challenging to do. “I wish I had a magic answer, or that Tina had a magic answer. The reality is that we’re just very demanding on recruiters. And we are very insistent that we have a diverse pipeline of candidates, and are constantly looking at our numbers and looking at how we’re doing,” Lamas said.

She says being diverse actually makes it easier to recruit good candidates. “People want to work at diverse companies. And so it gives us a real edge from a kind of culture perspective, and we find that we get really amazing candidates that are just tired of the status quo. They’re tired of the old way of doing things and they want to work in a company that reflects the world that they want to live in,” she said.

The company, which launched in 2016, took a few years to build the first piece, the underlying API platform. This year it added the disaster recovery piece on top of that platform, and has been running its beta since the beginning of the summer. They hope to add additional beta customers before making it generally available later this year.

Hypatos gets $11.8M for a deep learning approach to document processing

Process automation startup Hypatos has raised a €10 million (~$11.8 million) seed round of funding from investors including Blackfin Tech, Grazia Equity, UVC Partners and Plug & Play Ventures.

The Germany and Poland-based company was spun out of AI for accounting startup Smacc at the back end of 2018 to apply deep learning tech to power a wider range of back-office automation, with a focus on industries with heavy financial document processing needs, such as the financial and insurance sectors.

Hypatos is applying language processing AI and computer vision tech to speed up financial document processing for business use cases such as invoices, travel and expense management, loan application validation and insurance claims handling, touting a training data set of more than 10 million annotated data entities.

It says the new seed funding will go on R&D to expand its portfolio of AI models so it can automate business processing for more types of documents, as well as on fueling growth in Europe, North America and Asia. Its customer base at this point includes Fortune 500 companies, major accounting firms and more than 300 software companies.

While there are plenty of business process automation plays, Hypatos says its use of deep learning tech supports an “in-depth understanding” of document content — which in turn allows it to offer customers a “soup to nuts” automation menu that covers document classification, information capturing, content validation and data enrichment.

It dubs its approach “cognitive process automation” (CPA) versus more basic applications of business process automation with software robots (RPA), which it argues aren’t so contextually savvy — thereby claiming an edge.

As well as document processing solutions, it has developed machine learning modules for enhancing customers’ existing systems (e.g. ECM, ERP, CRM, RPA); and offers APIs for software providers to draw on its machine learning tech for their own applications.

“All offerings include machine learning pipeline software for continuous model training in the cloud or in on-premise deployments,” it notes in a press release.

“We have deep knowledge of how financial documents are processed and millions of data entities in our training data,” says chief commercial officer Cem Dilmegani, discussing where Hypatos fits in the business process automation landscape. “We get compared to RPA companies like UiPath, enterprise content management (ECM) companies like Kofax Readsoft as well as generalist ML document automation companies like Hyperscience. However, we are quite different.

“We focus on end-to-end automation, we don’t only help companies capture data, we help them process it using our deep domain understanding, enabling higher rates of automation. For example, to automate incoming invoice processing (A/P automation) we apply our document understanding AI to capture all data, classify the document, identify the specific goods and services, validate for internal/external compliance and assign financial accounts, cost centers, cost categories etc. to automate all processing tasks.

“Finally, we offer this technology as components easily accessible via APIs. This allows RPA or ECM users to leverage our technology and increase their level of automation.”

Hypatos claims it’s seeing uplift as a result of the coronavirus pandemic. It notes it’s providing a service to more than a dozen Fortune 500 companies to help with in-shoring efforts, which it says are accelerating as COVID-19 puts pressure on the traditional business process outsourcing model, with offshore workforce productivity in lower-wage regions affected by coronavirus lockdowns.

“We believe that we are in a pivotal moment of machine learning adoption in large organizations,” adds Andreas Unseld, partner at UVC Partners, in a supporting statement. “Hypatos’ technology provides ample opportunity to transform many core business processes. We’re impressed by the Hypatos machine learning technology and see the team in a perfect position to take a leading role in the machine learning revolution to come.”

The Joys of Owning an ‘OG’ Email Account

When you own a short email address at a popular email provider, you are bound to get gobs of spam, and more than a few alerts about random people trying to seize control over the account. If your account name is short and desirable enough, this kind of activity can make the account less reliable for day-to-day communications because it tends to bury emails you do want to receive. But there is also a puzzling side to all this noise: Random people tend to use your account as if it were theirs, and often for some fairly sensitive services online.

About 16 years ago — back when you actually had to be invited by an existing Google Mail user in order to open a new Gmail account — I was able to get hold of a very short email address on the service that hadn’t yet been reserved. Naming the address here would only invite more spam and account hijack attempts, but let’s just say the account name has something to do with computer hacking.

Because it’s a relatively short username, it is what’s known as an “OG” or “original gangster” account. These account names tend to be highly prized among certain communities, who busy themselves with trying to hack them for personal use or resale. Hence, the constant account takeover requests.

What is endlessly fascinating is how many people think it’s a good idea to sign up for important accounts online using my email address. Naturally, my account has been signed up involuntarily for nearly every dating and porn website there is. That is to be expected, I suppose.

But what still blows me away is the number of financial and other sensitive accounts I could access if I were of a devious mind. This particular email address has accounts that I never asked for at H&R Block, Turbotax, TaxAct, iTunes, LastPass, Dashlane, MyPCBackup, and Credit Karma, to name just a few. I’ve lost count of the number of active bank, ISP and web hosting accounts I can tap into.

I’m perpetually amazed by how many other Gmail users and people on similarly sized webmail providers have opted to pick my account as a backup address should they ever lose access to their inbox. Almost certainly, these users just lazily picked my account name at random when asked for a backup email — apparently without fully realizing the potential ramifications of doing so. At last check, my account is listed as the backup for more than three dozen Yahoo, Microsoft and other Gmail accounts and their associated file-sharing services.

If for some reason I ever needed to order pet food or medications online, my phantom accounts at Chewy, Coupaw and Petco have me covered. If any of my Weber grill parts ever fail, I’m set for life on that front. The Weber emails I periodically receive remind me of a piece I wrote many years ago for The Washington Post, about companies sending email from [companynamehere]@donotreply.com, without considering that someone might own that domain. Someone did, and the results were often hilarious.

It’s probably a good thing I’m not massively into computer games, because the online gaming (and gambling) profiles tied to my old Gmail account are innumerable.

For several years until recently, I was receiving the monthly statements intended for an older gentleman in India who had the bright idea of using my Gmail account to manage his substantial retirement holdings. Thankfully, after reaching out to him he finally removed my address from his profile, although he never responded to questions about how this might have happened.

On balance, I’ve learned it’s better just not to ask. On multiple occasions, I’d spend a few minutes trying to figure out if the email addresses using my Gmail as a backup were created by real people or just spam bots of some sort. And then I’d send a polite note to those that fell into the former camp, explaining why this was a bad idea and asking what motivated them to do so.

Perhaps because my Gmail account name includes a hacking term, the few responses I’ve received have been less than cheerful. Despite my including detailed instructions on how to undo what she’d done, one woman in Florida screamed in an ALL CAPS reply that I was trying to phish her and that her husband was a police officer who would soon hunt me down. Alas, I still get notifications anytime she logs into her Yahoo account.

Probably for the same reason the Florida lady assumed I was a malicious hacker, my account constantly gets requests from random people who wish to hire me to hack into someone else’s account. I never respond to those either, although I’ll admit that sometimes when I’m procrastinating over something the temptation arises.

Losing access to your inbox can open you up to a cascading nightmare of other problems. Having a backup email address tied to your inbox is a good idea, but obviously only if you also control that backup address.

More importantly, make sure you’re availing yourself of the most secure form of multi-factor authentication the provider offers. The options range from one-time codes sent via email, phone call, SMS or mobile app to more robust, true “2-factor authentication” or 2FA (something you have and something you know), like hardware security keys or push-based 2FA from services such as Duo Security (an advertiser on this site and a service I have used for years).

Email, SMS and app-based one-time codes are considered less robust from a security perspective because they can be undermined by a variety of well-established attack scenarios, from SIM-swapping to mobile-based malware. So it makes sense to secure your accounts with the strongest form of MFA available. But please bear in mind that if the only added authentication options offered by a site you frequent are SMS and/or phone calls, this is still better than simply relying on a password to secure your account.
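To make the app-based option a little less magical: the authenticator apps mentioned above typically implement the open TOTP standard (RFC 6238). The sketch below is of that standard algorithm, not any particular provider’s implementation — the server and the app each hold a copy of a shared secret and independently hash the current 30-second time window, so the code never has to travel anywhere before you type it in.

```python
# Minimal TOTP (RFC 6238) sketch: derive a one-time code from a shared
# base32 secret and the current 30-second time window.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, at=None):
    """Compute the one-time code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by
    # the low nibble of the final byte, then keep the last `digits` digits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret ("12345678901234567890" in base32):
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

Because both sides can compute the code locally, an attacker who steals your password still needs the secret (or your phone) — which is exactly why SIM-swapping, which hijacks SMS delivery rather than the secret itself, doesn’t work against app-based codes.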

Maybe you’ve put off enabling multi-factor authentication for your important accounts. If that describes you, please take a moment to visit twofactorauth.org and see whether you can harden your various accounts.

As I noted in June’s story, Turn on MFA Before Crooks Do It For You, people who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Are you in possession of an OG email account? Feel free to sound off in the comments below about some of the more gonzo stuff that winds up in your inbox.

InfoSum raises $15.1M for its privacy-first, federated approach to big data analytics

Data protection and data privacy have gone from niche concerns to mainstream issues in the last several years, thanks to new regulations and a cascade of costly breaches that have laid bare the problems that arise when information and data security are treated haphazardly.

Yet that swing has also thrown up a whole series of issues for organisations and business functions that depend on sharing and exchanging data in order to work. Today, a startup that has built a new way of exchanging data while still keeping privacy in mind — starting first by applying the concept to the “marketing industrial complex” — is announcing a round of funding as it continues to pick up momentum.

InfoSum, a London startup that has built a way for organizations to collaborate on their data without actually handing it over — by way of a federated, decentralized architecture that uses mathematical representations to organise, “read” and query the data — is today announcing that it has raised $15.1 million.
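InfoSum hasn’t published the details of those mathematical representations, but the general shape of privacy-first matching can be illustrated with a deliberately simplified sketch: each party transforms its own customer identifiers locally (here, with a salted hash — the salt value and function names are hypothetical), and only the transformed representations are compared, so raw records never leave either side.

```python
# Illustrative sketch only — not InfoSum's actual method. Each party
# hashes its identifiers on its own infrastructure with a pre-agreed
# salt; only the hashes are compared, never the underlying emails.
import hashlib

SHARED_SALT = b"agreed-out-of-band"  # hypothetical pre-agreed value

def local_representation(identifiers):
    """Run by each party locally, on its own data."""
    return {
        hashlib.sha256(SHARED_SALT + email.strip().lower().encode()).hexdigest()
        for email in identifiers
    }

party_a = local_representation(["alice@example.com", "bob@example.com"])
party_b = local_representation(["BOB@example.com ", "carol@example.com"])

# Only the overlap *size* is revealed — useful for, say, measuring
# audience intersection — while the records themselves stay put.
print(len(party_a & party_b))  # → 1 (the shared "bob" record)
```

A plain salted hash like this is still vulnerable to dictionary attacks on guessable identifiers, which is why production systems in this space typically layer on stronger constructions such as private set intersection; the point here is only the federated pattern of comparing local representations rather than pooling raw data.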

Data may be the new oil, but according to founder and CEO Nick Halstead, that just means “it’s sticky and gets all over the place.” That is to say, InfoSum is looking for a way to use data that is less messy, less prone to leakage and, ultimately, less prone to devaluation.

The Series A is being co-led by Upfront Ventures and IA Ventures. A number of strategics using InfoSum — Ascential, Akamai, Experian, British broadcaster ITV and AT&T’s Xandr — are also participating in the round. The startup has raised $23 million to date.

Halstead, who previously founded and led another big data company, DataSift (the startup that gained early fame as a middleman for Twitter’s firehose of data, until Twitter called time on that relationship to push its own business strategy), said in an interview that the plan is to use the funding to continue fueling the startup’s growth, with a specific focus on the U.S. market.

To that end, Brian Lesser — the founder and former CEO of Xandr (AT&T’s adtech business that is now a part of AT&T’s WarnerMedia), and previous to that the North American CEO of GroupM — is joining the company as executive chairman. Lesser had originally led Xandr’s investment into InfoSum and had previously been on the board of the startup.

InfoSum got its start several years ago as CognitiveLogic, founded at a time when Halstead was first getting his head around two increasingly urgent questions: how companies were using data, and how newer information architectures built on data warehousing and cloud computing could help solve the problems that use created.

“I saw the opportunity for data collaboration in a more private way, helping enable companies to work together when it came to customer data,” he said. This eventually led to the company releasing its first product two years ago.

In the interim, that trend, he noted, has only gained momentum: companies like Snowflake have disrupted the world of data warehousing, cookies have increasingly fallen out of favor (and, some believe, will disappear altogether over time) and the concept of federated architecture has become far more ubiquitous, applied to identity management and other areas.

All of this means that InfoSum’s solution today may be aimed at martech, but it is something that affects a number of industries. Indeed, the decision to focus on marketing technology, he said, was partly because that is the industry that Halstead worked most closely with at DataSift, although the plan is to expand to other verticals as well.

“We’ve done a lot of work to change the marketing industrial complex,” said Lesser, “but its bigger use cases are in areas like finance and healthcare.”