Writing Malware Traffic Decrypters for ISFB/Ursnif

The Zero2Hero malware course continues with Daniel Bunce explaining how to decrypt communication traffic between an attacker’s C2 and an endpoint infected with ISFB/Ursnif malware.

Carrying on from last week’s topic of writing malware configuration extractors for ISFB/Ursnif, this week we will be taking a look at writing a traffic decrypter for ISFB. Our aim is to pass a binary and a PCAP as arguments and decrypt the traffic to get access to downloaded payloads, received commands, and more.

Traffic decrypters are very useful when dealing with a prior infection, as they allow the analyst to understand what data was received from and sent to the C2 server. The only downside is that a packet capture is obviously required to get a full overview of what occurred.

In this post, I will be using the Ursnif payload and corresponding PCAP from the Malware Traffic Analysis site, which you can find here.

Summary of the Network Protocol

In this post, I won’t be covering the reverse engineering of the network protocol; however, I will sum it up.

  1. The payload sends an initial GET request to the C2, typically pointing to the directory /images/ with a long string of Base64 encoded, Serpent-CBC encrypted data containing information about the PC and implant.
  2. If the C2 is online, it will reply with a chunk of Base64 encoded and Serpent-CBC encrypted data. The last 64 bytes, however, are not Serpent-CBC encrypted and are in fact encrypted using RSA. Upon decoding and decrypting this using the RSA key embedded in the executable (pointed to by the JJ structure we discussed last time), we are left with data following a similar structure as seen below.

    image of struct last block 64
  3. Using the Serpent-CBC key, MD5, and Size, the sample will decrypt the response and validate it using the MD5 sum. What the sample does next depends on what was received.
  4. Typically, this is used to download the final stage of ISFB, which will be executed after being downloaded.

So, with that covered, let’s take a look at writing a script to extract and decrypt responses!

Writing main() For Our Traffic Decrypter

The main function only needs to do two things: accept the PCAP and filename as arguments (which can be done very easily with argparse), and then call the functions responsible for gathering packets, extracting the necessary keys, and decrypting the packets.

image of main function
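As a rough sketch of what such a main function could look like (the argument and function names here are my own placeholders, not necessarily those used in the original script):

```python
import argparse

def build_argparser():
    # Hypothetical argument names for illustration
    parser = argparse.ArgumentParser(
        description="ISFB/Ursnif traffic decrypter")
    parser.add_argument("sample", help="path to the ISFB executable")
    parser.add_argument("pcap", help="path to the packet capture")
    return parser

def main():
    args = build_argparser().parse_args()
    # The three stages described in this post, in order:
    # responses = parse_packet(args.pcap)
    # rsa_key, serpent_key = extract_keys(args.sample)
    # decrypt_communication(responses, rsa_key, serpent_key)

if __name__ == "__main__":
    main()
```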

With that function complete, let’s move onto scanning the PCAP for suspicious packets that could be responses from the C2 server.

Parsing PCAP Files With parse_packet()

In order to parse the given PCAP I will be using the Scapy module as it contains tools allowing us to easily locate and identify whether a packet is from a possible C2. We can read in the PCAP using rdpcap(), which will store each packet in a list allowing us to loop through it, checking each packet for certain signs. 

Firstly, we can filter for the packets that contain some form of raw data inside them. We can then load this raw data into a variable for further parsing. 

The raw data contains the headers of the packet and the chunk of data sent/received, so we can split it on the \n delimiter to search specific lines for values. In the first loop seen in the image, we check for GET requests pointing to the /images/ directory; if a packet matches the condition, the destination IP address is appended to the list suspect_c2. Due to some false positives I had in this script, specifically packets containing the strings “GET” and “/images/” because Google searches had contaminated the PCAP, I added an additional check for the string “google.com” to prevent false IPs from being added to the list.

image of parse packet function
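The first loop boils down to something like the sketch below. To keep it self-contained I represent each packet as a (destination IP, raw payload) pair; with Scapy you would iterate over rdpcap() and pull the equivalent fields from pkt[IP].dst and pkt[Raw].load.

```python
def find_c2_ips(packets):
    """packets: iterable of (dst_ip, raw_payload) pairs, standing in
    for scapy packets (dst_ip ~ pkt[IP].dst, raw ~ pkt[Raw].load)."""
    suspect_c2 = []
    for dst_ip, raw in packets:
        for line in raw.split(b"\n"):
            # GET request to /images/, excluding Google-search noise
            if (b"GET" in line and b"/images/" in line
                    and b"google.com" not in raw):
                if dst_ip not in suspect_c2:
                    suspect_c2.append(dst_ip)
    return suspect_c2
```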

With a list of suspicious IPs in hand, we can now use these to get the C2 responses. These two loops can be compressed into one; however, I have separated them to improve readability. So, this next loop will once again loop through each of the packets; however, it will only search for packets that are from any of the IPs inside of the suspect_c2 list. 

If a valid response is found, it will be loaded into the variable response, and then split once again on \n. Using this, we check for the HTTP 200 response, meaning the C2 is online. We then search for the string “PHPSESSID” inside the headers, as this is usually present in ISFB responses (at least for version 2).

We then check whether a null byte is present in the C2 response: this is to prevent overlapping responses. Looking at the PCAP, once the first GET request is made to the C2 server and a response is received, the sample then queries favicon.ico, which contains raw binary data that is not part of the previous response. If we did not search for a null byte in the packet, that raw binary data would simply be appended to the Base64 encoded data, because “PHPSESSID” is not present in the packet headers. The reason we append data when a packet lacks the correct headers is that the response is extremely large, meaning it is sent in chunks of data which we must append together to get the full response.

image of http response

Once a list of packets has been created, we return from the function, but before doing so, we add whatever is stored in data to the responses list. This is done as the final packet that matches all the conditions will not be added to the list as the loop will simply exit.
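Putting those conditions together, the response-gathering loop might look roughly like this (again using (source IP, raw payload) pairs as a stand-in for Scapy packets, and assuming headers and body are separated by a blank \r\n\r\n line):

```python
def collect_responses(packets, suspect_c2):
    """packets: iterable of (src_ip, raw_payload) pairs
    (src_ip ~ pkt[IP].src)."""
    responses = []
    data = b""
    for src_ip, raw in packets:
        if src_ip not in suspect_c2:
            continue
        first_line = raw.split(b"\n", 1)[0]
        if b"200" in first_line and b"PHPSESSID" in raw:
            # Start of a new response: flush the previous one, if any
            if data:
                responses.append(data)
            data = raw.split(b"\r\n\r\n", 1)[-1]  # strip HTTP headers
        elif data and b"\x00" not in raw:
            # Continuation chunk of a large, multi-packet response;
            # null bytes mean raw binary (e.g. favicon.ico), so skip those
            data += raw
    if data:
        # The final matching response never hits the flush above
        responses.append(data)
    return responses
```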

Now that we have a list of suspicious packets, let’s move over to extracting both the RSA key and Serpent key from the executable!

How to Extract RSA & Serpent Keys

This function will be very similar to the configuration extractor, as the RSA key is stored in one blob of data and the Serpent key in another, both of which are pointed to by the JJ structures we looked at last time. The main difference is that this time we also need to extract and parse the configuration, so I will focus on that.

Once we have located the offsets of the blobs and extracted them, the size of each blob is checked to see whether it equals 132 bytes (0x84), as this is the typical length of the RSA key stored in the binary. If the length does not match, we call the function parse_config(), passing the APLib-decompressed blob as an argument.

image of extract key from executable function
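The size check amounts to something like this small sketch, where the blobs are assumed to have already been located via the JJ structures and APLib-decompressed, and parse_config is the routine described next:

```python
RSA_BLOB_SIZE = 0x84  # 132 bytes: the typical embedded RSA key length

def split_key_blobs(blobs, parse_config):
    """blobs: the decompressed data blobs located via the JJ structures;
    parse_config: callable that digs the Serpent key out of the config."""
    rsa_key = serpent_key = None
    for blob in blobs:
        if len(blob) == RSA_BLOB_SIZE:
            rsa_key = blob
        else:
            serpent_key = parse_config(blob)
    return rsa_key, serpent_key
```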

The config parser function is fairly simple. The configuration stored in the binary (after decompression) contains information such as C2 addresses, any DGA URLs, DNS servers to utilize, and also a Serpent key which is used to encrypt the packets sent out. In this case, as we are not looking at decrypting any of the GET requests, it is not vital to have, although if we wanted to see what data was transmitted to the C2, it is required. An example of the config can be seen below. 

image config hexdump

Looking at the image, you’ll notice the strings in the bottom half of the configuration, but you might be wondering what the top half is supposed to be. Well, this is actually a lookup table used by the sample to retrieve specific values. The first two DWORDs in the image shown above are skipped, and then the table begins. The structure of the lookup table values can be seen below and is fairly simple. We are mainly interested in the first three DWORDs as those are the important values. 

image of lookup table struct

Essentially, what happens here is that we loop through the lookup table, unpacking the four DWORDs (including the UID) into four different variables, and using the value in Flags to determine whether the value is a direct pointer to the string or must be added to the current position.

From there, it will check if the CRC hash stored in Name is found in the dictionary containing CRC hashes, which can be seen below. If it is located in the dictionary, it will check the value and see if it matches the string “key”. If it does, the value will be returned and used as the Serpent-CBC key. Otherwise, it will continue to parse the table. More information about this routine and ISFB in general can be found in this paper written by Maciej Kotowicz.

image of parse_config function

image of crc table struct

Now, with both the extracted RSA key and Serpent-CBC key, we can start decrypting the packets!

Functions for Decrypting Suspicious Packets

We’ll now write the final three functions we need to complete our malware traffic decrypter script. The decrypt_communication() function is fairly simple. First, we check to see if each packet in the list of suspicious packets is base64 encoded by checking for padding at the end.

image of decrypt communication function

If it is, we base64 decode it and store the last 64 bytes in a variable, which is then passed into RSA_Decrypt_Last_Block().

image or rsa decrypt last block function

The packet is then stripped of the last 64 bytes, as they are no longer needed. Then, the size returned by the RSA decrypt function is converted to an integer, and if it is less than 3000, the size is altered to be the full size of the packet. The reason for this is that on the smaller packets sent from the C2 server, the decryption script fails to decrypt the entirety of the data, so to fix this we can simply choose to decrypt the entire packet.

From there, we pass the data into Serpent_Decrypt_Packet(), which decrypts the data, MD5 hashes it, and compares the resulting hash to the hash in the RSA-decrypted block.

image of serpent decrypt packet function

Regardless of whether or not the hashes match, it will dump out the data to a file.
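The overall flow of these three functions can be sketched as below. The actual RSA and Serpent-CBC routines are abstracted away as injected callables, and the dict returned by rsa_decrypt is my own stand-in for the last-block structure shown earlier (Serpent key, MD5, size):

```python
import base64
import hashlib

def decrypt_response(packet, rsa_decrypt, serpent_decrypt):
    """packet: one base64 blob from the PCAP. rsa_decrypt and
    serpent_decrypt wrap the RSA and Serpent-CBC routines (not shown);
    rsa_decrypt is assumed to return the last-block fields as a dict."""
    if not packet.endswith(b"="):
        return None  # no base64 padding: probably not a C2 response
    raw = base64.b64decode(packet)
    last = rsa_decrypt(raw[-64:])  # Serpent key, MD5 and size live here
    body = raw[:-64]               # the RSA block is no longer needed
    size = last["size"]
    if size < 3000:
        size = len(body)  # small replies: just decrypt the whole packet
    plain = serpent_decrypt(body[:size], last["key"])
    ok = hashlib.md5(plain).digest() == last["md5"]
    return plain, ok  # dump either way; ok flags a hash mismatch
```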

Executing the Traffic Decrypter Script

Upon executing the script (as long as no issues are raised), the payloads should have successfully been dumped! 

Interestingly, one of the packets failed to decrypt, and performing an RSA decryption of the final 64 bytes yielded a strange result, completely different to the first decrypted packet. This could be due to it coming from a different sample of Ursnif, or due to a parsing issue in my script, although in that case there would have been issues with the first and third packets as well, which there were not.

output of function

output of function 2

Wrapping Up…

So! That brings us to the end of this blog post. I hope you have been able to learn something new from it! If you are interested in trying to replicate this decrypter yourself, you can find the Python implementation of Serpent-CBC encryption/decryption here. If you’ve completed the traffic decrypter for this version of ISFB, why not try writing one for version 3? You’ll have to change up the extraction a bit, as version 3 uses a more complex method of storing the keys, but it’ll be a good challenge!


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Zoho launches Catalyst, a new developer platform with a focus on microservices

Zoho may be one of the most underrated tech companies. The 23-year-old company, which at this point offers more than 45 products, has never taken outside funding and has no ambition to go public, yet it’s highly profitable and runs its own data centers around the world. And today, it’s launching Catalyst, a cloud-based developer platform with a focus on microservices that it hopes can challenge those of many of its larger competitors.

The company already offered a low-code tool for building business apps. But Catalyst is different. Zoho isn’t following in the footsteps of Google or Amazon here and offering a relatively unopinionated platform for running virtual machines and containers. Indeed, it does nothing of the sort. The company is 100% betting on serverless as the next major technology for building enterprise apps and the whole platform has been tuned for this purpose.

Catalyst Zia AI

“Historically, when you look at cloud computing, when you look at any public clouds, they pretty much range from virtualizing your servers and renting our virtual servers all the way up the stack,” Raju Vegesna, Zoho’s chief evangelist, said when I asked him about this decision to bet on serverless. “But when you look at it from a developer’s point of view, you still have to deal with a lot of baggage. You still have to figure out the operating system, you still have to figure out the database. And then you have to scale and manage the updates. All of that has to be done at the application infrastructure level.” In recent years, though, said Vegesna, the focus has shifted to the app logic side, with databases and file servers being abstracted away. And that’s the trend Zoho is hoping to capitalize on with Catalyst.

What Catalyst does do is give advanced developers a platform to build, run and manage event-driven microservice-based applications that can, among other things, also tap into many of the tools that Zoho built for running its own applications, like a grammar checker for Zoho Writer, document previews for Zoho Drive or access to its Zia AI tools for OCR, sentiment analysis and predictions. The platform gives developers tools to orchestrate the various microservices, which obviously means it’ll make it easy to scale applications as needed, too. It integrates with existing CI/CD pipelines and IDEs.

Catalyst Functions

Catalyst also complies with the SOC Type II and ISO 27001 certifications, as well as GDPR. It also offers developers the ability to access data from Zoho’s own applications, as well as third-party tools, all backed by Zoho’s Unified Data Model, a relational datastore for server-side and client deployment.

“The infrastructure that we built over the last several years is now being exposed,” said Vegesna. He also stressed that Zoho is launching the complete platform in one go (though it will obviously add to it over time). “We are bringing everything together so that you can develop a mobile or web app from a single interface,” he said. “We are not just throwing 50 different disparate services out there.” At the same time, though, the company is also opting for a very deliberate approach here with its focus on serverless. That, Vegesna believes, will allow Zoho Catalyst to compete with its larger competitors.

It’s also worth noting that Zoho knows that it’s playing the long-game here, something it is familiar with, given that it launched its first product, Zoho Writer, back in 2005 before Google had launched its productivity suite.

Catalyst Homepage

 

Pendo scores $100M Series E investment on $1 billion valuation

Pendo, the late-stage startup that helps companies understand how customers are interacting with their apps, announced a $100 million Series E investment today on a valuation of $1 billion.

The round was led by Sapphire Ventures . Also participating were new investors General Atlantic and Tiger Global, and existing investors Battery Ventures, Meritech Capital, FirstMark, Geodesic Capital and Cross Creek. Pendo has now raised $206 million, according to the company.

Company CEO and co-founder Todd Olson says that one of the reasons they need so much money is they are defining a market, and the potential is quite large. “Honestly, we need to help realize the total market opportunity. I think what’s exciting about what we’ve seen in six years is that this problem of improving digital experiences is something that’s becoming top of mind for all businesses,” Olson said.

The company integrates with customer apps, capturing user behavior and feeding data back to product teams to help prioritize features and improve the user experience. In addition, the product provides ways to help those users either by walking them through different features, pointing out updates and new features or providing other notes. Developers can also ask for feedback to get direct input from users.

Olson says early on its customers were mostly other technology companies, but over time they have expanded into lots of other verticals, including insurance, financial services and retail, and these companies are seeing digital experience as increasingly important. “A lot of this money is going to help grow our go-to-market teams and our product teams to make sure we’re getting our message out there, and we’re helping companies deal with this transformation,” he says. Today, the company has more than 1,200 customers.

While he wouldn’t commit to going public, he did say it’s something the executive team certainly thinks about, and it has started to put the structure in place to prepare should that time ever come. “This is certainly an option that we are considering, and we’re looking at ways in which to put us in a position to be able to do so, if and when the markets are good and we decide that’s the course we want to take.”

When Card Shops Play Dirty, Consumers Win

Cybercrime forums have been abuzz this week over news that BriansClub — one of the underground’s largest shops for stolen credit and debit cards — has been hacked, and its inventory of 26 million cards shared with security contacts in the banking industry. Now it appears this brazen heist may have been the result of one of BriansClub’s longtime competitors trying to knock out a rival.

And advertisement for BriansClub that for years has used my name and likeness to peddle stolen cards.

Last month, KrebsOnSecurity was contacted by an anonymous source who said he had the full database of 26M cards stolen from BriansClub, a carding site that has long used this author’s name and likeness in its advertising. The stolen database included cards added to the site between mid-2015 and August 2019.

This was a major event in the underground, as experts estimate the total number of stolen cards leaked from BriansClub represent almost 30 percent of the cards on the black market today.

The purloined database revealed BriansClub sold roughly 9.1 million stolen credit cards, earning the site and its resellers a cool $126 million in sales over four years.

In response to questions from KrebsOnSecurity, the administrator of BriansClub acknowledged that the data center serving his site had been hacked earlier in the year (BriansClub claims this happened in February), but insisted that all of the cards stolen by the hacker had been removed from BriansClub store inventories.

However, as I noted in Tuesday’s story, multiple sources confirmed they were able to find plenty of card data included in the leaked database that was still being offered for sale at BriansClub.

Perhaps inevitably, the admin of BriansClub took to the cybercrime forums this week to defend his business and reputation, re-stating his claim that all cards included in the leaked dump had been cleared from store shelves.

The administrator of BriansClub, who’s appropriated the name and likeness of Yours Truly for his advertising, fights to keep his business alive.

Meanwhile, some of BriansClub’s competitors gloated about the break-in. According to the administrator of Verified, one of the longest running Russian language cybercrime forums, the hack of BriansClub was perpetrated by a fairly established ne’er-do-well who uses the nickname “MrGreen” and runs a competing card shop by the same name.

The Verified site admin said MrGreen had been banned from the forum, and added that “sending anything to Krebs is the lowest of all lows” among accomplished and self-respecting cybercriminals. I’ll take that as a compliment.

This would hardly be the first time some cybercriminal has used me to take down one of his rivals. In most cases, I’m less interested in the drama and more keen on validating the data and getting it into the proper hands to do some good.

That said, if the remainder of BriansClub’s competitors want to use me to take down the rest of the carding market, I’m totally fine with that.

The BriansClub admin, defending the honor of his stolen cards shop after a major breach.

Cyber Insurance Is No Substitute For Robust Cybersecurity Systems

Cyber insurance is often hailed as the “silver Bullet” that will solve all the cyber security issues for organizations. It appears to be a simple, elegant solution. If an attack occurs, call your insurance company, pay a small sum and let the insurance company deal with the fallout.

image cyber insurance no substitute

The Cyber Insurance Industry Is Growing

Indeed, the uptake of cyber insurance and the willingness to claim have both increased. The cyber insurance market is expected to grow at a CAGR of 26.5 percent from 2019 to 2028.

Recently, AIG said that cyber-insurance claims nearly doubled between 2017 and 2018 and that they received more cyber insurance claims last year than in 2016 and 2017 combined.

This is not surprising as cyber insurance policies tend to be very profitable for insurance companies. The way to calculate this is by the “loss ratio” of insurance policies, which is the number of claims paid to customers divided by the premiums charged by insurers. In 2016, the loss ratio on cyber insurance policies was 46%. By 2017, that figure dropped to 32%, meaning that for every $1 million in premiums that customers pay each year, insurance companies pay out just $320,000. That represents a 68% profitability rate!

Perceived Benefits Of Cyber Security Insurance

Security is a top driver of cyber insurance adoption, with 71% of organizations purchasing cybersecurity insurance as a precautionary measure, while 44% cited an increased priority on cybersecurity as the reason they bought a policy.

Most executives who buy cyber insurance are confident that it will pay off in case of an incident. A survey of 105 CFOs at enterprise-scale companies with annual revenue of at least $1 billion found that 71% in total felt that they were adequately covered in the event of a cybersecurity incident. 45% expected their cyber insurance provider to cover most of their losses in the event of a breach, and 26% expected the provider to cover their losses in full.

But The Truth Might Be Different

However, even if activating one’s insurance seems like the simplest solution, it might not be a viable option for long. Insurance companies will employ greater sophistication in their evaluation and response mechanisms and will reject more claims on the basis of the client’s inadequacy, the identity of the attackers or the methods used.

One example of this occurred when Mondelez International, maker of Oreo cookies, lost access to its logistics software after a NotPetya attack in 2017. Recovery took weeks as the company piled up losses in excess of $100 million, according to court documents reported in the media. Mondelez’s cyber insurance claims were denied on the grounds that the attack was an act of war by a foreign government rather than a criminal act perpetuated by individuals: a standard insurance clause that exempts insurers from covering damages caused by war.

In another case, insurance company Hiscox refused to pay their client, DLA Piper, following a devastating ransomware attack that wiped out systems at DLA Piper and cost the firm 15,000 hours of extra overtime for IT staff. That overtime amounted to several million pounds in wages. Hiscox claimed that DLA Piper did not have a cybersecurity-specific policy and that their “generic” insurance doesn’t cover this kind of damage.

The case of Everest National Insurance Company vs. National Bank of Blacksburg in Virginia is even more concerning. After a major cyber attack resulted in a cyber breach and significant operational downtime, the bank filed a cyber claim with its insurance company in the amount of $2.4 million. After investigating the claim, however, the insurance company only agreed to pay $50,000. The case went to court at the beginning of 2019 and proceedings are ongoing.

More recently, insurance company AIG refused to compensate a client for cyber-induced losses, claiming it is not required to pay for losses resulting from criminal activity. Hackers stole $5.9 million from a New York-based outfit in 2016 by sending phishing emails to company employees from spoofed email addresses requesting monetary transfers. Insurance company AIG says its policy stipulates that the insurer will not cover losses stemming from criminal activity and refused to pay for the loss. The case is now being discussed in court.

Moreover, it is clear that insurers will find additional ways to avoid payment. In particular, it seems that insurers will focus on human error as a reason to refuse payment. Since humans are involved in almost all data breaches, it would be easy to cite “the human factor” as a cause of the incident and refuse payment on the basis of negligence (or malifense). Even when payment would be unavoidable, it is likely that insurers will do their outmost to minimise their exposure to cybersecurity claims and limit payouts to particular sub-limits for losses – leaving victims well short of the total coverage provided by their policies.

Conclusion

These incidents, along with the fact that many clients don’t understand the intricacies of cyber insurance policies, provide some concrete evidence that cyber insurance is not to be taken at face value. At most, it can be seen as an external budget meant to offset the costs of breach. It should not, under any circumstance, be seen as a replacement for a robust security posture, which requires modern cybersecurity technologies, trained teams and tested procedures. Failure to reach required security levels will make it easier for insurers to refuse payment, making the investment in cyber insurance both redundant and costly.


Like this article? Follow us on LinkedIn, Twitter, YouTube or Facebook to see the content we post.

Read more about Cyber Security

Databricks brings its Delta Lake project to the Linux Foundation

Databricks, the big data analytics service founded by the original developers of Apache Spark, today announced that it is bringing its Delta Lake open-source project for building data lakes to the Linux Foundation under an open governance model. The company announced the launch of Delta Lake earlier this year, and, even though it’s still a relatively new project, it has already been adopted by many organizations and has found backing from companies like Intel, Alibaba and Booz Allen Hamilton.

“In 2013, we had a small project where we added SQL to Spark at Databricks […] and donated it to the Apache Foundation,” Databricks CEO and co-founder Ali Ghodsi told me. “Over the years, slowly people have changed how they actually leverage Spark and only in the last year or so it really started to dawn upon us that there’s a new pattern that’s emerging and Spark is being used in a completely different way than maybe we had planned initially.”

This pattern, he said, is that companies are taking all of their data and putting it into data lakes and then doing a couple of things with this data, machine learning and data science being the obvious ones. But they are also doing things that are more traditionally associated with data warehouses, like business intelligence and reporting. The term Ghodsi uses for this kind of usage is “Lake House.” More and more, Databricks is seeing that Spark is being used for this purpose and not just to replace Hadoop and doing ETL (extract, transform, load). “This kind of Lake House patterns we’ve seen emerge more and more and we wanted to double down on it.”

Spark 3.0, which is launching today soon, enables more of these use cases and speeds them up significantly, in addition to the launch of a new feature that enables you to add a pluggable data catalog to Spark.

Delta Lake, Ghodsi said, is essentially the data layer of the Lake House pattern. It brings support for ACID transactions to data lakes, scalable metadata handling and data versioning, for example. All the data is stored in the Apache Parquet format and users can enforce schemas (and change them with relative ease if necessary).

It’s interesting to see Databricks choose the Linux Foundation for this project, given that its roots are in the Apache Foundation. “We’re super excited to partner with them,” Ghodsi said about why the company chose the Linux Foundation. “They run the biggest projects on the planet, including the Linux project but also a lot of cloud projects. The cloud-native stuff is all in the Linux Foundation.”

“Bringing Delta Lake under the neutral home of the Linux Foundation will help the open-source community dependent on the project develop the technology addressing how big data is stored and processed, both on-prem and in the cloud,” said Michael Dolan, VP of Strategic Programs at the Linux Foundation. “The Linux Foundation helps open-source communities leverage an open governance model to enable broad industry contribution and consensus building, which will improve the state of the art for data storage and reliability.”

Canva, now valued at $3.2 billion, launches an enterprise product

Canva, the Australian-based design tool maker, has today announced that it has raised an additional $85 million to bring its valuation to $3.2 billion, up from $2.5 billion in May.

Investors in the company include Mary Meeker’s Bond, General Catalyst, Bessemer Venture Partners, Blackbird and Sequoia China.

Alongside the new funding and valuation, Canva is also making its foray into enterprise with the launch of Canva for Enterprise.

Thus far, Canva has offered users a lightweight tool set for creating marketing and sales decks, social media materials and other design products mostly unrelated to product design. The idea here is that, outside of product designers, the rest of the organization is often left behind with regards to keeping brand parity in the materials they use.

Canva is available for free for individual users, but the company has addressed the growing need within professional organizations to keep brand parity through Canva Pro, a premium version of the product available for $12.95/month.

The company is now extending service to organizations with the launch of Canva for Enterprise. The new product will not only offer a brand kit (Canva’s parlance for Design System), but will also offer marketing and sales templates, locked approval-based workflows and even hide Canva’s massive design library within the organization so employees only have access to their approved brand assets, fonts, colors, etc.

Canva for Enterprise also adds another layer of organization, allowing collaboration across comments, a dashboard to manage teams and assign roles, and team folders.

“We’re in a fortunate place because the market has been disaggregated,” said Canva CEO and founder Melanie Perkins. “The way we think about the pain point consumers have is that people are being inconsistent with the brand, and there are huge inefficiencies within the organization, which is why people have been literally asking us to build this exact product.”

More than 20 million users sign in to Canva each month across 190 countries, with 85% of Fortune 500 companies using the product, according to the company.

Perkins says the ultimate goal is to have every person in the world with access to the internet and a design need to be on the platform.

Autify raises $2.5M seed round for its no-code software testing platform

Autify, a platform that makes testing web application as easy as clicking a few buttons, has raised a $2.5 million seed round from Global Brain, Salesforce Ventures, Archetype Ventures and several angels. The company, which recently graduated from the Alchemist accelerator program for enterprise startups, splits its base between the U.S., where it keeps an office, and Japan, where co-founders Ryo Chikazawa (CEO) and Sam Yamashita got their start as software engineers.

The main idea here is that Autify, which was founded in 2016, allows teams to write tests by simply recording their interactions with the app with the help of a Chrome extension, then having Autify run these tests automatically on a variety of other browsers and mobile devices. Typically, these kinds of tests are very brittle and quickly start to fail whenever a developer makes changes to the design of the application.

Autify gets around this by using some machine learning smarts that give it the ability to know that a given button or form is still the same, no matter where it is on the page. Users can currently test their applications using IE, Edge, Chrome and Firefox on macOS and Windows, as well as a range of iOS and Android devices.
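To illustrate the general idea (this is a hypothetical sketch, not Autify's actual algorithm), a recorded element can be located by attribute similarity rather than a fixed selector, so a test keeps working even when the element moves or is partially restyled:

```python
# Sketch: locate a recorded UI element by fuzzy attribute matching
# instead of a brittle, fixed XPath/CSS selector.

def similarity(recorded, candidate):
    """Fraction of attributes on which the two elements agree."""
    keys = set(recorded) | set(candidate)
    if not keys:
        return 0.0
    matches = sum(1 for k in keys if recorded.get(k) == candidate.get(k))
    return matches / len(keys)

def find_element(recorded, page_elements, threshold=0.5):
    """Return the best-matching element on the page, or None."""
    best = max(page_elements, key=lambda el: similarity(recorded, el))
    return best if similarity(recorded, best) >= threshold else None

# The "Submit" button as it was recorded during the original session...
recorded = {"tag": "button", "text": "Submit", "class": "btn-primary"}

# ...and the page after a redesign: the button's class changed, but it
# is still recognisably the same element.
page = [
    {"tag": "a", "text": "Home", "class": "nav"},
    {"tag": "button", "text": "Submit", "class": "btn-new"},
]

match = find_element(recorded, page)
```

A fixed selector such as `button.btn-primary` would fail here, while the similarity-based lookup still resolves to the renamed button; a production system would presumably weigh many more signals (position, ancestors, visual appearance) than this toy score does.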

Scenario Editor

Chikazawa tells me that the main idea of Autify is based on his own experience as a developer. He also noted that many enterprises are struggling to hire automation engineers who can write tests for them, using Selenium and similar frameworks. With Autify, any developer (and even non-developer) can create a test without having to know the specifics of the underlying testing framework. “You don’t really need technical knowledge,” explained Chikazawa. “You can just out of the box use Autify.”

There are obviously some other startups that are also tackling this space, including SpotQA, for example. Chikazawa, however, argues that Autify is different, given its focus on enterprises. “The audience is really different. We have competitors that are targeting engineers, but because we are saying that no coding [is required], we are selling to the companies that have been struggling with hiring automating engineers,” he told me. He also stressed that Autify is able to do cross-browser testing, something that’s also not a given among its competitors.

The company introduced its closed beta version in March and is currently testing the service with about a hundred companies. It integrates with development platforms like TestRail, Jenkins and CircleCI, as well as Slack.


&Open helps businesses distribute gifts to reward customer loyalty

&Open is a startup with an unusual name, and one that fills an unusual niche in the business world. It has built a gift-giving platform, so that businesses can reward loyalty with a small token of appreciation. The gift depends on the business and the circumstances, but it could be something like a book or a tea towel and a recipe.

Co-founder and CEO Jonathan Legge says the Dublin-based startup fits most easily in the corporate gift-giving category, but he sees the company handling much more than that. “We are more about gifting for loyalty and customer retention. We grew out of a B2C operation in which we got visibility on this market, and then quickly evolved &Open to fulfill this market,” Legge explained.

In fact, the company developed out of a business Legge had prior to launching &Open, producing high-end gifts. As part of that business, he was finding that he would get requests from CMOs of big companies like Google, Airbnb and Jameson’s to develop gifts for their events. From that, Legge saw the potential for a full-fledged business based on that idea and he launched &Open.

He sees a world in which transactions increasingly take place in the digital realm, yet consumers still crave physical interactions with businesses beyond an email or a text thanking them. That’s where &Open can help.

“We’re filling the space of helping businesses connect with their customers and showing they care, and not by kind of devaluing their own product and putting on sales. It’s more working with the customer support team, the loyalty team or the marketing team to watch the life cycle of the customer and make sure they’re being gifted at key moments in the life cycle and within their journey with a brand,” he said.

He says this definitely is not swag like you would get at a conference, but something more personal that shows the brand cares about the customer. Nor is it a set of generic gifts that every &Open customer can select from. Instead it’s a catalog it creates with each one to reflect that brand’s values.

&Open welcome screen

Image: &Open

“We will design a catalog of gifts for our clients, and then they will be grouped into subsets of situations based on price. For Airbnb, the gift set could depend on whether it’s for a host or guest, and there’s different gifts within those situations. So for a host, it will be more stuff for the home such as a recipe book, a tea towel with a recipe or a guest book,” Legge said.

The company has been around since 2017 and is already in 52 countries. To make this all work, it has developed a three-part system. In addition to building a custom catalog for each brand, it has a logistics component to distribute the gift and make sure it has been delivered, and finally a technology platform that brings these different systems together.

The way it works for most customers is that the customer service team or the social media team will see situations where they think a gift is warranted, and they will log into the &Open system and choose a gift based on whatever the circumstances are — such as an apology for bad service or a reward for loyalty.

Today, the company has 25 employees, most of whom are in Dublin. The company is self-funded so far and has not sought outside investment.

Secrets of Evaluating Security Products | An Intro by Phat Hobbit

I’ve been watching the anti-malware industry for a long time as a user, service provider and now cyber security industry analyst. For many years, AV software was the only line of cyber defence and most of the folks I know from “back in the day” have a love-hate relationship with their anti-virus provider. Les Correia and Migo Kedem are both heavyweights in the anti-malware industry and their collaborative efforts have produced the book you can start reading now. To be honest, I’m not even sure folks read introductions anymore as the sentiment is “why bother when it’s really the content I am after?”

Fair point, but if you are reading this now, I think it’s time to explain just how important this book is from a business perspective. For me, I believe we have evolved from a very two-dimensional relationship with cyberspace to a three-dimensional cyber environment. Most small & medium businesses were architected along the lines of “inside the firewall” = things I need to care about and “outside the firewall” = things I don’t need to care about: a very 2-D space. Along came SaaS and cloud hosting and we find ourselves plunged into the deep end of the pool. The third dimension is really about “someone else’s computer that I need to care about”. Those in enterprise architecture have been living in this 3-D space for some time with datacentre architecture, but there is a new twist: “someone else’s computer that I need to care about + which I have limited control over”.

Across the spectrum of business, no one can argue that SaaS and cloud hosting haven’t become a major focus of organizational direction, pushing IT departments large and small. This evolution has created immense complications for cyber security, ranging from beliefs that “cloud is secure by default” to “we don’t have any skills or tools suitable for secure cloud adoption”. What I think is universally true is that endpoints – specifically user workstations and devices – are the targets of cyber criminals as they provide access to data and resources no matter where they are in cyberspace.

In the 3-D cyber world we have an additional complication which was unanticipated. Those user workstations and devices quite frequently escape the protection of “inside the firewall” as an increasingly mobile workforce accesses data and resources from anywhere that is connected – home, public spaces and work. If protecting data and resources in the cloud was a profound challenge, imagine global companies trying to secure endpoints located across the planet. 

This book seeks to educate and assist the reader in understanding the current and future situation in cyberspace; it provides advice and counsel on how to prepare your organization for the future. The cyber-criminal problem will not be vanquished anytime soon. In fact, it may be the highest growth “industry” related to the Internet. In a conversational style and business-friendly language, Les and Migo provide wisdom and strategy to address the business issue of “24/7/365 connection requires 24/7/365 protection”. In major data breach investigations, the culprit is frequently a missed patch, unpatched vulnerability or configuration mistake that facilitates a malicious actor’s entry into the network. Imagine a solution that gives you confidence in protection while you take the time to test and deploy patches to your endpoints.

My belief is that most organizations now understand that robust cyber security enhances brand reputation, protects profitability and facilitates growth. This book makes the case that robust endpoint defence is an important part of the cyber security strategy to support the organization’s strategic goals. A data breach is a stressful and costly ordeal for any organization to endure. Perhaps it’s time to read some expert advice on the subject and adapt your endpoint defence strategy accordingly.

Enjoy the Read!
Ian Thornton-Trump (Phat_Hobbit)
London, UK.

