Google Calendar now plays nicer with Microsoft Exchange

 Google today announced a small but important update to how Google Calendar and Microsoft Exchange can work together going forward. It’s not unusual for a company to still use both Google’s G Suite tools and Microsoft Exchange in parallel, and with this update, G Suite admins can now allow their users to see real-time free/busy information across the two systems. This means tools… Read More

Scientists Prove They Can Get Passwords From Brainwaves

It just may be the ultimate hack: stealing passwords from your brainwaves. It’s something that has moved beyond the realm of science fiction and into the realm of possibility, according to a joint research study conducted by researchers at the University of Alabama at Birmingham and the University of California, Riverside.

The research team tested 12 subjects, each wearing an EEG headset, a piece of gear increasingly common in video gaming. The subjects were asked to input a string of randomly generated passwords and PINs via their keyboards. While they were doing this, the researchers studied the brainwave patterns captured by the headsets and were able to correctly deduce the PINs 46.5 percent of the time and the passwords 37.3 percent of the time.

If you think this is something that only gamers have to worry about, though, think again. Game developers have rushed to embrace the technology, and other industries aren't far behind. Right now, you can buy an Emotiv headset and use it to do everything from controlling a wheelchair with your brain (a huge mobility assist for those who are partially or completely paralyzed) to piloting drones via thought, turning lights on and off, and a whole host of other things. Opening and using enterprise applications can't be far behind.

While it's true that the percentages mentioned above aren't stellar, bear in mind that this was the researchers' first attempt. As they improve their algorithm, and as the technology itself continues to advance, you can expect those percentages to climb dramatically. As the software becomes mainstream in more and more industries, it's just a matter of time before the hackers of the world take notice, which brings up a tough question.

How can we prevent our brains from being hacked? So far, no one has any answers.

Used with permission from Article Aggregator

Gas pump card skimmer now phones home

 In an unsurprising move by credit card thieves, police have found a new credit card skimmer that sends stolen data via SMS. By tearing apart cheap phones, crooks are able to send credit card information to their location instantly without having to access the skimmer physically or rely on an open Bluetooth connection. Brian Krebs received images of the skimmer from an unnamed source. They… Read More

Measuring the Usefulness of Multiple Models

The past several years have seen a massive increase in products, services, and features which are powered or enhanced by artificial intelligence: voice recognition, facial recognition, targeted advertisements, and so on. In the anti-virus industry, we've seen a similar trend with a push away from traditional, signature-based detection towards fancy machine learning models. Machine learning allows anti-virus companies to leverage large amounts of data and clever feature engineering to build models which can accurately detect malware and which scale much better than manual reverse engineering. Using machine learning and training a model is simple enough that there are YouTube videos like "Build an Antivirus in 5 Minutes". While these home-brew approaches are interesting and educational, building and maintaining an effective, competitive model takes a lot longer than 5 minutes.

To stay competitive with machine learning, one must constantly beef up the training data to include new and diverse samples, engineer better features, refine old features, tune model hyperparameters, research new models, and so on. To this end, I've been researching how to use multiple models to improve detection accuracy. The intuition is that different models may have different strengths and weaknesses, and as the number of models which agree increases, you can be more certain. Using multiple models isn't new. It's a common technique generally agreed to improve accuracy, and we already use it in some places. However, I wanted to quantify how much it can improve accuracy and which combination of models would work best. In this post, I'm going to share my findings and give some ideas for future research.

Selecting the Algorithms

If you want to use multiple models, the first question is which learning algorithms to use. Intuitively, you want strong models which make different mistakes. For example, if one model is wrong about a particular file, you want your other models to not be wrong. In other words, you want to minimize the size of the intersection of the sets of misclassified files between models. In this way, three strong models which make the same mistakes may not perform as well as one strong model with two weaker models which make entirely different mistakes.
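To make the intersection idea concrete, here is a minimal sketch. The sample hashes are made up for illustration; in practice each set would hold the identifiers of files a given model misclassified.

```python
# Hypothetical sample hashes standing in for misclassified files;
# real values would come from each model's evaluation run.
rf_errors = {"a1", "b2", "c3", "d4"}
mlp_errors = {"b2", "e5", "f6"}
et_errors = {"b2", "c3", "g7"}

# Files that every model gets wrong; combining models cannot fix these.
shared_mistakes = rf_errors & mlp_errors & et_errors
print(shared_mistakes)  # {'b2'}
```

The smaller this shared set is relative to each model's own error set, the more a combination of the models can help.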

I picked three models which I hoped would perform differently: random forest, multi-layer perceptron, and extra trees. Random forests (which I've explained previously) and extra trees are both ensemble algorithms which use decision trees as a base estimator, but I figured I could use very different parameters for each and get different results.

For evaluating the performance of multiple models, consider that when a model judges a file, there are four outcomes:

  1. true positive (TP) – file is bad and model says bad
  2. true negative (TN) – file is good and model says good
  3. false positive (FP) – file is good but model says bad
  4. false negative (FN) – file is bad but model says good

In the anti-virus industry, the most important metric for a model is the FP rate. If you detect 100% of malware but have an FP rate of only 0.1%, you'll still be deleting 1 in 1,000 benign files, which will probably break something important (like the operating system). The second most important metric is the TP rate, or detection rate: how many malicious files you detect. These two rates are usually antagonistic; improving the TP rate usually means increasing the FP rate, and vice versa.
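As a quick sketch of how these two rates fall out of the four outcomes above (the counts here are toy numbers, not figures from this post):

```python
def rates(tp, tn, fp, fn):
    # Detection (TP) rate: fraction of malicious files caught.
    tp_rate = tp / (tp + fn)
    # FP rate: fraction of benign files wrongly flagged.
    fp_rate = fp / (fp + tn)
    return tp_rate, fp_rate

# Toy counts for illustration only.
tp_rate, fp_rate = rates(tp=990, tn=99900, fp=100, fn=10)
print(tp_rate, fp_rate)  # 0.99 0.001
```

Even this "good-looking" 0.1% FP rate means 100 benign files flagged out of 100,000, which is why FPs dominate the evaluation.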

Since FPs are so important to avoid, I decided to evaluate model combinations by measuring how much the FP sets overlap. The less they overlap, the better. This isn't very rigorous, but it's fast. I prefer to get quick results, build my intuition, and get more rigorous as I iterate. In a perfect world with unlimited time, CPU power, and memory, I would set up a grid search to find the ideal parameters for a dozen models and then build another grid search to find the ideal way to combine the models. Unfortunately, this could take weeks. By doing some quick experiments, I can find out if the idea is worth pursuing, possibly come up with a better test, and save a lot of time by eliminating certain possibilities.

Building the Models

The training data consisted of features from a wide variety of about 1.7 million executables, about half of which were malicious. The data were vectorized and prepared by removing invariant features, normalizing, scaling, and agglomerating features. Decision trees don't care much about scaling and normalizing, but MLP and other models do. Limiting the number of features to 1000 reduces training time, and previous experiments have shown that it doesn't degrade model performance much. Below is the code for preparing the matrix:

import sklearn as skl
import sklearn.feature_selection
import sklearn.preprocessing
import sklearn.cluster
import gc

# Drop features that are (nearly) constant across samples
variance = skl.feature_selection.VarianceThreshold(threshold=0.001)
matrix = variance.fit_transform(matrix)

normalize = skl.preprocessing.Normalizer(copy=False)
matrix = normalize.fit_transform(matrix)

# toarray() converts the matrix from sparse to dense
scale = skl.preprocessing.RobustScaler(copy=False)
matrix = scale.fit_transform(matrix.toarray())

# Lots of garbage to collect after the sparse-to-dense conversion;
# collecting here may prevent some out-of-memory errors
gc.collect()

# Merge correlated features until 1000 remain
fa = skl.cluster.FeatureAgglomeration(n_clusters=1000)
matrix = fa.fit_transform(matrix)

The random forest (RF), extra trees (ET), and multi-layer perceptron (MLP) models were built using the SKLearn Python library from the prepared matrix.
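A minimal sketch of instantiating the three classifiers with scikit-learn follows. The parameters shown are illustrative assumptions; the post does not list the exact settings used.

```python
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.neural_network import MLPClassifier

# Illustrative, untuned parameters -- not the post's actual configuration.
rf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
et = ExtraTreesClassifier(n_estimators=100, n_jobs=-1)
mlp = MLPClassifier(hidden_layer_sizes=(256, 64))
```

Each model would then be fit on the prepared matrix and labels (e.g. `rf.fit(matrix, labels)`).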

Testing the Models

The strongest-performing model was the random forest, with extra trees coming in a close second. The lackluster MLP performance is likely due to poor hyperparameter tuning, but it could also just be a bad algorithm for this type of problem.

Below is the table of results showing the number of FPs from each model and the number of FPs in the intersection between each model and every other model:

Model    FPs      ∩ RF    ∩ MLP   ∩ ET
RF       3928     -       3539    2784
MLP      104356   3539    -       3769
ET       4302     2784    3769    -

The number of FPs common to all three models is 2558. This means all three models mistakenly labeled the same 2558 benign files as malicious. The best way to minimize FPs is to require that all three models agree a file is malicious. With these three models, this would decrease the false positive rate by about 35%: instead of using just a random forest and getting 3928 FPs, requiring all models to agree would limit the FPs to 2558.
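The "all models must agree" rule can be sketched as a logical AND over per-model verdicts. The verdicts below are made up for four hypothetical files:

```python
import numpy as np

# Boolean verdicts from each model (True = malicious), one per file.
rf_pred = np.array([True, True, False, True])
mlp_pred = np.array([True, False, False, True])
et_pred = np.array([True, True, False, False])

# Flag a file only when all three models agree it is malicious.
combined = rf_pred & mlp_pred & et_pred
print(combined.tolist())  # [True, False, False, False]
```

Only the first file is flagged; any file where even one model dissents is left alone, which is exactly what suppresses the non-shared FPs.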

Requiring all models agree is a highly conservative way to combine models and is the most likely to reduce the TP rate. To measure the TP rate reduction, I checked the size of the intersections of TPs between the models. The table below shows the figures:

Model    TPs      ∩ RF     ∩ MLP    ∩ ET
RF       769043   -        759321   767333
MLP      761807   759321   -        758488
ET       768090   767333   758488   -

As with the FPs, the RF model performed best, with the ET model lagging slightly behind. The intersection of TPs between all models was 757880. If all models were required to agree a file was malicious, the TP rate would only decrease by about 1.5%.

Below is roughly the code I used to collect FPs and TPs:

import sklearn as skl
import sklearn.neural_network
import sklearn.model_selection

# matrix contains vectorized data
# indices contains an array of sample sha256 hashes
# labels contains an array of sample labels - True=malicious, False=benign
def get_tps(labels, predicted, indices):
    tps = set()
    for idx, label in enumerate(labels):
        prediction = predicted[idx]
        if label and prediction:
            tps.add(indices[idx])
    return tps

def get_fps(labels, predicted, indices):
    fps = set()
    for idx, label in enumerate(labels):
        prediction = predicted[idx]
        if not label and prediction:
            fps.add(indices[idx])
    return fps

# Create the classifier; cross_val_predict handles fitting
mlp = skl.neural_network.MLPClassifier()

# Make cross-validated predictions and collect FPs and TPs
mlp_predicted = skl.model_selection.cross_val_predict(mlp, matrix, labels, cv=10)
mlp_fps = get_fps(labels, mlp_predicted, indices)
mlp_tps = get_tps(labels, mlp_predicted, indices)

# rf_fps contains random forest FPs, collected the same way
fp_intersection = rf_fps & mlp_fps


This research suggests that by using multiple models, one could easily reduce FPs by about a third with only a tiny hit to the detection rate. This seems promising, but the real world is often more complicated in practice. For example, we use many robust, non-AI systems to avoid FPs, so the 35% reduction likely wouldn't affect all file types equally, and much of the FP reduction might already be covered by such pre-existing systems. This research only establishes an upper bound for FP reductions. There are also engineering considerations for model complexity and speed that need to be taken into account. I'm using Python and don't care how slow models are, but implementing the code in C and making it performant might be quite tricky.

The extra trees model worked better than expected. I did very little tuning, yet it was strong and had fairly different false positives from the random forest model. The MLP model may, with enough tuning, eventually perform well, but maybe it's a bad model for the number and type of features. I'm eager to try an SVC model, but SVC training time grows quadratically with the number of samples, so I'd need to use a bagging classifier with many smaller SVCs.
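A bagged-SVC setup like the one mentioned above could be sketched as follows with scikit-learn; the estimator counts and sample fraction are illustrative assumptions, not tuned values.

```python
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

# Ten small SVCs, each trained on roughly 10% of the data,
# sidestepping SVC's quadratic training cost on the full set.
bagged_svc = BaggingClassifier(SVC(), n_estimators=10, max_samples=0.1, n_jobs=-1)
```

Each SVC trains on a tenth of the data, so training time stays manageable at the cost of each base model seeing less of the distribution.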

There are many different ways to combine models — voting, soft voting, blending, and stacking. What I’ve described here is a variant of voting. I’d like to experiment with stacking which works by training multiple models, then using the output of those models as features into a “second layer” model which figures out how to best combine the models. Since I’m most interested in minimizing false positives, I’ll have to compare stacking performance versus requiring all models agree. It may be possible to weight benign samples so models favor avoiding false positives while training without sacrificing detection rates.
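A stacking setup can be sketched with scikit-learn's StackingClassifier. The base models and the logistic-regression second layer here are assumptions for illustration, not a configuration from this post:

```python
from sklearn.ensemble import (
    ExtraTreesClassifier,
    RandomForestClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression

# Base model predictions become features for a second-layer model,
# which learns how to best combine them.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50)),
        ("et", ExtraTreesClassifier(n_estimators=50)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,  # base predictions are generated out-of-fold to avoid leakage
)
```

Fitting (`stack.fit(matrix, labels)`) trains the base models with cross-validation and the final estimator on their out-of-fold predictions.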

The main bottleneck for this research is computing speed and memory. I may be able to just use a smaller training set. I can find out how small the training set can be by training on a subset of the data and testing the performance against the out-of-sample data. Another option is to switch from SKLearn to TensorFlow GPU which allows me to take advantage of my totally justified video card purchase.
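The subset-size experiment described above could be sketched like this; synthetic data stands in for the real 1.7-million-sample feature matrix, and the subset fractions are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real feature matrix.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on growing subsets and score against held-out data to see
# where accuracy stops improving.
scores = {}
for fraction in (0.1, 0.5, 1.0):
    n = int(len(X_train) * fraction)
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train[:n], y_train[:n])
    scores[fraction] = accuracy_score(y_test, model.predict(X_test))
print(scores)
```

If the score plateaus well before the full set, the smaller subset is a candidate training size for the slower experiments.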

Well-Known Travel Site Sabre Gets Hacked

If you used the travel site Sabre for booking hotels between August of 2016 and March of 2017, be advised that your data was likely stolen by hackers, including your credit card number, your name as it appears on the card and the card's expiration date.

Sabre is one of the web's leading travel and booking companies, but like many others, they don't use their own proprietary software to actually handle the bookings. Instead, they rely on the SynXis Central Reservations system, a popular "software as a service."

The reason that's relevant is that if hackers have found a way into SynXis, then it's not just Sabre that's at risk. Any of the web's other major booking sites could be next, or they could already be compromised without it having been noticed, as happened in Sabre's case.

In any event, if you've used Sabre during the timeframe mentioned above, you'll want to contact your credit card company and report your card as compromised so they can stop any activity on it and issue you a replacement.

You’ll also want to scan all the purchases on your statement during the period to look for any suspicious activity, as you may be paying for goods or services you didn’t authorize.

This latest breach underscores the fact that it's not just your own actions that can get you into trouble. Any site you use could potentially be a problem for you, especially if the site in question stores your data for any length of time. Note, however, that even if this isn't the case, a hacker could conduct a man-in-the-middle attack and still intercept sensitive information about you.

So far, 2017 has seen more hacks to this point than any year in the history of the internet, and all indications are that next year will break this year’s record. Be careful out there.

Used with permission from Article Aggregator

Intel beats earnings expectations as it manages to maintain growth in its Data Center Group

 Intel declared $14.8 billion in revenue this afternoon and earnings per share of 72 cents. This represents a solid beat as analysts had expected revenues of $14.41 billion and EPS of 68 cents. The company’s  stock finished up 22 cents and 0.63 percent to $34.97 per share in regular trading. In the moments after the company released its earnings, Intel’s stock shot up 3.43 percent. Read More

It looks like Amazon would be losing a lot of money if not for AWS

 Amazon reported its second-quarter earnings today, and it was a bit of a whiff — and a bummer for Jeff Bezos, who is now no longer the solar system’s richest human and has been relegated to the unfortunate position of second-richest human. Read More

Ransomware Discovered On Some Google Play Store Apps

Researchers from McAfee’s mobile division have discovered a strain of ransomware called “LeakerLocker” on two apps that slipped through Google’s various checks and made their way onto the Google Play Store.

The apps in question were “Booster and Cleaner Pro,” which was billed as an app designed to boost memory on your smartphone, and “Wallpapers Blur HD” which is a wallpaper management app. When Google was informed of the issue, they promptly removed both apps, but there are a few points of interest here.

Firstly, both apps were part of a rewards program, which actually pays users a small sum to install them on their devices. This methodology is becoming increasingly common and has been used in the past to get users to install harmful apps on their devices.

Secondly, the researchers who found the apps say that they're not a scam. What this means is that they don't rely on underhanded tactics to install themselves, but rather rely exclusively on permissions freely granted by the user.

Before Google pulled the plug on these two, the cleaner app was installed between 5k and 10k times, and the wallpaper app was installed between 1k and 5k times. If either of those names sounds familiar to you, and you've installed but not yet run the apps, delete them immediately to avoid any potential trouble. If you don't, you'll soon find that you can't get into your phone.

Note that this strain of ransomware doesn't encrypt your files, but locks your screen and thus makes all your files inaccessible. At that point, your only options are to pay the fee or restore from your most recent backup, neither of which is a great option.

While Google has a generally good reputation and a proven ability to stop malicious apps before they ever make it to the Play Store, as this latest incident underscores, the company isn't perfect. You can't ever afford to completely let down your guard.

Used with permission from Article Aggregator

Slack is raising a $250 million round at $5 billion valuation

 Enterprise messaging service Slack is raising a $250 million round at a $5 billion valuation, TechCrunch has confirmed. We’re hearing that SoftBank, Accel Partners and other existing investors participated.
The $250 million financing amount was reported by Bloomberg. Axios first had the names of the lead investors. Recode originally reported on a $500 million round last month, but… Read More

Salesforce claims you can set up customer service in Service Cloud update in less than a day

 Salesforce announced an update to its Service Cloud today, which the company says enables non-technical administrators to build a customer service organization with connected services in less than a day. That’s a bold claim, even for the marketing department, but the Service Cloud app builder has been built on top of the Salesforce Lightning development platform and designed to drag… Read More