At RSA, SOC ‘Sees’ User Behaviors

Instruments at the RSA SOC give analysts insight into attendee behavior on an open network.

RSA CONFERENCE 2018 – San Francisco – At RSAC 2018 the SOC is a demonstration site. It has some hard limits — no visibility into the external IP interfaces being the most significant — but it has tremendous visibility into what happens on the wireless network that supports the tens of thousands of attendees using the open system. And that network visibility translates into a clear view of how network security professionals behave in the wild.

A team of network security specialists, including Cisco’s Jessica Bair, staffs the SOC, watching traffic of all sorts flow to and from the devices carried by attendees, exhibitors, and staff. Because the SOC isn’t blocking any traffic, the emphasis is on monitoring, which happens courtesy of RSA NetWitness Packets; potentially malicious traffic is handed to Cisco Threat Grid for further analysis.

One of the things visitors notice in the SOC fishbowl is a screen filled with a rolling list of partially obfuscated passwords. That screen reveals two important things about conference attendees, one of them good, one of them not so much.

Almost all of the passwords are either strong or very strong. That’s great, and shows that security professionals, at least, have acted on the need for stronger passwords.
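
The article doesn’t say how the SOC’s tooling buckets passwords into “strong” or “very strong.” One common heuristic (a hypothetical sketch, not RSA NetWitness’s actual scoring) combines length with character-class variety:

```python
import string

def classify_password(pw: str) -> str:
    """Crude strength bucket based on length and character variety.
    Illustrative only; real scorers also check dictionaries and reuse."""
    classes = sum([
        any(c in string.ascii_lowercase for c in pw),
        any(c in string.ascii_uppercase for c in pw),
        any(c in string.digits for c in pw),
        any(c in string.punctuation for c in pw),
    ])
    if len(pw) >= 16 and classes >= 3:
        return "very strong"
    if len(pw) >= 12 and classes >= 2:
        return "strong"
    if len(pw) >= 8:
        return "moderate"
    return "weak"

for pw in ["hunter2", "Tr0ub4dor&3", "correct-horse-battery-staple"]:
    print(pw, "->", classify_password(pw))
```

Even a heuristic this crude would rate most of what the SOC saw as strong, which matches the good half of the story.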

The problem is that the passwords can be seen at all: they’re being sent in clear text. It’s a sign of a lesson half-learned, and indicative of problems likely to plague every level of a company’s computer-using population.
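
To see why cleartext transmission defeats even a strong password, consider HTTP Basic Auth, one of the older mechanisms still seen on the wire: it merely Base64-encodes the credentials, so anyone who can capture the packet can decode them. A minimal sketch (illustrative data, not the SOC’s actual tooling):

```python
import base64

# Build an illustrative HTTP request as it would appear on the wire.
# Basic Auth only Base64-encodes credentials; it does not encrypt them.
creds = base64.b64encode(b"alice:CorrectHorseBatteryStaple").decode()
raw_request = (
    "GET /inbox HTTP/1.1\r\n"
    "Host: mail.example.com\r\n"
    f"Authorization: Basic {creds}\r\n"
    "\r\n"
)

def extract_basic_credentials(request: str):
    """Recover the username/password from a Basic Auth header, if present."""
    for line in request.split("\r\n"):
        if line.lower().startswith("authorization: basic "):
            encoded = line.split(" ", 2)[2]
            user, _, password = base64.b64decode(encoded).decode().partition(":")
            return user, password
    return None

print(extract_basic_credentials(raw_request))  # ('alice', 'CorrectHorseBatteryStaple')
```

A strong password does nothing for you if it crosses an open conference network unencrypted.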

And passwords aren’t the only data being sent in the clear. Documents analysts have seen traversing the network include business plans, resumes, and information on competitors, according to one of the engineers staffing the SOC.

While the passwords and documents traversing the network represent a significant security risk, Bair quickly points out that there is no threat of long-term information release; the hard disks from the monitoring and analysis appliances are crushed at the end of the conference.

Of course, the monitoring infrastructure established in the SOC sees more than just potentially embarrassing cleartext documents. Malware and possible malware are identified and analyzed through Cisco’s Advanced Malware Protection (AMP) Anywhere with its Threat Intelligence Cloud. Information on potential malware is shared among all nodes of the security network, and with other security networks related to the RSA Conference infrastructure, for faster identification and (potential) remediation.

Ultimately, Bair likened the activity of the SOC to the basic instruction given to the fighting women and men of the U.S. Army: “You have to do three things: Shoot, move, and communicate. If you’re not doing all three, you’re [redacted] dead.”

In cybersecurity terms, the system must actively defend the organization’s assets, be agile in shifting its activities to meet evolving threats, and share information and commands with other networks looking for malware and malicious behavior. With all three, an organization has a chance at an effective defense. Without them, sooner or later the organization is truly [redacted] dead.


Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio

from Dark Reading – All Stories https://ubm.io/2HPnWBM
via IFTTT


Is Facebook’s Anti-Abuse System Broken?


Facebook has built some of the most advanced algorithms for tracking users, but when it comes to acting on user abuse reports about Facebook groups and content that clearly violate the company’s “community standards,” the social media giant’s technology appears to be woefully inadequate.

Last week, Facebook deleted almost 120 groups totaling more than 300,000 members. The groups were mostly closed — requiring approval from group administrators before outsiders could view the day-to-day postings of group members.

However, the titles, images and postings available on each group’s front page left little doubt about their true purpose: Selling everything from stolen credit cards, identities and hacked accounts to services that help automate things like spamming, phishing and denial-of-service attacks for hire.

To its credit, Facebook deleted the groups within just a few hours of KrebsOnSecurity sharing, via email, a spreadsheet detailing each group; the spreadsheet showed that the groups had been active on Facebook for an average of two years. But I suspect that the company took this extraordinary step mainly because I informed them that I intended to write about the proliferation of cybercrime-based groups on Facebook.

That story, Deleted Facebook Cybercrime Groups had 300,000 Members, ended with a statement from Facebook promising to crack down on such activity and instructing users on how to report groups that violate its community standards.

In short order, some of the removed groups I’d reported re-established themselves within hours of Facebook’s action. This time, instead of contacting Facebook’s public relations arm directly, I decided to report those resurrected groups and others using Facebook’s stated process. Roughly two days later I received a series of replies saying that Facebook had reviewed my reports but that none of the groups were found to have violated its standards. Here’s a snippet from those replies:

Perhaps I should give Facebook the benefit of the doubt: Maybe my multiple reports one after the other triggered some kind of anti-abuse feature that is designed to throttle those who would seek to abuse it to get otherwise legitimate groups taken offline — much in the way that pools of automated bot accounts have been known to abuse Twitter’s reporting system to successfully sideline accounts of specific targets.

Or it could be that I simply didn’t click the proper sequence of buttons when reporting these groups. The closest matches I could find in Facebook’s abuse reporting system were “Doesn’t belong on Facebook” and “Purchase or sale of drugs, guns or regulated products.” There was (and is) no option for “selling hacked accounts, credit cards and identities,” or anything of that sort.

In any case, one thing seems clear: Naming and shaming these shady Facebook groups via Twitter seems to work better right now for getting them removed from Facebook than using Facebook’s own formal abuse reporting process. So that’s what I did on Thursday. Here’s an example:

Within minutes of my tweeting about this, the group was gone. I also tweeted about “Best of the Best,” which was selling accounts from many different e-commerce vendors, including Amazon and eBay:

That group, too, was nixed shortly after my tweet. And so it went for other groups I mentioned in my tweetstorm today. But in response to that flurry of tweets about abusive groups on Facebook, I heard from dozens of other Twitter users who said they’d received the same “does not violate our community standards” reply from Facebook after reporting other groups that clearly flouted the company’s standards.

Pete Voss, Facebook’s communications manager, apologized for the oversight.

“We’re sorry about this mistake,” Voss said. “Not removing this material was an error and we removed it as soon as we investigated. Our team processes millions of reports each week, and sometimes we get things wrong. We are reviewing this case specifically, including the user’s reporting options, and we are taking steps to improve the experience, which could include broadening the scope of categories to choose from.”

Facebook CEO and founder Mark Zuckerberg testified before Congress last week in response to allegations that the company wasn’t doing enough to halt the abuse of its platform for things like fake news, hate speech and terrorist content. It emerged that Facebook already employs 15,000 human moderators to screen and remove offensive content, and that it plans to hire another 5,000 by the end of this year.

“But right now, those moderators can only react to posts Facebook users have flagged,” writes Will Knight for Technologyreview.com.

Zuckerberg told lawmakers that Facebook hopes expected advances in artificial intelligence or “AI” technology will soon help the social network do a better job self-policing against abusive content. But for the time being, as long as Facebook mainly acts on abuse reports only when it is publicly pressured to do so by lawmakers or people with hundreds of thousands of followers, the company will continue to be dogged by a perception that doing otherwise is simply bad for its business model.











from Krebs on Security http://bit.ly/2HeZdpf
via IFTTT

HackerOne CEO Talks Bug Bounty Programs at RSA Conference

SAN FRANCISCO – Marten Mickos, HackerOne CEO, catches up with Threatpost at RSA Conference to discuss hot-button issues around modern bounty programs. Topics range from how to design a program that protects consumer privacy to vulnerability disclosure issues. Mickos also reflects on the growing reliance on bug bounty programs by the private and public sectors.

For all Threatpost’s RSA Conference 2018 coverage visit us here.

from Threatpost – English – Global – thr… http://bit.ly/2JdwzWd
via IFTTT

RSA Conference has a leaky app… again!

You wouldn’t expect the organisers of a seminar on nuclear physics to hand out conference badges that were contaminated with dangerous levels of radioactivity.

You wouldn’t expect to attend a workplace health and safety training course in a conference centre where the fire exits had been padlocked shut.

But cybersecurity conferences can be a bit different – they certainly don’t always practise what they preach.

For example, at the RSA Conference (RSAC) 2010 in San Francisco, one of our colleagues – Graham Cluley, now an independent blogger – was asked to copy his presentation onto a USB key supplied by the organisers for collating speakers’ contributions.

When he inserted the USB drive into his Mac, Sophos Anti-Virus popped up, boop!, to alert him to Windows malware on the USB key.

He quickly figured out that the conference computer had no anti-virus at all, and that the same USB key had been in and out of numerous other presenters’ Windows computers already that day. (That didn’t say much for the security of those other presenters, either.)

At the AusCERT conference in Queensland, Australia, also in 2010, one of the security vendors (it was IBM, and the company was nominated for a prestigious Pwnie award for this blunder) handed out USB keys with product marketing material on them…

…together with not one but two malware infections.

RSAC was back in the “do as I say not as I do” limelight again in 2014, issuing an official mobile app for the event that hooked into the event database so you could see the schedule of talks, with any last-minute updates or changes automatically shown.

Unfortunately, the database pulled down by the app also included details of all the other conference delegates who had registered to use the app so far – meaning that anyone who installed the app after you would get to see your details, too.

In that breach, the data that leaked out apparently included name, job title, employer, and nationality.

For many delegates, those details were probably public already – or at least easy to figure out or guess – so there wasn’t a huge amount of harm done, but it was still a peculiarly hypocritical cybersecurity blunder for a cybersecurity event company to make.

It happened again

Well, it looks as though it’s happened again: another insecure app published as part of an RSAC cybersecurity event.

At RSAC 2018, Twitter user @svblxyz found similar security problems to those of 2014 in this year’s conference app.

Amongst other things, the app contained URLs from which database content could be downloaded, apparently including the real names of other mobile app users.

RSAC confirmed the breach in a tweet earlier today [at approximately 2018-04-20T06:00Z], admitting:

Our initial investigation shows that 114 first and last names of RSA Conference Mobile App users were improperly accessed. No other personal information was accessed, and we have every indication that the incident has been contained. We continue to take the matter seriously and monitor the situation.

With just 114 names leaked, and given that many conference delegates have probably mentioned their visit to the event publicly anyway, for example on social media or in an out-of-office email, this isn’t a particularly dangerous outcome.

But the leaked names are just a symptom, and it’s the underlying cause that’s worrying: there always seems to “be an app for that”, even when a well-designed web page would be just as good, and even when a well-designed web page already exists anyway.

What to do?

  • As a user, assume the worst, and stick to the web whenever you can. A one-off app for a single event simply won’t have had the same security scrutiny as your browser, so why not simply prefer your browser?
  • As an event organiser, assume the worst, and stick to the web whenever you can. If you need a way to get updated speaker lists and session timetables to delegates, consider publishing a standalone file, such as a PDF, that users can download if they want an offline copy. If you expect to publish regular updates, use a simple solution such as an RSS feed so your users can easily find the latest version.
  • As a mobile app developer, assume the worst, and put app security up front, ahead of looks. You can always improve the look and feel of an app later on, but you can’t get stolen or leaked data back later on: once breached, always breached.
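
The RSS suggestion above really is low-tech: a feed is just a small XML file regenerated whenever the schedule changes, with no per-user data to leak. A hypothetical sketch using only Python’s standard library (the URLs and titles are made up for illustration):

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def build_schedule_feed(items):
    """Emit a minimal RSS 2.0 feed of conference schedule updates.
    `items` is a list of (title, link) pairs, newest first."""
    rss = Element("rss", version="2.0")
    channel = SubElement(rss, "channel")
    SubElement(channel, "title").text = "Conference schedule updates"
    SubElement(channel, "link").text = "https://conference.example.com/schedule"
    SubElement(channel, "description").text = "Latest talk and timetable changes"
    for title, link in items:
        item = SubElement(channel, "item")
        SubElement(item, "title").text = title
        SubElement(item, "link").text = link
    return tostring(rss, encoding="unicode")

feed = build_schedule_feed([
    ("Keynote moved to 09:30", "https://conference.example.com/schedule#keynote"),
])
print(feed)
```

A static file like this can be served from any web host, cached aggressively, and read by any feed reader, with none of the attack surface of a one-off mobile app.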


from Naked Security – Sophos http://bit.ly/2HN7kKV
via IFTTT

Kingpin who made 100 million robocalls loses his voice

How easy is it to download automated phone-calling technology, spoof numbers to make it look like calls are coming from a local neighbor, and robo-drag millions of hapless consumers away from what should be their robot-free dinners?

The question, from Senator Edward J. Markey, was directed at Adrian Abramovich: a Florida man the senator referred to as the “robocall kingpin.” On Wednesday, Abramovich was on the hot seat on Capitol Hill, having been subpoenaed to testify before the Senate Commerce, Science & Transportation Committee as it examined the problem of abusive robocalls.

The answer: just a click, Abramovich said. The technology is easy to use and can be set up by “anyone” from a home office.

There is available open source software, totally customizable to your needs, that can be misused by someone to make thousands of automated calls with the click of a button.

All you have to do is run an online search for “Voice-over-IP (VoIP) providers, short-duration calls,” and you’ll probably come up with 5, 6, or 7 providers, “most of which are US-based,” he said.

Markey then asked, “And how many people would I have to employ to place, say, 10,000 robocalls a day?”

Abramovich’s reply: one.

The Florida man is fighting a $120 million fine proposed last year by the Federal Communications Commission (FCC) for the nearly 97 million robocalls his marketing companies – Marketing Strategies Leaders Inc. and Marketing Leaders Inc. – made between October and December 2016.

That’s over one million calls a day, FCC Chairman Ajit Pai said in a statement that accompanied the proposed fine last year. The fine would be the first enforcement action against a large-scale spoofing operation under the Truth in Caller ID Act.

Abramovich is accused of tricking consumers with the robocalls. Consumers have reported receiving calls from what looked like local numbers, talking about “exclusive” vacation deals from well-known travel companies such as Marriott, Expedia, Hilton and TripAdvisor. Once you pressed “1” as prompted, you’d get transferred to a call center, where live operators gave targets the hard sell on what Pai called “low-quality vacation packages that have no relation” to the reputable companies initially referenced.

Pai said in his statement last year that many consumers spent from a few hundred up to a few thousand dollars on these purportedly “exclusive” vacation packages.

Pai said there are a few things that are truly nasty about the robocalling scheme: first, Abramovich apparently preyed on the elderly, finding it “profitable to send to these live operators the most vulnerable Americans – typically the elderly – to be bilked out of their hard-earned money”.

Secondly, these millions of calls drowned out operations of an emergency medical paging provider, Pai said:

By overloading this paging network, Mr. Abramovich could have delayed vital medical care, making the difference between a patient’s life and death.

Abramovich may well have shown up for the hearing – under the duress of a subpoena – but he stuck to general answers, refusing to answer questions about his particular case. He said he was no kingpin and that his robocalling activities were “significantly overstated,” given that only 2% of consumers had meaningful interaction with the calls.

Senator John Thune pointed out that 2% works out to 8 million people.

Thune: Does that sound like a small effect?
Abramovich: I am not prepared to discuss my specific case.

Other robocalling cases have involved more calls, Abramovich said. He also denied fraudulent activities, saying that resorts associated with his telemarketing “were indeed real resorts, offering real vacation packages.”

Robocalls increased from 831 million in September 2015 to 3.2 billion in March 2018 – a 285% increase in less than three years, according to testimony from Margot Freeman Saunders, senior counsel at the National Consumer Law Center.

Abramovich told the senators that he himself receives “four or five robocalls a day.” Since the FCC case hit the headlines, that’s ramped up, he said.

Senator Markey: And you don’t like it?
Abramovich: I decline the call.

Thune said that the committee would consider holding Abramovich in contempt of Congress for claiming a Fifth Amendment privilege throughout the hearing after speaking about his specific case during opening remarks.


from Naked Security – Sophos http://bit.ly/2Hjhvdm
via IFTTT

Chrome anti-phishing protection… from Microsoft!

Microsoft has made its SmartScreen anti-phishing API available to Chrome browser users through an extension called Windows Defender Browser Protection.

If this makes you do a double take, we’ll repeat it: from this week Chrome users will, for the first time, be able to protect their browsing using not only Google’s own Safe Browsing technology but Microsoft’s too.

Once installed, there’s not much to the extension beyond being able to turn its protection on or off and send Microsoft feedback on the current version (v1.62).

Technologically, it’s a different matter. From the moment it’s installed, Chrome users will potentially see warnings from two different systems designed to tell them about malicious links, pop-ups, malware downloads, and of course, phishing URLs.

These pages will both be coloured red and probably indistinguishable bar the fact they reference either Google or Microsoft.

Why might Microsoft want to be so generous to users of a rival browser?

Hitherto, there’s been a bit of a gulf in integrated browser protection, with Chrome, Firefox, Apple’s Safari, Opera and Vivaldi using Google’s Safe Browsing technology, and only Microsoft’s Windows 10 browser, Edge, and legacy Internet Explorer versions using SmartScreen.

For users, these function like a second browser-specific layer of protection that supplements whichever anti-malware software (e.g. Sophos Home) they have installed.

Testing last year by NSS Labs suggested that Microsoft’s platform had a noticeable edge over Google’s when it came to detecting malware downloads and blocking phishing URLs.

We speculated at the time that this might be explained by the sheer volume of malware and phishing attacks that Microsoft must detect through its large base of Windows users.

Having both detection platforms on the same browser would at least set up a fair comparison of their effectiveness in future tests.

The simplest explanation of Microsoft’s motivation for offering SmartScreen on Chrome is that it gives the company visibility on the bad stuff encountered by the 60% of the market that uses Chrome (Edge is around 4%). This, in turn, helps Microsoft’s Office 365 Exchange email service offer better protection to compete with Google’s rival G Suite.

It also reminds Chrome users that Microsoft is out there, even if not many of them use Edge and the number of people on its predecessor, IE, slowly dwindles.

Ironically, for anyone accessing Gmail and G Suite via Chrome, this has a hidden benefit – installing Defender Browser Protection means these users are being protected from phishing attacks before email reaches their inboxes via Google’s platform and, after that, courtesy of both Google and Microsoft (the reverse is already true for Outlook.com on Chrome).


from Naked Security – Sophos http://bit.ly/2J7rBKs
via IFTTT