Monthly Archives: September 2016

Decrypting The Dark Web: Patterns Inside Hacker Forum Activity

Data analysis to be presented at Black Hat Europe highlights trends in communication between bad actors who gather in underground forums across the Dark Web.

Data analysis can be used to expose patterns in cybercriminal communication and to detect illicit behavior in the Dark Web, says Christopher Ahlberg, co-founder and CEO at threat intelligence firm Recorded Future.

At Black Hat Europe 2016 in London this November, Ahlberg will discuss how security pros can discover these patterns in forum and hacker behavior using techniques like natural language processing, temporal pattern analysis, and social network analysis.

Most companies conducting threat intelligence employ experts who navigate the Dark Web and untangle threats, he explains. Data analysis, however, can surface the same patterns without requiring workers to read individual messages and posts.

Recorded Future has 500-700 servers it uses to collect data from about 800 forums across the Dark Web. Forums are organized by geography, language, and sectors like carding, hacking, and reverse engineering.

‘Pattern Of Life’

Ahlberg describes the process of chasing bad actors as “pattern of life analysis.” This involves tracking an individual, or class of individuals, to paint a picture of their activity and develop a profile on their behavior. 

Over the last six months, he has spearheaded research analyzing more than three years of forum posts from the surface and deep web. The forums originate in the US, Russia, Ukraine, China, Iran, and Palestine/Gaza, among other locations.

The research unveiled a series of cybercriminal behavioral patterns. These can be used to discover illicit behavior, create points for further branches of research, and figure out how hackers are focusing on different tech and vulnerabilities.

Recorded Future built a methodology for analysts to track actors’ handles as they jump across and within forums, he explains. Discovering patterns starts with attribution, or putting together a profile for one person.

The problem is, bad actors often switch between handles to conceal their activity.

“Nobody puts in their real name,” he continues. “The issue is, you might track someone and find half of what they’re doing is on one handle, and the other half is on a different handle.”

He addresses this complication through a process he calls mathematical clustering. By observing each handle’s activity over time, researchers can determine with reasonable confidence whether two handles belong to the same person.
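The article doesn’t disclose Recorded Future’s actual clustering method, but the core idea — comparing handles’ temporal activity profiles — can be sketched. A minimal, illustrative version (the function names and the 0.9 threshold are assumptions, not anything from the research) compares hour-of-day posting histograms with cosine similarity:

```python
from math import sqrt

def activity_vector(post_hours, bins=24):
    """Normalized histogram of posting hours (0-23) for one handle."""
    v = [0.0] * bins
    for h in post_hours:
        v[h % bins] += 1.0
    total = sum(v) or 1.0
    return [x / total for x in v]

def cosine(a, b):
    """Cosine similarity between two activity vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def likely_same_author(hours_a, hours_b, threshold=0.9):
    """Heuristic: two handles whose hour-of-day profiles are near-identical
    are candidates for belonging to the same person."""
    return cosine(activity_vector(hours_a), activity_vector(hours_b)) >= threshold
```

In practice this would be one signal among many (vocabulary, topics, forum membership), not a decision rule on its own.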

Temporal patterns are one trend Ahlberg has drawn from his observations of hacker activity.

“Overall, hacker forums have lower activity on Saturday and Sunday, and peak on Tuesday and Thursday,” he says. The times at which criminals are most active can shed some light on their lives and areas of focus. Some forums have a drop in activity around mid-day, a sign that participants could be full-time workers taking a lunch break. 
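The weekday profile Ahlberg describes is straightforward to compute from post timestamps. A minimal sketch, assuming ISO-8601 timestamps:

```python
from collections import Counter
from datetime import datetime

def weekday_profile(timestamps):
    """Count posts per weekday (Mon..Sun) from ISO-8601 timestamp strings."""
    names = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
    counts = Counter(names[datetime.fromisoformat(t).weekday()] for t in timestamps)
    # Return every weekday, including zero-activity days, in calendar order.
    return {d: counts.get(d, 0) for d in names}
```

Applied to a forum’s full post history, low Saturday/Sunday counts and Tuesday/Thursday peaks would reproduce the pattern quoted above.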

It’s also interesting to watch how forum activity relates to industry news. “By looking at forums and how they react to outside events, we can learn more about what they’re interested in,” Ahlberg says, calling the process “smoking out rats with external events.”

For example, a spike in Wednesday activity could be a sign the forum is reacting to patches and vulnerabilities published by Microsoft and Adobe a day prior. Patch Tuesday, he says, could be driving “Exploit Wednesday.” 

Ahlberg plans to share more of these trends, and the techniques he used to uncover them, during this year’s Black Hat Europe session entitled “Chasing Foxes by the Numbers: Patterns of Life and Activity in Hacker Forums.”


Kelly is an associate editor for InformationWeek. She most recently reported on financial tech for Insurance & Technology, before which she was a staff writer for InformationWeek and InformationWeek Education.


from Dark Reading – All Stories

Academics Put Another Dent in Online Anonymity

The Internet may make many promises, but anonymity isn’t always one of them. Users, for example, who covet their privacy often turn to Tor and other similar services to keep their activities on the web from prying eyes, yet that hasn’t stopped the FBI and researchers from trying to uncloak people on that network.

On the open Internet, users leave behind breadcrumbs as to their interests and locations on the sites they visit, data that is tracked by advertisers and other services interested in delivering targeted advertising in the browser.

A team of academics from Princeton and Stanford universities has gone a step further and figured out how to reveal a user’s identity from links clicked on in their Twitter feed. The researchers built a desktop Google Chrome extension called Footprints as a proof of concept that combs a user’s browser history for links clicked on from Twitter.

The extension sends all Twitter links from the last 30 days that are still in a user’s browsing history through the tool. The user is given the opportunity to review the links before sending them. The tool then returns, in less than a minute, a list of 15 possible Twitter profiles that are a likely match; the extension then deletes itself, the researchers said.

“We were interested in how much information leak there is when browsing the Web,” said Sharad Goel, assistant professor at Stanford in the Department of Management Science and Engineering. Goel along with Stanford students Ansh Shukla, Jessica Su and Princeton professor Arvind Narayanan, developed Footprints.

“We want to raise awareness and inform policy,” Goel said. “This is more of an academic demonstration. We’re not trying to make the tool available to other people, it’s mostly about raising awareness.”

A tool like this would allow a business already tracking a user’s information to correlate it with Twitter traffic to make a best guess as to the user’s identity. It would do so, Goel said, by analyzing the anonymized browsing history and running a similarity match against Twitter traffic to rank the overlaps and arrive at a conclusion.

In a post published to the Freedom to Tinker website, Su wrote that people’s social networks are distinct and made up of family, friends and colleagues, resulting in a distinctive set of links in one’s Twitter feed.

“Given only the set of web pages an individual has visited, we determine which social media feeds are most similar to it, yielding a list of candidate users who likely generated that web browsing history,” Su wrote. “In this manner, we can tie a person’s real-world identity to the near complete set of links they have visited, including links that were never posted on any social media site. This method requires only that one click on the links appearing in their social media feeds, not that they post any content.”

The researchers said there were two challenges to work out. The first was quantifying how similar a social media feed is to a web browsing history — which seems simple, but must account for users whose feeds draw on an excessively large number of accounts, which could also include bots. Goel said those feeds were penalized in this exercise because of their size, since the number of links they contain could otherwise skew results.

“We posit a stylized, probabilistic model of web browsing behavior, and then compute the likelihood a user with that social media feed generated the observed browsing history,” Su wrote. “It turns out that this method is approximately equivalent to scaling the fraction of history links that appear in the feed by the log of the feed size.”
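Su’s description suggests a ranking rule: the fraction of history links that appear in a candidate feed, scaled by the log of the feed size. A rough sketch under that reading (dividing by the log so that very large feeds are penalized, consistent with Goel’s remark; the paper’s exact formula may differ, and all names here are illustrative):

```python
from math import log

def feed_score(history_links, feed_links):
    """Score a candidate feed: fraction of the browsing history's links that
    appear in the feed, penalized by the log of the feed's size so that huge
    link-heavy feeds can't win on volume alone."""
    history, feed = set(history_links), set(feed_links)
    if not history or not feed:
        return 0.0
    overlap = len(history & feed) / len(history)
    return overlap / log(len(feed) + 1)

def rank_candidates(history_links, candidate_feeds, top_n=15):
    """Return up to top_n candidate usernames, best match first.
    candidate_feeds maps username -> list of links in that user's feed."""
    scored = sorted(candidate_feeds.items(),
                    key=lambda kv: feed_score(history_links, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_n]]
```

The top_n default of 15 mirrors the 15 candidate profiles the Footprints extension returns.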

The demonstration uses Twitter feeds because they are for the most part public. The researchers heuristically narrowed the number of feeds to be searched and then applied their similarity measure to arrive at the final result, Su said.

Goel said he expects the tool to remain available for the time being as they continue to collect data and refine the demo. A paper is expected to follow in the next few weeks, he said.

from Threatpost – English – Global – thr…

Report a Grim Reminder of State of Critical Infrastructure Security

U.S. critical infrastructure got another reminder this week that it needs to do more to protect itself from cyber attacks with the release of an annual government report.

The NCCIC/ICS-CERT FY 2015 Annual Vulnerability Coordination Report points out that nagging issues continue to plague industrial control systems (ICS) and SCADA systems, notably a dearth of access controls limiting unauthorized access, poor software code quality, and weak or absent cryptographic protection of data and network communications.

The report, released by the U.S. Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), represents trend data culled from private and public industrial control firms for 2015. Topping the list of industries with the most reported vulnerabilities are energy, critical manufacturing, water and wastewater systems, and food and agriculture.

“What this report reveals is we are still grappling with the same systemic problems that have plagued industrial control systems for the past 20 to 30 years,” said Justin Harvey, head of security strategy with research firm Gigamon. “We can’t afford to take the same business-as-usual approach to solving industrial control security issues.”

According to ICS-CERT, 52 percent of vulnerabilities reported in 2015 trace back to improper input validation and poor access controls. While the report prioritizes the gap, experts said the trend may simply reflect the types of vulnerabilities targeted by researchers disclosing vulnerabilities to the agency in 2015.

Chris Eng, VP of research at Veracode, said access controls also present a challenge to other sectors. “We see similar rates – if not higher – outside of the industrial control sector. A lot of these problems are tied to the fact these systems used by industrial control systems date back to even before programmers were thinking about incorporating security into software.”

More alarming to some experts is ICS-CERT data showing an uptick in reported cryptographic vulnerabilities in 2015 compared with past reports. The number of industrial control systems “missing encryption of sensitive data” jumped from 3 percent for the years 2010-2014 to 14 percent in 2015. According to the report, seven percent of industrial control systems had inadequate encryption strength from 2010 to 2014, compared to 25 percent in 2015.

Alex Rothacker, security research director at Trustwave’s SpiderLabs team, said lingering issues from Heartbleed, POODLE and other vulnerabilities in crypto libraries could be popping up in ICS. “This increase probably indicates the use of these libraries in ICS systems,” he said.

According to ICS-CERT, cryptographic problems faced by private and public ICS operators trace back to a larger issue identified as “poor code quality vulnerabilities.” According to the report, half of ICS vulnerabilities are due to poor code quality.

“Poor code quality in software across the industry has also created many heartaches for enterprises using these products,” said Ann Barron-DiCamillo, CTO of Strategic Cyber Ventures and former director of US-CERT. “There’s a whole movement to create software assurance and teach better coding practices to focus on this underlying problem that continues to get easily exploited by adversaries.”

The report highlighted several other trends, including an increase in overall reported vulnerabilities between 2010 and 2015, a shortening of the time it takes to resolve ICS-CERT tickets, and a drop in the severity of reported vulnerabilities. Researchers interviewed cautioned that the report’s small sample size of vulnerabilities makes it difficult to draw hard conclusions. In 2015, ICS-CERT received 427 vulnerability reports and produced 197 advisories. Vulnerabilities were reported by industrial control systems stakeholders ranging from federal, state, and local governments to private sector owners, operators, and vendors.

from Threatpost – English – Global – thr…

Cybercriminals’ Superior Business Savvy Keeps Them Ahead


  • Published 2015-10-15: The Direct Rendering Manager (DRM) subsystem in the Linux kernel through 4.x mishandles requests for Graphics Execution Manager (GEM) objects, which allows context-dependent attackers to cause a denial of service (memory consumption) via an application that processes graphics data, as demonstrated b…
  • Published 2015-10-15: netstat in IBM AIX 5.3, 6.1, and 7.1 and VIOS 2.2.x, when a fibre channel adapter is used, allows local users to gain privileges via unspecified vectors.
  • Published 2015-10-15: Cross-site request forgery (CSRF) vulnerability in eXtplorer before 2.1.8 allows remote attackers to hijack the authentication of arbitrary users for requests that execute PHP code.
  • Published 2015-10-15: Directory traversal vulnerability in QNAP QTS before 4.1.4 build 0910 and 4.2.x before 4.2.0 RC2 build 0910, when AFP is enabled, allows remote attackers to read or write to arbitrary files by leveraging access to an OS X (1) user or (2) guest account.
  • Published 2015-10-15: Cisco Application Policy Infrastructure Controller (APIC) 1.1j allows local users to gain privileges via vectors involving addition of an SSH key, aka Bug ID CSCuw46076.

from Dark Reading – All Stories

Japanese man arrested for selling jailbroken iPhones

Would you jailbreak your iPhone if you could, and if it were easy to do?

Would you pay a bit extra for a second-hand device that had been jailbroken for you?

Jailbreaking is where you exploit bugs in Apple’s software to remove the restrictions imposed on your device by the operating system itself.

Jailbreaking liberates your iPhone from Apple’s “walled garden,” which forces you to shop at the App Store only.

That frees you up to run a whole range of apps that you can’t get via the official App Store, including apps with features that Apple won’t allow in the App Store at all.

Ironically, one example of a prohibited feature that requires a jailbroken iPhone is checking for a jailbroken iPhone.

That may sound like a pointless feature. If your phone isn’t jailbroken, why check if it is? The obvious answer, of course, is, “Why not?” If you’re a concerned user, or a sysadmin who’s serious about business security, you’ll want to keep an eye open for security anomalies – such as someone else sneakily jailbreaking your iPhone for nefarious reasons such as stealing data or installing malware.

Of course, jailbreaking also makes it much easier to install and use stolen, illegal or pirated content, including apps, music, videos and so on.

That doesn’t, ipso facto, make jailbreaking bad, which is why we’ve often voiced suggestions like this one:

[Although jailbreaking brings a security risk,] we nevertheless wish that Apple would come to the jailbreaking party, even though we’d continue to recommend that you avoid untrusted, off-market apps.

We suspect that Apple would benefit both the community and itself by offering an official route to jailbreaking – a route which could form the basis of independent invention and innovation in iDevice security by an interested minority.

For now, however, jailbreaking remains a controversial issue – especially, it seems, in Japan.

Reports from Toyama, a city in the central part of Japan, say that a 24-year-old man named Daisuke Ikeda was recently arrested for selling five pre-jailbroken iPhones online.

Apparently, the phones also included a hacked version of Monster Strike, an online game that’s popular in Japan, allowing players greater powers in the game than they’d have if they were using the official version.

Racking up gameplay points or accessing powerful characters without earning or paying for them is unlikely to earn you many friends amongst players who have built up prestige in the game the hard way…

…and, in Japan, it seems it can land you in trouble with the police too.

According to the Japan Times, Ikeda had sold 200 iPhones before his arrest, “raking in an estimated ¥5,000,000 [about $50k] in sales.”

(Of course, that wasn’t his profit: there’s no suggestion that the phones were unlawfully acquired, so you have to subtract Ikeda’s purchase price from his average selling price of $250 per phone.)

What’s not clear from this case is the attitude of the authorities in Japan to jailbreaking in general.

If you decide to jailbreak your own phone, purchased outright in locked-down form – for example to run ported Unix utilities that would otherwise be blocked, or to install additional security features that Apple doesn’t provide – is that OK?

Afterwards, can you sell it on, or do you have to restore it with locked-down Apple firmware first?

What if it was a model that Apple no longer supported, so there wasn’t any recent firmware to restore?

One thing is certain: trying to regulate jailbreaking raises as many questions as it answers.

from Naked Security – Sophos

Six Ways To Prepare For The EU’s GDPR

In less than 20 months, all US companies doing business in the EU will face new consumer privacy requirements. Here’s how to prepare for them.

In less than 20 months, all companies handling personal data belonging to residents of the European Union will be expected to comply with a new set of privacy requirements under the EU General Data Protection Regulation (GDPR).

The GDPR introduces tough new privacy requirements for companies handling EU data and vests consumers with significantly greater control and rights over the manner in which their data is collected, shared, retained, and destroyed. The GDPR gives EU regulators the authority to impose fines ranging from 2 percent to 4 percent of a company’s global revenues for violations of the regulation.

“The May 2018 deadline for GDPR compliance may seem like a long way off,” says John Crossno, product manager at enterprise technology vendor Compuware, which did a recent survey on the preparedness of US firms for GDPR. “Given the complexity of change it will require in the way organizations handle personal data, it’s really not.”

Two-thirds of the CIOs at large companies in the survey said they had no plans yet for implementing critical GDPR requirements like data anonymization, customer consent, and the right to be forgotten.

Here, in no particular order, are the issues that US companies must be addressing right now to prepare for GDPR.

Develop And Articulate A Clear Privacy Policy

Under GDPR, companies must provide clear notice to their customers of the purpose for which their data is being collected, says Dana Simberkoff, chief compliance and risk officer at software vendor AvePoint.

Companies need to write a clear privacy policy that consumers will actually be able to read and understand.

In that policy, they need to clearly indicate what personal information is being requested or collected from consumers, says Simberkoff. Consumers have to be given a choice of whether or not to provide it, and any data that is collected needs to be clearly marked for the specific purpose for which it was collected.

In addition, any data collected for a stated purpose can be used only for that purpose, and only within the scope of the consent obtained, she says.

The obligation to meet this requirement flows from the entity that collected the data to any other organization that might process or handle it. Both will be held jointly liable in the event the data is used inappropriately or if there is a data breach.

“The GDPR requires that you not only create policies that meet its mandate, but that you operationalize those policies and be able to prove that you have done so,” Simberkoff says. “Companies should already be practicing transparency around why you want to collect data and ensuring all data is only used for the exact purpose and within the boundaries of consent.”

Enable An Opt-In Requirement For Data Sharing

Most US companies currently use an opt-out policy when collecting and sharing consumer data. The opt-out model requires consumers to specifically ask data collectors and aggregators not to share their data with third parties. Otherwise, consent is assumed by default.

GDPR will require organizations to do just the opposite. They will not be allowed to collect or share EU consumer data by default; the EU consumer will specifically have to consent to such collection and sharing by opting in. The consent must be “freely given, specific, informed and unambiguous,” Simberkoff says, quoting from the regulation.
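The difference between the two models comes down to the default value of consent. A toy sketch (the class and method names are hypothetical, not any real consent-management library) of the opt-in model GDPR mandates:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """GDPR-style consent: recorded per specific purpose, and nothing is
    shared unless the user has affirmatively opted in."""
    purposes: dict = field(default_factory=dict)  # purpose -> bool

    def opt_in(self, purpose):
        self.purposes[purpose] = True

    def may_share(self, purpose):
        # Opt-in model: the absence of a record means NO consent --
        # the opposite of the opt-out default common among US companies.
        return self.purposes.get(purpose, False)
```

Under the opt-out model, `may_share` would default to `True` until the consumer objected; under GDPR that default is no longer acceptable.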

“Privacy policies must be clear and concise, and companies must provide consumers with an opt-in option to having their data shared with third parties,” she says. “Just offering an opt-out option will no longer be acceptable.”

In addition to requiring affirmative consent, GDPR also places restrictions on the ability of companies to obtain consent from children without specific parental authorization.

Start Implementing Privacy by Design

GDPR is big on the notion of privacy by design, a requirement that emphasizes baking privacy protections into products, processes, and services rather than bolting them on.

“Software and development practices that don’t follow privacy by design principles put organizations at major risk in light of GDPR,” says Dan Blum, a senior analyst at KuppingerCole.

The earlier developers implement privacy-friendly practices, the more they can lower risks, reduce compliance costs, and future-proof their software, he says.

Examples of privacy-friendly software features under GDPR include opt-in consent, data use minimization, purpose specificity, data anonymization, and the right to be forgotten.

Larger organizations would benefit from establishing a privacy and data governance practice, if they don’t already have one, to keep track of software and development requirements and to manage change, Blum says. “They will need developer awareness and training to get developers to align with these processes and do their part,” Blum notes.

The Information Commissioner’s Office in the UK recommends eight foundational principles for privacy by design that include fair and lawful processing of personal data, minimization, data retention, and data security controls.

Prepare For New Data Breach Reporting Requirements

GDPR requires companies to inform consumers about data breaches impacting their personal information. While that requirement is not particularly new for American companies—most states mandate it already—the breach reporting requirements under GDPR are stringent.

“At 72 hours, the timeline to report a breach is the tightest that we’ve seen with any regulatory measures,” says Eldon Sprickerhoff, founder and chief security strategist at eSentire. 

The potential fines that companies face for non-compliance are also the highest, he says. Importantly, non-compliance fines aren’t issued because of a data breach. “The fines are issued because an organization failed to properly report a data breach within the designated timeframe,” he says.

The key to preparedness for this requirement is knowing what data you have and what legislation covers that data, Sprickerhoff says. Also key is a good understanding of the threats against your organization and the ability to describe how well you can defend against them.

“Do you know what access risks exist? Can you demonstrate that you’re doing what you’ve claimed?” Sprickerhoff asks. Ensuring that your organization has adequate measures to protect against cyber attacks is important, he says. “Including compliance reporting timelines as a part of incident response plans and policies is another vital exercise.”

Implement Controls For Tracking And Managing Data

GDPR gives consumers the right to ask companies holding data about them to erase that data upon request. It also gives them the right to ask for a copy of their digital data so they can transfer it to someone else if they choose to do so.

The so-called right to portability and the right to erasure or right to be forgotten provisions impose new requirements on companies doing business in the EU, says Eve Maler, vice president of innovation and emerging technology at ForgeRock.

“IT managers need to be asking themselves: can we track a customer’s personal data as it travels through our systems? Can we erase it if they request us to do so? Or better yet, can we provide them the tools to do this on their own?” Maler says. “These capabilities will be required under GDPR, and it’s a significant departure from business as usual.”
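The capabilities Maler lists — track, export, erase — can be made concrete with a toy sketch. All names below are hypothetical; a real implementation would span many systems and audit trails, but the required operations reduce to something like this:

```python
import json

class CustomerDataStore:
    """Toy registry illustrating GDPR portability (export) and the right
    to erasure (erase) for one customer's records across several systems."""

    def __init__(self):
        self._systems = {}  # system name -> {customer_id: record}

    def put(self, system, customer_id, record):
        self._systems.setdefault(system, {})[customer_id] = record

    def export(self, customer_id):
        """Right to portability: a machine-readable copy of all data held."""
        return json.dumps({s: recs[customer_id]
                           for s, recs in self._systems.items()
                           if customer_id in recs}, sort_keys=True)

    def erase(self, customer_id):
        """Right to erasure: remove the customer from every system and
        return how many systems held a record."""
        removed = 0
        for recs in self._systems.values():
            if recs.pop(customer_id, None) is not None:
                removed += 1
        return removed
```

The hard part in practice is the `_systems` map itself: knowing every place a customer’s data lives, which is exactly the tracking question Maler raises.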

Be Ready For Data Protection Impact Assessments

The GDPR requires companies to do data protection impact assessments (DPIAs) to identify “high risks” to consumer data privacy that might surface during data processing, says AvePoint’s Simberkoff.

Only some types of data processing involving personal data will trigger the requirement. Some time between now and when GDPR goes into effect, EU data privacy authorities will release a public list of the types of processing they consider to be high-risk and needing a DPIA.

The impact assessments can be incorporated into standard planning, development, test and deployment, and monitoring processes, Simberkoff says. They will allow privacy teams to implement privacy by design and enable a risk-based approach to data protection.

Online tools are available that allow organizations to conduct DPIAs and the goal should be to go ahead and conduct the assessments in advance of GDPR, Simberkoff says.

When risks are identified, companies should implement measures to mitigate those risks, which under GDPR include data encryption and pseudonymization or anonymization of data.
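Pseudonymization, one of the mitigations named above, can be as simple as replacing direct identifiers with a keyed hash. A sketch using HMAC-SHA256 — a common approach, not one GDPR specifically mandates:

```python
import hashlib
import hmac

def pseudonymize(value: str, key: bytes) -> str:
    """Replace an identifier with a keyed hash (HMAC-SHA256).
    Unlike a plain hash, the secret key blocks dictionary attacks on
    guessable inputs like email addresses. Unlike anonymization, the
    key holder can still link records (by recomputing the tag), so the
    result remains personal data under GDPR -- just better protected."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()
```

Records keyed by the pseudonym can still be joined and analyzed, while a breach of the pseudonymized dataset alone does not directly expose identities.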



Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.


from Dark Reading – All Stories

mimikittenz – Extract Plain-Text Passwords From Memory

mimikittenz is a post-exploitation PowerShell tool that utilizes the Windows function ReadProcessMemory() in order to extract plain-text passwords from various target processes.


The aim of mimikittenz is to provide user-level (non-admin privileged) sensitive data extraction in order to maximise post exploitation efforts and increase value of information gathered per target.

NOTE: This tool targets running process memory address space; once a process is killed, its memory ‘should’ be cleaned up and become inaccessible, but there are some edge cases in which this does not happen.
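The tool’s core technique is regex matching over raw bytes read from a target process. That idea can be sketched without the Windows-specific parts (the patterns below are illustrative, not mimikittenz’s actual regexes, and this sketch searches a plain buffer rather than calling ReadProcessMemory):

```python
import re

# Illustrative credential patterns (NOT mimikittenz's actual regexes):
# form-POST style key=value pairs as they might linger in process memory.
PATTERNS = {
    "email": rb"email=([^&\s\x00]+)",
    "password": rb"passwd=([^&\s\x00]+)",
}

def search_buffer(memory: bytes) -> dict:
    """Run each credential regex over a raw byte buffer (as mimikittenz does
    over chunks returned by ReadProcessMemory) and collect decoded matches."""
    hits = {}
    for name, pattern in PATTERNS.items():
        found = [m.decode(errors="replace") for m in re.findall(pattern, memory)]
        if found:
            hits[name] = found
    return hits
```

The real tool walks a process’s memory regions in chunks and applies a per-target pattern set for each of the services listed below.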


Currently mimikittenz is able to extract the following credentials from memory:

Webmail
  • Gmail
  • Office365
  • Outlook Web


Remote Access

  • Juniper SSL-VPN
  • Citrix NetScaler
  • Remote Desktop Web Access 2012

Development & Support Tools
  • Jira
  • Github
  • Bugzilla
  • Zendesk
  • Cpanel

Malware Analysis
  • Malwr
  • VirusTotal
  • AnubisLabs

Miscellaneous
  • Dropbox
  • Microsoft Onedrive
  • AWS Web Services
  • Slack
  • Twitter
  • Facebook

You can download mimikittenz here:


Or read more here.

from Darknet – The Darkside