14 Social Media-Savvy CISOs to Follow on Twitter

A roundup of some of the more social media-engaged security leaders to follow for updates on industry news, trends, and events.

Social-savvy CISOs share security scoops. Say that ten times fast.

Then jump over to Twitter, where members of the security community are actively posting news updates, industry trends, updates from events throughout the year, and their thoughts and opinions on all of the above.

We scoured our own Twitter feeds and conducted searches to find socially engaged CISOs and CSOs. These executives, who have years of experience in the industry, log on several times a week to share news, insights, guidance, and conversation with fellow infosec pros.

There are obviously far more than 14 active CISOs on Twitter and other forms of social media. This list – in no particular order – is only the beginning, and we’d like to continue adding names.

If you, or someone you know, is a socially engaged CISO, please add their name and handle in the comments. (We didn’t include security vendor CISOs, but please share any you follow who should be noted.)


Kelly Sheridan is Associate Editor at Dark Reading. She started her career in business tech journalism at Insurance & Technology and most recently reported for InformationWeek, where she covered Microsoft and business IT. Sheridan earned her BA at Villanova University.

from Dark Reading – All Stories http://ubm.io/2vKHWRy

How likely is a ‘digital Pearl Harbor’ attack on critical infrastructure?

It’s coming on two decades now since the first warnings that US critical infrastructure is vulnerable to a catastrophic cyberattack. According to some experts, it is perhaps more vulnerable now than ever – the threats are worse and the security is no better.

But how likely is such an attack? There is still plenty of debate about that.

Richard A Clarke, who in 2000 was the US’s top counter-terrorism and cybersecurity chief, gets credit for coining the term “digital Pearl Harbor”. He said at the time that it was “improbable,” but added that “statistically improbable events can occur”.

There have been similar warnings since from top government officials – former defense secretary Leon Panetta paraphrased Clarke in 2012, warning of a “cyber Pearl Harbor” – a major cyberattack on industrial control systems (ICS) that could disable the nation’s power grid, transportation system, financial industry and government for months or longer.

Of course, nothing even close to that catastrophic level has happened – yet. And there are a number of experts who say such doomsday language is gross hyperbole, peddling nothing but FUD (fear, uncertainty and doubt). Marcus Sachs, CSO of the North American Electric Reliability Corporation (NERC), said at the 2015 RSA conference that squirrels and natural disasters were a more realistic threat to the grid than a cyberattack.

But a couple of experts in ICS – the equipment used to operate the grid and other critical infrastructure – say they are increasingly troubled that security has not really improved since the warnings began.

Galina Antova, co-founder and chief business development officer at Claroty, recently referred in a blog to “The Lost Decade of Information Security”, saying:

“We are no better off today in terms of cybersecurity readiness than we were 10 years ago. The threat landscape is clearly growing more active and dangerous by the day. The theoretical is becoming reality and, unfortunately, we aren’t prepared to counter the threat just over the horizon.”

She has some company in the person of Joe Weiss, managing partner at Applied Control Solutions, who has said for years that ICS security is dangerously lax. Writing on his “Unfettered” blog last week, Weiss said there is essentially no security in ICS process sensors, the tools to detect anomalies in the operation of ICSs – which means an attacker could get control of them relatively easily and create major physical damage.

Weiss cited a number of sensor “malfunctions” that illustrate the problem. One, he said, resulted in the release of 10m gallons of untreated wastewater. Another was the rupture of a pipeline in Bellingham, WA, which released 237,000 gallons of gasoline into a nearby creek, where it caught fire, killing three people, causing an estimated $45m in property damage and leading to the bankruptcy of the Olympic Pipeline Company.

“That happened in June, 1999,” Weiss said in an interview. “How can that be relevant today? It turns out every bit of it is, because the same flaws that existed then exist today.”

He said in most cases there is no way to know if what happened was an accident or a malicious attack, because of a lack of visibility into the networks. And he wondered on his blog: “How can this lack of security and authentication of process sensors be acceptable?”
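Weiss’s complaint is, at bottom, about missing message authentication: nothing proves that a reading really came from the sensor it claims to. As a rough illustration of the gap – this is not any real ICS protocol, and the key and sensor names are hypothetical – here is how signing readings with an HMAC would make tampering detectable:

```python
import hmac, hashlib, json

SHARED_KEY = b"example-plant-key"  # hypothetical pre-shared key

def sign_reading(sensor_id: str, value: float, timestamp: int) -> dict:
    """Package a sensor reading with an HMAC tag so tampering is detectable."""
    payload = json.dumps({"sensor": sensor_id, "value": value, "ts": timestamp},
                         sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_reading(msg: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(SHARED_KEY, msg["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["tag"])

msg = sign_reading("flow-valve-7", 42.5, 1_500_000_000)
assert verify_reading(msg)           # untampered reading verifies

msg["payload"] = msg["payload"].replace(b"42.5", b"12.5")  # attacker alters value
assert not verify_reading(msg)       # forged reading is rejected
```

In practice the hard parts are key distribution and the decades-long lifetime of field equipment, which is exactly why, as Antova notes below, fixes take so long.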

What to do? That is where Weiss and Antova part company – just a bit. Antova said she agrees that the sensor flaws exist and, as she wrote, the threat of major ICS attacks “is real and just over the horizon”. But, in an interview, she also said she is “allergic” to describing the threat at either extreme – in relatively trivial terms (squirrels) or disaster (Pearl Harbor).

She said it is not simple or quick to fix flaws in sensors. “Engineers know it takes years to design,” she said, “and it can take 25 to 35 years to replace the architecture” of ICS equipment. She ought to know – she was formerly global head of industrial security services at Siemens, a leading manufacturer of power generation and transmission systems.

In her blog post, she called for implementing what is practical and feasible – the kind of “security hygiene” steps that would keep ICS from being the “low-hanging fruit” that it is now. Things like patches, really taking network segmentation seriously, and giving IT professionals visibility into the networks.

What has hampered that, she wrote, has been a failure to “bridge the gap” between IT and engineering staff, who “approach the world with different viewpoints, backgrounds and missions”. Engineers, she noted, focus on keeping things physically safe and running. Anything that impedes that, they reject.

She also said government regulatory frameworks and standards are, in many cases, not practical. One example she cited was the push for “air-gapped” networks. It sounded good, she said, but it interfered too much with efficiency and the needs of the business. “As a result, air gaps now have one thing in common with unicorns – they don’t exist,” she wrote.

But just doing security basics would help. “You have to start somewhere,” she said.

Weiss contends it is possible, and necessary, to be both more aggressive and creative. Part of the problem, he said, “is a failure of imagination. When you look at the bad guys, they really are bad guys. We need to think like bad guys.”

But the two agree that there needs to be better communication between operations and IT. “We’ve got to have engineering in the same room when IT comes in and says this is what I want to do,” Weiss said. “Every time there’s an important meeting in DC on cybersecurity, GE and Siemens aren’t there.”

And both agree that the risk of something really serious happening is growing. “We know these (ICS) networks are exposed,” Antova said. “They are resilient and have safety measures, but for a skilled hacker, it’s not that hard to fool safety equipment.”

The real menace, she said, is that ransomware like WannaCry and Petya is not just in the hands of nation states, but “in the hands of every crazy person. I don’t think people realize how poor the cyber hygiene is.”

from Naked Security – Sophos http://bit.ly/2fTkOep

It’s Not Exactly Open Season on the iOS Secure Enclave

The black box that is Apple’s iOS Secure Enclave may have been pried open, but that doesn’t necessarily mean it’s open season on iPhones and iPads worldwide.

Yesterday’s public disclosure of the decryption key for the Secure Enclave Processor firmware does indeed allow white and black hats to poke and probe about for vulnerabilities. But while finding a bug is one thing, exploiting it may be quite another.

Very little granular detail has been made public about what’s going on inside Secure Enclave. Probably the best known insight was provided during a 2016 Black Hat talk given by Azimuth Security researchers Tarjei Mandt, David Wang and Mathew Solnik.

They were able to reverse engineer the Secure Enclave Processor (SEP) hardware and software, and determined that while the hardware was state-of-the-art—or better—the software left a bit to be desired. Wang was interviewed on the Risky Business podcast (interview begins at 31:24) nearly a year ago and told host Patrick Gray that there was very little in the way of memory mitigations, though he could see that Apple was constantly tinkering with the security of the Secure Enclave’s software with each successive update.

“We think the hardware is light years ahead of the competition; the software, not so much,” Wang said. “It’s missing a lot of modern exploit mitigation technology; it’s pretty much unprotected.”

This was also disclosed during the Black Hat presentation, where it was revealed that protections such as ASLR and stack cookies were missing at the time.

Mandt, however, yesterday echoed what other researchers have been saying since the key was published: the immediate threat to users is negligible.

“Our research from last year also showed that doing this typically requires additional vulnerabilities in iOS in order to enable an attacker to communicate arbitrary messages (data) to the SEP,” Mandt told Threatpost. “It is also worth noting that Apple by now presumably has addressed the shortcomings that we highlighted last year regarding exploit mitigations, making exploitation harder.”

According to the most recent iOS Security Guide, communication between the Secure Enclave and the iOS application processor—which is entirely separated from the SEP—is done through “an interrupt-driven mailbox and shared memory data buffers.”
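As a conceptual model only – Apple’s actual mechanism lives in hardware and firmware, not Python – the mailbox pattern the guide describes can be pictured as one side writing into a shared buffer and then raising a signal to wake the other side:

```python
import threading

class Mailbox:
    """Toy model of an interrupt-driven mailbox: a writer deposits a message
    in a shared buffer, then 'raises an interrupt' (here, an Event) to wake
    the reader on the other processor."""
    def __init__(self):
        self.shared_buffer = None          # stands in for shared memory
        self.interrupt = threading.Event() # stands in for a hardware interrupt

    def send(self, msg):
        self.shared_buffer = msg
        self.interrupt.set()               # signal the other side

    def receive(self, timeout=1.0):
        if not self.interrupt.wait(timeout):
            return None                    # no interrupt arrived in time
        self.interrupt.clear()
        return self.shared_buffer

mbox = Mailbox()
mbox.send({"op": "verify_fingerprint"})
assert mbox.receive() == {"op": "verify_fingerprint"}
```

The point of the design, per the guide, is that the application processor never touches SEP memory directly – everything crosses through this narrow, message-based channel.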

As for the lack of ASLR or stack cookies, Wang told Risky Business this could be due to a lack of computing resources in the Secure Enclave microkernel needed to support these mitigations.

The Secure Enclave, as explained in the iOS Security Guide, is a coprocessor unto itself inside the mobile operating system. Its job is to handle cryptographic operations for data protection key management; its separation from the rest of iOS maintains its integrity even if the kernel is compromised, Apple said in the guide. Primarily, the Secure Enclave processes Touch ID fingerprint data, signs off on purchases authorized through the sensor, or unlocks the phone by verifying the user’s fingerprint.

The key was published by a hacker known only as xerub, who refused to identify himself or provide any detail on how he derived the key or whether he found any vulnerabilities in the Secure Enclave. Apple acknowledged the report, but as of yesterday still had not confirmed the legitimacy of the key xerub published. The key unlocks only the SEP firmware; user data is not at risk, xerub told Threatpost.

The disclosure also harkened back to Apple’s decision last June to release an unencrypted version of the iOS 10 kernel to beta testers. “The kernel cache doesn’t contain any user info, and by unencrypting it we’re able to optimize the operating system’s performance without compromising security,” Apple said at the time.

The decision sparked similar concerns as to yesterday’s leak, that attackers as well as legitimate researchers would be able to find and potentially exploit vulnerabilities in the kernel. Apple’s contention is that the move ultimately improves security with more researchers examining the code for bugs and privately disclosing them to the company or through its bug bounty program. Such a move also potentially weakens gray-market sales for iOS bugs, or government hoarding of bugs.

Yesterday’s news set off another flurry of angst as to the ongoing security of iOS and what would happen now that the firmware had been unlocked.

“I wouldn’t say there is any immediate threat to users at this point,” Azimuth Security’s Mandt said. “Although the key disclosure allows anyone to analyze the software that is running on the SEP processor, it still requires an attacker to find and exploit a vulnerability in order to compromise SEP.”

from Threatpost – English – Global – thr… http://bit.ly/2uYvFGz

Drone firm says it’s stepping up security after US army ban

Two weeks ago, the US Army told its troops that using drones from DJI – maker of the world’s best-selling drones – was henceforth verboten, given unspecified vulnerabilities discovered by its research lab and the US Navy.

While the army was keeping mum about those vulnerabilities, others haven’t been so circumspect. Rather, they’ve been talking for months about sensitive information having the potential to be scattered in the tailwinds.

In May, Kevin Pomaski, a chief pilot for one of the largest commercial UAS service providers in the US, wrote an article about highly sensitive information that can be revealed in conversations between unmanned aerial system (UAS) pilots and their clients: details that he said can include infrastructure, stadiums, military installations, construction sites, details about security, details about the drone itself, details about the drone operator, and more.

This sensitive data is vulnerable to interception, he said:

Critical infrastructure access and layouts are being captured every day. This information may be accessed by foreign actors that mean to harm the countries that these locations are in. The complete data record can be cataloged by pilot, region or location and a full report of the layout, security response, names of people will be revealed. Corporate espionage agents would love to have visual and audio details of that new system being captured by the drone in any industrial field of pursuit.

More recently, rumors have been flying about operators being told not to show up for work at US government agencies unless they bring American-made drones with them. According to sUAS News, the unspecified government agencies allegedly have security concerns about data being shared unwittingly.

If the allegations are true, it adds up to a ban on the Chinese-made DJI equipment. DJI is, after all, a Chinese company, governed by Chinese law, as Pomaski pointed out.

He dissected the privacy policy of DJI’s Go app and came up with a number of issues around sensitive data. For example, this passage from the privacy policy notes that personal information could be transferred to offshore servers:

The DJI Go App connects to servers hosted in the United States, China, and Hong Kong. If you choose to use the DJI Go App from the European Union or other regions of the world, then please note that you may be transferring your personal information outside of those regions for storage and processing. Also, we may transfer your data from the US, China, and Hong Kong to other countries or regions in connection with storage and processing of data, fulfilling your requests, and providing the services associated with the DJI Go App. By providing any information, including personal information, on or through the DJI Go App, you consent to such transfer, storage, and processing.

Now, two months after the army banned DJI drones, DJI has responded by adding a privacy mode to its equipment to prevent flight data being shared to the internet.

On Monday, DJI announced that it’s adding a local data mode that stops internet traffic to and from its flight control apps “in order to provide enhanced data privacy assurances for sensitive government and enterprise customers”.

The company says the privacy mode had been in the works for months, before the army ban. The new privacy mode, due out in future app versions expected in the coming weeks, entails a tradeoff: blocking all internet data means that DJI apps won’t…

  • update maps or geofencing information, meaning pilots could wind up flying in banned zones
  • notify pilots of newly issued flight restrictions or software updates
  • be able to upload to YouTube

On the plus side:

[Local data mode] will provide an enhanced level of data assurance for sensitive flights, such as those involving critical infrastructure, commercial trade secrets, governmental functions or other similar operations.

The army memo had told troops to “cease all use, uninstall all DJI applications, remove all batteries/storage media from devices, and secure equipment for follow on direction.”

However, the army has reportedly walked that ban back a bit, sUAS News reported on Monday. A second memo had reportedly gone out at the end of last week, to the effect that the army will grant exceptions to the ban once a DJI plugin has passed OPSEC (Operational Security) scrutiny.

from Naked Security – Sophos http://bit.ly/2wo7Pas

Facebook Doles Out $100K for Internet Defense Prize

Winners developed a new method of detecting spearphishing in corporate networks.

A team of researchers today was awarded a $100,000 prize from Facebook for their work in detecting spearphishing attacks.

Facebook awarded its Internet Defense Prize award to Grant Ho, University of California, Berkeley; Aashish Sharma, Lawrence Berkeley National Laboratory; Mobin Javed, University of California, Berkeley; Vern Paxson, University of California, Berkeley and International Computer Science Institute; and David Wagner, University of California, Berkeley, who authored Detecting Credential Spearphishing Attacks in Enterprise Settings.

The researchers came up with a method of detecting spearphishing in corporate networks that doesn’t trigger a large number of false positive alerts, according to Facebook.
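The paper’s actual method is more involved, but its core idea – rank every email by how anomalous it looks and alert only on a small, fixed budget of the worst scorers, rather than on everything crossing a threshold – can be sketched in a few lines. The scoring function and domain names below are toy stand-ins, not the authors’ features:

```python
from collections import Counter

def spearphish_alerts(emails, history, alert_budget=1):
    """Score each email by how rarely its sender domain and linked domain
    appear in historical traffic, then alert only on the top-N scorers.
    A fixed daily alert budget is what keeps false positives manageable."""
    sender_freq = Counter(e["sender_domain"] for e in history)
    url_freq = Counter(e["url_domain"] for e in history)

    def score(e):
        # rarer sender / URL domains -> higher suspicion
        return (1.0 / (1 + sender_freq[e["sender_domain"]])
                + 1.0 / (1 + url_freq[e["url_domain"]]))

    return sorted(emails, key=score, reverse=True)[:alert_budget]

history = [{"sender_domain": "corp.example",
            "url_domain": "intranet.example"}] * 50
inbox = [
    {"sender_domain": "corp.example",       "url_domain": "intranet.example"},
    {"sender_domain": "c0rp-login.example", "url_domain": "evil.example"},
]
alerts = spearphish_alerts(inbox, history)
assert alerts[0]["sender_domain"] == "c0rp-login.example"  # lookalike flagged
```

However many emails arrive, the analyst workload stays bounded by the budget – which is the property Facebook highlighted in awarding the prize.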

Read more about the award on Facebook’s blog.

Dark Reading’s Quick Hits delivers a brief synopsis and summary of the significance of breaking news events. For more information from the original source of the news item, please follow the link provided in this article. View Full Bio

from Dark Reading – All Stories http://ubm.io/2wX4MnP

Curbing the Cybersecurity Workforce Shortage with AI

By using cognitive technologies, an organization can address the talent shortage by getting more productivity from current employees and improving processes.

It may seem counterintuitive, but close to 0% unemployment in an industry is not a good thing. Little to no unemployment means there aren’t enough cybersecurity professionals to fill open positions; there’s a high demand for existing talent, resulting in salary inflation and high turnover; and hiring of underqualified workers is more likely. But this is the situation for cybersecurity, and it’s unlikely to get better soon — more than 1.5 million job openings are anticipated globally by 2019.

No matter how hard organizations try, they won’t be able to hire enough college graduates, recruit enough skilled professionals, or reskill enough of the existing workforce to reduce, let alone erase, the shortage. But there is another way: cognitive computing — systems that learn, think, and interact with humans. By using cognitive technologies such as artificial intelligence, machine learning, advanced analytic techniques, and automation, an organization can address the cyber workforce shortage by getting more productivity from the existing employees and optimizing the supporting processes.

The premise is simple: cognitive computing allows an organization to make better use of the time and skills of its cybersecurity talent and improve security in the process. Instead of having the workforce spend the bulk of its time reacting to potential threats or on mundane administrative tasks, it can now focus on proactive security and complex investigations.

For example, cognitive technologies can help address the workforce shortage by improving the organization’s workflow. One leading investment firm noted that by automating routine activities, tasks that used to take cyber professionals about 40 minutes were now accomplished in 40 seconds, and analysts’ productivity tripled. That’s the value of automation — not spending too much time on mundane tasks, when time and talent are already in short supply.

In addition to saving time, it saves money. A recent study found that organizations spend about 21,000 hours investigating false or erroneous security alerts at an average cost of $1.3 million annually. These alerts could be handled by cognitive systems, which would only notify cybersecurity personnel when more investigation is warranted.
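A minimal sketch of what such triage automation might look like — the alert fields, signatures, and thresholds below are illustrative, not any vendor’s product:

```python
def triage(alerts, known_benign, min_severity=7):
    """Auto-dismiss alerts matching known-benign signatures; escalate to a
    human only when severity crosses a threshold. Everything else goes to a
    batch-review queue instead of interrupting an analyst."""
    escalate, review, dismissed = [], [], []
    for a in alerts:
        if a["signature"] in known_benign:
            dismissed.append(a)
        elif a["severity"] >= min_severity:
            escalate.append(a)
        else:
            review.append(a)
    return escalate, review, dismissed

alerts = [
    {"signature": "port-scan-internal", "severity": 3},
    {"signature": "cred-dump-lsass",    "severity": 9},
    {"signature": "dns-timeout",        "severity": 2},
]
esc, rev, dis = triage(alerts, known_benign={"dns-timeout"})
assert [a["signature"] for a in esc] == ["cred-dump-lsass"]  # human sees one
assert [a["signature"] for a in dis] == ["dns-timeout"]      # noise dropped
```

Even this crude filter shows the economics: the analyst’s queue shrinks to the alerts that actually warrant investigation.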

But automation is just the beginning. One of the more powerful newer applications is the use of advanced analytics. This technique uses supercomputer processing power to sift through large sets of data to identify behavioral patterns, malicious code, and network anomalies that may not be readily apparent. This can help cyber professionals predict where threats are most likely to occur and then prevent them before they do.

Consider the case of a large cable and Internet service provider that was receiving more than 500,000 network security alerts every day. It implemented a behavioral analytics application that allowed analysts to baseline network activity, identify and correlate security alerts to isolate the most threatening, and refine security thresholds. The results: six months later, the provider saw a 99.8% reduction in alerts and its cyber professionals were now spending their time investigating the highest-priority alerts that required human ingenuity to solve.
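The baselining step can be illustrated with a simple statistical sketch. Real behavioral analytics products model far more than daily alert counts, and the numbers below are invented, but the principle — flag only what deviates sharply from the learned norm — is the same:

```python
from statistics import mean, stdev

def baseline(counts):
    """Learn the normal range of daily alert volume from history."""
    return mean(counts), stdev(counts)

def is_anomalous(today, mu, sigma, threshold=3.0):
    """Flag a day whose alert volume deviates more than `threshold`
    standard deviations from the historical baseline."""
    return abs(today - mu) > threshold * sigma

history = [480_000, 510_000, 495_000, 505_000, 500_000]  # daily alert counts
mu, sigma = baseline(history)
assert not is_anomalous(502_000, mu, sigma)  # within normal variation
assert is_anomalous(900_000, mu, sigma)      # a genuine spike stands out
```

Once thresholds are tuned against a baseline like this, the flood of raw alerts collapses to the handful of deviations worth human attention — the 99.8% reduction the provider reported.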

How It’s Used
The applications for behavioral analytics are endless. Banks can use this technique to identify suspicious online account activity that deviates from an individual user’s typical profile, thereby stopping theft, fraud, or further network penetration before it begins in earnest. Cybersecurity firms can use it to detect a new virus or unknown attacks and stop the malicious behavior before damage happens, permitting responses at machine speed.

The use of analytics is one of cognitive technologies’ greatest advantages for cybersecurity in that it allows organizations to take a proactive approach. The ability to wade through massive amounts of network traffic to quickly identify irregular behaviors is an enormous security advantage. Being able to predict where threats are most likely to occur, and then prevent them before they do, can fundamentally change security.

Another way cognitive technology addresses the cybersecurity workforce shortage is by helping to reduce employee turnover, which can occur when employees feel unsatisfied with the work. A typical workday filled with uninspiring tasks or activities that aren’t challenging can prompt employees to seek professional fulfillment elsewhere. According to a report by the Society of Human Resource Management, 48% of employees reported that the work itself was very important to job satisfaction.

Naturally, there are concerns that cognitive computing means that the “robots are taking over” or that the efficiency of cognitive technologies may be so advantageous that humans may be out of work. But this fear is overblown. When grocery stores brought in self-checkout kiosks, cashiers feared they’d no longer be needed. The advent and widespread adoption of ATMs caused many to believe that bank tellers were on the brink of becoming passé. But the number of grocery store cashiers and bank tellers actually grew over time. In cybersecurity, there remains a place and an overwhelming need for human interaction and ingenuity that a machine cannot fulfill.

The key is to not compete against the machine but to compete with it. Cognitive technologies can manage rote security tasks, predict malicious attacks, and help retain employees. These capabilities allow companies to address workforce shortfalls by reassigning existing personnel without needing to rely solely on hiring new and experienced talent, while also improving processes and adding rigor to decision making.

But they can’t do everything. When these insights are combined with an organization’s knowledge of its own network, cybersecurity professionals can identify the network’s weak points, characterize the type of attacks the network is susceptible to, and prioritize addressing the pertinent vulnerabilities. In this way, human-machine teaming can produce better outcomes in less time.


Deborah Golden is a principal in Deloitte & Touche LLP’s Advisory practice, with over 20 years of information technology, security, and privacy experience encompassing various industries, with a specialization in Cyber-Risk Services, as well as within the Federal, Life …

from Dark Reading – All Stories http://ubm.io/2fQZPbQ

‘Pulse wave’ DDoS – another way of blasting sites offline

After all the excitement over 2016’s Mirai Internet of Things (IoT) DDoS attack, you could be forgiven for thinking that the criminal pastime of overloading servers with lots of unwanted traffic has gone a bit quiet recently.

It’s been this way for years. DDoS attacks tend not to be noticed by anyone other than service providers unless they are particularly huge, hit well-known websites, or manifest nastiness such as the notorious DD4BC extortion gang attacks of 2015.

Such high-profile incidents are infrequent, but beneath the surface – behind service providers quietly fighting fires and the commercial secrecy that keeps many attacks unreported – innovation rumbles on.

Now, mitigation company Incapsula has spotted an example of this behind-the-scenes evolution in the form of “pulse wave”, a new type of attack pattern which, from the off, had its experts intrigued.

DDoS attacks, which spew forth from botnets of one type or another, normally follow a format in which traffic increases before a peak is reached, after which comes either a gradual or sudden drop. The rise has to be gradual because bots take time to muster.

The recent wave of pulse attacks during 2017 looked different, with massive peaks popping out of nowhere rapidly, often within seconds. Demonstrating that this was no one-off, successive waves followed the same pattern.
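One way a mitigation system might tell the two patterns apart is by measuring time-to-peak. This is a simplified sketch – real detection works on many more signals than a single traffic curve – with invented sample data:

```python
def classify_attack(samples, peak_fraction=0.9, ramp_seconds=30):
    """Given per-second traffic samples (Gbps), report how long the attack
    took to reach 90% of its peak. A classic botnet ramps up over minutes
    as bots muster; a 'pulse wave' hits near-peak almost instantly."""
    peak = max(samples)
    for t, value in enumerate(samples):
        if value >= peak_fraction * peak:
            return "pulse" if t <= ramp_seconds else "gradual"

gradual = [i * 2 for i in range(300)]  # climbs toward ~600 Gbps over 5 minutes
pulse   = [300] * 60                   # full force from the first second
assert classify_attack(gradual) == "gradual"
assert classify_attack(pulse) == "pulse"
```

It is precisely that near-zero ramp time, repeated with clockwork regularity, that led Incapsula to conclude the attackers were redirecting an always-on botnet rather than spinning one up for each target.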

Says Incapsula:

This, coupled with the accurate persistence in which the pulses reoccurred, painted a picture of very skilled bad actors exhibiting a high measure of control over their attack resources.

Granted, but to what end?

The clue was in the gaps between the “pulses” of each attack. In fact, the botnet or botnets behind these attacks were not necessarily being switched off at all – the gaps were just the attackers pointing the botnet at different targets, like turning a water cannon. This explained the rapid surge in traffic at the commencement of each attack.

It’s likely not a coincidence, Incapsula claims, that this pattern causes problems for one DDoS defence, which is to use on-site equipment with fail-over to a cloud traffic “scrubbing” system in the event that an attack gets too big. Because traffic ramps almost instantly, that fail-over can’t happen smoothly, and indeed the network might rapidly find itself cut off.

If that’s true, organisations that have built their datacentres around sensible layered or “hybrid” DDoS defense will be in a pickle. Either they’ll have to beef up their in-house mitigation systems or convince their cloud provider to offer rapid fail-over. Incapsula, we humbly note, sells cloud-based mitigation.

All in all, it sounds like a small but important technical innovation that will be countered with the same. Given the impressive traffic these botnets seem able to summon at will – reportedly 300Gbps for starters – it would be unwise to dismiss it as just another day at the internet office.

Or perhaps the real innovation in DDoS criminality isn’t in the way traffic is pointed at victims so much as the tragic wealth of undefended servers and devices that can be hijacked to generate the load in the first place.

This was one of the surprising lessons of Mirai and perhaps it has yet to be learned: never underestimate the damage a motley collection of ignored and forgotten webcams and home routers can do to some of the internet’s biggest brands if given the chance.

from Naked Security – Sophos http://bit.ly/2v82NuA