Facebook photo API bug exposed users’ unpublished photos

A bug in Facebook’s photo API may have exposed up to 6.8 million users’ photos to app developers, the company announced on Friday.

Facebook said that normally, when a user gives an app permission to access their Facebook photos, developers are only supposed to get access to photos posted to the user’s timeline.

In this case, the bug allowed up to 1,500 apps – built by 876 developers – to access photos outside of the timeline. Specifically, photos from Marketplace or Facebook Stories were exposed. The most worrisome collection of exposed photos, however, were those that users hadn’t even posted.

It’s not that the apps were sniffing at your photo rolls, Facebook said. Rather, the API bug was letting those apps access photos that people had uploaded to the platform but hadn’t chosen to post.

They might have uploaded a photo to Facebook but hadn’t finished posting it because they lost reception, Facebook suggested.

Then again, maybe a user had second thoughts about posting a particularly sensitive, personal or intimate photo, and that’s where the fear factor kicks in: they might have had second thoughts for very good reasons, but a bug like this one makes reticence completely irrelevant.

Why is this even an issue, you might ask? One would imagine that photos that were never posted to Facebook were nothing more than a glimmer in the photographer’s eye, but no: Facebook says that it stores a copy of photos that are postus-interruptus for three days “so the person has it when they come back to the app to complete their post.”

Note the “when”: that’s marketing-positive speak that ignores the existence of the subjunctive “if,” as if second thoughts about posting just don’t happen in social media.

If only.

The only apps that were affected by the bug were so-called trusted ones: the apps that Facebook has approved to access the photo API and to which people had granted permission to access their photos.
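
For context, here’s a minimal sketch – not Facebook’s code; the token is a placeholder and the API version is simply one that was current at the time – of how an approved app reads a user’s photos through the Graph API once that permission is granted. The bug’s effect was that a response like this could include photos the user had uploaded but never actually posted:

```python
# Hedged sketch of a third-party app reading a user's photos via the
# Graph API after Facebook Login. ACCESS_TOKEN is a placeholder.
import requests

ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # placeholder token from Facebook Login

resp = requests.get(
    "https://graph.facebook.com/v3.2/me/photos",
    params={
        "type": "uploaded",           # photos the user uploaded
        "fields": "id,created_time",  # a couple of basic fields
        "access_token": ACCESS_TOKEN,
    },
    timeout=10,
)
resp.raise_for_status()
for photo in resp.json().get("data", []):
    print(photo["id"], photo["created_time"])
```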

You found this out WHEN?

Facebook says that its internal team discovered the bug, which may have affected people who used Facebook Login and who had granted permission to third-party apps to access their photos. It’s now fixed, but the third-party apps could have had inappropriate access to photos for nearly two weeks: specifically, the 12 days between 13 September and 25 September.

In its announcement, Facebook stayed quiet on the question of why we’re only hearing about this now. But when TechCrunch asked Facebook about what seems like an excessive notification lag, the platform said that its team discovered the breach on 25 September, went ahead and informed the European Union’s privacy watchdog (the IDPC, or Office of the Data Protection Commissioner) on 22 November, and the IDPC then began a statutory inquiry into the breach.

Facebook must be suffering from serious apology fatigue at this point because all it managed to cough up was:

We’re sorry this happened.

Facebook told TechCrunch that it took time to investigate which apps and users were affected by the bug, and to then build and translate the warning notification it’s planning to send to those affected users.

Early this week, Facebook will release tools for app developers that will allow them to determine which people using their app might be affected by the bug. Facebook says it will be working with those developers to delete the photos that the apps should not have been able to access in the first place.

Facebook will be notifying users who’ve potentially been affected by the bug via an alert that will direct them to a Help Center link, where they’ll be able to see if they’ve used any apps that were affected by the bug.

What are the GDPR implications?

The delay might put Facebook at risk of stiff General Data Protection Regulation (GDPR) fines from the European Union for not disclosing the issue within 72 hours. That can get painful: those fines can go up to €20 million (USD $22.68 million) or 4% of annual global revenue, whichever is higher.

European regulators confirmed on Friday that they are, indeed, investigating Facebook for violating the GDPR – the first major test of the new regulations, as ABC News reports.

Here’s what Graham Doyle, the Irish Data Protection Commission’s head of communications, told ABC News:

The Irish DPC has received a number of breach notifications from Facebook since the introduction of the GDPR on May 25, 2018. With reference to these data breaches, including the breach in question, we have this week commenced a statutory inquiry examining Facebook’s compliance with the relevant provisions of the GDPR.

Yet another day that will live in privacy infamy

The photo API bug was discovered on 25 September: the same day that Facebook discovered that crooks had figured out how to exploit a bug (actually, a combination of three different bugs) so that when they logged in as user X and did View As user Y, they essentially became user Y. In other words, the crooks exploited a bug to recover Facebook access tokens – the keys that allow you to stay logged into Facebook so you don’t need to re-enter your password every time you use the app – for user Y, potentially giving them access to lots of data about that user.

The access token bug affected what would turn out to be an estimated 30 million Facebook users.

As far as the newly disclosed photo API bug goes, we don’t know yet which apps got at photos they weren’t supposed to access. ABC News reached out to dating apps Tinder, Grindr and Bumble, but they hadn’t responded as of Monday.

Privacy advocates expressed a combination of concern and shell-shocked shrugging at the latest in Facebook’s privacy fumbles.

ABC News quoted Christine Bannan, counsel for the Electronic Privacy Information Center (EPIC), who said that the latest incident shows just how Facebook’s lack of concern for user privacy leads to exactly this kind of exposure:

It’s another example of FB not taking privacy seriously enough. Facebook just wants as much data as possible and just isn’t careful with it. This is happening because they are having developers have access to their platform without having standards and safeguards to what developers have access to.

Gennie Gebhart, a researcher with Electronic Frontier Foundation (EFF), told the news outlet that as far as users are concerned, it doesn’t matter if their data gets abused by design or by flubbery. It all amounts to the same in the end:

2018 has been the year of Facebook and other tech companies violating these privacy expectations, with nothing resembling informed consent. It is important to differentiate this from Cambridge Analytica, which wasn’t a bug. That was a platform behaving as it was intended. This is a different breed of privacy violation. This was an engineering mistake in the code. Of course, on the user end, those technicalities aren’t important. This is just another huge Facebook privacy scandal.


Logitech flaw fixed after Project Zero disclosure

When it comes to fixing security vulnerabilities, it should be clear by now that words only count when they’re swiftly followed by actions.

Ask peripherals maker Logitech, which last week became the latest company to find itself on the receiving end of an embarrassing public flaw disclosure by Google’s Project Zero team.

In September, Project Zero researcher Tavis Ormandy installed Logitech’s Options application for Windows (available separately for Mac), used to customise buttons on the company’s keyboards, mice, and touchpads.

Pretty quickly, he noticed some problems with the application’s design, starting with the fact that it…

opens a websocket server on port 10134 that any website can connect to, and has no origin checking at all.

Websockets simplify the communication between a client and a server – in this case between the application and the peripheral. However, this can, in theory, create security risks.

The only “authentication” is that you have to provide a pid [process ID] of a process owned by your user, but you get unlimited guesses so you can bruteforce it in microseconds.

Ormandy claimed this might offer attackers a way of executing keystroke injection to take control of a Windows PC running the software.
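
To make the missing safeguard concrete, here’s a minimal sketch – ours, not Logitech’s code – of a local WebSocket server that does check the Origin header, using Python’s third-party websockets library; the allowed origin is hypothetical:

```python
# Sketch only: a local WebSocket server that refuses cross-origin
# connections -- the check Ormandy found missing.
# Requires: pip install websockets (10.1+ for one-argument handlers)
import asyncio
import websockets

async def handler(websocket):
    # Whatever the local service does for legitimate clients goes here.
    await websocket.send("hello from the local service")

async def main():
    # 'origins' makes the library reject any handshake whose Origin
    # header isn't on the allow-list, so arbitrary websites can't connect.
    async with websockets.serve(
        handler, "127.0.0.1", 10134,
        origins=["https://options.example.com"],  # hypothetical trusted origin
    ):
        await asyncio.Future()  # run forever

asyncio.run(main())
```

Pairing a check like this with a real shared secret – rather than a guessable process ID, of which there are only a few tens of thousands of possible values – would close both of the holes Ormandy described.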

What to do? Tell the company, of course.  

Within days of contacting Logitech, Ormandy says, he met with its engineers on 18 September to discuss the vulnerability, and they assured him they understood the problem.

A new version of Options appeared on 1 October without a fix, although in fairness to Logitech that was probably too soon for any patch for Ormandy’s vulnerability to be included. As anyone who’s followed Google’s Project Zero will know, it operates a strict 90-day deadline for a company to fix vulnerabilities disclosed to it, after which they are made public.

Counting from Ormandy’s first email contact, that 90-day deadline passed on 11 December, the day he made the issue public and published the timeline described above, along with this blunt advice:

I would recommend disabling Logitech Options until an update is available.

Clearly, the disclosure got things moving – on 13 December, Logitech suddenly updated Options to version 7.00.564 (7.00.554 for Mac). The company also tweeted that the flaws had been fixed, confirmed by Ormandy on the same day.

Earlier in 2018, Microsoft ran into a similar issue over a vulnerability found by Project Zero in the Edge browser.

The lesson is that vendors must get from the point of disclosure to a released fix more rapidly than they were perhaps used to in the past.


Twitter fixes bug that lets unauthorized apps get access to DMs

Back in 2013, the OAuth keys and secrets that official Twitter apps use to access users’ Twitter accounts were disclosed in a post to Github… a leak that meant that authors didn’t need to get their app approved by Twitter to access the Twitter API.

Years later, the chickens are still coming home to roost: on Friday, researcher Terence Eden posted about finding a bug in the OAuth screen that stems from a fix that Twitter used to limit the security risks of the exposed keys and secrets. The bug involved the OAuth screen saying that some apps didn’t have access to users’ Direct Messages… which was a lie. In fact, they did.

Imagine the airing of dirty laundry that could ensue, Eden said:

You’re trying out some cool new Twitter app. It asks you to sign in via OAuth as per usual. You look through the permissions – phew – it doesn’t want to access your Direct Messages.

You authorise it – whereupon it promptly leaks to the world all your sexts, inappropriate jokes, and dank memes. Tragic!

Eden explained that Twitter put safeguards in place after its OAuth keys and secrets were published, the most important being a restriction on so-called callback addresses: after a successful login, the flow returns only to a URL predefined for that app. In other words, a third-party developer can’t simply reuse the leaked official API keys in their own app.

The problem is, not all apps have a URL, or support callbacks, or are, in fact, actual apps. For those situations, Twitter provides a secondary, PIN-based authorization method. “You log in, it provides a PIN, you type the PIN into your app,” and the app is authorized to read your Twitter content, Eden explained.
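
For the curious, here’s roughly what that PIN (“out-of-band”) flow looks like from an app’s side – a sketch using the requests-oauthlib Python library, with placeholder app credentials:

```python
# Sketch of OAuth 1.0a's PIN ("oob") flow as Twitter offered it.
# Credentials are placeholders; requires: pip install requests-oauthlib
from requests_oauthlib import OAuth1Session

CONSUMER_KEY = "app-key"        # placeholder
CONSUMER_SECRET = "app-secret"  # placeholder

# "oob" (out-of-band) means: no callback URL -- show the user a PIN instead.
oauth = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET,
                      callback_uri="oob")
oauth.fetch_request_token("https://api.twitter.com/oauth/request_token")

# The user visits this URL, approves the app on the permissions screen
# (the one that was misreporting DM access), and is shown a PIN.
print("Visit:", oauth.authorization_url("https://api.twitter.com/oauth/authorize"))
pin = input("PIN shown by Twitter: ")

# The PIN is the 'verifier' that completes the exchange; the resulting
# token carries the app's real permissions, whatever the screen said.
tokens = oauth.fetch_access_token("https://api.twitter.com/oauth/access_token",
                                  verifier=pin)
print(tokens["oauth_token"])
```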

That’s the spot where the bogus OAuth information was being fed to the user, Eden said. The dialog was erroneously telling the user that the app couldn’t access direct messages, though it could. Eden:

For some reason, Twitter’s OAuth screen says that these apps do not have access to Direct Messages. But they do!

Eden submitted his findings via HackerOne on 6 November and, after he clarified some points for Twitter, the company accepted the issue that same day.

Twitter fixed the bug on 6 December, announced that it was paying Eden a bounty of $2,940 and gave him the go-ahead to publish the details of his report.

Eden told media outlets that by using his proof of concept, he was able to read his own direct messages, along with those of a dummy account he had created.

It would have been a difficult attack to pull off, he said:

An attacker would have had to convince you to click on a link, sign in, then type a PIN back into the original app. Given that most apps request DM access – and that most people don’t read warning screens – it is unlikely that anyone was mislead by it.

Twitter agreed, and said that users don’t have to lift a finger: there’s no danger of our DMs being intercepted. From its summary on the HackerOne report:

We do not believe anyone was mislead [sic] by the permissions that these applications had or that their data was unintentionally accessed by the Twitter for iPhone or Twitter for Google TV applications as those applications use other authentication flows. To our knowledge, there was not a breach of anyone’s information due to this issue. There are no actions people need to take at this time.


Sneaky phishing campaign beats two-factor authentication

Protecting an account with multi-factor authentication (MFA) is a no-brainer, but that doesn’t mean every method for doing this is equally secure.

Take SMS authentication, for example, which in recent times has been undermined by various man-in-the-middle and man-in-the-browser attacks as well as SIM swap frauds carried out by tricking mobile providers.

This week, researchers at Certfa Lab said they’d detected a recent campaign by the Iranian ‘Charming Kitten’ group (previously blamed for the 2017 HBO hack) that offers the latest warning that SMS authentication is not the defence it once was.

The targets in this campaign were high-value individuals such as US Government officials, nuclear scientists, journalists, human rights campaigners, and think tank employees.

Certfa’s evidence comes from servers used by the attackers which contained a list of 77 Gmail and Yahoo email addresses, some of which were apparently successfully compromised despite having SMS verification turned on.

We don’t normally get a chance to peer inside attacks that are as targeted as this one, let alone ones prodding 2FA for weaknesses.

The campaign was built around the old idea of sending a fake alert from a plausible-looking address such as notifications.mailservices@gmail.com.

Google sends out alerts from time to time, so a few people might be tricked by this, but there were other tweaks to boost its chances even further, such as:

  • Hosting phishing pages and files on sites.google.com, a Google sub-domain.
  • Sending the email alert as a clickable image hosted on Firefox Screenshot rather than URL text which might trip Google’s anti-phishing system.
  • Tracking who has opened emails by embedding a tiny 1×1 “beacon” pixel that is hosted and monitored from an external website (marketers have used this technique for years, which is why it’s a good idea to turn automatic image loading off in programs like Gmail).
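
That beacon technique is almost trivially simple to implement. As a hedged sketch – the port and the “id” parameter are invented for illustration – here’s a tiny Python server that returns a 1×1 transparent GIF and logs each fetch:

```python
# Sketch of a tracking "beacon": serve a 1x1 transparent GIF and log
# who fetched it (an ID embedded in the email, plus IP and user agent).
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# A minimal transparent 1x1 GIF, base64-encoded.
PIXEL = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

class Beacon(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        # The sender now knows the email was opened, when, and from where.
        print("opened:", qs.get("id"), self.client_address[0],
              self.headers.get("User-Agent"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

HTTPServer(("", 8000), Beacon).serve_forever()
```

Which is exactly why turning off automatic image loading defeats it.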

SMS bypass

But how to beat authentication?

It’s possible the attackers were able to check phished passwords and usernames on-the-fly to see whether authentication was turned on. If it was – and presumably that would have been the case for most targets – a page mimicking the 2FA sign-in was thrown up.

This sounds simple, but the devil is in the detail. For example, it seems the attackers were also able to find out the last two digits of the target’s phone number, which was needed to generate a facsimile of the Google or Yahoo SMS verification pages.

While SMS OTP authentication was the primary target, Time-based One-time Password (TOTP) codes from an authentication app were also targeted.

According to Twitter comments by Certfa, the attacks against SMS authentication were successful, which is not a surprise given that all the attacker has to do is phish the code.

As for TOTP and HMAC-based One-time Password algorithm (HOTP)-based authenticator apps (e.g. Google Authenticator), the researchers are less sure – as with SMS, it would depend on how quickly the attackers could capture and enter the code within the allowed time window.
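
To see why that time window matters, here’s the standard RFC 6238 TOTP computation – the published algorithm, not any vendor’s private code, and the secret below is a demo value seen in many TOTP tutorials. A given code is valid for the whole 30-second step, which is all the time a real-time phishing page needs to relay it:

```python
# Standard RFC 6238 TOTP, for illustration. The same code is produced
# anywhere within one 30-second step -- so a code phished in real time
# remains usable until the step rolls over.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, step=30, digits=6, now=None):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if now is None else now) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret, not anyone's real key
```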

Where does this leave 2FA?

Using 2FA in any form is better than nothing, but SMS is no longer the best option if users have a choice – Google, for one, no longer offers this option unless it was set up on an account a while ago.

Naked Security has published numerous articles on the vulnerability of older 2FA technologies such as SMS as well as the pros and cons of app-based authentication (Google Authenticator). In 2016, the US National Institute of Standards and Technology (NIST) recommended that users plan to move from SMS to more secure methods of authentication.

The most secure option by far is to use a FIDO U2F (or the more recent FIDO2) hardware token such as the YubiKey because bypassing it requires physical access to the key.

Google even offers a specially-hardened version of Gmail, the Advanced Protection Program (APP), built around this kind of security with some additional features added on top.

Password managers are another option because these will only auto-fill password fields when they detect the correct domain (see caveats regarding mobile versions). If that doesn’t happen as expected, it could be a sign that something is wrong.


Delivering security and continuity for the cities of tomorrow

It seems like almost every part of our lives is now being supported by emerging technologies, from predictive analytics and artificial intelligence to the Internet of Things (IoT). First we had smart phones, then smart watches and now smart cities.

Currently, more than half of the world’s population lives in towns and cities, and by 2050 this number could rise to 66 per cent. This is resulting in a growing need for solutions to effectively manage city infrastructure and cope with the rising population, all while keeping up with modernisation.

Smart cities promise a vast range of benefits, such as wireless connectivity for utilities and intelligent transport systems. Through IoT, smart cities provide effective and innovative solutions to the growing number of challenges facing communities today. For example, sensor-enabled traffic lights can alert city maintenance workers about a burnt-out light bulb, ensuring public safety as well as saving valuable time and money.

Security vulnerabilities and risks to smart cities

It is clear that the future benefits of IoT-enabled cities are enormous. However, these benefits come with a significant array of challenges and risks, one being security. Though city administrators undoubtedly attempt to prevent attacks, we would be naive to ignore the possibility of something falling through the cracks. History has shown us that security measures with even the smallest of vulnerabilities will quickly be identified and exploited by criminals, and smart cities are no different.

As smart city technologies rely on digital networks, cyber criminals can take advantage of a number of vulnerabilities from a distance. The growth of IoT has been rapid, yet it has not been matched with adequate protection.

Many smart city systems have been constructed with minimal end-to-end security, because many of the devices they rely on were designed with a safer environment and a smaller user community in mind. Cities and local councils are also under increasing pressure to make savings, so it’s no surprise that the use of legacy systems which have not been upgraded for several years is commonplace, leaving cities wide open to cyber threats.

A cyber-attack or extreme weather, such as a storm or heavy rain, could leave millions of residents with no electricity supply – both are very real threats to smart cities. And, in these hyper-connected environments, an outage can have cascading effects. For example, if an electricity grid is affected, power could be cut to homes, workplaces and various essential infrastructures, leaving thousands, if not millions, without power or heat for hours and even days. This is what happened in the 2015 BlackEnergy cyber-attack in Ukraine, where hackers accessed a power company’s systems, causing an outage that left around 230,000 citizens without electricity for light or heating.

The preventative approach to smart city continuity

In the past, attacks and outages were addressed through the recovery process; however, this is no longer enough to keep us safe.

Without taking a preventative approach, smart cities are at risk as even the latest and most resilient technology is unable to completely eliminate security risks and vulnerabilities. So, how can we address these risks to ensure continuity when things do inevitably go wrong?

As cities adopt smart technologies, making data security a priority is crucial. We are witnessing the increasing adoption of smart devices within homes, providing a wealth of new potential data streams that could inform smart city services – as long as they are secured. For example, live video feeds from smart home security cameras could be used to help inform city police services. This raises the issue of security and continuity at a network level, as it opens up possibilities for cyber attackers to hack into households. Security features on smart devices are essentially non-existent – potentially leaving a city vulnerable, along with a family’s online privacy.

With the introduction of cloud technology, smart city continuity can now be ensured. Smart city systems can be backed up and restored at lightning speed. The cloud also provides an “air gap” for critical systems, which can be forcibly shut down when they are hacked or at risk. This leaves time to resolve vulnerability issues, prevent further damage and get things back up and running – allowing cities not only to avoid massive outages, but also to recover from them.

As smart cities move from concept to reality, securing their foundations will ensure the safety of the digitally connected communities of the future. Decision makers cannot afford to put the public at risk by failing to implement the right processes to support the infrastructure. While deploying security solutions to prevent things from going wrong will help, using continuity as a framework for building resiliency is the way forward for smart cities. There is no substitute for being prepared, and investing in the right solutions to get cities back up and running – with immediate access to heating, power and electricity – is non-negotiable.

High quality investment in cloud backup and disaster recovery ahead of time is imperative. Taking a proactive, rather than reactive, approach will ensure that systems are protected and are resilient in the face of foreseeable or unforeseeable attacks and outages.


Warding off security vulnerabilities with centralized data

This is the second article in a series; the first article is available here.

File access permissions

Having a system that lets you set the proper permissions and prevents unauthorized people from accessing files is important. However, you should expect that human error will lead to unwanted vulnerabilities. Expecting your users to manually set permissions on each file without ever making a mistake is unrealistic and bad for security and compliance. The key to getting file access permissions right is automation. By automating your access permissions, you’ll reduce the amount of manual work and, in turn, the risk.

To prevent data exposure, you should automate your sharing permissions with workflows while also using monitoring tools that will alert you if a file with sensitive content is shared with people that are not supposed to have access to it.

When companies don’t leverage automated document controls and permissions, their data is at risk. GoDaddy, Verizon, and Dow Jones, for example, exposed documents containing sensitive information because of improperly managed Amazon Web Services (AWS) S3 cloud storage bucket settings. Companies can easily avoid accidental human errors like this by using automated workflow tools.
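
As a simple illustration of that kind of automation, here’s a hedged sketch using boto3 – it assumes AWS credentials are configured, and the alert format is ours – that flags S3 buckets without a full public-access block, the class of misconfiguration behind the leaks above:

```python
# Sketch: flag S3 buckets that don't fully block public access.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"]
        problems = [flag for flag, enabled in cfg.items() if not enabled]
    except ClientError:
        # No public-access block configured at all.
        problems = ["NoPublicAccessBlockConfiguration"]
    if problems:
        print(f"ALERT {name}: {', '.join(problems)}")
```

A check like this, run on a schedule and wired to an alerting channel, is the monitoring half of the workflow described above.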

Centralized information

To enhance productivity, the public cloud provides users with a suite of business tools with centralized access and real-time collaboration capabilities. This is an inherent advantage over the traditional workflow: save, drag, drop, email, download, edit, and repeat. Whereas many of the steps in the old-school process expose documents to security vulnerabilities, public cloud platforms allow you to keep your documents centralized and accessible.

Beyond being a version control nightmare, email attachments open your information up to unauthorized modifications and expose it to any software or network vulnerabilities found on the recipient’s device. By controlling access to documents, companies can effectively negate the risk of a file being shared with someone without their knowledge.

Centralizing information also means that no information should be stored on local devices. USB keys are one of the biggest offenders. These devices are often lost or stolen. In late 2017, a USB stick with highly confidential Heathrow Airport security data was found on the street. The drive’s files included detailed airport security and anti-terror measures.

Moreover, people tend to use USB keys that they’ve gotten for free at conferences. It’s possible that these devices have been intentionally infected with viruses. A security event in Taiwan recently awarded quiz winners USB sticks that contained malware designed to steal personal information. And that’s not all – the list of USB drive-related incidents goes on.

There is also the possibility that your phone or laptop will be lost or stolen. Those odds become even greater when you’re traveling or running between meetings, events, and other appointments. If you have all of your files saved directly on your physical laptop or phone, you’re presented with an obvious problem. If you lose it, those files are gone and, if it gets stolen, you’re in even bigger trouble.

With cloud technology, personal computers and phones have become disposable. You can misplace or wipe these devices at any time without losing any sensitive work-related data. Even better, you can be up and running on a new device in only a few minutes. As many public cloud providers, like Google and AWS, have advanced security features, you’re able to revoke the access of a lost or stolen device as soon as it goes missing. In addition, these providers use cutting-edge security to ensure that all your corporate data is safe and sound in the cloud.

Audit monitoring

Blackhat hackers will repeatedly probe and attack whatever IT protocols a company has put in place with new techniques and approaches. When your documents are in the public cloud, the provider is in charge of network security. That means that their security team is monitoring the network audit logs for you. When your documents are all in the public cloud, it’s also much easier to centralize aggregated audit data.

Audits won’t be hidden within clunky firewall administration interfaces and other closed proprietary systems. This is important for maintaining and improving your security protocols.

When the audit information is readily available, your company is better equipped to conduct thorough security analyses. Data analysis systems, like Google BigQuery, make it easier, faster, and cheaper to load and analyze your audit log data. These systems can ingest vast amounts of data and allow you to quickly identify and investigate suspicious events. Automated alerts also allow for an immediate response in the event of a security breach. Through alerts and real-time monitoring, companies can secure their systems and files. When combined with an accurate audit log, from which your IT teams can pinpoint what information was exposed, companies can dramatically reduce the impact of security incidents.
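
As an illustration of that kind of automated analysis, here’s a minimal sketch using the google-cloud-bigquery client; the project, dataset, table, and column names are all hypothetical, since every audit log schema differs:

```python
# Sketch: hunt for failed-login bursts in a centralized audit log.
# Table and column names are hypothetical; requires google-cloud-bigquery
# and configured Google Cloud credentials.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT user_email, COUNT(*) AS failures
    FROM `my-project.security.audit_log`  -- hypothetical table
    WHERE event_type = 'login_failure'
      AND event_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
    GROUP BY user_email
    HAVING failures > 10
    ORDER BY failures DESC
"""
for row in client.query(query).result():
    print(f"Possible brute force: {row.user_email} ({row.failures} failures)")
```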
