HR Services Firm ComplyRight Suffers Major Data Breach

More than 7,500 customer companies were affected, and the number of individuals whose information was leaked is unknown.

ComplyRight, a company that provides human resources functions to businesses, has begun notifying individuals of a data breach that may have exposed names, addresses, phone numbers, email addresses, and Social Security numbers taken from employee tax forms the company processed.

According to ComplyRight, the company has more than 76,000 customers, though it has not yet said how many were involved in the breach.

KrebsOnSecurity, which broke news of the breach on Wednesday, writes that it appears to be a compromise of the website itself, rather than customer communications to and from the website. In its report, KrebsOnSecurity said it could find no ComplyRight employee with a security title on LinkedIn.

In a statement provided to Dark Reading, Jeannie Warner, security manager at WhiteHat Security, said, “As a human resources firm, ComplyRight handles forms overflowing with personally identifiable information, such as 1099s and W2s. The fact that the company touts its security prowess, yet Brian Krebs couldn’t identify a single employee with a security title, is deeply concerning – and just another reason for consumers to question their trust in digital businesses.”

A Qualys SSL Labs scan of the site efile4biz.com conducted by Dark Reading shows an overall score of “B”, capped because the server doesn’t support forward secrecy or AEAD cipher suites. It must be noted, however, that this was a scan of the public-facing site (which does contain login provisions for customers); customers transacting business with the company may be re-directed to other servers upon authentication.

Nevertheless, the fact that the page still supports outdated protocols such as TLS 1.0 for sign-in suggests that other legacy vulnerabilities may remain in the site’s application code.
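
For server operators, the protocol-floor part of the fix is straightforward. As a minimal illustration using Python’s standard `ssl` module (a generic sketch, not the actual efile4biz.com stack, which is not public), a server-side context can simply refuse legacy protocol versions:

```python
import ssl

def modern_tls_context():
    """Build a server-side TLS context that refuses legacy protocol versions."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # TLS 1.0 and 1.1 handshakes are rejected outright. TLS 1.2+ also
    # supports the AEAD cipher suites and ECDHE forward secrecy whose
    # absence capped the Qualys SSL Labs grade at "B".
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

A context like this would then be handed to the web server or `wrap_socket`; legacy clients fail the handshake instead of silently negotiating TLS 1.0.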

In the Web page disclosing the breach, ComplyRight notes that the breach occurred in late May 2018, while the disclosure occurred on July 18. Ryan Wilk, vice president of customer success at NuData Security, a Mastercard company, said, “One of the many dangerous things about breaches is the amount of time it takes for companies and end users to know their data is out in the open. From the moment a breach happens, hackers have ample time to broker the stolen names, Social Security numbers, tax data and other identifying information on the dark web – leaving customers and employees open to the impacts of identity theft.”



Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity, and … View Full Bio


from Dark Reading – All Stories https://ubm.io/2zV9J69
via IFTTT

Why Artificial Intelligence Is Not a Silver Bullet for Cybersecurity

Like any technology, AI and machine learning have limitations. Three of the most important involve detection, power, and people.

A recent Cisco survey found that 39% of CISOs say their organizations are reliant on automation for cybersecurity, another 34% say they are reliant on machine learning, and 32% report they are highly reliant on artificial intelligence (AI). I’m impressed by the optimism these CISOs have about AI, but good luck with that. I think it’s unlikely that AI will be used for much beyond spotting malicious behavior.

To be fair, AI definitely has a few clear advantages for cybersecurity. With malware that self-modifies like the flu virus, it would be close to impossible to develop a response strategy without using AI. It’s also handy for financial institutions like banks or credit card providers who are always on the hunt for ways to improve their fraud detection and prevention; once properly trained, AI can heavily enhance their SIEM systems. But AI is not the cybersecurity silver bullet that everyone wants you to believe. In reality, like any technology, AI has its limitations.
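
The fraud-detection use case comes down to anomaly scoring against a learned baseline. As a loose illustration, here is a simple z-score rule standing in for a trained model (a sketch of the idea only, not any vendor’s actual system):

```python
from statistics import mean, stdev

def fit_baseline(amounts):
    """Learn a crude 'normal' profile from historical transaction amounts."""
    return mean(amounts), stdev(amounts)

def is_anomalous(amount, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(amount - mu) > threshold * sigma

history = [20.0, 35.5, 18.25, 42.0, 27.5, 33.0, 25.0, 30.0]
baseline = fit_baseline(history)
print(is_anomalous(5000.0, baseline))  # True  -> raise a SIEM alert
print(is_anomalous(29.0, baseline))    # False -> normal traffic
```

Real deployments replace the z-score with trained models and far richer features, which is exactly where the limitations below begin to bite.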

1. Fool Me Once: AI Can Be Used to Fool Other AIs
This is the big one for me. If you’re using AI to better detect threats, there’s an attacker out there who had the exact same thought. Where a company is using AI to detect attacks with greater accuracy, an attacker is using AI to develop malware that’s smarter and evolves to avoid detection. Basically, the malware escapes being detected by an AI … by using AI. Once attackers make it past the company’s AI, it’s easy for them to remain unnoticed while mapping the environment, behavior that a company’s AI would write off as a statistical error. Even when the malware is detected, security has already been compromised and damage might already have been done.

2. Power Matters: With Low-Power Devices, AI Might Be Too Little, Too Late
Internet of Things (IoT) networks are typically low power with a small amount of data. If an attacker manages to deploy malware at this level, then chances are that AI won’t be able to help. AI needs a lot of memory, computing power, and, most importantly, big data to run successfully. There is no way this can be done on an IoT device; the data will have to be sent to the cloud for processing before the AI can respond. By then, it’s already too late. It’s like your car calling 911 for you and reporting your location at the time of the crash, but you’ve still crashed. It might report the crash a little faster than a bystander would have, but it didn’t do anything to actually prevent the collision. At best, AI might be helpful in detecting that something’s going wrong before you lose control over the device, or, in the worst case, over your whole IoT infrastructure.

3. The Known Unknown: AI Can’t Analyze What It Does Not Know
While AI is likely to work quite well over a strictly controlled network, the reality is much more colorful and much less controlled. AI’s Four Horsemen of the Apocalypse are the proliferation of shadow IT, bring-your-own-device programs, software-as-a-service systems, and, as always, employees. Regardless of how much big data you have for your AI, you need to tame all four of these simultaneously — a difficult or near-impossible task. There will always be a situation where an employee catches up on Gmail-based company email from a personal laptop over an unsecured Wi-Fi network and boom! There goes your sensitive data without AI even getting the chance to know about it. In the end, your own application might be protected by AI that prevents you from misusing it, but how do you secure it for the end user who might be using a device that you weren’t even aware of? Or, how do you introduce AI to a cloud-based system that offers only smartphone apps and no corporate access control, not to mention real-time logs? There’s simply no way for a company to successfully employ machine learning in this type of situation.

AI does help, but it’s not a game changer. AI can be used to detect malware or an attacker in the system it controls, but it’s hard to prevent malware from being distributed through company systems, and there’s no way it can help unless you ensure it can control all your endpoint devices and systems. We’re still fighting the same battle we’ve always been fighting, but we — and the attackers — are using different weapons, and the defenses we have are efficient only when properly deployed and managed.

Rather than looking to AI as the Cyber Savior, we need to keep the focus on the same old boring problems we’ve always had: the lack of control, lack of monitoring, and lack of understanding of potential threats. Only by understanding who your users are and which devices they have for what purposes and then ensuring the systems used actually can be protected by AI can you start deploying and training it.



Tomáš Honzák serves as the head of security, privacy and compliance at GoodData, where he built an information security management system compliant with security and privacy management standards and regulations such as SOC 2, HIPAA and U.S.-EU Privacy … View Full Bio


from Dark Reading – All Stories https://ubm.io/2mzfrku
via IFTTT

Why Security Startups Fly – And Why They Crash

Why Security Startups Fly – And Why They Crash

What makes startups stand out in a market flooded with thousands of vendors? Funding experts and former founders share their thoughts.

Businesses want security against common and complex cyberthreats – and venture capitalists have their eyes on startups promising it. The latest fundings have permeated security news: Most recently, BitSight raised $60 million in Series D, Social SafeGuard generated $11 million in Series B, Preempt secured $17.5 million in Series B, and Agari raised $40 million in Series E.

What’s more, last year broke records for venture capital (VC) funding in cybersecurity, with 2017 ending with 248 deals totaling $4.06 billion. Much of the high funding went to established firms including CrowdStrike and Exabeam, but plenty also was invested in relatively new entrants and startups.

The modern security market is “throbby and noisy and urgent,” says Scott Petry, co-founder and CEO of Authentic8 and founder of Postini, which was acquired by Google and became Gmail. “People are jumping into security because it’s a hot sector.”

It’s a relatively new problem for an industry unaccustomed to the spotlight. When he started Postini in 1999, Petry says, few people cared about security; most were focused on Web portals, applications, and data services. As a result, the company didn’t get much respect. Now, with cyberattacks escalating, the landscape has shifted. Security pros truly invested in defense are often balanced by people angling to get part of the ubiquitous VC funding.

“The challenge is, there’s an awful lot of technology being thrown at the security problem,” Petry says. But security’s problems often can’t be traced to a lack of tech: As more money is allocated toward security tools, the number of breaches is also going up. Most aren’t caused by gaps in technology but oversights, he adds, such as Equifax’s leaving a Web server unpatched.

Right now, the security market is unhealthy, Petry explains. Vendors capitalize on customers’ fear and uncertainty, and customers hit with breaches will buy more tech to fix the problem instead of assessing its root cause. “It’s human nature,” he admits. “The same nature applies to venture capitalists and companies hoping to get funded.”  

So where are those dollars going, and what are they being used for? Why do some startups stand out from others? And what will happen to the market as hundreds of vendors enter each year?

Where Investors Are Investing
If the problem isn’t technology, where are the billions of investment dollars going?

“Overall, the demand for cyber services is growing quite robustly, but there are so many companies that have been funded in the space that most are struggling,” says Dave Cowan, partner at Bessemer Venture Partners. There are two major trends in today’s security market, he says. One is working, one is not.

The displacement of the antivirus (AV) market is successful, he notes. Companies are turning off older antivirus agents and replacing them with next-gen systems built with a combination of endpoint detection, remediation, and attack prevention. Cowan cites Carbon Black, CrowdStrike, Cylance, Endgame, and SentinelOne as examples of next-gen AV success stories.

George Kurtz, co-founder and CEO of CrowdStrike, agrees that the ripest area for security investment is in endpoint protection. The challenge most companies will face is portfolio scope, he says. Do they offer the full spectrum of endpoint security, or do they target a small part of the solution?

“Buyers have more choices than ever as new technologies and solutions continue to emerge,” Kurtz says. “Many companies are ready to replace their legacy AV with more effective and efficient solutions.”

What’s not working so well: artificial intelligence (AI) for cybersecurity.

“Most of the companies who have raised money from venture investors in the last few years have touted their algorithms as the basis for identifying attacks,” Cowan says. Back in 2014, when the industry saw a spike in security breaches, businesses realized the stakes were getting higher and wanted visibility to detect sophisticated malware and advanced persistent threats.

The most enticing pitch was the application of AI to identify anomalies that could indicate an attack. Many startups were founded to detect suspicious activity, sending thousands of alerts to SOC experts who could investigate only a dozen per day. But detecting anomalies has little value to a business unless it has enough people to dig through those alerts and determine which are legitimate, Cowan says. Most alerts entering the SIEM don’t even get seen.

However, Kurtz points out, startups focused on AI continue to appear on the market as founders aim to capitalize on the benefits of this technology. As they continue to explore use cases for AI, companies will continue to receive venture funding, Cowan adds.

Asheem Chanda, partner at Greylock Partners, anticipates the continued growth of technology including cloud-based solutions, solutions that combine on-premises with cloud, the application of machine learning and AI to security, and anything around identity. Identity analytics, identity governance, and new authentication techniques will be increasingly important in the future, he says.

What Makes Startups Stand Out
First things first: The technology has to be useful and business-appropriate.

“It’s important that a cyber company not only develop a strong defense, but develop one that works within enterprise organizations,” Cowan says, noting that it’s important for security leaders to also consider how useful a new tool might be. “Thinking about how the enterprise can actually use what you’re doing is an important factor to success.”

On a micro level, businesses building security tech should tackle smaller issues instead of trying to do everything. “What I’ve seen interesting, successful companies do is focus on solving a specific and narrow problem,” Petry explains. “Many companies are trying to take too big a bite of the apple.”

No single startup can solve all problems – the security landscape is incredibly diverse, he notes – but they can build expertise in one area. If it can solve a narrow problem quickly, acquire customers, and move on, a startup can build its business much more easily. “Solve a problem, do it well, and solve it for more people,” Petry sums up.

Successful startups employ people who know how to exploit a network, Cowan points out. It takes a hacker to stop a hacker, he says, and Silicon Valley doesn’t have many hackers. New companies aiming to deter and prevent major attacks, especially nation-state threats, need to build their products around the expertise of someone who has been in the attacker’s seat. It’s for their benefit and the benefit of their future customers.

Hiring the right financial expertise is also critical, Kurtz adds. Business is fundamentally a numbers game that relies on financial and hiring strategies. A CEO must hire employees who understand, and can perform against, the basic principle of good financial health.

Deciding Whether a Startup Is Worth the Money
A challenge for security leaders shopping in a market rife with vendors is deciding which technologies are worth their limited budgets. If you’re an IT manager and debating the pros and cons of testing a new tool, how can you tell whether the startup behind it is here to stay?

The first thing to consider is the quality of its technology team, Chanda says. It’s unlikely you’re going to get a world-class solution if the quality of the tech team isn’t “stellar,” he says, so look at the backgrounds of a startup’s founders. Where did they previously work? What did they last build?

Next, think about how the company markets its product. You want to work with one that explains its concept in a use-case-driven way that addresses your problem, and not as a technology looking for a problem to fix. In the security space, it’s important to build technology that fits with existing architecture as opposed to a tool that works in theory but is hard to use.

“Companies that are successful tend to be customer-centric and innovate in a customer-centric way,” Chanda says. “An important piece of that, for security companies, is being able to demonstrate a security solution … that works in combination with what the customer already has.” You don’t want a solution that will require you to overhaul your systems.

Finally, he says, consider the quality of the investor backing a startup. If a trusted VC has confidence the company will be around, it’s a good sign, Chanda explains.

Looking Ahead: If and When the Bubble Will Pop
The security market has thousands of vendors competing for customers and hundreds more entering each year. It seems the industry must hit a saturation point eventually. But will it?

Experts are undecided. Two things will keep the security bubble from popping, says Petry, and the first is ongoing security risk. Businesses will continue to lose data, meaning they will continue to spend more money on tools promising to prevent future incidents.

The second will be the limited capacity of major organizations to cover all of their bases. Established vendors spending hundreds of millions of dollars on security won’t have the resources to develop new systems in-house, so they’ll acquire smaller startups building them.

For startups, Kurtz advises committing to customer success, hiring top talent in a remote workforce, and creating a mission that employees are confident in. They should also get comfortable with failure, he explains, especially as tech continues to evolve. Those who succeed will be able to keep up with changes in technology, and businesses in the market for new tech should pay attention to them.

“The Silicon Valley mantra of ‘fail fast, fail often’ rings true for many tech entrepreneurs, but I believe it’s equally important to evolve even faster after failures,” he says. “While good companies are those that can excel quickly, the best companies are those that have a long-term vision and know where they are headed.”

Attackers’ changing strategies will also influence the shape of startups coming into the market, anticipates Gary Golomb, chief research officer at Awake Security. Companies that hard-code specific protections into their tech will have a harder time because they won’t be able to keep up with advanced attackers, as opposed to platforms that can accommodate new detections.

“The ability of attackers to shift tactics rapidly and intelligently based on a target’s security measures means that the startups that get funding and succeed will be those that have a platform approach where new detections can be added easily, whether by the startup or the customer,” Golomb says.



Kelly Sheridan is the Staff Editor at Dark Reading, where she focuses on cybersecurity news and analysis. She is a business technology journalist who previously reported for InformationWeek, where she covered Microsoft, and Insurance & Technology, where she covered financial … View Full Bio


from Dark Reading – All Stories https://ubm.io/2LcOtOf
via IFTTT

Hackers hold 80,000 healthcare records to ransom

Data breaches tend to be mysterious affairs where organisations on the receiving end say as little as possible and the attackers remain safely in the shadows.

The breach of medical records at Canadian company CarePartners, which provides healthcare services on behalf of the Ontario Government, looks as if it is turning into an unwelcome exception to this rule.

CarePartners made the breach public in June, saying only that patient and employee health and financial data had been “inappropriately accessed by the perpetrators” without specifying the size or extent of the breach.

And so it would have remained had the attackers not decided to contact the Canadian Broadcasting Corporation (CBC) this week with more detail of their exploits. They also revealed the not insignificant nugget that they have demanded that CarePartners pay a ransom for them not to release the stolen data:

We requested compensation in exchange for telling them how to fix their security issues and for us to not leak data online.

To underscore the threat, the attackers sent CBC a sample data set which included thousands of medical records containing dates of birth, health numbers, phone numbers and details of past surgical procedures and medications.

Other files contained 140 patient credit card numbers complete with expiry dates and security codes, plus employee tax slips, social security numbers, bank account details and plaintext passwords.
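
The plaintext passwords are the most easily avoided failure on that list. As a generic sketch (not CarePartners’ actual system), salted password hashing with Python’s standard `hashlib.scrypt` shows what credential storage should look like instead:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); only these, never the password, are stored."""
    salt = salt or os.urandom(16)  # unique per user; defeats rainbow tables
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("letmein", salt, digest))                       # False
```

A database stored this way leaks only salts and digests in a breach; scrypt’s memory-hard parameters make offline cracking of each record expensive.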

The cache ran to thousands of records, said CBC, but the attackers claimed that hundreds of thousands of records were involved.

What’s concerning are the discrepancies between CarePartners’ assessment of the breach and the new information the hackers have sent to CBC.

According to CBC, CarePartners said its forensic investigation had identified 627 patient files and 886 employee records that were part of the breach, with all affected individuals informed of the compromise.

And yet the sample sent by the hackers contained the names and contact information for more than 80,000 people.

When CBC’s journalists contacted a small sample of these individuals, none said they had been contacted by CarePartners.

According to the attackers, they gained access to the data after they discovered vulnerable software that hadn’t been updated in two years, adding:

This data breach affects hundreds of thousands of Canadians and was completely avoidable. None of the data we have was encrypted.

Beyond the fact that a serious breach has occurred, none of these details can be confirmed of course.

Publicising a ransom demand to a public body is probably a sign of desperation by the attackers that goes against the extortion playbook.

The first rule of extortion is to keep it a secret on the basis that publicity can make it harder for organisations to pay up, and may even force them to report the matter to the police.

The fact that the hackers have broken this rule is not good news. If they’ve given up any hope of being paid, that makes it more likely that the data will be posted to a public server where it will join the ocean of other personal healthcare data that lives in the darker recesses of the internet.

As with every data breach, today’s headlines are only the beginning of a story that stretches many years into the future, its consequences hard to predict.


from Naked Security – Sophos http://bit.ly/2NvxK5r
via IFTTT

Roblox says hacker injected code that led to avatars’ gang rape

“Roblox has made it almost impossible to rape people anymore,” a gamer complained in a YouTube video posted in September. He apologized for not posting a rape script video in over a year, all due to the company adding more security into their games.

If any of you guys know how to make the rape script work on filtered enabled games, make sure to let me know.

Well, somebody clearly did figure it out, as a whole lot of people unfamiliar with gaming rape culture found out earlier this month, when a 7-year-old girl’s avatar was gang-raped on a playground by two male avatars in the hugely popular, typically family-friendly game.

Roblox is a multiplayer online gaming platform in which users can create their own personal avatar, embark on their own adventures and interact with each other in virtual reality.

The girl’s mother, Amber Petersen, described in a 28 June Facebook post how she had seen her daughter’s character get attacked while she was playing Roblox on an iPad. Petersen shielded her daughter from seeing most of the attack, and she captured screenshots that she also posted.

At the time, Roblox traced the virtual violence to one “bad actor” and permanently banned them from the platform. As it was, at the time of the assault, Roblox already employed moderators who review images, video and audio before they’re uploaded to Roblox’s site, as well as automatic filters. After Petersen reported her daughter’s experience, the company put in yet more safeguards to keep it from happening again. It issued this statement:

Roblox’s mission is to inspire imagination and it is our responsibility to provide a safe and civil platform for play. As safety is our top priority – we have robust systems in place to protect our platform and users. This includes automated technology to track and monitor all communication between our players as well as a large team of moderators who work around the clock to review all the content uploaded into a game and investigate any inappropriate activity. We provide parental controls to empower parents to create the most appropriate experience for their child, and we provide individual users with protective tools, such as the ability to block another player.

The incident involved one bad actor that was able to subvert our protective systems and exploit one instance of a game running on a single server. We have zero tolerance for this behavior and we took immediate action to identify how this individual created the offending action and put safeguards in place to prevent it from happening again. In addition, the offender was identified and permanently banned from the platform. Our work on safety is never-ending and we are committed to ensuring that one individual does not get in the way of the millions of children who come to Roblox to play, create, and imagine.

Now, the company is blaming a hacker/hackers who attacked one of its servers and thereby managed to inject code that enabled the assault.

TechCrunch reports that Roblox, which is experiencing vigorous growth (it recently said it expects to pay out double the sum it paid to content creators a year ago), was in the process of moving some older, user-generated games to a newer, more secure system when the attack took place. Multiple games could have been exploited in a similar way.

Following the incident, Roblox’s developers have removed the other vulnerable games and asked their creators to move them to a newer, safer system. TechCrunch reports that most have done so, and those who haven’t won’t see their games back online until they do. None of the games now online are vulnerable to the exploit used by whatever hacker crawled out of Dante’s Seventh Circle of Hell to attack a 7-year-old’s avatar.

Petersen has lauded the company’s fast and thorough action. In her initial Facebook post, reeling with shock, disgust and guilt, Petersen had urged other parents to delete the app. But two weeks later, in a follow-up post on 11 July, Petersen said she’d edited that initial post: she now emphatically believes that the incident was not Roblox’s fault:

This was the fault of a HACKER, not the company. Shortly after I reported the abuse and wrote my Facebook post, Roblox quickly responded and determined that the offending avatars were hacked by an outside user. Immediately, the offender was permanently banned from the platform, the game was suspended, and Roblox engineers worked overtime through the weekend to tighten their platform to ensure this event would not happen again. Afterward, I revised my original post. Rather than calling for people to delete the app, I encouraged parents to double-check security settings on all their devices and make sure they are aware of what their children are playing.

Petersen is now urging parents to visit Roblox’s parents’ guide at https://corp.roblox.com/parents/.

Although she no longer thinks parents should delete Roblox, she still thinks that it’s vital for parents to closely supervise children’s activity, on any device, as “no form of technology is entirely safe from hackers,” she says.

And such hackers don’t limit themselves to sexual violence or aggression. On the Go Ask Mom Facebook page, one mother wrote, in response to the Roblox rape story, that she’s keeping her son off Roblox after learning about a game he was playing:

My son has not been allowed to play this since I walked into him playing and the mission was to kill yourself. Like he had to go around his character’s house and drink bleach or find a knife.

There’s just no way to protect kids from every single type of troubling content on games and social media. Rather than freaking out and stuffing them away in a Faraday cage, experts recommend that parents take certain precautions, foremost of which is to keep an eye on what their children are encountering online.

Larry Magid, CEO of Connect Safely, a nonprofit dedicated to educating technology users about safety, privacy and security, told WRAL that Petersen was doing pretty much everything right.

Namely, she …

  • …was sitting right next to her daughter, ready to step in to interrupt when things took a turn for the objectionable.
  • …had the privacy settings set so her daughter would only experience age-appropriate play. It’s not clear how those settings were reset: it might have happened when the app was deleted to save space and then reinstalled, for example. Regardless, it points to the importance of regularly rechecking privacy settings.

Magid and other experts offered additional steps that can help:

  • Select “curated content” only in the security settings: that will restrict the content to age-appropriate games. Check out Roblox’s site for more information on its curated content.
  • Let Roblox – or any game maker, for that matter – know immediately when unacceptable content appears.

Those are helpful tips. But for better or worse, gamers, and game hackers, are a creative bunch. That means that the list of threats keeps morphing, and the hackers are ever ready to pounce on any means possible to insert their idea of “fun” into a game. Just run a search on “Roblox rape” on YouTube to see what I mean.

Maybe it was just one bad actor responsible in this case. But even if it was, there are clearly plenty of people who think of that act as a win and who would happily do the same.

That rape script video upload I mentioned? It was a six-part series.

Keep an eye on the kids – it’s a world of nasty out there.


from Naked Security – Sophos http://bit.ly/2NZWQui
via IFTTT

Aerohive Networks announces availability plans for its pluggable access point

Aerohive Networks released worldwide availability plans for its pluggable Wi-Fi access point, the Aerohive Atom AP30. The device currently ships with a Type A/B plug; Aerohive has now announced Type C and G plug availability for Q3 2018, enabling customers who use EU or UK plug types to benefit from Wi-Fi connectivity that can be both deployed, and re-deployed, in a matter of minutes.

Aerohive Atom AP30 is designed to augment or replace traditional ceiling and wall-mounted access points. Its compact design, combined with Aerohive’s cloud management and automated mesh provisioning, allows IT departments to adapt their access networks to match their changing client demands in any environment or location.

Aerohive Atom AP30 is a small form factor access point that has all the features and functionality of its traditional ceiling-mounted counterparts, including identity-driven network access, software-defined architecture, and enterprise performance management.

Aerohive Atom AP30 can help organizations address a wide range of use cases without having to install or re-position a ceiling-mounted access point, including, but not limited to:

Eliminate Wi-Fi dead zones – Aerohive Atom AP30 extends coverage to dead zones. Aerohive Atom AP30 can be temporarily or permanently installed in minutes to eliminate dead zones in distant corners of the office, long hallways or passageways, staircases, breakrooms, storage rooms, etc., saving the time and money of pulling new cable or re-positioning existing ceiling-mounted access points.

Solve density issues in meeting spaces – Aerohive Atom AP30 can augment overloaded ceiling-mounted access points in meeting spaces. Aerohive Atom AP30 can be temporarily or permanently installed in minutes to mitigate overloaded ceiling-mounted access points in conference rooms, training rooms, cafeterias, lobby areas, waiting rooms, gymnasiums, etc.

Improve performance for 3rd-party networks – Aerohive Atom AP30 can connect to 3rd-party Wi-Fi networks, including Cisco, HPE, and Ruckus, to help compensate for coverage or capacity challenges. In sensor mode, Aerohive Atom AP30 may also be used to monitor 3rd-party networks for efficiency so that administrators can optimize their networks accordingly.

Extend secure corporate connectivity – Aerohive Atom AP30 can extend corporate connectivity to anywhere you can plug it into a power socket and get an IP address. Aerohive Atom AP30’s native VPN capability can make your corporate SSID and security available for teleworkers, marketing events, sales demonstrations, offsite meetings, embedded employees consulting for other companies, temporary offices, and so on.

Connect the previously unconnected – Aerohive Atom AP30 can bridge an IoT device to an Aerohive or 3rd-party network in any easy- or hard-to-reach location. Aerohive Atom AP30 can act as an IoT hub and securely connect any Wi-Fi and/or Ethernet-based IoT device and sensor.

Solve for MDU connectivity – Aerohive Atom AP30’s form factor and pluggability make it simple to install temporarily or permanently in a multi-dwelling unit (MDU), such as barracks, dormitories, hotels, motels, cruise ships, condominiums, apartments, etc.

from Help Net Security – News http://bit.ly/2O2xLPi
via IFTTT

Socure’s Aida establishes trust and certainty for online financial transactions

Socure announced Aida (Authentic Identity Agent), the bot for establishing trust in online transactions. Named in honor of Ada Lovelace, widely regarded as the world’s first computer programmer, Aida uses artificial intelligence to process billions of multi-dimensional online and offline data points per second to validate the authenticity of digital identities in real time.

According to the Javelin Strategy & Research 2017 Identity Fraud Study, as US adoption of EMV (chip) cards and point-of-sale terminals has grown, fraudsters have shifted to fraudulently opening new accounts and have become better at evading detection.

As the number of data breaches has skyrocketed in recent years, personally identifiable data has flooded the black market, making it easier for fraudsters to impersonate real identities or to create fake ones using real data.

The reliability of identity data has become a key concern across industries, especially within the financial sector, which often depends on manual review to confirm the identity of new applicants.

AI-based decisioning for identity

By providing a multidimensional view of consumers based on applying self-training, predictive analytics models to hundreds of online and offline data sources, Aida enables financial services organizations to approve more digital transactions than previously possible without performing manual reviews. It can reduce fraud for online new account opening by up to 90 percent.

Using the Aida-powered Socure ID+ platform, financial services organizations have achieved the following business results:

1. Top 10 Credit Issuer reduced fraud by 85%, saving more than $50M in fraud losses
2. Top 5 Bank reduced their dependency on knowledge-based authentication (KBA) by 70%, which increased auto-acceptance rates and improved the customer experience
3. Leading Digital Bank completely eliminated KBA, while increasing annual revenues by $5M

“Socure is solving the single most difficult problem in identity verification – validating a person that’s never done business with an organization before,” said Sunil Madhu, Chief Strategy Officer for Socure.

“Using traditional approaches for vetting the identity of new customers in a mobile and digital world has been a miserable failure. Aida can assess in real time, and with unprecedented levels of reliability, whether a digital identity is authentic, synthetic or has been stolen, by performing beyond-human analysis at machine speed. Aida essentially lives every minute of every day to verify identities and fight fraud.”

Aida learns a customer’s identity from their digital footprint to calculate risk and correlation scores, empowering businesses to increase online transaction acceptance rates while reducing manual reviews and fraud.

In the future it could provide consumers with a portable “Socure Verified” identity that can be used with participating financial institutions and ecommerce merchants.

The brains in the machine

Aida combines artificial intelligence, unsupervised machine learning and clustering algorithms to perform a continuous loop of:

  • Ingesting, normalizing and evaluating data from hundreds of online and offline data sources including credit bureaus, email history, phone records, IP addresses, social networks and more,
  • Generating fully explainable and transparent machine learning models in hours, while continuously training and improving them. Aida performs the work of human data scientists in software, at scale, to achieve accuracy levels far beyond traditional human rules-based approaches,
  • Performing predictive analytics on real-time transactions to assess and assign a risk score to identities, which is used to determine whether a request should be auto-accepted or flagged for manual review by a fraud analyst.
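The score-and-route step of that loop can be sketched as a simple scoring pipeline. This is an illustrative sketch only: the function names, feature names, weights, and thresholds below are hypothetical assumptions for exposition, not Socure's actual models or API.

```python
# Hypothetical sketch of a risk-scoring decision loop like the one described
# above. All names, weights, and thresholds are illustrative, not Socure's API.

def normalize(raw_signals: dict) -> dict:
    """Clamp heterogeneous signals (email age, IP reputation, ...) into [0, 1]."""
    return {k: min(max(float(v), 0.0), 1.0) for k, v in raw_signals.items()}

def risk_score(features: dict, weights: dict) -> float:
    """Weighted combination standing in for a trained predictive model."""
    total = sum(weights.values()) or 1.0
    return sum(weights.get(k, 0.0) * v for k, v in features.items()) / total

def route(score: float, accept_below: float = 0.3, reject_above: float = 0.8) -> str:
    """Auto-accept low-risk identities; flag the gray zone for manual review."""
    if score < accept_below:
        return "auto-accept"
    if score > reject_above:
        return "reject"
    return "manual-review"

weights = {"email_age_risk": 0.4, "ip_reputation_risk": 0.3, "phone_mismatch": 0.3}
signals = {"email_age_risk": 0.1, "ip_reputation_risk": 0.2, "phone_mismatch": 0.0}
print(route(risk_score(normalize(signals), weights)))  # low risk -> "auto-accept"
```

The gray zone between the two thresholds is what drives manual-review volume; tightening the model (as the article claims Aida does) narrows that zone so more transactions are auto-accepted.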

Availability

Aida is available immediately as part of the Socure ID+ platform.

from Help Net Security – News http://bit.ly/2NuLJsc
via IFTTT