Data Privacy Enforcement Actions Step Up

While data analytics and advanced techniques such as artificial intelligence and machine learning can be game changing for business results, a raft of responsibilities comes with being a steward of customer data. Are your organization's data governance leaders effectively tracking recent enforcement actions around data privacy? What lessons can be learned from them?

A handful of high-profile enforcement actions, settlements, and other events have again pointed out the need for enterprise organizations to keep a close eye on their internal practices to protect against fines, legal action, and reputational damage.

Twitter

Twitter has reached a settlement with the Federal Trade Commission, agreeing to pay a $150 million penalty for violating a 2011 FTC order that prohibited the company from misrepresenting its privacy and security practices. Twitter collected mobile phone numbers and email addresses, telling users the data would be used only for two-factor authentication. Instead, according to the FTC, the information was also used for targeted advertising: Twitter matched the data collected for two-factor authentication with data the company already held or had acquired from data brokers. From 2014 to 2019, more than 140 million Twitter users provided this kind of information to the company.

Twitter called the use of this data for advertising purposes "inadvertent" and pledged to continue working to protect the privacy of its users.

“We have aligned with the agency on operational updates and program enhancements to ensure that people’s personal data remains secure and their privacy protected,” Twitter’s Chief Privacy Officer Damien Kieran wrote in a blog post.

Meta and the Cambridge Analytica Breach

This may seem like the return of a familiar case. According to a statement from the Office of the DC AG, the suit claims that Facebook, under Mark Zuckerberg's control, allowed a third party to launch an app claiming to be a personality quiz that also collected data from the app users' Facebook friends without their knowledge or consent. How a case like this is resolved may matter going forward in determining whether chief executives can also be held culpable for data privacy violations.

Clearview AI

The UK's Information Commissioner's Office announced it has fined facial recognition artificial intelligence company Clearview AI more than £7.5 million (nearly $10 million) for using images of people in the UK and elsewhere, collected from social media, to create a global online database that could be used for facial recognition. What's more, the watchdog issued an enforcement notice ordering the company to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.

Clearview AI has scraped more than 20 billion images of people's faces, along with associated data, from publicly available sources on the internet, including social media platforms, without the subjects' consent.

It's not the first time Clearview AI has run afoul of data privacy regulators. Data protection authorities in Italy, Australia, Canada, France, and Germany have also hit Clearview AI with fines.

What to Read Next:

Enterprise Guide to Data Privacy

What Federal Privacy Policy Might Look Like If Passed

The Future of Privacy: What IT Leaders Need to Know