The Federal Trade Commission (FTC) – the United States (US) agency that enforces antitrust law and promotes consumer protection – has a new way to punish tech companies that engage in deceptive practices.
When it found that WW International had built artificial intelligence (AI) models using personal information its app collected from young children without parental permission, the agency fined the company $1.5 million and ordered it to delete the data.
The agency has used this approach – known as algorithmic disgorgement, the forced destruction of ill-gotten data and of the algorithms built on it – twice before, and it looks set to become a regular fixture of its enforcement arsenal. It may also set a global precedent for governments to take similar measures against companies that violate data rules.
In 2019, after it was discovered that the personal data of millions of Facebook users had been collected without consent and used by British consulting firm Cambridge Analytica to influence the US elections, the FTC ordered the company to destroy "any information or work product, including any algorithms or equations" generated from the data.
At the time, it was unclear whether this kind of punishment would see frequent use.
Then, in late 2020, the agency settled a case with photo app developer Everalbum. The company allegedly used facial recognition to detect people’s identities in images without giving users an option to turn it off. It also used these photos to help build its facial recognition algorithms.
The commission directed it to delete all data of Everalbum users who did not give their express consent, and any facial recognition models or algorithms developed with users’ photos or videos.
Most recently, in March, WW International (formerly called Weight Watchers) marketed a weight loss app called Kurbo for use by children as young as eight and then allegedly harvested their personal and sensitive health information without parental permission.
“Our order against these companies requires them to delete their ill-gotten data, destroy any algorithms derived from it, and pay a penalty for their lawbreaking,” said FTC Chair Lina M Khan.
After this decision, algorithmic disgorgement will likely become a standard enforcement mechanism, just like monetary fines, Pam Dixon, founder and executive director of the World Privacy Forum, told Protocol. "This is definitely now to be expected whenever it is applicable or the right decision."
Tech companies today build their businesses around algorithms. Social media platforms, for example, rely heavily on algorithms to tailor content to individual users; Google Maps uses them to plot your route, and Netflix uses them to decide what to show you next.
Artificial intelligence and algorithms can give businesses a competitive edge. As a result, they are often classified as intellectual property or trade secrets.
Problems could arise, however, in the execution of such a punishment. In some cases it might be difficult to separate ill-gotten data and algorithms trained with it from the unaffected parts of a company’s technology products and intellectual property.
If algorithms are trained on multiple data sets over a period of time, removing the effects of only certain data can be complicated. None of the three FTC orders explains in much detail how the algorithms should be scrubbed.
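To see why scrubbing is hard, consider a deliberately simplified sketch in which a "model" is nothing more than the average of its training data (the datasets and numbers below are hypothetical, and real models blend data far less transparently). Once records are folded into the model's parameters, their individual contributions are no longer separable, so the only certain way to remove one dataset's influence is to retrain from scratch without it.

```python
# Toy illustration of why "deleting data from a model" usually means retraining.
# The "model" here is a single parameter: the mean of its training data.
dataset_a = [2.0, 4.0, 6.0]   # lawfully collected data (hypothetical)
dataset_b = [100.0, 120.0]    # ill-gotten data to be disgorged (hypothetical)

def train(data):
    """'Train' by averaging -- a stand-in for fitting real model parameters."""
    return sum(data) / len(data)

tainted_model = train(dataset_a + dataset_b)  # parameters mix both sources
clean_model = train(dataset_a)                # only sure fix: retrain without B

# From tainted_model alone there is no way to subtract dataset_b's influence --
# the averaged parameter no longer records which data point contributed what.
print(tainted_model, clean_model)  # 46.4 4.0
```

A real neural network compounds the problem: each training example nudges millions of parameters, and later training builds on those nudges, which is why the orders' silence on method matters.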
The FTC's moves could inform how consumer and data protection authorities across the world treat cases in which data has been obtained or used deceptively or illegally, especially since many governments, including India's, are in the process of introducing data protection bills.
(With inputs from Protocol.)