The Federal Trade Commission (FTC) – the United States (US) agency that enforces antitrust law and promotes consumer protection – has a new way to punish tech companies that engage in deceptive practices.
When it found that WW International had built Artificial Intelligence (AI) models using personal information collected from young children through its app without parental permission, it fined the company $1.5 million and ordered it to delete the data.
However, it also imposed another penalty: the company has to destroy any algorithms or AI models it built or trained using this data.
The agency has used this approach twice before, and it seems likely to become a regular fixture of its arsenal. It also sets a global precedent for governments to take similar measures against companies that violate data rules.
Third Time's the Charm
In 2019, after it was discovered that the personal data of millions of Facebook users had been collected without consent and used by British consulting firm Cambridge Analytica to influence the US elections, the FTC ordered Cambridge Analytica to destroy “any information or work product, including any algorithms or equations” generated from the data.
At the time, it was unclear whether this kind of punishment would see frequent use.
“Cambridge Analytica was a good decision, but I wasn’t certain that that was going to become a pattern.”
– Pam Dixon, executive director of the World Privacy Forum, to Protocol
Then, in late 2020, the agency settled a case with photo app developer Everalbum. The company had allegedly used facial recognition to identify people in users’ images without offering an option to turn it off, and had used those photos to help build its facial recognition algorithms.
The commission directed it to delete all data of Everalbum users who did not give their express consent, and any facial recognition models or algorithms developed with users’ photos or videos.
Referring to this settlement in her annual review message for 2020, Rebecca Kelly Slaughter, the FTC’s acting chair at the time, called the method “a novel algorithmic disgorgement remedy”.
Most recently, in March, the FTC moved against WW International (formerly Weight Watchers), which had marketed a weight loss app called Kurbo for use by children as young as eight and then allegedly harvested their personal and sensitive health information without parental permission.
“Our order against these companies requires them to delete their ill-gotten data, destroy any algorithms derived from it, and pay a penalty for their lawbreaking,” said FTC Chair Lina M Khan.
After this decision, algorithmic disgorgement will likely become a standard enforcement mechanism, just like monetary fines, Pam Dixon told Protocol: “This is definitely now to be expected whenever it is applicable or the right decision.”
A Strong Disincentive
Tech companies today build their businesses around algorithms. Social media platforms, for example, rely heavily on algorithms to tailor content to individual users; Google Maps uses them to plot your route, and Netflix uses them to decide what to show you next.
Artificial intelligence and algorithms can give businesses a competitive edge. As a result, they are often classified as intellectual property or trade secrets.
While larger companies can easily absorb monetary fines, taking away their algorithms can potentially ruin their revenue models – a much stronger disincentive for those using deceptive practices to obtain data.
Problems could arise, however, in the execution of such a punishment. In some cases it might be difficult to separate ill-gotten data and algorithms trained with it from the unaffected parts of a company’s technology products and intellectual property.
If algorithms are trained on multiple data sets over a period of time, it can be complicated to remove the effects of only certain data. None of the three FTC orders explains in much detail how the algorithms should be scrubbed.
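To see why, consider a minimal sketch in Python (using NumPy and scikit-learn; the data sets and model here are hypothetical illustrations, not drawn from any FTC case). Once a model is fit on combined data, its learned parameters entangle the influence of every training record, so there is no simple way to subtract one data set's contribution after the fact – the only reliable remedy is to retrain on the remaining data from scratch.

```python
# Minimal sketch: why "deleting" a data set from a trained model
# usually means retraining. All data here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two data sets collected over time: one lawfully obtained, one "ill-gotten".
X_lawful = rng.normal(size=(500, 5))
y_lawful = (X_lawful[:, 0] > 0).astype(int)
X_tainted = rng.normal(loc=0.5, size=(500, 5))
y_tainted = (X_tainted[:, 1] > 0).astype(int)

# The deployed model was fit on the combined data, so its learned
# weights blend the influence of both sources inseparably.
combined = LogisticRegression().fit(
    np.vstack([X_lawful, X_tainted]),
    np.concatenate([y_lawful, y_tainted]),
)

# There is no operation that subtracts X_tainted's contribution from
# combined.coef_; complying with a disgorgement order in practice
# means retraining on the lawful data alone.
retrained = LogisticRegression().fit(X_lawful, y_lawful)

print("weights differ:", not np.allclose(combined.coef_, retrained.coef_))
```

Retraining a toy model like this is trivial, but for large models built incrementally over years, reconstructing a clean training pipeline can be prohibitively expensive, which is part of what makes disgorgement such a severe penalty.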
The FTC's moves could inform how consumer and data protection authorities across the world treat cases in which data has been obtained or used deceptively or illegally, especially since many governments, including India's, are in the process of introducing data protection bills.
(With inputs from Protocol.)