
Clearview AI, a facial recognition startup, recently settled a major privacy lawsuit brought by privacy advocates, including the ACLU. The lawsuit accused Clearview AI of violating Illinois’ Biometric Information Privacy Act (BIPA) by collecting and storing biometric data without consent. Its key claims included unauthorized data collection, invasion of privacy, and the potential for misuse of that data.

Founded in 2017 by Hoan Ton-That, Clearview AI made headlines for its powerful technology. The company’s software scrapes billions of images from social media and other publicly accessible websites to create an extensive database. Law enforcement agencies, private companies, and various organizations have used this technology to identify individuals, which has raised significant privacy concerns.

The settlement is estimated to be worth over $50 million, but with a twist: instead of a traditional cash payout, the agreement gives the plaintiffs a share in the company’s future value. Attorneys’ fees, estimated at $20 million, will also come out of the settlement amount.

In May 2022, Clearview AI agreed to settle the lawsuit, making several critical commitments. The company agreed to stop selling access to its database to most private companies and individuals in the U.S., restricting its use primarily to law enforcement and government agencies. Additionally, Clearview AI will comply with BIPA, ensuring transparent and consensual data collection. The settlement also includes stronger data protection protocols and regular audits to verify adherence to privacy laws.

Privacy advocates are celebrating the settlement as a big win. The ACLU and other groups involved in the lawsuit see it as an important step in protecting people’s privacy. They believe it sends a clear message to tech companies that they must respect privacy laws and get permission before collecting biometric data.

However, some observers argue the settlement doesn’t go far enough. They worry that continued use of facial recognition by law enforcement could still harm civil liberties, and critics remain concerned that the technology may yet be used in ways that invade people’s privacy and violate their rights.

Conclusion

Going forward, both public and commercial organizations must maintain thorough data protection policies and remain vigilant about civil rights. This case is a warning that, even though AI offers powerful capabilities for security and identification, its use must be guided by ethical considerations, align with established cybersecurity frameworks, and safeguard individual privacy.
