A Clear View

Apr 06

Posted by: George Grundy

As the Morrison government continues to avoid scrutiny relating to its curious allocation of sports grants, Peter Dutton has ramped up his years-long effort to increase the ability of the federal government to eavesdrop on Australians.

Dutton’s ‘International Production Orders Bill’ has made headlines because the proposed law would allow overseas police to monitor Australians’ phones, but further from view another Rubicon has been crossed in the fraught digital relationship between citizen and state.

Despite initial denials, a recent Buzzfeed report has confirmed that federal and state Australian police forces are now using technology developed by Clearview AI, a controversial US-based facial recognition company that mines information from social media and boasts a database of over three billion images, scraped from sites such as Twitter, Instagram, Facebook and LinkedIn.

At first glance, the use of advanced facial recognition in police work seems reasonable. In the US, Clearview has claimed astonishing results, and insists that its software can only be used by police forces. Who could complain about the police using modern technology to catch criminals?

But look a little closer and there are plenty of reasons to feel profoundly uneasy about Australian police’s new tech partner.

Clearview AI was founded in 2017 by Hoan Ton-That, an Australian of Vietnamese descent now based in New York. The company has developed ground-breaking facial recognition software that allows the user to trawl the internet for every single online image of an individual. These days nearly everyone has a digital footprint that tracks their life in great detail. Even those who avoid social media often appear in other people’s images posted online. Facial recognition technology can now identify a face photographed from an angle, or partially obscured. Hats and glasses prove no barrier to identification. But until now the disparate web locations of these images have proved a barrier to collation.

As reported by Kashmir Hill in the New York Times, although the technical capability to identify ‘everyone’ based on facial recognition has been available for some time, tech companies held off on the release of such a tool, fearful of the Pandora’s Box of privacy issues that would ensue. Clearview was the first to put aside such qualms.

The uptake was immediate. Clearview says over 600 law enforcement agencies and the US Department of Defense have commenced using the technology. Canadian police are conducting trials. Soon the world’s police will have access.

To grasp just how futuristic a leap access to Clearview’s technology is, imagine walking down the street and being able to identify every person you saw in real time. You could approach a stranger and know their name, their address and job, and their friends’ names. Imagine the power of this technology when paired with augmented reality glasses. It’s already being done.

That may sound creepy enough, but it doesn’t take much imagination to consider how many more sinister opportunities present themselves, how this could be used for both good and bad police work, or how an authoritarian government could harness this power to suppress protest, enforce conformity or entrench tyranny. If linked to CCTV cameras the technology could practically end the concept of personal privacy, something recognised by the UN as a human right.

Modern communication offers malign actors an array of ways to abuse that right – internet pioneer Vint Cerf has said that online privacy is often ‘abused to commit crimes or other harmful acts’. A Buzzfeed report from February 2020 showed that Clearview’s clients had performed nearly half a million searches, all of which the company tracks and logs. Clearview retains details of every search made against its database of photos, and there has been no external audit of its ability to hold that information securely.

This search history data retention gives Clearview awesome power over the government and law enforcement agencies that make up its client list. Clearview knows what their clients have searched for and the results they got. Placing this power in the hands of a private company with no oversight seems a recipe for disaster.

You might expect that any company licensing technology so clearly open to abuse would put strict limits on who could use it, how they could use it, and which clients it would accept. You would be wrong.

Although the New York Police Department chose not to adopt the technology in 2019 (citing questionable aspects of Hoan Ton-That’s past), a New York Post article revealed that rogue NYPD officers were using the software on their personal phones. It is unclear whether the officers were conducting searches in an official or personal capacity – Clearview appears not to care. A November email to a Wisconsin police lieutenant encouraged the officer to ‘try (it on) your friends and family’ and ‘run wild’, advice clearly inconsistent with the company’s public statements.

Any partnership with police forces needs iron-clad security, particularly one where the provider retains primary source information. So it was unfortunate, to say the least, when a security breach in late February 2020 resulted in Clearview’s entire customer list being stolen. Buzzfeed (again) was able to review Clearview’s client list in full and published a report detailing the breadth of the 2,200 customers using the software. The company’s sales approach was described as ‘flood the market’, giving access to organisations and to individuals within those organisations, ‘sometimes with little or no oversight or awareness from their own management’. Agencies and companies from 27 countries were actively using the software.

Clearview has claimed that there was ‘no compromise of Clearview’s systems or network’ but it is impossible to verify this without independent access, something the company has been careful to prevent.

When New York Times reporter Kashmir Hill began investigating Clearview she found that the company’s listed address didn’t exist and that its one employee listed on LinkedIn was a fake identity used by Hoan Ton-That. It quickly became clear that Clearview was using its own software to spy on Ms Hill and her reporting. This is not normally the behaviour of a start-up software company.

Worse was to follow. Hill found a police officer willing to use his Clearview software to demonstrate its effectiveness to the reporter. But a search for photos of Ms Hill, whose image is widely published, came up blank, and five minutes later the officer got a call from Clearview asking why he was searching for the face of a New York Times reporter. Clearview appears to have the ability to flag certain faces (such as reporters asking questions) and to know who the police are looking for. The officer’s account was deactivated.

Social media companies have been quick to distance themselves from Clearview. Twitter, Google, Venmo, LinkedIn, Facebook and YouTube have issued cease and desist letters (or taken similar action), demanding Clearview stop ‘scraping’ data collected on the social sites. Further action by advocacy and other groups seeks to persuade the Department of Homeland Security to suspend all facial recognition software use.

Which brings us to Mr Ton-That, who (ironically) erased most of his own online persona as Clearview took shape (archives indicate a long interest in far-right ideology and politics).

Ton-That clearly mixes in some odd circles. A photo taken in 2016 (posted by Mike Cernovich, a regular host on Infowars) showed Ton-That at a bar with Chuck C. Johnson, a far-right white supremacist and Holocaust denier. In the photo both men flash the ‘A-OK’ hand sign, which these days doubles as a white power symbol, one that has become ubiquitous in the fringe world occupied by the far right and Donald Trump’s supporters (a Venn diagram with significant overlap). Johnson and Ton-That appear to be old friends. There is footage of them together on the night of Trump’s 2016 election victory. Ton-That has said that the bar photo was ‘completely innocuous’.

It seems an unusual association for a businessman working with the security sector. Johnson is a notorious figure who has been banned for life by Twitter for threatening behaviour and photographed making the white power symbol (again) next to Richard Spencer. It’s impossible to imagine someone as toxic as Johnson meeting with senior politicians in any administration in American history – except the Trump administration. Despite his reputation, Johnson has been afforded curious meetings with senior Trump officials in the last year. He has been seen at the Trump hotel, is a fixture on the MAGA rally scene and has met with then-Interior Secretary Ryan Zinke (to discuss Trump’s border wall). An article Johnson penned even made it to the President’s desk.

It’s not hard to link Clearview to America’s elections. Clearview was originally called Smartcheckr, and that company claimed the ability to provide political organisations with ‘extreme opposition research’ and advertisement micro-targeting. Services were offered to white nationalist Paul Nehlen, who at the time was running for the congressional seat held by Paul Ryan. Ton-That has described the offer (made by a racist, Trump-supporting contractor) as ‘unauthorised’.

Immigration and Customs Enforcement (ICE) is reportedly one of Clearview’s most active customers, with almost 7,500 searches conducted. Given Clearview’s links to white nationalism and the Trump administration, and ICE’s race-based deportation focus, it’s not hard to imagine how the software’s power could be used for political gain. Immigration enforcement could, for example, be applied aggressively in the period before this November’s federal election to intimidate the Latino community, a demographic that overwhelmingly votes Democrat.

Some of the risks associated with wide-scale facial recognition are well known. Facial recognition is more prone to false positives for people of colour. Clearview claims a 98.6% match rate, but this has not been independently corroborated. In a country with a judicial system as racially biased as America’s, the potential for false positives to result in innocent people being jailed is significant.
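
To see why even an impressive headline accuracy figure offers limited reassurance, consider a back-of-the-envelope sketch of the base rate problem. The numbers below are purely illustrative: the sketch treats the unverified 98.6% claim as if it were a per-comparison accuracy rate (which is almost certainly not how the company defines it) and uses the reported three-billion-image database only for a sense of scale.

```python
# Back-of-the-envelope sketch of the "base rate" problem in large-scale face matching.
# All figures are hypothetical illustrations, not Clearview's actual data or methodology.

claimed_accuracy = 0.986          # headline match rate, as claimed and not independently verified
false_match_rate = 1 - claimed_accuracy

database_size = 3_000_000_000     # roughly the number of images reportedly scraped

# If a single probe photo is compared against every face in the database,
# even a small per-comparison error rate produces an enormous number of
# potential misidentifications.
expected_false_matches = false_match_rate * database_size

print(f"Expected false matches per search: {expected_false_matches:,.0f}")
# ~42,000,000 under these assumptions - which is why unaudited accuracy claims
# deserve scepticism, and why independent testing matters.
```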

Without a clear line between law enforcement and non-law enforcement use of Clearview’s software it was inevitable that misuse would occur. What little information we have of this is already profoundly disturbing.

Despite prior claims to the contrary, Clearview has worked with a number of private companies, including Walmart, Macy’s, the NBA, and casinos interested in enforcing bans on certain individuals. Clearview has provided its software to entities in both Saudi Arabia and the UAE, countries where repressive misuse of digital power by the state is commonplace.

Clearview’s client list includes four Republican congressmen as well as Rep. John Ratcliffe (R), one of the most conservative members of Congress and a former nominee for Director of National Intelligence, the most senior post in the United States intelligence community.

But it’s the way the software has been distributed and the people who have gained access to it that causes most concern. Potential investors have been offered free 30-day trials without apparent oversight. A couple of examples are instructive.

Billionaire John Catsimatidis was dining at an upscale Italian restaurant in New York when he saw his daughter walk in. She was on a date with a man Catsimatidis didn’t know, so he told the waiter to go over and take a photo. Retrieving the phone, Mr Catsimatidis uploaded the photo to Clearview’s app and within seconds identified the date as a venture capitalist from San Francisco.

More ominously, the aforementioned Chuck C. Johnson was reported to have the software on his phone and to have demonstrated it to a stranger on a plane.

The danger of handing some of the most powerful intelligence software to a leading white supremacist needs no elaboration. Yet most of those afforded a Clearview login appear to have used it as a party trick of sorts, showing it off for fun. It’s likely that Ashton Kutcher had the app at one time. Mr Catsimatidis trialled the software at a local market, trying to stop shoplifters.

So, Australia’s police are now working with a controversial privately owned American tech company, with dubious claims of success, recently compromised security and links to white nationalism and election manipulation. The company claims miraculous rates of accuracy but has not had these claims independently audited. This new partnership has taken place without the benefit of public debate, and appears to have involved some institutional dishonesty.

In late January an ABC report stated that Australian federal and state police forces were denying use of Clearview’s technology. Yet a month later a report by Hannah Ryan for Buzzfeed revealed that the AFP and police forces in Queensland, Victoria and South Australia had dozens of registered Clearview accounts. When Labor MP Mark Dreyfus asked AFP Commissioner Reece Kershaw whether the AFP uses Clearview’s technology, Mr Kershaw was unable to immediately provide an answer.

Clearview’s practices have drawn the attention of Australian privacy commissioner Angelene Falk, who has launched an inquiry into whether Australians have had their data and images collected. Australia’s Privacy Act requires agencies collecting personal information to obtain informed consent from the public. This clearly has not been the case with Clearview.

Think about your digital footprint in 2020. Imagine what someone, benign or otherwise, could find about you if they had access to your life online. To your phone tracking, credit card use, phone calls, emails, social media. To any conversation within hearing distance of your phone. Digital information is now so pervasive that most people’s footprint paints a picture of their life more complete than anyone else, including a spouse, could ever know.

It might be possible to try to look at this with an unjaded eye. To believe that the police and government are, on the whole, a force for good, and that new technology won’t concern you unless you are committing (or have committed) a crime. This has been a soothing adage promoted by Western politicians since the commencement of the War on Terror. Former British Foreign Secretary William Hague famously said that ‘if you are a law-abiding citizen…you have nothing to fear’.

This Orwellian ‘nothing to hide, nothing to fear’ mantra seems reassuring at face value, but its reassurance vanishes with the least scrutiny. Western democratic governments have spied on protesters, harassed journalists and stifled dissent using digital data, often without a hint of criminality. All this takes place at a time when governments and politicians are taking unprecedented steps to protect themselves from scrutiny.

It’s hard to see how this toothpaste can be put back in the tube. Legislation might control or ban facial recognition data mining by government agencies, but as Clearview’s technology becomes more widespread it will be difficult for governments to control the private companies selling it, which, when threatened, can simply move offshore.

As Adam Schiff has noted, the development of facial recognition technology is out-pacing the legislation designed to keep it from abuse. And legislation is only as strong as the government writing it – who is under any illusion that the Trump administration, for example, would not see this development and immediately put it to work in the cause of re-electing Donald Trump this fall?

Facial recognition will inevitably change the relationship between those who surveil and those who are surveilled. It may be possible to strike a healthy balance between necessary Australian state security and the privacy rights of individuals, but when progress is made behind closed doors, without oversight, and with a notably nefarious private company involved, it’s unlikely to resolve as we might wish.

Orwell’s dystopian novel ‘1984’ described facial recognition technology succinctly.

“It was terribly dangerous to let your thoughts wander when you were in any public place or within range of a telescreen. The smallest thing could give you away. There was even a word for it in Newspeak: facecrime.”

It is time for a public debate about the profound privacy issues raised by Clearview’s software and its use by Australian police. Peter Dutton’s Department of Home Affairs is actively working to increase the use of facial recognition systems. Given the police forces’ initial lies about their relationship with Clearview, public trust needs to be quickly restored. Without transparency, our rights face further, perhaps irretrievable, erosion.