Do you know where you were five years ago? Did you have an Android phone at the time? It turns out Google might know--and it might be telling law enforcement.
In a new article, the New York Times details a little-known
technique increasingly used by law enforcement to figure out everyone who might have been within certain geographic areas during specific time periods in the past. The technique relies on detailed location data collected by Google from most Android
devices as well as iPhones and iPads that have Google Maps and other apps installed. This data resides in a Google-maintained database called Sensorvault, and because Google stores this data indefinitely, Sensorvault includes detailed location
records involving at least hundreds of millions of devices worldwide and dating back nearly a decade.
The data Google is turning over to law enforcement is so precise that one deputy police chief said it shows the whole
pattern of life. It's collected even when people aren't making calls or using apps, which means it can be even more detailed than data generated by cell towers.
The location data comes from GPS signals, cellphone towers,
nearby Wi-Fi devices and Bluetooth beacons. According to Google, users opt in to collection of the location data stored in Sensorvault. However, Google makes it very hard to resist opting in, and many users may not understand that they have done so.
Also, Android devices collect lots of other location data by default, and it's extremely difficult to opt out of that collection.
Using a single warrant--often called a geo-fence or reverse location warrant--police are able to
access location data from dozens to hundreds of devices--devices that are linked to real people, many of whom (and perhaps in some cases all of whom) have no tie to criminal activity and have provided no reason for suspicion. The warrants cover
geographic areas ranging from single buildings to multiple blocks, and time periods ranging from a few hours to a week.
So far, according to the Times and other outlets, this technique is being used by the FBI and police
departments in Arizona, North Carolina, California, Florida, Minnesota, Maine, and Washington, although there may be other agencies using it across the country. But police aren't limiting the use of the technique to egregious or violent crimes--
Minnesota Public Radio reported the technique has been used to try to identify suspects who stole a pickup truck and, separately, $650 worth of tires. Google is getting up to 180 requests a week for data and is, apparently, struggling to keep up with the demand.
Law enforcement seems to be using a three-step process to learn the names of device holders (in some cases, a single warrant authorizes all three steps). In the first step, the officer specifies the area and time period of
interest, and in response, Google gives the police information on all the devices that were there, identified by anonymous numbers--this step may reveal hundreds of devices.
After that, officers can narrow the scope of their
request to fewer devices, and Google will release even more detailed data, including data on where devices traveled outside the original requested area and time period. This data, which still involves multiple devices, reveals detailed travel patterns.
In the final step, detectives review that travel data to see if any devices appear relevant to the crime, and they ask for the users' names and other information for specific individual devices.
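As an illustration only, the first step of this process resembles a simple spatio-temporal filter over a table of anonymized location records. Everything below is hypothetical: the record layout, field names, and data are invented for the sketch, and nothing about Google's actual internal systems is public.

```python
from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str   # anonymized identifier -- step one returns only these
    lat: float
    lon: float
    ts: int          # unix timestamp of the observation

def geofence_query(pings, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Step one: anonymized IDs of every device seen in the box and window."""
    return {
        p.device_id
        for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.ts <= t_end
    }

# Hypothetical records: two devices inside the area and window, one far outside.
pings = [
    Ping("anon-17", 35.08, -106.65, 1000),
    Ping("anon-42", 35.09, -106.64, 1200),
    Ping("anon-99", 40.71, -74.00, 1100),  # different city entirely
]

hits = geofence_query(pings, 35.0, 35.1, -106.7, -106.6, 900, 1300)
# hits contains only the anonymized IDs "anon-17" and "anon-42"
```

The later steps widen the query for a subset of those IDs (dropping the geographic bounds, extending the time window) and finally swap selected anonymous IDs for account names, which is why a single warrant can sweep in so many uninvolved people.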
Techniques like this also reveal
big problems with our current warrant system. Even though the standard for getting a warrant is higher than for other legal procedures--and EFF pushes for a warrant requirement for digital data and devices--warrants alone are no longer enough to protect
our privacy. Through a single warrant, the police can access far more, and far more detailed, information about us than they ever could in the past. Here, the police are using a single warrant to get access to location information for hundreds of
devices. In other contexts, through a single warrant, officers can access all the data on a cell phone or a hard drive; all email stored in a Google account (possibly going back years); and all information linked to a social media account (including
photos, posts, private communications, and contacts).
We shouldn't allow the government to have such broad access to our digital lives. One way we could limit access is by passing legislation that mandates heightened standards,
minimization procedures, and particularity requirements for digital searches. We already have this in laws that regulate wiretaps, where police, in addition to demonstrating probable cause, must state that they have first tried other investigative
procedures (or state why other procedures wouldn't work) and also describe how the wiretap will be limited in scope and time.
As the Times article notes, this technique implicates innocent people and has a real impact on people's
lives. Even if you are later able to clear your name, any time spent in police custody could cost you your job, your car, and your ability to get back on your feet after the arrest. One man profiled in the Times article spent nearly a
week in police custody and was having trouble recovering, even months after the arrest. He was arrested at work and subsequently lost his job. Due to the arrest, his car was impounded for investigation and later repossessed. These are the kinds of
far-reaching consequences that can result from overly broad searches, so courts should subject geo-location warrants to far more scrutiny.
Morality in Media (now calling itself The National Center on Sexual Exploitation), Utah State Senator Todd Weiler, Protect Young Eyes, child advocate Melissa McKay, and other organizations are calling for an official censor to oversee age ratings for
apps. The groups claim that the present system of self-rating by developers is often misleading, inconsistent across platforms, and does not appropriately warn parents of the potential dangers found in apps. Dawn Hawkins, Executive Director at the
National Center on Sexual Exploitation, said:
Parents are empowered with rating information to keep kids out of R-rated films, but when it comes to apps, parents are left in the dark about the kind of content their
children are accessing. Apps such as Instagram, Facebook, and GroupMe need to be more transparent with families about the risks associated with their platforms, particularly regarding grooming for child sexual abuse and sex trafficking.
The moralists are calling for the following:
The creation of an independent app ratings board. This board would have powers similar to the Entertainment Software Ratings Board (ESRB) and MPAA for movies, which use a rating system that is clearly understood, enforced, trustworthy, and exists to
protect the innocence of minors.
The release of intuitive parental controls on iOS, Android, and Chrome operating systems. These controls should at a minimum include default settings based on a child's age, easy set-up, and one-touch screen time
controls (e.g., school and bedtime selective app shut-off).
Supporters believe that if these two steps are done properly, parents would have what they need to make informed decisions about the appropriateness of the digital places where their kids spend time.
The BBFC has just published a very short list of adjudications responding to website blocking complaints to mobile ISPs during the last quarter of 2018.
There are several cases where innocuous websites were erroneously blocked by ISPs for no apparent
reason whatsoever, which a quick check by a staff member would have sorted out without the need to waste the BBFC's time. These sites should get compensation from the ISPs for grossly negligent and unfair blocking.
The only adjudication of note concerned
the general archive website archive.org, which of course keeps snapshots of a wide range of websites, including some porn.
The BBFC noted that this was the second time that they have taken a look at the site:
The BBFC provided a further adjudication when we viewed the website on 10 October 2018. As in September 2015, we determined that the site was a digital archive which hosted a range of media including video, books and
articles. We found a range of pornography across the archive which featured explicit images of sexual activity, in both animated and non-animated contexts. The site also contained repeated uses of very strong language. Additionally, out-of-copyright film
and video material which the BBFC has passed 18 was also present on the site.
As such, we concluded that we would continue to classify the site as 18.
It is interesting to note that the BBFC have never been asked
to adjudicate on similarly broad websites, eg google.com, youtube.com, twitter.com, where it would be totally untenable to come to the same, albeit correct, 18 rated conclusion. They would all have to be 18 rated and it would cause untold trouble for
everybody. I wonder who decides 'best not go there'?