European police chiefs disgracefully call for citizens to lose their basic internet protection from Russian and Chinese spies, scammers, thieves and blackmailers.
23rd April 2024
See article from reclaimthenet.org
See police statement [pdf] from docs.reclaimthenet.org
European police chiefs have called for Europeans to be deprived of basic internet security used to protect against Russian & Chinese spies, scammers, thieves and blackmailers. The police chiefs write:

Joint Declaration of the European Police Chiefs

We, the European Police Chiefs, recognise that law enforcement and the technology industry have a shared duty to keep the public safe, especially children. We have a proud partnership
of complementary actions towards that end. That partnership is at risk.

Two key capabilities are crucial to supporting online safety. First, the ability of technology companies to reactively provide to law enforcement investigations -- on the basis of a lawful authority with strong safeguards and oversight -- the data of suspected criminals on their service. This is known as lawful access. Second, the ability of technology companies proactively to identify illegal and harmful activity on their platforms. This is especially true in regards to detecting users who have a sexual interest in children, exchange images of abuse and seek to commit contact sexual offences. The companies currently have the ability to alert the proper authorities -- with the result that many thousands of children have been safeguarded, and perpetrators arrested and brought to justice.

These are quite different capabilities, but together they help us save many lives and protect the vulnerable in all our countries on a daily basis from the most heinous of crimes, including but not limited to terrorism, child sexual abuse, human trafficking, drugs smuggling, murder and economic crime. They also provide the evidence that leads to prosecutions and justice for victims of crime.

We are, therefore, deeply concerned that end to end encryption is being rolled out in a way that will undermine both of these capabilities. Companies will not be able to respond effectively to a lawful authority. Nor will they be able to identify or report illegal activity on their platforms. As a result, we will simply not be able to keep the public safe.

Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish. They should not now. We cannot let ourselves be blinded to crime. We know from the protections afforded by the darkweb how rapidly and extensively criminals exploit such anonymity.

We are committed to supporting the development of critical innovations, such as encryption, as a means of strengthening the cyber security and privacy of citizens. However, we do not accept that there need be a binary choice between cyber security or privacy on the one hand and public safety on the other. Absolutism on either side is not helpful. Our view is that technical solutions do exist; they simply require flexibility from industry as well as from governments. We recognise that the solutions will be different for each capability, and also differ between platforms.

We therefore call on the technology industry to build in security by design, to ensure they maintain the ability to both identify and report harmful and illegal activities, such as child sexual exploitation, and to lawfully and exceptionally act on a lawful authority. We call on our democratic governments to put in place frameworks that give us the information we need to keep our publics safe.

Trends in crime are deeply concerning and show how offenders increasingly use technology to find and exploit victims and to communicate with each other within and across international boundaries. It must be our shared objective to ensure that those who seek to abuse these platforms are identified and caught, and that the platforms become more safe not less.
See article from reclaimthenet.org

Here we have Europol and the UK's National Crime Agency (NCA) teaming up to attack Meta for the one thing the company is apparently trying to do right: implementing end-to-end encryption (E2EE) in its products, the necessary, irreplaceable software backbone of a safe and secure internet for everybody. Yet that is what many governments, here the EU via Europol and the UK, keep attempting to damage.
But mass surveillance is a hard sell, so the established pitch is to tie the broader internet problem to the safety of children online and justify it that way. The Europol executive director, Catherine De Bolle, compared E2EE to sending your child into a room full of strangers and locking the door. Yet the technological reality is that undermining E2EE is akin to handing the key to your front door, and with it access to everybody inside, children included, to somebody you are supposed to trust (say, governments and organizations who would like you to take their trustworthiness for granted). Once a copy of that key is out, it can be obtained and used by anybody to get into your house at any time, for any reason. That includes governments and organizations you don't trust or like, straight-up criminals -- and everything active on the web in between.
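The key analogy can be made concrete with a toy sketch in Python (standard library only; the hash-based keystream below is an illustration, not a real cipher). In end-to-end encryption, only the two endpoint devices hold the session key; any "exceptional access" copy of that key decrypts exactly the same traffic for whoever obtains it, friend or foe:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy hash-chained keystream -- illustration only, NOT a real cipher."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR stream: the same function encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# End-to-end: only Alice's and Bob's devices hold the session key.
session_key = secrets.token_bytes(32)
ciphertext = xor_crypt(session_key, b"meet at noon")

# Bob decrypts with the same key.
assert xor_crypt(session_key, ciphertext) == b"meet at noon"

# "Exceptional access" means a copy of that key exists somewhere else.
escrowed_copy = session_key  # held by a provider or an authority

# Whoever obtains the escrowed copy -- lawfully or via a breach --
# reads the same traffic. The key cannot tell trusted from untrusted.
assert xor_crypt(escrowed_copy, ciphertext) == b"meet at noon"
```

The last assertion is the whole argument: mathematically, the escrowed copy is indistinguishable from the legitimate key, so the security of every conversation reduces to the security of the weakest holder of that copy.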
EFF Asks Court to Uphold Federal Law That Protects Online Video Viewers' Privacy and Free Expression
7th January 2024
See Creative Commons article from eff.org
See EFF brief from eff.org
As millions of internet users watch videos online for news and entertainment, it is essential to uphold a federal privacy law that protects against the disclosure of everyone's viewing history, EFF argued in court last month. For
decades, the Video Privacy Protection Act (VPPA) has safeguarded people's viewing habits by generally requiring services that offer videos to the public to get their customers' written consent before disclosing that information to the government or a
private party. Although Congress enacted the law in an era of physical media, the VPPA applies to internet users' viewing habits, too. The VPPA, however, is under attack by Patreon. That service for content creators and viewers
is facing a lawsuit in a federal court in Northern California, brought by users who allege that the company improperly shared information about the videos they watched on Patreon with Facebook. Patreon argues that even if it did
violate the VPPA, federal courts cannot enforce it because the privacy law violates the First Amendment on its face under a legal doctrine known as overbreadth. This doctrine asks whether a substantial number of the challenged law's applications violate
the First Amendment, judged in relation to the law's plainly legitimate sweep. Courts have rightly struck down overbroad laws because they prohibit vast amounts of lawful speech. For example, the Supreme Court in Reno v. ACLU invalidated much of
the Communications Decency Act's (CDA) online speech restrictions because it placed an "unacceptably heavy burden on protected speech." EFF is second to none in fighting for everyone's First Amendment rights in court,
including internet users (in Reno, mentioned above) and the companies that host our speech online. But Patreon's First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of internet users who
benefit from the VPPA's protections. As EFF, the Center for Democracy & Technology, the ACLU, and the ACLU of Northern California argued in their friend-of-the-court brief, Patreon's argument is wrong because the VPPA directly
advances the First Amendment and privacy interests of internet users by ensuring they can watch videos without being chilled by government or private surveillance. "The VPPA provides Americans with critical, private space to
view expressive material, develop their own views, and to do so free from unwarranted corporate and government intrusion," we wrote. "That breathing room is often a catalyst for people's free expression." As the
brief recounts, courts have protected against government efforts to learn people's book buying and library history, and to punish people for viewing controversial material within the privacy of their home. These cases recognize that protecting people's
ability to privately consume media advances the First Amendment's purpose by ensuring exposure to a variety of ideas, a prerequisite for robust debate. Moreover, people's video viewing habits are intensely private, because the data can reveal intimate
details about our personalities, politics, religious beliefs, and values. Patreon's First Amendment challenge is also wrong because the VPPA is not an overbroad law. As our brief explains, "[t]he VPPA's purpose, application,
and enforcement is overwhelmingly focused on regulating the disclosure of a person's video viewing history in the course of a commercial transaction between the provider and user." In other words, the legitimate sweep of the VPPA does not violate
the First Amendment because generally there is no public interest in disclosing any one person's video viewing habits that a company learns purely because it is in the business of selling video access to the public. There is a
better path to addressing any potential unconstitutional applications of the video privacy law short of invalidating the statute in its entirety. As EFF's brief explains, should a video provider face liability under the VPPA for disclosing a customer's
video viewing history, they can always mount a First Amendment defense based on a claim that the disclosure was on a matter of public concern. Indeed, courts have recognized that certain applications of privacy laws, such as the
Wiretap Act and civil claims prohibiting the disclosure of private facts, can violate the First Amendment. But generally courts address the First Amendment by invalidating the case-specific application of those laws, rather than invalidating them
entirely. "In those cases, courts seek to protect the First Amendment interests at stake while continuing to allow application of those privacy laws in the ordinary course," EFF wrote. "This approach accommodates
the broad and legitimate sweep of those privacy protections while vindicating speakers' First Amendment rights." Patreon's argument would see the VPPA gutted--an enormous loss for privacy and free expression for the public.
The court should protect against the disclosure of everyone's viewing history and protect the VPPA.
Google limits the authorities' access to people's location histories
16th December 2023
See Creative Commons article from eff.org by Jennifer Lynch
Google announced this week that it will be making several important changes to the way it handles users' "Location History" data. These changes would appear to make it much more difficult--if not impossible--for Google to provide mass location
data in response to a geofence warrant, a change we've been asking Google to implement for years. Geofence warrants require a provider--almost always Google--to search its entire reserve of user location data to identify all
users or devices located within a geographic area during a time period specified by law enforcement. These warrants violate the Fourth Amendment because they are not targeted to a particular individual or device, like a typical warrant for digital
communications. The only "evidence" supporting a geofence warrant is that a crime occurred in a particular area, and the perpetrator likely carried a cell phone that shared location data with Google. For this reason, they inevitably sweep up
potentially hundreds of people who have no connection to the crime under investigation--and could turn each of those people into a suspect. Geofence warrants have been possible because Google collects and stores specific user location data (which Google calls "Location History" data) all together in a massive database called "Sensorvault." Google reported several years ago that geofence warrants make up 25% of all warrants it receives each year.
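The shape of such a search can be sketched in a few lines of Python (a hypothetical in-memory stand-in for a store like Sensorvault, not Google's actual schema or API). A geofence query filters every user's stored points by distance from a centre and by a time window:

```python
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class LocationPoint:
    user_id: str
    lat: float
    lon: float
    timestamp: datetime

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def geofence_hits(db, centre, radius_m, start, end):
    """Every user with a point inside the area and time window.

    Note the shape of the search: it is not targeted at a named suspect --
    it sweeps the location history of every user in the store.
    """
    return {
        p.user_id
        for p in db
        if start <= p.timestamp <= end
        and haversine_m(p.lat, p.lon, *centre) <= radius_m
    }

db = [
    LocationPoint("suspect", 51.5007, -0.1246, datetime(2023, 6, 1, 12, 5)),
    LocationPoint("bystander", 51.5008, -0.1247, datetime(2023, 6, 1, 12, 10)),
    LocationPoint("elsewhere", 48.8584, 2.2945, datetime(2023, 6, 1, 12, 5)),
]
hits = geofence_hits(db, (51.5007, -0.1246), radius_m=150,
                     start=datetime(2023, 6, 1, 12, 0),
                     end=datetime(2023, 6, 1, 13, 0))
# The innocent bystander is swept up alongside the suspect.
```

The toy result illustrates the Fourth Amendment objection: the query has no notion of suspicion, so anyone who happened to walk through the area in that hour lands in the result set.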
Google's announcement outlined three changes to how it will treat Location History data. First, going forward, this data will be stored, by default, on a user's device, instead of with Google in the cloud. Second, it will be set by
default to delete after three months; currently Google stores the data for at least 18 months. Finally, if users choose to back up their data to the cloud, Google will "automatically encrypt your backed-up data so no one can read it, including
Google." All of this is fantastic news for users, and we are cautiously optimistic that this will effectively mean the end of geofence warrants. These warrants are dangerous. They threaten privacy and liberty because they not
only provide police with sensitive data on individuals, they could turn innocent people into suspects. Further, they have been used during political protests and threaten free speech and our ability to speak anonymously, without fear of government
repercussions. For these reasons, EFF has repeatedly challenged geofence warrants in criminal cases and worked with other groups (including tech companies) to push for legislative bans on their use. However, we are not yet
prepared to declare total victory. Google's collection of users' location data isn't limited to just the "Location History" data searched in response to geofence warrants; Google collects additional location information as well. It remains to
be seen whether law enforcement will find a way to access these other stores of location data on a mass basis in the future. Also, none of Google's changes will prevent law enforcement from issuing targeted warrants for individual users' location
data--outside of Location History--if police have probable cause to support such a search. But for now, at least, we'll take this as a win. It's very welcome news for technology users as we usher in the end of 2023.
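The "encrypt before upload" model Google describes for cloud backups can be sketched as follows (a hypothetical illustration using Python's standard library, not Google's implementation): the encryption key is derived on the device from a secret only the user holds, so the server stores ciphertext and a salt but cannot reconstruct the key.

```python
import hashlib
import secrets

def derive_backup_key(passphrase: str, salt: bytes) -> bytes:
    """Derive the backup key on-device via PBKDF2; only the user knows
    the passphrase. (Iteration count here is illustrative.)"""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

# On-device: derive the key before anything is uploaded.
salt = secrets.token_bytes(16)
key = derive_backup_key("correct horse battery staple", salt)

# The server receives only the salt and the resulting ciphertext.
# Without the passphrase it cannot re-derive the key:
key_guess = derive_backup_key("wrong guess", salt)
assert key_guess != key
```

Because derivation is deterministic only given the passphrase, the user can always recover the key on a new device, while the provider, holding just the salt, cannot: that is what makes the backup unreadable "including [to] Google."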
Launching Default End-to-End Encryption on Messenger
8th December 2023
See article from about.fb.com by Loredana Crisan, Head of Messenger
I'm delighted to announce that we are rolling out default end-to-end encryption for personal messages and calls on Messenger and Facebook, as well as a suite of new features that let you further control your messaging experience. We take our
responsibility to protect your messages seriously and we're thrilled that after years of investment and testing, we're able to launch a safer, more secure and private service. Since 2016, Messenger has had the option for people to
turn on end-to-end encryption, but we're now changing private chats and calls across Messenger to be end-to-end encrypted by default. This has taken years to deliver because we've taken our time to get this right. Our engineers, cryptographers,
designers, policy experts and product managers have worked tirelessly to rebuild Messenger features from the ground up. We've introduced new privacy, safety and control features along the way like delivery controls that let people choose who can message
them, as well as app lock, alongside existing safety features like report, block and message requests. We worked closely with outside experts, academics, advocates and governments to identify risks and build mitigations to ensure that privacy and safety
go hand-in-hand. The extra layer of security provided by end-to-end encryption means that the content of your messages and calls with friends and family is protected from the moment it leaves your device to the moment it reaches the receiver's device. This means that nobody, including Meta, can see what's sent or said, unless you choose to report a message to us. End-to-end encryption gives people more secure chats in Messenger. These chats will not only
have all of the things people know and love, like themes and custom reactions, but also a host of new features we know are important for our community. These new features will be available for use immediately, though it may take some time for Messenger
chats to be updated with default end-to-end encryption.