Every year millions of people come to Pinterest for Halloween inspiration, and even though this year looks different, searches related to the holiday continue to rise as people plan for their unique
Halloweens. Costumes are consistently a top-searched term, but many people may not know that certain costumes are appropriations of other cultures. As a platform for positivity, we want to make it easy to find culturally-appropriate Halloween ideas, and
bring awareness to the fact that costumes should not be opportunities to turn a person's identity into a stereotyped image.
It's why, since 2016, we've prohibited advertisements with culturally inappropriate costumes, and made it
possible for Pinners to report culturally-insensitive content right from Pins. Starting this year, certain searches -- like for Day of the Dead costumes -- will show a Pin at the top of results with information curated by Pinterest employee group
PIndigenous and experts like Dr. Adrienne Keene on how to celebrate thoughtfully and respectfully. Additionally, we're limiting recommendations for costumes that appropriate cultures.
The BBC has issued new guidance on social media usage, which will force staff to maintain impartiality. Employees will be told not to express a personal opinion on matters of public policy, politics, or controversial subjects. Staff will also be told
they must not bring the BBC into disrepute or criticise colleagues in public.
The new guidance on social media will apply to staff whether they are using online platforms professionally or personally.
The announcement follows new director
general Tim Davie's pledge last month to impose new social media rules.
The BBC said it had considered impartiality in the context of public expressions of opinion, taking part in campaigns and participating in marches or protests.
Guidance will also be issued on avoiding bias through follows, likes, retweeting or other forms of sharing. The BBC said there would be tougher guidelines for some staff in news, current affairs, factual journalism, senior leadership, and a small number of presenters
who have a significant public profile.
The guidance states staff should avoid using disclaimers such as 'My views, not the BBC's' in their biographies and profiles, as they provide no defence against personal expressions of opinion. It also advises
staff against using emojis which could reveal an opinion and undercut an otherwise impartial post, and to always assume they are posting publicly even if they have tight security settings.
The guidance states employees should avoid 'virtue signalling'
and adds: Remember that your personal brand on social media is always secondary to your responsibility to the BBC.
China's internet censor, the Cyberspace Administration of China (CAC), has announced plans to start a 'rectification' of Chinese mobile internet browsers to address social concerns over the chaos of information being published online.
According to the CAC, firms operating mobile browsers have until 9 November to conduct a self-examination and rectify problems. These problems include the spreading of rumors, the use of sensationalist headlines and publishing content that infringes the core values of socialism.
The CAC threatened that after the 'rectification', mobile browsers that still have outstanding problems will be dealt with strictly according to laws and regulations until related businesses are banned.
Huawei said it plans to start a
'self-examination and clean-up' in line with the regulator's requests.
Google writes on its blog (the claims about privacy are theirs):
Coming soon: Increase your online security with the VPN by Google One With one tap from the Google One app, you can encrypt your online activity for an extra layer of
protection wherever you're connected.
Extend Google's world-class security to encrypt your Android phone's online traffic - no matter what app or browser you're using
Stream, download and browse on an encrypted, private connection
Shield against hackers on unsecured networks (like public Wi-Fi)
Hide your IP address and prevent third
parties from using it to track your location
Privacy and security is core to everything we make.
Google will never use the VPN connection to track, log, or sell your browsing activity
Our systems have advanced security built in so no one can use the VPN to tie your online activity to your identity
Don't just take our word for it. Our client libraries are open-sourced, and our end-to-end systems will be independently audited (coming soon in 2021)
In a landmark decision that shines a light on widespread data protection failings by the entire data broker industry, the UK data protection censor ICO has taken enforcement action against Experian, based in part on a complaint made by Privacy
International in 2018.
Privacy International (PI) welcomes the report from the UK Information Commissioner's Office (ICO) into three credit reference agencies (CRAs) which also operate as data brokers for direct marketing purposes. As a result, the
ICO has ordered the credit reference agency Experian to make fundamental changes to how it handles people's personal data within its offline direct marketing services.
Experian now has until July 2021 to inform people that it holds their personal
data and how it intends to use it for marketing purposes. The ICO also requires Experian to stop using personal data derived from the credit referencing side of its business by January 2021.
The ICO investigation found widespread and systemic data
protection failings across the sector, significant data protection failures at each company and that significant invisible processing took place, likely affecting millions of individuals in the UK. As the report underlines, between the CRAs, the data of
almost every adult in the UK was, in some way, screened, traded, profiled, enriched, or enhanced to provide direct marketing services.
Moreover, the report notes that all three of the credit referencing agencies investigated were also using
profiling to generate new or previously unknown information about people. This can be extremely invasive and can also have discriminatory effects for individuals.
Experian has said it intends to appeal the ICO decision, saying:
We believe the ICO's view goes beyond the legal requirements. This interpretation (of General Data Protection Regulation) also risks damaging the services that help consumers, thousands of small businesses and charities, particularly
as they try to recover from the COVID-19 crisis.
In November of 2019, Tobias Schmid began a crusade to regulate some of porn's biggest players. Schmid, the director of the State Media Authority (LMA) of the German state North Rhine-Westphalia, wanted to enforce existing mandatory age laws on porn
sites like Pornhub, YouPorn, and xHamster. In practice, this would mean that all visitors to the sites would have to upload pictures of official IDs and risk the data falling into the hands of moralists and blackmailers.
Now, after an almost year-long legal scramble and porn sites refusing to back down, it looks like Schmid could get his way. After telecommunication providers like Vodafone and Deutsche Telekom refused to voluntarily implement DNS blocks against a number of sites, including Pornhub,
YouPorn, and MyDirtyHobby, German authorities are now in the process of legally enforcing the bans.
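A DNS block of this kind lives in the provider's resolver: names on the court-ordered blocklist get a refusal instead of a real answer. A minimal sketch of the resolver-side logic, in Python; the blocklist contents and the None-for-NXDOMAIN convention are illustrative assumptions, not the providers' actual implementation:

```python
# Hypothetical resolver-side blocklist check (illustration only).
# Real ISP resolvers do this inside their DNS server software;
# returning None here stands in for an NXDOMAIN/refused response.
BLOCKLIST = {"pornhub.com", "youporn.com", "mydirtyhobby.com"}

def resolve(name, upstream_lookup):
    """Answer a DNS query, refusing blocked domains and their subdomains."""
    labels = name.lower().rstrip(".").split(".")
    # A block on "example.com" must also cover "www.example.com",
    # so check every parent suffix of the queried name.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None  # simulate NXDOMAIN
    return upstream_lookup(name)
```

Because the block sits only in the provider's resolver, it is trivially bypassed by switching to an unfiltered public DNS server or a VPN, which is why such blocking orders are often criticised as largely symbolic.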
Instagram has announced it will be introducing a new nudity policy this week, which will now allow pictures of women holding, cupping or wrapping their arms around their breasts.
Instagram said the change was prompted by a campaign by Nyome
Nicholas-Williams, a Black British plus-sized model, who had accused the Facebook-owned company of removing images showing her covering her breasts with her arms due to racial biases in its algorithm.
According to Thomson Reuters, Instagram
apologized last month to Nicholas-Williams and said it would update its policy, amid global concern over racism in technology following the global Black Lives Matter protests this year.
One bad privacy idea that won't die is the so-called data dividend, which imagines a world where companies have to pay you in order to use your data. Sound too good to be true? It is. By Hayley Tsukayama
The Telegraph has reported on the current government thinking about its new internet censorship bill, which it refers to as the Online Harms Bill.
An update will be published after the US elections, suggesting that the government's plans for
internet censorship are bound up in negotiations for a US trade deal, and that the amount of scope for censorship will depend on whether Donald Trump or Joe Biden is in charge.
The Online Harms Bill is set to require websites and apps with user
interaction to agree legally-binding terms and conditions that lock them into a rather vaguely defined 'duty of care'.
Culture Secretary Oliver Dowden -- who has presented the plan to Number 10 with Home Secretary Priti Patel -- has pledged the
firms' codes to tackle content such as self-harm and eating disorders will have to be meaningful and vetted by the new internet censor Ofcom to ensure they are proper and effective.
The current proposals are thought to stop short of criminal
sanctions against the firms for breaches over legal but harmful content like self-harm videos, but named executives will be held accountable for companies' policies and face fines and disqualification for breaches. Criminal sanctions will be reserved for
illegal online material such as child abuse and terrorism.
The proposals, set out as a response to the consultation on last year's white paper, are expected to be published after the US elections, once agreed by the Prime Minister.
The government is expected to draft a tight duty of care bill early next year that will lay down the sanctions and investigative powers of the new regulator but leave the scope of the duty of care on legal harms to secondary legislation to be voted on by MPs.
Moralist campaigners in the US have been pushing for computers and smart phones to be pre-loaded before sale with unspecified porn blocking software that can only be removed once users pay an unblocking fee.
But the campaigners haven't really done
much to specify how this idea could be implemented in practice. Now the proposal introduced by representative Susan Pulsipher has run into a wall of dissent in the Utah legislature at an interim committee hearing, and the idea was rejected without a vote.
Pulsipher said the goal of her effort was to create another wall of defense to help protect children from the damaging impact of pornography and empower parents and legal guardians to limit a minor's exposure to such online harmful material.
But committee members balked at Pulsipher's approach, noting that it would be extremely difficult to identify which entity in the consumer electronics supply chain should be held liable for ensuring that software was activated.
Sen. Bramble, R-Provo, pointed out that Pulsipher's proposal failed to identify whether the responsible party was the manufacturer, the company that distributed the product, or the store or reseller that sold the product to the consumer.
Pulsipher said she appreciated the opportunity to field the concerns of committee members and promised to work on revising the bill in time for further consideration in the next interim session. But Senate Majority Whip Dan Hemmert said he was unlikely to end up a
supporter of the effort, regardless of what changes Pulsipher came back with.
The European Union has made the first step towards a significant overhaul of its core platform regulation, the e-Commerce Directive .
In order to inspire the European Commission, which is currently preparing a proposal for a Digital Services Act Package, the EU Parliament has voted on three related Reports (the IMCO, JURI, and LIBE reports), which address the legal
responsibilities of platforms regarding user content, include measures to keep users safe online, and set out special rules for very large platforms that dominate users' lives.
Clear EFF's Footprint
Ahead of the votes, together with our allies, we argued to preserve what works for a free Internet and innovation, such as to retain the E-Commerce Directive's approach of limiting platforms' liability over user content and banning Member States from imposing obligations to track and monitor users' content. We also stressed that it is time to fix what is broken: to imagine a version of the Internet where users have a right to remain anonymous, enjoy substantial procedural rights in the context of content moderation, can have more control over how they interact with content, and have a true choice over the services they use through interoperability obligations.
It's a great first step in the right direction that all three EU Parliament reports have considered EFF suggestions. There is an overall agreement that platform intermediaries have a pivotal role to play in ensuring the
availability of content and the development of the Internet. Platforms should not be held responsible for ideas, images, videos, or speech that users post or share online. They should not be forced to monitor and censor users' content and
communication--for example, using upload filters. The Reports also make a strong call to preserve users' privacy online and to address the problem of targeted advertising. Another important aspect of what made the E-Commerce Directive a success is the
"country of origin" principle. It states that within the European Union, companies must adhere to the law of their domicile rather than that of the recipient of the service. There is no appetite from the side of the Parliament to change this principle.
Even better, the reports echo EFF's call to stop ignoring the walled gardens big platforms have become. Large Internet companies should no longer nudge users to stay on a platform that disregards their privacy or
jeopardizes their security, but enable users to communicate with friends across platform boundaries. Unfair trading, preferential display of platforms' own downstream services and transparency of how users' data are collected and shared: the EU
Parliament seeks to tackle these and other issues that have become the new "normal" for users when browsing the Internet and communicating with their friends. The reports also echo EFF's concerns about automated content moderation, which is
incapable of understanding context. In the future, users should receive meaningful information about algorithmic decision-making and learn if terms of service change. Also, the EU Parliament supports procedural justice for users who see their content
removed or their accounts disabled.
The focus on fundamental rights protection and user control is a good starting point for the ongoing reform of Internet legislation in Europe.
However, there are also a number of pitfalls and risks. There is a suggestion that platforms should report illegal content to enforcement authorities and there are open questions about public electronic identity systems. Also, the general focus on
consumer shopping issues, such as liability provision for online marketplaces, may clash with digital rights principles: the Commission itself acknowledged in a recent internal document that "speech can also be reflected in goods, such as books,
clothing items or symbols, and restrictive measures on the sale of such artefacts can affect freedom of expression." Then, the general idea to also include digital services providers established outside the EU could turn out to be a problem to the
extent that platforms are held responsible to remove illegal content. Recent cases (Glawischnig-Piesczek v Facebook) have demonstrated the perils of worldwide content takedown orders.
It's Your Turn Now @EU_Commission
The EU Commission is expected to present a legislative package on 2 December. During the public consultation process, we urged the Commission to protect freedom of expression and to give control to users rather than the big platforms.
We are hopeful that the EU will work on a free and interoperable Internet and not follow the footsteps of harmful Internet bills such as the German law NetzDG or the French Avia Bill, which EFF helped to strike down. It's time to make it right. To
preserve what works and to fix what is broken.
Australia's eSafety Commissioner Julie Inman-Grant has rejected the practicality of a know your customer-type ID verification requirement for social media companies to ensure the age of their users.
Addressing the Senate, Inman-Grant said such a regime
works in the banking industry as it has been heavily regulated for many years, particularly around anti-money laundering:
It would be very challenging, I would think, for Facebook for example to re-identify -- or
identify -- its 2.7 billion users, she said. How do they practically go back and do that and part of this has to do with how the internet is architected.
While she admitted it was not impossible, she said it would create a range of
other issues and that removing the ability for anonymity or to use a pseudonym is unlikely to deter cyberbullying and the like. Similarly, she said, if the social media sites were to implement a real names policy, it wouldn't be effective given the way
the systems are set up. She added:
I would also suspect there would be huge civil libertarian pushback in the US.
I think there are incremental steps we could make, I think totally getting rid
of anonymity or even [the use of] pseudonyms on the internet is going to be a very hard thing to achieve.
I want to be pragmatic here about what's in the realm of the possible, it would be great if everyone had a name tag online
so they couldn't do things without [consequence].
Ofcom has published its burdensome censorship rules that will apply to video sharing platforms that are stupid enough to be based in the UK. In particular the rules are quite vague about age verification requirements for the two adult video sharing sites
that remain in the UK. Maybe Ofcom is a bit shy about requiring onerous and unviable red tape of British companies trying to compete with large numbers of foreign companies that operate with a massive commercial advantage of not having age verification.
Ofcom do however note that these censorship rules are a stop gap until a wider scoped 'online harms' censorship regime which will start up in the next couple of years.
Video-sharing platforms (VSPs) are a type of online video service which allows users to upload and share videos with members of the public.
From 1 November 2020, UK-established VSPs will be required to comply with new rules around protecting users from harmful content.
The main purpose of the new regulatory regime is to protect consumers who engage with VSPs from the risk of viewing harmful content. Providers must have appropriate measures in place to protect minors from content
which might impair their physical, mental or moral development; and to protect the general public from criminal content and material likely to incite violence or hatred.
Ofcom has published a short guide outlining the new
statutory requirements on providers. The guide is intended to assist platforms to determine whether they fall in scope of the new regime and to understand what providers need to do to ensure their services are compliant.
The guide also explains how Ofcom expects to approach its new duties in the period leading up to the publication of further guidance on the risk of harms and appropriate measures, which we will consult on in early 2021.
Ofcom will also be
consulting on guidance on scope and jurisdiction later in 2020. VSP providers will be required to notify their services to Ofcom from 6 April 2021 and we expect to have the final guidance in place ahead of this time.
Trigger warnings in classic Disney films have been updated from last year and now preach from the bible of critical race theory.
When played on the Disney+ streaming service, films such as Dumbo , Peter Pan and Jungle Book now
appear with a Disney statement acknowledging its racist content and the company's racism. The statement reads:
This programme includes negative depictions and/or mistreatment of people or cultures.
These stereotypes were wrong then and are wrong now.
Rather than remove the content, we want to acknowledge its harmful impact, learn from it and spark conversation to create a more inclusive future.
Other films to carry the warning are The Aristocats , which shows a cat in yellow-face playing the piano with chopsticks, and Peter Pan , where Native Americans are referred to by the racist slur 'redskins'.
Facebook and Twitter censored a controversial New York Post article critical of Joe Biden, sparking debate over social media platforms and their role in influencing the US presidential election.
In an unprecedented step against a major news
publication, Twitter blocked users from posting links to the Post story or photos from the unconfirmed report. Users attempting to share the story were shown a notice saying:
We can't complete this request because this
link has been identified by Twitter or our partners as being potentially harmful.
Users clicking or retweeting a link already posted to Twitter are shown a warning the link may be unsafe.
Twitter claimed it was limiting the
article's spread due to questions about the origins of the materials included in the article. Jack Dorsey, the CEO of Twitter, said the company's communication about the decision to limit the article's spread was not great, saying the team should have
shared more context publicly.
Facebook, meanwhile, placed restrictions on linking to the article, claiming there were questions about its validity.
The social media censorship drew swift backlash from figures on the political right, who accused
Facebook and Twitter of protecting Biden, who is leading Trump in national polls.
Twitter has updated a censorship policy which led it to block people from sharing a link to a story from the New York Post about Joe Biden and his son, Hunter.
The article contained screenshots of emails allegedly sent and received by Hunter Biden,
presidential candidate Joe Biden's son. It also contained personal photos of Hunter Biden, allegedly removed from a laptop computer while it was undergoing repairs at a store.
Twitter's Vijaya Gadde has now said posts will be flagged as containing
hacked material, rather than blocked. She tweeted:
We tried to find the right balance between people's privacy and the right of free expression, but we can do better.
Empowering people to assess
content for themselves was a better alternative for the public.
The European Commission is beefing up its weapons to take on Big Tech.
Under Commission Executive Vice President Margrethe Vestager, the commission is planning to merge two major legislative initiatives on competition into a single text.
One is the so-called New Competition Tool, a market investigation tool that would allow competition enforcers to act more swiftly and forcefully. The other is part of the Digital Services Act, a new set of rules due to be unveiled in December for companies
like Google, Apple and Amazon. Combined, the new powers would be known as the Digital Markets Act.
The act will include a list of do's and don'ts for so-called gatekeeping platforms -- or those who are indispensable for other companies to reach
consumers online -- to curb what it sees as anti-competitive behavior.
The CJEU has ruled to prevent national legislation from ordering telecommunication companies to transfer data in a general and indiscriminate manner to security agencies, even for purposes of national security.
A group of tech companies, publishers, and activist groups including the Electronic Frontier Foundation, Mozilla, and DuckDuckGo are backing a new standard to let internet users set their cookie privacy settings for the entire web.
Under EU law, every website needs to ask for permission from users before being able to set cookies. In particular this applies to cookies that allow website usage analytics, and also to the website history snooping that is used for targeted advertising. This permission is only
mandatory in the EU and parts of the USA, but no doubt this will spread.
Companies often try to make opting out from tracking cookies difficult by asking users to drill down into multiple forms, or else present the options in such a way as to
hide the ramifications of the choice.
Now the group of companies is championing a new standard, called Global Privacy Control, which lets users set a single setting in their browsers or through browser extensions telling
each website that they visit not to sell or share their data. It's already backed by some publishers including The New York Times, The Washington Post, and the Financial Times, as well as companies including Automattic, which operates blogging platforms
wordpress.com and Tumblr.
Advocates believe that under a provision of the California Consumer Privacy Act, activating the setting should send a legally binding request that website operators not sell their data. The setting may also be enforceable
under Europe's General Data Protection Regulation, and the backers of the standard are planning to communicate with European privacy regulators about the details of how that would work.
It is expected to take a little while for this new standard
to get legal backing, and in the meantime it will be implemented simply as advice to websites about a user's privacy preferences.
If adopted, the move will be a massive improvement for user privacy, but one also needs to know that estimates suggest
that this would lead to a halving of advertising income for websites, which may then lead to the end of some websites maintaining a free service.
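In practice, a browser with the setting enabled sends a `Sec-GPC: 1` request header with every request (and exposes `navigator.globalPrivacyControl` to scripts). A minimal sketch of how a site might honor the signal server-side; the function name and the plain header-dict interface are illustrative, not part of any particular framework, and real frameworks usually expose case-insensitive header lookup:

```python
def gpc_opted_out(headers):
    """Return True when the request carries a valid GPC opt-out signal.

    Under the Global Privacy Control proposal, only the exact header
    value "1" expresses the do-not-sell/share preference; any other
    value (or a missing header) carries no signal.
    """
    return headers.get("Sec-GPC", "").strip() == "1"

# A site honoring the signal would then skip setting tracking cookies
# or sharing the request with ad-tech partners for that visitor.
```

The point of the design is that the user states the preference once, in the browser, instead of answering a consent form on every site.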
Facebook's human rights team said it would not comply with a controversial social media law passed in Ankara this summer.
The bill requires social media companies with more than 1 million daily users in Turkey to appoint representatives in the
country, store user data locally and comply with state content removal requests, among other measures, by Oct. 1 or face steep fines and domestic access blocks on their platforms.
In Turkey, where 90 to 95% of traditional media outlets are run by
the government or government-friendly entities, social media platforms remain one of the few mediums for free expression in the country. Since the passing of a new social media law in late July, the future of free speech on such platforms has been in
limbo as social media giants consider their options to continue operating in the country.
Twitter and Google have yet to respond to the legislation, passing an Oct. 1 deadline to open an office in Turkey and appoint a representative that would be
subject to local tax codes and content removal requests from the Turkish authorities.
The Turkish internet censor will now issue warnings to noncompliant companies before applying an escalating scale of punishments, ranging from fines of $1.3 million
in November to $3.8 million in December, before local advertisement bans are imposed in January, followed by bandwidth throttling in April and May that would eventually render the platforms unusable in Turkey.
We are running a consultation about an updated version of the Statutory guidance on how the ICO will exercise its data protection regulatory functions of information
notices, assessment notices, enforcement notices and penalty notices.
This guidance is a requirement of the Data Protection Act 2018 and only covers data protection law under that Act. Our other regulatory activity and the other
laws we regulate are covered in our Regulatory action policy (which is currently under review).
We welcome written responses from all interested parties including members of the public and data controllers and those who represent
them. Please answer the questions in the survey and also tell us whether you are responding on behalf of an organisation or in a personal capacity.
We will use your responses to this survey to help us understand the areas where
organisations and members of the public are seeking further clarity about information notices, assessment notices, enforcement notices and penalty notices. We will only use this information to inform the final version of this guidance and not to consider
any regulatory action.
We will publish this guidance after the UK has left the EU and we have therefore drafted it accordingly.
EFF is standing with a huge coalition of organizations to urge Congress to oppose the Online Content Policy Modernization Act (OCPMA, S. 4632). Introduced by Sen. Lindsey Graham (R-SC), the OCPMA is yet another of this year's flood of misguided attacks
on Internet speech (read bill [pdf]). The bill would make it harder for online platforms to take common-sense moderation measures like removing spam or correcting disinformation, including disinformation about the upcoming election. But it doesn't stop
there: the bill would also upend longstanding balances in copyright law, subjecting ordinary Internet users to up to $30,000 in fines for everyday activities like sharing photos and writing online, without even the benefit of a judge and jury.
The OCPMA combines two previous bills. The first--the Online Freedom and Viewpoint Diversity Act (S. 4534)--undermines Section 230, the most important law protecting free speech online. Section 230 enshrines the common-sense
principle that if you say something unlawful online, you should be the one held responsible, not the website or platform where you said it. Section 230 also makes it clear that platforms have liability protections for the decisions they make to moderate
or remove online speech: platforms are free to decide their own moderation policies however they see fit. The OCPMA would flip that second protection on its head, shielding only platforms that agree to confine their moderation policies to a narrowly
tailored set of rules. As EFF and a coalition of legal experts explained to the Senate Judiciary Committee:
This narrowing would create a strong disincentive for companies to take action against a whole host of
disinformation, including inaccurate information about where and how to vote, content that aims to intimidate or discourage people from casting a ballot, or misleading information about the integrity of our election systems. S.4632 would also create a
new risk of liability for services that editorialize alongside user-generated content. In other words, sites that direct users to voter-registration pages, that label false information with fact-checks, or that provide accurate information about mail-in
voting, would face lawsuits over the user-generated content they were intending to correct.
It's easy to see the motivations behind the Section 230 provisions in this bill, but they simply don't hold up to scrutiny.
This bill is based on the flawed premise that social media platforms' moderation practices are rampant with bias against conservative views; while a popular meme in some right-wing circles, this view doesn't hold water. There are serious problems with
platforms' moderation practices, but the problem isn't the liberal silencing the conservative; the problem is the powerful silencing the powerless. Besides, it's absurd to suggest that the situation would somehow be improved by putting such severe
limits on how platforms moderate; the Internet is a better place when multiple moderation philosophies can coexist, some more restrictive and some more freeform.
The government forcing platforms to adopt a specific approach to
moderation is not just a bad idea; it's unconstitutional. As EFF explained in its own letter to the Judiciary Committee:
The First Amendment prohibits Congress from directly interfering with intermediaries'
decisions regarding what user-generated content they host and how they moderate that content. The OCPM Act seeks to coerce the same result by punishing services that exercise their rights. This is an unconstitutional condition. The government cannot
condition Section 230's immunity on interfering with intermediaries' First Amendment rights.
Sen. Graham has also used the OCPMA as his vehicle to bring back the CASE Act, a 2019 bill that would have created a new
tribunal for hearing small ($30,000!) copyright disputes, putting everyday Internet users at risk of losing everything simply for sharing copyrighted images or text online. This tribunal would exist within the Copyright Office, not the judicial branch,
and it would lack important protections like the right to a jury trial and registration requirements. As we explained last year, the CASE Act would usher in a new era of copyright trolling, with copyright owners or their agents sending notices en masse
to users for sharing memes and transformative works. When Congress was debating the CASE Act last year, its proponents laughed off concerns that the bill would put everyday Internet users at risk, clearly not understanding what a $30,000 fee would mean
to the average family. As EFF and a host of other copyright experts explained to the Judiciary Committee:
The copyright small claims dispute provisions in S. 4632 are based upon S. 1273, the Copyright Alternative in
Small-Claims Enforcement Act of 2019 (CASE Act), which could potentially bankrupt millions of Americans, and be used to target schools, libraries and religious institutions at a time when more of our lives are taking place online than ever before due to
the COVID-19 pandemic. Laws that would subject any American organization or individual -- from small businesses to religious institutions to nonprofits to our grandparents and children -- to up to $30,000 in damages for something as simple as posting a
photo on social media, reposting a meme, or using a photo to promote their nonprofit online are not based on sound policy.
The Senate Judiciary Committee plans to consider the OCPMA soon. This bill is far too much of
a mess to be saved by amendments. We urge the Committee to reject it.
Five bar owners in France have been arrested in Grenoble for offering public WiFi without keeping connection logs spying on their users.
All establishments offering public WiFi in France have been required since 2006 to keep logs tracking WiFi users.
Shockingly, cafe and bar owners found in violation of this law face a maximum of one year in prison and a maximum fine of €75,000.
The bar owners said they were unaware of the law, but whether restaurants are aware of the law or not, it does not
change the fact that the law is a testament to the infringement of privacy by the French government. The existence of the law means that the public should avoid using public WiFi and/or use a VPN.