Dmitry Kuznetsov, better known by his stage name Husky, was a minor star on Russia's flourishing hip hop scene until police arrested him last month for staging an impromptu concert from the roof of a parked car.
A brief brush with the law has boosted the rapper's profile and turned his I'll Sing My Music single into a national battle cry against arts censorship.
Husky is by no means the only artist feeling the heat as Russia cracks down on alternative music. But the public outcry about his case has highlighted the risks the Kremlin faces as it moves to exert control over Russian youth's favourite form of music.
Husky had leapt on to the roof of a car to perform in the southern city of Krasnodar on November 21st after a local club, citing concern about Russian anti-extremist laws, abruptly cancelled a gig he had planned. The following day he was
sentenced to 12 days in police detention on twin charges of petty hooliganism and refusing to take a drink and drugs test.
In a surprise development, Husky was released a few hours before his next performance, having served less than half of his sentence. Opposition leader Alexei Navalny, who attended the Moscow concert with his family, said the authorities had let the rapper out "not just because they are scared but because they know they are in the wrong."
Lord Ashton of Hyde to move that the draft Regulations laid before the House on 10 October be approved. Special attention drawn to the instrument by the Joint Committee on Statutory Instruments, 38th Report, 4th Report from the Secondary
Legislation Scrutiny Committee (Sub-Committee B)
Guidance on Age-verification Arrangements
Lord Ashton of Hyde to move that the draft Guidance laid before the House on 25 October be approved. Special attention drawn to the instrument by the Joint Committee on Statutory Instruments, 39th Report, 4th Report from the Secondary Legislation
Scrutiny Committee (Sub-Committee B)
Lord Stevenson of Balmacara to move that this House regrets that the draft Online Pornography (Commercial Basis) Regulations 2018 and the draft Guidance on Age-verification Arrangements do not bring into force section 19 of the Digital Economy
Act 2017, which would have given the regulator powers to impose a financial penalty on persons who have not complied with their instructions to require that they have in place an age verification system which is fit for purpose and effectively
managed so as to ensure that commercial pornographic material online will not normally be accessible by persons under the age of 18.
Guidance on Ancillary Service Providers
Lord Ashton of Hyde to move that the draft Guidance laid before the House on 25 October be approved. Special attention drawn to the instrument by the Joint Committee on Statutory Instruments, 39th Report, 4th Report from the Secondary Legislation
Scrutiny Committee (Sub-Committee B)
The DCMS and BBFC age verification scheme has been widely panned because fundamentally the law imposes no requirement to actually protect people's identity data, which can be coupled with their sexual preferences and sexuality. The scheme only offers voluntary suggestions that age verification services and websites should protect their users' privacy. But one only has to look to Google, Facebook and Cambridge Analytica to see how worthless mere advice is. GDPR is often quoted, but that only requires that user consent is obtained. One will simply have to tick the 'improved user experience' consent box to watch the porn, and thereafter the companies can do what the fuck they like with the data.
The UK's intelligence agencies are to significantly increase their use of large-scale data hacking after claiming that more targeted operations are being rendered obsolete by technology.
The move will see an expansion in what is known as the bulk equipment interference (EI) regime -- the process by which GCHQ can target entire communication networks overseas in a bid to identify individuals who pose a threat to national security.
[Note that the idea this is somehow only targeted at foreigners is misleading. Five countries cooperate so that they can mutually target each other's users to work around limits on snooping on one's own country.]
A letter from the security minister, Ben Wallace, to the head of the intelligence and security committee, Dominic Grieve, quietly filed in the House of Commons library last week, states:
Following a review of current operational and technical realities, GCHQ have ... determined that it will be necessary to conduct a higher proportion of ongoing overseas focused operational activity using the bulk EI regime than was originally ...
As the controversy over the EU's Article 13 censorship machines continues, Twitter appears to be the communications weapon of choice for parties on both sides.
As one of the main opponents of Article 13, and in particular its requirement for upload filtering, Julia Reda MEP has been a frequent target for proponents. Accused of being a YouTube/Google shill (despite speaking out loudly against YouTube's manoeuvring), Reda has endured a lot of criticism. As an MEP, she's probably used to that.
However, a recent response to one of her tweets from music giant IFPI opens up a somewhat ironic can of worms that deserves a closer look.
Since kids will be affected by Article 13, largely due to their obsession with YouTube, Reda recently suggested that they should lobby their parents to read up on the legislation. In tandem with pop-ups from YouTube advising users to oppose
Article 13, that seemed to irritate some supporters of the proposed law.
As the response from IFPI's official account shows, Reda's advice went down like a lead balloon with the music group, a key defender of Article 13. The IFPI tweeted:
Shame on you: Do you really approve of minors being manipulated by big tech companies to deliver their commercial agenda?
It's pretty ironic that IFPI has called out Reda for informing kids about copyright law to further the aims of big tech companies. As we all know, the music and movie industries have been happily doing exactly the same to further their own aims
for at least ten years and probably more.
Digging through the TF archives, there are way too many articles detailing how big media has directly targeted kids with their message over the last decade. Back in 2009, for example, a former anti-piracy consultant for EMI lectured kids as young
as five on anti-piracy issues.
Ofcom has appointed Stephen Nuttall to its Content Board.
Ofcom's Content Board is a committee of the main Ofcom Board. It has advisory responsibility for a wide range of content issues, including the regulation of television, radio and video-on-demand quality and standards.
Stephen Nuttall has more than thirty years' experience working as a senior executive and a consultant in the sports, media and digital industries. Stephen's previous positions include Senior Director at YouTube EMEA and Group Commercial Director
Video game developer Ubisoft recently made the censorship news by deciding that its tactical shooter Rainbow Six Siege would be unified into a single worldwide version. This meant that the game would have to be heavily censored to the standards of the lowest common denominator, China.
This caused a bit of a stink amongst fans, so Ubisoft announced a U-turn on the China-friendly censorship policy.
Now it seems that Ubisoft is still keen on a heavily censored version that can be played in China, but this time the company is changing tack on its reasoning. Ubisoft is now reporting that parents and consumer groups are complaining that the game has too many references to sex, many violent scenes, and allusions to gambling. It adds that parents say these issues are troubling in a game intended for teenagers.
'After listening to criticism', the company decided to make some changes to the game. It will remove some of the sexual references and violent content, and make the loot boxes easier to come by. Ubisoft is hoping the changes will be enough to satisfy the critics and make the customers happy as well (especially those in China).
Online games distributor Steam recently relaxed its previous prohibitions on adult gaming, but it still draws the line at games it considers illegal.
Now, according to some developers, Valve, the company behind Steam, is going after games that feature themes of child exploitation, which it seems to define, at least in part, as games with sex scenes or nudity where the characters are in high school.
Over the past few weeks, the company has removed the store pages of several visual novels, including cross-dressing yaoi romance Cross Love, Catholic school adult visual novel Hello Goodbye, a story about the love between siblings Imolicious, and cat girl game MaoMao Discovery Team. The developers of these games all claim to have received similar emails stating that their games could not be released on Steam.
There are common threads that link the games in question: 1) Cross Love, Hello Goodbye, and Imolicious feature school settings, and 2) all four of the aforementioned games contain adult elements and centre around anime-styled characters who
appear young -- in some cases uncomfortably so.
A bill that would force ISPs in Israel to censor pornographic sites by default has been amended after heavy criticism from lawmakers over privacy concerns.
An earlier version of the bill was unanimously approved by the Ministerial Committee for Legislation in late October, but now a new version of the legislation has been passed, sponsored by Likud MK Miki Zohar and Jewish Home MK Shuli Moalem-Refaeli. The differences seem subtle, centring on whether customers opt in to, or opt out of, network-level website blocking.
Customers will have to confirm their preferences for website blocking every 3 months but may change their settings at any time.
The bill will incentivise internet companies to actively market existing website blocking software to families. ISPs will receive NIS 0.50 (about $0.13) for every subscriber who opts to block adult sites.
In a refreshing divergence from UK internet censorship, ISPs will be legally required to delete all data related to their users' surfing habits, to prevent creating de facto -- and easily leaked -- black lists of pornography consumers.
In comparison, internet companies are allowed to use or sell UK customer data for any purpose they so desire as long as customers tick a consent box with some woolly text about improving the customer's experience.
Israeli Prime Minister Benjamin Netanyahu moved to halt the adoption of a new law aimed at curbing pornographic content on the Internet and possibly keeping tabs on people who watch porn. Netanyahu inquired:
We don't want our children to be exposed to harmful content, but my concern is about inserting regulation into a space in which there is no government regulation. Who will decide which content is permitted and which is forbidden? Who will decide ...
Facebook has added a new category of censorship, sexual solicitation. It added the update on 15th October but no one really noticed until recently.
The company has quietly updated its content-moderation policies to censor implicit requests for sex. The expanded policy specifically bans sexual slang, hints of sexual roles, positions or fetish scenarios, and erotic art when mentioned with a sex act. Vague but suggestive statements such as 'looking for a good time tonight' when soliciting sex are also no longer allowed.
The new policy reads:
15. Sexual Solicitation Policy
Do not post:
Content that attempts to coordinate or recruit for adult sexual activities, including but not limited to:
- Filmed sexual activities
- Pornographic activities, strip club shows, live sex performances, erotic dances
- Sexual, erotic, or tantric massages

Content that engages in explicit sexual solicitation by, including but not limited to the following, offering or asking for:
- Sex or sexual partners
- Sex chat or conversations
- Nude images

Content that engages in implicit sexual solicitation, which can be identified by offering or asking to engage in a sexual act and/or acts identified by other suggestive elements such as any of the following:
- Vague suggestive statements, such as "looking for a good time tonight"
- Sexualized slang
- Using sexual hints such as mentioning sexual roles, sex positions, fetish scenarios, sexual preference/sexual partner preference, state of arousal, act of sexual intercourse or activity (sexual penetration or self-pleasuring), commonly sexualized areas of the body such as the breasts, groin, or buttocks, state of hygiene of genitalia or buttocks
- Content (hand drawn, digital, or real-world art) that may depict explicit sexual activity or suggestively posed person(s)

Content that offers or asks for other adult activities such as:
- Commercial pornography
- Partners who share fetish or sexual interests

Sexually explicit language that adds details and goes beyond mere naming or mentioning of:
- A state of sexual arousal (wetness or erection)
- An act of sexual intercourse (sexual penetration, self-pleasuring or exercising fetish scenarios)
Comment: Facebook's Sexual Solicitation Policy is a Honeypot for Trolls
Facebook just quietly adopted a policy that could push thousands of innocent people off of the platform. The new "sexual solicitation" rules forbid pornography and other explicit sexual content (which was already functionally banned under a different statute), but they don't stop there: they also ban "implicit sexual solicitation", including the use of sexual slang, the solicitation of nude images, discussion of "sexual partner preference," and even expressing interest in sex. That's not an exaggeration: the new policy bars "vague suggestive statements, such as 'looking for a good time tonight.'" It wouldn't be a stretch to think that asking "Netflix and chill?" could run afoul of this policy.
The new rules come with a baffling justification, seemingly blurring the line between sexual exploitation and plain old doing it:
[P]eople use Facebook to discuss and draw attention to sexual violence and exploitation. We recognize the importance of and want to allow for this discussion. We draw the line, however, when content facilitates, encourages or coordinates sexual
encounters between adults.
In other words, discussion of sexual exploitation is allowed, but discussion of consensual, adult sex is taboo. That's a classic censorship model: speech about sexuality being permitted only when sex is presented as dangerous and shameful. It's
especially concerning since healthy, non-obscene discussion about sex--even about enjoying or wanting to have sex--has been a component of online communities for as long as the Internet has existed, and has for almost as long been the target of
governmental censorship efforts.
Until now, Facebook has been a particularly important place for groups who aren't well represented in mass media to discuss their sexual identities and practices. At the very least, users should get the final say about whether they want to see such
speech in their timelines.
Overly Restrictive Rules Attract Trolls
Is Facebook now a sex-free zone? Should we be afraid of meeting potential partners on the platform or even disclosing our sexual orientations?
Maybe not. For many users, life on Facebook might continue as it always has. But therein lies the problem: the new rules put a substantial portion of Facebook users in danger of violation. Fundamentally, that's not how platform moderation
policies should work--with such broadly sweeping rules, online trolls can take advantage of reporting mechanisms to punish groups they don't like.
Combined with opaque and one-sided flagging and reporting systems, overly restrictive rules can incentivize abuse from bullies and other bad actors. It's not just individual trolls either: state actors have systematically abused Facebook's flagging process to censor political enemies. With these new rules, organizing that type of attack just became a lot easier. A few reports can drag a user into Facebook's labyrinthine enforcement regime, which can result in having a group page deactivated or even being banned from Facebook entirely. This process gives the user no meaningful opportunity to appeal a bad decision.
Given the rules' focus on sexual interests and activities, it's easy to imagine who would be the easiest targets: sex workers (including those who work lawfully), members of the LGBTQ community, and others who congregate online to discuss issues
relating to sex. What makes the policy so dangerous to those communities is that it forbids the very things they gather online to discuss.
Even before the recent changes at Facebook and Tumblr, we'd seen trolls exploit similar policies to target the LGBTQ community and censor sexual health resources. Entire harassment campaigns have organized to use payment processors' reporting systems to cut off sex workers' income. When online platforms adopt moderation policies and reporting processes, it's essential that they consider how those policies and systems might be weaponized against marginalized groups.
A recent Verge article quotes a Facebook representative as saying that people sharing sensitive information in private Facebook groups will be safe, since Facebook relies on reports from users. If there are no tattle-tales in your group, the reasoning goes, then you can speak freely without fear of punishment. But that assurance rings rather hollow: in today's world of online bullying and brigading, there's no question of if your private group will be infiltrated by the trolls; it's when.
Did SESTA/FOSTA Inspire Facebook's Policy Change?
The rule change comes a few months after Congress passed the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act (SESTA/FOSTA), and it's hard not to wonder if the policy is the direct result of
the new Internet censorship laws.
SESTA/FOSTA opened online platforms to new criminal and civil liability at the state and federal levels for their users' activities. While ostensibly targeted at online sex trafficking, SESTA/FOSTA also made it a crime for a platform to
"promote or facilitate the prostitution of another person." The law effectively blurred the distinction between adult, consensual sex work and sex trafficking. The bill's supporters argued that forcing platforms to clamp down on all sex work was the only way to curb trafficking--never mind the growing chorus of trafficking experts arguing the very opposite.
As SESTA/FOSTA was debated in Congress, we repeatedly pointed out that online platforms would have little choice but to over-censor: the fear of liability would force them not just to stop at sex trafficking or even sex work, but to take much more restrictive approaches to sex and sexuality in general, even in the absence of any commercial transaction. In EFF's ongoing legal challenge to SESTA/FOSTA, we argue that the law unconstitutionally silences lawful speech online.
While we don't know if the Facebook policy change came as a response to SESTA/FOSTA, it is a perfect example of what we feared would happen: platforms would decide that the only way to avoid liability is to ban a vast range of discussions of sex.
Wrongheaded as it is, the new rule should come as no surprise. After all, Facebook endorsed SESTA/FOSTA. Regardless of whether one caused the other or not, both reflect the same vision of how the Internet should work--a place where certain
topics simply cannot be discussed. Like SESTA/FOSTA, Facebook's rule change might have been made to fight online sexual exploitation. But like SESTA/FOSTA, it will do nothing but push innocent people offline.
Facebook has been fined €10m (£8.9m) by Italian authorities for misleading users over its data practices.
The two fines issued by Italy's competition watchdog are some of the largest levied against the social media company for data misuse.
The Italian regulator found that Facebook had breached the country's consumer code by:
Misleading users in the sign-up process about the extent to which the data they provide would be used for commercial purposes.
Emphasising only the free nature of the service, without informing users of the "profitable ends that underlie the provision of the social network", and so encouraging them to make a decision of a commercial nature that they would not
have taken if they were in full possession of the facts.
Forcing an "aggressive practice" on registered users by transmitting their data from Facebook to third parties, and vice versa, for commercial purposes.
The company was specifically criticised for the default setting of the Facebook Platform services, which, in the words of the regulator, prepares the transmission of user data to individual websites/apps without express consent from users.
Although users can disable the platform, the regulator found that its opt-out nature did not provide a fully free choice.
The authority has also directed Facebook to publish an apology to users on its website and on its app.
Cuba has passed a new law that gives government inspectors the power to close down any exhibitions or performances that are considered a violation of the socialist revolutionary values of the country.
The law, known as decree 349, published in July, allows so-called 'supervisory inspectors' to censor cultural events ranging from art exhibitions to concerts, and to immediately close any of them if they see them as denigrating the values of the country. They also have the right to revoke the business licence of any restaurant or bar hosting an 'undesirable' event.
The decree applies to obscene speech, vulgarity, sexism, excessive use of force and more.
Despite the claim that the authorities are trying to reduce the degree of resentment in society, cultural representatives still call the law fascist.
Image hosting service Tumblr is banning all adult images of sex and nudity from 17th December 2018. This seems to have been sparked by the app being banned from Apple's App Store after a child porn image was detected being hosted by Tumblr. Tumblr explained the censorship process in a blog post:
Starting Dec 17, adult content will not be allowed on Tumblr, regardless of how old you are. You can read more about what kinds of content are not allowed on Tumblr in our Community Guidelines. If you spot a post that you don't think belongs on
Tumblr, period, you can report it: From the dashboard or in search results, tap or click the share menu (paper airplane) at the bottom of the post, and hit Report.
Adult content primarily includes photos, videos, or GIFs that show real-life human genitals or female-presenting nipples, and any content, including photos, videos, GIFs and illustrations, that depicts sex acts.
Examples of exceptions that are still permitted are exposed female-presenting nipples in connection with breastfeeding, birth or after-birth moments, and health-related situations, such as post-mastectomy or gender confirmation surgery. Written content such as erotica, nudity related to political or newsworthy speech, and nudity found in art, such as sculptures and illustrations, can also still be freely posted on Tumblr.
Any images identified as adult will be set as unviewable by anyone except the poster. There will be an appeals process to contest decisions held to be incorrect.
Inevitably Tumblr's algorithms are not exactly accurate when it comes to detecting sex and nudity. The Guardian noted that ballet dancers, superheroes and a picture of Christ have all fallen foul of Tumblr's new pornography ban, after the images
were flagged up as explicit content by the blogging site's artificial intelligence (AI) tools.
The actor and Tumblr user Wil Wheaton posted one example:
An image search for beautiful men kissing, which was flagged as explicit within 30 seconds of me posting it.
These images are not explicit. These pictures show two adults, engaging in consensual kissing. That's it. It isn't violent, it isn't pornographic. It's literally just two adult humans sharing a kiss.
Other users chronicled flagged posts, including historical images of (clothed) women of colour, a photoset of the actor Sebastian Stan wearing a selection of suits with no socks on, an oil painting of Christ wearing a loincloth, a still of ballet
dancers and a drawing of Wonder Woman carrying fellow superhero Harley Quinn. None of the images violate Tumblr's stated policy.
Tumblr, after years of being a space for nsfw artists to reach a community of like-minded individuals to enjoy their work, has decided to close its metaphorical doors to adult content.
Solution: Stop it. Let people post porn, it's 90% of the reason anybody is on the site in the first place. Or, if you really want a non-18+ Tumblr, start a new one with that specific goal in mind. Don't rip down what people have spent years building.
The Free Speech coalition [representing the US adult trade] released the following statement regarding the recent announcement about censorship at Tumblr:
The social media platform Tumblr has announced that on December 17, it will effectively ban all adult content. Tumblr follows the lead of Facebook, Instagram, YouTube and other social media platforms, who over the past few years have meticulously
scrubbed their corners of the internet of adult content, sex, and sexuality, in the name of brand protection and child protection.
While some in the adult industry may cheer the end of Tumblr as a never-ending source of free content, specifically pirated content, it is concerning that of the major social media platforms, only Twitter and Reddit remain in any way tolerant of
adult workers -- and there are doubts as to how much longer that will last.
As legitimate platforms ban or censor adult content -- having initially benefited from traffic that adult content brought them -- illegitimate platforms for distribution take their place. The closure of Tumblr only means more piracy, more
dispersal of community, and more suffering for adult producers and performers.
Free Speech Coalition was founded to fight government censorship -- set raids and FBI entrapment, bank seizures and jail terms. The internet gave us freedom from much that had plagued us, particularly local ordinances and overzealous prosecutors.
But now, when corporate censors suspend your account, the only choice is to abandon the platform; there is no opportunity for arbitration or appeal.
When companies like Google and Facebook (and subsidiaries like YouTube and Instagram) control over 70% of all web traffic, adult companies are denied a market as effectively as a state-level sex toy ban. And when sites like Tumblr and Twitter can
close an account with millions of followers without warning, the effect is the same on a business -- particularly a small, performer-run one -- as an FBI seizure.
As social media companies become more powerful, we must demand recourse, but we also must look beyond our industry and continue to build alliances -- with women, with LGBTQ groups, with sex workers and sex educators, with artists -- who
implicitly understand the devastating effect of this new form of censorship.
These communities have seen the devastation wreaked when platforms use purges of adult content as a sledgehammer, broadly banning sexual health information, vibrant communities based around non-normative genders and sexualities, resources for sex
workers, and political and cultural commentary that engages with such topics.
The loss of these platforms isn't just about business, it's about the loss of vital communities and education -- and organizing. We use these platforms not only to grow our reach, but to communicate with one another, to rally, to drive awareness
of issues of sex and sexuality. They have become a central source of power. And today, we're one step closer to losing that as well.
Poland stands up to champion the livelihoods of thousands of Europeans against the disgraceful EU that wants to grant large, mostly American companies dictatorial copyright control of the internet
In 2011, Europeans rose up over ACTA , the misleadingly named "Anti-Counterfeiting Trade Agreement," which created broad surveillance and censorship regimes for the internet. They were successful in large part thanks to the Polish
activists who thronged the streets to reject the plan, which had been hatched and exported by the US Trade Representative.
The Poles aren't having any of it: a broad coalition of Poles from the left and the right have come together to oppose the new Directive, dubbing it "ACTA2," which should give you an idea of how they feel about the matter.
There are now enough national governments opposed to the Directive to constitute a "blocking minority" that could stop it dead. Alas, the opposition is divided on whether to reform the offending parts of the Directive, or eliminate them
outright (this division is why the Directive squeaked through the last vote, in September), and unless they can work together, the Directive still may proceed.
A massive coalition of 15,000 Polish creators whose videos, photos and text are enjoyed by over 20,000,000 Poles have signed an open letter supporting the idea of a strong, creator-focused copyright and rejecting the new Copyright Directive as a
direct path to censoring filters that will deprive them of their livelihoods.
The coalition points out that online media is critical to the lives of everyday Poles for purposes that have nothing to do with the entertainment industry: education, the continuation of Polish culture, and connections to the global Polish diaspora.
Polish civil society and its ruling political party are united in opposing ACTA2; Polish President Andrzej Duda vowed to oppose it.
Early next month, the Polish Internet Governance Forum will host a roundtable on the question; they have invited proponents of the Directive to attend and publicly debate the issue.
The Daily Mail reports on large-scale harvesting of user data, and notes that PayPal has been passing on passport photos used for account verification to Microsoft for its facial recognition database.
Parliament's fake news inquiry has published a cache of seized Facebook documents including internal emails sent between Mark Zuckerberg and the social network's staff. The emails were obtained from the chief of a software firm that is suing the
tech giant. About 250 pages have been published, some of which are marked highly confidential.
Facebook had objected to their release.
Damian Collins MP, the chair of the parliamentary committee involved, highlighted several key issues in an introductory note. He wrote that:
Facebook allowed some companies to maintain "full access" to users' friends data even after announcing changes to its platform in 2014/2015 to limit what developers could see. "It is not clear that there was any user consent for
this, nor how Facebook decided which companies should be whitelisted," Mr Collins wrote
Facebook had been aware that an update to its Android app that let it collect records of users' calls and texts would be controversial. "To mitigate any bad PR, Facebook planned to make it as hard as possible for users to know that this
was one of the underlying features," Mr Collins wrote
Facebook used data provided by the Israeli analytics firm Onavo to determine which other mobile apps were being downloaded and used by the public. It then used this knowledge to decide which apps to acquire or otherwise treat as a threat
there was evidence that Facebook's refusal to share data with some apps caused them to fail
there had been much discussion of the financial value of providing access to friends' data
In response, Facebook has said that the documents had been presented in a very misleading manner and required additional context.
Mastercard and Microsoft are collaborating in an identity management system that promises to remember users' identity verification and passwords between sites and services.
Mastercard highlights four particular areas of use: financial services, commerce, government services, and digital services (eg social media, music streaming services and rideshare apps). This means the system would let users manage their data
across both websites and real-world services.
However, the inclusion of government services is an eyebrow-raising one. Microsoft and Mastercard's system could link personal information including taxes, voting status and criminal record, with consumer services like social media accounts,
online shopping history and bank accounts.
As well as the stifling level of tailored advertising you'd receive if the system knew everything you did, this sets a dangerous precedent for every byte of users' information to be stored under one roof -- perfect for an opportunistic hacker
or businessman. Mastercard mentions that it is working closely with players like Microsoft, showing that many businesses would have access to the data.
Neither Microsoft nor Mastercard has slated a release date for the system, promising only that additional details on these efforts will be shared in the coming months.
Defending equal access to the free and open internet is core to Reddit's ideals, and something that redditors have told us time and again they hold dear too, from the SOPA/PIPA battle to the fight for Net Neutrality. This is why even though
we are an American company with a user base primarily in the United States, we've nevertheless spent a lot of time this year
warning about how an overbroad EU Copyright Directive could restrict Europeans' equal access to the open Internet--and to Reddit.
Despite these warnings, it seems that EU lawmakers still don't fully appreciate the law's potential impact, especially on small and medium-sized companies like Reddit. So we're stepping things up to draw attention to the problem. Users in the EU
will notice that when they access Reddit via desktop, they are greeted by a modal informing them about the Copyright Directive and referring them to detailed resources on proposed fixes.
The problem with the Directive lies in Articles 11 (link licensing fees) and 13 (copyright filter requirements), which set sweeping, vague requirements that create enormous liability for platforms like ours. These requirements eliminate the
previous safe harbors that allowed us the leeway to give users the benefit of the doubt when they shared content. But under the new Directive, activity that is core to Reddit, like sharing links to news articles, or the use of existing content
for creative new purposes (r/photoshopbattles, anyone?) would suddenly become questionable under the law, and it is not clear right now that there are feasible mitigating actions that we could take while preserving core site functionality. Even
worse, smaller but similar attempts in various countries in Europe in the past have shown that
such efforts have actually harmed publishers and creators .
Accordingly, we hope that today's action will drive the point home that there are grave problems with Articles 11 and 13, and that the current trilogue negotiations will choose to remove both entirely. Barring that, however, we have a number of
suggestions for ways to improve both proposals. Engine and the Copia Institute have compiled them at https://dontwreckthe.net/. We hope you will read them and consider calling your Member of European Parliament (
look yours up here). We also hope that EU lawmakers will listen to those who use and understand the internet the most, and reconsider these problematic articles. Protecting rights holders need not come at the cost of silencing Europeans.
A parliamentary scrutiny committee condemns as 'defective' a DCMS Statutory Instrument excusing Twitter and Google images from age verification. Presumably this is one of the reasons for the delayed introduction of age verification.
There's a joint committee to scrutinise laws passed in parliament via Statutory Instruments. These are laws that are not generally presented to parliament for discussion, and are passed by default unless challenged.
The committee has now taken issue with a DCMS law to excuse the likes of social media and search engines from requiring age verification for any porn images that may get published on the internet. The committee reports from a session on 21st
November 2018 that the law was defective and 'makes an unexpected use of the enabling power'. Presumably this means that the DCMS has gone beyond the scope of what can be passed without full parliamentary scrutiny.
Draft S.I.: Reported for defective drafting and for unexpected use of powers Online Pornography (Commercial Basis) Regulations 2018
7.1 The Committee draws the special attention of both Houses to these draft Regulations on the grounds that they are defectively drafted and make an unexpected use of the enabling power.
7.2 Part 3 of the Digital Economy Act 2017 ("the 2017 Act") contains provisions designed to prevent persons under the age of 18 from accessing internet sites which contain pornographic material. An age-verification regulator is given
a number of powers to enforce the requirements of Part 3, including the power to impose substantial fines.
7.3 Section 14(1) is the key requirement. It provides:
"A person contravenes [Part 3 of the Act] if the person makes pornographic material available on the internet to persons in the United Kingdom on a commercial basis other than in a way that secures that, at any given time, the material
is not normally accessible by persons under the age of 18".
7.4 The term "commercial basis" is not defined in the Act itself. Instead, section 14(2) confers a power on the Secretary of State to specify in regulations the circumstances in which, for the purposes of Part 3, pornographic material
is or is not to be regarded as made available on a commercial basis. These draft regulations would be made in exercise of that power. Regulation 2 provides:
"(1) Pornographic material is to be regarded as made available on the internet to persons in the United Kingdom on a commercial basis for the purposes of Part 3 of the Digital Economy Act 2017 if either paragraph (2) or (3) are met.
(2) This paragraph applies if access to that pornographic material is available only upon payment.
(3) This paragraph applies (subject to paragraph (4)) if the pornographic material is made available free of charge and the person who makes it available receives (or reasonably expects to receive) a payment, reward or other benefit in
connection with making it available on the internet.
(4) Subject to paragraph (5), paragraph (3) does not apply in a case where it is reasonable for the age-verification regulator to assume that pornographic material makes up less than one-third of the content of the material made available on
or via the internet site or other means (such as an application program) of accessing the internet by means of which the pornographic material is made available.
(5) Paragraph (4) does not apply if the internet site or other means (such as an application program) of accessing the internet (by means of which the pornographic material is made available) is marketed as an internet site or other means of
accessing the internet by means of which pornographic material is made available to persons in the United Kingdom."
7.5 The Committee finds these provisions difficult to understand, whether as a matter of simple English or as legal propositions. Paragraphs (4) and (5) are particularly obscure.
7.6 As far as the Committee can gather from the Explanatory Memorandum, the policy intention is that a person will be regarded as making pornographic material available on the internet on a commercial basis if:
(A) a charge is made for access to the material; OR
(B) the internet site is accessible free of charge, but the person expects to receive a payment or other commercial benefit, for example through advertising carried on the site.
7.7 There is, however, an exception to (B): in cases in which no access charge is made, the person will NOT be regarded as making the pornographic material available on a commercial basis if the material makes up less than one-third of the
content on the internet site--even if the person expects to receive a payment or other commercial benefit from the site. But that exception does not apply in a case where the person markets it as a pornographic site, or markets an "app"
as a means of accessing pornography on the site.
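The Committee's reading in paragraphs 7.6-7.7 amounts to a small decision procedure, and can be sketched as one. This is an illustrative encoding only, not legal advice; the function and parameter names are our own, not the regulation's:

```python
# Illustrative sketch of the Committee's reading of draft regulation 2,
# encoded as a boolean decision procedure. Not legal advice; all names
# here are our own, not the regulation's.

def on_commercial_basis(charges_for_access: bool,
                        expects_payment_or_benefit: bool,
                        porn_fraction: float,
                        marketed_as_porn: bool) -> bool:
    """Return True if material counts as made available on a commercial basis."""
    if charges_for_access:                  # condition (A) / paragraph (2)
        return True
    if expects_payment_or_benefit:          # condition (B) / paragraph (3)
        # Paragraph (4) exception: under one-third pornographic content...
        if porn_fraction < 1 / 3 and not marketed_as_porn:
            return False                    # ...unless marketed as porn (paragraph (5))
        return True
    return False

# A free, ad-funded site that is 30% pornographic and not marketed as
# pornographic falls outside the regime on this reading:
print(on_commercial_basis(False, True, 0.30, False))  # → False
```

Encoding the test this way makes the Committee's concern concrete: the outcome turns entirely on `porn_fraction`, a quantity the regulation never explains how to measure.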
7.8 As the Committee was doubtful whether regulation 2 as drafted is effective to achieve the intended result, it asked the Department for Digital, Culture, Media and Sport a number of questions. These were designed to elicit information about
the regulation's meaning and effect.
7.9 The Committee is disappointed with the Department's memorandum in response, printed at Appendix 7: it fails to address adequately the issues raised by the Committee.
7.10 The Committee's first question asked the Department to explain why paragraph (1) of regulation 2 refers to whether either paragraph (2) or (3) "are met" rather than "applies". The Committee raised this point because
paragraphs (2) and (3) each begin with "This paragraph applies if ...". There is therefore a mismatch between paragraph (1) and the subsequent paragraphs, which could make the regulation difficult to interpret. It would be appropriate
to conclude paragraph (1) with "is met" only if paragraphs (2) and (3) began with "The condition in this paragraph is met if ...". The Department's memorandum does not explain this discrepancy. The Committee accordingly
reports regulation 2(1) for defective drafting.
7.11 The first part of the Committee's second question sought to probe the intended effect of the words in paragraph (4) of regulation 2 italicised above, and how the Department considers that effect is achieved.
7.12 While the Department's memorandum sets out the policy reasons for setting the one-third threshold, it offers little enlightenment on whether paragraph (4) is effective to achieve the policy aims. Nor does it deal properly with the second
part of the Committee's question, which sought clarification of the concept of "one-third of ... material ... on ... [a] means .... of accessing the internet ...".
7.13 The Committee is puzzled by the references in regulation 2(4) to the means of accessing the internet. Section 14(2) of the 2017 Act confers a power on the Secretary of State to specify in regulations circumstances in which pornographic
material is or is not to be regarded as made available on the internet on a commercial basis. The means by which the material is accessed (for example, via an application program on a smart phone) appears to be irrelevant to the question of
whether it is made available on the internet on a commercial basis. The Committee remains baffled by the concept of "one-third of ... material ... on [a] means ... of accessing the internet".
7.14 More generally, regulation 2(4) fails to specify how the one-third threshold is to be measured and what exactly it applies to. Will the regulator be required to measure one-third of the pictures or one-third of the words on a particular
internet site or both together? And will a single webpage on the site count towards the total if less than one-third of the page's content is pornographic--for example, a sexually explicit picture occupying 32% of the page, with the remaining 68%
made up of an article about fishing? The Committee worries that the lack of clarity in regulation 2(4) may afford the promoter of a pornographic website opportunities to circumvent Part 3 of the 2017 Act.
7.15 The Committee is particularly concerned that a promoter may make pornographic material available on one or more internet sites containing multiple pages, more than two-thirds of which are non-pornographic. For every 10 pages of pornography,
there could be 21 pages about (for example) gardening or football. Provided the sites are not actively marketed as pornographic, they would not be regarded as made available on a commercial basis. This means that Part 3 of the Act would not
apply, and the promoter would be free to make profits through advertising carried on the sites, while taking no steps at all to ensure that they were inaccessible to persons under 18.
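The Committee's arithmetic in paragraph 7.15 checks out: 10 pornographic pages out of 31 total is roughly 32.3%, just under the one-third threshold.

```python
# Checking the Committee's worked example in paragraph 7.15:
# 10 pornographic pages padded out with 21 innocuous ones.
porn_pages = 10
other_pages = 21
fraction = porn_pages / (porn_pages + other_pages)
print(f"{fraction:.1%}")   # 32.3% -- just under one-third
print(fraction < 1 / 3)    # True: the paragraph (4) exception applies
```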
7.16 The Committee anticipates that the shortcomings described above are likely to cause significant difficulty in the application and interpretation of regulation 2(4). The Committee also doubts whether Parliament contemplated, when enacting
Part 3 of the 2017 Act, that the power conferred by section 14(2) would be exercised in the way provided for in regulation 2(4). The Committee therefore reports regulation 2(4) for defective drafting and on the ground that it appears to make
an unexpected use of the enabling power.
Sony president Atsushi Morita has made the first official comments about his company's new-found enthusiasm for video game censorship. In remarks posted on the Japanese website Ebitsu.net, without official translation, he purportedly told
attendees at a Japan Studio event that expression restrictions "[have been] adjusted to the global standards". He apparently concluded:
Considering the balance between freedom of expression and safety to children, I think that it is a difficult problem.
One video game series that's been affected by Sony's censorship is Senran Kagura. The producer of the latest game, Kenichiro Takaki, commented that the next title in the series is going to take time as it deals with these new
regulations. He said:
We have to make games in a way that they aren't misunderstood. Certain things are harder than they've ever been before. Given that, I think [the game] is going to take some time.
Kingdom Hearts 3 is an upcoming video game that features Winnie the Pooh.
Now China's president Xi Jinping has taken offence at his gait and pot belly being likened to Pooh bear, so Chinese censors have to spend hours ensuring that images of the bear are airbrushed out of Chinese life.
A Chinese website sharing images of the upcoming game revealed the game's interesting form of censorship. The iconic Winnie the Pooh is censored out with a gigantic white light.
Chinese internet companies have started keeping detailed records of their users' personal information and online activity. The new rules from China's internet censor went into effect Friday.
The new requirements apply to any company that provides online services which can influence public opinion or mobilize the public to engage in specific activities, according to a notice posted on the Cyber Administration of China's website.
Citing the need to safeguard national security and social order, the Chinese internet censor said companies must be able to verify users' identities and keep records of key information such as call logs, chat logs, times of activity and network
addresses. Officials will carry out inspections of companies' operations to ensure compliance. But the Cyber Administration didn't make clear under what circumstances the companies might be required to hand over logs to authorities.
When the EU started planning its new Copyright Directive (the "Copyright in the Digital Single Market Directive"), a group of powerful entertainment industry lobbyists pushed a terrible idea: a mandate that all online platforms
would have to create crowdsourced databases of "copyrighted materials" and then block users from posting anything that matched the contents of those databases.
At the time, we, along with academics and technologists, explained why this would undermine the Internet, even as it would prove unworkable. The filters would be incredibly expensive to create, would erroneously block whole libraries' worth of
legitimate materials, would let libraries' worth more of infringing materials slip through, and would not be capable of sorting out "fair dealing" uses of copyrighted works from infringing ones.
The Commission nonetheless included it in its original draft. Two years later, the European Parliament went back and forth on whether to keep the loosely described filters, with German MEP Axel Voss finally squeezing out a narrow victory in
his own committee and in an emergency vote of the whole Parliament. Now, after a lot of politicking and lobbying, Article 13 is potentially only a few weeks away from officially becoming an EU directive, controlling the internet access of more than
500 million Europeans.
The proponents of Article 13 have a problem, though: filters don't work, they cost a lot, they underblock, they overblock, they are ripe for abuse (basically, all the objections the Commission's experts raised the first time around). So to keep
Article 13 alive, they've spun, distorted and obfuscated its intention, and now they can be found in the halls of power, proclaiming to the politicians who'll get the final vote that "Article 13 does not mean copyright filters."
But it does.
Here's a list of Frequently Obfuscated Questions and our answers. We think that after you've read them, you'll agree: Article 13 is about filters, can only be about filters, and will result in filters.
Article 13 is about filtering, not "just" liability
Today, most of the world (including the EU) handles copyright infringement with some sort of takedown process. If you provide the public with a place to publish their thoughts, photos, videos, songs, code, and other copyrightable works, you
don't have to review everything they post (for example, no lawyer has to watch 300 hours of video every minute at YouTube before it goes live). Instead, you allow rightsholders to notify you when they believe their copyrights have been violated
and then you are expected to speedily remove the infringement. If you don't, you might still not be liable for your users' infringement, but you lose access to the quick and easy 'safe harbor' provided by law in the event that you are named as
part of any copyright lawsuit (and since the average internet company has a lot more money than the average internet user, chances are you will be named in that suit). What you're not expected to be is the copyright police. And in
fact, the EU has a specific Europe-wide law that stops member states from forcing Internet services from having to play this role: the same rule that defines the limits of their liability, the E-Commerce Directive, in the very next article,
prohibits a "general obligation to monitor." That's to stop countries from saying "you should know that your users are going to break some law, some time, so you should actively be checking on them all the time -- and if
you don't, you're an accomplice to their crimes." The original version of Article 13 tried to break this deal by rewriting that second part. Instead of a prohibition on monitoring, it required it, in the form of a mandatory filter.
When the European Parliament rebelled against that language, it was because millions of Europeans had warned them of the dangers of copyright filters. To bypass this outrage, Axel Voss proposed an amendment that removed the
explicit mention of filters but rewrote the other part of the E-Commerce Directive. By claiming this "removed the filters", he got his amendment passed -- including by gaining votes from MEPs who thought they were striking down
Article 13. Voss's rewrite says that sharing sites are liable unless they take steps to stop that content before it goes online.
So yes, this is about liability, but it's also about filtering. What happens if you strip liability protections from the Internet? It means that services are now legally responsible for everything on their site. Consider a photo-sharing
site where millions of photos are posted every hour. There are not enough lawyers -- let alone copyright lawyers -- let alone copyright lawyers who specialise in photography -- alive today to review all those photos before they are permitted to
appear. Add to that all the specialists who'd have to review every tweet, every video, every Facebook post, every blog post, every game mod and livestream. It takes a fraction of a second to take a photograph, but it might take hours or even days to
ensure that everything the photo captures is either in the public domain, properly licensed, or fair dealing. Every photo represents as little as an instant's work, but making it comply with Article 13 represents as much as several weeks' work.
There is no way that Article 13's purpose can be satisfied with human labour.
It's strictly true that Axel Voss's version of Article 13 doesn't mandate filters -- but it does create a liability system that can only be satisfied with filters.
But there's more: Voss's stripping of liability protections has Big Tech like YouTube scared, because if the filters aren't perfect, they will be potentially liable for any infringement that gets past them -- and given their billions, that
means anyone and everyone might want to get a piece of them. So now, YouTube has started lobbying for the original text, copyright filters and all. That text is still on the table, because the trilogue uses both Voss' text (liability to get
filters) and member states' proposal (all filters, all the time) as the basis for the negotiation.
Most online platforms cannot have lawyers review all the content they make available
The only online services that can have lawyers review their content are services for delivering relatively small libraries of entertainment content, not the general-purpose speech platforms that make the Internet unique. The Internet isn't
primarily used for entertainment (though if you're in the entertainment industry, it might seem that way): it is a digital nervous system that stitches together the whole world of 21st Century human endeavor. As the UK Champion for Digital
Inclusion discovered when she commissioned a study of the impact of Internet access on personal life, people use the Internet to do everything, and people with Internet access experience positive changes across their lives : in education,
political and civic engagement, health, connections with family, employment, etc.
The job we ask, say, iTunes and Netflix to do is a much smaller job than we ask the online companies to do. Users of online platforms do sometimes post and seek out entertainment experiences on them, but as a subset of doing everything else:
falling in love, getting and keeping a job, attaining an education, treating chronic illnesses, staying in touch with their families, and more. iTunes and Netflix can pay lawyers to check all the entertainment products they make available
because that's a fraction of a slice of a crumb of all the material that passes through the online platforms. That system would collapse the instant you tried to scale it up to manage all the things that the world's Internet users say to each
other in public.
It's impractical for users to indemnify the platforms
Some Article 13 proponents say that online companies could substitute click-through agreements for filters, getting users to pay them back for any damages the platform has to pay out in lawsuits. They're wrong. Here's why.
Imagine that every time you sent a tweet, you had to click a box that said, "I promise that this doesn't infringe copyright and I will pay Twitter back if they get sued for this." First of all, this assumes a legal regime that lets
ordinary Internet users take on serious liability in a click-through agreement, which would be very dangerous given that people do not have enough hours in the day to read all of the supposed 'agreements' we are subjected to by our technology.
Some of us might take these agreements seriously and double-triple check everything we posted to Twitter but millions more wouldn't, and they would generate billions of tweets, and every one of those tweets would represent a potential lawsuit.
For Twitter to survive those lawsuits, it would have to ensure that it knew the true identity of every Twitter user (and how to reach that person) so that it could sue them to recover the copyright damages they'd agreed to pay. Twitter would
then have to sue those users to get its money back. Assuming that the user had enough money to pay for Twitter's legal fees and the fines it had already paid, Twitter might be made whole... eventually. But for this to work, Twitter would have
to hire every contract lawyer alive today to chase its users and collect from them. This is no more sustainable than hiring every copyright lawyer alive today to check every tweet before it is published.
Small tech companies would be harmed even more than large ones
It's true that the Directive exempts "Microenterprises and small-sized enterprises" from Article 13, but that doesn't mean that they're safe. The instant a company crosses the threshold from "small" to "not-small"
(which is still a lot smaller than Google or Facebook), it has to implement Article 13's filters. That's a multi-hundred-million-dollar tax on growth, all but ensuring that the small Made-in-the-EU competitors to American Big Tech firms will
never grow to challenge them. Plus, those exceptions are controversial in the Trilogue, and may disappear after yet more rightsholder lobbying.
Existing filter technologies are a disaster for speech and innovation
ContentID is YouTube's proprietary copyright filter. It works by allowing a small, trusted cadre of rightsholders to claim works as their own copyright, and limits users' ability to post those works according to the rightsholders' wishes, which
are more restrictive than what the law's user protections would allow. ContentID then compares the soundtrack (but not the video component) of any user uploads to the database to see whether it is a match.
Everyone hates ContentID. Universal and the other big rightsholders complain loudly and frequently that ContentID is too easy for infringers to bypass. YouTube users point out that ContentID blocks all kinds of legit material, including silence,
birdsong, and music uploaded by the actual artist for distribution on YouTube. In many cases, this isn't a 'mistake,' in the sense that Google has agreed to let the big rightsholders block or monetize videos that do not infringe any
copyright, but instead make a fair use of copyrighted material.
ContentID does a small job, poorly: filtering the soundtracks of videos to check for matches with a database populated by a small, trusted group. No one (who understands technology) seriously believes that it will scale up to blocking
everything that anyone claims as a copyrighted work (without having to show any proof of that claim or even identify themselves!), including videos, stills, text, and more.
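ContentID's internals are proprietary, so any sketch is necessarily a caricature, but the general shape of such a filter -- reduce each upload to fingerprints and compare them against a database of claimed works -- can be illustrated. Everything below (fixed sample windows, exact byte hashing) is a deliberate simplification of our own; real systems match on perceptual audio features, which is partly why they misfire on silence and birdsong in ways exact hashing would not:

```python
import hashlib

# Toy illustration of fingerprint-style matching (NOT how ContentID
# actually works -- its algorithms are proprietary).

WINDOW = 4  # samples per fingerprint window (real systems use seconds of audio)

def fingerprints(samples: list[int]) -> set[str]:
    """Hash fixed-size windows of an audio sample stream."""
    return {
        hashlib.sha256(bytes(samples[i:i + WINDOW])).hexdigest()
        for i in range(0, len(samples) - WINDOW + 1)
    }

# Rightsholders register claimed works into a fingerprint database.
claimed_work = [10, 20, 30, 40, 50, 60, 70, 80]
claims_db = fingerprints(claimed_work)

def match_score(upload: list[int]) -> float:
    """Fraction of the upload's windows that match claimed material."""
    fps = fingerprints(upload)
    return len(fps & claims_db) / len(fps) if fps else 0.0

# An upload that partially reuses the claimed work gets flagged:
upload = [1, 2, 3] + claimed_work[:6] + [9, 9, 9]
print(match_score(upload) > 0)   # True
```

Even this toy shows the structural problem the article describes: the database is populated by unverified claims, and anything that happens to match -- legitimately licensed or not -- gets flagged.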
Online platforms aren't in the entertainment business
The online companies most impacted by Article 13 are platforms for general-purpose communications in every realm of human endeavor, and if we try to regulate them like a cable operator or a music store, that's what they will become.
The Directive does not adequately protect fair dealing and due process
Some drafts of the Directive do say that EU nations should have "effective and expeditious complaints and redress mechanisms that are available to users" for "unjustified removals of their content. Any complaint filed under such
mechanisms shall be processed without undue delay and be subject to human review. Right holders shall reasonably justify their decisions to avoid arbitrary dismissal of complaints."
What's more, "Member States shall also ensure that users have access to an independent body for the resolution of disputes as well as to a court or another relevant judicial authority to assert the use of an exception or limitation to copyright".
On their face, these look like very good news! But again, it's hard (impossible) to see how these could work at Internet scale. One of EFF's clients had to spend ten years in court when a major record label insisted -- after human review,
albeit a cursory one-- that the few seconds' worth of tinny background music in a video of her toddler dancing in her kitchen infringed copyright. But with Article 13's filters, there are no humans in the loop: the filters will result in
millions of takedowns, and each one of these will have to receive an "expeditious" review. Once again, we're back to hiring all the lawyers now alive -- or possibly, all the lawyers that have ever lived and ever will live -- to check
the judgments of an unaccountable black box descended from a system that thinks that birdsong and silence are copyright infringements.
It's pretty clear the Directive's authors are not thinking this stuff through. For example, some proposals include privacy rules: "the cooperation shall not lead to any identification of individual users nor the processing of their
personal data." Which is great: but how are you supposed to prove that you created the copyrighted work you just posted without disclosing your identity? This could not be more nonsensical if it said, "All tables should weigh at least
five tonnes and also be easy to lift with one hand."
The speech of ordinary Internet users matters
Eventually, arguments about Article 13 end up here: "Article 13 means filters, sure. Yeah, I guess the checks and balances won't scale. OK, I guess filters will catch a lot of legit material. But so what? Why should I have to tolerate
copyright infringement just because you can't do the impossible? Why are the world's cat videos more important than my creative labour?"
One thing about this argument: at least it's honest. Article 13 pits the free speech rights of every Internet user against a speculative theory of income maximisation for creators and the entertainment companies they ally themselves with: that
filters will create revenue for them.
It's a pretty speculative bet. If we really want Google and the rest to send more money to creators, we should create a Directive that fixes a higher price through collective licensing.
But let's take a moment here and reflect on what "cat videos" really stand in for here. The personal conversations of 500 million Europeans and 2 billion global Internet users matter : they are the social, familial, political
and educational discourse of a planet and a species. They have worth, and thankfully it's not a matter of choosing between the entertainment industry and all of that -- both can peacefully co-exist, but it's not a good look for arts groups to
advocate that everyone else shut up and passively consume entertainment product as a way of maximising their profits.
In an assault on freedom of expression, a court in China sentenced a successful novelist, Ms. Liu, to 10 years in prison on October 31 for including explicit homoerotic content in her work. The charge against her was making and selling obscene
material for profit. Information about the case has just recently been circulated online, generating a widespread outcry on social media against censorship as well as the disproportionate and excessive severity of her sentence.
The writer, who uses the pen name Tianyi, was arrested in 2017, after the publication of her novel Occupy . Pornography is illegal in China . The 1997 penal code forbids depicting sexual acts except for medical or artistic purposes.
According to police in Anhui Province, in eastern China, the book described obscene behavior between males, including violence, abuse, and humiliation.
The Madras High Court has handed down one of the most aggressive site-blocking orders granted anywhere in the world. Following an application by Lyca Productions, more than 12,500 sites will be preemptively blocked by 37 Indian ISPs to prevent
2.0 -- India's most expensive film ever -- from being leaked following its premiere.
What we're looking at here is a preemptive blocking order of a truly huge scale against sites that have not yet made the movie available and may never do so.
In the meantime, however, a valuable lesson about site-blocking is already upon us. Within hours of the blocks being handed down, a copy of 2.0 appeared online and is now available via various torrent and streaming sites labeled as a 1080p
PreDVDRip. Forums reviewed by TF suggest users aren't having a problem obtaining it.
With a reported budget of US$76 million, 2.0 is the most expensive Indian film ever made. The sci-fi flick is attracting huge interest, and at one stage it was reported that Arnold Schwarzenegger had been approached to play a leading role.
California is still trying to gag websites from sharing true, publicly available, newsworthy information about actors. While this effort is aimed at the admirable goal of fighting age discrimination in Hollywood, the law unconstitutionally
punishes publishers of truthful, newsworthy information and denies the public important information it needs to fully understand the very problem the state is trying to address. So we have once again filed a friend-of-the-court brief opposing it.
The case, IMDb v. Becerra, challenges the constitutionality of California Civil Code section 1798.83.5, which requires "commercial online entertainment employment services providers" to remove an actor's date of birth or other age
information from their websites upon request. The purported purpose of the law is to prevent age discrimination by the entertainment industry. The law covers any "provider" that "owns, licenses, or otherwise possesses computerized
information, including, but not limited to, age and date of birth information, about individuals employed in the entertainment industry, including television, films, and video games, and that makes the information available to the public or
potential employers." Under the law, IMDb.com, which meets this definition because of its IMDb Pro service, would be required to delete age information from all of its websites, not just its subscription service.
We filed a brief in the trial court in January 2017, and that court granted IMDb's motion for summary judgment, finding that the law was indeed unconstitutional. The state and the Screen Actors Guild, which intervened in the case to defend the
law, appealed the district court's ruling to the U.S. Court of Appeals for the Ninth Circuit. We have now filed an amicus brief with that court. We were once again joined by First Amendment Coalition, Media Law Resource Center, Wikimedia
Foundation, and Center for Democracy and Technology.
As we wrote in our brief, and as we and others urged the California legislature when it was considering the law, the law is clearly unconstitutional. The First Amendment provides very strong protection for publishing truthful information about a
matter of public interest. And the rule has extra force when the truthful information is contained in official government records, such as a local government's vital records, which contain dates of birth.
This rule, sometimes called the Daily Mail rule after the Supreme Court opinion from which it originates, is an extremely important free speech protection. It gives publishers the confidence to publish important information even when they
know that others want it suppressed. The rule also supports the First Amendment rights of the public to receive newsworthy information.
Our brief emphasizes that although IMDb may have a financial interest in challenging the law, the public too has a strong interest in this information remaining available. Indeed, if age discrimination in Hollywood is really such a compelling
issue (and EFF does not doubt that it is), hiding age information from the public makes it difficult for people to participate in the debate about that alleged discrimination, form their own opinions, and scrutinize their government's
response to it.
We are Google employees. Google must drop Dragonfly.
We are Google employees and we join Amnesty International in calling on Google to cancel Project Dragonfly, Google's effort to create a censored search engine for the Chinese market that enables state surveillance.
We are among thousands of employees who have raised our voices for months. International human rights organizations and investigative reporters have also sounded the alarm, emphasizing serious human rights concerns and repeatedly calling on
Google to cancel the project. So far, our leadership's response has been unsatisfactory.
Our opposition to Dragonfly is not about China: we object to technologies that aid the powerful in oppressing the vulnerable, wherever they may be. The Chinese government certainly isn't alone in its readiness to stifle freedom of expression, and
to use surveillance to repress dissent. Dragonfly in China would establish a dangerous precedent at a volatile political moment, one that would make it harder for Google to deny other countries similar concessions.
Our company's decision comes as the Chinese government is openly expanding its surveillance powers and tools of population control. Many of these rely on advanced technologies, and combine online activity, personal records, and mass monitoring to
track and profile citizens. Reports are already showing who bears the cost, including Uyghurs, women's rights advocates, and students. Providing the Chinese government with ready access to user data, as required by Chinese law, would make Google
complicit in oppression and human rights abuses.
Dragonfly would also enable censorship and government-directed disinformation, and destabilize the ground truth on which popular deliberation and dissent rely. Given the Chinese government's reported suppression of dissident voices, such controls
would likely be used to silence marginalized people, and favor information that promotes government interests.
Many of us accepted employment at Google with the company's values in mind, including its previous position on Chinese censorship and surveillance, and an understanding that Google was a company willing to place its values above its profits.
After a year of disappointments including Project Maven, Dragonfly, and Google's support for abusers, we no longer believe this is the case. This is why we're taking a stand.
We join with Amnesty International in demanding that Google cancel Dragonfly. We also demand that leadership commit to transparency, clear communication, and real accountability. Google is too powerful not to be held accountable. We deserve to
know what we're building and we deserve a say in these significant decisions.
The Australian Parliament has passed controversial amendments to copyright law. There will now be a tightened site-blocking regime that will tackle mirrors and proxies more effectively, restrict the appearance of blocked sites in Google
search, and introduce the possibility of blocking dual-use cyberlocker type sites.
Section 115a of Australia's Copyright Act allows copyright holders to apply for injunctions that force ISPs to prevent subscribers from accessing pirate sites. While rightsholders say the regime has been effective to a point, they have lobbied hard for changes to close its loopholes.
The resulting Copyright Amendment (Online Infringement) Bill 2018 contained proposals to do just that. After receiving endorsement from the Senate earlier this week, the legislation was today approved by Parliament.
Once the legislation comes into force, proxy and mirror sites that appear after an injunction against a pirate site has been granted can be blocked by ISPs without the parties having to return to court. Assurances have been given, however, that
the court will retain some oversight.
Search engines, such as Google and Bing, will also be affected. Accused of providing backdoor access to sites that have already been blocked, search providers will now have to remove or demote links to overseas-based infringing sites, along with
their proxies and mirrors.
The Australian Government will review the effectiveness of the new amendments in two years' time.
Russia's state censors have formally accused Google of breaking the law by not removing links to websites that are banned in the country.
Roskomnadzor, the state communications censor, said in a statement that the company had not connected to the country's database of banned sources, leaving it out of compliance with the law.
The potential penalty that Google could face is currently 700,000 roubles, or about $10,000. But Reuters reports that the Russian government has been considering more drastic actions, including fining companies up to 1 percent of annual revenue
for failing to comply with similar laws.