David Shanks, the New Zealand Chief Censor, writes:
There's a new documentary out on Netflix which is trending on social media and making headlines around the world.
The Social Dilemma looks at how social
media companies are exploiting human psychology and using surveillance and data mining to keep people addicted, all to make a huge profit. It explores impacts like the declining mental health of populations, the rise of fake news and conspiracies, and
the platform it gives terrorists to promote hate and livestream their crimes.
It was the part about livestreaming that brought it to my attention. We received a complaint from a member of the public last week -- just after the
documentary was released -- saying that it contains excerpts from the Christchurch terrorist's video which he livestreamed on Facebook on 15 March 2019.
I had banned that same video in New Zealand days after the attacks. I
classified it as an unlawful (objectionable) publication in New Zealand for its promotion of terrorism and extreme violence.
So was it illegal for Netflix to stream this documentary in New Zealand?
The answer is no. As we detailed in guidance we issued at the time, classification of the livestream video in its entirety doesn't mean that every excerpt from it is unlawful, although we had urged media to exercise extreme care in the treatment of this material.
The clips used in The Social Dilemma support the documentary's narrative, yet it's important to remember that they show a real-life atrocity in New Zealand that happened only last year, and they
show real people. The timing couldn't be worse: survivors and relatives of those caught up in the attacks have only recently worked through the sentencing process.
I watched the documentary, and I was deeply concerned about the use of this footage.
I asked Netflix to change their age rating for this documentary from 7+ to 13+ and to add a warning for 'Violence, including brief images from the Christchurch terror attacks, suicide references and content that may disturb'.
I also offered other options: to put up a warning screen at the start of the documentary, or to remove the footage of the attacks altogether, but those options weren't taken up.
Netflix has since updated their rating and warning,
which I appreciate.
The good news is that this type of situation is less likely to come up in the future. A recent law change means that from late next year, Netflix and other streaming services will be required by law to display
New Zealand age ratings and content warnings on all films, shows and documentaries.
If you plan to watch The Social Dilemma, I recommend that you watch with care and consider those around you who may be triggered by the content.
The Electronic Frontier Foundation, the 30-year-old advocacy group that has been a pioneer in defending digital civil liberties, sent a letter this week to the United States Senate, opposing the controversial EARN IT Act -- which the EFF says will
result in online censorship that will disproportionately impact marginalized communities, will jeopardize access to encrypted services, and will place at risk the prosecutions of the very abusers the law is meant to catch.
The Eliminating Abusive and Rampant Neglect of Interactive Technologies Act of 2020, or EARN IT, is designed to roll back protections for online platforms under Section 230 of the 1996 Communications Decency Act. Section 230 is widely considered the First
Amendment of the Internet. As AVN reported last month, the law is the backbone not only of open online communications, but of adult content online as well.
Efforts to roll back Section 230 protection will have a significant
adverse impact on the adult entertainment industry if passed, First Amendment attorney Lawrence Walters told AVN in August. Any change to Section 230 could result in restrictive content moderation rules or elimination of the platforms themselves.
If EARN IT passes, platforms would be required to earn the protections currently afforded by Section 230 by following a set of vaguely defined best practices to prevent illegal activities, specifically sex trafficking and Child Sexual Abuse Material (CSAM).
Under EARN IT, states will be free to impose any liability standard they please on platforms, including holding platforms liable for CSAM they did not actually know was present on their services, EFF
warned in its letter to the Senate. Nothing in the bill would prevent a state from passing a law in the future holding a provider criminally responsible under a 'reckless' or 'negligence' standard.
In other words, under EARN IT,
state governments could punish online platforms for almost anything that could be broadly interpreted as CSAM or sex trafficking, even bringing criminal charges against site operators. The dangers for the adult industry are clear if states are allowed to
define a wide range of sexual content as promoting sex trafficking.
But sex worker advocacy groups have also warned that the EARN IT law could lead to increased surveillance of workers in the sex industry. EFF also addresses the
surveillance threat in its letter to the Senate.
End-to-end encryption ensures the privacy and security of sensitive communications such that only the sender and receiver can view them, the group wrote. But the EARN IT Act
threatens to undermine strong encryption and to disincentivize providers from offering it.
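The property EFF describes, that only the endpoints can read a message, can be illustrated with a deliberately minimal sketch. This uses a one-time pad over the Python standard library purely for illustration; real services use authenticated protocols such as Signal's, and every name here is illustrative, not any provider's actual API:

```python
# Conceptual sketch of end-to-end encryption (one-time pad, stdlib only).
# The platform relaying `ciphertext` sees only random-looking bytes; only
# the parties holding the shared key can recover the plaintext.
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    assert len(key) == len(plaintext)
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # known only to sender and receiver
ciphertext = encrypt(key, message)

assert ciphertext != message                # the relay cannot read it
assert decrypt(key, ciphertext) == message  # the receiver can
```

A client-side scanning mandate of the kind critics fear would have to inspect `message` before this step, which is exactly why EFF argues such mandates undermine the guarantee.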
The EFF compares EARN IT to a previous sex trafficking law, FOSTA/SESTA, which is the only law so far passed that actually curtails Section 230
protections, in cases when sites are deemed to promote online sex trafficking. But that law had the opposite effect from its stated intention.
It has instead forced sex workers, whether engaging in sex work voluntarily or forced into sex trafficking against their will, offline and into harm's way, EFF wrote. It has also chilled their online expression generally, including the sharing of health and safety information and speech wholly unrelated to sex work.
In the letter, EFF urges the Senate not to fast-track the EARN IT bill, and to vote it down if or when it finally comes before the full Senate. The bill passed through the Judiciary Committee in July.
Computer security investigators have long held that the TikTok app is a Trojan horse: it offers a popular platform for sharing short videos whilst aggressively snooping on its users. For instance, it was recently found to be reading the clipboard (paste buffer), which can expose passwords as they are copied from password managers into other apps.
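Why clipboard access leaks secrets is easy to see in a toy model. The sketch below simulates the OS paste buffer as a single shared slot, which is essentially how it behaves on a phone; the class and function names are invented for illustration and are not any real API:

```python
# Illustrative model of why clipboard polling leaks passwords: the OS
# clipboard is one global buffer, readable by any foreground app.

class Clipboard:
    """Stand-in for the OS-wide paste buffer: one slot, shared by all apps."""
    def __init__(self):
        self._content = ""
    def copy(self, text: str) -> None:
        self._content = text
    def paste(self) -> str:
        return self._content

def password_manager_copies(clipboard: Clipboard, secret: str) -> None:
    clipboard.copy(secret)       # user copies a password to paste into a login form

def snooping_app_polls(clipboard: Clipboard) -> str:
    return clipboard.paste()     # any other app reading the buffer sees it too

clipboard = Clipboard()
password_manager_copies(clipboard, "hunter2")
leaked = snooping_app_polls(clipboard)
print(leaked)  # the snooping app now holds the user's password
```

This is why iOS 14's clipboard-access notifications made TikTok's polling visible to users in the first place.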
President Trump's administration had set a deadline that the Chinese app be sold to a US company that can sort out the security issues.
ByteDance has indeed done a deal to partner with the US company Oracle. However, the deal does not allow Oracle to see or control the app's software, and so does not address US security concerns.
So the US has announced that, beginning Sunday, it will
be illegal to host or transfer internet traffic associated with WeChat and TikTok. The Trump administration is currently weighing a proposal involving ByteDance, TikTok's Chinese parent, and Oracle, designed to resolve the administration's national
security concerns related to TikTok; the deadline for a deal is Nov. 12.
The UK Government's 'Online Harms' plans will lead to sweeping online censorship unprecedented in a democracy. Some of the harms the plans describe are vague, like 'unacceptable content' and 'disinformation'. The new regulations will prohibit material
that may directly or indirectly cause harm, even if it is not necessarily illegal.
In other words, the regulator will be empowered to censor lawful content, a huge infringement on our freedoms. The White Paper singled out 'offensive'
material, as if giving offense is a harm from which the state must protect the public. In fact, the White Paper does not properly define harm or hate speech, but empowers a future regulator to do so. Failure to define harm means the definition may be
outsourced to the most vocal activists, who see in the new regulator a chance to ban opinions they don't like.
The government claims its proposals are inspired by Germany's 2017 NetzDG law. But Human Rights Watch has said the law
turns private companies into overzealous censors and called on Germany to scrap it. NetzDG's other fans include President Lukashenko of Belarus, who cited it to justify a 2017 clampdown on dissent. Vladimir Putin's United Russia Party cited NetzDG as the
model for its internet law. So did Venezuela. Chillingly, the plans bear a striking similarity to some of Beijing's internet censorship policies. The Cyberspace Administration of China censors rumours because they cause social harms.
In 2019, the High Court of England and Wales ruled that by offering UK residents an index of non-UK-based or unlicensed radio stations, the radio aggregator service TuneIn breached copyright.
In response, the service has now geo-blocked thousands of
stations, leaving UK customers without their favorite sounds. Unless they use a VPN, in which case it's business as usual.
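The mechanics of geo-blocking, and why a VPN defeats it, come down to mapping the client's IP address to a country and withholding restricted content for that region. A minimal sketch, with an invented toy prefix table standing in for the GeoIP databases (such as MaxMind's) that real services use:

```python
# Hypothetical sketch of IP-based geo-blocking. A VPN defeats it simply by
# making the request arrive from an IP that maps to a different country.

GEO_DB = {              # toy IP-prefix -> country table (illustrative only)
    "81.2.": "GB",
    "8.8.":  "US",
}

BLOCKED = {"GB": {"station-xyz"}}   # stations withheld per country

def country_for(ip: str) -> str:
    """Map an IP to a country via longest-prefix-style lookup (toy version)."""
    for prefix, country in GEO_DB.items():
        if ip.startswith(prefix):
            return country
    return "??"

def can_stream(ip: str, station: str) -> bool:
    """A station is playable unless it is blocked for the client's country."""
    return station not in BLOCKED.get(country_for(ip), set())

print(can_stream("81.2.69.1", "station-xyz"))   # UK listener: blocked -> False
print(can_stream("8.8.8.8", "station-xyz"))     # US listener: allowed -> True
```

Routing through a US VPN endpoint changes only the first argument, which is the entire trick.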
TuneIn is one of the most prominent providers of radio content in the world. Available for free or on a premium basis, its site and
associated app provide access to more than 100,000 stations and podcasts. Unless you happen to live in the UK, which is now dramatically underserved by the company.
In 2017, Sony Music
Entertainment and Warner Music Group sued the US-based radio index in the High Court of England and Wales, alleging that the provision of links to stations unlicensed in the UK represented a breach of copyright.
One of the most interesting aspects
of the case is that TuneIn is marketed as an audio guide service, which means that it indexes stations that are already freely available on the web and curates them so that listeners can more easily find them.
When stations are more easily found,
more people listen to them, which means that TuneIn arguably boosts the market overall. Nevertheless, the labels claimed this was illegal and detrimental to the music industry in the UK on licensing grounds.
In response to the apparent decimation
of its offering, TuneIn took to Twitter to address the complaints:
Due to a court ruling in the United Kingdom, we will be restricting international stations to prohibit their availability in the UK, with limited
exceptions. We apologize for the inconvenience, the company wrote.
The European Commission adopted a proposal for a Regulation on a temporary derogation from certain provisions of the ePrivacy Directive as regards the use of technologies by number-independent interpersonal communications providers for the processing of
personal data and other data for the purpose of combatting child sexual abuse online.
A growing number of online services providers have been using specific technological tools on a voluntary basis to detect child sex abuse online in their networks.
Law-enforcement agencies across the EU and globally have been confronted with an unprecedented spike in reports of child sexual abuse material (CSAM) online, which exceeds their capacity to address the volumes now circulating, as they focus
their efforts on imagery depicting the youngest and most vulnerable victims. Online service providers have therefore been instrumental in the fight against child sexual abuse online.
MEP David Lega commented:
I welcome this legislative proposal that allows online service providers to keep making use of technological tools to detect child sexual abuse online, as a step in the right direction in the fight against child sexual abuse online. Cooperation
with the private sector is essential if we want to succeed in eradicating child sexual abuse online and identifying the perpetrators and the victims. It is our responsibility as legislators to ensure that online service providers are held responsible and
to prescribe a legal obligation for them to make use of technological tools to detect child sexual abuse online, thereby enabling them to ensure that their platforms are not used for illegal activities.
Privacy campaigner Duncan McCann has filed a legal case accusing YouTube of selling the data of children using its service to advertisers, in contravention of EU and UK law. The case was lodged with the UK High Court in July and is the first of its kind.
It is understood that Google will strongly dispute the claim. One of its arguments is that the main YouTube platform is not intended for those under 13, who should be using the YouTube Kids app, which incorporates more safeguards.
Google is also expected to point to a series of changes that it introduced last year to improve notification to parents, limit data collection and restrict personalised adverts.
The case seeks compensation of £500 payable to those whose data was breached. But crucially it would set a precedent, potentially making YouTube liable for payouts to the estimated five million children in Britain who use the site as well as their
parents or guardians.
It cannot be right that Google can take children's private data without explicit permission and then sell it to advertisers to target children. I believe it is only through
legal action and damages that these companies will change their behaviour, and it is only through a class action that we can fight these companies on an equal basis.
The case, which focuses on children who have watched YouTube since May
2018 when the Data Protection Act became law, is backed by digital privacy campaigners Foxglove, and the global law firm Hausfeld. The case is not expected to come to court before next autumn and has been underwritten by Vannin Capital, a company which
will take a cut of any compensation that remains unclaimed. The action will also depend on the outcome of another data and privacy case against Google which does not cover children.
Reform of the law is needed to protect victims from harmful online behaviour including abusive messages, cyber-flashing, pile-on harassment, and the malicious sharing of information known to be false. The Law Commission is consulting on proposals to
improve the protection afforded to victims by the criminal law, while at the same time provide better safeguards for freedom of expression.
In our Consultation Paper launched on 11 September 2020, we make a number of proposals for
reform to ensure that the law is clearer and effectively targets serious harm and criminality arising from online abuse. This is balanced with the need to better protect the right to freedom of expression.
The proposals include:
A new offence to replace the communications offences (the Malicious Communications Act 1988 (MCA 1988) and the Communications Act 2003 (CA 2003)), to criminalise behaviour where a communication would likely cause harm.
This would cover emails, social media posts and WhatsApp messages, in addition to pile-on harassment (when a number of different individuals send harassing communications to a victim).
It would also include communications sent over private networks such as Bluetooth or a local intranet, which are not currently covered by the CA 2003.
The proposals also include introducing a requirement of proof of likely harm.
Currently, neither proof of likely harm nor proof of actual harm is required under the existing communications offences.
Cyber-flashing -- the unsolicited sending of images or video recordings of one's genitals -- should be included as a sexual offence under section 66 of the Sexual Offences Act 2003. This would ensure that additional
protections for victims are available.
Raising the threshold for false communications so that it would only be an offence if the defendant knows the post is false, intends to cause non-trivial emotional, psychological, or physical harm, and has no excuse.
The consultation period will run until 18 December 2020.
During the Article 17 (formerly #Article13) discussions about the availability of copyright-protected works online, we fought hand-in-hand with European civil society to avoid all communications being subjected to interception and arbitrary censorship by
automated upload filters. However, by turning tech companies and online services operators into copyright police, the final version of the EU Copyright Directive failed to live up to the expectations of millions of affected users who fought for an
Internet in which their speech is not automatically scanned, filtered, weighed, and measured.
EU "Directives" are not automatically applicable. EU member states must "transpose" the directives into national
law. The Copyright Directive includes some safeguards to prevent the restriction of fundamental free expression rights, ultimately requiring national governments to
balance the rights of users and copyright holders alike. At the EU level, the Commission has launched a
Stakeholder Dialogue to support the drafting of guidelines for the application of Article 17, which must be implemented in national laws by June 7, 2021. EFF and other digital rights organizations have a seat at the table, alongside rightsholders from
the music and film industries and representatives of big tech companies like Google and Facebook.
During the stakeholder meetings, we made a strong case for preserving users' rights to free speech, making suggestions for averting
a race among service providers to over-block user content. We also asked the EU Commission to share the draft guidelines with rights organizations and the public, and allow both to comment on and suggest improvements to ensure that they comply with
European Union civil and human rights requirements.
The Commission has partly complied with EFF and its partners' request for transparency and participation. The Commission launched a targeted consultation addressed to members of
the EU Stakeholder Group on Article 17. Our response focuses on mitigating the dangerous consequences of the Article 17 experiment by
focusing on user rights, specifically free speech, and by limiting the use of automated filtering, which is notoriously inaccurate.
Our main recommendations are:
Produce a non-exhaustive list of service providers that are excluded from the obligations under the Directive. Service providers not listed might not fall under the Directive's rules, and would have to be evaluated on a case-by-case basis;
Ensure that the platforms' obligation to show best efforts to obtain rightsholders' authorization and ensure infringing content is not available is a mere due diligence duty and must be interpreted in
light of the principles of proportionality and user rights exceptions;
Recommend that Member States not mandate the use of technology or impose any specific technological solutions on service providers in order to demonstrate best efforts;
Establish a requirement to avoid general monitoring of user content. Spell out that the implementation of Art 17 should never lead to the adoption of upload filters and hence general monitoring of user uploads;
State that the mere fact that content recognition technology is used by some companies does not mean that it must be used to comply with Art 17. Quite the opposite is true: automated technologies to
detect and remove content based on rightsholders' information may not be in line with the balance sought by Article 17.
Safeguard the diversity of platforms and not put disproportionate burden on smaller companies, which play
an important role in the EU tech ecosystem;
Establish that content recognition technology cannot assess whether the uploaded content is infringing or covered by a legitimate use. Filter technology may serve as an assistant, but
can never replace a (legal) review by a qualified human; nor can filter technology assess whether user content is likely to infringe copyright;
If you believe that filters work, prove it. The
Guidance should contain a recommendation to create and maintain test suites if member states decide to establish copyright filters. These suites should evaluate the filters' ability to correctly identify both infringing materials and non-infringing uses.
Filters should not be approved for use unless they can meet this challenge;
Complaint and redress procedures are not enough. Fundamental rights must be protected from the start and not only after content has been taken down;
The Guidance should address the very problematic relationship between the use of automated filter technologies and privacy rights, in particular the right not to be subject to a decision based solely on automated processing
under the GDPR.
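The "prove it" recommendation above implies something concrete: scoring a candidate filter against labelled examples of both infringing material and legitimate uses such as quotation or parody. A minimal sketch of such a test harness follows; all names, the toy filter, and the sample suite are illustrative assumptions, not anything specified by the Directive:

```python
# Sketch of a copyright-filter test suite: measure how often a filter
# correctly flags infringing uploads (recall) versus how often its flags
# are actually correct (precision). Over-blocking shows up as low precision.

def evaluate_filter(filter_fn, test_suite):
    """test_suite: list of (content, is_infringing) pairs."""
    tp = fp = fn = tn = 0
    for content, infringing in test_suite:
        flagged = filter_fn(content)
        if flagged and infringing:
            tp += 1
        elif flagged and not infringing:
            fp += 1   # a legitimate use was blocked
        elif not flagged and infringing:
            fn += 1   # an infringing upload slipped through
        else:
            tn += 1
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

def naive_filter(content):
    # Flags anything containing a protected clip, ignoring context,
    # so it also blocks quotation and parody.
    return "protected_clip" in content

suite = [
    ("protected_clip full upload", True),
    ("protected_clip short quote in a review", False),   # legitimate use
    ("original home video", False),
]
precision, recall = evaluate_filter(naive_filter, suite)
print(precision, recall)  # 0.5 1.0: perfect recall, but half its flags are wrong
```

A suite like this makes the trade-off visible: the naive filter catches every infringement but blocks a legitimate review, exactly the over-blocking the recommendations warn against.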
Cuties (Mignonnes) is a 2020 French comedy drama by Maïmouna Doucouré, starring Fathia Youssouf, Médina El Aidi-Azouni and Esther Gohourou.
Amy, an 11-year-old girl, joins a group of dancers
named "the cuties" at school, and rapidly grows aware of her burgeoning femininity - upsetting her mother and her values in the process.
The Turkish government has said it will order Netflix to block local access to the
movie Cuties. The country's TV censor claims the film contains images of child exploitation. Turkey's Family Ministry had previously said the film may leave children open to negligence and abuse, and negatively impact their psychosocial development.
Cuties is due to launch in the country on September 9. The movie was at the center of a furor last month when Netflix launched the film's international poster, which was widely criticized for sexualizing children. Netflix quickly
apologized and removed the offending artwork, but not before the film was pilloried on social media.
Update: BBFC rated 10th September 2020.
The Netflix UK release has been BBFC 15 rated uncut for rude humour, threat, dangerous behaviour, bullying, violence.
Pakistan authorities have blocked Tinder, Grindr and three other dating apps for not adhering to local laws, its latest move to curb online platforms deemed to be disseminating immoral content.
The Pakistan Telecommunications Authority said it has
sent notices to the management of the five apps, keeping in view the negative effects of immoral/indecent content streaming. PTA said the notices issued to Tinder, Grindr, Tagged, Skout and SayHi sought the removal of dating services and moderation of
live streaming content in accordance with local laws.
Data from analytics firm Sensor Tower shows Tinder has been downloaded more than 440,000 times in Pakistan within the last 12 months. Grindr, Tagged and SayHi had each been downloaded about
300,000 times and Skout 100,000 times in that same period.
According to Greatfire.org, a site that monitors internet censorship in China, internet users in China cannot access Scratch's website anymore.
The Scratch programming language was developed by the Lifelong Kindergarten Group at the MIT Media Lab. Around 60 million kids use Scratch's interactive programming features to learn how to make games, animated stories, and more. About 5.65% of Scratch users, some 3 million, reside in China.
The censorship seems related to a Chinese news
report on August 21 about projects hosted on Scratch. It claimed that the platform harbored a great deal of humiliating, fake, and libelous content about China, including placing Hong Kong, Macau, and Taiwan in a dropdown list of independent countries.
The report says that any service distributing information in China has to comply with the local regulations. It also suggested that Scratch's website and user forum had been banned in the country.
It is unclear whether the ban is temporary or permanent. In any case, if the ban proves permanent, China will probably whip up a home-grown alternative.