Melon Farmers Original Version

Americas Censorship News


2021

 2009   2010   2011   2012   2013   2014   2015   2016   2017   2018   2019   2020   2021   2022   2023   Latest 

 

Hating the people...

Canada's government plans a ban on politically incorrect speech, enforced by fines of up to $40,000


Link Here 25th June 2021
Canada's ruling 'Liberal' government has announced that it plans to make online hate speech a crime punishable by as much as $20,000 ($16,250 US) for the first offense and $50,000 ($40,600 US) for the second.

The proposal would punish social media users who broke the law but exempt social media companies that host such content from fines.

Canada's Attorney General David Lametti claimed during a virtual press conference that the proposed law would not target simple expressions of dislike or disdain. Instead, Lametti said, the law is only designed to punish the most extreme forms of hatred that express detestation or vilification of a person or group on the basis of a prohibited ground of discrimination.

The government, headed by Prime Minister Justin Trudeau, released a statement outlining the goals of the proposed legislation, known as Bill C-36, as well as other steps being taken in the name of combating online racial abuse. The government also notes that it will release a detailed technical discussion paper in the coming weeks to inform Canadians about the nitty gritty of this proposed law.

 

 

Verified dangers...

Canada's Privacy Commissioner warns of inadequate privacy protection for a proposed porn age verification law


Link Here 5th June 2021
Legislation that would require Canadians to verify their age before they could look at online pornography could result in a number of privacy concerns, the country's federal privacy commissioner has said.

Bill S-203, introduced by Senator Julie Miville-Dechêne, doesn't specify what that verification would look like. Options under consideration include presenting some type of ID to a third-party company or organization, or the use of technologies such as biometrics or artificial intelligence to estimate age.

If adequate privacy measures aren't taken, the age verification process could increase the risk of revealing adults' private browsing habits, privacy commissioner Daniel Therrien said.

He told the Senate legal and constitutional affairs committee that current digital age verification systems are all different, but what they have in common is that the user will ultimately be required to provide some amount of personal information. That brings up questions about how secure that information is.

On the other hand, the use of biometrics or facial recognition to verify or estimate a user's age raises unique privacy concerns, Therrien said, noting biometric technology is generally very intrusive and how accurate it is in verifying an individual's age still hasn't been proven. He said there's a considerable margin of error, and an error of two to three years could be significant depending on the age of the person.

The bill would also introduce fines for those who make available sexually explicit material on the internet to a young person. Individuals could be fined up to $20,000 and face six months in jail, while fines for corporations would range from $250,000 to $500,000. The way to avoid the fine would be to put in place an unspecified prescribed age-verification method.

 

 

Age old practicality problems...

EFF argues against a Canadian porn age verification bill that is impossible to comply with


Link Here 24th April 2021

Canadian Senate Bill S-203, AKA the Protecting Young Persons from Exposure to Pornography Act, is another woefully misguided proposal aimed at regulating sexual content online. To say the least, this bill fails to understand how the internet functions and would be seriously damaging to online expression and privacy. It's bad in a variety of ways, but there are three specific problems that need to be laid out: 1) technical impracticality, 2) competition harms, and 3) privacy and security.

First, S-203 would make any person or company criminally liable for any time an underage user engages with sexual content through its service. The law applies even if the person or company believed the user to be an adult, unless the person or company implemented a prescribed age-verification method.

Second, the bill seemingly imposes this burden on a broad swath of the internet stack. S-203 would criminalize the acts of independent performers, artists, blogs, social media, message boards, email providers, and any other intermediary or service in the stack that is in some way for commercial purposes and makes available sexually explicit material on the Internet to a young person. The only meaningful defense against the financial penalties that a person or company could assert would be to verify the legal adult age of every user and then store that data.

The bill would likely force many companies to simply eliminate sexual content

The sheer amount of technical infrastructure it would take for such a vast portion of the internet to implement a prescribed age-verification method would be costly and overwhelmingly complicated. It would also introduce many security concerns that weren't previously there. Even if every platform had server-side storage with a robust security posture, processing high-level personally identifiable information (PII) on the client side would be a treasure trove for anyone with a bit of app-exploitation skill. And if this did create a market for third-party proprietary solutions to take care of a secure age verification system, the financial burden would only advantage the largest players online. Not only that, it's ahistorical to assume that younger teenagers wouldn't figure out ways to hack past whatever age verification system is propped up.

Then there's the privacy angle. It's ludicrous to expect all adult users to provide private personal information every time they log onto an app that might contain sexual content. The implementation of verification schemes in contexts like this may vary in how far the privacy intrusions go, but it generally plays out as a cat-and-mouse game that brings surveillance and security threats instead of responding to the initial concerns. The more a verification system fails, the more privacy-invasive the measures taken to avoid criminal liability become.

Because of the problems of implementing age verification, the bill would likely force many companies to simply eliminate sexual content instead of carrying the huge risk that an underage user will access it. But even a company that wanted to eliminate prohibited sexual content would face significant obstacles in doing so if they, like much of the internet, host user-generated content. It is difficult to detect and define the prohibited sexual content, and even more difficult when the bill recognizes that the law is not violated if such material has a legitimate purpose related to science, medicine, education or the arts. There is no automated tool that can make such distinctions; the inevitable result is that protected materials will be removed out of an abundance of caution. And history teaches us that the results are often sexist, misogynist, racist, LGBT-phobic, ableist, and so on. It is a feature, not a bug, that there is no one-size-fits-all way to neatly define what is and isn't sexual content.

Ultimately, Canadian Senate Bill S-203 is another in a long line of morally patronizing legislation that doesn't understand how the internet works. Even if there were a way to keep minors away from sexual content, there is no way to do so without vast collateral damage. Sen. Julie Miville-Dechêne, who introduced the bill, stated it makes no sense that the commercial porn platforms don't verify age. I think it's time to legislate. We gently recommend that next time her first thought be to consult with experts.

 

 

Deserving of insult...

Canadian politician introduces legislation to ban politicians being insulted online


Link Here 12th April 2021
A Canadian politician has proposed the banning of 'hurtful' language against politicians online. The provision is going to be included in the upcoming internet censorship bill, to be discussed in parliament in the next few weeks.

Steven Guilbeault, a 'Liberal' member of parliament, has often been the subject of controversy for favoring internet censorship. He said in a recent podcast:

We have seen too many examples of public officials retreating from public service due to the hateful online content targeted towards themselves.

If the bill passes, social media companies will have to remove posts containing hurtful words targeted at Canadian politicians. The provision is a danger to free speech not only in Canada but also in the rest of the world, as other governments will surely try to get away with similar censorship laws.

 

 

Age of censorship...

An internet porn age verification bill progresses in Canada


Link Here 19th March 2021
A bill has passed 2nd reading in the Canadian Senate that would require porn websites to implement age verification for users.

Bill S-203, An Act to restrict young persons' online access to sexually explicit material, will now be referred to the Standing Senate Committee on Legal and Constitutional Affairs.

 

 

Updated: Ethical porn...

Pornhub comes under scrutiny in the Canadian Parliament


Link Here 6th February 2021
Full story: Pornhub...An ongoing target of censors
The Committee on Access to Information, Privacy and Ethics in Canada's House of Commons held a hearing concerning allegations made against Pornhub's content moderation policies. The allegations featured in a New York Times article by Nicholas Kristof and were based on the religious group Exodus Cry's Traffickinghub campaign against the tube site and its parent company MindGeek.

MindGeek is headquartered in Luxembourg, although many of its operations are run from Montreal and the two people identified by the New York Times as owners are Canadian nationals.

The committee heard from a witness who retold her story of having difficulties getting Pornhub to remove a video she had shot of herself as a teenager, which she then sent to a boyfriend and which was allegedly repeatedly uploaded onto the tube site by unidentified third parties.

The committee also heard from New York lawyer Michael Bowe, who has previously represented disgraced evangelist Jerry Falwell Jr. and Donald Trump. Bowe made repeated claims about a supposed conspiracy masterminded by MindGeek, and their agents and allies, to gaslight public opinion about the organized international campaign against Pornhub. Bowe also asked for Canada to change its laws to make MindGeek accountable, and stated that in his opinion the company committed criminal offenses.

Update: Pornhub Releases Statement About Content Moderation Changes

6th February 2021. See statement from help.pornhub.com

Going forward, we will only allow properly identified users to upload content. We have banned downloads. We have made some key expansions to our moderation process, and we recently launched a Trusted Flagger Program with dozens of non-profit organizations. Earlier this year, we also partnered with the National Center for Missing & Exploited Children, and next year we will issue our first transparency report. Full details on our expanded policies can be found below.

If you wish to report any content that violates our terms of service, including CSAM or other illegal content, please click this link .

1. Verified Uploaders Only

Effective immediately, only content partners and people within the Model Program will be able to upload content to Pornhub. In the new year, we will implement a verification process so that any user can upload content upon successful completion of identification protocol.

2. Banning Downloads

Effective immediately, we have removed the ability for users to download content from Pornhub, with the exception of paid downloads within the verified Model Program. In tandem with our fingerprinting technology, this will mitigate the ability for content already removed from the platform to be able to return.

3. Expanded Moderation

We have worked to create comprehensive measures that help protect our community from illegal content. In recent months we deployed an additional layer of moderation. The newly established "Red Team" will be dedicated solely to self-auditing the platform for potentially illegal material. The Red Team provides an extra layer of protection on top of the existing protocol, proactively sweeping content already uploaded for potential violations and identifying any breakdowns in the moderation process that could allow a piece of content that violates the Terms of Service.

Additionally, while the list of banned keywords on Pornhub is already extensive, we will continue to identify additional keywords for removal on an ongoing basis. We will also regularly monitor search terms within the platform for increases in phrasings that attempt to bypass the safeguards in place.

Pornhub's current content moderation includes an extensive team of human moderators dedicated to manually reviewing every single upload, a thorough system for flagging, reviewing and removing illegal material, robust parental controls, and utilization of a variety of automated detection technologies. These technologies include:

  • CSAI Match, YouTube's proprietary technology for combating Child Sexual Abuse Imagery online

  • Content Safety API, Google's artificial intelligence tool that helps detect illegal imagery

  • PhotoDNA, Microsoft's technology that aids in finding and removing known images of child exploitation

  • Vobile, a fingerprinting software that scans any new uploads for potential matches to unauthorized materials to protect against banned videos being re-uploaded to the platform.
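The fingerprint matching that tools like Vobile perform can be sketched in rough outline: hash each new upload, then compare that hash against a database of fingerprints of previously removed material, tolerating small differences so that re-encoded or slightly altered copies still match. The following toy Python sketch uses a simple average hash over a single grayscale frame; this is an illustrative assumption only, since the real systems use proprietary and far more robust perceptual fingerprints computed across whole videos.

```python
# Toy sketch of fingerprint-based re-upload detection. The hash function
# and distance threshold are illustrative assumptions, not any vendor's
# actual (proprietary) algorithm.

def average_hash(frame):
    """Compute a simple perceptual hash of one grayscale frame.

    `frame` is a 2-D list of pixel intensities (0-255). Each pixel
    contributes one bit: 1 if it is brighter than the frame's mean.
    """
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(a, b):
    """Count the bits in which two hashes differ."""
    return bin(a ^ b).count("1")

def is_banned(frame, banned_hashes, max_distance=3):
    """Return True if the frame's hash is near any banned fingerprint.

    Allowing a small Hamming distance tolerates re-encoding artifacts,
    so a slightly altered re-upload still matches the stored fingerprint.
    """
    h = average_hash(frame)
    return any(hamming_distance(h, b) <= max_distance for b in banned_hashes)
```

In use, the platform would store `average_hash(frame)` for each removed video and run `is_banned` on every new upload; an exact byte-for-byte comparison would not work, because re-compression changes the file even when the image content is unchanged.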

If a user encounters a piece of content they think may violate the Terms of Service, we encourage them to immediately flag the video or fill out the Content Removal Request Form , which is linked on every page.

Our policy is to immediately disable any content reported in the Content Removal Request Form for review.

4. Trusted Flagger Program

We recently launched a Trusted Flagger Program, a new initiative empowering non-profit partners to alert us of content they think may violate our Terms of Service. The Trusted Flagger Program consists of more than 40 leading non-profit organizations in the space of internet and child safety. Our partners have a direct line of access to our moderation team, and any content identified by a Trusted Flagger is immediately disabled. Partners include: Cyber Civil Rights Initiative (United States of America), National Center for Missing & Exploited Children (United States of America), Internet Watch Foundation (United Kingdom), Stopline (Austria), Child Focus (Belgium), Safenet (Bulgaria), Te Protejo Hotline - I Protect You Hotline (Colombia), CZ.NIC - Stop Online (Czech Republic), Point de Contact (France), Eco-Association of the Internet Industry (Germany), Safeline (Greece), Save the Children (Iceland), Latvian Internet Association (Latvia), Meldpunt Kinderporno - Child Pornography Reporting Point (Netherlands), Centre for Safer Internet Slovenia (Slovenia), FPB Hotline - Film and Publication Board (South Africa), ECPAT (Sweden), ECPAT (Taiwan).

5. NCMEC Partnership

Last year, we voluntarily partnered with the National Center for Missing & Exploited Children (NCMEC) in order to transparently report and limit incidents of CSAM on our platform. In early 2021, NCMEC will release our total number of reported CSAM incidents alongside numbers from other major social and content platforms. We will also continue to work with law enforcement globally to report and curb any issues of illegal content.

6. Transparency Report

In 2021, we will release a Transparency Report detailing our content moderation results from 2020. This will identify not just the full number of reports filed with NCMEC, but also other key details related to the trust and safety of our platform. Much like Facebook, Instagram, Twitter and other tech platforms, Pornhub seeks to be fully transparent about the content that should and should not appear on the platform. This will make us the only adult content platform to release such a report.

7. Independent Review

As part of our commitment, in April 2020 we hired the law firm of Kaplan Hecker & Fink LLP to conduct an independent review of our content compliance function, with a focus on meeting legal standards and eliminating all non-consensual content, CSAM and any other content uploaded without the meaningful consent of all parties. We requested that the goal of the independent review be to identify the requisite steps to achieve a "best-in-class" content compliance program that sets the standard for the technology industry. Kaplan Hecker & Fink LLP is continuing its review, but has already identified and categorized a comprehensive inventory of remedial recommendations, supported by dozens of additional sub-recommendations, in addition to the steps identified above, based on an evaluation and assessment of our current policies and practices. Kaplan Hecker & Fink LLP is soliciting information to assist with its review and in developing recommendations regarding our compliance policies and procedures. If you would like to provide compliance suggestions, you can do so here .

Update: Pornhub: Canadian MPs Finally Invite Sex Worker Advocates

20th April 2021. See article from xbiz.com

Biased Canadian ethics committee shamed into listening to the other side of the argument in a diatribe against Pornhub.

 

 

Harming democratic expression...

The misleadingly titled Canadian Commission on 'Democratic Expression' bizarrely calls for internet censorship


Link Here 27th January 2021

Following nine months of study and deliberations, the Canadian Commission on Democratic Expression has settled on a series of principles and recommendations that can lead to a practical course of action. What we set forth is a series of functional steps to enable citizens, governments and platforms to deal with the matter of harmful speech in a free and democratic, rights-based society like Canada. We recognize the complexity of the issues at play and offer these as a path forward and with the knowledge they will be subject to further debate and molding.

PRINCIPLES

  • Free speech is fundamental to a democratic society, and the internet enables more people to participate in public discussions and debates.

  • The rise of hatred, disinformation, conspiracies, bullying and other harmful communications online is undermining these gains and having a corrosive impact on democratic expression in Canada.

  • The status quo of leaving content moderation to the sole discretion of platforms has failed to stem the spread of these harms, and platform companies can find themselves in conflict between their private interests and the public good.

  • We find fault with the notion that platforms are neutral disseminators of information. Platforms curate content to serve their commercial interests and so must assume greater responsibility for the harms they amplify and spread.

  • Government must play a more active role in furthering the cause of democratic expression and protecting Canadians from online harms.

  • Any policy response must put citizens first, reduce online harms and guard against the potential for over-censorship of content in putting forth remedies. This requires a balanced and multi-pronged approach.

These principles have led the Commission to an integrated program of six scaffolding recommendations.

RECOMMENDATIONS

1. A new legislated duty on platforms to act responsibly.

Establishment by Parliament of a statutory Duty to Act Responsibly imposing an affirmative requirement on platforms under legislation and regulation, including social media companies, large messaging groups, search engines and other internet operators involved in the dissemination of user-generated and third-party content. In addressing harms, the details of this duty must take account of principles such as the fundamental nature of free speech.

2. A new regulator to oversee and enforce the Duty to Act Responsibly.

Creation of a new regulatory body, operating within legislated guidelines, that represents the public interest and moves content moderation and platform governance beyond the exclusive preserve of private sector companies. The regulator would oversee a Code of Conduct to guide the actions of parties under its supervision, while recognizing that not all platforms can be treated in precisely the same manner. Regulatory decisions will be judicially made, based in the rule of law and subject to a process of review.

3. A Social Media Council to serve as an accessible forum in reducing harms and improving democratic expression on the internet.

Ensuring an inclusive dialogue on ongoing platform governance policies and practices, including content moderation, through a broadly based social media council that places platforms, civil society, citizens and other interested parties around the same table.

4. A world-leading transparency regime to provide the flow of necessary information to the regulator and Social Media Council.

Embedding significant, world-leading transparency mechanisms at the core of the mandate for the regulator and Social Media Council -- on data, ads, bots and the right to compel information. This will also assist researchers, journalists and members of the public with access to the information required for a publicly accountable system.

5. Avenues to enable individuals and groups to deal with complaints of harmful content in an expeditious manner. An e-tribunal to facilitate and expedite dispute resolution and a process for addressing complaints swiftly and lightly before they become disputes.

Creating rapid and accessible recourse to content-based dispute settlement by a dedicated e-tribunal charged with addressing online content disputes in a timely manner. And creating a process that enables targets of harms to compel platforms to make creators aware of a complaint.

6. A mechanism to quickly remove content that presents an imminent threat to a person.

Development of a quick-response system under the authority of the regulator to ensure the rapid removal of content -- even temporarily -- that creates a reasonable apprehension of an imminent threat to the health and safety of the targeted party.

The Commission considered imposing takedown requirements on platforms as some nations have done. These generally identify offending categories of content, provide a fixed window, such as 24 hours, for it to be removed and may levy significant penalties. We are concerned that such systems could create incentives for over-censorship by platform companies. Our recommendations do less to circumscribe speech in recognition of the fact that harmful speech and the unjustified denial of freedom of expression are both problematic.

The standards of the Duty to Act Responsibly are purposely left vague at this point to give government, the regulator, and the Social Media Council an opportunity to flesh it out as part of a Code of Conduct. To be sure, we are not recommending the creation of a self-standing new tort as the basis for a cause of action, but rather imposing affirmative requirements on the platforms to be developed under legislation and regulation. We expect the Code will evolve given the recentness of the problem and the rapid evolution of the internet.

 

 

On the hate bandwagon...

Canadian government to introduce new internet censorship laws in the name of 'hate speech'


Link Here 16th January 2021
The Canadian government plans to introduce new internet censorship laws in the name of targeting hate speech on social media platforms. The laws will reportedly be tabled in 2021.

A briefing note on the new regulations from Canadian Heritage Minister Steven Guilbeault's department stated:

We are working to introduce regulations to reduce the spread of illegal content, including hate speech, in order to promote a safer and more inclusive online environment. We want to protect Canadians online.

The briefing added that:

Social media platforms can also be used to threaten, intimidate, bully and harass people, or used to promote racist, anti-Semitic, Islamophobic, misogynist and homophobic views that target communities, put people's safety at risk and undermine Canada's social cohesion or democracy.


