Melon Farmers Original Version

Facebook Censorship since 2020


Left wing bias, prudery and multiple 'mistakes'


 

Offsite Article: Facebook's purge of left-wing radicals...


Link Here 4th September 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Having abandoned free speech, the left is in no position to defend itself from censorship. By Fraser Myers

See article from spiked-online.com

 

 

Price war...

Facebook says that if Australia forces social media to share news stories then Facebook will ban its users from sharing news articles


Link Here 1st September 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Facebook explains in a blog post:

Australia is drafting a new regulation that misunderstands the dynamics of the internet and will do damage to the very news organisations the government is trying to protect. When crafting this new legislation, the commission overseeing the process ignored important facts, most critically the relationship between the news media and social media and which one benefits most from the other.

Assuming this draft code becomes law, we will reluctantly stop allowing publishers and people in Australia to share local and international news on Facebook and Instagram. This is not our first choice -- it is our last. But it is the only way to protect against an outcome that defies logic and will hurt, not help, the long-term vibrancy of Australia's news and media sector.

We share the Australian Government's goal of supporting struggling news organisations, particularly local newspapers, and have engaged extensively with the Australian Competition and Consumer Commission that has led the effort. But its solution is counterproductive to that goal. The proposed law is unprecedented in its reach and seeks to regulate every aspect of how tech companies do business with news publishers. Most perplexing, it would force Facebook to pay news organisations for content that the publishers voluntarily place on our platforms and at a price that ignores the financial value we bring publishers.

The ACCC presumes that Facebook benefits most in its relationship with publishers, when in fact the reverse is true. News represents a fraction of what people see in their News Feed and is not a significant source of revenue for us. Still, we recognize that news plays a vitally important role in society and democracy, which is why we offer free tools and training to help media companies reach an audience many times larger than they have previously.

News organisations in Australia and elsewhere choose to post news on Facebook for this precise reason, and they encourage readers to share news across social platforms to increase readership of their stories. This in turn allows them to sell more subscriptions and advertising. Over the first five months of 2020 we sent 2.3 billion clicks from Facebook's News Feed back to Australian news websites at no charge -- additional traffic worth an estimated $200 million AUD to Australian publishers.

We already invest millions of dollars in Australian news businesses and, during discussions over this legislation, we offered to invest millions more. We had also hoped to bring Facebook News to Australia, a feature on our platform exclusively for news, where we pay publishers for their content. Since it launched last year in the US, publishers we partner with have seen the benefit of additional traffic and new audiences.

But these proposals were overlooked. Instead, we are left with a choice of either removing news entirely or accepting a system that lets publishers charge us for as much content as they want at a price with no clear limits. Unfortunately, no business can operate that way.

Facebook products and services in Australia that allow family and friends to connect will not be impacted by this decision. Our global commitment to quality news around the world will not change either. And we will continue to work with governments and regulators who rightly hold our feet to the fire. But successful regulation, like the best journalism, will be grounded in and built on facts. In this instance, it is not.

 

 

Election notices...

Facebook announces that it will censor content to protect itself against being prosecuted under local laws


Link Here 1st September 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Facebook has announced changes to its Terms of Service that will allow it to remove content or restrict access if the company thinks it is necessary to avoid legal or regulatory impact.

Facebook users have started receiving notifications regarding a change to its Terms of Service which state:

Effective October 1, 2020, section 3.2 of our Terms of Service will be updated to include: We also can remove or restrict access to your content, services or information if we determine that doing so is reasonably necessary to avoid or mitigate adverse legal or regulatory impacts to Facebook.

It is not clear whether this action is in response to particular laws or perhaps this references creeping censorship being implemented worldwide. Of course it could be a pretext to continuing to impose biased political censorship in the run up to the US presidential election.

 

 

Proving the conspiracy...

Facebook bans 790 groups connected to QAnon, whose followers believe that state level organisations are conspiring to silence them


Link Here 20th August 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Facebook writes:

An Update to How We Address Movements and Organizations Tied to Violence

Today we are taking action against Facebook Pages, Groups and Instagram accounts tied to offline anarchist groups that support violent acts amidst protests, US-based militia organizations and QAnon. We already remove content calling for or advocating violence and we ban organizations and individuals that proclaim a violent mission. However, we have seen growing movements that, while not directly organizing violence, have celebrated violent acts, shown that they have weapons and suggest they will use them, or have individual followers with patterns of violent behavior. So today we are expanding our Dangerous Individuals and Organizations policy to address organizations and movements that have demonstrated significant risks to public safety but do not meet the rigorous criteria to be designated as a dangerous organization and banned from having any presence on our platform. While we will allow people to post content that supports these movements and groups, so long as they do not otherwise violate our content policies, we will restrict their ability to organize on our platform.

Under this policy expansion, we will impose restrictions to limit the spread of content from Facebook Pages, Groups and Instagram accounts. We will also remove Pages, Groups and Instagram accounts where we identify discussions of potential violence, including when they use veiled language and symbols particular to the movement to do so.

We will take the following actions -- some effective immediately, and others coming soon:

  • Remove From Facebook : Pages, Groups and Instagram accounts associated with these movements and organizations will be removed when they discuss potential violence. We will continue studying specific terminology and symbolism used by supporters to identify the language used by these groups and movements indicating violence and take action accordingly.

  • Limit Recommendations : Pages, Groups and Instagram accounts associated with these movements that are not removed will not be eligible to be recommended to people when we suggest Groups you may want to join or Pages and Instagram accounts you may want to follow.

  • Reduce Ranking in News Feed : In the near future, content from these Pages and Groups will also be ranked lower in News Feed, meaning people who already follow these Pages and are members of these Groups will be less likely to see this content in their News Feed.

  • Reduce in Search : Hashtags and titles of Pages, Groups and Instagram accounts restricted on our platform related to these movements and organizations will be limited in Search: they will not be suggested through our Search Typeahead function and will be ranked lower in Search results.

  • Reviewing Related Hashtags on Instagram: We have temporarily removed the Related Hashtags feature on Instagram, which allows people to find hashtags similar to those they are interacting with. We are working on stronger protections for people using this feature and will continue to evaluate how best to re-introduce it.

  • Prohibit Use of Ads, Commerce Surfaces and Monetization Tools : Facebook Pages related to these movements will be prohibited from running ads or selling products using Marketplace and Shop. In the near future, we'll extend this to prohibit anyone from running ads praising, supporting or representing these movements.

  • Prohibit Fundraising : We will prohibit nonprofits we identify as representing or seeking to support these movements, organizations and groups from using our fundraising tools. We will also prohibit personal fundraisers praising, supporting or representing these organizations and movements.

As a result of some of the actions we've already taken, we've removed over 790 groups, 100 Pages and 1,500 ads tied to QAnon from Facebook, blocked over 300 hashtags across Facebook and Instagram, and additionally imposed restrictions on over 1,950 Groups and 440 Pages on Facebook and over 10,000 accounts on Instagram. These numbers reflect differences in how Facebook and Instagram are used, with fewer Groups on Facebook with higher membership rates and a greater number of Instagram accounts with fewer followers comparably. Those Pages, Groups and Instagram accounts that have been restricted are still subject to removal as our team continues to review their content against our updated policy, as will others we identify subsequently. For militia organizations and those encouraging riots, including some who may identify as Antifa, we've initially removed over 980 groups, 520 Pages and 160 ads from Facebook. We've also restricted over 1,400 hashtags related to these groups and organizations on Instagram.

Today's update focuses on our Dangerous Individuals and Organizations policy but we will continue to review content and accounts against all of our content policies in an effort to keep people safe. We will remove content from these movements that violate any of our policies, including those against fake accounts, harassment, hate speech and/or inciting violence. Misinformation that does not put people at risk of imminent violence or physical harm but is rated false by third-party fact-checkers will be reduced in News Feed so fewer people see it. And any non-state actor or group that qualifies as a dangerous individual or organization will be banned from our platform. Our teams will also study trends in attempts to skirt our enforcement so we can adapt. These movements and groups evolve quickly, and our teams will follow them closely and consult with outside experts so we can continue to enforce our policies against them.

 

 

The technology of censorship...

Facebook outlines its technology now used to censor user posts


Link Here 12th August 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Facebook described its technology improvements used for the censorship of Facebook posts:

The biggest change has been the role of technology in content moderation. As our Community Standards Enforcement Report shows, our technology to detect violating content is improving and playing a larger role in content review. Our technology helps us in three main areas:

  • Proactive Detection: Artificial intelligence (AI) has improved to the point that it can detect violations across a wide variety of areas without relying on users to report content to Facebook, often with greater accuracy than reports from users. This helps us detect harmful content and prevent it from being seen by hundreds or thousands of people.

  • Automation: AI has also helped scale the work of our content reviewers. Our AI systems automate decisions for certain areas where content is highly likely to be violating. This helps scale content decisions without sacrificing accuracy so that our reviewers can focus on decisions where more expertise is needed to understand the context and nuances of a particular situation. Automation also makes it easier to take action on identical reports, so our teams don't have to spend time reviewing the same things multiple times. These systems have become even more important during the COVID-19 pandemic with a largely remote content review workforce.

  • Prioritization: Instead of simply looking at reported content in chronological order, our AI prioritizes the most critical content to be reviewed, whether it was reported to us or detected by our proactive systems. This ranking system prioritizes the content that is most harmful to users based on multiple factors such as virality, severity of harm and likelihood of violation. In an instance where our systems are near-certain that content is breaking our rules, they may remove it. Where there is less certainty they will prioritize the content for teams to review.

Together, these three aspects of technology have transformed our content review process and greatly improved our ability to moderate content at scale. However, there are still areas where it's critical for people to review. For example, discerning if someone is the target of bullying can be extremely nuanced and contextual. In addition, AI relies on a large amount of training data from reviews done by our teams in order to identify meaningful patterns of behavior and find potentially violating content.

That's why our content review system needs both people and technology to be successful. Our teams focus on cases where it's essential to have people review and we leverage technology to help us scale our efforts in areas where it can be most effective.
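The prioritisation scheme Facebook describes (scoring each post on virality, severity of harm and likelihood of violation, then reviewing the worst first) can be sketched as a toy priority queue. The factor names come from Facebook's description; the weights, example scores and data structures below are invented for illustration and are not Facebook's actual system.

```python
# Toy sketch of review prioritisation: score each reported or detected post
# on virality, severity and likelihood of violation, review worst-first.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class ReviewItem:
    priority: float
    post_id: str = field(compare=False)  # excluded from ordering

def priority_score(virality: float, severity: float, likelihood: float) -> float:
    """Combine the three factors (each normalised to 0..1) into one score.
    The weights are invented for illustration."""
    return 0.3 * virality + 0.5 * severity + 0.2 * likelihood

queue: list[ReviewItem] = []
for post_id, v, s, l in [
    ("post-a", 0.9, 0.2, 0.4),   # viral but low harm
    ("post-b", 0.3, 0.9, 0.8),   # low reach, but severe and likely violating
]:
    # heapq is a min-heap, so push the negated score to pop worst-first
    heapq.heappush(queue, ReviewItem(-priority_score(v, s, l), post_id))

worst_first = [heapq.heappop(queue).post_id for _ in range(len(queue))]
print(worst_first)  # the severe post-b is reviewed before the merely viral post-a
```

The point of such a scheme is that review order tracks estimated harm rather than arrival time, which matches Facebook's stated move away from chronological review.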

 

 

State lawyers bully Facebook...

20 US state attorneys general call on Facebook to censor more


Link Here 7th August 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
US Attorneys General from 20 different states have sent a letter urging Facebook to do a better job at censoring content. They wrote:

We, the undersigned State Attorneys General, write to request that you take additional steps to prevent Facebook from being used to spread disinformation and hate and to facilitate discrimination. We also ask that you take more steps to provide redress for users who fall victim to intimidation and harassment, including violence and digital abuse.

...

As part of our responsibilities to our communities, Attorneys General have helped residents navigate Facebook's processes for victims to address abuse on its platform. While Facebook has--on occasion--taken action to address violations of its terms of service in cases where we have helped elevate our constituents' concerns, we know that everyday users of Facebook can find the process slow, frustrating, and ineffective. Thus, we write to highlight positive steps that Facebook can take to strengthen its policies and practices.

The letter was written by the Attorneys General of New Jersey, Illinois, and the District of Columbia, and addressed to CEO Mark Zuckerberg and COO Sheryl Sandberg. It was co-signed by 17 other Democrat AGs from states such as New York, California, Pennsylvania, Maryland, and Virginia.

The letter proceeds to highlight seven steps they think Facebook should take to better police content to avoid online abuse. They recommended things such as aggressive enforcement of hate speech policies, third-party enforcement and auditing of hate speech, and real-time assistance for users to report harassment.

 

 

Updated: Brand censorship...

Brands demand that Facebook censors news that offends identitarian sensitivities


Link Here 27th June 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Facebook has said it will start to label potentially harmful posts that it leaves up because of their news value. The move comes as the firm faces growing pressure to censor the content on its platform.

More than 90 advertisers have joined a boycott of the site, including consumer goods giant Unilever on Friday. The Stop Hate for Profit campaign was started by US civil rights groups after the death of George Floyd in May while in police custody. It has focused on Facebook, which also owns Instagram and WhatsApp. The organisers, which include Color of Change and the National Association for the Advancement of Colored People, have said Facebook allows racist, violent and verifiably false content to run rampant on its platform.

Unilever said it would halt Twitter, Facebook and Instagram advertising in the US at least through 2020.

In a speech on Friday, Facebook boss Mark Zuckerberg defended the firm's record of taking down hate speech. But he said the firm was tightening its policies to address the reality of the challenges our country is facing and how they're showing up across our community. In addition to introducing labels, Facebook will ban ads that describe people from different groups, based on factors such as race or immigration status, as a threat. He said:

A handful of times a year, we leave up content that would otherwise violate our policies if the public interest value outweighs the risk of harm. Often, seeing speech from politicians is in the public interest, and in the same way that news outlets will report what a politician says, we think people should generally be able to see it for themselves on our platforms.

We will soon start labelling some of the content we leave up because it is deemed newsworthy, so people can know when this is the case.

He added that Facebook would remove content - even from politicians - if it determines that it incites violence or suppresses voting.

 

Update: Coke too

27th June 2020. See article from bbc.co.uk

Coca-Cola will suspend advertising on social media globally for at least 30 days, as pressure builds on platforms to crack down on hate speech. Chairman and CEO James Quincey said:

There is no place for racism in the world and there is no place for racism on social media.

He demanded greater accountability and transparency from social media firms.

 

 

Mentally challenged...

Facebook opens an AI challenge to help it to censor hateful messages hidden in memes


Link Here 16th May 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Facebook is seeking help in the censorship of hateful messages that have been encoded into memes. The company writes in a post:

In order for AI to become a more effective tool for detecting hate speech, it must be able to understand content the way people do: holistically. When viewing a meme, for example, we don't think about the words and photo independently of each other; we understand the combined meaning together. This is extremely challenging for machines, however, because it means they can't just analyze the text and the image separately. They must combine these different modalities and understand how the meaning changes when they are presented together. To catalyze research in this area, Facebook AI has created a data set to help build systems that better understand multimodal hate speech. Today, we are releasing this Hateful Memes data set to the broader research community and launching an associated competition, hosted by DrivenData with a $100,000 prize pool.

The challenges of harmful content affect the entire tech industry and society at large. As with our work on initiatives like the Deepfake Detection Challenge and the Reproducibility Challenge, Facebook AI believes the best solutions will come from open collaboration by experts across the AI community.

We continue to make progress in improving our AI systems to detect hate speech and other harmful content on our platforms, and we believe the Hateful Memes project will enable Facebook and others to do more to keep people safe.
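Facebook's point about combined meaning (a benign caption and a benign image can be hateful only in combination) can be illustrated with a toy scorer. The explicit interaction term, the weights and the threshold below are all invented assumptions; real multimodal systems learn fused representations with large neural models rather than hand-set numbers.

```python
# Toy illustration of "early fusion" with an interaction term: the product
# of the two modality signals lets the scorer react to combinations that
# neither the text nor the image signals strongly on its own.

def fuse(text_signal: float, image_signal: float) -> list[float]:
    """Concatenate per-modality signals plus their interaction."""
    return [text_signal, image_signal, text_signal * image_signal]

def hate_score(features: list[float], weights: list[float]) -> float:
    """A linear scorer standing in for a trained multimodal classifier."""
    return sum(f * w for f, w in zip(features, weights))

WEIGHTS = [0.3, 0.3, 1.5]   # invented: most weight on the interaction slot
THRESHOLD = 0.8             # invented review threshold

meme = hate_score(fuse(0.6, 0.6), WEIGHTS)        # both halves mildly loaded
text_only = hate_score(fuse(0.6, 0.0), WEIGHTS)   # same caption, benign image
print(meme > THRESHOLD, text_only > THRESHOLD)    # True False
```

Analysing the caption or the image in isolation leaves both below the threshold; only the fused pair crosses it, which is the difficulty the Hateful Memes data set is meant to probe.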

 

 

The Facebook Oversight Board...

Former Guardian editor appointed to Facebook's new censorship appeals board


Link Here 14th May 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'

The Oversight Board writes:

We know that social media can spread speech that is hateful, harmful and deceitful. In recent years, the question of what content should stay up or come down, and who should decide this, has become increasingly urgent for society. Every content decision made by Facebook impacts people and communities. All of them deserve to understand the rules that govern what they are sharing, how these rules are applied, and how they can appeal those decisions.

The Oversight Board represents a new model of content moderation for Facebook and Instagram and today we are proud to announce our initial members. The Board will take final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram.

The Board will review whether content is consistent with Facebook and Instagram's policies and values, as well as a commitment to upholding freedom of expression within the framework of international norms of human rights. We will make decisions based on these principles, and the impact on users and society, without regard to Facebook's economic, political or reputational interests. Facebook must implement our decisions, unless implementation could violate the law.

The four Co-Chairs and 16 other Members announced today are drawn from around the world. They speak over 27 languages and represent diverse professional, cultural, political, and religious backgrounds and viewpoints. Over time we expect to grow the Board to around 40 Members. While we cannot claim to represent everyone, we are confident that our global composition will underpin, strengthen and guide our decision-making.

All Board Members are independent of Facebook and all other social media companies. In fact, many of us have been publicly critical of how the company has handled content issues in the past. Members contract directly with the Oversight Board, are not Facebook employees and cannot be removed by Facebook. Our financial independence is also guaranteed by the establishment of a $130 million trust fund that is completely independent of Facebook, which will fund our operations and cannot be revoked. All of this is designed to protect our independent judgment and enable us to make decisions free from influence or interference.

When we begin hearing cases later this year, users will be able to appeal to the Board in cases where Facebook has removed their content, but over the following months we will add the opportunity to review appeals from users who want Facebook to remove content.

Users who do not agree with the result of a content appeal to Facebook can refer their case to the Board by following guidelines that will accompany the response from Facebook. At this stage the Board will inform the user if their case will be reviewed.

The Board can also review content referred to it by Facebook. This could include many significant types of decisions, including content on Facebook or Instagram, on advertising, or Groups. The Board will also be able to make policy recommendations to Facebook based on our case decisions.

See first 20 members in blog post from oversightboard.com

The list includes a British panel member, Alan Rusbridger, a former editor of The Guardian, perhaps giving a hint of a 'progressive' leaning to proceedings.

Offsite Comment: Facebook's free-speech panel doesn't believe in free speech

14th May 2020. See article from spiked-online.com

Alan Rusbridger, one-time cheerleader of press regulation, is among the members.

 

 

Distanced from free speech...

Facebook censors anti-lockdown protests if prohibited by the state


Link Here 21st April 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Facebook says it will consult with state governments on their lockdown orders and will shut down pages planning anti-quarantine protests accordingly.

Events that defy governments' guidance on social distancing have also been banned from Facebook.

The move has been opposed by Donald Trump Jr and the Missouri Senator Josh Hawley, who argue that Facebook is violating Americans' First Amendment rights.

Facebook said it has already removed protest messages in California, New Jersey and Nebraska. However, protests are still being organized on Facebook. A massive protest took place in Harrisburg, Pennsylvania on Monday afternoon that was organized on the Facebook group Pennsylvanians against Excessive Quarantine Orders.

 

 

Offsite Article: Nipples, Facebook, and what our society deems decent...


Link Here 18th April 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Why there's a danger in allowing a single entity to influence what our society deems decent. By Katie Wheeler

See article from theguardian.com

 

 

Charting a Way Forward on Online Content Censorship...

Facebook seems to be suggesting that if governments are so keen on censoring people's speech then perhaps the governments should take over the censorship job entirely...


Link Here 18th February 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'

Today, we're publishing a white paper setting out some questions that regulation of online content might address.

Charting a Way Forward: Online Content Regulation builds on recent developments on this topic, including legislative efforts and scholarship.

The paper poses four questions which go to the heart of the debate about regulating content online:

  • How can content regulation best achieve the goal of reducing harmful speech while preserving free expression? By requiring systems such as user-friendly channels for reporting content or external oversight of policies or enforcement decisions, and by requiring procedures such as periodic public reporting of enforcement data, regulation could provide governments and individuals the information they need to accurately judge social media companies' efforts.

  • How can regulations enhance the accountability of internet platforms? Regulators could consider certain requirements for companies, such as publishing their content standards, consulting with stakeholders when making significant changes to standards, or creating a channel for users to appeal a company's content removal or non-removal decision.

  • Should regulation require internet companies to meet certain performance targets? Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.

  • Should regulation define which "harmful content" should be prohibited on the internet? Laws restricting speech are generally implemented by law enforcement officials and the courts. Internet content moderation is fundamentally different. Governments should create rules to address this complexity -- that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context.

Guidelines for Future Regulation

The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms. The following principles are based on lessons we've learned from our work in combating harmful content and our discussions with others.

  • Incentives. Ensuring accountability in companies' content moderation systems and procedures will be the best way to create the incentives for companies to responsibly balance values like safety, privacy, and freedom of expression.

  • The global nature of the internet. Any national regulatory approach to addressing harmful content should respect the global scale of the internet and the value of cross-border communications, and should aim to increase interoperability among regulators and regulations.

  • Freedom of expression. In addition to complying with Article 19 of the ICCPR (and related guidance), regulators should consider the impacts of their decisions on freedom of expression.

  • Technology. Regulators should develop an understanding of the capabilities and limitations of technology in content moderation and allow internet companies the flexibility to innovate. An approach that works for one particular platform or type of content may be less effective (or even counterproductive) when applied elsewhere.

  • Proportionality and necessity. Regulators should take into account the severity and prevalence of the harmful content in question, its status in law, and the efforts already underway to address the content.

If designed well, new frameworks for regulating harmful content can contribute to the internet's continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation.

We hope today's white paper helps to stimulate further conversation around the regulation of content online. It builds on a paper we published last September on data portability , and we plan on publishing similar papers on elections and privacy in the coming months.

 

 

A poisoned chalice...

Mark Zuckerberg thinks it is about time that governments took over the job of internet censorship


Link Here 17th February 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Facebook boss Mark Zuckerberg has called for more regulation of harmful online content, saying it was not for companies like his to decide what counts as legitimate free speech.

He was speaking at the Munich Security Conference in Germany. He said:

We don't want private companies making so many decisions about how to balance social equities without any more democratic process.

The Facebook founder urged governments to come up with a new regulatory system for social media, suggesting it should be a mix of existing rules for telecoms and media companies. He added:

In the absence of that kind of regulation we will continue doing our best,

But I actually think on a lot of these questions that are trying to balance different social equities it is not just about coming up with the right answer, it is about coming up with an answer that society thinks is legitimate.

During his time in Europe, Zuckerberg is expected to meet politicians in Munich and Brussels to discuss data practices, regulation and tax reform.

 

 

Too many governments defining online harms that need censoring...

Mark Zuckerberg pushes back against too much censorship on Facebook


Link Here 2nd February 2020
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'
Mark Zuckerberg has declared that Facebook is going to stand up for free expression in spite of the fact it will piss off a lot of people.

He made the claim during a fiery appearance at the Silicon Slopes Tech Summit in Utah on Friday. Zuckerberg told the audience that Facebook had previously tried to restrict content that would be branded as too offensive - but says he now believes he is being asked to partake in excessive censorship:

Increasingly we're getting called to censor a lot of different kinds of content that makes me really uncomfortable. We're going to take down the content that's really harmful, but the line needs to be held at some point.

It kind of feels like the list of things that you're not allowed to say socially keeps on growing, and I'm not really okay with that.

This is the new approach [free expression], and I think it's going to piss off a lot of people. But frankly the old approach was pissing off a lot of people too, so let's try something different.




 
