
Internet News


March 2021

 

Sexual Expression is Being Banned Online...

Free Speech Coalition Europe petitions the EU to consider the rights of sex workers in upcoming internet censorship laws


Link Here 29th March 2021
Full story: Internet Censorship in EU...EU introduces swathes of internet censorship law
The Free Speech Coalition Europe is a group representing the adult trade. It has organised a petition to The Members of the European Parliament of the IMCO, JURI and LIBE Committees on the subject of how new EU internet censorship laws will impact sex workers. The petition reads:

10 Steps to a Safer Digital Space that Protects the Rights of Sexuality Professionals, Artists and Educators

"Online platforms have become integral parts of our daily lives, economies, societies and democracies."

Not our words but those of the European Commission. And after more than a year in the grips of a global pandemic, this statement rings truer than ever before. So why are some of society's already most marginalised people being excluded from these necessary spaces?

Sexual Expression is Being Banned Online

Sex in almost all its guises is being repressed in the public online sphere and on social media like never before. Accounts focused on sexuality -- from sexuality professionals, adult performers and sex workers to artists, activists and LGBTIQ folks, publications and organisations -- are being deleted without warning or explanation and with little regulation by private companies that are currently able to enforce discriminatory changes to their terms and conditions without explanation or accountability to those affected by these changes. Additionally, in many cases it is impossible for the users to have their accounts reinstated -- accounts that are often vitally linked to the users' ability to generate income, network, organise and share information.

Unpacking the Digital Services Act (DSA)

At the same time as sexual expression is being erased from digital spaces, new legislation is being passed in the European Union to safeguard internet users' online rights. The European Commission's Digital Services Act and Digital Markets Act encompass upgraded rules governing digital services with their focus, in part, on building a safer and more open digital space. These rules will apply to online intermediary services used by millions every day, including major platforms such as Facebook, Instagram and Twitter. Amongst other things, they advocate for greater transparency from platforms, better-protected consumers and empowered users.

With the DSA promising to "shape Europe's digital future" and "to create a safer digital space in which the fundamental rights of all users of digital services are protected", it's time to demand that it's a future that includes those working, creating, organising and educating in the realm of sexuality. As we consider what a safer digital space can and should look like, it's also time to challenge the pervasive and frankly puritanical notion that sexuality -- a normal and healthy part of our lives -- is somehow harmful, shameful or hateful.

How the DSA Can Get It Right

The DSA is advocating for "effective safeguards for users, including the possibility to challenge platforms' content moderation decisions". In addition to this, the Free Speech Coalition Europe demands the following:

  • Platforms need to put in place anti-discrimination policies and train their content moderators so as to avoid discrimination on the basis of gender, sexual orientation, race, or profession -- the same community guidelines need to apply as much to an A-list celebrity or mainstream media outlet as they do to a stripper or queer collective;

  • Platforms must provide the reason to the user when a post is deleted or an account is restricted or deleted. Shadowbanning is an underhanded means of suppressing users' voices. Users should have the right to be informed when they are shadowbanned and to challenge the decision;

  • Platforms must allow the user to request a review of a content moderation decision, and must ensure moderation decisions are made in the user's location, rather than in arbitrary jurisdictions which may have different laws or customs; e.g., a user in Germany cannot be banned on the basis of reports and moderation in the Middle East, but must be reviewed by the European moderation team;

  • Decision-making on notices of reported content as specified in Article 14 of the DSA should not be handled by automated software, as these have proven to delete content indiscriminately. A human should place final judgement.

  • The notice of content as described in Article 14.2 of the DSA should not immediately hold a platform liable for the content as stated in Article 14.3, since such liability will entice platforms to delete reported content indiscriminately in order to avoid it, which enables organised hate groups to mass report and take down users;

  • Platforms must provide for a department (or, at the very least, a dedicated contact person) within the company for complaints regarding discrimination or censorship;

  • Platforms must provide a means to indicate whether you are over the age of 18 as well as providing a means for adults to hide their profiles and content from children (e.g. marking profiles as 18+); Platforms must give the option to mark certain content as "sensitive";

  • Platforms must not reduce the features available to those who mark themselves as adult or adult-oriented (i.e. those who have marked their profiles as 18+ or content as "sensitive"). These profiles should then appear as 18+ or "sensitive" when accessed without a login or without set age, but should not be excluded from search results or appear as "non-existing";

  • Platforms must set clear, consistent and transparent guidelines about what content is acceptable, however, these guidelines cannot outright ban users focused on adult themes; e.g., you could ban highly explicit pornography (e.g., sexual intercourse videos that show penetration), but you'd still be able to post an edited video that doesn't show penetration;

  • Platforms cannot outright ban content intended for adult audiences, unless a platform is specifically for children, or >50% of their active users are children.

 

 

No comments...

Government notes that porn websites without user comments or uploads will not be within the censorship regime of the upcoming Online Safety Bill


Link Here 27th March 2021
Written Question, answered on 24 March 2021

Baroness Grender (Liberal Democrat)

To ask Her Majesty's Government which commercial pornography companies will be in scope of the Online Safety Bill; and whether commercial pornography websites which

  1. do not host user-generated content, or

  2. allow private user communication, will also be in scope.

Baroness Barran (Conservative)

The government is committed to ensuring children are protected from accessing online pornography through the new online safety framework. Where pornography sites host user-generated content or facilitate online user interaction such as video and image sharing, commenting and live streaming, they will be subject to the new duty of care. Commercial pornography sites which allow private user to user communication will be in scope. Where commercial pornography sites do not have user-generated functionality they will not be in scope. The online safety regime will capture both the most visited pornography sites and pornography on social media, therefore covering the majority of sites where children are most likely to be exposed to pornography.

We expect companies to use age assurance or age verification technologies to prevent children from accessing services which pose the highest risk of harm to children, such as online pornography. We are working closely with stakeholders across industry to establish the right conditions for the market to deliver age assurance and age verification technical solutions ahead of the legislative requirements coming into force.

 

 

Offsite Article: Free speech friendly video sharing platforms...


Link Here 27th March 2021
Full story: YouTube Censorship...YouTube censor videos by restricting their reach
A few suggestions that are not controlled by a Big Tech giant and that support free expression.

See article from reclaimthenet.org

 

 

Ofcom thinks it can 'regulate' cancel culture, PC lynch mobs and the kangaroo courts of wokeness...

The new internet censor sets out its stall for the censorship of video sharing platforms


Link Here 24th March 2021
Full story: Ofcom Video Sharing Censors...Video on Demand and video sharing
Ofcom has published its upcoming censorship rules for video sharing platforms and invites public responses up until 2nd June 2021. As a bit of self-justification for its censorship, Ofcom has commissioned a survey finding that users of YouTube and the like are calling out for Ofcom censorship. Ofcom writes:

A third of people who use online video-sharing services have come across hateful content in the last three months, according to a new study by Ofcom.

The news comes as Ofcom proposes new guidance for sites and apps known as 'video-sharing platforms' (VSPs), setting out practical steps to protect users from harmful material.

VSPs are a type of online video service where users can upload and share videos with other members of the public. They allow people to engage with a wide range of content and social features.

Under laws introduced by Parliament last year, VSPs established in the UK must take measures to protect under-18s from potentially harmful video content; and all users from videos likely to incite violence or hatred, as well as certain types of criminal content. Ofcom's job is to enforce these rules and hold VSPs to account.

The draft guidance is designed to help these companies understand what is expected of them under the new rules, and to explain how they might meet their obligations in relation to protecting users from harm.

Harmful experiences uncovered

To inform our approach, Ofcom has researched how people in the UK use VSPs, and their claimed exposure to potentially harmful content. Our major findings are: 

  • Hate speech. A third of users (32%) say they have witnessed or experienced hateful content. Hateful content was most often directed towards a racial group (59%), followed by religious groups (28%), transgender people (25%) and those of a particular sexual orientation (23%).

  • Bullying, abuse and violence. A quarter (26%) of users claim to have been exposed to bullying, abusive behaviour and threats, and the same proportion came across violent or disturbing content.

  • Racist content. One in five users (21%) say they witnessed or experienced racist content, with levels of exposure higher among users from minority ethnic backgrounds (40%), compared to users from a white background (19%). 

  • Most users encounter potentially harmful videos of some sort. Most VSP users (70%) say they have been exposed to a potentially harmful experience in the last three months, rising to 79% among 13-17 year-olds.

  • Low awareness of safety measures. Six in 10 VSP users are unaware of platforms' safety and protection measures, while only a quarter have ever flagged or reported harmful content.

Guidance for protecting users

As Ofcom begins its new role regulating video-sharing platforms, we recognise that the online world is different to other regulated sectors. Reflecting the nature of video-sharing platforms, the new laws in this area focus on measures providers must consider taking to protect their users -- and they afford companies flexibility in how they do that.

The massive volume of online content means it is impossible to prevent every instance of harm. Instead, we expect VSPs to take active measures against harmful material on their platforms. Ofcom's new guidance is designed to assist them in making judgements about how best to protect their users. In line with the legislation, our guidance proposes that all video-sharing platforms should provide:

  • Clear rules around uploading content. VSPs should have clear, visible terms and conditions which prohibit users from uploading the types of harmful content set out in law. These should be enforced effectively.

  • Easy flagging and complaints for users. Companies should implement tools that allow users to quickly and effectively report or flag harmful videos, signpost how quickly they will respond, and be open about any action taken. Providers should offer a route for users to formally raise issues or concerns with the platform, and to challenge decisions through dispute resolution. This is vital to protect the rights and interests of users who upload and share content.

  • Restricting access to adult sites. VSPs with a high prevalence of pornographic material should put in place effective age-verification systems to restrict under-18s' access to these sites and apps.

Enforcing the rules

Ofcom's approach to enforcing the new rules will build on our track record of protecting audiences from harm, while upholding freedom of expression. We will consider the unique characteristics of user-generated video content, alongside the rights and interests of users and service providers, and the general public interest.

If we find a VSP provider has breached its obligations to take appropriate measures to protect users, we have the power to investigate and take action against a platform. This could include fines, requiring the provider to take specific action, or -- in the most serious cases -- suspending or restricting the service. Consistent with our general approach to enforcement, we may, where appropriate, seek to resolve or investigate issues informally first, before taking any formal enforcement action.

Next steps

We are inviting all interested parties to comment on our proposed draft guidance, particularly services which may fall within scope of the regulation, the wider industry and third-sector bodies. The deadline for responses is 2 June 2021. Subject to feedback, we plan to issue our final guidance later this year. We will also report annually on the steps taken by VSPs to comply with their duties to protect users.

NOTES

Ofcom has been given new powers to regulate UK-established VSPs. VSP regulation sets out to protect users of VSP services from specific types of harmful material in videos. Harmful material falls into two broad categories under the VSP Framework, which are defined as:

  • Restricted Material, which refers to videos which have or would be likely to be given an R18 certificate, or which have been or would likely be refused a certificate. It also includes other material that might impair the physical, mental or moral development of under-18s.

  • Relevant Harmful Material, which refers to any material likely to incite violence or hatred against a group of persons or a member of a group of persons based on particular grounds. It also refers to material the inclusion of which would be a criminal offence under laws relating to terrorism; child sexual abuse material; and racism and xenophobia.

The Communications Act sets out the criteria for determining jurisdiction of VSPs, which are closely modelled on the provisions of the Audiovisual Media Services Directive. A VSP will be within UK jurisdiction if it has the required connection with the UK. It is for service providers to assess whether a service meets the criteria and notify to Ofcom that they fall within scope of the regulation. We recently published guidance about the criteria to assist them in making this assessment. In December 2020, the Government confirmed its intention to appoint Ofcom as the regulator of the future online harms regime. It re-stated its intention for the VSP Framework to be superseded by the regulatory framework in new Online Safety legislation.

 

 

MPs identified as totally uncaring for the safety of internet users...

MPs who don't like being insulted on Twitter line up to call for all users to hand over identifying personal details to the likes of Google, Facebook and the Chinese government


Link Here 24th March 2021
Online anonymity was debated in the House of Commons on Wednesday 13 January 2021.

The long debate was little more than a list of complaints from MPs aggrieved at aggressive comments on social media, often against themselves.

As always seems to be the case with parliamentary debate, it turned into a long series of calls that 'something must be done', with hardly any thought about the likely and harmful consequences of what they are calling for.

As an example, here is part of the complaint from debate opener, Siobhan Baillie:

The new legislative framework for tech companies will create a duty of care to their users. The legislation will require companies to prevent the proliferation of illegal content and activity online, and ensure that children who use their services are not exposed to harmful content. As it stands, the tech companies do not know who millions of their users are, so they do not know who their harmful operators are, either. By failing to deal with anonymity properly, any regulator or police force, or even the tech companies themselves, will still need to take extensive steps to uncover the person behind the account first, before they can tackle the issue or protect a user.

The Law Commission acknowledged that anonymity often facilitates and encourages abusive behaviours. It said that combined with an online disinhibition effect, abusive behaviours, such as pile-on harassment, are much easier to engage in on a practical level. The Online Harms White Paper focuses on regulation of platforms and the Law Commission's work addresses the criminal law provisions that apply for individuals. It is imperative, in my view, that the Law Commission's report and proposals are fully debated prior to the online harms Bill passing through Parliament. They should go hand in hand.

Standing in Parliament, I must mention that online abuse is putting people off going into public service and speaking up generally. One reason I became interested in this subject was the awful abuse I received for daring to have a baby and be an MP. Attacking somebody for being a mum or suggesting that a mum cannot do this job is misogynistic and, quite frankly, ridiculous, but I would be lying if I said that I did not find some of the comments stressful and upsetting, particularly given I had just had a baby.

Is there a greater impediment to freedom of expression than a woman being called a whore online or being told that she will be raped for expressing a view? It happens. It happens frequently and the authors are often anonymous. Fantastic groups like 50:50 Parliament, the Centenary Action Group, More United and Compassion in Politics are tackling this head on to avoid men and women being put off running for office. One of the six online harm asks from Compassion in Politics is to significantly reduce the prevalence and influence of anonymous accounts online.

The Open Rights Group said more about consequences in a short email than the MPs said in an hour of debate:

Mandatory ID verification would open a Pandora's Box of unintended consequences. A huge burden would be placed on site administrators big and small to own privatised databases of personally identifiable data. Large social media platforms would gain ever more advantage over small businesses, open source projects and startups that lack the resources to comply.

Requirements for formal documentation, such as a bank account, to verify ID would disenfranchise those on low incomes, the unbanked, the homeless, and people experiencing other forms of social exclusion. Meanwhile, the fate of countless accounts and astronomical amounts of legal content would be thrown into jeopardy overnight.

 

 

Fossil fuels outrage...

Lee Hurst briefly suspended from Twitter over a joke about Greta Thunberg


Link Here 24th March 2021
Full story: Twitter Censorship...Twitter offers country by country take downs
Lee Hurst was briefly suspended from Twitter over a tweeted joke about Greta Thunberg.

The comedian wound up the easily offended after posting his joke about the 18-year-old environmental activist. He tweeted:

As soon as Greta discovers cock, she'll stop complaining about the single use plastic it's wrapped in.

But his account, which he headlines "desperately trying to be relevant", is now back up and running again.

 

 

Debeaked...

Russia talks tough about Twitter refusing to play ball with local censorship requirements


Link Here 19th March 2021
Full story: Internet Censorship in Russia 2020s...Russia and its repressive state control of media
This week Russian authorities warned that if Twitter doesn't fall into line with Russian censorship demands then it could find itself blocked in the country in a month's time. Anticipating the possible fallout, including Russian users attempting to bypass the ban, a government minister has warned that blocking VPNs will be the next step.

For some time, local telecoms censor Roscomnadzor has criticized Twitter for not responding to its calls for prohibited content to be taken down. Roscomnadzor says that more than 3,100 takedown demands have gone unheeded so far.

In what appeared to be a retaliatory move, last week authorities attempted to slow down Twitter access in Russia, but this seems to have caused widespread disruption to many other websites, perhaps those that hang while waiting for linked Twitter content.

 

 

Age Appropriate Instagram?...

Facebook is creating an Instagram for kids


Link Here 19th March 2021
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children
Facebook is planning to build a version of the popular photo-sharing app Instagram that can be used by children under the age of 13, according to an internal company post obtained by BuzzFeed News.

Vishal Shah, Instagram's vice president of product, wrote on an employee message board:

I'm excited to announce that going forward, we have identified youth work as a priority for Instagram and have added it to our H1 priority list. We will be building a new youth pillar within the Community Product Group to focus on two things:
  • (a) accelerating our integrity and privacy work to ensure the safest possible experience for teens and
  • (b) building a version of Instagram that allows people under the age of 13 to safely use Instagram for the first time.

Instagram currently 'forbids' children under the age of 13 from using the service, but it is widely used by children anyway.

Maybe this announcement ties in with the UK's requirement for age appropriate data sharing that comes into force in September 2021.

 

 

Age of censorship...

An internet porn age verification bill progresses in Canada


Link Here 19th March 2021
Full story: Internet Censorship in Canada...Proposal for opt in intenet blocking
A bill has passed 2nd reading in the Canadian Senate that would require porn websites to implement age verification for users.

Bill S-203, An Act to restrict young persons' online access to sexually explicit material, will now be referred to the Standing Senate Committee on Legal and Constitutional Affairs.

 

 

Updated: Dangerous legislation...

A diverse group of organisations criticise Australia's hastily drafted and wide ranging internet censorship bill


Link Here 19th March 2021
Full story: Internet Censorship in Australia...Wide ranging state internet censorship
A number of legal, civil and digital rights groups, tech companies and adult industry organisations have raised significant concerns about Australia's proposed internet censorship legislation: its potential to impact those working in adult industries, its potential to lead to online censorship, and the vast powers it hands to a handful of individuals.

Despite this, the legislation was introduced to Parliament just 10 days after the government received nearly 400 submissions on the draft bill, and the senate committee is expected to deliver its report nine days after submissions closed. Stakeholders were also given only three working days to make a submission to the inquiry.

In a submission to the inquiry, Australian Lawyers Alliance (ALA) president Graham Droppert said the government should not proceed with the legislation because it invests excessive discretionary power in the eSafety Commissioner and also the Minister with respect to the consideration of community expectations and values in relation to online content. Droppert said:

The ALA considers that the bill does not strike the appropriate balance between protection against abhorrent material and due process for determining whether content comes within that classification.

Digital Rights Watch has been leading the charge against the legislation. Digital Rights Watch programme director Lucie Krahulcova said:

The powers to be handed to the eSafety Commissioner, which was established in 2015 to focus on keeping children safe online, are a continuation of its broadly expanding remit, and should be cause for concern.

The new powers in the bill are discretionary and open-ended, giving all the power and none of the accountability to the eSafety Office. They are not liable for any damage their decisions may cause and not required to report thoroughly on how and why they make removal decisions. This is a dramatic departure from democratic standards globally.

Jarryd Bartle is a lecturer in criminal law and adult industry consultant, and is policy and campaigns advisor at the Eros Association. He said:

The bill as drafted is blatant censorship, with the eSafety commissioner empowered to strip porn, kink and sexually explicit art from the internet following a complaint, with nothing in the scheme capable of distinguishing moral panic from genuine harm.

Twitter and live streaming service Twitch have joined the mounting list of service providers, researchers, and civil liberties groups that take issue with Australia's pending Online Safety Bill.

Of concern to both Twitter and Twitch is the absence of due regard to different types of business models and content types, specifically around the power given to the relevant minister to determine basic online safety expectations for social media services, relevant electronic services, and designated internet services. Twitter said:

In order to continue to foster digital growth and innovation in the Australian economy, and to ensure reasonable and fair competition, it is critically important to avoid placing requirements across the digital ecosystem that only large, mature companies can reasonably comply with.

Likewise, Twitch believes it is important to consider a sufficiently flexible approach that gives due regard to different types of business models and content types.

Update: Fast tracked

19th March 2021. See article from ia.acs.org.au

The Online Safety Bill 2021 will likely get an easy ride into law after a senate environment and communications committee gave it the nod of approval last week.

Under the government's proposed laws, the eSafety Commissioner will be given expanded censorship powers to direct social media platforms and other internet services to take down material and remove links to content it deems offensive or abusive.

 

 

Offsite Article: #SaveAnonymity: Together we can defend anonymity...


Link Here 19th March 2021
Open Rights Group responds to a petition calling for identity verification for social media users

See article from openrightsgroup.org

 

 

Group think...

Facebook announces new censorship measures for Facebook groups


Link Here 17th March 2021
Full story: Facebook Censorship since 2020...Left wing bias, prudery and multiple 'mistakes'

It's important to us that people can discover and engage safely with Facebook groups so that they can connect with others around shared interests and life experiences. That's why we've taken action to curb the spread of harmful content, like hate speech and misinformation, and made it harder for certain groups to operate or be discovered, whether they're Public or Private. When a group repeatedly breaks our rules, we take it down entirely.

We're sharing the latest in our ongoing work to keep Groups safe, which includes our thinking on how to keep recommendations safe as well as reducing privileges for those who break our rules. These changes will roll out globally over the coming months.

We are adding more nuance to our enforcement. When a group starts to violate our rules, we will now start showing them lower in recommendations, which means it's less likely that people will discover them. This is similar to our approach in News Feed, where we show lower quality posts further down, so fewer people see them.

We believe that groups and members that violate our rules should have reduced privileges and reach, with restrictions getting more severe as they accrue more violations, until we remove them completely. And when necessary in cases of severe harm, we will outright remove groups and people without these steps in between.

We'll start to let people know when they're about to join a group that has Community Standards violations, so they can make a more informed decision before joining. We'll limit invite notifications for these groups, so people are less likely to join. For existing members, we'll reduce the distribution of that group's content so that it's shown lower in News Feed. We think these measures as a whole, along with demoting groups in recommendations, will make it harder to discover and engage with groups that break our rules.

We will also start requiring admins and moderators to temporarily approve all posts when that group has a substantial number of members who have violated our policies or were part of other groups that were removed for breaking our rules. This means that content won't be shown to the wider group until an admin or moderator reviews and approves it. If an admin or moderator repeatedly approves content that breaks our rules, we'll take the entire group down.

When someone has repeated violations in groups, we will block them from being able to post or comment for a period of time in any group. They also won't be able to invite others to any groups, and won't be able to create new groups. These measures are intended to help slow down the reach of those looking to use our platform for harmful purposes and build on existing restrictions we've put in place over the last year.

 

 

Updated: All men are rapists...

So peer Floella Benjamin attempts to revive porn age verification censorship because porn viewing is just one step away from park murder


Link Here 17th March 2021
The pro-censorship member of the House of Lords has tabled the following amendment to the Domestic Abuse Bill to reintroduce the internet porn censorship and age verification requirements previously dropped by the government in October 2019.

Amendment 87a introduces a new clause:

Impact of online pornography on domestic abuse

  1. Within three months of the day on which this Act is passed, the Secretary of State must commission a person appointed by the Secretary of State to investigate the impact of access to online pornography by children on domestic abuse.

  2. Within three months of their appointment, the appointed person must publish a report on the investigation which may include recommendations for the Secretary of State.

  3. As part of the investigation, the appointed person must consider the extent to which the implementation of Part 3 of the Digital Economy Act 2017 (online pornography) would prevent domestic abuse, and may make recommendations to the Secretary of State accordingly.

  4. Within three months of receiving the report, the Secretary of State must publish a response to the recommendations of the appointed person.

  5. If the appointed person recommends that Part 3 of the Digital Economy Act 2017 should be commenced, the Secretary of State must appoint a day for the coming into force of that Part under section 118(6) of the Act within the timeframe recommended by the appointed person.

Member's explanatory statement

This amendment would require an investigation into any link between online pornography and domestic abuse with a view to implementing recommendations to bring into effect the age verification regime in the Digital Economy Act 2017 as a means of preventing domestic abuse.

Update: Defeated

17th March 2021. See article from votes.parliament.uk

The amendment designed to resurrect the Age Verification clauses of the Digital Economy Act 2017 was defeated by 242 votes to 125 in the House of Lords.

The government minister concluding the debate noted that the new censorship measures included in the Online Harms Bill are more comprehensive than the measures under the Digital Economy Act 2017. He also noted that although the upcoming censorship measures would take significant time to implement, reviving the old censorship measures would also take time.

In passing, the minister also explained that one of the main failings of the act was that site blocking would not prove effective, due to porn viewers being easily able to evade ISP blocks by switching to encrypted DNS servers via DNS over HTTPS (DoH). Presumably government internet snooping agencies don't fancy losing the ability to snoop on the browsing habits of all those wanting to continue viewing a blocked porn site such as Pornhub.
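As a rough illustration of the minister's point (a sketch for this article, not anything cited in the debate), the Python snippet below resolves a domain via Cloudflare's public DNS-over-HTTPS JSON endpoint rather than via the ISP's resolver. The endpoint and response format are Cloudflare's published API; the rest is assumed for demonstration. The ISP sees only an HTTPS connection to cloudflare-dns.com, never the name being looked up, so resolver-level blocking simply doesn't trigger.

    # Resolve a name via DoH instead of the ISP's DNS resolver.
    # Uses Cloudflare's public JSON API (application/dns-json).
    import json
    import urllib.request

    def doh_lookup(name, record_type="A"):
        url = f"https://cloudflare-dns.com/dns-query?name={name}&type={record_type}"
        req = urllib.request.Request(url, headers={"accept": "application/dns-json"})
        with urllib.request.urlopen(req) as resp:
            answer = json.load(resp).get("Answer", [])
        return [record["data"] for record in answer]

    # The ISP observes only encrypted traffic to cloudflare-dns.com.
    print(doh_lookup("example.com"))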

 

 

Searching for a false sense of security...

Google to be sued for misleadingly naming Chrome's 'incognito mode' when in fact Google continues to snoop on browsing history and hands over the data to ne'er-do-wells such as advertisers and police


Link Here 17th March 2021
A US federal judge has decided that an attempt to launch a class action lawsuit against Google can proceed. The filing concerns the incognito (or private) mode in Google's Chrome that five plaintiffs say is misleading users into expecting that their personal data would be protected while using the browser in this way.

Chrome informs incognito users that they've "gone incognito: now you can browse privately". This might lead many to believe they are free from Google's own invasive and omnipresent tracking and data collection, but in reality it only means other people who use the device won't see your browsing activity. Google does not inform users that it will continue to collect data for targeted ads.

The lawsuit also alleges that Google unlawfully intercepted data under the US Wiretap Act while users were in incognito.

 

 

Demonetisation...

Twitch seems to be rating streamers as suitable or not for brand advertising


Link Here 12th March 2021
Live-streaming platform Twitch seems to be testing a new program to match streamers with 'appropriate' brands. It's understood that the tool, named the Brand Safety Score, automatically assigns streamers a rating based on analysis of a number of factors (including age, partnership status, and suspension history). This rating is then used to pair content creators with relevant advertising opportunities.

Presumably this phraseology means that anyone doing anything a bit adult will be banned from being able to monetise their content.

The new tool is yet to be officially confirmed by Twitch. A spokesperson for the platform has revealed that they are making efforts to better match the appropriate ads to the right communities, but asserted that the company will "keep our community informed of any updates".

 

 

Offsite Article: Authoritarianism...


Link Here 12th March 2021
Full story: Internet Censorship in India...India considers blanket ban on internet porn
India's New Internet Rules Are a Step Toward Digital Authoritarianism. Here's What They Will Mean

See article from time.com

 

 

Advantaging foreign companies...

If anyone is stupid enough to base a video sharing internet service in the UK, then they will have to sign up for censorship by Ofcom before 6th May 2021. After a year they will have to pay for the privilege too


Link Here 10th March 2021
Full story: Ofcom Video Sharing Censors...Video on Demand and video sharing

Ofcom has published guidance to help providers self-assess whether they need to notify to Ofcom as UK-established video-sharing platforms.

Video-sharing platforms (VSPs) are a type of online video service which allow users to upload and share videos with the public.

Under the new VSP regulations , there are specific legal criteria which determine whether a service meets the definition of a VSP, and whether it falls within UK jurisdiction. Platforms must self-assess whether they meet these criteria, and those that do will be formally required to notify to Ofcom between 6 April and 6 May 2021. Following consultation, we have today published our final guidance to help service providers to make this assessment.

 

 

Google's FLoC Is a Terrible Idea...

Explaining Google's idea to match individuals to groups for targeting advertising. By Bennett Cyphers


Link Here 10th March 2021
Full story: Gooogle Privacy...Google's many run-ins with privacy

The third-party cookie is dying, and Google is trying to create its replacement.

No one should mourn the death of the cookie as we know it. For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet.

Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn't learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious--and potentially the most harmful.

FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting.

Google's pitch to privacy advocates is that a world with FLoC (and other elements of the "privacy sandbox") will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between "old tracking" and "new tracking." It's not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads.

We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web's biggest mistake. Ahead of us are two possible futures.

In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them--or leveraged to manipulate them--when they next open a tab.

In the other, each user's behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is "democratized" and shared with dozens of nameless actors that take part in the service of each web page. Users begin every interaction with a confession: here's what I've been up to this week, please treat me accordingly.

Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its effort towards building a truly user-friendly Web.

What is FLoC?

In 2019, Google presented the Privacy Sandbox, its vision for the future of privacy on the Web. At the center of the project is a suite of cookieless protocols designed to satisfy the myriad use cases that third-party cookies currently provide to advertisers. Google took its proposals to the W3C, the standards-making body for the Web, where they have primarily been discussed in the Web Advertising Business Group, a body made up primarily of ad-tech vendors. In the intervening months, Google and other advertisers have proposed dozens of bird-themed technical standards: PIGIN, TURTLEDOVE, SPARROW, SWAN, SPURFOWL, PELICAN, PARROT... the list goes on. Seriously. Each of the "bird" proposals is designed to perform one of the functions in the targeted advertising ecosystem that is currently done by cookies.

FLoC is designed to help advertisers perform behavioral targeting without third-party cookies. A browser with FLoC enabled would collect information about its user's browsing habits, then use that information to assign its user to a "cohort" or group. Users with similar browsing habits--for some definition of "similar"--would be grouped into the same cohort. Each user's browser will share a cohort ID, indicating which group they belong to, with websites and advertisers. According to the proposal, at least a few thousand users should belong to each cohort (though that's not a guarantee).

If that sounds dense, think of it this way: your FLoC ID will be like a succinct summary of your recent activity on the Web.

Google's proof of concept used the domains of the sites that each user visited as the basis for grouping people together. It then used an algorithm called SimHash to create the groups. SimHash can be computed locally on each user's machine, so there's no need for a central server to collect behavioral data. However, a central administrator could have a role in enforcing privacy guarantees. In order to prevent any cohort from being too small (i.e. too identifying), Google proposes that a central actor could count the number of users assigned each cohort. If any are too small, they can be combined with other, similar cohorts until enough users are represented in each one.
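To make the grouping idea concrete, here is a toy Python sketch of SimHash over visited domains. The example domains, the choice of hash and the 8-bit output are illustrative assumptions only; Google's actual feature extraction and parameters are not public at this level of detail. The property that matters is that each output bit is a majority vote over the whole history, so heavily overlapping histories tend to produce the same or nearby hashes, which is what lets similar users fall into the same cohort.

    # Toy SimHash: hash each domain, then take a per-bit majority vote.
    import hashlib

    def simhash(domains, bits=8):
        counts = [0] * bits
        for domain in domains:
            h = int.from_bytes(hashlib.md5(domain.encode()).digest(), "big")
            for i in range(bits):
                counts[i] += 1 if (h >> i) & 1 else -1
        # The sign of each positional count becomes one bit of the result.
        return sum(1 << i for i, c in enumerate(counts) if c > 0)

    history_a = ["news.example", "knitting.example", "garden.example"]
    history_b = ["news.example", "knitting.example", "cooking.example"]
    print(f"{simhash(history_a):08b}")  # overlapping histories tend to
    print(f"{simhash(history_b):08b}")  # agree in most bit positions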

According to the proposal, most of the specifics are still up in the air. The draft specification states that a user's cohort ID will be available via Javascript, but it's unclear whether there will be any restrictions on who can access it, or whether the ID will be shared in any other ways. FLoC could perform clustering based on URLs or page content instead of domains; it could also use a federated learning-based system (as the name FLoC implies) to generate the groups instead of SimHash. It's also unclear exactly how many possible cohorts there will be. Google's experiment used 8-bit cohort identifiers, meaning that there were only 256 possible cohorts. In practice that number could be much higher; the documentation suggests a 16-bit cohort ID comprising 4 hexadecimal characters. The more cohorts there are, the more specific they will be; longer cohort IDs will mean that advertisers learn more about each user's interests and have an easier time fingerprinting them.

One thing that is specified is duration. FLoC cohorts will be re-calculated on a weekly basis, each time using data from the previous week's browsing. This makes FLoC cohorts less useful as long-term identifiers, but it also makes them more potent measures of how users behave over time.

New privacy problems

FLoC is part of a suite intended to bring targeted ads into a privacy-preserving future. But the core design involves sharing new information with advertisers. Unsurprisingly, this also creates new privacy risks.

Fingerprinting

The first issue is fingerprinting. Browser fingerprinting is the practice of gathering many discrete pieces of information from a user's browser to create a unique, stable identifier for that browser. EFF's Cover Your Tracks project demonstrates how the process works: in a nutshell, the more ways your browser looks or acts different from others', the easier it is to fingerprint.

Google has promised that the vast majority of FLoC cohorts will comprise thousands of users each, so a cohort ID alone shouldn't distinguish you from a few thousand other people like you. However, that still gives fingerprinters a massive head start. If a tracker starts with your FLoC cohort, it only has to distinguish your browser from a few thousand others (rather than a few hundred million). In information theoretic terms, FLoC cohorts will contain several bits of entropy --up to 8 bits, in Google's proof of concept trial. This information is even more potent given that it is unlikely to be correlated with other information that the browser exposes. This will make it much easier for trackers to put together a unique fingerprint for FLoC users.
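A back-of-the-envelope calculation (using a hypothetical round number for the browser population, not a Google figure) shows why those few bits matter so much to a fingerprinter:

    import math

    browsers = 3_000_000_000   # assumed global browser population
    cohort_bits = 8            # cohort ID size in Google's proof of concept

    # Each bit of entropy halves the anonymity set.
    print(f"{browsers / 2**cohort_bits:,.0f} users per cohort on average")

    # About 31.5 bits single out one browser among 3 billion; FLoC hands
    # a tracker 8 of them before any conventional fingerprinting starts.
    print(f"{math.log2(browsers):.1f} bits needed for a unique fingerprint")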

Google has acknowledged this as a challenge, but has pledged to solve it as part of the broader "Privacy Budget" plan it has to deal with fingerprinting long-term. Solving fingerprinting is an admirable goal, and its proposal is a promising avenue to pursue. But according to the FAQ, that plan is "an early stage proposal and does not yet have a browser implementation." Meanwhile, Google is set to begin testing FLoC as early as this month.

Fingerprinting is notoriously difficult to stop. Browsers like Safari and Tor have engaged in years-long wars of attrition against trackers, sacrificing large swaths of their own feature sets in order to reduce fingerprinting attack surfaces. Fingerprinting mitigation generally involves trimming away or restricting unnecessary sources of entropy--which is what FLoC is. Google should not create new fingerprinting risks until it's figured out how to deal with existing ones.

Cross-context exposure

The second problem is less easily explained away: the technology will share new personal data with trackers who can already identify users. For FLoC to be useful to advertisers, a user's cohort will necessarily reveal information about their behavior.

The project's Github page addresses this up front:

This API democratizes access to some information about an individual's general browsing history (and thus, general interests) to any site that opts into it. ... Sites that know a person's PII (e.g., when people sign in using their email address) could record and reveal their cohort. This means that information about an individual's interests may eventually become public.

As described above, FLoC cohorts shouldn't work as identifiers by themselves. However, any company able to identify a user in other ways--say, by offering "log in with Google" services to sites around the Internet--will be able to tie the information it learns from FLoC to the user's profile.

Two categories of information may be exposed in this way:

  • Specific information about browsing history. Trackers may be able to reverse-engineer the cohort-assignment algorithm to determine that any user who belongs to a specific cohort probably or definitely visited specific sites.

  • General information about demographics or interests. Observers may learn that in general , members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth.

This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web. Moreover, as your FLoC cohort will update over time, sites that can identify you in other ways will also be able to track how your browsing changes. Remember, a FLoC cohort is nothing more, and nothing less, than a summary of your recent browsing activity.
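A minimal sketch of that join, with hypothetical names and cohort values throughout: once a site holds any stable identifier, such as a login, each week's cohort ID is just another column appended to an identified profile.

    # Hypothetical site-side logging: the login supplies the identity,
    # the browser volunteers the cohort, and the join is trivial.
    from collections import defaultdict

    cohort_history = defaultdict(list)   # identity -> weekly cohort IDs

    def on_login(email, floc_cohort):
        cohort_history[email].append(floc_cohort)

    on_login("alice@example.com", "43A7")  # week 1
    on_login("alice@example.com", "1F02")  # week 2: her browsing changed
    print(cohort_history["alice@example.com"])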

You should have a right to present different aspects of your identity in different contexts. If you visit a site for medical information, you might trust it with information about your health, but there's no reason it needs to know what your politics are. Likewise, if you visit a retail website, it shouldn't need to know whether you've recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with.

Beyond privacy

FLoC is designed to prevent a very specific threat: the kind of individualized profiling that is enabled by cross-context identifiers today. The goal of FLoC and other proposals is to avoid letting trackers access specific pieces of information that they can tie to specific people. As we've shown, FLoC may actually help trackers in many contexts. But even if Google is able to iterate on its design and prevent these risks, the harms of targeted advertising are not limited to violations of privacy. FLoC's core objective is at odds with other civil liberties.

The power to target is the power to discriminate. By definition, targeted ads allow advertisers to reach some kinds of people while excluding others. A targeting system may be used to decide who gets to see job postings or loan offers just as easily as it is to advertise shoes.

Over the years, the machinery of targeted advertising has frequently been used for exploitation, discrimination, and harm. The ability to target people based on ethnicity, religion, gender, age, or ability allows discriminatory ads for jobs, housing, and credit. Targeting based on credit history--or characteristics systematically associated with it--enables predatory ads for high-interest loans. Targeting based on demographics, location, and political affiliation helps purveyors of politically motivated disinformation and voter suppression. All kinds of behavioral targeting increase the risk of convincing scams.

Google, Facebook, and many other ad platforms already try to rein in certain uses of their targeting platforms. Google, for example, limits advertisers' ability to target people in "sensitive interest categories." However, these efforts frequently fall short; determined actors can usually find workarounds to platform-wide restrictions on certain kinds of targeting or certain kinds of ads.

Even with absolute power over what information can be used to target whom, platforms are too often unable to prevent abuse of their technology. But FLoC will use an unsupervised algorithm to create its clusters. That means that nobody will have direct control over how people are grouped together. Ideally (for advertisers), FLoC will create groups that have meaningful behaviors and interests in common. But online behavior is linked to all kinds of sensitive characteristics--demographics like gender, ethnicity, age, and income; "big 5" personality traits; even mental health. It is highly likely that FLoC will group users along some of these axes as well. FLoC groupings may also directly reflect visits to websites related to substance abuse, financial hardship, or support for survivors of trauma.

Google has proposed that it can monitor the outputs of the system to check for any correlations with its sensitive categories. If it finds that a particular cohort is too closely related to a particular protected group, the administrative server can choose new parameters for the algorithm and tell users' browsers to group themselves again.

This solution sounds both Orwellian and Sisyphean. In order to monitor how FLoC groups correlate with sensitive categories, Google will need to run massive audits using data about users' race, gender, religion, age, health, and financial status. Whenever it finds a cohort that correlates too strongly along any of those axes, it will have to reconfigure the whole algorithm and try again, hoping that no other "sensitive categories" are implicated in the new version. This is a much more difficult version of the problem it is already trying, and frequently failing, to solve.

In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won't be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings "mean"--what kinds of people they contain--through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability--after all, they aren't directly targeting protected categories, they're just reaching people based on behavior. And the whole system will be more opaque to users and regulators.

Google, please don't do this

We wrote about FLoC and the other initial batch of proposals when they were first introduced, calling FLoC "the opposite of privacy-preserving technology." We hoped that the standards process would shed light on FLoC's fundamental flaws, causing Google to reconsider pushing it forward. Indeed, several issues on the official Github page raise the exact same concerns that we highlight here. However, Google has continued developing the system, leaving the fundamentals nearly unchanged. It has started pitching FLoC to advertisers, boasting that FLoC is a "95% effective" replacement for cookie-based targeting. And starting with Chrome 89, released on March 2, it's deploying the technology for a trial run. A small portion of Chrome users--still likely millions of people--will be (or have been) assigned to test the new technology.

Make no mistake, if Google does follow through on its plan to implement FLoC in Chrome, it will likely give everyone involved "options." The system will probably be opt-in for the advertisers that will benefit from it, and opt-out for the users who stand to be hurt. Google will surely tout this as a step forward for "transparency and user control," knowing full well that the vast majority of its users will not understand how FLoC works, and that very few will go out of their way to turn it off. It will pat itself on the back for ushering in a new, private era on the Web, free of the evil third-party cookie--the technology that Google helped extend well past its shelf life, making billions of dollars in the process.

It doesn't have to be that way. The most important parts of the privacy sandbox, like dropping third-party identifiers and fighting fingerprinting, will genuinely change the Web for the better. Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.

We emphatically reject the future of FLoC. That is not the world we want, nor the one users deserve. Google needs to learn the correct lessons from the era of third-party tracking and design its browser to work for users, not for advertisers.

 

 

Offsite Article: Twitter vs Texas...


Link Here 10th March 2021
Full story: Internet Censorship in USA...Domain name seizures and SOPA
Twitter sues Texas Attorney General to avoid investigation into its censorship practices in silencing right wing speech

See article from reclaimthenet.org

 

 

Offsite Article: US copyright law...


Link Here 10th March 2021
The Digital Copyright Act Will Chill Innovation and Harm The Internet

See article from torrentfreak.com

 

 

Age of nightmares...

ICO warns internet companies of the impending, impossible-to-comply-with Age Appropriate Design Code


Link Here 7th March 2021
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children
A survey by the Information Commissioner's Office (ICO) shows that three quarters of businesses surveyed are aware of the impending Children's Code. The full findings will be published in May but initial analysis shows businesses are still in the preparation stages.

And with just six months to go until the code comes into force, the ICO is urging organisations and businesses to make the necessary but onerous changes to their online services and products.

The Children's Code sets out 15 standards organisations must meet to ensure that children's data is protected online. The code will apply to all the major online services used by children in the UK and includes measures such as providing default settings which ensure that children have access to online services whilst minimising data collection and use.

Details of the code were first published in June 2018 and UK Parliament approved it last year. Since then, the ICO has been providing support and advice to help organisations adapt their online services and products in line with data protection law.

 

 

Constitutionally challenged...

US politicians queue up to censor the internet


Link Here7th March 2021
Full story: Internet Censorship in USA...Domain name seizures and SOPA
US Republican state lawmakers are pushing for social media giants to face costly lawsuits for policing content on their websites, taking aim at a federal law that prevents internet companies from being sued for removing posts.

GOP (Grand Old Party) politicians in roughly two dozen states have introduced bills that would allow for civil lawsuits against platforms for the censorship of right-leaning posts.

Democrats, who have also called for greater scrutiny of big tech, are sponsoring the same measures in at least two states.

Experts argue the legislative proposals are doomed to fail while the federal law, Section 230 of the Communications Decency Act, is in place. They said state lawmakers are wading into unconstitutional territory by trying to interfere with the editorial policies of private companies.

Len Niehoff, a professor at the University of Michigan Law School, described the idea as a constitutional non-starter. He said:

If an online platform wants to have a policy that it will delete certain kinds of tweets, delete certain kinds of users, forbid certain kinds of content, that is in the exercise of their right as an information distributor. And the idea that you would create a cause of action that would allow people to sue when that happens is deeply problematic under the First Amendment.

 

 

Censorship warning...

Twitter introduces a five-strike rule for censoring COVID-19 tweets that Twitter does not like


Link Here3rd March 2021
Full story: Twitter Censorship...Twitter offers country by country take downs
Twitter has announced a new strike system under which accounts that repeatedly post what it deems to be COVID-19 "misinformation" will be permanently banned from the site.

Under the new system, users that tweet what Twitter deems to be a "high-severity" violation of its COVID-19 misleading information policy will be temporarily locked out of their accounts, forced to delete the tweets, and given two strikes.

Users that tweet rule-breaking content that isn't deemed to be a high-severity violation of the policy will have their tweets labeled and will be given a strike if the labeled tweet is "determined to be harmful." Labeled tweets may also be shadow banned by having their visibility reduced, have their engagement metrics disabled, display a warning when people attempt to share or like the tweet, and contain a link to a curated landing page or Twitter's policies.

Accounts that have multiple strikes will be subject to the following sanctions:

  • Two or three strikes: 12-hour account lock
  • Four strikes: 7-day account lock
  • Five or more strikes: Permanent suspension
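
The escalation ladder is simple enough to state as code. Here is a minimal sketch, assuming only the strike counts and sanctions listed above; the function name and return strings are illustrative, not anything Twitter has published.

```python
def sanction(strikes: int) -> str:
    """Map an accumulated strike count to a sanction, following the
    escalation ladder Twitter describes. Names and strings here are
    illustrative, not Twitter's own."""
    if strikes >= 5:
        return "permanent suspension"
    if strikes == 4:
        return "7-day account lock"
    if strikes >= 2:
        return "12-hour account lock"
    # The article lists no account-level sanction for a single strike.
    return "tweet labelled or deleted, no account lock"

for s in range(1, 6):
    print(s, "strike(s) ->", sanction(s))
```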

 

 

Ethical snooping...

Google promises not to replace cookie-based web browsing snooping with another privacy-invasive method of snooping


Link Here3rd March 2021
Full story: Google Privacy...Google's many run-ins with privacy
David Temkin, Google's Director of Product Management, Ads Privacy and Trust has been commenting on Google's progress in reducing personalised advertising based on snooping of people's browsing history. Temkin commented:

72% of people feel that almost all of what they do online is being tracked by advertisers, technology firms or other companies, and 81% say that the potential risks they face because of data collection outweigh the benefits, according to a study by Pew Research Center. If digital advertising doesn't evolve to address the growing concerns people have about their privacy and how their personal identity is being used, we risk the future of the free and open web.

That's why last year Chrome announced its intent to remove support for third-party cookies, and why we've been working with the broader industry on the Privacy Sandbox to build innovations that protect anonymity while still delivering results for advertisers and publishers. Even so, we continue to get questions about whether Google will join others in the ad tech industry who plan to replace third-party cookies with alternative user-level identifiers. Today, we're making explicit that once third-party cookies are phased out, we will not build alternate identifiers to track individuals as they browse across the web, nor will we use them in our products.

We realize this means other providers may offer a level of user identity for ad tracking across the web that we will not -- like PII [Personally Identifying Information] graphs based on people's email addresses. We don't believe these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions, and therefore aren't a sustainable long term investment. Instead, our web products will be powered by privacy-preserving APIs which prevent individual tracking while still delivering results for advertisers and publishers.

People shouldn't have to accept being tracked across the web in order to get the benefits of relevant advertising. And advertisers don't need to track individual consumers across the web to get the performance benefits of digital advertising.

Advances in aggregation, anonymization, on-device processing and other privacy-preserving technologies offer a clear path to replacing individual identifiers. In fact, our latest tests of FLoC [Federated Learning of Cohorts] show one way to effectively take third-party cookies out of the advertising equation and instead hide individuals within large crowds of people with common interests. Chrome intends to make FLoC-based cohorts available for public testing through origin trials with its next release this month, and we expect to begin testing FLoC-based cohorts with advertisers in Google Ads in Q2. Chrome also will offer the first iteration of new user controls in April and will expand on these controls in future releases, as more proposals reach the origin trial stage, and they receive more feedback from end users and the industry.

This points to a future where there is no need to sacrifice relevant advertising and monetization in order to deliver a private and secure experience.
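
The cohort approach Temkin describes rests on a k-anonymity idea: the system only exposes interest groups large enough that no individual stands out. Below is a minimal sketch of that gating logic; the threshold, function name and data layout are invented for illustration. Google has said FLoC cohorts would contain thousands of users, but has not published this exact mechanism.

```python
from collections import Counter

def visible_cohorts(user_cohorts: dict[str, int], k: int) -> set[int]:
    """Expose only cohort IDs shared by at least k users, so no
    individual can be singled out. Illustrative only; not Google's
    published implementation."""
    counts = Counter(user_cohorts.values())
    return {cohort for cohort, n in counts.items() if n >= k}

# Tiny toy population: three users in cohort 7, one in cohort 42.
users = {"u1": 7, "u2": 7, "u3": 7, "u4": 42}
print(visible_cohorts(users, k=3))  # {7} -- cohort 42 is too small to expose
```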

 

 

Fast tracked porn censorship...

Australian internet child protection bill inevitably slips in censorship capability to block adult consensual porn


Link Here1st March 2021
Full story: Internet Censorship in Australia...Wide ranging state internet censorship
Fast-tracked internet censorship legislation could ban all adult content online and force sex workers off the internet, sex workers and civil liberties groups have warned.

The Online Safety bill is supposedly aimed at giving powers to Australia's eSafety commissioner to target bullying and harassment online, extending existing powers protecting children from online bullying to adults.

It increases the maximum penalty for using a carriage service to menace, harass or cause offence from three to five years in jail, and allows for the removal of image-based abuse and other supposedly harmful online content.

The legislation also promotes the eSafety commissioner to a new post of Internet Censor and gives her the power to rapidly block sites hosting violent and terrorist content. But the proposals do not stop there. The bill carries over existing powers under the Broadcasting Services Act which allow for content rated R18+ (equivalent to the UK 18 rating) to be blocked or made subject to removal notices, and goes much further by giving the Internet Censor sole discretion over whether content is rated R18+ or over and therefore should be removed.

A Sex Work Law Reform Victoria spokesperson, Roger Sorrenti, said the legislation would effectively censor adult online content, with potentially unintended consequences for the sex and porn industries, and would have a devastating impact on the ability of sex workers to earn a legitimate income.

Consultation for the draft legislation attracted more than 370 submissions between 23 December and 14 February, none of which the government published before the communications minister, Paul Fletcher, introduced the legislation into parliament 10 days later. The bill has been referred to a Senate committee, with submissions due on Tuesday. The government has decided to fast-track this bill despite repeated calls for caution by the industry and civil liberties organisations, as well as a parallel review currently occurring into Australia's classification scheme.

The eSafety commissioner, Julie Inman Grant, claimed to Guardian Australia that she didn't intend to use her powers under the legislation to go after consensual adult pornographic material online. But she ominously pointed out that hosting explicit adult sexual content is prohibited in Australia. Guardian Australia has also seen a notice sent by her office in January to adult websites requesting that content be removed for being R18+, X or refused classification.

 

 

Ethical snooping...

GCHQ discusses the ethics of using AI and mass snooping to analyse people's internet use to detect both serious crime and, no doubt, political incorrectness


Link Here1st March 2021
The UK snooping agency GCHQ has published a paper discussing the ethics of using AI for analysing internet posts. GCHQ notes that the technology will be put at the heart of its operations.

The paper, Ethics of AI: Pioneering a New National Security, comments on the technology as used to assist its analysts in spotting patterns hidden inside large and fast-growing amounts of data, including:

  • trying to spot fake online messages used by other states spreading disinformation
  • mapping international networks engaged in human or drug trafficking
  • finding child sex abusers hiding their identities online

But it says it cannot predict human behaviour such as moving towards executing a terrorist attack.

GCHQ is now detailing how it will ensure it uses AI fairly and transparently, including:

  • an AI ethical code of practice
  • recruiting more diverse talent to help develop and govern its use

The BBC comments that this may be a sign the agency wants to avoid a repeat of the criticism, following whistleblower Edward Snowden's revelations, that people were unaware how it used data.

GCHQ reports that a growing number of states are using AI to automate the production of false content to affect public debate, including "deepfake" video and audio. The technology can individually target and personalise this content or spread it through chatbots or by interfering with social-media algorithms. But it could also help GCHQ detect and fact-check it and identify "troll farms" and botnet accounts.

GCHQ describes its capabilities in the area of detecting child abuse, where functionalities include:

  • help analyse evidence of grooming in chat rooms
  • track the disguised identities of offenders across multiple accounts
  • discover hidden people and illegal services on the dark web
  • help police officers infiltrate rings of offenders
  • filter content to prevent analysts from being unnecessarily exposed to disturbing imagery

and on trafficking:

  • mapping the international networks that enable trafficking - identifying individuals, accounts and transactions
  • "following the money" - analysing complex transactions, possibly revealing state sponsors or links to terrorist groups
  • bringing together different types of data - such as imagery and messaging - to track and predict where illegal cargos are being delivered

No doubt these functionalities will also be used for more mundane reasons.
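
The network-mapping items above are, at heart, standard graph analytics. As a rough illustration, here is a hedged sketch using the networkx library: build a directed graph of transactions and rank accounts by betweenness centrality, which tends to surface the intermediaries that payment paths flow through. The account names and transactions are invented; GCHQ has published no such code.

```python
import networkx as nx

# Invented transactions: (payer, payee). Nothing here is real data.
transactions = [
    ("acct_a", "broker_x"), ("acct_b", "broker_x"),
    ("broker_x", "shell_co"), ("shell_co", "acct_c"),
    ("acct_c", "acct_d"),
]

g = nx.DiGraph()
g.add_edges_from(transactions)

# High betweenness centrality marks the accounts that most payment
# paths flow through -- likely intermediaries worth investigating.
ranked = sorted(nx.betweenness_centrality(g).items(), key=lambda kv: -kv[1])
for account, score in ranked[:3]:
    print(account, round(score, 3))
```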

 

 

User control...

Twitter is set to introduce a voluntary censored 'Safety Mode' that blocks accounts that Twitter thinks you won't like


Link Here1st March 2021
Full story: Twitter Censorship...Twitter offers country by country take downs
Twitter is planning to roll out a new feature that would allow users to auto-block or mute certain accounts that Twitter deems abusive.

The new safety mode will further the company's efforts in censoring under the guise of protecting users from supposedly offensive content.

Twitter announced the new feature during its Analyst Day presentation. Documents from the presentation suggest that the feature will be available through Safety Mode, a setting the company is yet to roll out.

Twitter explained that once a user turns on the feature, it automatically blocks accounts that appear to break the Twitter Rules, and mutes accounts that might be using insults, name-calling, strong language, or hateful remarks.

Twitter already allows users to block or mute accounts that the user selects but the latest censorship extension allows Twitter itself to do the selecting.
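
Mechanically, Safety Mode amounts to running a classifier over incoming replies and muting authors who score above some threshold. A hedged sketch follows; the classifier, threshold and function names are invented, since Twitter has not published how its scoring works.

```python
def auto_moderate(replies, toxicity_score, threshold=0.9):
    """Mute every author whose reply scores above `threshold` on a
    toxicity classifier. All names and values here are invented;
    Twitter has not published how Safety Mode scores tweets."""
    return {author for author, text in replies
            if toxicity_score(text) >= threshold}

# Toy stand-in classifier: flags replies containing blocklisted words.
BLOCKLIST = {"idiot", "scum"}
def naive_score(text: str) -> float:
    return 1.0 if BLOCKLIST & set(text.lower().split()) else 0.0

print(auto_moderate([("troll1", "you idiot"), ("fan1", "great post")],
                    naive_score))  # -> {'troll1'}
```

The censorship question, of course, is not the mechanism but who picks the classifier: with Safety Mode, Twitter itself decides what counts as abusive.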

 

 

'Go fund yourself'...

Indian Netflix rolls back its censorship of a crude joke in South Park


Link Here1st March 2021
Full story: Netflix Censorship...Streaming TV to a variety of censorship regimes
An episode of the animated comedy series South Park was previously censored on Netflix in India (and nowhere else). A crude drawing of male genitalia and breasts was blurred out from a scene in Go Fund Yourself, the first episode of the show's eighteenth season.

Netflix removed the censorship after MediaNama asked the company about it. As is often the case with internet companies, when they are caught with their hands in the censorship cookie jar, they respond by claiming it was all some sort of a ghastly mistake.

And again in this case, Netflix claimed the censored copy was an incorrect version and replaced it with the uncensored version.

Interestingly, the censorship incident has revealed that it may be possible to work around local censorship by selecting a different interface language. It was reported that the blurring wasn't present for users in India who selected the French-language interface.

The censored episode lampoons the Washington Redskins, an American football team that recently changed its name to Washington Football Team following renewed outrage over its name, which was criticized for using an offensive slur referring to Native Americans. In one scene from the episode, which was aired in 2014, a character responds to calls to change the team's logo by simply adding a drawing of male genitalia and breasts to the imagery.
