Melon Farmers Original Version

Internet News


2023: October

 2010   2011   2012   2013   2014   2015   2016   2017   2018   2019   2020   2021   2022   2023   2024   Latest 
Jan   Feb   Mar   April   May   June   July   Aug   Sept   Oct   Nov   Dec    

 

Online Censorship Act...

The Online Unsafety Bill gets Royal Assent and so becomes law


Link Here 29th October 2023
Full story: Online Safety Act...UK Government legislates to censor social media
The Online Safety Bill received Royal Assent on 26th October 2023, heralding a new era of internet censorship.

The new UK internet censor Ofcom was quick off the mark to outline its timetable for implementing the new censorship regime.

Ofcom has set out its plans for putting the online safety laws into practice, and what it expects from tech firms, now that the Online Safety Act has passed. Ofcom writes:

The Act makes companies that operate a wide range of online services legally responsible for keeping people, especially children, safe online. These companies have new duties to protect UK users by assessing risks of harm, and taking steps to address them. All in-scope services with a significant number of UK users, or targeting the UK market, are covered by the new rules, regardless of where they are based.

While the onus is on companies to decide what safety measures they need given the risks they face, we expect implementation of the Act to ensure people in the UK are safer online by delivering four outcomes:

  • stronger safety governance in online firms;

  • online services designed and operated with safety in mind;

  • choice for users so they can have meaningful control over their online experiences; and

  • transparency regarding the safety measures services use, and the action Ofcom is taking to improve them, in order to build trust.

We are moving quickly to implement the new rules

Ofcom will give guidance and set out codes of practice on how in-scope companies can comply with their duties, in three phases, as set out in the Act.

Phase one: illegal harms duties

We will publish draft codes and guidance on these duties on 9 November 2023, including:

  • analysis of the causes and impacts of online harm, to support services in carrying out their risk assessments;

  • draft guidance on a recommended process for assessing risk;

  • draft codes of practice, setting out what services can do to mitigate the risk of harm; and

  • draft guidelines on Ofcom's approach to enforcement.

We will consult on these documents, and plan to publish a statement on our final decisions in Autumn 2024. The codes of practice will then be submitted to the Secretary of State for Science, Innovation and Technology and, subject to their approval, laid before Parliament.

Phase two: child safety, pornography and the protection of women and girls

Child protection duties will be set out in two parts. First, online pornography services and other interested stakeholders will be able to read and respond to our draft guidance on age assurance from December 2023. This will be relevant to all services in scope of Part 5 of the Online Safety Act.

Secondly, regulated services and other interested stakeholders will be able to read and respond to draft codes of practice relating to protection of children, in Spring 2024.

Alongside this, we expect to consult on:

  • analysis of the causes and impacts of online harm to children; and

  • draft risk assessment guidance focusing on children's harms.

We expect to publish draft guidance on protecting women and girls by Spring 2025, when we will have finalised our codes of practice on protection of children.

Phase three: transparency, user empowerment, and other duties on categorised services

A small proportion of regulated services will be designated Category 1, 2A or 2B services if they meet certain thresholds set out in secondary legislation to be made by Government. Our final stage of implementation focuses on additional requirements that fall only on these categorised services. Those requirements include duties to:

  • produce transparency reports;

  • provide user empowerment tools;

  • operate in line with terms of service;

  • protect certain types of journalistic content; and

  • prevent fraudulent advertising.

We now plan to issue a call for evidence regarding our approach to these duties in early 2024 and a consultation on draft transparency guidance in mid 2024.

Ofcom must produce a register of categorised services. We will advise Government on the thresholds for these categories in early 2024, and Government will then make secondary legislation on categorisation, which we currently expect to happen by summer 2024. Assuming this is achieved, we will:

  • publish the register of categorised services by the end of 2024;

  • publish draft proposals regarding the additional duties on these services in early 2025; and

  • issue transparency notices in mid 2025.

 

 

Representing repression...

Ohio House Representative introduces a bill to criminalise the use of VPNs to circumvent age/ID verification


Link Here 29th October 2023
Full story: Age Verification in USA...Requiring age verification for porn and social media
Ohio House Representative Steve Demetriou has introduced an extraordinarily repressive House Bill (HB) 295. Dubbed the Innocence Act, it would implement an age verification requirement similar to those already implemented in other states. However, this bill goes way beyond the others in that it introduces criminal penalties for websites that don't comply and misdemeanor penalties for any internet user who tries to circumvent age verification, e.g. by using a VPN.

In its current form, companies and webmasters who don't implement reasonable age verification methods could be subject to criminal charges -- a third-degree felony. No other age verification regulation proposed or implemented in the country has such punitive criminal penalties.

Corey Silverstein, a First Amendment attorney, commented:

VPNs are available on most mobile devices through the Apple App Store or Google Play Store. They are also free or relatively inexpensive. And, to think that a 17-year-old high school student can't learn about and effectively deploy a VPN is short-sighted. I can't think of a worse idea than charging minors with criminal offenses for viewing adult content and potentially ruining their futures. Attempting to shame and embarrass minors for viewing adult-themed content goes so far beyond common sense that it begs the question of whether the supporters of this bill gave it any thought at all.

It is not yet clear if the bill has a chance of becoming law.

 

 

Encrypted Client Hello...

Internet company Cloudflare enables a feature preventing ISP website blocking at least for websites that use Cloudflare


Link Here 9th October 2023
A few days ago, Internet infrastructure company Cloudflare implemented widespread support for Encrypted Client Hello (ECH), a privacy technology that aims to render web traffic surveillance futile. This means that site blocking implemented by ISPs will be rendered useless in most, if not all cases.

ECH is a newly proposed privacy standard that's been in the making for a few years. The goal is to increase privacy for Internet users, and it has already gained support from Chrome, Firefox, Edge, and other browsers. Users can enable it in the settings, though support may still be experimental in some cases.

The main barrier to widespread adoption is that this privacy technology is a two-way street. This means that websites have to support it as well. Cloudflare has made a huge leap forward on that front by enabling it by default on all free plans, which currently serve millions of sites. Other subscribers can apply to have it enabled. Cloudflare writes in an announcement:

Cloudflare is a big proponent of privacy for everyone and is excited about the prospects of bringing this technology to life. Encrypted Client Hello (ECH) is a successor to ESNI and masks the Server Name Indication (SNI) that is used to negotiate a TLS handshake. This means that whenever a user visits a website on Cloudflare that has ECH enabled, no one except for the user, Cloudflare, and the website owner will be able to determine which website was visited.

If you're a website, and you care about users visiting your website in a fashion that doesn't allow any intermediary to see what users are doing, enable ECH today on Cloudflare.

Tests conducted by TorrentFreak show that ISP blocking measures in the UK, the Netherlands, and Spain were rendered ineffective.

 

 

Not thinking hard enough about the risks associated with AI...

ICO data censor harangues Snap with a nonsensically abstract accusation, whilst noting that rules haven't actually been broken yet


Link Here 8th October 2023

UK Information Commissioner issues preliminary enforcement notice against Snap

  • Snap issued with preliminary enforcement notice over potential failure to properly assess the privacy risks posed by its generative AI chatbot 'My AI'

  • Investigation provisionally finds Snap failed to adequately identify and assess the risks to several million 'My AI' users in the UK including children aged 13 to 17.

The Information Commissioner's Office (ICO) has issued Snap Inc with a preliminary enforcement notice over potential failure to properly assess the privacy risks posed by Snap's generative AI chatbot 'My AI'.

The preliminary notice sets out the steps which the Commissioner may require, subject to Snap's representations on the preliminary notice. If a final enforcement notice were to be adopted, Snap may be required to stop processing data in connection with 'My AI'. This means not offering the 'My AI' product to UK users pending Snap carrying out an adequate risk assessment.

Snap launched the 'My AI' feature for UK Snapchat+ subscribers in February 2023, with a roll out to its wider Snapchat user base in the UK in April 2023. The chatbot feature, powered by OpenAI's GPT technology, marked the first example of generative AI embedded into a major messaging platform in the UK. As at May 2023 Snapchat had 21 million monthly active users in the UK.

The ICO's investigation provisionally found the risk assessment Snap conducted before it launched 'My AI' did not adequately assess the data protection risks posed by the generative AI technology, particularly to children. The assessment of data protection risk is particularly important in this context which involves the use of innovative technology and the processing of personal data of 13 to 17 year old children.

The Commissioner's findings in the notice are provisional. No conclusion should be drawn at this stage that there has, in fact, been any breach of data protection law or that an enforcement notice will ultimately be issued. The ICO will carefully consider any representations from Snap before taking a final decision.

John Edwards, Information Commissioner said:

The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching 'My AI'.

We have been clear that organisations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.

 

 

More censors...

Ofcom investigates BitChute as an early test case of internet censorship


Link Here 7th October 2023
Full story: Ofcom Video Sharing Censors...Video on Demand and video sharing
BitChute is a British based video sharing platform that is particularly well known for hosting content that has been banned from more censorial websites, notably YouTube.

The name was conceived from a portmanteau of the words bit, a unit of information in computing, and parachute. At the time of the site's launch, founder Ray Vahey described BitChute as an alternative to mainstream platforms; he believed these platforms had demonstrated increased levels of censorship over the previous few years by banning and demonetising users (barring them from receiving advertising revenue), and tweaking algorithms to send certain content into obscurity. In 2018, the creators of BitChute described themselves as a small team making a stand against Internet censorship because we believe it is the right thing to do.

Of course right-leaning opinion does not sit well with the British establishment, so it isn't a surprise to see it as an early example of interest from the UK internet censor, Ofcom. Note that censorship powers have already been granted to Ofcom for video sharing platforms stupid enough to be based in Britain. So these platforms are an interesting forerunner to how Ofcom will censor the wider internet when powers from the Online Censorship Bill are enacted.

Ofcom writes:

Investigation into BitChute Limited. Case considered: 3 October 2023

Summary

Compliance assurances from BitChute regarding its obligations under Part 4B of the Communications Act 2003: Improvements to the measures BitChute has in place to protect users from videos containing harmful material.

Ofcom's role is to ensure video-sharing platforms (VSPs) based in the UK have appropriate systems and processes in place as required under Part 4B of the Act to effectively protect their users from harmful video content in scope of the VSP regime.

In May 2022, a far-right extremist carried out a racially motivated attack in Buffalo, New York. Ofcom conducted analysis of the measures in place to protect users from harmful material on several VSPs, including BitChute, in light of this incident.

Our analysis of BitChute's platform raised concerns that some of its measures were not effectively protecting users from encountering videos related to terrorism and other harmful material prohibited under the VSP regime.

Following a period of close engagement with BitChute to discuss its compliance with its obligations under Part 4B of the Communications Act 2003, it has made some important changes and also committed to further improvements to protect users from harmful material.

Background

On 14 May 2022, a far-right extremist carried out a racially motivated attack in Buffalo, New York, killing ten people and wounding three others. The attacker livestreamed the shooting online, and versions of the footage were distributed on multiple online services, including BitChute and other UK-based VSPs that we currently regulate. This resulted in UK users being potentially exposed to harmful material related to terrorism and material likely to incite violence and hatred.

Ofcom's role is to ensure VSPs have appropriate systems and processes in place as required under Part 4B of the Act to effectively protect their users from harmful video content in scope of the VSP regime. Our approach to securing compliance focuses on oversight, accountability, and transparency, working with the industry where possible to drive improvements, as well as taking formal enforcement action where appropriate.

Our concerns

In the weeks following the Buffalo attack, we engaged with relevant VSPs and the wider industry to learn more about how platforms can set up internal systems and processes to prevent the livestreaming of such attacks and protect users from the sharing of associated video content. In October 2022, Ofcom published a report on the Buffalo attack that explored how footage of the attack, and related material, came to be disseminated online and to understand the implications of this for platforms' efforts to keep people safe online.

Our analysis raised concerns that BitChute's reporting and flagging measures were not effectively protecting users from encountering videos related to terrorism and other harmful material prohibited under the VSP regime. In particular, Ofcom was concerned that BitChute's reporting function was not open to non-registered users, and that the capacity and coverage of BitChute's content moderation team was insufficient to enable it to respond promptly to reports of harmful content.

BitChute's commitments

In response to our concerns, BitChute has made some important changes to its reporting and flagging mechanisms and to the content moderation processes which underpin these, as well as committing to further changes.

1. Coverage and capacity of content moderation

In our 2022 VSP Report, published in October, we found that all VSPs, including BitChute, have adequate terms and conditions that prohibit material that would come within the scope of laws relating to terrorism, racism, and xenophobia, as well as material likely to incite violence or hatred.

However, the Buffalo attack exposed key deficiencies in BitChute's ability to effectively enforce its terms and conditions relating to hate and terror content: footage of the shooting was easily accessible on the platform in the days after the attack, and we learnt that the platform's content moderation team was modest in size and limited to certain working hours. This restricted BitChute's ability to respond quickly to reports that footage was on the platform following the attack.

BitChute has committed to triple the capacity of its moderation team by taking on more human moderators. It is also extending the coverage of its moderation team by increasing the number of hours that moderators are available to review reports and has committed to having a safety team operational 24/7 in autumn 2023.

2. User reporting and flagging mechanisms

Prior to the Buffalo attack, BitChute had reporting and flagging mechanisms in place to allow users to report potentially harmful content. However, on-platform flagging was only available to users who had a registered BitChute account. While all users (registered or unregistered) were able to report content by sending an email to BitChute, we were concerned that requiring non-registered users to email the platform, rather than click a reporting button next to the video, introduces a layer of friction to the reporting process that could disincentivise the user from making a report and increase the time taken to respond to reports.

As a result of our remediation work, BitChute has changed the design of its platform to allow non-registered users to directly report potentially harmful content. It has also updated its user-facing guidelines to set out more clearly what registered and non-registered users can expect from the flagging and reporting process.

3. Measuring effectiveness

BitChute has also committed to collecting additional metrics to measure the impact of changes made to its systems and processes, including the volume of content review reports raised each day and average response time in minutes for content reports. These metrics will help BitChute and Ofcom to evaluate the effectiveness of the platform's measures more easily.

We have also encouraged BitChute to implement additional reporting on risk metrics, which measure the risk of harmful material being encountered on the platform, and process metrics, which measure the effectiveness of BitChute's moderation systems.
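The two metrics BitChute has committed to collecting (daily report volume and average response time in minutes) can be sketched in a few lines of Python. The record format and the values below are hypothetical, purely for illustration, and are not BitChute's actual data.

```python
from collections import Counter
from datetime import datetime

# Hypothetical content-report log: (submitted_at, resolved_at) pairs
reports = [
    (datetime(2023, 9, 1, 10, 0), datetime(2023, 9, 1, 10, 12)),
    (datetime(2023, 9, 1, 14, 30), datetime(2023, 9, 1, 15, 0)),
    (datetime(2023, 9, 2, 9, 5), datetime(2023, 9, 2, 9, 8)),
]

# Volume of content reports raised each day
reports_per_day = Counter(submitted.date() for submitted, _ in reports)

# Average response time in minutes
avg_response_min = sum(
    (resolved - submitted).total_seconds() / 60
    for submitted, resolved in reports
) / len(reports)

print(dict(reports_per_day))
print(avg_response_min)  # 15.0
```

Tracked over time, a falling average response time and a stable report volume would be the kind of evidence Ofcom says it will use to judge whether the platform's changes are working.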

Our response

Taking into account BitChute's willingness to make timely improvements to its systems and processes to directly address the concerns we identified following the Buffalo incident, and our desire to work with industry to secure changes that protect users, we have decided not to open an investigation against BitChute into its compliance with its duties under Part 4B of the Communications Act 2003 at this time. We will, however, closely monitor the implementation of the proposed changes and the impact these changes have on user safety.

We also note that, on 14 June 2023, BitChute became a member of the Global Internet Forum to Counter Terrorism (GIFCT). GIFCT is a cross-industry initiative designed to prevent terrorists and violent extremists from exploiting digital platforms. Whilst we do not consider this an indicator of compliance, it is an encouraging step -- GIFCT has rigorous standards for membership, including demonstrating "a desire to explore new technical solutions to counter terrorist and violent extremist activity online" and "support for expanding the capacity of civil society organisations to challenge terrorism and violent extremism".

While we welcome BitChute's commitments to further improvements and measuring their effectiveness, we are aware of reports -- some of which have been communicated to us directly -- alleging that content likely to incite violence and hatred continues to be uploaded to BitChute, can be accessed easily, and may pose significant risks to users.

It is important to note that the VSP regime is a systems and processes regime, meaning the presence of harmful video content on a service is not in itself a breach of the rules. Accordingly, Ofcom's focus is to drive improvements to platforms' systems and processes to minimise the risks of users encountering harmful videos online in the first place.

However, such content can be indicative of an underlying issue with the user protections in place, and we will therefore continue to monitor BitChute closely to assess whether the changes it has made to its user reporting and content moderation systems result in tangible improvements to user safety. If we find that, despite BitChute's assurances and improvements, users are not being adequately protected from the categories of harmful material covered by the VSP regime, we will not hesitate to take further action, including formal enforcement action if necessary.

 

 

Offsite Article: And if they don't comply they will turn off their payment cards...


Link Here 7th October 2023
Full story: Internet Censorship in Canada...Proposal for opt-in internet blocking
Canada Plots to Increase Online Censorship, Targeting AI, Search and Social Media Algorithms

See article from reclaimthenet.org




 


 