Ofcom picks RevealMe.com seemingly as a start point to enforce ID verification requirements for adult content

30th September 2022

See article from ofcom.org.uk
RevealMe.com is a streaming service along the lines of OnlyFans that allows models to provide adult streaming videos and other content to subscribing fans. Ofcom has announced that it is investigating the company for failing to provide Ofcom with information on how it is implementing age/ID verification ('protecting users' in Ofcom speak). Ofcom writes: Ofcom has been the regulator of UK established video sharing platforms (VSPs) since
November 2020. Earlier this year, Ofcom issued a number of information requests to VSPs to obtain information on the measures taken by VSPs to protect users. On 29 September 2022, Ofcom opened an investigation into Tapnet Ltd,
which provides the VSP RevealMe. This investigation concerns Tapnet's compliance with an information request notice, issued on 6 June 2022 under section 368Z10 of the Communications Act 2003. Tapnet was required to respond to the
Notice by no later than 4 July 2022. As of 29 September 2022, Tapnet had not provided a response to the Notice. The Notice explained that the reason for requesting the information was to understand and monitor the measures VSPs
have in place to protect users and to publish a report under section 368Z11 of the Act. Ofcom's investigation will examine whether there are reasonable grounds for believing that Tapnet has failed to comply with its statutory
duties in relation to Ofcom's information request. Ofcom will provide updates on this page as we progress this investigation.
Ofcom publishes report seemingly trying to categorise or classify these 'harms' and associated risks with a view to its future censorship role

25th September 2022

See article from ofcom.org.uk
See report [pdf] from ofcom.org.uk
Ofcom writes: The Online Safety Bill, as currently drafted, will require Ofcom to assess, and publish its findings about the risks of harm arising from content that users may encounter on in-scope services, and will require in-scope
services to assess the risks of harm to their users from such content, and to have systems and processes for protecting individuals from harm. Online users can face a range of risks online, and the harms they may experience are
wide-ranging, complex and nuanced. In addition, the impact of the same harms can vary between users. In light of this complexity, we need to understand the mechanisms by which online content and conduct may give rise to harm, and use that insight to
inform our work, including our guidance to regulated services about how they might comply with their duties. This report sets out a generic model for understanding how online harms manifest. This research aimed to test a
framework, developed by Ofcom, with real-life user experiences. We wanted to explore if there were common risks and user experiences that could provide a single framework through which different harms could be analysed. There are a couple of important
considerations when reading this report:
The research goes beyond platforms' safety systems and processes to help shed broader light on what people are experiencing online. It therefore touches on issues that are beyond the scope of the proposed online safety regime.
The research reflects people's views and experiences of their online world: it is based on people self-identifying as having experienced 'significant harm', whether caused directly or indirectly, or 'illegal content'.
Participants' definitions of harmful and illegal content may differ and do not necessarily align with how the Online Safety Bill, Ofcom or others may define them.
UK Online Censorship Bill set to continue after 'tweaks'

16th September 2022

See article from techdirt.com
After a little distraction for the royal funeral, the UK's newly elected prime minister has said she will be continuing with the Online Censorship Bill. She said: We will be proceeding with the Online Safety Bill. There
are some issues that we need to deal with. What I want to make sure is that we protect the under-18s from harm and that we also make sure free speech is allowed, so there may be some tweaks required, but certainly he is right that we need to protect
people's safety online.
TechDirt comments: This is just so ridiculously ignorant and uninformed. The Online Safety Bill is a disaster in waiting and I wouldn't be surprised if some websites chose to
exit the UK entirely rather than continue to deal with the law. It won't actually protect the children, of course. It will create many problems for them. It won't do much at all, except make internet companies question whether
it's even worth doing business in the UK.
The continuing dangerous campaign to force ALL people to hand over sensitive ID details to porn sites in the name of protecting children from handing over sensitive ID details

3rd September 2022

See article from ico.org.uk
The UK's data protection censors at the Information Commissioner's Office (ICO) have generated a disgracefully onerous red-tape nightmare called the Age Appropriate Design Code, which requires any internet service that provides any sort of grown-up content to evaluate the age of all users so that under-18s can be protected from handing over sensitive ID data. Of course, the age checking usually requires all users to hand over lots of sensitive and dangerous ID data to any website that asks. Now the ICO has decided to apply these requirements to porn sites, given that they are often accessed by under-18s. The ICO writes: Next steps We will continue to evolve our approach, listening to others to
ensure the code is having the maximum impact. For example, we have seen an increasing amount of research (from the NSPCC, 5Rights, Microsoft and British Board of Film Classification), that children are likely to be accessing
adult-only services and that these pose data protection harms, with children losing control of their data or being manipulated to give more data, in addition to content harms. We have therefore revised our position to clarify that adult-only services are
in scope of the Children's code if they are likely to be accessed by children. As well as engaging with adult-only services directly to ensure they conform with the code, we will also be working closely with Ofcom and the
Department for Digital, Culture, Media and Sport (DCMS) to establish how the code works in practice in relation to adult-only services and what they should expect. This work is continuing to drive the improvements necessary to provide a better internet
for children.
Former UK Supreme Court judge savages the government's censorship bill

18th August 2022

See article from spectator.co.uk by Jonathan Sumption
Weighing in at 218 pages, with 197 sections and 15 schedules, the Online Safety Bill is a clunking attempt to regulate content on the internet. Its internal contradictions and exceptions, its complex paper chase of definitions, its weasel language
suggesting more than it says, all positively invite misunderstanding. Parts of it are so obscure that its promoters and critics cannot even agree on what it does. The real vice of the bill is that its provisions are not limited to
material capable of being defined and identified. It creates a new category of speech which is legal but harmful. The range of material covered is almost infinite, the only limitation being that it must be liable to cause harm to some people.
Unfortunately, that is not much of a limitation. Harm is defined in the bill in circular language of stratospheric vagueness. It means any physical or psychological harm. As if that were not general enough, harm also extends to anything that may increase
the likelihood of someone acting in a way that is harmful to themselves, either because they have encountered it on the internet or because someone has told them about it. This test is almost entirely subjective. Many things which
are harmless to the overwhelming majority of users may be harmful to sufficiently sensitive, fearful or vulnerable minorities, or may be presented as such by manipulative pressure groups. At a time when even universities are warning adult students
against exposure to material such as Chaucer with his rumbustious references to sex, or historical or literary material dealing with slavery or other forms of cruelty, the harmful propensity of any material whatever is a matter of opinion. It will vary
from one internet user to the next. If the bill is passed in its current form, internet giants will have to identify categories of material which are potentially harmful to adults and provide them with options to cut it out or
alert them to its potentially harmful nature. This is easier said than done. The internet is vast. At the last count, 300,000 status updates are uploaded to Facebook every minute, with 500,000 comments left that same minute. YouTube adds 500 hours of
videos every minute. Faced with the need to find unidentifiable categories of material liable to inflict unidentifiable categories of harm on unidentifiable categories of people, and threatened with criminal sanctions and enormous regulatory fines (up to 10 per cent of global revenue), what is a media company to do? The only way to cope will be to take the course involving the least risk: if in doubt, cut it out. This will involve a huge measure of regulatory overkill. A new era
of intensive internet self-censorship will have dawned. See full article from spectator.co.uk
British Computer Society experts are not impressed by the Online Censorship Bill

15th August 2022

See article from bcs.org
See BCS report [pdf] from bcs.org
Plans to compel social media platforms to tackle online harms are not fit for purpose according to a new poll of IT experts. Only 14% of tech professionals believed the Online Harms Bill was fit for purpose, according to the
survey by BCS, The Chartered Institute for IT. Some 46% said the bill was not workable, with the rest unsure. The legislation would have a negative effect on freedom of speech, most IT specialists (58%)
told BCS. Only 19% felt the measures proposed would make the internet safer, with 51% saying the law would not make it safer to be online. There were nearly 1,300 responses from tech professionals to the
survey by BCS. Just 9% of IT specialists polled said they were confident that legal but harmful content could be effectively and proportionately removed. Some 74% of tech specialists said they felt the bill
would do nothing to stop the spread of disinformation and fake news.
whilst we still can!

31st July 2022
Offsite Comment: Fixing the UK's Online Safety Bill, part 1: We need answers. 31st July 2022. See article from webdevlaw.uk by Heather Burns

Offsite Comment: The delay to the online safety bill: It won't make it any easier to please everyone. 17th July 2022. See article from theguardian.com by Alex Hern

Offsite Comment: It's time to kill the Online Safety Bill for good... Not only is it bad for business, bad for free speech, and, by attacking encryption, bad for online safety. 16th July 2022. See article from spectator.co.uk by Sam Ashworth-Hayes
Well John Penrose MP bizarrely proposes that social media companies keep a truthfulness score for all their users

10th July 2022

See Online Censorship Bill proposed amendments [pdf] from docs.reclaimthenet.org
John Penrose, a Tory MP, has tabled an amendment to the Online Censorship Bill currently being debated in Parliament: To move the following Clause--
Factual Accuracy
(1) The purpose of this section is to reduce the risk of harm to users of regulated services caused by disinformation or misinformation.

(2) Any Regulated Service must provide an index of the historic factual accuracy of material published by each user who has--
(a) produced user-generated content,
(b) news publisher content, or
(c) comments and reviews on provider content
whose content is viewed more widely than a minimum threshold to be defined and set by OFCOM.

(3) The index under subsection (1) must--
(a) satisfy minimum quality criteria to be set by OFCOM, and
(b) be displayed in a way which allows any user easily to reach an informed view of the likely factual accuracy of the content at the same time as they encounter it.
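To see what the amendment is actually asking platforms to build, the mechanism can be sketched in a few lines. This is purely illustrative: the class name, the scoring rule (fraction of rulings found accurate) and the views threshold are all hypothetical assumptions, since the amendment leaves the "minimum quality criteria" and threshold entirely to OFCOM.

```python
from collections import defaultdict

class FactualAccuracyIndex:
    """Hypothetical sketch of the amendment's per-user 'index of historic
    factual accuracy'. No such system or API exists; all names and the
    scoring rule are assumptions for illustration only."""

    def __init__(self, min_views_threshold=10_000):
        # Stand-in for the minimum audience threshold OFCOM would set:
        # users viewed less widely than this are not indexed at all.
        self.min_views_threshold = min_views_threshold
        self.rulings = defaultdict(lambda: {"accurate": 0, "inaccurate": 0})
        self.views = defaultdict(int)

    def record_ruling(self, user, accurate):
        """Record one accuracy ruling against a user's published material."""
        key = "accurate" if accurate else "inaccurate"
        self.rulings[user][key] += 1

    def record_views(self, user, count):
        """Accumulate how widely the user's content has been viewed."""
        self.views[user] += count

    def score(self, user):
        """Fraction of rulings found accurate, or None when the user is
        below the display threshold or has no rulings yet."""
        if self.views[user] < self.min_views_threshold:
            return None
        r = self.rulings[user]
        total = r["accurate"] + r["inaccurate"]
        if total == 0:
            return None
        return r["accurate"] / total
```

Even this toy version exposes the hard part the amendment glosses over: someone has to supply the accuracy rulings in the first place, and the whole index is only as trustworthy as that adjudicator.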
Surely it is a case of be careful what you wish for. After all, it would be great to see truth scores attached to all politicians' social media posts. I somehow think that other MPs will see the flaws in this idea and will be quick to see it consigned to the parliamentary trash can.
7th July 2022

...er the same sleazy party loving people that gave you one rule for them and one rule for us! See article from reprobatepress.com
Legal analysis of UK internet censorship proposals

5th July 2022

Offsite Article: French lawyers provide the best summary yet. 15th June 2022. See article from taylorwessing.com

Offsite Article: Have we opened Pandora's box? 20th June 2022. See article from tandfonline.com
Abstract: In thinking about the developing online harms regime (in the UK and elsewhere) it is forgivable to think only of how laws placing responsibility on social media platforms to prevent hate speech may benefit
society. Yet these laws could have insidious implications for free speech. By drawing on Germany's Network Enforcement Act I investigate whether the increased prospect of liability, and the fines that may result from breaching the duty of care in the
UK's Online Safety Act - once it is in force - could result in platforms censoring more speech, but not necessarily hate speech, and using the imposed responsibility as an excuse to censor speech that does not conform to their objectives. Thus, in
drafting a Bill to protect the public from hate speech we may unintentionally open Pandora's Box by giving platforms a statutory justification to take more control of the message. See full article from tandfonline.com

Offsite Article: The Online Safety Act - An Act of Betrayal. 5th July 2022. See article from ukcolumn.org by Iain Davis

The Online Safety Bill (OSB) has been presented to the public as an attempt to protect children from online grooming and abuse and to limit the reach of terrorist propaganda. This, however, does not seem to be its primary focus. The real objective of the proposed Online Safety Act (OSA) appears to be narrative control.