4th March 2022
Censoring adult entertainment does not reduce demand -- it just allows fraudsters, blackmailers and corruption to flourish

See article from reprobatepress.com
Threatening to invade and repress the freedoms of a once proud people
20th February 2022

See paywalled article from ft.com
The Financial Times is reporting that the cabinet has agreed to extend UK online censorship to cover legal but harmful content. The government will define what material must be censored via its internet censor Ofcom. The FT reports:
A revised Online Safety bill will give Ofcom, the communications regulator, powers to require internet companies to use technology to proactively seek out and remove both illegal content and legal content which is harmful
to children. The new powers were proposed in a recent letter to cabinet colleagues by home secretary Priti Patel and culture secretary Nadine Dorries.
It seems that the tech industry is not best pleased by being forced to pre-vet and
censor content according to rules decreed by government or Ofcom. The FT reports: After almost three years of discussion about what was originally named the Online Harms bill, tech industry insiders said they
were blindsided by the eleventh-hour additions. The changes would make the UK a global outlier in how liability is policed and enforced online, said Coadec, a trade body for tech start-ups. It added the UK would be a significantly
less attractive place to start, grow and maintain a tech business. Westminster insiders said ministers were reluctant to be seen opposing efforts to remove harmful material from the internet.
Big Tech companies liken the Online 'Safety' Bill to what censors are doing in China
13th February 2022

See paywalled article from ft.com
The Financial Times is reporting on a letter from Priti Patel and Nadine Dorries calling for tech companies to pre-emptively vet and censor user posts on social media that are 'legal but harmful'. The tech companies see this as a censorship demand
that goes way beyond anything else demanded in the supposedly free world. Unnamed critics said such censorship could create a clash with European data protection rules and deter further investment from multinational tech companies in the UK.
One tech lobbyist said the plans have put a panic-stricken tech industry at Defcon 2: The broader implications are vast. It seems that Patel and Dorries have sent the letter to cabinet colleagues to argue for a step up in the
censorship demands of the as yet unpublished Online 'Safety' Bill. One tech industry executive, who has seen the proposals, said the potential requirement to monitor legal content, as well as material that is clearly designated as illegal, crossed
a huge red line for internet companies. Another said: This seems to go significantly beyond what is done in democratic countries around the world. It feels a bit closer to what they are doing in China.
Scaremongering tactics are being used to mislead the public and make a bogus case for weakening encryption
6th February 2022

See article from openrightsgroup.org
The UK Home Office plans to force technology companies to remove the privacy and security of encrypted services such as WhatsApp and Signal as part of its Online Safety Bill. Even worse, the Home Office has launched a scaremongering
campaign wasting hundreds of thousands of pounds on a London advertising agency to undermine public trust in a critical digital security tool to keep people and businesses safe online. Undermining encryption would make our
private communications unsafe, allowing hostile strangers and governments to intercept conversations. Undermining encryption would put at risk the safety of those who need it most. Survivors of abuse or domestic violence, including children, need secure
and confidential communications to speak to loved ones and access the information and support they need. As Stephen Bonner, executive director for technology and innovation at the UK Information Commissioner's Office recently noted, end-to-end encryption
"strengthens children's online safety by not allowing criminals and abusers to send them harmful content or access their pictures or location." [1] Operation: Safe Escape [2] and LGBT Tech [3] --two
organisations that represent and safeguard vulnerable stakeholders--stress the vital importance of encrypted communications for victims of domestic abuse and for LGBTQ+ people in countries where they face harassment, victimisation and even the threat of
execution. Far from making them safer, denying at-risk people a confidential lifeline puts them at greater and sometimes mortal risk. Anti-encryption policies threaten the fundamental human right to freedom of expression.
Compromising encryption would undermine investigative journalism that exposes corruption and criminality. According to the Centre for Investigative Journalism, without a secure means of communication, sources would go unprotected and whistleblowers will
hesitate to come forward. [4] Contrary to what the Home Office claims, leading cybersecurity experts conclude that even message scanning "creates serious security and privacy risks for all society while the
assistance it can provide for law enforcement is at best problematic." [5] Backdoors create an entry point for hostile states, criminals and terrorists to gain access to highly sensitive information. Weakening encryption negatively impacts the
global Internet [6] and means our private messages, sensitive banking information, personal photographs and privacy would be undermined. MI6 head, Richard Moore, used his first public speech to warn of the increased data security threat from hostile
countries. [7] By Mr. Moore's analysis, the UK would be making things easier for hostile governments in waging a war against our personal and national security. The UK government must reassess its decision to wage war
on a technology that is essential to so many people in the UK and beyond. Signatories:
- Access Now
- ACLAC (Latin American and Caribbean Encryption Coalition)
- Adam Smith Institute
- Africa Media and Information Technology Initiative (AfriMITI)
- Alec Muffett, Security Researcher
- Annie Machon
- ARTICLE19
- Big Brother Watch
- Centre for Democracy and Technology
- Christopher Parsons, Senior Research Associate, Citizen Lab, Munk School of Global Affairs & Policy at the University of Toronto
- Collaboration on International ICT Policy for East and Southern Africa (CIPESA)
- Cybersecurity Advisors Network (CyAN)
- Dave Carollo, Product Manager, TunnelBear LLC
- Derechos Digitales -- Latin America
- Digital Rights Watch
- Dr. Duncan Campbell
- Electronic Frontier Foundation
- Faud Khan, CEO, TwelveDot Incorporated
- Fundación Karisma
- Global Partners Digital
- Glyn Moody
- Index on Censorship
- Instituto de Desarrollo Digital de América Latina y el Caribe (IDDLAC)
- Internet Society
- Internet Society Brazil Chapter
- Internet Society Catalonia Chapter
- Internet Society Germany Chapter
- Internet Society India Hyderabad
- Internet Society Portugal Chapter
- Internet Society Tchad Chapter
- Internet Society UK England Chapter
- Internet Freedom Foundation, India
- JCA-NET (Japan)
- Jens Finkhaeuser, Interpeer Project
- Prof. Dr. Kai Rannenberg, Goethe University Frankfurt, Chair of Mobile Business & Multilateral Security
- Kapil Goyal, Faculty Member, DAV College Amritsar
- Khalid Durrani, PureVPN
- Prof. Dr. Klaus-Peter Löhr, Freie Universität Berlin
- LGBT Technology Partnership
- Liberty
- Luke Robert Mason
- Mark A. Lane, Cryptologist, UNIX / Software Engineer
- OpenMedia
- Open Rights Group
- Open Technology Institute
- Peter Tatchell Foundation
- Privacy & Access Council of Canada
- Ranking Digital Rights
- Reporters Without Borders
- Riana Pfefferkorn, Research Scholar, Stanford Internet Observatory
- Simply Secure
- Sofía Celi, Latin American Cryptographers.
- Dr. Sven Herpig, Director for International Cybersecurity Policy, Stiftung Neue Verantwortung
- Tech For Good Asia
- The Law and Technology Research Institute of Recife (IP.rec)
- The Tor Project
- Dr. Vanessa Teague, Australian National University
- Yassmin Abdel-Magied
- [1] https://www.infosecurity-magazine.com/news/privacy-tsar-defense-encryption/
- [2] https://safeescape.org/get-help/
- [3] https://www.lgbttech.org/post/lgbt-tech-internet-society-release-new-encryption-infographic
- [4] https://tcij.org/bespoke-training/information-security/
- [5] https://arxiv.org/abs/2110.07450
- [6] https://www.internetsociety.org/resources/doc/2022/iib-encryption-uk-online-safety-bill/
- [7] https://www.bbc.com/news/uk-59470026
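The letter's central technical claim -- that with end-to-end encryption only the two endpoints can read a message, so an intermediary (or a compelled platform) sees nothing useful -- can be illustrated with a toy one-time-pad cipher. This is a deliberately simplified sketch, not the actual protocol used by WhatsApp or Signal, which rely on the far more sophisticated Signal protocol:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # One-time-pad XOR: a toy stand-in for a real cipher.
    # XOR is its own inverse, so the same function encrypts and decrypts.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at the shelter"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = xor_cipher(message, key)
# An intermediary sees only the ciphertext, which without the key is
# indistinguishable from random bytes; scanning it reveals nothing.
assert xor_cipher(ciphertext, key) == message
```

Mandated scanning or a backdoor only works by breaking this property -- reading the message before it is encrypted, or handing a third party the key -- which is exactly the access that hostile states and criminals could then also exploit.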
UK Government announces that the Online Censorship Bill will now extend to requiring identity/age verification to view porn
6th February 2022

See press release from gov.uk
On Safer Internet Day, Digital Censorship Minister Chris Philp has announced the Online Safety Bill will be significantly strengthened with a new legal duty requiring all sites that publish pornography to put robust checks in place to ensure their users
are 18 years old or over. This could include adults using secure age verification technology to verify that they possess a credit card and are over 18 or having a third-party service confirm their age against government data.
If sites fail to act, the independent regulator Ofcom will be able to fine them up to 10% of their annual worldwide turnover or can block them from being accessible in the UK. Bosses of these websites could also be held criminally liable
if they fail to cooperate with Ofcom. A large amount of pornography is available online with little or no protections to ensure that those accessing it are old enough to do so. There are widespread concerns this is impacting the
way young people understand healthy relationships, sex and consent. Half of parents worry that online pornography is giving their kids an unrealistic view of sex and more than half of mums fear it gives their kids a poor portrayal of women.
Age verification controls are one of the technologies websites may use to prove to Ofcom that they can fulfil their duty of care and prevent children accessing pornography. Many sites where children are likely
to be exposed to pornography are already in scope of the draft Online Safety Bill, including the most popular pornography sites as well as social media, video-sharing platforms and search engines. But as drafted, only commercial porn sites that allow
user-generated content - such as videos uploaded by users - are in scope of the bill. The new standalone provision ministers are adding to the proposed legislation will require providers who publish or place pornographic content
on their services to prevent children from accessing that content. This will capture commercial providers of pornography as well as the sites that allow user-generated content. Any companies which run such a pornography site which is accessible to people
in the UK will be subject to the same strict enforcement measures as other in-scope services. The Online Safety Bill will deliver more comprehensive protections for children online than the Digital Economy Act by going further and
protecting children from a broader range of harmful content on a wider range of services. The Digital Economy Act did not cover social media companies, where a considerable quantity of pornographic material is accessible, and which research suggests
children use to access pornography. The government is working closely with Ofcom to ensure that online services' new duties come into force as soon as possible following the short implementation period that will be necessary after
the bill's passage. The onus will be on the companies themselves to decide how to comply with their new legal duty. Ofcom may recommend the use of a growing range of age verification technologies available for companies to use
that minimise the handling of users' data. The bill does not mandate the use of specific solutions as it is vital that it is flexible to allow for innovation and the development and use of more effective technology in the future.

Age verification technologies do not require a full identity check. Users may need to verify their age using identity documents but the measures companies put in place should not process or store data that is irrelevant to the purpose of checking age. Solutions that are currently available include checking a user's age against details that their mobile provider holds, verifying via a credit card check, and other database checks including government held data such as passport data.
Any age verification technologies used must be secure, effective and privacy-preserving. All companies that use or build this technology will be required to adhere to the UK's strong data protection regulations or face enforcement
action from the Information Commissioner's Office. Online age verification is increasingly common practice in other online sectors, including online gambling and age-restricted sales. In addition, the government is working with
industry to develop robust standards for companies to follow when using age assurance tech, which it expects Ofcom to use to oversee the online safety regime. Notes to editors: Since the publication
of the draft Bill in May 2021 and following the final report of the Joint Committee in December, the government has listened carefully to the feedback on children's access to online pornography, in particular stakeholder concerns about pornography on
online services not in scope of the bill. To avoid regulatory duplication, video-on-demand services which fall under Part 4A of the Communications Act will be exempt from the scope of the new provision. These providers are already
required under section 368E of the Communications Act to take proportionate measures to ensure children are not normally able to access pornographic content. The new duty will not capture user-to-user content or search results
presented on a search service, as the draft Online Safety Bill already regulates these. Providers of regulated user-to-user services which also carry published (i.e. non user-generated) pornographic content would be subject to both the existing
provisions in the draft Bill and the new proposed duty.
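The data-minimisation principle the press release describes -- deriving a simple over-18 flag while processing nothing else -- can be sketched as follows. This is an illustrative sketch only: the function name and interface are invented for the example, and real age verification providers each expose their own APIs.

```python
from datetime import date
from typing import Optional

def is_over_18(date_of_birth: date, today: Optional[date] = None) -> bool:
    """Return only the over-18 flag derived from a date of birth.

    Data minimisation: the caller retains just this boolean; the
    date of birth itself need not be stored or processed further.
    """
    today = today or date.today()
    # Subtract one year if this year's birthday has not yet occurred.
    had_birthday = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
    age = today.year - date_of_birth.year - (0 if had_birthday else 1)
    return age >= 18
```

A verifying service built this way would hand the site only the boolean (perhaps with a signed attestation), never the underlying document data -- which is the privacy-preserving property the bill says Ofcom may recommend.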
Government defines a wide range of harms that will lead to criminal prosecution and that will require censorship by internet intermediaries
3rd February 2022

See press release from gov.uk
The Online Safety Bill is to be strengthened with a new list of criminal content for tech firms to remove as a priority. Previously, firms only had to remove such content after it had been reported to them by users, but now they must be proactive and prevent people being exposed in the first place. It will clamp down on pimps and human traffickers, extremist groups encouraging violence and racial hate against minorities, suicide chatrooms and the spread of private sexual images of women without their consent.
Naming these offences on the face of the bill removes the need for them to be set out in secondary legislation later and Ofcom can take faster enforcement action against tech firms which fail to remove the named illegal content.
Ofcom will be able to issue fines of up to 10 per cent of annual worldwide turnover to non-compliant sites or block them from being accessible in the UK. Three new criminal offences, recommended by the Law
Commission, will also be added to the Bill to make sure criminal law is fit for the internet age. The new communications offences will strengthen protections from harmful online behaviours such as coercive and controlling
behaviour by domestic abusers; threats to rape, kill and inflict physical violence; and deliberately sharing dangerous disinformation about hoax Covid-19 treatments. The government is also considering the Law Commission's
recommendations for specific offences to be created relating to cyberflashing, encouraging self-harm and epilepsy trolling. To proactively tackle the priority offences, firms will need to make sure the features, functionalities
and algorithms of their services are designed to prevent their users encountering them and minimise the length of time this content is available. This could be achieved by automated or human content moderation, banning illegal search terms, spotting
suspicious users and having effective systems in place to prevent banned users opening new accounts.

New harmful online communications offences

Ministers asked the Law Commission to review the
criminal law relating to abusive and offensive online communications in the Malicious Communications Act 1988 and the Communications Act 2003. The Commission found these laws have not kept pace with the rise of smartphones and
social media. It concluded they were ill-suited to address online harm because they overlap and are often unclear for internet users, tech companies and law enforcement agencies. It found the current law over-criminalises and
captures 'indecent' images shared between two consenting adults - known as sexting - where no harm is caused. It also under-criminalises - resulting in harmful communications without appropriate criminal sanction. In particular, abusive communications
posted in a public forum, such as posts on a publicly accessible social media page, may slip through the net because they have no intended recipient. It also found the current offences are sufficiently broad in scope that they could constitute a
disproportionate interference in the right to freedom of expression. In July the Law Commission recommended more coherent offences. The Digital Secretary today confirms new offences will be created and legislated for in the Online
Safety Bill. The new offences will capture a wider range of harms in different types of private and public online communication methods. These include harmful and abusive emails, social media posts and WhatsApp messages, as well
as 'pile-on' harassment where many people target abuse at an individual such as in website comment sections. None of the offences will apply to regulated media such as print and online journalism, TV, radio and film. The offences
are: A 'genuinely threatening' communications offence, where communications are sent or posted to convey a threat of serious harm. This offence is designed to better capture online threats to
rape, kill and inflict physical violence or cause people serious financial harm. It addresses limitations with the existing laws which capture 'menacing' aspects of the threatening communication but not genuine and serious threatening behaviour.
It will offer better protection for public figures such as MPs, celebrities or footballers who receive extremely harmful messages threatening their safety. It will address coercive and controlling online behaviour and stalking,
including, in the context of domestic abuse, threats related to a partner's finances or threats concerning physical harm.
A harm-based communications offence to capture communications sent to cause harm without a
reasonable excuse. This offence will make it easier to prosecute online abusers by abandoning the requirement under the old offences for content to fit within proscribed yet ambiguous categories such as "grossly
offensive," "obscene" or "indecent". Instead it is based on the intended psychological harm, amounting to at least serious distress, to the person who receives the communication, rather than requiring proof that harm was caused.
The new offences will address the technical limitations of the old offences and ensure that harmful communications posted to a likely audience are captured. The new offence will consider the context in which the communication was
sent. This will better address forms of violence against women and girls, such as communications which may not seem obviously harmful but when looked at in light of a pattern of abuse could cause serious distress. For example, in the instance where a
survivor of domestic abuse has fled to a secret location and the abuser sends the individual a picture of their front door or street sign. It will better protect people's right to free expression online. Communications that are
offensive but not harmful and communications sent with no intention to cause harm, such as consensual communication between adults, will not be captured. It will have to be proven in court that a defendant sent a communication without any reasonable
excuse and did so intending to cause serious distress or worse, with exemptions for communication which contributes to a matter of public interest.
An offence for when a person sends a communication they know to be
false with the intention to cause non-trivial emotional, psychological or physical harm. Although there is an existing offence in the Communications Act that captures knowingly false communications, this new offence
raises the current threshold of criminality. It covers false communications deliberately sent to inflict harm, such as hoax bomb threats, as opposed to misinformation where people are unaware what they are sending is false or genuinely believe it to be
true. For example, if an individual posted on social media encouraging people to inject antiseptic to cure themselves of coronavirus, a court would have to prove that the individual knew this was not true before posting it.
The maximum sentences for each offence will differ. If someone is found guilty of the harm-based offence they could go to prison for up to two years, up to 51 weeks for the false communication offence and up to five years for the
threatening communications offence. The maximum sentence was six months under the Communications Act and two years under the Malicious Communications Act. Notes The draft Online Safety Bill in its
current form already places a duty of care on internet companies which host user-generated content, such as social media and video-sharing platforms, as well as search engines, to limit the spread of illegal content on these services. It requires them to
put in place systems and processes to remove illegal content as soon as they become aware of it but take additional proactive measures with regards to the most harmful 'priority' forms of online illegal content. The priority
illegal offences currently listed in the draft bill are terrorism and child sexual abuse and exploitation, with powers for the DCMS Secretary of State to designate further priority offences with Parliament's approval via secondary legislation once the
bill becomes law. In addition to terrorism and child sexual exploitation and abuse, the further priority offences to be written onto the face of the bill include illegal behaviour which has been outlawed in the offline world for years but also newer
illegal activity which has emerged alongside the ability to target individuals or communicate en masse online. This list has been developed using the following criteria: (i) the prevalence of such content on regulated services,
(ii) the risk of harm being caused to UK users by such content and (iii) the severity of that harm. The offences will fall in the following categories:
- Encouraging or assisting suicide
- Offences relating to sexual images i.e. revenge and extreme pornography
- Incitement to and threats of violence
- Hate crime
- Public order offences - harassment and stalking
- Drug-related offences
- Weapons / firearms offences
- Fraud and financial crime
- Money laundering
- Controlling, causing or inciting prostitution for gain
- Organised immigration offences
The Earl of Erroll spouts to Parliament that anal sex and blowjobs are not how to go about wooing a woman
27th January 2022

See 2nd reading debate transcript from hansard.parliament.uk
See bill status from bills.parliament.uk
The House of Lords has given a second reading to the Digital Economy Act 2017 (Commencement of Part 3) Bill [HL] which is attempting to resurrect the failed law requiring age verification for porn websites. The bill reads:
Commencement of Part 3 of the Digital Economy Act 2017 The Secretary of State must make regulations under section 118(6) (commencement) of the Digital Economy Act 2017 to ensure that by 20 June 2022 all
provisions under Part 3 (online pornography) of that Act have come into force.

The full reasons why Part 3 was never brought into force have not been published, but this is most likely due to the law totally failing to address the issue of keeping porn users' data safe from scammers, blackmailers and thieves. It also seems that the government would prefer to have general rules under which to harangue websites for not keeping children safe from harm rather than set an expensive bunch of film censors to seek out individual transgressors.

The 2nd reading debate featured the usual pro-censorship peers queuing up to have a whinge about the availability of porn. And as is always the case, most of them haven't bothered thinking
about the effectiveness of the measures, their practicality and acceptability. And of course nothing about the safety of porn users who foolishly trust their very dangerous identity data to porn websites and age verification companies. Merlin Hay,
the Earl of Erroll, seems to be something of a shill for those age verification companies. He chairs the Digital Policy Alliance (dpalliance.org.uk) which acts as a lobby group
for age verifiers. He excelled himself in the debate with a few words that have been noticed by the press. He spouted: What really worries me is not just extreme pornography, which has quite rightly been mentioned, but
the stuff you can access for free -- what you might call the teaser stuff to get you into the sites. It normalises a couple of sexual behaviours which are not how to go about wooing a woman. Most of the stuff you see up front is about men almost
attacking women. It normalises -- to be absolutely precise about this, because I think people pussyfoot around it -- anal sex and blowjobs. I am afraid I do not think that is how you go about starting a relationship.
The Sunday Times reports that the Government is preparing to require ID verification for all internet porn
26th November 2021

See article from thetimes.co.uk
The Sunday Times is reporting that UK government ministers are preparing to introduce mandatory ID verification for adults to be able to access internet porn. Plans to bring in ID verification for adult sites, which were shelved in 2019 over their
failure to include data protection for porn users, are now being looked on with approval by Nadine Dorries, the culture secretary, and Nadhim Zahawi, the education secretary. Their support follows an intervention by Dame Rachel de Souza, the
children's commissioner, who has sent a report to ministers recommending that age verification becomes compulsory on all porn sites, not just those with user uploaded content as proposed under the draft Online 'Safety' Bill.
Government will define crimes in its Online Censorship Bill as those causing 'likely psychological harm'
1st November 2021

See article from cityam.com
The Department for Culture, Media & Sport has accepted recommendations from the Law Commission for crimes under its Online Censorship Bill to be based on likely psychological harm rather than just indecent or grossly offensive content. This widens
the purview of the law, and the proposed change will focus on the supposed harmful effect of a message rather than the content itself. A knowingly false communication offence will be created that will criminalise those who send or post a message they
know to be false with the intention to cause emotional, psychological, or physical harm to the likely audience. The move is justifiably likely to be met with resistance from freedom of speech campaigners.
The UK government steps up internet censorship whilst Twitter asks for clarity on what legal but supposedly harmful material needs to be censored
20th October 2021

See article from bbc.co.uk
The government has decided to counter terrorist knife murders by censoring the internet and taking away everyone's rights to (justifiably) insult their knee jerking MPs. Writing in the Daily Mail, Nadine Dorries, Secretary of State for Digital
Censorship Culture, Media and Sport said: Online hate has poisoned public life, it's intolerable, it's often unbearable and it has to end. Enough is enough. Social media companies have no
excuses. And once this bill passes through Parliament, they will have no choice.
She also said the government had decided to re-examine how our legislation can go even further to ensure the biggest social media companies properly
protect users from anonymous abuse. Twitter is not impressed and has aired its concerns that the bill gives too much influence to the culture secretary over Ofcom. The current draft bill would allow Dorries to change the Ofcom code of
practice that would be used to regulate the likes of Facebook and Twitter. Speaking to Radio 4's Westminster Hour programme, Katy Minshall, the head of policy in the UK for Twitter, said the bill gave the minister unusual powers, leaving Ofcom
to muddle through. She also rejected the idea of stronger rules around online anonymity -- something some MPs have campaigned for. Minshall argued that clamping down on anonymous accounts would fail to deal with the problems of online abuse and could
damage people who rely on pseudonymity. She said: If you're a young person exploring their sexuality or you're a victim of domestic violence looking online for help and for support, pseudonymity is a really important
safety tool for you. She added that users already had to provide a date of birth, full name and email address when signing up, meaning that the police could access data about an account, even if someone had used a pseudonym. Minshall
said the bill had thrown up all sorts of really important questions, such as how do we define legal but harmful content and what sorts of exemptions should we make for journalistic content or content of democratic importance.
12th October 2021

Ex-DCMS minister Ed Vaizey predicts huge battleground over UK's plan to set internet censorship rules. See article from techcrunch.com
How the Online Safety Bill lets politicians define free speech
17th September 2021

See Creative Commons article from openrightsgroup.org by Heather Burns
The Joint Pre-Legislative Scrutiny committee has opened its work into the draft Online Safety Bill. Over the course of their enquiry, one area they must cover -- perhaps as their highest priority -- is the potential for the Bill to be abused as a means
of politicising free speech, and your ability to exercise it. As it has been drafted, the Bill gives sweeping powers to the Secretary of State for Digital, Culture, Media, and Sport, and potentially to the Home Secretary, to make
unilateral decisions, at any time they please, as to what forms of subjectively harmful content must be brought into the scope of the bill's content moderation requirements. Shockingly, it allows them to make those decisions for political reasons.
These risks come in Part 2, Chapter 5, Section 33 of the draft, which states (emphasis our own): (1) The Secretary of State may direct OFCOM to modify a code of practice submitted under section
32(1) where the Secretary of State believes that modifications are required-- (a) to ensure that the code of practice reflects government policy, or (b) in the case of a code of practice under section 29(1) or (2), for reasons of national security or public safety. (NB: this refers to terrorism and CSEA content) (2) A direction given under this section-- (a) may not require OFCOM to include in a code of practice provision about a particular step recommended to be taken by providers
of regulated services, and (b) must set out the Secretary of State's reasons for requiring modifications (except in a case where the Secretary of State considers that doing so would be against the interests of national security or against the
interests of relations with the government of a country outside the United Kingdom). (3) Where the Secretary of State gives a direction to OFCOM, OFCOM must, as soon as reasonably practicable-- (a) comply with the direction, (b) submit to the
Secretary of State the code of practice modified in accordance with the direction, (c) submit to the Secretary of State a document containing-- (i) (except in a case mentioned in subsection (2)(b)) details of the direction, and (ii) details about how
the code of practice has been revised in response to the direction, and (d) inform the Secretary of State about modifications that OFCOM have made to the code of practice that are not in response to the direction (if there are any). (4) The
Secretary of State may give OFCOM one or more further directions requiring OFCOM to modify the code of practice for the reasons mentioned in paragraph (a) or (b) of subsection (1), and subsections (2) and (3) apply again in relation to any such further
direction.
In other words, a government minister will have the authority to direct an (allegedly) independent regulator to modify the rules of content moderation on topics which are entirely subjective, entirely
legal, and entirely political, and to order that regulator to enforce those new rules. Online services, whether the biggest platform or the smallest startup, in turn, will have no choice but to follow those rules, lest they face
potential penalties, fines, and even service blocking. You don't have to be a policy
expert, or a lawyer, to see how these illiberal powers could be misused and abused. We've already provided an example of how this blatant politicisation of the boundaries of free speech could be used to silence public debate on legal topics which the
government of the day finds unacceptable, for example, migration . You may have strong opinions on that
topic yourself, and you have every right to do so. However, your own ability to discuss that topic is on the table here too. And as political currents shift and parties trade power, we risk a never-ending war of attrition where
the government of the day simply silences topics, opinions, and opposition voices it does not want you to hear. The political powers over free speech contained in the draft Bill are a rare area where the consensus is universal.
Other groups, even those who are strongly in favour of the Bill, are equally uncomfortable with the level of
government control over an allegedly independent regulator that has been placed on the table. These voices also include groups outside the UK who are alarmed by the potential these powers have to lower the UK's international standing as a free and
democratic nation which upholds the right to freedom of expression. This chorus should not be ignored. The clauses allowing government to politicise the boundaries of legal free speech have no place in this Bill, or indeed, in any
Bill. As the pre-legislative scrutiny committee draws its conclusions, and as the draft Bill approaches its final form, these clauses must be deleted and left in the bin where they belong.
|
|
|
|
|
| 7th
September 2021
|
|
|
How the draft Online Safety Bill would affect the development of Free/open source software. By Neil Brown See
article from decoded.legal |
|
Individuals and LGBT organisations speak out against the Government's Online Safety Bill
|
|
|
| 4th September 2021
|
|
| See article from
indexoncensorship.org |
As proud members of the LGBTQ+ community, we know first-hand the vile abuse that regularly takes place online. The data is clear: 78% of us have faced anti-LGBTQ+ hate crime or hate speech online in the last 5 years. So we understand why the Government
is looking for a solution, but the current version of the Online Safety Bill is not the answer -- it will make things worse not better. The new law introduces the "duty of care" principle and would give internet
companies extensive powers to delete posts that may cause 'harm.' But because the law does not define what it means by 'harm' it could result in perfectly legal speech being removed from the web. As LGBTQ+ people we have seen what
happens when vague rules are put in place to police speech. Marginalised voices are silenced. From historic examples of censors banning LGBTQ+ content to 'protect' the public, to modern day content moderation tools marking innocent LGBTQ+ content as
explicit or harmful. This isn't scaremongering. In 2017, Tumblr's content filtering system marked non-sexual LGBTQ+ content as explicit and blocked it; in 2020, TikTok censored depictions of homosexuality, such as two men kissing or
holding hands, and reduced the reach of LGBTQ+ posts in some countries; and within the last two months LinkedIn removed a coming-out post from a 16-year-old following complaints. This Bill, as it stands, would provide a legal
basis for this censorship. Moreover, its vague wording makes it easy for hate groups to put pressure on Silicon Valley tech companies to remove LGBTQ+ content and would set a worrying international standard. Growing calls to end
anonymity online also pose a danger. Anonymity allows LGBTQ+ people to share their experiences and sexuality while protecting their privacy and many non-binary and transgender people do not hold a form of acceptable ID and could be shut out of social
media. The internet provides a crucial space for our community to share experiences and build relationships. 90% of LGBTQ+ young people say they can be themselves online and 96% say the internet has helped them understand more
about their sexual orientation and/or gender identity. This Bill puts the content of these spaces at potential risk. Racism, homophobia, transphobia, and threats of violence are already illegal. But data shows that when they
happen online, it is ignored by the authorities. After the system for flagging online hate crime was underused by the police, the Home Office stopped including these figures in its annual report altogether, leaving us in the dark about the scale of the
problem. The government's Bill should focus on this illegal content rather than empowering the censorship of legal speech. This is why we are calling for "the duty of care", which in the current form of the Online Safety
Bill could be used to censor perfectly legal free speech, to be reframed to focus on illegal content, for there to be specific, written, protections for legal LGBTQ+ content online, and for the LGBTQ+ community to be properly consulted throughout the
process.
- Stephen Fry , actor, broadcaster, comedian, director, and writer.
- Munroe Bergdorf , model, activist, and writer.
- Peter Tatchell ,
human rights campaigner.
- Carrie Lyell , Editor-in-Chief of DIVA Magazine.
- James Ball , Global Editor of The Bureau Of Investigative Journalism.
-
Jo Corrall , Founder of This is a Vulva.
- Clara Barker , material scientist and Chair of LGBT+ Advisory Group at Oxford University.
- Marc
Thompson , Director of The Love Tank and co-founder of PrEPster and BlackOut UK.
- Sade Giliberti , TV presenter, actor, and media personality.
- Fox Fisher ,
artist, author, filmmaker, and LGBTQIA+ rights advocate.
- Cara English , Head of Public Engagement at Gendered Intelligence, Founder OpenLavs.
- Paula Akpan ,
journalist, and founder of Black Queer Travel Guide.
- Tom Rasmussen , writer, singer, and drag performer.
- Jamie Wareham , LGBTQ journalist and host of the #QueerAF
podcast.
- Crystal Lubrikunt , international drag performer, host, and producer.
- David Robson, Chair of London LGBT+ Forums Network
-
Shane ShayShay Konno , drag performer, curator and host of the ShayShay Show, and founder of The Bitten Peach.
- UK Black Pride , Europe's largest celebration for African, Asian,
Middle Eastern, Latin America, and Caribbean-heritage LGBTQI+ people.
|
|
A pithy summary about the current parliamentary clamour for age verification for porn and social media
|
|
|
|
2nd September 2021
|
|
| See article from us10.campaign-archive.com by Ben
Greenstone | |
Ben Greenstone comments on a recent article in the Times about a cross-party cartel of powerful parliamentarians all calling for more intrusive age verification: The Chairs of both the Draft Online Safety Bill
Joint Committee and the DCMS Select Committee, alongside the Shadow DCMS Secretary of State and the Children's Commissioner, are all calling for tougher age verification measures online. It blows my mind that the piece does not
make more of the fact that DCMS tried to introduce age verification for *actual online pornography* and failed because it was too hard. 18-year-olds can have a credit card, which can be used as a proxy measure... what do 13-year-olds have?
This is classic just fix it from people who don't seem to have spent any time actually thinking about what fixing it would look like and what it would require. It's bad news for online service providers, but great news if you are
planning to set up an age verification business.
|
|
|
|
|
| 9th August 2021
|
|
|
The government's online safety bill is another unseen power-grab. By Patrick Maxwell See article from politics.co.uk
|
|
And one can guess at her political allegiance, as she is currently a boss at NewsGuard, which famously labelled the Daily Mail website as 'failing to maintain basic standards of accuracy and accountability'
|
|
|
|
28th July 2021
|
|
| See article from bbc.co.uk See
Don't Trust the Daily Mail from theguardian.com |
Britain's state internet censor Ofcom has announced that Anna-Sophie Harling will be its principal internet censor dealing with censorship under the Government's upcoming Online Safety Bill. Ofcom will be able to fine tech firms that fail to remove
'offending' content up to 10% of their global revenue. Harling will be part of a team reporting to Mark Bunting, director of online policy. Harling is currently managing director for Europe at NewsGuard, which audits online publishers for
'accuracy'. And one can guess at her political allegiance, as NewsGuard famously labelled the Daily Mail website as 'failing to maintain basic standards of accuracy and accountability'. |
|
Comments about the UK Government's new Internet Censorship Bill
|
|
|
| 21st
July 2021
|
|
| |
Offsite Comment: The Online Safety Bill won’t solve online abuse 2nd July 2021. See article by Heather Burns The Online Safety Bill contains threats to freedom of expression, privacy, and commerce which will do nothing to solve online
abuse, deal with social media platforms, or make the web a better place to be.
Update: House of Lords Committee considers that social media companies are not the best 'arbiters of truth' 21st July 2021. See
article from dailymail.co.uk , See
report from committees.parliament.uk A House of Lords committee has warned that the government's plans for new online censorship
laws will diminish freedom of speech by making Facebook and Google the arbiters of truth. The influential Lords Communications and Digital Committee cautioned that legitimate debate is at risk of being stifled by the way major platforms filter out
misinformation. Committee chairman Lord Gilbert said: The benefits of freedom of expression online mustn't be curtailed by companies such as Facebook and Google, too often guided more by their commercial and political
interests than by the rights and wellbeing of their users.
The report said: We are concerned that platforms' approaches to misinformation have stifled legitimate debate, including between experts.
Platforms should not seek to be arbiters of truth. Posts should only be removed in exceptional circumstances.
The peers said the government should switch to enforcing existing laws more robustly, and criminalising
any serious harms that are not already illegal.
|
|
But would you trust money-seeking age verification companies not to use facial identification to record who is watching porn anyway?
|
|
|
| 10th July 2021
|
|
| See article from
theguardian.com See also CC article from alecmuffett.com |
Our Big Brother government is seeking ways for all website users to be identified and tracked in the name of child protection. But for all the upcoming legislation that demands age verification, there aren't actually any methods yet that satisfy
both strict age verification and protection of people's personal data from hackers, thieves, scammers, spammers, money-grabbing age verification companies, the government, and the provably data-abusing social media companies. The Observer has reported on a
face scanning scheme whereby the age verification company claims not to look up your identity via facial recognition, and instead just tries to count the wrinkles on your photo. See
article from theguardian.com . Security expert Alec Muffett has also posted some
interesting and relevant background provided to the Observer that somehow did not make the cut. See article from alecmuffett.com |
|
Comments about the UK Government's new Internet Censorship Bill
|
|
|
| 28th
June 2021
|
|
| | Comment: Disastrous 11th May 2021. See
article from bigbrotherwatch.org.uk Mark Johnson, Legal and Policy Officer at Big Brother Watch
said:
The Online Safety Bill introduces state-backed censorship and monitoring on a scale never seen before in a liberal democracy. This Bill is disastrous for privacy rights and free expression online. The Government is
clamping down on vague categories of lawful speech. This could easily result in the silencing of marginalised voices and unpopular views. Parliament should remove lawful content from the scope of this Bill altogether and refocus
on real policing rather than speech-policing.
Offsite Comment: Online safety bill: a messy new minefield in the culture wars 13th May 2021.
See article from theguardian.com by Alex Hern
The message of the bill is simple: take down exactly the content the government wants taken down, and no more. Guess wrong and you could face swingeing fines. Keep guessing wrong and your senior managers could even go
to jail. Content moderation is a hard job, and it's about to get harder.
Offsite Comment: Harm Version 3.0 15th May
2021. See article from cyberleagle.com by Graham Smith
Two years on from the April 2019 Online Harms White Paper, the government has published its draft Online Safety Bill. It is a hefty beast: 133 pages and 141 sections. It raises a slew of questions, not least around press and journalistic material and the
newly-coined content of democratic importance. Also, for the first time, the draft Bill spells out how the duty of care regime would apply to search engines, not just to user generated content sharing service providers. This post
offers first impressions of a central issue that started to take final shape in the government's December 2020 Full Response to consultation: the apparent conflict between imposing content monitoring and removal obligations on the one hand, and the
government's oft-repeated commitment to freedom of expression on the other - now translated into express duties on service providers. The draft Bill represents the government's third attempt at defining harm (if we include the
White Paper, which set no limit). The scope of harm proposed in its second version (the Full Response) has now been significantly widened. See
article from cyberleagle.com
Offsite Comment: The unstoppable march of state censorship 17th May 2021. See article from spiked-online.com
Vaguely worded hate-speech laws can end up criminalising almost any opinion.
Offsite Comment: Drowning internet services in red tape 18th May 2021. See article from techmonitor.ai by
Laurie Clarke The UK government has unveiled sprawling new legislation that takes aim at online speech on internet services -- stretching from illegal to legal yet harmful content. The wide-ranging nature of the proposals could
leave internet businesses large and small facing a huge bureaucratic burden, and render the bill impractical to implement.
Offsite Comment: UK online safety bill raises censorship concerns and questions on future of encryption 24th May 2021. See
article from cpj.org Offsite Comment:
Why the online safety bill threatens our civil liberties
26th May 2021. See article from politics.co.uk by Heather Burns With the recent
publication of the draft online safety bill, the UK government has succeeded in uniting the British population in a way not seen since the weekly clap for the NHS. This time, however, no one is applauding. After two years of dangled promises, the
government's roadmap to making the UK the safest place in the world to be online sets up a sweeping eradication of our personal privacy, our data security, and our civil liberties.
Offsite Comment: Misguided Online Safety Bill will be catastrophic for ordinary people's social media 23rd June 2021. See
article from dailymail.co.uk
The Government's new Online Safety Bill will be catastrophic for ordinary people's freedom of speech, former minister David Davis has warned. The Conservative MP said forcing social networks to take down
content in Britain they deem unacceptable seems straight out of Orwell's 1984. Davis slammed the idea that Silicon Valley firms could take down posts they think are not politically correct -- even though the content is legal. See full
article from dailymail.co.uk
Offsite Comment: On the trail of the Person of Ordinary Sensibilities 28th June 2021. See article from cyberleagle.com by Graham
Smith The bill boils down to what a mythical adult or child of 'ordinary sensibilities' considers to be 'lawful but awful' content.
|
|
|
|
|
| 26th June 2021
|
|
|
A Conservative government that boasts it is a defender of free speech against the attacks of the woke is about to impose the severest censorship this country has seen in peacetime. By Nick Cohen See
article from spectator.co.uk |
|
Internet organisations write to MPs pointing how dangerous it will be for internet users to lose the protection of End to End Encryption for their communications
|
|
|
| 15th June 2021
|
|
| See article from globalencryption.org
| To Members of Parliament: end-to-end encryption keeps us safe 68 million of your constituents are at risk of losing the most
important tool to keep them safe and protected from cyber-criminals and hostile governments.
End-to-end encryption means that your constituents' family photographs, messages to friends and family, financial information, and the commercially sensitive data of businesses up and down the country, can all be kept safe from harm's way. It
also keeps us safer in a world where connected devices have physical effect: end-to-end encryption secures connected homes, cars and children's toys. The government should not be making those more vulnerable to attack. The draft Online Safety Bill
contains clauses that could undermine and in some situations even prohibit the use of end-to-end encryption, meaning UK citizens will be less secure online than citizens of other democracies. British businesses operating online will have less protection
for their data flows in London than in the United States or the European Union. Banning end-to-end encryption, or introducing requirements for companies to scan the content of our messages, will remove protections for private citizens and companies'
data. We all need that protection, but children and members of at-risk communities need it most of all. Don't leave them exposed.
With more people than ever before falling prey to criminals online, now is not the time for the UK to undertake a reckless policy experiment that puts its own citizens at greater risk. We, the undersigned, are calling on the Home Office to explain how it
plans to protect the British public from criminals online when it is taking away the very tools that keep the public safe. If the draft Online Safety Bill aims to make us safer, end-to-end encryption should not be threatened or undermined by this
legislation. Sincerely,
- ARTICLE 19*
- Association for Proper Internet Governance
- Big Brother Watch*
- Bikal
-
Blacknight*
- CCAOI*
- Centre for Democracy and Technology*
- Coalition for a Digital Economy (COADEC)
-
Defenddigitalme
- Derechos Digitales*
-
Digital Rights Watch*
- eco -- Association of the Internet Industry*
- English PEN
- Global Partners Digital*
- Internet Governance Project, Georgia Institute of Technology*
- Internet Society*
- Internet Society Ghana Chapter*
-
Internet Society Hyderabad Chapter*
- Internet Society UK England Chapter*
- Mega Limited*
- New America's Open Technology
Institute*
- Open Rights Group*
- Paradigm Initiative (PIN)*
- Praxonomy*
-
Privacy International*
- Prostasia Foundation*
- Riana Pfefferkorn, Research Scholar, Stanford Internet Observatory
- Simply Secure*
- Statewatch
- techUK
- The Tor Project*
- Tresorit*
- Tutao GmbH -- Tutanota*
*Members of the Global Encryption Coalition
|
|
|
|
|
| 13th June 2021
|
|
|
And will the Online Safety Bill enable political censorship by the government? See article from
openrightsgroup.org |
|
Strident Scottish feminist MSP tables motion calling for the resurrection of failed UK law requiring age verification for porn
|
|
|
| 11th June 2021
|
|
| See article from
scottishlegal.com |
Rhoda Grant is a campaigning MSP with a long and miserable history of calling for bans on sex work and lap dancing. She has now tabled a motion for consideration by the Scottish Parliament expressing concern at the UK government's failure to
implement Part 3 of the Digital Economy Act 2017, which sought to impose age verification for porn, but without any consideration of the dangers to porn users of having their personal data hacked or abused. Grant's motion has received the backing of Labour
and SNP MSPs and notes that a coalition of women's organisations, headteachers, children's charities and parliamentarians want the government to enforce Part 3 without further delay. Grant said: How we keep our
children safe online should be an absolute priority, so the failure to implement Part 3 of the Digital Economy Act 2017 is a terrible reflection on the UK government.
|
|
House of Lords Private Members Bills seek the restoration of failed age verification for porn and another that demands more perfect age assurance methods
|
|
|
| 9th
June 2021
|
|
| See article from parliament.uk |
Members of the House of Lords are clamouring for more red tape and censorship in the name of protecting children from the dangers of the internet. Of course these people don't seem to give a shit about the safety of adults using the internet. Maurice
Morrow is attempting to revive the failed age verification for porn in his bill, the Digital Economy Act 2017 (Commencement of Part 3) Bill [HL]. The original scheme failed firstly because it did not consider data protection for porn users' identity
data. The original authors of the bill couldn't even be bothered to consider such security implications as porn users handing over identity data and porn browsing data directly to Russian porn sites, possibly acting as fronts for the Russian government
dirty tricks department. Perhaps the scheme also failed because the likes of GCHQ don't fancy half the porn-using population of the UK using VPNs and Tor to work around age verification and ISP porn blocking. See Morrow's
bill progress from bills.parliament.uk and the bill text from
bills.parliament.uk . The bill had its first reading on 9th June. Meanwhile Beeban Kidron has proposed a bill demanding accurate age assurance. Age assurance is generally an attempt to determine age without the nightmare of dangerously handing
over full identity data, e.g. estimating the age of social media users from the ages of their friends. See Kidron's bill progress from bills.parliament.uk and the bill text from
bills.parliament.uk . The bill had its first reading on 27th May. |
|
|
|
|
| 29th May 2021
|
|
|
Asking the interesting question for future age verification laws: in today's blame society, who has to carry the can when people inevitably find ways to circumvent the system? Is it the user, the website, or the age verification service? See
article from bbc.co.uk |
|
The Government publishes its draft Internet Censorship Bill
|
|
|
| 11th May 2021
|
|
| See press
release from gov.uk See Internet Censorship Bill [pdf] from
assets.publishing.service.gov.uk |
New internet laws will be published today in the draft Online Safety Bill to protect children online and tackle some of the worst abuses on social media, including racist hate crimes. Ministers have added landmark new measures to
the Bill to safeguard freedom of expression and democracy, ensuring necessary online protections do not lead to unnecessary censorship. The draft Bill marks a milestone in the Government's fight to make the internet safe. Despite
the fact that we are now using the internet more than ever, over three quarters of UK adults are concerned about going online, and fewer parents feel the benefits outweigh the risks of their children being online -- falling from 65 per cent in 2015 to 50
per cent in 2019. The draft Bill includes changes to put an end to harmful practices, while ushering in a new era of accountability and protections for democratic debate, including:
New additions to strengthen people's rights to express themselves freely online, while protecting journalism and democratic political debate in the UK. Further provisions to tackle prolific online
scams such as romance fraud, which have seen people manipulated into sending money to fake identities on dating apps. Social media sites, websites, apps and other services hosting user-generated content or allowing people to
talk to others online must remove and limit the spread of illegal and harmful content such as child sexual abuse, terrorist material and suicide content. Ofcom will be given the power to fine companies failing in a new duty
of care up to £18 million or ten per cent of annual global turnover, whichever is higher, and have the power to block access to sites. A new criminal offence for senior managers has been included as a deferred power. This
could be introduced at a later date if tech firms don't step up their efforts to improve safety.
The draft Bill will be scrutinised by a joint committee of MPs before a final version is formally introduced to Parliament. The following elements of the Bill aim to create the most progressive, fair and
accountable system in the world. This comes only weeks after a boycott of social media by sports professionals and governing bodies in protest at the racist abuse of footballers online, while at the same time concerns continue to be raised at social
media platforms arbitrarily removing content and blocking users. Duty of care In line with the government's response to the
Online Harms White Paper , all companies in scope will have a duty of care
towards their users so that what is unacceptable offline will also be unacceptable online. They will need to consider the risks their sites may pose to the youngest and most vulnerable people and act to protect children from
inappropriate content and harmful activity. They will need to take robust action to tackle illegal abuse, including swift and effective action against hate crimes, harassment and threats directed at individuals and keep their
promises to users about their standards. The largest and most popular social media sites (Category 1 services) will need to act on content that is lawful but still harmful, such as abuse that falls below the threshold of a criminal
offence, encouragement of self-harm and mis/disinformation. Category 1 platforms will need to state explicitly in their terms and conditions how they will address these legal harms and Ofcom will hold them to account. The draft
Bill contains reserved powers for Ofcom to pursue criminal action against named senior managers whose companies do not comply with Ofcom's requests for information. These will be introduced if tech companies fail to live up to their new responsibilities.
A review will take place at least two years after the new regulatory regime is fully operational. The final legislation, when introduced to Parliament, will contain provisions that require companies to report child sexual
exploitation and abuse (CSEA) content identified on their services. This will ensure companies provide law enforcement with the high-quality information they need to safeguard victims and investigate offenders. Freedom of
expression The Bill will ensure people in the UK can express themselves freely online and participate in pluralistic and robust debate. All in-scope companies will need to consider and put in place
safeguards for freedom of expression when fulfilling their duties. These safeguards will be set out by Ofcom in codes of practice but, for example, might include having human moderators take decisions in complex cases where context is important.
People using their services will need to have access to effective routes of appeal for content removed without good reason and companies must reinstate that content if it has been removed unfairly. Users will also be able to appeal to
Ofcom and these complaints will form an essential part of Ofcom's horizon-scanning, research and enforcement activity. Category 1 services will have additional duties. They will need to conduct and publish up-to-date assessments
of their impact on freedom of expression and demonstrate they have taken steps to mitigate any adverse effects. These measures remove the risk that online companies adopt restrictive measures or over-remove content in their
efforts to meet their new online safety duties. An example of this could be AI moderation technologies falsely flagging innocuous content as harmful, such as satire. Democratic content Ministers have
added new and specific duties to the Bill for Category 1 services to protect content defined as 'democratically important'. This will include content promoting or opposing government policy or a political party ahead of a vote in Parliament, election or
referendum, or campaigning on a live political issue. Companies will also be forbidden from discriminating against particular political viewpoints and will need to apply protections equally to a range of political opinions, no
matter their affiliation. Policies to protect such content will need to be set out in clear and accessible terms and conditions and firms will need to stick to them or face enforcement action from Ofcom. When moderating content,
companies will need to take into account the political context around why the content is being shared and give it a high level of protection if it is democratically important. For example, a major social media company may choose
to prohibit all deadly or graphic violence. A campaign group could release violent footage to raise awareness about violence against a specific group. Given its importance to democratic debate, the company might choose to keep that content up, subject to
warnings, but it would need to be upfront about the policy and ensure it is applied consistently. Journalistic content Content on news publishers' websites is not in scope. This includes both their
own articles and user comments on these articles. Articles by recognised news publishers shared on in-scope services will be exempted and Category 1 companies will now have a statutory duty to safeguard UK users' access to
journalistic content shared on their platforms. This means they will have to consider the importance of journalism when undertaking content moderation, have a fast-track appeals process for journalists' removed content, and will
be held to account by Ofcom for the arbitrary removal of journalistic content. Citizen journalists' content will have the same protections as professional journalists' content. Online fraud Measures
to tackle user-generated fraud will be included in the Bill. It will mean online companies will, for the first time, have to take responsibility for tackling fraudulent user-generated content, such as posts on social media, on their platforms. This
includes romance scams and fake investment opportunities posted by users on Facebook groups or sent via Snapchat. Romance fraud occurs when a victim is tricked into thinking that they are striking up a relationship with someone,
often through an online dating website or app, when in fact this is a fraudster who will seek money or personal information. Analysis by the National Fraud Intelligence Bureau found in 2019/20 there were 5,727 instances of romance
fraud in the UK (up 18 per cent year on year). Losses totalled more than £60 million. Fraud via advertising, emails or cloned websites will not be in scope because the Bill focuses on harm committed through user-generated content.
The Government is working closely with industry, regulators and consumer groups to consider additional legislative and non-legislative solutions. The Home Office will publish a Fraud Action Plan after the 2021 spending review and
the Department for Digital, Culture, Media and Sport will consult on online advertising, including the role it can play in enabling online fraud, later this year.
|
| |