Online Harms White Paper


UK Government seeks to censor social media



 

Policing the wild west...

Status report on the government's plans to introduce an internet censor for social media


Link Here 30th January 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
The UK government is rushing to finalise a draft internet censorship law, particularly targeting social media, but key details of the proposal have yet to be settled amid concerns about stifling innovation.

Government officials have been meeting with industry players, MPs, peers and other groups over the past month as they try to finalise their proposals.

People involved in those discussions said there is now broad agreement about the need to impose a new duty of care on big tech companies, as well as the need to back up their terms and conditions with the force of law.

A white paper is due to be published by the end of winter. But the Department for Digital, Culture, Media and Sport, which is partly responsible for writing up the new rules alongside the Home Office, is still deliberating over key aspects with just weeks to go until the government said it would unveil an outline of its proposals.

Among the sticking points are worries that regulation could stifle innovation in one of the U.K. economy's most thriving sectors and concerns over whether it can keep pace with rapid technological change. Another is ensuring sufficient political support to pass the law despite likely opposition from parts of the Conservative Party. A third is deciding what regulatory agency would ultimately be responsible for enforcing the so-called Internet Safety Law.

A major unresolved question is what censorship body will be in charge of enforcing laws that could expose big tech companies to greater liability for hosted content, a prospect that firms including Google and Facebook have fought at the European level.

Several people who spoke to POLITICO said the government does not appear to have settled on who would be the censor. The communications regulator Ofcom is very much in the mix, although there are concerns that Ofcom is already getting too big.

 

 

Updated: As always, increased red tape benefits the largest (i.e. US) companies...

Daily Mail reports on government discussion about a new internet censor, codenamed Ofweb


Link Here 6th February 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
Wrangling in Whitehall has held up plans to set up a social media censor dubbed Ofweb, The Mail on Sunday reveals.

The Government was due to publish a White Paper this winter on censorship of tech giants, but The Mail on Sunday has learnt it is still far from ready. Culture Secretary Jeremy Wright said it would be published within a month, but a Cabinet source said that timeline was wholly unrealistic. Other senior Government sources went further and said the policy document is unlikely to surface before the spring.

Key details on how a new censor would work have yet to be decided while funding from the Treasury has not yet been secured. Another problem is that some Ministers believe the proposed clampdown is too draconian and are preparing to try to block or water down the plan.

There are also concerns that technically difficult requirements would benefit the largest US companies, as smaller European companies and start-ups would not be able to afford the technology and development required.

The Mail on Sunday understands Jeremy Wright has postponed a visit to Facebook HQ in California to discuss the measures, as key details are still up in the air.

Update: The Conservatives don't have a monopoly on internet censorship...Labour agrees

6th February 2019. See article from ft.com

Labour has called for a new entity capable of taking on the likes of Facebook and Google. Tom Watson, the shadow digital secretary, will on Wednesday say a regulator should also have responsibility for competition policy and be able to refer cases to the Competition and Markets Authority.

According to Watson, any duty of care would only be effective with penalties that seriously affect companies' bottom lines. He has referred to regulators' ability to fine companies up to 4% of global turnover, or €20m, whichever is higher, for worst-case breaches of the EU-wide General Data Protection Regulation.
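By way of illustration, here is a minimal Python sketch of the GDPR-style "whichever is higher" cap that Watson refers to. The turnover figures are hypothetical examples, not real company data.

    # Sketch of the GDPR-style maximum fine rule quoted above:
    # up to 4% of global annual turnover, or EUR 20 million, whichever is higher.
    # The turnover figures below are hypothetical examples only.

    def max_fine_eur(global_turnover_eur: float) -> float:
        """Return the upper limit of the fine under the 4% / EUR 20m rule."""
        return max(0.04 * global_turnover_eur, 20_000_000)

    for turnover in (100_000_000, 5_000_000_000):  # hypothetical turnovers
        print(f"Turnover EUR {turnover:,}: maximum fine EUR {max_fine_eur(turnover):,.0f}")

So a company turning over €100m would face a ceiling of €20m (the fixed floor applies), while one turning over €5bn would face a ceiling of €200m (the 4% figure applies).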

 

 

Offsite Article: A Lord Chamberlain for the internet?...


Link Here 8th February 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
Thanks, but no thanks. By Graham Smith

See article from cyberleagle.com

 

 

Duty of care: an empty concept...

The Open Rights Group comments on government moves to create a social media censor


Link Here 9th February 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media

There is every reason to believe that the government and opposition are moving to a consensus on introducing a duty of care for social media companies to reduce harm and risk to their users. This may be backed by an Internet regulator, which might decide what kind of mitigating actions are appropriate to address the risks to users on different platforms.

This idea originated in a series of papers by Will Perrin and Lorna Woods and has been taken up most recently in a Science and Technology Committee report and by NGOs including the children's charity 5Rights.

A duty of care has some obvious merits: it could be based on objective risks, based on evidence, and ensure that mitigations are proportionate to those risks. It could take some of the politicisation out of the current debate.

However, it also has obvious problems. For a start, it focuses on risk rather than process. It moves attention away from the fact that interventions are regulating social media users just as much as platforms. It does not by itself tell us that free expression impacts will be considered, tracked or mitigated.

Furthermore, the lack of focus that a duty of care model gives to process means that platform decisions that have nothing to do with risky content are not necessarily based on better decisions, independent appeals and so on. Rather, as has happened with German regulation, processes can remain unaffected when they are outside a duty of care.

In practice, a lot of content which is disturbing or offensive is already banned on online platforms. Much of this would not be in scope under a duty of care, but it is precisely this kind of material that users most often complain about, either because it is not removed when they want it gone or because it is removed incorrectly. Any model of social media regulation needs to improve these issues, but a duty of care is unlikely to touch these problems.

There are very many questions about the kinds of risk, whether to individuals in general, vulnerable groups, or society at large, and about the evidence required to trigger action. The truth is that a duty of care, if cast sensibly and narrowly, will not satisfy many of the people who are demanding action; equally, if the threshold to act is low, then it will quickly be seen to be a mechanism for wide-scale Internet censorship.

It is also a simple fact that many decisions that platforms make about legal content which is not risky are not the business of government to regulate. This includes decisions about what legal content is promoted and why. For this reason, we believe that a better approach might be to require independent self-regulation of major platforms across all of their content decisions. This requirement could be a legislative one, but the regulator would need to be independent of government and platforms.

Independent self-regulation has not been truly tried. Instead, voluntary agreements have filled its place. We should be cautious about moving straight to government regulation of social media and social media users. The government refuses to regulate the press in this way because it doesn't wish to be seen to be controlling print media. It is pretty curious that neither the media nor the government are spelling out the risks of state regulation of the speech of millions of British citizens.

That we are in this place is of course largely the fault of the social media platforms themselves, who have failed to understand the need for, and value of, transparent and accountable systems to ensure they are acting properly. That, however, just demonstrates the problem: politically weak platforms who have created monopoly positions based on data silos are now being sliced and diced at the policy table for their wider errors. It's imperative that as these government proposals progress we keep focus on the simple fact that it is end users whose speech will ultimately be regulated.

 

 

Driving the internet into dark corners...

The IWF warns the government to think about unintended consequences when creating a UK internet censor


Link Here 22nd February 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media

Internet Watch Foundation's (IWF) CEO, Susie Hargreaves OBE, puts forward a voice of reason by urging politicians and policy makers to take a balanced approach to internet regulation which avoids a heavy cost to the victims of child sexual abuse.

IWF has set out its views on internet regulation ahead of the publication of the Government's Online Harms White Paper. It suggests that traditional approaches to regulation cannot apply to the internet and that human rights should play a big role in any regulatory approach.

The IWF, as part of the UK Safer Internet Centre, supports the Government's ambition to make the UK the safest place in the world to go online, and the best place to start a digital business.

IWF has a world-leading reputation in identifying and removing child sexual abuse images and videos from the internet. It takes a co-regulatory approach to combating child sexual abuse images and videos by working in partnership with the internet industry, law enforcement and governments around the world. It offers a suite of tools and services to the online industry to keep their networks safer. In the past 22 years, the internet watchdog has assessed -- with human eyes -- more than 1 million reports.

Ms Hargreaves said:

Tackling criminal child sexual abuse material requires a global multi-stakeholder effort. We'll use our 22 years' experience in this area to help the government and policy makers to shape a regulatory framework which is sustainable and puts victims at its heart. In order to do this, any regulation in this area should be developed with industry and other key stakeholders rather than imposed on them.

We recommend an outcomes-based approach where the outcomes are clearly defined and the government should provide clarity over the results it seeks in dealing with any harm. There also needs to be a process to monitor this and for any results to be transparently communicated.

But, warns Ms Hargreaves, any solutions should be tested with users, including understanding impacts on victims: "The UK already leads the world at tackling online child sexual abuse images and videos but there is definitely more that can be done, particularly in relation to tackling grooming and livestreaming, and of course, regulating harmful content is important.

"My worries, however, are about rushing into knee-jerk regulation which creates perverse incentives or unintended consequences for victims and could undo all the successful work accomplished to date. Ultimately, we must avoid a heavy cost to victims of online sexual abuse."

 

 

Wider definition of harm can be manipulated to restrict media freedom...

Index on Censorship responds to government plans to create a UK internet censor


Link Here 22nd February 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media

Index on Censorship welcomes a report by the House of Commons Digital, Culture, Media and Sport select committee into disinformation and fake news that calls for greater transparency on social media companies' decision making processes, on who posts political advertising and on use of personal data. However, we remain concerned about attempts by government to establish systems that would regulate harmful content online given there remains no agreed definition of harm in this context beyond those which are already illegal.

A number of reports, including the government's Internet Safety Strategy green paper, have examined the issue over the past year, but none has yet been able to come up with a definition of harmful content that goes beyond speech and expression that is already illegal. DCMS recognises this in its report when it quotes the Secretary of State Jeremy Wright discussing the difficulties surrounding the definition. Despite acknowledging this, the report's authors nevertheless expect technical experts to be able to set out what constitutes harmful content, to be overseen by an independent regulator.

International experience shows that in practice it is extremely difficult to define harmful content in such a way that would target only bad speech. Last year, for example, activists in Vietnam wrote an open letter to Facebook complaining that Facebook's system of automatically pulling content if enough people complained could silence human rights activists and citizen journalists in Vietnam, while Facebook has shut down the livestreams of people in the United States using the platform as a tool to document their experiences of police violence.

Index on Censorship chief executive Jodie Ginsberg said:

It is vital that any new system created for regulating social media protects freedom of expression, rather than introducing new restrictions on speech by the back door. We already have laws to deal with harassment, incitement to violence, and incitement to hatred. Even well-intentioned laws meant to tackle hateful views online often end up hurting the minority groups they are meant to protect, stifle public debate, and limit the public's ability to hold the powerful to account.

The select committee report provides the example of Germany as a country that has legislated against harmful content on tech platforms. However, it fails to mention that the German Network Enforcement Act legislated on content that was already considered illegal, or the widespread criticism of the law from, among others, the UN rapporteur on freedom of expression and groups such as Human Rights Watch. It also cites the fact that one in six of Facebook's moderators now works in Germany as practical evidence that legislation can work. Ginsberg said:

The existence of more moderators is not evidence that the laws work. Evidence would be if more harmful content had been removed and if lawful speech flourished. Given that there is no effective mechanism for challenging decisions made by operators, it is impossible to tell how much lawful content is being removed in Germany. But the fact that Russia, Singapore and the Philippines have all cited the German law as a positive example of ways to restrict content online should give us pause.

Index has reported on various examples of the German law being applied incorrectly, including the removal of a tweet of journalist Martin Eimermacher criticising the double standards of tabloid newspaper Bild Zeitung and the blocking of the Twitter account of German satirical magazine Titanic. The Association of German Journalists (DJV) has said the Twitter move amounted to censorship, adding it had warned of this danger when the German law was drawn up.

Index is also concerned about the continued calls for tools to distinguish between quality journalism and unreliable sources, most recently in the Cairncross Review. While we recognise that the ability to do this as individuals and through education is key to democracy, we are worried that a reliance on a labelling system could create false positives and mean that smaller or newer journalism outfits would find themselves rejected by the system.

 

 

Putting Zuckerberg behind bars...

The Telegraph reports on the latest government thoughts about setting up a social media censor


Link Here 23rd February 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media

Social media companies face criminal sanctions for failing to protect children from online harms, according to drafts of the Government's White Paper circulating in Whitehall.

Civil servants are proposing a new corporate offence as an option in the White Paper plans for a tough new censor with the power to force social media firms to take down illegal content and to police legal but harmful material.

They see criminal sanctions as desirable and as an important part of a regulatory regime, said one source, who added that there is a recognition, particularly on the Home Office side, that this needs to be a regulator with teeth. The main issue they need to satisfy ministers on is extra-territoriality, that is, can you apply this to non-UK companies like Facebook and YouTube? The belief is that you can.

The White Paper, which is due to be published in mid-March followed by a summer consultation, is not expected to lay out as definitive a plan as previously thought. A decision on whether to create a brand new censor or use Ofcom is expected to be left open. A Whitehall source said:

Criminal sanctions are going to be put into the White Paper as an option. We are not necessarily saying we are going to do it but these are things that are open to us. They will be allied to a system of fines amounting to 4% of global turnover or €20m, whichever is higher.

Government minister Jeremy Wright told the Telegraph this week he was especially focused on ensuring that technology companies enforce minimum age standards. He also indicated the Government would fulfil a manifesto commitment to a levy on social media firms, which could fund the new censor.

 

 

Six shooters...

Internet giants respond to impending government internet censorship laws with six principles that should be followed


Link Here 1st March 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
The world's biggest internet companies, including Facebook, Google and Twitter, are represented by a trade group called the Internet Association. This organisation has written to UK government ministers to outline how they believe harmful online activity should be regulated.

The letter has been sent to the culture, health and home secretaries. The letter will be seen as a pre-emptive move in the coming negotiation over new rules to govern the internet. The government is due to publish a delayed White Paper on online harms in the coming weeks.

The letter outlines six principles:

  • "Be targeted at specific harms, using a risk-based approach
  • "Provide flexibility to adapt to changing technologies, different services and evolving societal expectations
  • "Maintain the intermediary liability protections that enable the internet to deliver significant benefits for consumers, society and the economy
  • "Be technically possible to implement in practice
  • "Provide clarity and certainty for consumers, citizens and internet companies
  • "Recognise the distinction between public and private communication"

Many leading figures in the UK technology sector fear a lack of expertise in government, and hardening public sentiment against the excesses of the internet, will push the Online Harms paper in a more radical direction.

Three of the key areas of debate are the definition of online harm, the lack of liability for third-party content, and the difference between public and private communication.

The companies insist that the government should recognise the distinction between clearly illegal content and content which is harmful but not illegal. If these leading tech companies believe the government's definition of harm is too broad, their insistence on a distinction between illegal and harmful content may be superseded by another set of problems.

The companies also defend the principle that platforms such as YouTube permit users to post and share information without fear that those platforms will be held liable for third-party content. Another area which will be of particular interest to the Home Office is the insistence that care should be taken to avoid regulation encroaching into the surveillance of private communications.

 

 

Offsite Article: Why an internet regulator is a bad idea...


Link Here 20th March 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
We should be stripping away curbs on speech -- not adding more. By Andrew Tettenborn

See article from spiked-online.com

 

 

Ensuring that the UK is the most censored place in the western world to be online...

Government introduces an enormous package of internet censorship proposals


Link Here 8th April 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
  The Government writes:

In the first online safety laws of their kind, social media companies and tech firms will be legally required to protect their users and face tough penalties if they do not comply.

As part of the Online Harms White Paper, a joint proposal from the Department for Digital, Culture, Media and Sport and Home Office, a new independent regulator will be introduced to ensure companies meet their responsibilities.

This will include a mandatory 'duty of care', which will require companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services. The regulator will have effective enforcement tools, and we are consulting on powers to issue substantial fines, block access to sites and potentially to impose liability on individual members of senior management.

A range of harms will be tackled as part of the Online Harms White Paper, including inciting violence and violent content, encouraging suicide, disinformation, cyber bullying and children accessing inappropriate material.

There will be stringent requirements for companies to take even tougher action to ensure they tackle terrorist and child sexual exploitation and abuse content.

The new proposed laws will apply to any company that allows users to share or discover user generated content or interact with each other online. This means a wide range of companies of all sizes are in scope, including social media platforms, file hosting sites, public discussion forums, messaging services, and search engines.

A regulator will be appointed to enforce the new framework. The Government is now consulting on whether the regulator should be a new or existing body. The regulator will be funded by industry in the medium term, and the Government is exploring options such as an industry levy to put it on a sustainable footing.

A 12 week consultation on the proposals has also been launched today. Once this concludes we will then set out the action we will take in developing our final proposals for legislation.

Tough new measures set out in the White Paper include:

  • A new statutory 'duty of care' to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.

  • Further stringent requirements on tech companies to ensure child abuse and terrorist content is not disseminated online.

  • Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address this.

  • Making companies respond to users' complaints, and act to address them quickly.

  • Codes of practice, issued by the regulator, which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

  • A new "Safety by Design" framework to help companies incorporate online safety features in new apps and platforms from the start.

  • A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, including catfishing, grooming and extremism.

The UK remains committed to a free, open and secure Internet. The regulator will have a legal duty to pay due regard to innovation, and to protect users' rights online, being particularly mindful to not infringe privacy and freedom of expression.

Recognising that the Internet can be a tremendous force for good, and that technology will be an integral part of any solution, the new plans have been designed to promote a culture of continuous improvement among companies. The new regime will ensure that online firms are incentivised to develop and share new technological solutions, like Google's "Family Link" and Apple's Screen Time app, rather than just complying with minimum requirements. Government has balanced the clear need for tough regulation with its ambition for the UK to be the best place in the world to start and grow a digital business, and the new regulatory framework will provide strong protection for our citizens while driving innovation by not placing an impossible burden on smaller companies.

 

 

Updated Comments: The UK Government harms the British people...

The press and campaigners call out the Online Harms white paper for what it is...censorship


Link Here 12th April 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
The press has generally given the new internet censorship proposals a justifiably negative reception:

The Guardian

See Internet crackdown raises fears for free speech in Britain from theguardian.com

Critics of the government's flagship internet regulation policy are warning it could lead to a North Korean-style censorship regime, where regulators decide which websites Britons are allowed to visit, because of how broad the proposals are.

The Daily Mail

See New internet regulation laws will lead to widespread censorship from dailymail.co.uk

Critics brand new internet regulation laws the most draconian crackdown in the Western democratic world as they warn it could threaten the freedom of speech of millions of Britons

The Independent

See UK's new internet plans could bring state censorship of the internet, campaigners warn from independent.co.uk

The government's new proposals to try and protect people from harm on the internet could actually create a huge censorship operation, campaigners have warned.

Index on Censorship

See Online harms proposals pose serious risks to freedom of expression from indexoncensorship.org

Index on Censorship has raised strong concerns about the government's focus on tackling unlawful and harmful online content, particularly since the publication of the Internet Safety Strategy Green Paper in 2017. In October 2018, Index published a joint statement with Global Partners Digital and Open Rights Group noting that any proposals that regulate content are likely to have a significant impact on the enjoyment and exercise of human rights online, particularly freedom of expression.

We have also met with officials from the Department for Digital, Culture, Media and Sport, as well as from the Home Office, to raise our thoughts and concerns.

With the publication of the Online Harms White Paper, we would like to reiterate our earlier points.

While we recognise the government's desire to tackle unlawful content online, the proposals mooted in the white paper -- including a new duty of care on social media platforms, a regulatory body, and even the fining and banning of social media platforms as a sanction -- pose serious risks to freedom of expression online.

These risks could put the United Kingdom in breach of its obligations to respect and promote the right to freedom of expression and information as set out in Article 19 of the International Covenant on Civil and Political Rights and Article 10 of the European Convention on Human Rights, amongst other international treaties.

Social media platforms are a key means for tens of millions of individuals in the United Kingdom to search for, receive, share and impart information, ideas and opinions. The scope of the right to freedom of expression includes speech which may be offensive, shocking or disturbing. The proposed responses for tackling online safety may lead to disproportionate amounts of legal speech being curtailed, undermining the right to freedom of expression.

In particular, we raise the following concerns related to the white paper:

  • Lack of evidence base

The wide range of different harms which the government is seeking to tackle in this policy process require different, tailored responses. Measures proposed must be underpinned by strong evidence, both of the likely scale of the harm and the measures' likely effectiveness. The evidence which formed the base of the Internet Safety Strategy Green Paper was highly variable in its quality. Any legislative or regulatory measures should be supported by clear and unambiguous evidence of their need and effectiveness.

  • Duty of care concerns/ problems with 'harm' definition

Index is concerned at the use of a duty of care regulatory approach. Although social media has often been compared to the public square, the duty of care model is not an exact fit because it would introduce regulation -- and restriction -- of speech between individuals based on criteria that are far broader than current law. A failure to accurately define "harmful" content risks incorporating legal speech, including political expression, expressions of religious views, expressions of sexuality and gender, and expression advocating on behalf of minority groups.

  • Risks in linking liability/sanctions to platforms over third party content

While well-meaning, proposals such as these contain serious risks, such as requiring or incentivising wide-sweeping removal of lawful and innocuous content. The imposition of time limits for removal, heavy sanctions for non-compliance or incentives to use automated content moderation processes only heighten this risk, as has been evidenced by the approach taken in Germany via its Network Enforcement Act (or NetzDG), where there is evidence of the over-removal of lawful content.

  • Lack of sufficient protections for freedom of expression.

The obligation to protect users' rights online that is included in the white paper gives insufficient weight to freedom of expression. A much clearer obligation to protect freedom of expression should guide development of future regulation.

In recognition of the UK's commitment to the multistakeholder model of internet governance, we hope all relevant stakeholders, including civil society experts on digital rights and freedom of expression, will be fully engaged throughout the development of the Online Harms bill.

Privacy International

See PI's take on the UK government's new proposal to tackle "online harms" from privacyinternational.org

PI welcomes the UK government's commitment to investigating and holding companies to account. When it comes to regulating the internet, however, we must move with care. Failure to do so will introduce, rather than reduce, "online harms". A 12-week consultation on the proposals has also been launched today. PI plans to file a submission to the consultation as it relates to our work. Given the breadth of the proposals, PI calls on others to respond to the consultation as well.

Here are our initial suggestions:

  • proceed with care: proposals of regulation of content on digital media platforms should be very carefully evaluated, given the high risks of negative impacts on expression, privacy and other human rights. This is a very complex challenge and we support the need for broad consultation before any legislation is put forward in this area.

  • do not lose sight of how data exploitation facilitates the harms identified in the report and ensure any new regulator works closely with others working to tackle these issues.

  • assess carefully the delegation of sole responsibility to companies as adjudicators of content. This would empower corporate judgment over content, which would have implications for human rights, particularly freedom of expression and privacy.

  • require that judicial or other independent authorities, rather than government agencies, are the final arbiters of decisions regarding what is posted online and enforce such decisions in a manner that is consistent with human rights norms.

  • assess the privacy implications of any demand for "proactive" monitoring of content in digital media platforms.

  • ensure that any requirement or expectation of deploying automated decision making/AI is in full compliance with existing human rights and data protection standards (which, for example, prohibit, with limited exceptions, relying on solely automated decisions, including profiling, when they significantly affect individuals).

  • ensure that company transparency reports include information related to how the content was targeted at users.

  • require companies to provide efficient reporting tools in multiple languages, to report on action taken with regard to content posted online. Reporting tools should be accessible, user-friendly, and easy to find. There should be full transparency regarding the complaint and redress mechanisms available and opportunities for civil society to take action.

Offsite Comment: Ridiculous Plan

10th April 2019. See article from techdirt.com

UK Now Proposes Ridiculous Plan To Fine Internet Companies For Vaguely Defined Harmful Content

Last week Australia rushed through a ridiculous bill to fine internet companies if they happen to host any abhorrent content. It appears the UK took one look at that nonsense and decided it wanted some too. On Monday it released a white paper calling for massive fines for internet companies for allowing any sort of online harms. To call the plan nonsense is being way too harsh to nonsense.

The plan would result in massive, widespread, totally unnecessary censorship solely for the sake of pretending to do something about the fact that some people sometimes do not so nice things online. And it will place all of the blame on the internet companies for the (vaguely defined) not so nice things that those companies' users might do online.

Read the full article from techdirt.com

Offsite Comment: Sajid Javid's new internet rules will have a chilling effect on free speech

11th April 2019. See article from spectator.co.uk by Toby Young

How can the government prohibit comments that might cause harm without defining what harm is?

Offsite Comment: Plain speaking from Chief Censor Sajid Javid

11th April 2019. See tweet from twitter.com

Letter to the Guardian: Online Harms white paper would make Chinese censors proud

11th April 2019. See article from theguardian.com

We agree with your characterisation of the online harms white paper as a flawed attempt to deal with serious problems (Regulating the internet demands clear thought about hard problems, Editorial, 9 April). However, we would draw your attention to several fundamental problems with the proposal which could be disastrous if it proceeds in its current form.

Firstly, the white paper proposes to regulate literally the entire internet, and censor anything non-compliant. This extends to blogs, file services, hosting platforms, cloud computing; nothing is out of scope.

Secondly, there are a number of undefined harms with no sense of scope or evidence thresholds to establish a need for action. The lawful speech of millions of people would be monitored, regulated and censored.

The result is an approach that would make China's state censors proud. It would be very likely to face legal challenge. It would give the UK the widest and most prolific internet censorship in an apparently functional democracy. A fundamental rethink is needed.

Antonia Byatt Director, English PEN,
Silkie Carlo Big Brother Watch
Thomas Hughes Executive director, Article 19
Jim Killock Executive director, Open Rights Group
Joy Hyvarinen Head of advocacy, Index on Censorship

Comment: The DCMS Online Harms Strategy must design in fundamental rights

12th April 2019. See article from openrightsgroup.org

Increasingly over the past year, DCMS has become fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to certain online content and make Facebook in particular liable for the uglier and darker side of its user-generated material.

DCMS talks a lot about the 'harm' that social media causes. But its proposals fail to explain how harm to free expression would be avoided.

On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the mechanisms to deliver this protection and the issues at play are not explored in any detail at all.

In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such measures in practice is open to debate, terms and conditions, active moderation of posts and algorithmic choices about what content is pushed or downgraded are all geared towards ousting illegal activity and creating open and welcoming shared spaces. DCMS hasn't in the White Paper elaborated on what its proposed duty would entail. If it's drawn narrowly so that it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it's drawn widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.

If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread prior restraint. Platforms can't always know in advance the real-world harm that online content might cause, nor can they accurately predict what people will say or do when on their platform. The only way to avoid liability is to impose wide-sweeping upload filters. Scaled implementation of this relies on automated decision-making and algorithms, which risks even greater speech restrictions given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.

DCMS's policy is underpinned by societally-positive intentions, but in its drive to make the internet "safe", the government seems not to recognise that ultimately its proposals don't regulate social media companies, they regulate social media users. The duty of care is ostensibly aimed at shielding children from danger and harm but it will in practice bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.

Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.

The duty of care seems to be broadly about whether systemic interventions reduce overall "risk". But must the risk always be to an identifiable individual, or can it be broader - to identifiable vulnerable groups? To society as a whole? What evidence of harm will be required before platforms should intervene? These are all questions that presently remain unanswered.

DCMS's approach appears to be that it will be up to the regulator to answer these questions. But whilst a sensible regulator could take a minimalist view of the extent to which commercial decisions made by platforms should be interfered with, allowing government to distance itself from taking full responsibility over the fine detailing of this proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It enables the government to opt not to create a transparent, judicially reviewable legislative framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how that scheme will affect UK citizens' free speech, both in the immediate future and for years to come.

How the government decides to legislate and regulate in this instance will set a global norm.

The UK government is clearly keen to lead international efforts to regulate online content. It knows that if the outcome of the duty of care is to change the way social media platforms work that will apply worldwide. But to be a global leader, DCMS needs to stop basing policy on isolated issues and anecdotes and engage with a broader conversation around how we as society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and regulatory model that emerge from this process as a blueprint for more widespread internet censorship.

The House of Lords report on the future of the internet, published in early March 2019, set out ten principles it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper introduces offers a positive opportunity to collectively reflect, across industry, civil society, academia and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this process to emphasise its support for the fundamental right to freedom of expression - and in a way that goes beyond mere expression of principle - this would also reverberate around the world, particularly at a time when press and journalistic freedom is under attack.

The White Paper expresses a clear desire for tech companies to "design in safety". As the process of consultation now begins, we call on DCMS to "design in fundamental rights". Freedom of expression is itself a framework, and must not be lightly glossed over. We welcome the opportunity to engage with DCMS further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for everyone.

 

 

More like China, Russia or North Korea...

Tory MPs line up to criticise their own government's totalitarian-style internet censorship proposals


Link Here 14th April 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media

Ministers are facing a growing and deserved backlash against draconian new web laws which will lead to totalitarian-style censorship.

The stated aim of the Online Harms White Paper is to target offensive material such as terrorists' beheading videos. But under the document's provisions, the UK internet censor would have complete discretion to decide what is harmful, hateful or bullying -- potentially including coverage of contentious issues such as transgender rights.

After MPs lined up to demand a rethink, Downing Street has put pressure on Culture Secretary Jeremy Wright to narrow the definition of harm in order to exclude typical editorial content.

MPs have been led by Jacob Rees-Mogg, who said last night that while it was obviously a worthwhile aim to rid the web of the evils of terrorist propaganda and child pornography, it should not be at the expense of crippling a free Press and gagging healthy public expression. He added that the regulator could be used as a tool of repression by a future Jeremy Corbyn-led government, saying:

Sadly, the Online Harms White Paper appears to give the Home Secretary of the day the power to decide the rules as to which content is considered palatable. Who is to say that less scrupulous governments in the future would not abuse this new power?

I fear this could have the unintended consequence of reputable newspaper websites being subjected to quasi-state control. British newspapers' freedom to hold authority to account is an essential bulwark of our democracy.

We must not now allow what amounts to a Leveson-style state-controlled regulator for the Press by the back door.

He was backed by Charles Walker, vice-chairman of the Tory Party's powerful backbench 1922 Committee, who said:

We need to protect people from the well-documented evils of the internet -- not in order to suppress views or opinions to which they might object.

In last week's Mail on Sunday, former Culture Secretary John Whittingdale warned that the legislation was more usually associated with autocratic regimes including those in China, Russia or North Korea.

Tory MP Philip Davies joined the criticism last night, saying:

Of course people need to be protected from the worst excesses of what takes place online. But equally, free speech in a free country is very, very important too. It's vital we strike the right balance. While I have every confidence that Sajid Javid as Home Secretary would strike that balance, can I have the same confidence that a future Marxist government would not abuse the proposed new powers?

And Tory MP Martin Vickers added:

While we must take action to curb the unregulated wild west of the internet, we must not introduce state control of the Press as a result.

 

 

Offsite Article: Users Behaving Badly...


Link Here 20th April 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
An interesting look at the government's Online Harms white paper proposing extensive internet censorship for the UK

See article from cyberleagle.com

 

 

Extract: Lords of Censorship...

Lords debate about Online Harms sees peers line up as supporters of internet censorship and each adds their own little pet suggestions for even more censorship


Link Here 1st May 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
The House of Lords saw a pre-legislation debate about the government's Online Harms white paper. Peers from all parties queued up to add their praise for internet censorship. And don't even think that maybe the LibDems might be a little more appreciative of free speech and a little less in favour of state censorship. Don't dream! All the lords who spoke were gagging for it...censorship, that is.

And support for the internet censorship in the white paper wasn't enough. Many of the speakers presumed to add on their own pet ideas for even more censorship.

I did spot one piece of information that was new to me. It seems that the IWF have extended their remit to include cartoon child porn as material they work against.

Elspeth Howe said during the debate:

I am very pleased that, since the debates at the end of last year, the Internet Watch Foundation has adopted a new non-photographic images policy and URL block list, so that websites that contain these images can be blocked by IWF members. It allows for network blocking of non-photographic images to be applied to filtering solutions, and it can prevent pages containing non-photographic images being shown in online search engine results. In 2017, 3,471 reports of alleged non-photographic images of child sexual abuse were made to the IWF; the figure for 2018 was double that, at 7,091 alleged reports. The new IWF policy was introduced only in February, so it is early days to see whether this will be a success. The IWF is unable to remove content unless that content originates in the UK, which of course is rare. The IWF offers this list on a voluntary basis, not a statutory basis as would occur under the Digital Economy Act. Can the Minister please keep the House informed about the success of the new policy and, if necessary, address the loopholes in the legislative proposal arising from this White Paper?

Anyway, read the full debate at hansard.parliament.uk

 

 

Offsite Article: Careless lawmaking...


Link Here 6th May 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
Detailed legal analysis of Online Harms white paper does not impress

See article from cyberleagle.com

 

 

Updated: Tech companies criticise the government's Online Harms white paper...

The harms will be that British tech businesses will be destroyed so that politicians can look good for 'protecting the children'


Link Here 2nd June 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
A scathing new report, seen by City A.M. and authored by the Internet Association (IA), which represents online firms including Google, Facebook and Twitter, has outlined a string of major concerns with plans laid out in the government Online Harms white paper last month.

The Online Harms white paper outlines a large number of internet censorship proposals hiding under the vague terminology of 'duties of care'.

Under the proposals, social media sites could face hefty fines or even a ban if they fail to tackle online harms such as inappropriate age content, insults, harassment, terrorist content and of course 'fake news'.

But the IA has branded the measures unclear and warned they could damage the UK's booming tech sector, with smaller businesses disproportionately affected. IA executive director Daniel Dyball said:

Internet companies share the ambition to make the UK one of the safest places in the world to be online, but in its current form the online harms white paper will not deliver that.

The proposals present real risks and challenges to the thriving British tech sector, and will not solve the problems identified.

The IA slammed the white paper over its use of the term duty of care, which it said would create legal uncertainty and be unmanageable in practice.

The lobby group also called for a more precise definition of which online services would be covered by regulation and greater clarity over what constitutes an online harm. In addition, the IA said the proposed measures could raise serious unintended consequences for freedom of expression.

And while most internet users favour tighter rules in some areas, particularly social media, people also recognise the importance of protecting free speech -- which is one of the internet's great strengths.

Update: Main points

2nd June 2019. See article from uk.internetassociation.org

The Internet Association paper sets out five key concerns held by internet companies:

  • "Duty of Care" has a specific legal meaning that does not align with the obligations proposed in the White Paper, creating legal uncertainty, and would be unmanageable;
  • The scope of the services covered by regulation needs to be defined differently, and more closely related to the harms to be addressed;
  • The category of "harms with a less clear definition" raises significant questions and concerns about clarity and democratic process;
  • The proposed code of practice obligations raise potentially dangerous unintended consequences for freedom of expression;
  • The proposed measures will damage the UK digital sector, especially start-ups, micro-businesses and small- and medium-sized enterprises (SMEs), and slow innovation.

 

 

Offsite Article: Christian Concerns...


Link Here 15th June 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media
Who'd have thought that a Christian Campaign Group would be calling on its members to criticise the government's internet censorship bill in a consultation

See article from christianconcern.com

 

 

UK Internet Regulation Part II...

Open Rights Group reports on how the Online Harms Bill will harm free speech, justice and liberty


Link Here 18th June 2019
Full story: Online Harms White Paper...UK Government seeks to censor social media

This report follows our research into current Internet content regulation efforts, which found a lack of accountable, balanced and independent procedures governing content removal, both formally and informally by the state.

There is a legacy of Internet regulation in the UK that does not comply with due process, fairness and fundamental rights requirements. This includes: bulk domain suspensions by Nominet at police request without prior authorisation; the lack of an independent legal authorisation process for Internet Watch Foundation (IWF) blocking at Internet Service Providers (ISPs) and in the future by the British Board of Film Classification (BBFC), as well as for Counter-Terrorism Internet Referral Unit (CTIRU) notifications to platforms of illegal content for takedown. These were detailed in our previous report.

The UK government now proposes new controls on Internet content, claiming that it wants to ensure the same rules online as offline. It says it wants harmful content removed, while respecting human rights and protecting free expression.

Yet proposals in the DCMS/Home Office White Paper on Online Harms will create incentives for Internet platforms such as Google, Twitter and Facebook to remove content without legal processes. This is not the same rules online as offline. It instead implies a privatisation of justice online, with the assumption that corporate policing must replace public justice for reasons of convenience. This goes against the advice of human rights standards that government has itself agreed to and against the advice of UN Special Rapporteurs.

The government as yet has not proposed any means to define the harms it seeks to address, nor identified any objective evidence base to show what in fact needs to be addressed. It instead merely states that various harms exist in society. The harms it lists are often vague and general. The types of content specified may be harmful in certain circumstances, but even with an assumption that some content is genuinely harmful, there remains no attempt to show how any restriction on that content might work in law. Instead, it appears that platforms will be expected to remove swathes of legal-but-unwanted content, with an as-yet-unidentified regulator given a broad duty to decide if a risk of harm exists. Legal action would follow non-compliance by a platform. The result is the state proposing censorship and sanctions for actors publishing material that it is legal to publish.

 

