The UK government is preparing to establish a new internet censor that would make tech firms liable for content published on their platforms and have the power to sanction companies that fail to take down illegal material and hate speech within hours.
Under legislation being drafted by the Home Office and the Department for Digital, Culture, Media and Sport (DCMS) due to be announced this winter, a new censorship framework for online social harms would be created.
BuzzFeed News has obtained
details of the proposals, which would see the establishment of an internet censor similar to Ofcom.
Home secretary Sajid Javid and culture secretary Jeremy Wright are considering the introduction of a mandatory code of practice for social media
platforms, and strict new rules such as takedown times that would force websites to remove illegal hate speech within a set timeframe or face penalties. Ministers are also looking at implementing age verification for users of Facebook, Twitter, and Instagram.
The new proposals are still in the development stage and are due to be put out for consultation later this year. The new censor would also develop new regulations controlling non-illegal content and online behaviour. The rules for what
constitutes non-illegal content will be the subject of what is likely to be a hotly debated consultation.
BuzzFeed News has also been told ministers are looking at creating a second new censor for online advertising. Its powers would include a
crackdown on online advertisements for food and soft drink products that are high in salt, fat, or sugar.
BuzzFeed News understands concerns have been raised in Whitehall that the regulation of non-illegal content will spark opposition from free
speech campaigners and MPs. There are also fears internally that some of the measures being considered, including blocking websites that do not adhere to the new regulations, are so draconian that they will generate considerable opposition.
A government spokesperson confirmed to BuzzFeed News that the plans would be unveiled later this year.
Ofcom has published a prospectus angling for a role as the UK internet censor. It writes:
Ofcom has published a discussion document examining the area of harmful online content.
In the UK and around
the world, a debate is underway about whether regulation is needed to address a range of problems that originate online, affecting people, businesses and markets.
The discussion document is intended as a contribution to that
debate, drawing on Ofcom's experience of regulating the UK's communications sector, and broadcasting in particular. It draws out the key lessons from the regulation of content standards -- for broadcast and on-demand video services -- and the insights
that these might provide to policy makers into the principles that could underpin any new models for addressing harmful online content.
The UK Government intends to legislate to improve online safety, and to publish a White Paper
this winter. Any new legislation is a matter for Government and Parliament, and Ofcom has no view about the institutional arrangements that might follow.
Alongside the discussion paper, Ofcom has published joint research with the
Information Commissioner's Office on people's perception, understanding and experience of online harm. The survey of 1,686 adult internet users finds that 79% have concerns about aspects of going online, and 45% have experienced some form of online harm.
The study shows that protection of children is a primary concern, and reveals mixed levels of understanding around what types of media are regulated.
The sales pitch is more or less that Ofcom's TV censorship has 'benefited' viewers
so would be a good basis for internet censorship.
Ofcom particularly makes a point of pushing the results of a survey of internet users and their 'concerns'. The survey is very dubious and ends up suggesting that 79% of users have concerns about going online.
And maybe this claim is actually true. After all, the Melon Farmers are amongst the 79% who have concerns about going online. The Melon Farmers are concerned that:
There are vast amounts of scams and viruses waiting to be filtered out from the Melon Farmers' email inbox every day.
The authorities never seem interested in doing anything whatsoever about protecting people from being scammed out of their life
savings. Have you EVER heard of the police investigating a phishing scam?
On the other hand the police devote vast resources to prosecuting internet insults and jokes, whilst never investigating scams that see old folks lose their life savings.
So yes, there is concern about the internet. BUT, it would be a lie to infer that these concerns mean support for Ofcom's proposals to censor websites along the lines of TV.
In fact looking at the figures, some of the larger categories of 'concerns' are more about fears of real crime rather than concerns about issues like fake news.
Interestingly Ofcom has published how the 'concerns' were hyped up by prompting the survey respondents a bit. For instance, Ofcom reports that 12% of internet
users say they are 'concerned' about fake news without being prompted. With a little prompting by the interviewer, the number of people reporting being concerned about fake news magically increases to 29%.
It also has to be noted that there are NO reports in the survey of internet users concerned about a lack of balancing opinions in news, a lack of algorithm transparency, a lack of trust ratings for news sources, or indeed for most of the other suggestions that Ofcom addresses.
I've seen more
fake inferences in the Ofcom discussion document than I have seen fake news items on the internet in the last ten years.
Parliament needs to stop creating piecemeal laws to address content online -- or which make new forms of speech illegal.
Index is very concerned about the plethora of law-making initiatives related to online communications,
the most recent being MP Lucy Powell's online forums regulation bill, which targets hate crime and
"secret" Facebook groups.
Powell's bill purports to "tackle online hate, fake news and radicalisation" by making social media companies liable for what is published in large, closed online forums -- and is the
latest in a series of poorly drafted attempts by parliamentarians to address communications online.
If only Powell's proposal were the worst piece of legislation parliament will consider this autumn. Yesterday, MPs debated the
Counter-Terrorism and Border Security Bill, which would make it a crime to view information online
that is "likely to be useful" to a terrorist. No terrorist intent would be required -- but you would risk up to 15 years in prison if found guilty. This would make the work of journalists and academics very difficult or impossible.
Attempts to tackle online content are coming from all corners with little coordination -- although a factor common to all these proposals is that they utterly fail to safeguard freedom of expression.
This summer, the Commons Select Committee on Culture, Media and Sport issued a preliminary report on tackling fake news and the
government launched a consultation on a possible new law to prevent "intimidation" of those standing for election.
In addition, the government is expected to publish later this year a white paper on internet safety aimed at making sure "the UK is the safest place in the world to be online." The Law Commission, already tasked with publishing a
report on offensive online communications, was last week asked to review whether misogyny should be considered a hate crime.
Jodie Ginsberg, CEO of Index, said:
"We're having to play
whack-a-mole at the moment to prevent poorly drawn laws inadvertently stifling freedom of expression, especially online. The scattergun approach is no way to deal with concerns about online communications. Instead of paying lip service to freedom of
expression as a British value, it needs to be front and centre when developing policies".
"We already have laws to deal with harassment, incitement to violence, and even incitement to hatred. International experience
shows us that even well-intentioned laws meant to tackle hateful views online often end up hurting the minority groups they are meant to protect, stifle public debate, and limit the public's ability to hold the powerful to account."
Niche porn producers Pandora Blake and Misha Mayfair, campaigning lawyer Myles Jackman, and Backlash are campaigning to back a legal challenge to the upcoming internet porn censorship regime in the UK. They write on a new crowdfunding page:
We are mounting a legal challenge.
Do you lock your door when you watch porn -- or do you publish a notice
in the paper? The new UK age verification law means you may soon have to upload a proof of age to visit adult sites. This would connect your legal identity to a database of all your adult browsing. Join us to prevent the damage to your privacy.
The UK Government is bringing in age verification for adults who want to view adult content online, yet has failed to provide privacy and security obligations to ensure your private information is securely protected.
The law does not currently limit age verification providers to holding only the data needed to verify your age. Hence, other identifying data about you could include anything from your passport information to your
credit card details, up to your full search history information. This is highly sensitive data.
What are the Privacy Risks?
Data Misuse - Since age verification providers are legally permitted to
collect this information, what is to stop them from increasing revenue through targeting advertising at you, or even selling your personal data?
Data Breaches - No database is perfectly secure, despite good intentions. The leaking
or hacking of your sensitive personal information could be truly devastating. The Ashley Madison hack led to suicides. Don't let the Government allow your private sexual preferences be leaked into the public domain.
What are we
asking money for?
We're asking you to help us crowdfund legal fees so we can challenge the new age verification rules under the Digital Economy Act 2017. We're asking for £10,000 to cover the cost of initial legal advice,
since it's a complicated area of law. Ultimately, we'd like to raise even more money, so we can send a message to Government that your personal privacy is of paramount importance.
Lucy Powell writes in the Guardian, (presumably intended as an open comment):
Closed forums on Facebook allow hateful views to spread unchallenged among terrifyingly large groups. My bill would change that
You may wonder what could bring Nicky Morgan, Anna Soubry, David Lammy, Jacob Rees-Mogg and other senior MPs from across parliament together at the moment. Yet they are all sponsoring a bill I'm proposing that will tackle online hate,
fake news and radicalisation. It's because, day-in day-out, whatever side of an argument we are on, we see the pervasive impact of abuse and hate online -- and increasingly offline, too.
Social media has given extremists a new
tool with which to recruit and radicalise. It is something we are frighteningly unequipped to deal with.
Worryingly, it is on Facebook, which most of us in Britain use, where people are being exposed to extremist material. Instead
of small meetings in pubs or obscure websites in the darkest corners of the internet, our favourite social media site is increasingly where hate is cultivated.
Online echo chambers are normalising and allowing extremist views to go viral unchallenged. These views are spread as the cheap thrill of racking up Facebook likes drives behaviour and reinforces a binary worldview. Some people are being groomed unwittingly as unacceptable language is treated as the norm. Others have a more sinister motive.
While in the real world, alternative views would be challenged by voices of decency in the classroom, staffroom, or around the dining-room table, there are no societal norms in the dark crevices of the online world. The impact of
these bubbles of hate can be seen, in extreme cases, in terror attacks from radicalised individuals. But we can also see it in the rise of the far right, with Tommy Robinson supporters rampaging through the streets this summer, or in increasing
Islamophobia and antisemitism.
Through Facebook groups (essentially forums), extremists can build large audiences. There are many examples of groups that feature anti-Muslim or antisemitic content daily, in an environment which,
because critics are removed from the groups, normalises these hateful views. If you see racist images, videos and articles in your feed but not the opposing argument, you might begin to think those views are acceptable and even correct. If you already
agree with them, you might be motivated to act.
This is the thinking behind Russia's interference in the 2016 US presidential election. The Russian Internet Research Agency set up Facebook groups, amassed hundreds of thousands of
members, and used them to spread hate and fake news, organise rallies, and attack Hillary Clinton. Most of its output was designed to stoke the country's racial tensions.
It's not only racism that is finding a home on Facebook.
Marines United was a secret group of 30,000 current and former servicemen in the British armed forces and US Marines. Members posted nude photos of their fellow servicewomen, taken in secret. A whistleblower described the group as "revenge porn, creepy stalker-like photos taken of girls in public, talk about rape". It is terrifying that the group grew so large before anyone spoke out, and that Facebook did nothing until someone informed the media.
Because these closed forums can
be given a secret setting, they can be hidden away from everyone but their members. This locks out the police, intelligence services and charities that could otherwise engage with the groups and correct disinformation. This could be particularly crucial
with groups where parents are told not to vaccinate their children against diseases.
Despite having the resources to solve the problem, Facebook lacks the will.
In fact, at times it actively obstructs those who wish to tackle hate and disinformation. Of course, it is not just Facebook, and the proliferation of online platforms and forums means that the law has been much too slow to catch up with our digital world.
We should educate people to be more resilient and better able to spot fake news and recognise hate, but we must also ensure there are much stronger protections to spread decency and police our online communities. The
responsibility to regulate these social media platforms falls on the government. It is past time to act.
That's why I am introducing a bill in parliament which will do just that. By establishing legal accountability
for what's published in large online forums, I believe we can force those who run these echo chambers to stamp out the evil that is currently so prominent. Social media can be a fantastic way of bringing people together -- which is precisely why we need
to prevent it being hijacked by those who instead wish to divide.
Pornhub's age verification system AgeID has announced an exclusive partnership with OCL and its Portes solution, which provides anonymous face-to-face age verification: retailers confirm the age of customers who buy a card enabling porn access.
The similar AVSecure scheme allows over-25s to buy the access card without showing any ID, but may require unrecorded ID from those appearing to be under 25.
According to the company, the PortesCard is available to purchase from selected high
street retailers and any of the U.K.'s 29,000 PayPoint outlets as a voucher. Each PortesCard will cost £4.99 for use on a single device, or £8.99 for use across multiple devices. This compares with £10 for the AVSecure card.
Once a card or voucher is purchased, its unique validation code must be activated via the Portes app within 24 hours or it expires. Once the user has been verified they will automatically be granted access to all adult sites using AgeID. Maybe this 24 hour limit is an attempt to restrict secondary sales of porn access codes by ensuring that they get tied to devices almost immediately. It all sounds a little hasslesome.
As an additional layer of protection, parents can quickly and simply block
access on their children's devices to sites using Portes, so PortesCards cannot be associated with AgeID.
But note that an anonymously bought card is not quite a 100% safe solution. One has to consider whether, if the authorities get hold of a device, they can then see a complete history of all websites accessed using the app or access code. One also has to consider whether someone can remotely correlate an 'anonymous' access code with all the tracking cookies holding one's ID.
Elspeth Howe, a member of the House of Lords, has written an article in the Telegraph outlining her case that the remit for the BBFC to censor internet porn sites should be widened to include a wider range of material that she does not like.
This seems to tally with other recent news that the CPS is reconsidering its views on what pornographic content should be banned from publication in Britain.
Surely these debates are related to the detailed guidelines to be used by the BBFC when either
banning porn sites, or else requiring them to implement strict age verification for users. It probably explains why the Telegraph recently reported that the publication of the final guidelines has been delayed until at least the autumn.
Categories of Porn
For clarity the categories of porn being discussed are as follows:
Beyond R18 (proposal by CPS)
Cartoon child porn (proposal by Howe)
Softcore porn rated 18 under BBFC guidelines
- Will be allowed subject to strict age verification
Vanilla hardcore porn rated R18 under current BBFC guidelines
- Will be allowed subject to strict age verification
Beyond R18 hardcore porn that includes material historically banned by the CPS claiming obscenity, ie fisting, golden showers, BDSM, female ejaculation, and famously from a recent anti censorship campaign, face sitting/breath play.
Such material is currently cut from R18s.
- Such content will be allowed under the current Digital Economy Act for online porn sites
- This category is currently banned for offline sales in the UK, but the CPS has just opened a public consultation on its proposal to legalise such content, as long as it is consensual. Presumably this is related to the government's overarching policy: What's illegal offline, is illegal online.
Extreme Porn as banned from
possession in the UK under the Dangerous Pictures Act. This content covers bestiality, necrophilia, realistic violence likely to result in serious injury, and realistic rape.
- This content is illegal to possess in the UK and any websites with such
content will be banned by the BBFC regardless of age verification implementation
Cartoon Porn depicting under 18s
- This content is banned from possession in the UK but will be allowed online subject to age verification
Photographic child porn
- This is already totally illegal in the UK on all media. Any foreign websites featuring such content are probably already being blocked by ISPs using lists maintained by the IWF. The BBFC
will ban anything it spots that may have slipped through the net.
'What's illegal offline, is illegal online'
Elspeth Howe writes:
I very much welcome part three of the Digital Economy Act 2017 which requires robust age verification checks to protect
children from accessing pornography. The Government deserves congratulations for bringing forward this seminal provision, due to come into effect later this year.
The Government's achievement, however, has been sadly undermined by
amendments that it introduced in the House of Lords, about which there has been precious little public debate. I very much hope that polling that I am placing in the public domain today will facilitate a rethink.
When the Digital
Economy Bill was introduced in the Lords, it proposed that legal pornography should be placed behind robust age verification checks. Not surprisingly, no accommodation for either adults or children was made for illegal pornography, which encompasses
violent pornography and child sex abuse images.
As the Bill passed through the Lords, however, pressure was put on the Government to allow adults to access violent pornography, after going through age-verification checks, which in
other contexts it would be illegal to supply. In the end the Government bowed to this pressure and introduced amendments so that only one category of illegal pornography will not be accessible by adults.
[When Howe mentions violent
pornography she is talking about the Beyond R18 category, not the Extreme Porn category, which will be the one category mentioned that will not be accessible to adults].
The trouble with the idea of banning Beyond R18 pornography is that Britain is out of step with the rest of the world. This category includes content that is ubiquitous on most of the major porn websites in the world. Banning so much content would simply be impractical. So rather than banning all foreign porn, the government opted to remove the prohibition of Beyond R18 porn from the original bill.
Another category that has not hitherto come to attention is the category of cartoon porn that depicts under 18s. The original law that bans
possession of this content seemed most concerned about material that was near photographic, and indeed may have been processed from real photos. However the law is of most relevance in practical terms when it covers comedic Simpsons style porn, or else
Japanese anime often featuring youthful, but vaguely drawn cartoon characters in sexual scenes.
Again there would be practical problems in banning foreign websites from carrying such content. All the major tube sites seem to have a section devoted to Hentai anime porn which edges into the category.
In July 2017, Howe introduced a bill that would put Beyond R18 and Cartoon Porn back into the list of prohibited material in the Digital Economy Act. The bill is titled the Digital
Economy Act 2017 (Amendment) (Definition of Extreme Pornography) Bill and is still open, but further consideration in Parliament has stalled, presumably as the Government itself is currently addressing these issues.
The bill adds to the list of prohibitions any content that has been refused a BBFC certificate or would be refused a certificate if it were to be submitted. This would catch both the Beyond R18 and Cartoon Porn categories.
The government is very keen on its policy mantra: What's illegal offline, is illegal online, and it seems to have addressed the issue of Beyond R18 material being illegal offline but legal online. The government is proposing to relax its own obscenity rules so that Beyond R18 material will
be legalised, (with the proviso that the porn is consensual). The CPS has published a public consultation with
this proposal, and it should be ready for implementation after the consultation closes on 17th October 2018.
Interestingly Howe seems to have dropped the call to ban Beyond R18 material in her latest piece, so presumably she has accepted that
Beyond R18 material will soon be classifiable by the BBFC, and so not an issue for her bill.
Still to be Addressed
That still leaves the category of Cartoon Porn to be addressed. The current Digital Economy Act renders it illegal
offline, but legal online. Perhaps the Government has given Howe the nod to rationalise the situation by banning the likes of Hentai. Hence Howe is initiating a bit of propaganda to support her bill. She writes:
The polling that I am putting in the public domain specifically addresses the non-photographic child sex abuse images and is particularly interesting because it gauges the views of MPs whose detailed consideration of the Bill came
before the controversial Lords amendments were made.
According to the survey, which was conducted by ComRes on behalf of CARE, a massive 71% of MPs, rising to 76% of female MPs, stated that they did not believe it was right for
the Digital Economy Act to make non-photographic child sex abuse images available online to adults after age verification checks. Only 5% of MPs disagreed.
There is an opportunity to address this as part of a review in the next 18 months, but things are too serious to wait. The Government should put matters right now by adopting my very short, but very important two-clause Digital Economy Act (Amendment) (Extreme Pornography) Bill which would restore the effect of the Government's
initial prohibition of this material.
I -- along with 71 per cent of MPs -- urge the Government to take action to ensure that the UK's internet does not endorse the sexual exploitation of children.
I've never heard of this issue being discussed before and I can't believe that anybody has much of an opinion on the matter. Presumably therefore, the survey was presented out of the blue with the questions worded in such a way as to get the required response. Not unusual, but surely it shows that someone is making an effort to generate an issue where one didn't exist before. Perhaps an indication that Howe's solution is what the authorities have decreed will happen.
MPs left behind unfinished business when they broke for summer recess, and we aren't talking about Brexit negotiations. The rollout of mandatory age verification (AV) technology for adult websites is being held up once again while the Government
mulls over final details. AV tech will create highly sensitive databases of the public's porn watching habits, and Open Rights Group submitted a
report warning the proposed privacy protections are woefully inadequate. The Government's hesitation could be a
sign they are receptive to our concerns, but we expect their final guidance will still treat privacy as an afterthought. MPs need to understand what's at stake before they are asked to approve AV guidelines after summer.
The AV systems will be operated by private companies, but if the technology gets hacked and the personal data of millions of British citizens is breached, the Government will be squarely to blame. By issuing weak guidelines, the Government is begging for a Cambridge
Analytica-style data scandal. If this technology fails to protect user privacy, everybody loses. Businesses will be damaged (just look at Facebook), the Government will be embarrassed, and the over 20 million UK residents who view porn could have their
private sexual preferences exposed. It's in everybody's interest to fix this. The draft guidance lacks even the basic privacy protections required for other digital tools like credit card payments and email services. Meanwhile, major data breaches are
rocking international headlines on a regular basis. AV tech needs a dose of common sense.
UK Parliamentary committee claims that people failing to vote the 'correct' way has nothing to do with politicians' crap policies that don't look after British people, and must be all to do with fake news
Parliament's Digital, Culture, Media and Sport (DCMS) Committee has been investigating disinformation and fake news following the Cambridge Analytica data scandal and is claiming that the UK faces a democratic crisis due to the spread of pernicious
views and the manipulation of personal data.
In its first report it will suggest social media companies should face tighter censorship. It also proposes measures to combat election interference.
The report claims that the relentless targeting
of hyper-partisan views, which play to the fears and prejudices of people, in order to influence their voting plans is a threat to democracy.
The report was very critical of Facebook, which has been under increased scrutiny following the Cambridge
Analytica data scandal.
Facebook has hampered our efforts to get information about their company throughout this inquiry. It is as if it thinks that the problem will go away if it does not share information about the problem, and reacts only when
it is pressed, the report said. It provided witnesses who have been unwilling or unable to give full answers to the committee's questions.
The committee suggests:
1. Social media sites should be held responsible for harmful content on their sites
Social media companies cannot hide behind the claim of being merely a 'platform', claiming that they are tech companies and have no role themselves in regulating the content of their sites, the committee said.
They continually change what is and is not seen on their sites, based on algorithms and human intervention.
They reward what is most engaging, because engagement is part of their business model and their growth strategy. They have profited greatly by using this model.
The committee suggested a new category of tech company should be created, which
was not necessarily a platform or a publisher but something in between.
This should establish clear legal liability for the tech companies to act against harmful and illegal content on their platforms, the report said.
2. Rules on political campaigns should be made fit for the digital age
The committee said electoral law needed to be updated to reflect changes in campaigning techniques.
It suggested:
- creating a public register for political advertising, so that anybody can see what messages are being distributed online
- political advertisements should have a digital imprint stating who was responsible, as is required with printed leaflets and advertisements
- social media sites should be held responsible for interference in elections by malicious actors
- electoral fraud fines should be increased from a maximum of £20,000 to a percentage of organisations' annual turnover
3. Technology companies should be taxed to fund
education and regulation
Increased regulation of social media sites would result in more work for organisations such as the Electoral Commission and Information Commissioner's Office (ICO).
The committee suggested a levy on tech
companies should fund the expanded responsibilities of the regulators.
The money should also be spent on educational programmes and a public information campaign, to help people identify disinformation and fake news.
4. Social networks should be audited
The committee warned that fake accounts on sites such as Facebook and Twitter not only damage the user experience, but potentially defraud advertisers.
It suggested an independent authority such as the
Competition and Markets Authority should audit the social networks.
It also said security mechanisms and algorithms used by social networks should be available for audit by a government regulator, to ensure they are operating responsibly.
Offsite Comment: Now MPs want to police political discussion
Those members of parliament are half right at least. Democracy in Britain and the West is at risk today. But contrary to the wild claims in their fake-news report, the real risk does not come from Russian bloggers or shady groups farming Facebook users'
data. The big threat comes from political elitists like the cross-party clique of Remainer MPs who dominate the DCMS committee.
It looks a lot as if these MPs, like authoritarians from Moscow to Malaysia, have been inspired by the
strikingly illiberal precedent set by Angela Merkel's social media law . In particular, part of the idea behind sticking social media companies with legal liability is to scare them into going even further in muzzling free speech than the strict letter
of the law requires.
The Crown Prosecution Service has just published proposals to end obscenity prosecutions of images and videos of fisting, golden showers, squirting and bondage.
The key proposed prosecution policy update:
When considering whether the content of an article is "obscene", prosecutors should distinguish between:
Content showing or realistically depicting criminal conduct (whether non-consensual activity, or consensual activity where serious harm is caused), which is likely to be obscene;
Content showing or realistically depicting other conduct which is lawful,
which is unlikely to be obscene.
And there is a consultation question about this new policy:
Question 2 Do consultees agree or disagree with the guidance that prosecutors must exercise real caution when dealing with the moral
nature of acts not criminalized by law, and that the showing or realistic depiction of sexual activity / pornography which does not constitute acts or conduct contrary to the criminal law is unlikely to be obscene?
16. The following conduct (notwithstanding previous guidance indicating otherwise) will not likely fall to be prosecuted under the Act:
Activity involving bodily substances (including urine, vomit, blood and faeces)
Infliction of pain / torture
Bondage / restraint
Placing objects into the urethra
Any other sexual activity not prohibited by law
provided that:
It is consensual;
No serious harm is caused;
It is not otherwise inextricably linked with other criminality; and
The likely audience is not under 18 or otherwise vulnerable.
More to follow after reading the document, but the new policy seems to expand the concept of obscenity to incorporate modern issues such as revenge porn and non-consensual publication, e.g. upskirting.
Maybe this change of heart is related to the delay in age verification guidelines for the new BBFC internet porn censorship regime; the two certainly seem very closely related.
David Austin has penned what looks like an official BBFC campaigning piece trying to drum up support for the upcoming internet porn censorship regime. Disgracefully, the article is hidden behind a paywall and is restricted to Telegraph paying subscribers.
Are children protected by endangering their parents or their marriage?
The article is very much a one-sided piece, focusing almost entirely on the harms to children. It says nothing about the extraordinary dangers faced by adults when handing over personal identifying data to internet companies. Not a word about the dangers of being blackmailed, scammed or simply outed to employers, communities or wives, where the standard punishment for a trivial transgression of PC rules is the sack.
Austin speaks of the scale of the internet business and the scope of the expected changes. He writes:
There are around five million pornographic websites across the globe. Most of them have no
effective means of stopping children coming across their content. It's no great surprise, therefore, that Government statistics show that 1.4 million children in the UK visited one of these websites in one month.
The BBFC will be looking for a step change in the behaviour of the adult industry. We have been working with the industry to ensure that many websites carry age-verification when the law comes into force.
Millions of British adults watch pornography online. So age-verification will have a wide reach. But it's not new. It's been a requirement for many years for age-restricted goods and services, including some UK hosted pornographic
I guess at this last point readers will be saying: I never knew that, I've never come across age verification before. But the point here is that these previous rules devastated the British online porn industry, and the reason people never come across it is that there are barely any British sites left.
Are children being protected by impoverishing their parents?
Not that any proponents of age verification could care less about British people being
able to make money. Inevitably the new age verification will further compound the foreign corporate monopoly control on yet another internet industry.
Having presided over a regime that threatens to devastate lives, careers and livelihoods, Austin
ironically notes that it probably won't work anyway:
The law is not a silver bullet. Determined, tech-savvy teenagers may find ways around the controls, and not all pornography online will be age-restricted. For
example, the new law does not require pornography on social media platforms to be placed behind age-verification controls.
The government is braced for criticism next week over an anticipated delay in its prospective curbs on under 18s' access to hardcore porn sites.
The current timetable, culminating in the implementation of UK porn censorship by the end of the year, requires that the final censorship guidelines be presented to MPs before they go on holiday on Thursday. They will then be ready to approve them when they return to work in the autumn.
It sounds as though they won't be ready for publishing by this Thursday.
The BBFC noted that they were due to send the results of the public consultation along with the BBFC censorship rules to the government by late May of this year so presumably the
government is still pondering what to do.
'Best practice' just like Facebook and Cambridge Analytica
Back in April, when the BBFC put its rather naive draft rules out for public consultation, its prose tried to suggest that we can trust age verifiers with our most sensitive porn browsing data because they will voluntarily follow 'best practice'. But in light of the major industry player, in this case Facebook, allowing Cambridge Analytica to so dramatically abuse our personal data, the hope that these people will follow 'best practice' is surely forlorn.
And there was the implementation of GDPR. The BBFC seemed to think that this was all that was needed to keep our data safe. But when it comes down to it, all GDPR seems to have done is train us, like Pavlov's dogs, to endlessly tick the consent box for all these companies to do what the hell they like with our data.
Then there was a nice little piece of research this week that revealed that network-level ISP filtering of porn has next to no impact on preventing young porn seekers from obtaining their kicks. The research seems to suggest that it is not enough to block porn for one lad, because he has 30 mates to whose houses he can go round to surf the web, or else it only takes a few lads to be able to download porn and it will soon be circulated to the whole community on a memory stick or whatever.
Mass buy-in
I guess the government is finding it tough to find age verification ideas that are both convenient for adult users and robust about preventing access by the under-18s. I think the government needs to find a solution that will achieve a mass buy-in by adult users.
If adults don't want to play ball with the age verification process, then the first fallback position is for them to use a VPN. I know from my use of VPNs that they are very good, and once you turn one on it tends to get left on all day. I am sure millions of people using VPNs would not go down well with the security services on the trail of more serious crimes than underage porn viewing.
I think the most likely age verification method proposed to date that has a chance of a mass buy-in is the AVSecure system of anonymously buying a porn access card from a local shop and using a PIN, perhaps typed in once a day. Users are then able to browse without further hassle on all participating websites. But I think it would require a certain pragmatism from government to accept this idea, as it would be so open to over-18s buying a card and then selling the PIN to under-18s, or perhaps sons nicking their dad's PIN when they see the card lying around (or even installing a keylogger to nick it).
The government would probably like something more robust, where PINs have to be matched to people's proven ID. But I think porn users would be stupid to hand over their ID to anyone on the internet who can monitor porn use. The risks are enormous: reputational damage, blackmail, fraud etc., and in this nasty PC world the penalty for the most trivial of moral transgressions is to lose your job or even career.
A path to failure
The government is also setting out on a path where it can do nothing but fail. The Telegraph piece mentioned above is already lambasting the government for not applying the rules to social media websites such as Twitter, which host a fair bit of porn:
Children will be free to watch explicit X-rated sex videos on social media sites because of a loophole in a new porn crackdown, Britain's chief censor has admitted.
Austin, chief executive of the BBFC, has been charged by ministers with enforcing new laws that require people to prove they are over 18 to access porn sites. However, writing for telegraph.co.uk, Mr Austin admitted it would not be a silver bullet as
online porn on sites such as Facebook and YouTube would escape the age restrictions. Social media companies will not be required to carry age-verification for pornographic content on their platforms. He said it was a matter for government to review this
Nobody seems to have heard much about the progress of the BBFC consultation about the process to censor internet porn in the UK.
The sketchy timetable laid out so far suggests that the result of the consultation should be published prior to the
Parliamentary recess scheduled for 26th July. Presumably this would provide MPs with some light reading over their summer hols ready for them to approve as soon as the hols are over.
This publication may have to be hurried along though, as pesky MPs are messing up Theresa May's plans for a non-Brexit, and she would like to send them packing a week early before they can cause trouble. (Update 18th July: the early holidays idea has now been shelved.)
The BBFC published meeting minutes this week that mention the consultation:
The public consultation on the draft Guidance on Age Verification Arrangements and the draft Guidance on Ancillary
Service Providers closed on 23 April. The BBFC received 620 responses, 40 from organisations and 580 from individuals. Many of the individual responses were encouraged by a campaign organised by the Open Rights Group.
The response to the consultation will be circulated to the Board before being sent to DCMS on 21 May.
So assuming that the response was sent to the government on the appointed day then someone has been sitting on the results for quite a
long time now.
Meanwhile it's good to see that people are still thinking about the monstrosity that is coming our way. Ethical porn producer Erika Lust has been speaking to New Internationalist. She comments on the way the new law will compound MindGeek's monopolistic dominance of the online porn market:
The age verification laws are going to disproportionately affect smaller low-traffic sites and independent sex workers who cannot cover the costs of
installing age verification tools.
It will also impact smaller sites by giving MindGeek even more dominance in the adult industry. This is because the BBFC draft guidance does not enforce sites to offer more than one age
verification product. So, all of MindGeek's sites (again, 90% of the mainstream porn sites) will only offer their own product, AgeID. The BBFC have also stated that users do not have to verify their age on each visit if access is restricted by password
or a personal ID number. So users visiting a MindGeek site will only have to verify their age once using AgeID and then will be able to login to any complying site without having to verify again. Therefore, viewers will be less likely to visit competitor
sites not using the AgeID technology, and simultaneously competitor sites will feel pressured to use AgeID to protect themselves from losing viewers.
Sharon White, the chief executive of Ofcom, has put her case to be the British internet news censor, disgracefully from behind the paywalled website of The Times.
White says Ofcom has done research showing how little users trust what they read on social media.
She said that only 39% consider social media to be a trustworthy news source, compared with 63% for newspapers, and 70% for TV.
But then again many people don't much trust the biased moralising from the politically correct mainstream media,
including the likes of Ofcom.
White claims social media platforms need to be more accountable in how they curate and police content on their platforms, or face regulation.
In reality, Facebook's algorithm seems pretty straightforward: it
just gives readers more of what they have liked in the past. But of course the powers that be don't like people choosing their own media sources, they would much prefer that the BBC, or the Guardian , or Ofcom do the choosing.
Sharon White, wrote
in the Times:
The argument for independent regulatory oversight of [large online players] has never been stronger.
In practice, this would place much greater scrutiny on how effectively the
online platforms respond to harmful content to protect consumers, with powers for a regulator to enforce standards, and act if these are not met.
She continued, disgracefully revealing her complete contempt of the British people:
Many people admit they simply don't have the time or inclination to think critically when engaging with news, which has important implications for our democracy.
White joins a growing number of the establishment elite arguing that social media needs censorship. The government has frequently suggested as much, with Matt Hancock, then digital, culture, media and sport secretary, telling Facebook in April:
Social media companies are not above the law and will not be allowed to shirk their responsibilities to our citizens.
Update: The whole pitch to offer Ofcom's services as a news censor
There seem to be four whinges about modern news reading via smartphones, and all of them are just characteristics of the medium that will never change regardless of whether we have news censors or not.
Fake news: this mostly exists only in the minds of politicians; hardly anyone else can find any. So internet news readers are not much bothered by trying to detect it.
Passive news reading. It's far too much trouble typing stuff into a smartphone to go out and find things for yourself, so the next best thing is to use apps that do the best job of feeding you articles of interest.
Skimming and shallow reading of news feeds. Well, there's so much news out there, and the news feed algorithm isn't too hot anyway, so if anything isn't quite 100% interesting, just scroll on. This isn't going to change any time soon.
Echo chambers. This is just a put-down phrase for phone users choosing to read the news that they like. If a news censor thinks that more worthy news should be force-fed into people's news readers, then it will just suffer the indignity of being rapidly swiped into touch.
Anyway this is Sharon White's take:
Picking up a newspaper with a morning coffee. Settling down to watch TV news after a day's work. Reading the sections of the Sunday papers in your favourite order.
For decades, habit and routine have helped to define our relationship with the news. In the past, people consumed news at set times of day, but heard little in between. But for many people, those habits, and the news landscape that
shapes them, have now changed fundamentally.
Vast numbers of news stories are now available 24/7, through a wide range of online platforms and devices, with social media now the most popular way of accessing news on the internet.
Today's readers and viewers face the challenge to keep up. So too, importantly, does regulation.
The fluid environment of social media certainly brings benefits to news, offering more choice, real-time updates, and a platform for
different voices and perspectives. But it also presents new challenges for readers and regulators alike -- something that we, as a regulator of editorial standards in TV and radio, are now giving thought to for the online world.
In new Ofcom research, we asked people about their relationship with news in our always-on society, and the findings are fascinating.
People feel there is more news than ever before, which presents a challenge for their time and
attention. This, combined with fear of missing out, means many feel compelled to engage with several sources of news, but only have the capacity to do so superficially.
Similarly, as many of us now read news through social media
on our smartphones, we're constantly scrolling, swiping and clearing at speed. We're exposed to breaking news notifications, newsfeeds, shared news and stories mixed with other types of content. This limits our ability to process, or even recognise, the
news we see. It means we often engage with it incidentally, rather than actively.
In fact, our study showed that, after being exposed to news stories online, many participants had no conscious recollection of them at all. For
example, one recalled seeing nine news stories online over a week -- she had actually viewed 13 in one day alone. Others remembered reading particular articles, but couldn't recall any of the detail.
Social media's attraction as a
source of news also raises questions of trust, with people much more likely to doubt what they see on these platforms. Our research shows only 39% consider social media to be a trustworthy news source, compared to 63% for newspapers, and 70% for TV.
Fake news and clickbait articles persist as common concerns among the people taking part in our research, but many struggle to check the validity of online news content. Some rely on gut instinct to tell fact from fiction, while
others seek second opinions from friends and family, or look for established news logos, such as the Times. Many people admit they simply don't have the time or inclination to think critically when engaging with news, which has important implications for our democracy.
Education on how to navigate online news effectively is, of course, important. But the onus shouldn't be on the public to detect and deal with fake and harmful content. Online companies need to be much more
accountable when it comes to curating and policing the content on their platforms, where this risks harm to the public.
We welcome emerging actions by the major online players, but consider that the argument for independent
regulatory oversight of their activities has never been stronger. Such a regime would need to be based on transparency, and a set of clear underpinning principles.
In practice, this would place much greater scrutiny on how effectively the online platforms respond to harmful content to protect consumers, with powers for a regulator to enforce standards, and act if these are not met. We will outline further thoughts on the role independent regulation could play in the autumn.
When it comes to trust and accountability, public service broadcasters like the BBC also have a vital role to play. Their news operations provide the bedrock for much of the news content we see online, and as the
broadcasting regulator, Ofcom will continue to hold them to the highest standards.
Ofcom's research can help inform the debate about how to regulate effectively in an online world. We will continue to shine a light on the
behavioural trends that emerge, as people's complex and evolving relationship with the media continues to evolve.
And perhaps if you have skimmed over White's piece a bit rapidly, here is the key paragraph again:
In practice, this would place much greater scrutiny on how effectively the online platforms respond to harmful content to protect consumers, with powers for a regulator to enforce standards, and act if these are not met. We will
outline further thoughts on the role independent regulation could play in the autumn.
A paper has been published on the effects of network level website blocking to try and prevent adolescents from seeking out porn.
Internet Filtering and Adolescent Exposure to Online Sexual Material
By Andrew K. Przybylski and Victoria Nash
Early adolescents are spending an increasing amount of time online, and a significant share of caregivers now use Internet
filtering tools to shield this population from online sexual material. Despite wide use, the efficacy of filters is poorly understood. In this article, we present two studies: one exploratory analysis of secondary data collected in the European Union,
and one preregistered study focused on British adolescents and caregivers to rigorously evaluate their utility. In both studies, caregivers were asked about their use of Internet filtering, and adolescent participants were interviewed about their recent
Analyses focused on the absolute and relative risks of young people encountering online sexual material and the effectiveness of Internet filters.
Results suggested that caregivers' use of Internet filtering had inconsistent and practically insignificant links with young people's reports of encountering online sexual material.
The struggle to shape the experiences young
people have online is now part of modern parenthood. This study was conducted to address the value of industry, policy, and professional advice concerning the appropriate role of Internet filtering in this struggle. Our preliminary findings suggested
that filters might have small protective effects, but evidence derived from a more stringent and robust empirical approach indicated that they are entirely ineffective. These findings highlight the need for a critical cost -- benefit analysis in light of
the financial and informational costs associated with filtering and age verification technologies such as those now being developed in some European countries like the United Kingdom. Further, our results highlight the need for registered trials to
rigorously evaluate the effectiveness of costly technological solutions for social and developmental goals.
The write-up doesn't really place its conclusions in any real context as to what is actually happening, beyond the kids still being able to get hold of porn. The following paragraph gives the best clue of what is going on:
We calculated absolute risk reduction of exposure to online sexual material associated with caregivers using filtering technology in practical terms. These results were used to calculate the number of households which would have to be filtered to prevent one young person, who would otherwise see sexual material online, from encountering it over a 12-month period. Depending on the form of content, results indicated that between 17 and 77 households would need to be filtered to prevent a young adolescent from encountering online sexual material. A protective effect lower than we would consider practically significant.
This seems to suggest that if one kid has a censored internet then he just goes round to a mate's house that isn't censored, and downloads from there. He wouldn't actually be blocked from viewing porn until his whole circle of friends is similarly censored. It only takes one kid to be able to download porn, as it can then be loaded onto a memory stick and passed around.
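As a rough illustration of the arithmetic the paper is using: the "households to be filtered" figure is just the reciprocal of the absolute risk reduction, the same "number needed to treat" calculation used in medical trials. A minimal sketch, with invented exposure rates for illustration only (the paper does not publish these exact inputs):

```python
import math

def households_to_filter(risk_unfiltered, risk_filtered):
    """Number of households that would need filtering to prevent one
    young person's exposure over the study period (a 'number needed
    to treat' calculation applied to internet filtering)."""
    arr = risk_unfiltered - risk_filtered  # absolute risk reduction
    if arr <= 0:
        return math.inf  # filtering shows no protective effect at all
    return math.ceil(1 / arr)

# Hypothetical rates: 50% of unfiltered vs 44% of filtered households
# reporting exposure is a 6-point absolute risk reduction.
print(households_to_filter(0.50, 0.44))   # 17 households per exposure prevented
print(households_to_filter(0.20, 0.187))  # 77 households
```

Tiny absolute risk reductions like these are why the authors call the protective effect practically insignificant: dozens of households must be filtered to stop a single exposure.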
Sky, TalkTalk and Virgin Media would back the creation of an internet censor to set out a framework for internet companies in the UK, the House of Lords Communications Committee was told.
The three major UK ISPs were reporting to an ongoing House of Lords inquiry into internet censorship. The companies' policy heads pushed for a new censor, or the expansion of the responsibilities of a current censor, to set the rules for content censorship and to better equip children using the internet amid safety concerns.
At the moment the Information Commissioner's Office has responsibility for data protection and privacy; Ofcom censors internet TV; the Advertising Standards Authority censors adverts; and the BBFC censors adult porn.
Citing a report by consultancy Communications Chambers, Sky's Adam Kinsley said that websites and internet providers are making decisions, but in an unstructured way. Speaking about the current state of internet regulation, Kinsley said:
Companies are already policing their own platforms. There is no accountability of what they are doing and how they are doing it. The only bit of transparency is when they decide to do it on a global basis and at a time of their
choosing. Policy makers need to understand what is happening, and at the moment they don't have that.
The 13-strong House of Lords committee, chaired by Lord Gilbert of Panteg, launched an inquiry earlier this year to explore how the
censorship of the internet should be improved. The committee will consider whether there is a need for new laws to govern internet companies. This inquiry will consider whether websites are sufficiently accountable and transparent, and whether they have
adequate governance and provide behavioural standards for users.
The committee is hearing evidence from April to September 2018 and will launch a report at the end of the year.
A campaign group of anti-sex work MPs, comprising feminists and religious moralists, has just published a biased campaign document claiming all the usual bogies about trafficking, organised crime and so on.
The group misleadingly calls itself the All-Party Parliamentary Group (APPG) on Prostitution and the Global Sex Trade, as if it were an official committee of parliament. It is not; it is just a self-appointed campaign group that makes no attempt to include MPs independent of the campaign or to represent the wider views of Parliament.
Of course sex workers are definitely not party to the report, and in fact have been protesting against it to highlight its lack of independence and representation of sex worker input.
A roughly 200-strong collection of sex workers and activists came out to Parliament Square on Wednesday to make their case, with banners such as "Decriminalise sex work, for safety's sake."
The report, titled Behind Closed Doors, targets technology-based tools used by modern sex workers, such as pop-up brothels using Airbnb, and internet platforms like Vivastreet and Adultwork, which it claims are the most significant enablers of sex work and sex trafficking.
MP Sarah Champion used the report to call for internet censorship along the lines of the US FOSTA law. By making internet platforms liable to penalties for content posted by their users, such laws lead platforms to censor and block large swathes of related content just in case something prohibited gets through. In America the lawmakers specifically prohibited material that aids sex trafficking, but because there is no obvious way of checking whether an advert is for a legal sex worker or for a trafficked sex worker, the companies have to take down the legal stuff too. In fact the effects are so widespread that even dating services have been taken down, just in case traffickers are lurking somewhere amongst the dating couples.
But the campaigners don't stop there; comments to the media suggest a push for the UK to adopt the Nordic Model, a legal framework in which the selling of sexual services is legal but the purchase of those services is criminalised. The model has been largely panned by sex workers, activists and researchers as ineffective and unsafe.
Furthermore, in light of the publicity for the report, Jeremy Corbyn was asked by Sky's Sophy Ridge about the subject, and he came out in favour of the Nordic Model of criminalising men buying sex.
So, as usual, the 'progressive' left are enjoying a good sneer at men, and will happily see them imprisoned and fined just for wanting to get laid.
Comment: Disappointed by
8th July 2018. Thanks to Alan
I'm disappointed to hear Jeremy Corbyn apparently backing the Nordic Model. In the past, he has favoured decriminalisation, to loud squeals from the pointless and reliably mouthy Jess Phillips. John McDonnell, by contrast, has always been on the side of sex workers.
I am baffled by the behaviour of nominally Labour politicians who prattle about sex work while ignoring sex workers. I can't imagine Champion or Phillips spouting about railways without talking to the RMT and ASLEF or
about higher education without consultation with the UCU. I think the organizations representing sex workers should hammer this point home at every opportunity.
Today we're releasing our latest desktop browser Brave 0.23 which features Private Tabs with Tor, a technology for defending against network surveillance. This new functionality, currently in beta,
integrates Tor into the browser and gives users a new browsing mode that helps protect their privacy not only on device but over the network. Private Tabs with Tor help protect Brave users from ISPs (Internet Service Providers), guest Wi-Fi providers,
and visited sites that may be watching their Internet connection or even tracking and collecting IP addresses, a device's Internet identifier.
Private Tabs with Tor are easily accessible from the File menu by clicking New Private
Tab with Tor. The integration of Tor into the Brave browser makes enhanced privacy protection conveniently accessible to any Brave user directly within the browser. At any point in time, a user can have one or more regular tabs, session tabs, private
tabs, and Private Tabs with Tor open.
The Brave browser already automatically blocks ads, trackers, cryptocurrency mining scripts, and other threats in order to protect users' privacy and security, and Brave's regular private tabs
do not save a user's browsing history or cookies. Private Tabs with Tor improve user privacy in several ways. It makes it more difficult for anyone in the path of the user's Internet connection (ISPs, employers, or guest Wi-Fi providers such as coffee
shops or hotels) to track which websites a user visits. Also, web destinations can no longer easily identify or track a user arriving via Brave's Private Tabs with Tor by means of their IP address. Users can learn more about how the Tor network works by
watching this video.
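The network-path protection Brave describes here is general Tor behaviour rather than anything Brave-specific: any client that routes its traffic through a Tor SOCKS proxy gets the same effect. A minimal sketch in Python, assuming a local Tor client is listening on its default SOCKS port 9050 and that the `requests` library (with its SOCKS extra) is installed; the `socks5h` scheme makes DNS resolution happen inside Tor too, so the ISP sees neither the destination IP nor the DNS lookup:

```python
import requests

# Assumption: a Tor client (the standalone daemon, or one bundled with
# a Tor-enabled browser) is listening on localhost:9050.
TOR_PROXY = "socks5h://127.0.0.1:9050"  # socks5h = resolve hostnames via Tor

def tor_session():
    """Return a requests session whose traffic (and DNS) goes through Tor."""
    session = requests.Session()
    session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}
    return session

# Usage (only works with Tor actually running):
#   tor_session().get("https://check.torproject.org/")
```

The destination site then sees the IP address of a Tor exit relay rather than the user's own, which is the same property Brave's Private Tabs with Tor rely on.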
Private Tabs with Tor default to DuckDuckGo as the search engine, but users have the option to switch to one of Brave's other nineteen search providers. DuckDuckGo does not ever collect or share users' personal information, and welcomes anonymous users without impacting their search experience, unlike Google, which challenges anonymous users to prove they are human and makes their search less seamless.
In addition, Brave is
contributing back to the Tor network by running Tor relays. We are proud to be adding bandwidth to the Tor network, and intend to add more bandwidth in the coming months.