The Open Rights Group comments on the government censorship plans:
Online Harms: Blocking websites doesn't work -- use a rights-based approach instead
Blocking websites isn't working. It's not
keeping children safe and it's stopping vulnerable people from accessing information they need. It's not the right approach to take on Online Harms.
This is the finding from our
recent research into website blocking by mobile and broadband
Internet providers. And yet, as part of its Internet regulation agenda, the UK Government wants to roll out even more blocking.
The Government's Online Harms White Paper is focused on making online companies fulfil a "duty
of care" to protect users from "harmful content" -- two terms that remain troublingly ill-defined. 1
The paper proposes giving a regulator various punitive measures to use against companies that fail to fulfil this duty, including powers to block websites.
If this scheme comes into effect, it could lead to
widespread automated blocking of legal content for people in the UK.
Mobile and broadband Internet providers have been blocking websites with parental control filters for five years. But through our
Blocked project -- which detects incorrect website blocking -- we know that systems are still blocking far too many sites and far too many types of sites by mistake.
Thanks to website blocking, vulnerable people and under-18s are losing access to crucial information and support from counselling, charity, school, and sexual health websites. Small businesses are losing customers. And website owners often don't know this is happening.
We've seen with parental control filters that blocking websites doesn't have the intended outcomes. It restricts access to legal, useful,
and sometimes crucial information. It also does nothing to stop people determined to access material on blocked websites, who often use VPNs to get around the filters. Other solutions, like filters applied by a parent to a child's account on a device, are more appropriate.
Unfortunately, instead of noting these problems inherent to website blocking by Internet providers and rolling back, the Government is pressing ahead with website blocking in other areas.
Blocking by Internet providers may not work for long. We are seeing a technical shift towards encrypted website address requests that will make this kind of website blocking by Internet providers much more difficult.
When I type a human-friendly web address such as openrightsgroup.org into a web browser and hit enter, my computer asks a Domain Name System (DNS) for that website's computer-friendly IP address - which will
look something like 126.96.36.199. My web browser can then use that computer-friendly address to load the website.
At the moment, most DNS requests are unencrypted. This allows mobile and broadband Internet providers to
see which website I want to visit. If a website is on a blocklist, the system won't return the actual IP address to my computer. Instead, it will tell me that that site is blocked, or will tell my computer that the site doesn't exist. That stops me
visiting the website and makes the block effective.
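The blocking step described above can be sketched as a toy resolver. Everything here is illustrative -- the domain records, the blocklist entry, and the function are hypothetical stand-ins, since a real ISP resolver speaks the DNS protocol -- but the decision logic has the same shape:

```python
# Toy model of a filtering DNS resolver. All names, addresses and the
# blocklist below are hypothetical examples, not real DNS behaviour.
RECORDS = {"openrightsgroup.org": "126.96.36.199"}
BLOCKLIST = {"blocked.example"}

def resolve(hostname):
    if hostname in BLOCKLIST:
        # The filter answers as if the site doesn't exist (NXDOMAIN),
        # so the browser never learns the real IP address.
        raise LookupError("NXDOMAIN")
    return RECORDS[hostname]

print(resolve("openrightsgroup.org"))  # -> 126.96.36.199
```

Because the browser trusts whatever answer comes back, a faked "no such site" response is indistinguishable from the site genuinely not existing -- which is exactly why these blocks are invisible to most users.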
Increasingly, though, DNS requests are being encrypted. This provides much greater security for ordinary Internet users. It also makes website blocking by Internet providers
incredibly difficult. Encrypted DNS is becoming widely available through Google's Android devices, on Mozilla's Firefox web browser and through Cloudflare's mobile application for Android and iOS. Other encrypted DNS services are also available.
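As a rough sketch of why encryption defeats this kind of blocking: the JSON flavour of DNS over HTTPS is just an HTTPS request to a resolver such as Cloudflare's public endpoint. The function below only builds the query URL and makes no network call; the endpoint and parameter names are Cloudflare's documented ones, but the surrounding code is illustrative.

```python
from urllib.parse import urlencode

# Cloudflare's public DNS-over-HTTPS endpoint (JSON API). The lookup
# travels inside ordinary HTTPS, so an ISP sees only an encrypted
# connection to the resolver, not the website name being asked about.
DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def doh_query_url(hostname, record_type="A"):
    # Sending this URL with an "Accept: application/dns-json" header
    # returns the DNS answer as JSON instead of binary DNS wire format.
    return DOH_ENDPOINT + "?" + urlencode({"name": hostname, "type": record_type})

print(doh_query_url("openrightsgroup.org"))
```

Since the request and response are encrypted end to end between the user and the resolver, the ISP has nothing to inspect and nothing to substitute -- the blocklist machinery described above simply never sees the lookup.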
Blocking websites may be the Government's preferred tool to deal with social problems on the Internet, but it doesn't work -- in policy terms and, increasingly, at a technical level as well.
The Government must accept that website blocking by mobile and broadband Internet providers is not the answer. They should concentrate instead on a rights-based approach to Internet regulation and on educational and social approaches that address the roots of complex societal issues.
Offsite Article: CyberLegal response to the Online Harms Consultation
The Digital Policy Alliance (DPA) is a private lobby group connecting digital industries with Parliament. Its industry members include both Age Verification (AV) providers, eg OCL, and adult entertainment companies.
Just before the Government announcement that the commencement of age verification requirements for porn websites would be delayed, the DPA wrote a letter explaining that the industry was not yet ready to implement AV, and had asked for a three-month delay.
The letter is unpublished but fragments of it have been reported in news reports about AV.
The Telegraph reported:
The Digital Policy Alliance called for the scheme to be delayed or
risk nefarious companies using this opportunity to harvest and manipulate user data.
The strongly-worded document complains that the timing is very tight, a fact that has put some AVPs [age verification providers] and adult
entertainment providers in a very difficult situation.
It warns that unless the scheme is delayed there will be less protection for public data, as it appears that there is an intention for uncertified providers to use this
opportunity to harvest and manipulate user data.
Rowland Manthorpe from Sky News contributed a few interesting snippets
too. He noted that the AVPs were unsurprisingly not pleased by the government delay:
Serge Acker, chief executive of OCL, which provides privacy-protecting porn passes for purchase at newsagents, told Sky News: As a
business, we have been gearing up to get our solution ready for July 15th and we, alongside many other businesses, could potentially now be being endangered if the government continues with its attitude towards these delays.
Not only does it make the government look foolish, but it's starting to make companies like ours look it too, as we all wait expectantly for plans that are only being kicked further down the road.
There are still issues with
how the AV providers can make money
And interestingly Manthorpe revealed in the accompanying video news report that the AV providers were also distinctly unimpressed by the BBFC stipulating that certified AV providers must not use identity data provided by porn users for any purpose other than verifying age. The sensible idea being that the data should not be made available for the likes of targeted advertising. One example of prohibited data re-use has caused particular problems, namely that ID data should not be used to sign people up for digital wallets.
Now AV providers have got to be able to generate their revenue somehow. Some have proposed selling AV cards in newsagents for about £10, but others had been
planning on using AV to generate a customer base for their digital wallet schemes.
So it seems that there are still quite a few fundamental issues that have not yet been resolved in how the AV providers get their cut.
AV providers would rather not sign up to BBFC accreditation
Issues with BBFC AV accreditation requirements are behind a move to use an alternative standard. An AV provider called VeriMe has announced that it is the first AV company to receive PAS1296 certification.
PAS1296 was developed by the British Standards Institution together with the Age Check Certification Scheme (ACCS). PAS stands for Publicly Available Specification, a document designed to define good practice standards for a product, service or process. The standard was also championed by the
Digital Policy Alliance.
Rudd Apsey, the director of VeriMe, said:
The PAS1296 certification augments the voluntary standards outlined by the BBFC, which don't address how third-party websites handle consumer data. We believe it fills those gaps and is confirmation that VeriMe is indeed leading the world in the development and implementation of age verification technology and setting best practice standards for the industry.
We are incredibly proud to be the first company to receive the standard and want consumers and service providers to know that come the July 15 roll out date, they can trust VeriMe's systems to provide the most robust solution for age verification.
This is not a very convincing argument as PAS1296 is not available for customers to read, (unless they pay about 120 quid for the privilege). At least the BBFC standard can be read by anyone for free, and they can then
make up their own minds as to whether their porn browsing history and ID data is safe.
However it does seem that some companies at least are planning to give the BBFC accreditation scheme a miss.
The BBFC standard fails to provide safety for porn users' data anyway.
The AV company 18+ takes issue with the BBFC accreditation standard, noting that it allows AV providers to dangerously log people's porn browsing history:
Here's the problem with the design of
most age verification systems: when a UK user visits an adult website, most solutions will present the user with an inline frame displaying the age verifier's website or the user will be redirected to the age verifier's website. Once on the age
verifier's website, the user will enter his or her credentials. In most cases, the user must create an account with the age verifier, and on subsequent visits to the adult website, the user will enter his account details on the age verifier's website
(i.e., username and password). At this point in the process, the age verifier will validate the user and, if the age verifier has a record of the user being at least age 18, will redirect the user back to the adult website. The age verification system will
transmit to the adult website whether the user is at least age 18 but will not transmit the identity of the user.
The flaw with this design from a user privacy perspective is obvious: the age verification website will know the
websites the user visits. In fact, the age verification provider obtains quite a nice log of the digital habits of each user. To be fair, most age verifiers claim they will delete this data. However, a truly privacy first design would ensure the data
never gets generated in the first place because logs can inadvertently be kept, hacked, leaked, or policies might change in the future. We viewed this risk to be unacceptable, so we set about building a better system.
The age verification solutions set to roll out in July 2019 do not provide two-way anonymity for both the age verifier and the adult website, meaning there remains some log of -- or potential to log -- which adult websites a UK-based user visits.
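The redirect flow 18+ describes can be modelled in a few lines. All class, user and site names below are hypothetical, and no real provider's API is being shown; the point is to mark where the browsing log arises, not how any particular verifier is built.

```python
# Toy model of redirect-based age verification. Names are illustrative
# stand-ins; the point is where the log accrues, not any real provider.
class AgeVerifier:
    def __init__(self, ages):
        self.ages = ages          # username -> age, held by the verifier
        self.visit_log = []       # the privacy problem: a browsing history

    def verify(self, username, referring_site):
        # The verifier must know which site to send the user back to,
        # so a record of (user, site) accumulates with every check.
        self.visit_log.append((username, referring_site))
        # Only the yes/no result is transmitted back to the adult site.
        return self.ages.get(username, 0) >= 18

verifier = AgeVerifier({"alice": 34, "bob": 16})
print(verifier.verify("alice", "adult.example"))  # True
print(verifier.verify("bob", "adult.example"))    # False
print(verifier.visit_log)  # the verifier now holds a log of visits
```

Even though the adult site learns only a yes/no answer, the verifier ends up holding exactly the per-user visit history the passage warns about -- which is why 18+ argues the data should never be generated in the first place.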
In fact one AV provider revealed that up until recently the government demanded that AV providers keep a log of people's porn browsing history, and it was a bit of a late concession to practicality that companies were able to opt out if they wished.
Note that the logging capability is kindly hidden by the BBFC by passing it off as being used only for as long as is necessary for fraud prevention. Of course that is just smoke and mirrors: fraud -- presumably meaning that passcodes could be given or sold to others -- could happen at any time that an age verification scheme is in use, so the time restriction specified by the BBFC may as well be forever.
Jeremy Wright, the Secretary of State for Digital, Culture, Media and Sport, addressed parliament to explain that the start date of the age verification scheme for porn has been delayed by about six months. The reason is that the Government failed to notify the EU about laws that affect free trade (eg those that allow EU websites to be blocked in the UK). Although the main Digital Economy Act was submitted to the EU, extra bolt-on laws added since have not been submitted. Wright explained:
In autumn last year, we laid three instruments before the House for approval. One of them -- the guidance on age verification arrangements -- sets out standards that companies need to comply with. That should have been notified to the
European Commission, in line with the technical standards and regulations directive, and it was not. Upon learning of that administrative oversight, I instructed my Department to notify this guidance to the EU and re-lay the guidance in Parliament as
soon as possible. However, I expect that that will result in a delay in the region of six months.
Perhaps it would help if I explained why I think that six months is roughly the appropriate time. Let me set out what has to happen
now: we need to go back to the European Commission, and the rules under the relevant directive say that there must be a three-month standstill period after we have properly notified the regulations to the Commission. If it wishes to look into this in
more detail -- I hope that it will not -- there could be a further month of standstill before we can take matters further, so that is four months. We will then need to re-lay the regulations before the House. As she knows, under the negative procedure,
which is what these will be subject to, there is a period during which they can be prayed against, which accounts for roughly another 40 days. If we add all that together, we come to roughly six months.
Wright apologised profusely to
supporters of the scheme:
I recognise that many Members of the House and many people beyond it have campaigned passionately for age verification to come into force as soon as possible to ensure that children are
protected from pornographic material they should not see. I apologise to them all for the fact that a mistake has been made that means these measures will not be brought into force as soon as they and I would like.
However the law has
not been received well by porn users. Parliament has generally shown no interest in the privacy and safety of porn users. In fact much of the delay has been down to belatedly realising that the scheme might not get off the ground at all unless they at least
pay a little lip service to the safety of porn users.
Even now Wright decided to dismiss people's privacy fears and concerns as if they were all just deplorables bent on opposing child safety. He said:
there are also those who do not want these measures to be brought in at all, so let me make it clear that my statement is an apology for delay, not a change of policy or a lessening of this Government's determination to bring these changes about. Age
verification for online pornography needs to happen. I believe that it is the clear will of the House and those we represent that it should happen, and that it is in the clear interests of our children that it must.
Wright underlined his point by simply not acknowledging that, given a choice, people would prefer not to hand over their ID. Voluntarily complying websites would have to take a major hit from customers who would prefer to seek out the safety of non-complying sites.
I see no reason why, in most cases, they [websites] cannot begin to comply voluntarily. They had expected to be compelled to do this from 15 July, so they should be in a position to comply. There seems to
be no reason why they should not.
In passing Wright also mentioned how the government is trying to counter encrypted DNS, which reduces the capability of ISPs to block websites. Instead the Government will try to press the browser companies into doing its censorship dirty work:
It is important to understand changes in technology and the additional challenges they throw up, and she is right to say that the so-called DNS over HTTPS changes will present additional challenges. We are working through those now and speaking to the browsers, which is where we must focus our attention. As the hon. Lady rightly says, the use of these protocols will make it more difficult, if not
impossible, for ISPs to do what we ask, but it is possible for browsers to do that. We are therefore talking to browsers about how that might practically be done, and the Minister and I will continue those conversations to ensure that these provisions
can continue to be effective.
This report follows our research into current Internet content regulation efforts, which found a lack of accountable, balanced and independent procedures governing content removal, both formally and informally by the state.
There is a legacy of Internet regulation in the UK that does not comply with due process, fairness and fundamental rights requirements. This includes: bulk domain suspensions by Nominet at police request without prior authorisation; the lack of an independent legal authorisation process for Internet Watch Foundation (IWF) blocking at Internet Service Providers (ISPs) and in the future by the British Board of Film Classification (BBFC), as well as for Counter-Terrorism Internet Referral Unit (CTIRU) notifications to platforms of illegal content for takedown. These were detailed in our previous report.
The UK government now proposes new controls on Internet content, claiming that it wants to ensure the same rules online as offline. It says it wants harmful content removed, while respecting human rights and protecting free speech.
Yet proposals in the DCMS/Home Office White Paper on Online Harms will create incentives for Internet platforms such as Google, Twitter and Facebook to remove content without legal processes. This is not the same rules
online as offline. It instead implies a privatisation of justice online, with the assumption that corporate policing must replace public justice for reasons of convenience. This goes against human rights standards that the government has itself agreed to and against the advice of UN Special Rapporteurs.
The government as yet has not proposed any means to define the harms it seeks to address, nor identified any objective evidence base to show what in fact needs to be
addressed. It instead merely states that various harms exist in society. The harms it lists are often vague and general. The types of content specified may be harmful in certain circumstances, but even with an assumption that some content is genuinely
harmful, there remains no attempt to show how any restriction on that content might work in law. Instead, it appears that platforms will be expected to remove swathes of legal-but-unwanted content, with an as-yet-unidentified regulator given a broad duty
to decide if a risk of harm exists. Legal action would follow non-compliance by a platform. The result is the state proposing censorship and sanctions for actors publishing material that it is legal to publish.
The BBFC's Age-verification Certificate Standard ("the Standard") for providers of age verification services, published in April 2019, fails to meet adequate standards of cyber security and
data protection and is of little use for consumers reliant on these providers to access adult content online.
This document analyses the Standard and certification scheme and makes recommendations for improvement and remediation.
It sub-divides generally into two types of concern: operational issues (the need for a statutory basis, problems caused by the short implementation time and the lack of value the scheme provides to consumers), and substantive issues (seven problems with
the content as presently drafted).
The fact that the scheme is voluntary leaves the BBFC powerless to fine or otherwise discipline providers that fail to protect people's data, and makes it tricky for consumers to distinguish
between trustworthy and untrustworthy providers. In our view, the government must legislate without delay to place a statutory requirement on the BBFC to implement a mandatory certification scheme and to grant the BBFC powers to require reports and
penalise non-compliant providers.
The Standard's existence shows that the BBFC considers robust protection of age verification data to be of critical importance. However, in both substance and operation the Standard fails to
deliver this protection. The scheme allows commercial age verification providers to write their own privacy and security frameworks, reducing the BBFC's role to checking whether commercial entities follow their own rules rather than requiring them to
work to a mandated set of common standards. The result is uncertainty for Internet users, who are inconsistently protected and have no way to tell which companies they can trust.
Even within its voluntary approach, the BBFC gives providers little guidance as to what their privacy and security frameworks should contain. Guidance on security, encryption, pseudonymisation, and data retention is vague and imprecise, and often refers to generic "industry
standards" without explanation. The supplementary Programme Guide, to which the Standard refers readers, remains unpublished, critically undermining the scheme's transparency and accountability.
Grant the BBFC statutory powers:
The BBFC Standard should be substantively revised to set out comprehensive and concrete standards for handling highly sensitive age verification data.
The government should legislate to grant the BBFC statutory power to mandate compliance.
The government should enable the BBFC to require remedial action or apply financial penalties for non-compliance.
The BBFC should be given statutory powers to require annual compliance reports from providers and fine those who sign up to the certification scheme but later violate its requirements.
The Information Commissioner should oversee the BBFC's age verification certification scheme
Delay implementation and enforcement:
Delay implementation and enforcement of age verification until both (a) a statutory standard of data privacy and security is in place, and (b) that standard has been
implemented by providers.
Improve the scheme content:
Even if the BBFC certification scheme remains voluntary, the Standard should at least contain a definitive set of precisely delineated objectives
that age verification providers must meet in order to say that they process identity data securely.
Improve communication with the public:
Where a provider's certification is revoked, the BBFC should
issue press releases and ensure consumers are individually notified at login.
The results of all penetration tests should be provided to the BBFC, which must publish details of the framework it uses to evaluate test results, and
publish annual trends in results.
Strengthen data protection requirements:
Data minimisation should be an enforceable statutory requirement for all registered age verification providers.
The Standard should outline specific and very limited circumstances under which it's acceptable to retain logs for fraud prevention purposes. It should also specify a hard limit on the length of time logs may be kept.
The Standard should set out a clear, strict and enforceable set of policies to describe exactly how providers should "pseudonymise" or "deidentify" data.
Providers that no longer meet the
Standard should be required to provide the BBFC with evidence that they have destroyed all the user data they collected while supposedly compliant.
The BBFC should prepare a standardised data protection risk assessment framework
against which all age verification providers will test their systems. Providers should limit bespoke risk assessments to their specific technological implementation.
Strengthen security, testing, and encryption requirements:
Providers should be required to undertake regular internal and external vulnerability scanning and a penetration test at least every six months, followed by a supervised remediation programme to correct any discovered vulnerabilities.
Providers should be required to conduct penetration tests after any significant application or infrastructure change.
Providers should be required to use a comprehensive and specific
testing standard. CBEST or GBEST could serve as guides for the BBFC to develop an industry-specific framework.
The BBFC should build on already-established strong security frameworks, such as the Center for Internet Security Cyber
Controls and Resources, the NIST Cyber Security Framework, or Cyber Essentials Plus.
At a bare minimum, the Standard should specify a list of cryptographic protocols which are not adequate for certification.
Here at the IWF, we've created life-changing technology and data sets helping people who were sexually abused as children and whose images appear online. The IWF URL List , or more commonly, the block list, is a list of live webpages that show children
being sexually abused, a list used by the internet industry to block millions of criminal images from ever reaching the public eye.
It's a crucial service, protecting children, and people of all ages in their homes and places of
work. It stops horrifying videos from being stumbled across accidentally, and it thwarts some predators who visit the net to watch such abuse.
But now its effectiveness is in jeopardy. That block list, which has for years stood between exploited children and their repeated victimisation, faces a challenge called DNS over HTTPS which could soon render it obsolete.
It could expose millions of internet users across the globe - and of any age -- to the risk
of glimpsing the most terrible content.
So how does it work? DNS stands for Domain Name System and it's the phonebook by which you look something up on the internet. But the new privacy technology could hide user requests, bypass
filters like parental controls, and make globally-criminal material freely accessible. What's more, this is being fast-tracked, by some, into service as a default which could make the IWF list and all kinds of other protections defunct.
At the IWF, we don't want to demonise technology. Everyone's data should be secure from unnecessary snooping and encryption itself is not a bad thing. But the IWF is all about protecting victims and we say that the way in which DNS
over HTTPS is being implemented is the problem.
If it was set as the default on the browsers used by most of us in the UK, it would have a catastrophic impact. It would make the horrific images we've spent all these years blocking
suddenly highly accessible. All the years of work for children's protection could be completely undermined -- not just busting the IWF's block list but swerving filters, bypassing parental controls, and dodging some counter terrorism efforts as well.
From the IWF's perspective, this is far more than just a privacy or a tech issue, it's all about putting the safety of children at the top of the agenda, not the bottom. We want to see a duty of care placed upon DNS providers so they
are obliged to act for child safety and cannot sacrifice protection for improved customer privacy.
The Information Commissioner's Office has for some bizarre reason been given immense powers to censor the internet.
And in an early opportunity to exert its power it has proposed a 'regulation' that would require strict age verification for nearly all mainstream websites that may have a few child readers and some material that may be deemed harmful for very young children, eg news websites that may have glamour articles or perhaps violent news images.
In a mockery of 'data protection'
such websites would have to implement strict age verification requiring people to hand over identity data to most of the websites in the world.
Unsurprisingly much of the internet content industry is unimpressed. A six-week consultation on the new censorship rules has just closed and according to the Financial Times:
Companies and industry groups have loudly pushed back on the plans, cautioning that they could unintentionally quash start-ups and endanger
people's personal data. Google and Facebook are also expected to submit critical responses to the consultation.
Tim Scott, head of policy and public affairs at Ukie, the games industry body, said it was an inherent contradiction
that the ICO would require individuals to give away their personal data to every digital service.
Dom Hallas, executive director at the Coalition for a Digital Economy (Coadec), which represents digital start-ups in the UK, said
the proposals would result in a withdrawal of online services for under-18s by smaller companies:
The code is seen as especially onerous because it would require companies to provide up to six different versions of
their websites to serve different age groups of children under 18.
This means an internet for kids largely designed by tech giants who can afford to build two completely different products. A child could access YouTube Kids, but
not a start-up competitor.
Stephen Woodford, chief executive of the Advertising Association -- which represents companies including Amazon, Sky, Twitter and Microsoft -- said the ICO needed to conduct a full technical
and economic impact study, as well as a feasibility study. He said the changes would have a wide and unintended negative impact on the online advertising ecosystem, reducing spend from advertisers and so revenue for many areas of the UK media.
An ICO spokesperson said:
We are aware of various industry concerns about the code. We'll be considering all the responses we've had, as well as engaging further where necessary, once the consultation has ended.
A scathing new report, seen by City A.M. and authored by the Internet Association (IA), which represents online firms including Google, Facebook and Twitter, has outlined a string of major concerns with plans laid out in the government Online Harms white
paper last month.
The Online Harms white paper outlines a large number of internet censorship proposals hiding under the vague terminology of 'duties of care'.
Under the proposals, social media sites could face hefty fines or even a ban if they
fail to tackle online harms such as inappropriate age content, insults, harassment, terrorist content and of course 'fake news'.
But the IA has branded the measures unclear and warned they could damage the UK's booming tech sector, with smaller
businesses disproportionately affected. IA executive director Daniel Dyball said:
Internet companies share the ambition to make the UK one of the safest places in the world to be online, but in its current form the online harms white paper will not deliver that.
The proposals present real risks and challenges to the thriving British tech sector, and will not solve the problems identified.
The IA slammed the white paper over
its use of the term duty of care, which it said would create legal uncertainty and be unmanageable in practice.
The lobby group also called for a more precise definition of which online services would be covered by regulation and
greater clarity over what constitutes an online harm. In addition, the IA said the proposed measures could raise serious unintended consequences for freedom of expression.
And while most internet users favour tighter rules in some areas, particularly social media, people also recognise the importance of protecting free speech -- which is one of the internet's great strengths.
A recent internet protocol allows websites to be located without using the traditional approach of asking your ISP's DNS server, and so evades website blocks implemented by the ISP. Because the new protocol is encrypted, the ISP is restricted in its ability to monitor which websites are being accessed.
This very much impacts the ISP's ability to block illegal child abuse material as identified in a block list maintained by the IWF. Over the years the IWF has been very good at sticking to its universally supported remit. Presumably it has realised that extending its blocking capabilities to other less critical areas might degrade its effectiveness, as it would then lose that universal support.
Now of course the government has stepped in and will use the same mechanism as used for the IWF blocks to block legal and very popular adult porn websites. The inevitable interest in circumvention options will very much diminish the IWF's ability to block child abuse material. So the IWF has taken to campaigning to support its capabilities. Fred Langford, the deputy CEO of the IWF, told Techworld about the implementation of encrypted DNS:
Everything would be encrypted; everything would be dark. For the last 15 years, the IWF have
worked with many providers on our URL list of illegal sites. There's the counterterrorism list as well and the copyright infringed list of works that they all have to block. None of those would work.
We put the entries onto our
list until we can work with our international stakeholders and partners to get the content removed in their country, said Langford. Sometimes that will only be on the list for a day. Other times it could be months or years. It just depends on the regime
at the other end, wherever it's physically located.
The IWF realises the benefit of universal support, so it has generally acknowledged the benefits of the protocol for privacy and security while focusing on the need for it to be deployed with
the appropriate safeguards in place. It is calling for the government to insert a censorship rule covering the IWF URL List into the forthcoming online harms regulatory framework, to ensure that service providers comply with current UK laws and
security measures. Presumably the IWF would like its block list to be implemented by encrypted DNS servers worldwide. IWF's Fred Langford said:
The technology is not bad; it's how you implement it. Make sure your
policies are in place, and make sure there's some way that if there is an internet service provider that is providing parental controls and blocking illegal material that the DNS over HTTPS server can somehow communicate with them to redirect the traffic
on their behalf.
Given the respect the IWF commands, this could be a possibility, but if the government then steps in and demands that adult porn sites be blocked too, this approach would surely stumble, as every world dictator and
international moralist campaigner would expect the same.
Elizabeth Denham, Information Commissioner, Information Commissioner's Office
Dear Commissioner Denham,
Re: The Draft Age Appropriate Design Code for Online Services
We write to
you as civil society organisations who work to promote human rights, both offline and online. As such, we are taking a keen interest in the ICO's Age Appropriate Design Code. We are also engaging with the Government in its White Paper on Online Harms,
and note the connection between these initiatives.
Whilst we recognise and support the ICO's aims of protecting and upholding children's rights online, we have severe concerns that as currently drafted the Code will not achieve
these objectives. There is a real risk that implementation of the Code will result in widespread age verification across websites, apps and other online services, which will lead to increased data profiling of both children and adults, and restrictions
on their freedom of expression and access to information.
The ICO contends that age verification is not a silver bullet for compliance with the Code, but it is difficult to conceive how online service providers could realistically
fulfil the requirement to be age-appropriate without implementing some form of onboarding age verification process. The practical impact of the Code as it stands is that either all users will have to access online services via a sorting age-gate or adult
users will have to access the lowest common denominator version of services with an option to age-gate up. This creates a de facto compulsory requirement for age-verification, which in turn puts in place a de facto restriction for both children and
adults on access to online content.
Requiring all adults to verify they are over 18 in order to access everyday online services is a disproportionate response to the aim of protecting children online and violates fundamental
rights. It carries significant risks of tracking, data breach and fraud. It creates digital exclusion for individuals unable to meet requirements to show formal identification documents. Where age-gating also applies to under-18s, this violation and
exclusion is magnified. It will put an onerous burden on small-to-medium enterprises, which will ultimately entrench the market dominance of large tech companies and lessen choice and agency for both children and adults -- this outcome would be the
antithesis of encouraging diversity and innovation.
In its response to the June 2018 Call for Views on the Code, the ICO recognised that there are complexities surrounding age verification, yet the draft Code text fails to engage
with any of these. It would be a poor outcome for fundamental rights and a poor message to children about the intrinsic value of these for all if children's safeguarding was to come at the expense of free expression and equal privacy protection for
adults, including adults in vulnerable positions for whom such protections have particular importance.
Mass age-gating will not solve the issues the ICO wishes to address with the Code and will instead create further problems. We
urge you to drop this dangerous idea.
Open Rights Group Index on Censorship Article19 Big Brother Watch Global Partners Digital
New proposals to safeguard children will require everyone to prove they are over 18 before accessing online content.
These proposals -- from the Information Commissioner's Office (ICO) -- aim to protect children's privacy,
but look like sacrificing the free expression of adults and children alike. They are still just plans: we believe and hope you can help the ICO strike the right balance, and abandon compulsory age gates, by making your voice heard.
The rules cover websites (including social media and search engines), apps, connected toys and other online products and services.
The ICO is requesting public feedback on its proposals until Friday 31 May 2019. Please urgently respond
to the consultation to tell them their plan goes too far! You can use these bullet points to help construct your own unique message:
In its current form, the Code is likely to result in widespread age verification across everyday websites, apps and online services for children and adults alike.
Age checks for everyone are a step too
far. Age checks for everyone could result in online content being removed or services withdrawn. Data protection regulators should stick to privacy. It's not the Information Commissioner's job to restrict adults' or children's access to content.
With no scheme to certify which providers can be trusted, third-party age verification technologies will lead to fakes and scams, putting people's personal data at risk.
Large age verification providers
will seek to offer single-sign-in across a wide variety of online services, which could lead to intrusive commercial tracking of children and adults with devastating personal impacts in the event of a data breach.
The authorities have admitted for the first time they will be unable to enforce the porn block law if browsers such as Firefox and Chrome roll out DNS over HTTPS encryption.
The acknowledgement comes as senior representatives of ISPs privately told
Daily Star Online they believe the porn block law could be delayed.
Earlier this month, this publication revealed Mozilla Firefox is thought to be pushing ahead with the roll out of DNS encryption, despite government concerns that it and ISPs will be
unable to see which websites we are looking at and block them.
Speaking at the Internet Service Providers Association's Annual Conference last week, Mark Hoe, from the government's National Cyber Security Centre (NCSC), said they would not be able
to block websites that violate the porn block and enforce the new law. He said:
The age verification -- although those are not directly affected [by DNS encryption] it does affect enforcement of access to non-compliant [sites].
So, whereas we had previously envisaged that ISPs would be able to block access to non-compliant sites, [those] using DNS filtering techniques don't provide a way around that.
Hoe said that the
browsers were responding to legitimate concerns after the Daily Star reported Google Chrome was thought to have changed its stance on the roll out of encrypted DNS.
However, industry insiders still think Firefox will press ahead, potentially
leading to people who want to avoid the ban switching to their browser.
In an official statement, a government spokesman told Daily Star Online the law would come into force in a couple of months, as planned, but without explaining how it will be enforced.
Meanwhile a survey reveals three quarters of Brit parents are worried the porn block could leave them open to ID theft because they will be forced to hand over details to get age verified. AgeChecked surveyed 1,500 UK parents and found
73% would be apprehensive about giving personal information as verification online, for fear of how the data would be used.
Ofcom has published a wide-ranging report about internet usage in Britain. Of course Ofcom takes the opportunity to bolster the UK government's push to censor the internet. Ofcom writes:
When prompted, 83% of adults expressed
concern about harms to children on the internet. The greatest concern was bullying, abusive behaviour or threats (55%) and there were also high levels of concern about children's exposure to inappropriate content including pornography (49%), violent /
disturbing content (46%) and content promoting self-harm (42%). Four in ten adults (39%) were concerned about children spending too much time on the internet.
Many 12 to 15-year-olds said they have experienced potentially harmful
conduct from others on the internet. More than a quarter (28%) said they had had unwelcome friend or follow requests or unwelcome contact, 23% had experienced bullying, abusive behaviour or threats, 20% had been trolled and 19% had experienced someone
pretending to be another person. Fifteen per cent said they had viewed violent or disturbing content.
Social media sites, and Facebook in particular, are the most commonly-cited source of online harm for most of the types of
potential harm we asked about. For example, 69% of adults who said they had come across fake news said they had seen it on Facebook. Among 12 to 15-year-olds, Facebook was the most commonly-mentioned source of most of the potentially harmful experiences.
Most adults say they would support more regulation of social media sites (70%), video sharing sites (64%) and instant messenger services (61%). Compared to our 2018 research, support for more online regulation appears to have
strengthened. However, just under half (47%) of adult internet users recognised that websites and social media sites have a careful balance to maintain in terms of supporting free speech, even where some users might find the content offensive
Tom Watson asked a parliamentary question about the censor-busting technology of DNS over HTTPS.
Up until now, ISPs have been able to intercept website address look ups (via a DNS server) and block the ones that they, or the state, don't like.
This latest internet protocol allows browsers and applications to bypass ISPs' censored DNS servers and use encrypted alternatives that cannot be intercepted by ISPs, and so cannot be censored by the state. (Note that the encrypted DNS providers can still offer a censored service, such as an option for family-friendly feeds, but this is on their own terms and not the state's.)
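The interception point is easy to see in the wire format: in a classic unencrypted DNS query the hostname travels as readable bytes in the packet, which is exactly what ISP filters pattern-match on. A minimal sketch (the hostname and query ID are illustrative):

```python
import struct

def plain_dns_query(hostname):
    """Build a classic (unencrypted) DNS A-record query packet."""
    # Header: arbitrary query ID, recursion desired, one question.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # The hostname is encoded as plain length-prefixed labels -- no encryption.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

packet = plain_dns_query("blocked-example.com")
# Anyone on the network path can read the hostname straight out of the packet:
assert b"blocked-example" in packet
```

With DNS over HTTPS the same bytes travel inside a TLS session, so this kind of on-path pattern matching is no longer possible.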
Anyway, the Labour deputy leader has been enquiring about whether browsers intend to implement the new protocol, perhaps revealing an idea to try to pressurise browsers into not offering options to circumvent the state's blocking list.
Tom Watson Deputy Leader of the Labour Party, Shadow Secretary of State for Digital, Culture, Media and Sport
To ask the Secretary of State for Digital, Culture, Media and Sport, how many internet
browser providers have informed his Department that they will not be adopting the Internet Engineering Task Force DNS over HTTPS ( DOH ) protocol.
Margot James The Minister of State, Department for Culture, Media and Sport
How DOH will be deployed is still a subject of discussion within the industry, both for browser providers and the wider internet industry. We are aware of the public statements made by some browser providers on deployment and we
are seeking to understand definitively their rollout plans. DCMS is in discussions with browser providers, internet industry and other stakeholders and we are keen to see a resolution that is acceptable for all parties.
Here's another indication that the government is trying to preserve its internet censorship capabilities by pressurising browser companies:
The Internet Service Providers Association (ISPA) - representing firms
including BT, Virgin, and Sky - has expressed concerns over the implications the encryption on Firefox could have on internet safety.
A spokesperson said, We remain concerned about the consequences these proposed changes will have
for online safety and security, and it is therefore important that the Government sends a strong message to the browser manufacturers such as Mozilla that their encryption plans do not undermine current internet safety standards in the UK.
Age verification for porn is pushing internet users into areas of the internet that provide more privacy, security and resistance to censorship.
I'd have thought that the security services would prefer internet users to remain in the more open areas
of the internet for easier snooping.
So I wonder whether protecting kids from stumbling across porn is worth the increased difficulty in monitoring terrorists and the like? Or perhaps GCHQ can already see through the encrypted internet.
RQ12: Privacy & Security for Firefox
Mozilla has an interest in potentially integrating more of Tor into Firefox, for the purposes of providing a Super Private Browsing (SPB) mode for our users.
Tor offers privacy and anonymity on the Web, features which are sorely needed in the modern era of mass surveillance, tracking and fingerprinting. However, enabling a large number of additional users to make use of the Tor network
requires solving for inefficiencies currently present in Tor so as to make the protocol optimal to deploy at scale. Academic research is just getting started with regards to investigating alternative protocol architectures and route selection protocols,
such as Tor-over-QUIC, employing DTLS, and Walking Onions.
What alternative protocol architectures and route selection protocols would offer acceptable gains in Tor performance? And would they preserve Tor properties? Is it truly
possible to deploy Tor at scale? And what would the full integration of Tor and Firefox look like?
At the moment, when internet users want to view a page, they specify the page they want in the clear. ISPs can see the page requested and block it if the authorities don't like it. A new internet protocol has been launched that encrypts the specification
of the page requested, so that ISPs can't tell what page is being requested and so can't block it.
This new DNS over HTTPS protocol is already available in Firefox, which also provides access to an uncensored and encrypted DNS server. Users simply have to change the
settings in about:config (being careful of the dragons, of course).
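For the curious, the settings in question are Firefox's Trusted Recursive Resolver (TRR) preferences. In prefs.js form they look something like this (the values shown are illustrative, and preference names may change between Firefox versions):

```javascript
// Firefox DNS-over-HTTPS (TRR) preferences, as set via about:config.
// mode 0 = off; 2 = try DoH first, fall back to normal DNS; 3 = DoH only.
user_pref("network.trr.mode", 2);
// Any DoH resolver endpoint; Mozilla's default partner resolver shown here.
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
```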
Questions have been
raised in the House of Lords about the impact on the UK's ability to censor the internet.
House of Lords, 14th May 2019, Internet Encryption Question
Baroness Thornton Shadow Spokesperson (Health)
2:53 pm, 14th May 2019
To ask Her Majesty's Government what assessment they have made of the deployment of the Internet Engineering Task Force's new "DNS over HTTPS" protocol and its implications for the blocking
of content by internet service providers and the Internet Watch Foundation; and what steps they intend to take in response.
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
My Lords, DCMS is working together with the National Cyber Security Centre to understand and resolve the implications of DNS over HTTPS, also referred to as DoH, for the blocking of content online. This involves liaising
across government and engaging with industry at all levels, operators, internet service providers, browser providers and pan-industry organisations to understand rollout options and influence the way ahead. The rollout of DoH is a complex commercial and
technical issue revolving around the global nature of the internet.
Baroness Thornton Shadow Spokesperson (Health)
My Lords, I thank the Minister for that Answer, and I apologise to the House for
this somewhat geeky Question. This Question concerns the danger posed to existing internet safety mechanisms by an encryption protocol that, if implemented, would render useless the family filters in millions of homes and the ability to track down
illegal content by organisations such as the Internet Watch Foundation. Does the Minister agree that there is a fundamental and very concerning lack of accountability when obscure technical groups, peopled largely by the employees of the big internet
companies, take decisions that have major public policy implications with enormous consequences for all of us and the safety of our children? What engagement have the British Government had with the internet companies that are represented on the Internet
Engineering Task Force about this matter?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
My Lords, I thank the noble Baroness for discussing this
with me beforehand, which was very welcome. I agree that there may be serious consequences from DoH. The DoH protocol has been defined by the Internet Engineering Task Force. Where I do not agree with the noble Baroness is that this is not an obscure
organisation; it has been the dominant internet technical standards organisation for 30-plus years and has attendants from civil society, academia and the UK Government as well as the industry. The proceedings are available online and are not restricted.
It is important to know that DoH has not been rolled out yet and the picture in it is complex -- there are pros to DoH as well as cons. We will continue to be part of these discussions; indeed, there was a meeting last week, convened by the NCSC, with
DCMS and industry stakeholders present.
Lord Clement-Jones Liberal Democrat Lords Spokesperson (Digital)
My Lords, the noble Baroness has raised a very important issue, and it sounds from the
Minister's Answer as though the Government are somewhat behind the curve on this. When did Ministers actually get to hear about the new encrypted DoH protocol? Does it not risk blowing a very large hole in the Government's online safety strategy set out
in the White Paper?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
As I said to the noble Baroness, the Government attend the IETF. The
protocol was discussed from October 2017 to October 2018, so it was during that process. As far as the online harms White Paper is concerned, the technology will potentially cause changes in enforcement by online companies, but of course it does not
change the duty of care in any way. We will have to look at the alternatives to some of the most dramatic forms of enforcement, which are DNS blocking.
Lord Stevenson of Balmacara Opposition Whip (Lords)
My Lords, if there is obscurity, it is probably in the use of the technology itself and the terminology that we have to use--DoH and the other protocols that have been referred to are complicated. At heart, there are two issues at
stake, are there not? The first is that the intentions of DoH, as the Minister said, are quite helpful in terms of protecting identity, and we do not want to lose that. On the other hand, it makes it difficult, as has been said, to see how the Government
can continue with their current plan. We support the Digital Economy Act approach to age-appropriate design, and we hope that that will not be affected. We also think that the soon to be legislated for--we hope--duty of care on all companies to protect
users of their services will help. I note that the Minister says in his recent letter that there is a requirement on the Secretary of State to carry out a review of the impact and effectiveness of the regulatory framework included in the DEA within the
next 12 to 18 months. Can he confirm that the issue of DoH will be included?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
Clearly, DoH is on
the agenda at DCMS and will be included everywhere it is relevant. On the consideration of enforcement--as I said before, it may require changes to potential enforcement mechanisms--we are aware that there are other enforcement mechanisms. It is not true
to say that you cannot block sites; it makes it more difficult, and you have to do it in a different way.
The Countess of Mar Deputy Chairman of Committees, Deputy Speaker (Lords)
My Lords, for the
uninitiated, can the noble Lord tell us what DoH means -- very briefly, please?
Lord Ashton of Hyde The Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
It is not possible
to do so very briefly. It means that, when you send a request to a server and you have to work out which server you are going to by finding out the IP address, the message is encrypted so that the intervening servers are not able to look at what is in
the message. It encrypts the message that is sent to the servers. What that means is that, whereas previously every server along the route could see what was in the message, now only the browser will have the ability to look at it, and that will put more
power in the hands of the browsers.
Lord West of Spithead Labour
My Lords, I thought I understood this subject until the Minister explained it a minute ago. This is a very serious issue. I was
unclear from his answer: is this going to be addressed in the White Paper ? Will the new officer who is being appointed have the ability to look at this issue when the White Paper comes out?
Lord Ashton of Hyde The
Parliamentary Under-Secretary of State for Digital, Culture, Media and Sport
It is not something that the White Paper per se can look at, because it is not within the purview of the Government. The protocol is designed by the
IETF, which is not a government body; it is a standards body, so to that extent it is not possible. Obviously, however, when it comes to regulating and the powers that the regulator can use, the White Paper is consulting precisely on those matters,
which include DNS blocking, so it can be considered in the consultation.
Jackie Doyle-Price is the government's first suicide prevention minister. She seems to believe that this complex and tragic social problem can somehow be cured by censorship and an end to free speech.
She said society had come to tolerate behaviour
online which would not be tolerated on the streets. She urged technology giants including Google and Facebook to be more vigilant about removing harmful comments.
Doyle-Price told the Press Association:
great that we have these platforms for free speech and any one of us is free to generate our own content and put it up there, ...BUT... free speech is only free if it's not abused. I just think in terms of implementing their duty of care to
their customers, the Wild West that we currently have needs to be a lot more regulated by them.
Watching pornography on buses is to be banned, ministers have announced. Bus conductors and the police will be given powers to tackle those who watch sexual material on mobile phones and tablets.
Ministers are also drawing up plans for a
national database of claimed harassment incidents. It will record incidents at work and in public places, and is likely to cover wolf-whistling and cat-calling as well as more serious incidents.
In addition, the Government is considering whether
to launch a public health campaign warning of the effects of pornography -- modelled on smoking campaigns.
As of 15 July, people in the UK who try to access porn on the internet will be required to verify their age or identity online.
The new UK Online Pornography (Commercial Basis) Regulations 2018 law does not affect the Channel Islands but the
States have not ruled out introducing their own regulations.
The UK Department for Censorship, Media and Sport said it was working closely with the Crown Dependencies to make the necessary arrangements for the extension of this legislation to the islands.
A spokeswoman for the States said they were monitoring the situation in the UK to inform our own policy development in this area.
The BBFC has re-iterated that its Age Verification certification scheme does not allow for personal data to be used for another purpose beyond age verification. In particular age verification should not be coupled with electronic wallets.
Presumably this is intended to prevent personal data identifying porn users from being dangerously stored in databases used for other purposes.
In passing, this suggests that there may be commercial issues as age verification systems for porn may not be reusable for age verification for social media usage or identity verification required for online gambling. I suspect that several AV
providers are only interested in porn as a way to get established for social media age verification.
This BBFC warning may be of particular interest to users of the porn site xHamster. The preferred AV option for that website is the electronic wallet option.
The BBFC write in a press release:
The Age-verification Regulator under the UK's Digital Economy Act, the British Board of Film Classification (BBFC), has advised age-verification providers that
they will not be certified under the Age-verification Certificate (AVC) if they use a digital wallet in their solution.
The AVC is a voluntary, non-statutory scheme that has been designed specifically to ensure age-verification
providers maintain high standards of privacy and data security. The AVC will ensure data minimisation, and that there is no handover of personal information used to verify an individual is over 18 between certified age-verification providers and
commercial pornography services. The only data that should be shared between a certified AV provider and an adult website is a token or flag indicating that the consumer has either passed or failed age-verification.
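To make that token-or-flag hand-off concrete, here is a minimal sketch of one way it could work. This is entirely illustrative -- the AVC Standard does not prescribe a format, and the shared HMAC key here is an assumption (a real scheme would use proper key management or public-key signatures). The AV provider signs a bare pass/fail flag, and the site verifies the signature without ever seeing any identity data:

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

# Hypothetical shared secret between AV provider and site, assumed to be
# provisioned out of band.
SHARED_KEY = secrets.token_bytes(32)

def issue_token(passed):
    """AV provider side: sign a bare pass/fail flag -- no identity data."""
    claims = {"av": "pass" if passed else "fail",
              "iat": int(time.time()),       # issue time
              "nonce": secrets.token_hex(8)}  # prevents token reuse tracking
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token):
    """Site side: True/False for a valid token, None if the signature fails."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered token
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["av"] == "pass"
```

The site learns only the flag; tampering with either the payload or the signature invalidates the token, and nothing in it links back to a verified identity.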
Perkins, Policy Director for the BBFC, said:
A consumer should be able to consider that their engagement with an age-verification provider is something temporary.
In order to
preserve consumer confidence in age-verification and the AVC, it was not considered appropriate to allow certified AV providers to offer other services to consumers, for example by way of marketing or by the creation of a digital wallet. The AVC is
necessarily robust in order to allow consumers a high level of confidence in the age-verification solutions they choose to use.
Accredited providers will be indicated by the BBFC's green AV symbol, which is what consumers should
look out for. Details of the independent assessment will also be published on the BBFC's age-verification website, ageverificationregulator.com, so consumers can make an informed choice between age-verification providers.
The Standard for the AVC imposes limits on the use of data collected for the purpose of age-verification, and sets out requirements for data minimisation.
The AVC Standard has been developed by the BBFC and NCC Group - who are experts
in cyber security and data protection - in cooperation with industry, with the support of government, including the National Cyber Security Centre and Chief Scientific Advisors, and in consultation with the Information Commissioner's Office. In order to
be certified, AV Providers will undergo an on-site audit as well as a penetration test.
Further announcements will be made on AV Providers' certification under the scheme ahead of entry into force on July 15.
The House of Lords saw a pre-legislation debate about the government's Online Harms white paper. Peers from all parties queued up to add their praise for internet censorship. And don't even think that maybe the LibDems might be a little more appreciative of
free speech and a little less in favour of state censorship. Don't dream! All the lords that spoke were gagging for it... censorship, that is.
And support for the internet censorship in the white paper wasn't enough. Many of the speakers presumed to add
on their own pet ideas for even more censorship.
I did spot one piece of information that was new to me. It seems that the IWF has extended its remit to include cartoon child porn among the material it works against.
Elspeth Howe said during the debate:
I am very pleased that, since the debates at the end of last year, the Internet Watch Foundation has adopted a new non-photographic images policy and URL block list, so that websites that contain these
images can be blocked by IWF members. It allows for network blocking of non-photographic images to be applied to filtering solutions, and it can prevent pages containing non-photographic images being shown in online search engine results. In 2017, 3,471
reports of alleged non-photographic images of child sexual abuse were made to the IWF; the figure for 2018 was double that, at 7,091 alleged reports. The new IWF policy was introduced only in February, so it is early days to see whether this will be a
success. The IWF is unable to remove content unless that content originates in the UK, which of course is rare. The IWF offers this list on a voluntary basis, not a statutory basis as would occur under the Digital Economy Act. Can the Minister please
keep the House informed about the success of the new policy and, if necessary, address the loopholes in the legislative proposal arising from this White Paper?
The ASA has banned an advert for the extra security provided by VPNs, in response to 9 complainants objecting to the characterisation of the internet as a dangerous place full of hackers and fraudsters.
It is not as if the claims are 'offensive'
or anything, so these are unlikely to be complaints from the public. One has to suspect that the authorities really don't want people to get interested in VPNs lest they evade website blocking and internet surveillance.
Anyway the ASA writes:
A TV ad for NordVPN seen on 9 January 2019. The ad began with a man walking down a train carriage. Text on screen appeared that stated Name: John Smith. A man's voice then said, Look it's me, giving out my credit card details. The ad
then showed the man handing his credit card to passengers on the train. On-screen text appeared that stated Credit card number 1143 0569 7821 9901. CVV/CVC 987. The ad then cut to another shot of the man showing other passengers his phone. The man's
voice said, Sharing my password with strangers. On-screen text stated Password: John123. The ad then cut to a shot of the man taking a photo of himself with a computer generated character. The man's voice said, Being hackers' best friend. The ad then cut
to the man looking down the corridor of the carriage as three computer generated characters walked towards him. The man's voice then said, Your sensitive online data is just as open to snoopers on public WiFi. The man then pulled out his phone, which
showed his security details again. The voice said, Connect to Nord VPN. Help protect your privacy and enjoy advanced internet security. On-screen text stated Advanced security. 6 devices. 30-day money-back guarantee. The ad cut to show the computer
generated characters disappear as the man appeared to use the NordVPN app on his phone.
Nine complainants challenged whether the ad exaggerated the extent to which users were at risk from data theft without their service.
ASA Assessment: Complaints Upheld
The ASA noted that the ad showed the character John Smith walking around a train, handing out personal information including credit card details and passwords to
passengers while he stated he was being hackers' best friend. The character then said Your sensitive online data is just as open to snoopers on public WiFi. Based on that, we considered consumers would understand that use of public WiFi connections would
make them immediately vulnerable to hacking or phishing attempts by virtue of using those connections. Therefore NordVPN needed to demonstrate that using public networks posed such a risk.
With regards to the software, we
acknowledged that the product was designed to add an additional layer of encryption beyond the HTTPS encryption which already existed on public WiFi connections to provide greater security from threats on public networks.
We also acknowledged the explanations from NordVPN and Clearcast that public networks presented security risks and that the use of HTTPS encryption, which was noticeable from the use of a padlock in a user's internet browser, did not in all circumstances indicate that a
connection was completely secure.
However, while we acknowledged that such data threats could exist we considered the overwhelming impression created by the ad was that public networks were inherently insecure and that access to
them was akin to handing out security information voluntarily. As acknowledged by NordVPN, we understood that HTTPS did provide encryption to protect user data; therefore, while data threats existed, data was protected by a significant layer of encryption.
Therefore, because the ad created the impression that users were at significant risk from data theft, when that was not the case, we concluded it was misleading.
The ad must not appear again in
its current form. We told Tefincom SA t/a NordVPN not to exaggerate the risk of data theft without using their service.
We start with a little background on the authorship of the document under review. AVSecure CMO Steve Winyard told XBIZ:
The accreditation plan appears to have very strict rules and was crafted with significant
input from various governmental bodies, including the DCMS (Department for Culture, Media & Sport), NCC Group plc (an expert security and audit firm), GCHQ (U.K. Intelligence and Security Agency), ICO (Information Commissioner's Office) and of course the BBFC itself.
But computer security expert Alec Muffett writes:
This is the document which is being proffered to protect the facts & details of _YOUR_ online #Porn viewing. Let's read it together!
What could possibly go wrong?
This document's approach to data protection is fundamentally flawed.
The (considerably) safer approach - one easier to certificate/validate/police - would be to say that everything is forbidden except for explicitly permitted purposes; you would then allow vendors to appeal for exceptions under review.
It makes a few passes at
pretending that this is what it's doing, but with subjective holes (green) that you can drive a truck through:
What we have here is a rehash of quite a lot of reasonable physical/operational security, business continuity & personnel security management thinking -- with digital stuff almost entirely punted.
It's better than #PAS1296 , but it's still not fit for purpose.
The BBFC has published a detailed standard for age verifiers to be tested against to obtain a green AV kite mark, aiming to convince users that their identity data and porn browsing history are safe.
I have read through the document and conclude
that it is indeed a rigorous standard that I guess will be pretty tough for companies to obtain. I would say it would be almost impossible for a small or even medium size website to achieve the standard and more or less means that using an age
verification service is mandatory.
The standard has lots of good stuff about physical security of data and vetting of staff access to the data.
Age verifier AVSecure commented:
We received the final
documents and terms for the BBFC certification scheme for age verification providers last Friday. This has had significant input from various Government bodies including DCMS (Dept for Culture, Media & Sport), NCC Group plc (expert security and audit
firm), GCHQ (UK Intelligence & Security Agency) ICO (Information Commissioner's Office) and of course the BBFC (the regulator).
The scheme appears to have very strict rules.
It is a multi-disciplined
scheme which includes penetration testing, full and detailed audits, operational procedures over and above GDPR and the DPA 2018 (Data Protection Act). There are onerous reporting obligations with inspection rights attached. It is also a very costly
scheme when compared to other quality standard schemes, again perhaps designed to deter the faint of heart or shallow of pocket.
Consumers will likely be advised against using any systems or methods where the prominent green AV
accreditation kitemark symbol is not displayed.
But will the age verifier be logging your ID data and browsing history?
And the answer is very hard to pin down from the document. At first read it suggests that minimal data will be retained, but a more sceptical read, connecting a few paragraphs together, suggests that the verifier will be required to keep extensive records about the user's porn activity.
Maybe this is a reflection of a recent change of heart. Comments from AVSecure suggested that the BBFC/Government originally mandated a log of user activity but recently decided that keeping a log or not is down to
the age verifier.
As an example of the rather evasive requirements:
8.5.9 Physical Location
Personal data relating to the physical location of a user shall not be collected as part of the
age-verification process unless required for fraud prevention and detection. Personal data relating to the physical location of a user shall only be retained for as long as required for fraud prevention and detection.
Here it sounds
like keeping tabs on location is optional, but another paragraph suggests otherwise:
8.4.14 Fraud Prevention and Detection
Real-time intelligent monitoring and fraud prevention and
detection systems shall be used for age-verification checks completed by the age-verification provider.
Now it seems that the fraud prevention is mandatory, and so a location record is mandatory after all.
Also the use of the phrase only be retained for as long as required for fraud prevention and detection seems a little misleading too, as in reality fraud prevention will be required for as long as the customer keeps on using the service. This may as well be forever.
There are other statements that sound good at first read, but don't really offer anything substantial:
8.5.6 Data Minimisation
Only the minimum amount of personal data required to verify a user's age shall be collected.
But if the minimum is to provide name and address plus, say, a driving licence number or a credit card number, then the minimum is actually pretty much all of it. In fact only the porn pass methods offer any scope for 'truly minimal' data collection. Perhaps the minimal data also applies to the verified mobile phone method: although the phone company probably knows your identity, maybe it won't need to pass it on to the age verifier.
What does the porn site get to know?
A rare unequivocal and reassuring statement is:
8.5.8 Sharing Results
Age-verification providers shall only share the result of an age-verification check (pass or fail) with the requesting website.
So it seems that identity details won't be passed to the websites themselves.
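To make the data-minimisation point concrete, here is a hypothetical sketch of the only payload an age verifier would need to return to a website under clause 8.5.8. The field name and JSON shape are my own assumptions for illustration, not anything specified in the BBFC standard:

```python
import json

def av_check_response(passed: bool) -> str:
    """Hypothetical minimal payload an age-verification provider might
    return under clause 8.5.8: just the pass/fail result, with no name,
    date of birth, document numbers or other identity attributes."""
    return json.dumps({"result": "pass" if passed else "fail"})
```

So `av_check_response(True)` yields `{"result": "pass"}` and nothing else; anything beyond that single bit would exceed what the clause says may be shared.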
However the converse is not so clear:
8.5.6 Data Minimisation
Information about the requesting website that the user has visited shall not be collected against the user's activity.
Why add the phrase 'against the user's activity'? This is worded such that information about the requesting website could indeed be collected for another reason, fraud detection maybe.
Maybe the scope for an age verifier to maintain a complete log of porn viewing is limited more by the practical requirement for a website to record a successful age verification in a
cookie such that the age verifier only gets to see one interaction with each website.
No doubt we shall soon find out whether the government wants a detailed log of porn viewed, as it will be easy to spot if a website queries the age
verifier for every film you watch.
And what about all this reference to fraud detection? Presumably the BBFC/Government is a little worried that passwords and accounts will be shared by enterprising kids.
But on the other hand it may make life tricky for those using shared devices, or perhaps those who suddenly move from London to New York in an instant, when in fact this is totally normal for someone using a VPN on a PC.
The BBFC/Government have moved on a long way from the early days, when the lawmakers created the law without any real protection for porn users and the BBFC first proposed that this could be rectified by asking porn companies to voluntarily follow 'best practice' in keeping people's data safe.
A definite improvement now, but I think I will stick to my VPN.
A TV channel, a porn producer, an age verifier and maybe even the government got together this week to put out a live test of age verification. The test was implemented on a specially created website featuring a single porn video.
The test required a well advertised website to provide enough traffic of viewers positively wanting to see the content. Channel 4 obliged with its series Mums Make Porn. The series followed a group of mums making a porn video that they felt would be
more sex positive and less harmful to kids than the more typical porn offerings currently on offer.
The mums did a good job and produced a decent video with a more loving and respectful interplay than is the norm. The video however is still proper
hardcore porn and there is no way it could be broadcast on Channel 4. So the film was made available, free of charge, on its own dedicated website complete with an age verification requirement.
The website was announced as a live test for
AgeChecked software to see how age verification would pan out in practice. It featured the following options for age verification:
entering full credit card details + email
entering driving licence number + name and address + email
mobile phone number + email (the phone must have been verified as 18+ by the service provider and must be ready to receive an SMS message containing login details)
Nothing has been published in detail about the aims of the test but presumably they were interested in the basic questions such as:
What proportion of potential viewers will be put off by the age verification?
What proportion of viewers would be stupid enough to enter their personal data?
Which options of identification would be preferred by viewers?
The official test 'results'
Alastair Graham, CEO of AgeChecked provided a few early answers inevitably claiming that:
The results of this first mainstream test of our software were hugely positive.
He went on to claim that customers are willing to participate in the process, but noted that the verified phone number method emerged as by far the most popular method of verification. He said that this finding
would be a key part of this process moving forward.
Reading between the lines perhaps he was saying that there wasn't much appetite for handing over detailed personal identification data as required by the other two methods.
I suspect that
we will never get to hear more from AgeChecked especially about any reluctance of people to identify themselves as porn viewers.
The unofficial test results
Maybe they were also interested in other questions too:
Will people try and work around the age verification requirements?
If people find weaknesses in the age verification defences, will they pass on their discoveries to others?
Interestingly the age verification requirement was easily sidestepped by those with a modicum of knowledge about downloading videos from websites such as YouTube and PornHub. The age verification mechanism effectively only hid the start button from
view. The actual video remained available for download, whether people age verified or not. All it took was a little examination of the page code to locate the video. There are several tools that allow this: video downloader addons, file downloaders or
just using the browser's built in debugger to look at the page code.
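As a rough illustration of why hiding only the start button is no protection: the video URL typically still sits in the HTML served to every visitor, where a one-line pattern match finds it. The markup below is invented for the example (real pages vary, and the URL is a placeholder), but the technique is what the downloader tools mentioned above automate:

```python
import re

# Invented sample of the kind of page such a site might serve: the play
# button is gated behind the AV check, but the <video> source URL is
# delivered to every visitor regardless of whether they verified.
sample_html = """
<div id="av-gate" style="display:none"><button>Play</button></div>
<video id="player">
  <source src="https://example.com/media/film.mp4" type="video/mp4">
</video>
"""

def find_video_urls(html: str) -> list:
    """Return any direct .mp4 URLs present in the page source."""
    return re.findall(r'src="(https?://[^"]+\.mp4)"', html)

print(find_video_urls(sample_html))  # ['https://example.com/media/film.mp4']
```

A properly built commercial site would instead only serve the media from behind a server-side session check, so the URL is useless without having passed verification.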
Presumably the code for the page was knocked up quickly so this flaw could have been a simple oversight that is not likely to occur in properly constructed commercial websites.
Or perhaps the vulnerability was deliberately included as part of the test to see if people would pick up on it.
However it did identify that there is a community of people willing to stress test age verification restrictions and see if workarounds can be found and shared.
I noted on Twitter that several people had posted about the ease of downloading the video and had suggested a number of tools or methods that enabled this.
There was also an interesting article posted on
achieving age verification using an expired credit card. Maybe that is not so catastrophic, as it still identifies a cardholder as over 18, even if it cannot be used to make a payment. But of course it may open new possibilities for misuse of old data. Note that random numbers are unlikely to work because of checksum algorithms. Presumably age verification companies could strengthen the security by testing that a small transaction works, but intuitively this would have significant cost implications. I guess that to achieve any level of take up, age verification needs to be cheap for both websites and viewers.
It was very heartening to see how many people were helpfully contributing their thoughts
about testing the age verification software.
Over the course of a couple of hours reading, I learnt an awful lot about how websites hide and protect video content, and what tools are available to see through the protection. I suspect that many
others will soon be doing the same... and I also suspect that young minds will be far more adept than I at picking up such knowledge.
A final thought
I feel a bit sorry for small websites who sell content. It adds a whole new level of complexity, as a currently open preview area now needs to be locked away behind an age verification screen. Many potential customers will be put off by having to jump through hoops just to see the preview material. To then ask them to
enter all their credit card details again to subscribe, may be a hurdle too far.
Update: The Guardian reports that age verification was easily circumvented
The Guardian reported that the credit card check used by AgeChecked could be easily fooled by generating a totally false credit card number. Note that a purely random number will not work, as there is a well known checksum algorithm (the Luhn algorithm) which invalidates most random numbers. But anyone who knows or looks up the algorithm would be able to generate acceptable credit card numbers that would at least defeat AgeChecked.
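The checksum used by payment card numbers is the Luhn algorithm, which is public and trivial to implement. A short sketch shows why passing the checksum proves nothing about a card being real:

```python
def luhn_valid(number: str) -> bool:
    """True if the digit string passes the Luhn checksum used by
    payment card numbers."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:        # and sum the digits of any two-digit result
                d -= 9
        total += d
    return total % 10 == 0

def with_check_digit(payload: str) -> str:
    """Append the single check digit that makes the payload pass Luhn."""
    for check in "0123456789":
        if luhn_valid(payload + check):
            return payload + check

# Any made-up digit string can be given a valid check digit, so a
# Luhn-only test happily accepts numbers no bank has ever issued.
print(with_check_digit("7992739871"))  # 79927398713 passes the checksum
```

This is why only a real authorisation attempt against the card network would catch such fakes, and, as noted above, that carries a real cost per check.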
Or they would have been had AgeChecked not now totally removed the credit card check
option from its choice of options.
Still, the damage was done: the widely distributed Guardian article established doubts about the age verification process.
Of course the workaround is not exactly trivial, so age verification will still stop younger kids from 'stumbling on porn', which seems to be the main fall back position of this entire sorry scheme.
VPNCompare is reporting that internet users in Britain are responding to the upcoming porn censorship regime by investigating the option to get a VPN so as to work around most age verification requirements without handing over dangerous identity information.
VPNCompare says that the number of UK visitors to its website has increased by 55% since the start date of the censorship scheme was announced. The website also stated that Google searches for VPNs had tripled. Website editor Christopher
Seward told the Independent:
We saw a 55 per cent increase in UK visitors alone compared to the same period the previous day. As the start date for the new regime draws closer, we can expect this number to rise even
further and the number of VPN users in the UK is likely to go through the roof.
The UK Government has completely failed to consider the fact that VPNs can be easily used to get around blocks such as these.
Whilst the immediate assumption is that porn viewers will reach for a VPN to avoid handing over dangerous identity information, there may be another reason to take out a VPN, a lack of choice of appropriate options for age validation.
Three companies run the six biggest adult websites. Mindgeek owns Pornhub, RedTube and YouPorn. Then there is xHamster, and finally XVideos and XNXX are connected.
Now Mindgeek has announced that it will partner with Portes Card for age
verification, which has options for identity verification, giving an age verified mobile phone number, or else buying a voucher in a shop and showing age ID to the shop keeper (which is hopefully not copied or recorded).
xHamster has announced that it is partnering with 1Account, which accepts a verified mobile phone, credit card, debit card, or UK drivers licence. It does not seem to have an option for anonymous verification beyond a phone being age verified without having to hand over identity details.
Perhaps most interesting is that both of these age verifiers are smart phone based apps. Perhaps the only option for people without a phone is to get a VPN. I also spotted that most age verification providers that I have looked at seem
to be only interested in UK cards, drivers licences or passports. I'd have thought there may be legal issues in not accepting EU equivalents. But foreigners may also be in the situation of not being able to age verify and so need a VPN.
Of course, given that there is no age verification option common to the major porn websites, it may just turn out to be an awful lot simpler just to get a VPN.
The BBFC (on its Age Verification website)...err...no!...:
An assessment and accreditation under the AVC is not a
guarantee that the age-verification provider and its solution (including its third party companies) comply with the relevant legislation and standards, or that all data is safe from malicious or criminal interference.
the BBFC shall not be responsible for any losses, damages, liabilities or claims of whatever nature, direct or indirect, suffered by any age-verification provider, pornography services or consumers/ users of age-verification provider's services or
pornography services or any other person as a result of their reliance on the fact that an age-verification provider has been assessed under the scheme and has obtained an Age-verification Certificate or otherwise in connection with the scheme.
The UK will become the first country in the world to bring in age-verification for online pornography when the measures come into force on 15 July 2019.
It means that commercial providers of online pornography will be required by law to carry out
robust age-verification checks on users, to ensure that they are 18 or over.
Websites that fail to implement age-verification technology face having payment services withdrawn or being blocked for UK users.
The British Board of Film
Classification (BBFC) will be responsible for ensuring compliance with the new laws. They have confirmed that they will begin enforcement on 15 July, following an implementation period to allow websites time to comply with the new standards.
Minister for Digital Margot James said that she wanted the UK to be the most censored place in the world to be online:
Adult content is currently far too easy for children to access online. The introduction of mandatory age-verification is a world-first, and we've taken the time to balance privacy concerns with the need to protect
children from inappropriate content. We want the UK to be the safest place in the world to be online, and these new laws will help us achieve this.
Government has listened carefully to privacy concerns and is clear that
age-verification arrangements should only be concerned with verifying age, not identity. In addition to the requirement for all age-verification providers to comply with General Data Protection Regulation (GDPR) standards, the BBFC have created a
voluntary certification scheme, the Age-verification Certificate (AVC), which will assess the data security standards of AV providers. The AVC has been developed in cooperation with industry, with input from government.
Age-verification solutions which offer these robust data protection conditions will be certified following an independent assessment and will carry the BBFC's new green 'AV' symbol. Details will also be published on the BBFC's age-verification website,
ageverificationregulator.com so consumers can make an informed choice between age-verification providers.
BBFC Chief Executive David Austin said:
The introduction of age-verification to restrict access to
commercial pornographic websites to adults is a ground breaking child protection measure. Age-verification will help prevent children from accessing pornographic content online and means the UK is leading the way in internet safety.
On entry into force, consumers will be able to identify that an age-verification provider has met rigorous security and data checks if they carry the BBFC's new green 'AV' symbol.
The change in law is part of the
Government's commitment to making the UK the safest place in the world to be online, especially for children. It follows last week's publication of the Online Harms White Paper which set out clear responsibilities for tech companies to keep UK citizens
safe online, how these responsibilities should be met and what would happen if they are not.
Ministers are facing a growing and deserved backlash against draconian new web laws which will lead to totalitarian-style censorship.
The stated aim of the Online Harms White Paper is to target offensive material such as terrorists' beheading
videos. But under the document's provisions, the UK internet censor would have complete discretion to decide what is harmful, hateful or bullying -- potentially including coverage of contentious issues such as transgender rights.
After MPs lined
up to demand a rethink, Downing Street has put pressure on Culture Secretary Jeremy Wright to narrow the definition of harm in order to exclude typical editorial content.
MPs have been led by Jacob Rees-Mogg, who said last night that while it was
obviously a worthwhile aim to rid the web of the evils of terrorist propaganda and child pornography, it should not be at the expense of crippling a free Press and gagging healthy public expression. He added that the regulator could be used as a tool of
repression by a future Jeremy Corbyn-led government, saying:
Sadly, the Online Harms White Paper appears to give the Home Secretary of the day the power to decide the rules as to which content is considered palatable.
Who is to say that less scrupulous governments in the future would not abuse this new power?
I fear this could have the unintended consequence of reputable newspaper websites being subjected to quasi-state control. British newspapers' freedom to hold authority to account is an essential bulwark of our democracy.
We must not now allow what amounts to a Leveson-style state-controlled regulator for the Press by the back door.
He was backed by Charles Walker, vice-chairman of the Tory Party's powerful backbench 1922 Committee, who said:
We need to protect people from the well-documented evils of the internet -- not in order to suppress views or
opinions to which they might object.
In last week's Mail on Sunday, former Culture Secretary John Whittingdale warned that the legislation was more usually associated with autocratic regimes including those in China, Russia or North Korea.
Tory MP Philip Davies joined the criticism last night, saying:
Of course people need to be protected from the worst excesses of what takes place online. But equally, free speech in a free country is very,
very important too. It's vital we strike the right balance. While I have every confidence that Sajid Javid as Home Secretary would strike that balance, can I have the same confidence that a future Marxist government would not abuse the proposed new powers?
And Tory MP Martin Vickers added:
While we must take action to curb the unregulated wild west of the internet, we must not introduce state control of the Press as a result.
The legislators behind the Digital Economy Act couldn't be bothered to include any provisions for websites and age verifiers to keep the identity and browsing history of porn users safe. It has now started to dawn on the authorities that this was a mistake. They are currently implementing a voluntary kitemark scheme to try and assure users that porn websites' and age verifiers' claims of keeping data safe can be borne out.
It is hardly surprising that significant numbers of people are likely to
be interested in avoiding having to register their identity details before being able to access porn.
It seems obvious that information about VPNs and Tor will therefore be readily circulated amongst any online community with an interest in
keeping safe. But perhaps it is a little bit of a shock to see it in such large letters in a mainstream magazine on the shelves of supermarkets and newsagents.
And perhaps another thought is that once the BBFC starts requiring ISPs to block non-compliant websites, circumvention will be the only way to see your blocked favourite websites. So people stupidly signing up to age verification will have less access to porn and a worse service than those that circumvent it.
Critics of the government's flagship internet regulation policy are warning it could lead to a North Korean-style censorship regime, where regulators decide which websites Britons are allowed to visit, because of how broad
the proposals are.
Index on Censorship has raised strong concerns about the government's focus on tackling unlawful and harmful online content, particularly since the publication of the Internet Safety Strategy
Green Paper in 2017. In October 2018, Index published a joint statement with Global Partners Digital and Open Rights Group noting that any proposals that regulate content are likely to have a significant impact on the enjoyment and exercise of human
rights online, particularly freedom of expression.
We have also met with officials from the Department for Digital, Culture, Media and Sport, as well as from the Home Office, to raise our thoughts and concerns.
With the publication of the Online Harms White Paper , we would like to reiterate our earlier points.
While we recognise the government's desire to tackle unlawful content online, the proposals mooted in the
white paper -- including a new duty of care on social media platforms , a regulatory body , and even the fining and banning of social media platforms as a sanction -- pose serious risks to freedom of expression online.
These measures could put the United Kingdom in breach of its obligations to respect and promote the right to freedom of expression and information as set out in Article 19 of the International Covenant on Civil and Political Rights and Article 10 of the European
Convention on Human Rights, amongst other international treaties.
Social media platforms are a key means for tens of millions of individuals in the United Kingdom to search for, receive, share and impart information, ideas and
opinions. The scope of the right to freedom of expression includes speech which may be offensive, shocking or disturbing. The proposed responses for tackling online safety may lead to disproportionate amounts of legal speech being curtailed, undermining
the right to freedom of expression.
In particular, we raise the following concerns related to the white paper:
Lack of evidence base
The wide range of different harms which the government is seeking to tackle in this policy process require different, tailored responses. Measures proposed must be underpinned by strong evidence, both of the likely scale of the harm
and the measures' likely effectiveness. The evidence which formed the base of the Internet Safety Strategy Green Paper was highly variable in its quality. Any legislative or regulatory measures should be supported by clear and unambiguous evidence of
their need and effectiveness.
Duty of care concerns/ problems with 'harm' definition
Index is concerned at the use of a duty of care regulatory approach. Although social media has often been compared to the public square, the duty of care model is not an exact fit, because this would introduce regulation -- and restriction -- of speech between individuals based on criteria that are far broader than current law. A failure to accurately define "harmful" content risks incorporating legal speech, including political expression, expressions of religious
views, expressions of sexuality and gender, and expression advocating on behalf of minority groups.
Risks in linking liability/sanctions to platforms over third party content
While well-meaning, proposals such as these contain serious risks, such as requiring or incentivising wide-sweeping removal of lawful and innocuous content. The imposition of time limits for removal, heavy sanctions for non-compliance
or incentives to use automated content moderation processes only heighten this risk, as has been evidenced by the approach taken in Germany via its Network Enforcement Act (or NetzDG), where there is evidence of the over-removal of lawful content.
Lack of sufficient protections for freedom of expression.
The obligation to protect users' rights online that is included in the white paper gives insufficient weight to freedom of expression. A much clearer obligation to protect freedom of expression should guide development of future legislation.
In recognition of the UK's commitment to the multistakeholder model of internet governance, we hope all relevant stakeholders, including civil society experts on digital rights and freedom of expression, will be fully
engaged throughout the development of the Online Harms bill.
PI welcomes the UK government's commitment to investigating and holding companies to account. When it comes to regulating the internet, however, we must move with care. Failure to do so
will introduce, rather than reduce, "online harms". A 12-week consultation on the proposals has also been launched today. PI plans to file a submission to the consultation as it relates to our work. Given the breadth of the proposals, PI calls
on others to respond to the consultation as well.
Here are our initial suggestions:
proceed with care: proposals of regulation of content on digital media platforms should be very carefully evaluated, given the high risks of negative impacts on expression, privacy and other human rights. This is a very complex
challenge and we support the need for broad consultation before any legislation is put forward in this area.
do not lose sight of how data exploitation facilitates the harms identified in the report and ensure any new regulator works closely with others working to tackle these issues.
assess carefully the delegation of sole responsibility to companies as adjudicators of content. This would empower corporate judgment over content, which would have implications for human rights, particularly freedom of expression.
require that judicial or other independent authorities, rather than government agencies, are the final arbiters of decisions regarding what is posted online and enforce such decisions in a manner that is consistent with human rights.
assess the privacy implications of any demand for "proactive" monitoring of content in digital media platforms.
ensure that any requirement or expectation of deploying automated decision making/AI is in full compliance with existing human rights and data protection standards (which, for example, prohibit, with limited exceptions, relying on
solely automated decisions, including profiling, when they significantly affect individuals).
ensure that company transparency reports include information related to how the content was targeted at users.
require companies to provide efficient reporting tools in multiple languages, to report on action taken with regard to content posted online. Reporting tools should be accessible, user-friendly, and easy to find. There should be
full transparency regarding the complaint and redress mechanisms available and opportunities for civil society to take action.
UK Now Proposes Ridiculous Plan To Fine Internet Companies For Vaguely Defined Harmful Content
Last week Australia rushed through a ridiculous bill to fine internet companies if they happen to host any abhorrent content. It
appears the UK took one look at that nonsense and decided it wanted some too. On Monday it released a white paper calling for massive fines for internet companies for allowing any sort of online harms. To call the plan nonsense is being way too harsh to nonsense.
The plan would result in massive, widespread, totally unnecessary censorship solely for the sake of pretending to do something about the fact that some people sometimes do not so nice things online. And it will place all
of the blame on the internet companies for the (vaguely defined) not so nice things that those companies' users might do online.
We agree with your characterisation of the online harms white paper as a flawed attempt to deal with serious problems (Regulating the internet demands clear thought about hard problems, Editorial, 9 April). However, we would draw your attention to
several fundamental problems with the proposal which could be disastrous if it proceeds in its current form.
Firstly, the white paper proposes to regulate literally the entire internet, and censor anything non-compliant. This
extends to blogs, file services, hosting platforms, cloud computing; nothing is out of scope.
Secondly, there are a number of undefined harms with no sense of scope or evidence thresholds to establish a need for action. The lawful
speech of millions of people would be monitored, regulated and censored.
The result is an approach that would make China's state censors proud. It would be very likely to face legal challenge. It would give the UK the widest and
most prolific internet censorship in an apparently functional democracy. A fundamental rethink is needed.
Antonia Byatt Director, English PEN, Silkie Carlo Big Brother Watch Thomas Hughes Executive director, Article 19 Jim Killock
Executive director, Open Rights Group Joy Hyvarinen Head of advocacy, Index on Censorship
Comment: The DCMS Online Harms Strategy must design in fundamental rights
Increasingly over the past year, DCMS has become fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to certain
online content and make Facebook in particular liable for the uglier and darker side of its user-generated material.
DCMS talks a lot about the 'harm' that social media causes. But its proposals fail to explain how harmful impacts on free expression would be avoided.
On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the
mechanisms to deliver this protection and the issues at play are not explored in any detail at all.
In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such
measures in practice is open to debate, terms and conditions, active moderation of posts and algorithmic choices about what content is pushed or downgraded are all geared towards rooting out illegal activity and creating open and welcoming shared spaces.
DCMS hasn't in the White Paper elaborated on what its proposed duty would entail. If it's drawn narrowly so that it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it's drawn
widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.
If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread
prior restraint. Platforms can't always know in advance the real-world harm that online content might cause, nor can they accurately predict what people will say or do when on their platform. The only way to avoid liability is to impose wide-sweeping
upload filters. Scaled implementation of this relies on automated decision-making and algorithms, which risks even greater speech restrictions given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.
DCMS's policy is underpinned by societally-positive intentions, but in its drive to make the internet "safe", the government seems not to recognise that ultimately its proposals don't regulate social media companies, they
regulate social media users. The duty of care is ostensibly aimed at shielding children from danger and harm but it will in practice bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.
Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.
The duty of care seems to be broadly about whether systemic interventions reduce overall "risk". But must the risk be always to an identifiable individual, or can it be broader - to identifiable vulnerable groups? To society
as a whole? What evidence of harm will be required before platforms should intervene? These are all questions that presently remain unanswered.
DCMS's approach appears to be that it will be up to the regulator to answer these
questions. But whilst a sensible regulator could take a minimalist view of the extent to which commercial decisions made by platforms should be interfered with, allowing government to distance itself from taking full responsibility over the fine
detailing of this proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It enables the government to opt not to create a transparent, judicially reviewable legislative
framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how that scheme will affect UK citizens' free speech, both in the immediate future and for years to come.
How the government decides to legislate and regulate in this instance will set a global norm.
The UK government is clearly keen to lead international efforts to regulate online content. It
knows that if the outcome of the duty of care is to change the way social media platforms work that will apply worldwide. But to be a global leader, DCMS needs to stop basing policy on isolated issues and anecdotes and engage with a broader conversation
around how we as society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and regulatory model that emerge from this process as a blueprint for more widespread internet censorship.
The House of Lords report on the future of the internet, published in early March 2019, set out ten principles
it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper introduces offers a positive opportunity to collectively reflect, across industry, civil society, academia
and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this process to emphasise its support for the fundamental right to freedom of expression - and in a way that goes beyond mere
expression of principle - this would also reverberate around the world, particularly at a time when press and journalistic freedom is under attack.
The White Paper expresses a clear desire for tech companies to "design in
safety". As the process of consultation now begins, we call on DCMS to "design in fundamental rights". Freedom of expression is itself a framework, and must not be lightly glossed over. We welcome the opportunity to engage with DCMS
further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for everyone.
Totalitarian-style new online code that could block websites and fine them £20 million for harmful content will not limit press freedom, Culture Secretary promises
Government proposals have sparked fears that they could backfire and turn Britain into the first Western nation to adopt the kind of censorship usually associated with totalitarian regimes.
Former culture secretary John Whittingdale drew parallels with China, Russia and North Korea. Matthew Lesh of the Adam Smith Institute, a free market think-tank, branded the white paper a historic attack on freedom of speech.
[However] draconian laws designed to tame the web giants will not limit press freedom, the Culture Secretary said yesterday.
In a letter to the Society of Editors, Jeremy Wright vowed that journalistic or
editorial content would not be affected by the proposals.
And he reassured free speech advocates by saying there would be safeguards to protect the role of the Press.
But as for safeguarding the free speech rights of
ordinary British internet users, he more or less told them they could fuck off!
Zippyshare is a long running data locker and file sharing platform that is well known particularly for the distribution of porn.
Last month UK users noted that they have been blocked from accessing the website and that it can now only be accessed
via a VPN.
Zippyshare themselves have made no comment about the block, but TorrentFreak have investigated the censorship and determined that the block is self-imposed and is not down to action by UK courts or ISPs.
Alan wonders if this
is a premature reaction to the Great British Firewall, noting it's quite a popular platform for free porn.
Of course it poses the interesting question that if websites generally decide to address the issue of UK porn censorship by self-imposed
blocks, then keen users will simply have to get themselves VPNs. Merely being willing to sign up for age verification won't help. Perhaps VPNs will be next to mandatory for British porn users, and age verification will become an unused technology.
In the first online safety laws of their kind, social media companies and tech firms will be legally required to protect their users and face tough penalties if they do not comply.
As part of the Online Harms White Paper, a joint proposal from the Department for Digital, Culture, Media and Sport and Home Office, a new independent regulator will be introduced to ensure companies meet their responsibilities.
This will include a mandatory 'duty of care', which will require companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services. The regulator will have effective enforcement tools, and we are consulting
on powers to issue substantial fines, block access to sites and potentially to impose liability on individual members of senior management.
A range of harms will be tackled as part of the
Online Harms White Paper, including inciting violence and violent content, encouraging suicide, disinformation, cyber bullying and
children accessing inappropriate material.
There will be stringent requirements for companies to take even tougher action to ensure they tackle terrorist and child sexual exploitation and abuse content.
The new proposed laws will apply to any company that allows users to share or discover user generated content or interact with each other online. This means a wide range of companies of all sizes are in scope, including social media platforms, file hosting
sites, public discussion forums, messaging services, and search engines.
A regulator will be appointed to enforce the new framework. The Government is now consulting on whether the regulator should be a new or existing body. The
regulator will be funded by industry in the medium term, and the Government is exploring options such as an industry levy to put it on a sustainable footing.
A 12-week consultation on the proposals has also been launched today. Once this concludes we will then set out the action we will take in
developing our final proposals for legislation.
Tough new measures set out in the White Paper include:
A new statutory 'duty of care' to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.
Further stringent requirements
on tech companies to ensure child abuse and terrorist content is not disseminated online.
Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful
content on their platforms and what they are doing to address this.
Making companies respond to users' complaints, and act to address them quickly.
Codes of practice, issued by the regulator,
which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.
A new "Safety by Design"
framework to help companies incorporate online safety features in new apps and platforms from the start.
A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and
malicious behaviours online, including catfishing, grooming and extremism.
The UK remains committed to a free, open and secure Internet. The regulator will have a legal duty to pay due regard to innovation, and to protect users' rights online, being particularly mindful not to infringe privacy and freedom of expression.
Recognising that the Internet can be a tremendous force for good, and that technology will be an integral part of any solution, the new plans have been designed to promote a culture of continuous improvement among
companies. The new regime will ensure that online firms are incentivised to develop and share new technological solutions, like Google's "Family Link" and Apple's Screen Time app, rather than just complying with minimum requirements. Government
has balanced the clear need for tough regulation with its ambition for the UK to be the best place in the world to start and grow a digital business, and the new regulatory framework will provide strong protection for our citizens while driving
innovation by not placing an impossible burden on smaller companies.
In an interesting article on the Government age verification and internet porn censorship scheme, technology website Techdirt reports on the ever slipping deadlines.
Seemingly with detailed knowledge of government requirements for the scheme, Tim
Cushing explains that until recently the government had demanded that age verification companies retain a site log, presumably recording people's porn viewing history. He writes:
The government refreshed its porn
blockade late last year, softening a few mandates into suggestions. But the newly-crafted suggestions were backed by the implicit threat of heavier regulation. All the while, the government has ignored the hundreds of critics and experts who have pointed
out the filtering plan's numerous problems -- not the least of which is a government-mandated collection of blackmail fodder.
The government is no longer demanding retention of site logs by sites performing age verification, but
it's also not telling companies they shouldn't retain the data. Companies likely will retain this data anyway, if only to ensure they have it on hand when the government inevitably changes its mind.
Cushing concludes with a comment
perhaps suggesting that the Government wants a far more invasive snooping regime than commercial operators are able or willing to provide. He notes:
Shortly, April 1st will come and go with no porn filter. The next
best guess is around Easter (April 21st). But I'd wager that date comes and goes as well with zero new porn filters. The UK government only knows what it wants. It has no idea how to get it.
And it seems that some age verification
companies are getting wound up by negative internet and press coverage of the dangers inherent in their services. @glynmoody tweeted:
I see age verification companies that will create the biggest database of people's
porn preferences - perfect for blackmail - are now trying to smear people pointing out this is a stupid idea as deliberately creating a climate of fear and confusion about the technologies nope
The age verification company AgeChecked and porn producer Erika Lust have created a test website for a live trial of age verification.
The test website iwantfourplay.com features the porn video created by the mums in the Channel 4 series Mums Make Porn.
The website presented the video free of charge, but only after viewers passed one of three age verification options:
entering full credit card details + email
entering driving licence number + name and address + email
mobile phone number + email (the phone must have been verified as 18+ by the service provider and must be ready to receive an
SMS message containing login details)
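The three routes above amount to a simple eligibility check before the video is served. A minimal sketch of that logic, in Python (all field names and the function itself are hypothetical illustrations, not AgeChecked's actual API):

```python
# Hypothetical sketch of the three age-verification routes described above.
# Field names are illustrative only; AgeChecked's real interface is not public.

def choose_verification_route(submission: dict) -> str:
    """Return which route a submission qualifies for, or raise ValueError.

    Routes, per the trial site:
      1. full credit card details + email
      2. driving licence number + name and address + email
      3. mobile number + email (number must already be verified 18+
         by the operator, and able to receive an SMS with login details)
    """
    if not submission.get("email"):
        raise ValueError("all routes require an email address")
    if submission.get("credit_card"):
        # Note: a debit card would not qualify, as discussed below.
        return "credit_card"
    if submission.get("driving_licence") and submission.get("address"):
        # Note: the licence must belong to someone 18 or over.
        return "driving_licence"
    if submission.get("mobile_number"):
        if not submission.get("mobile_verified_18plus"):
            raise ValueError("an unverified mobile phone cannot be used")
        return "mobile_sms"
    raise ValueError("no acceptable verification details supplied")
```

A flow like this makes the criticism below concrete: the eligibility rules (debit cards rejected, unverified phones rejected) are only discoverable after the user has already typed in their details, unless the form states them up front.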
The AgeChecked forms are unimpressive; the company seems reluctant to inform customers about the requirements before they hand over their details. The forms do not even mention that the age requirement is 18+. They certainly do not make it clear that, say, a
debit card is unacceptable, or that a driving licence is not acceptable if registered to a 17 year old. It seems that they would prefer users to type in all their details and only then be told sorry, the card/licence/phone number doesn't pass the test. In
fact the mobile phone option is distinctly misleading: it suggests that it may be quicker to use the other options if the mobile phone is not age verified. It should say more positively that an unverified phone cannot be used.
The forms also make contradictory claims about users' personal data not being stored by AgeChecked (or shared with iwantfourplay.com)... but then go on to ask for an email address for logging into existing AgeChecked accounts, so obviously that
item of personal data must be stored by AgeChecked for practical recurring usage.
AgeChecked has already reported on the early results from the test. Alastair Graham, CEO of AgeChecked said:
The results of this
first mainstream test of our software were hugely encouraging.
Whilst an effective date for the new legislation's implementation is yet to be confirmed by the British Board of Film Classification, this suggests a clear
preparedness to offer robust, secure age verification procedures to the adult industry's 24-30 million UK users.
It also highlights that customers are willing to participate in the process when they know that they are being
verified by a secure provider, with whom their identity is fully protected.
The popularity of mobile phone verification was interesting and presumably due to the simplicity of using this device. This is something that we foresee as
being a key part of this process moving forward.
Don't these people spout rubbish sometimes, pretending that not wanting to have one's credit card details, name and address associated with watching porn is just down to a 'climate of fear'.
Graham also did not mention other, perhaps equally important, results from the test. In particular I wonder how many people seeking the video simply decided not to proceed further when presented with the age verification options.
I wonder also how many people watched the video without going through age verification. I noted that with a little jiggery pokery, the video could be viewed via a VPN. I also noted that although the age verification got in the way of clicking on the video,
file/video downloading browser addons were still able to access the video without bothering with the age verification.
And congratulations to the mums for making a good porn video. It features very attractive actors participating in all the usual
porn elements, whilst getting across the mums' wishes for a more positive/loving approach to sex.