The UK Supreme Court has today ruled that trade mark holders are not able to compel ISPs to bear the cost of implementing orders to block websites selling counterfeit goods.
Jim, Alex and Myles at the Supreme Court
Open Rights Group acted as an intervener in this case. We argued that Internet service providers (ISPs), as innocent parties, should not bear the costs of website blocking, and that this was a long-standing principle of English law.
Jim Killock, Executive Director of Open Rights Group said:
This case is important because if ISPs paid the costs of blocking websites, the result would be an increasing number of blocks for relatively trivial reasons and the costs would be passed to customers.
While rights holders may want websites blocked, it needs to be economically rational to ask for this.
Solicitor in the case David Allen Green said:
I am delighted to have acted, through my firm Preiskel, successfully for the Open Rights Group in their intervention.
We intervened to say that those enforcing private rights on the internet should bear the costs of doing so, not others. This morning, the UK Supreme Court held unanimously that the rights holders should bear the costs.
The main party to the case was BT, which opposed being forced to pay the costs incurred in blocking websites. Now rights-holders must reimburse ISPs for the costs of blocking rights-infringing material.
Supreme Court judge Lord Sumption, one of five on the panel, ruled:
There is no legal basis for requiring a party to shoulder the burden of remedying an injustice if he has no legal responsibility for the infringement and is not a volunteer but is acting under the compulsion of an order of the court.
It follows that in principle the rights-holders should indemnify the ISPs against their compliance costs. Section 97A of the Copyright, Designs and Patents Act 1988 allows rights-holders to go to court and obtain a blocking order -- the question in the current case was who stumps up the costs of complying with that order.
Of course this now raises the question of who should pay for the mass porn website blocking that will be needed when the BBFC porn censorship regime starts its work.
Who is liable if a user posts copyrighted music to YouTube without authority? Is it the user or is it YouTube? The answer is of course that it is the user who would be held liable should copyright holders seek compensation. YouTube would be
held responsible only if they were informed of the infringement and refused to take it down.
This is the practical compromise that lets the internet work.
So what would happen if the government changed the liability laws so that YouTube was held liable for unauthorised music as soon as it was posted? There may be millions of views before it is spotted. If YouTube were immediately liable, it might have to pay millions in court judgements against it.
There is a lot of blather about YouTube having magic Artificial Intelligence that can detect copyrighted music and block it before it is uploaded. But this is nonsense: music is copyrighted by default, even a piece that has never been published and is not held in any computer database.
YouTube does not have a database that contains all the licensing and authorisation details, or records of who exactly is allowed to post copyrighted material. Even big companies lie, so how could YouTube really know what could be posted and what could not?
If the law were changed so that YouTube were held responsible for the copyright infringement of its posters, the only possible outcome would be for YouTube to use its AI to detect any music at all and block every video that contains music. The only music allowed to be published would be from the music companies themselves, and even then only after providing YouTube with paperwork to prove that they had the necessary authorisation.
So when the government speaks of changes to liability law, it is speaking of a massive step up in internet censorship as the likely outcome.
In fact the censorship power of such liability tweaks has already been proven in the US. The recently passed FOSTA law changed liability law so that internet companies are now held liable for user posts facilitating sex trafficking. The law was sold as a 'tweak' just to take action against trafficking. But it resulted in the immediate and almost total censorship of all user postings facilitating adult consensual sex work, and of a fair number of personal small ads and dating services as well.
The rub is that sex traffickers do not in any way specify that their sex workers have been trafficked; their adverts are exactly the same as those for adult consensual sex workers. With all the artificial intelligence in the world, there is no way that internet companies can distinguish between the two.
When they are told they are liable for sex trafficking adverts, the only possible way to comply is to ban all adverts or services that feature anything to do with sex or personal hook-ups. Which is of course exactly what happened.
So when UK politicians speak of internet liability changes and sex trafficking, they are talking about big-time, large-scale internet censorship.
And Theresa May said today via a government press release as reported in the Daily Mail:
Web giants such as Facebook and Twitter must automatically remove vile abuse aimed at women, Theresa May will demand today.
The Prime Minister will urge companies to utilise the same technology used to take down terrorist propaganda to remove rape threats and harassment.
Speaking at the G7 summit in Quebec, Mrs May will call on firms to do more to tackle content promoting and depicting violence against women and girls, including illegal violent pornography.
She will also demand the automatic removal of adverts that are linked to people-trafficking.
May will argue they must ensure women can use the web without fear of online rape threats, harassment, cyberstalking, blackmail or vile comments.
She will say: We know that technology plays a crucial part in advancing gender equality and empowering women and girls, but these benefits are being undermined by vile forms of online violence, abuse and harassment.
What is illegal offline is illegal online and I am calling on world leaders to take serious action to deal with this, just like we are doing in the UK with our commitment to legislate on online harms such as cyber-stalking and harassment.
In a world that is being ripped apart by identitarian intolerance of everyone else, it seems particularly unfair that men should be expected to happily put up with the fear of online threats, harassment, cyberstalking, blackmail or vile comments. Surely laws should be written so that all people are treated totally equally.
Online platforms need to take responsibility for the content they host. They need to proactively tackle harmful behaviours and content. Progress has been made in removing illegal content, particularly terrorist material, but more needs to be
done to reduce the amount of damaging content online, legal and illegal.
We are developing options for increasing the liability online platforms have for illegal content on their services. This includes examining how we can make existing frameworks and definitions work better, as well as what the liability regime
should look like in the long-run.
Terms and Conditions
Platforms use their terms and conditions to set out key information about who can use the service, what content is acceptable and what action can be taken if users don't comply with the terms. We know that users frequently break these rules. In
such circumstances, the platforms' terms state that they can take action, for example they can remove the offending content or stop providing services to the user. However, we do not see companies proactively doing this on a routine basis. Too
often companies simply do not enforce their own terms and conditions.
Government wants companies to set out clear expectations of what is acceptable on their platforms in their terms, and then enforce these rules using sanctions when necessary. By doing so, companies will be helping users understand what is and is not acceptable on their services.
We believe that it is right for Government to set out clear standards for social media platforms, and to hold them to account if they fail to live up to these. DCMS and Home Office will jointly work on the White Paper which will set out our
proposals for forthcoming legislation. We will focus on proposals which will bring into force real protections for users that will cover both harmful and illegal content and behaviours. In parallel, we are currently
assessing legislative options to modify the online liability regime in the UK, including both the smaller changes consistent with the EU's eCommerce directive, and the larger changes that may be possible when we leave the EU.
Age verification has been hanging over us for several years, and has now been put back to the end of 2018 after enforcement was originally planned to start last month.
I'm enormously encouraged by how many people took the opportunity to speak up and reply to the BBFC consultation on the new regulations.
Over 500 people submitted a response using the tool provided by the Open Rights Group, emphasising the need for age verification tech to be held to robust privacy and security standards.
I'm told that around 750 consultation responses were received by the BBFC overall, which means that a significant majority highlighted the regulatory gap between the powers of the BBFC to regulate adult websites, and the powers of the Information
Commissioner to enforce data protection rules.
Pornhub, the dominant force amongst the world's porn websites, has sent a challenge to the BBFC's porn censorship regime by offering a free workaround to any porn viewer who would prefer to hide their tracks rather than open themselves up to the dangers of offering up their personal ID to age verifiers.
And rather bizarrely, Pornhub is one of the companies offering age verification services to porn sites that want to comply with UK age verification requirements.
Pornhub describes its VPN service with references to UK censorship:
Browse all websites anonymously and without restrictions.
VPNhub helps you bypass censorship while providing secure and private access to Internet. Access all of your favorite websites without fear of being monitored.
Hide your information and surf the Internet without a trace.
Enjoy the pleasure of protection with VPNhub. With full data encryption and guaranteed anonymity, go with the most trusted VPN to protect your privacy anywhere in the world.
Free and Unlimited
Enjoy totally free and unlimited bandwidth on your device of choice.
Culture Secretary Matt Hancock has issued the following press release from the Department for Digital, Culture, Media & Sport:
New laws to make social media safer
New laws will be created to make sure that the UK is the safest place in the world to be online, Digital Secretary Matt Hancock has announced.
The move is part of a series of measures included in the government's response to the Internet Safety Strategy green paper, published today.
The Government has been clear that much more needs to be done to tackle the full range of online harm.
Our consultation revealed users feel powerless to address safety issues online and that technology companies operate without sufficient oversight or transparency. Six in ten people said they had witnessed inappropriate or harmful content online.
The Government is already working with social media companies to protect users and while several of the tech giants have taken important and positive steps, the performance of the industry overall has been mixed.
The UK Government will therefore take the lead, working collaboratively with tech companies, children's charities and other stakeholders to develop the detail of the new legislation.
Matt Hancock, DCMS Secretary of State said:
Digital technology is overwhelmingly a force for good across the world and we must always champion innovation and change for the better. At the same time I have been clear that we have to address the Wild West elements of the Internet through
legislation, in a way that supports innovation. We strongly support technology companies to start up and grow, and we want to work with them to keep our citizens safe.
People increasingly live their lives through online platforms so it's more important than ever that people are safe and parents can have confidence they can keep their children from harm. The measures we're taking forward today will help make
sure children are protected online and balance the need for safety with the great freedoms the internet brings just as we have to strike this balance offline.
DCMS and Home Office will jointly work on a White Paper with other government departments, to be published later this year. This will set out legislation to be brought forward that tackles a range of both legal and illegal harms, from
cyberbullying to online child sexual exploitation. The Government will continue to collaborate closely with industry on this work, to ensure it builds on progress already made.
Home Secretary Sajid Javid said:
Criminals are using the internet to further their exploitation and abuse of children, while terrorists are abusing these platforms to recruit people and incite atrocities. We need to protect our communities from these heinous crimes and vile
propaganda and that is why this Government has been taking the lead on this issue.
But more needs to be done and this is why we will continue to work with the companies and the public to do everything we can to stop the misuse of these platforms. Only by working together can we defeat those who seek to do us harm.
The Government will be considering where legislation will have the strongest impact, for example whether transparency or a code of practice should be underwritten by legislation, but also a range of other options to address both legal and illegal harms.
We will work closely with industry to provide clarity on the roles and responsibilities of companies that operate online in the UK to keep users safe.
The Government will also work with regulators, platforms and advertising companies to ensure that the principles that govern advertising in traditional media -- such as preventing companies targeting unsuitable advertisements at children -- also
apply and are enforced online.
It seems that the latest call for internet censorship is driven by some sort of revenge for having been snubbed by the industry.
The culture secretary said he does not have enough power to police social media firms after admitting only four of 14 invited to talks showed up.
Matt Hancock told the BBC it had given him a big impetus to introduce new laws to tackle what he has called the internet's Wild West culture.
He said self-policing had not worked and legislation was needed.
He told BBC One's Andrew Marr Show , presented by Emma Barnett, that the government just doesn't know how many of the millions of children using social media are not old enough for an account, and that he was very worried about age verification. He told the programme he hopes we get to a position where all social media users have to have their age verified.
Two government departments are working on a White Paper expected to be brought forward later this year. Asked about the same issue on ITV's Peston on Sunday , Hancock said the government would be legislating in the next couple of years
because we want to get the details right.
Update: Internet safety just means internet censorship
This week, Matt Hancock, Secretary of State for Digital, Culture, Media and Sport, announced the launch of a consultation on new legislative measures to clean up the Wild West elements of the Internet. In response, music group BPI says the
government should use the opportunity to tackle piracy with advanced site-blocking measures, repeat infringer policies, and new responsibilities for service providers.
This week, the Government published its response to the Internet Safety Strategy green paper , stating unequivocally that more needs to be done to tackle online harm. As a result, the Government will now carry through with its threat to introduce
new legislation, albeit with the assistance of technology companies, children's charities and other stakeholders.
While emphasis is being placed on hot-button topics such as cyberbullying and online child exploitation, the Government is clear that it wishes to tackle the full range of online harms. That has been greeted by UK music group BPI with a request
that the Government introduces new measures to tackle Internet piracy.
In a statement issued this week, BPI chief executive Geoff Taylor welcomed the move towards legislative change and urged the Government to encompass the music industry and beyond. He said:
This is a vital opportunity to protect consumers and boost the UK's music and creative industries. The BPI has long pressed for internet intermediaries and online platforms to take responsibility for the content that they promote to users.
Government should now take the power in legislation to require online giants to take effective, proactive measures to clean illegal content from their sites and services. This will keep fans away from dodgy sites full of harmful content and
prevent criminals from undermining creative businesses that create UK jobs.
The BPI has published four initial requests, each of which provides food for thought.
The demand to establish a new fast-track process for blocking illegal sites is not entirely unexpected, particularly given the expense of launching applications for blocking injunctions at the High Court.
The BPI has taken a large number of actions against individual websites -- 63 injunctions are in place against sites that are wholly or mainly infringing and whose business is simply to profit from criminal activity, the BPI says.
Those injunctions can be expanded fairly easily to include new sites operating under similar banners or facilitating access to those already covered, but it's clear the BPI would like something more streamlined. Voluntary schemes, such as the one
in place in Portugal , could be an option but it's unclear how troublesome that could be for ISPs. New legislation could solve that dilemma, however.
Another big thorn in the side for groups like the BPI are people and entities that post infringing content. The BPI is very good at taking these listings down from sites and search engines in particular (more than 600 million requests to date)
but it's a game of whac-a-mole the group would rather not engage in.
With that in mind, the BPI would like the Government to impose new rules that would compel online platforms to stop content from being re-posted after it's been taken down while removing the accounts of repeat infringers.
Thirdly, the BPI would like the Government to introduce penalties for online operators who do not provide transparent contact and ownership information. The music group isn't any more specific than that, but the suggestion is that operators of
some sites have a tendency to hide in the shadows, something which frustrates enforcement activity.
Finally, and perhaps most interestingly, the BPI is calling on the Government to legislate for a new duty of care for online intermediaries and platforms. Specifically, the BPI wants effective action taken against businesses that use the Internet
to encourage consumers to access content illegally.
While this could easily encompass pirate sites and services themselves, this proposal has the breadth to include a wide range of offenders, from people posting piracy-focused tutorials on monetized YouTube channels to those selling fully-loaded
Kodi devices on eBay or social media.
Overall, the BPI clearly wants to place pressure on intermediaries to take action against piracy when they're in a position to do so, and particularly those who may not have shown much enthusiasm towards industry collaboration in the past.
Legislation in this Bill, to take powers to intervene with respect to operators that do not co-operate, would bring focus to the roundtable process and ensure that intermediaries take their responsibilities seriously, the BPI says.
Adults who want to watch online porn (or maybe buy adults-only products such as alcohol) will be able to buy codes from newsagents and supermarkets to prove that they are over 18 when online.
One option available to the estimated 25 million Britons who regularly visit such websites will be a 16-digit code, dubbed a 'porn pass'.
While porn viewers will still be able to verify their age using methods such as registering credit card details, the 16-digit code option would be fully anonymous. According to AVSecure, the cards will be sold for £10 to anyone who looks over 18, without the need for any further identification. It doesn't say on the website, but presumably where there is doubt about a customer's age they will have to show ID documents such as a passport or driving licence; hopefully that ID will not have to be recorded anywhere.
It is hoped the method will be popular among those wishing to access porn online without having to hand over personal details to X-rated sites.
The user will type a 16-digit number into websites that belong to the AVSecure scheme. It should be popular with websites, as it offers them age verification for free (the £10 card fee being the company's only source of income). This is a much better proposition for websites than most, if not all, of the other age verification companies offer.
AVSecure also offers an encrypted implementation via blockchain that will not allow websites to use the 16-digit number as a key to track people's website browsing. That said, websites could still use a myriad of other standard technologies to track users.
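One way a provider could stop member websites using the raw 16-digit number as a cross-site tracking key is to hand each site a different derived token. A minimal sketch, assuming a provider-held secret; the function, secret and scheme here are hypothetical illustrations, not AVSecure's actual design:

```python
import hashlib
import hmac

# Hypothetical provider-held secret; never shared with member websites.
PROVIDER_SECRET = b"provider-only-signing-key"

def site_token(card_number, site_id):
    """Derive a per-site token so that no two websites see the same identifier."""
    msg = f"{card_number}:{site_id}".encode()
    return hmac.new(PROVIDER_SECRET, msg, hashlib.sha256).hexdigest()

# The same card yields unrelated tokens on different sites, so the sites
# cannot pool their logs to follow one user around.
t1 = site_token("1234567890123456", "site-a.example")
t2 = site_token("1234567890123456", "site-b.example")
assert t1 != t2
```

Because the HMAC cannot be inverted without the provider's secret, even colluding sites cannot recover the card number from their tokens.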
The BBFC is assigned the task of deciding whether to accredit different technologies, and it will be very interesting to see if they approve the AVSecure offering. It is easily the best solution for protecting the safety and privacy of porn viewers, but it may test the BBFC's pragmatism to accept the most workable and safest solution for adults when it is not quite fully guaranteed to protect children. Pragmatism is required, as the scheme has the technical drawback of having no further checks in place once the card has been purchased. The obvious worry is that someone over 18 could go round several shops buying cards to pass on to their under-18 mates. Another possibility is that kids could stumble on their parent's card and get access. Numbers shared on the web could, however, be easily blocked if used simultaneously from different IP addresses.
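That simultaneous-use check could be as simple as flagging any code seen from more than one IP address within a short window. A rough sketch, assuming an in-memory store; the window length and function are illustrative, not any vendor's actual mechanism:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 300              # flag codes used from >1 IP within 5 minutes
recent_uses = defaultdict(list)   # code -> list of (timestamp, ip) pairs

def check_code(code, ip, now=None):
    """Return True if this use of the code is allowed, False if it looks shared."""
    now = time.time() if now is None else now
    # Keep only uses that fall inside the sliding window.
    recent_uses[code] = [(t, i) for t, i in recent_uses[code]
                         if now - t < WINDOW_SECONDS]
    recent_uses[code].append((now, ip))
    ips = {i for _, i in recent_uses[code]}
    return len(ips) <= 1

assert check_code("1234", "198.51.100.7", now=0.0) is True   # first use
assert check_code("1234", "198.51.100.7", now=10.0) is True  # same IP again: fine
assert check_code("1234", "203.0.113.9", now=20.0) is False  # second IP: blocked
```

A real deployment would need to tolerate legitimate IP changes (mobile networks, carrier-grade NAT), which is exactly why the window has to be short.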
We asked the BBFC to tell government that the legislation is not fit for purpose, and that they should halt the scheme until privacy regulation is in place. We pointed out that card payments and email services are both subject to stronger privacy protections than Age Verification.
The government's case for non-action is that the Information Commissioner and data protection fines for data breaches are enough to deal with the risk. This is wrong: firstly, because fines cannot address the harm created by the leaking of people's sexual habits; secondly, because data breaches are only one aspect of the risks involved.
We outlined over twenty risks from Age Verification technologies. We pointed out that Age Verification contains a set of overlapping problems. You can read our list below. We may have missed some: if so, do let us know.
The government has to act. It has legislated this requirement without properly evaluating the privacy impacts. If and when it goes wrong, the blame will lie squarely at the government's door.
The consultation fails to properly distinguish between the different functions and stages of an age verification system. The risks associated with each are separate but interact. Regulation needs to address all elements of these systems, for example:
Choosing a method of age verification, whereby a user determines how they wish to prove their age.
The method of age verification, where documents may be examined and stored.
The tool's approach to returning users, which may involve either:
attaching the user's age verification status to a user account or log-in credentials; or
providing a means for the user to re-attest their age on future occasions.
The re-use of any age verified account, log-in or method over time, and across services and sites.
The focus of attention has been on the method of pornography-related age verification, but this is only one element of privacy risk we can identify when considering the system as a whole. Many of the risks stem from the fact that users may be
permanently 'logged in' to websites, for instance. New risks of fraud, abuse of accounts and other unwanted social behaviours can also be identified. These risks apply to 20-25 million adults, as well as to teenagers attempting to bypass the
restrictions. There is a great deal that could potentially go wrong.
Business models, user behaviours and potential criminal threats need to be taken into consideration. Risks therefore include:
Collecting identity documents in a way that allows them to potentially be correlated with the pornographic content viewed by a user represents a serious potential risk to personal and potentially highly sensitive data.
Risks from logging of porn viewing
A log-in from an age-verified user may persist on a user's device or web browser, creating a history of views associated with an IP address, location or device, thus easily linked to a person, even if stored 'pseudonymously'.
An age verified log-in system may track users across websites and be able to correlate tastes and interests of a user visiting sites from many different providers.
Data from logged-in web visits may be used to profile the sexual preferences of users for advertising. Tool providers may encourage users to opt in to such a service with the promise of incentives such as discounted or free content.
The current business model for large porn operations is heavily focused on monetising users through advertising, exacerbating the risks of re-use and recirculation and re-identification of web visit data.
Any data that is leaked cannot be revoked, recalled or adequately compensated for, leading to reputational, career and even suicide risks.
Everyday privacy risks for adults
The risk of pornographic web accounts and associated histories being accessed by partners, parents, teenagers and other third parties will increase.
Companies will trade off security for ease-of-use, so may be reluctant to enforce strong passwords, two-factor authentication and other measures which make it harder for credentials to leak or be shared.
Everyday privacy tools used by millions of UK residents, such as 'private browsing' modes, may become more difficult to use due to the need to retain log-in cookies, increasing the data footprint of people's sexual habits.
Some users will turn to alternative methods of accessing sites, such as using VPNs. These tools have their own privacy risks, especially when hosted outside of the EU, or when provided for free.
Risks to teenagers' privacy
If age-verified log-in details are acquired by teenagers, personal and sexual information about them may become shared including among their peers, such as particular videos viewed. This could lead to bullying, outing or worse.
Child abusers can use access to age verified accounts as leverage to create and exploit a relationship with a teenager ('grooming').
Other methods of obtaining pornography would be incentivised, and these may carry new and separate privacy risks. For instance the BitTorrent network exposes the IP addresses of users publicly. These addresses can then be captured by services
like GoldenEye, whose business model depends on issuing legal threats to those found downloading copyrighted material. This could lead to the pornographic content downloaded by young adults or teenagers being exposed to parents or carers.
While copyright infringement is bad, removing teenagers' sexual privacy is worse. Other risks include viruses and scams.
Trust in age verification tools and potential scams
Users may be obliged to sign up to services they do not trust or are unfamiliar with in order to access specific websites.
Pornographic website users are often impulsive, with lower risk thresholds than for other transactions. The sensitivity of any transactions involved gives them a lower propensity to report fraud. Pornography users are therefore particularly
vulnerable targets for scammers.
The use of credit cards for age verification in other markets creates an opportunity for fraudulent sites to engage in credit card theft.
Use of credit cards for pornography-related age verification risks teaching people that this is normal and reasonable, opening up new opportunities for fraud and going against years of education asking people not to hand card details to unfamiliar sites.
There is no simple means to verify which particular age verification systems are trustworthy, and which may be scams.
Market related privacy risks
The rush to market means that the tools that emerge may be of variable quality and take unnecessary shortcuts.
A single pornography-related age verification system may come to dominate the market and become the de-facto provider, leaving users no real choice but to accept whatever terms that provider offers.
One age verification product which is expected to lead the market -- AgeID -- is owned by MindGeek, the dominant pornography company online. Allowing pornographic sites to own and operate age verification tools leads to a conflict of interest
between the privacy interests of the user, and the data-mining and market interests of the company.
The online pornography industry as a whole, including MindGeek, has a poor record of privacy and security, littered with data breaches. Without stringent regulation prohibiting the storage of data which might allow users' identity and browsing
to be correlated, there is no reason to assume that data generated as a result of age verification tools will be exempt from this pattern of poor security.
I agree with the BBFC's Approach as set out in Chapter 2
Re Age-verification Standards set out in Chapter 3
4. This guidance also outlines good practice in relation to age-verification to encourage consumer choice and the use of mechanisms that confirm age but not identity.
I think you should point out to porn viewers that your ideas on good practice are in no way enforceable on websites. You should not mislead porn viewers into thinking that their data is safe on the assumption that websites will follow best practice. They may not.
5c. A requirement that either a user age-verify each visit or access is restricted by controls, manual or electronic, such as, but not limited to, password or personal identification numbers
This is a very glib sentence that could be the make or break of user acceptability of age verification.
This is not like watching films on Netflix, i.e. entering a PIN and watching a film. Viewing porn is more akin to browsing: hopping from one website to another, starting a film, quickly deciding it is no good and searching for another, maybe on a different site. Convenient browsing requires that a verification is stored for at least a reasonable time in a cookie, so that it can be accessed automatically by all websites using the same verification provider (or even different verification providers, if they could get together to arrange this).
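A cookie-based scheme of this kind could carry a signed "age verified" flag with an expiry, containing no identity at all; any site trusting the provider checks the signature rather than re-verifying the user. A minimal sketch, assuming a hypothetical provider key and cookie format (in practice, sharing across sites would also require the cookie to live on the provider's own domain, with each site redirecting through it):

```python
import base64
import hashlib
import hmac
import json

# Hypothetical secret held by the age verification provider.
PROVIDER_KEY = b"verification-provider-key"

def issue_cookie(verified_at, ttl=24 * 3600):
    """Mint a signed 'age verified' cookie: payload plus HMAC, no identity inside."""
    payload = json.dumps({"av": True, "exp": verified_at + ttl}).encode()
    sig = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def cookie_valid(cookie, now):
    """A site trusting the provider checks the cookie without learning who the user is."""
    try:
        body, sig = cookie.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(body)
    except (ValueError, TypeError):
        return False
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and json.loads(payload)["exp"] > now

c = issue_cookie(verified_at=1000.0, ttl=3600)
assert cookie_valid(c, now=2000.0) is True    # within the hour: accepted
assert cookie_valid(c, now=5000.0) is False   # expired: re-verification needed
```

The design choice here is exactly the trade-off the text describes: a longer `ttl` makes browsing convenient but widens the window in which a shared or stolen cookie grants access.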
At the very least the BBFC should make a clearer statement about the persistence of PINs or passwords, and about whether it is acceptable to maintain valid verifications in cookies (or in age verifiers' databases). The Government needs adults to buy into age verification. If the BBFC get too fussy about eliminating any risk that under-18s could view porn, then the whole system could become too inconvenient for adults to be bothered with, resulting in mass circumvention, with lots of information in lots of places about how and where porn can be more easily obtained. The under-18s would probably see this too, which would surely diminish the effectiveness of the whole idea. The very suggestion that users age verify on each visit suggests that the BBFC is simply not on the right wavelength for a viable solution. Presumably not much thought has been put into specifying requirements in advance; instead the BBFC will presumably consider the merits of proposals as they arise. The timescales for enactment of the law should therefore allow for technical negotiations between developers and the BBFC about how each system should work.
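The kind of persistence argued for above could work much like a session cookie: a minimal sketch, assuming a hypothetical provider signing key and token format (nothing here reflects any real provider's scheme). The provider issues a signed, time-limited "age verified" token once, and any site trusting that provider can validate it without re-verifying the user.

```python
import hashlib
import hmac
import time

# Hypothetical shared secret held by the verification provider.
SECRET = b"provider-signing-key"

def issue_cookie(ttl_seconds: int = 86400) -> str:
    """Issue an 'age verified' token valid for ttl_seconds."""
    expiry = str(int(time.time()) + ttl_seconds)
    sig = hmac.new(SECRET, expiry.encode(), hashlib.sha256).hexdigest()
    return f"{expiry}:{sig}"

def check_cookie(cookie: str) -> bool:
    """Any site using the same provider can validate the token
    without asking the user to verify again."""
    expiry, sig = cookie.split(":")
    expected = hmac.new(SECRET, expiry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expiry) > time.time()
```

The design question the BBFC has not answered is simply the value of `ttl_seconds`: per-visit verification means a TTL of zero, which is exactly the inconvenience the commentary warns about.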
5d. the inclusion of measures that are effective at preventing use by non-human operators including algorithms
What a meaningless statement: surely the age verification process itself will be non-human and running on algorithms. Do bots need to be protected from porn? Are you saying that websites should not allow their sites to be accessed by Google's search engine bots? Unless there is an element of repeat access, a website does not really know whether it is being accessed by a bot or a human. I think you probably have a more specific restriction in mind, and it has not been articulated in this vague and meaningless statement.
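The point that a website "does not really know" can be made concrete. The only passive signal a single request carries is what the client chooses to declare, so a naive check (sketched below; the user-agent strings are illustrative) is trivially defeated by any bot that declares itself a browser.

```python
def looks_like_bot(user_agent: str) -> bool:
    """Naive bot detection: trust whatever the client declares."""
    return any(token in user_agent.lower()
               for token in ("bot", "crawler", "spider"))

# A well-behaved crawler identifies itself and is caught:
assert looks_like_bot("Mozilla/5.0 (compatible; Googlebot/2.1)")

# A bot sending an ordinary browser user-agent passes as human:
assert not looks_like_bot("Mozilla/5.0 (Windows NT 10.0) Chrome/65.0")
```

Anything stronger (rate limiting, behavioural analysis) needs repeat access and tracking, which sits awkwardly with a privacy-sensitive service.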
7. Although not a requirement under section 14(1) the BBFC recommends that age-verification providers adopt good practice in the design and implementation of their solutions. These include solutions that: include clear information for end-users
on data protection
When have websites or web services ever provided clear information about data protection? Even the internet's biggest players, eg Facebook and Google, refuse to provide clear information.
9. During the course of this age-verification assessment, the BBFC will normally be able to identify the following in relation to data protection compliance concerns: failure to include clear information for end-users on data protection and how
data is used; and requesting more data than is necessary to confirm age, for example, physical location information.
Excellent! This would be good added value from the BBFC. At the very least the BBFC should inform porn viewers that for foreign non-EU sites there will be absolutely no data protection, and that for EU websites, once users give their consent, the websites can do more or less anything with the data.
10. The BBFC will inform the Information Commissioner's Office where concerns arise during its assessment of the age-verification effectiveness that the arrangement does not comply with data protection legislation. The ICO will consider if
further investigation is appropriate. The BBFC will inform the online commercial pornography provider(s) that it has raised concerns with the ICO.
Perhaps the BBFC could make clear to porn users the remit of the ICO over non-EU porn sites, and how the BBFC will handle these issues for a non-EU website.
Re Data Protection and the Information Commissioner's Office
Even the world's major websites, such as Facebook, follow all the guidelines noted in this section yet end up telling you nothing about how your data is used; I don't suppose porn sites will be any more open.
3b Where an organisation processing personal data is based outside the EU, an EU-based representative must be appointed and notified to the individual
Will the BBFC block, for example, a Russian website that complies with age verification by requiring credit card payments but has no EU representative? I think the BBFC/ICO needs to say a little more about data protection for websites and services outside the EU. Porn viewers need to know.
Perhaps the BBFC could keep a FAQ for porn viewers, eg: does the UK vetting service for people working with children have access to age verification data used for access to porn sites?
This is so wrong on so many levels. Britain would undergo a mass tantrum.
How are parents supposed to entertain their kids if they can't spend all day on YouTube?
And what about all the privacy implications of letting social media companies hold complete identity details of their users? It will be like Cambridge Analytica on speed.
Jeremy Hunt wrote to the social media companies:
Thank you for participating in the working group on children and young people's mental health and social media with officials from my Department and DCMS. We appreciate your time and engagement, and your willingness to continue discussions and
potentially support a communications campaign in this area, but I am disappointed by the lack of voluntary progress in those discussions.
We set three very clear challenges relating to protecting children and young people's mental health: age verification, screen time limits and cyber-bullying. As I understand it, participants have focused more on promoting work already underway
and explaining the challenges with taking further action, rather than offering innovative solutions or tangible progress.
In particular, progress on age verification is not good enough. I am concerned that your companies seem content with a situation where thousands of users breach your own terms and conditions on the minimum user age. I fear that you are
collectively turning a blind eye to a whole generation of children being exposed to the harmful emotional side effects of social media prematurely; this is both morally wrong and deeply unfair on parents, who are faced with the invidious choice
of allowing children to use platforms they are too young to access, or excluding them from social interaction that often the majority of their peers are engaging in. It is unacceptable and irresponsible for you to put parents in this position.
This is not a blanket criticism and I am aware that these aren't easy issues to solve. I am encouraged that a number of you have developed products to help parents control what their children can access online in response to Government's concerns
about child online protection, including Google's Family Link. And I recognise that your products and services are aimed at different audiences, so different solutions will be required. This is clear from the submissions you've sent to my
officials about the work you are delivering to address some of these challenges.
However, it is clear to me that the voluntary joint approach has not delivered the safeguards we need to protect our children's mental health. In May, the Department
for Digital, Culture, Media and Sport will publish the Government response to the Internet Safety Strategy consultation, and I will be working with the Secretary of State to explore what other avenues are open to us to pursue the reforms we need.
We will not rule out legislation where it is needed.
In terms of immediate next steps, I appreciate the information that you provided our officials with last month but would be grateful if you would set out in writing your companies' formal responses, on the three challenges we posed in November.
In particular, I would like to know what additional new steps you have taken to protect children and young people since November in each of the specific categories we raised: age verification, screen time limits and cyber-bullying. I invite you
to respond by the end of this month, in order to inform the Internet Safety Strategy response. It would also be helpful if you can set out any ideas or further plans you have to make progress in these areas.
During the working group meetings I understand you have pointed to the lack of conclusive evidence in this area — a concern which I also share. In order to address this, I have asked the Chief Medical Officer to undertake an evidence review on
the impact of technology on children and young people's mental health, including on healthy screen time. I will also be working closely with DCMS and UKRI to commission research into all these questions, to ensure we have the best possible
empirical basis on which to make policy. This will inform the Government's approach as we move forwards.
Your industry boasts some of the brightest minds and biggest budgets globally. While these issues may be difficult, I do not believe that solutions on these issues are outside your reach; I do question whether there is sufficient will to reach them.
I am keen to work with you to make technology a force for good in protecting the next generation. However, if you prove unwilling to do so, we will not be deterred from making progress.
The BBFC is consulting on its procedures for deciding whether porn websites have implemented age verification strictly enough that under-18s won't normally be able to access the website. Any websites not complying will be fined, blocked, and/or pressurised via hosting/payment providers and advertisers who are willing to support the BBFC censorship.
Now I'm sure that the BBFC will diligently perform their duties with fairness and consideration for all, but the trouble is that all the horrors of scamming, hacking, snooping, blackmail, loss of privacy etc are simply not the concern of the BBFC. It is pointless to point out how age verification will endanger porn viewers; it is not in their remit.
If a foreign website were to implement strict age verification and then pass all the personal details and viewing habits straight over to its blackmail, scamming and dirty tricks department, then this would be perfectly fine with the BBFC. Their only job is to ensure that under-18s won't get through the ID checking.
There is a little privacy protection for porn websites with a presence in the EU, as the new GDPR rules have some generic things to say about keeping data safe. However, these are mostly useless if you give your consent to the websites to use your data as they see fit. And it seems pretty easy to get consent for just about anything, just by asking people to tick a box or else not be allowed to see the porn. For example, Facebook will still be allowed to slurp all your personal data even within the constraints of GDPR, and so will porn websites.
As a porn viewer, the only person who will look after you is yourself.
The woeful flaws of this bill need addressing (by the government rather than the BBFC). We need to demand of the government: Don't save the children by endangering their parents.
At the very least we need a class of critically private data that websites simply must not use, EVER, under any circumstances, for any reason, and regardless of nominal user consent. Any company that uses this critically private data must be
liable to criminal prosecution.
Anyway, there have been a few contributions to the debate in the run-up to the end of the BBFC consultation.
AgeID says it wants to set the record straight on user data privacy under the pending UK smut age check rules: As soon as a customer enters their login credentials, AgeID anonymises them. This ensures AgeID does not have a list of email addresses. We cannot market to them; we cannot even see them.
[You always have to be a bit sceptical about claims that anonymisation protects your data. Eg Facebook may strip off your name and address and then sell your GPS track as 'anonymised', when in fact your address, and then your name, can be restored by noting that you spend 12 hours a day at 32 Acacia Avenue and commute to work at Snoops R Us. Perhaps more to the point, PornHub may indeed not know that it was Damian@Green.com that hashed to 0000666, but the browsing record of 0000666 will be stored by PornHub anyway. And when the police come along and learn from the ID company that Damian@Green.com hashes to 0000666, they can simply ask PornHub to reveal the browsing history of 0000666.]
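The bracketed point can be demonstrated in a few lines: a deterministic hash is a pseudonym, not anonymisation, because anyone who already knows the input can regenerate the key. This sketch uses the article's hypothetical email and a truncated SHA-256 purely for illustration; no real provider's scheme is implied.

```python
import hashlib

def pseudonym(email: str) -> str:
    """Deterministic hash used as an 'anonymous' user ID."""
    return hashlib.sha256(email.lower().encode()).hexdigest()[:8]

# The site never stores the email itself, only the pseudonym...
browsing_log = {pseudonym("damian@green.com"): ["video1", "video2"]}

# ...but anyone who knows the email (police, the ID company, a hacker)
# can regenerate the same key and recover the full browsing history.
recovered = browsing_log[pseudonym("damian@green.com")]
assert recovered == ["video1", "video2"]
```

True anonymisation would require that the link between identity and record cannot be reconstructed at all, which is incompatible with keeping per-user browsing logs keyed on a hash of the login.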
Tell the BBFC that age verification will do more harm than good
MindGeek's age verification solution, AgeID, will inevitably have broad takeup due to their using it on their free tube sites such as PornHub. This poses a massive conflict of interest: advertising is their main source of revenue, and they have a
direct profit motive to harvest data on what people like to look at. AgeID will allow them to do just that.
MindGeek have a terrible record on keeping sensitive data secure, and the resulting database will inevitably be leaked or hacked. The Ashley Madison data breach is a clear warning of what can happen when people's sex lives are leaked into the
public domain: it ruins lives, and can lead to blackmail and suicide. If this policy goes ahead without strict rules forcing age verification providers to protect user privacy, there is a genuine risk of loss of life.
Update: Marc Dorcel Issues Plea to Participate in U.K. Age-Verification Consultation
French adult content producer Marc Dorcel has issued a plea for industry stakeholders to participate in a public consultation on the U.K.'s upcoming age-verification system for adult content. The consultation period closes on Monday. The studio
said the following about participation in the BBFC public consultation:
The time of a wild internet where everyone could get immediate and open access to porn seems to be over as many governments are looking for concrete solutions to control it.
The U.K. is the first to have passed a law on this subject, and will apply a total block on porn websites which do not age verify and protect minors. Australian, Polish and French authorities are also looking very closely into this issue and are interested in the system that will be adopted in the U.K.
The BBFC is the organization which will define and manage the operation. In a few weeks, the BBFC will deliver to the government its age-verification guidance, defining and detailing how age verification should comply with this new law.
The BBFC wants to be pragmatic and is concerned about how end users and website owners will be able to enact this measure.
The organization has launched an open consultation in order to collect the opinions of the public and of concerned professionals on this matter.
As a matter of fact, the age-verification guidance involves a major challenge for the whole industry: an age-verification processor cannot be allowed to become either a gateway or a toll. Moreover, it cannot be an instrument to gather internet users' data or hijack traffic.
Marc Dorcel has existed since 1979 and operates on numerous platforms -- TV, mobile, press, web networks. We are used to regulation authorities.
In our view, the two main requirements for an independent age-verification system that would not serve specific corporate interests are: 1st requirement -- neither an authenticated adult nor his data should belong to any processor; 2nd requirement -- processor systems should be freely chosen for their efficiency, not because of their dominant position.
We also think that our industry should put two requests to the BBFC, to ensure a system which does not create dependency:
Any age-verification processor's scope should be limited to the verification task, without a user-registration system. As a consequence, processors could not profit from user data or traffic control; customers' verified age would be stored independently by each website or website network, and users would have to age verify for each new website or network.
If the BBFC allows an age-verification processor to control a visitor database and to manage logins and passwords, the processor should commit to sharing the 18+ login/password with the other certified processors. As a consequence, users would need only one age-verification enrolment, on their first visit to a website; users would be able to log in with the same login/password on any age-verification system to prove their age; and verified adults would not belong to any one processor, avoiding any dependency.
In those cases, we believe that an age-verification solution would act like an MPSP (multiple payment service provider), which processes client payments but where customers belong not to the payment processor but to the website, and where credit card numbers can be used by any processor.
We believe that any adult company concerned with the future of our business should take part in this consultation, whatever their point of view or worries.
It is our responsibility to take our fate into our own hands.
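Dorcel's second request, that 18+ credentials enrolled with one certified processor be honoured by all the others, amounts to a shared credential registry. A minimal sketch, with entirely hypothetical names and a single in-memory store standing in for whatever interchange mechanism processors might actually agree:

```python
import hashlib

# Shared store of 18+ credentials, readable by any certified processor.
# Maps a credential digest to the processor that performed the enrolment.
shared_credentials: dict[str, str] = {}

def enrol(processor: str, login: str, password: str) -> None:
    """First enrolment with any processor records the credential once."""
    digest = hashlib.sha256(f"{login}:{password}".encode()).hexdigest()
    shared_credentials[digest] = processor

def verify(login: str, password: str) -> bool:
    """Any other certified processor can honour the same login/password,
    so the verified adult does not 'belong' to the enrolling processor."""
    digest = hashlib.sha256(f"{login}:{password}".encode()).hexdigest()
    return digest in shared_credentials
```

The analogy to the MPSP model is direct: like a card number usable with any payment processor, the credential is portable, and no single processor controls the customer relationship. (A production scheme would of course need salted password hashing and a trust framework between processors; this only illustrates the portability idea.)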