
UK Government Watch


2019: April-June


 

Government consultation on its internet censorship plans...

Monday is the last day to respond and the Open Rights Group makes some suggestions


Link Here 30th June 2019
The Government is accepting public feedback on their plan until Monday 1 July. Send a message to their consultation using Open Rights Group's tool before the end of Monday!

The Open Rights Group comments on the government censorship plans:

Online Harms: Blocking websites doesn't work -- use a rights-based approach instead

Blocking websites isn't working. It's not keeping children safe and it's stopping vulnerable people from accessing information they need. It's not the right approach to take on Online Harms.

This is the finding from our recent research into website blocking by mobile and broadband Internet providers. And yet, as part of its Internet regulation agenda, the UK Government wants to roll out even more blocking.

The Government's Online Harms White Paper is focused on making online companies fulfil a "duty of care" to protect users from "harmful content" -- two terms that remain troublingly ill-defined.

The paper proposes giving a regulator various punitive measures to use against companies that fail to fulfil this duty, including powers to block websites.

If this scheme comes into effect, it could lead to widespread automated blocking of legal content for people in the UK.

Mobile and broadband Internet providers have been blocking websites with parental control filters for five years. But through our Blocked project -- which detects incorrect website blocking -- we know that systems are still blocking far too many sites and far too many types of sites by mistake.

Thanks to website blocking, vulnerable people and under-18s are losing access to crucial information and support from websites including counselling, charity, school, and sexual health websites. Small businesses are losing customers. And website owners often don't know this is happening.

We've seen with parental control filters that blocking websites doesn't have the intended outcomes. It restricts access to legal, useful, and sometimes crucial information. It also does nothing to stop people who are determined to access material on blocked websites; they often use VPNs to get around the filters. Other solutions, such as filters applied by a parent to a child's account on a device, are more appropriate.

Unfortunately, instead of noting these problems inherent to website blocking by Internet providers and rolling back, the Government is pressing ahead with website blocking in other areas.

Blocking by Internet providers may not work for long. We are seeing a technical shift towards encrypted website address requests that will make this kind of website blocking by Internet providers much more difficult.

When I type a human-friendly web address such as openrightsgroup.org into a web browser and hit enter, my computer asks a Domain Name System (DNS) resolver for that website's computer-friendly IP address - which will look something like 46.43.36.233. My web browser can then use that computer-friendly address to load the website.

At the moment, most DNS requests are unencrypted. This allows mobile and broadband Internet providers to see which website I want to visit. If a website is on a blocklist, the system won't return the actual IP address to my computer. Instead, it will tell me that that site is blocked, or will tell my computer that the site doesn't exist. That stops me visiting the website and makes the block effective.
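To make that concrete, here is a minimal Python sketch (our illustration, not part of the ORG article) of an ordinary, unencrypted lookup via the system's configured resolver:

```python
import socket

# Ask the system's configured DNS resolver (typically run by the ISP)
# for the IP address behind a human-friendly domain name.
ip = socket.gethostbyname("openrightsgroup.org")
print(ip)  # e.g. 46.43.36.233

# Because the query travels unencrypted, a filtering resolver can see the
# requested name and, for a blocklisted site, answer with a block page's
# address instead -- or claim the name doesn't exist, which surfaces here
# as socket.gaierror (the equivalent of an NXDOMAIN answer).
```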

Increasingly, though, DNS requests are being encrypted. This provides much greater security for ordinary Internet users. It also makes website blocking by Internet providers incredibly difficult. Encrypted DNS is becoming widely available through Google's Android devices, on Mozilla's Firefox web browser and through Cloudflare's mobile application for Android and iOS. Other encrypted DNS services are also available.
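For comparison, a DNS-over-HTTPS query wraps the same question inside an encrypted HTTPS request, so intermediaries see only a connection to the resolver, not the name being looked up. A minimal sketch (ours) using Cloudflare's public JSON interface:

```python
import requests

# Resolve a name over DNS-over-HTTPS (DoH) via Cloudflare's resolver.
# The question and answer travel inside ordinary encrypted HTTPS traffic.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "openrightsgroup.org", "type": "A"},
    headers={"Accept": "application/dns-json"},
    timeout=10,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["data"])  # the A record(s) for the domain
```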

Our report DNS Security - Getting it Right discusses issues around encrypted DNS in more detail.

Blocking websites may be the Government's preferred tool for dealing with social problems on the Internet, but it doesn't work -- in policy terms, and increasingly at a technical level too.

The Government must accept that website blocking by mobile and broadband Internet providers is not the answer. They should concentrate instead on a rights-based approach to Internet regulation and on educational and social approaches that address the roots of complex societal issues.

Offsite Article: CyberLegal response to the Online Harms Consultation

30th June 2019. See article from cyberleagle.com

Speech is not a tripping hazard

 

 

Law around non-consensual sexual images to be reviewed by the Law Commission...

Deep fake news, cyber flashing, upskirting and revenge porn


Link Here 26th June 2019

Laws around the making and sharing of non-consensual intimate images are to be reviewed under plans to ensure protections keep pace with emerging technology.

Justice Minister Paul Maynard and Digital Secretary Jeremy Wright have asked the Law Commission to examine whether current legislation is fit to tackle new and evolving types of abusive and offensive communications, including image-based abuse, amid concerns it has become easier to create and distribute sexual images of people online without their permission.

The review, which will be launched shortly, will consider a range of disturbing digital trends such as 'cyber-flashing' -- when people are sent unsolicited sexual images over the phone -- and 'deepfake' pornography -- the degrading practice of superimposing an individual's face onto pornographic photos or videos without consent.

The move builds on government action in recent years to better protect victims and bring more offenders to justice, including making 'upskirting' and 'revenge porn' specific criminal offences.

The review will also consider the case for granting automatic anonymity to revenge porn victims, so they cannot be named publicly, as is the case for victims of sexual offences.

Tackling sexual offences is a priority for this government, and in many cases this behaviour will already be caught by a number of existing offences such as 'voyeurism' under the Sexual Offences Act 2003.

However, ministers are committed to ensuring the right protections are in place for the modern age, and alongside the review, a public consultation will be launched on strengthening the law -- seeking views from victims, groups representing them, law enforcement, academics and anyone else with an interest in the issue.

This review is part of joint work between the Ministry of Justice, the Department for Digital, Culture, Media and Sport and the Government Equalities Office to consider reform of communications offences, examining the glorification of violent crime and the encouragement of self-harm online, and whether co-ordinated harassment by groups of people online could be more effectively addressed by the criminal law.

 

 

Offsite Article: 'A very strange thing for Parliament to do, to regulate how bits travel over a wire'...


Link Here 26th June 2019
The Internet Society warns the UK government off trying to legislate against internet protocols it does not like, namely encrypted DNS

See article from theregister.co.uk

 

 

Offsite Article: Verifiably Stupid...


Link Here 24th June 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
The UK Porn Block's Latest Failure. By David Flint

See article from reprobatepress.com

 

 

Who pays for Age Verification? You do of course...one way or another!...

Maybe it's a good job the government has delayed Age Verification, as there are still a lot of issues for the AV companies to resolve


Link Here 21st June 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
The AV industry is not yet ready

The Digital Policy Alliance (DPA) is a private lobby group connecting digital industries with Parliament. Its industry members include both Age Verification (AV) providers, eg OCL, and adult entertainment, eg Portland TV.

Just before the Government announcement that the commencement of age verification requirements for porn websites would be delayed, the DPA wrote a letter explaining that the industry was not yet ready to implement AV and asking for a 3 month delay.

The letter is unpublished but fragments of it have been reported in news reports about AV.

The Telegraph reported:

The Digital Policy Alliance called for the scheme to be delayed or risk nefarious companies using this opportunity to harvest and manipulate user data.

The strongly-worded document complains that the timing is very tight, a fact that has put some AVPs [age verification providers] and adult entertainment providers in a very difficult situation.

It warns that unless the scheme is delayed there will be less protection for public data, as it appears that there is an intention for uncertified providers to use this opportunity to harvest and manipulate user data.
 

The AV industry is unimpressed by a 6 month delay

See article from news.sky.com

Rowland Manthorpe from Sky News contributed a few interesting snippets too. He noted that the AVPs were unsurprisingly not pleased by the government delay:

Serge Acker, chief executive of OCL, which provides privacy-protecting porn passes for purchase at newsagents, told Sky News: As a business, we have been gearing up to get our solution ready for July 15th and we, alongside many other businesses, could potentially now be being endangered if the government continues with its attitude towards these delays.

Not only does it make the government look foolish, but it's starting to make companies like ours look it too, as we all wait expectantly for plans that are only being kicked further down the road.
 

There are still issues with how the AV providers can make money

And interestingly Manthorpe revealed in the accompanying video news report that the AV providers were also distinctly unimpressed by the BBFC stipulating that certified AV providers must not use Identity Data provided by porn users for any purpose other than verifying age. The sensible idea is that the data should not be made available for the likes of targeted advertising. And one particular example of prohibited data re-use has caused particular problems, namely that ID data should not be used to sign people up for digital wallets.

Now AV providers have got to be able to generate their revenue somehow. Some have proposed selling AV cards in newsagents for about £10, but others had been planning on using AV to generate a customer base for their digital wallet schemes.

So it seems that there are still quite a few fundamental issues that have not yet been resolved in how the AV providers get their cut.
 

Some AV providers would rather not sign up to BBFC accreditation

See article from adultwebmasters.org

Maybe these issues with BBFC AV accreditation requirements are behind a move to use an alternative standard. An AV provider called VeriMe has announced that it is the first AV company to receive PAS1296 certification.

PAS1296 was developed by the British Standards Institution together with the Age Check Certification Scheme (ACCS). PAS stands for Publicly Available Specification, a type of document designed to define good practice standards for a product, service or process. The standard was also championed by the Digital Policy Alliance.

Rudd Apsey, the director of VeriMe, said:

The PAS1296 certification augments the voluntary standards outlined by the BBFC, which don't address how third-party websites handle consumer data. We believe it fills those gaps and is confirmation that VeriMe is indeed leading the world in the development and implementation of age verification technology and setting best practice standards for the industry.

We are incredibly proud to be the first company to receive the standard and want consumers and service providers to know that come the July 15 roll out date, they can trust VeriMe's systems to provide the most robust solution for age verification.

This is not a very convincing argument, as PAS1296 is not available for customers to read (unless they pay about 120 quid for the privilege). At least the BBFC standard can be read by anyone for free, and readers can then make up their own minds as to whether their porn browsing history and ID data is safe.

However, it does seem that some companies at least are planning to give the BBFC accreditation scheme a miss.
 

The BBFC standard fails to provide safety for porn users' data anyway

See article from medium.com

The AV company 18+ takes issue with the BBFC accreditation standard, noting that it allows AV providers to dangerously log people's porn browsing history:

Here's the problem with the design of most age verification systems: when a UK user visits an adult website, most solutions will present the user with an inline frame displaying the age verifier's website or the user will be redirected to the age verifier's website. Once on the age verifier's website, the user will enter his or her credentials. In most cases, the user must create an account with the age verifier, and on subsequent visits to the adult website, the user will enter his account details on the age verifier's website (i.e., username and password). At this point in the process, the age verifier will validate the user and, if the age verifier has a record of the user being at least age 18, will redirect the user back to the adult website. The age verification system will transmit to the adult website whether the user is at least age 18 but will not transmit the identity of the user.

The flaw with this design from a user privacy perspective is obvious: the age verification website will know the websites the user visits. In fact, the age verification provider obtains quite a nice log of the digital habits of each user. To be fair, most age verifiers claim they will delete this data. However, a truly privacy first design would ensure the data never gets generated in the first place because logs can inadvertently be kept, hacked, leaked, or policies might change in the future. We viewed this risk to be unacceptable, so we set about building a better system.

Almost all age verification solutions set to roll out in July 2019 do not provide two-way anonymity for both the age verifier and the adult website, meaning there remains some log of -- or potential to log -- which adult websites a UK based user visits.
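To make the flaw concrete, here is a toy sketch (ours, not 18+'s) of the redirect design criticised above; the names, endpoints and data are invented for illustration and do not describe any real AV provider:

```python
# Toy model of the common AV redirect flow described in the quote above.
# Everything here is hypothetical and for illustration only.

VERIFIED_ADULTS = {"alice"}  # users the verifier has already checked as 18+
access_log = []              # the privacy problem: a browsing record

def handle_verification(username: str, return_url: str) -> str:
    """Simulate the age verifier's endpoint that adult sites redirect to."""
    # Simply serving this request tells the verifier which adult site the
    # user came from. Even if policy says "we delete this", the log exists
    # and can be kept, leaked or hacked.
    access_log.append((username, return_url))
    ok = "1" if username in VERIFIED_ADULTS else "0"
    # Redirect back, passing only an over-18 flag, not the user's identity.
    return f"{return_url}?age_ok={ok}"

print(handle_verification("alice", "https://adult-site.example"))
print(access_log)  # [('alice', 'https://adult-site.example')]
```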

In fact, one AV provider revealed that up until recently the government demanded that AV providers keep a log of people's porn browsing history, and it was a rather late concession to practicality that companies were allowed to opt out if they wanted.

Note that the logging capability is kindly hidden by the BBFC by passing it off as being used only for as long as is necessary for fraud prevention. Of course that is just smoke and mirrors: fraud -- presumably meaning that passcodes could be given or sold to others -- could happen at any time an age verification scheme is in use, so the time restriction specified by the BBFC may as well be forever.

 

 

Age Verification for porn delayed by 6 months...

Jeremy Wright apologises to supporters for an admin cock-up, and takes the opportunity to sneer at the millions of people who just want to keep their porn browsing private and safe


Link Here 20th June 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Jeremy Wright, the Secretary of State for Digital, Culture, Media and Sport, addressed parliament to explain that the start date for the Age Verification scheme for porn has been delayed by about 6 months. The reason is that the Government failed to inform the EU about laws that affect free trade (eg those that allow EU websites to be blocked in the UK). Although the main Digital Economy Act was submitted to the EU, extra bolt-on laws added since have not been submitted. Wright explained:

In autumn last year, we laid three instruments before the House for approval. One of them -- the guidance on age verification arrangements -- sets out standards that companies need to comply with. That should have been notified to the European Commission, in line with the technical standards and regulations directive, and it was not. Upon learning of that administrative oversight, I instructed my Department to notify this guidance to the EU and re-lay the guidance in Parliament as soon as possible. However, I expect that that will result in a delay in the region of six months.

Perhaps it would help if I explained why I think that six months is roughly the appropriate time. Let me set out what has to happen now: we need to go back to the European Commission, and the rules under the relevant directive say that there must be a three-month standstill period after we have properly notified the regulations to the Commission. If it wishes to look into this in more detail -- I hope that it will not -- there could be a further month of standstill before we can take matters further, so that is four months. We will then need to re-lay the regulations before the House. As she knows, under the negative procedure, which is what these will be subject to, there is a period during which they can be prayed against, which accounts for roughly another 40 days. If we add all that together, we come to roughly six months.

Wright apologised profusely to supporters of the scheme:

I recognise that many Members of the House and many people beyond it have campaigned passionately for age verification to come into force as soon as possible to ensure that children are protected from pornographic material they should not see. I apologise to them all for the fact that a mistake has been made that means these measures will not be brought into force as soon as they and I would like.

However, the law has not been received well by porn users. Parliament has generally shown no interest in the privacy and safety of porn users. In fact, much of the delay has been down to belatedly realising that the scheme might not get off the ground at all unless they at least pay a little lip service to the safety of porn users.

Even now, Wright dismissed people's privacy fears and concerns as if those raising them were all just deplorables bent on opposing child safety. He said:

However, there are also those who do not want these measures to be brought in at all, so let me make it clear that my statement is an apology for delay, not a change of policy or a lessening of this Government's determination to bring these changes about. Age verification for online pornography needs to happen. I believe that it is the clear will of the House and those we represent that it should happen, and that it is in the clear interests of our children that it must.

Wright compounded his point by simply not acknowledging that, given a choice, people would prefer not to hand over their ID. Voluntarily complying websites would have to take a major hit from customers who would prefer to seek out the safety of non-complying sites. Wright said:

I see no reason why, in most cases, they [websites] cannot begin to comply voluntarily. They had expected to be compelled to do this from 15 July, so they should be in a position to comply. There seems to be no reason why they should not.

In passing, Wright also mentioned how the government is trying to counter encrypted DNS, which reduces the capability of ISPs to block websites. Instead the Government will try to press the browser companies into doing its censorship dirty work for it:

It is important to understand changes in technology and the additional challenges they throw up, and she is right to say that the so-called D over H changes will present additional challenges. We are working through those now and speaking to the browsers, which is where we must focus our attention. As the hon. Lady rightly says, the use of these protocols will make it more difficult, if not impossible, for ISPs to do what we ask, but it is possible for browsers to do that. We are therefore talking to browsers about how that might practically be done, and the Minister and I will continue those conversations to ensure that these provisions can continue to be effective.

 

 

UK Internet Regulation Part II...

Open Rights Group reports on how the Online Harms Bill will harm free speech, justice and liberty


Link Here 18th June 2019

This report follows our research into current Internet content regulation efforts, which found a lack of accountable, balanced and independent procedures governing content removal, both formally and informally by the state.

There is a legacy of Internet regulation in the UK that does not comply with due process, fairness and fundamental rights requirements. This includes: bulk domain suspensions by Nominet at police request without prior authorisation; the lack of an independent legal authorisation process for Internet Watch Foundation (IWF) blocking at Internet Service Providers (ISPs) and in the future by the British Board of Film Classification (BBFC), as well as for Counter-Terrorism Internet Referral Unit (CTIRU) notifications to platforms of illegal content for takedown. These were detailed in our previous report.

The UK government now proposes new controls on Internet content, claiming that it wants to ensure the same rules online as offline. It says it wants harmful content removed, while respecting human rights and protecting free expression.

Yet proposals in the DCMS/Home Office White Paper on Online Harms will create incentives for Internet platforms such as Google, Twitter and Facebook to remove content without legal processes. This is not the same rules online as offline. It instead implies a privatisation of justice online, with the assumption that corporate policing must replace public justice for reasons of convenience. This goes against the advice of human rights standards that government has itself agreed to and against the advice of UN Special Rapporteurs.

The government as yet has not proposed any means to define the harms it seeks to address, nor identified any objective evidence base to show what in fact needs to be addressed. It instead merely states that various harms exist in society. The harms it lists are often vague and general. The types of content specified may be harmful in certain circumstances, but even with an assumption that some content is genuinely harmful, there remains no attempt to show how any restriction on that content might work in law. Instead, it appears that platforms will be expected to remove swathes of legal-but-unwanted content, with an as-yet-unidentified regulator given a broad duty to decide if a risk of harm exists. Legal action would follow non-compliance by a platform. The result is the state proposing censorship and sanctions for actors publishing material that it is legal to publish.

 

 

Offsite Comment: Bloody stupid idea...


Link Here 18th June 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Porn Block Demonstrates the Government Is More Concerned With Censorship Than Security

See article from gizmodo.co.uk

 

 

Offsite Article: Christian Concerns...


Link Here 15th June 2019
Who'd have thought that a Christian Campaign Group would be calling on its members to criticise the government's internet censorship bill in a consultation

See article from christianconcern.com

 

 

Strangling UK business and endangering people's personal data...

Internet companies slam the data censor's disgraceful proposal to require age verification for large swathes of the internet


Link Here 5th June 2019
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children
The Information Commissioner's Office has, for some bizarre reason, been given immense powers to censor the internet.

And in an early opportunity to exert its power, it has proposed a 'regulation' that would require strict age verification for nearly all mainstream websites that may have a few child readers and some material that may be deemed harmful to very young children, eg news websites that may have glamour articles or violent news images.

In a mockery of 'data protection', such websites would have to implement strict age verification, requiring people to hand over identity data to most of the websites in the world.

Unsurprisingly, much of the internet content industry is unimpressed. A six-week consultation on the new censorship rules has just closed, and according to the Financial Times:

Companies and industry groups have loudly pushed back on the plans, cautioning that they could unintentionally quash start-ups and endanger people's personal data. Google and Facebook are also expected to submit critical responses to the consultation.

Tim Scott, head of policy and public affairs at Ukie, the games industry body, said it was an inherent contradiction that the ICO would require individuals to give away their personal data to every digital service.

Dom Hallas, executive director at the Coalition for a Digital Economy (Coadec), which represents digital start-ups in the UK, said the proposals would result in a withdrawal of online services for under-18s by smaller companies:

The code is seen as especially onerous because it would require companies to provide up to six different versions of their websites to serve different age groups of children under 18.

This means an internet for kids largely designed by tech giants who can afford to build two completely different products. A child could access YouTube Kids, but not a start-up competitor.

Stephen Woodford, chief executive of the Advertising Association -- which represents companies including Amazon, Sky, Twitter and Microsoft -- said the ICO needed to conduct a full technical and economic impact study, as well as a feasibility study. He said the changes would have a wide and unintended negative impact on the online advertising ecosystem, reducing spend from advertisers and so revenue for many areas of the UK media.

An ICO spokesperson said:

We are aware of various industry concerns about the code. We'll be considering all the responses we've had, as well as engaging further where necessary, once the consultation has finished.

 

 

Updated: Tech companies criticise the government's Online Harms white paper...

The harms will be that British tech businesses will be destroyed so that politicians can look good for 'protecting the children'


Link Here 2nd June 2019
A scathing new report, seen by City A.M. and authored by the Internet Association (IA), which represents online firms including Google, Facebook and Twitter, has outlined a string of major concerns with plans laid out in the government Online Harms white paper last month.

The Online Harms white paper outlines a large number of internet censorship proposals hiding under the vague terminology of 'duties of care'.

Under the proposals, social media sites could face hefty fines or even a ban if they fail to tackle online harms such as inappropriate age content, insults, harassment, terrorist content and of course 'fake news'.

But the IA has branded the measures unclear and warned they could damage the UK's booming tech sector, with smaller businesses disproportionately affected. IA executive director Daniel Dyball said:

Internet companies share the ambition to make the UK one of the safest places in the world to be online, but in its current form the online harms white paper will not deliver that.

The proposals present real risks and challenges to the thriving British tech sector, and will not solve the problems identified.

The IA slammed the white paper over its use of the term duty of care, which it said would create legal uncertainty and be unmanageable in practice.

The lobby group also called for a more precise definition of which online services would be covered by regulation and greater clarity over what constitutes an online harm. In addition, the IA said the proposed measures could raise serious unintended consequences for freedom of expression.

And while most internet users favour tighter rules in some areas, particularly social media, people also recognise the importance of protecting free speech -- which is one of the internet's great strengths.

Update: Main points

2nd June 2019. See article from uk.internetassociation.org

The Internet Association paper sets out five key concerns held by internet companies:

  • "Duty of Care" has a specific legal meaning that does not align with the obligations proposed in the White Paper, creating legal uncertainty, and would be unmanageable;
  • The scope of the services covered by regulation needs to be defined differently, and more closely related to the harms to be addressed;
  • The category of "harms with a less clear definition" raises significant questions and concerns about clarity and democratic process;
  • The proposed code of practice obligations raise potentially dangerous unintended consequences for freedom of expression;
  • The proposed measures will damage the UK digital sector, especially start-ups, micro-businesses and small- and medium-sized enterprises (SMEs), and slow innovation.

 

 

Joint letter to Information Commissioner on age appropriate websites plan...

Pointing out that it is crazy for the data protection police to require internet users to hand over their private identity data to all and sundry (all in the name of child protection of course)


Link Here 31st May 2019
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children

Elizabeth Denham, Information Commissioner, Information Commissioner's Office,

Dear Commissioner Denham,

Re: The Draft Age Appropriate Design Code for Online Services

We write to you as civil society organisations who work to promote human rights, both offline and online. As such, we are taking a keen interest in the ICO's Age Appropriate Design Code. We are also engaging with the Government in its White Paper on Online Harms, and note the connection between these initiatives.

Whilst we recognise and support the ICO's aims of protecting and upholding children's rights online, we have severe concerns that as currently drafted the Code will not achieve these objectives. There is a real risk that implementation of the Code will result in widespread age verification across websites, apps and other online services, which will lead to increased data profiling of both children and adults, and restrictions on their freedom of expression and access to information.

The ICO contends that age verification is not a silver bullet for compliance with the Code, but it is difficult to conceive how online service providers could realistically fulfil the requirement to be age-appropriate without implementing some form of onboarding age verification process. The practical impact of the Code as it stands is that either all users will have to access online services via a sorting age-gate or adult users will have to access the lowest common denominator version of services with an option to age-gate up. This creates a de facto compulsory requirement for age-verification, which in turn puts in place a de facto restriction for both children and adults on access to online content.

Requiring all adults to verify they are over 18 in order to access everyday online services is a disproportionate response to the aim of protecting children online and violates fundamental rights. It carries significant risks of tracking, data breach and fraud. It creates digital exclusion for individuals unable to meet requirements to show formal identification documents. Where age-gating also applies to under-18s, this violation and exclusion is magnified. It will put an onerous burden on small-to-medium enterprises, which will ultimately entrench the market dominance of large tech companies and lessen choice and agency for both children and adults -- this outcome would be the antithesis of encouraging diversity and innovation.

In its response to the June 2018 Call for Views on the Code, the ICO recognised that there are complexities surrounding age verification, yet the draft Code text fails to engage with any of these. It would be a poor outcome for fundamental rights and a poor message to children about the intrinsic value of these for all if children's safeguarding was to come at the expense of free expression and equal privacy protection for adults, including adults in vulnerable positions for whom such protections have particular importance.

Mass age-gating will not solve the issues the ICO wishes to address with the Code and will instead create further problems. We urge you to drop this dangerous idea.

Yours sincerely,

Open Rights Group
Index on Censorship
Article19
Big Brother Watch
Global Partners Digital

 

 

Malevolent spirits, spooks and ghosts...

Human rights groups and tech companies unite in an open letter condemning GCHQ's Ghost Protocol suggestion to open a backdoor to snoop on 'encrypted' communication apps


Link Here 31st May 2019
Full story: Snooper's Charter...Tories re-start massive programme of communications snooping

To GCHQ

The undersigned organizations, security researchers, and companies write in response to the proposal published by Ian Levy and Crispin Robinson of GCHQ in Lawfare on November 29, 2018, entitled Principles for a More Informed Exceptional Access Debate. We are an international coalition of civil society organizations dedicated to protecting civil liberties, human rights, and innovation online; security researchers with expertise in encryption and computer science; and technology companies and trade associations, all of whom share a commitment to strong encryption and cybersecurity. We welcome Levy and Robinson's invitation for an open discussion, and we support the six principles outlined in the piece. However, we write to express our shared concerns that this particular proposal poses serious threats to cybersecurity and fundamental human rights including privacy and free expression.

The six principles set forth by GCHQ officials are an important step in the right direction, and highlight the importance of protecting privacy rights, cybersecurity, public confidence, and transparency. We especially appreciate the principles' recognition that governments should not expect unfettered access to user data, that the trust relationship between service providers and users must be protected, and that transparency is essential.

Despite this, the GCHQ piece outlines a proposal for silently adding a law enforcement participant to a group chat or call. This proposal to add a ghost user would violate important human rights principles, as well as several of the principles outlined in the GCHQ piece. Although the GCHQ officials claim that you don't even have to touch the encryption to implement their plan, the ghost proposal would pose serious threats to cybersecurity and thereby also threaten fundamental human rights, including privacy and free expression. In particular, as outlined below, the ghost proposal would create digital security risks by undermining authentication systems, by introducing potential unintentional vulnerabilities, and by creating new risks of abuse or misuse of systems. Importantly, it also would undermine the GCHQ principles on user trust and transparency set forth in the piece.

How the Ghost Proposal Would Work

The security in most modern messaging services relies on a technique called public key cryptography. In such systems, each device generates a pair of very large mathematically related numbers, usually called keys. One of those keys -- the public key -- can be distributed to anyone. The corresponding private key must be kept secure, and not shared with anyone. Generally speaking, a person's public key can be used by anyone to send an encrypted message that only the recipient's matching private key can unscramble. Within such systems, one of the biggest challenges to securely communicating is authenticating that you have the correct public key for the person you're contacting. If a bad actor can fool a target into thinking a fake public key actually belongs to the target's intended communicant, it won't matter that the messages are encrypted in the first place because the contents of those encrypted communications will be accessible to the malicious third party.

Encrypted messaging services like iMessage, Signal, and WhatsApp, which are used by well over a billion people around the globe, store everyone's public keys on the platforms' servers and distribute public keys corresponding to users who begin a new conversation. This is a convenient solution that makes encryption much easier to use. However, it requires every person who uses those messaging applications to trust the services to deliver the correct, and only the correct, public keys for the communicants of a conversation when asked.
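As a rough sketch of the mechanics (ours, using the Python cryptography library; it is not part of the letter and models no particular messaging service), each device generates a key pair and only the public half is ever shared:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each device generates a key pair. The private half never leaves the
# device; the public half may be uploaded to the service's key server,
# which hands it out to anyone starting a conversation with this user.
private_key = X25519PrivateKey.generate()
public_key = private_key.public_key()

public_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
print(public_bytes.hex())  # safe to share; the private key is not

# The trust problem described above: if the key server substitutes an
# attacker's public key for this one, messages meant for this device
# become readable by the attacker, even though they remain encrypted
# in transit.
```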

The protocols behind different messaging systems vary, and they are complicated. For example, in two-party communications, such as a reporter communicating with a source, some services provide a way to ensure that a person is communicating only with the intended parties. This authentication mechanism is called a safety number in Signal and a security code in WhatsApp (we will use the term safety number). They are long strings of numbers that are derived from the public keys of the two parties of the conversation, which can be compared between them -- via some other verifiable communications channel such as a phone call -- to confirm that the strings match. Because the safety number is per pair of communicators -- more precisely, per pair of keys -- a change in the value means that a key has changed, and that can mean that it's a different party entirely. People can thus choose to be notified when these safety numbers change, to ensure that they can maintain this level of authentication. Users can also check the safety number before each new communication begins, and thereby guarantee that there has been no change of keys, and thus no eavesdropper. Systems without a safety number or security code do not provide the user with a method to guarantee that the user is securely communicating only with the individual or group with whom they expect to be communicating. Other systems provide security in other ways. For example, iMessage has a cluster of public keys -- one per device -- that it keeps associated with an account corresponding to an identity of a real person. When a new device is added to the account, the cluster of keys changes, and each of the user's devices shows a notice that a new device has been added upon noticing that change.
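A simplified illustration of the idea (ours; not Signal's or WhatsApp's actual derivation): both parties hash their two public keys in a fixed order and compare the resulting digit string out-of-band:

```python
import hashlib

def toy_safety_number(pub_a: bytes, pub_b: bytes) -> str:
    """Derive a short digit string from both parties' public keys.
    A toy stand-in for Signal's safety number / WhatsApp's security code."""
    # Sort the keys so both parties compute the same value.
    digest = hashlib.sha256(b"".join(sorted((pub_a, pub_b)))).digest()
    digits = "".join(f"{byte:03d}" for byte in digest[:10])
    # Group into blocks of five digits for easy reading over a phone call.
    return " ".join(digits[i:i + 5] for i in range(0, len(digits), 5))

alice_pub = bytes.fromhex("11" * 32)  # stand-ins for real 32-byte keys
bob_pub = bytes.fromhex("22" * 32)
print(toy_safety_number(alice_pub, bob_pub))
# If either key changes -- say, an eavesdropper's key is swapped in --
# the number changes, and users who compare it will notice.
```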

The ghost key proposal put forward by GCHQ would enable a third party to see the plain text of an encrypted conversation without notifying the participants. But to achieve this result, their proposal requires two changes to systems that would seriously undermine user security and trust. First, it would require service providers to surreptitiously inject a new public key into a conversation in response to a government demand. This would turn a two-way conversation into a group chat where the government is the additional participant, or add a secret government participant to an existing group chat. Second, in order to ensure the government is added to the conversation in secret, GCHQ's proposal would require messaging apps, service providers, and operating systems to change their software so that it would 1) change the encryption schemes used, and/or 2) mislead users by suppressing the notifications that routinely appear when a new communicant joins a chat.
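A toy model of that second change (entirely ours; no real messaging protocol is modelled here): the service adds a participant to whose key everyone now encrypts, while the usual join notification is suppressed:

```python
# Toy group chat: the server controls membership and notifications.

class GroupChat:
    def __init__(self, members: set[str]):
        self.members = set(members)  # everyone encrypts to these members' keys

    def add_member(self, member: str, notify: bool = True) -> None:
        self.members.add(member)
        if notify:
            for m in self.members:
                print(f"notify {m}: {member} joined the chat")
        # With notify=False the existing members never learn that their
        # messages are now also being encrypted to the new participant.

chat = GroupChat({"alice", "bob"})
chat.add_member("ghost-listener", notify=False)  # the silent addition
print(chat.members)  # {'alice', 'bob', 'ghost-listener'}
```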

The Proposal Creates Serious Risks to Cybersecurity and Human Rights

The GCHQ's ghost proposal creates serious threats to digital security: if implemented, it will undermine the authentication process that enables users to verify that they are communicating with the right people, introduce potential unintentional vulnerabilities, and increase risks that communications systems could be abused or misused. These cybersecurity risks mean that users cannot trust that their communications are secure, as users would no longer be able to trust that they know who is on the other end of their communications, thereby posing threats to fundamental human rights, including privacy and free expression. Further, systems would be subject to new potential vulnerabilities and risks of abuse.

Integrity and Authentication Concerns

As explained above, the ghost proposal requires modifying how authentication works. Like the end-to-end encryption that protects communications while they are in transit, authentication is a critical aspect of digital security and the integrity of sensitive data. The process of authentication allows users to have confidence that the other users with whom they are communicating are who they say they are. Without reliable methods of authentication, users cannot know if their communications are secure, no matter how robust the encryption algorithm, because they have no way of knowing who they are communicating with. This is particularly important for users like journalists who need secure encryption tools to guarantee source protection and be able to do their jobs.

Currently the overwhelming majority of users rely on their confidence in reputable providers to perform authentication functions and verify that the participants in a conversation are the people they think they are, and only those people. The GCHQ's ghost proposal completely undermines this trust relationship and the authentication process.

Authentication is still a difficult challenge for technologists and is currently an active field of research. For example, providing a meaningful and actionable record about user key transitions presents several known open research problems, and key verification itself is an ongoing subject of user interface research. If, however, security researchers learn that authentication systems can and will be bypassed by third parties like government agencies, such as GCHQ, this will create a strong disincentive for continuing research in this critical area.

Potential for Introducing Unintentional Vulnerabilities

Beyond undermining current security tools and the system for authenticating the communicants in an encrypted chat, GCHQ's ghost proposal could introduce significant additional security threats. There are also outstanding questions about how the proposal would be effectively implemented.

The ghost proposal would introduce a security threat to all users of a targeted encrypted messaging application since the proposed changes could not be exposed only to a single target. In order for providers to be able to suppress notifications when a ghost user is added, messaging applications would need to rewrite the software that every user relies on. This means that any mistake made in the development of this new function could create an unintentional vulnerability that affects every single user of that application.

As security researcher Susan Landau points out, the ghost proposal involves changing how the encryption keys are negotiated in order to accommodate the silent listener, creating a much more complex protocol--raising the risk of an error. (That actually depends on how the algorithm works; in the case of iMessage, Apple has not made the code public.) A look back at recent news stories on unintentional vulnerabilities that are discovered in encrypted messaging apps like iMessage, and devices ranging from the iPhone to smartphones that run Google's Android operating system, lend credence to her concerns. Any such unintentional vulnerability could be exploited by malicious third parties.

Possibility of Abuse or Misuse of the Ghost Function

The ghost proposal also introduces an intentional vulnerability. Currently, the providers of end-to-end encrypted messaging applications like WhatsApp and Signal cannot see into their users' chats. By requiring an exceptional access mechanism like the ghost proposal, GCHQ and U.K. law enforcement officials would require messaging platforms to open the door to surveillance abuses that are not possible today.

At a recent conference on encryption policy, Cindy Southworth, the Executive Vice President at the U.S. National Network to End Domestic Violence (NNEDV), cautioned against introducing an exceptional access mechanism for law enforcement, in part, because of how it could threaten the safety of victims of domestic and gender-based violence. Specifically, she warned that [w]e know that not only are victims in every profession, offenders are in every profession...How do we keep safe the victims of domestic violence and stalking? Southworth's concern was that abusers could either work for the entities that could exploit an exceptional access mechanism, or have the technical skills required to hack into the platforms that developed this vulnerability.

While companies and some law enforcement and intelligence agencies would surely implement strict procedures for utilizing this new surveillance function, those internal protections are insufficient. And in some instances, such procedures do not exist at all. In 2016, a U.K. court held that because the rules for how the security and intelligence agencies collect bulk personal datasets and bulk communications data (under a particular legislative provision) were unknown to the public, those practices were unlawful. As a result of that determination, it asked the agencies - GCHQ, MI5, and MI6 - to review whether they had unlawfully collected data about Privacy International. The agencies subsequently revealed that they had unlawfully surveilled Privacy International.

Even where procedures exist for access to data that is collected under current surveillance authorities, government agencies have not been immune to surveillance abuses and misuses despite the safeguards that may have been in place. For example, a former police officer in the U.S. discovered that 104 officers in 18 different agencies across the state had accessed her driver's license record 425 times, using the state database as their personal Facebook service. Thus, once new vulnerabilities like the ghost protocol are created, new opportunities for abuse and misuse are created as well.

Finally, if U.K. officials were to demand that providers rewrite their software to permit the addition of a ghost U.K. law enforcement participant in encrypted chats, there is no way to prevent other governments from relying on this newly built system. This is of particular concern with regard to repressive regimes and any country with a poor record on protecting human rights.

The Proposal Would Violate the Principle That User Trust Must be Protected

The GCHQ proponents of the ghost proposal argue that [a]ny exceptional access solution should not fundamentally change the trust relationship between a service provider and its users. This means not asking the provider to do something fundamentally different to things they already do to run their business. However, the exceptional access mechanism that they describe in the same piece would have exactly the effect they say they wish to avoid: it would degrade user trust and require a provider to fundamentally change its service.

The moment users find out that a software update to their formerly secure end-to-end encrypted messaging application can now allow secret participants to surveil their conversations, they will lose trust in that service. In fact, we've already seen how likely this outcome is. In 2017, the Guardian published a flawed report in which it incorrectly stated that WhatsApp had a backdoor that would allow third parties to spy on users' conversations. Naturally, this inspired significant alarm amongst WhatsApp users, and especially users like journalists and activists who engage in particularly sensitive communications. In this case, the ultimate damage to user trust was mitigated because cryptographers and security organizations quickly understood and disseminated critical deficits in the report, and the publisher retracted the story.

However, if users were to learn that their encrypted messaging service intentionally built a functionality to allow for third-party surveillance of their communications, that loss of trust would understandably be widespread and permanent. In fact, when President Obama's encryption working group explored technical options for an exceptional access mechanism, it cited loss of trust as the primary reason not to pursue provider-enabled access to encrypted devices through current update procedures. The working group explained that this could be dangerous to overall cybersecurity, since its use could call into question the trustworthiness of established software update channels. Individual users aware of the risk of remote access to their devices, could also choose to turn off software updates, rendering their devices significantly less secure as time passed and vulnerabilities were discovered [but] not patched. While the proposal that prompted these observations was targeted at operating system updates, the same principles concerning loss of trust and the attendant loss of security would apply in the context of the ghost proposal.

Any proposal that undermines user trust penalizes the overwhelming majority of technology users while permitting those few bad actors to shift to readily available products beyond the law's reach. It is a reality that encryption products are available all over the world and cannot be easily constrained by territorial borders. Thus, while the few nefarious actors targeted by the law will still be able to avail themselves of other services, average users -- who may also choose different services -- will disproportionately suffer consequences of degraded security and trust.

The Ghost Proposal Would Violate the Principle That Transparency is Essential

Although we commend GCHQ officials for initiating this public conversation and publishing their ghost proposal online, if the U.K. were to implement this approach, these activities would be cloaked in secrecy. Although it is unclear which precise legal authorities GCHQ and U.K. law enforcement would rely upon, the Investigatory Powers Act grants U.K. officials the power to impose broad non-disclosure agreements that would prevent service providers from even acknowledging they had received a demand to change their systems, let alone the extent to which they complied. The secrecy that would surround implementation of the ghost proposal would exacerbate the damage to authentication systems and user trust as described above.

Conclusion

For these reasons, the undersigned organizations, security researchers, and companies urge GCHQ to abide by the six principles they have announced, abandon the ghost proposal, and avoid any alternate approaches that would similarly threaten digital security and human rights. We would welcome the opportunity for a continuing dialogue on these important issues.

Sincerely,

Civil Society Organizations

  • Access Now
  • Big Brother Watch
  • Blueprint for Free Speech
  • Center for Democracy & Technology
  • Defending Rights and Dissent
  • Electronic Frontier Foundation
  • Engine
  • Freedom of the Press Foundation
  • Government Accountability Project
  • Human Rights Watch
  • International Civil Liberties Monitoring Group
  • Internet Society
  • Liberty
  • New America's Open Technology Institute
  • Open Rights Group
  • Principled Action in Government
  • Privacy International
  • Reporters Without Borders
  • Restore The Fourth
  • Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic (CIPPIC)
  • TechFreedom
  • The Tor Project
  • X-Lab

Technology Companies and Trade Associations

  • ACT | The App Association
  • Apple
  • Google
  • Microsoft
  • Reform Government Surveillance (RGS is a coalition of technology companies)
  • Startpage.com
  • WhatsApp

Security and Policy Experts*

  • Steven M. Bellovin, Percy K. and Vida L.W. Hudson Professor of Computer Science; Affiliate faculty, Columbia Law School
  • Jon Callas, Senior Technology Fellow, ACLU
  • L Jean Camp, Professor of Informatics, School of Informatics, Indiana University
  • Stephen Checkoway, Assistant Professor, Oberlin College Computer Science Department
  • Lorrie Cranor, Carnegie Mellon University
  • Zakir Durumeric, Assistant Professor, Stanford University
  • Dr. Richard Forno, Senior Lecturer, UMBC, Director, Graduate Cybersecurity Program & Assistant Director, UMBC Center for Cybersecurity
  • Joe Grand, Principal Engineer & Embedded Security Expert, Grand Idea Studio, Inc.
  • Daniel K. Gillmor, Senior Staff Technologist, ACLU
  • Peter G. Neumann, Chief Scientist, SRI International Computer Science Lab
  • Dr. Christopher Parsons, Senior Research Associate at the Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto
  • Phillip Rogaway, Professor, University of California, Davis
  • Bruce Schneier
  • Adam Shostack, Author, Threat Modeling: Designing for Security
  • Ashkan Soltani, Researcher and Consultant - Former FTC CTO and Whitehouse Senior Advisor
  • Richard Stallman, President, Free Software Foundation
  • Philip Zimmermann, Delft University of Technology Cybersecurity Group

 

 

From the data 'protection' office that trains us to brainlessly click website consent boxes...

A new proposal forcing people to brainlessly hand over identity data to any Tom, Dick or Harry website that asks. Open Rights Group suggests we take a stand


Link Here 30th May 2019
Full story: ICO Age Appropriate Design...ICO calls for age assurance for websites accessed by children

New proposals to safeguard children will require everyone to prove they are over 18 before accessing online content.

These proposals - from the Information Commissioner's Office (ICO) - aim to protect children's privacy, but look like sacrificing the free expression of adults and children alike. But they are just plans: we believe and hope you can help the ICO strike the right balance, and abandon compulsory age gates, by making your voice heard.

The rules cover websites (including social media and search engines), apps, connected toys and other online products and services.

The ICO is requesting public feedback on its proposals until Friday 31 May 2019. Please urgently write to the consultation to tell them their plan goes too far! You can use these bullet points to help construct your own unique message:

  • In its current form, the Code is likely to result in widespread age verification across everyday websites, apps and online services for children and adults alike.

  • Age checks for everyone are a step too far, and could result in online content being removed or services withdrawn. Data protection regulators should stick to privacy: it's not the Information Commissioner's job to restrict adults' or children's access to content.

  • With no scheme to certify which providers can be trusted, third-party age verification technologies will lead to fakes and scams, putting people's personal data at risk.

  • Large age verification providers will seek to offer single-sign-in across a wide variety of online services, which could lead to intrusive commercial tracking of children and adults with devastating personal impacts in the event of a data breach.

 

 

Encrypted DNS which defeats ISP website blocking may delay age verification for porn...

Presumably GCHQ would rather not have half the population using technology that makes surveillance more difficult


Link Here 30th May 2019
The authorities have admitted for the first time they will be unable to enforce the porn block law if browsers such as Firefox and Chrome roll out DNS over HTTPS encryption.

The acknowledgement comes as senior representatives of ISPs privately told Daily Star Online they believe the porn block law could be delayed.

Earlier this month, this publication revealed Mozilla Firefox is thought to be pushing ahead with the roll out of DNS encryption, despite government concerns that it and ISPs will be unable to see which websites we are looking at and block them.

Speaking at the Internet Service Providers Association's Annual Conference last week, Mark Hoe, from the government's National Cyber Security Centre (NCSC), said the authorities would not be able to block websites that violate the porn block, and so would be unable to enforce the new law. He said:

The age verification -- although those are not directly affected [by DNS encryption] it does affect enforcement of access to non-compliant websites.

So, whereas we had previously envisaged that ISPs would be able to block access to non-compliant sites, [those] using DNS filtering techniques don't provide a way around that.

Hoe said that the browsers were responding to legitimate concerns after the Daily Star reported Google Chrome was thought to have changed its stance on the roll out of encrypted DNS.

However, industry insiders still think Firefox will press ahead, potentially leading to people who want to avoid the ban switching to their browser.

In an official statement, a government spokesman told Daily Star Online the law would come into force in a couple of months, as planned, but without explaining how it will enforce it.

Meanwhile a survey reveals three quarters of Brit parents are worried the porn block could leave them open to ID theft because they will be forced to hand over details to get age verified. AgeChecked surveyed 1,500 UK parents and found 73% would be apprehensive about giving personal information as verification online, for fear of how the data would be used.

 

 

Offsite Article: Do you want to be identified as a refusenik?...


Link Here 23rd May 2019
The government is quietly creating a digital ID card without us noticing

See article from news.sky.com

 

 

Can't we have laws that apply to everyone equally?...

Government rejects wide definition of 'islamophobia', considered a backdoor blasphemy law


Link Here 16th May 2019
Proposals for an official definition of 'Islamophobia' were rejected by the Government yesterday.

Downing Street said the suggested definition had not been broadly accepted, adding: This is a matter that will need further careful consideration.

The definition had been proposed by a parliamentary campaign group, the all-party parliamentary group on British Muslims. It wanted the Government to define Islamophobia as rooted in racism or a type of racism that targets expressions of Muslimness or perceived Muslimness.

Ministers are now expected to appoint two independent advisers to draw up a less legally problematic definition, the Times reported.

A parliamentary debate on anti-Muslim prejudice is due to be held today in Parliament.

The criticism of the definition has been published in an open letter to the Home Secretary Sajid Javid:

Open Letter: APPG Islamophobia Definition Threatens Civil Liberties

The APPG on British Muslims' definition of Islamophobia has now been adopted by the Labour Party, the Liberal Democrats Federal board, Plaid Cymru and the Mayor of London, as well as several local councils. All of this is occurring before the Home Affairs Select Committee has been able to assess the evidence for and against the adoption of the definition nationally.

Meanwhile the Conservatives are having their own debate about rooting out Islamophobia from the party.

According to the APPG definition, "Islamophobia is rooted in racism and is a type of racism that targets expressions of Muslimness or perceived Muslimness".

With this definition in hand, it is perhaps no surprise that following the horrific attack on a mosque in Christchurch, New Zealand, some place responsibility for the atrocity on the pens of journalists and academics who have criticised Islamic beliefs and practices, commented on or investigated Islamist extremism.

The undersigned unequivocally, unreservedly and emphatically condemn acts of violence against Muslims, and recognise the urgent need to deal with anti-Muslim hatred. However, we are extremely concerned about the uncritical and hasty adoption of the APPG's definition of Islamophobia.

This vague and expansive definition is being taken on without adequate scrutiny or proper consideration of its negative consequences for freedom of expression, and academic and journalistic freedom. The definition will also undermine social cohesion -- fuelling the very bigotry against Muslims which it is designed to prevent.

We are concerned that allegations of Islamophobia will be, indeed already are being, used to effectively shield Islamic beliefs and even extremists from criticism, and that formalising this definition will result in it being employed effectively as something of a backdoor blasphemy law.

The accusation of Islamophobia has already been used against those opposing religious and gender segregation in education, the hijab, halal slaughter on the grounds of animal welfare, LGBT rights campaigners opposing Muslim views on homosexuality, ex-Muslims and feminists opposing Islamic views and practices relating to women, as well as those concerned about the issue of grooming gangs. It has been used against journalists who investigate Islamism, Muslims working in counter-extremism, schools and Ofsted for resisting conservative religious pressure and enforcing gender equality.

Evidently abuse, harmful practices, or the activities of groups and individuals which promote ideas contrary to British values are far more likely to go unreported as a result of fear of being called Islamophobic. This will only increase if the APPG definition is formally adopted in law.

We are concerned that the definition will be used to shut down legitimate criticism and investigation. While the APPG authors have assured that it does not wish to infringe free speech, the entire content of the report, the definition itself, and early signs of how it would be used, suggest that it certainly would. Civil liberties should not be treated as an afterthought in the effort to tackle anti-Muslim prejudice.

The conflation of race and religion employed under the confused concept of 'cultural racism' expands the definition beyond anti-Muslim hatred to include 'illegitimate' criticism of the Islamic religion. The concept of Muslimness can effectively be transferred to Muslim practices and beliefs, allowing the report to claim that criticism of Islam is instrumentalised to hurt Muslims.

No religion should be given special protection against criticism. Like anti-Sikh, anti-Christian, or anti-Hindu hatred, we believe the term anti-Muslim hatred is more appropriate and less likely to infringe on free speech. A proliferation of 'phobias' is not desirable, as already stated by Sikh and Christian organisations who recognise the importance of free discussion about their beliefs.

Current legislative provisions are sufficient, as the law already protects individuals against attacks and unlawful discrimination on the basis of their religion. Rather than helping, this definition is likely to create a climate of self-censorship whereby people are fearful of criticising Islam and Islamic beliefs. It will therefore effectively shut down open discussions about matters of public interest. It will only aggravate community tensions further and is therefore no long term solution.

If this definition is adopted the government will likely turn to self-appointed 'representatives of the community' to define 'Muslimness'. This is clearly open to abuse. The APPG already entirely overlooked Muslims who are often considered to be "insufficiently Muslim" by other Muslims, moderates, liberals, reformers and the Ahmadiyyah, who often suffer persecution and violence at the hands of other Muslims.

For all these reasons, the APPG definition of Islamophobia is deeply problematic and unfit for purpose. Acceptance of this definition will only serve to aggravate community tensions and to inhibit free speech about matters of fundamental importance. We urge the government, political parties, local councils and other organisations to reject this flawed proposed definition.

  • Emma Webb, Civitas
  • Hardeep Singh, Network of Sikh Organisations (NSOUK)
  • Lord Singh of Wimbledon
  • Tim Dieppe, Christian Concern
  • Stephen Evans, National Secular Society (NSS)
  • Sadia Hameed, Council of Ex-Muslims of Britain (CEMB)
  • Prof. Paul Cliteur, candidate for the Dutch Senate, Professor of Law, University of Leiden
  • Brendan O'Neill, Editor of Spiked
  • Maajid Nawaz, Founder, Quilliam International
  • Rt. Rev'd Dr Gavin Ashenden
  • Pragna Patel, director of Southall Black Sisters
  • Professor Richard Dawkins
  • Rahila Gupta, author and Journalist
  • Peter Whittle, founder and director of New Culture Forum
  • Trupti Patel, President of Hindu Forum of Britain
  • Dr Lakshmi Vyas, President Hindu Forum of Europe
  • Harsha Shukla MBE, President Hindu Council of North UK
  • Tarang Shelat, President Hindu Council of Birmingham
  • Ashvin Patel, Chairman, Hindu Forum (Walsall)
  • Ana Gonzalez, partner at Wilson Solicitors LLP
  • Baron Desai of Clement Danes
  • Baroness Cox of Queensbury
  • Lord Alton of Liverpool
  • Bishop Michael Nazir-Ali
  • Ade Omooba MBE, Co-Chair National Church Leaders Forum (NCLF)
  • Wilson Chowdhry, British Pakistani Christian Association
  • Ashish Joshi, Sikh Media Monitoring Group
  • Satish K Sharma, National Council of Hindu Temples
  • Rumy Hasan, Academic and author
  • Amina Lone, Co-Director, Social Action and Research Foundation
  • Peter Tatchell, Peter Tatchell Foundation
  • Seyran Ates, Imam
  • Gina Khan, One Law for All
  • Mohammed Amin MBE
  • Baroness D'Souza
  • Michael Mosbacher, Acting Editor, Standpoint Magazine
  • Lisa-Marie Taylor, CEO FiLiA
  • Julie Bindel, journalist and feminist campaigner
  • Dr Adrian Hilton, academic
  • Neil Anderson, academic
  • Tom Holland, historian
  • Toby Keynes
  • Prof. Dr. Bassam Tibi, Professor Emeritus for International Relations, University of Goettingen
  • Dr Stephen Law, philosopher and author

 

 

Government minister blames online trolling for suicide...

It couldn't possibly be anything to do with her government's policies to impoverish people through austerity, globalisation, benefits sanctions, universal credit failures and the need for food banks


Link Here 15th May 2019
Jackie Doyle-Price is the government's first suicide prevention minister. She seems to believe that this complex and tragic social problem can somehow be cured by censorship and an end to free speech.

She said society had come to tolerate behaviour online which would not be tolerated on the streets. She urged technology giants including Google and Facebook to be more vigilant about removing harmful comments.

Doyle-Price told the Press Association:

It's great that we have these platforms for free speech and any one of us is free to generate our own content and put it up there, ...BUT... free speech is only free if it's not abused. I just think in terms of implementing their duty of care to their customers, the Wild West that we currently have needs to be a lot more regulated by them.

 

 

UK mass snooping laws can be investigated by UK courts...

Privacy International Wins Historic Victory at UK Supreme Court


Link Here 15th May 2019

Today, after a five year battle with the UK government, Privacy International has won at the UK Supreme Court. The UK Supreme Court has ruled that the Investigatory Powers Tribunal's (IPT) decisions are subject to judicial review in the High Court. The Supreme Court's judgment is a major endorsement and affirmation of the rule of law in the UK. The decision guarantees that when the IPT gets the law wrong, its mistakes can be corrected.

Key point:

  • UK Supreme Court rules that the UK spying tribunal - the IPT - cannot escape the oversight of the ordinary UK courts

The leading judgment of Lord Carnwath confirms the vital role of the courts in upholding the rule of law. The Government's reliance on an 'ouster clause' to try to remove the IPT from judicial review failed. The judgment confirms hundreds of years of legal precedent condemning attempts to remove important decisions from the oversight of the courts.

Privacy International's case stems from a 2016 decision by the IPT that the UK government may use sweeping 'general warrants' to engage in computer hacking of thousands or even millions of devices, without any approval by a judge or reasonable grounds for suspicion. The Government argued that it would be lawful in principle to use a single warrant signed off by a Minister (not a judge) to hack every mobile phone in a UK city - and the IPT agreed with the Government.

Privacy International challenged the IPT's decision before the UK High Court. The Government argued that even if the IPT had got the law completely wrong, or had acted unfairly, the High Court had no power to correct the mistake. That question went all the way to the UK Supreme Court, and resulted in today's judgment.

In his judgment, Lord Carnwath wrote:

"The legal issue decided by the IPT is not only one of general public importance, but also has possible implications for legal rights and remedies going beyond the scope of the IPT's remit. Consistent application of the rule of law requires such an issue to be susceptible in appropriate cases to review by ordinary courts."

Caroline Wilson Palow, Privacy International's General Counsel, said:

"Today's judgment is a historic victory for the rule of law. It ensures that the UK intelligence agencies are subject to oversight by the ordinary UK courts.

Countries around the world are currently grappling with serious questions regarding what power should reside in each branch of government. Today's ruling is a welcome precedent for all of those countries, striking a reasonable balance between executive, legislative and judicial power.

Today's ruling paves the way for Privacy International's challenge to the UK Government's use of bulk computer hacking warrants. Our challenge has been delayed for years by the Government's persistent attempt to protect the IPT's decisions from scrutiny. We are heartened that our case will now go forward."

Simon Creighton, of Bhatt Murphy Solicitors who acted for Privacy International, said:

"Privacy International's tenacity in pursuing this case has provided an important check on the argument that security concerns should be allowed to override the rule of law. Secretive national security tribunals are no exception. The Supreme Court was concerned that no tribunal, however eminent its judges, should be able to develop its own "local law". Today's decision welcomes the IPT back from its legal island into the mainstream of British law."

 

 

Tubes banned on the Tube...

Government announces new law to ban watching porn in public places


Link Here 13th May 2019

Watching pornography on buses is to be banned, ministers have announced. Bus conductors and the police will be given powers to tackle those who watch sexual material on mobile phones and tablets.

Ministers are also drawing up plans for a national database of claimed harassment incidents. It will record incidents at work and in public places, and is likely to cover wolf-whistling and cat-calling as well as more serious incidents.

In addition, the Government is considering whether to launch a public health campaign warning of the effects of pornography -- modelled on smoking campaigns.

 

 

The Porn Channel...

The Channel Islands is considering whether to join the UK in the censorship of internet porn


Link Here 13th May 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust

As of 15 July, people in the UK who try to access porn on the internet will be required to verify their age or identity online.

The new UK Online Pornography (Commercial Basis) Regulations 2018 law does not affect the Channel Islands but the States have not ruled out introducing their own regulations.

The UK Department for Censorship, Media and Sport said it was working closely with the Crown Dependencies to make the necessary arrangements for the extension of this legislation to the Channel Islands.

A spokeswoman for the States said they were monitoring the situation in the UK to inform our own policy development in this area.

 

 

Offsite Article: Careless lawmaking...


Link Here 6th May 2019
Detailed legal analysis of Online Harms white paper does not impress

See article from cyberleagle.com

 

 

The wrong type of press freedom...

Jeremy Hunt whinges about press freedom to mark World Press Freedom Day


Link Here 3rd May 2019
Foreign secretary Jeremy Hunt declared the Russian government-owned propaganda channel RT to be a weapon of disinformation in a speech to mark World Press Freedom Day.

The UK government is particularly annoyed at the channel for repeatedly deflecting blame from Russia for the poisoning attack in Salisbury.

Hunt noted that the Kremlin came up with over 40 separate narratives to explain that incident which RT broadcast to the world.

The foreign secretary said it remained a matter for Ofcom to independently decide whether the station should be closed down. At the end of last year RT was found guilty of seven breaches of the British broadcasting code in relation to programmes broadcast in the aftermath of the Salisbury novichok poisoning.

TV censor Ofcom has yet to announce sanctions for the breaches of the code.

It seems bizarre that the government should let the TV censor determine sanctions when these could have serious diplomatic consequences. Surely it is the government that should be leading the censorship of interference by a foreign power.

Hunt seems to have been doing a bit of anti-British propaganda himself. In a press release ahead of the speech he seemed to suggest that Britain and the west have fragile democracies. In the news release Hunt states:

Russia in the last decade very disappointingly seemed to have embarked on a foreign policy where their principal aim is to sow confusion and division and destabilise fragile democracies.

 

 

DNS Over HTTPS...

The UK government gets wind of a new internet protocol that will play havoc with their ability to block websites


Link Here 23rd April 2019
A DNS server translates the text name of a website into the numerical IP address. At the moment ISPs provide the DNS servers and they use this facility to block websites. If you want to access bannedwebsite.com the ISP simply refuses to tell your browser the IP address of the website you are seeking. The ISPs use this capability to implement blocks on terrorist/child abuse material, copyright infringing websites, porn websites without age verification, network level parental control blocking and many more things envisaged in the Government's Online Harms white paper.

At the moment DNS requests are transmitted in the clear, so even if you choose another DNS server the ISP can see what you are up to, intercept the message and apply its own censorship rules anyway.
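To make the mechanics concrete, here is a minimal sketch of my own (not from any official tool) comparing the answer from the system's default resolver with a public one. It assumes the third-party dnspython package and uses example.com as a placeholder hostname; the point is that both queries travel unencrypted, which is what gives the ISP its chance to observe or rewrite them.

    # Sketch: compare the default (ISP) resolver's answer with a public
    # resolver's. Both queries go out in the clear on port 53, so the ISP
    # can observe or rewrite either. Requires: pip install dnspython
    import socket

    import dns.resolver

    def resolve_via(nameserver, hostname):
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        return [rr.to_text() for rr in resolver.resolve(hostname, "A")]

    hostname = "example.com"  # placeholder
    print("system resolver:", socket.gethostbyname(hostname))
    print("public resolver:", resolve_via("1.1.1.1", hostname))
    # A mismatch, or an answer pointing at a block page, is a sign of
    # the DNS-level filtering described above.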

This is all about to change, as the internet authorities have introduced a change meaning that DNS requests can now be encrypted using the same standard encryption as used by https. The new protocol option is known as DNS Over HTTPS, or DOH.

The address being requested cannot be monitored under several newer protocols, DNS over TLS and DNSCrypt, but DNS Over HTTPS goes one step further in that ISPs cannot even detect that it is a DNS request at all. It appears exactly the same as a standard HTTPS request for website content. This prevents the authorities from simply blocking all DNS Over HTTPS requests; if they tried, they would have to block all https websites.

There's nothing to stop users from sticking with their ISP's DNS and submitting to all the familiar censorship policies. However, if your browser allows, you can ask it to use a non-censoring DNS server over HTTPS. There are already plenty of servers out there to choose from, but it is down to the browser to define the choice available to you. Firefox already allows you to select its own encrypted DNS server. Google is not far behind with its Chrome browser.
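As an illustration of how simple DOH is at the application level, here is a minimal sketch (mine, not from the article) using Cloudflare's public DNS-over-HTTPS JSON endpoint; any resolver offering a JSON API would work the same way. To the ISP, this looks like any other https connection to cloudflare-dns.com, with the queried hostname hidden inside the encryption.

    # Sketch: a DNS lookup carried over ordinary HTTPS (DOH). The ISP sees
    # only an encrypted https request, not the hostname being resolved.
    import requests

    def doh_lookup(hostname, record_type="A"):
        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": hostname, "type": record_type},
            headers={"Accept": "application/dns-json"},
            timeout=10,
        )
        resp.raise_for_status()
        return [answer["data"] for answer in resp.json().get("Answer", [])]

    print(doh_lookup("openrightsgroup.org"))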

At the moment Firefox already allows those with a techie bent to opt for the Firefox DOH, but Firefox recently made waves by suggesting that it would soon default to using its own server and make it a techie change to opt out and revert to the ISP's DNS. Perhaps this sounds a little unlikely.

The Government has got well wound up by the fear of losing censorship control over UK internet users, so no doubt will be calling in people from Firefox and Chrome to try to get them to enforce state censorship. However it may not be quite so easy. The new protocol allows for anyone to offer non-censoring (or even censoring) DOH servers. If Firefox can be persuaded to toe the government line then other browsers can step in instead.

The UK Government, broadband ISPs and the National Cyber Security Centre (NCSC) are now set to meet on the 8th May 2019 in order to discuss Google's forthcoming implementation of encrypted DOH. It should be an interesting meeting, but I bet they'll never publish the minutes.

I rather suspect that the Government has shot itself in the foot over this with its requirement for porn users to identify themselves before being able to access porn. Suddenly it will have spurred millions of users to take an interest in censorship circumvention to avoid endangering themselves, and probably a couple of million more who will want to avoid the blocks because they are too young. DNS, DOH, VPNs, Tor and the like will soon become everyday jargon.

 

 

Is it safe?...

Does the BBFC AV kite mark mean that an age verification service is safe?


Link Here 22nd April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
The BBFC has published a detailed standard for age verifiers to get tested against to obtain a green AV kite mark, aiming to convince users that their identity data and porn browsing history are safe.

I have read through the document and conclude that it is indeed a rigorous standard that I guess will be pretty tough for companies to obtain. I would say it would be almost impossible for a small or even medium-sized website to achieve the standard, which more or less means that using an age verification service is mandatory.

The standard has lots of good stuff about physical security of data and vetting of staff access to the data.

Age verifier AVSecure commented:

We received the final documents and terms for the BBFC certification scheme for age verification providers last Friday. This has had significant input from various Government bodies including DCMS (Dept for Culture, Media & Sport), NCC Group plc (expert security and audit firm), GCHQ (UK Intelligence & Security Agency) ICO (Information Commissioner's Office) and of course the BBFC (the regulator).

The scheme appears to have very strict rules.

It is a multi-disciplined scheme which includes penetration testing, full and detailed audits, operational procedures over and above GDPR and the DPA 2018 (Data Protection Act). There are onerous reporting obligations with inspection rights attached. It is also a very costly scheme when compared to other quality standard schemes, again perhaps designed to deter the faint of heart or shallow of pocket.

Consumers will likely be advised against using any systems or methods where the prominent green AV accreditation kitemark symbol is not displayed.

 

But will the age verifier be logging your ID data and browsing history?

And the answer is very hard to pin down from the document. At first read it suggests that minimal data will be retained, but a more sceptical read, connecting a few paragraphs together, suggests that the verifier will be required to keep extensive records about the user's porn activity.

Maybe this is a reflection of a recent change of heart. Comments from AVSecure suggested that the BBFC/Government originally mandated a log of user activity but recently decided that keeping a log or not is down to the age verifier.

As an example of the rather evasive requirements:

8.5.9 Physical Location

Personal data relating to the physical location of a user shall not be collected as part of the age-verification process unless required for fraud prevention and detection. Personal data relating to the physical location of a user shall only be retained for as long as required for fraud prevention and detection.

Here it sounds like keeping tabs on location is optional, but another paragraph suggests otherwise:

8.4.14 Fraud Prevention and Detection

Real-time intelligent monitoring and fraud prevention and detection systems shall be used for age-verification checks completed by the age-verification provider.

Now it seems that the fraud prevention is mandatory, and so a location record is mandatory after all.

Also the use of the phrase 'only be retained for as long as required for fraud prevention and detection' seems a little misleading too, as in reality fraud prevention will be required for as long as the customer keeps using the service. This may as well be forever.

There are other statements that sound good at first read, but don't really offer anything substantial:

8.5.6 Data Minimisation

Only the minimum amount of personal data required to verify a user's age shall be collected.

But if the minimum is to provide name and address plus, e.g., a driver's licence number or a credit card number, then the minimum is actually pretty much all of it. In fact only the porn pass methods offer any scope for 'truly minimal' data collection. Perhaps minimal data also applies to the verified mobile phone method: although the phone company probably knows your identity, maybe it won't need to pass it on to the age verifier.

 

What does the porn site get to know?

One rare unequivocal and reassuring statement is:

8.5.8 Sharing Results

Age-verification providers shall only share the result of an age-verification check (pass or fail) with the requesting website.

So it seems that identity details won't be passed to the websites themselves.

However the converse is not so clear:

8.5.6 Data Minimisation

Information about the requesting website that the user has visited shall not be collected against the user's activity.

Why add the phrase 'against the user's activity'? This is worded such that information about the requesting website could indeed be collected for another reason, fraud detection maybe.

Maybe the scope for an age verifier to maintain a complete log of porn viewing is limited more by the practical requirement for a website to record a successful age verification in a cookie such that the age verifier only gets to see one interaction with each website.
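As a rough illustration of that cookie pattern (my own sketch, not anything specified by the BBFC standard), a site might do something like the following. The cookie name and the verifier URL are hypothetical placeholders, and a real deployment would use a signed, expiring token rather than a bare flag.

    # Sketch of the cookie pattern described above, using Flask.
    # "av_passed" and the verifier URL are made-up placeholders.
    from flask import Flask, make_response, redirect, request

    app = Flask(__name__)

    @app.route("/video")
    def video():
        if request.cookies.get("av_passed") != "yes":
            # First visit: send the user off to the age verifier, once.
            return redirect("https://age-verifier.example/check")
        return "age-restricted content"

    @app.route("/av-callback")
    def av_callback():
        # The verifier redirects back here after a successful check;
        # thereafter the site never queries the verifier again, so the
        # verifier sees only one interaction per website.
        resp = make_response(redirect("/video"))
        resp.set_cookie("av_passed", "yes", secure=True, httponly=True)
        return resp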

No doubt we shall soon find out whether the government wants a detailed log of porn viewed, as it will be easy to spot if a website queries the age verifier for every film you watch.
 

Fraud Detection

And what about all this reference to fraud detection? Presumably the BBFC/Government is a little worried that passwords and accounts will be shared by enterprising kids. But on the other hand it may make life tricky for those using shared devices, or perhaps those who appear to move from London to New York in an instant, when in fact this is totally normal for someone using a VPN on a PC.


Wrap up

The BBFC/Government have moved on a long way from the early days, when the lawmakers created the law without any real protection for porn users and the BBFC first proposed that this could be rectified by asking porn companies to voluntarily follow 'best practice' in keeping people's data safe.

A definite improvement now, but I think I will stick to my VPN.

 

 

Updated: Community Spirit...

It's good to see the internet community pull together to work around censorship via age verification


Link Here 22nd April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
A TV channel, a porn producer, an age verifier and maybe even the government got together this week to put out a live test of age verification. The test was implemented on a specially created website featuring a single porn video.

The test required a well advertised website to provide enough traffic of viewers positively wanting to see the content. Channel 4 obliged with its series Mums Make Porn. The series followed a group of mums making a porn video that they felt would be more sex positive and less harmful to kids than the typical porn offerings currently on offer.

The mums did a good job and produced a decent video with a more loving and respectful interplay than is the norm. The video however is still proper hardcore porn and there is no way it could be broadcast on Channel 4. So the film was made available, free of charge, on its own dedicated website complete with an age verification requirement.

The website was announced as a live test for AgeChecked software to see how age verification would pan out in practice. It featured the following options for age verification:

  1. entering full credit card details + email
  2. entering driving licence number + name and address + email
  3. mobile phone number + email (the phone must have been verified as 18+ by the service provider and must be ready to receive an SMS message containing login details)

Nothing has been published in detail about the aims of the test but presumably they were interested in the basic questions such as:

  • What proportion of potential viewers will be put off by the age verification?
  • What proportion of viewers would be stupid enough to enter their personal data?
  • Which options of identification would be preferred by viewers?

 

The official test 'results'

Alastair Graham, CEO of AgeChecked, provided a few early answers, inevitably claiming that:

The results of this first mainstream test of our software were hugely encouraging.

He went on to claim that customers are willing to participate in the process, but noted that the verified phone number method emerged as by far the most popular method of verification. He said that this finding would be a key part of this process moving forward.

Reading between the lines perhaps he was saying that there wasn't much appetite for handing over detailed personal identification data as required by the other two methods.

I suspect that we will never get to hear more from AgeChecked especially about any reluctance of people to identify themselves as porn viewers.

 

The unofficial test results

Maybe they were also interested in other questions too:

  • Will people try and work around the age verification requirements?
  • If people find weaknesses in the age verification defences, will they pass on their discoveries to others?

Interestingly, the age verification requirement was easily sidestepped by those with a modicum of knowledge about downloading videos from websites such as YouTube and PornHub. The age verification mechanism effectively only hid the start button from view. The actual video remained available for download, whether people age verified or not. All it took was a little examination of the page code to locate the video. There are several tools that allow this: video downloader addons, file downloaders, or just using the browser's built-in debugger to look at the page code.
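For a flavour of how little is involved, here is a minimal sketch (mine, not the method actually used by the testers) that scans a page's HTML source for direct video file URLs. It assumes the video sits in a plain src attribute rather than being loaded dynamically by JavaScript; downloader addons and the browser debugger automate essentially this kind of search.

    # Sketch: find video file URLs in a page's HTML source, the kind of
    # page-code examination described above.
    import re

    import requests

    def find_video_urls(page_url):
        html = requests.get(page_url, timeout=10).text
        # Match src attributes pointing at common video container formats.
        return re.findall(r'src="([^"]+\.(?:mp4|webm|m3u8))"', html)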

Presumably the code for the page was knocked up quickly so this flaw could have been a simple oversight that is not likely to occur in properly constructed commercial websites. Or perhaps the vulnerability was deliberately included as part of the test to see if people would pick up on it.

However it did identify that there is a community of people willing to stress test age verification restrictions and see if workarounds can be found and shared.

I noted on Twitter that several people had posted about the ease of downloading the video and had suggested a number of tools or methods that enabled this.

There was also an interesting article posted on achieving age verification using an expired credit card. Maybe that is not so catastrophic, as it still identifies a cardholder as over 18 even if it cannot be used to make a payment. But of course it may open new possibilities for misuse of old data. Note that random numbers are unlikely to work because of security algorithms. Presumably age verification companies could strengthen the security by testing that a small transaction works, but intuitively this would have significant cost implications. I guess that to achieve any level of take up, age verification needs to be cheap for both websites and viewers.

 

Community Spirit

It was very heartening to see how many people were helpfully contributing their thoughts about testing the age verification software.

Over the course of a couple of hours reading, I learnt an awful lot about how websites hide and protect video content, and what tools are available to see through the protection. I suspect that many others will soon be doing the same... and I also suspect that young minds will be far more adept than I at picking up such knowledge.

 

A final thought

I feel a bit sorry for small websites who sell content. It adds a whole new level of complexity, as a currently open preview area now needs to be locked away behind an age verification screen. Many potential customers will be put off by having to jump through hoops just to see the preview material. To then ask them to enter all their credit card details again to subscribe may be a hurdle too far.

Update: The Guardian reports that age verification was easily circumvented

22nd April 2019. See article from theguardian.com

The Guardian reported that the credit card check used by AgeChecked could be easily fooled by generating a totally false credit card number. Note that a random number will not work as there is a well known sum check algorithm which invalidates a lot of random numbers. But anyone who knows or looks up the algorithm would be able to generate acceptable credit card numbers that would at least defeat AgeChecked.
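The 'well known sum check algorithm' is the Luhn checksum used on payment card numbers; a random 16-digit string fails it roughly nine times out of ten. A minimal sketch:

    # The Luhn checksum: double every second digit from the right,
    # subtract 9 from any result over 9, and require the total to be
    # divisible by 10.
    def luhn_valid(number):
        digits = [int(d) for d in number if d.isdigit()]
        total = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("4111111111111111"))  # True: a classic test number
    print(luhn_valid("4111111111111112"))  # False: fails the checksum

Note that Luhn is only an integrity check, not proof that a card exists, which is why a syntactically valid but fabricated number was enough to fool the check.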

Such generated numbers would have defeated AgeChecked, had it not now totally removed the credit card check option from its choice of options.

Still, the damage was done: the widely distributed Guardian article established doubts about the age verification process.

Of course the workaround is not exactly trivial, so age verification will still stop younger kids from 'stumbling on porn', which seems to be the main fall-back position of this entire sorry scheme.

 

 

Bad Research And Block Heads...

David Flint looks into flimsy porn evidence used to justify government censorship


Link Here 22nd April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust

 

 

Offsite Article: A government PR failure...


Link Here 22nd April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
John Carr, a leading supporter of the government's porn censorship regime, is a little exasperated by its negative reception in the media

See article from johnc1912.wordpress.com

 

 

Offsite Article: A good summary of where we are at...


Link Here 21st April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Politics, privacy and porn: the challenges of age-verification technology. By Ray Allison

See article from computerweekly.com

 

 

Offsite Article: Users Behaving Badly...


Link Here 20th April 2019
An interesting look at the government's Online Harms white paper proposing extensive internet censorship for the UK

See article from cyberleagle.com

 

 

Offsite Article: Age verification won't block porn...


Link Here 18th April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
But it will spell the end of ethical porn. By Girl on the Net

See article from theguardian.com

 

 

Get a VPN or fill your boots now! There's 3 months left for unhindered porn downloading...

The government announces that its internet porn censorship scheme will come into force on 15th July 2019


Link Here 17th April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
The UK will become the first country in the world to bring in age-verification for online pornography when the measures come into force on 15 July 2019.

It means that commercial providers of online pornography will be required by law to carry out robust age-verification checks on users, to ensure that they are 18 or over.

Websites that fail to implement age-verification technology face having payment services withdrawn or being blocked for UK users.

The British Board of Film Classification (BBFC) will be responsible for ensuring compliance with the new laws. They have confirmed that they will begin enforcement on 15 July, following an implementation period to allow websites time to comply with the new standards.

Minister for Digital Margot James said that she wanted the UK to be the most censored place in the world to be online:

Adult content is currently far too easy for children to access online. The introduction of mandatory age-verification is a world-first, and we've taken the time to balance privacy concerns with the need to protect children from inappropriate content. We want the UK to be the safest place in the world to be online, and these new laws will help us achieve this.

Government has listened carefully to privacy concerns and is clear that age-verification arrangements should only be concerned with verifying age, not identity. In addition to the requirement for all age-verification providers to comply with General Data Protection Regulation (GDPR) standards, the BBFC have created a voluntary certification scheme, the Age-verification Certificate (AVC), which will assess the data security standards of AV providers. The AVC has been developed in cooperation with industry, with input from government.

Certified age-verification solutions which offer these robust data protection conditions will be certified following an independent assessment and will carry the BBFC's new green 'AV' symbol. Details will also be published on the BBFC's age-verification website, ageverificationregulator.com so consumers can make an informed choice between age-verification providers.

BBFC Chief Executive David Austin said:

The introduction of age-verification to restrict access to commercial pornographic websites to adults is a ground breaking child protection measure. Age-verification will help prevent children from accessing pornographic content online and means the UK is leading the way in internet safety.

On entry into force, consumers will be able to identify that an age-verification provider has met rigorous security and data checks if they carry the BBFC's new green 'AV' symbol.

The change in law is part of the Government's commitment to making the UK the safest place in the world to be online, especially for children. It follows last week's publication of the Online Harms White Paper which set out clear responsibilities for tech companies to keep UK citizens safe online, how these responsibilities should be met and what would happen if they are not.

 

 

Proven privacy concerns...

When spouting on about keeping porn users' data safe, the DCMS proves that it simply can't be trusted by revealing journalists' private emails


Link Here 17th April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
 
  
Believe us, we can cure all society's ills
 

A government department responsible for data protection laws has shared the private contact details of hundreds of journalists.

The Department for Censorship, Media and Sport emailed more than 300 recipients in a way that allowed their addresses to be seen by other people.

The email - seen by the BBC - contained a press release about age verifications for adult websites.

Digital Minister Margot James said the incident was embarrassing. She added:

It was an error and we're evaluating at the moment whether that was a breach of data protection law.

In the email sent on Wednesday, the department claimed new rules would offer robust data protection conditions, adding: Government has listened carefully to privacy concerns.

 

 

Does destroying the livelihoods of parents protect the children?...

ICO announces another swathe of internet censorship and age verification requirements in the name of 'protecting the children'


Link Here 15th April 2019
This is the biggest censorship event of the year. It is going to destroy the livelihoods of many. It is framed as if it were targeted at Facebook and the like, to sort out their abuse of user data, particularly for kids.

However the kicker is that the regulations will equally apply to all UK-accessed websites that earn at least some money and process user data in some way or other. Even small websites will then be required to default to treating all their readers as children, and only allow more meaningful interaction with them if they verify themselves as adults. The default kids-only mode bans likes, comments, suggestions, targeted advertising etc, even for non-adult content.

Furthermore the ICO expects websites to formally comply with the censorship rules using market researchers, lawyers, data protection officers, expert consultants, risk assessors and all the sort of people that cost a grand a day.

Of course only the biggest players will be able to afford the required level of red tape and instead of hitting back at Facebook, Google, Amazon and co for misusing data, they will further add to their monopoly position as they will be the only companies big enough to jump over the government's child protection hurdles.

Another dark day for British internet users and businesses.

The ICO writes in a press release:

Today we're setting out the standards expected of those responsible for designing, developing or providing online services likely to be accessed by children, when they process their personal data.

Parents worry about a lot of things. Are their children eating too much sugar, getting enough exercise or doing well at school? Are they happy?

In this digital age, they also worry about whether their children are protected online. You can log on to any news story, any day to see just how children are being affected by what they can access from the tiny computers in their pockets.

Last week the Government published its white paper covering online harms.

Its proposals reflect people's growing mistrust of social media and online services. While we can all benefit from these services, we are also increasingly questioning how much control we have over what we see and how our information is used.

There has to be a balancing act: protecting people online while embracing the opportunities that digital innovation brings.

And when it comes to children, that's more important than ever. In an age when children learn how to use a tablet before they can ride a bike, making sure they have the freedom to play, learn and explore in the digital world is of paramount importance.

The answer is not to protect children from the digital world, but to protect them within it.

So today we're setting out the standards expected of those responsible for designing, developing or providing online services likely to be accessed by children, when they process their personal data. Age appropriate design: a code of practice for online services has been published for consultation.

When finalised, it will be the first of its kind and set an international benchmark.

It will leave online service providers in no doubt about what is expected of them when it comes to looking after children's personal data. It will help create an open, transparent and protected place for children when they are online.

Organisations should follow the code and demonstrate that their services use children's data fairly and in compliance with data protection law. Those that don't, could face enforcement action including a fine or an order to stop processing data.

Introduced by the Data Protection Act 2018, the code sets out 16 standards of age appropriate design for online services like apps, connected toys, social media platforms, online games, educational websites and streaming services, when they process children's personal data. It's not restricted to services specifically directed at children.

The code says that the best interests of the child should be a primary consideration when designing and developing online services. It says that privacy must be built in and not bolted on.

Settings must be "high privacy" by default (unless there's a compelling reason not to); only the minimum amount of personal data should be collected and retained; children's data should not usually be shared; geolocation services should be switched off by default. Nudge techniques should not be used to encourage children to provide unnecessary personal data, weaken or turn off their privacy settings or keep on using the service. It also addresses issues of parental control and profiling.

The code is out for consultation until 31 May. We will draft a final version to be laid before Parliament and we expect it to come into effect before the end of the year.

Our Code of Practice is a significant step, but it's just part of the solution to online harms. We see our work as complementary to the current initiatives on online harms, and look forward to participating in discussions regarding the Government's white paper.

The proposals are now open for public consultation:

The Information Commissioner is seeking feedback on her draft code of practice Age appropriate design -- a code of practice for online services likely to be accessed by children (the code).

The code will provide guidance on the design standards that the Commissioner will expect providers of online 'Information Society Services' (ISS), which process personal data and are likely to be accessed by children, to meet.

The code is now out for public consultation and will remain open until 31 May 2019. The Information Commissioner welcomes feedback on the specific questions set out below.

You can respond to this consultation via our online survey, or you can download the document below and email it to ageappropriatedesign@ico.org.uk.

Alternatively, print off the document and post to:

Age appropriate design code consultation
Policy Engagement Department
Information Commissioner's Office
Wycliffe House
Water Lane
Wilmslow
Cheshire
SK9 5AF

 

 

Comments: An unelected quango introducing draconian limitations on the internet...

Responses to the ICO internet censorship proposals


Link Here 15th April 2019

Comment: Entangling start ups in red tape

See article from adamsmith.org

Today the Information Commissioner's Office announced a consultation on a draft Code of Practice to help protect children online.

The code forbids the creation of profiles on children, and bans data sharing and nudges of children. Importantly, the code also requires everyone be treated like a child unless they undertake robust age-verification.

The ASI believes that this code will entangle start-ups in red tape, and will inevitably end up with everyone being treated like children, or will undermine user privacy by requiring the collection of credit card details or passports from every user.

Matthew Lesh, Head of Research at free market think tank the Adam Smith Institute, says:

This is an unelected quango introducing draconian limitations on the internet with the threat of massive fines.

This code requires all of us to be treated like children.

An internet-wide age verification scheme, as required by the code, would seriously undermine user privacy. It would require the likes of Facebook, Google and thousands of other sites to repeatedly collect credit card and passport details from millions of users. This data collection risks our personal information and online habits being tracked, hacked and exploited.

There are many potential unintended consequences. The media could be forced to censor swathes of stories not appropriate for young people. Websites that cannot afford to develop 'children-friendly' services could just block children. It could force start-ups to move to other countries that don't have such stringent laws.

This plan would seriously undermine the business model of online news and many other free services by making it difficult to target advertising to viewer interests. This would be both worse for users, who are less likely to get relevant advertisements, and journalism, which is increasingly dependent on the revenues from targeted online advertising.

The Government should take a step back. It is really up to parents to keep their children safe online.

Offsite Comment: Web shake-up could force ALL websites to treat us like children

15th April 2019. See article from dailymail.co.uk

The information watchdog has been accused of infantilising web users, in a draconian new code designed to make the internet safer for children.

Web firms will be forced to introduce strict new age checks on their websites -- or treat all their users as if they are children, under proposals published by the Information Commissioner's Office today.

The rules are so stringent that critics fear people could end up being forced to demonstrate their age for virtually every website they visit, or have the services that they can access limited as if they are under 18.

 

 

Prime suspects...

The Government is already considering its next step for increased internet censorship


Link Here 15th April 2019
The ink has not yet dried on two enormous packages of internet censorship law, and yet the Government is already planning the next.

The Government is considering an overhaul of censorship rules for Netflix and Amazon Prime Video. The Daily Telegraph understands that the Department for Censorship, Media and Sport is looking at whether censorship rules for on-demand video streaming sites should be extended to match those suffered by traditional broadcasters.

Censorship Secretary Jeremy Wright had signalled this could be a future focus for DCMS last month, saying rules for Netflix and Amazon Prime Video were not as robust as they were for other broadcasters.

Public service broadcasters currently have set requirements to commission content from within the UK. The BBC, for example, must ensure that UK-made shows make up a substantial proportion of its content, and around 50% of that content must come from outside the M25 area.

No such rules over specific UK-made content currently apply to Netflix and Amazon Prime Video, though. The European Union is currently finalising the details of rules for the bloc, which require streaming companies to ensure at least 30% of their libraries are dedicated to content made by EU member states.

 

 

More like China, Russia or North Korea...

Tory MPs line up to criticise their own government's totalitarian-style internet censorship proposals


Link Here 14th April 2019

Ministers are facing a growing and deserved backlash against draconian new web laws which will lead to totalitarian-style censorship.

The stated aim of the Online Harms White Paper is to target offensive material such as terrorists' beheading videos. But under the document's provisions, the UK internet censor would have complete discretion to decide what is harmful, hateful or bullying -- potentially including coverage of contentious issues such as transgender rights.

After MPs lined up to demand a rethink, Downing Street has put pressure on Culture Secretary Jeremy Wright to narrow the definition of harm in order to exclude typical editorial content.

MPs have been led by Jacob Rees-Mogg, who said last night that while it was obviously a worthwhile aim to rid the web of the evils of terrorist propaganda and child pornography, it should not be at the expense of crippling a free Press and gagging healthy public expression. He added that the regulator could be used as a tool of repression by a future Jeremy Corbyn-led government, saying:

Sadly, the Online Harms White Paper appears to give the Home Secretary of the day the power to decide the rules as to which content is considered palatable. Who is to say that less scrupulous governments in the future would not abuse this new power?

I fear this could have the unintended consequence of reputable newspaper websites being subjected to quasi-state control. British newspapers' freedom to hold authority to account is an essential bulwark of our democracy.

We must not now allow what amounts to a Leveson-style state-controlled regulator for the Press by the back door.

He was backed by Charles Walker, vice-chairman of the Tory Party's powerful backbench 1922 Committee, who said:

We need to protect people from the well-documented evils of the internet -- not in order to suppress views or opinions to which they might object.

In last week's Mail on Sunday, former Culture Secretary John Whittingdale warned that the legislation was more usually associated with autocratic regimes including those in China, Russia or North Korea.

Tory MP Philip Davies joined the criticism last night, saying:

Of course people need to be protected from the worst excesses of what takes place online. But equally, free speech in a free country is very, very important too. It's vital we strike the right balance. While I have every confidence that Sajid Javid as Home Secretary would strike that balance, can I have the same confidence that a future Marxist government would not abuse the proposed new powers?

And Tory MP Martin Vickers added:

While we must take action to curb the unregulated wild west of the internet, we must not introduce state control of the Press as a result.

 

 

Well if they legislate without giving a shit about the safety and privacy of porn users...

WebUser magazine kindly informs readers how to avoid being endangered by age verification


Link Here 13th April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
The legislators behind the Digital Economy Act couldn't be bothered to include any provisions for websites and age verifiers to keep the identity and browsing history of porn users safe. It has now started to dawn on the authorities that this was a mistake. They are currently implementing a voluntary kitemark scheme to try and assure users that porn websites' and age verifiers' claims of keeping data safe can be borne out.

It is hardly surprising that significant numbers of people are likely to be interested in avoiding having to register their identity details before being able to access porn.

It seems obvious that information about VPNs and Tor will therefore be readily circulated amongst any online community with an interest in keeping safe. But perhaps it is a little bit of a shock to see it is such large letters in a mainstream magazine on the shelves of supermarkets and newsagents.

And perhaps another thought is that once the BBFC starts getting ISPs to block non-compliant websites, circumvention will be the only way to see your blocked favourite websites. So people stupidly signing up to age verification will have less access to porn and a worse service than those who circumvent it.

 

 

Updated Comments: The UK Government harms the British people...

The press and campaigners call out the Online Harms white paper for what it is...censorship


Link Here 12th April 2019
Newspapers and the press have generally given the new internet censorship proposals a justifiably negative reception:

The Guardian

See Internet crackdown raises fears for free speech in Britain from theguardian.com

Critics of the government's flagship internet regulation policy are warning it could lead to a North Korean-style censorship regime, where regulators decide which websites Britons are allowed to visit, because of how broad the proposals are.

The Daily Mail

See New internet regulation laws will lead to widespread censorship from dailymail.co.uk

Critics brand new internet regulation laws the most draconian crackdown in the Western democratic world as they warn it could threaten the freedom of speech of millions of Britons

The Independent

See UK's new internet plans could bring state censorship of the internet, campaigners warn from independent.co.uk

The government's new proposals to try and protect people from harm on the internet could actually create a huge censorship operation, campaigners have warned.

Index on Censorship

See Online harms proposals pose serious risks to freedom of expression from indexoncensorship.org

Index on Censorship has raised strong concerns about the government's focus on tackling unlawful and harmful online content, particularly since the publication of the Internet Safety Strategy Green Paper in 2017. In October 2018, Index published a joint statement with Global Partners Digital and Open Rights Group noting that any proposals that regulate content are likely to have a significant impact on the enjoyment and exercise of human rights online, particularly freedom of expression.

We have also met with officials from the Department for Digital, Culture, Media and Sport, as well as from the Home Office, to raise our thoughts and concerns.

With the publication of the Online Harms White Paper, we would like to reiterate our earlier points.

While we recognise the government's desire to tackle unlawful content online, the proposals mooted in the white paper -- including a new duty of care on social media platforms, a regulatory body, and even the fining and banning of social media platforms as a sanction -- pose serious risks to freedom of expression online.

These risks could put the United Kingdom in breach of its obligations to respect and promote the right to freedom of expression and information as set out in Article 19 of the International Covenant on Civil and Political Rights and Article 10 of the European Convention on Human Rights, amongst other international treaties.

Social media platforms are a key means for tens of millions of individuals in the United Kingdom to search for, receive, share and impart information, ideas and opinions. The scope of the right to freedom of expression includes speech which may be offensive, shocking or disturbing. The proposed responses for tackling online safety may lead to disproportionate amounts of legal speech being curtailed, undermining the right to freedom of expression.

In particular, we raise the following concerns related to the white paper:

  • Lack of evidence base

The wide range of different harms which the government is seeking to tackle in this policy process requires different, tailored responses. Measures proposed must be underpinned by strong evidence, both of the likely scale of the harm and of the measures' likely effectiveness. The evidence which formed the base of the Internet Safety Strategy Green Paper was highly variable in its quality. Any legislative or regulatory measures should be supported by clear and unambiguous evidence of their need and effectiveness.

  • Duty of care concerns / problems with 'harm' definition

Index is concerned at the use of a duty of care regulatory approach. Although social media has often been compared to the public square, the duty of care model is not an exact fit because this would introduce regulation -- and restriction -- of speech between individuals based on criteria that are far broader than current law. A failure to accurately define "harmful" content risks incorporating legal speech, including political expression, expressions of religious views, expressions of sexuality and gender, and expression advocating on behalf of minority groups.

  • Risks in linking liability/sanctions to platforms over third party content

While well-meaning, proposals such as these contain serious risks, such as requiring or incentivising wide-sweeping removal of lawful and innocuous content. The imposition of time limits for removal, heavy sanctions for non-compliance or incentives to use automated content moderation processes only heighten this risk, as has been evidenced by the approach taken in Germany via its Network Enforcement Act (or NetzDG), where there is evidence of the over-removal of lawful content.

  • Lack of sufficient protections for freedom of expression.

The obligation to protect users' rights online that is included in the white paper gives insufficient weight to freedom of expression. A much clearer obligation to protect freedom of expression should guide development of future regulation.

In recognition of the UK's commitment to the multistakeholder model of internet governance, we hope all relevant stakeholders, including civil society experts on digital rights and freedom of expression, will be fully engaged throughout the development of the Online Harms bill.

Privacy International

See PI's take on the UK government's new proposal to tackle "online harms" from privacyinternational.org

PI welcomes the UK government's commitment to investigating and holding companies to account. When it comes to regulating the internet, however, we must move with care. Failure to do so will introduce, rather than reduce, "online harms". A 12-week consultation on the proposals has also been launched today. PI plans to file a submission to the consultation as it relates to our work. Given the breadth of the proposals, PI calls on others to respond to the consultation as well.

Here are our initial suggestions:

  • proceed with care: proposals to regulate content on digital media platforms should be very carefully evaluated, given the high risk of negative impacts on expression, privacy and other human rights. This is a very complex challenge and we support the need for broad consultation before any legislation is put forward in this area.

  • do not lose sight of how data exploitation facilitates the harms identified in the report and ensure any new regulator works closely with others working to tackle these issues.

  • assess carefully the delegation of sole responsibility to companies as adjudicators of content. This would empower corporate judgment over content, which would have implications for human rights, particularly freedom of expression and privacy.

  • require that judicial or other independent authorities, rather than government agencies, are the final arbiters of decisions regarding what is posted online and enforce such decisions in a manner that is consistent with human rights norms.

  • assess the privacy implications of any demand for "proactive" monitoring of content in digital media platforms.

  • ensure that any requirement or expectation of deploying automated decision making/AI is in full compliance with existing human rights and data protection standards (which, for example, prohibit, with limited exceptions, relying on solely automated decisions, including profiling, when they significantly affect individuals).

  • ensure that company transparency reports include information related to how the content was targeted at users.

  • require companies to provide efficient reporting tools in multiple languages, to report on action taken with regard to content posted online. Reporting tools should be accessible, user-friendly, and easy to find. There should be full transparency regarding the complaint and redress mechanisms available and opportunities for civil society to take action.

Offsite Comment: Ridiculous Plan

10th April 2019. See article from techdirt.com

UK Now Proposes Ridiculous Plan To Fine Internet Companies For Vaguely Defined Harmful Content

Last week Australia rushed through a ridiculous bill to fine internet companies if they happen to host any abhorrent content. It appears the UK took one look at that nonsense and decided it wanted some too. On Monday it released a white paper calling for massive fines for internet companies for allowing any sort of online harms. To call the plan nonsense is being way too harsh to nonsense.

The plan would result in massive, widespread, totally unnecessary censorship solely for the sake of pretending to do something about the fact that some people sometimes do not so nice things online. And it will place all of the blame on the internet companies for the (vaguely defined) not so nice things that those companies' users might do online.

Read the full article from techdirt.com

Offsite Comment: Sajid Javid's new internet rules will have a chilling effect on free speech

11th April 2019. See article from spectator.co.uk by Toby Young

How can the government prohibit comments that might cause harm without defining what harm is?

Offsite Comment: Plain speaking from Chief Censor Sajid Javid

11th April 2019. See tweet from twitter.com

Letter to the Guardian: Online Harms white paper would make Chinese censors proud

11th April 2019. See article from theguardian.com

We agree with your characterisation of the online harms white paper as a flawed attempt to deal with serious problems (Regulating the internet demands clear thought about hard problems, Editorial, 9 April). However, we would draw your attention to several fundamental problems with the proposal which could be disastrous if it proceeds in its current form.

Firstly, the white paper proposes to regulate literally the entire internet, and censor anything non-compliant. This extends to blogs, file services, hosting platforms, cloud computing; nothing is out of scope.

Secondly, there are a number of undefined harms with no sense of scope or evidence thresholds to establish a need for action. The lawful speech of millions of people would be monitored, regulated and censored.

The result is an approach that would make China's state censors proud. It would be very likely to face legal challenge. It would give the UK the widest and most prolific internet censorship in an apparently functional democracy. A fundamental rethink is needed.

Antonia Byatt Director, English PEN
Silkie Carlo Big Brother Watch
Thomas Hughes Executive director, Article 19
Jim Killock Executive director, Open Rights Group
Joy Hyvarinen Head of advocacy, Index on Censorship

Comment: The DCMS Online Harms Strategy must design in fundamental rights

12th April 2019. See article from openrightsgroup.org

Increasingly over the past year, DCMS has become fixated on the idea of imposing a duty of care on social media platforms, seeing this as a flexible and de-politicised way to emphasise the dangers of exposing children and young people to certain online content and make Facebook in particular liable for the uglier and darker side of its user-generated material.

DCMS talks a lot about the 'harm' that social media causes. But its proposals fail to explain how harmful impacts on free expression would be avoided.

On the positive side, the paper lists free expression online as a core value to be protected and addressed by the regulator. However, despite the apparent prominence of this value, the mechanisms to deliver this protection and the issues at play are not explored in any detail at all.

In many cases, online platforms already act as though they have a duty of care towards their users. Though the efficacy of such measures in practice is open to debate, terms and conditions, active moderation of posts and algorithmic choices about what content is pushed or downgraded are all geared towards ousting illegal activity and creating open and welcoming shared spaces. DCMS hasn't in the White Paper elaborated on what its proposed duty would entail. If it's drawn narrowly so that it only bites when there is clear evidence of real, tangible harm and a reason to intervene, nothing much will change. However, if it's drawn widely, sweeping up too much content, it will start to act as a justification for widespread internet censorship.

If platforms are required to prevent potentially harmful content from being posted, this incentivises widespread prior restraint. Platforms can't always know in advance the real-world harm that online content might cause, nor can they accurately predict what people will say or do when on their platform. The only way to avoid liability is to impose wide-sweeping upload filters. Scaled implementation of this relies on automated decision-making and algorithms, which risks even greater speech restrictions given that machines are incapable of making nuanced distinctions or recognising parody or sarcasm.

DCMS's policy is underpinned by societally-positive intentions, but in its drive to make the internet "safe", the government seems not to recognise that ultimately its proposals don't regulate social media companies, they regulate social media users. The duty of care is ostensibly aimed at shielding children from danger and harm but it will in practice bite on adults too, wrapping society in cotton wool and curtailing a whole host of legal expression.

Although the scheme will have a statutory footing, its detail will depend on codes of practice drafted by the regulator. This makes it difficult to assess how the duty of care framework will ultimately play out.

The duty of care seems to be broadly about whether systemic interventions reduce overall "risk". But must the risk be always to an identifiable individual, or can it be broader - to identifiable vulnerable groups? To society as a whole? What evidence of harm will be required before platforms should intervene? These are all questions that presently remain unanswered.

DCMS's approach appears to be that it will be up to the regulator to answer these questions. But whilst a sensible regulator could take a minimalist view of the extent to which commercial decisions made by platforms should be interfered with, allowing government to distance itself from taking full responsibility over the fine detailing of this proposed scheme is a dangerous principle. It takes conversations about how to police the internet out of public view and democratic forums. It enables the government to opt not to create a transparent, judicially reviewable legislative framework. And it permits DCMS to light the touch-paper on a deeply problematic policy idea without having to wrestle with the practical reality of how that scheme will affect UK citizens' free speech, both in the immediate future and for years to come.

How the government decides to legislate and regulate in this instance will set a global norm.

The UK government is clearly keen to lead international efforts to regulate online content. It knows that if the outcome of the duty of care is to change the way social media platforms work, that change will apply worldwide. But to be a global leader, DCMS needs to stop basing policy on isolated issues and anecdotes and engage with a broader conversation around how we as a society want the internet to look. Otherwise, governments both repressive and democratic are likely to use the policy and regulatory model that emerges from this process as a blueprint for more widespread internet censorship.

The House of Lords report on the future of the internet, published in early March 2019, set out ten principles it considered should underpin digital policy-making, including the importance of protecting free expression. The consultation that this White Paper introduces offers a positive opportunity to collectively reflect, across industry, civil society, academia and government, on how the negative aspects of social media can be addressed and risks mitigated. If the government were to use this process to emphasise its support for the fundamental right to freedom of expression - and in a way that goes beyond mere expression of principle - this would also reverberate around the world, particularly at a time when press and journalistic freedom is under attack.

The White Paper expresses a clear desire for tech companies to "design in safety". As the process of consultation now begins, we call on DCMS to "design in fundamental rights". Freedom of expression is itself a framework, and must not be lightly glossed over. We welcome the opportunity to engage with DCMS further on this topic: before policy ideas become entrenched, the government should consider deeply whether these will truly achieve outcomes that are good for everyone.

 

 

Vote Texit...Tory Exit...

Culture of Censorship Secretary Jeremy Wright tells British people not to worry about the proposed end to their free speech because newspapers will still be allowed free speech


Link Here 11th April 2019
The Daily Mail writes:

Totalitarian-style new online code that could block websites and fine them £20million for harmful content will not limit press freedom, Culture Secretary promises

Government proposals have sparked fears that they could backfire and turn Britain into the first Western nation to adopt the kind of censorship usually associated with totalitarian regimes.

Former culture secretary John Whittingdale drew parallels with China, Russia and North Korea. Matthew Lesh of the Adam Smith Institute, a free market think-tank, branded the white paper a historic attack on freedom of speech.

[However] draconian laws designed to tame the web giants will not limit press freedom, the Culture Secretary said yesterday.

In a letter to the Society of Editors, Jeremy Wright vowed that journalistic or editorial content would not be affected by the proposals.

And he reassured free speech advocates by saying there would be safeguards to protect the role of the Press.

But as for safeguarding the free speech rights of ordinary British internet users, he more or less told them they could fuck off!

 

 

Ensuring that the UK is the most censored place in the western world to be online...

Government introduces an enormous package of internet censorship proposals


Link Here 8th April 2019
  The Government writes:

In the first online safety laws of their kind, social media companies and tech firms will be legally required to protect their users and face tough penalties if they do not comply.

As part of the Online Harms White Paper, a joint proposal from the Department for Digital, Culture, Media and Sport and Home Office, a new independent regulator will be introduced to ensure companies meet their responsibilities.

This will include a mandatory 'duty of care', which will require companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services. The regulator will have effective enforcement tools, and we are consulting on powers to issue substantial fines, block access to sites and potentially to impose liability on individual members of senior management.

A range of harms will be tackled as part of the Online Harms White Paper, including inciting violence and violent content, encouraging suicide, disinformation, cyber bullying and children accessing inappropriate material.

There will be stringent requirements for companies to take even tougher action to ensure they tackle terrorist and child sexual exploitation and abuse content.

The new proposed laws will apply to any company that allows users to share or discover user generated content or interact with each other online. This means a wide range of companies of all sizes are in scope, including social media platforms, file hosting sites, public discussion forums, messaging services, and search engines.

A regulator will be appointed to enforce the new framework. The Government is now consulting on whether the regulator should be a new or existing body. The regulator will be funded by industry in the medium term, and the Government is exploring options such as an industry levy to put it on a sustainable footing.

A 12-week consultation on the proposals has also been launched today. Once this concludes we will then set out the action we will take in developing our final proposals for legislation.

Tough new measures set out in the White Paper include:

  • A new statutory 'duty of care' to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services.

  • Further stringent requirements on tech companies to ensure child abuse and terrorist content is not disseminated online.

  • Giving a regulator the power to force social media platforms and others to publish annual transparency reports on the amount of harmful content on their platforms and what they are doing to address this.

  • Making companies respond to users' complaints, and act to address them quickly.

  • Codes of practice, issued by the regulator, which could include measures such as requirements to minimise the spread of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods.

  • A new "Safety by Design" framework to help companies incorporate online safety features in new apps and platforms from the start.

  • A media literacy strategy to equip people with the knowledge to recognise and deal with a range of deceptive and malicious behaviours online, including catfishing, grooming and extremism.

The UK remains committed to a free, open and secure Internet. The regulator will have a legal duty to pay due regard to innovation, and to protect users' rights online, being particularly mindful to not infringe privacy and freedom of expression.

Recognising that the Internet can be a tremendous force for good, and that technology will be an integral part of any solution, the new plans have been designed to promote a culture of continuous improvement among companies. The new regime will ensure that online firms are incentivised to develop and share new technological solutions, like Google's "Family Link" and Apple's Screen Time app, rather than just complying with minimum requirements. Government has balanced the clear need for tough regulation with its ambition for the UK to be the best place in the world to start and grow a digital business, and the new regulatory framework will provide strong protection for our citizens while driving innovation by not placing an impossible burden on smaller companies.

 

 

Offsite Article: Porn Wars...


Link Here 8th April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Sex, Lies And The Battle To Control Britain's Internet. By David Flint

See article from reprobatepress.com

 

 

Scary stuff: the government wanted a detailed log of your porn viewing history...

A report suggesting that the government has (reluctantly) relaxed its requirements for internet porn age verifiers to keep a detailed log of people's porn access


Link Here 5th April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
In an interesting article on the Government age verification and internet porn censorship scheme, technology website Techdirt reports on the ever slipping deadlines.

Seemingly with detailed knowledge of government requirements for the scheme, Tim Cushing explains that up until recently the government had demanded that age verification companies retain a site log, presumably recording people's porn viewing history. He writes:

The government refreshed its porn blockade late last year, softening a few mandates into suggestions. But the newly-crafted suggestions were backed by the implicit threat of heavier regulation. All the while, the government has ignored the hundreds of critics and experts who have pointed out the filtering plan's numerous problems -- not the least of which is a government-mandated collection of blackmail fodder.

The government is no longer demanding retention of site logs by sites performing age verification, but it's also not telling companies they shouldn't retain the data. Companies likely will retain this data anyway, if only to ensure they have it on hand when the government inevitably changes its mind.

Cushing concludes with a comment perhaps suggesting that the Government wants a far more invasive snooping regime than commercial operators are able or willing to provide. He notes:

Shortly, April 1st will come and go with no porn filter. The next best guess is around Easter (April 21st). But I'd wager that date comes and goes as well with zero new porn filters. The UK government only knows what it wants. It has no idea how to get it.

And it seems that some age verification companies are getting wound up by negative internet and press coverage of the dangers inherent in their services. @glynmoody tweeted:

I see age verification companies that will create the biggest database of people's porn preferences - perfect for blackmail - are now trying to smear people pointing out this is a stupid idea as deliberately creating a climate of fear and confusion about the technologies nope

 

 

Mums want Four Play...but can they get it?...

The porn movie from the TV series 'Mums Make Porn' is used as a live test for age verification


Link Here 4th April 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
The age verification company AgeChecked and porn producer Erika Lust have created a test website for a live trial of age verification.

The test website iwantfourplay.com features the porn video created by the mums in the Channel 4 series Mums Make Porn.

The website presented the video free of charge, but only after viewers passed one of three age verification options (a hypothetical sketch of such a gate follows the list):

  1. entering full credit card details + email
  2. entering driving licence number + name and address + email
  3. mobile phone number + email (the phone must have been verified as 18+ by the service provider and must be ready to receive an SMS message containing login details)
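
AgeChecked's internals are not public, so the following is purely a hypothetical sketch of the shape such a three-option gate might take; every checker is a labelled stub and all the names are invented:

from datetime import date

MIN_AGE = 18

def _age(dob: date) -> int:
    # Whole years elapsed since the date of birth.
    today = date.today()
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def check_credit_card(card_number: str, email: str) -> bool:
    # Stub: a real checker must confirm this is a *credit* card
    # (UK credit cards are adult-only); a debit card proves nothing.
    return False  # placeholder for a card-type (BIN) lookup

def check_driving_licence(licence_no: str, name: str, address: str) -> bool:
    # Stub: a valid licence alone is not enough, since 17 year olds
    # hold full licences; the holder's date of birth must be checked.
    dob = None  # placeholder for an identity-provider lookup
    return dob is not None and _age(dob) >= MIN_AGE

def check_mobile(msisdn: str, email: str) -> bool:
    # Stub: passes only if the operator has already age-verified the
    # number (adult-content bar lifted); login details then go by SMS.
    return False  # placeholder for a carrier status query

def age_gate(method: str, **details) -> bool:
    checkers = {"card": check_credit_card,
                "licence": check_driving_licence,
                "mobile": check_mobile}
    if method not in checkers:
        raise ValueError(f"unknown verification method: {method}")
    return checkers[method](**details)

Note that each method carries a precondition (credit not debit, licence holder 18+, operator-verified phone) that a well-designed form would state up front, which is exactly the complaint that follows.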

The AgeChecked forms are unimpressive; the company seems reluctant to inform customers about the requirements before they hand over their details. The forms do not even mention that the age requirement is 18+. They certainly do not make it clear that, say, a debit card is unacceptable, or that a driving licence is not acceptable if registered to a 17 year old. It seems that the company would prefer users to type in all their details and only then be told: sorry, the card/licence/phone number doesn't pass the test. In fact the mobile phone option is distinctly misleading: it suggests that it may be quicker to use the other options if the mobile phone is not age verified. It should state plainly that an unverified phone cannot be used.

The AgeChecked forms also make contradictory claims about users' personal data not being stored by AgeChecked (or shared with iwantfourplay.com)... but they then go on to ask for an email address for logging into existing AgeChecked accounts, so obviously that item of personal data must be stored by AgeChecked for recurring use.

AgeChecked has already reported on the early results from the test. Alastair Graham, CEO of AgeChecked said:

The results of this first mainstream test of our software were hugely encouraging.

Whilst an effective date for the new legislation's implementation is yet to be confirmed by the British Board of Film Classification, this suggests a clear preparedness to offer robust, secure age verification procedures to the adult industry's 24-30 million UK users.

It also highlights that customers are willing to participate in the process when they know that they are being verified by a secure provider, with whom their identity is fully protected.

The popularity of mobile phone verification was interesting and presumably due to the simplicity of using this device. This is something that we foresee as being a key part of this process moving forward.

Don't these people spout rubbish sometimes, pretending that not wanting to have one's credit card, name and address details associated with watching porn is just down to convenience.

Graham also did not mention other, perhaps equally important, results from the test. In particular I wonder how many people seeking the video simply decided not to proceed further when presented with the age verification options.

I wonder also how many people watched the video without going through age verification. I noted that with a little jiggery pokery the video could be viewed via VPN. I also noted that although the age verification got in the way of clicking on the video, file/video downloading browser add-ons were still able to access the video without bothering with the age verification.
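
For illustration of why that last bypass works: if the age check is enforced only by JavaScript in the page, while the video file itself sits at an ordinary URL in the page markup, any HTTP client that never runs the script can fetch the file directly. A minimal sketch, with the URL and markup pattern invented (this is not the actual iwantfourplay.com page structure):

import re
import urllib.request

PAGE = "https://example.com/gated-video-page"  # hypothetical address

with urllib.request.urlopen(PAGE) as resp:
    html = resp.read().decode("utf-8", errors="replace")

# Downloader add-ons typically just scrape the <video>/<source> src
# attribute; the overlay script never runs in a non-browser client.
match = re.search(r'<source[^>]+src="([^"]+\.mp4)"', html)
if match:
    urllib.request.urlretrieve(match.group(1), "video.mp4")

In other words, a gate of this kind controls the player, not the file; only a server-side check before the video itself is served would change that.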

And congratulations to the mums for making a good porn video. It features very attractive actors participating in all the usual porn elements, whilst getting across the mums' wishes for a more positive/loving approach to sex.

