UK Government Watch




 

Offsite Article: Don't be a verified idiot...get a VPN!...


Link Here 18th March 2019
The Daily Mail highlights the dangers of identity checks for porn viewers and notes that the start date will be announced in April, but it could well be several months before the scheme is fully implemented

See article from dailymail.co.uk

 

 

Offsite Article: Sowing the seeds of our own demise...


Link Here 14th March 2019
Government complains about the power of internet monopolies whilst simultaneously advantaging them with age verification, censorship machines and link tax

See article from rightsinfo.org

 

 

Offsite Article: What could possibly go wrong?...


Link Here 13th March 2019
UK porn censorship risks creating sex tape black market on Twitter, WhatsApp and even USB sticks

See article from thescottishsun.co.uk

 

 

Offsite Article: Age old censorship...


Link Here 8th March 2019
The Daily Mail reports on vague details about a proposal from the Information Commissioner to require age verification for any website that hoovers up personal details

See article from dailymail.co.uk

 

 

Maybe realisation that endangering parents is not a good way to protect children...

Sky News confirms that porn age verification will not be starting from April 2019 and notes that a start date has yet to be set


Link Here 6th March 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust

Sky News has learned that the government has delayed setting a date for when age verification rules will come into force due to concerns regarding the security and human rights issues posed by the rules. A DCMS representative said:

This is a world-leading step forward to protect our children from adult content which is currently far too easy to access online.

The government, and the BBFC as the regulator, have taken the time to get this right and we will announce a commencement date shortly.

Previously the government indicated that age verification would start around Easter, but the law states that 3 months' notice must be given of the start date. Official notice has yet to be published, so the earliest it could start is now June 2019.

The basic issue is that the Digital Economy Act underpinning age verification does not mandate that the identity data and browsing history of porn users be protected by law. The law makers thought that GDPR would be sufficient for data protection, but in fact GDPR only requires user consent for the use of that data. All it takes is for users to tick the consent box, probably without reading the deliberately verbose or vague terms and conditions provided. Once the box is ticked, the age verifier can do more or less what it wants with the data.

Realising that this voluntary system is hardly ideal, and that the world's largest internet porn company MindGeek is likely to become the monopoly gatekeeper of the scheme, the government has moved on to considering some sort of voluntary kitemark scheme to try to convince porn users that an age verification company can be trusted with the data. The kitemark scheme would appoint an audit company to investigate age verification implementations and approve those that follow good practices.

I would guess that this scheme is difficult to set up, as it would be a major risk for audit companies to approve age verification systems based upon voluntary data protection rules. If an 'approved' company were later found to be selling or misusing data, or even got hacked, then the auditor could be sued for negligent advice, whilst the age verification company could get off scot-free.

 

 

AgeID scarily will require an email address and ID to view PornHub...

There's also a rather unconvincing option to use an app, but that seems to ID your device instead


Link Here 4th March 2019
Full story: BBFC Internet Porn Censors...BBFC: Age Verification We Don't Trust
Pornhub and its sister websites will soon require ID from users before they can browse their porn.

The government most recently suggested that this requirement would start from about Easter this year, but this date has already slipped. The government will give 3 months notice of the start date and as this has not yet been announced, the earliest start date is currently in June.

Pornhub and YouPorn will use the AgeID system, which requires users to identify themselves with an email address and a credit card, passport, driving licence or an age verified mobile phone number.

Metro.co.uk spoke to a spokesperson from AgeID to find out how it will work (and what you'll actually see when you try to log in). James Clark, AgeID spokesperson, said:

When a user first visits a site protected by AgeID, a landing page will appear with a prompt for the user to verify their age before they can access the site.

First, a user can register an AgeID account using an email address and password. The user verifies their email address and then chooses an age verification option from our list of 3rd party providers, using options such as Mobile SMS, Credit Card, Passport, or Driving Licence.

The second option is to purchase a PortesCard or voucher from a retail outlet. Using this method, a customer does not need to register an email address, and can simply access the site using the Portes app.

Thereafter, users will be able to use this username/password combination to log into all porn sites which use the AgeID system.

It is a one-time verification, with a simple single sign-on for future access. If a user verifies on one AgeID protected site, they will not need to perform this verification again on any other site carrying AgeID.

The PortesCard is available to purchase from selected high street retailers and any of the UK's 29,000 PayPoint outlets as a voucher. Once a card or voucher is purchased, its unique validation code must be activated via the Portes app within 24 hours, or it expires.

If a user changes device or uses a fresh browser, they will need to login with the credentials they used to register. If using the same browser/device, the user has a choice as to whether they wish to login every time, for instance if they are on a shared device (the default option), or instead allow AgeID to log them in automatically, perhaps on a mobile phone or other personal device.

Clark claimed that AgeID's system stores neither details of people's ID nor their browsing history. This sounds a little unconvincing and must be taken on trust. The claim also seems to be contradicted by an earlier line noting that users' email addresses will be verified, so at least that piece of identity information will need to be stored and read.

The Portes app solution seems a little doubtful too. It claims not to log device data, yet goes on to explain that the PortesCard needs to be locked to a device, rather suggesting that it will in fact use device data. It will be interesting to see what permissions the app requires when installing. Hopefully it won't ask to read your contact list.

This AgeID statement rather leaves the AVSecure card idea out in the cold. The AVSecure system of proving your age anonymously at a shop and then obtaining a password for use on porn websites seems the most genuinely anonymous idea suggested so far, but it will be pretty useless if it can't be used on the main porn websites.

 

 

Six shooters...

Internet giants respond to impending government internet censorship laws with six principles that should be followed


Link Here 1st March 2019
Full story: Internet Safety Bill...UK Government seeks to censor social media
The world's biggest internet companies, including Facebook, Google and Twitter, are represented by a trade group called The Internet Association. This organisation has written to UK government ministers to outline how they believe harmful online activity should be regulated.

The letter has been sent to the culture, health and home secretaries, and will be seen as a pre-emptive move in the coming negotiation over new rules to govern the internet. The government is due to publish a delayed White Paper on online harms in the coming weeks.

The letter outlines six principles:

  • "Be targeted at specific harms, using a risk-based approach
  • "Provide flexibility to adapt to changing technologies, different services and evolving societal expectations
  • "Maintain the intermediary liability protections that enable the internet to deliver significant benefits for consumers, society and the economy
  • "Be technically possible to implement in practice
  • "Provide clarity and certainty for consumers, citizens and internet companies
  • "Recognise the distinction between public and private communication"

Many leading figures in the UK technology sector fear a lack of expertise in government, and hardening public sentiment against the excesses of the internet, will push the Online Harms paper in a more radical direction.

Three of the key areas of debate are the definition of online harm, the lack of liability for third-party content, and the difference between public and private communication.

The companies insist that government should recognise the distinction between clearly illegal content and content which is harmful, but not illegal. If these leading tech companies believe this government definition of harm is too broad, their insistence on a distinction between illegal and harmful content may be superseded by another set of problems.

The companies also defend the principle that platforms such as YouTube permit users to post and share information without fear that those platforms will be held liable for third-party content. Another area which will be of particular interest to the Home Office is the insistence that care should be taken to avoid regulation encroaching into the surveillance of private communications.

 

 

Putting Zuckerberg behind bars...

The Telegraph reports on the latest government thoughts about setting up a social media censor


Link Here 23rd February 2019
Full story: Internet Safety Bill...UK Government seeks to censor social media

Social media companies face criminal sanctions for failing to protect children from online harms, according to drafts of the Government's White Paper circulating in Whitehall.

Civil servants are proposing a new corporate offence as an option in the White Paper plans for a tough new censor with the power to force social media firms to take down illegal content and to police legal but harmful material.

They see criminal sanctions as desirable and as an important part of a regulatory regime, said one source, who added that there's a recognition, particularly on the Home Office side, that this needs to be a regulator with teeth. The main issue they need to satisfy ministers on is extra-territoriality, that is, can you apply this to non-UK companies like Facebook and YouTube? The belief is that you can.

The White Paper, which is due to be published in mid-March followed by a summer consultation, is not expected to lay out as definitive a plan as previously thought. A decision on whether to create a brand new censor or use Ofcom is expected to be left open. A Whitehall source said:

Criminal sanctions are going to be put into the White Paper as an option. We are not necessarily saying we are going to do it, but these are things that are open to us. They will be allied to a system of fines amounting to 4% of global turnover or €20m, whichever is higher.

Government minister Jeremy Wright told the Telegraph this week he was especially focused on ensuring that technology companies enforce minimum age standards. He also indicated the Government would fulfil a manifesto commitment to a levy on social media firms, which could fund the new censor.

 

 

Driving the internet into dark corners...

The IWF warns the government to think about unintended consequences when creating a UK internet censor


Link Here 22nd February 2019
Full story: Internet Safety Bill...UK Government seeks to censor social media

Internet Watch Foundation's (IWF) CEO, Susie Hargreaves OBE, puts forward a voice of reason by urging politicians and policy makers to take a balanced approach to internet regulation which avoids a heavy cost to the victims of child sexual abuse.

IWF has set out its views on internet regulation ahead of the publication of the Government's Online Harms White Paper. It suggests that traditional approaches to regulation cannot apply to the internet and that human rights should play a big role in any regulatory approach.

The IWF, as part of the UK Safer Internet Centre, supports the Government's ambition to make the UK the safest place in the world to go online, and the best place to start a digital business.

IWF has a world-leading reputation in identifying and removing child sexual abuse images and videos from the internet. It takes a co-regulatory approach to combating child sexual abuse images and videos by working in partnership with the internet industry, law enforcement and governments around the world. It offers a suite of tools and services to the online industry to keep their networks safer. In the past 22 years, the internet watchdog has assessed, with human eyes, more than 1 million reports.

Ms Hargreaves said:

Tackling criminal child sexual abuse material requires a global multi-stakeholder effort. We'll use our 22 years' experience in this area to help the government and policy makers to shape a regulatory framework which is sustainable and puts victims at its heart. In order to do this, any regulation in this area should be developed with industry and other key stakeholders rather than imposed on them.

We recommend an outcomes-based approach where the outcomes are clearly defined and the government should provide clarity over the results it seeks in dealing with any harm. There also needs to be a process to monitor this and for any results to be transparently communicated.

But, warns Ms Hargreaves, any solutions should be tested with users including understanding impacts on victims: "The UK already leads the world at tackling online child sexual abuse images and videos but there is definitely more that can be done, particularly in relation to tackling grooming and livestreaming, and of course, regulating harmful content is important.

My worries, however, are about rushing into knee-jerk regulation which creates perverse incentives or unintended consequences to victims and could undo all the successful work accomplished to date. Ultimately, we must avoid a heavy cost to victims of online sexual abuse.

 

 

Wider definition of harm can be manipulated to restrict media freedom...

Index on Censorship responds to government plans to create a UK internet censor


Link Here 22nd February 2019
Full story: Internet Safety Bill...UK Government seeks to censor social media

Index on Censorship welcomes a report by the House of Commons Digital, Culture, Media and Sport select committee into disinformation and fake news that calls for greater transparency on social media companies' decision making processes, on who posts political advertising and on use of personal data. However, we remain concerned about attempts by government to establish systems that would regulate harmful content online given there remains no agreed definition of harm in this context beyond those which are already illegal.

Despite a number of reports, including the government's Internet Safety Strategy green paper, that have examined the issue over the past year, none have yet been able to come up with a definition of harmful content that goes beyond definitions of speech and expression that are already illegal. DCMS recognises this in its report when it quotes the Secretary of State Jeremy Wright discussing the difficulties surrounding the definition. Despite acknowledging this, the report's authors nevertheless expect technical experts to be able to set out what constitutes harmful content that will be overseen by an independent regulator.

International experience shows that in practice it is extremely difficult to define harmful content in such a way that would target only bad speech. Last year, for example, activists in Vietnam wrote an open letter to Facebook complaining that its system of automatically pulling content if enough people complain could silence human rights activists and citizen journalists in Vietnam, while Facebook has shut down the livestreams of people in the United States using the platform as a tool to document their experiences of police violence.

Index on Censorship chief executive Jodie Ginsberg said:

It is vital that any new system created for regulating social media protects freedom of expression, rather than introducing new restrictions on speech by the back door. We already have laws to deal with harassment, incitement to violence, and incitement to hatred. Even well-intentioned laws meant to tackle hateful views online often end up hurting the minority groups they are meant to protect, stifle public debate, and limit the public's ability to hold the powerful to account.

The select committee report provides the example of Germany as a country that has legislated against harmful content on tech platforms. However, it fails to mention that the German Network Enforcement Act legislated on content that was already considered illegal, or the widespread criticism of the law, which came from the UN rapporteur on freedom of expression and groups such as Human Rights Watch. It also cites the fact that one in six of Facebook's moderators now works in Germany as practical evidence that legislation can work. Ginsberg said:

The existence of more moderators is not evidence that the laws work. Evidence would be if more harmful content had been removed and if lawful speech flourished. Given that there is no effective mechanism for challenging decisions made by operators, it is impossible to tell how much lawful content is being removed in Germany. But the fact that Russia, Singapore and the Philippines have all cited the German law as a positive example of ways to restrict content online should give us pause.

Index has reported on various examples of the German law being applied incorrectly, including the removal of a tweet of journalist Martin Eimermacher criticising the double standards of tabloid newspaper Bild Zeitung and the blocking of the Twitter account of German satirical magazine Titanic. The Association of German Journalists (DJV) has said the Twitter move amounted to censorship, adding it had warned of this danger when the German law was drawn up.

Index is also concerned about the continued calls for tools to distinguish between quality journalism and unreliable sources, most recently in the Cairncross Review. While we recognise that the ability to do this as individuals and through education is key to democracy, we are worried that a reliance on a labelling system could create false positives, and mean that smaller or newer journalism outfits would find themselves rejected by the system.

 
