Melon Farmers Original Version

UK Government Watch


Latest

Offsite Article: Co-censor...


16th December 2021
The Internet Watch Foundation petitions for a place in the UK's upcoming internet censorship regime

See article from iwf.org.uk

 

 

Online Safety Bill: Kill Switch for Encryption...

Open Rights Group explains how the Online 'Safety' Bill will endanger internet users


11th December 2021
Full story: UK Government vs Encryption...Government seeks to restrict people's use of encryption

Of the many worrying provisions contained within the draft Online Safety Bill, perhaps the most consequential is contained within Chapter 4, at clauses 63-69. This section of the Bill hands OFCOM the power to issue "Use of Technology Notices" to search engines and social media companies. As worded, the powers will lead to the introduction of routine and perpetual surveillance of our online communications. They also threaten to fatally undermine the use of end-to-end encryption, one of the fundamental building blocks of digital technology and commerce.

Use of Technology Notices purport to tackle terrorist propaganda and Child Sexual Exploitation and Abuse (CSEA) content. OFCOM will issue a Notice based on the "prevalence" and "persistent presence" of such illegal content on a service. The terms "prevalence" and "persistent" recur throughout the Bill but remain undefined, so the threshold for intervention could be quite low.

Any company that receives a Notice will be forced to use certain "accredited technologies" to identify terrorist and CSEA content on the platform.

The phrase "accredited technologies" is wide-ranging. The Online Safety Bill defines it as technology that meets a "minimum standard" for successfully identifying illegal content, although it is currently unclear what that minimum standard may be.

The definition is silent on what techniques an accredited technology might deploy to achieve the minimum standard. So it could take the form of an AI that classifies images and text. Or it may be a system that compares all the content uploaded to the hashes of known CSEA images logged on the Home Office's Child Abuse Image Database (CAID) and other such collections.
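The hash-matching approach described above can be sketched in a few lines of Python. This is purely illustrative: the `KNOWN_HASHES` set stands in for a database such as CAID, and plain SHA-256 only matches byte-identical files, whereas real systems use perceptual hashes designed to survive resizing and re-encoding.

```python
import hashlib

# Hypothetical stand-in for a database of hashes of known illegal
# images (e.g. CAID). The entry below is simply SHA-256 of b"foo".
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def matches_known_content(data: bytes) -> bool:
    """Return True if the uploaded bytes hash to a known database entry."""
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_HASHES

# An upload pipeline would run this check on each post as (or just
# after) it is submitted, flagging any match for removal:
print(matches_known_content(b"foo"))      # matches the entry above
print(matches_known_content(b"holiday"))  # unknown content passes
```

Note that a cryptographic hash like this is trivially defeated by changing a single byte of the file, which is precisely why deployed systems rely on fuzzier perceptual hashing, and why their error rates matter.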

Whatever the precise technique used, identifying terrorist or CSEA content must involve scanning each user's content as it is posted, or soon after. Content that a bot decides is related to terrorism or child abuse will be flagged and removed immediately.

Social media services are public platforms, and so it cannot be said that scanning the content we post to our timelines amounts to an invasion of privacy -- even when we post to a locked account or a closed group, we are still "publishing" to someone. Indeed, search engines have been scanning our content (albeit at their own pace) for many years, and YouTube users will be familiar with the way the platform recognises and monetises any copyrighted content.

It is nevertheless disconcerting to know that an automated pre-publication censor will examine everything we publish. It will chill freedom of expression in itself, and also lead to unnecessary automated takedowns when the system makes a mistake. Social media users routinely experience the problem of over-zealous bots causing the removal of public domain content, which impinges on free speech and damages livelihoods.

However, the greater worry is that these measures will not be limited to content posted only to public (or semi-public) feeds. The Interpretation section of the Bill (clause 137) defines "content" as "anything communicated by means of an internet service, whether publicly or privately ..." (emphasis added). So the Use of Technology Notices will apply to direct messaging services too.

This power presents two significant threats to civil liberties and digital rights.

The first is that once an "accredited technology" is deployed on a platform, it need not be limited to checking only for terrorism or child porn. Other criminal activity may eventually be added to the list through a simple amendment to the relevant section of the Act, ratcheting up the extent of the surveillance.

Meanwhile, other Governments around the world will take inspiration from OFCOM's powers to implement their own scanning regime, perhaps demanding that the social media companies scan for blasphemous, seditious, immoral or dissident content instead.

The second major threat is that the "accredited technologies" will necessarily undermine end-to-end encryption. If the tech companies are to scan all our content, then they have to be able to see it first. This demand, which the government overtly states as its goal, is incompatible with the concept of end-to-end encryption. Either such encryption will be disabled, or the technology companies will create some kind of "back door" that will leave users vulnerable to fraud, scams, and invasions of privacy.
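Why scanning and end-to-end encryption are incompatible can be shown with a toy sketch. The cipher below is a deliberately simplified illustration, not real cryptography: the point is that the relaying server only ever handles ciphertext, so any scanner it runs has nothing meaningful to match against unless it is handed the key (a back door) or the scan happens on the device before encryption.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy SHA-256 counter keystream -- for illustration ONLY."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: the same operation both ways

# Only the two endpoints hold the key; the platform merely relays bytes.
key = b"shared-by-sender-and-recipient-only"
ciphertext = encrypt(key, b"hello")

# A server-side scanner sees only ciphertext, which reveals nothing to
# match against known content -- hence the pressure to scan client-side
# (before encryption) or to escrow keys.
print(ciphertext != b"hello")        # the relay cannot read the message
print(decrypt(key, ciphertext))      # only key-holders recover b'hello'
```

Either workaround breaks the end-to-end property: client-side scanning inspects the message before it is protected, and key escrow means the key is no longer held only by the endpoints.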

Predictable examples include identity theft, credit card theft, mortgage deposit theft and theft of private messages and images. As victims of these crimes tell us, such thefts can lead to severe emotional distress and even contemplation of suicide -- precisely the 'harm' that the Online Safety Bill purports to prevent.

The trade-off, therefore, is not between privacy (or free speech) and security. Instead, it is a tension between two different types of online security: the 'negative' security to not experience harmful content online; and the 'positive' security of ensuring that our sensitive personal and corporate data is not exposed to those who would abuse it (and us).

As Ciaran Martin, the former head of the National Cyber Security Centre, said in November 2021: "cyber security is a public good ... it is increasingly hard to think of instances where the benefit of weakening digital security outweighs the benefits of keeping the broad majority of the population as safe as possible online as often as possible. There is nothing to be gained in doing anything that will undermine user trust in their own privacy and security."

A fundamental principle of human rights law is that any encroachment on our rights must be necessary and proportionate. And as ORG's challenge to GCHQ's surveillance practices in Big Brother Watch v UK demonstrated, treating the entire population as a suspect whose communications must be scanned is neither a necessary nor proportionate way to tackle the problem. Nor is it proportionate to dispense with a general right to data security, only to achieve a marginal gain in the fight against illegal content.

While terrorism and CSEA are genuine threats, they cannot be dealt with by permanently dispensing with everyone's privacy.

Open Rights Group recommends

  • Removing the provisions for Use of Technology Notices from the draft Online Safety Bill

  • If these provisions remain, Use of Technology Notices should only apply to public messages. The wording of clauses 64(4)(a) and (b) should be amended accordingly.

 

 

SnoopTec...

UK government funds development of methods to snoop on photos on your device


16th November 2021
Full story: UK Government vs Encryption...Government seeks to restrict people's use of encryption
The UK government has announced that it is funding five projects to snoop on the content on your device, supposedly in a quest to seek out child porn. But surely these technologies will have wider usage.

The five projects are the winners of the Safety Tech Challenge Fund, which aims to encourage the tech industry to find practical solutions to combat child sexual exploitation and abuse online, without impacting people's rights to privacy and data protection in their communications.

The winners will each receive an initial £85,000 from the Fund, which is administered by the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office, to help them bring their technical proposals for new digital tools and applications to combat online child abuse to the market.

Based across the UK and Europe, and in partnership with leading UK universities, the winners of the Safety Tech Challenge Fund are:

  • Edinburgh-based Cyan Forensics and Crisp Thinking, in partnership with the University of Edinburgh and Internet Watch Foundation, will develop a plug-in to be integrated within encrypted social platforms. It will detect child sexual abuse material (CSAM) - by matching content against known illegal material.
  • SafeToNet and Anglia Ruskin University will develop a suite of live video-moderation AI technologies that can run on any smart device to prevent the filming of nudity, violence, pornography and CSAM in real-time, as it is being produced.
  • GalaxKey, based in St Albans, will work with Poole-based Image Analyser and Yoti, an age-assurance company, to develop software focusing on user privacy, detection and prevention of CSAM and predatory behaviour, and age verification to detect child sexual abuse before it reaches an E2EE environment, preventing it from being uploaded and shared.
  • DragonflAI, based in Edinburgh, will also work with Yoti to combine their on-device nudity AI detection technology with age assurance technologies to spot new indecent images within E2EE environments.
  • T3K-Forensics are based in Austria and will work to implement their AI-based child sexual abuse detection technology on smartphones to detect newly created material, providing a toolkit that social platforms can integrate with their E2EE services.

 

 

Snowflakery on steroids...

Government will define crimes in its Online Censorship Bill as those causing 'likely psychological harm'


1st November 2021
Full story: Online Safety Bill...UK Government legislates to censor social media
The Department for Culture, Media & Sport has accepted recommendations from the Law Commission for crimes under its Online Censorship Bill to be based on likely psychological harm rather than just indecent or grossly offensive content.

This widens the purview of the law, and the proposed change will focus on the supposed harmful effect of a message rather than the content itself.

A knowingly false communication offence will be created that will criminalise those who send or post a message they know to be false with the intention to cause emotional, psychological, or physical harm to the likely audience.

The move is justifiably likely to be met with resistance from freedom of speech campaigners.

