Melon Farmers Original Version

Privacy


2019: Jan-March

 2012   2013   2014   2015   2016   2017   2018   2019   2020   2021   2022   2023   2024   Latest 
Jan-March   April-June   July-Sept   Oct-Dec    

 

Offsite Article: No doubt they will again say sorry, we'll do better next time...


Link Here 7th March 2019
Facebook asked to explain why it reveals, without permission, people's private phone numbers provided for security

See article from privacyinternational.org

 

 

Offsite Article: Targeted realisation...


Link Here 6th March 2019
A group of European privacy campaigners reveals that the likes of Google realised that their targeted advertising schemes are illegal under GDPR

See article from mashable.com

 

 

Guarded Secret...

Google has included a secret microphone in a home alarm system


Link Here 20th February 2019

Google has acknowledged that one of its home alarm products contained a secret microphone. Product specifications for the Nest Guard, an all-in-one alarm, keypad and motion sensor available since 2017, had made no mention of the listening device.

Despite the product being announced well over a year ago, the word microphone was only added to its specification this month.

But earlier this month, the firm said a software update would make Nest Guard voice-controlled. On Twitter, concerned Nest owners were told the microphone has not been used up to this point.

In response to criticism, Google claimed:

The on-device microphone was never intended to be a secret and should have been listed in the tech specs. That was an error on our part. The microphone has never been on and is only activated when users specifically enable the option.

This is the kind of thing that makes me paranoid about smart home devices, commented Nick Heer, who writes the Pixel Envy blog.

If I owned one of these things and found out that the world's biggest advertising company hid a microphone in my home for a year, I'd be livid.

 

 

Parliamentary committee publishes report laying into Facebook for flagrant data abuse...

But inevitably concludes that the UK needs a new social media censor


Link Here 18th February 2019
Full story: Fake news in the UK...Government sets up fake news unit

The Digital, Culture, Media and Sport Committee has published its final report on Disinformation and 'fake news'. The report calls for:

  • Compulsory Code of Ethics for tech companies overseen by independent regulator

  • Regulator given powers to launch legal action against companies breaching code

  • Government to reform current electoral communications laws and rules on overseas involvement in UK elections

  • Social media companies obliged to take down known sources of harmful content, including proven sources of disinformation

Further finds that:

  • Electoral law 'not fit for purpose'

  • Facebook intentionally and knowingly violated both data privacy and anti-competition laws

Chair's comment

Damian Collins MP, Chair of the DCMS Committee said:

"Our inquiry over the last year has identified three big threats to our society. The challenge for the year ahead is to start to fix them; we cannot delay any longer.

"Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised 'dark adverts' from unidentifiable sources, delivered through the major social media platforms we use every day. Much of this is directed from agencies working in foreign countries, including Russia.

"The big tech companies are failing in the duty of care they owe to their users to act against harmful content, and to respect their data privacy rights.

"Companies like Facebook exercise massive market power which enables them to make money by bullying the smaller technology companies and developers who rely on this platform to reach their customers.

"These are issues that the major tech companies are well aware of, yet continually fail to address. The guiding principle of the 'move fast and break things' culture often seems to be that it is better to apologise than ask permission.

"We need a radical shift in the balance of power between the platforms and the people. The age of inadequate self regulation must come to an end. The rights of the citizen need to be established in statute, by requiring the tech companies to adhere to a code of conduct written into law by Parliament, and overseen by an independent regulator.

"We also have to accept that our electoral regulations are hopelessly out of date for the internet age. We need reform so that the same principles of transparency of political communications apply online, just as they do in the real world. More needs to be done to require major donors to clearly establish the source of their funds.

"Much of the evidence we have scrutinised during our inquiry has focused on the business practices of Facebook; before, during and after the Cambridge Analytica data breach scandal.

"We believe that in its evidence to the Committee Facebook has often deliberately sought to frustrate our work, by giving incomplete, disingenuous and at times misleading answers to our questions.

"Even if Mark Zuckerberg doesn't believe he is accountable to the UK Parliament, he is to the billions of Facebook users across the world. Evidence uncovered by my Committee shows he still has questions to answer yet he's continued to duck them, refusing to respond to our invitations directly or sending representatives who don't have the right information. Mark Zuckerberg continually fails to show the levels of leadership and personal responsibility that should be expected from someone who sits at the top of one of the world's biggest companies.

"We also repeat our call to the Government to make a statement about how many investigations are currently being carried out into Russian interference in UK politics. We want to find out what the impact of disinformation and voter manipulation was on past elections, including the UK Referendum in 2016, and are calling on the Government to launch an independent investigation."

Final Report

This Final Report on Disinformation and 'Fake News' repeats a number of recommendations from the interim report published last summer. The Committee calls for the Government to reconsider a number of recommendations to which it did not respond and to include concrete proposals for action in its forthcoming White Paper on online harms.

Independent regulation of social media companies

The Report repeats a recommendation from the Interim Report for clear legal liabilities to be established for tech companies to act against harmful or illegal content on their sites, and the report calls for a compulsory Code of Ethics defining what constitutes harmful content. An independent regulator should be responsible for monitoring tech companies, backed by statutory powers to launch legal action against companies in breach of the code.

Companies failing obligations on harmful or illegal content would face hefty fines. MPs conclude: "Social media companies cannot hide behind the claim of being merely a 'platform' and maintain that they have no responsibility themselves in regulating the content of their sites."

The Report's recommendation chimes with recent statements by Ministers indicating the Government is prepared to regulate social media companies following the death of teenager Molly Russell. The Committee hopes to see firm recommendations for legislation in the White Paper to create a regulatory system for online content that is as effective as that for offline content.

It repeats its recommendation for new independent regulation to be funded by a levy on tech companies operating in the UK.

Data use and data targeting

The Report highlights Facebook documents obtained by the Committee and published in December 2018 relating to a Californian court case brought by app developer Six4Three. Through scrutiny of internal Facebook emails between 2011 and 2015, the Report finds evidence to indicate that the company was willing to: override its users' privacy settings in order to transfer data to some app developers; charge some developers high advertising prices in exchange for data; and starve other developers, such as Six4Three, of that data, contributing to the loss of their business. MPs conclude: "It is evident that Facebook intentionally and knowingly violated both data privacy and anti-competition laws."

It recommends that the ICO carries out a detailed investigation into the practices of the Facebook platform, its use of users' and users' friends' data, and its use of 'reciprocity' in data sharing. The CMA (Competition and Markets Authority) should conduct a comprehensive audit of the advertising market on social media and investigate whether Facebook has been involved in anti-competitive practices.

MPs note that Facebook, in particular, is unwilling to be accountable to regulators around the world: "By choosing not to appear before the Committee and by choosing not to respond personally to any of our invitations, Mark Zuckerberg has shown contempt towards both our Committee and the 'International Grand Committee' involving members from nine legislatures from around the world."

 

 

Next they will want to check your bank account and assess your wealth...

Gambling Commission now requires that gambling sites verify identity before allowing people to bet


Link Here 10th February 2019
The Gambling Commission (UKGC) has released a new set of rules requiring operators to implement a new wave of identity checks intended to make gambling safer and fairer.

Following an open consultation, and to further guard against the risk of children gambling, new rules mean operators must verify customer identity and age before they can either deposit funds into an account or gamble with the licensee, with either their own money or a free bet or bonus.

Furthermore, the UKGC has clamped down on free-to-play games, stressing that customers must now be age verified to access such versions of gambling games on licensees' websites, and emphasising that there is no legitimate reason why they should be available to children.

Changes are also designed to aid with the detection of criminal activity, whilst operators are reminded that they cannot demand that ID be submitted as a condition of cashing out, if they could have asked for that information earlier.

Finally, the UKGC stressed that the changes will improve the identification of self-excluded players: effective verification means a customer will not be verified, and therefore will be unable to gamble, until they provide correct details. These details will then be checked against both the operator's own self-exclusion database and the verified data held by Gamstop.

The new rules, set to come into force on Tuesday 7 May, follow a number of complaints to contact centre staff regarding licensees not allowing a customer to withdraw funds until they submit certain forms of ID.

The new rules require remote licensees to:

  • Verify, as a minimum, the name, address and date of birth of a customer before allowing them to gamble
  • Ask for any additional verification information promptly
  • Inform customers, before they can deposit funds, of the types of identity documents or other information that might be required, the circumstances in which the information might be required, and how it should be supplied to the licensee
  • Take reasonable steps to ensure that information on their customers' identities remains accurate.

 

 

Offsite Article: The new tech totalitarianism...


Link Here 10th February 2019
A book review of The Age of Surveillance Capitalism by Professor Shoshana Zuboff

See article from newstatesman.com

 

 

Offsite Article: Monopolistic data grabbers...


Link Here 7th February 2019
German competition watchdog bans Facebook from processing so much data without explicit permission

See article from bbc.co.uk

 

 

Offsite Article: How can internet companies compile databases of our browsing habits without seeking consent?...


Link Here 1st February 2019
ICO are asked to investigate by the Open Rights Group and others

See article from theregister.co.uk

 

 

Having to ask Google to find the way to opt out of personalised advertising...

Google fined 50 million euros for not obtaining clear consent when snooping on browsing history so as to personalise adverts


Link Here 22nd January 2019
Full story: Google Privacy...Google's many run-ins with privacy

Google has been fined 50 million euros by the French data censor CNIL, for a breach of the EU's data protection rules.

CNIL said it had levied the record fine for lack of transparency, inadequate information and lack of valid consent regarding ads personalisation. It judged that people were not sufficiently informed about how Google collected data to personalise advertising and that Google had not obtained clear consent to process data because essential information was disseminated across several documents. The relevant information is accessible after several steps only, implying sometimes up to five or six actions, CNIL said.

In a statement, Google said it was studying the decision to determine its next steps.

The first complaint under the EU's new General Data Protection Regulation (GDPR) was filed on 25 May 2018, the day the legislation took effect. The filing groups claimed Google did not have a valid legal basis to process user data for ad personalisation, as mandated by the GDPR.

Many internet companies rely on vague wording such as 'improving user experience' to gain consent for a wide range of data uses but the GDPR provides that the consent is 'specific' only if it is given distinctly for each purpose.

Perhaps this fine may help protect the data gathered on UK porn users under the upcoming age verification requirements. Obtaining consent for narrowly defined data usages may mean actions could be taken to prevent user identity and browsing history from being sold on.

 

 

General Data Protection Rights abuse...

Google may continue to use facial recognition to tag pictures obtained from Google Photos without obtaining consent


Link Here 2nd January 2019
Full story: Google Privacy...Google's many run-ins with privacy
A US federal judge has thrown out a lawsuit alleging that Google's non-consensual use of facial recognition technology violated users' privacy rights, allowing the tech giant to continue to scan and store their biometric data.

The lawsuit, filed in 2016, alleged that Google violated Illinois state law by collecting users' biometric data without their consent. The data was harvested from pictures stored on Google Photos.

The plaintiffs wanted more than $5 million in damages for hundreds of thousands of users affected, arguing that the unauthorized scanning of their faces was a violation of the Illinois Biometric Information Privacy Act, which outlaws the gathering of biometric information without consent.

Google countered, claiming that the plaintiffs were not entitled to any compensation as they had not been harmed by the data collection. On Saturday, US District Judge Edmond E. Chang sided with the tech giant, ruling that the plaintiffs had not suffered any concrete harm, and dismissing the suit.

As well as allowing Google to continue the practice, the ruling could have implications for other cases pending against Facebook and Snapchat. Both companies are currently being sued for violating the Illinois act.

