Melon Farmers Original Version

Privacy


2021: Jan-March


 

MPs identified as totally uncaring for the safety of internet users...

MPs who don't like being insulted on Twitter line up to call for all users to hand over identifying personal details to the likes of Google, Facebook and the Chinese government


Link Here 24th March 2021
Online anonymity was debated in the House of Commons on Wednesday 13 January 2021.

The long debate was little more than a list of complaints from MPs aggrieved at aggressive comments on social media, often against themselves.

As always seems to be the case with parliamentary debate, it turned into a long series of calls that 'something must be done', with hardly any thought given to the likely harmful consequences of what they are calling for.

As an example, here is part of the complaint from the debate's opener, Siobhan Baillie:

The new legislative framework for tech companies will create a duty of care to their users. The legislation will require companies to prevent the proliferation of illegal content and activity online, and ensure that children who use their services are not exposed to harmful content. As it stands, the tech companies do not know who millions of their users are, so they do not know who their harmful operators are, either. By failing to deal with anonymity properly, any regulator or police force, or even the tech companies themselves, will still need to take extensive steps to uncover the person behind the account first, before they can tackle the issue or protect a user.

The Law Commission acknowledged that anonymity often facilitates and encourages abusive behaviours. It said that combined with an online disinhibition effect, abusive behaviours, such as pile-on harassment, are much easier to engage in on a practical level. The Online Harms White Paper focuses on regulation of platforms and the Law Commission's work addresses the criminal law provisions that apply for individuals. It is imperative, in my view, that the Law Commission's report and proposals are fully debated prior to the online harms Bill passing through Parliament. They should go hand in hand.

Standing in Parliament, I must mention that online abuse is putting people off going into public service and speaking up generally. One reason I became interested in this subject was the awful abuse I received for daring to have a baby and be an MP. Attacking somebody for being a mum or suggesting that a mum cannot do this job is misogynistic and, quite frankly, ridiculous, but I would be lying if I said that I did not find some of the comments stressful and upsetting, particularly given I had just had a baby.

Is there a greater impediment to freedom of expression than a woman being called a whore online or being told that she will be raped for expressing a view? It happens. It happens frequently and the authors are often anonymous. Fantastic groups like 50:50 Parliament, the Centenary Action Group, More United and Compassion in Politics are tackling this head on to avoid men and women being put off running for office. One of the six online harm asks from Compassion in Politics is to significantly reduce the prevalence and influence of anonymous accounts online.

The Open Rights Group said more about consequences in a short email than the MPs said in an hour of debate:

Mandatory ID verification would open a Pandora's Box of unintended consequences. A huge burden would be placed on site administrators big and small to own privatised databases of personally identifiable data. Large social media platforms would gain ever more advantage over small businesses, open source projects and startups that lack the resources to comply.

Requirements for formal documentation, such as a bank account, to verify ID would disenfranchise those on low incomes, the unbanked, the homeless, and people experiencing other forms of social exclusion. Meanwhile, the fate of countless accounts and astronomical amounts of legal content would be thrown into jeopardy overnight.

 

 

Offsite Article: #SaveAnonymity: Together we can defend anonymity...


Link Here 19th March 2021
Open Rights Group responds to a petition calling for identity verification for social media users

See article from openrightsgroup.org

 

 

Surely someone has an idea somewhere...

Government seeks ideas on how to impose or implement age verification in retail


Link Here 17th March 2021

Both on-licence and off-licence retailers, bars and restaurants have been invited to put forward proposals to trial new technology when carrying out age verification checks.

The call for proposals has been launched by the Home Office and the Office for Product Safety and Standards, and retailers who are successful will be able to pilot new technology to improve the process of ID checks during the sale of alcohol and other age-restricted items.

The pilots will explore how technology can strengthen current measures in place to prevent those under 18 from buying alcohol, reduce violence or abuse towards shop workers and ensure there are robust age checks on the delivery, click and collect or dispatch of alcohol.

It will be up to applicants to suggest products to trial within their proposals, but technologies that may potentially be tested include a holographic or ultraviolet identification feature on a mobile phone.

Retailers will be able to submit applications online on gov.uk and will be required to provide detail on how the technology works and how they plan to test it.

The pilots will allow a wide range of digital age verification technology to be tested, and the findings will be used to understand the impact of this technology and inform future policy, as part of the government's ambition to create an innovative digital economy.

Retailers will still be required to carry out physical age verification checks alongside any digital technology in line with the current law, which requires a physical identification card with a holographic mark or ultraviolet feature upon request in the sale of alcohol.

Trials by successful applicants will begin in the summer and must be completed by April 2022.

Retailers can submit their proposals to trial digital age verification technology on gov.uk. Submissions close on 31 May and successful applicants will be notified by 2 July.

 

 

Google's FLoC Is a Terrible Idea...

Explaining Google's idea to match individuals to groups for targeting advertising. By Bennett Cyphers


Link Here 10th March 2021
Full story: Google Privacy...Google's many run-ins with privacy

The third-party cookie is dying, and Google is trying to create its replacement.

No one should mourn the death of the cookie as we know it. For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet.

Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn't learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious--and potentially the most harmful.

FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting.

Google's pitch to privacy advocates is that a world with FLoC (and other elements of the "privacy sandbox") will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between "old tracking" and "new tracking." It's not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads.

We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web's biggest mistake. Ahead of us are two possible futures.

In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them--or leveraged to manipulate them--when they next open a tab.

In the other, each user's behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is "democratized" and shared with dozens of nameless actors that take part in the service of each web page. Users begin every interaction with a confession: here's what I've been up to this week, please treat me accordingly.

Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its effort towards building a truly user-friendly Web.

What is FLoC?

In 2019, Google presented the Privacy Sandbox, its vision for the future of privacy on the Web. At the center of the project is a suite of cookieless protocols designed to satisfy the myriad use cases that third-party cookies currently provide to advertisers. Google took its proposals to the W3C, the standards-making body for the Web, where they have primarily been discussed in the Web Advertising Business Group, a body made up primarily of ad-tech vendors. In the intervening months, Google and other advertisers have proposed dozens of bird-themed technical standards: PIGIN, TURTLEDOVE, SPARROW, SWAN, SPURFOWL, PELICAN, PARROT... the list goes on. Seriously. Each of the "bird" proposals is designed to perform one of the functions in the targeted advertising ecosystem that is currently done by cookies.

FLoC is designed to help advertisers perform behavioral targeting without third-party cookies. A browser with FLoC enabled would collect information about its user's browsing habits, then use that information to assign its user to a "cohort" or group. Users with similar browsing habits--for some definition of "similar"--would be grouped into the same cohort. Each user's browser will share a cohort ID, indicating which group they belong to, with websites and advertisers. According to the proposal, at least a few thousand users should belong to each cohort (though that's not a guarantee).

If that sounds dense, think of it this way: your FLoC ID will be like a succinct summary of your recent activity on the Web.

Google's proof of concept used the domains of the sites that each user visited as the basis for grouping people together. It then used an algorithm called SimHash to create the groups. SimHash can be computed locally on each user's machine, so there's no need for a central server to collect behavioral data. However, a central administrator could have a role in enforcing privacy guarantees. In order to prevent any cohort from being too small (i.e. too identifying), Google proposes that a central actor could count the number of users assigned each cohort. If any are too small, they can be combined with other, similar cohorts until enough users are represented in each one.
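
The proposal itself is not accompanied by reference code here; the following is a minimal, hypothetical Python sketch of the idea described above, using a simplified unweighted SimHash over visited domains. The hash function, feature set and ID width are illustrative assumptions, not Google's actual parameters.

```python
import hashlib

COHORT_BITS = 8  # Google's proof of concept used 8-bit cohort IDs (256 possible cohorts)

def simhash_cohort(domains, bits=COHORT_BITS):
    """Toy SimHash: every visited domain votes on each output bit;
    the sign of the summed votes fixes that bit of the cohort ID."""
    votes = [0] * bits
    for domain in domains:
        h = int.from_bytes(hashlib.sha256(domain.encode()).digest(), "big")
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    cohort = 0
    for i, v in enumerate(votes):
        if v > 0:
            cohort |= 1 << i
    return cohort

# Users with largely overlapping browsing histories land in the same or nearby
# cohorts, which is the grouping property the proposal relies on.
print(f"{simhash_cohort(['news.example', 'cycling.example', 'recipes.example']):08b}")
```

Because the whole computation can run on the user's device, no raw browsing history has to leave the browser to produce the label; under the proposal the central server's role is limited to counting cohort sizes and merging cohorts that are too small.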

According to the proposal, most of the specifics are still up in the air. The draft specification states that a user's cohort ID will be available via Javascript, but it's unclear whether there will be any restrictions on who can access it, or whether the ID will be shared in any other ways. FLoC could perform clustering based on URLs or page content instead of domains; it could also use a federated learning-based system (as the name FLoC implies) to generate the groups instead of SimHash. It's also unclear exactly how many possible cohorts there will be. Google's experiment used 8-bit cohort identifiers, meaning that there were only 256 possible cohorts. In practice that number could be much higher; the documentation suggests a 16-bit cohort ID comprising 4 hexadecimal characters. The more cohorts there are, the more specific they will be; longer cohort IDs will mean that advertisers learn more about each user's interests and have an easier time fingerprinting them.

One thing that is specified is duration. FLoC cohorts will be re-calculated on a weekly basis, each time using data from the previous week's browsing. This makes FLoC cohorts less useful as long-term identifiers, but it also makes them more potent measures of how users behave over time.

New privacy problems

FLoC is part of a suite intended to bring targeted ads into a privacy-preserving future. But the core design involves sharing new information with advertisers. Unsurprisingly, this also creates new privacy risks.

Fingerprinting

The first issue is fingerprinting. Browser fingerprinting is the practice of gathering many discrete pieces of information from a user's browser to create a unique, stable identifier for that browser. EFF's Cover Your Tracks project demonstrates how the process works: in a nutshell, the more ways your browser looks or acts different from others', the easier it is to fingerprint.

Google has promised that the vast majority of FLoC cohorts will comprise thousands of users each, so a cohort ID alone shouldn't distinguish you from a few thousand other people like you. However, that still gives fingerprinters a massive head start. If a tracker starts with your FLoC cohort, it only has to distinguish your browser from a few thousand others (rather than a few hundred million). In information theoretic terms, FLoC cohorts will contain several bits of entropy--up to 8 bits, in Google's proof of concept trial. This information is even more potent given that it is unlikely to be correlated with other information that the browser exposes. This will make it much easier for trackers to put together a unique fingerprint for FLoC users.
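
To put rough numbers on that head start, here is a back-of-the-envelope sketch. The population and cohort sizes below are illustrative assumptions taken from the article's framing (a few hundred million browsers, cohorts of a few thousand users), not measured figures.

```python
import math

population = 300_000_000   # assumed number of browsers a tracker wants to tell apart
cohort_size = 3_000        # assumed cohort size ("thousands of users each")

bits_for_unique_id = math.log2(population)        # ~28.2 bits to single out one browser
bits_left_after_floc = math.log2(cohort_size)     # ~11.6 bits still needed once the cohort is known
bits_leaked_by_cohort = bits_for_unique_id - bits_left_after_floc   # ~16.6 bits handed over

print(f"unique fingerprint needs ~{bits_for_unique_id:.1f} bits of entropy")
print(f"cohort ID contributes    ~{bits_leaked_by_cohort:.1f} bits")
print(f"remaining work           ~{bits_left_after_floc:.1f} bits (fonts, screen size, timezone, ...)")
```

With the 8-bit proof-of-concept IDs the cohort can leak at most 8 bits, but the point stands either way: whatever the cohort contributes is entropy the rest of the fingerprinting surface no longer has to supply.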

Google has acknowledged this as a challenge, but has pledged to solve it as part of the broader "Privacy Budget" plan it has to deal with fingerprinting long-term. Solving fingerprinting is an admirable goal, and its proposal is a promising avenue to pursue. But according to the FAQ, that plan is "an early stage proposal and does not yet have a browser implementation." Meanwhile, Google is set to begin testing FLoC as early as this month.

Fingerprinting is notoriously difficult to stop. Browsers like Safari and Tor have engaged in years-long wars of attrition against trackers, sacrificing large swaths of their own feature sets in order to reduce fingerprinting attack surfaces. Fingerprinting mitigation generally involves trimming away or restricting unnecessary sources of entropy--which is what FLoC is. Google should not create new fingerprinting risks until it's figured out how to deal with existing ones.

Cross-context exposure

The second problem is less easily explained away: the technology will share new personal data with trackers who can already identify users. For FLoC to be useful to advertisers, a user's cohort will necessarily reveal information about their behavior.

The project's Github page addresses this up front:

This API democratizes access to some information about an individual's general browsing history (and thus, general interests) to any site that opts into it. ... Sites that know a person's PII (e.g., when people sign in using their email address) could record and reveal their cohort. This means that information about an individual's interests may eventually become public.

As described above, FLoC cohorts shouldn't work as identifiers by themselves. However, any company able to identify a user in other ways--say, by offering "log in with Google" services to sites around the Internet--will be able to tie the information it learns from FLoC to the user's profile.

Two categories of information may be exposed in this way:

  • Specific information about browsing history. Trackers may be able to reverse-engineer the cohort-assignment algorithm to determine that any user who belongs to a specific cohort probably or definitely visited specific sites.

  • General information about demographics or interests. Observers may learn that, in general, members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth.

This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web. Moreover, as your FLoC cohort will update over time, sites that can identify you in other ways will also be able to track how your browsing changes. Remember, a FLoC cohort is nothing more, and nothing less, than a summary of your recent browsing activity.
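
To illustrate the cross-context concern concretely, here is a hypothetical sketch (all names and values invented) of how any site that already identifies its visitors, for example through a login, could record the weekly cohort ID alongside the account and end up with a rolling behavioral profile, no third-party cookie required.

```python
from collections import defaultdict

# Hypothetical first-party log: logged-in identity plus the cohort ID
# the browser exposes during that week's visits.
profiles = defaultdict(list)

def record_visit(account, week, cohort_id):
    """Tie this week's behavioral label to an identified account."""
    profiles[account].append((week, cohort_id))

record_visit("alice@example.com", "2021-W09", 0xB2)
record_visit("alice@example.com", "2021-W10", 0x67)
record_visit("alice@example.com", "2021-W11", 0x67)

# Each account now carries a week-by-week summary of its browsing behavior.
for account, timeline in profiles.items():
    print(account, [(week, f"{cid:02x}") for week, cid in timeline])
```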

You should have a right to present different aspects of your identity in different contexts. If you visit a site for medical information, you might trust it with information about your health, but there's no reason it needs to know what your politics are. Likewise, if you visit a retail website, it shouldn't need to know whether you've recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with.

Beyond privacy

FLoC is designed to prevent a very specific threat: the kind of individualized profiling that is enabled by cross-context identifiers today. The goal of FLoC and other proposals is to avoid letting trackers access specific pieces of information that they can tie to specific people. As we've shown, FLoC may actually help trackers in many contexts. But even if Google is able to iterate on its design and prevent these risks, the harms of targeted advertising are not limited to violations of privacy. FLoC's core objective is at odds with other civil liberties.

The power to target is the power to discriminate. By definition, targeted ads allow advertisers to reach some kinds of people while excluding others. A targeting system may be used to decide who gets to see job postings or loan offers just as easily as it is to advertise shoes.

Over the years, the machinery of targeted advertising has frequently been used for exploitation, discrimination, and harm. The ability to target people based on ethnicity, religion, gender, age, or ability allows discriminatory ads for jobs, housing, and credit. Targeting based on credit history--or characteristics systematically associated with it--enables predatory ads for high-interest loans. Targeting based on demographics, location, and political affiliation helps purveyors of politically motivated disinformation and voter suppression. All kinds of behavioral targeting increase the risk of convincing scams.

Google, Facebook, and many other ad platforms already try to rein in certain uses of their targeting platforms. Google, for example, limits advertisers' ability to target people in "sensitive interest categories." However, these efforts frequently fall short; determined actors can usually find workarounds to platform-wide restrictions on certain kinds of targeting or certain kinds of ads.

Even with absolute power over what information can be used to target whom, platforms are too often unable to prevent abuse of their technology. But FLoC will use an unsupervised algorithm to create its clusters. That means that nobody will have direct control over how people are grouped together. Ideally (for advertisers), FLoC will create groups that have meaningful behaviors and interests in common. But online behavior is linked to all kinds of sensitive characteristics--demographics like gender, ethnicity, age, and income; "big 5" personality traits; even mental health. It is highly likely that FLoC will group users along some of these axes as well. FLoC groupings may also directly reflect visits to websites related to substance abuse, financial hardship, or support for survivors of trauma.

Google has proposed that it can monitor the outputs of the system to check for any correlations with its sensitive categories. If it finds that a particular cohort is too closely related to a particular protected group, the administrative server can choose new parameters for the algorithm and tell users' browsers to group themselves again.

This solution sounds both orwellian and sisyphean. In order to monitor how FLoC groups correlate with sensitive categories, Google will need to run massive audits using data about users' race, gender, religion, age, health, and financial status. Whenever it finds a cohort that correlates too strongly along any of those axes, it will have to reconfigure the whole algorithm and try again, hoping that no other "sensitive categories" are implicated in the new version. This is a much more difficult version of the problem it is already trying, and frequently failing, to solve.
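
The audit the article describes would, in effect, have to check every cohort against every sensitive attribute after every weekly re-clustering. As a rough sketch of what one such check might look like (assuming, hypothetically, that the auditor holds demographic labels for a sample of users, which is itself part of the problem):

```python
def flag_skewed_cohorts(cohort_members, attribute, base_rate, max_ratio=2.0):
    """Flag cohorts where the share of users with a sensitive attribute
    exceeds max_ratio times the population base rate."""
    flagged = []
    for cohort_id, members in cohort_members.items():
        rate = sum(1 for m in members if m[attribute]) / len(members)
        if rate > max_ratio * base_rate:
            flagged.append((cohort_id, round(rate, 2)))
    return flagged

# Invented sample: cohort 7 heavily over-represents the sensitive attribute.
cohorts = {
    3: [{"sensitive": False}] * 95 + [{"sensitive": True}] * 5,
    7: [{"sensitive": False}] * 60 + [{"sensitive": True}] * 40,
}
print(flag_skewed_cohorts(cohorts, "sensitive", base_rate=0.05))  # -> [(7, 0.4)]
```

Even this toy version makes the article's point: the check has to be repeated for every sensitive attribute after every re-clustering, and it only works if the auditor already holds exactly the kind of personal data the system is supposed to keep out of the picture.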

In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won't be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings "mean"--what kinds of people they contain--through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability--after all, they aren't directly targeting protected categories, they're just reaching people based on behavior. And the whole system will be more opaque to users and regulators.

Google, please don't do this

We wrote about FLoC and the other initial batch of proposals when they were first introduced, calling FLoC "the opposite of privacy-preserving technology." We hoped that the standards process would shed light on FLoC's fundamental flaws, causing Google to reconsider pushing it forward. Indeed, several issues on the official Github page raise the exact same concerns that we highlight here. However, Google has continued developing the system, leaving the fundamentals nearly unchanged. It has started pitching FLoC to advertisers, boasting that FLoC is a "95% effective" replacement for cookie-based targeting. And starting with Chrome 89, released on March 2, it's deploying the technology for a trial run. A small portion of Chrome users--still likely millions of people--will be (or have been) assigned to test the new technology.

Make no mistake, if Google does follow through on its plan to implement FLoC in Chrome, it will likely give everyone involved "options." The system will probably be opt-in for the advertisers that will benefit from it, and opt-out for the users who stand to be hurt. Google will surely tout this as a step forward for "transparency and user control," knowing full well that the vast majority of its users will not understand how FLoC works, and that very few will go out of their way to turn it off. It will pat itself on the back for ushering in a new, private era on the Web, free of the evil third-party cookie--the technology that Google helped extend well past its shelf life, making billions of dollars in the process.

It doesn't have to be that way. The most important parts of the privacy sandbox, like dropping third-party identifiers and fighting fingerprinting, will genuinely change the Web for the better. Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.

We emphatically reject the future of FLoC. That is not the world we want, nor the one users deserve. Google needs to learn the correct lessons from the era of third-party tracking and design its browser to work for users, not for advertisers.

 

 

Ethical snooping...

Google promises not to replace cookie-based web browsing snooping with another privacy-invasive method of snooping


Link Here 3rd March 2021
Full story: Google Privacy...Google's many run-ins with privacy
David Temkin, Google's Director of Product Management, Ads Privacy and Trust, has been commenting on Google's progress in reducing personalised advertising based on snooping of people's browsing history. Temkin commented:

72% of people feel that almost all of what they do online is being tracked by advertisers, technology firms or other companies, and 81% say that the potential risks they face because of data collection outweigh the benefits, according to a study by Pew Research Center. If digital advertising doesn't evolve to address the growing concerns people have about their privacy and how their personal identity is being used, we risk the future of the free and open web.

That's why last year Chrome announced its intent to remove support for third-party cookies, and why we've been working with the broader industry on the Privacy Sandbox to build innovations that protect anonymity while still delivering results for advertisers and publishers. Even so, we continue to get questions about whether Google will join others in the ad tech industry who plan to replace third-party cookies with alternative user-level identifiers. Today, we're making explicit that once third-party cookies are phased out, we will not build alternate identifiers to track individuals as they browse across the web, nor will we use them in our products.

We realize this means other providers may offer a level of user identity for ad tracking across the web that we will not -- like PII [Personally Identifying Information] graphs based on people's email addresses. We don't believe these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions, and therefore aren't a sustainable long term investment. Instead, our web products will be powered by privacy-preserving APIs which prevent individual tracking while still delivering results for advertisers and publishers.

People shouldn't have to accept being tracked across the web in order to get the benefits of relevant advertising. And advertisers don't need to track individual consumers across the web to get the performance benefits of digital advertising.

Advances in aggregation, anonymization, on-device processing and other privacy-preserving technologies offer a clear path to replacing individual identifiers. In fact, our latest tests of FLoC [Federated Learning of Cohorts] show one way to effectively take third-party cookies out of the advertising equation and instead hide individuals within large crowds of people with common interests. Chrome intends to make FLoC-based cohorts available for public testing through origin trials with its next release this month, and we expect to begin testing FLoC-based cohorts with advertisers in Google Ads in Q2. Chrome also will offer the first iteration of new user controls in April and will expand on these controls in future releases, as more proposals reach the origin trial stage, and they receive more feedback from end users and the industry.

This points to a future where there is no need to sacrifice relevant advertising and monetization in order to deliver a private and secure experience.

 

 

Ethical snooping...

GCHQ discusses the ethics of using AI and mass snooping to analyse people's internet use to detect both serious crime and no doubt political incorrectness


Link Here 1st March 2021
The UK snooping agency GCHQ has published a paper discussing the ethics of using AI for analysing internet posts. GCHQ note that the technology will be put at the heart of its operations.

The paper, Ethics of AI: Pioneering a New National Security, comments on the technology as used to assist its analysts in spotting patterns hidden inside large - and fast-growing - amounts of data, including:

  • trying to spot fake online messages used by other states spreading disinformation
  • mapping international networks engaged in human or drug trafficking
  • finding child sex abusers hiding their identities online

But it says it cannot predict human behaviour such as moving towards executing a terrorist attack.

GCHQ is now detailing how it will ensure it uses AI fairly and transparently, including:

  • an AI ethical code of practice
  • recruiting more diverse talent to help develop and govern its use

The BBC comments that this may be a sign the agency wants to avoid a repeat of the criticism that people were unaware of how it used data, following whistleblower Edward Snowden's revelations.

GCHQ reports that a growing number of states are using AI to automate the production of false content to affect public debate, including "deepfake" video and audio. The technology can individually target and personalise this content or spread it through chatbots or by interfering with social-media algorithms. But it could also help GCHQ detect and fact-check it and identify "troll farms" and botnet accounts.

GCHQ speaks of capabilities in terms of detecting child abuse, where functionalities include:

  • help analyse evidence of grooming in chat rooms
  • track the disguised identities of offenders across multiple accounts
  • discover hidden people and illegal services on the dark web
  • help police officers infiltrate rings of offenders
  • filter content to prevent analysts from being unnecessarily exposed to disturbing imagery

and on trafficking:

  • mapping the international networks that enable trafficking - identifying individuals, accounts and transactions
  • "following the money" - analysing complex transactions, possibly revealing state sponsors or links to terrorist groups
  • bringing together different types of data - such as imagery and messaging - to track and predict where illegal cargos are being delivered

No doubt these functionalities will also be used for more mundane reasons.

 

 

Taking a clear view on Clearview...

Swedish police fined for the illegal use of facial recognition AI software


Link Here 11th February 2021
The Swedish Authority for Privacy Protection (IMY) has found that the Swedish Police Authority processed personal data in breach of the Swedish Criminal Data Act when using Clearview AI to identify individuals.

An IMY investigation concluded that Clearview AI has been used by the Police on a number of occasions without any prior authorisation.

IMY concluded that the Police didn't fulfil its obligations as a data controller on a number of accounts with regards to the use of Clearview AI. The Police has failed to implement sufficient organisational measures to ensure and be able to demonstrate that the processing of personal data in this case has been carried out in compliance with the Criminal Data Act. When using Clearview AI the Police has unlawfully processed biometric data for facial recognition as well as having failed to conduct a data protection impact assessment which this case of processing would require.

IMY fined the police SEK 2,500,000 (approximately 250,000 euro). IMY also ordered the Police to conduct further training and education of its employees in order to avoid any future processing of personal data in breach of data protection rules and regulations. In addition, the Police were ordered to inform the data subjects whose data has been disclosed to Clearview AI, when confidentiality rules so allow. Finally, the Police are ordered to ensure, to the extent possible, that any personal data transferred to Clearview AI is erased.

 

 

Problem snooping...

The Gambling Commission consult on a despicable proposal requiring bookies to investigate the financial standings of their online customers


Link Here 6th February 2021
The Gambling Commission has a problem. It holds the gambling business in utter contempt and thinks that bookies are suitable private companies to forcibly and invasively snoop into people's financial affairs.

The Racing Post editor explains better than the Gambling Commission how this proposal will pan out:

Nothing to worry about, then. Just a perfectly normal proposal that a non-governmental quango staffed by unelected, unaccountable bureaucrats -- not even civil servants -- will determine a loss level at which you and I should be subject to checks on our personal finances. Just a demand that in order to continue betting if a couple of £25 each-way punts go awry, we must share payslips and bank statements with our bookie.

Well! Most punters would tell any betting operator asking for such invasive details of one's financial affairs to go whistle, but imagine for a second that you would subject yourself to such an illiberal and demeaning process. How would your capacity to bet be assessed? The Gambling Commission's consultation document gives some clues, and its suggestions should horrify punters and anyone who cares about the rights of the individual to manage their own affairs without overweening state interference.

First, it is essential to note that while proponents of affordability checks would have you believe these can take place seamlessly and without any inconvenience to punters by utilising information betting operators already hold or can access from credit agencies, the Gambling Commission states this is not the case, noting: "we would want to be clear that it is still likely that operators will need to collect information directly from customers."

So, let's say you are a daily £10 punter and after a fortnight of middling-to-poor results you hit the prospective £100 threshold for affordability checks. You reluctantly and with grave reservations hand over your most sensitive financial documents for review. How does the operator decide if you are barred from betting for the rest of the month and subject to punting restrictions for evermore?

According to the Gambling Commission, the most relevant way of assessing your capacity to bet before beginning to experience harms is through assessing what it calls discretionary income. This is what you have left each month after spending on essentials like taxes, bills, food and housing. Crucially, however, the commission adds it would not be expected that anyone could spend their entire discretionary income on gambling without experiencing harms.

As such, the Gambling Commission is not just suggesting your financial affairs should be subject to the sort of scrutiny you might find uncomfortable coming from your spouse, never mind Sky Bet, but that the sum of money you have left after meeting all obligations and purchasing all essentials still cannot be used as you see fit. This is a naked admission that this is not about affordability, but about prohibitionism and control.

Meanwhile, there is an additional takeaway from reading the consultation paper:

  • Bettors should simply never use bookies' forums. The bookies are expected to crawl through people's conversations looking for clues about their mental state, so if you comment that you are a bit depressed that your bet failed, you may find that you get banned on grounds of clinical depression.
  • Similarly, bettors should think very carefully about what they tell bookies via helpline conversations or messaging services. It is clear that the bookie's staff will be listening to every word, wondering whether what you say can be interpreted as some sort of clue about personal or financial difficulties.
  • Bettors should also consider whether using self-control mechanisms such as staking limits or time-outs may be interpreted as some sort of admission that they need to be closely surveilled.

 

 

Offsite Article: Twitchy about VPNs...


Link Here 4th February 2021
VPN users are reporting that their chats no longer show up on Twitch streams

See article from techradar.com

 

 

Offsite Article: Don't let the censors get you down...


Link Here 27th January 2021
An introduction to private and encrypted messaging apps

See article from reclaimthenet.org

 

 

Updated: Sharing your data...

Details and comments about the WhatsApp announcement that it will be handing over your personal data to Facebook


Link Here 17th January 2021

WhatsApp is forcing users to agree to sharing information with Facebook if they want to keep using the service.

The company warns users in a pop-up notice that they need to accept these updates to continue using WhatsApp - or delete their accounts.

But Facebook, which owns WhatsApp, said European and UK users would not see the same data-sharing changes, although they will need to accept new terms.

See details in article from bbc.co.uk

See also comment piece WhatsApp users are really Facebook customers now -- it's getting harder to forget that from theguardian.com

Update: Postponed

17th January 2021. See article from theverge.com

WhatsApp on Friday announced a three-month delay of a new privacy policy originally slated to go into effect on February 8th following widespread confusion over whether the new policy would mandate data sharing with Facebook. The changes will now apply from 15th May 2021.

The update does not in fact affect data sharing with Facebook with regard to user chats or other profile information; WhatsApp has repeatedly clarified that its update addresses business chats in the event a user converses with a company's customer service platform through WhatsApp.

 

 

TikTok sets accounts of under 16s to private...

Responding to child privacy concerns


Link Here 14th January 2021
TikTok users aged under 16 will have their accounts automatically set to private, as the app introduces a series of measures to improve child safety.

Only approved followers will be able to comment on videos from these accounts. Users will also be prevented from downloading any videos created by under-16s.

TikTok said it hoped the changes would encourage young users to actively engage in their online privacy journey.

Those aged between 13 and 15 will be able to approve friends for comments and choose whether to make videos public. But those accounts will also not be suggested to other users on the app.

The accounts of 16- and 17-year-olds will prevent others downloading their videos - but the youngsters will have the ability to turn off this restriction.

