Privacy News

  'Welcome back Mr Jones, we see you have another new girlfriend, we've a special for you on condoms'...

7-Eleven convenience stores to snoop on customers using facial recognition technology

Link Here 17th March 2018
The convenience store 7-Eleven is rolling out artificial intelligence at its 11,000 stores across Thailand.

7-Eleven will use facial-recognition and behavior-analysis technologies for multiple purposes. The ones it has decided to reveal to the public are to identify loyalty members, analyze in-store traffic, monitor product levels, suggest products to customers, and even measure the emotions of customers as they walk around.
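The loyalty-member use case comes down to matching a face captured in-store against a database of enrolled customers. Purely as an illustration of the idea (Remark Holdings' actual system is proprietary and not described here), a minimal sketch of embedding-based matching might look like this; the embeddings themselves are assumed to come from some face-recognition model:

```python
# Hypothetical sketch of loyalty-member identification via face embeddings.
# Assumes a face-embedding model has already turned each face image into a
# numeric vector; the real 7-Eleven / Remark pipeline is not public.
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_member(face_embedding, enrolled, threshold=0.8):
    """Return the member ID whose stored embedding best matches the live
    face, or None if nothing clears the similarity threshold."""
    best_id, best_score = None, threshold
    for member_id, stored in enrolled.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_id, best_score = member_id, score
    return best_id
```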

The company announced it will be using technology developed by US-based Remark Holdings, which says its facial-recognition technology has an accuracy rate of more than 96%. Remark, which has data partnerships with Alibaba, Tencent, and Baidu, has a significant presence in China.

The rollout at Thailand's 7-Eleven stores remains unique in scope. It could potentially be the largest number of facial-recognition cameras to be adopted by one company. No corporate entity is so entrenched in Thai lives, according to a report from Public Radio International. And that may be crucial not only to the success of facial recognition in 7-Eleven stores in Thailand, but across the region.


 Offsite Article: Close the Windows...

Link Here 15th March 2018
The Tails operating system provides privacy and anonymity and it runs from a memory stick

See article from


  Facing down researchers (and their governments)...

Facebook is commendably refusing to hand over private Facebook data to researchers who want to see how fake news (and no doubt other politically incorrect content) spreads

Link Here 12th March 2018



MIT details new privacy service where web browsers are served with encrypted pages that leave little for trackers and snoopers

Link Here 27th February 2018

Today, most web browsers have private-browsing modes, in which they temporarily desist from recording the user's browsing history.

But data accessed during private browsing sessions can still end up tucked away in a computer's memory, where a sufficiently motivated attacker could retrieve it.

This week, at the Network and Distributed Systems Security Symposium, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Harvard University presented a paper describing a new system, dubbed Veil, that makes private browsing more private.

Veil would provide added protections to people using shared computers in offices, hotel business centers, or university computing centers, and it can be used in conjunction with existing private-browsing systems and with anonymity networks such as Tor, which was designed to protect the identity of web users living under repressive regimes.

"Veil was motivated by all this research that was done previously in the security community that said, 'Private-browsing modes are leaky -- Here are 10 different ways that they leak,'" says Frank Wang, an MIT graduate student in electrical engineering and computer science and first author on the paper. "We asked, 'What is the fundamental problem?' And the fundamental problem is that [the browser] collects this information, and then the browser does its best effort to fix it. But at the end of the day, no matter what the browser's best effort is, it still collects it. We might as well not collect that information in the first place."

Wang is joined on the paper by his two thesis advisors: Nickolai Zeldovich, an associate professor of electrical engineering and computer science at MIT, and James Mickens, an associate professor of computer science at Harvard.

Shell game

With existing private-browsing sessions, Wang explains, a browser will retrieve data much as it always does and load it into memory. When the session is over, it attempts to erase whatever it retrieved.

But in today's computers, memory management is a complex process, with data continuously moving around between different cores (processing units) and caches (local, high-speed memory banks). When memory banks fill up, the operating system might transfer data to the computer's hard drive, where it could remain for days, even after it's no longer being used.

Generally, a browser won't know where the data it downloaded has ended up. Even if it did, it wouldn't necessarily have authorization from the operating system to delete it.

Veil gets around this problem by ensuring that any data the browser loads into memory remains encrypted until it's actually displayed on-screen. Rather than typing a URL into the browser's address bar, the Veil user goes to the Veil website and enters the URL there. A special server -- which the researchers call a blinding server -- transmits a version of the requested page that's been translated into the Veil format.

The Veil page looks like an ordinary webpage: Any browser can load it. But embedded in the page is a bit of code -- much like the embedded code that would, say, run a video or display a list of recent headlines in an ordinary page -- that executes a decryption algorithm. The data associated with the page is unintelligible until it passes through that algorithm.
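The Veil prototype performs this decryption in JavaScript shipped inside the served page. Purely to illustrate the shape of that decrypt-before-display step (this is not Veil's actual code), the core operation amounts to decrypting the page payload with an authenticated cipher such as AES-GCM before handing it to the renderer:

```python
# Illustrative sketch only: Veil does this in JavaScript inside the page.
# Until decrypt_page runs, the bytes held in memory are ciphertext rather
# than readable page content.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_page(key: bytes, nonce: bytes, encrypted_html: bytes) -> str:
    """Turn the blinded payload back into renderable HTML."""
    aesgcm = AESGCM(key)                          # 128/192/256-bit key
    plaintext = aesgcm.decrypt(nonce, encrypted_html, None)
    return plaintext.decode("utf-8")
```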


Once the data is decrypted, it will need to be loaded in memory for as long as it's displayed on-screen. That type of temporarily stored data is less likely to be traceable after the browser session is over. But to further confound would-be attackers, Veil includes a few other security features.

One is that the blinding servers randomly add a bunch of meaningless code to every page they serve. That code doesn't affect the way a page looks to the user, but it drastically changes the appearance of the underlying source file. No two transmissions of a page served by a blinding server look alike, and an adversary who managed to recover a few stray snippets of decrypted code after a Veil session probably wouldn't be able to determine what page the user had visited.
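The article doesn't spell out exactly how the blinding servers mutate each page, but the general technique can be pictured as injecting harmless, random markup that changes the source bytes on every serve without changing what is rendered. A hypothetical sketch of that idea:

```python
# Hypothetical sketch of per-request page mutation: a random comment and a
# junk data-* attribute change the source text on every serve while the
# rendered page stays identical. The real Veil blinding server is more
# sophisticated than this.
import secrets

def mutate_page(html: str) -> str:
    junk_comment = f"<!-- {secrets.token_hex(16)} -->"
    junk_attr = f' data-noise="{secrets.token_hex(8)}"'
    # Comments are invisible, and browsers ignore unknown data-* attributes,
    # so the user sees exactly the same page.
    return html.replace("<body", "<body" + junk_attr, 1) + junk_comment
```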

If the combination of run-time decryption and code obfuscation doesn't give the user an adequate sense of security, Veil offers an even harder-to-hack option. With this option, the blinding server opens the requested page itself and takes a picture of it. Only the picture is sent to the Veil user, so no executable code ever ends up in the user's computer. If the user clicks on some part of the image, the browser records the location of the click and sends it to the blinding server, which processes it and returns an image of the updated page.
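In that mode the browser is reduced to an image viewer: it displays a picture of the page and forwards clicks to the blinding server, which applies them to its own copy of the page and replies with a fresh picture. A rough sketch of the server-side loop, with both helpers left as hypothetical stand-ins (a real server might drive a headless browser):

```python
# Rough sketch of the picture-only mode described above: no executable page
# code ever reaches the client, only screenshots. Both helpers are
# hypothetical stand-ins for server-side rendering and hit-testing.

def render_page_to_png(url: str) -> bytes:
    raise NotImplementedError("stand-in: render the page to an image")

def resolve_click(url: str, x: int, y: int) -> str:
    raise NotImplementedError("stand-in: find the link/control under (x, y)")

def handle_click(current_url: str, x: int, y: int) -> bytes:
    """Apply a forwarded click to the server-side copy of the page and
    return a screenshot of the resulting page to the client."""
    next_url = resolve_click(current_url, x, y)
    return render_page_to_png(next_url)
```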

The back end

Veil does, of course, require web developers to create Veil versions of their sites. But Wang and his colleagues have designed a compiler that performs this conversion automatically. The prototype of the compiler even uploads the converted site to a blinding server. The developer simply feeds the existing content for his or her site to the compiler.

A slightly more demanding requirement is the maintenance of the blinding servers. These could be hosted by either a network of private volunteers or a for-profit company. But site managers may wish to host Veil-enabled versions of their sites themselves. For web services that already emphasize the privacy protections they afford their customers, the added protections provided by Veil could offer a competitive advantage.

"Veil attempts to provide a private browsing mode without relying on browsers," says Taesoo Kim, an assistant professor of computer science at Georgia Tech, who was not involved in the research. "Even if end users didn't explicitly enable the private browsing mode, they still can get benefits from Veil-enabled websites. Veil aims to be practical -- it doesn't require any modification on the browser side -- and to be stronger -- taking care of other corner cases that browsers do not have full control of."


  Golden Oldies...

US judge strikes down law banning IMDb from publishing the age of movie stars

Link Here 21st February 2018
A Californian law that prevented the Internet Movie Database (IMDb) from publishing the ages of movie stars has been struck down on First Amendment grounds. The federal judge declared it not only unconstitutional, but also a bad solution to the wrong problem.

The law went into effect in 2017 after being signed by California Gov. Jerry Brown. The goal was to mitigate age discrimination in a youth-obsessed Hollywood by requiring IMDb to remove age-related information upon the request of a subscriber.

The judge explained:

Even if California had shown that the law was passed after targeted efforts to eliminate discrimination in the entertainment industry had failed, the law is not narrowly tailored. For one, the law is underinclusive, in that it bans only one kind of speaker from disseminating age-related information, leaving all other sources of that information untouched. Even looking just at IMDb, the law requires IMDb to take down some age-related information -- that of the members of its subscription service who request its removal -- but not the age-related information of those who don't subscribe to IMDbPro, or who don't ask to take their information down.

The judge adds that the law is also overinclusive:

For instance, it requires IMDb not to publish the age-related information of all those who request that their information not be published, not just of those protected by existing age discrimination laws. If the state is concerned about discriminatory conduct affecting those not covered by current laws, namely those under 40, it certainly has a more direct means of addressing those concerns than imposing restrictions on IMDb's speech.

Californian officials said the state will be appealing this ruling to the Ninth Circuit Court of Appeals.


  Endangering porn stars...

A German court finds that Facebook's real name policy is illegal, and a Belgian court tells Facebook to delete tracking data on people not signed up to Facebook

Link Here 17th February 2018  full story: Facebook Privacy...Facebook criticised for discouraging privacy

Germany

In a ruling of particular interest to those working in the adult entertainment biz, a German court has ruled that Facebook's real name policy is illegal and that users must be allowed to sign up for the service under pseudonyms.

The opinion comes from the Berlin Regional Court and was disseminated by the Federation of German Consumer Organizations, which filed the suit against Facebook. The Berlin court found that Facebook's real name policy was a covert way of obtaining users' consent to share their names, names being among many pieces of information for which the court said Facebook did not properly obtain users' permission.

The court also said that Facebook didn't provide a clear-cut choice to users for other default settings, such as to share their location in chats. It also ruled against clauses that allowed the social media giant to use information such as profile pictures for commercial, sponsored or related content.

Facebook told Reuters it will appeal the ruling, but also that it will make changes to comply with European Union privacy laws coming into effect in June.


Belgium

Facebook has been ordered to stop tracking people without consent by a court in Belgium. The company has been told to delete all the data it had gathered on people who did not use Facebook. The court ruled the data was gathered illegally.

Belgium's privacy watchdog said the website had broken privacy laws by placing tracking code on third-party websites.

Facebook said it would appeal against the ruling.

The social network faces fines of 250,000 euros a day if it does not comply.

The ruling is the latest in a long-running dispute between the social network and the Belgian commission for the protection of privacy (CPP). In 2015, the CPP complained that Facebook tracked people when they visited pages on the site or clicked like or share, even if they were not members.


 Offsite Article: We already give up our privacy to use phones, why not with cars too?...

Link Here 15th February 2018
The future of transport looks like a sensor-riddled computer

See article from


  Sneaky tricks...

Instagram is trying to inform posters that their post has been saved, snapped or recorded before it self-destructs

Link Here 13th February 2018  full story: Instagram Censorship...Photo sharing website gets heavy on the censorship
Some users have reported seeing pop-ups in Instagram (IG) informing them that, from now on, Instagram will flag when you record or take a screenshot of other people's IG stories and inform the originator that you have snapped or recorded the post.

According to a report by TechCrunch, those who have been selected to participate in the IG trial can see exactly who has been creeping and snapping their stories. Those who have screenshotted an image or recorded a video will have a little camera shutter logo next to their usernames, much like Snapchat.

Of course, users have already found a nifty workaround to avoid being exposed as social media stalkers. Switching your phone to airplane mode after the story has loaded, then taking your screenshot, means the poster won't be notified of any impropriety (though it sounds easy for Instagram to fix this by saving the event until the next time the app communicates with the Instagram server, as sketched below). You could also download the stories from Instagram's website or use an app like Story Reposter. Maybe PC users just need another small window on the desktop, and to move the mouse pointer to that window before snapping the display.
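The airplane-mode trick only works because the notification is fired immediately. As a purely hypothetical sketch of the fix mentioned above, an app could queue the screenshot event locally and send it once connectivity returns, so going offline merely delays the report rather than preventing it:

```python
# Hypothetical sketch of the 'save the event until the next server contact'
# fix suggested above: screenshot events captured while offline are queued
# and flushed on the next successful connection.
import json
import time

class ScreenshotReporter:
    def __init__(self, send):
        self.send = send        # callable that posts one event to the server
        self.pending = []       # events captured while offline

    def on_screenshot(self, story_id: str, viewer: str) -> None:
        self.pending.append({"story": story_id, "viewer": viewer,
                             "time": time.time()})
        self.flush()

    def flush(self) -> None:
        while self.pending:
            try:
                self.send(json.dumps(self.pending[0]))  # raises if offline
            except OSError:
                return                                  # still offline; keep queued
            self.pending.pop(0)
```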

Clearly, there are concerns on Instagram's part about users' content being shared without their permission, but if a post is shared with someone for viewing, it is pretty tough to stop them from grabbing a copy for themselves as they view it.


 Offsite Article: Why should I use DuckDuckGo instead of Google?...

Link Here 12th February 2018
Promotional material but nevertheless makes a few good points

See article from


  Smartphone data tracking is more than creepy...

Here's why you should be worried

Link Here 8th February 2018

Smartphones rule our lives. Having information at our fingertips is the height of convenience. They tell us all sorts of things, but the information we see and receive on our smartphones is just a fraction of the data they generate. By tracking and monitoring our behaviour and activities, smartphones build a digital profile of shockingly intimate information about our personal lives.

These records aren’t just a log of our activities. The digital profiles they create are traded between companies and used to make inferences and decisions that affect the opportunities open to us and our lives. What’s more, this typically happens without our knowledge, consent or control.

New and sophisticated methods built into smartphones make it easy to track and monitor our behaviour. A vast amount of information can be collected from our smartphones, both when being actively used and while running in the background. This information can include our location, internet search history, communications, social media activity, finances and biometric data such as fingerprints or facial features. It can also include metadata – information about the data – such as the time and recipient of a text message.

[Image: Your emails can reveal your social network. David Glance]

Each type of data can reveal something about our interests and preferences, views, hobbies and social interactions. For example, a study conducted by MIT demonstrated how email metadata can be used to map our lives, showing the changing dynamics of our professional and personal networks. This data can be used to infer personal information including a person’s background, religion or beliefs, political views, sexual orientation and gender identity, social connections, or health. For example, it is possible to deduce our specific health conditions simply by connecting the dots between a series of phone calls.
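Metadata alone is enough to sketch a social graph: simply counting who contacts whom, and how often, exposes the structure of someone's professional and personal networks without reading a single message. A minimal illustration, assuming a list of (sender, recipient, timestamp) records:

```python
# Minimal illustration of what email metadata alone reveals: no message
# bodies are read, yet counting sender/recipient pairs exposes a person's
# strongest ties.
from collections import Counter

def contact_graph(messages):
    """messages: iterable of (sender, recipient, unix_timestamp) records."""
    edges = Counter()
    for sender, recipient, _ts in messages:
        edges[frozenset((sender, recipient))] += 1   # undirected tie strength
    return edges

def strongest_ties(messages, top=5):
    return contact_graph(messages).most_common(top)
```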

Different types of data can be consolidated and linked to build a comprehensive profile of us. Companies that buy and sell data – data brokers – already do this. They collect and combine billions of data elements about people to make inferences about them. These inferences may seem innocuous but can reveal sensitive information such as ethnicity, income levels, educational attainment, marital status, and family composition.

A recent study found that seven in ten smartphone apps share data with third-party tracking companies like Google Analytics. Data from numerous apps can be linked within a smartphone to build this more detailed picture of us, even if permissions for individual apps are granted separately. Effectively, smartphones can be converted into surveillance devices.
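Linking data within a smartphone typically relies on a shared identifier, most obviously the device's advertising ID, which lets records collected by unrelated apps be joined into one profile. A simplified illustration of that joining step:

```python
# Simplified illustration of cross-app profile building: records from
# different apps are joined on a shared device identifier (for example an
# advertising ID), turning separate data streams into one profile.
from collections import defaultdict

def build_profiles(app_records):
    """app_records: dicts like {'ad_id': 'abc', 'app': 'maps', 'home': ...}"""
    profiles = defaultdict(dict)
    for record in app_records:
        ad_id = record["ad_id"]
        for key, value in record.items():
            if key != "ad_id":
                profiles[ad_id][key] = value   # later records overwrite earlier
    return dict(profiles)
```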

The result is the creation and amalgamation of digital footprints that provide in-depth knowledge about your life. The most obvious reason for companies collecting information about individuals is for profit, to deliver targeted advertising and personalised services. Some targeted ads, while perhaps creepy, aren’t necessarily a problem, such as an ad for the new trainers you have been eyeing up.

[Image: Payday loan ads. Upturn, CC BY]

But targeted advertising based on our smartphone data can have real impacts on livelihoods and well-being, beyond influencing purchasing habits. For example, people in financial difficulty might be targeted for ads for payday loans. They might use these loans to pay for unexpected expenses, such as medical bills, car maintenance or court fees, but could also rely on them for recurring living costs such as rent and utility bills. People in financially vulnerable situations can then become trapped in spiralling debt as they struggle to repay loans due to the high cost of credit.

Targeted advertising can also enable companies to discriminate against people and deny them an equal chance of accessing basic human rights, such as housing and employment. Race is not explicitly included in Facebook’s basic profile information, but a user’s “ethnic affinity” can be worked out based on pages they have liked or engaged with. Investigative journalists from ProPublica found that it is possible to exclude those who match certain ethnic affinities from housing ads, and certain age groups from job ads.

This is different to traditional advertising in print and broadcast media, which although targeted is not exclusive. Anyone can still buy a copy of a newspaper, even if they are not the typical reader. Targeted online advertising can completely exclude some people from information without them ever knowing. This is a particular problem because the internet, and social media especially, is now such a common source of information.

Social media data can also be used to calculate creditworthiness, despite its dubious relevance. Indicators such as the level of sophistication in a user’s language on social media, and their friends’ loan repayment histories, can now be used for credit checks. This can have a direct impact on the fees and interest rates charged on loans, the ability to buy a house, and even employment prospects.

There’s a similar risk with payment and shopping apps. In China, the government has announced plans to combine data about personal expenditure with official records, such as tax returns and driving offences. This initiative, which is being led by both the government and companies, is currently in the pilot stage. When fully operational, it will produce a social credit score that rates an individual citizen’s trustworthiness. These ratings can then be used to issue rewards or penalties, such as privileges in loan applications or limits on career progression.

These possibilities are not distant or hypothetical – they exist now. Smartphones are effectively surveillance devices, and everyone who uses them is exposed to these risks. What’s more, it is impossible to anticipate and detect the full range of ways smartphone data is collected and used, and to demonstrate the full scale of its impact. What we know could be just the beginning.

Vivian Ng, Senior Research Officer, Human Rights Centre, University of Essex, and Catherine Kent, Project Officer, Human Rights Centre, University of Essex

This article was originally published on The Conversation. Read the original article.


 Offsite Article: Detecting people who walk side by side...

Link Here 17th January 2018
Facebook's patent applications reveal some of its creepy ideas about working out who you know

See article from


 Commented: Social services may well be interested to know what your kids are watching on TV...

Smartphone games snoop on your TV viewing using the phone's microphone

Link Here 5th January 2018
A large number of games apps are snooping on players, using the smartphone's microphone to listen to what is playing on TV. The apps recognise TV audio and report what is being watched back to home base, supposedly to help in targeted advertising.

Software from Alphonso, a start-up that collects TV-viewing data for advertisers, is used in at least 1,000 games. The games do actually seek user consent to use the microphone, but users may not be fully aware of the consequences of leaving an open mike in their house or in their children's rooms.

Alphonso's software can detail what people watch by identifying audio signals in TV ads and shows, sometimes even matching that information with the places people visit and the movies they see. The information can then be used to target ads more precisely and to try to analyze things like which ads prompted a person to go to a car dealership.

Alphonso claims that its software does not record human speech, and says it does not approve of its software being used in apps meant for children. But it was, as of earlier this month, integrated in more than a dozen games such as Teeth Fixed and Zap Balloons from KLAP Edutainment in India, which describes itself as primarily focusing on offering educational games for kids and students.

The app can record audio from the microphone when the game is being played or when it is still running in the background on the phone.

Comment: Alphonso knows what you watched last summer

5th January 2018 See  article from

Technology startup Alphonso has caused widespread concern by using smartphone microphones to monitor the TV and media habits of games and app users.

The New York Times has published a story about a company called Alphonso that has developed a technology that uses smartphone microphones to identify TV and films being played in the background. Alphonso claims not to record any conversations, but simply listen to and encode samples of media for matching in their database. The company combines the collected data with identifiers and uses the data to target advertising, audience measurement and other purposes. The technology is embedded in over one thousand apps and games but the company refuses to disclose the exact list.
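Alphonso has not published its matching algorithm, but audio content recognition of this kind generally works by reducing short samples to compact fingerprints (for example, hashes built from prominent spectral peaks) and looking those up in a reference database of known programmes. A much-simplified sketch of the general approach, not of Alphonso's proprietary system:

```python
# Much-simplified sketch of audio content recognition: reduce a short audio
# window to a fingerprint built from its loudest frequency bins, then look
# that fingerprint up in a database of known TV/film audio.
import numpy as np

def fingerprint(samples, n_peaks=5):
    """Fingerprint one audio window as a tuple of its strongest frequency bins."""
    spectrum = np.abs(np.fft.rfft(samples))
    peaks = np.argsort(spectrum)[-n_peaks:]        # indices of loudest bins
    return tuple(sorted(int(p) for p in peaks))

def identify(samples, database):
    """Return the programme whose stored fingerprint matches, or None."""
    return database.get(fingerprint(samples))
```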

Alphonso argues that users have willingly given their consent to this form of spying on their media consumption and can opt out at any time. They argue that their behaviour is consistent with US laws and regulations.

Even if Alphonso were not breaking any laws here or in the US, there is a systemic problem with the growing intrusion of these types of technologies that monitor ambient sounds in private spaces without sufficient public debate. Apps are sneaking this kind of surveillance in, using privacy mechanisms that clearly cannot cope. This is despite the apps displaying a widget asking for permission to use the microphone to detect TV content, which would be a "clear affirmative action" for consent as required by law. Something is not working, and app platforms and regulators need to take action.

In addition to the unethical abuse of users' lack of initiative or ignorance - a bit like tobacco companies - there could be some specific breaches of privacy. The developers are clearly following the letter of the law in the US, obtaining consent and providing an opt out, but in Europe they could face more trouble, particularly after May when the General Data Protection Regulation (GDPR) comes into force.

One of the newer requirements on consent under GDPR will be to make it as easy to withdraw as it was to give it in the first place. Alphonso has a web-page with information on how to opt out through the privacy settings of devices, and this information is copied in at least some of the apps' privacy policies, buried under tons of legalese. This may not be good enough. Besides, once that consent is revoked, companies will need to erase any data obtained if there is no other legitimate justification to keep it. It is far from clear this is happening now, or will be in May.

There is also a need for complete clarity on who is collecting the data and being responsible for handling any consent and its revocation. At present the roles of app developers, Apple, Google and Alphonso are blurred.

We have been asked whether individuals can take legal action. We think that under the current regime in the UK this may be difficult because the bar is quite high and the companies involved are covering the basic ground. GDPR will make it easier to launch consumer complaints and legal action. The new law will also explicitly allow non-material damages, which is possible already in limited circumstances, including for revealing "political opinions, religion or philosophical beliefs". Alphonso is recording the equivalent of a reading list of audiovisual media and might be able to generate such information.

Many of these games are aimed at children. Under GDPR, all processing of children's data is seen as entailing a risk and will need extra care. Whether children are allowed to give consent or must get it from their parents/guardians will depend on their age. In all cases, information aimed at children will need to be displayed in a language they can understand. Some of the Alphonso games we checked have an age rating of 4+.

Consumer organisations have presented complaints in the past for similar issues in internet connected toys and we think that Alphonso and the developers involved should be investigated by the Information Commissioner.

