
Internet News


2018: May


 

Just the ticket...

ASA reports Viagogo to National Trading Standards and calls on search engines to block links and adverts to the company.


Link Here 31st May 2018
The Advertising Standards Authority (ASA) has requested that search engines Google and Bing remove some listings and ads for the ticketing site Viagogo, which controversially features several misleading sales ploys.

The ASA today judged the site to be misleading consumers by failing to be transparent about fees, wrongfully using the term 'official site' to suggest it was an authorised ticket agent, and falsely claiming it could 100% guarantee entry to events.

The ASA had previously warned Viagogo to amend such claims on its website and in its advertising content. However, ASA chief executive Guy Parker said the company failed to respond by the 29 May deadline.

The ASA has now referred the Geneva-based company to National Trading Standards (NTS). In addition, it issued requests to search engines Google and Bing to remove any links which would take a consumer through to a page containing non-compliant content.

NTS has since opened an investigation into Viagogo, which could see the company fined or its staff face legal action.

Meanwhile, digital minister Margot James has also urged consumers to boycott the company.

 

 

Updated: The Russian people send a Telegram to Putin...

Significant street protests in Moscow oppose Russian internet censorship attempts against Telegram


Link Here 30th May 2018
Full story: Internet Censorship in Russia...Russia and its repressive state control of media
A demonstration in Moscow against the Russian government's effort to block the messaging app Telegram quickly morphed on Monday into a protest against President Vladimir Putin, with thousands of participants chanting against the Kremlin's increasingly restrictive censorship regime.

The key demand of the rally, with the hashtag #DigitalResistance, was that the Russian internet remain free from government censorship.

One speaker, Sergei Smirnov, editor in chief of the online news service Mediazona, asked the crowd: Is he to blame for blocking Telegram? The crowd responded with a resounding Yes!

Telegram is just the first step, Smirnov continued. If they block Telegram, it will be worse later. They will block everything. They want to block our future and the future of our children.

Russian authorities blocked Telegram after not being provided with decryption keys. The censors also briefly blocked thousands of other websites sharing hosting facilities with Telegram in the hope of pressurising the hosts into taking down Telegram.

The censorship effort has provoked anger and frustration far beyond the habitual supporters of the political opposition, especially in the business sector, where the collateral damage continues to hurt the bottom line. There has been a flood of complaints on Twitter and elsewhere that the government broke the internet.

Update: Bad for business

23rd May  2018. See  article from meduza.io

Russia's Internet commissioner, Dmitry Marinichev, is calling on the Attorney General's Office to investigate the legality and validity of Roskomnadzor's actions against Telegram, arguing that the federal censor has caused undue harm to the country's business interests, by blocking millions of IP addresses in its campaign against the instant messenger, and disrupting hundreds of other online services.

Marinichev's suggestion is mentioned in the annual report submitted to Vladimir Putin by Russian Entrepreneurs' Rights Commissioner Boris Titov.

Update: Telegram not going down without a fight

26th May 2018. See  article from meduza.io

Alexander Zharov, the head of Russia's state internet censor, Roskomnadzor, has said that the government's decision to block the instant messenger Telegram is justified because federal agents have reliably established that all recent terrorist attacks in Russia and the near abroad were coordinated through Telegram.

Zharov also accused Telegram of using other online services as human shields by redirecting its traffic to their servers and forcing Roskomnadzor to disrupt a wide array of websites, when it cuts access to the new IP addresses Telegram adopts. Zharov claimed that Telegram's functionality has degraded by 15 to 30% in Russia, due to Roskomnadzor's blocking efforts.

Zharov added that the Federal Security Service has expressed similar concerns about the push-to-talk walkie-talkie app Zello, which Roskomnadzor banned in April 2017.

Update: Apple asked to block Telegram from its app store

30th May 2018. See  article from theverge.com

The secure messaging app Telegram was banned in Russia back in April, but so far, it's still available in the Russian version of Apple's App Store. Russia is now asking Apple to remove the app from the App Store. In a supposedly legally binding letter to Apple, authorities say they're giving the company one month to comply before they enforce punishment for violations.

Despite Russian censorship efforts so far, the majority of users in Russia are still accessing the app, the Kremlin's censorship arm Roskomnadzor announced yesterday. Only 15 to 30% of Telegram's operations have been disrupted so far.

Russian internet censors also say they are in talks with Google to ban the app from Google Play.

 

 

Offsite Article: WhoIs Europe to defy the US...


Link Here 30th May 2018
Full story: EU GDPR law...Far reaching privacy protection law
US internet authority sues EU domain registrar for breaking its contract to publish personal details on WhoIs. But GDPR makes it illegal to publish such details.

See article from theregister.co.uk

 

 

Commented: Spotify recommends...

R Kelly. Banned from algorithmic playlist suggestions after accusations of a bad attitude to women


Link Here 29th May 2018

Beginning on May 10, Spotify users will no longer be able to find R. Kelly's music on any of the streaming service's editorial or algorithmic playlists. Under the terms of a new public hate content and hateful conduct policy Spotify is putting into effect, the company will no longer promote the R&B singer's music in any way, removing his songs from flagship playlists like RapCaviar, Discover Weekly or New Music Friday, for example, as well as its other genre- or mood-based playlists.

"We are removing R. Kelly's music from all Spotify owned and operated playlists and algorithmic recommendations such as Discover Weekly," Spotify told Billboard in a statement. "His music will still be available on the service, but Spotify will not actively promote it. We don't censor content because of an artist's or creator's behavior, but we want our editorial decisions -- what we choose to program -- to reflect our values. When an artist or creator does something that is especially harmful or hateful, it may affect the ways we work with or support that artist or creator."

Over the past several years, Kelly has been accused by multiple women of sexual violence, coercion and running a "sex cult," including two additional women who came forward to Buzzfeed this week. Though he has never been convicted of a crime, he has come under increasing scrutiny over the past several weeks, particularly with the launch of the #MuteRKelly movement at the end of April. Kelly has vociferously defended himself, saying the accusations are an "attempt to distort my character and to destroy my legacy." And while RCA Records has thus far not dropped Kelly from his recording contract, Spotify has distanced itself from promoting his music.

Update: #MuteRKelly: now it's #MeToo vs music

20th May 2018. See  article from spiked-online.com by Fraser Myers

Throwing alleged sex pests off Spotify playlists is a mockery of justice.

Update: Backing off a little from moral policing

29th May 2018. See  article from theverge.com

Earlier this month, Swedish streaming giant Spotify announced that it would be introducing a policy on Hate Content and Hateful Conduct. The company left the policy intentionally vague, which allowed Spotify to remove artists from its playlists at will. When we are alerted to content that violates our policy, we may remove it (in consultation with rights holders) or refrain from promoting or playlisting it on our service, the company's PR team wrote in a statement at the time. They added that R. Kelly -- who, over the course of his career, has been repeatedly accused of sexual misconduct -- would be among those affected.

Now, following a backlash from artists and label executives, Bloomberg reports that Spotify has decided to back off the policy a little. That means restoring the rapper XXXTentacion's music to its playlists, even though he has been charged with battering a pregnant woman.

Part of the blowback has to do with the broad scope of the company's content policy, which seemed to leave the door open to policing artists' personal lives and conduct. The policy read: We've also thought long and hard about how to handle content that is not hate content itself, but is principally made by artists or other creators who have demonstrated hateful conduct personally. So, in some circumstances, when an artist or creator does something that is especially harmful or hateful (for example, violence against children and sexual violence), it may affect the ways we work with or support that artist or creator.

Spotify says R Kelly will remain banned from its playlists.

 

 

Healthy scepticism...

Pandora Blake suggests that there have been about 750 responses to the BBFC's consultation on age verification requirements for porn sites


Link Here 28th May 2018

Age verification has been hanging over us for several years now - and has now been put back to the end of 2018 after enforcement was originally planned to start last month.

I'm enormously encouraged by how many people took the opportunity to speak up and reply to the BBFC consultation on the new regulations.

Over 500 people submitted a response using the tool provided by the Open Rights Group, emphasising the need for age verification tech to be held to robust privacy and security standards.

I'm told that around 750 consultation responses were received by the BBFC overall, which means that a significant majority highlighted the regulatory gap between the powers of the BBFC to regulate adult websites, and the powers of the Information Commissioner to enforce data protection rules.

 

 

Offsite Article: UK push for porn passes raises privacy and data concerns...


Link Here 28th May 2018
The age verification requirement has raised fears about privacy, and concerns that independent providers will suffer disproportionately.

See article from wikitribune.com

 

 

Everything is offensive to somebody these days, so free speech has been cancelled...

Judge decides that free speech is no defence for an offensive message and so holocaust denial is now a criminal offence


Link Here 27th May 2018
Full story: Insulting UK Law...UK prosecutions of jokes and insults on social media
A woman has been convicted for performing offensive songs that included lyrics denying the Holocaust.

Alison Chabloz sang her compositions at a meeting of the far-right London Forum group.

A judge at Westminster Magistrates' Court found Chabloz had violated laws criminalising offence and intended to insult Jewish people.

District judge John Zani delayed her sentencing until 14 June but told the court: On the face of it this does pass the custody threshold.

Chabloz, a Swiss-British dual national, had uploaded tunes to YouTube including one describing the Nazi death camp Auschwitz as a theme park just for fools and the gas chambers as a proven hoax. The songs remain available on YouTube.

The songs were partly set to traditional Jewish folk music, with lyrics like: Did the Holocaust ever happen? Was it just a bunch of lies? Seems that some intend to pull the wool over our eyes.

Adrian Davies, defending, previously told the judge his ruling would be a landmark one, setting a precedent on the exercise of free speech.

But Judge Zani said Chabloz failed by some considerable margin to persuade the court that her right to freedom of speech should provide her with immunity from prosecution. He said:

I am entirely satisfied that she will have intended to insult those to whom the material relates. Having carefully considered all evidence received and submissions made, I am entirely satisfied that the prosecution has proved beyond reasonable doubt that the defendant is guilty.

Chabloz was convicted of two counts of causing an offensive, indecent or menacing message to be sent over a public communications network after performing two songs at a London Forum event in 2016. As there was nothing indecent or menacing in the songs, Chabloz was in effect convicted for an offensive message.

See The Britisher for an eloquent and passionate defence of free speech.

 

 

Pornhub blows a raspberry at the BBFC...

And introduces a free VPN to short circuit UK porn censorship


Link Here 25th May 2018
Pornhub, the dominant force amongst the world's porn websites, has sent a challenge to the BBFC's porn censorship regime by offering a free workaround to any porn viewer who would prefer to hide their tracks rather than open themselves up to the dangers of offering up their personal ID to age verifiers.

And rather bizarrely, Pornhub are one of the companies offering age verification services to porn sites that want to comply with UK age verification requirements.

Pornhub describes its VPN service with references to UK censorship:

Browse all websites anonymously and without restrictions.

VPNhub helps you bypass censorship while providing secure and private access to the Internet. Access all of your favorite websites without fear of being monitored.

Hide your information and surf the Internet without a trace.

Enjoy the pleasure of protection with VPNhub. With full data encryption and guaranteed anonymity, go with the most trusted VPN to protect your privacy anywhere in the world.

Free and Unlimited

Enjoy totally free and unlimited bandwidth on your device of choice.

 

 

Commented: Government bullies take on the internet...

New laws to make sure that the UK is the most censored place in the western world to be online


Link Here 25th May 2018
Culture Secretary Matt Hancock has issued the following press release from the Department for Digital, Culture, Media & Sport:

New laws to make social media safer

New laws will be created to make sure that the UK is the safest place in the world to be online, Digital Secretary Matt Hancock has announced.

The move is part of a series of measures included in the government's response to the Internet Safety Strategy green paper, published today.

The Government has been clear that much more needs to be done to tackle the full range of online harm.

Our consultation revealed users feel powerless to address safety issues online and that technology companies operate without sufficient oversight or transparency. Six in ten people said they had witnessed inappropriate or harmful content online.

The Government is already working with social media companies to protect users and while several of the tech giants have taken important and positive steps, the performance of the industry overall has been mixed.

The UK Government will therefore take the lead, working collaboratively with tech companies, children's charities and other stakeholders to develop the detail of the new legislation.

Matt Hancock, DCMS Secretary of State said:

Digital technology is overwhelmingly a force for good across the world and we must always champion innovation and change for the better. At the same time I have been clear that we have to address the Wild West elements of the Internet through legislation, in a way that supports innovation. We strongly support technology companies to start up and grow, and we want to work with them to keep our citizens safe.

People increasingly live their lives through online platforms so it's more important than ever that people are safe and parents can have confidence they can keep their children from harm. The measures we're taking forward today will help make sure children are protected online and balance the need for safety with the great freedoms the internet brings just as we have to strike this balance offline.

DCMS and Home Office will jointly work on a White Paper with other government departments, to be published later this year. This will set out legislation to be brought forward that tackles a range of both legal and illegal harms, from cyberbullying to online child sexual exploitation. The Government will continue to collaborate closely with industry on this work, to ensure it builds on progress already made.

Home Secretary Sajid Javid said:

Criminals are using the internet to further their exploitation and abuse of children, while terrorists are abusing these platforms to recruit people and incite atrocities. We need to protect our communities from these heinous crimes and vile propaganda and that is why this Government has been taking the lead on this issue.

But more needs to be done and this is why we will continue to work with the companies and the public to do everything we can to stop the misuse of these platforms. Only by working together can we defeat those who seek to do us harm.

The Government will be considering where legislation will have the strongest impact, for example whether transparency or a code of practice should be underwritten by legislation, but also a range of other options to address both legal and illegal harms.

We will work closely with industry to provide clarity on the roles and responsibilities of companies that operate online in the UK to keep users safe.

The Government will also work with regulators, platforms and advertising companies to ensure that the principles that govern advertising in traditional media -- such as preventing companies targeting unsuitable advertisements at children -- also apply and are enforced online.

Update: Fit of pique

21st May 2018. See article from bbc.com

It seems that the latest call for internet censorship is driven by some sort of revenge for having been snubbed by the industry.

The culture secretary said he does not have enough power to police social media firms after admitting that only four of the 14 companies invited to talks showed up.

Matt Hancock told the BBC it had given him a big impetus to introduce new laws to tackle what he has called the internet's Wild West culture.

He said self-policing had not worked and legislation was needed.

He told BBC One's Andrew Marr Show, presented by Emma Barnett, that the government just don't know how many of the millions using social media were children not old enough for an account, and that he was very worried about age verification. He told the programme he hopes we get to a position where all social media users have to have their age verified.

Two government departments are working on a White Paper expected to be brought forward later this year. Asked about the same issue on ITV's Peston on Sunday, Hancock said the government would be legislating in the next couple of years because we want to get the details right.

Update: Internet safety just means internet censorship

25th May 2018. See  article from spiked-online.com by Fraser Myers

Officials want to clean up the web. Bad news for free speech.

 

 

Lobbying for the UK to be the safest place in the world for big media company profits...

Music industry is quick to lobby for Hancock's safe internet plans to be hijacked for their benefit


Link Here 24th May 2018
This week, Matt Hancock, Secretary of State for Digital, Culture, Media and Sport, announced the launch of a consultation on new legislative measures to clean up the Wild West elements of the Internet. In response, music group BPI says the government should use the opportunity to tackle piracy with advanced site-blocking measures, repeat infringer policies, and new responsibilities for service providers.

This week, the Government published its response to the Internet Safety Strategy green paper, stating unequivocally that more needs to be done to tackle online harm. As a result, the Government will now carry through with its threat to introduce new legislation, albeit with the assistance of technology companies, children's charities and other stakeholders.

While emphasis is being placed on hot-button topics such as cyberbullying and online child exploitation, the Government is clear that it wishes to tackle the full range of online harms. That has been greeted by UK music group BPI with a request that the Government introduces new measures to tackle Internet piracy.

In a statement issued this week, BPI chief executive Geoff Taylor welcomed the move towards legislative change and urged the Government to ensure its scope encompasses the music industry and beyond. He said:

This is a vital opportunity to protect consumers and boost the UK's music and creative industries. The BPI has long pressed for internet intermediaries and online platforms to take responsibility for the content that they promote to users.

Government should now take the power in legislation to require online giants to take effective, proactive measures to clean illegal content from their sites and services. This will keep fans away from dodgy sites full of harmful content and prevent criminals from undermining creative businesses that create UK jobs.

The BPI has published four initial requests, each of which provides food for thought.

The demand to establish a new fast-track process for blocking illegal sites is not entirely unexpected, particularly given the expense of launching applications for blocking injunctions at the High Court.

The BPI has taken a large number of actions against individual websites -- 63 injunctions are in place against sites that are wholly or mainly infringing and whose business is simply to profit from criminal activity, the BPI says.

Those injunctions can be expanded fairly easily to include new sites operating under similar banners or facilitating access to those already covered, but it's clear the BPI would like something more streamlined. Voluntary schemes, such as the one in place in Portugal, could be an option but it's unclear how troublesome that could be for ISPs. New legislation could solve that dilemma, however.

Another big thorn in the side for groups like the BPI are people and entities that post infringing content. The BPI is very good at taking these listings down from sites and search engines in particular (more than 600 million requests to date) but it's a game of whac-a-mole the group would rather not engage in.

With that in mind, the BPI would like the Government to impose new rules that would compel online platforms to stop content from being re-posted after it's been taken down while removing the accounts of repeat infringers.

Thirdly, the BPI would like the Government to introduce penalties for online operators who do not provide transparent contact and ownership information. The music group isn't any more specific than that, but the suggestion is that operators of some sites have a tendency to hide in the shadows, something which frustrates enforcement activity.

Finally, and perhaps most interestingly, the BPI is calling on the Government to legislate for a new duty of care for online intermediaries and platforms. Specifically, the BPI wants effective action taken against businesses that use the Internet to encourage consumers to access content illegally.

While this could easily encompass pirate sites and services themselves, this proposal has the breadth to include a wide range of offenders, from people posting piracy-focused tutorials on monetized YouTube channels to those selling fully-loaded Kodi devices on eBay or social media.

Overall, the BPI clearly wants to place pressure on intermediaries to take action against piracy when they're in a position to do so, and particularly those who may not have shown much enthusiasm towards industry collaboration in the past.

Legislation in this Bill, to take powers to intervene with respect to operators that do not co-operate, would bring focus to the roundtable process and ensure that intermediaries take their responsibilities seriously, the BPI says.

 

 

Offsite Article: Google sued for secretly tracking millions of UK iPhone users...


Link Here 23rd May 2018
Full story: Google Privacy...Google's many run-ins with privacy
Google accused of bypassing default browser Safari's privacy settings to collect a broad range of data and deliver targeted advertising.

See article from alphr.com

 

 

Preventing corporate giants from being able to stitch up the internet...

US House Democrats move to try to restore net neutrality in the US.


Link Here 22nd May 2018
Full story: Net Neutrality in USA...US internet censors at FCC seem intent on letting big business take control
Democrats in the United States House of Representatives have gathered 90 of the 218 signatures they'll need to force a vote on overturning the FCC's repeal of net neutrality rules, while Federal Communications Commission Chair Ajit Pai has already predicted that the House effort will fail and large telecommunications companies have publicly expressed their anger at last Wednesday's Senate vote to keep the Obama-era open internet rules in place.

Led by Pai, a Donald Trump appointee, the FCC voted 3-2 along party lines in December to scrap the net neutrality regulations, effectively creating an internet landscape dominated by whichever companies can pay the most to get into the online fast lane.

Telecommunications companies could also choose to block some sites simply based on their content, a threat to which the online porn industry would be especially vulnerable, given that five states have either passed or are considering legislation labeling porn a public health hazard.

While the House Republican leadership has taken the position that the net neutrality issue should not even come to a vote, on May 17 Pennsylvania Democrat Mike Doyle introduced a discharge petition that would force the issue to the House floor. A discharge petition needs 218 signatures of House members to succeed in forcing the vote. As of Monday morning, May 21, Doyle's petition had received 90 signatures. The effort would need all 193 House Democrats plus 25 Republicans to sign on, in order to bring the net neutrality rollback to the House floor.

 

 

Google gushes over its AI based news app that counters the filter bubble...

But all they've done is ban the Daily Mail and then force-feed you biased and bland news from politically correct papers such as the Guardian


Link Here 21st May 2018
Full story: Google Censorship...Google censors adult material froms its websites
For its updated news application, Google is claiming it is using artificial intelligence as part of an effort to weed out disinformation and feed users with viewpoints beyond their own filter bubble.

Google chief Sundar Pichai, who unveiled the updated Google News earlier this month, said the app now surfaces the news you care about from trusted sources while still giving you a full range of perspectives on events. It marks Google's latest effort to be at the centre of online news and includes a new push to help publishers get paid subscribers through the tech giant's platform.

In reality Google has just banned news from the likes of the Daily Mail, whilst the 'trusted sources' are just politically correct papers such as the Guardian and Independent.

According to product chief Trystan Upstill, the news app uses the best of artificial intelligence to find the best of human intelligence - the great reporting done by journalists around the globe. While the app will enable users to get personalised news, it will also include top stories for all readers, aiming to break the so-called filter bubble of information designed to reinforce people's biases.

Nicholas Diakopoulos, a Northwestern University professor specialising in computational and data journalism, said the impact of Google's changes remains to be seen. Diakopoulos said algorithmic and personalised news can be positive for engagement but may only benefit a handful of news organisations. His research found that Google concentrates its attention on a relatively small number of publishers: it's quite concentrated. Google's effort to identify and prioritise trusted news sources may also be problematic, according to Diakopoulos. Maybe it's good for the big guys, or the (publishers) who have figured out how to game the algorithm, he said. But what about the local news sites, what about the new news sites that don't have a long track record?

I tried it out and no matter how many times I asked it not to provide stories about the royal wedding and the cup final, it just served up more of the same. And indeed as Diakopoulos said, all it wants to do is push news stories from the politically correct papers, most notably the Guardian. I can't see it proving very popular. I'd rather have an app that feeds me what I actually like, not what I should like.

 

 

New Zealand's chief censor recommends...

13 Reasons Why, Season 2


Link Here 17th May 2018

New Zealand's Chief Censor David Shanks has warned parents and caregivers of vulnerable children and teenagers to be prepared for Netflix's Season 2 of 13 Reasons Why, scheduled to screen this week on Friday, May 18, at 7pm.

The Office of Film and Literature Classification consulted with the Mental Health Foundation in classifying 13 Reasons Why: Season 2 as RP18 with a warning that it contains rape, suicide themes, drug use, and bullying. Shanks said:

"There is a strong focus on rape and suicide in Season 2 , as there was in Season 1 . We have told Netflix it is really important to warn NZ audiences about that."

"Rape is an ugly word for an ugly act. But young New Zealanders have told us that if a series contains rape -- they want to know beforehand."

An RP18 classification means that someone under 18 must be supervised by a parent or guardian when viewing the series. A guardian is considered to be a responsible adult (18 years and over), for example a family member or teacher who can provide guidance. Shanks said:

"This classification allows young people to access it in a similar fashion to the first season, while requiring the support from an adult they need to stay safe and to process the challenging topics in the series."

Netflix is required to clearly display the classification and warning.

"If a child you care for is planning to watch the show, you should sit down and watch it with them -- if not together then at least around the same time. That way you can at least try to have informed and constructive discussions with them about the content."

...

"The current picture about what our kids can be exposed to online is grim. We need to get that message across to parents that they need to help young people with this sort of content."

For parents and caregivers who don't have time to watch the entire series, the Classification Office and Mental Health Foundation have produced an episode-by-episode guide with synopses of problematic content, and conversation starters to have with teens. This will be available on both organisations' websites from 7pm on Friday night.

 

 

Offsite Article: Outed by Facebook...


Link Here 17th May 2018
Full story: Facebook Privacy...Facebook criticised for discouraging privacy
Facebook lets advertisers target users based on sensitive inferred interests such as Islam or homosexuality

See article from theguardian.com

 

 

Conservatives against social media...

Christian campaigners lead conservative fight back against the left wing bias of social media. As if the religious right are innocent of calling for censorship at every opportunity


Link Here 16th May 2018

In response to the continued restriction and censorship of conservatives and their organizations by tech giants Facebook, Twitter, Google and YouTube, the Media Research Center (MRC) along with 18 leading conservative organizations announced Tuesday, May 15, 2018 the formation of a new, permanent coalition, Conservatives Against Online Censorship.

Conservatives Against Online Censorship will draw attention to the issue of political censorship on social media. This new coalition will urge Facebook, Twitter, Google and YouTube to address the four following key areas of concern:

  • Provide Transparency: We need detailed information so everyone can see if liberal groups and users are being treated the same as those on the right. Social media companies operate in a black-box environment, only releasing anecdotes about reports on content and users when they think it necessary. This needs to change. The companies need to design open systems so that they can be held accountable, while giving weight to privacy concerns.

  • Provide Clarity on 'Hate Speech': "Hate speech" is a common concern among social media companies, but no two firms define it the same way. Their definitions are vague and open to interpretation, and their interpretation often looks like an opportunity to silence thought. Today, hate speech means anything liberals don't like. Silencing those you disagree with is dangerous. If companies can't tell users clearly what it is, then they shouldn't try to regulate it.

  • Provide Equal Footing for Conservatives: Top social media firms, such as Google and YouTube, have chosen to work with dishonest groups that are actively opposed to the conservative movement, including the Southern Poverty Law Center. Those companies need to make equal room for conservative groups as advisers to offset this bias. That same attitude should be applied to employment diversity efforts. Tech companies need to embrace viewpoint diversity.

  • Mirror the First Amendment: Tech giants should afford their users nothing less than the free speech and free exercise of religion embodied in the First Amendment as interpreted by the U.S. Supreme Court. That standard, the result of centuries of American jurisprudence, would enable the rightful blocking of content that threatens violence or spews obscenity, without trampling on free speech liberties that have long made the United States a beacon for freedom.

"Social media is the most expansive and most game-changing form of communication today. It is these facts that make online political censorship one of the largest threats to free speech we have ever seen. Conservatives should be given the same ability to express their political ideas online as liberals, without the fear of being suppressed or censored," said Media Research Center President Brent Bozell.

"Meaningful debate only happens when both sides are given equal footing. Freedom of speech, regardless of ideological leaning, is something Americans hold dear. Facebook, Twitter and all other social media companies must acknowledge this and work to rectify these concerns unless they want to lose all credibility with the conservative movement. As leaders of this effort, we are launching this coalition to make sure that the recommendations we put forward on behalf of the conservative movement are followed through."

The Media Research Center sent letters to representatives at Facebook, Twitter, Google and YouTube last week asking each company to address these complaints and begin a conversation about how they can repair their credibility within the conservative movement. As of Tuesday, May 15, 2018, only Facebook has issued a formal response.

 

 

Social media against conservatives...

Twitter steps up the censorship, no doubt conservatives will bear the brunt of it


Link Here 16th May 2018
Full story: Twitter Censorship...Twitter offers country by country take downs
Twitter has outlined further censorship measures in a blog post:

In March, we introduced our new approach to improve the health of the public conversation on Twitter. One important issue we've been working to address is what some might refer to as "trolls." Some troll-like behavior is fun, good and humorous. What we're talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search. Some of these accounts and Tweets violate our policies, and, in those cases, we take action on them. Others don't but are behaving in ways that distort the conversation.

To put this in context, less than 1% of accounts make up the majority of accounts reported for abuse, but a lot of what's reported does not violate our rules. While still a small overall number, these accounts have a disproportionately large -- and negative -- impact on people's experience on Twitter. The challenge for us has been: how can we proactively address these disruptive behaviors that do not violate our policies but negatively impact the health of the conversation?

A New Approach

Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we're tackling issues of behaviors that distort and detract from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we're able to improve the health of the conversation, and everyone's experience on Twitter, without waiting for people who use Twitter to report potential issues to us.

There are many new signals we're taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly Tweet and mention accounts that don't follow them, or behavior that might indicate a coordinated attack. We're also looking at how accounts are connected to those that violate our rules and how they interact with each other.

These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn't violate our policies, it will remain on Twitter, and will be available if you click on "Show more replies" or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.

Results

In our early testing in markets around the world, we've already seen this new approach have a positive impact, resulting in a 4% drop in abuse reports from search and 8% fewer abuse reports from conversations. That means fewer people are seeing Tweets that disrupt their experience on Twitter.

Our work is far from done. This is only one part of our work to improve the health of the conversation and to make everyone's Twitter experience better. This technology and our team will learn over time and will make mistakes. There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter. We'll continue to be open and honest about the mistakes we make and the progress we are making. We're encouraged by the results we've seen so far, but also recognize that this is just one step on a much longer journey to improve the overall health of our service and your experience on it.
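Purely as an illustration of the approach Twitter describes above (and emphatically not Twitter's actual code: the signal names, weights and thresholds below are all invented for the example), behavioural signals of this kind could be combined into a simple visibility score used to rank replies lower rather than remove them:

```python
# Illustrative sketch only -- NOT Twitter's real system. It shows how behavioural
# signals of the kind described above (unconfirmed email, bulk sign-ups, mentions
# of accounts that don't follow back) might feed a visibility score. Content is
# never removed, only ranked lower, mirroring the "Show more replies" behaviour.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    email_confirmed: bool
    simultaneous_signups: int        # accounts created together from one source (assumed signal)
    unsolicited_mention_rate: float  # share of mentions aimed at non-followers, 0..1
    linked_to_violating_accounts: bool

def visibility_score(s: AccountSignals) -> float:
    """Return a score in [0, 1]; higher means more visible. Weights are invented."""
    score = 1.0
    if not s.email_confirmed:
        score -= 0.2
    if s.simultaneous_signups > 3:
        score -= 0.3
    score -= 0.4 * s.unsolicited_mention_rate
    if s.linked_to_violating_accounts:
        score -= 0.3
    return max(score, 0.0)

def rank_replies(replies: list) -> list:
    """Sort replies so low-scoring ones fall behind a 'Show more replies' fold."""
    return sorted(replies, key=lambda r: visibility_score(r["signals"]), reverse=True)

if __name__ == "__main__":
    ordinary = {"text": "Nice thread", "signals": AccountSignals(True, 1, 0.1, False)}
    troll = {"text": "spam spam", "signals": AccountSignals(False, 5, 0.9, True)}
    print([r["text"] for r in rank_replies([troll, ordinary])])  # ordinary reply first
```

The point of a design like this, as the post itself says, is that nothing is deleted: low-scoring content simply becomes less prominent, which is exactly what makes the policy hard to scrutinise from the outside.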

 

 

Social media against conservatives...

Facebook details its censorship enforcement, no doubt conservatives bear the brunt of it


Link Here 16th May 2018
Full story: Facebook Censorship...Facebook quick to censor

We're often asked how we decide what's allowed on Facebook -- and how much bad stuff is out there. For years, we've had Community Standards that explain what stays up and what comes down. Three weeks ago, for the first time, we published the internal guidelines we use to enforce those standards. And today we're releasing numbers in a Community Standards Enforcement Report so that you can judge our performance for yourself.

Alex Schultz, our Vice President of Data Analytics, explains in more detail how exactly we measure what's happening on Facebook in both this Hard Questions post and our guide to Understanding the Community Standards Enforcement Report . But it's important to stress that this is very much a work in progress and we will likely change our methodology as we learn more about what's important and what works.

This report covers our enforcement efforts between October 2017 to March 2018, and it covers six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts. The numbers show you:

  • How much content people saw that violates our standards;

  • How much content we removed; and

  • How much content we detected proactively using our technology -- before people who use Facebook reported it.

Most of the action we take to remove bad content is around spam and the fake accounts they use to distribute it. For example:

  • We took down 837 million pieces of spam in Q1 2018 -- nearly 100% of which we found and flagged before anyone reported it; and

  • The key to fighting spam is taking down the fake accounts that spread it. In Q1, we disabled about 583 million fake accounts -- most of which were disabled within minutes of registration. This is in addition to the millions of fake account attempts we prevent daily from ever registering with Facebook. Overall, we estimate that around 3 to 4% of the active Facebook accounts on the site during this time period were still fake.

In terms of other types of violating content:

  • We took down 21 million pieces of adult nudity and sexual activity in Q1 2018 -- 96% of which was found and flagged by our technology before it was reported. Overall, we estimate that out of every 10,000 pieces of content viewed on Facebook, 7 to 9 views were of content that violated our adult nudity and pornography standards.

  • For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 -- 86% of which was identified by our technology before it was reported to Facebook.

  • For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams. We removed 2.5 million pieces of hate speech in Q1 2018 -- 38% of which was flagged by our technology.

As Mark Zuckerberg said at F8 , we have a lot of work still to do to prevent abuse. It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important. For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue. And more generally, as I explained two weeks ago, technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported. In addition, in many areas -- whether it's spam, porn or fake accounts -- we're up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts. It's why we're investing heavily in more people and better technology to make Facebook safer for everyone.

It's also why we are publishing this information. We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too. This is the same data we use to measure our progress internally -- and you can now see it to judge our progress for yourselves. We look forward to your feedback.

 

 

Offsite Article: Censorship trying to hide itself behind a fig leaf...


Link Here 16th May 2018
Instagram deletes photographer Dragana Jurisic's account and Facebook censors her work

See article from theartnewspaper.com

 

 

Data abuse...

Facebook report that 200 apps have been suspended in the wake of the Cambridge Analytica data slurp


Link Here 15th May 2018
Full story: Facebook Privacy...Facebook criticised for discouraging privacy

Here is an update on the Facebook app investigation and audit that Mark Zuckerberg promised on March 21.

As Mark explained, Facebook will investigate all the apps that had access to large amounts of information before we changed our platform policies in 2014 -- significantly reducing the data apps could access. He also made clear that where we had concerns about individual apps we would audit them -- and any app that either refused or failed an audit would be banned from Facebook.

The investigation process is in full swing, and it has two phases. First, a comprehensive review to identify every app that had access to this amount of Facebook data. And second, where we have concerns, we will conduct interviews, make requests for information (RFI) -- which ask a series of detailed questions about the app and the data it has access to -- and perform audits that may include on-site inspections.

We have large teams of internal and external experts working hard to investigate these apps as quickly as possible. To date thousands of apps have been investigated and around 200 have been suspended -- pending a thorough investigation into whether they did in fact misuse any data. Where we find evidence that these or other apps did misuse data, we will ban them and notify people via this website. It will show people if they or their friends installed an app that misused data before 2015 -- just as we did for Cambridge Analytica.

There is a lot more work to be done to find all the apps that may have misused people's Facebook data -- and it will take time. We are investing heavily to make sure this investigation is as thorough and timely as possible. We will keep you updated on our progress.

 

 

Newsagents to sell 'porn passes'...

The press picks up on the age verification offering from AVSecure that offers anonymous porn browsing


Link Here 14th May 2018
Adults who want to watch online porn (or maybe buy adults-only products such as alcohol) will be able to buy codes from newsagents and supermarkets to prove that they are over 18 when online.

One option available to the estimated 25 million Britons who regularly visit such websites will be a 16-digit code, dubbed a 'porn pass'.

While porn viewers will still be able to verify their age using methods such as registering credit card details, the 16-digit code would be a fully anonymous option. According to AVSecure's website, the cards will be sold for £10 to anyone who looks over 18 without the need for any further identification. It doesn't say on the website, but presumably where there is doubt about a customer's age, they will have to show ID documents such as a passport or driving licence, though hopefully that ID will not have to be recorded anywhere.

It is hoped the method will be popular among those wishing to access porn online without having to hand over personal details to X-rated sites.

The user will type the 16-digit number into websites that belong to the AVSecure scheme. It should be popular with websites as it offers age verification to them for free (with the £10 card fee being the only source of income for the company). This is a much better proposition for websites than those offered by most, if not all, of the other age verification companies.

AVSecure also offer an encrypted implementation via blockchain that will not allow websites to use the 16-digit number as a key to track people's website browsing. That said, websites could still use a myriad of other standard technologies to track viewers.

The BBFC is assigned the task of deciding whether to accredit different technologies and it will be very interesting to see if it approves the AVSecure offering. It is easily the best solution to protect the safety and privacy of porn viewers, but it may well test the BBFC's pragmatism in accepting the most workable and safest solution for adults even though it is not quite fully guaranteed to protect children. Pragmatism is required as the scheme has the technical drawback of having no further checks in place once the card has been purchased. The obvious worry is that someone over 18 could go round several shops buying cards to pass on to their under-18 mates. Another possibility is that kids could stumble on their parents' card and get access. Numbers shared on the web could be easily blocked if used simultaneously from different IP addresses.
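Purely as an illustration of those last two points (and emphatically not AVSecure's actual design: the secret key, the token derivation and the thresholds below are all assumptions), a per-site derived token would stop a website using the raw 16-digit number to track a viewer across sites, while a simple issuer-side check could flag a card being used simultaneously from several IP addresses:

```python
# Hypothetical sketch only -- NOT AVSecure's published design. It illustrates:
# (1) a site never needs to see the raw 16-digit code, only a per-site derived
#     token, so the code cannot be used to correlate browsing across sites; and
# (2) a card used simultaneously from several IP addresses can be flagged.

import hashlib
import hmac
import time

ISSUER_SECRET = b"issuer-side-secret"  # assumed: held only by the AV provider

def per_site_token(card_number: str, site_domain: str) -> str:
    """Derive a site-specific token so two sites cannot link the same card."""
    msg = f"{card_number}:{site_domain}".encode()
    return hmac.new(ISSUER_SECRET, msg, hashlib.sha256).hexdigest()

recent_uses = {}  # card number -> list of (timestamp, ip) seen by the issuer

def check_card(card_number: str, ip: str, window: int = 300, max_ips: int = 2) -> bool:
    """Allow the card unless it was used from too many different IPs recently."""
    now = time.time()
    uses = [u for u in recent_uses.get(card_number, []) if now - u[0] < window]
    uses.append((now, ip))
    recent_uses[card_number] = uses
    return len({u[1] for u in uses}) <= max_ips

if __name__ == "__main__":
    card = "1234567890123456"
    # Different sites get unrelated tokens, so they cannot pool their logs.
    print(per_site_token(card, "site-a.example") != per_site_token(card, "site-b.example"))  # True
    # A card shared on the web and used from three networks inside five minutes is refused.
    print(check_card(card, "203.0.113.5"))   # True
    print(check_card(card, "198.51.100.7"))  # True
    print(check_card(card, "192.0.2.9"))     # False
```

Even a toy sketch like this shows that the privacy properties turn entirely on what the issuer stores and what the website is given, which is presumably what the BBFC's accreditation decision will hinge on.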

 

 

13 Reasons Why Not...

Calling for Netflix suicide themed series to be banned


Link Here 13th May 2018
Mental health campaigners have criticised the return of the Netflix drama 13 Reasons Why, expressing concern that the second series of the drama about a teenager's suicide is due for release as summer exam stress peaks. The story of 17-year-old Hannah Baker's life and death continues on Friday 18 May.

The Royal College of Psychiatrists described the timing as callous, noting that suicide rates among young people typically rise during exam season and warning that the Netflix drama could trigger a further increase. Dr Helen Rayner, of the Royal College of Psychiatrists, said:

I feel extremely disappointed and angry. This glamourises suicide and makes it seductive. It also makes it a possibility for young people -- it puts the thought in their mind that this is something that's possible. It's a bad programme that should not be out there, and it's the timing.

The US-based series was a big hit for Netflix despite -- or perhaps because of -- the controversy surrounding the suicide storyline. The first series of 13 episodes depicted Hannah's friends listening to tapes she had made for each of them explaining the difficulties she faced that had prompted her to kill herself.

Supporters of the first series said it was an accurate portrayal of high school life that would spark conversations between parents and their children and encourage viewers to seek information on depression, suicide, bullying and sexual assault.

 

 

The Secure Data Act...

US lawmakers propose a law to prevent the state from demanding back door access to IT products and communications


Link Here 11th May 2018
Full story: Encryption in the UK...Cameron demands a back door to encrypted data
US lawmakers from both political parties have come together to reintroduce a bill that, if passed, would prohibit the US government from forcing tech product makers to undermine users' safety and security with back door access.

The bill, known as the Secure Data Act of 2018, was returned to the US House of Representatives by Representatives Zoe Lofgren and Thomas Massie.

The Secure Data Act forbids any government agency from demanding that a manufacturer, developer, or seller of covered products design or alter the security functions in its product or service to allow the surveillance of any user of such product or service, or to allow the physical search of such product, by any agency. It also prohibits courts from issuing orders to compel access to data.

Covered products include computer hardware, software, or electronic devices made available to the public. The bill makes an exception for telecom companies, which under the 1994 Communications Assistance for Law Enforcement Act (CALEA) would still have to help law enforcement agencies access their communication networks.

 

 

The government is acting negligently on privacy and porn AV...

Top of our concerns was the lack of privacy safeguards to protect the 20 million plus users who will be obliged to use Age Verification tools to access legal content.


Link Here 8th May 2018

We asked the BBFC to tell government that the legislation is not fit for purpose, and that they should halt the scheme until privacy regulation is in place. We pointed out that card payments and email services are both subject to stronger privacy protections than Age Verification.

The government's case for non-action is that the Information Commissioner and data protection fines for data breaches are enough to deal with the risk. This is wrong: firstly because fines cannot address the harm created by the leaking of people's sexual habits. Secondly, it is wrong because data breaches are only one aspect of the risks involved.

We outlined over twenty risks from Age Verification technologies. We pointed out that Age Verification contains a set of overlapping problems. You can read our list below. We may have missed some: if so, do let us know.

The government has to act. It has legislated this requirement without properly evaluating the privacy impacts. If and when it goes wrong, the blame will lie squarely at the government's door.

The consultation fails to properly distinguish between the different functions and stages of an age verification system. The risks associated with each are separate but interact. Regulation needs to address all elements of these systems. For instance:

  • Choosing a method of age verification, whereby a user determines how they wish to prove their age.

  • The method of age verification, where documents may be examined and stored.

  • The tool's approach to returning users, which may involve either:

    • attaching the user's age verification status to a user account or log-in credentials; or

    • providing a means for the user to re-attest their age on future occasions.

  • The re-use of any age verified account, log-in or method over time, and across services and sites.

The focus of attention has been on the method of pornography-related age verification, but this is only one element of privacy risk we can identify when considering the system as a whole. Many of the risks stem from the fact that users may be permanently 'logged in' to websites, for instance. New risks of fraud, abuse of accounts and other unwanted social behaviours can also be identified. These risks apply to 20-25 million adults, as well as to teenagers attempting to bypass the restrictions. There is a great deal that could potentially go wrong.

Business models, user behaviours and potential criminal threats need to be taken into consideration. Risks therefore include:

Identity risks

  • Collecting identity documents in a way that allows them to potentially be correlated with the pornographic content viewed by a user represents a serious potential risk to personal and potentially highly sensitive data.

Risks from logging of porn viewing

  • A log-in from an age-verified user may persist on a user's device or web browser, creating a history of views associated with an IP address, location or device, thus easily linked to a person, even if stored 'pseudonymously'.

  • An age verified log-in system may track users across websites and be able to correlate tastes and interests of a user visiting sites from many different providers.

  • Data from logged-in web visits may be used to profile the sexual preferences of users for advertising. Tool providers may encourage users to opt in to such a service with the promise of incentives such as discounted or free content.

  • The current business model for large porn operations is heavily focused on monetising users through advertising, exacerbating the risks of re-use and recirculation and re-identification of web visit data.

  • Any data that is leaked cannot be revoked, recalled or adequately compensated for, leading to reputational, career and even suicide risks.

Everyday privacy risks for adults

  • The risk of pornographic web accounts and associated histories being accessed by partners, parents, teenagers and other third parties will increase.

  • Companies will trade off security for ease-of-use, so may be reluctant to enforce strong passwords, two-factor authentication and other measures which make it harder for credentials to leak or be shared.

  • Everyday privacy tools used by millions of UK residents such as 'private browsing' modes may become more difficult to use due to the need to retain log-in cookies, increasing the data footprint of people's sexual habits.

  • Some users will turn to alternative methods of accessing sites, such as using VPNs. These tools have their own privacy risks, especially when hosted outside of the EU, or when provided for free.

Risks to teenagers' privacy

  • If age-verified log-in details are acquired by teenagers, personal and sexual information about them may become shared including among their peers, such as particular videos viewed. This could lead to bullying, outing or worse.

  • Child abusers can use access to age verified accounts as leverage to create and exploit a relationship with a teenager ('grooming').

  • Other methods of obtaining pornography would be incentivised, and these may carry new and separate privacy risks. For instance the BitTorrent network exposes the IP addresses of users publicly. These addresses can then be captured by services like GoldenEye, whose business model depends on issuing legal threats to those found downloading copyrighted material. This could lead to the pornographic content downloaded by young adults or teenagers being exposed to parents or carers. While copyright infringement is bad, removing teenagers' sexual privacy is worse. Other risks include viruses and scams.

Trust in age verification tools and potential scams

  • Users may be obliged to sign up to services they do not trust or are unfamiliar with in order to access specific websites.

  • Pornographic website users are often impulsive, with lower risk thresholds than for other transactions. The sensitivity of any transactions involved gives them a lower propensity to report fraud. Pornography users are therefore particularly vulnerable targets for scammers.

  • The use of credit cards for age verification in other markets creates an opportunity for fraudulent sites to engage in credit card theft.

  • Use of credit cards for pornography-related age verification risks teaching people that this is normal and reasonable, opening up new opportunities for fraud, and going against years of education asking people not to hand card details to unknown vendors.

  • There is no simple means to verify which particular age verification systems are trustworthy, and which may be scams.

Market related privacy risks

  • The rush to market means that the tools that emerge may be of variable quality and take unnecessary shortcuts.

  • A single pornography-related age verification system may come to dominate the market and become the de-facto provider, leaving users no real choice but to accept whatever terms that provider offers.

  • One age verification product which is expected to lead the market -- AgeID -- is owned by MindGeek, the dominant pornography company online. Allowing pornographic sites to own and operate age verification tools creates a conflict of interest between the privacy interests of the user and the data-mining and market interests of the company.

  • The online pornography industry as a whole, including MindGeek, has a poor record of privacy and security, littered with data breaches. Without stringent regulation prohibiting the storage of data which might allow users' identity and browsing to be correlated, there is no reason to assume that data generated as a result of age verification tools will be exempt from this pattern of poor security.

 

 

Updated: Courting Discord...

Iranian courts ban the Telegram app and even the government opposes the move


Link Here7th May 2018
Full story: Iranian Internet Censorship...Extensive internet blocking
Monday's ban on the popular encrypted Telegram messaging app by Iran's powerful judiciary has not been well received.

Telegram serves many Iranians as a kind of combination of Facebook and Whatsapp, allowing people inside the country to chat securely and to disseminate information to large audiences abroad. Until the court ban, the application was widely used by Iranian state media, politicians, companies and ordinary Iranians for business, pleasure and political organizing. Telegram is believed to have some 20 million users in Iran out of a total population of 80 million.

The judiciary's Culture and Media Court banned the app, citing among its reasons its use by international terrorist groups and anti-government protesters, and the company's refusal to cooperate with Iran's Ministry of Information and Communications Technology to provide decryption keys.

The move came after extensive public debate in Iran, some conducted via the messaging service itself, about the limits of free expression, government authority and access to information in the Islamic Republic.

President Hassan Rouhani and other prominent reformers, who advocate increased freedom while retaining Iran's current Islamic system of government, argued against the proposed ban, saying that it would make society anxious.

Similarly, in the wake of the judiciary's announcement that the application would be blocked, Information and Communications Technology Minister Muhammad-Javad Azari Jahromi criticized the move on Twitter. Citizens' access to information sources is unstoppable, he wrote the day after the decision. Whenever one application or program is blocked, another will take its place, he wrote. This is the unique aspect and necessity of the free access to information in the age of communication.

Rouhani was even more forthright in his response to the ban in a message posted to Instagram on Friday. The government policy is... a safe, but not controlled Internet, he wrote. No Internet service or messaging app has been banned by this government, and none will be. He added that the block was the direct opposite of democracy.

Update: The judicial censorship of Telegram could be challenged by the president

7th May 2018. See  article from iranhumanrights.org

Two lawyers in Tehran told the Center for Human Rights in Iran (CHRI) that the Iranian president has the authority to refuse to comply with the prosecutor's order to ban the Telegram messaging app.

An attorney in Tehran specializing in media affairs, who spoke on the condition of anonymity due to the threat of reprisals by the judiciary, told CHRI: From a legal standpoint, orders issued by assistant prosecutors must be enforced but they can be challenged. As the target of this order, the government can lodge a complaint and ask the provincial court to make a ruling. But the question is, does the government want to take legal action or not? This is more of a political issue. In the same manner, the judiciary had invoked security laws to shut down 40 newspapers in 2000.

 

 

Offsite Article: YouTube Won't Put Up With Blatant Piracy Tutorials Forever...


Link Here 7th May 2018
Full story: YouTube Censorship...YouTube censor videos by restricting their reach
YouTube has 'how to' videos for pretty much everything

See article from torrentfreak.com

 

 

Sympathy for the Devil...

Egypt's film censor bans film set in Morocco for supposedly encouraging revolution


Link Here6th May 2018
Razzia is a 2017 France / Morocco / Belgium drama by Nabil Ayouch.
Starring Maryam Touzani, Arieh Worthalter and Amine Ennaji. IMDb

The streets of Casablanca provide the centerpiece for five separate narratives that all collide into one.

Egypt's film censors have banned Nabil Ayouch's film Razzia for supposedly encouraging revolution, especially as the film tells the story of the marginalized poor in search of justice in Morocco.

The film censor specifically referred to events in the movie that recall the 2011 Egyptian revolution. The censor also reported concerns about the film's religious impact, strongly believing that screening Razzia will inspire the audience's sympathy and compassion, as the movie follows the daily life of a Jewish restaurateur.

It's not the first time that the French-Moroccan director Nabil Ayouch has had to deal with censorship, as the Moroccan government banned his controversial film Much Loved in Moroccan cinemas in 2015.

 

 

Nazi censors...

German politician gets name-calling censored as required under new internet censorship law, but she is now demanding that it be censored worldwide, not just in Germany


Link Here5th May 2018
Full story: Internet Censorship in Germany...Germany considers state internet filtering
  Internet censors in training

It hasn't taken long for Germany's new internet censorship law to be used against the trivial name calling of politicians.

A recent German law was intended to put a stop to hate speech, but it's difficult and commercially expensive to bother considering every case on its merits, so it's just easier and cheaper for internet companies to censor everything asked for.

So of course easily offended politicians are quick to ask for trivial name calling insults to be taken down. But now there's a twist: for one easily offended politician, it is not enough for Facebook to block an insult in Germany; it must be blocked worldwide.

Courthouse News Service reports that a German court has indulged a politician's hypocritical outrage to demand the disappearance of an insulting comment posted to Facebook.

Alice Weidel, co-leader of the Alternative for Germany (AfD) party, objected to a Facebook post calling her a dirty Nazi swine for her opposition to same-sex marriage. Facebook immediately complied, but Weidel's lawyers complained it hadn't been vanished hard enough, pointing out that German VPN users could still access the comment.

Facebook's only comment, via Reuters, was to note it had already blocked the content in Germany, which is all the law really requires.

Of course once you allow mere insults to be censorable, you then hit the issue of fairness. Insults against some PC favoured groups are totally off limits and are considered to be a PC crime of the century, whilst insults against others (eg white men) are positively encouraged.

 

 

Offsite Video: Censorship committee...


Link Here3rd May 2018
Myles Jackman and the Open Rights Group speak to Parliament's Communications Committee which is considering how best to censor the internet

See article from parliamentlive.tv

 

 

Slackers and ne'er-do-wells...

Chinese video hosting website purges the Peppa Pig family


Link Here2nd May 2018
Full story: Internet Censorship in China...All pervading Chinese internet censorship
The wildly popular children's character Peppa Pig was recently scrubbed from Douyin, a video sharing platform in China, which deleted more than 30,000 clips. The hashtag #PeppaPig was also banned, according to the Global Times, a state-run tabloid newspaper.

Chinese authorities have claimed that Peppa Pig has become associated with lowlifes and slackers. The Global Times whinged:

People who upload videos of Peppa Pig tattoos and merchandise and make Peppa-related jokes run counter to the mainstream value and are usually poorly educated with no stable job. They are unruly slackers roaming around and the antithesis of the young generation the [Communist] party tries to cultivate. 

 

 

26 human rights organisations send a Telegram to Putin...

An open letter protesting Russia's censorship of Telegram


Link Here 1st May 2018
Full story: Internet Censorship in Russia...Russia and its repressive state control of media
We, the undersigned 26 international human rights, media and Internet freedom organisations, strongly condemn the attempts by the Russian Federation to block the Internet messaging service Telegram, which have resulted in extensive violations of freedom of expression and access to information, including mass collateral website blocking.

We call on Russia to stop blocking Telegram and cease its relentless attacks on Internet freedom more broadly. We also call on the United Nations (UN), the Council of Europe (CoE), the Organisation for Security and Cooperation in Europe (OSCE), the European Union (EU), the United States and other concerned governments to challenge Russia's actions and uphold the fundamental rights to freedom of expression and privacy online as well as offline. Lastly, we call on Internet companies to resist unfounded and extra-legal orders that violate their users' rights.

Massive Internet disruptions

On 13 April 2018, Moscow's Tagansky District Court granted Roskomnadzor, Russia's communications regulator, its request to block access to Telegram on the grounds that the company had not complied with a 2017 order to provide decryption keys to the Russian Federal Security Service (FSB). Since then, the actions taken by the Russian authorities to restrict access to Telegram have caused mass Internet disruption, including:

  • Between 16-18 April 2018, almost 20 million Internet Protocol (IP) addresses were ordered to be blocked by Roskomnadzor as it attempted to restrict access to Telegram. The majority of the blocked addresses are owned by international Internet companies, including Google, Amazon and Microsoft. Currently 14.6 million remain blocked.
  • This mass blocking of IP addresses has had a detrimental effect on a wide range of web-based services that have nothing to do with Telegram, including, but not limited to, online banking and booking sites, shopping, and flight reservations.
  • Agora, the human rights and legal group representing Telegram in Russia, has reported that it has received requests for assistance with issues arising from the mass blocking from about 60 companies, including online stores, delivery services, and software developers.
  • At least six online media outlets (Petersburg Diary, Coda Story, FlashNord, FlashSiberia, Tayga.info, and 7x7) found access to their websites was temporarily blocked.
  • On 17 April 2018, Roskomnadzor requested that Google and Apple remove access to the Telegram app from their App stores, despite having no basis in Russian law to make this request. The app remains available, but Telegram has not been able to provide upgrades that would allow better proxy access for users.
  • Virtual Private Network (VPN) providers -- such as TgVPN, Le VPN and VeeSecurity proxy -- have also been targeted for providing alternative means to access Telegram. Federal Law 276-FZ bans VPNs and Internet anonymisers from providing access to websites banned in Russia and authorises Roskomnadzor to order the blocking of any site explaining how to use these services.
Restrictive Internet laws

Over the past six years, Russia has adopted a huge raft of laws restricting freedom of expression and the right to privacy online. These include the creation in 2012 of a blacklist of Internet websites, managed by Roskomnadzor, and the incremental extension of the grounds upon which websites can be blocked, including without a court order.

The 2016 so-called 'Yarovaya Law' , justified on the grounds of "countering extremism", requires all communications providers and Internet operators to store metadata about their users' communications activities, to disclose decryption keys at the security services' request, and to use only encryption methods approved by the Russian government - in practical terms, to create a backdoor for Russia's security agents to access internet users' data, traffic, and communications.

In October 2017, a magistrate found Telegram guilty of an administrative offense for failing to provide decryption keys to the Russian authorities -- which the company states it cannot do due to Telegram's use of end-to-end encryption. The company was fined 800,000 rubles (approx. 11,000 EUR). Telegram lost an appeal against the administrative charge in March 2018, giving the Russian authorities formal grounds to block Telegram in Russia, under Article 15.4 of the Federal Law "On Information, Information Technologies and Information Protection".

The Russian authorities' latest move against Telegram demonstrates the serious implications for people's freedom of expression and right to privacy online in Russia and worldwide:

  • For Russian users, apps such as Telegram and similar services that seek to provide secure communications are crucial for users' safety. They provide an important source of information on critical issues of politics, economics and social life, free of undue government interference. For media outlets and journalists based in and outside Russia, Telegram serves not only as a messaging platform for secure communication with sources, but also as a publishing venue. Through its channels, Telegram acts as a carrier and distributor of content for entire media outlets as well as for individual journalists and bloggers. In light of direct and indirect state control over many traditional Russian media and the self-censorship many other media outlets feel compelled to exercise, instant messaging channels like Telegram have become a crucial means of disseminating ideas and opinions.
  • Companies that comply with the requirements of the 'Yarovaya Law' by allowing the government a back-door key to their services jeopardise the security of the online communications of their Russian users and the people they communicate with abroad. Journalists, in particular, fear that providing the FSB with access to their communications would jeopardise their sources, a cornerstone of press freedom. Company compliance would also signal that communication services providers are willing to compromise their encryption standards and put the privacy and security of all their users at risk, as a cost of doing business.
  • Beginning in July 2018, other articles of the 'Yarovaya Law' will come into force requiring companies to store the content of all communications for six months and to make them accessible to the security services without a court order. This would affect the communications of both people in Russia and abroad.

Such attempts by the Russian authorities to control online communications and invade privacy go far beyond what can be considered necessary and proportionate to countering terrorism and violate international law.

International standards
  • Blocking websites or apps is an extreme measure, analogous to banning a newspaper or revoking the license of a TV station. As such, it is highly likely to constitute a disproportionate interference with freedom of expression and media freedom in the vast majority of cases, and must be subject to strict scrutiny. At a minimum, any blocking measures should be clearly laid down by law and require the courts to examine whether the wholesale blocking of access to an online service is necessary and in line with the criteria established and applied by the European Court of Human Rights. Blocking Telegram and the accompanying actions clearly do not meet this standard.
  • Various requirements of the 'Yarovaya Law' are plainly incompatible with international standards on encryption and anonymity as set out in the 2015 report of the UN Special Rapporteur on Freedom of Expression (A/HRC/29/32). The UN Special Rapporteur himself has written to the Russian government raising serious concerns that the 'Yarovaya Law' unduly restricts the rights to freedom of expression and privacy online. In the European Union, the Court of Justice has ruled that similar data retention obligations were incompatible with the EU Charter of Fundamental Rights. Although the European Court of Human Rights has not yet ruled on the compatibility of the Russian provisions for the disclosure of decryption keys with the European Convention on Human Rights, it has found that Russia's legal framework governing interception of communications does not provide adequate and effective guarantees against the arbitrariness and the risk of abuse inherent in any system of secret surveillance.
We, the undersigned organisations, call on:
  • The Russian authorities to guarantee internet users' right to publish and browse anonymously and ensure that any restrictions to online anonymity are subject to requirements of a court order, and comply fully with Articles 17 and 19(3) of the ICCPR, and Articles 8 and 10 of the European Convention on Human Rights, by:
  • Desisting from blocking Telegram and refraining from requiring messaging services, such as Telegram, to provide decryption keys in order to access users private communications;
  • Repealing provisions in the 'Yarovaya Law' requiring Internet Service Providers (ISPs) to store all telecommunications data for six months and imposing mandatory cryptographic backdoors, and the 2014 Data Localisation law, which grant the security services easy access to users' data without sufficient safeguards;
  • Repealing Federal Law 241-FZ, which bans anonymity for users of online messaging applications; and Law 276-FZ which prohibits VPNs and Internet anonymisers from providing access to websites banned in Russia;
  • Amending Federal Law 149-FZ "On Information, IT Technologies and Protection of Information" so that the process of blocking websites meets international standards. Any decision to block access to a website or app should be undertaken by an independent court and be limited by requirements of necessity and proportionality for a legitimate aim. In considering whether to grant a blocking order, the court or other independent body authorised to issue such an order should consider its impact on lawful content and what technology may be used to prevent over-blocking.
  • Representatives of the United Nations (UN), the Council of Europe (CoE), the Organisation for Security and Cooperation in Europe (OSCE), the European Union (EU), the United States and other concerned governments to scrutinise and publicly challenge Russia's actions in order to uphold the fundamental rights to freedom of expression and privacy both online and offline, as stipulated in binding international agreements to which Russia is a party.
  • Internet companies to resist orders that violate international human rights law. Companies should follow the United Nations' Guiding Principles on Business & Human Rights, which emphasise that the responsibility to respect human rights applies throughout a company's global operations regardless of where its users are located and exists independently of whether the State meets its own human rights obligations.

Signed by

  • ARTICLE 19
  • Agora International
  • Access Now
  • Amnesty International
  • Asociatia pentru Tehnologie si Internet -- ApTI
  • Associação D3 - Defesa dos Direitos Digitais
  • Committee to Protect Journalists
  • Civil Rights Defenders
  • Electronic Frontier Foundation
  • Electronic Frontier Norway
  • Electronic Privacy Information Centre (EPIC)
  • Freedom House
  • Human Rights House Foundation
  • Human Rights Watch
  • Index on Censorship
  • International Media Support
  • International Partnership for Human Rights
  • ISOC Bulgaria
  • Open Media
  • Open Rights Group
  • PEN America
  • PEN International
  • Privacy International
  • Reporters Without Borders (RSF)
  • WWW Foundation
  • Xnet

 

 

Fake justification...

Malaysia's first conviction for 'fake news' is inevitably for a political comment that the government does not like


Link Here1st May 2018
Full story: Internet Censorship in Malaysia...Malaysia looks to censor the internet

In a verdict with grave implications for press freedom, a Malaysian court has handed down the nation's first conviction under its recently enacted 'fake news' law.

Salah Salem Saleh Sulaiman, a Danish citizen, was sentenced to one week in prison and fined 10,000 ringgit (US$2,500) for posting to the internet a two-minute video criticizing the police response to the April 21 assassination of a member of the militant group Hamas in Kuala Lumpur.

Shawn Crispin, CPJ's senior Southeast Asia representative, said:

Malaysia's first conviction under its 'fake news' law shows authorities plan to abuse the new provision to criminalize critical reporting. The dangerous precedent should be overturned and this ill-conceived law repealed for the sake of press freedom.

