Melon Farmers Original Version

Internet News


2019: September


 

Updated: Is Facebook's 'end to end encryption' worthless?...

UK and US set to sign treaty allowing UK police back door access to WhatsApp and other end to end encrypted messaging platforms


Link Here 1st October 2019
Full story: Internet Encryption...Encryption, essential for security but governments don't see it that way
UK police will be able to force US-based social media platforms to hand over users' messages, including those that are end to end encrypted, under a treaty that is set to be signed next month.

According to a report in The Times, investigations into certain 'serious' criminal offences will be covered under the agreement between the two countries.

The UK government has been imploring Facebook to create back doors which would enable intelligence agencies to gain access to messaging platforms for matters of national security.

The news of the agreement between the US and UK is sure to ramp up discussion of the effectiveness of end to end encryption when implemented by large corporations. If this report is confirmed and Facebook/police can indeed listen in on 'end to end encryption' then such implementations of encryption are worthless.
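As a reminder of what 'end to end' is supposed to mean: only the two endpoints hold the keys, so the company relaying the message cannot read it even when ordered to. The sketch below, which uses the PyNaCl library purely for illustration and has nothing to do with WhatsApp's actual protocol, shows the property in miniature; any 'lawful access' backdoor amounts to handing a key to a third party, which is precisely what the label rules out.

# Illustrative sketch of end to end encryption using the PyNaCl library
# (pip install pynacl). This is not WhatsApp's actual protocol, just the basic idea.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; the private keys never leave the devices.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The relaying server only ever sees 'ciphertext'; without Bob's private key it
# cannot recover the message. Bob decrypts with his own private key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"

# A 'lawful access' backdoor amounts to giving a third party a key to the same
# messages, which is exactly what end to end encryption is supposed to rule out.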

Update: Don't jump to conclusions

1st October 2019. See article from techdirt.com

No, The New Agreement To Share Data Between US And UK Law Enforcement Does Not Require Encryption Backdoors

It's no secret many in the UK government want backdoored encryption. The UK wing of the Five Eyes surveillance conglomerate says the only thing that should be absolute is the government's access to communications. The long-gestating Snooper's Charter frequently contained language mandating lawful access, the government's preferred nomenclature for encryption backdoors. And officials have, at various times, made unsupported statements about how no one really needs encryption, so maybe companies should just stop offering it.

What the UK government has in the works now won't mandate backdoors, but it appears to be a way to get its foot in the (back)door with the assistance of the US government. An agreement between the UK and the US -- possibly an offshoot of the Cloud Act -- would mandate the sharing of encrypted communications with UK law enforcement, as Bloomberg reports.

Sharing information is fine. Social media companies have plenty of information. What they don't have is access to users' encrypted communications, at least in most cases. Signing an accord won't change that. There might be increased sharing of encrypted communications but it doesn't appear this agreement actually requires companies to decrypt communications or create backdoors.

 

 

Offsite Article: Revealed: how TikTok censors videos worldwide that do not please Beijing...


Link Here 26th September 2019
Leak spells out how social media app advances China's foreign policy aims

See article from theguardian.com

 

 

The contradictions that can't be forgotten...

The EU's highest court finds that the 'right to be forgotten' does not apply outside of the EU


Link Here 25th September 2019
Full story: The Right to be Forgotten...Bureaucratic censorship in the EU
The EU's top court has ruled that Google does not have to apply the right to be forgotten globally.

It means the firm only needs to remove links from its search results in the EU, and not elsewhere.

The ruling stems from a dispute between Google and the French privacy censor CNIL. In 2015 CNIL ordered Google to globally remove search result listings for pages containing banned information about a person.

The following year, Google introduced a geoblocking feature that prevents European users from being able to see delisted links. But it resisted censoring search results for people in other parts of the world. And Google challenged a 100,000 euro fine that CNIL had tried to impose.

The right to be forgotten, officially known as the right to erasure, gives EU citizens the power to request data about them be deleted. Members of the public can make a request to any organisation verbally or in writing and the recipient has one month to respond. They then have a range of considerations to weigh up to decide whether they are compelled to comply or not.

Google had argued that internet censorship rules should not be extended to external jurisdictions lest other countries do the same, eg China would very much like to demand that the whole world forgets the Tiananmen Square massacre.

The court also issued a related second ruling, which said that links do not automatically have to be removed just because they contain information about a person's sex life or a criminal conviction. Instead, it ruled that such listings could be kept where strictly necessary for people's freedom of information rights to be preserved. However, it indicated a high threshold should be applied and that such results should fall down search result listings over time.

Notably, the ECJ ruling said that delistings must be accompanied by measures which effectively prevent or, at the very least, seriously discourage an internet user from being able to access the results from one of Google's non-EU sites. It will be for the national court to ascertain whether the measures put in place by Google Inc meet those requirements.

 

 

Words speak louder than facts...

Facebook justifiably decides that fact checking politicians isn't the way to go and that politicians at least will be given the right to free speech


Link Here 25th September 2019
Full story: Facebook Censorship...Facebook quick to censor
Nick Clegg, the Facebook VP of Global Affairs and Communications, writes in a blog post:

Fact-Checking Political Speech

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don't believe, however, that it's an appropriate role for us to referee political debates and prevent a politician's speech from reaching its audience and being subject to public debate and scrutiny. That's why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here .

Newsworthiness Exemption

Facebook has had a newsworthiness exemption since 2016 . This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads -- if someone chooses to post an ad on Facebook, they must still fall within our Community Standards and our advertising policies.

When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards.

 

 

Group censorship...

Kenyan bill would require licences to run social media groups


Link Here 25th September 2019

An oppressive censorship bill has been tabled in the Kenyan parliament targeting social media group admins and bloggers.

MP Malulu Injendi has tabled The Kenya Information and Communication (Amendment) Bill 2019 which specifically targets group admins, who will be used to police the kind of content shared in their groups.

The Bill defines social media platforms to include online publishing and discussion, media sharing, blogging, social networking, document and data sharing repositories, social media applications, social bookmarking and widgets. The bill reads:

The new part will introduce new sections to the Act on licensing of social media platforms, sharing of information by a licensed person, creates obligations to social media users, registration of bloggers and seeks to give responsibility to the Kenyan Communications Authority (CA)  to develop a bloggers' code of conduct in consultation with bloggers.

The Communications Authority will maintain  a registry of all bloggers and develop censorship rules for bloggers.

The proposed bill means that all group admins on any social platform will be required to get authorisation from CA before they can open such groups. The bill also states that admins should monitor content shared in their groups and remove any member that posts inappropriate content. The admins are also required to ensure all their members are over 18 years old. Group admins will also be required to have a physical address and keep a record of the group members.

 

 

Original censorship...

Twitter users in the US and Japan can now hide responses to their tweets for everyone


Link Here 23rd September 2019
Full story: Twitter Censorship...Twitter offers country by country take downs
Twitter explains in a blog post:

Earlier this year we started testing a way to give people more control over the conversations they start. Today, we're expanding this test to Japan and the United States!

With this test, we want to understand how conversations on Twitter change if the person who starts a conversation can hide replies. Based on our research and surveys we conducted, we saw a lot of positive trends during our initial test in Canada, including:

  • People mostly hide replies that they think are irrelevant, abusive or unintelligible. Those who used the tool thought it was a helpful way to control what they saw, similar to when keywords are muted.

  • We saw that people were more likely to reconsider their interactions when their tweet was hidden: 27% of people who had their tweets hidden said they would reconsider how they interact with others in the future.

  • People were concerned hiding someone's reply could be misunderstood and potentially lead to confusion or frustration. As a result, now if you tap to hide a Tweet, we'll check in with you to see if you want to also block that account.

We're interested to see if these trends continue, and if new ones emerge, as we expand our test to Japan and the US. People in these markets use Twitter in many unique ways, and we're excited to see how they might use this new tool.

 

 

Offsite Article: Swiss Copyright Law: Downloading Stays Legal, No Site Blocking...


Link Here 23rd September 2019
Switzerland's parliament passes new copyright laws

See article from torrentfreak.com

 

 

Offsite Article: Identity, Privacy and Tracking...


Link Here 21st September 2019
Full story: Behavioural Advertising...Serving adverts according to internet snooping
How cookies and tracking exploded, and why the adtech industry now wants full identity tokens. A good technical write-up of where we are at and where it all could go.

See article from iabtechlab.com

 

 

Offsite Article: Internet Villains...


Link Here 20th September 2019
After siding with the censors against DNS over HTTPS, the UK ISP trade association chair is interviewed over future directions for government internet censorship

See article from ispreview.co.uk

 

 

Culture of censorship...

Culture Secretary makes a speech about censoring our internet along the lines of TV


Link Here 19th September 2019
Culture Secretary Nicky Morgan made the keynote address to the Royal Television Society at the University of Cambridge. She took the opportunity to announce that the government is considering how to censor the internet more in line with strict TV censorship laws.

She set the background by noting how toxic the internet has become. Politicians never seem to consider that a toxic response to politicians may be totally justified by the dreadful legislation being passed to marginalise, disempower and impoverish British people. She noted:

And this Government is determined to see a strong and successful future for our public service broadcasters and commercial broadcasters alike.

I really value the important contribution that they all make to our public life, at a time when our civil discourse is increasingly under strain.

Disinformation, fuelled by hermetically sealed online echo chambers, is threatening the foundations of truth that we all rely on.

And the tenor of public conversations, especially those on social media, has become increasingly toxic and hostile.

Later she spoke of work in progress to move towards censoring the internet along the lines of TV. She said:

The second area where we need to adapt is the support offered by the Government and regulators.

We need to make sure that regulations, some of which were developed in the analogue age, are fit for the new ways that people create and consume content.

While I welcome the growing role of video on demand services and the investment and consumer choice they bring, it is important that we have regulatory frameworks that reflect this new environment.

For example, whereas a programme airing on linear TV is subject to Ofcom's Broadcasting Code, and the audience protections it contains, a programme going out on most video on demand services is not subject to the same standards.

This does not provide the clarity and consistency that consumers would expect.

So I am interested in considering how regulation should change to reflect a changing sector.

 

 

Facebook's Independent Oversight Board...

Facebook sets out plans for a top level body to decide upon censorship policy and to arbitrate on cases brought by Facebook and, later, Facebook users


Link Here 18th September 2019
Full story: Facebook Censorship...Facebook quick to censor

Mark Zuckerberg has previously described plans to create a high level oversight board to decide upon censorship issues with a wider consideration than just Facebook interests. He suggested that national government interests should be considered at this top level of policy making. Zuckerberg wrote:

We are responsible for enforcing our policies every day and we make millions of content decisions every week. But ultimately I don't believe private companies like ours should be making so many important decisions about speech on our own. That's why I've called for governments to set clearer standards around harmful content. It's also why we're now giving people a way to appeal our content decisions by establishing the independent Oversight Board.

If someone disagrees with a decision we've made, they can appeal to us first, and soon they will be able to further appeal to this independent board. The board's decision will be binding, even if I or anyone at Facebook disagrees with it. The board will use our values to inform its decisions and explain its reasoning openly and in a way that protects people's privacy.

The board will be an advocate for our community -- supporting people's right to free expression, and making sure we fulfill our responsibility to keep people safe. As an independent organization, we hope it gives people confidence that their views will be heard, and that Facebook doesn't have the ultimate power over their expression. Just as our Board of Directors keeps Facebook accountable to our shareholders, we believe the Oversight Board can do the same for our community.

As well as a detailed charter, Facebook provided a summary of the design of the board.

Along with the charter, we are providing a summary which breaks down the elements from the draft charter, the feedback we've received, and the rationale behind our decisions in relation to both. Many issues have spurred healthy and constructive debate. Four areas in particular were:

  • Governance: The majority of people we consulted supported our decision to establish an independent trust. They felt that this could help ensure the board's independence, while also providing a means to provide additional accountability checks. The trust will provide the infrastructure to support and compensate the Board.

  • Membership: We are committed to selecting a diverse and qualified group of 40 board members, who will serve three-year terms. We agreed with feedback that Facebook alone should not name the entire board. Therefore, Facebook will select a small group of initial members, who will help with the selection of additional members. Thereafter, the board itself will take the lead in selecting all future members, as explained in this post. The trust will formally appoint members.

  • Precedent: Regarding the board, the charter confirms that panels will be expected, in general, to defer to past decisions. This reflects the feedback received during the public consultation period. The board can also request that its decision be applied to other instances or reproductions of the same content on Facebook. In such cases, Facebook will do so, to the extent technically and operationally feasible.

  • Implementation: Facebook will promptly implement the board's content decisions, which are binding. In addition, the board may issue policy recommendations to Facebook, as part of its overall judgment on each individual case. This is how it was envisioned that the board's decisions will have lasting influence over Facebook's policies, procedures and practices.

Process

Both Facebook and its users will be able to refer cases to the board for review. For now, the board will begin its operations by hearing Facebook-initiated cases. The system for users to initiate appeals to the board will be made available over the first half of 2020.

Over the next few months, we will continue testing our assumptions and ensuring the board's operational readiness. In addition, we will focus on sourcing and selecting of board members, finalizing the bylaws that will complement the charter, and working toward having the board deliberate on its first cases early in 2020.

 

 

Offsite Article: Combating Hate and Extremism...


Link Here 18th September 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook reports on how it is developing capabilities to combat terrorism and hateful content

See article from newsroom.fb.com

 

 

Next will be the problem of fake flags...

Instagram adds facility to flag posts as 'fake news'


Link Here 17th September 2019
Facebook has launched a new feature allowing Instagram users to flag posts they claim contain fake news to its fact-checking partners for vetting.

The move is part of a wider raft of measures the social media giant has taken to appease the authorities who claim that 'fake news' is the root of all social ills.

Launched in December 2016 following the controversy surrounding the impact of Russian meddling and online fake news in the US presidential election, Facebook's partnership now involves more than 50 independent 'fact-checkers' in over 30 countries.

The new flagging feature for Instagram users was first introduced in the US in mid-August and has now been rolled out globally.

Users can report potentially false posts by clicking or tapping on the three dots that appear in the top right-hand corner, selecting report, it's inappropriate and then false information.

No doubt the facility will more likely be used to report posts that people don't like rather than posts containing 'false information'.

 

 

Endangering porn users doesn't qualify as online safety...

Ireland decides that age verification for porn is not a priority and will not be included in the upcoming online safety bill


Link Here 16th September 2019
Full story: Internet Censorship in Ireland...Ireland considers the UK's lead in censoring porn and social media
The Irish Communications Minister Richard Bruton has scrapped plans to introduce restrictions on access to porn in a new online safety bill, saying they are not a priority.

The Government said in June it would consider following a UK plan to block pornographic material until an internet user proves they are over 18. However, the British block has run into administrative problems and been delayed until later this year.

Bruton said such a measure in Ireland is not a priority in the Online Safety Bill, a draft of which he said would be published before the end of the year.

It's not the top priority. We want to do what we committed to do, we want to have the codes of practice, he said at the Fine Gael parliamentary party think-in. We want to have the online commissioner - those are the priorities we are committed to.

An online safety commissioner will have the power to enforce the online safety code and may in some cases be able to force social media companies to remove or restrict access. The commissioner will have responsibility for ensuring that large digital media companies play their part in ensuring the code is complied with. It will also be regularly reviewed and updated.

Bruton's bill will allow for a more comprehensive complaint procedure for users and alert the commissioner to any alleged dereliction of duty. The Government has been looking at Australia's pursuit of improved internet safety.

 

 

Offsite Article: Government interference in the commercial arrangements between large companies and Facebook...


Link Here 16th September 2019
Full story: Facebook Censorship...Facebook quick to censor
Facebook has some strong words for an Australian government inquiry looking into ideas to censor the internet

See article from businessinsider.com.au

 

 

Offsite Article: British ISPs fight to make the web LESS secure...


Link Here 15th September 2019
Why Britain's broadband providers are worried about a new technology that guards against online snooping

See article from itpro.co.uk

 

 

Google Purse...

Google pays a small fine for not implementing Russian internet censorship demands


Link Here 14th September 2019
Full story: Internet Censorship in Russia...Russia and its repressive state control of media
Google has paid a fine for failing to block access to certain websites banned in Russia.

Roscomnadzor, the Russian government's internet and media censor, said that Google paid a fine of 700,000 rubles ($10,900) related to the company's refusal to fully comply with rules imposed under the country's censorship regime.

Search engines are prohibited under Russian law from displaying banned websites in the results shown to users, and companies like Google are asked to adhere to a regularly updated blacklist maintained by Roscomnadzor.

Google does not fully comply with the blacklist, however, and more than a third of the websites banned in Russia could be found using its search engine, Roscomnadzor said previously.

No doubt Russia is now working on increased fines for future transgressions.

 

 

Russia recommends...

Calls to block email services Mailbox and Scryptmail for being too secure


Link Here 14th September 2019
Full story: Internet Censorship in Russia...Russia and its repressive state control of media
Russia's powerful internal security agency FSB  has enlisted the help of the telecommunications, IT and media censor Roskomnadzor to ask a court to block Mailbox and Scryptmail email providers.

It seems that the services failed to register with the authorities as required by Russian law. Both are marketed as focusing strongly on the privacy segment and offering end-to-end encryption.

News source RBK noted that the process to block the two email providers will in legal terms follow the model applied to the Telegram messaging service -- adding, however, that imperfections in the blocking system are resulting in Telegram's continued availability in Russia.

On the other hand, some experts argued that it will be easier to block an email service than a messenger like Telegram. In any case, Russia is preparing for a new law to come into effect on November 1 that will see the deployment of Deep Packet Inspection equipment, which should result in more efficient blocking of services.

 

 

But will Australia give a little more thought than the UK to keeping porn users safe?...

Australian government initiates a parliamentary investigation into age verification requirements for viewing internet porn


Link Here 13th September 2019
Full story: Internet Censorship in Australia...Wide ranging state internet censorship

A parliamentary committee initiated by the Australian government will investigate how porn websites can verify Australians visiting their websites are over 18, in a move based on the troubled UK age verification system.

The family and social services minister, Anne Ruston, and the minister for communications, Paul Fletcher, referred the matter for inquiry to the House of Representatives standing committee on social policy and legal affairs.

The committee will examine how age verification works for online gambling websites, and see if that can be applied to porn sites. According to the inquiry's terms of reference, the committee will examine whether such a system would push adults into unregulated markets, whether it would potentially lead to privacy breaches, and whether it would impact freedom of expression.

The committee has specifically been tasked to examine the UK's version of this system, in the UK Digital Economy Act 2017.

Hopefully they will understand better than UK lawmakers that it is of paramount importance that legislation is enacted to keep people's porn browsing information totally safe from snoopers, hackers and those that want to make money selling it.

 

 

Firefox Private Network...

Firefox is testing a version in the US which includes a VPN


Link Here 13th September 2019
One of the key learnings from recent events is that there is growing demand for privacy features. The Firefox Private Network is an extension which provides a secure, encrypted path to the web to protect your connection and your personal information anywhere and everywhere you use your Firefox browser.

There are many ways that your personal information and data are exposed: online threats are everywhere, whether it's through phishing emails or data breaches. You may often find yourself taking advantage of the free WiFi at the doctor's office, airport or a cafe. There can be dozens of people using the same network -- casually checking the web and getting social media updates. This leaves your personal information vulnerable to those who may be lurking, waiting to take advantage of this situation to gain access to your personal info. Using the Firefox Private Network helps protect you from hackers lurking in plain sight on public connections. To learn more about Firefox Private Network, its key features and how it works exactly, please take a look at this blog post .

As a Firefox user and account holder in the US, you can start testing the Firefox Private Network today . A Firefox account allows you to be one of the first to test potential new products and services when we make them available in Europe, so sign up today and stay tuned for further news and the Firefox Private Network coming to your location soon!

 

 

Chinese AI to instil dumb obedience...

Internet censors demand that AI and algorithms point users towards 'mainstream values'


Link Here 12th September 2019
Full story: Internet Censorship in China...All pervading Chinese internet censorship
China's internet censor has ordered online AI algorithms to promote 'mainstream values':
  • Systems should direct users to approved material on subjects like Xi Jinping Thought, or which showcase the country's economic and social development, Cyberspace Administration of China says
  • They should not recommend content that undermines national security, or is sexually suggestive, promotes extravagant lifestyles, or hypes celebrity gossip and scandals

The Cyberspace Administration of China released its draft regulations on managing the cyberspace ecosystem on Tuesday in another sign of how the ruling Communist Party is increasingly turning to technology to cement its ideological control over society.

The proposals will be open for public consultation for a month and are expected to go into effect later in the year.

The latest rules point to a strategy to use AI-driven algorithms to expand the reach and depth of the government's propaganda and ideology.

The regulations state that information providers on all manner of platforms -- from news and social media sites, to gaming and e-commerce -- should strengthen the management of recommendation lists, trending topics, hot search lists and push notifications. The regulations state:

Online information providers that use algorithms to push customised information [to users] should build recommendation systems that promote mainstream values, and establish mechanisms for manual intervention and override.

 

 

Concerned friends...

Facebook reports on its policies and resources to prevent suicide and self-harm


Link Here 12th September 2019
Full story: Facebook Censorship...Facebook quick to censor
Today, on World Suicide Prevention Day, we're sharing an update on what we've learned and some of the steps we've taken in the past year, as well as additional actions we're going to take, to keep people safe on our apps, especially those who are most vulnerable.

Earlier this year, we began hosting regular consultations with experts from around the world to discuss some of the more difficult topics associated with suicide and self-injury. These include how we deal with suicide notes, the risks of sad content online and newsworthy depictions of suicide. Further details of these meetings are available on Facebook's new Suicide Prevention page in our Safety Center.

As a result of these consultations, we've made several changes to improve how we handle this content. We tightened our policy around self-harm to no longer allow graphic cutting images to avoid unintentionally promoting or triggering self-harm, even when someone is seeking support or expressing themselves to aid their recovery. On Instagram, we've also made it harder to search for this type of content and kept it from being recommended in Explore. We've also taken steps to address the complex issue of eating disorder content on our apps by tightening our policy to prohibit additional content that may promote eating disorders. And with these stricter policies, we'll continue to send resources to people who post content promoting eating disorders or self-harm, even if we take the content down. Lastly, we chose to display a sensitivity screen over healed self-harm cuts to help avoid unintentionally promoting self-harm.

And for the first time, we're also exploring ways to share public data from our platform on how people talk about suicide, beginning with providing academic researchers with access to the social media monitoring tool, CrowdTangle. To date, CrowdTangle has been available primarily to help newsrooms and media publishers understand what is happening on Facebook. But we are eager to make it available to two select researchers who focus on suicide prevention to explore how information shared on Facebook and Instagram can be used to further advancements in suicide prevention and support.

In addition to all we are doing to find more opportunities and places to surface resources, we're continuing to build new technology to help us find and take action on potentially harmful content, including removing it or adding sensitivity screens. From April to June of 2019, we took action on more than 1.5 million pieces of suicide and self-injury content on Facebook and found more than 95% of it before it was reported by a user. During that same time period, we took action on more than 800 thousand pieces of this content on Instagram and found more than 77% of it before it was reported by a user.

To help young people safely discuss topics like suicide, we're enhancing our online resources by including Orygen's #chatsafe guidelines in Facebook's Safety Center and in resources on Instagram when someone searches for suicide or self-injury content.

The #chatsafe guidelines were developed together with young people to provide support to those who might be responding to suicide-related content posted by others or for those who might want to share their own feelings and experiences with suicidal thoughts, feelings or behaviors.

 

 

Censorship on Demand...

New Zealand government to legislate to require age ratings on Internet TV


Link Here 11th September 2019
Full story: Internet TV Censorship In New Zealand...OFLC to oversee internet video self rating

The New Zealand government has decided to legislate to require Internet TV services to provide age ratings using a self rating scheme overseen by the country's film censor.

Movies and shows available through internet television services such as Netflix and Lightbox will need to display content classifications in a similar way to films and shows released to cinemas and on DVD, Internal Affairs Minister Tracey Martin has announced.

The law change, which the Government plans to introduce to Parliament in November, would also apply to other companies that sell videos on demand, including Stuff Pix.

The tighter rules won't apply to websites designed to let people upload and share videos, so videos on YouTube's main site won't need to display classifications, but videos that YouTube sells through its rental service will.

In a compromise, internet television and video companies will be able to self-classify their content using a rating tool being developed by the Chief Censor, or use their own systems to do that if they first have them accredited by the Classification Office.

The Film and Literature Board of Review will be able to review classifications, as they do now for cinema movies and DVDs.

The Government decided against requiring companies to instead submit videos to the film censor for classification, heeding a Cabinet paper warning that this would result in hold-ups.

 

 

Offsite Article: How the Suitable For All Ages Standard Leads to Censorship Worldwide...


Link Here 11th September 2019
Is 'Suitable For All Ages' a euphemism for no LGBT content? By Eric Thurm

See article from vice.com

 

 

Extract: Countering religious calls for censorship...

Why Ban Netflix India Just For Hurting Hindu Sentiments?


Link Here 10th September 2019
Full story: Internet TV Censorship in India...Netflix and Amazon Prime censored

What's the difference between a child throwing a tantrum and religious groups asking for a ban on something that hurt religious sentiments? Absolutely nothing, except maybe the child can be cajoled into understanding that they might be wrong. Try doing that with the religious group and you'll be facing trolls, bans, and rape, death or beheading threats. Thankfully, when it comes to the recent call for banning the streaming platform Netflix, those demanding it have taken recourse to the law and filed a police complaint.

Their concern? According to Shiv Sena committee member Ramesh Solanki, who filed the complaint, Netflix original shows are promoting anti-Hindu propaganda. The shows in question include Sacred Games 2 (a Hindu godman encouraging terrorism), Leila (depicts a dystopian society divided on the basis of caste) and comedian Hasan Minhaj's Patriot Act (claims how the Lok Sabha elections 2019 disenfranchised minorities).

...Read the full article from in.mashable.com

 

 

No Brains...

Google's censorship ineptitude leads to a ban of a restaurant advert for Fanny's faggots


Link Here 10th September 2019
Full story: Google Censorship...Google censors adult material froms its websites

 

 

 

Safer browsing in the US...

Mozilla announces that encrypted DNS will be slowly rolled out to US Firefox users from September


Link Here 8th September 2019
Full story: DNS Over Https...A new internet protocol will make government website blocking more difficult
DNS over HTTPS (DoH) is an encrypted internet protocol that makes it more difficult for ISPs and government censors to block users from being able to access banned websites. It also makes it more difficult for state snoopers like GCHQ to keep tabs on users' internet browsing history.

Of course this protection from external interference also makes internet browsing much safer from the threat of scammers, identity thieves and malware.
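As a rough illustration of what DoH does, the sketch below resolves a hostname by sending the DNS query inside an ordinary HTTPS request to a public resolver, rather than as plain DNS that an ISP can see and tamper with. Cloudflare's public JSON endpoint is used here purely as an example resolver; this is a minimal sketch, not how Firefox itself is wired up.

# Minimal sketch: resolve a hostname over DNS-over-HTTPS instead of plain DNS.
# Cloudflare's public JSON endpoint is used as an example resolver; any DoH
# resolver that offers the JSON API would work the same way.
import requests

DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"  # example public DoH resolver

def doh_lookup(hostname, record_type="A"):
    """Return the DNS answers for hostname, fetched over an encrypted HTTPS channel."""
    response = requests.get(
        DOH_ENDPOINT,
        params={"name": hostname, "type": record_type},
        headers={"accept": "application/dns-json"},  # ask for the JSON answer format
        timeout=5,
    )
    response.raise_for_status()
    payload = response.json()  # mirrors a normal DNS response; Status 0 means NOERROR
    return [answer["data"] for answer in payload.get("Answer", [])]

# Because the query travels inside TLS, an on-path ISP or censor only sees a
# connection to the resolver, not which domain was looked up.
print(doh_lookup("example.com"))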

Google were once considering introducing DoH for its Chrome browser but have recently announced that they will not allow it to be used to bypass state censors.

Mozilla meanwhile have been a bit more reasonable about it and allow users to opt in to using DoH. Now Mozilla is considering using DoH by default in the US, but still with the proviso of implementing DoH only if the user is not using parental control or maybe corporate website blocking.

Mozilla explains in a blog post:

What's next in making Encrypted DNS-over-HTTPS the Default

By Selena Deckelmann,

In 2017, Mozilla began working on the DNS-over-HTTPS (DoH) protocol, and since June 2018 we've been running experiments in Firefox to ensure the performance and user experience are great. We've also been surprised and excited by the more than 70,000 users who have already chosen on their own to explicitly enable DoH in Firefox Release edition. We are close to releasing DoH in the USA, and we have a few updates to share.

After many experiments, we've demonstrated that we have a reliable service whose performance is good, that we can detect and mitigate key deployment problems, and that most of our users will benefit from the greater protections of encrypted DNS traffic. We feel confident that enabling DoH by default is the right next step. When DoH is enabled, users will be notified and given the opportunity to opt out.

Results of our Latest Experiment

Our latest DoH experiment was designed to help us determine how we could deploy DoH, honor enterprise configuration and respect user choice about parental controls.

We had a few key learnings from the experiment.

  • We found that OpenDNS' parental controls and Google's safe-search feature were rarely configured by Firefox users in the USA. In total, 4.3% of users in the study used OpenDNS' parental controls or safe-search. Surprisingly, there was little overlap between users of safe-search and OpenDNS' parental controls. As a result, we're reaching out to parental controls operators to find out more about why this might be happening.

  • We found 9.2% of users triggered one of our split-horizon heuristics. The heuristics were triggered in two situations: when websites were accessed whose domains had non-public suffixes, and when domain lookups returned both public and private (RFC 1918) IP addresses. There was also little overlap between users of our split-horizon heuristics, with only 1% of clients triggering both heuristics.
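As a rough illustration, and not Mozilla's actual code, the second heuristic above amounts to noticing that a single lookup has returned a mixture of public and private (RFC 1918) addresses, which suggests a split-horizon network where DoH would give the wrong answers:

# Sketch of a split-horizon check: a lookup that returns both public and
# private (RFC 1918) addresses is a hint that the network answers differently
# inside and outside, so a browser would fall back to the OS resolver.
import ipaddress

def looks_like_split_horizon(resolved_addresses):
    addresses = [ipaddress.ip_address(a) for a in resolved_addresses]
    has_private = any(a.is_private for a in addresses)   # includes the RFC 1918 ranges
    has_public = any(not a.is_private for a in addresses)
    return has_private and has_public

print(looks_like_split_horizon(["10.0.0.12", "203.0.113.7"]))     # True: fall back to OS DNS
print(looks_like_split_horizon(["203.0.113.7", "198.51.100.3"]))  # False: DoH is fine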

Moving Forward

Now that we have these results, we want to tell you about the approach we have settled on to address managed networks and parental controls. At a high level, our plan is to:

  • Respect user choice for opt-in parental controls and disable DoH if we detect them;

  • Respect enterprise configuration and disable DoH unless explicitly enabled by enterprise configuration; and

  • Fall back to operating system defaults for DNS when split horizon configuration or other DNS issues cause lookup failures.

We're planning to deploy DoH in "fallback" mode; that is, if domain name lookups using DoH fail or if our heuristics are triggered, Firefox will fall back and use the default operating system DNS. This means that for the minority of users whose DNS lookups might fail because of split horizon configuration, Firefox will attempt to find the correct address through the operating system DNS.

In addition, Firefox already detects that parental controls are enabled in the operating system, and if they are in effect, Firefox will disable DoH. Similarly, Firefox will detect whether enterprise policies have been set on the device and will disable DoH in those circumstances. If an enterprise policy explicitly enables DoH, which we think would be awesome, we will also respect that. If you're a system administrator interested in how to configure enterprise policies, please find documentation here.

Options for Providers of Parental Controls

We're also working with providers of parental controls, including ISPs, to add a canary domain to their blocklists. This helps us in situations where the parental controls operate on the network rather than an individual computer. If Firefox determines that our canary domain is blocked, this will indicate that opt-in parental controls are in effect on the network, and Firefox will disable DoH automatically.

This canary domain is intended for use in cases where users have opted in to parental controls. We plan to revisit the use of this heuristic over time, and we will be paying close attention to how the canary domain is adopted. If we find that it is being abused to disable DoH in situations where users have not explicitly opted in, we will revisit our approach.
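As an illustrative aside rather than Mozilla's implementation, the canary mechanism boils down to asking the ordinary network resolver for a well-known name and treating a blocked answer as the signal to switch DoH off. The sketch below assumes Mozilla's published canary name, use-application-dns.net, and treats a simple resolution failure as the 'blocked' signal:

# Illustrative canary-domain check, not Mozilla's implementation. If the
# network's own plain-DNS resolver refuses to resolve the canary name, that is
# taken as a signal that opt-in filtering is in place and DoH should be disabled.
import socket

CANARY_DOMAIN = "use-application-dns.net"  # Mozilla's published canary name

def network_signals_doh_off(canary=CANARY_DOMAIN):
    try:
        socket.getaddrinfo(canary, None)  # query via the operating system resolver
        return False   # canary resolved normally, so no opt-out signal
    except socket.gaierror:
        return True    # canary blocked (e.g. NXDOMAIN), so disable DoH on this network

if network_signals_doh_off():
    print("Canary blocked: network filtering detected, falling back to plain DNS")
else:
    print("No canary signal: DoH stays enabled")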

Plans for Enabling DoH Protections by Default

We plan to gradually roll out DoH in the USA starting in late September. Our plan is to start slowly enabling DoH for a small percentage of users while monitoring for any issues before enabling for a larger audience. If this goes well, we will let you know when we're ready for 100% deployment.

 

 

A Bully's Charter...

MPs and campaigners call for 'misogyny' to be defined as an 'online harm' requiring censorship by social media. What could go wrong?


Link Here 7th September 2019

MPs and activists have urged the government to protect women through censorship. They write in a letter:

Women around the world are 27 times more likely to be harassed online than men. In Europe, 9 million girls have experienced some kind of online violence by the time they are 15 years old. In the UK, 21% of women have received threats of physical or sexual violence online. The basis of this abuse is often, though not exclusively, misogyny.

Misogyny online fuels misogyny offline. Abusive comments online can lead to violent behaviour in real life. Nearly a third of respondents to a Women's Aid survey said where threats had been made online from a partner or ex-partner, they were carried out. Along with physical abuse, misogyny online has a psychological impact. Half of girls aged 11-21 feel less able to share their views due to fear of online abuse, according to Girlguiding UK.

The government wants to make Britain the safest place in the world to be online, yet in the online harms white paper, abuse towards women online is categorised as harassment, with no clear consequences, whereas similar abuse on the grounds of race, religion or sexuality would trigger legal protections.

If we are to eradicate online harms, far greater emphasis in the government's efforts should be directed to the protection and empowerment of the internet's single largest victim group: women. That is why we back the campaign group Empower's calls for the forthcoming codes of practice to include and address the issue of misogyny by name, in the same way as they would address the issue of racism by name. Violence against women and girls online is not harassment. Violence against women and girls online is violence.

Ali Harris Chief executive, Equally Ours
Angela Smith MP Independent
Anne Novis Activist
Lorely Burt Liberal Democrat, House of Lords
Ruth Lister Labour, House of Lords
Barry Sheerman MP Labour
Caroline Lucas MP Green
Daniel Zeichner MP Labour
Darren Jones MP Labour
Diana Johnson MP Labour
Flo Clucas Chair, Liberal Democrat Women
Gay Collins Ambassador, 30% Club
Hannah Swirsky Campaigns officer, René Cassin
Joan Ryan MP Independent Group for Change
Joe Levenson Director of communications and campaigns, Young Women's Trust
Jonathan Harris House of Lords, Labour
Luciana Berger MP Liberal Democrats
Mandu Reid Leader, Women's Equality Party
Maya Fryer WebRoots Democracy
Preet Gill MP Labour
Sarah Mann Director, Friends, Families and Travellers
Siobhan Freegard Founder, Channel Mum
Jacqui Smith Empower

Offsite Patreon Comment: What will go wrong?

See subscription article from patreon.com

 

 

Extract: The Pentagon Wants More Control Over the News. What Could Go Wrong?...

The US sets its military scientists on a quest to automatically recognise 'fake news' so that ISPs can block it


Link Here 7th September 2019

One of the Pentagon's most secretive agencies, the Defense Advanced Research Projects Agency (DARPA), is developing custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips.

DARPA now is developing a semantic analysis program called SemaFor and an image analysis program called MediFor, ostensibly designed to prevent the use of fake images or text. The idea would be to develop these technologies to help private Internet providers sift through content.

...Read the full article from rollingstone.com

 

 

Children will see but not be heard...

YouTube will remove all ad personalisation from kids' videos and will also turn off comments


Link Here 5th September 2019
Full story: YouTube Censorship...YouTube censor videos by restricting their reach
Google have announced potentially far reaching new policies about kids' videos on YouTube. A Google blog post explains:

An update on kids and data protection on YouTube

From its earliest days, YouTube has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased. We've been taking a hard look at areas where we can do more to address this, informed by feedback from parents, experts, and regulators, including COPPA concerns raised by the U.S. Federal Trade Commission and the New York Attorney General that we are addressing with a settlement announced today.

New data practices for children's content on YouTube

We are changing how we treat data for children's content on YouTube. Starting in about four months, we will treat data from anyone watching children's content on YouTube as coming from a child, regardless of the age of the user. This means that we will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this content entirely, and some features will no longer be available on this type of content, like comments and notifications. In order to identify content made for kids, creators will be required to tell us when their content falls in this category, and we'll also use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.

Improvements to YouTube Kids

We continue to recommend parents use YouTube Kids if they plan to allow kids under 13 to watch independently. Tens of millions of people use YouTube Kids every week but we want even more parents to be aware of the app and its benefits. We're increasing our investments in promoting YouTube Kids to parents with a campaign that will run across YouTube. We're also continuing to improve the product. For example, we recently raised the bar for which channels can be a part of YouTube Kids, drastically reducing the number of channels on the app. And we're bringing the YouTube Kids experience to the desktop.

Investing in family creators

We know these changes will have a significant business impact on family and kids creators who have been building both wonderful content and thriving businesses, so we've worked to give impacted creators four months to adjust before changes take effect on YouTube. We recognize this won't be easy for some creators and are committed to working with them through this transition and providing resources to help them better understand these changes.

We are also going to continue investing in the future of quality kids, family and educational content. We are establishing a $100 million fund, disbursed over three years, dedicated to the creation of thoughtful, original children's content on YouTube and YouTube Kids globally.

Today's changes will allow us to better protect kids and families on YouTube, and this is just the beginning. We'll continue working with lawmakers around the world in this area, including as the FTC seeks comments on COPPA . And in the coming months, we'll share details on how we're rethinking our overall approach to kids and families, including a dedicated kids experience on YouTube.

 

 

Offsite Article: Eye-catching shots of magnified body parts -- with a twist...


Link Here 5th September 2019
Marius Sperlich's provocative pics are the antidote to Instagram censorship

See article from dazeddigital.com

 

 

Offsite Article: Depressing...


Link Here 5th September 2019
Privacy International finds that some online depression tests share your results with third parties

See article from privacyinternational.org

 

 

All bets are off...

Switzerland issues its first blocking list of banned foreign gambling websites


Link Here 4th September 2019
The Swiss Lottery and Betting Board has published its first censorship list of foreign gambling websites to be blocked by the country's ISPs.

The censorship follows a change to the law on online gambling intended to preserve a monopoly for Swiss gambling providers.

Over 60 foreign websites have been blocked to Swiss gamblers. Last June, 73% of voters approved the censorship law. The law came into effect in January but blocking of foreign gambling websites only started in August.

Swiss gamblers can bet online only with Swiss casinos and lotteries that pay tax in the country.

Foreign service providers that voluntarily withdraw from the Swiss market with appropriate measures will not be blocked.

 

 

Shooting messengers...

35 people get into trouble with New Zealand police over Brenton Tarrant's mosque murder video


Link Here 3rd September 2019
Full story: Siege of Kalima...Tunisia police harass and close radio station
35 people in New Zealand have been charged by police for sharing and possession of Brenton Tarrant's Christchurch terrorist attack video.

As of August 21st, 35 people have been charged in relation to the video, according to information released under the Official Information Act. At least 10 of the charges are against minors, whose cases have now been referred to the Youth Court.

Under New Zealand law, knowingly possessing or distributing objectionable material is a serious offence with a maximum jail term of 14 years.

So far, nine people have been issued warnings, while 14 have been prosecuted for their involvement.

 

 

The 4 Rs of YouTube censorship...

YouTube CEO reports on how 'wrong think' is being marginalised and how the mainstream media is being prioritised for news


Link Here 2nd September 2019
Full story: YouTube Censorship...YouTube censor videos by restricting their reach
After a long introduction about how open and diverse YouTube is, CEO Susan Wojcicki gets down to the nitty gritty of how YouTube censorship works. She writes in a blog:

Problematic content represents a fraction of one percent of the content on YouTube and we're constantly working to reduce this even further. This very small amount has a hugely outsized impact, both in the potential harm for our users, as well as the loss of faith in the open model that has enabled the rise of your creative community. One assumption we've heard is that we hesitate to take action on problematic content because it benefits our business. This is simply not true -- in fact, the cost of not taking sufficient action over the long term results in lack of trust from our users, advertisers, and you, our creators. We want to earn that trust. This is why we've been investing significantly over the past few years in the teams and systems that protect YouTube. Our approach towards responsibility involves four "Rs":

  • We REMOVE content that violates our policy as quickly as possible. And we're always looking to make our policies clearer and more effective, as we've done with pranks and challenges, child safety, and hate speech just this year. We aim to be thoughtful when we make these updates and consult a wide variety of experts to inform our thinking, for example we talked to dozens of experts as we developed our updated hate speech policy. We also report on the removals we make in our quarterly Community Guidelines enforcement report. I also appreciate that when policies aren't working for the creator community, you let us know. One area we've heard loud and clear needs an update is creator-on-creator harassment. I said in my last letter that we'd be looking at this and we will have more to share in the coming months.

  • We RAISE UP authoritative voices when people are looking for breaking news and information, especially during breaking news moments. Our breaking and top news shelves are available in 40 countries and we're continuing to expand that number.

  • We REDUCE the spread of content that brushes right up against our policy line. Already, in the U.S. where we made changes to recommendations earlier this year, we've seen a 50% drop of views from recommendations to this type of content, meaning quality content has more of a chance to shine. And we've begun experimenting with this change in the UK, Ireland, South Africa and other English-language markets.

  • And we set a higher bar for what channels can make money on our site, REWARDING trusted, eligible creators. Not all content allowed on YouTube is going to match what advertisers feel is suitable for their brand, we have to be sure they are comfortable with where their ads appear. This is also why we're enabling new revenue streams for creators like Super Chat and Memberships. Thousands of channels have more than doubled their total YouTube revenue by using these new tools in addition to advertising.

 

 

Fake news hub...

Thailand outlines its 'fake news' internet censorship centre set for launch in November 2019


Link Here 2nd September 2019
Thailand's Ministry of Digital Economy and Society plans to open a 'Fake News' Center by November 1st at the latest. The minister has said that the centre will focus on four categories of internet censorship.

Digital Minister Puttipong Punnakanta said that the coordinating committee of the Fake News Center has set up four subcommittees to screen the various categories of news which might 'disrupt public peace and national security':

  • natural disasters such as flooding, earthquakes, dam breaks and tsunamis;
  • economics, the financial and banking sector;
  • health products, hazardous items and illegal goods,
  • and of course, government policies.

The Fake News Center will analyse, verify and clarify news items and distribute its findings via its own website, Facebook and Line (a Whatsapp like messaging service that is the dominant in much of Asia).

The committee meeting considered protocols to be used and plans to consult with representatives of major social media platforms and all cellphone service providers. It will encourage them to take part in the delivery of countermeasures to expose fake news.

 

 

Offsite Article: Leave your friends at home...


Link Here 1st September 2019
Are US border police checking out your Facebook postings and friends at the airport immigrations desk?

See article from theverge.com

