Google News is limiting the reach of two Russian media outlets, RT and Sputnik, according to Alphabet executive chairman Eric Schmidt.
Schmidt said Google is de-ranking sites it claims have been spreading Russian state-sponsored propaganda: "We're trying to engineer the systems to prevent it."
However, Schmidt added that while he isn't in favor of censorship, his company also has a responsibility to stop the misinformation.
In response to the censorship, Sputnik quoted research psychologist Robert Epstein:
Google is deciding what people see, which is very dangerous since they are legally a tech company and do not adhere to any type of editorial standards or guidelines.
What we're talking about here is a means of mind control on a massive scale that there is no precedent for in human history, he said at the time. Research participants spent a much larger percentage of web browsing time visiting search results
that were higher up. According to Epstein, biased Google results could have provided an extra 2.6 million votes in support of Democratic candidate Hillary Clinton in the 2016 race.
A group of international broadcasters have come together to support a new website that aims to help internet users around the world access news and information.
The Broadcasting Board of Governors (US), the BBC (UK), Deutsche Welle (Germany) and France Médias Monde (France) have co-sponsored the Bypass Censorship website: bypasscensorship.org
Bypass Censorship provides internet users information on how to access and download security-conscious tools which will enable them to access news websites and social media blocked by governments.
When governments try to block these circumvention tools, the site is updated with information to help users stay ahead of the censors and maintain access to news sites.
BBG CEO John F. Lansing said:
The right to seek, and impart, facts and ideas is a universal human right which many repressive governments seek to control. This website presents an incredible opportunity to provide citizens around the world with the resources they need to
access a free and open internet for uncensored news and information essential to making informed decisions about their lives and communities.
The broadcasters supporting the Bypass Censorship site are part of the DG7 group of media organisations which are consistent supporters of UN resolutions on media freedom and the safety of journalists.
On 11th November, thousands of people marched in the streets of Warsaw, Poland, to celebrate the
country's Independence Day. The march attracted massive numbers of people from the nationalist or far right end of the political spectrum.
The march proved very photogenic, with images showing both its scale and its stylised symbology; the pictures were powerful and thought-provoking.
But the images caused problems for the likes of Facebook, on what should be censored and what should not.
One could argue that the world needs to see what is going on amongst large segments of the population in Poland, and indeed across Europe. Perhaps if they see the popularity of the far right, then communities and politicians can be spurred
into addressing some of the fundamental societal breakdowns leading to this mass movement.
On the other hand, there will be those that consider the images to be something that could attract and inspire others to join the cause.
But from just looking at news pictures, it would be hard to know what to think. And that dilemma is exactly what caused confusion amongst censors at Facebook.
Quartz reports on a collection of such images, published on Facebook by a renowned photojournalist in Poland, that was taken down by the social network's content censors. Chris Niedenthal attended the march to practice his craft, not to participate, and
posted his photos on Nov. 12, the day after the march. Facebook took them down. He posted them again the next day. Facebook took them down again on Nov. 14. Niedenthal himself was also blocked from Facebook for 24 hours. The author concludes that
a legitimate professional journalist or photojournalist should not be 'punished' for doing his duty.
Facebook told Quartz that the photos, because they contained hate speech symbols, were taken down for violating the platform's community standards policy barring content that shows support for hate groups. The captions on the photos were neutral,
so Facebook's moderators could not tell if the person posting them supported, opposed, or was indifferent about hate groups, a spokesperson said. Content shared that condemns or merely documents events can remain up. But that which is interpreted
to show support for hate groups is banned and will be removed.
Eventually Facebook allowed the photos to remain on the platform, apologizing for the error both in a message and in a personal phone call.
The European Union is in the process of creating an authority to monitor and censor so-called fake news. It is setting up a High-Level 'Expert'
Group. The EU is currently consulting media professionals and the public to decide what powers to give to this EU body, which is to begin operation next spring.
The World Socialist Web Site
has its own colourful view on the intentions of the body, but I don't suppose it is too far from the truth:
An examination of the EU's announcement shows that it is preparing mass state censorship aimed not at false information, but at news reports or political views that encourage popular opposition to the European ruling class.
It aims to create conditions where unelected authorities control what people can read or say online.
EU Vice-President Frans Timmermans explained the move in ominous terms:
We live in an era where the flow of information and misinformation has become almost overwhelming. The EU's task is to protect its citizens from fake news and to manage the information they receive.
According to an EU press release, the EU Commission, another unelected body, will select the High-Level Expert Group, which is to start in January 2018 and will work over several months. It will discuss possible future actions to strengthen
citizens' access to reliable and verified information and prevent the spread of disinformation online.
Who will decide what views are verified, who is reliable and whose views are disinformation to be deleted from Facebook or removed from Google search results? The EU, of course.
Twitter announced yesterday that it would begin removing verification badges for famous tweeters that it does not
approve of. Not for what is tweeted, but for offline behaviour Twitter does not like.
The key phrase in Twitter's policy update is this one: Reasons for removal may reflect behaviors on and off Twitter. Before yesterday, the rules explicitly applied only to behavior on Twitter. From now on, holders of verified badges will be held
accountable for their behavior in the real world as well. Twitter has promised further information about the new censorship policy in due course.
Many questions remain unanswered. What will the company's review consist of? How will it examine users' offline behavior? Will it simply respond to reports, or will it actively look for violations? Will it handle the work with its existing team,
or will it expand its trust and safety team?
Twitter has immediately rescinded blue tick verification from accounts belonging to far-right activists, including Jason Kessler, a US white supremacist, and Tommy Robinson, founder of the English Defence League.
The European Union voted on November 14 to pass a new internet censorship regulation, nominally in the name of consumer protection. But of course
censorship often hides behind consumer protection, e.g. the UK's upcoming internet porn ban is enacted in the name of protecting under-18 internet consumers.
The new EU-wide law gives extra power to national consumer protection agencies, but it also contains a vaguely worded clause that grants them the power to block and take down websites without judicial oversight.
Member of the European Parliament Julia Reda said in a speech in the European Parliament Plenary during a last ditch effort to amend the law:
The new law establishes overreaching Internet blocking measures that are neither proportionate nor suitable for the goal of protecting consumers and come without mandatory judicial oversight,
According to the new rules, national consumer protection authorities can order any unspecified third party to block access to websites without requiring judicial authorization, Reda added later in the day on her blog .
This new law is an EU regulation and not a directive, meaning it is obligatory for all EU states.
The new law proposal started out with good intentions, but sometime in the spring of 2017, the proposed regulation received a series of amendments that watered down some consumer protections but kept intact the provisions that ensured national
consumer protection agencies can go after and block or take down websites.
Presumably multinational companies had been lobbying for new weapons in their battle against copyright infringement. For instance, the new law gives national consumer protection agencies the legal power to inquire and obtain information about
domain owners from registrars and Internet Service Providers.
Besides the website blocking clause, authorities will also be able to request information from banks to detect the identity of the responsible trader, to freeze assets, and to carry out mystery shopping to check geographical discrimination or
Governments around the world are dramatically increasing their efforts to manipulate information on social media, threatening the notion of the internet as a liberating technology, according to Freedom on the Net 2017, the latest
edition of the annual country-by-country assessment of online freedom, released today by Freedom House.
Online manipulation and disinformation tactics played an important role in elections in at least 18 countries over the past year, including the United States, damaging citizens' ability to choose their leaders based on factual news and authentic
debate. The content manipulation contributed to a seventh consecutive year of overall decline in internet freedom, along with a rise in disruptions to mobile internet service and increases in physical and technical attacks on human rights
defenders and independent media.
"The use of paid commentators and political bots to spread government propaganda was pioneered by China and Russia but has now gone global," said Michael J. Abramowitz, president of Freedom House. "The effects of these rapidly
spreading techniques on democracy and civic activism are potentially devastating."
"Governments are now using social media to suppress dissent and advance an antidemocratic agenda," said Sanja Kelly, director of the Freedom on the Net project. "Not only is this manipulation difficult to detect, it is more
difficult to combat than other types of censorship, such as website blocking, because it's dispersed and because of the sheer number of people and bots deployed to do it."
"The fabrication of grassroots support for government policies on social media creates a closed loop in which the regime essentially endorses itself, leaving independent groups and ordinary citizens on the outside," Kelly said.
Freedom on the Net 2017 assesses internet freedom in 65 countries, accounting for 87 percent of internet users worldwide. The report primarily focuses on developments that occurred between June 2016 and May 2017, although some more recent
events are included as well.
Governments in a total of 30 countries deployed some form of manipulation to distort online information, up from 23 the previous year. Paid commentators, trolls, bots, false news sites, and propaganda outlets were among the techniques used by
leaders to inflate their popular support and essentially endorse themselves.
In the Philippines, members of a "keyboard army" are tasked with amplifying the impression of widespread support of the government's brutal crackdown on the drug trade. Meanwhile, in Turkey, reportedly 6,000 people have been enlisted by
the ruling party to counter government opponents on social media.
Most governments targeted public opinion within their own borders, but others sought to expand their interests abroad--exemplified by a Russian disinformation campaign to influence the American election. Fake news and aggressive trolling of
journalists both during and after the presidential election contributed to a score decline in the United States' otherwise generally free environment.
Governments in at least 14 countries actually restricted internet freedom in a bid to address content manipulation. Ukrainian authorities, for example, blocked Russia-based services, including the country's most widely used social network and
search engine, after Russian agents flooded social media with fabricated stories advancing the Kremlin's narrative.
"When trying to combat online manipulation from abroad, it is important for countries not to overreach," Kelly said. "The solution to manipulation and disinformation lies not in censoring websites but in teaching citizens how to
detect fake news and commentary. Democracies should ensure that the source of political advertising online is at least as transparent online as it is offline."
For the third consecutive year, China was the world's worst abuser of internet freedom, followed by Syria and Ethiopia. In Ethiopia, the government shut down mobile networks for nearly two months as part of a state of emergency declared in
October 2016 amid large-scale antigovernment protests.
Less than one-quarter of the world's internet users reside in countries where the internet is designated Free, meaning there are no major obstacles to access, onerous restrictions on content, or serious violations of user rights in the form of
unchecked surveillance or unjust repercussions for legitimate speech.
Governments manipulated social media to undermine democracy: Governments in 30 of the 65 countries assessed attempted to control online discussions. The practice has become significantly more widespread and technically
sophisticated over the last few years.
State censors targeted mobile connectivity: An increasing number of governments have restricted mobile internet service for political or security reasons. Half of all internet shutdowns in the past year were specific to mobile
connectivity, with most others affecting mobile and fixed-line service simultaneously. Most mobile shutdowns occurred in areas populated with ethnic or religious minorities such as Tibetan areas in China and Oromo areas in Ethiopia.
More governments restricted live video: As live video gained popularity with the emergence of platforms like Facebook Live and Snapchat's Live Stories, internet users faced restrictions or attacks for live streaming in at least nine
countries, often to prevent streaming of antigovernment protests. Countries like Belarus disrupted mobile connectivity to prevent livestreamed images from reaching a mass audience.
Technical attacks against news outlets, opposition, and rights defenders increased: Cyberattacks against government critics were documented in 34 out of 65 countries. Many governments took additional steps to restrict encryption, leaving
citizens further exposed.
New restrictions on virtual private networks (VPNs): 14 countries now restrict tools used to circumvent censorship in some form, and six countries introduced new restrictions, either legal bans or technical blocks on VPN websites or network traffic.
Physical attacks against netizens and online journalists expanded dramatically: The number of countries that featured physical reprisals for online speech increased by 50 percent over the past year--from 20 to 30 of the countries
assessed. In eight countries, people were murdered for their online expression. In Jordan, a Christian cartoonist was murdered for mocking Islamist militants' vision of heaven, while in Myanmar, a journalist was murdered after posting
notes on Facebook alleging corruption.
Since June 2016, 32 of the 65 countries assessed in Freedom on the Net saw internet freedom deteriorate. The most notable declines were documented in Ukraine, Egypt, and Turkey.
Theresa May has made a speech at the Lord Mayor's Banquet saying that fake news and Russian propaganda are threatening the international
order. She said:
It is seeking to weaponise information. Deploying its state-run media organisations to plant fake stories and photo-shopped images in an attempt to sow discord in the west and undermine our institutions.
The UK did not want to return to the Cold War, or to be in a state of perpetual confrontation, but it would have to act to protect the interests of the UK, Europe and the rest of the world if Russia continues on its current path.
May did not say whether she was concerned with Russian intervention in any UK democratic processes, but Ben Bradshaw, a leading Labour MP, is among those to have called for a judge-led inquiry into the possibility that Moscow tried to influence
the result of the Brexit referendum.
Russia has been accused of running troll factories that disseminate fake news and divisive posts on social media. It emerged on Monday that a Russian bot account was one of those that shared a viral image that claimed a Muslim woman ignored
victims of the Westminster terror attack as she walked across the bridge.
Surely declining wealth and poor economic prospects are a more likely root cause of public discontent rather than a little trivial propaganda.
Three countries are using the European Council to put dangerous pro-censorship amendments into the already controversial Copyright Directive.
The copyright law that OpenMedia has been campaigning on -- the one pushing the link tax and censorship machines -- is facing some dangerous sabotage in the European Council, driven in particular by France, Spain and Portugal.
The Bill is currently being debated in the European Parliament but the European Council also gets to make its own proposed version of the law, and the two versions eventually have to compromise with each other. This European Council is made up of
ministers from the governments of all EU member states. Those ministers are usually represented by staff who do most of the negotiating on their behalf. It is not a transparent body, but it does have a lot of power.
The Council can choose to agree with Parliament's amendments, but it doesn't look like that's going to happen in this case. In fact they've been taking worrying steps, particularly when it comes to the censorship machine proposals.
As the proposal stands before the Council intervention, it encourages sites where users upload and make content to install filtering mechanisms -- a kind of censorship machine which would use algorithms to look for copyrighted content and then
block the post. This is despite the fact that there are many legal reasons to use copyrighted content.
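To illustrate why such filtering is blunt, here is a hypothetical sketch of an upload filter, assuming a simple fingerprint-matching approach (the function names and data are invented for illustration; real systems such as YouTube's Content ID use far more sophisticated perceptual matching):

```python
import hashlib

# Hypothetical fingerprint database of known copyrighted works.
# Real filters use perceptual hashes that survive re-encoding;
# this sketch uses exact SHA-256 digests for simplicity.
KNOWN_COPYRIGHTED = {
    hashlib.sha256(b"frame data of a protected film clip").hexdigest(),
}

def filter_upload(content: bytes) -> str:
    """Block any upload whose fingerprint matches a protected work.

    Note what the filter cannot do: it has no idea whether the
    upload is a licensed copy, a quotation, a parody or a review,
    all of which may be perfectly legal uses of the same material.
    """
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_COPYRIGHTED:
        return "blocked"
    return "allowed"

print(filter_upload(b"frame data of a protected film clip"))  # blocked
print(filter_upload(b"an original home video"))               # allowed
```

The key point is that the match is made on the content alone, never on the context of the upload, which is exactly what the Council's proposal to remove media "regardless of the context in which it is uploaded" would mandate.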
These new changes want to go a step further. They firstly want to make the censorship machine demand even more explicit. As Julia Reda puts it:
They want to add to the Commission proposal that platforms need to automatically remove media that has once been classified as infringing, regardless of the context in which it is uploaded.
Then, they go all in with a suggested rewrite of existing copyright law to end the liability protections which are vital for a functioning web.
Liability protection laws mean we (not websites) are responsible for what we say and post online. This is so that websites are not obliged to monitor everything we say or do. If they were liable there would be much overzealous blocking and
censorship. These rules made YouTube, podcast platforms, social media, all possible. The web as we know it works because of these rules.
But the governments of France, Spain, Portugal and the Estonian Presidency of the Council want to undo them. It would mean all these sites could be sued for any infringement posted there. It would discourage new sites from developing. And it
would cause huge legal confusion -- given that the exact opposite is laid out in a different EU law.
Home Secretary Amber Rudd told an audience at New America, a Washington think tank, on Thursday night that there was an
online arms race between militants and the forces of law and order.
She said that social media companies should press ahead with development and deployment of AI systems that could spot militant content before it is posted on the internet and block it from being disseminated.
Since the beginning of 2017, violent militant operatives have created 40,000 new internet destinations, Rudd said. As of 12 months ago, social media companies were taking down about half of the violent militant material from their sites within
two hours of its discovery, and lately that proportion has increased to two thirds, she said.
YouTube is now taking down 83% of violent militant videos it discovers, Rudd said, adding that UK authorities have evidence that the Islamic State was now struggling to get some of its materials online.
She added that in the wake of an increasing number of vehicle attacks by Islamic terrorists, British security authorities were reviewing rental car regulations and considering ways for authorities to collect more relevant data from car hire companies.
YouTube has announced an extension of its age restriction policy to cover parody videos that use children's characters but feature inappropriate themes.
The new policy was announced on Thursday and will see age restrictions applied to content featuring inappropriate use of family entertainment characters, such as unofficial videos depicting Peppa Pig. The company already had a policy that rendered such
videos ineligible for advertising revenue, in the hope that doing so would reduce the motivation to create them in the first place. Juniper Downs, YouTube's director of policy, explained:
Earlier this year, we updated our policies to make content featuring inappropriate use of family entertainment characters ineligible for monetisation. We're in the process of implementing a new policy that age restricts this content in the
YouTube main app when flagged. Age-restricted content is automatically not allowed in YouTube Kids. The YouTube team is made up of parents who are committed to improving our apps and getting this right.
Age-restricted videos can't be seen by users who aren't logged in, or by those who have entered their age as below 18 on both the site and the app. More importantly, they also don't show up on YouTube Kids, a separate app aimed at parents who
want to let their children under 13 use the site unsupervised.
The latest measure to deny Russian people the freedom of expression online will take effect on 1st
November. New laws will require VPNs to comply with the Russian State's online censorship programme and block all websites that are on the government censor's block list.
The Russian State Duma passed the new piece of legislation earlier this year and it was quickly signed into law by President Vladimir Putin.
Most of the major international VPN providers are not expected to comply with the law. Some, including Private Internet Access (PIA), have already confirmed this. PIA also removed all their servers from Russia last year after a number were seized
without prior warning. It remains to be seen how the Russian state will try to sanction them as a result, but their own websites can certainly be expected to be added to the blacklist.
Online rights activists have also been quick to condemn the new law. Eva Galperin, the Director of Cybersecurity at the Electronic Frontier Foundation (EFF) said she believed the law would only be applied selectively. It is expected that
the Russian regime will use the new powers to target opposition activists ahead of next year's Presidential Elections. Overseas companies and businesspeople based in Russia which use VPNs are unlikely to see their service affected.
Update: Small Russian ISPs won't be able to afford new state blocking requirements
A draft order of Roskomnadzor, Russia's federal internet censor, requires the most expensive and performance-degrading method of blocking: deep packet inspection (DPI) of all passing traffic. Because of its high cost, Roskomnadzor's
requirements will lead small and medium-sized providers to sell their businesses to large ones, experts say.
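A toy sketch, with invented addresses and hostnames, of why DPI is so much costlier than ordinary address-based blocking: address blocking only checks a fixed header field, while DPI must scan the payload of every packet for forbidden patterns:

```python
# Toy illustration only: real DPI appliances reassemble flows and parse
# protocols, but the cost asymmetry is the same.

BLOCKED_IPS = {"203.0.113.5"}                 # hypothetical blacklist entry
BLOCKED_HOSTNAMES = [b"blocked-news.example"]  # hypothetical blocked site

def block_by_address(dst_ip: str) -> bool:
    # O(1) lookup on a single header field -- cheap for any ISP.
    return dst_ip in BLOCKED_IPS

def block_by_dpi(payload: bytes) -> bool:
    # Must scan every byte of every packet's payload for every blocked
    # pattern -- the expensive part small ISPs reportedly cannot afford.
    return any(name in payload for name in BLOCKED_HOSTNAMES)

packet = {"dst_ip": "198.51.100.7",
          "payload": b"...ClientHello...blocked-news.example..."}
print(block_by_address(packet["dst_ip"]))  # False: IP not on the list
print(block_by_dpi(packet["payload"]))     # True: hostname found in payload
```

The example also shows why censors want DPI: a blocked site can move to a new IP address and evade an address blacklist, but its hostname still appears inside the traffic itself.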
The law on the prohibition of VPNs, enacted in Russia, has not yet affected access to prohibited sites. As before, you can still access them via anonymizers, VPNs and Tor.
Analysts at Roskomsvoboda, a public organization whose activities are aimed at counteracting censorship on the Internet, explain that users will not see any effects before December anyway, as the law allows 36 days for VPN
providers to respond to blocking requests before any action is taken against them.
Some well-known VPN services have already reacted to this next round of censorship in the Russian segment of the Internet. Representatives of ExpressVPN expressed surprise, asking how exactly Russia intends to implement the new
regulation in practice:
ExpressVPN will certainly never agree with any standards that would jeopardize the ability of our product to protect the digital rights of users, remarks the company.
TunnelBear noted that the service belongs to a Canadian company and hence operates according to local laws, which do not limit it in any way.
VPN service TorGuard also does not intend to cooperate with Roskomnadzor, directly declaring that it will refuse to block sites if approached with such requests.
The Senate Commerce Committee just approved a slightly modified version of SESTA, the Stop Enabling Sex Traffickers Act ( S. 1693 ).
SESTA was and continues to be a deeply flawed bill. It is intended to weaken the section commonly known as CDA 230 or simply Section 230, one of the most important laws protecting free expression online . Section 230 says that for purposes of
enforcing certain laws affecting speech online, an intermediary cannot be held legally responsible for any content created by others.
SESTA would create an exception to Section 230 for laws related to sex trafficking, thus exposing online platforms to an immense risk of civil and criminal litigation. What that really means is that online platforms would be forced to take
drastic measures to censor their users.
Some SESTA supporters imagine that compliance with SESTA would be easy--that online platforms would simply need to use automated filters to pinpoint and remove all messages in support of sex trafficking and leave everything else untouched. But
such filters do not and cannot exist: computers aren't good at recognizing subtlety and context, and with severe penalties at stake, no rational company would trust them to.
Online platforms would have no choice but to program their filters to err on the side of removal, silencing a lot of innocent voices in the process. And remember, the first people silenced are likely to be trafficking victims themselves: it would
be a huge technical challenge to build a filter that removes sex trafficking advertisements but doesn't also censor a victim of trafficking telling her story or trying to find help.
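A deliberately naive sketch, with invented example posts, of why a keyword-style filter cannot make this distinction: the same terms appear in an advertisement and in a survivor's testimony, so both get removed:

```python
# Illustrative only: a crude term-matching filter of the kind platforms
# might deploy under legal pressure. It sees words, not intent.

SUSPECT_TERMS = {"trafficking", "escort"}

def naive_filter(post: str) -> bool:
    """Return True if the post would be removed."""
    words = post.lower().split()
    return any(term in words for term in SUSPECT_TERMS)

ad = "escort services available tonight"
testimony = "i survived trafficking and want to warn others"
news = "local charity opens shelter for survivors"

print(naive_filter(ad))         # True  -- removed, as intended
print(naive_filter(testimony))  # True  -- removed, silencing a victim
print(naive_filter(news))       # False -- kept
```

Distinguishing the first two posts requires understanding intent and context, which is precisely what automated filters lack, and what severe legal penalties would push platforms to stop attempting.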
Along with the Center for Democracy and Technology, Access Now, Engine, and many other organizations, EFF signed a letter yesterday urging the Commerce Committee to change course. We explained the silencing effect that SESTA would have on online speech:
Pressures on intermediaries to prevent trafficking-related material from appearing on their sites would also likely drive more intermediaries to rely on automated content filtering tools, in an effort to conduct comprehensive content moderation
at scale. These tools have a notorious tendency to enact overbroad censorship, particularly when used without (expensive, time-consuming) human oversight. Speakers from marginalized groups and underrepresented populations are often the hardest
hit by such automated filtering.
It's ironic that supporters of SESTA insist that computerized filters can serve as a substitute for human moderation: the improvements we've made in filtering technologies in the past two decades would not have happened without the safety
provided by a strong Section 230, which provides legal cover for platforms that might harm users by taking down, editing or otherwise moderating their content (in addition to shielding platforms from liability for illegal user-generated content).
We find it disappointing, but not necessarily surprising, that the Internet Association has endorsed this deeply flawed bill. Its member companies--many of the largest tech companies in the world--will not feel the brunt of SESTA in the same way
as their smaller competitors. Small Internet startups don't have the resources to police every posting on their platforms, which will uniquely pressure them to censor their users--that's particularly true for nonprofit and noncommercial platforms
like the Internet Archive and Wikipedia. It's not surprising when a trade association endorses a bill that would give its own members a massive competitive advantage.
If you rely on online communities in your day-to-day life; if you believe that your right to speak matters just as much on the web as on the street; if you hate seeing sex trafficking victims used as props to advance an agenda of censorship;
please take a moment to write your members of Congress and tell them to oppose SESTA.
The UK's domestic pornography industry is being screwed by age verification laws unveiled by the Government.
New laws passed as part of the Digital Economy Act will require websites hosting pornographic material to verify the ages of visitors from the UK or face being blocked by ISPs.
Pandora/Blake, who describes themself as a feminist pornographer, and Myles Jackman, obscenity lawyer and legal officer at the Open Rights Group, told Sky News that this posed an enormous privacy risk to viewers.
They argue the age verification requirements may harm small businesses and curtail the freedom of expression by allowing multinational pornography giants to monopolise the industry.
Many of the most popular pornographic websites (Pornhub, RedTube, YouPorn) and production studios (Brazzers, Digital Playground) are owned by one company: MindGeek.
MindGeek stands to increase its already considerable market share by offering age verification services to smaller sites.
Pandora/Blake said the Government is refusing to engage with pornographers who are concerned the laws will harm their business.
Age checks are going to be expensive, they said, noting figures given to them ranged from £0.05 to £1.50 per age check. If you know anything about the economics of porn, you realise that if you're paying a cost per viewer, rather than per
customer, then you're going to be making a loss by orders of magnitude.
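A hypothetical worked example, with assumed numbers (the visitor count, conversion rate and prices below are illustrative, not from the source), shows why per-viewer charging swamps per-customer revenue:

```python
# Hypothetical numbers to illustrate per-viewer vs per-customer economics.
visitors_per_month = 100_000   # every visitor must be age-checked
conversion_rate = 0.01         # assume 1% of visitors become customers
subscription_price = 10.00     # GBP per paying customer
cost_per_check = 0.50          # GBP, mid-range of the quoted £0.05-£1.50

revenue = visitors_per_month * conversion_rate * subscription_price
verification_cost = visitors_per_month * cost_per_check

print(f"revenue: £{revenue:,.0f}")            # £10,000
print(f"checks:  £{verification_cost:,.0f}")  # £50,000 -- a 5x loss
```

On these assumptions, verification costs five times the site's entire revenue, because checks scale with visitors while income scales only with the small fraction who pay.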
I'm seeing a lot of smaller sites simply giving up pre-emptively. There's already a chilling effect of sites not knowing how they're going to possibly be able to comply, said Pandora/Blake.
A Government spokesperson told Sky News that the BBFC was the intended regulator for the age verification system, and would be required to publish guidance regarding the arrangements for making pornographic material available in a compliant manner.
The BBFC said that as it had not yet been appointed the regulator, it could not comment on the concerns raised to Sky News.
On Tuesday 7 November, three joined cases brought by civil liberties and human rights organisations challenging UK Government
surveillance will be heard in the Grand Chamber of the European Court of Human Rights (ECtHR).
Big Brother Watch and Others v UK will be heard alongside 10 Human Rights Organisations and Others v UK and the Bureau of Investigative Journalism and Alice Ross v UK, four years after the initial application to the ECtHR.
Big Brother Watch, English PEN, Open Rights Group and Dr Constanze Kurz made their application to the Court in 2013 following Edward Snowden's revelations that UK intelligence agencies were running a mass surveillance and bulk communications
interception programme, TEMPORA, as well as receiving data from similar US programmes, PRISM and UPSTREAM, interfering with UK citizens' right to privacy.
The case questions the legality of the indiscriminate surveillance of UK citizens and the bulk collection of their personal information and communications by UK intelligence agencies under the Regulation of Investigatory Powers Act (RIPA). The UK
surveillance regime under RIPA was untargeted, meaning that UK citizens' personal communications and information were collected at random, without any element of suspicion or evidence of wrongdoing, and this regime was in effect indefinitely.
The surveillance regime is being challenged on the grounds that there was no sufficient legal basis, no accountability, and no adequate oversight of these programmes, and as a result infringed UK citizens' Article 8 right to a private life.
In 2014, the Bureau of Investigative Journalism made an application to the ECtHR, followed by 10 Human Rights Organisations and others in 2015 after they received a judgment from the UK Investigatory Powers Tribunal. All three cases were joined
together, and the Court exceptionally decided that there would be a hearing.
The result of these three cases has the potential to impact the current UK surveillance regime under the Investigatory Powers Act. This legal framework has already been strongly criticised by the Court of Justice of the European Union in Watson.
A favourable judgment in this case will finally push the UK Government to constrain these wide-ranging surveillance powers, implement greater judicial control and introduce greater protection such as notifying citizens that they have been put under surveillance.
Daniel Carey of Deighton Pierce Glynn, solicitor for Big Brother Watch, Open Rights Group, English PEN and Constanze Kurz, said:
Historically, it has required a ruling from this Court before improvements in domestic law in this area are made. Edward Snowden broke that cycle by setting in motion last year's Investigatory Powers Act, but my clients are asking the Court to
limit bulk interception powers in a much more meaningful way and to require significant improvements in how such intrusive powers are controlled and reported.
Griff Ferris, Researcher at Big Brother Watch, said:
This case raises long-standing issues relating to the UK Government's unwarranted intrusion into people's private lives, giving the intelligence agencies free rein to indiscriminately intercept and monitor people's private communications
without evidence or suspicion.
UK citizens who are not suspected of any wrongdoing should be able to live their lives in both the physical and the digital world safely and securely without such Government intrusion.
If the Court finds that the UK Government infringed UK citizens' right to privacy, this should put further pressure on the Government to implement measures to ensure that its current surveillance regime doesn't make the same mistakes.
Antonia Byatt, Interim Director of English PEN, said:
More than four years since Edward Snowden's revelations and nearly one year since the Investigatory Powers Act was passed, this is a landmark hearing that seeks to safeguard our privacy and our right to freedom of expression.
The UK now has the most repressive surveillance legislation of any western democracy; this is a vital opportunity to challenge the unprecedented erosion of our private lives and our liberty to communicate.
Jim Killock, Executive Director of Open Rights Group, said:
Mass surveillance must end. Our democratic values are threatened by the fact of pervasive, constant state surveillance. This case gives the court the opportunity to rein it back, and to show the British Government that there are clear limits.
Hoovering everything up and failing to explain what you are doing is not acceptable.
The truth is that a lot of the material that terrorists share is not actually illegal at all. Instead, it often consists of news reports about perceived injustices in Palestine, stuff that you could never censor in a free society.
A trade group representing giants of Internet business from Facebook to Microsoft has just endorsed a "compromise" version of the Stop Enabling Sex Traffickers Act (SESTA), a misleadingly named bill that would be disastrous for free
speech and online communities.
Just a few hours after Senator Thune's amended version of SESTA surfaced online, the Internet Association rushed to praise the bill's sponsors for their "careful work and bipartisan collaboration." The compromise bill has all of the
same fundamental flaws as the original. Like the original, it does nothing to fight sex traffickers, but it would silence
legitimate speech online
It shouldn't really come as a surprise that the Internet Association has fallen in line to endorse SESTA. The Internet Association doesn't represent the Internet--it represents the few companies that profit the most from Internet activity.
The Internet Association can tell itself and its members whatever it wants--that it held its ground for as long as it could despite overwhelming political opposition, that the law will motivate its members to make amazing strides in filtering
technologies--but there is one thing that it simply cannot say: that it has done something to fight sex trafficking.
A serious problem calls for serious solutions, and SESTA is not a serious solution. At the heart of the sex trafficking problem lies a complex set of economic, social, and legal issues. A
broken immigration system
and a torn safety net. A law enforcement regime that puts trafficking victims at risk for reporting their traffickers. Officers who aren't adequately trained to use the online tools at their disposal, or use them against victims. And yes, if
there are cases where online platforms themselves directly contribute to unlawful activity, it's a problem that the Department of Justice won't use the powers Congress has already given it.
These are the factors that deserve intense deliberation and debate by lawmakers, not a ham-fisted attempt to punish online communities.
The Internet Association let the Internet down today. Congress should not make the same mistake.
A federal court in California has rendered an order from the Supreme Court of Canada unenforceable. The order in question required Google to remove
a company's websites from search results globally, not just in Canada. This ruling violates US law and puts free speech at risk, the California court found.
When the Canadian company Equustek Solutions asked Google to remove competing websites that it claimed were illegally using its intellectual property, Google refused to do so globally.
This resulted in a legal battle that came to a climax in June, when the Supreme Court of Canada ordered Google to remove a company's websites from its search results. Not just in Canada, but all over the world.
With options to appeal exhausted in Canada, Google took the case to a federal court in the US. The search engine requested an injunction to disarm the Canadian order, arguing that a worldwide blocking order violates the First Amendment.
Surprisingly, Equustek decided not to defend itself, and without opposition a California District Court sided with Google. During a hearing, Google attorney Margaret Caruso stressed that it should not be possible for foreign countries to
implement measures that run contrary to core values of the United States.
The search engine argued that the Canadian order violated Section 230 of the Communications Decency Act, which immunizes Internet services from liability for content created by third parties. With this law, Congress specifically chose not to
deter harmful online speech by imposing liability on Internet services.
In an order, signed shortly after the hearing, District Judge Edward Davila concludes that Google qualifies for Section 230 immunity in this case. As such, he rules that the Canadian Supreme Court's global blocking order goes too far.
The ruling is important in the broader scheme. If foreign courts are allowed to grant worldwide blockades, free speech could be severely hampered. Today it's a relatively unknown Canadian company, but what if the Chinese Government asked Google
to block the websites of VPN providers?
Prager University, a nonprofit that creates educational videos with conservative slants, has filed a lawsuit against YouTube and its
parent company, Google, alleging that the company is censoring its content.
PragerU claims that more than three dozen of its videos have been restricted by YouTube over the past year. As a result, those who browse YouTube in restricted mode -- including many college and high school students -- are prevented from viewing
the content. Furthermore, restricted videos cannot earn any ad revenue.
PragerU says that by limiting access to their videos without a clear reason, YouTube has infringed upon PragerU's First Amendment rights.
YouTube has restricted edgy content in order to protect advertisers' brands. A number of advertisers told Google that they did not want their brand to be associated with edgy content. Google responded by banning all advertising from videos
claimed to contain edgy content. It keeps the brands happy, but it has decimated many a small online business.
The Reddit moderators have explained new censorship rules in the following post:
We want to let you know that we have made some updates to our site-wide rules regarding violent content. We did this to alleviate user and moderator confusion about allowable content on the site. We also are making this update so that Reddit's
content policy better reflects our values as a company.
In particular, we found that the policy regarding inciting violence was too vague, and so we have made an effort to adjust it to be more clear and comprehensive. Going forward, we will take action against any content that encourages, glorifies,
incites, or calls for violence or physical harm against an individual or a group of people; likewise, we will also take action against content that glorifies or encourages the abuse of animals. This applies to ALL content on Reddit, including
memes, CSS/community styling, flair, subreddit names, and usernames.
We understand that enforcing this policy may often require subjective judgment, so all of the usual caveats apply with regard to content that is newsworthy, artistic, educational, satirical, etc, as mentioned in the policy. Context is key.
Whilst speaking about the Government's recently published Internet Safety Strategy green paper, Suzie Hargreaves of the Internet Watch Foundation
noted upcoming changes to the UK Council for Child Internet Safety (UKCCIS). This is a government run body that includes many members from industry and child protection campaigners. It debates many internet issues about the protection of children
which routinely touches on internet control and censorship. Hargreaves noted that the UKCCIS looks set to expand its remit. She writes:
The Government recognises the work of UKCCIS and wants to align it more closely with the Internet Safety Strategy. Renaming it the UK Council for Internet Safety (UKCIS), the Government is proposing broadening the council's remit to adults,
having a smaller and higher-profile executive board, reconsidering the role of the working groups to ensure that there is flexibility to respond to new issues, looking into an independent panel or working group to discuss the social media levy,
and reviewing available online safety resources.
There was plenty of strong language flying around on Twitter in response to the Harvey Weinstein scandal. Twitter got a bit
confused about who was harassing who, and ended up suspending Weinstein critic Rose McGowan for harassment. Twitter ended up being boycotted over its wrong call, and so Twitter bosses have been banging their heads together to do something.
Wired has got hold of an email outlining an expansion of the content liable to Twitter censorship, and also more severe sanctions for errant tweeters. Twitter's head of safety policy wrote of new measures to be rolled out in the coming weeks:
Our definition of "non-consensual nudity" is expanding to more broadly include content like upskirt imagery, "creep shots," and hidden camera content. Given that people appearing in this content often do not know the material
exists, we will not require a report from a target in order to remove it.
While we recognize there's an entire genre of pornography dedicated to this type of content, it's nearly impossible for us to distinguish when this content may/may not have been produced and distributed consensually. We would rather err on the
side of protecting victims and removing this type of content when we become aware of it.
Unwanted sexual advances
Pornographic content is generally permitted on Twitter, and it's challenging to know whether or not sexually charged conversations and/or the exchange of sexual media may be wanted. To help infer whether or not a conversation is consensual, we
currently rely on and take enforcement action only if/when we receive a report from a participant in the conversation.
We are going to update the Twitter Rules to make it clear that this type of behavior is unacceptable. We will continue taking enforcement action when we receive a report from someone directly involved in the conversation.
Hate symbols and imagery (new)
We are still defining the exact scope of what will be covered by this policy. At a high level, hateful imagery, hate symbols, etc will now be considered sensitive media (similar to how we handle and enforce adult content and graphic violence).
More details to come.
Violent groups (new)
We are still defining the exact scope of what will be covered by this policy. At a high level, we will take enforcement action against organizations that use/have historically used violence as a means to advance their cause. More details to come here as well.
Tweets that glorify violence (new)
We already take enforcement action against direct violent threats ("I'm going to kill you"), vague violent threats ("Someone should kill you") and wishes/hopes of serious physical harm, death, or disease ("I hope someone
kills you"). Moving forward, we will also take action against content that glorifies ("Praise be to for shooting up. He's a hero!") and/or condones ("Murdering makes sense. That way they won't be a drain on social
services"). More details to come.
Offsite Article: Changes to the way that 'sensitive' content is defined and blocked from Twitter search