Among other things, Amazon Prime makes a good many of its digital videos available to stream for free. Well, until now anyway. Many indie horror filmmakers are having their videos removed from the Prime service under an apparent new policy on the
part of Amazon.
Amazon says it is cracking down on extreme content and is sending out emails to filmmakers to explain the new censorship policy.
Here is an example email, supplied by Scott Schirmer, regarding his film Harvest Lake:
Amazon Video Direct periodically revises our content policy in order to improve the Amazon Video customer experience. Effective March 1, 2017, Amazon Video Direct will no longer
allow titles containing persistent or graphic sexual or violent acts, gratuitous nudity and/or erotic themes ('adult content') to be offered as Included with Prime or Free with Pre-Roll Ad.
We have identified the
following titles within your catalog which contain adult content:
In alignment with our new policy, the Included with Prime and/or Free with Pre-Roll Ad offers will be removed
from these titles on March 1, 2017.
For any title to remain available to customers with an Included with Prime or Free with Pre-Roll Ad offer, its content including cover images, metadata, and/or video content must
be free of persistent or graphic sexual or violent acts, gratuitous nudity and/or erotic themes.
A politically correct Californian law targeting age discrimination has failed to win the immediate approval of a judge. The law requires dates of birth or ages to be withheld from documents and publications used for job recruitment. One high profile
consequence is that the Internet Movie Database (IMDb) would be banned from including age information in the profiles of stars and crew. This has led to a challenge to the law on grounds of unconstitutional censorship.
This week's ruling does
not look good for the Californian law, as the judge decided that the birthday prohibition shall not apply until the full legal challenge has been decided. District Judge Vince Chhabria ruled:
[I]t's difficult to imagine how AB
1687 could not violate the First Amendment. The statute prevents IMDb from publishing factual information (information about the ages of people in the entertainment industry) on its website for public consumption. This is a restriction of non-commercial
speech on the basis of content.
To be sure, the government has identified a compelling goal -- preventing age discrimination in Hollywood. But the government has not shown how AB 1687 is 'necessary' to advance that goal. In fact, it's not clear
how preventing one mere website from publishing age information could meaningfully combat discrimination at all. And even if restricting publication on this one website could confer some marginal antidiscrimination benefit, there are likely more direct,
more effective, and less speech-restrictive ways of achieving the same end.
Chhabria held that -- because the law restricts IMDb's speech rights -- the site is suffering irreparable harm and enjoined the government from enforcing the
law pending the resolution of this lawsuit.
Twitter has introduced a new censorship system with the unlikely-sounding capability to detect abusive tweets and suspend accounts without waiting for complaints to be flagged. Transgressions result in the senders receiving half-day suspensions.
The company has refused to provide details on specifically how the new system works, but using a combination of behavioral and keyword indicators, the filter flags posts it deems to be violations of Twitter's acceptable speech policy and issues users
suspensions of half a day during which they cannot post new globally accessible tweets and their existing tweets are visible only to followers.
From the platform that once called itself the free speech wing of the free speech party, these
new tools mark an incredible turn of events. The anti-censorship ethic seems to have been lost in a failed attempt to sell the company after prospective buyers were unhappy with the lack of censorship control over the platform.
Twitter has refused to provide even outline ideas of the indicators it is using, especially when it comes to the particular linguistic cues it is concerned with. While offering too much detail might give the upper hand to those who would try to work around the
new system, it is important for the broader community to have at least some understanding of the kinds of language flagged by Twitter's new tool so that they can try to stay within the rules.
It is also unclear why Twitter chose not to permit
users to contest what they believe to be a wrongful suspension. Given that the feature is brand-new and bound to encounter plenty of unforeseen contexts where it could yield a wrong result, it is surprising that Twitter chose not to provide a recovery
mechanism where it could catch these before they become news.
And the first example of censorship was quick to follow. Many outlets this morning picked up on a frightening instance of the Twitter algorithm's new power to police not only the
language we use but the thoughts we express. In this case a user allegedly tweeted a response to a news report about comments made by Senator John McCain and argued that it was his belief that the senator was a traitor who had committed formal
treason against the nation. Twitter did not respond to a request for more information about what occurred in this case, or whether this was indeed the tweet that caused the user to be suspended, but did not dispute that the user had been suspended or that his
use of the word 'traitor' had factored heavily into that suspension.
A congressman has introduced a bill demanding that visitors to America hand over URLs to their social network accounts.
Representative Jim Banks says his proposed rules, titled the Visa Investigation and Social Media Act (VISA) of 2017, require
visa applicants to provide their social media handles to immigration officials. Banks said:
We must have confidence that those entering our country do not intend us harm. Directing Homeland Security to review visa
applicants' social media before granting them access to our country is common sense. Employers vet job candidates this way, and I think it's time we do the same for visa applicants.
Right now, at the US border you can be asked to give
up your usernames by border officers. You don't have to reveal your public profiles, of course. However, if you're a non-US citizen, border agents don't have to let you in, either. Your devices can be seized and checked, and you can be put on a flight
back, if you don't cooperate.
Banks' proposed law appears to end any uncertainty over whether or not non-citizens will have their online personas vetted: if the bill is passed, visa applicants will be required to disclose their online account
names so they can be scrutinized for any unwanted behavior. For travellers on visa-waiver programs, revealing your social media accounts is and will remain optional, but again, being allowed into the country is optional, too.
Banks did not say how
his bill would prevent hopefuls from deleting or simply not listing any accounts that may be unfavorable.
The Register reports that the bill is unlikely to progress.
Changes to the penalties for online copyright infringement could leave UK citizens vulnerable to blackmail by unscrupulous companies that demand payment for alleged copyright infringements.
Proposals in the Digital Economy
Bill would mean that anyone found guilty of online copyright infringement could now get up to ten years in prison. These changes could be misused by companies, such as Goldeneye International, which send threatening letters about copyright infringement.
Typically, the letters accuse the recipients of downloading files illegally and demand that they pay hundreds of pounds or be taken to court.
Often they refer to downloaded pornographic content, to shame the recipients into paying
rather than challenging the company in court. The Citizens Advice Bureau has criticised "unscrupulous solicitors and companies acting on behalf of copyright owners" who take part in such "pay up or else schemes". It advises people who
receive such letters to seek legal advice rather than simply paying them.
How do copyright trolls get 'evidence'?
Copyright trolls compel Internet Service Providers to hand over the personal contact
details of the account holder whose IP addresses are associated with illegal file downloads. However, this in itself is not evidence that the illicit downloading observed is the responsibility of the person receiving the letter.
Common problems include:
Sharing wifi with family, friends or neighbours who may be the actual infringer
Errors with timestamps and logs at the ISP
Why the Digital Economy Bill will make this worse
The Government has argued that it is increasing prison sentences to bring the penalties for online copyright infringement in line with copyright infringement in the real
world. It also insists that it is not trying to impose prison sentences for minor infringements such as file sharing. However, the loose wording of the Bill means that it could be interpreted in this way, and this will undoubtedly be exploited by copyright trolls.
ORG Executive Director Jim Killock said:
Unscrupulous companies will seize on these proposals and use them to exploit people into paying huge fines for online infringements
that they may not have committed.
The Government needs to tighten up these proposals so that only those guilty of serious commercial copyright infringements receive prison sentences.
Helping companies send
threatening letters to teenagers is in no one's interest."
What does the Government need to do?
ORG has asked the Government to amend the Digital Economy Bill to ensure that jail
sentences are only available for serious online copyright infringement. While this will not put an end to the dubious practices of copyright trolls completely, it will prevent them from taking advantage of the law.
Secretary of Homeland Security John Kelly told Congress this week that the Department of Homeland Security is exploring the possibility of asking visa applicants not only for an accounting of what they do online, but for full access to their online
accounts. In a hearing in the House of Representatives, Kelly said:
We want to say for instance, What sites do you visit? And give us your passwords. So that we can see what they do on the internet. And this might
be a week, might be a month. They may wait some time for us to vet. If they don't want to give us that information then they don't come. We may look at their -- we want to get on their social media with passwords. What do you do? What do you say? If they
don't want to cooperate, then they don't come in.
As TechCrunch's Devin Coldewey pointed out, asking people to surrender passwords would raise "obvious" privacy and security problems. But beyond privacy and security,
the proposed probing of online accounts -- including social media and other communications platforms -- would, if implemented, be a major threat to free expression.
Comment platform Disqus speaks of providing tools to get hate speech removed in a blog post:
Recently, many passionate users have reached out to us regarding instances of hate speech across our network. Language that offends, threatens, or
insults groups solely based on race, color, gender, religion, national origin, sexual orientation, or other traits is against our network terms and has no place on the Disqus network. Hate speech is the antithesis of community and an impediment to the
type of intellectual discussion that we strive to facilitate.
We know that language published on our network does not exist within a vacuum. It has the power to reach billions of people, change opinions and incite action. Hate
speech is a threat, not only to those it targets, but to constructive discourse of all forms across all communities. Hate speech creates fear, deters participation in public debate, and hinders diversity of thoughts and opinions.
We have the opportunity and the responsibility to combat hate speech on our network. Our goal is to foster environments where users can express their diverse opinions without the fear of experiencing hate speech. We persistently remove content that contains hate speech or that otherwise violates our terms and policies. However, we know that simply reactively removing hate speech is not sufficient. That is why we are dedicated to building tools for readers and publishers to combat hate speech, and are open to partnering with other organizations who share our goal.
We recently released several features to help readers and publishers better control offensive and otherwise unwanted content. User Blocking and User Flagging allow users to block and report other users who are violating our terms
of service. Our new moderation panel makes it easier for publishers to identify and moderate comments based on user reputation.
Currently, we are working on improved tools to help publishers effectively prevent troublesome users
from returning to their sites. And as we get smarter about identifying hate speech, we are working on ways to automatically remove it from our network.
As an organization, Disqus firmly stands against hate speech in all forms. To
recap, in an effort to combat hate speech both on and off our network, we are making the following commitments:
We will enforce our terms of service by removing hate speech and harassment on our network. To report hate speech and other abusive behavior, please follow these instructions.
We will invest in new
features for publishers and readers to better manage hate speech. We hope to talk more about this soon.
To support this philosophy, we will also be supporting organizations that are equipped to fight hate speech outside of
Disqus. We are exploring several options and plan to dedicate portions of our advertising profits to fight hate speech.
Wikipedia editors have voted to ban the Daily Mail as a source for the website in all but exceptional circumstances after claiming the newspaper was generally unreliable.
The move is highly unusual for the online encyclopaedia, which rarely
puts in place a blanket ban on publications and which still allows links to more obvious sources of 'fake news', such as the Kremlin-backed news organisation Russia Today, and Fox News.
The Wikimedia Foundation, which runs Wikipedia but does not
control its editing processes, said in a statement that volunteer editors on English Wikipedia had discussed the reliability of the Mail since at least early 2015. The foundation said:
This means that the Daily Mail
will generally not be referenced as a 'reliable source' on English Wikipedia, and volunteer editors are encouraged to change existing citations to the Daily Mail to another source deemed reliable by the community.
Some editors opposed
the move, saying the Daily Mail was sometimes reliable, that historically its record may have been better, and that there were other publications that were also unreliable. Opponents also pointed to inaccurate stories in other respected publications, and
suggested the proposed ban was driven by a dislike of the publication.
However, the fact of the matter is that the
DE Bill gives the BBFC (the regulator, TBC) the power to block any pornographic website that doesn't use age
verification tools. It can even block websites that publish pornography that doesn't fit its guidelines of taste and acceptability - which are significantly narrower than what is legal, and certainly narrower than what is viewed as acceptable in the US.
A single video of "watersports", or whipping that produces marks, for instance, would be enough for the BBFC to ban a website for every UK adult. The question is, how many sites does the regulator want to block, and
how many can it block?
Parliament has been told that the regulator wants to block just a few, major websites, maybe 50 or 100, as an "incentive" to implement age checks. However, that's not what Clause 23 says. The
"Age-verification regulator's power to direct internet service providers to block access to material" just says that any site that fits the criteria can be blocked by an administrative request.
What could possibly go wrong?
Imagine, not implausibly, that some time after the Act is in operation, one of the MPs who pushed for this power goes and sees how it is working. This MP tries a few searches, and finds to their surprise that it is
still possible to find websites that are neither asking for age checks nor blocked.
While the first page or two of results under the new policy would find major porn sites that are checking, or else are blocked, the results on
page three and four would lead to sites that have the same kinds of material available to anyone.
In short, what happens when MPs realise this policy is nearly useless?
They will, of course, ask for
more to be done. You could write the Daily Mail headlines months in advance: "BBFC lets kids watch porn".
MPs will ask why the BBFC isn't blocking more websites. The answer will come back that it would be possible, with more
funding, to classify and block more sites, with the powers the BBFC has been given already. While individual review of millions of sites would be very expensive, maybe it is worth paying for the first five or ten thousand sites to be checked. (And if
that doesn't work, why not use machines to produce the lists?)
And then, it is just a matter of putting more cash the way of the BBFC and they can block more and more sites, to "make the Internet safe".
That's the point we are making. The power in the Digital Economy Bill given to the BBFC will create a mechanism to block literally millions of websites; the only real restraint is the amount of cash that MPs are willing to pour into the scheme.
Government says privacy safeguards are not "necessary" in Digital Economy Bill
The Government still doesn't consider privacy safeguards necessary in the Digital Economy Bill and they see court orders for website
blocking as excessively burdensome.
The House of Lords debated age verification for online pornography last week as the Committee stage of the Digital Economy Bill went ahead.
Peers tabled a considerable
number of amendments to improve the flawed Part 3 of the Bill, which covers online pornography. In their recent report, the Committee
on the Constitution said that they are worried about whether proper parliamentary scrutiny can be delivered, considering the lack of detail written on the face of the Bill. Shortly after the start of the debate it became obvious that their concerns were justified.
Lords debated various aspects of age verification at length; however, issues of appeal processes for website blocking by Internet service providers and privacy safeguards for data collected for age-verification
purposes will have to be resolved at a later stage.
In our view, if the Government is not prepared to make changes to the Bill to safeguard privacy, the opposition parties should be ready to force the issue to a vote.
Appeals process for ISP blocking
Labour and Lib Dem Lords jointly introduced an amendment that would implement a court order process into the blocking of websites by Internet service providers. The
proposal got a lot of traction during the debate. Several Peers disagreed with the use of court orders, arguing about the costs and the undue burden that it would place on the system.
The court order process is currently
implemented for the blocking of websites that provide access to content that infringes copyright. However, the Government is not keen on using it for age verification. Lord Ashton, the Government Minister for Culture, Media and Sport, noted that even the
copyright court order process "is not without issues". He also stressed that the power to instruct ISPs to block websites carrying adult content would be used "sparingly". The Government is trying to encourage compliance by the
industry and therefore they find it more appropriate that ISP blocking is carried out by direction from the regulator.
The Bill doesn't express any of these policy nuances mentioned by the Government. According to Clause 23 on ISP
blocks, the age-verification regulator can give a notice to ISPs to block non-complying websites. There is no threshold set out in the clause to suggest this power will be used sparingly. Without such a threshold, the age-verification regulator has
unlimited power to give out notices and is merely trusted by the Government not to use the full potential of the power.
The Government failed to address the remaining lack of legal structure that would secure transparency for
website blocking by ISPs. Court orders would provide independent oversight for this policy. Neither the method of oversight nor the enforcement of blocking has been specified on the face of the Bill.
For now, the general public can
find solace in knowing that the Government is aware that blocking all social media sites is a ridiculous plan. Lord Ashton said that the Government "don't want to get to the
situation where we close down the whole of Twitter, which would make us one of two countries in the world to have done that".
Privacy protections and anonymity
Labour Peers Baroness Jones and Lord Stevenson, together with Lord Paddick (Lib Dem), introduced an amendment that would ensure that age-verification systems have high privacy and data protection safeguards.
The amendment goes beyond basic compliance with data protection
regulations. It would deliver anonymity for age-verification system users and make it impossible to identify users across different websites. This approach could encourage people's trust in age-verification systems and reassure people that they can safely
access legal material. By securing anonymity, people's right to freedom of expression would be less adversely impacted. Not all the problems go away: people may still not trust the tools, but fears can at least be reduced, and the worst calamities of
data leaks may be avoided.
People subjected to age verification should be able to choose which age-verification system they prefer and trust. It is necessary that the Bill sets up provisions for "user choice" to assure a
functioning market. Without this, a single age-verification provider could conquer the market by offering a low-cost solution with inadequate privacy protections.
The amendment received wide support from the Lords.
Despite the wide-ranging support from Lib Dem, Labour and cross-bench Lords, the Government found this amendment "unnecessary". Lord Ashton referred to the guidance published by the age-verification regulator that will
outline types of arrangement that will be treated as compliant with the age-verification regulator's requirements. Since the arrangements for data retention and protection will be made in the guidance, the Government asked Lord Paddick to withdraw the amendment.
The guidance to be published by the age-verification regulator drew fire in the Delegated Powers and
Regulatory Reform Committee's Report published in December 2016. In their criticism, the Committee made it clear that they find it unsatisfactory that none of the age-verification regulator's guidelines have been published or approved by Parliament.
Lord Ashton did not tackle these concerns during the Committee sitting.
The issue of privacy safeguards is very likely to come up again at the Report stage. Lord Paddick was not convinced by the Government's answer and promised to
bring this issue up at the next stage. The Government also promised to respond to the Delegated Powers and Regulatory Reform Committee's Report before the next stage of the Bill's passage.
Given the wide support in the Lords to
put privacy safeguards on the face of the Bill, Labour and Lib Dem Lords have an opportunity to change the Government's stance. Together they can press the Government to address privacy concerns.
The Government was unprepared to
discuss crucial parts of Part 3. Age verification for online pornography is proving to be more complex and demanding than the Government anticipated, and they lack an adequate strategy. The Report stage of the Bill (22 February) could offer some
answers to the questions raised during the last week's Committee sittings, but Labour and Lib Dems need to be prepared to push for votes on crucial amendments to get the Government to address privacy and free expression concerns.
The European Union agreed Tuesday on new rules allowing subscribers of online services in one E.U. country access to them while traveling in another.
The new portability ruling is the first step of regulation under a drive by the European
Commission to introduce a single digital market in Europe.
Announced in May 2015, the proposed Digital Single Market was met with full-throated opposition from Hollywood and Europe's movie and TV industry, which viewed it as a threat to its
territory-by-territory licensing of movies and TV shows.
The European Commission, the European Parliament and the E.U.'s Council of Ministers all agreed to new laws which will allow consumers to fully use their online subscriptions to films,
sports events, e-books, video games or music services when traveling within the E.U.
The online service providers will have nine months to adapt to the new rules, which means they will come into force by the beginning of 2018.
It is a bit of a fad to berate the social networks for passing on 'fake news' and other user posts deemed harmful to politicians and their jobs.
Although introduced last year, a nonsense private member's bill is now getting a bit of attention for its
proposals to demand that social media companies censor their users' posts.
Labour MP Anna Turley's Malicious Communications (Social Media) Bill calls for media censor Ofcom to impose fines of up to £2 million on social networks that don't adequately prevent
threatening content appearing on their services.
The bill would see social networks like Facebook and Twitter, and likely apps like Snapchat and Instagram, added to a register of regulated platforms by the Secretary of State.
If the bill is passed into law, the companies on the list would be required to implement some sort of age-verification blocking system, akin to ISP blocking, where verified over-18s could opt out of the content blocking.
The core of the bill is as follows:
1 Requirements on operators of regulated social media platforms
(1) Operators of social media platforms on the register of regulated
social media platforms in section 5(1) must have in place reasonable means to prevent threatening content from being received by users of their service in the United Kingdom during normal use of the service when the users--
(a) access the platforms, and
(b) have not requested the operator to allow the user to use the service without filtering of threatening content.
(2) Operators must not activate an unfiltered service when requested by the user, unless--
(a) the user has
registered as over 18 years of age, and
(b) the request includes an age verification mechanism.
(3) In implementing an age verification mechanism operators must follow
guidance published by the age verification regulator.
(4) In subsection (3), "age verification regulator" has the meaning given by section 17 of the Digital Economy Act
2 Duties of OFCOM
(1) OFCOM must assist, on request, the Secretary of State to meet his or her duties in respect of the register of
regulated social media platforms.
(2) It shall be the duty of OFCOM to monitor and assess the performance of the operators of regulated social media platforms in meeting the requirements of section 1.
(3) In order to assess the adequacy of the arrangements of an operator of a regulated social media platform to meet the requirements of section 1, OFCOM may--
(a) survey the content of the social media platform, and