An informal group of MPs, the All Party Parliamentary Group on Social Media and Young People's Mental Health and Wellbeing, has published a report calling for the establishment of an internet censor. The report claims:
80% of the UK public believe tighter regulation is needed to address the impact of social media on the health and wellbeing of young people.
63% of young people reported social media to be a good source of health information.
However, children who spend more than three hours a day using social media are twice as likely to display symptoms of mental ill health.
Pressure to conform to beauty standards perpetuated and praised online can encourage harmful behaviours, including body shame and disordered eating, in pursuit of "results", with 46% of girls, compared to 38% of all young people, reporting that social media has a negative impact on their self-esteem.
The report recommends that the Government:
Establish a duty of care on all social media companies with registered UK users aged 24 and under, in the form of a statutory code of conduct with Ofcom acting as regulator.
Create a Social Media Health Alliance, funded by a 0.5% levy on the profits of social media companies, to fund research and educational initiatives and to establish clearer guidance for the public.
Review whether the "addictive" nature of social media is sufficient to warrant an official disease classification.
Urgently commission robust longitudinal research into the extent to which the impact of social media on young people's mental health and wellbeing is one of cause or correlation.
Chris Elmore MP, Chair of the APPG on Social Media and Young People's Mental Health and Wellbeing, said:
"I truly think our report is the wakeup call needed to ensure - finally - that meaningful action is taken to lessen the negative impact social media is having on young people's mental health.
For far too long social media companies have been allowed to operate in an online Wild West. And it is in this lawless landscape that our children currently work and play online. This cannot continue. As the report makes clear, now is the time
for the government to take action.
The recommendations from our Inquiry are both sensible and reasonable; they would make a huge difference to the current mental health crisis among our young people.
I hope to work constructively with the UK Government in the coming weeks and months to ensure we see real changes to tackle the issues highlighted in the report at the earliest opportunity."
The BBFC has launched an innovative new industry collaboration with Netflix to move towards classifying all content on the service using BBFC age ratings.
Netflix will produce BBFC age ratings for content using a manual tagging system along with an automated rating algorithm, with the BBFC taking up an auditing role. Netflix and the BBFC will work together to make sure Netflix's classification
process produces ratings which are consistent with the BBFC's Classification Guidelines for the UK.
It comes as new research by the British Board of Film Classification (BBFC) and the Video Standards Council Rating Board (VSC) has revealed that almost 80% of parents are concerned about children seeing inappropriate content on video on demand or
online games platforms.
The BBFC and the VSC have joined forces to respond to calls from parents and are publishing a joint set of Best Practice Guidelines to help online services deliver what UK consumers want.
The Best Practice Guidelines will help online platforms work towards greater and more consistent use of trusted age ratings online. The move is supported by the Department for Digital, Culture, Media and Sport as part of the Government's strategy
to make the UK the safest place to be online.
This includes recommending consistent and more comprehensive use of BBFC age labelling symbols across all Video On Demand (VOD) services, and of PEGI symbols across online games services, including additional ratings info, and mapping parental controls to BBFC age ratings and PEGI ratings.
The voluntary Guidelines are aimed at VOD services offering video content to UK consumers via subscription, purchase and rental, but exclude pure catch-up TV services like iPlayer, ITV Hub, All4, My 5 and UKTV Player.
The research also shows that 90% of parents believe it is important to display age ratings when downloading or streaming a film online, and 92% of parents think it's important for video on demand platforms to show the same type of age ratings they would expect at the cinema or on DVD and Blu-ray. This is reinforced by 94% of parents saying it's important to have consistent ratings across all video on demand platforms, rather than a variety of bespoke ratings systems.
With nine in 10 (94%) parents believing it is important to have consistent ratings across all online game platforms rather than a variety of bespoke systems, the VSC is encouraging services to join the likes of Microsoft, Sony PlayStation,
Nintendo and Google in providing consumers with the nationally recognised PEGI ratings on games - bringing consistency between the offline and online worlds.
The Video Recordings Act requires that the majority of video works and video games released on physical media must be classified by the BBFC or the VSC prior to release. While there is no equivalent legal requirement that online releases must be
classified, the BBFC has been working with VOD services since 2008, and the VSC has been working with online games platforms since 2003. The Best Practice Guidelines aim to build on the good work that is already happening, and both authorities
are now calling for the online industry to work with them in 2019 and beyond to better protect children.
David Austin, Chief Executive of the BBFC, said:
Our research clearly shows a desire from the public to see the same trusted ratings they expect at the cinema, on DVD and on Blu-ray when they choose to watch material online. We know that it's not just parents who want age ratings; teenagers want them too. We want to work with the industry to ensure that families are able to make the right decisions for them when watching content online.
Ian Rice, Director General of the VSC, said:
We have always believed that consumers wanted a clear, consistent and readily recognisable rating system for online video games and this research has certainly confirmed that view. While the vast majority of online game providers are compliant
and apply PEGI ratings to their product, it is clear that more can be done to help consumers make an informed purchasing decision. To this end, the best practice recommendations will certainly make a valuable contribution in achieving this aim.
Digital Minister Margot James said:
Our ambition is for the UK to be the safest place to be online, which means having age ratings parents know and trust applied to all online films and video games. I welcome the innovative collaboration announced today by Netflix and the BBFC,
but more needs to be done.
It is important that more of the industry takes this opportunity for voluntary action, and I encourage all video on demand and games platforms to adopt the new best practice standards set out by the BBFC and Video Standards Council.
The BBFC is looking at innovative ways to open up access to its classifications to ensure that more online video content goes live with a trusted age rating. Today the BBFC and Netflix announce a year-long self-ratings pilot which will see the
online streaming service move towards in-house classification using BBFC age ratings, under licence.
Netflix will use an algorithm to apply BBFC Guideline standards to their own content, with the BBFC setting those standards and auditing ratings to ensure consistency. The goal is to work towards 100% coverage of BBFC age ratings across the service.
Mike Hastings, Director of Editorial Creative at Netflix, said:
The BBFC is a trusted resource in the UK for providing classification information to parents and consumers and we are excited to expand our partnership with them. Our work with the BBFC allows us to ensure our members always press play on
content that is right for them and their families.
David Austin added:
We are fully committed to helping families choose content that is right for them, and this partnership with Netflix will help us in our goal to do just that. By partnering with the biggest streaming service, we hope that others will follow Netflix's lead and provide comprehensive, trusted, well understood age ratings and ratings info, consistent with film and DVD, on their UK platforms. The partnership shows how the industry is working with us to find new and innovative ways to deliver 100% age ratings for families.
The new EU Copyright Directive will be up for its final vote in the week of March 25, and like any piece of major EU policy, it has been under discussion for many years and had all its areas of controversy resolved a year ago -- but then German MEP Axel Voss took over as the "rapporteur" (steward) of the Directive and reintroduced the long-abandoned idea of forcing all online services to use filters to block users from posting anything that anyone, anywhere claimed was their copyrighted work.
There are so many obvious deficiencies with adding filters to every message-board, online community, and big platform that the idea became political death, as small- and medium-sized companies pointed out that you can't fix the EU's internet by
imposing costs that only US Big Tech firms could afford to pay, thus wiping out all European competition.
So Voss switched tactics, and purged all mention of filters from the Directive, and began to argue that he didn't care how online services guaranteed that their users didn't infringe anyone's copyrights, even copyrights in works that had only
been created a few moments before and that no one had ever seen before, ever. Voss said that it didn't matter how billions of user posts were checked, just so long as it all got filtered.
(It's like saying, "I expect you to deliver a large, four-legged African land-mammal with a trunk, tusks and a tail, but it doesn't have to be an elephant -- any animal that fits those criteria will do.")
Now, in a refreshingly frank interview, Voss has come clean: the only way to comply with Article 13 will be for every company to install filters.
When asked whether filters will be sufficient to keep Youtube users from infringing copyright, Voss said, "If the platform's intention is to give people access to copyrighted works, then we have to think about whether that kind of business
should exist." That is, if Article 13 makes it impossible to have an online platform where the public is allowed to make work available without first having to submit it to legal review, maybe there should just no longer be anywhere for the
public to make works available.
Here's what Europeans can do about this:
* Pledge 2019: make your MEP promise to vote against Article 13. The vote comes just before elections, so MEPs are extremely interested in the issues on voters' minds.
* Save Your Internet: contact your MEP and ask them to protect the internet from this terrible idea.
* Turn out and protest on March 23, two days ahead of the vote. Protests are planned in cities and towns in every EU member-state.
Since Tumblr announced its porn ban in December, many users reacted by explaining that they mainly used the site for browsing not-safe-for-work content, and they threatened to leave the platform if the ban were enforced. It now appears that many
users have made good on that threat: Tumblr's traffic has dropped nearly 30% since December.
The ban removed explicit posts from public view, including any media that portrayed sex acts, exposed genitals, and female-presenting nipples.
Despite the prevailing porn ban in Uganda, it can safely be said that pornographic material has never been more widely consumed than now. The latest web rankings from Alexa show that Ugandans consume more pornography than news and government information, among other relevant materials.
The US website Porn555.com is ranked as the 6th most popular website in Uganda, ahead of Daily Monitor, Twitter, BBC among others.
The country's internet censors claim to have blocked 30 of the main porn websites, so perhaps that is the reason for Porn555 being the most popular, rather than the more obvious PornHub, YouPorn, xHamster etc.
Thousands of people in Moscow and other Russian cities took to the streets over the weekend to protest legislation they fear could lead to widespread internet censorship in the country.
The protests, which were some of the biggest protests in the Russian capital in years, came in response to a bill in parliament that would route all internet traffic through servers in Russia, making virtual private networks (VPNs) ineffective.
Critics note that the bill creates an internet firewall similar to China's.
People gathered in a cordoned-off Prospekt Sakharova street in Moscow, made speeches on a stage and chanted slogans such as "hands off the internet" and "no to isolation, stop breaking the Russian internet". The rally gathered around 15,300 people, according to White Counter, an NGO that counts participants at rallies. Moscow police put the numbers at 6,500.
The House of Lords Communications Committee has called for a new, overarching censorship framework so that the services in the digital world are held accountable to an enforceable set of government rules.
The Lords Communications Committee writes:
In its report 'Regulating in a digital world' the committee notes that over a dozen UK regulators have a remit covering the digital world but there is no body which has complete oversight. As a result, regulation of the digital environment is
fragmented, with gaps and overlaps. Big tech companies have failed to adequately tackle online harms.
Responses to growing public concern have been piecemeal and inadequate. The Committee recommends a new Digital Authority, guided by 10 principles to inform regulation of the digital world.
The chairman of the committee, Lord Gilbert of Panteg, said:
"The Government should not just be responding to news headlines but looking ahead so that the services that constitute the digital world can be held accountable to an agreed set of principles.
Self-regulation by online platforms is clearly failing. The current regulatory framework is out of date. The evidence we heard made a compelling and urgent case for a new approach to regulation. Without intervention, the largest tech companies
are likely to gain ever more control of technologies which extract personal data and make decisions affecting people's lives. Our proposals will ensure that rights are protected online as they are offline while keeping the internet open to
innovation and creativity, with a new culture of ethical behaviour embedded in the design of services."
Recommendations for a new regulatory approach

Digital Authority
A new 'Digital Authority' should be established to co-ordinate regulators, continually assess regulation and make recommendations on which additional powers are necessary to fill gaps. The Digital Authority should play a key role in providing the
public, the Government and Parliament with the latest information. It should report to a new joint committee of both Houses of Parliament, whose remit would be to consider all matters related to the digital world.
10 principles for regulation
The 10 principles identified in the committee's report should guide all regulation of the internet. They include accountability, transparency, respect for privacy and freedom of expression. The principles will help the industry, regulators, the
Government and users work towards a common goal of making the internet a better, more respectful environment which is beneficial to all. If rights are infringed, those responsible should be held accountable in a fair and transparent way.
Recommendations for specific action

Online harms and a duty of care
A duty of care should be imposed on online services which host and curate content which can openly be uploaded and accessed by the public. Given the urgent need to address online harms, Ofcom's remit should expand to include responsibility for
enforcing the duty of care.
Online platforms should make community standards clearer through a new classification framework akin to that of the British Board of Film Classification. Major platforms should invest in more effective moderation systems to uphold their community standards.
Users should have greater control over the collection of personal data. Maximum privacy and safety settings should be the default.
Data controllers and data processors should be required to publish an annual data transparency statement detailing which forms of behavioural data they generate or purchase from third parties, how they are stored, for how long, and how they are
used and transferred.
The Government should empower the Information Commissioner's Office to conduct impact-based audits where risks associated with using algorithms are greatest. Businesses should be required to explain how they use personal data and what their algorithms do.
The modern internet is characterised by the concentration of market power in a small number of companies which operate online platforms. Greater use of data portability might help, but this will require more interoperability.
The Government should consider creating a public-interest test for data-driven mergers and acquisitions.
Regulation should recognise the inherent power of intermediaries.
Russia's parliament has advanced repressive new internet laws allowing the authorities to jail or fine those who spread supposed 'fake news' or disrespect government officials online.
Under the proposed laws, which still await final passage and presidential signature, people found guilty of spreading indecent posts that demonstrate disrespect for society, the state, (and) state symbols of the Russian Federation, as well as
government officials such as President Vladimir Putin, can face up to 15 days in administrative detention. Private individuals who post fake news can be hit with small fines of between $45 and $75, while legal entities face much higher penalties of
up to $15,000, according to the draft legislation.
The anti-fake news bill, which passed the Duma, or lower house of parliament, also compels ISPs to block access to content which offends human dignity and public morality.
It defines fake news as any unverified information that threatens someone's life and (or) their health or property, or threatens mass public disorder or danger, or threatens to interfere or disrupt vital infrastructure, transport or social
services, credit organizations, or energy, industrial, or communications facilities.
A chef has criticised Instagram after it decided that a photograph she posted of two pigs' trotters and a pair of ears needed to be protected from 'sensitive' readers.
Olia Hercules, a writer and chef who regularly appears on Saturday Kitchen and Sunday Brunch , shared the photo alongside a caption in which she praised the quality and affordability of the ears and trotters before asking why the
cuts had fallen out of favour with people in the UK.
However, Hercules later discovered that the image had been censored by the photo-sharing app with a warning that read: "Sensitive content. This photo contains sensitive content which some people may find offensive or disturbing."
Hercules hit back at the decision on Twitter, condemning Instagram and the general public for becoming detached from reality.
Sky News has learned that the government has delayed setting a date for when age verification rules will come into force due to concerns regarding the security and human rights issues posed by the rules. A DCMS representative said:
This is a world-leading step forward to protect our children from adult content which is currently far too easy to access online.
The government, and the BBFC as the regulator, have taken the time to get this right and we will announce a commencement date shortly.
Previously the government indicated that age verification would start from about Easter, but the law states that 3 months' notice must be given before the start date. Official notice has yet to be published, so the earliest it could start is already later than Easter.
The basic issue is that the Digital Economy Act underpinning age verification does not mandate that the identity data and browsing history of porn users should be protected by law. The law makers thought that GDPR would be sufficient for data protection, but in fact it only requires user consent for the use of that data. All it takes is for users to tick the consent box, probably without reading the deliberately verbose or vague terms and conditions provided. After getting the box ticked, the age verifier can then do more or less what it wants with the data.
Realising that this voluntary system is hardly ideal, and that the world's largest internet porn company Mindgeek is likely to become the monopoly gatekeeper of the scheme, the government has moved on to considering some sort of voluntary kitemark scheme to try and convince porn users that an age verification company can be trusted with their data. The kitemark scheme would appoint an audit company to investigate age verification implementations and to approve those that handle data responsibly.
I would guess that this scheme is difficult to set up, as it would be a major risk for audit companies to approve age verification systems based upon voluntary data protection rules. If an 'approved' company were later found to be selling or misusing data, or even got hacked, then the auditor could be sued for negligent advice, whilst the age verification company could get off scot-free.
The Counter-Terrorism Internet Referral Unit (CTIRU) was set up in 2010 by ACPO (and run by the Metropolitan Police) to remove unlawful terrorist material content from the Internet, with a specific focus on UK based material.
CTIRU works with internet platforms to identify content which breaches their terms of service and requests that they remove the content.
CTIRU also compile a list of URLs for material hosted outside the UK which are blocked on networks of the public estate.
As of December 2017, the CTIRU was linked to the removal of 300,000 pieces of illegal terrorist material from the internet.
Censor or not censor?
The CTIRU considers its scheme to be voluntary, but detailed notification under the e-Commerce Directive has legal effect, as it may strip a platform of its liability protection. Platforms may have "actual knowledge" of potentially criminal material if they receive a well-formed notification, with the result that they would be regarded in law as the publisher from that point on.
At volume, any agency will make mistakes. The CTIRU is said to be reasonably accurate: platforms say they decline only 20 or 30% of its requests. That still leaves considerable scope for errors, which could unduly restrict the speech of individuals, including journalists, academics, commentators and others who hold normal, legitimate opinions.
A handful of CTIRU notices have been made public via the Lumen transparency project. Some of these show some very poor decisions to send a notification. In one case, UKIP Voices, an obviously fake, unpleasant and defamatory blog portraying the
UKIP party as cartoon figures but also vile racists and homophobes, was considered to be an act of violent extremism. Two notices were filed by the CTIRU to have it removed for extremism. However, it is hard to see that the site could fall within
the CTIRU's remit as the site's content is clearly fictional.
In other cases, we believe the CTIRU had requested removal of extremist material that had been posted in an academic or journalistic context.
Some posters, for instance at wordpress.com, are notified by the service's owner, Automattic, that the CTIRU has asked for content to be removed. This affords a greater potential for a user to contest or object to requests. However, the CTIRU is not held to account for bad requests. Most people will find it impossible to stop the CTIRU from making requests to remove lawful material, requests which companies might still action even though removing legal material is clearly beyond the CTIRU's remit.
When content is removed, there is no requirement to tell people viewing it that it has been removed because it may be unlawful, or which laws it may have broken, or that the police asked for it to be removed. Nor is there any advice for people who may have seen the content, or who return to view it again, about the possibility that it was intended to draw them into illegal and dangerous activities, or about how to seek help.
There is also no external review, as far as we are aware. External review would help limit mistakes. Companies regard the CTIRU as quite accurate, and cite a 70 or 80% success rate in their applications. That is potentially a lot of requests that
should not have been filed, however, and that might not have been accepted if put before a legally-trained and independent professional for review.
As many companies will perform little or no review, and requests are filed to many companies for the same content, which will then sometimes be removed in error and sometimes not, any errors at all should be concerning.
Crime or not crime?
The CTIRU is organised as part of a counter-terrorism programme and claims its activities warrant operating in secrecy, including rejecting freedom of information requests on the grounds of national security and the detection and prevention of crime. However, its work does not directly relate to specific threats or attempt to prevent crimes. Rather, it is aimed at frustrating criminals by giving them extra work to do, and at reducing the availability of material deemed to be unlawful.
Taking material down via notification runs against the principles of normal criminal investigation. Firstly, it means that the criminal is "tipped off" that someone is watching what they are doing. Some platforms forward notices to
posters, and the CTIRU does not suggest that this is problematic.
Secondly, even if the material is archived, a notification results in the destruction of evidence. Account details, IP addresses and other evidence normally vital for investigations are destroyed.
This suggests that law enforcement has little interest in prosecuting the posters of the content at issue. Enforcement agencies are more interested in the removal of content, potentially prioritised on political rather than law enforcement
grounds, as it is sold by politicians as a silver bullet in the fight against terrorism.
Beyond these considerations, because there is an impact on free expression if material is removed, and because police may make mistakes, their work should be seen as relating to content removal rather than as a secretive matter.
Little is known about the CTIRU's work, but it claims to be removing up to 100,000 "pieces of content" from around 300 platforms annually. This statistic is regularly quoted to parliament and is cited as evidence of major platforms' irresponsibility in failing to remove content. It has therefore had a great deal of influence on the public policy agenda.
However, the statistic is inconsistent with transparency reports at major platforms, where we would expect most of the takedown notices to be filed. The CTIRU insists that its figure is based on individual URLs removed. If so, much further
analysis is needed to understand the impact of these URL removals, as the implication is that they must be hosted on small, relatively obscure services.
Additionally, the CTIRU claims that no other management statistics are routinely created about its work. This seems somewhat implausible but, if true, negligent. For instance, the CTIRU should know its success and failure rate, and the categorisation of the different organisations or belief systems it is targeting. An absence of routine data collection implies that the CTIRU is not ensuring it is effective in its work. We find this position, produced in response to our Freedom of Information requests, highly surprising and something that should be of interest to parliamentarians.
Lack of transparency increases the risks of errors and bad practice at the CTIRU, and reduces public confidence in its work. Given the government's legitimate calls for greater transparency on these matters at platforms, it should apply the same
standards to its own work.
Both government and companies can improve transparency at the CTIRU. The government should provide specific oversight, much in the same way as CCTV and Biometrics have a Commissioner. Companies should publish notifications, redacted if necessary,
to the Lumen database or elsewhere. Companies should make the full notifications available for analysis to any suitably-qualified academic, using the least restrictive agreements practical.
The idea is that the government of any European Member State will be able to order any website to remove content considered "terrorist". No independent judicial authorisation will be needed, leaving governments free to abuse the wide definition of "terrorism". The only safeguard the IMCO Committee accepted to add is that governments' orders be subject to "judicial review", which can mean anything.
In France, the government's orders to remove "terrorist content" are already subject to "judicial review": an independent body is notified of all removal orders and may ask judges to assess them. This has not been of much help: only once has such censorship been submitted to a judge's review. It was found to be unlawful, but more than a year and a half after it was ordered. During this time, the French government was able to abusively censor content, in this case far-left publications by two French Indymedia outlets.
Far from simplifying matters, this Regulation will add confusion, as authorities in one member state will be able to order removals in another, without necessarily understanding the context.
Unrealistic removal delays
Regarding the one-hour delay within which the police can order a hosting service provider to block any content reported as "terrorist", there was no real progress either. It has been replaced by a deadline of at least eight hours, with a small exception for "micro-enterprises" that have not previously been subject to a removal order (in this case, the "deadline shall be no sooner than the end of the next working day").
This narrow exception will not help the vast majority of Internet actors comply with such a strict deadline. Even though the IMCO Committee has removed any mention of proactive measures that can be imposed on Internet actors, and has stated that "automated content filters" shall not be used by hosting service providers, this very tight deadline and the threat of heavy fines will only incite them to adopt the moderation tools developed by the Web's juggernauts (Facebook and Google) and to use the broadest possible definition of terrorism to avoid the risk of penalties. The impossible obligation to provide a point of contact reachable 24/7 has not been modified either. The IMCO opinion has even worsened the financial penalties that can be imposed: they are now "at least" 1% and up to 4% of the hosting service provider's turnover.
The next step will be on 11 March, when the CULT Committee (Culture and Education) will adopt its opinion.
The last real opportunity to obtain the rejection of this dangerous text will be on 21 March 2019, in the LIBE Committee (Civil Liberties, Justice and Home Affairs). European citizens must contact their MEPs to demand this rejection. We have a dedicated page on our website with an analysis of this Regulation and a tool to directly contact the MEPs in charge.
Starting today, and for the weeks to come, call your MEPs and demand they reject this text.