President Trump plans to create a panel to examine cases of bias against conservatives and suppression of free speech on social media, reported The Wall Street Journal. Last week the president tweeted:
The Radical Left is
in total command & control of Facebook, Instagram, Twitter and Google. The Administration is working to remedy this illegal situation. Stay tuned, and send names & events. Thank you Michelle!
It is not clear what department the
panel would fall under or what authority it would have. However, the WSJ talked to sources who said the plans might lead to the creation of a commission that would work with agencies such as the Federal Communications Commission and the Federal Election Commission.
In May 2019, the White House launched a tool that allowed people to share their experiences with political censorship, but nothing really came of it. At a Social Media Summit held last July, several conservatives voiced concerns about
censorship on social media and the shadow-banning of their content.
The GDPR is a reprehensible and bureaucratic law that is impossible to fully comply with, and dictates an onerous process of risk assessments that are enforced by inspection and audits. It is not the sort of thing that you would wish on your grandmother.
So the lawmakers built in an important exclusion such that the law does not apply to the processing of personal data by a natural person in the exercise of a purely personal or household activity.
But now a Dutch court has weighed in and decided that
this important exclusion does not apply to posting family pictures on the likes of Twitter.
The court got involved in a family dispute between a grandmother who wanted to post pictures of her grandchildren on social media against the wishes of the children's mother.
The court decided that the posting of pictures for public consumption on social media went beyond 'purely personal or household activity'. The details weren't fully worked out, but the court judgement suggested that it may have taken a
different view had the pictures been posted to a more restricted audience, say to Facebook friends only. However, such nuance doesn't apply to Twitter, where posts are public by default.
The outcome of the case was that the grandmother was
therefore in the wrong and was ordered to remove the pictures from her social media accounts.
But the horrible outcome of this court judgement is that anyone posting pictures of private individuals to Twitter must now register as a data
controller, so requiring submission to the full bureaucratic nightmare that is the GDPR.
PlayerUnknown's Battlegrounds is a 2017 South Korean battle royale game by PUBG Corporation.
The game made the news in spring 2019 when it was banned in Nepal, Jordan, Iraq and parts of India. In Pakistan calls for a ban were directed to the
courts and so the country is a little behind the curve.
A petition filed in the Lahore High Court stated that players of the online game were facing psychological problems such as a lack of decision-making capability and impaired social relations, as well as
being distracted from their academic activities and developing violent behaviour.
The court responded on 18th May 2020 by passing the buck to Pakistan's internet censors. The court seems to have agreed with the petitioner that the game should
be banned, but has ordered the Pakistan Telecommunication Authority to take the final decision within six weeks.
Google disgracefully censors a totally rational, well-reasoned academic who was just a little more optimistic about the petering out of the coronavirus than the rest... and later reverses the unjustified censorship after a social media uproar.
The Russian government has demanded that Google censor a news story that accuses the nation of artificially reducing the reported number of deaths from COVID-19. The news data, however, comes from government-run institutions and official records.
The nation's internet censor, Roskomnadzor, is trying to remove a news item from the MBKh Media website that it considers disinformation. In fact the MBKh Media article was based on a piece published by the Financial Times, and that piece is also
under scrutiny by Roskomnadzor.
The news in question states that the Russian government is trying to reduce the reported COVID-19 death toll by attributing the deaths to other diseases. According to the report, the death toll should be at least 70%
higher, which means that the actual death toll would be close to 5,000. Moscow's Health Department confirmed that the reports are based on their data.
To block the news, the Roskomnadzor has turned to Google directly since MBKh Media has refused to
delete the report.
Maybe the Russian censors should consider that their reputation for censoring embarrassing but true information means that the act of censorship ends up reinforcing the credibility of what is being censored. Attempts to
censor, say, 5G theories may then end up reinforcing the conspiracy.
The Age Appropriate Design Code has been written by the Information Commissioner's Office (ICO) to inform websites what they must do to keep ICO internet censors at bay with regard to the government's interpretations of GDPR provisions. Perhaps in the
same way that the Crown Prosecution Service provides prosecution guidance as to how it interprets criminal law.
The Age Appropriate Design Code dictates how websites, and in particular social media, make sure that they are not exploiting children's
personal data. Perhaps the most immediate effect is that social media will have to allow a level of usage that simply does not require children to hand over personal data. Requiring more extensive personal data, say in the way that Facebook does,
requires users to provide 'age assurance' that they are old enough to take such decisions wisely.
However adult users may not be so willing to age verify, and may in fact also appreciate an option to use such websites without handing over data
into the exploitative hands of social media companies.
So one suspects that US internet social media giants may not see Age Appropriate Design and the government's Online Harms model for internet censorship as being in their
commercial interests. And one suspects that US internet industry pushback may be exerting pressure on UK negotiators seeking a free trade agreement with the US.
Pure conjecture of course, but the government does seem very cagey
about its timetable for both the Age Appropriate Design Code and the Online Harms bill. Here is the latest parliamentary debate in the House of Lords very much on the subject of the government's timetable.
House of Lords
Hansard: Age-appropriate Design Code, 18 May 2020
Lord Stevenson of Balmacara:
To ask Her Majesty's Government when they intend to lay the regulation giving effect to the age-appropriate
design code required under section 123 of the Data Protection Act 2018 before Parliament.
The Parliamentary Under-Secretary of State, Department for Digital, Culture, Media and Sport (Baroness Barran) (Con)
The age-appropriate design code will play an important role in protecting children's personal data online. The Government notified the final draft of the age-appropriate design code to the European Commission as part of
our obligations under the technical standards and regulations directive. The standstill period required under the directive has concluded. The Data Protection Act requires that the code is laid in Parliament as soon as is practicably possible.
Lord Stevenson of Balmacara:
I am delighted to hear that, my Lords, although no date has been given. The Government have a bit of ground to make up here, so perhaps it will not be
delayed too long. Does the Minister agree that the Covid-19 pandemic is a perfect storm for children and for young people's digital experience? More children are online for more time and are more reliant on digital technology. In light of that, more
action needs to be taken. Can she give us some information about when the Government will publish their final response to the consultation on the online harms White Paper, for example, and a date for when we are likely to see the draft Bill for pre-legislative scrutiny?
I spent some time this morning with a group of young people, in part discussing their experience online. The noble Lord is right that the
pandemic presents significant challenges, and they were clear that they wanted a safe space online as well as physical safe spaces. The Government share that aspiration. We expect to publish our response to the online harms consultation this autumn and
to introduce the legislation this Session.
Lord Clement-Jones (LD)
My Lords, I was very disappointed to see in the final version of the code that the section dealing with
age-appropriate application has been watered down to leave out reference to age-verification mechanisms. Is this because the age-verification provisions of the Digital Economy Act have been kicked into the long grass at the behest of the pornography
industry so that we will not have officially sanctioned age-verification tools available any time soon?
There is no intention to water down the code. Its content is
the responsibility of the Information Commissioner, who has engaged widely to develop the code, with a call for evidence and a full public consultation.
Lord Moynihan (Con)
My Lords, is my noble friend the Minister able to tell the House the results of the consultation process with the industry on possible ways to implement age verification online?
We believe that our online harms proposals will deliver a much higher level of protection for children, as is absolutely appropriate. We expect companies to use a proportionate range of tools, including age-assurance and
age-verification technologies, to prevent children accessing inappropriate content, whether that be via a website or social media.
The Earl of Erroll (CB)
May I too push the
Government to use the design code to cover the content of publicly accessible parts of pornographic websites, since the Government are not implementing Part 3 of the Digital Economy Act to protect children? Any online harms Act will be a long time in
becoming effective, and such sites are highly attractive to young teenagers.
We agree absolutely about the importance of protecting young children online and that is
why we are aiming to have the most ambitious online harms legislation in the world. My right honourable friend the Secretary of State and the Minister for Digital and Culture meet representatives of the industry regularly to urge them to improve their
actions in this area.
Lord Holmes of Richmond (Con)
My Lords, does my noble friend agree that the code represents a negotiation vis-à-vis the tech companies and thus there is
no reason for any delay in laying it before Parliament? Does she further agree that it should be laid before Parliament before 10 June to enable it to pass before the summer break? This would enable the Government to deliver on the claim that the UK is
the safest place on the planet to be online.
The negotiation is not just with the tech companies. We have ambitions to be not only a commercially attractive place for tech companies but a very safe place to be online, while ensuring that freedom of speech is upheld. The timing
of the laying of the code is dependent on discussions with the House authorities. As my noble friend is aware, there is a backlog of work which needs to be processed because of the impact of Covid-19.
The BBFC has commissioned research into public attitudes towards certain types of bad language. The BBFC writes:
This will comprise qualitative and quantitative elements, examining attitudes across the UK towards both
the rating of bad language and how these elements should be described in ratings info. In particular, the research will focus on: strong language [ie 'fuck'] at 12A/12; very strong language [ie 'cunt'] at 15; reclaimed uses of the 'n' word at the 12A/12
level, particularly in music videos; and implied bad language and word play, such as WTF.
Polish public radio has censored an anti-government song that topped the charts and was then removed from the station's website.
Kazik's Your Pain is Better than Mine is widely seen as criticising the head of Poland's ruling nationalist party.
The station director has claimed the chart was fixed, but MPs from the ruling party as well as the opposition have condemned the song's removal.
The song's theme is grieving and the lockdown of the nation's cemeteries during the
coronavirus outbreak. Kazik Staszewski's song doesn't mention Jaroslaw Kaczynski, the head of Law and Justice, by name, but his target is pretty clear.
When cemeteries were closed, Kaczynski still visited the Warsaw grave of his mother and the graves
of victims of a Russian air disaster in Smolensk in which his twin brother, President Lech Kaczynski, was killed. By Friday, Kazik's song had topped Poland's renowned chart on Radio Three, highlighting a sense of one law for ordinary Poles and another
for the ruling party's leader.
Shortly after the chart show was broadcast, internet links and news about the veteran singer's hit were disabled on the website of Radio Three, known as Trojka. The chart is voted on by Trojka listeners and station boss
Tomasz Kowalczewski insisted it had been manipulated: We already know for sure that this song did not win. It was manually moved to number one. In other words, it was fixed for sure, he claimed.
Ofcom has today imposed a sanction on the licensee Loveworld Limited, which broadcasts the religious television channel Loveworld, after a news programme and a live sermon included potentially harmful claims about causes of, and treatments for, Covid-19.
Our investigation found that a report on Loveworld News included unsubstantiated claims that 5G was the cause of the pandemic, and that this was the subject of a global cover-up. Another report during the programme presented
hydroxychloroquine as a cure for Covid-19, without acknowledging that its effectiveness and safety as a treatment were clinically unproven, or making clear that it has potentially serious side effects.
A sermon broadcast on Your
Loveworld also included unsubstantiated claims linking the pandemic to 5G technology; as well as claims which cast serious doubt on the necessity for lockdown measures and the motives behind official health advice on Covid-19, including in relation to
vaccination. These views were presented as facts without evidence or challenge.
Ofcom stresses that there is no prohibition on broadcasting controversial views which diverge from, or challenge, official authorities on public
health information. However, given the unsubstantiated claims in both these programmes were not sufficiently put into context, they risked undermining viewers' trust in official health advice, with potentially serious consequences for public health.
Given these serious failings, we concluded that Loveworld Limited did not adequately protect viewers from the potentially harmful content in the news programme and the sermon, and the news reports were not duly accurate. We have
directed Loveworld Limited to broadcast statements of our findings and are now considering whether to impose any further sanction.
Ofcom has announced that Alison Marsden has been appointed as Director of Content Standards, Licensing and Enforcement.
Alison will be leading the team with responsibility for setting and enforcing content standards for television, radio and on-demand
services and Ofcom's broadcast licensing programme. She will also sit on Ofcom's Content Board, a committee of the main Ofcom Board, which has advisory responsibility for a wide range of content issues.
Alison joined Ofcom in 2007 as a broadcasting
standards specialist. Since 2016 she has worked as Director of the Standards and Audience Protection team, responsible for setting and enforcing Ofcom's Broadcasting Code.
Before joining Ofcom, Alison worked in television production, firstly at the
BBC producing and directing specialist factual and factual programmes, and later for various independent production companies.
Alison takes up her new role with immediate effect.
Twitch has introduced a new PC censor in the following blog post:
Keeping our community safe and healthy is a top priority for Twitch. Today, we're excited to announce the formation of the Twitch Safety Advisory Council, which
will support the growth of our community moving forward.
The Safety Advisory Council will inform and guide decisions made at Twitch by contributing their experience, expertise, and belief in Twitch's mission of
empowering communities to create together. The Council will advise on a number of topics including:
Drafting new policies and policy updates
Developing products and features to improve safety and moderation
Promoting healthy streaming and work-life balance habits
Protecting the interests of marginalized groups
Identifying emerging trends that could impact the Twitch experience
This group is composed of online safety experts and Twitch creators who have a deep understanding of Twitch, its content, and its community. When developing this council we felt it was essential to include both experts who can
provide an external perspective, as well as Twitch streamers who deeply understand creators' unique challenges and viewpoints. Each member of the council was carefully selected based on their familiarity with the Twitch community and their relevant
personal and professional experiences.
We are excited to work with this talented group to make Twitch the best place to grow and foster a community. The creation of the Safety Advisory Council is just one way we are
enhancing our approach to issues of trust and safety. We will continue to invest in tools, products, and policies that promote the safety and well-being of everyone on Twitch.
50,000 people have signed a petition calling for the sacking of Piers Morgan from his job presenting ITV's Good Morning Britain.
This comes after thousands of complaints were filed to Ofcom over numerous combative interviews he has had with
politicians amid the coronavirus crisis.
The petition ludicrously claims that Morgan is one of the country's most heinous public figures. In particular, the petition organiser takes issue with his reporting on transgender issues. The petitioners say:
Wake up to the reality of Morgan's behaviour. Hate crimes are on the rise, transphobia and discrimination over gender identity is becoming commonplace both upon social media and in the real world, and ITV continue to sit
idly and let it play out in the name of entertainment.
China has reportedly threatened to sanction a Houston congressman, Dan Crenshaw, who has promoted legislation that would allow U.S. citizens to sue China for costs stemming from the coronavirus pandemic. Crenshaw is one of at least four U.S. politicians identified
by China as abusing litigation against China.
Now China's Global Times has reported that China is threatening that the four lawmakers should expect Chinese sanctions that will make them feel the pain.
The Global Times named Crenshaw and three
other Republicans as targets: Sens. Tom Cotton of Arkansas and Josh Hawley of Missouri, and Rep. Chris Smith of New Jersey. All have called for legislation allowing Americans to sue China over the outbreak. Two state attorneys general, also Republicans
-- Eric Schmitt of Missouri and Lynn Fitch of Mississippi -- who have sued China to recover costs from the outbreak were also named.
Facebook is seeking help with the censorship of hateful messages that have been encoded into memes. The company writes in a post:
In order for AI to become a more effective tool for detecting hate speech, it must be able to understand
content the way people do: holistically. When viewing a meme, for example, we don't think about the words and photo independently of each other; we understand the combined meaning together. This is extremely challenging for machines, however, because it
means they can't just analyze the text and the image separately. They must combine these different modalities and understand how the meaning changes when they are presented together. To catalyze research in this area, Facebook AI has created a data set
to help build systems that better understand multimodal hate speech. Today, we are releasing this Hateful Memes data set to the broader research community and launching an associated competition, hosted by DrivenData with a $100,000 prize pool.
The challenges of harmful content affect the entire tech industry and society at large. As with our work on initiatives like the Deepfake Detection Challenge and the Reproducibility Challenge, Facebook AI believes the best solutions
will come from open collaboration by experts across the AI community.
We continue to make progress in improving our AI systems to detect hate speech and other harmful content on our platforms, and we believe the Hateful Memes
project will enable Facebook and others to do more to keep people safe.
Scotland's government has joined the ranks of many others around the world who are actively working on constraining free speech by amending existing laws to make them even more oppressive than before.
The current law restricting 'hate crimes' is
similar to that in England and Wales, covering threats, abuse, and insults.
But based on what's described as a hard-line report from 2018, Scotland's upgraded Hate Crime and Public Order Bill, now before parliament, looks to change that and
introduce three new offences:
The first will enable the prosecution of anyone doing anything, or communicating any material, which is threatening or abusive and is intended or likely to engender hatred based on age, disability, religion, sexual orientation, transgender identity or intersex status.
Secondly, having material of this kind in one's possession with the intent to communicate it to others will in itself now be a crime,
and thirdly, managers in organizations of any type not acting to prevent the new set of
criminalized behaviours will be criminalized themselves.
The bill's critics say it is anti-liberal and must not be allowed to pass, pointing out that it takes the focus away from punishing acts of hostility based on their gravity regardless of who they target, and instead introduces a tiered
approach depending on which groups are designated as more 'worthy' of victimhood status.
Offsite Comment: Scotland's new hate speech law will be too censorious
The DCMS minister for censorship, Caroline Dinenage and the Home Office minister in the House of Lords, Susan Williams were quizzed by Parliament's home affairs committee on the progress of the Online Harms Bill.
Caroline Dinenage in particular gave
the impression that the massive scope of the bill includes several issues that have not yet been fully thought through. The government does not yet seem able to provide a finalised timetable.
Dinenage told the home affairs committee that she could
not commit to introducing the new laws in Parliament in the current session. She said it was an aspiration or intention rather than a commitment as pledged by her predecessor.
She said the government's final consultation response outlining its plans
would probably not be published until the autumn, more than 18 months after the White Paper in 2019 and more than two and a half years since the green paper.
Julian Knight, Conservative chair of the culture committee, said:
If you don't do it in 2021, then it would have to go through the whole process and it could be 2023 before it is on the statute book with implementation in 2024. Given we have been working on this through the last Parliament, that is
not good enough.
The disinformation online about coronavirus underlines why we need this legislation. Unless we can get the architecture in place, we will see further instances of serious erosion of public trust and even damage to
the fabric of society.
Dinenage disclosed that the new internet censor, probably Ofcom, would initially be paid for by the taxpayer before shifting all funding to the tech industry.
France has adopted a new censorship law forcing internet companies to take down content that the government doesn't like at breakneck speed.
After months of debate, the lower house of Parliament adopted the legislation, which will require
platforms such as Google, Twitter and Facebook to remove flagged hateful content within 24 hours and flagged terrorist propaganda within one hour. Failure to do so could result in fines of up to 1.25 million euro.
The new rules apply to all websites,
whether large or small. But there are concerns that only internet giants such as Facebook and Google actually have the resources to remove content as quickly as required.
Digital rights group La Quadrature du Net said the requirement to take down
content that the police considered terrorism in just one hour was impractical.
The worrying outcome may be that small companies are forced to present their content via larger US companies that can offer the capability of censoring content
automatically on receiving a complaint. This will of course result in the likes of Google taking even more control of the internet.
The law, which echoes similar rules already in place in Germany, piles more pressure on Silicon Valley firms to
police millions of daily posts in Europe's two most populous countries.
The censorship law, which targets search engines as well as social media companies, has been the source of plenty of controversy. Online digital rights groups, tech companies and
opposition parties have all criticized the initiative, and the Senate has led an effort to water it down by deleting the systematic deadline for removing content.
Opponents argued in particular that the law would lead to lawful content being taken
down and would hand too much power to the companies charged with making decisions on what content is considered obviously unlawful.
The European Commission has also voiced criticism, writing to the French government in November to ask for the
legislation to be postponed. The EU executive argued that Paris should wait for its own planned rules on platforms, the Digital Services Act, to pass to set a common EU-wide standard on policing illegal content online.
The US government continues to have the right to snoop on internet users' browsing histories, as well as their internet search histories. A bill that would have stripped the government of its right to conduct the searches with no warrant failed in the Senate.
The bipartisan bill, an amendment to a surveillance authority first established under the 2001 Patriot Act, was sponsored by Oregon Democrat Ron Wyden, and Montana Republican Steve Daines. But the amendment required 60 votes to move forward,
and the final Senate vote was 59-37 in favor.
We know that social media can spread speech that is hateful, harmful and deceitful. In recent years, the question of what content should stay up or come down, and who should decide this, has become increasingly urgent for society. Every content decision
made by Facebook impacts people and communities. All of them deserve to understand the rules that govern what they are sharing, how these rules are applied, and how they can appeal those decisions.
The Oversight Board represents a
new model of content moderation for Facebook and Instagram and today we are proud to announce our initial members. The Board will take final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram.
The Board will review whether content is consistent with Facebook and Instagram's policies and values, as well as a commitment to upholding freedom of expression within the framework of international norms of human rights. We will
make decisions based on these principles, and the impact on users and society, without regard to Facebook's economic, political or reputational interests. Facebook must implement our decisions, unless implementation could violate the law.
The four Co-Chairs and 16 other Members announced today are drawn from around the world. They speak over 27 languages and represent diverse professional, cultural, political, and religious backgrounds and viewpoints. Over time we
expect to grow the Board to around 40 Members. While we cannot claim to represent everyone, we are confident that our global composition will underpin, strengthen and guide our decision-making.
All Board Members are independent of
Facebook and all other social media companies. In fact, many of us have been publicly critical of how the company has handled content issues in the past. Members contract directly with the Oversight Board, are not Facebook employees and cannot be removed
by Facebook. Our financial independence is also guaranteed by the establishment of a $130 million trust fund that is completely independent of Facebook, which will fund our operations and cannot be revoked. All of this is designed to protect our
independent judgment and enable us to make decisions free from influence or interference.
When we begin hearing cases later this year, users will be able to appeal to the Board in cases where Facebook has removed their content,
but over the following months we will add the opportunity to review appeals from users who want Facebook to remove content.
Users who do not agree with the result of a content appeal to Facebook can refer their case to the Board
by following guidelines that will accompany the response from Facebook. At this stage the Board will inform the user if their case will be reviewed.
The Board can also review content referred to it by Facebook. This could include
many significant types of decisions, including content on Facebook or Instagram, on advertising, or Groups. The Board will also be able to make policy recommendations to Facebook based on our case decisions.
In serving the public conversation, our goal is to make it easy to find credible information on Twitter and to limit the spread of potentially harmful and misleading content. Starting today, we're introducing new labels and warning messages that will
provide additional context and information on some Tweets containing disputed or misleading information related to COVID-19.
During active conversations about disputed issues, it can be helpful to see additional context from
trusted sources. Earlier this year, we introduced a new label for Tweets containing synthetic and
manipulated media. Similar labels will now appear on Tweets containing potentially harmful, misleading information related to COVID-19. This will also apply to Tweets sent before today.
These labels will link to a
Twitter-curated page or external trusted source containing additional information on the claims made within the Tweet.
While false or misleading content can take many different forms, we will take action based on three broad categories:
Misleading information -- statements or assertions that have been confirmed to be false or misleading by subject-matter experts, such as public health authorities.
Disputed claims -- statements or
assertions in which the accuracy, truthfulness, or credibility of the claim is contested or unknown.
Unverified claims -- information (which could be true or false) that is unconfirmed at the time it is shared.
A coronavirus check will include facial recognition, providing personal information, a check against criminal records, a check on the car, and an app with location tracking to keep tabs on your whereabouts in Phuket.
Phuket is a holiday island in Thailand that is accessed by road via a single bridge to the mainland. In the name of coronavirus monitoring the Phuket authorities have introduced a horribly invasive computerised checkpoint on the bridge.
Checks on people crossing the bridge will include a temperature check with a facial recognition detection system connected to the public health database. If a traveller is detected as having contracted the Covid-19 virus, police will be alerted at the
checkpoints along with National Emergency Notification Center staff.
But that is just the beginning of it. The Phuket Smart Check Point will also include scanning for suspect vehicles involved in crimes, and checking the traveller's criminal
The Check Point will also require travellers to register and supply personal information. This will be kept on record for subsequent crossings and will be used for unspecified analysis by the authorities, including for the suppression of
The system comes with an app that acts as a tracking device, allowing the authorities to see your current location in the province.
Paul Ellery in the Morning Sunshine Radio 16 September 2019, 07:45
Sunshine Radio is a local radio station serving Hereford and Monmouthshire with music, speech, local news and information.
Ellery in the Morning is a daily light-entertainment programme that includes discussions of news of the day.
Ofcom received a complaint that a presenter talked in a mocking manner about singer Sam Smith coming out as non-binary.
After playing a Sam Smith track during the programme, the presenter Paul Ellery said:
I can't get over this that he [Sam Smith] says he doesn't identify with being male or female, so in future we have to call him
'they'. And I heard somebody on -- I think it was on BBC News Channel over the weekend -- saying, the easiest way to find out, Sam, if you're male or female or they, is to take your clothes off -- there we go, you're definitely a boy!
We considered Rule 2.3:
In applying generally accepted standards broadcasters must ensure that material which may cause offence is justified by the context...Such material may include, but is not
limited to, offensive language, violence, sex, sexual violence, humiliation, distress, violation of human dignity, discriminatory treatment or language (for example on the grounds of age, disability, gender reassignment, pregnancy and maternity, race,
religion or belief, sex and sexual orientation, and marriage and civil partnership). Appropriate information should also be broadcast where it would assist in avoiding or minimising offence.
Sunshine FM described the
programme as a live, unscripted one-man show and stated that there was no production team or backroom staff involved in its broadcast. In response to Ofcom's Preliminary View, which was to record a breach of Rule 2.3, the Licensee said that the presenter
had resigned from Sunshine Radio.
Ofcom Decision: Breach of rule 2.3
In this case, the comments made by the presenter about Sam Smith were brief, which may have limited the potential for offence to
some extent. However, they did not form part of a serious or considered discussion about issues related to gender identity and at no point were his comments challenged, scrutinised or otherwise contextualised. Furthermore, the tone of the presenter's
comments was mocking, dismissive and flippant towards Sam Smith's announcement that they were identifying as non-binary.
Noting that we only received one complaint from listeners about the presenter's comments, we considered that
the above factors established the potential for the comments in question to cause offence.
Given the strength of the presenter's views on gender reassignment which had the potential to cause offence to listeners, and in
particular, to members of the trans community, we considered that these comments were likely to have exceeded listeners' expectations of content on this local radio station. We therefore considered that there was insufficient context to justify the
potentially offensive references to Sam Smith's gender.
We acknowledged the Licensee's position that the comments were not intended to offend listeners, and the presenter's acknowledgement that they were misjudged. However,
regardless of the intent, in our view the comments had the potential to cause offence for the reasons set out above.
Ofcom was concerned by Sunshine FM's submission that other than the presenter, no other members of a production
team or backroom staff were involved in the broadcast of the programme. We acknowledged the steps the Licensee has taken to improve compliance prior to the presenter's resignation, including the presenter undertaking compliance training and attending
daily meetings to review content.
However, given all of the above, our Decision was that the content exceeded generally accepted standards, in breach of Rule 2.3 of the Code.
New research by the British Board of Film Classification (BBFC) has shown that children and teens are being exposed to harmful or upsetting content while in lockdown, often on a daily basis.
The research, carried out by
YouGov, has revealed that in lockdown, nearly half (47%) of children and teens have seen content they'd rather avoid, leaving them feeling uncomfortable (29%), scared (23%) and confused (19%).
One in seven (13%) said they see
harmful content daily while in lockdown, with 14 year olds exposed to the most. A quarter (24%) of 14 year olds say they see harmful content on a daily basis.
This comes as more than half (53%) of parents say they haven't spoken to their children about their increased time online during lockdown, with nearly a third (29%) saying they didn't think those chats would make a difference.
The BBFC is encouraging parents to talk to their children about what content they
might be watching online during lockdown, as 60% of children say they have approached their parents to chat after seeing content that has upset or disturbed them while they've been online in lockdown.
Parents and young people can check age ratings and ratings info on the BBFC website and app to find out what content might be included. The BBFC also has a wide range of educational resources to help parents homeschool their children during lockdown, available on their website and on their children's website, cbbfc.
The research also shows that 82% of parents, and three quarters (73%) of children want to see trusted BBFC age ratings and ratings info displayed on user generated content platforms like
YouTube, so they can avoid content that might upset or disturb them.
95% of parents said they want age ratings on user generated content platforms linked to parental filters. The BBFC is therefore calling on platforms to consider
using BBFC age ratings for their content, and for uploaders of user generated content to age rate their content which could then be linked to parental filters.
David Austin, Chief Executive of the BBFC, said:
This research shows that during the lockdown parents can make a real difference to their children's risks online if they talk about how to avoid potentially distressing and inappropriate content. We're supporting parents to help their
children to navigate the online world safely, and both our website and children's website, cbbfc, contain a wealth of free educational resources including ones we have developed with the PSHE Association.
But platforms have a role
to play as well. What a difference it would make, for example, if YouTube had well-known, trusted BBFC age ratings, created by those uploading or watching the video, that parents and young people recognise from cinema, DVD, Blu-ray and Netflix,
linked to filters. Now more than ever we need to work together to protect children online by giving them the information they need to choose content well.
This research supports the Government's recognition of the
need to help families stay safe online, with guidance recently issued containing the four-point plan including: reviewing security and safety settings; checking facts and guarding against disinformation; being vigilant against fraud and scams; and
managing the amount of time spent online.
Modern journalism has ceased trying to report the facts. Instead it has started to act almost as a teacher, standing by the reader's side and guiding him or her towards the 'right' viewpoint. By Matthias Heitmann
Japan's games censors at the Computer Entertainment Rating Organization (CERO) are set to resume work after having closed down last month to comply with Tokyo's lockdown. The board plans to resume business on May 7.
The censors did not work from home during the closed period, so the reopening will be a relief to games producers and distributors, as ratings are mandatory in Japan before a game can be sold.