OSA Risk Assessment

(Draft) Addendum to Logic Users Group rules in the light of the Online Safety Act

In the context of the UK Online Safety Act, "illegal content" and "potential harm to users, especially children" refer to specific types of content and behaviours that we will address to protect users, particularly vulnerable individuals such as children.

Illegal Content

This refers to content that directly violates the law. It includes, but is not limited to:
  1. Child Sexual Abuse Material (CSAM): Any form of content that depicts or promotes the sexual abuse or exploitation of children.
  2. Terrorist Content: Content promoting terrorism, including terrorist attacks, extremist ideologies, and recruitment materials for terrorist groups.
  3. Hate Speech: Content that promotes violence or discrimination based on characteristics such as race, religion, gender, sexual orientation, or disability.
  4. Fraudulent and Scamming Content: Content intended to deceive individuals for financial gain, such as phishing schemes, fraudulent offers, and fake product promotions.
  5. Intimate Image Abuse: Content involving the sharing or distribution of intimate images or videos without consent, often referred to as "revenge porn."
  6. Incitement to Violence: Any content that promotes or encourages violence, self-harm, or criminal activity.

The full list of categories is as follows:

Categories of priority illegal content

1. Terrorism
2. Child Sexual Exploitation and Abuse (CSEA)
3. Hate
4. Harassment, stalking, threats and abuse
5. Controlling or coercive behaviour
6. Intimate image abuse
7. Extreme pornography
8. Sexual exploitation of adults
9. Human trafficking
10. Unlawful immigration
11. Fraud and financial offences
12. Proceeds of crime
13. Drugs and psychoactive substances
14. Firearms, knives and other weapons
15. Encouraging or assisting suicide
16. Foreign interference
17. Animal cruelty

Under the Online Safety Act, platforms are required to take measures to prevent such illegal content from being shared, and must have systems in place for such content to be reported by users and removed promptly.

Potential Harm to Users, Especially Children

This refers to content or behaviours that may not necessarily be illegal but still pose significant risks to users, particularly children. These may include:
  1. Cyberbullying and Harassment: Online bullying or harassment, which can lead to emotional distress, depression, or even self-harm, particularly in young people.
  2. Exposure to Harmful or Disturbing Content: Content that could have a negative psychological effect on children, such as graphic violence, self-harm tutorials, or explicit material not related to sexual abuse but still harmful to a child's mental or emotional well-being.
  3. Misinformation and Disinformation: False or misleading content, especially around sensitive topics like health, that may lead children to make dangerous decisions or develop incorrect beliefs. This can include content that contradicts national or government safety advice, for example during a pandemic.
  4. Addiction and Excessive Use: Platforms that encourage excessive screen time or addiction to certain types of content, such as gaming or social media, which can interfere with a child's development, education, and well-being.
  5. Predatory Behavior: Online grooming or manipulation by adults trying to exploit or abuse children. This may include predatory messaging, inappropriate content, or online activities aimed at developing a relationship with a minor for harmful purposes.

Risk assessment, introduction

To address illegal content and minimize harm, especially to children, we will:
  • Identify and Block Illegal Content:
    • All posts by new members are flagged as requiring moderation before being made public. This helps us to identify users who have not joined with a "legitimate interest" in the forum topic.
    • The level of activity on the forum is low enough that we can ensure safe content using:
      • human moderators to detect and prevent the sharing of illegal or harmful content;
      • a high moderator-to-active-user ratio;
      • our report system, which allows any registered user to report content that either infringes the Logic Users Group regulations or contains illegal or harmful material. Reports generate email notifications and are monitored several times a day, ensuring any such content is removed in a timely manner.
    • To date there have been zero instances of any harmful material as described in the Act.
    • We regularly monitor file attachments uploaded to Direct Messages (aka DMs or PMs), but moderators do not monitor the text or linked content therein unless there is a specific reason to do so (e.g. suspected abuse, or illegal or harmful use of the system). However, our report system extends to Direct Messages, so any participant in a DM may flag DM content via the report system.
  • Implement Age Verification:
    • Members are already required to declare their age range (13 to 17, or over 18) at the time of registering.
    • Minors have no permissions to either send or receive direct messages.
    • All content posted or uploaded to the forum must be suitable for all age groups, hence there is no need for any age restricted areas.
By taking these actions, we minimize the risks associated with illegal content and protect users, particularly vulnerable groups such as children, from harm. We consider the risk associated with all 17 categories of priority illegal content to be negligible.
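
For illustration only, the gating rules described above can be summarised as a short sketch. The Member class and function names below are hypothetical (they are not part of the forum software's actual API), and "new member" status is assumed to be approximated by a count of approved public posts.

    from dataclasses import dataclass

    @dataclass
    class Member:
        age_range: str          # declared at registration: "13-17" or "18+"
        public_post_count: int  # approved posts visible to the whole forum

    def needs_premoderation(member: Member) -> bool:
        # All posts by new members are held for moderator approval until a
        # legitimate interest in the forum topic has been established.
        return member.public_post_count == 0

    def can_use_direct_messages(member: Member) -> bool:
        # Minors can neither send nor receive DMs; adults gain DM access
        # only after they have posted publicly.
        return member.age_range == "18+" and member.public_post_count > 0

    # Example: a newly registered adult is pre-moderated and has no DM access.
    newcomer = Member(age_range="18+", public_post_count=0)
    assert needs_premoderation(newcomer)
    assert not can_use_direct_messages(newcomer)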

Risk assessment

CSEA = Child Sexual Exploitation and Abuse. Assessment date: 13/3/25.

Risk: User Generated Content
Relevant Illegal Content: Hate Speech, Harassment, CSEA, Terrorism, etc.
Risk Level: Negligible
Evidence and Reasoning: Users can post content, but the community is small and moderation is carried out regularly. Evidence: low volume of user reports, active moderator presence, clear community guidelines. There have been no incidents in 17 years. The very conspicuous, effective and simple-to-use report system is enabled and monitored regularly.
Mitigation Measures: Users engaging in harmful behaviour would be immediately banned and any identified illegal behaviour reported to law enforcement agencies.

Risk: Anonymity
Relevant Illegal Content: Harassment, Trolling, Illegal Content Sharing
Risk Level: Negligible
Evidence and Reasoning: Users cannot post anonymously. Email addresses and IP addresses of registered users are available to admin.
Mitigation Measures: N/A

Risk: User Connections
Relevant Illegal Content: Grooming, Harassment, Coercion
Risk Level: Low/Medium
Evidence and Reasoning: Users can connect, but the community is small and connections may be limited. Evidence: low number of user-to-user connections.
Mitigation Measures: Access to the direct message system is not available until users have posted publicly and are known to have a legitimate interest in the forum topic as a professional, educator or hobbyist. Nor are direct messages available to children; with or without effective age verification, this restriction also covers any potential groomer posing as a child. The report system is also enabled in direct messages and monitored regularly.

Risk: Lack of Age Verification
Relevant Illegal Content: CSEA, Exposure to Harmful Content
Risk Level: Low/Medium
Evidence and Reasoning: The terms and conditions prohibit the posting of pornography by users of any age. Any content that is inappropriate for children or NSFW is removed and action taken against the user who posted it.
Mitigation Measures: The terms make it very clear that any users who post such content will have their accounts suspended and any identified illegal behaviour reported to law enforcement agencies.
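
As a purely illustrative sketch of the report-handling escalation described in this assessment (the Outcome values and handle_report function are hypothetical, not an actual forum API): reports arrive as email notifications, are reviewed several times a day, and escalate from content removal through banning to referral to law enforcement.

    from enum import Enum, auto

    class Outcome(Enum):
        NO_ACTION = auto()
        REMOVE_CONTENT = auto()        # infringes forum regulations
        REMOVE_AND_BAN = auto()        # harmful behaviour
        REMOVE_BAN_AND_REFER = auto()  # identified illegal behaviour

    def handle_report(breaks_rules: bool, harmful: bool, illegal: bool) -> Outcome:
        # Apply the most severe applicable outcome to a user report.
        if illegal:
            return Outcome.REMOVE_BAN_AND_REFER  # also referred to law enforcement
        if harmful:
            return Outcome.REMOVE_AND_BAN
        if breaks_rules:
            return Outcome.REMOVE_CONTENT
        return Outcome.NO_ACTION

    # Example: reported content that breaks forum rules but is not illegal.
    assert handle_report(breaks_rules=True, harmful=False, illegal=False) is Outcome.REMOVE_CONTENT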