1. Introduction
Shall Unite is dedicated to providing a secure digital environment for children. While we do not currently partner with external law enforcement or legal entities, we employ AI and human moderation to identify and address harmful content.
2. Definitions
CSAE (Child Sexual Abuse and Exploitation) includes, but is not limited to:
- CSAM (Child Sexual Abuse Material): Sexualized depictions of minors.
- Online Grooming: Manipulative digital interactions targeting minors for sexual exploitation.
- Sextortion: Coercing children into producing explicit content through threats or blackmail.
- Sexual Exploitation: Using minors for sexual purposes or for personal or financial gain.
3. Safety Measures
A. AI + Human Moderation
- Automated Scanning: AI systems continuously flag potentially harmful text, images, or behavior patterns.
- Human Review: A trained moderation team investigates flagged content and user reports.
- Keyword Blocking: Proactive filters detect and restrict CSAE-related terminology.
B. Account Controls
- Parental Verification (KYC): Parents must sign up and verify their identity before creating child accounts.
- Age-Based Restrictions: Platform features are tailored by age, reducing exposure to mature content.
- Behavior Monitoring: Suspicious interactions (e.g., adults frequently contacting minors) may trigger alerts.
C. Reporting & Accountability
- In-App Reporting: Users can report suspicious content or activity for immediate moderator review.
- Account Termination: Accounts involved in CSAE-related offenses are permanently banned.
D. No Mature Content
- Platform-Wide Restriction: Shall Unite does not allow mature or adult content for any user, including parents.
4. Parental Tools
- Activity Oversight: Parents can monitor connections and content relevant to their child’s account.
- Approved Contacts: Children interact only with parent-authorized friends.
- Safety Resources: Informational guides help families recognize and address potential online risks.
5. Policy Transparency
- No External Partnerships: We currently rely exclusively on internal moderation.
- Continuous Improvement: We regularly refine our AI models and moderation procedures.
6. Limitations
- No Legal Collaboration: We do not currently escalate cases to law enforcement.
- No Guaranteed Content Removal: While we work to remove harmful content quickly, no system can guarantee 100% removal.
7. Contact
For concerns or inquiries, please reach out to support@shallunite.com.