Koo rolls out new safety features for proactive content moderation

Microblogging platform Koo on Thursday announced the launch of new proactive content moderation features designed to give users a safer social media experience. The features, developed in-house, can proactively detect and block any form of nudity or child sexual abuse material in under five seconds, label misinformation, and hide toxic comments and hate speech on the platform, Koo said in a release.

Twitter rival Koo said that, as an inclusive platform built with a language-first approach, it is committed to providing a safe and positive experience for its users.

"In order to provide users with a wholesome community and meaningful engagement, Koo has identified a few areas which have a high impact on user safety, namely child sexual abuse material and nudity, toxic comments and hate speech, misinformation and disinformation, and impersonation, and is working to actively remove their occurrence on the platform," it said.

The new features are an important step towards achieving this goal.