One of the main issues of global content governance on social media concerns the definition of the rules governing online content moderation. One might think it would be sufficient for online platforms to refer to existing human rights standards. A more careful analysis, however, shows that international law provides only general principles, which do not specifically address the context of online content moderation, and that no single human rights standard exists.
This is one of the reasons why, since their inception, major social media platforms have set their own rules, adopting their own distinctive language, values and parameters. Yet this normative autonomy has also raised serious concerns. Why should private companies establish the rules governing free speech online? Is it legitimate for them to depart from minimum human rights standards and impose more stringent rules?