A safe space for everyone – a plea for a democratic and participative metaverse

Dr Octavia Madeira, Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology (KIT)

Dr Georg Plattner, Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology (KIT)


The vision of a metaverse presented by Meta CEO Mark Zuckerberg in October 2021 was a watershed moment for society and for the tech world. Although the concept of a second digital life, including a digital identity, is neither new nor exclusive to Meta (see, for example, Second Life), this was the first time that almost the full range of possible metaverse functionalities had been presented together. Special emphasis is placed on immersion via virtual reality, which, as an extension of today’s Internet applications, is meant to give users a completely new sense of participation and let them experience the metaverse in a multimodal and multi-sensory way. The presentation also focused in particular on technological permeability – on the diffusion of social media into all areas of human life and thus on social media ceasing to be a pure entertainment platform.

Should the metaverse turn out to be as Zuckerberg and other proponents envision it, this would mean a radical transformation of how we interact with the digital space, and also a radical change in our everyday lives. Shopping could increasingly shift to the metaverse as an immersive experience, sports classes could take place in a virtual environment and virtual church services could be held with believers from all over the world. The world of work has already permanently changed, partly due to the coronavirus pandemic – and we could soon move from working at home to working in the meta-office.

But these innovations will not only change our everyday lives – they will also cause extremism and radicalisation to strike out in new directions and transform to adapt to new environments. Generally speaking, extremists use technologies that are cheap, readily available, easy to use and widely accessible for their purposes, such as propaganda, communication and recruitment. Using a technology for a function other than that intended by its developers, with the intention of doing harm to others, is an inherently creative process. Cropley, Kaufman and Cropley (2008) call this “malevolent creativity”. They define it as a form of creativity that “is deemed necessary by some society, group, or individual to fulfill goals they regard as desirable, but has serious negative consequences for some other group, these negative consequences being fully intended by the first group” (p. 106). We describe actors who display malevolent creativity (such as extremists or spreaders of fake news) as malevolent actors.

In the past, malevolent actors have been especially creative when it came to realigning their own organisations and disseminating their ideology. The digital revolution has equipped them with an unprecedented number of tools with which to further their cause: from (encrypted and instant) mass communication for propaganda and recruitment to alternative instruments for financing operations and logistics through to new means of destruction and terror. Recent technological advances have opened up a wide range of new opportunities for malevolent actors. For example, Web 2.0, the rise of social media and the availability of nearly all content on the Internet have enabled these actors to easily connect with like-minded individuals and form almost entirely closed communities that reinforce their own views.

Research into the metaverse as the successor to social media and the mobile Internet can provide important insights into how malevolent actors could creatively use the metaverse. While we generally agree with Joe Whittaker and others (Whittaker 2022; Valentini, Lorusso and Stephan 2020) that distinguishing between offline and online radicalisation does not make sense from an analytical perspective, the way in which malevolent actors are currently using social media could give an idea of the metaverse of the future.

It is generally recognised that malevolent actors (with different ideological backgrounds) began to make use of the Internet and its possibilities at an early stage (Feldman 2020; Fisher 2015; Stewart 2021; Lehmann and Schröder 2021). They used new technologies in creative ways in order to evade monitoring and detection and also to improve their own operations. As an anonymous place of countless possibilities where one can find a wealth of information tailored to one’s own interests, the Internet is a gold mine for extremists (Bertram 2016, p. 232).

While research on the radicalisation patterns of convicted jihadi terrorists has shown that offline networks played a much greater role in their radicalisation than online networks (Hamid and Ariza 2022), other research indicates that the Internet has a more important role for right-wing extremists. This applies especially to the planning of their attacks and actions (von Behr et al. 2013; Gill et al. 2017). “The Internet is largely a facilitative tool that affords greater opportunities for violent radicalization and attack planning. Nevertheless, radicalization and attack planning are not dependent on the Internet […].” (Gill et al. 2017, p. 113).

In particular, social media has been used by malevolent groups to create, target and distribute self-generated content without the vetting processes used by traditional media companies, while avoiding policing and censorship by nation states (Droogan et al. 2018, p. 171). Furthermore, social media has also become an instrument of social interaction for those who are already radicalised and those they want to convince or who are interested in their activities (Conway 2017).

The introduction of the metaverse could further reinforce this momentum. By further bridging the gap between offline and online, it could make it even more difficult to maintain the distinction between the two spheres of radicalisation (and extremism and terrorism). At present, offline networks provide familiarity and a close environment, and their members are more likely to evade security services than online extremists are (Hamid and Ariza 2022). The future metaverse could bring together these advantages of the offline world in an extensive and immersive digital experience. Combined with the advantages of the online world – instant mass communication and propaganda – the metaverse could become an even bigger game-changer than the Internet and social media were.


The metaverse is still in the early stages of development and has a long way to go before it reaches maturity, at which point its promised functionalities would actually be implemented. It is already apparent that the risks of the metaverse are comparable to those of social media, where responses often came too late. Freedom and security will probably be the decisive variables in this technology of the future, which makes engaging with malevolent actors all the more crucial (Neuberger 2023). Even in this initial phase, it is becoming clear that malevolent actors are finding fertile ground – as illustrated, for example, by the incidents of sexual harassment that have occurred in the current test versions of the metaverse (Basu 2021; Bovermann 2022; Diaz 2022; Wiederhold 2022).

How can these developments be tackled? How can they be prevented before they cause harm? It will be important to ensure the democratic involvement of actors and marginalised groups in decision-making and development processes. While this would necessarily be a reactive process in the case of social media, the developers of the metaverse still have the opportunity to build beneficial structures from the outset. Democratisation of social media is desirable from a sociopolitical perspective because social media is a very powerful tool due to its widespread use and its economic and cultural importance. This power should be democratically legitimised and controlled (Engelmann et al. 2020). However, democratic safeguarding should not follow a party-political pattern.

In the development of the metaverse, social media can be instructive in various ways – from the creativity with which malevolent actors use new media and technologies (see above) through to the democratic involvement of users. Social media operators have already tried to take account of the participation aspect:

  • Meta conceived the idea of an Oversight Board in 2018 as a body whose independent judgement could help the company make tough content decisions. The board is committed to being independent, accessible and transparent, and Meta has granted it the authority to decide whether content should be allowed or removed.
  • Twitter was formerly advised by a Trust and Safety Council consisting of various NGOs and researchers who advised the company on online safety issues. Elon Musk dissolved the Council after taking over the company (The Associated Press 2022).
  • On its YouTube video platform, Google has introduced the Priority Flagger Programme, which enables NGOs and public authorities to use more effective tools to report content that violates the Community Guidelines. Flagged content is then reviewed by moderators as a priority; however, the deletion criteria are the same as for any other report. The programme was revised by YouTube in 2021, prompting major criticism from the community (Meineck 2021).

In general, there seems to be a worrying trend on social media to cut back on these participative models of moderation and security in favour of artificial intelligence (AI) applications (Gorwa et al. 2020; Llansó 2020). However, AI solutions cannot and should not replace the involvement of civil society in decision-making processes and questions of democratic culture, not least because AI-supported content moderation solutions are still prone to error and lack transparency (Gillespie 2020; Gorwa et al. 2020). 


In social media research and particularly in platform governance research, important approaches can be found that may help to enable a democratic and inclusive metaverse. In addition to essential cooperation between operators and governmental and non-governmental actors on issues of transparency and research, there is an emphasis in particular on actively strengthening democratic actors and narratives (Bundtzen and Schwieter 2023; Engelmann et al. 2020; Rau et al. 2022). 

This strategy is crucial in order to ensure that a state’s repressive apparatus is actually only used as a measure of last resort to stop malevolent actors. Democratic argument and discourse must be possible in an inclusive metaverse without people constantly having to fear repression and restriction. Instead, platform operators can also take steps in the metaverse to consciously and actively promote democratic actors and narratives, and thus build democratic resilience in the metaverse. 

Here too, the metaverse can take inspiration from existing approaches in the social media field, such as YouTube’s trusted flagging programme. Democratic actors specialising in areas such as hate speech, group-focused enmity or strengthening democracy – e.g. NGOs and government organisations – could be given access to special reporting tools. They could also be given extended powers to contextualise questionable content.
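As a purely illustrative sketch – the class and function names below are our own assumptions, not any platform’s actual API – the prioritisation described above could be modelled as follows: reports from vetted flaggers jump the review queue, while the criteria applied by moderators remain identical for every report.

```python
import heapq
import itertools


class ModerationQueue:
    """Hypothetical review queue: reports from vetted 'priority flaggers'
    (e.g. NGOs, public authorities) are reviewed first, but the review
    criteria themselves do not change based on who reported."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # preserves arrival order within a tier

    def submit(self, content_id, from_priority_flagger=False):
        tier = 0 if from_priority_flagger else 1  # lower tier = reviewed first
        heapq.heappush(self._heap, (tier, next(self._counter), content_id))

    def next_for_review(self):
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]


def violates_guidelines(content):
    # Placeholder check: the SAME community-guideline test is applied to
    # every report, regardless of the reporter's status.
    return "hate" in content.lower()
```

For example, a report submitted by a priority flagger would be returned by `next_for_review()` before earlier reports from ordinary users, yet `violates_guidelines` is applied uniformly to all of them – mirroring the design principle that trusted flaggers accelerate review without changing the deletion criteria.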

However, as well as reinforcing democratic narratives, the democratisation of the platform itself is a crucial factor for inclusivity. Involving users in decision-making and design processes can have enormous added value for a platform that is interested in democratic interaction. Marginalised groups and their representatives know exactly where hate and harassment may be lurking in the digital space. By involving such stakeholders at an early stage, some of the mistakes that were made on social media could be minimised from the outset.

In political practice, mini-publics have already proved effective as an instrument of user participation (Escobar and Elstub 2017; Smith and Setälä 2018). Mini-publics are groups of (randomly or systematically) selected citizens who work together over an extended period to examine socially relevant issues, with the inclusion of external sources, e.g. scientific expertise. Topics are examined, discussed and assessed from a broad range of perspectives, and the resulting recommendations are forwarded to political decision-makers (Escobar and Elstub 2017; Pek et al. 2023). One example of this is the virtual citizens’ assembly in Germany. In June 2022, its members debated the consequences of using artificial intelligence (Buergerrat.de 2022). These types of assemblies allow platform-specific topics to be discussed with the aim of ensuring that decision-making is more democratic. 

Although quite controversial (see above), platform councils can also develop potential for promoting democracy if they are able to operate independently, objectively and transparently (Haggart and Keller 2021; Rau et al. 2022). To ensure this, platform councils of this type could be based on the press and broadcasting councils that are already established in Germany, in line with the recommendations of Kettemann and Fertmann (2021). It should be noted, however, that responsibilities (geographical, practical), participants (citizens, experts, NGOs, political decision-makers) and not least powers (quasi-judicial, advisory) must be part of the social discourse and cannot yet be conclusively clarified (Cowls et al. 2022; Kettemann and Fertmann 2021). Furthermore, the more diverse and transparent their composition, and the more publicly visible the effects of their recommendations, the more such councils could boost public confidence.

Last but not least, the aim must also be to strengthen media literacy and policy competence by means of various training opportunities. These should be designed in such a way that individuals who are not (or no longer) associated with the education system are also able to benefit from them. Here it is vital to provide the necessary tools for dealing with fake news, other manipulated or extremist content and also hate speech on the Internet. One example to mention is the Good Gaming – Well Played Democracy project directed by the Amadeu Antonio Foundation, which aims to raise the gaming community’s awareness of extremist content, among other things.

In addition, it must be noted that building a democratic metaverse is not solely a task for citizens. The creation of a digital twin in the sense of a well-fortified democracy is also important. According to Rau et al. (2022), however, this does not exclusively mean the use of repressive measures such as the deletion or suppression of problematic content (see, for example, Bellanova and De Goede 2022) but also, coupled with this, the strengthening of democratic actors, e.g. through algorithmically increased visibility. In this context, the empowerment of marginalised democratic actor groups becomes especially important in order to adequately represent social diversity. Such groups are well trained to recognise problematic content at an early stage, for example, and can thus also be consulted for advice (Rau et al. 2022).

The use of counter speech could be another strategy for tackling extremist content in the metaverse (Clever et al. 2022; Hangartner et al. 2021; Kunst et al. 2021; Morten et al. 2020). The term (digital) counter speech refers to comments or other content posted in response to hate speech in order to minimise and weaken its impact or to support potential victims (Ernst et al. 2022; Garland et al. 2022). Studies have shown that counter speech can be an effective means of tackling and reducing extremist content (Garland et al. 2022; Hangartner et al. 2021). In the context of newer technological complexes such as AI, consideration is currently being given to implementing counter speech automatically in certain circumstances, although final concepts and responsibilities are still the subject of intensive discussion (Clever et al. 2022).

In addition to participatory methods, legislation can also be used to counter extremist content. In Germany, the dissemination of unconstitutional symbols and signs is forbidden and perpetrators can be prosecuted. Germany’s Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG) also provides a legal framework for dealing with hate crime on social media. At EU level, the Terrorist Content Online Regulation (European Union 2021) requires platform operators offering services in the EU to remove or block reported terrorist content within one hour. Recent findings in extremism research indicate, however, that so-called legal but harmful content is already proving to be a major challenge and is likely to be significant in the metaverse as well (Jiang et al. 2021; Rau et al. 2022). This includes, for example, digital content that may have a subtle radicalising effect but is not unlawful. It should be noted in this regard that content moderation must comply with the constitutional principle of free speech. Consequently, it is to be assumed that the ongoing discussion on the relationship between freedom and security will also significantly influence the design of the metaverse, and that this design will or must be the result of a negotiation process involving society as a whole in order to guarantee its democratic dimension.


If the immersiveness of the metaverse measures up to Mark Zuckerberg’s vision, it is very likely to have a huge impact on our everyday lives and on social interaction. This immersiveness would mean that the operators of the metaverse (or metaverses) would need to deal intensively with questions of democratisation. Not only would the state probably play a (yet to be defined) role in a metaverse, its users must also be enabled to participate democratically in it. This would help to make the platform inclusive and as safe as possible from malevolent actors.

Building on the social media research of recent decades, there are many common points of reference which can support and steer the design of a democratic metaverse. As mentioned above, the metaverse is still at an early stage of development. However, given the rapid pace of advancement, it is vital to support this process, stay on the ball and take an active role in discussions. A multi-perspective approach involving all stakeholders is also needed to ensure a balance between security and freedom for all users. The possibilities outlined here present some solutions for anchoring democratic pillars in a future metaverse. In summary, the following measures should be deployed by platform operators:

  • early implementation of methods for user participation, e.g. mini-publics or independent platform councils
  • strengthening of democratic actors and inclusion of marginalised groups
  • reference to existing scientific research findings on social media, hate speech and (digital) extremism, as well as open cooperation with research institutions
  • offer of educational opportunities in cooperation with democratic actors

The final, concrete implementation is currently still the subject of lively discussion. However, the metaverse’s early development phase encourages active participation, which is also reflected in this Immersive Democracy Project and can be understood as an invitation to join this process. Participation is not a panacea for the dangers that lurk in the digital space. But it is an important source of support that can help to empower marginalised groups or individuals in specific ways and thus give them the tools to work together with operators against discrimination and hate in the metaverse. Now is the time to develop these tools and make sure that a future metaverse is as safe and secure as possible for everyone.


Basu, Tanya (2021): The metaverse has a groping problem already. MIT Technology Review. Available online at https://www.technologyreview.com/2021/12/16/1042516/the-metaverse-has-a-groping-problem/, last checked on 28 September 2022.

von Behr, Ines; Reding, Anaïs; Edwards, Charlie; Gribbon, Luke (2013): Radicalisation in the Digital Era: The Use of the Internet in 15 Cases of Terrorism and Extremism. RAND Europe.

Bellanova, Rocco; De Goede, Marieke (2022): Co‐Producing Security: Platform Content Moderation and European Security Integration. In: JCMS: Journal of Common Market Studies 60 (5), pp. 1316–1334. https://doi.org/10.1111/jcms.13306

Bertram, Luke (2016): Terrorism, the Internet and the Social Media Advantage: Exploring how terrorist organizations exploit aspects of the internet, social media and how these same platforms could be used to counter-violent extremism. In: Journal for Deradicalization 2016 (7), pp. 225–252.

Bovermann, Philipp (2022): Online-Belästigungen im Metaverse – Am eigenen Leib. Süddeutsche Zeitung. Available online at https://www.sueddeutsche.de/kultur/metaverse-vr-virtual-reality-microsoft-sexuelle-belaestigung-1.5519527?print=true, last checked on 4 February 2022.

Buergerrat.de (2022): Bürgerrat diskutierte über künstliche Intelligenz. Buergerrat.de. Available online at https://www.buergerrat.de/aktuelles/buergerrat-diskutierte-ueber-kuenstliche-intelligenz/, last checked on 22 June 2023.

Bundtzen, Sara; Schwieter, Christian (2023): Datenzugang zu Social-Media-Plattformen für die Forschung: Lehren aus bisherigen Maßnahmen und Empfehlungen zur Stärkung von Initiativen inner- und außerhalb der EU. Berlin: Institute for Strategic Dialogue (ISD).

Clever, Lena; Klapproth, Johanna; Frischlich, Lena (2022): Automatisierte (Gegen-)Rede? Social Bots als digitales Sprachrohr ihrer Nutzer*innen. In: Julian Ernst, Michalina Trompeta and Hans-Joachim Roth (eds.): Gegenrede digital: Neue und alte Herausforderungen interkultureller Bildungsarbeit in Zeiten der Digitalisierung. Wiesbaden: Springer Fachmedien, (Interkulturelle Studien), pp. 11–26. https://doi.org/10.1007/978-3-658-36540-0_2

Conway, Maura (2017): Determining the Role of the Internet in Violent Extremism and Terrorism: Six Suggestions for Progressing Research. In: Studies in Conflict & Terrorism 40 (1), pp. 77–98. https://doi.org/10.1080/1057610X.2016.1157408

Cowls, Josh; Darius, Philipp; Santistevan, Dominiquo; Schramm, Moritz (2022): Constitutional metaphors: Facebook’s “supreme court” and the legitimation of platform governance. In: New Media & Society. https://doi.org/10.1177/14614448221085559

Cropley, David H.; Kaufman, James C.; Cropley, Arthur J. (2008): Malevolent Creativity: A Functional Model of Creativity in Terrorism and Crime. In: Creativity Research Journal 20 (2), pp. 105–115. https://doi.org/10.1080/10400410802059424

Diaz, Adriana (2022): Disturbing reports of sexual assaults in the metaverse: ‘It’s a free show’. New York Post. Available online at https://nypost.com/2022/05/27/women-are-being-sexually-assaulted-in-the-metaverse/, last checked on 28 September 2022.

Droogan, Julian; Waldek, Lise; Blackhall, Ryan (2018): Innovation and terror: an analysis of the use of social media by terror-related groups in the Asia Pacific. In: Journal of Policing, Intelligence and Counter Terrorism 13 (2), pp. 170–184. https://doi.org/10.1080/18335330.2018.1476773

Engelmann, Severin; Grossklags, Jens; Herzog, Lisa (2020): Should users participate in governing social media? Philosophical and technical considerations of democratic social media. In: First Monday. https://doi.org/10.5210/fm.v25i12.10525

Ernst, Julian; Trompeta, Michalina; Roth, Hans-Joachim (2022): Gegenrede digital – Einleitung in den Band. In: Julian Ernst, Michalina Trompeta and Hans-Joachim Roth (eds.): Gegenrede digital. Wiesbaden: Springer Fachmedien Wiesbaden, (Interkulturelle Studien), pp. 1–7. https://doi.org/10.1007/978-3-658-36540-0_1

Escobar, Oliver; Elstub, Stephen (2017): Forms of Mini-Publics: An introduction to deliberative innovations in democratic practice. (Research and Development Note) New Democracy.

European Union (2021): Regulation (EU) 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online (Text with EEA relevance). OJ L.

Garland, Joshua; Ghazi-Zahedi, Keyan; Young, Jean-Gabriel; Hébert-Dufresne, Laurent; Galesic, Mirta (2022): Impact and dynamics of hate and counter speech online. In: EPJ Data Science 11 (1), p. 3. https://doi.org/10.1140/epjds/s13688-021-00314-6

Gill, Paul; Corner, Emily; Conway, Maura; Thornton, Amy; Bloom, Mia; Horgan, John (2017): Terrorist Use of the Internet by the Numbers: Quantifying Behaviors, Patterns, and Processes. In: Criminology & Public Policy 16 (1), pp. 99–117. https://doi.org/10.1111/1745-9133.12249

Gillespie, Tarleton (2020): Content moderation, AI, and the question of scale. In: Big Data & Society 7 (2). https://doi.org/10.1177/2053951720943234

Gorwa, Robert; Binns, Reuben; Katzenbach, Christian (2020): Algorithmic content moderation: Technical and political challenges in the automation of platform governance. In: Big Data & Society 7 (1). https://doi.org/10.1177/2053951719897945

Haggart, Blayne; Keller, Clara Iglesias (2021): Democratic legitimacy in global platform governance. In: Telecommunications Policy 45 (6). https://doi.org/10.1016/j.telpol.2021.102152

Hamid, Nafees; Ariza, Cristina (2022): Offline Versus Online Radicalisation: Which is the Bigger Threat? London: Global Network on Extremism & Technology.

Hangartner, Dominik et al. (2021): Empathy-based counterspeech can reduce racist hate speech in a social media field experiment. In: Proceedings of the National Academy of Sciences 118 (50). https://doi.org/10.1073/pnas.2116310118

Jiang, Jialun Aaron; Scheuerman, Morgan Klaus; Fiesler, Casey; Brubaker, Jed R. (2021): Understanding international perceptions of the severity of harmful content online. In: PLOS ONE 16 (8). https://doi.org/10.1371/journal.pone.0256762

Kettemann, Matthias C.; Fertmann, Martin (2021): Die Demokratie Plattformfest Machen: Social Media Councils als Werkzeug zur gesellschaftlichen Rückbindung der privaten Ordnungen digitaler Plattformen. Potsdam-Babelsberg: Friedrich-Naumann-Stiftung.

Kunst, Marlene; Porten-Cheé, Pablo; Emmer, Martin; Eilders, Christiane (2021): Do “Good Citizens” fight hate speech online? Effects of solidarity citizenship norms on user responses to hate comments. In: Journal of Information Technology & Politics 18 (3), pp. 258–273. https://doi.org/10.1080/19331681.2020.1871149

Llansó, Emma J (2020): No amount of “AI” in content moderation will solve filtering’s prior-restraint problem. In: Big Data & Society 7 (1). https://doi.org/10.1177/2053951720920686

Meineck, Sebastian (2021): Trusted Flagger: YouTube serviert freiwillige Helfer:innen ab. netzpolitik.org. Available online at https://netzpolitik.org/2021/trusted-flagger-youtube-serviert-freiwillige-helferinnen-ab/, last checked on 27 June 2023.

Morten, Anna; Frischlich, Lena; Rieger, Diana (2020): Gegenbotschaften als Baustein der Extremismusprävention. In: Josephine B. Schmitt, Julian Ernst, Diana Rieger and Hans-Joachim Roth (eds.): Propaganda und Prävention. Wiesbaden: Springer Fachmedien Wiesbaden, pp. 581–589. https://doi.org/10.1007/978-3-658-28538-8_32

Neuberger, Christoph (2023): Sicherheit und Freiheit in der digitalen Öffentlichkeit. In: Nicole J. Saam and Heiner Bielefeldt (eds.): Sozialtheorie. Bielefeld: transcript Verlag, pp. 297–308.

Pek, Simon; Mena, Sébastien; Lyons, Brent (2023): The Role of Deliberative Mini-Publics in Improving the Deliberative Capacity of Multi-Stakeholder Initiatives. In: Business Ethics Quarterly 33 (1), pp. 102–145. https://doi.org/10.1017/beq.2022.20

Rau, Jan; Kero, Sandra; Hofmann, Vincent; Dinar, Christina; Heldt, Amélie P. (2022): Rechtsextreme Online-Kommunikation in Krisenzeiten: Herausforderungen und Interventionsmöglichkeiten aus Sicht der Rechtsextremismus- und Platform-Governance-Forschung. In: Arbeitspapiere des Hans-Bredow-Instituts SSOAR – GESIS Leibniz Institute for the Social Sciences. https://doi.org/10.21241/SSOAR.78072

Smith, Graham; Setälä, Maija (2018): Mini-Publics and Deliberative Democracy. In: Andre Bächtiger, John S. Dryzek, Jane Mansbridge and Mark Warren (eds.): The Oxford Handbook of Deliberative Democracy. Oxford University Press, pp. 299–314. https://doi.org/10.1093/oxfordhb/9780198747369.013.27

The Associated Press (2022): Musk’s Twitter has dissolved its Trust and Safety Council. National Public Radio (NPR). Washington, DC, 12 December 2022.

Wiederhold, Brenda K. (2022): Sexual Harassment in the Metaverse. In: Cyberpsychology, Behavior, and Social Networking 25 (8), pp. 479–480. https://doi.org/10.1089/cyber.2022.29253.editorial
