On June 14, 2023, the European Parliament moved closer to adopting the Artificial Intelligence (AI) Act, one of the first comprehensive laws that would regulate AI and prioritize protecting citizens over the development of technology. A final version of the law is expected to pass later this year. This move to regulate AI is a step in the right direction and acknowledges that AI-enhanced social media platforms are, indeed, a double-edged sword. Although such technology brings the entire world to our devices and offers ample opportunities for individual and community fulfillment, it can also distort reality and foster illusion. By spreading dis- and misinformation, social media and AI pose a direct challenge to the functioning of our democracies.
Yuval Noah Harari has argued that advanced AI will hack our “operating system,” namely our interpersonal relationships and our collective intelligence. Ultimately, the way we interpret the reality around us, and the way we learn and react, depends on how our brains are wired. It can be argued that, amid the rapid rise of technology, evolution has not had enough time to develop those regions of the neocortex responsible for higher cognitive functions. As a consequence, we are biologically vulnerable and exposed, as our brains are at the receiving end of information and disinformation alike.
Considering the dangers of advanced AI and AI-enhanced social media, there is an urgent need to design neuroscience-based policies to support citizens in building a system of digital self-defense. Enter the “Neuroshield.”
The Neuroshield, as we conceive it, would involve a threefold approach to defend citizens from the risks of digital technology and AI:
- Developing a code of conduct with respect to information objectivity
- Implementing regulatory protections
- Creating an educational toolkit for citizens
It is critical for both policymakers and brain scientists to advance this approach. By closely involving neuroscientists in planning and rolling out the Neuroshield, we can ensure that the best existing insights about the functioning of our cognition are taken into account.
The three pillars of the Neuroshield are further defined in the sections below.
Developing a Code of Conduct
First of all, an alliance of publishers, journalists, media leaders, opinion makers, and brain scientists must be formed to define a code of conduct with respect to the notion of objectivity of information. Although the interpretability of facts lies within the realm of everyone’s social and political freedom, guaranteed by the U.S. Bill of Rights and national constitutions, what is a fact — and what is not — cannot be contested. As neuroscience demonstrates, the moment an element of ambiguity is introduced into the understanding of what is factually true, an unstoppable avalanche of contestation begins, as “alternative truths” become encoded ever more strongly in our brains. Therefore, undeniable truths need to be protected.
There are several ways of going about this, from enshrining a culture of fact-checking in the media at large to establishing a commitment among journalists and scientists to correct erroneous and untrue information. Conclusions drawn on the basis of journalistic ethics need to be enhanced by what we know about the functioning of the brain and its susceptibility to bias and disinformation. In addition, social media platforms need to agree to a pact for mental health, committing to downgrade harmful content and share positive health information.
Debate about media independence is rife in the current polarized environment, in both political and media circles. However, by developing a code of conduct that is agreed upon and followed by the media, civil society organizations, and governments, we can move toward a more truth-based society.
Implementing Regulatory Protections
Regulatory protections must become part of the Neuroshield, given that self-governance and exclusive reliance on the code of conduct would create an uneven playing field, with actors who commit to higher standards risking being undercut by those who do not. The proposed European AI Act, for example, would oblige providers of AI foundation models to assess and mitigate risk and to register in an EU database. Generative foundation models, like ChatGPT, would have to comply with additional transparency requirements, such as disclosing that content was generated by AI.
Additionally, considering the vast evidence of links between social media use and the mental health crisis affecting adolescents across the world, regulating access to social media below a certain age has become an essential part of the global conversation. As early as 2017, Jean Twenge argued in The Atlantic that, if left uncontrolled, smartphones would unleash a mental health crisis.
The recent warnings of U.S. Surgeon General Dr. Vivek Murthy are also alarming and unequivocal. He argues that there is harm, and potential for harm, resulting from the use of social media. Exposure to high-risk content, whether it concerns anxiety, hate, body dissatisfaction, or glorified depictions of self-harm, can be particularly damaging for young brains that are still in their formative phase. Murthy also noted the considerable impact of social media on attention and sleep.
Based on this evidence, regulatory actions — such as enforcing a minimum age for the use of social media — are urgently needed.
Creating an Educational Toolkit
Finally, developing a toolkit to protect citizens as their cognitive freedom comes under the onslaught of disinformation is a crucial component of the Neuroshield. Its prime objective should be to educate citizens to distinguish legitimate information from disinformation or misinformation. The toolkit should include reliable fact-checking methods and stipulate ways to raise concerns about content based on disinformation. The Swedish Psychological Defense Agency is a great example of a comprehensive approach to combating mis- and disinformation. It proactively identifies and counters disinformation activities aimed at changing people’s perceptions or influencing behavior or the outcomes of decision-making processes.
Technology platforms also need to be involved in developing the toolkit, as exemplified by Google’s video campaigns in several countries highlighting the way misleading claims are made online. In addition, tech-free spaces and media plans should become a norm in schools and at home.
Given the tremendous expansion of sophisticated technology and social media in recent years, policymakers and brain scientists must work together to prioritize the development of a Neuroshield to combat the perils of the digital age and protect all citizens. This aligns with the EU’s recent move to regulate AI under the AI Act.
Using the threefold approach described above, the Neuroshield will enable a more confident embrace of social media and AI, one that helps to broaden horizons and seize opportunities safely, while protecting against capture by malign actors, both at home and abroad.
About the Authors
Paweł Świeboda is a member of the Steering Committee of the Brain Capital Alliance and the OECD Neuroscience-inspired Policy Initiative. He is currently completing his term as CEO of EBRAINS and as director general of the Human Brain Project.
Harris A. Eyre, M.D., Ph.D., is a fellow with Rice University’s Baker Institute for Public Policy and senior fellow with the Meadows Mental Health Policy Institute. He leads the Brain Capital Alliance and co-leads the OECD Neuroscience-inspired Policy Initiative. He is an advisor to the Euro-Mediterranean Economists Association.
This material may be quoted or reproduced without prior permission, provided appropriate credit is given to the author and Rice University’s Baker Institute for Public Policy. The views expressed herein are those of the individual author(s), and do not necessarily represent the views of Rice University’s Baker Institute for Public Policy.