I'm only 22 years old and I dread the way Artificial Intelligence will disrupt the future. When I was growing up, Artificial Intelligence lived in the realm of science fiction. I remember being in awe of Iron Man's AI system Jarvis as it helped fight off aliens, but laughing at dumb NPCs (non-playable characters) in video games or joking with my dad about how scratchy and unhuman-like virtual assistants like Siri sounded. The "real" AIs could only be found as Star Wars' C-3PO and the like and were discussed mainly by nerds like me. More punchline than reality, AI was nowhere near the top of political agendas. But today, as a 22-year-old recent college graduate, I'm watching the AI revolution happen in real time, and I'm terrified world leaders aren't keeping pace.
That is how Sunny Gandhi opens his Fortune story. He is one of the many young people witnessing how AI tools are fueling academic dishonesty, sexual harassment, political misinformation, workforce disruption, and addictive relationships.
Artificial Intelligence has significantly transformed our society, touching every aspect of our lives and offering innovative solutions to difficult challenges. However, alongside these positive contributions lies a trail of concerns and negative consequences. As a result, there has been a growing outcry for proper regulations towards safer AI development and use. Among the groups leading this charge is Generation Z (Gen Z), with 19-year-old Sneha Revanur emerging as a prominent voice.
Sneha Revanur has become one of the leading Gen Z voices calling for AI guardrails. She is the founder and president of Encode Justice, the world's first and largest youth movement for safe and fair AI. Along with Sunny Gandhi, vice president of policy, and about 1,000 young people worldwide, Encode Justice is demanding a say in discussions with policymakers and companies about implementing measures for safer AI technology that benefits young people. "We are the next generation of users, consumers, advocates, and developers, and we deserve a seat at the table," Revanur states.
While Sneha Revanur has had the opportunity to speak at high-profile events, including at the White House and various summits, her team has also taken bold steps to address the lack of safer AI regulation. They have developed a manifesto called AI 2030, which Sunny Gandhi describes as "a comprehensive policy platform encompassing the key issues of today with the goal of garnering support across academia, the public sector and civil society." The AI 2030 agenda is the world's first intergenerational call for global AI governance.
Encode Justice believes in the potential of AI but calls on world leaders to act now to bend the arc towards truth, safety, and justice in AI development by 2030. The agenda focuses on five key areas, aiming to shape a future where AI benefits everyone while minimizing potential risks:
Build Trust and Human Connection
This section addresses the risks of AI-generated content, particularly its potential to spread misinformation and non-consensual content, with harms felt across both the political and social spheres. The agenda calls for:
- Increased privacy and transparency in AI systems
- Better user consent, control and understanding of AI risks
- Greater government regulation and company accountability
As the AI 2030 agenda states, “Without intervention, the lines between real and artificial, between human- and machine-generated, and between truth and deception will completely blur – and it is our generation that will suffer the most.”
Protect Our Fundamental Rights and Freedoms
This part strongly emphasizes safeguarding individual rights, promoting fairness, and ensuring accountability in AI systems that impact people's lives. Key points include:
- Proactive research and collaboration in safer AI development
- Continuous monitoring of AI’s real-world impact
- Evaluating systems for fairness and non-discrimination
- Providing legal frameworks for individuals to seek redress when their rights are violated
The AI 2030 agenda emphasizes that “It is not enough for AI to be fair – we must reimagine how, where, and on whom we choose to use it.”
Secure Our Economic Future
This section advocates for governments to shift their focus from maximizing AI capabilities to using AI as a tool for human empowerment and economic equality. It calls for:
- Government leadership in shaping AI development
- Mitigating potential negative impacts on employment and economic structures
The AI 2030 agenda argues, "If we instead aspire to a world in which AI drives economic gains for all and unlocks time for the activities we find most meaningful – while uplifting, rather than superseding, humans – leaders must intervene."
Ban Fully Automated Weapons of Destruction
The AI 2030 agenda highlights the unpredictable nature of autonomous weapon systems, stating, “autonomous weapon systems can behave unpredictably, malfunction, or be hacked or misused, potentially resulting in unintended civilian casualties or indiscriminate attacks on non-military targets.” It calls for:
- An international treaty prohibiting the creation, manufacturing, and use of autonomous weapons
- Redirecting efforts towards positive AI applications
- Encouraging both public and private sectors to invest in AI for peacekeeping and conflict resolution
Cooperate for a Safer Today and Tomorrow
This section emphasizes the need for global cooperation in AI governance. It proposes:
- Establishing a global authority to manage AI risks, especially for foundation models
- Creating global institutions for AI safety, similar to CERN
- Urging major AI-developing nations to implement domestic regulations to prevent hazardous AI outcomes
The AI 2030 agenda stresses, “As with climate change and nuclear nonproliferation, we need international coordination to govern AI.”
These calls to action, if implemented, would likely lead to a more secure AI ecosystem. From a cybersecurity standpoint, robust security measures are needed in AI systems, particularly in areas like deepfake detection, protection against AI-powered cyberattacks, and safeguarding the personal data used in AI training. Gen Z advocates emphasize the need for proactive security measures, increased transparency, and global cooperation, all of which are crucial for addressing the unique cybersecurity challenges posed by AI technologies.