OpenAI Unveils Collective Alignment Team to Tackle Bias in Democratic AI Governance

Yusuf Balogun
Yusuf is a law graduate and freelance journalist with a keen interest in tech reporting.

In a bid to enhance transparency and inclusivity in artificial intelligence (AI) development, OpenAI has announced the establishment of a new research team, “Collective Alignment,” tasked with building democratic processes for AI governance and tackling bias and ethical concerns.

The initiative continues the grant program OpenAI introduced in May 2023 to fund experiments in democratic processes, with a focus on mitigating bias and addressing ethical concerns in AI systems. The lab disclosed the news in a blog post published on its website today, in which it discussed the findings of the experiments it funded.

“We funded 10 teams from around the world to design ideas and tools to collectively govern AI. We summarize the innovations, outline our learnings, and call for researchers and engineers to join us as we continue this work,” OpenAI said in the blog post.

Known for ChatGPT’s rapid success, OpenAI now faces questions about the abuse of AI technology. With the 2024 U.S. election approaching, concerns have been raised about AI’s ability to produce deepfakes and misinformation, particularly in sensitive contexts such as political campaigns.

Critics of ChatGPT contend that, because of the data used for training, AI systems may be inherently biased and produce racist or sexist outputs. The creation of the Collective Alignment team signals OpenAI’s resolve to address these issues proactively.

Addressing AI Bias and Governance Concerns with the Collective Alignment Team

The newly formed “Collective Alignment” team at OpenAI is dedicated to addressing concerns about bias and governance in AI development. Tyna Eloundou, a research engineer and founding member of the team, emphasizes the crucial role of public input in shaping how AI software is governed, especially as AI technologies become more integrated into society. The team’s mission is to build robust democratic processes capable of identifying and addressing potential biases and other ethical factors in AI systems.

OpenAI is also exploring possible collaborations to ensure that only humans take part in the input process. One potential partner is Worldcoin, a cryptocurrency project co-founded by OpenAI CEO Sam Altman, which offers a way to distinguish contributions made by humans from those made by AI. While specific plans have not been finalized, the team is exploring how a Worldcoin integration could strengthen the democratic input process.

The Collective Alignment team currently consists of Tyna Eloundou and Teddy Lee, and it is actively recruiting two more members: a research scientist and a research engineer. The group will work closely with other OpenAI research teams, as well as the “Human Data” team, which is responsible for building the infrastructure needed to gather human feedback on the company’s AI models.
