OpenAI Forms Superalignment, a Dream Team to Tackle Superintelligence Alignment
In a recent blog post, the ChatGPT maker announced that it is forming a team of skilled machine learning researchers and engineers to address the challenge of aligning superintelligence, and has allocated 20% of its computing resources to the effort over the next four years. Co-led by Ilya Sutskever, OpenAI’s co-founder and Chief Scientist, and Jan Leike, the Head of Alignment, the new division’s primary focus is to solve the core technical problems of superintelligence alignment within that four-year timeframe. The team comprises researchers and engineers from OpenAI’s previous alignment team as well as experts from other departments within the company.
OpenAI intends to share the outcomes of this work widely and considers contributing to the alignment and safety of non-OpenAI models an important part of its mission. The new team’s work complements OpenAI’s ongoing efforts to improve the safety of current models like ChatGPT and to address other AI-related risks, including misuse, economic disruption, disinformation, bias and discrimination, and addiction and overreliance, among others. While the new team’s focus is on the machine learning challenges of aligning superintelligent AI systems with human intent, it will actively engage with interdisciplinary experts to ensure its technical solutions account for broader human and societal concerns.
Superintelligence could help address many global challenges, but it also carries risks of human disempowerment or even extinction, and current techniques for controlling AI systems are insufficient for the task.
OpenAI is Hiring
To make this mission come to fruition, OpenAI is actively hiring research engineers, research scientists, and research managers.
Research Engineer: OpenAI is hiring Research Engineers with an annual salary range of $245,000 to $450,000. The role involves writing performant code for machine learning (ML) training, conducting and analyzing ML experiments, collaborating with a small team, and planning future experiments. Responsibilities also include exploring scalable oversight techniques, studying generalization, managing datasets, investigating reward signal issues, predicting model behaviours, and designing approaches for alignment research. Candidates should be aligned with OpenAI’s mission, have strong engineering skills, be curious about ML models, and enjoy a fast-paced research environment. ML algorithm implementation and data visualization skills are desirable, and ensuring human control over AI systems is a priority.
Research Scientist: In this role, you’ll develop innovative machine-learning techniques, collaborate with colleagues, and contribute to the company’s research vision. Responsibilities include designing experiments for alignment research, studying generalization, managing datasets, exploring model behaviours, and designing novel approaches. The ideal candidate is aligned with OpenAI’s mission, has a track record of ML innovation, can pursue research independently, and is motivated to address the challenge of aligning AI models. Experience with ML algorithms and data visualization is preferred.
Research Manager: As the research lead, you’ll oversee a team of research scientists and engineers focused on aligning superintelligence and studying generalization. The role involves planning and executing research projects, mentoring team members, and fostering a diverse and inclusive culture. Leadership experience in research, alignment expertise, and a passion for OpenAI’s mission are desired. The annual salary range is $420,000 to $500,000. Adaptability, a hunger for learning, and a commitment to improving culture and diversity are also important qualities.
This news comes against the backdrop of AI regulation becoming a hot topic around the world, with the risks of advanced AI being compared to those of nuclear war. OpenAI CEO Sam Altman has also testified before the US Senate on the subject.
Additionally, OpenAI has launched a program to fund experiments aimed at democratising the rules governing AI, offering $1 million in grants to those who contribute the most to addressing these questions.
Coming soon after Sam Altman’s world tour, it looks like OpenAI has an awe-inspiring arsenal poised for revelation.




