The bioweapons story hidden amidst the OpenAI for-profit news


Big news for those who track artificial general intelligence – or AGI, AI that matches human intelligence across the board. OpenAI, which describes its mission as to “ensure that AGI benefits all of humanity,” completed its long-planned corporate restructuring yesterday. It could fundamentally change how we deal with the risks of AI, particularly biological ones.

First, a quick refresher: OpenAI was originally founded as a nonprofit in 2015, but gained a for-profit arm four years later. The nonprofit is now called the OpenAI Foundation, and the for-profit subsidiary is now a public benefit corporation called OpenAI Group. (PBCs, unlike other corporate structures, have a legal obligation to balance mission and profit.) The foundation will continue to control OpenAI Group, holding a 26 percent stake valued at about $130 billion at the completion of the recapitalization. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)

“We believe that the world’s most powerful technology must be developed in a way that reflects the world’s collective interests,” OpenAI wrote in a blog post.

One of OpenAI’s first moves – besides the big Microsoft deal – is the foundation’s commitment of $25 billion to accelerating health research and supporting “practical technical solutions for AI resilience,” which is about maximizing the benefits of AI and minimizing its risks.


Maximizing benefits and minimizing risks is the essential challenge in developing advanced AI, and no field embodies this knife edge better than the life sciences. AI in biology and medicine can improve disease detection, strengthen outbreak response, and accelerate the discovery of new treatments and vaccines. However, many experts believe that one of the biggest risks of advanced AI is its potential to help create dangerous biological agents, lowering the barrier to entry for launching lethal bioweapon attacks.

And OpenAI is well aware that its tools could be misused to create bioweapons.

The pioneering AI company has built safeguards into its ChatGPT agent, but we are still at the beginning of what AI bio capabilities can enable. That is why another recent piece of news – the OpenAI Startup Fund, together with Lux Capital and Founders Fund, provided $30 million in seed funding for New York biodefense startup Valthos – could prove almost as important as the company’s complex corporate restructuring.

Valthos wants to build the next-generation “tech stack” for biodefense – and quickly. “As AI has advanced, life itself has become programmable,” the company wrote in an introductory blog post after emerging from stealth last Friday. “The world is moving closer to near-universal access to powerful dual-use biotechnologies capable of eliminating or creating disease.”

You may be wondering whether the best course of action is to put the brakes on these tools with their catastrophic, destructive potential. But that’s unrealistic at a time when we’re making progress – and investment – in AI at an ever-increasing rate. Ultimately, what matters here is whether the AI we develop mitigates the risks posed by the AI we develop. This question becomes even more important as OpenAI and others move toward AGI.

Can AI protect us from AI risks?

Valthos envisions a future in which any biological threat to humanity “can be instantly identified and neutralized, regardless of whether the origin is external or within our own bodies. We are building AI systems to rapidly characterize biological sequences and update medicines in real time.”

This could allow us to respond more quickly to outbreaks and potentially prevent epidemics from becoming pandemics. We could repurpose therapeutics and develop new medicines in record time, helping scores of people with conditions that are difficult to treat effectively.

We're not even close to AGI in biology (or anything like it), but we don't need to be for the risks to be serious: AI bio capabilities could enable the deliberate design of new pathogens deadlier than anything in nature, which could then be released intentionally or accidentally. Efforts like Valthos's are a step in the right direction, but AI companies still have a long road ahead.

“I am very optimistic about the upside and benefits that society can derive from AI bio capabilities,” said Jaime Yassif, vice president for global biological policy and programs at the Nuclear Threat Initiative. “At the same time, however, it is important that we develop and use these tools responsibly.”

(Disclosure: I previously worked at NTI.)

However, Yassif argues that there is still much work to be done to improve the predictive power of AI tools for biology.

And AI cannot deliver its benefits in isolation – it requires continued investment in the other structures that drive change. AI is part of a broader ecosystem of biotechnological innovation. Researchers still need to do extensive wet lab work, conduct clinical trials, and evaluate the safety and effectiveness of new therapeutics and vaccines. They must also deliver these medical countermeasures to the populations that need them most, a process notoriously bogged down by bureaucracy and funding problems.

Bad actors, on the other hand, can act here and now, harming millions of people far faster than AI's benefits can be realized, especially if there are no smart ways to intervene. That's why it's so important that safeguards against the misuse of beneficial tools a) exist in the first place and b) keep pace with rapid technological progress.

SaferAI, which assesses the risk management practices of top AI companies, ranks OpenAI as having the second-best framework, after Anthropic. But everyone still has more work to do. “It’s not just about who’s at the top,” Yassif said. “I think everyone should do more.”

As OpenAI and others move closer to AI that is smarter than humans, the question of how to maximize the benefits of biology and minimize the risks has never been more important. As the tools to redesign life itself become more sophisticated, we need greater investment in AI biodefense and biosecurity across the board. Therefore, I hope that using AI to address the risks posed by AI will pay off.


