Here’s an important and arguably invaluable ingredient in the glue that holds society together: Google makes it moderately difficult to learn how to commit terrorism. The first few pages of results for a Google search on how to build a bomb, or how to commit murder, or how to unleash a biological or chemical weapon, don’t really tell you much about how to do it.

It is not impossible to learn these things from the internet. People have successfully built working bombs out of public information. Scientists have warned others against publishing blueprints for deadly viruses because of similar fears. But while the information is certainly out there online, learning how to kill a bunch of people isn’t simple, thanks to the concerted efforts of Google and other search engines.

How many lives does it save? That is difficult to answer. It’s not like we could responsibly run a controlled experiment where sometimes it’s easy to look up instructions on how to commit major atrocities and sometimes it’s not.

But it turns out we may be irresponsibly running an uncontrolled experiment on exactly this, thanks to rapid advances in large language models (LLMs).

Security through obscurity

When they were first released, AI systems like ChatGPT were generally willing to give precise, correct instructions on how to carry out a biological weapons attack or build a bomb. Over time, OpenAI has mostly corrected that tendency. But a class exercise at MIT, written up in a preprint earlier this month and covered last week in Science, found that it was easy for groups of undergraduates with no relevant background in biology to get detailed biological weapons suggestions from AI systems.

“In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization,” says the paper, whose authors include MIT biosecurity expert Kevin Esvelt.

To be clear, building a bioweapon requires a great deal of detailed work and specialized skill, and ChatGPT’s instructions are probably far too incomplete to actually enable non-virologists to do it, at least for now. But as that kind of information becomes more readily available, it seems worth asking: Is security through obscurity a sustainable approach to preventing future mass atrocities?

In almost every respect, greater access to information, in-depth personalized tutoring, tailored advice, and the other benefits we expect from language models are great news. But when that personal tutor is advising users on how to commit acts of terrorism, it’s not such great news.

It seems to me, though, that the problem can be tackled from both ends.

Managing information in an AI world

“We need better control of all chokepoints,” Jaime Yassif of the Nuclear Threat Initiative told Science. It should be harder to get an AI system to give detailed instructions for building bioweapons. But also, many of the security weaknesses that the AI systems inadvertently exposed, such as pointing users toward DNA synthesis companies that don’t screen orders and are therefore more likely to fulfill a request for the material needed to create a dangerous virus, are fixable!

We could require all DNA synthesis companies to screen orders in all cases. We could remove papers about dangerous viruses from the training data of powerful AI systems, a solution Esvelt supports. And we could be more careful in the future about publishing papers that give detailed recipes for making deadly viruses.

The good news is that serious players in the biotech world are beginning to take this threat seriously. Ginkgo Bioworks, a leading synthetic biology company, has partnered with US intelligence agencies to develop software that can analyze engineered DNA at scale, giving researchers the means to fingerprint synthetic pathogens. That alliance demonstrates the ways in which advanced technology can protect the world from the malign effects of … advanced technology.

Artificial intelligence and biotechnology both have the potential to be enormous forces for good in the world. And managing the risks from one can also help manage the risks from the other: making it harder to create deadly plagues protects against some AI disasters just as it protects against human-caused ones. What matters is that, instead of letting detailed instructions for bioterrorism come online as a natural experiment, we stay proactive and make sure that printing biological weapons remains hard enough that no one can stumble into doing it, whether with ChatGPT’s help or not.

A version of this story was originally published in the Future Perfect newsletter. Sign up here to subscribe!
