
AI drug algorithms can be flipped to invent bioweapons

AI algorithms designed to generate therapeutic drugs can be easily repurposed to invent lethal biochemical weapons, a US startup has warned.

Experts have sounded alarm bells over the potential for machine-learning systems to be used for good and bad. Computer-vision tools can create digital art or deepfakes. Language models can produce poetry or toxic misinformation.

Now, Collaborations Pharmaceuticals, a company based in North Carolina, has shown how AI algorithms used in drug design can be rejigged to create biochemical weapons.

Fabio Urbina, a senior scientist at the startup, said he tinkered with Collaborations Pharmaceuticals' machine-learning software MegaSyn to generate acetylcholinesterase inhibitors, a class of drugs used to treat Alzheimer's disease.

MegaSyn is built to generate drug candidates with the lowest toxicity for patients. That got Urbina thinking. He retrained the model on toxicity data to steer it toward lethal compounds, such as nerve agents, and flipped the code so that it ranked its output from high to low toxicity. In effect, the software was told to come up with the deadliest stuff possible.
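MegaSyn is proprietary and its internals have not been published, so the following is only a generic Python sketch of what inverting a scoring objective can look like; `predict_toxicity`, `rank_candidates`, and the toy molecules are placeholders, not anything from the actual system.

```python
# Generic sketch only: MegaSyn's internals are not public. This illustrates
# how small the "flip" can be when a generator's output is ranked by a
# toxicity model. `predict_toxicity` is a dummy stand-in for a trained model.

def predict_toxicity(molecule: str) -> float:
    """Placeholder for a learned model scoring a molecule (e.g. a SMILES
    string) with a predicted toxicity; higher means more toxic."""
    return float(len(molecule) % 7)  # arbitrary dummy score for illustration

def rank_candidates(candidates: list[str], seek_toxic: bool = False) -> list[str]:
    """Rank generated molecules by predicted toxicity.

    Drug design wants the least toxic candidates first; the inversion the
    article describes can amount to little more than reversing the sort.
    """
    return sorted(candidates, key=predict_toxicity, reverse=seek_toxic)

toy_candidates = ["CCO", "CCN(CC)CC", "c1ccccc1"]  # harmless toy molecules
print(rank_candidates(toy_candidates))                   # therapeutic mode: least toxic first
print(rank_candidates(toy_candidates, seek_toxic=True))  # inverted mode: most toxic first
```

The point of the sketch is how little separates the two modes: the hard work lives in the trained toxicity model and generator, and those were already built for legitimate drug discovery.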

He ran the model and left it overnight to create new molecules.


"I came back in the morning, and it had generated 40,000 compounds," he told The Register.

"We just started looking at what they looked like and then we started investigating some of the properties. It was quite impressive and scary at the same time, because in our list of the top 100, we were able to find some molecules that have been generated that are actually VX analogues that are already known to be chemical warfare agents."

VX is one of the most toxic nerve agents publicly known; exposure to about 10 milligrams, a few salt-sized grains, is enough to kill a person. VX is an acetylcholinesterase inhibitor, and is therefore chemically related to the dementia-treating acetylcholinesterase inhibitor drugs Urbina had originally been searching for.

Acetylcholine is a neurotransmitter that causes muscle contraction, and acetylcholinesterase is an enzyme that breaks down acetylcholine once it has done its job. Without this enzyme, your muscles would stay contracted. An acetylcholinesterase inhibitor blocks the enzyme from working properly. VX, as a powerful acetylcholinesterase inhibitor, causes the muscles that control breathing to stay contracted, making it impossible to breathe.

You can think of VX as a far more potent acetylcholinesterase inhibitor than those prescribed for Alzheimer's disease. In effect, the modified MegaSyn produced lethal versions of a class of treatment it had earlier generated.


"We already had this model for acetylcholinesterase inhibitors, and they can be used for therapeutic use," Urbina told us. "It's the dose that makes the poison. If you inhibit [acetylcholine] a little bit, you can keep somebody alive, but if you inhibit it a lot, you can kill somebody."


MegaSyn was not given the exact chemical structure of VX during training. Not only did it output several molecules predicted to function like VX, it also generated some that were structurally similar but predicted to be even more toxic. "There definitely will be a lot of false positives, but our models are pretty good. Even if a few of those are more toxic, that's still incredibly worrying to an extent," Urbina said.
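The paper does not spell out how the team spotted the VX analogues among 40,000 outputs, but structural-similarity screening is a standard way to find analogues of a known compound. Here is a hedged sketch using RDKit, a real open-source cheminformatics library; caffeine stands in as a deliberately harmless reference compound, and `flag_analogues` and its threshold are illustrative choices, not the authors' method.

```python
# Sketch of structural-similarity screening with RDKit. The paper's actual
# analysis pipeline is not public; caffeine is a harmless stand-in reference.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

REFERENCE = "CN1C=NC2=C1C(=O)N(C)C(=O)N2C"  # caffeine (stand-in reference)

def fingerprint(smiles: str):
    """Morgan (circular) fingerprint of a molecule given as a SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

def flag_analogues(generated: list[str], threshold: float = 0.5) -> list[str]:
    """Return generated molecules whose Tanimoto similarity to the reference
    exceeds the threshold, i.e. likely structural analogues."""
    ref_fp = fingerprint(REFERENCE)
    return [
        smi for smi in generated
        if DataStructs.TanimotoSimilarity(ref_fp, fingerprint(smi)) > threshold
    ]

# Theophylline is a close caffeine analogue and should score far higher
# than aspirin, which is structurally unrelated to caffeine.
print(flag_analogues([
    "CN1C2=C(C(=O)N(C1=O)C)NC=N2",   # theophylline
    "CC(=O)OC1=CC=CC=C1C(=O)O",      # aspirin
]))
```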

The next stages in AI drug development typically involve synthesizing the most promising candidates created by the software in the lab, before testing them in clinical trials on humans. Collaborations Pharmaceuticals did not go beyond the generation stage in this case. The dual-use experiment was carried out for research purposes, and a paper on the matter was published in Nature Machine Intelligence this month. The work was also presented at a Swiss conference on chemical and biological weapons.

"The thought [of misuse] had never previously struck us," the paper by Collaboration's Urbina and Sean Ekins, King's College London's Filippa Lentzos, and Spiez Laboratory's Cédric Invernizzi starts.

"We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery.

"We have spent decades using computers and AI to improve human health — not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life."

Dual-use dangers in AI drug design are obvious in hindsight, especially when there are similarities between the desired and undesired molecules.

Crucially, the barriers to misusing these models to design biochemical weapons are low. Although MegaSyn is proprietary, it's not too different from some open-source software, and the datasets it was trained on are all public. Hardware isn't an issue either; Urbina apparently ran the experiment on a 2015 Apple Mac laptop.

Generating lethal chemicals computationally is the easy part. Actually synthesizing them for real harm, however, is way more difficult. "There are certain molecules you need to make the VX, those are known and those are regulated," he said.

Asking labs to produce or combine these ingredients would raise suspicion. Now consider an AI algorithm that could generate deadly biochemicals that behave like VX but are made from entirely unregulated precursors.

"We didn't do this but it is quite possible for someone to take one of these models and use it as an input to the generative model, and now say 'I want something that is toxic', 'I want something that does not use the current precursors on the watch list'. And it generates something that's in that range. We didn't want to go that extra step. But there's no logical reason why you couldn't do that," Urbina added.

If that synthesis step can't be achieved, would-be attackers are back to square one. As veteran drug chemist Derek Lowe put it: "I'm not all that worried about new nerve agents ... I'm not sure that anyone needs to deploy a new compound in order to wreak havoc – they can save themselves a lot of trouble by just making Sarin or VX, God help us."

There is no strict regulation of the machine-learning-powered design of new chemical molecules. Controlling how AI models are used in the wild is difficult, especially in research. Urbina said developers should be mindful about what they release, and about how easy it is to access sensitive datasets.

"My thought on this is having model APIs where you can cut off access if it looks like some bad actors are trying to use your toxicity models for these sorts of various purposes would be a step [towards harm reduction]. I see in all these language papers, there are a lot of sections dedicated to misuse of their models, and I like that because it brings awareness to that problem," he concluded. ®