
There are more than 100,000 chemicals in consumer products, and for the vast majority there is very little information about their toxicity. Traditionally, researchers have tested chemicals of interest in animals. As an extreme example, a pesticide undergoes about 30 animal tests, costing about $20 million and consuming more than 10,000 mice, rats, rabbits, and dogs over five years. About 20 kilograms of the chemical are needed for this testing; obtaining such a quantity can be quite a challenge for a substance not yet on the market. Other substances receive less scrutiny, but even products with lower regulatory standards, such as industrial chemicals, can require $5 million worth of animal testing before entering the marketplace.

Our group, the Center for Alternatives to Animal Testing (CAAT) at Johns Hopkins University, sought a better way. As so many biologists are doing these days, we turned to intelligent machines.

The software takes advantage of the power of big data and transfer learning, a machine-learning method that applies information learned about one task or set of items to another. The approach rests on a simple principle: similar chemicals have similar properties. Based on that principle, the software builds a map of the chemical universe, placing similar chemicals close to each other and dissimilar ones farther apart. The model can then place new chemicals on the map, assess what is known about their neighbors, and from that information surmise the newcomers' potentially harmful health and environmental effects. The more data are fed into the model, the more powerful it becomes.
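To make the neighbor-based reasoning concrete, here is a minimal sketch of similarity-based prediction in Python. The tiny database, the made-up hazard labels, and the choice of Morgan fingerprints with Tanimoto similarity are all illustrative assumptions; they stand in for, and greatly simplify, the actual model.

```python
# Toy read-across: predict a hazard score for a new chemical from its
# most similar neighbors in chemical space. Requires: pip install rdkit
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Hypothetical mini-database: SMILES strings with illustrative binary labels.
KNOWN = [
    ("CCO", 0),        # ethanol     (illustrative label: non-hazard)
    ("CC(=O)O", 1),    # acetic acid (illustrative label: hazard)
    ("c1ccccc1O", 1),  # phenol      (illustrative label: hazard)
    ("CCCCCC", 0),     # hexane      (illustrative label: non-hazard)
]

def fingerprint(smiles):
    """Encode a structure as a Morgan (circular) fingerprint bit vector."""
    return AllChem.GetMorganFingerprintAsBitVect(
        Chem.MolFromSmiles(smiles), 2, nBits=2048)

def predict_hazard(query_smiles, k=3):
    """Similarity-weighted vote over the k most similar known chemicals."""
    qfp = fingerprint(query_smiles)
    neighbors = sorted(
        ((DataStructs.TanimotoSimilarity(qfp, fingerprint(smi)), label)
         for smi, label in KNOWN),
        reverse=True)[:k]
    total = sum(sim for sim, _ in neighbors)
    return sum(sim * label for sim, label in neighbors) / total  # 0..1 score

print(predict_hazard("CCCO"))  # 1-propanol: dominated by ethanol-like neighbors
```

At real scale, the fingerprints would be precomputed and held in an indexed nearest-neighbor structure rather than scanned linearly, but the logic is the same: a new chemical inherits, with appropriate weighting, what is known about its closest neighbors on the map.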

In collaboration with Underwriters Laboratories (UL), a global safety consulting and certification company, we expanded the database to more than 10 million chemical structures, more than 300,000 of which are annotated with biological and physicochemical data; some 50,000 of those also include animal data. Using an Amazon cloud server, it took two days to analyze the similarities and differences among the 10 million chemicals and place them on a map. Applying the model to 190,000 chemicals classified on the basis of animal tests, the computer correctly predicted the outcome of toxicity studies 87 percent of the time, exceeding the roughly 70 percent probability that a repeat animal test will identify a toxic substance again. Traditional testing for the nine hazard classifications currently analyzed by the model consumes 57 percent of all animals used in safety testing in Europe, or about 600,000 animals per year. These classifications cover endpoints such as skin and eye irritation/corrosion, skin sensitization, acute toxicity, and mutagenicity.
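For intuition about the map itself, a scaled-down sketch of the construction might look like the following: compute pairwise similarities, convert them to distances, and embed the chemicals in two dimensions. The six example structures and the use of multidimensional scaling are illustrative assumptions, not a description of the production pipeline, which operates on 10 million structures.

```python
# Toy "chemical universe map": embed chemicals in 2-D so that structurally
# similar compounds land near each other. Requires rdkit and scikit-learn.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.manifold import MDS

SMILES = ["CCO", "CCCO", "CC(=O)O", "CCC(=O)O", "c1ccccc1O", "CCCCCC"]

fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
       for s in SMILES]

# Distance = 1 - Tanimoto similarity, computed for every pair of chemicals.
n = len(fps)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1.0 - DataStructs.TanimotoSimilarity(fps[i], fps[j])

# Multidimensional scaling projects the distance matrix onto a 2-D map.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)

for smi, (x, y) in zip(SMILES, coords):
    print(f"{smi:>10s}  ({x:+.2f}, {y:+.2f})")  # alcohols and acids cluster
```

On real data, this clustering of alcohols with alcohols and acids with acids is what makes the neighborhood-based predictions sketched above possible.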

Some of the chemicals in the database have already been tested excessively, causing unnecessary loss of animal life. For instance, two chemicals were tested more than 90 times each in rabbit eyes, and 69 substances were tested more than 45 times in the same eye test. Our AI approach could drastically reduce the need for these common animal toxicity tests and save quite a bit of money in the process. Running the cloud computation to create the map cost about $5,000, and computing costs per prediction are now negligible. Most importantly, beyond the regulation of chemicals being brought to market, the approach lets chemists identify hazards before compounds are ever synthesized. And when a toxic chemical needs to be replaced, it can help ensure that less-toxic substitutes are chosen.

The power of this approach depends on the availability of data, and it will work only for toxic properties that depend directly on chemical structure. Where the underlying biology is complex, as with the links between a substance and cancer, or insults to a developing embryo, more information is required.

Here, large standardized databases are rare, with the notable exception of some US agencies’ robot-testing programs, which have subjected about 2,000 chemicals to more than 700 cellular assays and nearly 10,000 chemicals to about 60 assays. With computing power now more accessible and far more affordable, AI offers a solution to many real-world problems, such as limiting the need for animals in safety testing and biological research.

Thomas Hartung is chair for Evidence-based Toxicology at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland, where he is a professor of molecular microbiology and immunology. Hartung is also a professor of pharmacology and toxicology at the University of Konstanz in Germany. He directs the Centers for Alternatives to Animal Testing (CAAT) of both universities.
