Learning Retrosynthetic Planning through Simulated Experience

John S. Schreck, Connor W. Coley, Kyle J.M. Bishop

Research output: Contribution to journal › Article › peer-review

131 Scopus citations

Abstract

The problem of retrosynthetic planning can be framed as a one-player game, in which the chemist (or a computer program) works backward from a molecular target to simpler starting materials through a series of choices regarding which reactions to perform. This game is challenging because the combinatorial space of possible choices is astronomical, and the value of each choice remains uncertain until the synthesis plan is completed and its cost evaluated. Here, we address this search problem using deep reinforcement learning to identify policies that make (near) optimal reaction choices during each step of retrosynthetic planning according to a user-defined cost metric. Using simulated experience, we train a neural network to estimate the expected synthesis cost, or value, of any given molecule based on a representation of its molecular structure. We show that learned policies based on this value network can outperform a heuristic approach that favors symmetric disconnections when synthesizing unfamiliar molecules from available starting materials using the fewest reactions. We discuss how the learned policies described here can be incorporated into existing synthesis planning tools and how they can be adapted to changes in the synthesis cost objective or material availability.
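The abstract's notion of a molecule's value as its expected synthesis cost can be illustrated with a minimal sketch. The toy reaction network and molecule names below are hypothetical (not the authors' data or code), and each reaction is assiged a unit cost in place of a user-defined cost metric; the paper's contribution is to *learn* this value with a neural network rather than compute it exactly, which is only feasible on a tiny example like this one.

```python
# Toy sketch of the cost/value recursion underlying value-guided
# retrosynthesis. Molecule names and the reaction set are hypothetical.
# The value of a molecule is the minimum total cost of any route that
# reduces it to available starting materials; each reaction costs 1.

# Available starting materials: cost 0 by definition.
STARTING_MATERIALS = {"A", "B", "C"}

# Candidate retrosynthetic disconnections: target -> list of precursor tuples.
REACTIONS = {
    "E": [("A", "B")],          # E can be made from A + B
    "F": [("C",), ("E",)],      # F from C alone, or from E
    "G": [("E", "F")],          # G from E + F
}

def synthesis_cost(mol, depth=10):
    """Minimum number of reactions needed to make `mol`; inf if unreachable."""
    if mol in STARTING_MATERIALS:
        return 0.0
    if depth == 0 or mol not in REACTIONS:
        return float("inf")  # no known disconnection, or search depth exhausted
    # Bellman-style recursion: best disconnection = 1 reaction + precursor costs.
    return min(
        1.0 + sum(synthesis_cost(p, depth - 1) for p in precursors)
        for precursors in REACTIONS[mol]
    )
```

A greedy policy with respect to this value function picks, at each step, the disconnection minimizing the right-hand side of the recursion; the paper's learned value network plays the same role when exhaustive recursion is intractable.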

Original language: English
Pages (from-to): 970-981
Number of pages: 12
Journal: ACS Central Science
Volume: 5
Issue number: 6
DOIs
State: Published - Jun 26 2019

