Authors: D.E. Bussiere
Affiliation: Abbott Laboratories, United States
Pages: 30 - 34
Keywords: biotechnology, drug discovery, biomedical research, in silico
During the last two decades, new techniques and advancements have added layers of complexity to research in biotechnology, drug discovery, and biomedical research. Many of these techniques are computational in nature, in effect taking their disciplines in silico (that is, executed within the computer); they are often grouped under the term ‘biomedical computing’. To appreciate these advancements, compare the ‘traditional’ drug discovery cycle (Figure 1), used by many companies until the early 1980s, with the current biological target-based drug discovery cycle (Figure 2) used by most pharmaceutical and biotechnology concerns today.

In the ‘traditional’ cycle, an initial lead compound was found by isolating a molecule with a certain biological activity, perhaps through ethnobotany or fortuitous happenstance (as in the discovery of penicillin). Modifications of this lead compound were then planned using clues provided by a crude analysis of structure-activity relationships (SAR) or by traditional medicinal chemistry techniques. The new, modified compounds were synthesized and retested, and this cycle continued until the biological activity of the compound was maximized. This cycle, while successful, was not rapid: it often took 5-6 years to bring a drug to the preclinical phase.

Contrast this ‘traditional’ cycle with the current biological target-based cycle (also known as structure-based drug design, Figure 2). Recognizing that many drugs are antagonists (inhibitors) of macromolecules (most often an enzyme), a ‘biological target’ is now chosen before any drug discovery project is begun. The biological target is a macromolecule crucial for the biological activity or process to be inhibited. In some cases, target selection is simply a matter of common sense and examination of results from basic research.
For example, the proteins of human immunodeficiency virus-1 (HIV) are expressed as a single polypeptide within an infected host cell. This polypeptide is then processed by a virally encoded protease; the processed proteins are packaged and new virus particles are released from the infected cell. It was therefore correctly surmised that HIV protease is critical for virus maturation and is an important biological target for drug discovery and development, an insight that has led to several highly effective HIV therapeutics.

Target selection is not always that simple, however, especially when the biological activity to be inhibited is not parasitic in nature (as in the case of a viral infection) or when the number of possible targets is enormous. In these cases, the computational field of bioinformatics plays a critical role by providing methods for scanning current genomic databases for an optimal target or targets and for predicting the activities of as-yet uncharacterized genes and proteins.

Assuming that a target can be selected, several technologies come into play. First, the gene of interest is cloned and the protein or macromolecule is expressed and purified. An initial lead compound is then discovered by techniques such as high-throughput screening, in which hundreds of thousands of compounds are examined en masse for binding to the purified target (Figure 2). Often, in a concurrent effort, the three-dimensional structure of the target macromolecule is determined using nuclear magnetic resonance (NMR) spectroscopy or X-ray crystallography, or is predicted using molecular modeling techniques. These structural methods are heavily dependent on computation, as will be discussed in a following section. Once the structure of the target macromolecule has been determined or modeled, and a lead compound has been isolated, the structure of the target-compound complex can be determined using the same techniques.
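As a purely illustrative sketch (not part of the original paper), the kind of database scan described above can be caricatured as ranking candidate protein sequences by crude k-mer overlap with a characterized query protein. Real bioinformatics tools such as BLAST use far more sophisticated alignment-based scoring; all function names and sequences below are hypothetical.

```python
# Illustrative target-scanning sketch: rank candidate sequences by
# Jaccard similarity of their k-mer sets to a query protein.
def kmers(seq, k=3):
    """Return the set of overlapping k-mers in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(query, candidate, k=3):
    """Jaccard similarity of k-mer sets: a crude stand-in for the
    alignment-based scores computed by real search tools."""
    a, b = kmers(query, k), kmers(candidate, k)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def rank_targets(query, database, k=3):
    """Rank (name, sequence) pairs by similarity to the query."""
    scored = [(name, similarity(query, seq, k)) for name, seq in database]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

In practice, such a ranking would be only a first filter; candidate targets would still need experimental validation before being committed to a discovery program.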
These target-compound structures can then be examined using computational chemistry techniques, and possible modifications to the compound can be proposed. Finally, all of these data are collated and used to design the next series of compounds, which are then synthesized. This cycle is repeated until a compound is sufficiently potent (able to inhibit the biological target at extremely low, typically picomolar, concentrations), at which point it advances to preclinical (animal) and clinical (human) testing. In the current discovery cycle, the average time to reach preclinical investigation is three years. While the preceding example was slanted towards small-molecule drug discovery, computational tools play a similar role throughout biotechnology and biomedical research. The following special session will present cutting-edge research in the area of biomedical computing; a brief review and introduction to each section follows.
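The potency criterion above can be made concrete with a simple one-site binding model. Assuming a Hill coefficient of 1 (an illustrative assumption, not stated in the paper), the fraction of target inhibited at inhibitor concentration [I] is [I]/([I] + IC50), so a picomolar-IC50 compound dosed at nanomolar levels occupies essentially all of its target:

```python
def fraction_inhibited(conc_M, ic50_M):
    """Fraction of target inhibited at inhibitor concentration conc_M
    (in molar), under a simple one-site model with Hill coefficient 1.
    This is a textbook approximation, not a method from the paper."""
    return conc_M / (conc_M + ic50_M)

# A hypothetical compound with a 1 pM IC50, dosed at 1 nM, inhibits
# roughly 99.9% of its target under this model.
occupancy = fraction_inhibited(1e-9, 1e-12)
```

This back-of-the-envelope calculation is why picomolar potency is such a desirable endpoint: it allows low drug concentrations, which in turn can reduce off-target effects.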