
Drug discovery & development informatics glossary & taxonomy
including In Silico & molecular drug modeling
Evolving terminology for emerging technologies

Suggestions? Comments? Questions? Mary Chitty  mchitty@healthtech.com
Last revised November 11, 2013 

 



An understanding of the behavior of biological systems at each level of their organization can only be achieved by careful study of the complex dynamical interactions between the components of these systems. For this understanding to be quantitative it is necessary to develop structurally, biochemically and biophysically detailed mathematical models. Once developed, these models can be simulated, analyzed, and visualized through application of modern engineering and computational approaches.  IBM, Functional Genomics and Systems Biology Overview  http://www.research.ibm.com/FunGen/

Informatics Map: Finding guide to terms in these glossaries | Site Map
Related glossaries include:
Applications: Drug discovery & development, Molecular Diagnostics, Pharmacogenomics
Informatics: Algorithms, Bioinformatics, Cheminformatics, Information management & interpretation, Protein informatics, Research
Technologies: Genomic & proteomic manipulation & disruption for pharmaceuticals (including RNA interference), Sequencing
Biology: Pharmaceutical biology, Genetic variations, Protein Structure, Protein informatics




biocomplexity, biological complexity: Genomics

Biomedical Informatics Research Network BIRN: A national initiative to advance biomedical research through data sharing and online collaboration. .. focuses directly on the biomedical research community’s unique, data-intensive sharing and analysis needs, which are particularly evident in fields such as biomedical imaging and genetics.   a user-driven, software-based framework for research teams to share significant quantities of data – rapidly, securely and privately – across geographic distance and/or incompatible computing systems. ... We also offer data-sharing software tools specific to biomedical research, best practices references, expert advice and other resources.  http://www.nbirn.net/

biopharmaceutical informatics: Drug companies go through a very arduous and regulated discovery, applied research, and development process, typically spanning five years of laboratory research and ten years of clinical studies .. multinational clinical studies, which need to be done with tremendous precision over a very long period of time. The study parameters must be identical for every patient (many times numbering 10,000 patients, followed for five or more years), and all the participating hospitals essentially have to behave in exactly the same way for the trial to be valid. .. The life science industry is conservative by nature, and therefore it is a late- adopting industry. It is very sensitive to standards because of the legacy according to which these companies have to maintain data and information. Major pharmaceutical companies typically adopt a 100-year minimum document retention policy. ... Each of the industry's four industrial sectors - the pharmaceutical, the biotech, the medical device, and the diagnostics sector - has a different set of needs and desires, as well as its own requirements for unique IT solutions. ... Life science companies are dealing with very large computational data sets; some are now approaching half terabyte sizes and upward. Life science companies also immensely concern themselves with security, because their data represent their crown jewels. Other major concerns expressed by this industry include the stability, scalability, and security of an operating environment. Life science companies and regulatory bodies such as the FDA are more concerned than ever with operating environments that decay with use: when under computational stress, these fragile operating systems have a habit of crashing, and when these systems crash, they tend to corrupt data. ... Post-genomic, proteomic, chemical information, and other data sets have created a major appetite for solutions to deal with this tremendous amount of data. Scientists are now asking their IT professionals for the ability to better conceptualize and interpret the meaning of this vast information. To do this, scientists need tools for 3D visualization with a tremendous degree of high definition and accuracy. The next step is to take disparate data sets, render them into 3D values, see the DNA and RNA interface, watch protein folds, and then put a therapeutic small molecule in there and see how it relates within a virus that environmentally influences a different process. Scientists Are Demanding Solutions for Dealing with the Post-Genomic, Proteomic, and Chemical Data Deluge: An Interview with Howard Asher, Director, Global Life Sciences Group, Sun Microsystems, CHI GenomeLink 30 http://www.chidb.com/newsarticles/issue30_1.asp

Biosemantics Group: http://www.biosemantics.org/  Addresses concept identification and disambiguation algorithms, meta-analysis and visualization techniques, and biological applications [interconnect genes and proteins, semi-automated annotations of protein functions.] Medical Informatics department of the Erasmus MC University Medical Center of Rotterdam and the Center for Human and Clinical Genetics of the Leiden University Medical Center

Computer-Assisted Drug Design CADD: Involves all computer- assisted techniques used to discover, design and optimize biologically active compounds with a putative use as drugs.  IUPAC Computational   Broader term:  drug design  Related terms: Cheminformatics

data credibility: Different labs have different reputations, and scientists look at work produced by their peers in a subjective light. This data credibility issue creates a need to tag almost every data item with a confidence factor. This is so that, as you create your next experimental hypothesis, you know that the quality of the information you are relying upon is high enough that you can go profitably down the scientific line of inquiry that you are pursuing. The Life Science Industry Represents Unique Opportunities for Informatics Companies: An Interview with Shiv Tasker of Blackstone Computing, CHI's GenomeLink 25.1 http://www.chidb.com/newsarticles/issue25_1.asp 

data integration: The term "data integration" is used generically within the industry for describing disparate situations. Consequently, considerable confusion results regarding the best practices for solving specific, data integration problems. There are a number of markedly different approaches to data integration, each with its own strengths and weaknesses, and many different technologies are available for each approach. All data integration efforts are initiated to support particular research objectives. Although they are aimed toward the same strategic goal, they can differ substantially in the specific problems that they are trying to solve, in the scale of the integration, and in the types of data that are integrated. The strategies and technologies that best apply to address specific objectives are unlikely to be the same.  Key Trends Influencing Informatics Initiatives in Life Science Companies: An Interview with Eric Meyers and Jack Pollard of 3rd Millennium, CHI's GenomeLink 29.2 http://www.chidb.com/newsarticles/issue29_2.asp    Related terms: data mining - integrating, data reduction methods; Information management & interpretation interoperability; IT infrastructure   XML; Omes & -omics integromics
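A minimal sketch of one such approach, a warehouse-style join of two result sets on a shared compound identifier, written in pandas. Everything in it (identifiers, column names, values) is invented for illustration and is not drawn from the interview quoted above.

```python
# Sketch of one data integration approach: joining two tables on a shared key.
# Compound identifiers, columns, and values are hypothetical examples.
import pandas as pd

assays = pd.DataFrame({"compound_id": ["C1", "C2", "C3"],
                       "ic50_nM": [12.0, 450.0, 3.2]})
properties = pd.DataFrame({"compound_id": ["C1", "C3", "C4"],
                           "logP": [2.1, 3.8, 0.5]})

# An inner join keeps only compounds present in both sources; an outer join
# would preserve the gaps instead, which is sometimes the point of integrating.
merged = pd.merge(assays, properties, on="compound_id", how="inner")
print(merged)
```

Which join (and which technology) is appropriate depends, as the entry notes, on the specific research objective and the scale and heterogeneity of the data.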

data management: Each new generation of DNA sequencers, mass spectrometers, microscopes, and other lab equipment produces a richer, more detailed set of data. We’re already way beyond gigabytes (GB): a single next-generation sequencing experiment can produce terabytes (TB) of data in a single run. As a result, any organization running hundreds of routine experiments a month or year, or trying to handle the output of next-generation sequence instruments, quickly finds itself with a massive data management problem.  Data Management: The Next Generation, Salvatore Salamone, BioIT World, Oct 2007      http://www.bio-itworld.com/issues/2007/oct/cover-story-data-management  

data mining: The biopharmaceutical industry is grappling not only with sheer data volume but with the ability of researchers to extract information through identification and contextual analysis of those data that are relevant to a particular set of investigations. Data Mining in Drug Development and Translational Medicine, Insight Pharma Reports, July 2009

Nontrivial extraction of implicit, previously unknown and potentially useful information from data, or the search for relationships and global patterns that exist in databases.  W. Frawley, G. Piatetsky-Shapiro, and C. Matheus, “Knowledge Discovery in Databases: An Overview.” AI Magazine, 213- 228, Fall 1992

Exploration and analysis, by automatic or semi- automatic means, of large quantities of data in order to discover meaningful patterns or rules. Berry, MJA, Data Mining Techniques for Marketing, Sales and Customer Support, John Wiley & Sons, New York, 1997, cited in Nature Genetics 21(15): 51-55, ref 11, 1999
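As a toy illustration of the "discover meaningful patterns" idea in the definitions above, the sketch below clusters compounds described by two hypothetical descriptors with k-means. The numbers are invented, and k-means is only one of many pattern-discovery methods that fall under the data mining umbrella.

```python
# Minimal pattern-discovery example: group compounds by two made-up descriptors.
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1.2, 0.3], [1.0, 0.4], [1.1, 0.2],    # one putative group
              [4.8, 2.9], [5.1, 3.1], [4.9, 3.0]])   # another putative group

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)   # e.g. [0 0 0 1 1 1] -- two recovered groupings
```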

data mining drug development: The biopharmaceutical industry is grappling not only with sheer data volume but with the ability of researchers to extract information through identification and contextual analysis of those data that are relevant to a particular set of investigations. The mountain of data generated and stored is growing ever-higher. The information content of life science data is multidimensional and not readily accessible by merely looking at the output. Unless such data can be put into proper context and interpreted—i.e., mined—their value is only in their potential. Insight Pharma Reports, Data mining in drug development and translational medicine, 2009


de novo design: The design of bioactive compounds by incremental construction of a ligand model within a model of the receptor or enzyme active site, the structure of which is known from X-ray or NMR data. IUPAC Medicinal Chemistry

drug design: Includes not only ligand design, but also pharmacokinetics (Pharmacogenomics) and toxicity, which are mostly beyond the possibilities of structure- and/ or computer- aided design. Nevertheless, appropriate chemometric (Chemoinformatics) tools, including experimental design and multivariate statistics, can be of value in the planning and evaluation of pharmacokinetic and toxicological experiments and results. Drug design is most often used instead of the correct term "ligand design". IUPAC Computational

The molecular designing of drugs for specific purposes (such as DNA- binding, enzyme inhibition, anti- cancer efficacy, etc.) based on knowledge of molecular properties such as activity of functional groups, molecular geometry, and electronic structure, and also on information cataloged on analogous molecules. Drug design is generally computer- assisted molecular modeling and does not include pharmacokinetics, dosage analysis, or drug administration analysis.  MeSH, 1989

The phrase "drug design" is to some extent a misnomer. What is really meant by drug design is ligand design (i.e., design of a small molecule that will bind tightly to its target). Although modeling techniques for prediction of binding affinity are reasonably successful, there are many other properties, such as bioavailability, metabolic half-life, lack of side effects, etc., that first must be optimized before a ligand can become a safe and efficacious drug. These other characteristics are often difficult to optimize using rational drug design techniques.  … Typically a drug target is a key molecule involved in a particular metabolic or signaling pathway that is specific to a disease condition or pathology or to the infectivity or survival of a microbial pathogen. Some approaches attempt to inhibit the functioning of the pathway in the diseased state by causing a key molecule to stop functioning. Drugs may be designed that bind to the active region and inhibit this key molecule. Another approach may be to enhance the normal pathway by promoting specific molecules in the normal pathways that may have been affected in the diseased state. In addition, these drugs should also be designed so as not to affect any other important "off-target" molecules or antitargets that may be similar in appearance to the target molecule, since drug interactions with off-target molecules may lead to undesirable side effects. Sequence homology is often used to identify such risks. Wikipedia, Nov 4, 2013 http://en.wikipedia.org/wiki/Drug_design
See also biological target http://en.wikipedia.org/wiki/Biological_target

An iterative process involving drug discovery, lead optimization and chemical synthesis with the aim of maximizing functional activity and minimizing adverse effects. 
Narrower terms: rational drug design, structure- based drug design, molecular design; Related terms: 3D-QSAR, QSAR, Computer Aided Molecular Design, Computer Assisted Drug Design CADD, Computer Assisted Molecular Modeling CAMD, de novo design; Algorithms, Data & information management

drug discovery informatics: 

drug ontology: Integrating Pharmacokinetics Knowledge into a Drug Ontology As an Extension to Support Pharmacogenomics, CG Chute, JS Carter, MS Tuttle, M Haber, SH Brown, AMIA Annu Symp Proc. 2003; 2003: 170–174. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1480302

in silico: In a white paper I wrote for the European Commission in 1988 I advocated the funding of genome programs, and in particular the use of computers. In this endeavour I coined "in silico" following "in vitro" and "in vivo". I think that the first public use of the word is in the following paper: A. Danchin, C. Médigue, O. Gascuel, H. Soldano, A. Hénaut, From data banks to data bases. Res. Microbiol. (1991) 142: 913- 916. You can find a developed account of this story in my book The Delphic Boat, Harvard University Press, 2003. Personal communication, Antoine Danchin, Institut Pasteur, 2003

Literally "in the computer".   Narrower terms: in silico biology, in silico modeling, in silico proteomics, in silico screening, in silico target discovery; Cell biology virtual cells in silico; Related terms: Chemoinformatics rules of five 

in silico biology: The considerable "algorithmic complexity" of biological systems requires a huge amount of detailed information for their complete description. Although far from being complete, the overwhelming quantity of small pieces of information gathered for all kind of biological systems at the molecular and cellular level requires computational tools to be adequately stored and interpreted. Interpretation of data means to abstract them as much as allowed to provide a systematic, an integrative view of biology. Most of the presently available scientific journals focus either on accumulating more data from elaborate experimental approaches, or on presenting new algorithms for the interpretation of these data. Both approaches are meritorious. However, since both communities do not interact much with each other, neither the experimental nor the computational biologists really apply the theoretical tools to that extent which would be possible and desirable to achieve that progress of research which is already feasible. "Aims and Scope" In Silico Biology: An international journal of computational biology http://www.bioinfo.de/isb/aims.html  Related terms: in silico, virtual cells 

in silico modeling: Modeling of biological pathways and other biological processes for drug discovery and development. Given the enormous increase in genetic and molecular data, such models will continue to improve and are predicted to become an essential tool for evaluating hypotheses, with only the more promising ones being subjected to empirical testing. 
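A minimal sketch of the kind of pathway model described above: a linear two-step conversion integrated numerically so a hypothesis can be explored before any experiment is run. The reaction scheme and rate constants below are hypothetical.

```python
# Toy in silico pathway model: A --k1--> B --k2--> C with first-order kinetics.
import numpy as np
from scipy.integrate import odeint

k1, k2 = 0.5, 0.2   # assumed first-order rate constants, 1/min

def pathway(y, t):
    A, B, C = y
    return [-k1 * A,            # dA/dt
            k1 * A - k2 * B,    # dB/dt
            k2 * B]             # dC/dt

t = np.linspace(0, 30, 301)                  # minutes
conc = odeint(pathway, [1.0, 0.0, 0.0], t)   # initial concentrations, arbitrary units
print(conc[-1])                              # composition of the system at t = 30 min
```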

in silico screening: See also virtual screening  
in silico transcriptomics: Omes & -omics

integrated R&D informatics: Knowledge management for improved R&D, workflow based informatics to improve productivity, informatics for translational science in an era of molecular medicine, building ontologies using semantic web and wikis, informatics tools to handle biologics screening, assay data and registration systems; clinical data integration and biomarkers. 


integrative biology: the ability to take data sources from a number of different places and integrate them to help us understand biological systems better.  David de Graaf in "Building Integrative Biology at Boehringer Ingelheim, BioIT World Jan-Feb 2009 http://www.bio-itworld.com/2009/1/05/de-graaf-at-boehringer-ingelheim.html Related terms: systems biology

ligand binding: One of the biggest challenges in computational drug design is the accurate calculation of the free energy of binding of small ligands. Currently, typical errors in these calculations make them unusable to distinguish between strong binders (which would potentially make good drugs) and non- specific binders (which wouldn't). We are using distributed computing methods to greatly increase the accuracy of such calculations. Vijay Pande, Pande Group Projects, Stanford Univ. US http://www.stanford.edu/group/pandegroup/projects.html#ligandbinding  Related terms: drug design, molecular design; Pharmaceutical biology binding site, ligand

ligand design: Drug targets
ligand docking: See under docking.
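As a hedged aside to the ligand binding entry above: a computed binding free energy is usually compared with experiment through the standard relation between free energy and the dissociation constant. The dissociation constant below is a hypothetical example, not a value from the Pande group.

```python
# Converting a (hypothetical) dissociation constant into a binding free energy.
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K
Kd = 1e-9      # hypothetical Kd of 1 nM, relative to a 1 M standard state

dG_bind = R * T * math.log(Kd)   # kcal/mol; more negative means tighter binding
print(f"Estimated binding free energy: {dG_bind:.1f} kcal/mol")   # about -12.3
```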

LIMS Laboratory Information Management Systems: A basic LIMS is a passive bookkeeping system designed to keep track of laboratory processes. It records the procedures that have been applied to each sample, when a procedure was run, the machine or instrument that was used, and who (e.g., which technician) did the work or was responsible for it. It also records any run-specific parameters of the procedure, and the results if any. In addition, a LIMS typically handles necessary administrative functions, such as inventory management, monitoring of quality measures, resource planning for instruments and personnel, and reporting.  Related terms: robotic systems, robotics, sample prep, Assays & Screening
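A minimal sketch of the bookkeeping record a basic LIMS maintains, expressed as a small data type. The field names and values are assumptions for illustration only, not any vendor's schema.

```python
# Toy record type mirroring LIMS bookkeeping: sample, procedure, instrument,
# operator, run time, run-specific parameters, and results.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class LabRun:
    sample_id: str
    procedure: str
    instrument: str
    operator: str
    run_time: datetime
    parameters: dict = field(default_factory=dict)
    results: Optional[dict] = None

run = LabRun("S-0042", "HPLC purity check", "HPLC-03", "jdoe",
             datetime(2013, 11, 11, 9, 30),
             parameters={"column": "C18", "flow_mL_min": 1.0})
print(run.sample_id, run.procedure, run.run_time.isoformat())
```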

medicinal systems biology: This review will focus on the development of a novel "chemical genetic/ genomic approach" that uses small molecules to "probe and identify" the function of genes in specific biological processes or pathways in human cells. Due to the close relationship of small molecules with drugs, these systematic and integrative studies will lead to the "medicinal systems biology approach" which is critical to "formulate and modulate" complex biological (disease) networks by small molecules (drugs) in human bio-systems. TK Kim, Chemical genomics and medicinal systems biology: chemical control of genomic networks in human systems biology for innovative medicine, J Biochem Mol Biol. 37(1): 53- 58, Jan 31, 2004    Related term: Bioinformatics systems biology

molecular informatics:  Molecular Informatics presents methodological innovations that will lead to a deeper understanding of ligand-receptor interactions, macromolecular complexes, molecular networks, design concepts and processes that demonstrate how ideas and design concepts lead to molecules with a desired structure or function, preferably including experimental validation.  The journal's scope includes but is not limited to the fields of drug discovery and chemical biology, protein and nucleic acid engineering and design, the design of nanomolecular structures, strategies for modeling of macromolecular assemblies, molecular networks and systems, pharmaco- and chemogenomics, computer-assisted screening strategies, as well as novel technologies for the de novo design of biologically active molecules.  Molecular Informatics, Wiley 2010 forward, was QSAR & Combinatorial Science http://www.wiley-vch.de/publish/en/journals/alphabeticIndex/7777/?jURL=http://www.wiley-vch.de:80/vch/journals/2022/molinf/index.html   

molecular mimicry: The process in which structural properties of an introduced molecule imitate or simulate molecules of the host. Direct mimicry of a molecule enables a viral protein to bind directly to a normal substrate as a substitute for the homologous normal ligand. Immunologic molecular mimicry generally refers to what can be described as antigenic mimicry and is defined by the properties of antibodies raised against various facets of epitopes on the viral protein. MeSH from Immunology Letters 28 (2): 91- 99 May 1991 

myGrid: The myGrid team produce and use a suite of tools designed to “help e-Scientists get on with science and get on with scientists”. The tools support the creation of e-laboratories and have been used in domains as diverse as systems biology, social science, music, astronomy, multimedia and chemistry. http://www.mygrid.org.uk/

NeuroCommons: The NeuroCommons project is creating an Open Source knowledge management platform for biological research. The first phase, a pilot project to organize and structure knowledge by applying text mining and natural language processing to open biomedical abstracts, was released to alpha testers in February 2007. The second phase is the development of a data analysis software system. The software will be released by Science Commons under the BSD Open Source License. These two elements together represent a viable open source platform based on open content and open Web standards. Neurocommons, ScienceCommons http://sciencecommons.org/projects/data/ 

pathway & disease modeling: Expression

pharmaceutical bioinformatics:  a new discipline in the area of the genomics revolution. It is central to biomedicine with application in areas like pharmacy, medicine, biology and medicinal chemistry. The genomics revolution has given high throughput methods for massive gene sequencing, chemical synthesis and biological testing. This creates oceans of new information. Pharmaceutical bioinformatics is all about how to use all the new information effectively.  University of Uppsala, Pharmaceutical Bioinformatics http://www.pharmbio.org/

pharmaceutical forecasting: The main goal in Phase I and II drug development is to find dose ranges in humans that induce minimal or no obvious toxicity and that result in some detectable level of effectiveness for the desired indication. Insight Pharma Reports, Bayesian Forecasting of Phase III Outcomes: The Next Wave in Predictive Tools, June 2007


pre-competitive R&D information: "Pre- competitive" can hardly be defined in absolute terms. Genetic information that is regarded as pre- competitive by large drug developing companies (like those who participated in the SNP consortium) may be regarded as competitive by e.g. start- up firms who seek to commercialize any new information – provided they can reserve some exclusive right to its use. Thus, it seems that institutional and legal frameworks play a role in defining or constituting certain areas of research as "pre- competitive". Accordingly, the arguments raised in the Working Group infer two types of reasons for considering research as pre- competitive: - Functional prerequisites of successful research that make strategies of private appropriation technically unfeasible - Regulatory conditions that impose normative restrictions on the appropriation of research results. Arguments, Research Consortia, World Business Council for Sustainable Development (WBCSD), 2003 http://www.wz-berlin.de/ipr-dialogue/argumentations/hgr/CV_Research_Consortia.htm

Precompetitive R&D precludes: (a) exchanging information among competitors relating to costs, sales, profitability, prices, marketing, or distribution of any product, process, or service that is not reasonably required to conduct the research and development that is the purpose of such venture; (b) entering into any agreement or engaging in any other conduct restricting, requiring, or otherwise involving the production or marketing by any person who is a party to such venture of any product, process, or service, other than the production or marketing of proprietary information developed through such venture; and (c) entering into any agreement or engaging in any other conduct that is not reasonably required to prevent misappropriation of proprietary information contributed by any person who is a party to such venture or its results. David Hahn, Thomas Sporleder, ADE 601 Glossary Technical Terms for Agribusiness Managers, Ohio State Univ., US (no longer on the web)

proof of concept: Industry and academic experts will be discussing the latest in patient stratification theories and strategies, comparative effectiveness in the PoC stages, and the role of precision medicine in accelerating PoC. The trend of in-licensing molecules to keep the development pipeline fed will be examined, as PoC strategies may differ for “optioned entities.” Accelerating Proof of Concept, November 5-7, 2012, Philadelphia, PA

May be defined as the earliest point in the drug development process at which the weight of evidence suggests that it is "reasonably likely" that the key attributes for success are present and the key causes of failure are absent. POC is multidimensional but is focused on attributes that, if not addressed, represent a threat to the success of the project in crucial areas such as safety, efficacy, pharmaceutics, and commercial and regulatory issues. The appropriate weight of evidence is assessed through the use of mathematical models and by evaluating the consequences of advancing a candidate drug that is not safe, effective, or commercially viable, vs. failing to advance a candidate that possesses these attributes. Tools for POC include biomarkers, targeted populations, pharmacokinetic (PK)/pharmacodynamic (PD) modeling, simulation, and adaptive study designs. Proof of Concept: A PhRMA Position Paper With Recommendations for Best Practice, Cartwright ME, et al., Clin Pharmacol Ther. 2010 Feb 3. Epub ahead of print http://www.ncbi.nlm.nih.gov/pubmed/20130568  Compare with definitions in Business of biopharmaceuticals

rational drug design: The input of biocomputing in drug discovery is twofold: firstly, the computer may help to optimise the pharmacological profile of existing drugs by guiding the synthesis of new and "better" compounds. Secondly, as more and more structural information on possible protein targets and their biochemical role in the cell becomes available, completely new therapeutic concepts can be developed. The computer helps in both steps: to find out about possible biological functions of a protein by comparing its amino acid sequence to databases of proteins with known function, and to understand the molecular workings of a given protein structure. Understanding the biological or biochemical mechanism of a disease then often suggests the types of molecules needed for new drugs. Wolfram Altenhogen, "Biocomputing and drug design," 1996 http://www.techfak.uni-bielefeld.de/bcd/ForAll/Introd/drugdesign.html  Related terms: structure based drug design; Combinatorial Libraries & synthesis: rational library design, computational quantum chemistry

receptor mapping: The technique used to describe the geometric and/or electronic features of a binding site when insufficient structural data for this receptor or enzyme are available. Generally the active site cavity is defined by comparing the superposition of active to that of inactive molecules. IUPAC Medicinal Chemistry, IUPAC Compendium

Over the past ten to fifteen years [before 1987], receptor mapping has expanded from a very minor technique, besieged by problems and limited in its approach, to one that is widespread, extended beyond receptors and applied to clinical problems and populations with modern imaging and scanning techniques. MJ Kuhar "Imaging receptors for drugs in neural tissue"  Neuropharmacology 1987 Jul. 26 (7B): 911-6  

regulatory therapies: Will be devised by reference to regulome maps, and pharmaceutical companies will be busy identifying molecules whose specific action will be limited to a particular regulatory target.

Software- directed pleiotropy tests could in the future predict specific side effects that an intervention on any individual component of the regulatory system is likely to have. Regulomics after Genomics: A Challenge for the 21st Century, Emile Zuckerkandl, Institute of Molecular Medical Sciences, International Union of Biological Sciences http://www.iubs.org/test/bioint/41/16.htm  Related terms: regulome maps, regulomics, controller gene diseases; Gene definitions: pleiotropy

simulations: Up until now, biomolecular simulations in drug design have been of limited use because of the short time scales, long turnaround times (implying poor sampling), the limited accuracy of simulations alluded to above, and the relatively small size of systems simulated when one wishes to account for proper inclusion of the physiological environment like membranes and solvent. Developing a new drug goes beyond finding binding compounds and must rely on good properties from the outset: activity, absorption, distribution, metabolism, excretion. Pharmacological researchers would like to predict these properties first, before one optimizes activity as conventionally done, and before analogs are made. ... When sufficient resources are available, simulations can determine the relative free energy values of drugs passing through membranes. These values are required to estimate the bioavailability of drugs. Opportunities in Molecular Biomedicine in the Era of  Teraflop Computing March 3 & 4, 1999,  Rockville, MD, NIH Resource for Macromolecular Modeling and Bioinformatics  Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana- Champaign 
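As a hedged reminder (standard equilibrium thermodynamics, not a formula stated in the source above), the membrane transfer free energies such simulations estimate map directly onto measurable partition coefficients:

```latex
% Assumes equilibrium partitioning between water and membrane and ideal behavior.
K_{p} = \exp\!\left(-\frac{\Delta G_{\mathrm{transfer}}}{RT}\right),
\qquad
\log_{10} K_{p} = -\frac{\Delta G_{\mathrm{transfer}}}{2.303\,RT}
```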

soft drug design: Soft drug design represents a new approach aimed to design safer drugs with an increased therapeutic index by integrating metabolism considerations into the drug design process. Soft drugs are new therapeutic agents that undergo predictable metabolism to inactive metabolites after exerting their therapeutic effect. Hence, they are obtained by building into the molecule, in addition to the activity, the most desired way in which the molecule is to be deactivated and detoxified. Soft drug design: general principles and recent applications. Bodor N, Buchwald P. Med Res Rev. 2000 Jan;20(1): 58-101 Related term: Microarrays small molecule microarrays

structure:  In a biological or anatomical context, the term structure is associated with two distinct concepts (meanings): 1. a material object generated as a result of coordinated gene expression, which necessarily consists of parts (e.g., hemoglobin molecule, cell, heart, human body); and 2. the manner of organization or interrelation of the parts that constitute a structure specified by the first definition (i.e., the structure of a structure). Both definitions emphasize the critical need for declaring the principles according to which units of organization can be defined in order to be able to state what is 'whole' and what is 'part'. Specifying the manner in which parts interrelate must satisfy two requirements: 1. to determine the kinds of parts of which various structures may be constituted; and 2. to state the manner of spatial organization of parts by describing their boundaries, continuities and attachments, as well as their location, orientation and spatial adjacencies in terms of qualitative coordinates (in addition to the quantitative geometric coordinates, which are embedded in the Visible Human data sets). Cornelius Rosse, et. al., Visible Human, Know Thyself: The Digital Anatomist Dynamic Structural Abstraction, National Library of Medicine, US http://www.nlm.nih.gov/research/visible/vhpconf2000/AUTHORS/ROSSE/TEXTINDX.HTM  Related terms: Cell biology, Expression Compare unstructured.

structural homology: Protein informatics
Structure Activity Relationship SAR: Cheminformatics

Structure based drug design: Protein Informatics

Support Vector Machines SVMs:  
Wikipedia http://en.wikipedia.org/wiki/Support_vector_machine  
Nello Cristianini, John Shawe-Taylor, An Introduction to Support Vector Machines and Other Kernel- based Learning Methods, Cambridge University Press, 2000 http://www.support-vector.net/ 
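A minimal scikit-learn sketch of a support vector classifier, included only to make the term concrete; the two-feature toy data and labels are invented, and real applications (for example, activity classification from molecular descriptors) involve far richer features and proper validation.

```python
# Toy support vector machine: learn a boundary between two made-up classes.
from sklearn.svm import SVC

X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]   # hypothetical descriptors
y = [0, 0, 1, 1]                                        # hypothetical class labels

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)
print(clf.predict([[0.15, 0.15], [0.85, 0.85]]))        # expected: [0 1]
```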

text mining: The scientific literature captures the learnings of an investment of over $100 billion per year in biomedical research. Our ability to access the knowledge buried in the literature has been limited at best. This workshop will present strategies on using the scientific literature to answer questions that have previously been unanswerable. A comprehensive view of utilizing the literature will be presented, along with how text mining fits into the overall approach. Everything from accessing the literature, legal issues, text analytics technologies and tools, visualization of results and curation strategies will be covered. Agile Answers from Literature DVD, April 20, 2010

Usually data mining technologies mine knowledge from data with well-formed schemes such as relational tables. But, text data don't have such scheme, and information is described freely in the documents. Therefore, we focus on Natural Language Processing (NLP) technologies to extract such information. Using NLP technologies, documents are transformed into a collection of concepts, described using terms discovered in the text. Usually, "text mining" is used to indicate a text search technique. But, we think of text mining as having more functions. Text mining technologies extract more information than just picking up keywords from texts: facts, author's intentions, their expectations, and their claims.  Tokyo Research Lab, IBM, Text Mining  http://www.trl.ibm.com/projects/textmining/index_e.htm 

Using data mining on unstructured data, such as the biomedical literature.  Related terms:  natural language processing; Algorithms & data analysis: support vector machines  
Text Mining Glossary, ComputerWorld, 2004  http://www.computerworld.com/s/article/93967/Sidebar_Text_Mining_Glossary   Includes Categorization, clustering, extraction, keyword search, natural language processing, taxonomy, and visualization.
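A deliberately simple sketch of the term-extraction step described above: tokenize free text, drop stopwords, and count what remains. Real text-mining pipelines add entity recognition, disambiguation, and relation extraction; the stopword list and example abstract here are invented for illustration.

```python
# Minimal term extraction from unstructured text.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "in", "to", "a", "is", "for", "with", "that"}

def extract_terms(text, top_n=5):
    tokens = re.findall(r"[a-z][a-z\-]+", text.lower())
    terms = [t for t in tokens if t not in STOPWORDS]
    return Counter(terms).most_common(top_n)

abstract = ("Kinase inhibitors that block the EGFR signaling pathway "
            "reduce tumor growth in preclinical models of lung cancer.")
print(extract_terms(abstract))
```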

top-down: A systems approach, which looks at the big picture and complexity. Genomics is essentially a top- down approach, the opposite of a bottom- up approach. Our ways of thinking have been so profoundly influenced by bottom- up, reductionist approaches that we are having to learn to think in very different ways to begin to fully exploit genomic data. Narrower term: Nanoscience & miniaturization nanofabrication- top- down

translational genomics: Genomics categories  

translational research: To improve human health, scientific discoveries must be translated into practical applications. Such discoveries typically begin at “the bench” with basic research  in which scientists study disease at a molecular or cellular level then progress to the clinical level, or the patient's “bedside.” Scientists are increasingly aware that this bench-to-bedside approach to translational research is really a two-way street. Basic scientists provide clinicians with new tools for use in patients and for assessment of their impact, and clinical researchers make novel observations about the nature and progression of disease that often stimulate basic investigations. Translational research has proven to be a powerful process that drives the clinical research engine. However, a stronger research infrastructure could strengthen and accelerate this critical part of the clinical research enterprise. NIH Common Fund, Translational Research Overview, 2011 http://commonfund.nih.gov/clinicalresearch/overview-translational.aspx 

Translational research is one of the most important activities of translational medicine as it supports predictions about probable drug activities across species and is especially important when compounds with unprecedented drug targets are brought to humans for the first time. Translational research has the potential to deliver many practical benefits for patients and justify the extensive investments placed by the private and public sector in biomedical research. Translational research encompasses a complexity of scientific, financial, ethical, regulatory, legislative and practical hurdles that need to be addressed at several levels to make the process efficient. Littman BH, Di Mario L, Plebani M, Marincola FM. What's next in translational medicine? Clin Sci (London) 112 (4): 217- 227, Feb 2007   Related terms: Molecular Medicine clinical proteomics, translational medicine, biomarker validation

VRML Virtual Reality Modeling Language: An open language under development by the Web3D Consortium. http://www.web3d.org/vrml/vrml.htm

VRML was supposed to be the standard language for V[irtual] R[eality], but VRML browsers and plug- ins tend to be large. XML (Extensible Markup Language) is emerging as the most likely alternative to or fix for VRML. [Mike Hurwicz, "Virtual Reality in VRML or XML?" Web Developer's Journal, June 21, 2000] http://www.webdevelopersjournal.com/articles/virtual_reality.html

Virtual Cell Program:  Jeremy Gunawardena, Harvard Medical School   http://vcp.med.harvard.edu/home.html  Related terms: -Omes & -omics metabolome, transcriptome
Virtual Cell, Dept of Plant Biology, Univ. of Illinois- Urbana Champaign, US http://www.life.uiuc.edu/plantbio/cell/

virtual library: Chemoinformatics
virtual screening: Assays & Screening

Bibliography
Catalyzing Inquiry at the Interface of Computing and Biology, Edited by John C Wooley and Herbert S Lin. National Research Council (US) Committee on Frontiers at the Interface of Computing and Biology. Washington (DC): National Academies Press (US); 2005. ISBN-10: 0-309-09612-X http://www.ncbi.nlm.nih.gov/books/NBK25462/   
IUPAC International Union of Pure and Applied Chemistry, Glossary of Terms Used in Combinatorial Chemistry, D. Maclean, J. J. Baldwin, V.T. Ivanov, Y. Kato, A. Shaw, P. Schneider, and E. M. Gordon, Pure Appl. Chem., Vol. 71, No. 12, pp. 2349-2365, 1999. 100+ definitions. http://www.iupac.org/reports/1999/7112maclean
IUPAC International Union of Pure and Applied Chemistry, Compendium of Chemical Terminology: Recommendations, compiled by Alan D. McNaught and Andrew Wilkinson, Blackwell Science, 2012. "Gold Book" 6500+ definitions. http://goldbook.iupac.org/
IUPAC International Union of Pure and Applied Chemistry, Glossary of Terms used in Computational Drug Design, H. van de Waterbeemd, R.E. Carter, G. Grassy, H. Kubinyi, Y. C. Martin, M.S. Tute, P. Willett, 1997. 125+ definitions. http://www.iupac.org/reports/1997/6905vandewaterbeemd/glossary.html
Molecular modeling, Folding@home Education@home,, Stanford Univ. http://www.stanford.edu/group/pandegroup/folding/education/molmodel.html 
SLAC Glossary, Stanford Linear Accelerator Center, Stanford Univ. US, 2002, 300 definitions. http://www.slac.stanford.edu/history/slacspeak/


IUPAC definitions are reprinted with the permission of the International Union of Pure and Applied Chemistry. 
