Biopharmaceutical Algorithms & data management glossary & taxonomy
With changes in sequencing technology and methods, the rate of acquisition of human and other genome data over the next few years will be ~100 times higher than originally anticipated. Assembling and interpreting these data will require new and emerging levels of coordination and collaboration in the genome research community to develop the necessary computing algorithms, data management and visualization systems. Lawrence Berkeley Lab, US, "Advanced Computational Structural Genomics"
Finding guide to terms in these glossaries: Informatics Map, Site Map
The dividing line between this glossary and Information management & interpretation is fuzzy: in general, Algorithms & data management focuses on structured data, while Information management & interpretation centers on unstructured data.
Other related glossaries include Applications: Drug Discovery & Development, Proteomics.
ANOVA (Analysis Of Variance): Error model based on a standard statistical approach; a generalization of the familiar t-test that allows multiple effects to be compared simultaneously. An ANOVA model is expressed as a large set of equations that can be solved, given a dataset of measurements, using standard software.
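As a concrete illustration (not from the quoted source), here is a minimal one-way ANOVA sketch in Python; SciPy's f_oneway solves the underlying model, and the three sample groups are invented for the example.

```python
# A minimal one-way ANOVA sketch using SciPy (assumes scipy is installed;
# the three sample groups below are made-up illustration data).
from scipy.stats import f_oneway

group_a = [4.1, 3.9, 4.3, 4.0]   # e.g., expression under condition A
group_b = [5.2, 5.0, 5.5, 5.1]   # condition B
group_c = [4.2, 4.4, 4.1, 4.3]   # condition C

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p suggests group means differ
```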
affinity-based data mining: Large and complex data sets are analyzed
across multiple dimensions, and the data mining system identifies data
points or sets that tend to be grouped together. These systems differentiate
themselves by providing hierarchies of associations and showing any underlying
logical conditions or rules that account for the specific groupings of
data. This approach is particularly useful in biological motif analysis.
"Data mining" Nature Biotechnology 18: 237-238 Supp. Oct. 2000 Broader term: data
mining
agglomerative method: See under cluster analysis
algorithm: A procedure consisting of a sequence of algebraic formulas and/or logical steps to calculate or determine a given task.
MeSH, 1987
Algorithms fuel the
scientific advances in the life sciences. They are required for dealing with the
large amounts of data produced in sequencing projects, genomics or proteomics.
Moreover, they are crucial ingredients in making new experimental approaches
feasible... Algorithm development for Bioinformatics applications combines
Mathematics, Statistics, Computer Science as well as Software Engineering to
address the pressing issues of today's biotechnology and build a sound
foundation for tomorrow's advances. Algorithmics Group, Max Planck
Institute for Molecular Genetics, Germany http://algorithmics.molgen.mpg.de/
Rules or a process, particularly in computer science. In medicine, a step-by-step process for reaching a diagnosis or ruling out specific diseases. May be expressed as a flow chart in either sense. Greater efficiencies in algorithms, as well as improvements in computer hardware, have led to advances in computational biology. A computable set of steps to achieve a desired result. From the Persian author Abu Ja'far Mohammed ibn Mûsâ al-Khowârizmî, who wrote a book of arithmetic rules dating from about 825 A.D. NIST. Narrower terms: docking algorithms, sequencing algorithms, genetic algorithm, heuristic algorithm. Related terms: heuristic, parsing; Sequencing dynamic programming methods.
artificial intelligence (AI): A wide-ranging term encompassing
computer applications that have the ability to make decisions; the ability
to explain reasoning is evidence of intelligence. Also covers methods
that have the ability to learn. J Glassey et al. “Issues in the development
of an industrial bioprocess advisory system” Trends in Biotechnology 18
(4):136-41 April 2000
Or, as some people have noted, laboriously trying to get computers to do what people do intuitively, without great effort. Conversely, there are things computers can do (relatively) effortlessly, such as massive numbers of error-free calculations. The most promising applications seem to combine computer-aided consideration of many possibilities with human judgment. Narrower terms: cellular automata, expert systems, fuzzy logic, genetic algorithms, neural nets
Related term: training sets.
American Association for Artificial Intelligence: Topics http://www.aaai.org/AITopics/html/current.html
comparative data mining:
Focuses on overlaying large and complex
data sets that are similar to each other ... particularly useful in all forms of clinical trial meta-analyses ... Here the emphasis is on finding dissimilarities, not similarities. "Data mining" Nature Biotechnology Vol. 18: 237-238 Supp. Oct. 2000 Broader term: data mining
curse of dimensionality:
(Bellman
1961) refers to the exponential growth of hypervolume as a function of
dimensionality. In the field of NNs [neural nets], curse of dimensionality
expresses itself in two related problems. Janne Sinkkonen "What is
the curse of dimensionality?" Artificial Intelligence FAQ http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-13.html
Related
term: high-dimensionality
decision trees:
Hierarchical series of questions leading to specific action
steps -- to guide manufacturers and reviewers in determining the level and
extent of safety testing needed at various stages. Report Recommends More
Explicit Guidelines For Assessing Safety of New Ingredients Added to Infant
Formula, National Academy of Sciences press release, 2004 http://www4.nationalacademies.org/news.nsf/isbn/0309091500?OpenDocument
dendrogram:
A tree diagram that depicts the results of hierarchical
clustering. Often the branches of the tree are drawn with lengths that are
proportional to the distance between the profiles or clusters. Dendrograms are
often combined with heat maps, which can give a clear visual representation of
how well the clustering has worked. Related terms: cluster analysis, heat maps, profile charts
error model: A mathematical formulation that identifies the sources of
error in an experiment. An error model provides a mathematical means of
compensating for the errors in the hope that this will lead to more accurate
estimates of the true expression levels and also provides a means of estimating
the uncertainty in the answers. An error model is generally an approximation of
the real situation and embodies numerous assumptions; therefore, its utility
depends on how good these assumptions are. The model can be expressed as a set
of equations, as an algorithm, or using any other mathematical formalisms. ...
The term error model has become very popular among software providers,
particularly in light of the success of Rosetta’s Resolver, which incorporates
an error model. As a result, some software developers may use the term
inappropriately. Not everything that is called an error model really is one.
evolutionary algorithm: An umbrella term used to describe computer-based problem solving systems which use computational models of some of the known mechanisms of evolution as key elements in their design and implementation. A variety of evolutionary algorithms have been proposed. The major ones are: genetic algorithms (see Q1.1), evolutionary programming (see Q1.2), evolution strategies (see Q1.3), classifier systems (see Q1.4), and genetic programming (see Q1.5). They all share a common conceptual base of simulating the evolution of individual structures via processes of selection, mutation, and reproduction. The processes depend on the perceived performance of the individual structures as defined by an environment. More precisely, EAs maintain a population of structures that evolve according to rules of selection and other operators referred to as "search operators" (or genetic operators), such as recombination and mutation. Each individual in the population receives a measure of its fitness in the environment. Reproduction focuses attention on high-fitness individuals, thus exploiting the available fitness information. Recombination and mutation perturb those individuals, providing general heuristics for exploration. Although simplistic from a biologist's viewpoint, these algorithms are sufficiently complex to provide robust and powerful adaptive search mechanisms. Heitkötter, Jörg and Beasley, David, eds. (2001) "The Hitch-Hiker's Guide to Evolutionary Computation: A list of Frequently Asked Questions (FAQ)", USENET: comp.ai.genetic Available via anonymous FTP from ftp://rtfm.mit.edu/pub/usenet/news.answers/ai-faq/genetic/
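To make the selection, mutation, and recombination loop concrete, a minimal genetic-algorithm sketch follows; the bit-string encoding, the count-the-ones fitness function, and all parameter values are illustrative assumptions rather than anything prescribed by the FAQ.

```python
import random

# Minimal genetic algorithm sketch: evolve bit strings toward all ones.
# Representation, fitness, and parameters are illustrative choices.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(genome):            # perceived "performance" in the environment
    return sum(genome)          # here: number of 1 bits

def mutate(genome):             # mutation operator: flip bits at random
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):            # recombination operator: one-point crossover
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # selection: fitter individuals are more likely to reproduce
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))  # best fitness found
```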
evolutionary computation: Encompasses methods of simulating evolution on a computer. The term is relatively new and represents an effort to bring together researchers who have been working in closely related fields but following different paradigms. The field is now seen as including research in genetic algorithms, evolution strategies, evolutionary programming, artificial life, and so forth. For a good overview see the editorial introduction to Vol. 1, No. 1 of "Evolutionary Computation" (MIT Press, 1993). That, along with the papers in the issue, should give you a good idea of representative research.
expert systems:
A computer-based program that encodes rules obtained from process experts, usually in the form of “if-then” statements. J Glassey et al.
“Issues in the development of an industrial bioprocess advisory system”
Trends in Biotechnology 18 (4):136-41 April 2000 Related term: artificial intelligence.
fuzzy:
In contrast to binary (true/false) terms, allows for looser boundaries for sets or concepts.
fuzzy logic:
A superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth: truth values between “completely true” and “completely false”. Introduced by Dr. Lotfi Zadeh (Univ. of California, Berkeley) in the 1960s as a means to model the uncertainty of natural language. AI FAQ, Carnegie Mellon University Computer Science
Department http://www.cs.cmu.edu/Groups/AI/html/faqs/ai/fuzzy/part1/faq-doc-2.html
Approximate, quantitative reasoning that is concerned with the linguistic ambiguity which exists in natural or
synthetic language. At its core are variables such as good, bad, and young as well as modifiers such as more, less,
and very. These ordinary terms represent fuzzy sets in a particular problem. Fuzzy logic plays a key role in many
medical expert systems. MeSH, 1993
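As a small illustration of partial truth values, here is a sketch of a fuzzy membership function for the linguistic term "young"; the breakpoints (25 and 60 years) are arbitrary choices for the example.

```python
def young(age):
    """Fuzzy membership in the set 'young': 1.0 below 25, 0.0 above 60,
    linearly decreasing in between (breakpoints are illustrative)."""
    if age <= 25:
        return 1.0
    if age >= 60:
        return 0.0
    return (60 - age) / 35.0   # partial truth value between 0 and 1

for age in (20, 30, 45, 70):
    print(age, round(young(age), 2))   # e.g., 30 -> 0.86, 45 -> 0.43
```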
global schema: A schema, or a map of the data content of a data
warehouse that integrates the schemata from several source repositories.
It is "global", because it is presented to warehouse users as the schema
that they can query against to find and relate information from any of
the sources, or from the aggregate information in the warehouse. Lawrence
Berkeley Lab "Advanced Computational Structural Genomics" Glossary Broader term: schema
Hansch analysis:
The investigation of the quantitative relationship
between the biological activity of a series of compounds and their
physicochemical substituent or global parameters representing hydrophobic,
electronic, steric and other effects using multiple regression correlation
methodology. IUPAC Medicinal Chemistry Related term: QSAR
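To show the multiple-regression step in practice, a hedged sketch that fits a Hansch-style model log(1/C) = a·π + b·σ + c with NumPy least squares; the substituent parameters and activities below are invented for illustration.

```python
import numpy as np

# Hypothetical substituent parameters and activities for six compounds.
pi    = np.array([0.0, 0.5, 0.9, 1.3, 1.8, 2.2])      # hydrophobic parameter
sigma = np.array([0.1, 0.0, 0.4, 0.2, 0.5, 0.3])      # electronic parameter
log_inv_C = np.array([1.0, 1.4, 2.0, 2.3, 2.9, 3.2])  # observed log(1/C)

# Design matrix with an intercept column: log(1/C) ~ a*pi + b*sigma + c
X = np.column_stack([pi, sigma, np.ones_like(pi)])
coeffs, *_ = np.linalg.lstsq(X, log_inv_C, rcond=None)
a, b, c = coeffs
print(f"log(1/C) = {a:.2f}*pi + {b:.2f}*sigma + {c:.2f}")
```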
heat map: A rectangular display that is a direct translation of a Cluster-format data table. Each cell of the data table is represented as a small color-coded square in which the color indicates the expression value. Generally green indicates low values, black medium values, and red high ones, although this is user-settable. The net effect is a colored picture in which regions of similar color indicate similar profiles or parts of profiles. Related terms: cluster analysis, dendrogram, profile chart; Expression
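A minimal matplotlib sketch of such a display, assuming matplotlib is available; the 5 × 4 expression matrix and the green-black-red colormap construction are illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

# Invented 5-gene x 4-condition expression matrix (log ratios).
data = np.array([[ 1.8,  1.5, -0.2, -1.0],
                 [ 1.6,  1.2,  0.0, -0.8],
                 [-0.1,  0.2,  0.1,  0.0],
                 [-1.2, -0.9,  1.1,  1.4],
                 [-1.0, -1.1,  1.3,  1.6]])

# Classic microarray scheme: low = green, medium = black, high = red.
cmap = LinearSegmentedColormap.from_list("gbr", ["green", "black", "red"])
plt.imshow(data, cmap=cmap, aspect="auto")
plt.colorbar(label="expression value")
plt.xlabel("condition"); plt.ylabel("gene")
plt.show()
```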
heuristic:
Tools such as
genetic algorithms or neural
networks employ heuristic methods to derive solutions which may be
based on purely empirical information and which have no explicit rationalization.
IUPAC Combinatorial Chemistry. Trial-and-error methods.
Narrower terms: heuristic
algorithm, metaheuristics
heuristic algorithm: A programming strategy for solving computationally resistant problems that utilizes self-educating techniques (i.e., feedback evaluation) to improve performance (e.g., FASTA). Problem solving by such experimental, trial-and-error methods does not guarantee the optimal solution. [labvelocity.com]
hierarchical clustering: Unsupervised clustering approach used to determine patterns in gene expression data. Output is a tree-like structure. Related terms: cluster analysis, self-organizing maps
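A short SciPy sketch of agglomerative (hierarchical) clustering on invented expression profiles; the linkage output is exactly the tree structure that a dendrogram draws.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Invented expression profiles: rows = genes, columns = conditions.
profiles = np.array([[1.0, 2.0, 3.0],
                     [1.1, 2.1, 2.9],
                     [3.0, 0.5, 0.1],
                     [2.9, 0.6, 0.2]])

# Agglomerative clustering: average linkage on Euclidean distances.
tree = linkage(profiles, method="average", metric="euclidean")
print(tree)                          # each row merges two clusters at a distance

d = dendrogram(tree, no_plot=True)   # tree layout without drawing it
print(d["ivl"])                      # leaf order after clustering
```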
high-dimensionality: Many applications of machine learning methods
in domains such as information retrieval, natural language processing, molecular
biology, neuroscience, and economics have to be able to deal with various sorts
of discrete data that is typically of very high dimensionality. One standard approach to deal with high dimensional data is to perform a
dimension reduction and map the data to some lower dimensional representation.
Reducing the data dimensionality is often a valuable analysis by itself, but it
might also serve as a pre-processing step to improve or accelerate subsequent
stages such as classification or regression. Two closely related methods that
are often used in this context and that can be found in virtually every textbook
on unsupervised learning are principal component analysis (PCA) and
factor analysis. Thomas Hoffmann, Brown Univ. Statistical Learning in
High Dimensions, Breckenridge CO, Dec. 1999 http://www-2.cs.cmu.edu/~mmp/workshop-nips99/speakers.html
See also under learning algorithms; Related terms: cluster analysis, curse of dimensionality, dimensionality reduction, ill-posed problem, neural nets, principal components analysis
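To illustrate the dimension-reduction step, a small NumPy sketch that projects invented 50-dimensional data onto its top two principal components via the SVD (one of several equivalent ways to compute PCA).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))        # 100 samples, 50 dimensions (invented)

Xc = X - X.mean(axis=0)               # center each dimension
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T                    # project onto top 2 principal components

print(X2.shape)                       # (100, 2): reduced representation
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by 2 components: {explained:.1%}")
```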
ill-posed problems: In the 1960s [Russian mathematician Andrei Nikolaevich] Tikhonov began to produce an important series of papers on ill-posed problems. He defined a class of regularisable ill-posed problems and introduced the concept of a regularising operator which was used in the solution of these problems. Combining his computing skills with solving problems of this type, Tikhonov gave computer implementations of algorithms to compute the operators which he used in the solution of these problems. "Andrei Nikolaevich Tikhonov", MacTutor History of Mathematics, Univ. of St. Andrews, Scotland, 1999
Problems without a unique solution, or without any solution. Life sciences data tends to be very noisy, leading to ill-posed problems. Interpretation of microarray gene expression data is an ill-posed problem. Compare: well-posed problem
influence-based data mining: Complex and granular (as opposed to linear) data in large databases are scanned for influences between specific data sets, and this is done along many dimensions and in multi-table formats. These systems find applications wherever there are significant cause-and-effect relationships between data sets, as occurs, for example, in large and multivariate gene expression studies, which underlie areas such as pharmacogenomics. "Data mining" Nature Biotechnology Vol. 18: 237-238 Supp. Oct. 2000 Broader term: data mining
information theory: Founded by Claude Shannon in the 1940s; has had an enormous impact on communications engineering and computer sciences.
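The field's central quantity is Shannon's entropy, which measures the information content of a discrete source in bits:

$$H(X) = -\sum_{x} p(x)\,\log_2 p(x)$$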
k-means clustering: The researcher picks a value for k, say k = 10,
and the algorithm divides the data into that many clusters in such a way that
the profiles within each cluster are more similar than those across clusters.
The actual algorithms for this are quite sophisticated. Although the core
algorithms require that a value of k be selected up front, methods exist that
adaptively select good values for k by running the core algorithm several times
with different values. A non-hierarchical method. Broader terms: cluster analysis, neural nets
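A compact k-means sketch using scikit-learn (an assumed dependency; the data is synthetic). Note that k is chosen up front, as the entry describes.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Invented expression profiles drawn around three distinct centers.
X = np.vstack([rng.normal(loc=c, scale=0.2, size=(20, 4))
               for c in (0.0, 1.0, 2.0)])

k = 3                                    # chosen up front by the researcher
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
print(labels)                            # cluster assignment for each profile
```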
knowledge based systems:
An extension of the expert system concept
wherein additional forms of knowledge, such as mathematical models, are
incorporated with the expert rules. J Glassey et al. “Issues in the development
of an industrial bioprocess advisory system” Trends in Biotechnology 18 (4):136-141 April 2000 Related term: data mining.
Knowledge Discovery in Databases (KDD):
The notion of Knowledge Discovery in Databases (KDD) has been given various
names, including data mining, knowledge extraction, data pattern
processing, data archaeology, information harvesting, siftware, and even (when
done poorly) data dredging. Whatever the name, the essence of KDD is the
"nontrivial extraction of implicit, previously unknown, and potentially
useful information from data" (Frawley et al 1992). KDD encompasses a
number of different technical approaches, such as clustering, data
summarization, learning classification rules, finding dependency networks,
analyzing changes, and detecting anomalies (see Matheus et al 1993). Gregory Piatetsky-Shapiro, KDD Nuggets FAQ, KDD Nuggets News, 1994 http://www.kdnuggets.com/news/94/n6.txt
Google = about 241,000 July 19, 2002; about 321,000 Oct 22, 2007. Related term: data mining
machine learning:
In Knowledge Discovery, machine
learning is most commonly used to mean the application of induction
algorithms, which is one step in the knowledge discovery process. This is
similar to the definition of empirical learning or inductive learning in
Readings in Machine Learning by Shavlik and Dietterich. Note that in their definition, training examples are "externally supplied," whereas here they are assumed to be supplied by a previous stage of the knowledge discovery process. Machine Learning is the field of scientific study that concentrates on induction algorithms and on other algorithms that can be said to "learn." Glossary of terms, Ron Kohavi, Machine Learning, 30, 271-274, 1998 Related terms: supervised, training sets
MathML:
Intended to
facilitate the use and re-use of mathematical and scientific content on the Web,
and for other applications such as computer algebra systems, print typesetting,
and voice synthesis. W3C http://www.w3.org/Math/whatIsMathML.html
metadata: See Information management & interpretation
metaheuristics:
Widely used to solve important practical combinatorial
optimization problems. However, due to the variety of techniques and concepts
comprised by metaheuristics, there is still no commonly agreed definition for
metaheuristics. The definition used in the Metaheuristics Network is the
following. A metaheuristic
is a set of concepts that can be used to define heuristic methods that can be
applied to a wide set of different problems. In other words, a metaheuristic can
be seen as a general algorithmic framework which can be applied to different
optimization problems with relatively few modifications to make them adapted to
a specific problem. Examples of metaheuristics include simulated annealing (SA),
tabu search (TS), iterated local search (ILS), evolutionary algorithms (EC), and
ant colony optimization (ACO). Project Summary, Metaheuristics Network,
Improving Human Potential, European Community http://www.metaheuristics.org/index.php?main=1
mosaic plots:
A graphical alternative for qualitative, or categorical,
data … display cross- classified data by constructing rectangles of area
proportional to the counts … likely to become more familiar [to scientists]
and their use is likely to grow. Are to categorical variables what scatterplots
are to continuous variables, and their purpose is the same, to find interesting
patterns of association between variables. RD Meyer & D Book “Visualization of data” Current Opinion in Biotechnology 11:89-1196, 2000
multivariate statistics: A set of statistical tools to analyze data (e.g., chemical and biological) matrices using regression and/or pattern recognition techniques. IUPAC Computational
neural networks:
Technique for optimizing a desired property
given a set of items which have been previously characterized with respect
to that property (the 'training set'). Features of members of the training set which correlate with the desired property are 'remembered' and used to generate a model for selecting new items with the desired property or to predict the fit of an unknown member. IUPAC Combinatorial Chemistry
Communication between statisticians and neural net researchers is often hindered by the different terminology used in the two fields. There is a comparison of neural net and statistical jargon in ftp://ftp.sas.com/pub/neural/jargon
Narrower terms: artificial neural networks, probabilistic neural networks. Often uses fuzzy logic. Related terms: artificial intelligence; Drug discovery informatics; self-organizing maps
nonparametric: See under parametric versus nonparametric methods
normalization:
A knotty area in any measurement process, because
it is here that imperfections in equipment and procedures are addressed. The
specifics of normalization evolve as a field matures since the process usually
gets better, and one’s understanding of the imperfections also gets better. In
the microarray field, even larger changes are occurring as robust statistical
methods are being adopted. See also normalization under Microarrays. Narrower term: thresholding
OASIS (Organization for the Advancement of Structured Information Standards): A not-for-profit, global consortium that drives the development, convergence and adoption of e-business standards. http://www.oasis-open.org/who/
parsing:
Using algorithms to analyze data into components. Semantic
parsing involves trying to figure out what the components mean. Lexical
parsing refers to the process of deconstructing the data into components. Narrower term: gene parsing (see Drug discovery informatics)
Pearson correlation:
Commonly used similarity function which looks
explicitly at the shape of the expression profile, avoiding the need to
transform the data beforehand. It’s easiest to understand what this function
does by using a different spatial representation of the data. Take two
expression profiles and draw a scatter plot of corresponding values. In other
words, pair the first value of the first profile with the first value of the
second, the second value of the first profile with the second value of the
second, and so forth. The Pearson correlation measures how well a straight line
can be fit to the data. A correlation of +1 means the fit is perfect to a line
that slants up, 0 means the fit is random, and –1 means the fit is perfect to
a line that slants down.
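The same computation takes a few lines of NumPy on two invented profiles; corrcoef returns the Pearson correlation described above.

```python
import numpy as np

# Two invented expression profiles measured across the same six conditions.
profile_a = np.array([0.2, 1.1, 2.0, 2.8, 4.1, 5.0])
profile_b = np.array([0.5, 1.4, 2.6, 3.1, 4.4, 5.6])

# Pair corresponding values and measure how well a line fits the scatter.
r = np.corrcoef(profile_a, profile_b)[0, 1]
print(f"Pearson correlation: {r:+.3f}")   # +1 up-slanting, 0 random, -1 down
```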
predictive data mining: Combines pattern matching, influence relationships, time set correlations, and dissimilarity analysis to offer simulations of future data sets ... these systems are capable of incorporating entire data sets into their workings, not just samples, which makes their accuracy significantly higher ... used often in clinical trial analysis and in structure-function correlations. "Data mining" Nature Biotechnology Vol. 18: 237-238 Supp. Oct. 2000 Broader term: data mining
probabilistic neural networks: See Statsoft.
probability:
Probability web http://www.mathcs.carleton.edu/probweb/probweb.html
profile chart: A line graph that is a direct translation of a Cluster-format data table. Each cell of the data table is represented as a point whose Y coordinate indicates the expression value, and whose X coordinate is the ordinal position of the value in its profile. The points for each profile are connected by lines. A profile chart is a good way to visualize individual clusters. Related terms: cluster analysis, dendrogram, heat map
regression to the mean:
A common misconception about genetics has to do with overgeneralization about the likelihood of increased quality by selective breeding. Two very tall parents will tend to produce offspring who are taller than the average population, but less tall than the average of the parents' heights. Or consider the famous beauty who supposedly suggested to George Bernard Shaw that they have a child: "With your brains and my looks ..." He is said to have replied, "But what if the child had my looks and your brains?"
robust: A statistical test that yields approximately correct results despite the falsity of certain of the assumptions on which it is based. Oxford English Dictionary
Hence, can refer to a process which is relatively insensitive to human foibles and variables in the way it (for example, an assay) is carried out. Idiot-proof.
schema (plural schemata):
A description of the data represented
within a database. The format of the description varies but includes a table layout for a relational database or an entity-relationship diagram.
Lawrence Berkeley Lab "Advanced Computational Structural Genomics"
Glossary Narrower term: global schema
self-organization: Typically
refers to a process by which systems organize themselves without external
direction, manipulation or control. The term is difficult to define precisely
because it is used in reference to a variety of processes generating a variety
of systems. M. Beth L. Dempster, Glossary, A Self-Organizing Systems Perspective on Organizing for Sustainability, Univ. of Waterloo, Canada, 1998 http://www.nesh.ca/jameskay/ersserver.uwaterloo.ca/jjkay/grad/bdempster/gloss.html
A process where the organization (constraint, redundancy) of a system spontaneously increases, i.e. without this increase being controlled by the environment or an encompassing or otherwise external system. F. Heylighen, "Self Organization", Jan 27, 1997, in: F. Heylighen, C. Joslyn and V. Turchin (editors): Principia Cybernetica Web (Principia Cybernetica, Brussels) http://pespmc1.vub.ac.be/SELFORG.html
semantic parsing: See under parsing
Google = about 1,380 Aug. 20, 2002;
about 45,200 Oct 8, 2007
similarity scores: See under distance functions or similarity scores
stochastic:
"Aiming, proceeding by guesswork" (Webster's Collegiate
Dictionary). Term which is often applied to combinatorial processes involving
true random sampling, such as selection of beads from an encoded library,
or certain methods for library design. IUPAC Combinatorial Chemistry
Truly random, based on probability.
time delay data mining:
The data is collected over time and systems
are designed to look for patterns that are confirmed or rejected as the
data set increases and becomes more robust. This approach is geared toward long-term clinical trial analysis and multicomponent mode-of-action studies. "Data mining" Nature Biotechnology Vol. 18: 237-238 Supp. Oct.
2000 Broader term: data mining
training set: An initial dataset for which the correct answers are known; training consists of feeding the data and the correct answers into a program that adjusts the parameters of the general model. The training program adjusts the model parameters so that the model works well on the given dataset. There are usually enough parameters so that this can be accomplished, provided the dataset is reasonably consistent. The training set usually has to be very large to produce a good classifier.
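A minimal supervised-learning sketch with scikit-learn (an assumed dependency): a classifier's parameters are adjusted on a labeled training set, then checked on held-out examples; the dataset is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Synthetic labeled dataset: the label is determined by the sum of features.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)             # the "correct answers"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)  # adjust parameters on training set
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```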
trends-based data mining: Software analyzes large and complex
data sets in terms of any changes that occur in specific data sets over
time. Data sets can be user-defined or the system can uncover them itself ... This is especially important in cause-and-effect biological experiments. Screening is a good example. "Data mining" Nature Biotechnology Vol. 18: 237-238 Supp. Oct. 2000 Broader term: data mining
unsupervised training sets:
Unsupervised training is where the network has to make sense of the inputs
without outside help. ... Unsupervised training is used to perform some initial
characterization on inputs. However, in the full-blown sense of being truly self-learning, it is still just a shining promise that is not fully understood, does not completely work, and thus is relegated to the lab. Artificial Neural
Networks Technology, Data and Analysis Software, Dept. of Defense, 2000 http://www.dacs.dtic.mil/techs/neural/neural3.html
well-posed problem:
A problem is well-posed if (and only if):
it has one and only one solution; a small change in the data (such as prescribed
boundary conditions, source strengths, coefficients in the PDE, etc) produces
only a small change in the solution. Nils Andersson "Appropriate boundary
conditions", Partial Differential Equations, Univ. of Southampton, UK, 2001 http://www.maths.soton.ac.uk/staff/Andersson/MA361/node38.html
Related term: ill-posed problems
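These conditions go back to Hadamard's classical definition; stated compactly for an abstract problem $A(u) = f$, a problem is well-posed if and only if:

$$\exists\, u:\ A(u) = f \ \text{(existence)}, \qquad u \text{ is unique (uniqueness)}, \qquad u \text{ depends continuously on } f \ \text{(stability)}$$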
IUPAC definitions are reprinted with the permission of the International
Union of Pure and Applied Chemistry.
Bibliography
Evolving terminology for emerging
technologies
Suggestions? Comments? Questions? Mary Chitty mchitty@healthtech.com
Last revised June 15, 2012
Informatics: Bioinformatics, Chemoinformatics, Clinical informatics, Drug discovery informatics, IT infrastructure, Ontologies, Research
Technologies: Microarrays & protein chips, Sequencing
Biology: Protein Structures, Sequences DNA & beyond
How to do research in the MIT AI Lab, a whole bunch of current, former, and honorary MIT AI Lab graduate students, 1988-1997 http://www.cs.indiana.edu/mit.research.how.to/mit.research.how.to.html
coefficient of variation (CV): The standard deviation of a set of measurements divided by their mean.
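As a formula, for measurements with standard deviation σ and mean μ (often multiplied by 100 and reported as a percentage):

$$\mathrm{CV} = \frac{\sigma}{\mu}$$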
Shannon's work, Bell Labs http://cm.bell-labs.com/cm/ms/what/shannonday/work.html
Information theory primer, Tom Schneider, National Cancer Institute, US, 2002 http://www.lecb.ncifcrf.gov/~toms/paper/primer/
AAAI Machine Learning, American Association for Artificial Intelligence http://www.aaai.org/AITopics/html/machine.html
Machine learning, Wikipedia http://en.wikipedia.org/wiki/Machine_learning
IEEE Neural Networks, Institute of Electrical and Electronics Engineers http://www.ieee-nns.org/
Neural Network FAQ ftp://ftp.sas.com/pub/neural/FAQ.html
OASIS Glossary of terms, 50+ terms http://www.oasis-open.org/glossary/index.php
M. Beth L. Dempster, Glossary, A Self-Organizing Systems
Perspective on Organizing for Sustainability, Univ. of Waterloo, Canada, 1998,
30 + terms. http://www.nesh.ca/jameskay/ersserver.uwaterloo.ca/jjkay/grad/bdempster/gloss.html
Evolutionary
Algorithms, terms and definitions, Hans-Georg Beyer, Eva Brucherseifer, Wilfried
Jakob, Hartmut Pohlheim, Bernhard Sendhoff, Thanh Binh To, 2002
http://ls11-www.cs.uni-dortmund.de/people/beyer/EA-glossary/
Flake, Gary, Computational Beauty of Nature: Computer Explorations of
Fractals, Chaos, Complex Systems and Adaptation. Glossary MIT Press, 2000.
280+ definitions. http://mitpress.mit.edu/books/FLAOH/cbnhtml/glossary-intro.html
Glossary of terms, Ron Kohavi, Machine Learning, 30, 271-
274, 1998, 45 definitions. http://ai.stanford.edu/~ronnyk/glossary.html
Inmon, Bill, Glossary
of Data Warehousing, 2002-2005 http://www.inmoncif.com/library/glossary/
Heitkötter, Jörg
and Beasley, David, eds. (2001) "The Hitch-Hiker's Guide to Evolutionary
Computation: A list of Frequently Asked Questions (FAQ)",
USENET: comp.ai.genetic Available via anonymous FTP
from ftp://rtfm.mit.edu/pub/usenet/news.answers/ai-faq/genetic/
About 110 pages
IUPAC Combinatorial: International Union of Pure and Applied
Chemistry, Glossary of Terms Used in Combinatorial Chemistry, D. Maclean, J.J.
Baldwin, V.T. Ivanov, Y. Kato, A. Shaw, P. Schneider, and E.M. Gordon, Pure
Appl. Chem., Vol. 71, No. 12, pp. 2349- 2365, 1999, 100+ definitions http://www.iupac.org/reports/1999/7112maclean/
IUPAC Computational: International Union of Pure and Applied Chemistry, Glossary of Terms used in Computational Drug Design, H. van de Waterbeemd, R.E. Carter, G. Grassy, H. Kubinyi, Y.C. Martin, M.S. Tute, P. Willett, 1997. 125+
definitions. http://www.iupac.org/reports/1997/6905vandewaterbeemd/glossary.html
NIST National Institute of Standards and Technology, Dictionary of
Algorithms, Data Structures and Problems, Paul Black, 2001, 1300+ terms
http://www.nist.gov/dads/
Statsoft, Inc. Statistics glossary, Electronic Statistics Textbook, Tulsa
OK, US 2001, 1200 + definitions. http://www.statsoft.com/textbook/stathome.html