3.4.2012 | 17:10
Peer-Reviewed Articles Supporting Intelligent Design
Many people try to dismiss Intelligent Design on the grounds that no peer-reviewed articles support it. For those who believe that to be a fact, I am posting here a list of articles that support Intelligent Design and have appeared in peer-reviewed scientific journals, taken from: Peer-Reviewed & Peer-Edited Scientific Publications Supporting the Theory of Intelligent Design (Annotated)
Scientific Publications Supportive of Intelligent Design Published in Peer-Reviewed Scientific Journals, Conference Proceedings, or Academic Anthologies
Joseph A. Kuhn, Dissecting Darwinism, Baylor University Medical Center Proceedings, Vol. 25(1): 41-47 (2012).
David L. Abel, Is Life Unique?, Life, Vol. 2:106-134 (2012).
Douglas D. Axe, Philip Lu, and Stephanie Flatau, A Stylus-Generated Artificial Genome with Analogy to Minimal Bacterial Genomes, BIO-Complexity, Vol. 2011(3) (2011).
Stephen C. Meyer and Paul A. Nelson, Can the Origin of the Genetic Code Be Explained by Direct RNA Templating?, BIO-Complexity, Vol. 2011(2) (2011).
Ann K. Gauger and Douglas D. Axe, The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway, BIO-Complexity, Vol. 2011(1) (2011).
Ann K. Gauger, Stephanie Ebnet, Pamela F. Fahey, and Ralph Seelke, Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness, BIO-Complexity, Vol. 2010 (2) (2010).
Michael J. Behe, Experimental Evolution, Loss-of-Function Mutations, and The First Rule of Adaptive Evolution, The Quarterly Review of Biology, Vol. 85(4):1-27 (December 2010).
Douglas D. Axe, The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations, BIO-Complexity, Vol. 2010(4):1 (2010).
Wolf-Ekkehard Lönnig, Mutagenesis in Physalis pubescens L. ssp. floridana: Some further research on Dollo's Law and the Law of Recurrent Variation, Floriculture and Ornamental Biotechnology, 1-21 (2010).
George Montañez, Winston Ewert, William A. Dembski, and Robert J. Marks II, A Vivisection of the ev Computer Organism: Identifying Sources of Active Information, BIO-Complexity, Vol. 2010(3) (2010).
William A. Dembski and Robert J. Marks II, The Search for a Search: Measuring the Information Cost of Higher Level Search, Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol. 14 (5):475-486 (2010).
Douglas D. Axe, The Case Against a Darwinian Origin of Protein Folds, BIO-Complexity, Vol. 2010 (1) (2010).
Winston Ewert, George Montañez, William Dembski and Robert J. Marks II, Efficient Per Query Information Extraction from a Hamming Oracle, 42nd South Eastern Symposium on System Theory, pp. 290-297 (March, 2010).
David L. Abel, Constraints vs Controls, The Open Cybernetics and Systemics Journal, Vol. 4:14-27 (January 20, 2010).
David L. Abel, The GS (genetic selection) Principle, Frontiers in Bioscience, Vol. 14:2959-2969 (January 1, 2010).
D. Halsmer, J. Asper, N. Roman, and T. Todd, The Coherence of an Engineered World, International Journal of Design & Nature and Ecodynamics, Vol. 4(1):47-65 (2009).
Winston Ewert, William A. Dembski, and Robert J. Marks II, Evolutionary Synthesis of Nand Logic: Dissecting a Digital Organism, Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics, pp. 3047-3053 (October, 2009).
William A. Dembski and Robert J. Marks II, Bernoulli's Principle of Insufficient Reason and Conservation of Information in Computer Search, Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics, pp. 2647-2652 (October, 2009).
William A. Dembski and Robert J. Marks II, Conservation of Information in Search: Measuring the Cost of Success, IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, Vol. 39(5):1051-1061 (September, 2009).
David L. Abel, The Universal Plausibility Metric (UPM) & Principle (UPP), Theoretical Biology and Medical Modelling, Vol. 6(27) (2009).
David L. Abel, The Capabilities of Chaos and Complexity, International Journal of Molecular Sciences, Vol. 10:247-291 (2009).
David L. Abel, The biosemiosis of prescriptive information, Semiotica, Vol. 174(1/4):1-19 (2009).
A. C. McIntosh, Information and Entropy -- Top-Down or Bottom-Up Development in Living Systems?, International Journal of Design & Nature and Ecodynamics, Vol. 4(4):351-385 (2009).
A.C. McIntosh, Evidence of design in bird feathers and avian respiration, International Journal of Design & Nature and Ecodynamics, Vol. 4(2):154-169 (2009).
David L. Abel, The Cybernetic Cut: Progressing from Description to Prescription in Systems Theory, The Open Cybernetics and Systemics Journal, Vol. 2:252-262 (2008).
Richard v. Sternberg, DNA Codes and Information: Formal Structures and Relational Causes, Acta Biotheoretica, Vol. 56(3):205-232 (September, 2008).
Douglas D. Axe, Brendan W. Dixon, Philip Lu, Stylus: A System for Evolutionary Experimentation Based on a Protein/Proteome Model with Non-Arbitrary Functional Constraints, PLoS One, Vol. 3(6):e2246 (June 2008).
Michael Sherman, Universal Genome in the Origin of Metazoa: Thoughts About Evolution, Cell Cycle, Vol. 6(15):1873-1877 (August 1, 2007).
Kirk K. Durston, David K. Y. Chiu, David L. Abel, Jack T. Trevors, Measuring the functional sequence complexity of proteins, Theoretical Biology and Medical Modelling, Vol. 4:47 (2007).
Wolf-Ekkehard Lönnig and Heinz-Albert Becker, "Carnivorous Plants," in Handbook of Plant Science, Vol 2:1493-1498 (edited by Keith Roberts, John Wiley & Sons, 2007).
David L. Abel, Complexity, self-organization, and emergence at the edge of chaos in life-origin models, Journal of the Washington Academy of Sciences, Vol. 93:1-20 (2007).
Felipe Houat de Brito, Artur Noura Teixeira, Otávio Noura Teixeira, Roberto C. L. Oliveira, A Fuzzy Intelligent Controller for Genetic Algorithm Parameters, in Advances in Natural Computation (Licheng Jiao, Lipo Wang, Xinbo Gao, Jing Liu, Feng Wu, eds, Springer-Verlag, 2006); Felipe Houat de Brito, Artur Noura Teixeira, Otávio Noura Teixeira, Roberto C. L. Oliveira, A Fuzzy Approach to Control Genetic Algorithm Parameters, SADIO Electronic Journal of Informatics and Operations Research, Vol. 7(1):12-23 (2007).
Wolf-Ekkehard Lönnig, Kurt Stüber, Heinz Saedler, Jeong Hee Kim, Biodiversity and Dollo's Law: To What Extent can the Phenotypic Differences between Misopates orontium and Antirrhinum majus be Bridged by Mutagenesis?, Bioremediation, Biodiversity and Bioavailability, Vol. 1(1):1-30 (2007).
Wolf-Ekkehard Lönnig, Mutations: The Law of Recurrent Variation, Floriculture, Ornamental and Plant Biotechnology, Vol. 1:601-607 (2006).
David L. Abel and Jack T. Trevors, Self-organization vs. self-ordering events in life-origin models, Physics of Life Reviews, Vol. 3:211-228 (2006).
David L. Abel and Jack T. Trevors, More than Metaphor: Genomes Are Objective Sign Systems, Journal of BioSemiotics, Vol. 1(2):253-267 (2006).
Øyvind Albert Voie, Biological function and the genetic code are interdependent, Chaos, Solitons and Fractals, Vol. 28:1000-1004 (2006).
Kirk Durston and David K. Y. Chiu, A Functional Entropy Model for Biological Sequences, Dynamics of Continuous, Discrete & Impulsive Systems: Series B Supplement (2005).
David L. Abel and Jack T. Trevors, Three subsets of sequence complexity and their relevance to biopolymeric information, Theoretical Biology and Medical Modeling, Vol. 2(29):1-15 (August 11, 2005).
John A. Davison, A Prescribed Evolutionary Hypothesis, Rivista di Biologia/Biology Forum, Vol. 98: 155-166 (2005).
Douglas D. Axe, Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds, Journal of Molecular Biology, Vol. 341:1295-1315 (2004).
Michael Behe and David W. Snoke, Simulating evolution by gene duplication of protein features that require multiple amino acid residues, Protein Science, Vol. 13 (2004).
Wolf-Ekkehard Lönnig, Dynamic genomes, morphological stasis, and the origin of irreducible complexity, in Valerio Parisi, Valeria De Fonzo, and Filippo Aluffi-Pentini eds., Dynamical Genetics (2004).
Stephen C. Meyer, The origin of biological information and the higher taxonomic categories, Proceedings of the Biological Society of Washington, Vol. 117(2):213-239 (2004).
John Angus Campbell and Stephen C. Meyer, Darwinism, Design, and Public Education (DDPE) (East Lansing, Michigan: Michigan State University Press, 2003).
This is a collection of interdisciplinary essays that addresses the scientific and educational controversy concerning the theory of intelligent design. It was peer-reviewed by a philosopher of science, a rhetorician of science, and a professor in the biological sciences from an Ivy League university. The book includes five scientific articles advancing the case for the theory of intelligent design, the contents of which are summarized below.
S. C. Meyer, DNA and the Origin of Life: Information, Specification and Explanation, DDPE, pp. 223-285.
M. J. Behe, Design in the Details: The Origin of Biomolecular Machines, DDPE, pp. 287-302.
P. Nelson and J. Wells, Homology in Biology: Problem for Naturalistic Science and Prospect for Intelligent Design, DDPE, pp. 303-322.
S. C. Meyer, M. Ross, P. Nelson, P. Chien, The Cambrian Explosion: Biology's Big Bang, DDPE, pp. 323-402.
W. A. Dembski, Reinstating Design Within Science, DDPE, pp. 403-418.
Meyer contends that intelligent design provides a better explanation than competing chemical evolutionary models for the origin of the information present in large bio-macromolecules such as DNA, RNA, and proteins. Meyer shows that the term information as applied to DNA connotes not only improbability or complexity but also specificity of function. He then argues that neither chance nor necessity, nor the combination of the two, can explain the origin of information starting from purely physical-chemical antecedents. Instead, he argues that our knowledge of the causal powers of both natural entities and intelligent agency suggests intelligent design as the best explanation for the origin of the information necessary to build a cell in the first place.
Behe sets forth a central concept of the contemporary design argument, the notion of "irreducible complexity." Behe bases his argument on a consideration of phenomena studied in his field, biochemistry, including systems and mechanisms that display complex, interdependent, and coordinated functions. Such intricacy, Behe argues, defies the causal power of natural selection acting on random variation, the "no end in view" mechanism of neo-Darwinism. On the other hand, he notes that irreducible complexity is a feature of systems that are known to be designed by intelligent agents. He thus concludes that, compared to Darwinian theory, intelligent design provides a better explanation for the presence of irreducible complexity in the molecular machines of the cell.
Paul Nelson and Jonathan Wells reexamine the phenomenon of homology, the structural identity of parts in distinct species such as the pentadactyl plan of the human hand, the wing of a bird, and the flipper of a seal, on which Darwin was willing to rest his entire argument. Nelson and Wells contend that natural selection explains some of the facts of homology but leaves important anomalies (including many so-called molecular sequence homologies) unexplained. They argue that intelligent design explains the origin of homology better than do mechanisms cited by advocates of neo-Darwinism.
Meyer, Ross, Nelson, and Chien show that the pattern of fossil appearance in the Cambrian period contradicts the predictions or empirical expectations of neo-Darwinian (and punctuationalist) evolutionary theory. They argue that the fossil record displays several features -- a hierarchical top-down pattern of appearance, the morphological isolation of disparate body plans, and a discontinuous increase in information content -- that are strongly reminiscent of the pattern of evidence found in the history of human technology. Thus, they conclude that intelligent design provides a better, more causally adequate explanation of the origin of the novel animal forms present in the Cambrian explosion.
Dembski argues that advances in the information sciences have provided a theoretical basis for detecting the prior action of an intelligent agent. Starting from the commonsense observation that we make design inferences all the time, Dembski shows that we do so on the basis of clear criteria. He then shows how those criteria, complexity and specification, reliably indicate intelligent causation. He gives a rational reconstruction of a method by which rational agents decide between competing types of explanation, those based on chance, physical-chemical necessity, or intelligent design. Since he asserts we can detect design by reference to objective criteria, Dembski also argues for the scientific legitimacy of inferences to intelligent design.
Frank J. Tipler, Intelligent Life in Cosmology, International Journal of Astrobiology, Vol. 2(2): 141-148 (2003).
David L. Abel, Is Life reducible to complexity?, Fundamentals of Life, Chapter 1.2 (2002).
David K.Y. Chiu and Thomas W.H. Lui, Integrated Use of Multiple Interdependent Patterns for Biomolecular Sequence Analysis, International Journal of Fuzzy Systems, Vol. 4(3):766-775 (September 2002).
Michael J. Denton, Craig J. Marshall, and Michael Legge, The Protein Folds as Platonic Forms: New Support for the pre-Darwinian Conception of Evolution by Natural Law, Journal of Theoretical Biology, Vol. 219: 325-342 (2002).
Wolf-Ekkehard Lönnig and Heinz Saedler, Chromosome Rearrangement and Transposable Elements, Annual Review of Genetics, Vol. 36:389-410 (2002).
Douglas D. Axe, Extreme Functional Sensitivity to Conservative Amino Acid Changes on Enzyme Exteriors, Journal of Molecular Biology, Vol. 301:585-595 (2000).
Solomon Victor and Vijaya M. Nayak, Evolutionary anticipation of the human heart, Annals of the Royal College of Surgeons of England, Vol. 82:297-302 (2000).
Solomon Victor, Vijaya M. Nayak, and Raveen Rajasingh, Evolution of the Ventricles, Texas Heart Institute Journal, Vol. 26:168-175 (1999).
W. A. Dembski, The Design Inference: Eliminating Chance through Small Probabilities (Cambridge: Cambridge University Press, 1998).
R. Kunze, H. Saedler, and W.-E. Lönnig, Plant Transposable Elements, in Advances in Botanical Research, Vol. 27:331-470 (Academic Press, 1997).
Michael Behe, Darwin's Black Box: The Biochemical Challenge to Evolution (New York: The Free Press, 1996).
Charles B. Thaxton, Walter L. Bradley, Roger L. Olsen, The Mystery of Life's Origin: Reassessing Current Theories (New York: Philosophical Library, 1984; Dallas, Texas: Lewis & Stanley Publishing, 4th ed., 1992).
Stanley L. Jaki, Teaching of Transcendence in Physics, American Journal of Physics, Vol. 55(10):884-888 (October 1987).
Granville Sewell, Postscript, in Analysis of a Finite Element Method: PDE/PROTRAN (New York: Springer Verlag, 1985).
William G. Pollard, Rumors of transcendence in physics, American Journal of Physics, Vol. 52 (10) (October 1984).
This article by Dr. Joseph Kuhn of the Department of Surgery at Baylor University Medical Center appeared in the peer-reviewed journal Baylor University Medical Center Proceedings. It poses a number of challenges to both chemical and biological evolution, including:
1. Limitations of the chemical origin of life data to explain the origin of DNA
2. Limitations of mutation and natural selection theories to address the irreducible complexity of the cell
3. Limitations of transitional species data to account for the multitude of changes involved in the transition.
Regarding the chemical origin of life, Kuhn points to the Miller-Urey experiments and correctly observes that "the experimental conditions of a low-oxygen, nitrogen-rich reducing environment have been refuted." Citing Stephen Meyer's Signature in the Cell, he contends that "the fundamental and insurmountable problem with Darwinian evolution lies in the remarkable complexity and inherent information contained within DNA." Kuhn also explains that "Darwinian evolution and natural selection could not have been causes of the origin of life, because they require replication to operate, and there was no replication prior to the origin of life," but no other known cause can organize the information in life.
Dr. Kuhn then turns to explaining the concept of irreducible complexity, citing Michael Behe's book Darwin's Black Box and noting that "irreducible complexity suggests that all elements of a system must be present simultaneously rather than evolve through a stepwise, sequential improvement, as theorized by Darwinian evolution." Further, "The fact that these irreducibly complex systems are specifically coded through DNA adds another layer of complexity called 'specified complexity.'" As a medical doctor, Kuhn proposes that irreducibly complex systems within the human body include "vision, balance, the respiratory system, the circulatory system, the immune system, the gastrointestinal system, the skin, the endocrine system, and taste." He concludes that "the human body represents an irreducibly complex system on a cellular and an organ/system basis."
Kuhn also explores the question of human/ape common ancestry, citing Jonathan Wells's book The Myth of Junk DNA and arguing:
DNA homology between ape and man has been reported to be 96% when considering only the current protein-mapping sequences, which represent only 2% of the total genome. However, the actual similarity of the DNA is approximately 70% to 75% when considering the full genome, including the previously presumed "junk DNA," which has now been demonstrated to code for supporting elements in transcription or expression. The 25% difference represents almost 35 million single nucleotide changes and 5 million insertions or deletions.
In Dr. Kuhn's view, this poses a problem for Darwinian evolution because "[t]he ape to human species change would require an incredibly rapid rate of mutation leading to formation of new DNA, thousands of new proteins, and untold cellular, neural, digestive, and immune-related changes in DNA, which would code for the thousands of new functioning proteins."
Kuhn also observes that a challenge to neo-Darwinism comes from the Cambrian explosion:
Thousands of specimens were available at the time of Darwin. Millions of specimens have been classified and studied in the past 50 years. It is remarkable to note that each of these shows a virtual explosion of nearly all phyla (35/40) of the animal kingdom over a relatively short period during the Cambrian era 525 to 530 million years ago. Since that time, there has been occasional species extinction, but only rare new phyla have been convincingly identified. The seminal paper from paleoanthropologists J. Valentine and D. H. Erwin notes that the absence of transitional species for any of the Cambrian phyla limits the neo-Darwinian explanation for evolution.
Despite Texas's call for discussing the scientific strengths and weaknesses of Darwinian evolution, Kuhn closes by noting, "In 2011, when new textbooks were presented to the State Board of Education, 9 out of 10 failed to provide the mandated supplementary curricula, which would include both positive and negative aspects of evolution (44)." Citing Discovery Institute's Report on the Texas Textbooks, he laments:
[S]everal of the textbooks continued to incorrectly promote the debunked Miller-Urey origin of life experiment, the long-discredited claims about nonfunctional appendix and tonsils, and the fraudulent embryo drawings from Ernst Haeckel. In essence, current biology students, aspiring medical students, and future scientists are not being taught the whole story. Rather, evidence suggests that they continue to receive incorrect and incomplete material that exaggerates the effect of random mutation and natural selection to account for DNA, the cell, or the transition from species to species.
Kuhn concludes, "It is therefore time to sharpen the minds of students, biologists, and physicians for the possibility of a new paradigm."
What is it that distinguishes life from non-living entities? This peer-reviewed paper attempts to answer that question, noting that "Life pursues thousands of biofunctional goals," whereas "[n]either physicodynamics, nor evolution, pursue goals." Is it possible that unguided evolution and strictly material causes produced life's purposeful processes? According to this paper, the answer is no. Life's goals include the use of symbol systems to maintain homeostasis far from equilibrium in the harshest of environments, positive and negative feedback mechanisms, prevention and correction of its own errors, and organization of its components into "Sustained Functional Systems." But the article notes that the integration and regulation of biochemical pathways and cycles into homeostatic metabolism is "programmatically controlled, not just physicodynamically constrained." This programming is termed "cybernetic" -- yet according to the paper, cybernetic control flows only "from the nonphysical world of formalism into the physical world through the instantiation of purposeful choices." Indeed, "[o]nly purposeful choice contingency at bona fide decision nodes can rescue from eventual deterioration the organization and function previously programmed into physicality." Life thus cannot be the result of unguided material processes -- some cause capable of programming purposeful choices is necessary.
This peer-reviewed paper is a follow-up to the 2008 PLoS One paper co-authored by Axe and Lu on Stylus, a computer simulation of evolution that is more faithful to biological reality than many others. This 2011 paper explains that the functions of the digital organisms in other simulations are often divorced from real-world meaning. They designed Stylus to present a more accurate picture:
The motivation for Stylus was the recognition that prior models used to study evolutionary innovation did not adequately represent the complex causal connection between genotypes and phenotypes.
Stylus aims to correct these deficiencies by simulating Darwinian evolution in a manner that more accurately reflects the biological relationship between genotype and phenotype. It is also more realistic because it solves real-world problems. As the paper explains, "Functional specificity therefore has a structural basis in the Stylus world, just as it does in the real world." Stylus manipulates digital objects that have real-world meaning: the targets of evolution in Stylus are Chinese characters. In the paper's words:
These translation products, called vector proteins, are functionless unless they form legible Chinese characters, in which case they serve the real function of writing. This coupling of artificial genetic causation to the real world of language makes evolutionary experimentation possible in a context where innovation can have a richness of variety and a depth of causal complexity that at least hints at what is needed to explain the complexity of bacterial proteomes.
These characters not only have real-world meaning, but their function-related shapes bear interesting analogies to proteins. An additional similarity between Chinese characters and proteins is that just as protein domains are re-used throughout many proteins, so particular shapes, called strokes, are found commonly throughout Chinese characters.
Basic to life is an information conversion, where the information carried in genes (the genotype) is converted into an organism's observable traits (the phenotype). Those biological structures then perform various functions. Another way of framing this information conversion is therefore: sequence → structure → function. Axe, Lu and Flatau explain that many previous computer programs attempting to simulate evolution achieve part of this conversion, but not the whole thing.
For example, Conway's famous Game of Life starts with a structure, and in some instances that structure can perform a function. But there is no sequence involved in the conversion. Avida starts with a sequence of programming commands, and when successful performs certain logic functions. But in Avida there is no structure to mediate between sequence and function. Stylus, on the other hand, is more advanced in that it simulates the full sequence → structure → function information transfer. It does this by starting with a programmed genome. As the paper explains:
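To make this three-stage conversion concrete, here is a toy sketch in Python. It is emphatically not Stylus's actual code; every name and rule in it is invented for illustration, and it shows only what it means for a "structure" stage to mediate between sequence and function:

```python
# Toy illustration of the sequence -> structure -> function conversion.
# NOT Stylus itself: a hypothetical sketch of why a mediating structure
# stage matters, with all rules invented for illustration.

def decode(sequence):
    """Sequence -> structure: fold a digital genome into a shape.

    Here the 'structure' is just the list of (x, y) points traced by
    interpreting each symbol as a move, loosely analogous to the way
    Stylus renders a gene as a drawn 'vector protein'.
    """
    moves = {"A": (1, 0), "C": (0, 1), "G": (-1, 0), "T": (0, -1)}
    x, y, points = 0, 0, [(0, 0)]
    for base in sequence:
        dx, dy = moves[base]
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

def evaluate(structure, target):
    """Structure -> function: score how well the folded shape matches a
    target shape (in Stylus, legibility as a Chinese character)."""
    overlap = len(set(structure) & set(target))
    return overlap / len(target)

# A direct sequence -> function model (like Avida's logic functions) skips
# decode() entirely; a structure-mediated model must pass through it.
target_shape = decode("ACGTAC")          # pretend this is the "legible" form
score = evaluate(decode("ACGTAA"), target_shape)
print(f"functional score: {score:.2f}")
```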
[The] Stylus genome encodes a special kind of text, namely, one that describes how to decode the genome. That is, the desired genome will encode a sequence of Chinese characters (in the form of vector proteins) that tells a reader of Chinese how Stylus genes are translated into vector sequences, and how those sequences are processed to make readable vector proteins.
The paper explains: "What Stylus offers that no other model offers, to our knowledge, is an artificial version of gene-to-protein genetic causation that parallels the real thing."
In the world of Stylus, a Chinese character is like a protein. So how can we determine if a functional "protein" has evolved? According to the paper, "At the core of Stylus software is an algorithm that quantifies the likeness of a given vector protein to a specified Chinese character." This complicated algorithm is described as follows:
Stylus endows these graphical constructs with interesting similarities to their molecular counterparts by uncovering and exploiting a pre-existing analogy -- the analogy between the set of characters used in Chinese writing and the set of protein structures used in life. Specifically, vector proteins are drawn objects that may function as legible Chinese characters if they are suitably formed. ... Stylus is unique in its use of real function that maps well to molecular biology. It therefore represents a significant advance in the field of evolutionary modeling. (internal citations omitted)
The paper presents a set of Chinese characters that can be used for simulating the evolutionary process in the Stylus world. But can these Chinese character groups, which have many qualities that parallel real-world protein families, evolve by random mutation and natural selection? That's the sort of question the creators of Stylus hope to answer. The results of such simulations will probably be fleshed out in future papers. But the current paper leaves us with a strong sense of where this is all heading:
Evolutionary causation is intrinsically tied to the relationship between genotype and phenotype, which depends on low-level genetic causation. It follows that evolutionary explanations of the origin of functional protein systems must subordinate themselves to our understanding of how those systems operate. In other words, the study of evolutionary causation cannot enjoy the disciplinary autonomy that studies of genetic causation can.
In view of this, the contribution of Stylus is to make evolutionary experimentation possible in a model world where low-level genetic causation has the essential role that it has in the real world. Combined with the free Stylus software, the complete Stylus genome made freely available with this paper paves the way for analogy-based studies on a wide variety of important subjects, many of which are difficult to study by direct experimentation. Among these are the evolution of new protein folds by combining existing parts, the optimality and evolutionary optimization of the genetic code, the significance of selective thresholds for the origin and optimization of protein functions, and the reliability of methods used for homology detection and phylogenetic-tree construction.
There probably will never be a perfect computer simulation of biological evolution, but Stylus brings new and improved methods to the field of evolutionary modeling. This tool will help those interested in testing the viability of Darwinian claims to assess whether complex features can be created by random mutations at the molecular level.
This peer-reviewed paper had its origins in a debate at Biola University in 2009 where Stephen Meyer debated two critical biologists. One of those scientists was Arthur Hunt from the University of Kentucky, who had previously cited the research of Michael Yarus, which proposed that certain chemical affinities between RNA triplets and amino acids could have formed a chemical basis for the origin of the genetic code. According to Hunt, Yarus's research showed that chemistry and physics can account for the origin of the genetic code and thus the very heart of Meyer's thesis (and his book Signature in the Cell) is wrong. Meyer and Nelson's BIO-Complexity paper responds to Yarus's claims, showing that when challenged, ID proponents can produce compelling technical rebuttals. According to their detailed response, Yarus's (and Hunt's) claims fail due to selective use of data, incorrect null models, a weak signal even from positive results, and unsupported assumptions about the pre-biotic availability of amino acids. Rather than refuting design, the research shows the need for an intelligently directed origin of the code.
This paper reports research conducted by Biologic Institute scientists Ann Gauger and Douglas Axe on the minimum number of changes that would be required to evolve one protein into another protein with a different function. The investigators studied two proteins, Kbl and BioF, with different functions but highly similar structures -- thought by evolutionists to be very closely related. Through mutational analysis, Gauger and Axe found that a minimum of seven independent mutations -- and probably many more -- would be necessary to convert Kbl to perform the function of its allegedly close genetic relative, BioF. Citing Axe's 2010 BIO-Complexity paper, "The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations," they report that this is beyond the limits of Darwinian evolution:
The extent to which Darwinian evolution can explain enzymatic innovation seems, on careful inspection, to be very limited. Large-scale innovations that result in new protein folds appear to be well outside its range. This paper argues that at least some small-scale innovations may also be beyond its reach. If studies of this kind continue to imply that this is typical rather than exceptional, then answers to the most interesting origins questions will probably remain elusive until the full range of explanatory alternatives is considered.
This research, published by molecular biologist Ann Gauger of the Biologic Institute and Ralph Seelke of the University of Wisconsin-Superior, started by breaking a gene in the bacterium Escherichia coli required for synthesizing the amino acid tryptophan. When the gene was broken in just one place, random mutations in the bacterium's genome were capable of fixing the gene. But when two mutations were required to restore function, Darwinian evolution could not do the job. Such results show that it is extremely unlikely for blind and unguided Darwinian processes to find rare amino-acid sequences that yield functional proteins. In essence, functional proteins are multi-mutation features in the extreme.
This peer-reviewed paper by Michael Behe in the journal Quarterly Review of Biology helps explain why we don't observe the evolution of new protein functions. After reviewing many studies on bacterial and viral evolution, he concluded that most adaptations at the molecular level are due to the loss or modification of a pre-existing molecular function. In other words, since Darwinian evolution proceeds along the path of least resistance, Behe found that organisms are far more likely to evolve by losing a biochemical function than by gaining one. He thus concluded that the rate of appearance of an adaptive mutation that arises from the diminishment or elimination of the activity of a protein is expected to be 100-1000 times the rate of appearance of an adaptive mutation that requires specific changes to a gene. If Behe is correct, then molecular evolution faces a severe problem: if a loss (or decrease) of function is much more likely than a gain of function, logic dictates that an evolving population will eventually run out of molecular functions to lose or diminish. Behe's paper suggests that if Darwinian evolution is at work, something else must be generating the information for new molecular functions.
The ability of Darwinian evolution to produce features that require multiple mutations before gaining a benefit has been an issue long debated between proponents of intelligent design and proponents of neo-Darwinism. This paper responds to arguments from Michael Lynch and Adam Abegg, finding that they made a mistake -- actually two mistakes -- in their calculation of the length of time required for multiple mutations to occur when there is no adaptive benefit until all mutations are in place.
The purpose of Axe's paper is then to mathematically determine how much time is needed to evolve traits that require multiple mutations before any adaptive benefit is conferred on the organism. He notes that there are essentially three models that might be invoked to explain the origin of these complex features: molecular saltation, sequential fixation, and stochastic tunneling. Axe's paper tackles stochastic tunneling, a model that is in a sense midway between the molecular saltation and sequential fixation models. According to Axe, stochastic tunneling "differs from sequential fixation only in that it depends on each successive point mutation appearing without the prior one having become fixed." However, because the prior mutations are not yet fixed in the larger population, the number of organisms that carry them may be small. Thus, this mechanism "must instead rely on the necessary mutations appearing within much smaller subpopulations," or as Axe models it, bacterial lines. This model resembles molecular saltation in that it depends on all required mutations eventually appearing by chance -- but anticipates this will happen after mutations are fixed in smaller subpopulations. Axe explains why all of these models face unavoidable statistical improbabilities: "in view of the fact that the underlying limitation is an unavoidable aspect of statistics -- that independent rare events only very rarely occur in combination -- it seems certain that all chance-based mechanisms must encounter it."
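The statistical point in that last quotation can be illustrated with a few lines of Python. The mutation rate and population size below are placeholders chosen for illustration, not figures from Axe's model, and the sketch covers only the saltational limiting case where all mutations must co-occur in a single replication:

```python
# Illustration: independent rare events only very rarely occur in combination.
# All numbers are placeholders for illustration, not values from Axe's paper.
rate = 1e-9   # assumed probability of a specific point mutation per replication
pop = 1e9     # assumed number of replications per generation

for k in range(1, 6):                    # k specific mutations needed together
    p_combo = rate ** k                  # chance all k arise in one replication
    wait = 1 / (p_combo * pop)           # expected generations until it happens
    print(f"{k} mutation(s): ~{wait:.0e} generations expected")
```

Each additional required mutation multiplies the expected waiting time by a factor of a billion in this toy setup, which is the sense in which the difficulty grows exponentially with the number of coordinated changes.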
Axe thus aims to accurately model the evolution of a multi-mutation feature. He investigates two cases: (1) when intermediate mutations are slightly disadvantageous, and (2) when intermediate mutations are selectively neutral. Axe seeks to give neo-Darwinian evolution a generous helping of probabilistic resources by modeling the evolution of bacteria -- asexual organisms that reproduce quickly and have very large effective population sizes. Unsurprisingly, Axe found that Darwinian evolution has great difficulty fixing multiple mutations when those mutations have negative selection coefficients (i.e., they are disadvantageous, or maladaptive). Neutral mutations have a better shot at becoming fixed, but even here Axe finds that the ability of neo-Darwinian evolution to produce multi-mutation features is severely limited. The implications of this analysis for Darwinian evolution are large and negative. Axe's model made assumptions which were very generous towards Darwinian evolution. He assumed the existence of a huge population of asexually reproducing bacteria that could replicate quickly -- perhaps nearly three times per day -- over the course of billions of years. In these circumstances, complex adaptations requiring up to six mutations with neutral intermediates can become fixed. Beyond that, things become implausible. If only slightly maladaptive intermediate mutations are required for a complex adaptation, only a couple of mutations (at most two) could be fixed. If highly maladaptive mutations are required, the trait will never appear. Axe discusses the implications of his work:
In the end, the conclusion that complex adaptations cannot be very complex without running into feasibility problems appears to be robust. ... Although studies of this kind tend to be interpreted as supporting the Darwinian paradigm, the present study indicates otherwise, underscoring the importance of combining careful measurements with the appropriate population models.
Axe's paper, because it focuses on bacteria, does not model the evolution of sexually reproducing organisms. In sexually reproducing eukaryotic organisms, the longer generation times and lower effective population sizes would dramatically lower the number of mutations that could be fixed before acquiring some adaptive benefit. In vertebrate evolution, the probabilistic resources available to Darwinian evolution would be much smaller than those available to bacteria, and the result would be proportionately more difficult to explain along Darwinian lines. Some other mechanism must be generating complex multi-mutation features.
This original research paper on mutagenesis in plants favorably cites "intelligent design proponents," including Michael Behe, William Dembski, Jonathan Wells, and Stephen Meyer, as advocating one of various legitimate "scientific theories on the origin of species." Citing skeptics of neo-Darwinism such as Behe and "the almost 900 scientists of the Scientific Dissent from Darwinism," the paper notes that:
Many of these researchers also raise the question (among others), why -- even after inducing literally billions of induced mutations and (further) chromosome rearrangements -- all the important mutation breeding programs have come to an end in the Western world instead of eliciting a revolution in plant breeding, either by successive rounds of selective "micromutations" (cumulative selection in the sense of the modern synthesis), or by "larger mutations" ... and why the law of recurrent variation is endlessly corroborated by the almost infinite repetition of the spectra of mutant phenotypes in each and any new extensive mutagenesis experiment (as predicted) instead of regularly producing a range of new systematic species...
Lönnig focuses on the origin of a particular trait found in some angiosperms, where longer sepals form a shelter for developing fruit called inflated calyx syndrome, or "ICS." According to Lönnig, phylogenetic data indicate that under a neo-Darwinian interpretation, this trait was either lost in multiple lineages or evolved independently multiple times. If the trait evolved multiple times independently, then why do so many plants still lack such a "lantern" protective shelter? After noting that some proponents of neo-Darwinism make unfalsifiable appeals to unknown selective advantages, he concludes that neo-Darwinism is not making falsifiable predictions and finds that this "infinity of mostly non-testable explanations (often just-so-stories) itself may put the theory outside science."
However, there is another possibility, namely the scientific hypothesis of intelligent design. In contrast to neo-Darwinism, the author notes the ID view can "be falsified by proving (among other points) that the probability to form an ICS by purely natural processes is high, that specified complexity is low, and finally, by generating an ICS by random mutations in a species displaying none." Lönnig recounts the many phrases Darwin used to explain that his theory of evolution requires "innumerable slight variations," and argues that the ICS could not evolve in such a stepwise fashion. After reviewing the multiple complex steps involved in forming an ICS, he states that his research "appears to be in agreement with Behe's studies (2007): it seems to be very improbable that the current evolutionary theories like the modern synthesis (continuous evolution) or the hopeful monster approach (in one or very few steps) can satisfactorily explain the origin of the ICS." In closing, Lönnig further cites Behe's concept of irreducible complexity and Dembski's arguments regarding the universal probability bound, contending that the ICS may be beyond the edge of evolution. Nevertheless, he leaves the question open for further research, which he enthusiastically invites. Yet, citing the work of Stephen Meyer, William Dembski, and Robert Marks, he concludes that "it appears to be more than unlikely to generate the whole world of living organisms by the neo-Darwinian method."
This paper continues the work of the Evolutionary Informatics Lab, showing that some cause other than Darwinian mechanisms is required to produce new information. Thomas Schneider's "ev" program has been widely cited as showing that Darwinian processes can increase information. In this peer-reviewed paper, William Dembski and his coauthors demonstrate that, contrary to such claims, the ev program is in fact rigged to produce a particular outcome. According to the paper, ev "exploit[s] one or more sources of knowledge to make the [evolutionary] search successful," and this knowledge "predisposes the search towards its target." They explain how the program smuggles in active information:
The success of ev is largely due to active information introduced by the Hamming oracle and from the perceptron structure. It is not due to the evolutionary algorithm used to perform the search. Indeed, other algorithms are shown to mine active information more efficiently from the knowledge sources provided by ev.
Schneider claims that ev demonstrates that naturally occurring genetic systems gain information by evolutionary processes and that "information gain can occur by punctuated equilibrium." Our results show that, contrary to these claims, ev does not demonstrate "that biological information...can rapidly appear in genetic control systems subjected to replication, mutation, and selection." We show this by demonstrating that there are at least five sources of active information in ev.
1. The perceptron structure. The perceptron structure is predisposed to generating strings of ones sprinkled by zeros or strings of zeros sprinkled by ones. Since the binding site target is mostly zeros with a few ones, there is a greater predisposition to generate the target than if it were, for example, a set of ones and zeros produced by the flipping of a fair coin.
2. The Hamming Oracle. When some offspring are correctly announced as more fit than others, external knowledge is being applied to the search and active information is introduced. As with the child's game, we are being told with respect to the solution whether we are getting "colder" or "warmer."
3. Repeated Queries. Two queries contain more information than one. Repeated queries can contribute active information.
4. Optimization by Mutation. This process discards mutations with low fitness and propagates those with high fitness. When the mutation rate is small, this process resembles a simple Markov birth process that converges to the target.
5. Degree of Mutation. As seen in Figure 3, the degree of mutation for ev must be tuned to a band of workable values.
A critic might claim that some of these items represent a proper modeling of Darwinian evolution. However, the way that ev uses these processes is unlike Darwinian evolution. For example, in (1), we see that the program's use of a "perceptron" causes the output to be highly biased towards matching the target. It's a way of cheating to ensure the program reaches its target sequence. Likewise, in (2) and (4), the program can effectively look ahead and march in the right direction towards the target, whereas unguided Darwinian evolution would have no "look ahead" capability. The active information in the Hamming Oracle makes a sharp contrast with the evolution of real binding sites where there may be no binding capability until multiple mutations are fixed.
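To make the "warmer/colder" contrast concrete, here is a minimal Python sketch of our own (not code from the paper) comparing a blind search with a search assisted by a Hamming oracle; the eight-bit target is an arbitrary stand-in for a binding site:

```python
import random

random.seed(0)
TARGET = [0, 1, 1, 0, 1, 0, 0, 1]   # arbitrary stand-in for a binding site

def hamming(s):
    """The oracle: number of mismatches with the target ('colder/warmer')."""
    return sum(a != b for a, b in zip(s, TARGET))

# Blind search: random guessing with no feedback at all.
blind_tries = 0
while True:
    blind_tries += 1
    if [random.randint(0, 1) for _ in TARGET] == TARGET:
        break

# Oracle-assisted search: keep a mutation only if the oracle says "warmer".
s, assisted_tries = [0] * len(TARGET), 0
while s != TARGET:
    assisted_tries += 1
    t = s[:]
    t[random.randrange(len(t))] ^= 1    # flip one random bit
    if hamming(t) <= hamming(s):        # oracle feedback = active information
        s = t

print(f"blind: {blind_tries} tries, oracle-assisted: {assisted_tries} tries")
```

The assisted search succeeds in a few dozen tries where the blind search needs hundreds on average, and the gap widens exponentially with target length; the oracle's mismatch count is exactly the kind of external knowledge the paper calls active information.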
Mutation and selection are not the causes of success in these genetic algorithms. Yes, random mutation occurs and yes, there is selection. But selection is performed by a fitness function that is encoded by the programmer. And in programs like ev, the programmer intentionally shapes the fitness function to be amenable to stepwise Darwinian evolution. This effectively assumes the truth of Darwinian evolution. But in the real world of biology, fitness functions might look very different: there might be lonely islands of function in a vast sea of nonfunctional sequences. Indeed, if one uses a randomized fitness function, the search performs poorly and might not even outperform a blind search.
Thus choosing the right fitness function (from the set of possible fitness functions) requires as much or more information than choosing the right string from the set of possible strings in your search space. The fitness function itself is an information-rich structure. The program starts with this information-rich fitness function, and then produces something much less information rich -- the target sequence. And as the paper shows, ev does this in a relatively inefficient way: using the same information-rich fitness function, you can find the target 700 times more efficiently than by using simple single-agent stochastic hill climbing. Active information is smuggled into the fitness function. Rather than showing that information can arise by Darwinian evolution, ev shows that intelligence is required.
This paper by leading ID theorists William Dembski and Robert Marks argues that without information about a target, anything greater than a trivial search is bound to fail: "Needle-in-the-haystack problems look for small targets in large spaces. In such cases, blind search stands no hope of success." They cite No Free Lunch theorems, according to which any search technique will work, on average, as well as a blind search. In such a case, "Success requires an assisted search. But whence the assistance required for a search to be successful?" Dembski and Marks thus argue that successful searches do not emerge spontaneously but need themselves to be discovered via a search. However, without information about the target, the search for a search itself is still no better than a blind search: "We prove two results: (1) The Horizontal No Free Lunch Theorem, which shows that average relative performance of searches never exceeds unassisted or blind searches, and (2) The Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought." The implication, of course, is that without the ultimate input from an intelligent agent -- active information -- such searches will fail.
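For reference, the information measures being invoked can be written compactly. These definitions follow, as we read it, Dembski and Marks's companion paper "Conservation of Information in Search," with p the success probability of an unassisted (blind) search and q that of the assisted search:

$$ I_\Omega = -\log_2 p, \qquad I_S = -\log_2 q, \qquad I_+ = I_\Omega - I_S = \log_2 \frac{q}{p}, $$

where $I_\Omega$ is the endogenous information (the difficulty of the blind search), $I_S$ the exogenous information (the difficulty remaining once assistance is supplied), and $I_+$ the active information the assistance contributes. The Vertical No Free Lunch Theorem then says, roughly, that finding a search achieving at least $I_+$ bits of active information is itself exponentially hard in $I_+$.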
This paper by Biologic Institute director Douglas Axe argues that amino-acid sequences that produce functional protein folds are too rare to be discovered by the trial-and-error processes of Darwinian evolution. It begins by observing that when the genetic code was first discovered, "The code had made it clear that the vast set of possible proteins that could conceivably be constructed by genetic mutations is far too large to have actually been sampled to any significant extent in the history of life. Yet how could the highly incomplete sampling that has occurred have been so successful? How could it have located the impressive array of protein functions required for life in all its forms, or the comparably impressive array of protein structures that perform those functions? This concern was raised repeatedly in the early days of the genetic code [14], but it received little attention from the biological community." After reviewing the problem, Axe concludes that "[w]ith no discernable shortcut to new protein folds, we conclude that the sampling problem really is a problem for evolutionary accounts of their origins." He argues that "a search mechanism unable to locate a small patch on a grain of level-14 sand is not apt to provide the explanation of fold origins that we seek. Clearly, if this conclusion is correct it calls for a serious rethink of how we explain protein origins, and that means a rethink of biological origins as a whole."
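A rough feel for the scale of the sampling problem can be had with one line of arithmetic. The rarity figure below is the oft-cited estimate from Axe's separate 2004 Journal of Molecular Biology paper, and the number of trials is a deliberately generous assumption of ours, not a value from this paper:

```python
from math import log10

functional_fraction = 1e-77  # rarity estimate from Axe's 2004 JMB paper (assumed here)
trials = 1e40                # generous assumed count of sequences sampled in life's history

expected_hits = functional_fraction * trials
print(f"expected functional folds found by blind sampling: 10^{log10(expected_hits):.0f}")
# prints 10^-37 -- effectively zero, which is the "sampling problem" at issue
```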
This paper continues the peer-reviewed work co-published by William Dembski, Robert Marks, and others affiliated with the Evolutionary Informatics Lab. Here, the authors argue that Richard Dawkins's METHINKS IT IS LIKE A WEASEL evolutionary algorithm starts off with large amounts of active information -- that is, information intelligently inserted by the programmer to aid the search. This paper covers all of the known claims of operation of the WEASEL algorithm and shows that in all cases, active information is used. Dawkins's algorithm can best be understood as using a Hamming oracle as follows: when a sequence of letters is presented to a Hamming oracle, the oracle responds with the Hamming distance, equal to the number of letter mismatches in the sequence. The authors find that this form of a search is very efficient at finding its target -- but only because it is preprogrammed with large amounts of active information needed to quickly find the target. This preprogrammed active information makes it far removed from a true Darwinian evolutionary search algorithm. An online toolkit of programs called Weasel Ware accompanies the paper and can be found at http://evoinfo.org/weasel.
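As an illustration of how such an algorithm works, here is a minimal WEASEL-style search in Python. It is our own reconstruction, not the paper's code or Dawkins's: the brood size and mutation rate are arbitrary assumptions, and the parent is kept in the selection pool so progress is never lost:

```python
import random
import string

random.seed(1)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def hamming(s):
    """The Hamming oracle: count of letter mismatches with the target."""
    return sum(a != b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy a string, replacing each letter with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    generations += 1
    # The oracle ranks the brood; its feedback is the "active information".
    brood = [mutate(parent) for _ in range(100)] + [parent]
    parent = min(brood, key=hamming)
print(f"reached the target in {generations} generations")
```

A blind search over the same space of 27^28 strings would be hopeless; the search succeeds in well under a hundred generations precisely because the oracle's per-letter feedback is built in from the start, which is the authors' point.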
This article explains that the organization of matter in life requires non-material causes such as "mental choice of tokens (physical symbol vehicles) in a material symbol system," which then "instantiates non-physical formal Prescriptive Information (PI) into physicality." It also acknowledges that life is fundamentally based upon information: "Life, on the other hand, is highly informational. Metabolic organization and control is highly programmed. Life is marked by the integration of large numbers of computational solutions into one holistic metasystem. No as-of-yet undiscovered law will ever be able to explain the highly informational organization of living organisms." The article explains that "choice contingency" is a concept where the outcome is determined by the choice of an intelligent agent:
Whereas chance contingency cannot cause any physical effects, choice contingency can. But choice contingency, like chance contingency, is formal, not physical. So how could non-physical choice contingency possibly become a cause of physical effects? The answer lies in our ability to instantiate formal choices into physical media. As we shall see below, formal choices can be represented and recorded into physicality using purposefully chosen physical symbol vehicles in an arbitrarily assigned material symbol system. Choices can also be recorded through the setting of configurable switches. Configurable switches are physicodynamically indeterminate (inert; decoupled from and incoherent with physicodynamic causation). This means that physicodynamics plays no role in how the switch is set. Physicodynamic factors are equal in the flipping of a binary switch regardless of which option is formally chosen. Configurable switches represent decision nodes and logic gates. They are set according to arbitrary rules, not laws. Here arbitrary does not mean random. Arbitrary means not physicodynamically determined. Rules are not constrained by physical nature. Arbitrary means freely selectable -- choice contingent.
Only an intelligent cause -- an agent -- could implement such choice contingency. The article further explains that physical constraints are not what govern life, but rather choice controls, which cannot be explained by metaphysical naturalism:
Volition (choice contingency) is every bit as repeatedly observable, predictable (given any form of true organization), and as potentially falsifiable as any naturalistic hypothesis. Volition and control are no more metaphysical than acceleration, wave/particle duality, weight, height, quarks, and light. We cannot label volition and control metaphysical, and quantum mechanics and statistical mechanics physical. Mathematics and the scientific method themselves are non-physical. Volitional controls (as opposed to mere constraints) are a fact of objective reality. If this fact does not fit within the perimeter of our prized lifelong worldview, perhaps it is time to open our minds and reconsider the purely metaphysical presuppositions that shaped that inadequate worldview. Philosophic naturalism cannot empirically or logically generate organizational bona fide controls. It can only generate self-ordering, low-informational, unimaginative constraints with no formal cybernetic capabilities. Metaphysical naturalism is too small a perimeter to contain all of the pieces. Naturalism is too inadequate a metanarrative to be able to incorporate all of the observable scientific data.
The article concludes that the formalisms we see in life arise only in the minds of agents.
This paper studies the genetic code, observing that "[n]ucleotides function as physical symbol vehicles in a material symbol system." But it argues that teleology is necessary to explain the choice controls in such systems: "The challenge of finding a natural mechanism for linear digital programming extends from primordial genetics into the much larger realm of semantics and semiotics in general." Says Barham: "The main challenge for information science is to naturalize the semantic content of information. This can only be achieved in the context of a naturalized teleology (by teleology is meant the coherence and the coordination of the physical forces which constitute the living state)." The alternative term "teleonomy" has been used to attribute to natural process the appearance of teleology (43-45). Either way, the bottom line of such phenomena is selection for higher function at the logic gate programming level. The article explains why natural selection is inadequate to explain many features we observe in biology, and why instead we require a cause that can anticipate function: "Programming selections at successive decision nodes requires anticipation of what selections and what sequences would be functional. Selection must be for potential function. Nature cannot anticipate, let alone plan or pursue formal function. Natural selection can only preserve the fittest already-existing holistic life."
This peer-reviewed scientific paper argues that we live in an "engineered world." It observes that "Human-engineered systems are characterized by stability, predictability, reliability, transparency, controllability, efficiency, and (ideally) optimality. These features are also prevalent throughout the natural systems that make up the cosmos. However, the level of engineering appears to be far above and beyond, or transcendent of, current human capabilities." The paper cites the fine-tuning of the universe for life, such as the special properties of water, the prevalence of elements needed for life (e.g. hydrogen, oxygen, and carbon), the expansion rate of the universe, as well as the Galactic Habitable Zone, a concept developed by Discovery Institute senior fellow Guillermo Gonzalez:
On the universal scale, however, one can see that our planet is in a comparatively narrow region of space known as the "Galactic Habitable Zone." This zone allows for the right surface temperature, stable climate, metallicity, ability to hold liquid water, and many other conditions necessary for life. There is no practical reason why the universe has to contain life, but the fact that it does gives great importance to this zone for the benefit of our existence.
The authors then explain Gonzalez and Jay Richards's Privileged Planet argument:
Not only does this zone satisfy the requirements of life but also it endowed humans with a prime position to view the wonders of the universe. There are many qualities that make the earth an excellent place from which to study the universe. First of all is the transparency of the atmosphere. Our atmosphere admits the radiation necessary for life while blocking most of its lethal energy. This transparency also allows humans to see into space without the distortions caused by a thick atmosphere as would be the case on Venus. Secondly, the regularity of our solar system's orbits makes time calculation of planetary events more predictable, even allowing for estimations of planetary orbits millions of years ago. Finally, the gas and dust in our region of the Milky Way are diffuse compared to other regions in the local mid-plane. This allows humans to view 80% of the universe without blockage. If our solar system was moved farther away, perpendicularly to the mid-plane, we would be able to see the other 20%. However, this would cause a large percentage of our current view to be blocked by dust as well as the luminosity of stars in close proximity. Humanity's place in the universe is amazingly unique when it comes to discovery. Planet earth is in prime position for the gleaning of knowledge from the stars.
The paper also focuses on fine-tuning in biology as evidence of biological design, citing the work of a variety of noteworthy proponents of intelligent design, including Walter Bradley, Michael Behe, Jonathan Wells, and William Dembski. The paper examines the engineering of life, noting that "[b]iological systems are constantly undergoing processes that exhibit modularity, specificity, adaptability, durability, and many other aspects of engineered systems." It quotes from William Dembski and Jonathan Wells's book The Design of Life, stating: "Many of the systems inside the cell represent nanotechnology at a scale and sophistication that dwarfs human engineering. Moreover, our ability to understand the structure and function of these systems depends directly on our facility with engineering principles." The authors further cite the work of Michael Behe, such as Darwin's Black Box and The Edge of Evolution, explaining that biological systems display "irreducible complexity" which requires a goal-directed process or "'bottom up-top down' design". After examining the engineering of our universe from the macro- to microscopic scales, they conclude: "An interdisciplinary study of the cosmos suggests that a transcendently engineered world may be the most coherent explanation for the reality we experience as human beings."
Ewert, Dembski, and Marks focus on this third point, noting that, The importance of stair step active information is evident from the inability to generate a single EQU [the target function] in Avida without using it. They ask, What happens when no stair step active information is applied? and note what the original authors of the Avida paper in Nature reveal:
At the other extreme, 50 populations evolved in an environment where only EQU was rewarded, and no simpler function yielded energy. We expected that EQU would evolve much less often because selection would not preserve the simpler functions that provide foundations to build more complex features. Indeed, none of these populations evolved EQU, a highly significant difference from the fraction that did so in the reward-all environment.
But does real biology reward mutations to the extent that Avida does? The passage quoted above shows that when Avida is calibrated to model actual biology -- where many changes may be necessary before there is any beneficial function to select for (irreducible complexity) -- none of these populations evolved the target function. Avida's creators trumpet its success, but Ewert, Dembski, and Marks show that Avida uses stair step active information by rewarding forms of digital mutations that are pre-programmed to yield the desired outcome. It does not model true Darwinian evolution, which is blind to future outcomes and cannot use active information. The implications may be unsettling for proponents of neo-Darwinian theory: Not only is Darwinian evolution on average no better than blind search, but Avida is rigged by its programmers to succeed, showing that intelligence is in fact necessary to generate complex biological features. An online toolkit of programs called Mini Vida accompanies the paper and can be found at http://evoinfo.org/minivida.
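The stair-step point can be made concrete with a toy experiment (a sketch only, far simpler than Avida): the very same mutation-and-selection loop finds a 32-bit target quickly when every correct bit is rewarded, and effectively never when only the finished target earns any reward.

    import random

    # Toy illustration only -- not Avida. A 32-bit "genome" mutates at random;
    # with a stair step reward every correct bit counts, so selection can climb,
    # while the all-or-nothing reward gives selection nothing to see until the
    # complete target appears (1 chance in 2**32 per attempt).
    N = 32
    TARGET = [1] * N

    def evolve(reward_partial, max_gens=20000):
        genome = [random.randint(0, 1) for _ in range(N)]
        for gen in range(max_gens):
            if genome == TARGET:
                return gen
            child = [b ^ (random.random() < 0.02) for b in genome]  # ~2% bit flips
            if reward_partial:
                if sum(child) >= sum(genome):   # partial credit for each correct bit
                    genome = child
            elif child == TARGET:               # reward only the finished function
                genome = child
        return None

    print(evolve(True))    # typically succeeds within a few hundred generations
    print(evolve(False))   # None: no gradient, the search stays blind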
In his 2001 book No Free Lunch, William Dembski argued that Darwinian evolutionary searches cannot produce new complex and specified information, and that information found by Darwinian searches actually reflects information smuggled in by an intelligence external to the search. This peer-reviewed paper, co-written with Robert J. Marks, furthers Dembski's arguments, contending that in all searches -- including Darwinian ones -- information is conserved such that on average no search outperforms any other. The implication of their principle of Conservation of Information (COI) is that Darwinian evolution, at base, is actually no better than a random search. To make their argument, the paper develops a methodology for measuring the information smuggled into a search algorithm by intelligence. Endogenous Information (IΩ) represents the difficulty a search faces in finding its target with no prior information about its location. Active Information (I+) is the amount of information smuggled in by intelligence to aid the search algorithm in finding its target. Exogenous Information (IS) then measures the difficulty that remains in finding the target after the addition of Active Information. Thus, I+ = IΩ - IS. Having laid this theoretical groundwork, Dembski and Marks begin to apply their ideas to evolutionary algorithms which claim to produce new information. They argue that computer simulations often do not properly model truly unguided Darwinian evolution: COI has led to the formulation of active information as a measure that needs to be introduced to render an evolutionary search successful. Like an athlete on steroids, many such programs are doctored, intentionally or not, to succeed, and thus COI puts to rest the inflated claims for the information generating power of evolutionary simulations such as Avida and ev. They conclude that when trying to generate new complex and specified information, in biology, as in computing, there is no free lunch, and therefore some assistance from intelligence is required to help Darwinian evolution find unlikely targets in search space.
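Their bookkeeping can be illustrated with made-up numbers (purely illustrative; these figures are not from the paper). A blind search for one specific 40-bit string succeeds with probability 2^-40 per query; if an assisted search succeeds with probability 2^-10, the assistance is worth 30 bits:

    import math

    # Illustrative numbers only, not figures from the paper.
    p = 2.0 ** -40            # blind search success probability
    q = 2.0 ** -10            # assisted search success probability
    I_omega = -math.log2(p)   # endogenous information: 40 bits
    I_S = -math.log2(q)       # exogenous information remaining: 10 bits
    I_plus = I_omega - I_S    # active information contributed: 30 bits
    print(I_omega, I_S, I_plus)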
This peer-reviewed article by William A. Dembski and Robert J. Marks II challenges the ability of Darwinian processes to create new functional genetic information. Darwinian evolution is, at its heart, a search algorithm that uses a trial and error process of random mutation and unguided natural selection to find genotypes (i.e., DNA sequences) that lead to phenotypes (i.e., biomolecules and body plans) that have high fitness (i.e., foster survival and reproduction). Dembski and Marks's article explains that unless you start with some information about where peaks in a fitness landscape may lie, any search -- including a Darwinian one -- is on average no better than a random search. After assessing various examples of evolutionary searches, Dembski and Marks show that attempts to model Darwinian evolution via computer simulations, such as Richard Dawkins's famous "METHINKS IT IS LIKE A WEASEL" exercise, start off with, as Dembski and Marks put it, "problem-specific information about the search target or the search-space structure." According to the paper, such simulations only reach their evolutionary targets because there is pre-specified "accurate information to guide them," or what they call "active information." The implication, of course, is that some intelligent programmer is required to front-load a search with active information if the search is to successfully find rare functional genetic sequences. They conclude that "Active information is clearly required in even modestly sized searches." This paper is in many ways a validation of some of Dembski's core ideas in his 2001 book No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence, which argued that some intelligent input is required to produce novel complex and specified information. Dembski has written about this article at Uncommon Descent, explaining how it supports ID: Our critics will immediately say that this really isn't a pro-ID article but that it's about something else (I've seen this line now for over a decade once work on ID started encroaching into peer-review territory). Before you believe this, have a look at the article. In it we critique, for instance, Richard Dawkins's METHINKS*IT*IS*LIKE*A*WEASEL (p. 1055). Question: When Dawkins introduced this example, was he arguing pro-Darwinism? Yes he was. In critiquing his example and arguing that information is not created by unguided evolutionary processes, we are indeed making an argument that supports ID.
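For reference, a WEASEL-style search of the kind being critiqued looks roughly like the sketch below (an illustration, not Dawkins's original program). Note that the fitness function is built directly from the target string -- precisely the "problem-specific information about the search target" that Dembski and Marks measure.

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def fitness(s):
        # Fitness is closeness to the target itself: the target is wired in.
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        brood = [mutate(parent) for _ in range(100)] + [parent]
        parent = max(brood, key=fitness)
        generations += 1
    print(generations)  # typically converges within a few hundred generations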
Materialists often vaguely appeal to vast periods of time and boundless probabilistic resources in the universe to make their scenarios sound plausible. But is mere possibility sufficient justification to assert scientific plausibility? This peer-reviewed article in Theoretical Biology and Medical Modelling answers that question, arguing that [m]ere possibility is not an adequate basis for asserting scientific plausibility because [a] precisely defined universal bound is needed beyond which the assertion of plausibility, particularly in life-origin models, can be considered operationally falsified. The paper observes that Combinatorial imaginings and hypothetical scenarios can be endlessly argued simply on the grounds that they are theoretically possible, but then argues that the unwillingness of materialists to consider certain origin of life models to be false is actually stopping the progress of science, since at some point our reluctance to exclude any possibility becomes stultifying to operational science. The paper observes that Just because a hypothesis is possible should not grant that hypothesis scientific respectability, an important rejoinder to materialists who propose speculative stories about self-organization or co-option to explain the origin of biological complexity. The author then rigorously calculates the Universal Plausibility Metric (UPM), incorporating the maximum probabilistic resources available for the universe, galaxy, solar system, and the earth:
cΩu = Universe = 10^13 reactions/sec × 10^17 secs × 10^78 atoms = 10^108
cΩg = Galaxy = 10^13 × 10^17 × 10^66 = 10^96
cΩs = Solar System = 10^13 × 10^17 × 10^55 = 10^85
cΩe = Earth = 10^13 × 10^17 × 10^40 = 10^70
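The arithmetic behind these bounds is simple exponent addition and can be checked mechanically (a sketch using the quoted values):

    # The quoted probabilistic resources are the product of three factors:
    # fastest reaction rate (10^13 per second), age of the universe
    # (10^17 seconds), and the number of atoms available in each scope.
    rate, age = 10**13, 10**17
    atoms = {"universe": 10**78, "galaxy": 10**66,
             "solar system": 10**55, "earth": 10**40}
    for scope, n in atoms.items():
        exponent = len(str(rate * age * n)) - 1
        print(scope, "10^%d" % exponent)   # 108, 96, 85, 70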
The author concludes that consideration of Universal Plausibility Metrics allows for falsification of speculative origin of life scenarios: The application of The Universal Plausibility Principle (UPP) precludes the inclusion in scientific literature of wild metaphysical conjectures that conveniently ignore or illegitimately inflate probabilistic resources to beyond the limits of observational science. When hypotheses require probabilistic resources that exceed these metrics, the author argues that they should be considered not only operationally falsified hypotheses, but bad metaphysics on a plane equivalent to blind faith and superstition. It concludes that the complexity we see in life requires an agent-based cause that can make choices: Symbol systems and configurable switch-settings can only be programmed with choice contingency, not chance contingency or fixed law, if non-trivial coordination and formal organization are expected.
This paper seeks to address the question, If all known life depends upon genetic instructions, how was the first linear digital prescriptive genetic information generated by natural process? The author warns materialists of what would be needed to answer the challenge posed by intelligent design: To stem the growing swell of Intelligent Design intrusions, it is imperative that we provide stand-alone natural process evidence of non-trivial self-organization at the edge of chaos. We must demonstrate on sound scientific grounds the formal capabilities of naturally occurring physicodynamic complexity. However, while the author notes that much effort has been spent arguing to the lay community that we have proved the current biological paradigm, he concludes that the actual evidence for self-organization is sorely lacking and has been inflated. The author emphasizes a distinction between order and organization, arguing that self-ordered structures like whirlpools are readily constructed by natural processes, but have never been observed to achieve 1) programming, 2) computational halting, 3) creative engineering, 4) symbol systems, 5) language, or 6) bona fide organization -- all hallmarks of living organisms. In contrast, living organisms are built upon programming and are highly organized, but physicodynamics alone cannot organize itself into formally functional systems requiring algorithmic optimization, computational halting, and circuit integration. His solution offers a positive argument for design: No known natural process exists that spontaneously writes meaningful or functional syntax. Only agents have been known to write meaningful and pragmatic syntax. He notes that the kind of sophisticated formal function found in life consistently requires regulation and control, but Control always emanates from choice contingency and intentionality, not from spontaneous molecular chaos.
This article explains that classical measures of information, such as Shannon Information, are inadequate to explain biological function, suggesting that functional biological information be measured as prescriptive information (PI). It argues that the choice of an intelligent agent is necessary to produce PI: "PI arises from expedient choice commitments at bona fide decision nodes. Such decisions steer events toward pragmatic results that are valued by agents. Empirical evidence of PI arising spontaneously from inanimate nature is sorely lacking. Neither chance nor necessity has been shown to generate prescriptive information (Trevors and Abel 2004). Choice contingency, not chance contingency, prescribes non-trivial function." According to the article, agent choice is required to generate the formalisms found in living organisms: Formalisms of all kinds involve abstract ideas and agent-mediated purposeful choices. Inanimate physics and chemistry have never been shown to generate life or formal choice-based systems.
This paper expressly endorses intelligent design (ID) after exploring a key question in ID thinking. The ultimate question in origins must be: Can information increase in a purely materialistic or naturalistic way? It is not satisfactory to simply assume that information has to have arisen in this way. The alternative of original design must be allowed and all options examined carefully. A professor of thermodynamics and combustion theory, McIntosh is well acquainted with the workings of machinery. His argument is essentially twofold: (1) First, he defines the term "machine" (a device which locally raises the free energy) and observes that the cell is full of machines. Such machines pose a challenge to neo-Darwinian evolution due to their irreducibly complex nature. (2) Second, he argues that the information in living systems (similar to computer software) uses such machines and in fact requires machines to operate (what good is a program without a computer to run it?). An example is the genome in the DNA molecule. From a thermodynamics perspective, the only way to make sense of this is to understand that the information is non-material and constrains the thermodynamics so that the local matter and energy are in a non-equilibrium state. McIntosh addresses the objection that, thermodynamically speaking, highly organized low entropy structures can be formed at the expense of an increase in entropy elsewhere in the universe. However, he notes that this argument fails when applied to the origin of biological information:
whilst this argument works for structures such as snowflakes that are formed by natural forces, it does not work for genetic information because the information system is composed of machinery which requires precise and non-spontaneous raised free energy levels -- and crystals like snowflakes have zero free energy as the phase transition occurs.
McIntosh then tackles the predominant reductionist view of biological information which "regards the coding and language of DNA as essentially a phenomenon of the physics and chemistry of the nucleotides themselves." He argues that this classical view is wrong, for "biological structures contain coded instructions which ... are not defined by the matter and energy of the molecules carrying this information." According to McIntosh, Shannon information is not a good measure of biological information since it is "largely not relevant to functional information at the phenotype level." In his view, "[t]o consider biological information as simply a 'by product' of natural selective forces operating on random mutations is not only counter-intuitive, but scientifically wrong." According to McIntosh, one major reason for this is "the irreducibly complex nature of the machinery involved in creating the DNA/mRNA/ribosome/amino acid/protein/DNA-polymerase connections." He continues:
All of these functioning parts are needed to make the basic forms of living cells to work. ... This, it may be argued, is a repeat of the irreducible complexity argument of Behe [67], and many think that that debate has been settled by the work of Pallen and Matzke [68] where an attempt to explain the origin of the bacterial flagellum rotary motor as a development of the Type 3 secretory system has been made. However, this argument is not robust simply because it is evident that there are features of both mechanisms which are clearly not within the genetic framework of the other. That is, the evidence, far from pointing to one being the ancestor of the other, actually points to them both being irreducibly complex. In the view of the author this argument is still a very powerful one.
Further citing Signature in the Cell, McIntosh states: What is evident is that the initial information content in DNA and living proteins rather than being small must in fact be large, and is in fact vital for any process to work to begin with. The issue of functional complexity and information is considered exhaustively by Meyer [93, 94] who argues that the neo-Darwinist model cannot explain all the appearances of design in biology. So how do biological systems achieve their highly ordered, low-entropy states? McIntosh's argument is complementary to Stephen Meyer's, but it takes a more thermodynamic approach. According to McIntosh, information is what allows biological systems to attain their high degrees of order: the presence of information is the cause of lowered logical entropy in a given system, rather than the consequence. In living systems the principle is always that the information is transcendent to, but using raised free energy chemical bonding sites. McIntosh solves the problem of the origin of information by arguing that it must arise in a "top-down" fashion requiring the input of intelligence:
[T]here is a perfectly consistent view which is a top-down approach where biological information already present in the phenotypic creature (and not emergent as claimed in the traditional bottom-up approach) constrains the system of matter and energy constituting the living entity to follow intricate non-equilibrium chemical pathways. These pathways whilst obeying all the laws of thermodynamics are constantly supporting the coded software which is present within ... Without the addition of outside intelligence, raw matter and energy will not produce auto-organization and machinery. This latter assertion is actually repeatedly borne out by experimental observation -- new machinery requires intelligence. And intelligence in biological systems is from the non-material instructions of DNA.
This thinking can be applied to DNA: since "the basic coding is the cause (and thus reflects an initial purpose) rather than the consequence, [the top-down approach] gives a much better paradigm for understanding the molecular machinery which is now consistent with known thermodynamic principles." McIntosh explains that the low-entropy state of biological systems is the result of the workings of machines, which must be built by intelligence: It has often been asserted that the logical entropy of a non-isolated system could reduce, and thereby new information could occur at the expense of increasing entropy elsewhere, and without the involvement of intelligence. In this paper, we have sought to refute this claim on the basis that this is not a sufficient condition to achieve a rise in local order. One always needs a machine in place to make use of an influx of new energy and a new machine inevitably involves the systematic raising of free energies for such machines to work. Intelligence is a pre-requisite. He concludes his paper with an express endorsement of intelligent design: "the implication of this paper is that it supports the so-called intelligent design thesis -- that an intelligent designer is needed to put the information into the biological system."
In this peer-reviewed paper, Leeds University professor Andy McIntosh argues that two systems vital to bird flight -- feathers and the avian respiratory system -- exhibit irreducible complexity. The paper describes these systems using the exact sort of definitions that Michael Behe uses to describe irreducible complexity:
[F]unctional systems, in order to operate as working machines, must have all the required parts in place in order to be effective. If one part is missing, then the whole system is useless. The inference of design is the most natural step when presented with evidence such as in this paper, that is evidence concerning avian feathers and respiration.
Regarding the structure of feathers, he argues that they require many features to be present in order to properly function and allow flight:
[I]t is not sufficient to simply have barbules to appear from the barbs but that opposing barbules must have opposite characteristics -- that is, hooks on one side of the barb and ridges on the other so that adjacent barbs become attached by hooked barbules from one barb attaching themselves to ridged barbules from the next barb (Fig. 4). It may well be that as Yu et al. [18] suggested, a critical protein is indeed present in such living systems (birds) which have feathers in order to form feather branching, but that does not solve the arrangement issue concerning left-handed and right-handed barbules. It is that vital network of barbules which is necessarily a function of the encoded information (software) in the genes. Functional information is vital to such systems.
He further notes that many evolutionary authors look for evidence that true feathers developed first in small non-flying dinosaurs before the advent of flight, possibly as a means of increasing insulation for the warm-blooded species that were emerging. However, he finds that when it comes to fossil evidence for the evolution of feathers, None of the fossil evidence shows any evidence of such transitions.
Regarding the avian respiratory system, McIntosh contends that a functional transition from a purported reptilian respiratory system to the avian design would lead to non-functional intermediate stages. He quotes John Ruben stating, The earliest stages in the derivation of the avian abdominal air sac system from a diaphragm-ventilating ancestor would have necessitated selection for a diaphragmatic hernia in taxa transitional between theropods and birds. Such a debilitating condition would have immediately compromised the entire pulmonary ventilatory apparatus and seems unlikely to have been of any selective advantage. With such unique constraints in mind, McIntosh argues that even if one does take the fossil evidence as the record of development, the evidence is in fact much more consistent with an ab initio design position -- that the breathing mechanism of birds is in fact the product of intelligent design.
McIntosh's paper argues that science must remain at least open to the possibility of detecting design in nature, since to deny the possibility of the involvement of external intelligence is effectively an assumption in the religious category. Since feathers and the avian respiratory system exhibit irreducible complexity, he expressly argues that science must consider the design hypothesis:
As examples of irreducible complexity, they show that natural systems have intricate machinery which does not arise in a bottom up approach, whereby some natural selective method of gaining small-scale changes could give the intermediary creature some advantage. This will not work since, first, there is no advantage unless all the parts of the new machine are available together and, second, in the case of the avian lung the intermediary creature would not be able to breathe, and there is little selective advantage if the creature is no longer alive. As stated in the introduction, the possibility of an intelligent cause is both a valid scientific assumption, and borne out by the evidence itself.
This article tries to explain how scientists can produce artificial intelligence and bridge the cybernetic cut -- from programmed reactions to real choices. It thus states: How did inanimate nature give rise to an algorithmically organized, semiotic and cybernetic life? Both the practice of physics and life itself require traversing not only an epistemic cut, but a Cybernetic Cut. A fundamental dichotomy of reality is delineated. The dynamics of physicality (chance and necessity) lie on one side. On the other side lies the ability to choose with intent what aspects of ontological being will be preferred, pursued, selected, rearranged, integrated, organized, preserved, and used (cybernetic formalism). The article contends that choice contingency is necessary to produce functional biological life forms, for: Choice contingency, on the other hand, involves purposeful selection from among real options. Unlike chance contingency, with choice contingency an internalized goal motivates each selection. The paper further notes that The capabilities of chance contingency are often greatly inflated, suggesting that agent steerage is necessary to explain biological features. According to the paper Purposeful choices are needed to achieve sophisticated formal utility. The chance and/or necessity of physicodynamics alone have never been observed to generate a nontrivial formal control system.
This article by pro-ID evolutionary biologist Richard Sternberg compares the information processing ability of the cell to computer programming. Sternberg observes that non-physical symbols and codes underlie biology, stating that There are no chemical constraints or laws that explain the 64-to-20 mapping of codons to amino acids and stop sites -- the relations are arbitrary with respect to the molecular components in the sense that mappings can be reassigned. According to Sternberg, the genetic code is like computer codes in that it contains the following properties: Redundancy, Error dampening capability, Symbolic and semantic flexibility, Output versatility, Multiple realizability, and Text editing. There is also a computer-like form of recursivity in molecular biology, as a protein product can in turn be part of the transcriptional, RNA processing, or translational apparatus -- even binding to its own mRNA. He explains the interdependent nature of DNA and other biomolecules, stating Any DNA code is but the domain of a larger system; the larger system in turn depends on DNA codes (at least in part). The author's conclusion is that the workings of biology, fundamentally, are not reducible to material molecules but rather reside in information, symbols, and sets of mathematically logical rules: The mathematical structures that proteins (and RNAs!) are the result of are not in a gene. Instead, the DNA sequence is the material platform for the symbol strings that allow information to be accessed. In this sense, then, DNA is less than its Central Dogma interpretation because it is not ontically informational. Yet DNA enables many more code systems tha[n] commonly acknowledged and in this way is more than just a collection of codons.
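The 64-to-20 mapping Sternberg describes is easy to exhibit (a minimal sketch; the 64-character string below is the standard genetic code in the usual NCBI ordering, codons TTT, TTC, TTA, TTG, TCT, ...):

    from itertools import product

    bases = "TCAG"
    aas = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
    codon_table = {"".join(c): a for c, a in zip(product(bases, repeat=3), aas)}

    print(len(codon_table))                        # 64 codons
    print(len(set(codon_table.values()) - {"*"}))  # 20 amino acids (* = stop)
    # The map is many-to-one ("redundancy"): six distinct codons encode leucine,
    # and nothing in the chemistry of the table itself forbids reassigning them.
    print(sorted(c for c, a in codon_table.items() if a == "L"))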
Computer simulations of evolution such as Avida have been widely touted as having refuted intelligent design. But close scrutiny of these simulations reveals that they do not model true Darwinian processes because they are essentially pre-programmed to evolve complex systems. This peer-reviewed paper by ID-proponents attempts to present a computer simulation that fixes these defects by modeling Darwinian evolution in a biologically accurate manner, superior to that used by other evolutionary simulations such as Avida.
This striking paper supports intelligent design advocates who view life as being front-loaded to allow for biological evolution. For example, the paper states, This model has two major predictions, first that a significant fraction of genetic information in lower taxons must be functionally useless but becomes useful in higher taxons, and second that one should be able to turn on in lower taxons some of the complex latent developmental programs, e.g., a program of eye development or antibody synthesis in sea urchin. In other words, lower taxa somehow have the genetic tools to produce systems that they do not have, but that do exist in higher taxa. As the article states: "Genes that are seemingly useless in sea urchin but are very useful in higher taxons exemplify excessive genetic information in lower taxons. It is unclear how such genetic complexity could have evolved." When discussing the convergent use of pax-6 in widely diverse organisms, it states: "So, how does it happen that convergently evolved systems have the same developmental switches? These findings are very difficult to explain within the context of Darwinian ideas." The author proposes the hypothesis that "a Universal Genome that encodes all major developmental programs essential for every phylum of Metazoa emerged in a unicellular or a primitive multicellular organism" sometime before the Cambrian. This common ancestor then lost much genetic information in many lineages: The proposed model of a Universal Genome implies that a lot of information encoded in genomes is not utilized in each individual taxon, and therefore is effectively useless. The article suggests that microevolution is at work, but that Darwinian macroevolution cannot be credited with major innovations: Furthermore, genetic evolution in combination with natural selection could define microevolution, however, within this model it is not responsible for the emergence of the major developmental programs. This is an evolutionary model, but it challenges the sort of unguided and random evolution inherent to neo-Darwinism, and supports an intelligent design model.
This article devises a method of measuring the functional sequence complexity of proteins, which in turn permits distinguishing between order, randomness, and biological function. The authors suggest that If genes can be thought of as information processing subroutines, then proteins can be analyzed in terms of the products of information interacting with laws of physics. The metric of functional sequence complexity advanced by these authors is highly similar to the notion of complex and specified information.
This 2007 chapter on carnivorous plants by Lönnig and Becker in the John Wiley & Sons volume Handbook of Plant Sciences notes that it appears to be hard even to imagine the clearcut selective advantages for all the thousands of postulated intermediate steps in a gradual scenario, not to mention the formulation and examination of scientific (i.e. testable) hypotheses for the origin of the complex carnivorous plant structures examined above. They go on to favorably cite the work of Michael Behe, stating:
The reader is further invited to consider the following problem. Charles Darwin provided a sufficiency test for his theory (1859, p. 219): "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down." Darwin, however, stated that he could "not find such a case." Biochemist Michael J. Behe (1996, p. 39) has refined Darwin's statement by introducing and defining his concept of "irreducible complexity", specifying: "By irreducibly complex I mean a single system composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning." Some biologists believe the trap mechanism(s) of Utricularia and several other carnivorous plant genera (Dionaea, Aldrovanda, Genlisea) come at least very near to "such a case" of irreducible complexity.
This article suggests that intelligent mind is responsible for the complexity of life, stating: In computer science, only the programmer's mind determines which way the switch knob is pushed. In evolution science we say that environmental selection favors the fittest small groups. But selection is still the key factor, not chance and necessity. If physicodynamics set the switches, the switches would either be set randomly by heat agitation, or they would be set by force relationships and constants. Neither chance nor necessity, nor any combination of the two, can program. Chance produces only noise and junk code. Law would set all of the switches the same way. Configurable switches must be set using choice with intent if computational halting is expected.
This paper (published in two different venues) uses genetic algorithms that are controlled by an intelligent agent based on fuzzy logic and finds that such a method is more efficient than a random search typical of Darwinism. Citing the Intelligent Design and Evolution Awareness (IDEA) Center, it states: The success achieved in the implementation of an intelligent agent controlling the evolutionary process is somewhat similar to the controversial approach of the Intelligent Design Theory [14], which is defended by many scientists as an answer to several aspects that are not well explained by the neo-Darwinist Theory.
This study attempts to trace the evolutionary history of two taxa of flowering plants that evolutionary biologists believe to be closely related. The authors tried to use mutagenesis experiments to cause the plants' traits to revert to a more primitive form, but found that such basic mutagenesis experiments were unable to cause the reversion of the taxa to the primitive state. The authors offer an explanation for their observations that accounts for a long-standing law of evolution and supports the basic tenets of intelligent design: since most new characters arise, not by simple additions but by integration of complex networks of gene functions rendering many systems to be irreducibly complex (Behe 1996, 2004; for a review, see Lönnig 2004), such systems cannot -- in agreement with Dollo's law -- simply revert to the original state without destroying the entire integration pattern guaranteeing the survival of a species. They conclude that, for the rise of these taxa as well as for the inception of irreducible complex systems, the debate continues whether mutations and selection alone will be sufficient to produce all the new genetic functions and innovations necessary for the cytoplasm, membranes, and cell walls. The article favorably cites works from ID-friendly scientists such as Doug Axe's articles in Journal of Molecular Biology; Michael Behe's Darwin's Black Box; Behe and Snoke's 2004 article in Protein Science; David Berlinski's writings in Commentary; William Dembski's books The Design Inference, No Free Lunch, and The Design Revolution; Stephen C. Meyer's article in Proceedings of the Biological Society of Washington, and his work in Darwinism, Design, and Public Education; and also cites pro-ID entries from Debating Design.
Citing Darwin's Black Box and other articles by Michael Behe about irreducible complexity, as well as the work of William Dembski and Stephen Meyer, this article states: all the models and data recently advanced to solve the problem of completely new functional sequences and the origin of new organs and organ systems by random mutations have proved to be grossly insufficient in the eyes of many researchers upon close inspection and careful scientific examination. Citing the work of Meyer, it further notes the limits of the origin of species by mutations.
This article, co-authored by a theoretical biologist and an environmental biologist, explicitly challenges the ability of Darwinian mechanisms or self-organizational models to account for the origin of the language-based chemical code underlying life. They explain that "evolutionary algorithms, neural nets, and cellular automata have not been shown to self-organize spontaneously into nontrivial functions." The authors observe that life, "typically contains large quantities of prescriptive information." They further argue that "[p]rescription requires choice contingency rather than chance contingency or necessity," entailing a necessary appeal to an intelligent cause. Throughout the paper, the authors use positive arguments referencing the creative power of "agents" as they cite the work of Discovery Institute fellows and ID-theorists William Dembski, Charles Thaxton, and Walter Bradley. Critiquing models of self-organization, they conclude that "[t]he only self that can organize its own activities is a living cognitive agent."
This article argues for intelligent design, observing that only intelligence capable of making choices can create the complexity we see in human beings. The authors state: "Neither chance contingency (quantified by Shannon theory) nor any yet-to-be-discovered law of nature can generate selection contingency (Trevors and Abel, 2004). Yet selection contingency is abundantly evident throughout nature." The sort of cause that is needed looks like this: "If the brain's decision nodes were constrained by natural law, our decisions would not be real. If our choices were constrained by chance or necessity, we should stop holding engineers responsible for building collapses, and stop holding criminals responsible for their behavior. Real selection/choice contingency not only predates the existence of human metaphor and heuristic use of analogy, it produced human mentation." According to the authors, Sign systems in human experience arise only from choice contingency at successive decision nodes, not chance contingency or necessity.
In this article, Norwegian scientist Øyvind Albert Voie examines an implication of Gödel's incompleteness theorem for theories about the origin of life. Gödel's first incompleteness theorem states that certain true statements within a formal system are unprovable from the axioms of the formal system. Voie then argues that the information processing system in the cell constitutes a kind of formal system because it "expresses both function and sign systems." As such, by Gödel's theorem it possesses many properties that are not deducible from the axioms which underlie the formal system, in this case, the laws of nature. He cites Michael Polanyi's seminal essay Life's Irreducible Structure in support of this claim. As Polanyi put it, "the structure of life is a set of boundary conditions that harness the laws of physics and chemistry ... their (the boundary conditions) structure cannot be defined in terms of the laws that they harness." As he further explained, "As the arrangement of a printed page is extraneous to the chemistry of the printed page, so is the base sequence in a DNA molecule extraneous to the chemical forces at work in the DNA molecule." Like Polanyi, Voie argues that the information and function of DNA and the cellular replication machinery must originate from a source that transcends physics and chemistry. In particular, since as Voie argues, "chance and necessity cannot explain sign systems, meaning, purpose, and goals," and since "mind possesses other properties that do not have these limitations," it is "therefore very natural that many scientists believe that life is rather a subsystem of some Mind greater than humans."
This peer-reviewed article by ID-proponents seeks to offer definitions of information that measure information in terms of functionality. The authors' approach mirrors the concept of specified complexity. They explain that The purpose of this paper is to show that Shannon entropy can also be redefined as a function of the joint patterns between data and functionality, thus incorporating a functional interpretation into the measure. They explain that their methods can also be used to measure the degree of mutational changes necessary to convert one functional protein into another: The difference in functional entropy between the two different sequences not only provides an estimate for the amount of information required to change the starting sequence into the final sequence, but it also calculates the estimated number of trials to achieve the final sequence in evolution and thus The functional entropy change calculated can be interpreted as a quantifier of evolutionary change. Their paper experimentally tests their methods, calculating the difference in functional entropy between versions of a hox enzyme found in insects and crustaceans, thought to be homologous. They write: Since the novel function as expressed did not come into effect until all 6 mutations were in place, the evolutionary path was modeled as a random walk and yielded a change of ~26 bits. According to Axe (2010), this of course pushes the limit of what can be produced by Darwinian evolution.
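One plausible way to arrive at a figure of that size (an illustrative reconstruction, not necessarily the authors' exact calculation) is to treat each of the six required changes as one specific choice among the 20 amino acids:

    import math

    # Back-of-envelope only: six independent, specific amino-acid changes,
    # each one choice out of 20, cost 6 * log2(20) bits.
    bits = 6 * math.log2(20)
    print(round(bits, 1))    # ~25.9 bits, matching the quoted ~26 bits
    print(round(2 ** bits))  # implied number of random-walk trials, ~6.4e7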
This article recognizes the important point that biological information must be defined in terms of the specific type of information it represents. Shannon information and Kolmogorov information are said to be inadequate measures of information. Instead, the authors recommend using functional sequence complexity, a concept essentially identical to specified complexity, to measure biological information. The article also refers to choice contingency entailing an arbitrary intelligent choice as a known cause: Compression of language is possible because of repetitive use of letter and word combinations. Words correspond to reusable programming modules. The letter frequencies and syntax patterns of any language constrain a writer's available choices from among sequence space. But these constraints are the sole product of arbitrary intelligent choice within the context of that language. Source and destination reach a consensus of communicative methodology before any message is sent or received. This methodology is called a language or an operating system. Abstract concept ('choice contingency') determines the language system, not 'chance contingency,' and not necessity (the ordered patterning of physical 'laws.')" It then argues that true organization, such as that studied in biology, requires this "choice contingency," implying intelligent design: "Self-ordering phenomena are observed daily in accord with chaos theory. But under no known circumstances can self-ordering phenomena like hurricanes, sand piles, crystallization, or fractals produce algorithmic organization. Algorithmic 'self-organization' has never been observed despite numerous publications that have misused the term. Bona fide organization always arises from choice contingency, not chance contingency or necessity."
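The compression point is easy to check empirically (an illustration only; any English corpus would do):

    import random, string, zlib

    # English text compresses heavily because of its repetitive letter and
    # word statistics; random text of the same length compresses far less.
    english = ("the quick brown fox jumps over the lazy dog " * 50).encode()
    noise = "".join(random.choice(string.ascii_lowercase + " ")
                    for _ in range(len(english))).encode()
    print(len(english), "->", len(zlib.compress(english)))  # shrinks to a tiny fraction
    print(len(noise), "->", len(zlib.compress(noise)))      # only a modest reduction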
Otto Schindewolf once wrote that evolution postulates "a unique, historical course of events that took place in the past, is not repeatable experimentally and cannot be investigated in that way." In this peer-reviewed article from a prestigious Italian biology journal, John A. Davison agrees with Schindewolf. Since "[o]ne can hardly expect to demonstrate a mechanism that simply does not and did not exist," Davison attempts to find new explanations for the origin of convergence among biological forms. Davison contends that "[t]he so-called phenomenon of convergent evolution may not be that at all, but simply the expression of the same preformed 'blueprints' by unrelated organisms." While discussing many remarkable examples of "convergent evolution," particularly the marsupial and placental saber-toothed cats, Davison is unmistakable in his meaning. The evidence, he writes, "bears, not only on the questions raised here, but also, on the whole issue of Intelligent Design." Davison clearly implies that this evidence is expected under an intelligent design model, but not under a Neo-Darwinian one.
This experimental study shows that functional protein folds are extremely rare, finding that "roughly one in 10^64 signature-consistent sequences forms a working domain" and that the "overall prevalence of sequences performing a specific function by any domain-sized fold may be as low as 1 in 10^77." Axe concludes that "functional folds require highly extraordinary sequences." Since Darwinian evolution only preserves biological structures that confer a functional advantage, it would be very difficult for such a blind mechanism to produce functional protein folds. This research also shows that there are high levels of specified complexity in enzymes, a hallmark indicator of intelligent design. Axe himself has confirmed that this study adds to the evidence for intelligent design: "In the 2004 paper I reported experimental data used to put a number on the rarity of sequences expected to form working enzymes. The reported figure is less than one in a trillion trillion trillion trillion trillion trillion. Again, yes, this finding does seem to call into question the adequacy of chance, and that certainly adds to the case for intelligent design." See Scientist Says His Peer-Reviewed Research in the Journal of Molecular Biology "Adds to the Case for Intelligent Design".
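On the usual definition of functional information, I = -log2(p), those prevalences convert directly into bit values (a simple conversion for illustration, not a calculation from the paper):

    import math

    # Convert the reported prevalences of functional sequences into bits.
    for label, p in (("1 in 10^64", 1e-64), ("1 in 10^77", 1e-77)):
        print(label, "->", round(-math.log2(p), 1), "bits")  # ~212.6 and ~255.8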
In this article, Lehigh University biochemist Michael Behe and University of Pittsburgh physicist David Snoke show how difficult it is for unguided evolutionary processes to take existing protein structures and add novel proteins whose interface compatibility is such that they could combine functionally with the original proteins. According to their analysis, mechanisms in addition to standard Darwinian processes are required to generate many protein-protein interactions:
The fact that very large population sizes -- 10^9 or greater -- are required to build even a minimal MR feature requiring two nucleotide alterations within 10^8 generations by the processes described in our model, and that enormous population sizes are required for more complex features or shorter times, seems to indicate that the mechanism of gene duplication and point mutation alone would be ineffective, at least for multicellular diploid species, because few multicellular species reach the required population sizes. Thus, mechanisms in addition to gene duplication and point mutation may be necessary to explain the development of MR features in multicellular organisms.
By demonstrating inherent limitations to unguided evolutionary processes, this work gives indirect scientific support to intelligent design and bolsters Behe's case for intelligent design in answer to some of his critics.
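The scale of the numbers in the quoted passage can be motivated with a crude back-of-envelope calculation (an illustration under an assumed point-mutation rate of about 10^-8 per site per generation; this is not the paper's model, which is considerably more detailed):

    # If each specific point mutation arises at ~1e-8 per site per generation,
    # an individual carrying two specific changes together is expected at about
    # (1e-8)**2 = 1e-16, so on the order of 10^16 organism-generations are
    # needed -- e.g. a population of 10^8 individuals for 10^8 generations.
    mu = 1e-8
    organism_generations = 1 / mu**2
    print(f"{organism_generations:.0e}")  # 1e+16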
Biology exhibits numerous invariants -- aspects of the biological world that do not change over time. These include basic genetic processes that have persisted unchanged for more than three-and-a-half billion years and molecular mechanisms of animal ontogenesis that have been constant for more than one billion years. Such invariants, however, are difficult to square with dynamic genomes in light of conventional evolutionary theory. Indeed, Ernst Mayr regarded this as one of the great unsolved problems of biology. In this paper Dr. Wolf-Ekkehard Lönnig, Senior Scientist in the Department of Molecular Plant Genetics at the Max-Planck-Institute for Plant Breeding Research (now retired), employs the design-theoretic concepts of irreducible complexity (as developed by Michael Behe) and specified complexity (as developed by William Dembski) to elucidate these invariants, accounting for them in terms of an explicit intelligent design (ID) hypothesis.
This article argues for intelligent design as an explanation for the origin of the Cambrian fauna. Not surprisingly, it created an international firestorm within the scientific community when it was published. (See David Klinghoffer, The Branding of a Heretic, Wall Street Journal, Jan. 28, 2005, as well as the following website by the editor who oversaw the article's peer-review process: http://www.richardsternberg.net/.) The treatment of the editor who sent Meyer's article out for peer-review is a striking illustration of the sociological obstacles that proponents of intelligent design encounter in publishing articles that explicitly defend the theory of intelligent design.
This paper by Tulane mathematician and cosmologist Frank Tipler observes that teleological explanations are live possibilities within physics. Tipler also contends that the universe is set up to permit the existence of life, and that the universe seems guided by an ultimate goal inherent in it. The implication, as Tipler writes, is that the evolution of life has been guided by that goal, rather than being entirely random.
This article suggests that explaining the functional complexity in life requires a force that can make choices: Progress in understanding the derivation of bioinformation through natural processes will come only through elucidating more detailed mechanisms of selection pressure 'choices' in biofunctional decision-node sequences. The latter is the subject of both 'BioFunction theory' and the more interdisciplinary 'instruction theory'. Life, then, is not only not reducible to complexity; it is not even reducible to FSC! Life is a symphony of dynamic, highly integrated, algorithmic processes yielding homeostatic metabolism, development, growth, and reproduction (ignoring the misgivings of those few life-origin theorists with mule fixations!). But as Yockey argues, it remains to be seen whether such highly sophisticated algorithmic processes can exist apart from the linear, segregatable, digital, FSC instructions observed at the helm of all known empirical life. The author argues that The key to life-origin research lies in uncovering the mechanisms whereby these productive algorithmic programming choices were made and recorded in nucleic acid. He compares the processes that generated life to those that generate computer programming: Selection is exactly what is found in computer algorithms. Correct choices at each successive decision node alone produce sophisticated software. RSC strings are pragmatically distinguished from FSC strings by virtue of the fact that RSC strings are almost never observed to do anything useful in any context. FSC strings, on the other hand, can be counted on to contribute specific utility.
Citing the work of William Dembski, the opening paragraph of this article reads: Detection of complex specified information is introduced to infer unknown underlying causes for observed patterns. By complex information, it refers to information obtained from observed pattern or patterns that are highly improbable by random chance alone. We evaluate here the complex pattern corresponding to multiple observations of statistical interdependency such that they all deviate significantly from the prior or null hypothesis. Such multiple interdependent patterns when consistently observed can be a powerful indication of common underlying causes. That is, detection of significant multiple interdependent patterns in a consistent way can lead to the discovery of possible new or hidden knowledge.
These researchers reach a conclusion that is thoroughly teleological and non-Darwinian. The authors look to laws of form embedded in nature as possessing the power to guide the formation of biological structures. The intelligent design research program reflected here is broad yet certainly recognizable, positing design as a feature programmed into nature.
This article examines the role of transposons in the abrupt origin of new species and the possibility of a partly predetermined generation of biodiversity and new species. The authors' approach is non-Darwinian, and they cite favorably the work of design theorists Michael Behe and William Dembski, acknowledging that some biological systems are irreducibly complex.
This study published by molecular biologist Douglas Axe, now at the Biologic Institute, challenges the widespread idea that high species-to-species variation in the amino-acid sequence of an enzyme implies modest functional constraints. Darwinists commonly assume that such variation indicates low selection pressure at the variable amino acid sites, allowing many mutations with little effect. Axe's research shows that even when mutations are restricted to these sites, they are severely disruptive, implying that proteins are highly specified even at variable sites. According to this work, sequences diverge not because substantial regions are free from functional constraints, but because selection filters most mutations, leaving only the harmless minority. By showing functional constraints to be the rule rather than the exception, it raises the question of whether chance can ever produce sequences that meet these constraints in the first place. Axe himself has confirmed that this study adds to the evidence for intelligent design: "I concluded in the 2000 JMB paper that enzymatic catalysis entails 'severe sequence constraints.' The more severe these constraints are, the less likely it is that they can be met by chance. So, yes, that finding is very relevant to the question of the adequacy of chance, which is very relevant to the case for design." See Scientist Says His Peer-Reviewed Research in the Journal of Molecular Biology "Adds to the Case for Intelligent Design".
This article argues that intelligent design is recognizable in the human heart, stating: Comparative anatomy points to a design and a Designer. Surgeons, anatomists and anyone studying the human form and function have an unsurpassed opportunity to ponder over the wonders of creation and contemplate the basic questions: where did we come from? why are we here? and where are we going?
This article concludes that there is a design in the evolution of the venous connections of the heart, pectinate muscles, atrioventricular valves, left ventricular tendons, outflow tracts, and great arteries. But the version of evolution it presents is decidedly non-Darwinian, as it notes that evolution appears to be goal-directed by a designer: One neglected aspect in the study of evolution is that of anticipation. Fish atria and ventricles appear to have a built-in provision for becoming updated to the human 4-chambered structure. This transformation is achieved in stages: the truncus yields the great arteries, appropriate shifting takes place in the great arteries, the left ventricle decreases in sponginess and increases in the size of its lumen, the chordopapillary apparatus becomes more sophisticated, the coronary circulation undergoes changes, and the ventricular septal defect closes. The article closes by stating, "This evolutionary progression points to a master design and plan for countless millennia."
This book was published by Cambridge University Press and peer-reviewed as part of a distinguished monograph series, Cambridge Studies in Probability, Induction, and Decision Theory. The editorial board of the series includes members of the National Academy of Sciences as well as a Nobel laureate, John Harsanyi, who shared the prize in 1994 with John Nash, protagonist of the film A Beautiful Mind. Commenting on the ideas in The Design Inference, well-known physicist and science writer Paul Davies remarked: "Dembski's attempt to quantify design, or provide mathematical criteria for design, is extremely useful. I'm concerned that the suspicion of a hidden agenda is going to prevent that sort of work from receiving the recognition it deserves." Quoted in Larry Witham, By Design (San Francisco: Encounter Books, 2003), p. 149.
This peer-reviewed chapter from an academic book on plant research favorably references Michael Behe's concept of irreducible complexity. After noting that some major problems have to be solved for gene duplications to be of fundamental evolutionary significance, it cites Behe's 1996 book Darwin's Black Box to justify the question: "What could be the selective advantage of the intermediate (still unfinished) reaction chains?" The authors further state that examples of irreducibly complex systems are found in biology.
In this book Behe develops a critique of the mechanism of natural selection and a positive case for the theory of intelligent design based upon the presence of "irreducibly complex molecular machines" and circuits inside cells. Though this book was published by The Free Press, a trade press, the publisher subjected the book to standard scientific peer-review by several prominent biochemists and biological scientists.
In this book Thaxton, Bradley and Olsen develop a seminal critique of origin-of-life studies and make a case for the theory of intelligent design based upon the information content and "low-configurational entropy" of living systems.
This article from the American Journal of Physics seeks to help educators understand how they can teach students about the evidence for transcendence in the universe. The article assumes that a transcendent realm exists beyond the universe and that the universe can plausibly be said to reflect design.
In this article appearing in a 1985 technical reference book, mathematician Granville Sewell compares the complexity found in the genetic code of life to that of a computer program. He recognizes that the fundamental problem for evolution is the "problem of novelties" which in turn raises the question "How can natural selection cause new organs to arise and guide their development through the initial stages during which they present no selective advantage?" Sewell explains how a typical Darwinist will try to bridge both functional and fossil gaps between biological structures through "a long chain of tiny improvements in his imagination," but the author notes that "the analogy with software puts his ideas into perspective." Major changes to a species require the intelligent foresight of a programmer. Natural selection, a process that is "unable to plan beyond the next tiny mutation," could never produce the complexity of life.
In this peer-reviewed paper, nuclear physicist William G. Pollard notes that Big Bang cosmology requires some kind of transcendent reality. Pollard argues that scientific justification for this transcendent domain can be found in quantum mechanics, and that the universe's laws and constants are finely tuned to permit the existence of advanced life, a fine-tuning that points to an intelligent source, a mind, as the designer of the universe.
Peer-Edited or Editor-Reviewed Articles Supportive of Intelligent Design Published in Scientific Journals, Scientific Anthologies and Conference Proceedings
A. C. McIntosh, Functional Information and Entropy in Living Systems, Design and Nature III: Comparing Design in Nature with Science and Engineering, Vol. 87 (Ashurst, Southampton, United Kingdom: WIT Transactions on Ecology and the Environment, WIT Press, 2006).
Jonathan Wells, Do Centrioles Generate a Polar Ejection Force?, Rivista di Biologia/Biology Forum, Vol. 98:71-96 (2005).
Heinz-Albert Becker and Wolf-Ekkehard Lönnig, Transposons: Eukaryotic, Encyclopedia of Life Sciences (John Wiley & Sons, 2005).
Scott A. Minnich and Stephen C. Meyer, Genetic analysis of coordinate flagellar and type III regulatory circuits in pathogenic bacteria, Proceedings of the Second International Conference on Design & Nature, Rhodes, Greece, edited by M.W. Collins and C.A. Brebbia (Ashurst, Southampton, United Kingdom: WIT Press, 2004).
Four science articles in William A. Dembski and Michael Ruse, eds., Debating Design: From Darwin to DNA (Cambridge, United Kingdom: Cambridge University Press, 2004) (hereinafter Debating Design).
William A. Dembski, The Logical Underpinnings of Intelligent Design, Debating Design, pp. 311-330.
Walter L. Bradley, Information, Entropy, and the Origin of Life, Debating Design, pp. 331-351.
Michael Behe, Irreducible Complexity: Obstacle to Darwinian Evolution, Debating Design, pp. 352-370.
Stephen C. Meyer, The Cambrian Information Explosion: Evidence for Intelligent Design, Debating Design, pp. 371-391.
In this article, Dembski outlines his method of design detection. He proposes a rigorous way of identifying the effects of intelligent causation and distinguishing them from the effects of undirected natural causes and material mechanisms. Dembski shows how the presence of specified complexity or "complex specified information" provides a reliable marker of prior intelligent activity. He also responds to a common criticism made against his method of design detection, namely that design inferences constitute "an argument from ignorance."
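As a rough formal sketch of this method (a simplification; Dembski's own treatment is more elaborate): a design inference is warranted for an event E that matches an independently given pattern (a specification) and whose probability under every relevant chance hypothesis H falls below the universal probability bound,

P(E \mid H) < 10^{-150},

a bound Dembski obtains by multiplying the roughly 10^{80} elementary particles in the observable universe by roughly 10^{45} possible state changes per second and roughly 10^{25} seconds of cosmic history. Any specified event rarer than this, he argues, exhausts the universe's probabilistic resources and cannot reasonably be credited to chance.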
Walter Bradley is a mechanical engineer and polymer scientist. In the mid-1980s he co-authored what supporters consider a seminal critique of origin-of-life studies in the book The Mystery of Life's Origin. Bradley and his co-authors also developed a case for the theory of intelligent design based upon the information content and "low-configurational entropy" of living systems. In this chapter he updates that work. He clarifies the distinction between configurational and thermal entropy, and shows why materialistic theories of chemical evolution have not explained the configurational entropy present in living systems, a feature that Bradley takes to be strong evidence of intelligent design.
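A minimal sketch of the bookkeeping behind this distinction (my notation, following the Boltzmann-style decomposition Bradley and his co-authors employ):

S = S_{\text{thermal}} + S_{\text{config}}, \qquad S_{\text{config}} = k \ln \Omega_c,

where \Omega_c counts the distinguishable arrangements of a polymer's building blocks. Since only a tiny subset of arrangements is functional, a specified biopolymer corresponds to a vastly smaller \Omega_c, and thus a much lower configurational entropy, than a random chain; Bradley's claim is that undirected chemistry supplies no mechanism that pays this configurational cost.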
In this essay Behe briefly explains the concept of irreducible complexity and reviews why he thinks it poses a severe problem for the Darwinian mechanism of natural selection. In addition, he responds to several criticisms of his argument for intelligent design from irreducible complexity and several misconceptions about how the theory of intelligent design applies in biochemistry. In particular he discusses several putative counterexamples that some scientists have advanced against his claim that irreducibly complex biochemical systems demonstrate intelligent design. Behe turns the tables on his critics, arguing that such examples actually underscore the barrier that irreducible complexity poses to Darwinian explanations, and, if anything, show the need for intelligent design.
Meyer argues for design on the basis of the Cambrian explosion, the geologically sudden appearance of new animal body plans during the Cambrian period. Meyer notes that this episode in the history of life represents a dramatic and discontinuous increase in the complex specified information of the biological world. He argues that neither the Darwinian mechanism of natural selection acting on random mutations nor alternative self-organizational mechanisms are sufficient to produce such an increase in information in the time allowed by the fossil evidence. Instead, he suggests that such increases in specified complex information are invariably associated with conscious and rational activity, that is, with intelligent design.
Granville Sewell, A Mathematician's View of Evolution, The Mathematical Intelligencer, Vol. 22(4) (2000).
This paper explores the proper way to measure information and entropy in living organisms. Citing the work of Stephen Meyer, the author argues that random mutations cannot increase order in a living system: "[R]andom mutations always have the effect of increasing the disorder (or what we will shortly define as logical entropy) of any particular system, and consequently decreasing the information content. What is evident is that the initial information content rather than being small must in fact be large, and is in fact vital for any process to work to begin with. The issue of functional complexity and information is considered exhaustively by Meyer who argues that the neo-Darwinist model cannot explain all the appearances of design in biology." McIntosh continues, explaining that only teleology -- intelligent design -- can explain the increases in information that generate observed biological complexity: "Even within the neo-Darwinist camp the evidence of convergence (similarity) in the suggested evolutionary development of disparate phylogeny has caused some writers to consider channelling of evolution. Such thinking is a tacit admission of a teleological influence. That information does not increase by random changes (contrary to Dawkins' assertion) is evident when we consider in the following section, the logical entropy of a biochemical system." He concludes that goal-directed processes, or teleonomy, are required: "There has to be previously written information or order (often termed teleonomy) for passive, non-living chemicals to respond and become active."
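For readers who want a quantitative handle on "functional information," one standard formalization (due to Hazen and Szostak, offered here as background rather than drawn from McIntosh's paper) is

I(E_x) = -\log_2 \left[ M(E_x) / N \right],

where N is the total number of possible sequences and M(E_x) is the number achieving at least the degree of function E_x. For instance, if 10^{20} of the roughly 10^{130} possible 100-residue proteins performed a given function, the functional information would be \log_2(10^{110}) \approx 365 bits.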
Molecular biologist Jonathan Wells writes in the Italian biology journal Rivista di Biologia that the cell may be viewed and studied as a designed system with engineered machines. Showing the heuristic value of intelligent design, he writes: "Instead of viewing centrioles through the spectacles of molecular reductionism and neo-Darwinism, this hypothesis assumes that they are holistically designed to be turbines. What if centrioles really are tiny turbines? This is much easier to conceive if we adopt a holistic rather than reductionistic approach, and if we regard centrioles as designed structures rather than accidental by-products of neo-Darwinian evolution. If centrioles really are turbines, then fluid exiting through the blades would cause them to rotate clockwise when viewed from their proximal ends." Wells hypothesizes that such approaches may lead to understandings of the workings of centrioles, perhaps even uncovering some causes of cancer.
This encyclopedia entry recounts that some biological systems may be irreducibly complex, stating: "A general difficulty to be mentioned in this context (but not inherent in the selfish DNA hypothesis) is that mutation and selection may not be the full explanation for the origin of species; i.e. the factors of the neo-Darwinian scenario may find their limits, for example, in the generation of irreducibly complex structures (Behe, 1996). This is a term used to describe structures that, according to Behe and co-workers, cannot be explained by a piecemeal production via intermediate steps." The article elaborates on Behe's argument stating, "Among the examples discussed by Behe are the origins of (1) the cilium, (2) the bacterial flagellum with filament, hook and motor embedded in the membranes and cell wall and (3) the biochemistry of blood clotting in humans." The article then proposes that additional systems may challenge Darwinian explanations, stating: "Moreover, the traps of Utricularia (and some other carnivorous plant genera) as well as several further apparatuses in the animal and plant world appear to pose similar problems for the modern synthesis (joints, echo location, deceptive flowers, etc.). Up to now, none of these systems has been satisfactorily explained by neo-Darwinism. Whether accelerated TE activities with all the above named mutagenic consequences can solve the questions posed remains doubtful."
This article underwent conference peer review to be included in this peer-edited volume of proceedings. Minnich and Meyer do three important things in the paper. First, they refute a popular objection to Michael Behe's argument for the irreducible complexity of the bacterial flagellum. Second, they suggest that the Type III Secretory System present in some bacteria, rather than being an evolutionary intermediate to the bacterial flagellum, probably represents a degenerate form of the same. Finally, they argue explicitly that compared to the neo-Darwinian mechanism, intelligent design better explains the origin of the bacterial flagellum. As the authors explain, "In all irreducibly complex systems in which the cause of the system is known by experience or observation, intelligent design or engineering played a role in the origin of the system."
Mathematician Granville Sewell explains that Michael Behe's arguments against neo-Darwinism from irreducible complexity are supported by mathematics and the quantitative sciences, especially when applied to the problem of the origin of new genetic information. Sewell notes that there are "a good many mathematicians, physicists and computer scientists who...are appalled that Darwin's explanation for the development of life is so widely accepted in the life sciences." Sewell compares the genetic code of life to a computer program -- a comparison also made by computer gurus such as Bill Gates and evolutionary biologists such as Richard Dawkins. He notes that experience teaches that software depends on many separate functionally coordinated elements. For this reason "[m]ajor improvements to a computer program often require the addition or modification of hundreds of interdependent lines, no one of which makes any sense, or results in any improvement, when added by itself." Since individual changes to part of a genetic program typically confer no functional advantage (in isolation from many other necessary changes to other portions of the genetic code), Sewell argues that improvements to a genetic program require the intelligent foresight of a programmer. Undirected mutation and selection will not suffice to produce the necessary information.
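The probabilistic core of this argument fits in one line (my sketch of the reasoning, not a calculation from Sewell): if an improvement requires k coordinated changes, each arising independently with probability p per replication and conferring no advantage alone, selection cannot favor the intermediates, so all k must appear together, with probability about

p^k, \quad\text{e.g., } p = 10^{-8},\ k = 5 \;\Rightarrow\; 10^{-40} \text{ per replication}.

It is the interdependence of the changes, not the improbability of any single one, that drives the number down.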
Articles and Books Supportive of Intelligent Design Published in Peer-Reviewed Philosophy Journals or Peer-Reviewed Philosophy Books
Michael C. Rea, World without Design: The Ontological Consequences of Naturalism (Oxford University Press, 2004).
William Lane Craig, Design and the Anthropic Fine-Tuning of the Universe, in God and Design: The Teleological Argument and Modern Science, pp. 155-177 (Neil Manson ed., London: Routledge, 2003).
Michael Behe, Reply to My Critics: A Response to Reviews of Darwin's Black Box: The Biochemical Challenge to Evolution, Biology and Philosophy, Vol. 16:685-709 (2001).
Del Ratzsch, Nature, Design, and Science: The Status of Design in Natural Science (State University of New York Press, 2001).
William Lane Craig, The Anthropic Principle, in The History of Science and Religion in the Western Tradition: An Encyclopedia, pp. 366-368 (Gary B. Ferngren, general ed., Garland Publishing, 2000).
Michael Behe, Self-Organization and Irreducibly Complex Systems: A Reply to Shanks and Joplin, Philosophy of Science, Vol. 67(1):155-162 (March 2000).
William Lane Craig, Barrow and Tipler on the Anthropic Principle vs. Divine Design, British Journal for the Philosophy of Science, Vol. 38: 389-395 (1988).
William Lane Craig, God, Creation, and Mr. Davies, British Journal for the Philosophy of Science, Vol. 37: 168-175 (1986).
In this article published in the mainstream journal Biology and Philosophy, Michael Behe defends his views supporting intelligent design as stated in Darwin's Black Box.
Michael Behe defends his arguments for irreducible complexity against the criticisms of various Darwinian scientists.