What is a computer? What can computers tell us about the brain? Will computers ever think like people?

Kant, Gödel, Turing machine, neurology, information, analog and digital, analytical and synthetical, boot program.

Return to the Theory of Options

Previous 4.5 Intuition and Judgment

Next 5.1 The Origin of Culture

4.6 Artificial Intelligence

"It seems to me that there is a fundamental conflict, as revealed by the Godel (-Turing) theorem, between mathematical understanding and purely computational processes. There is no obstruction to our mathematical understanding being the product of evolution provided that the physical laws with which natural selection operates are not of an extremely computational nature." Roger Penrose

"What Godel's theorem offers the romantically inclined is a similarly dramatic proof of the specialness of the human mind. Godel's theorem defines a deed, it seems, that a genuine human mind can perform but that no impostor, no mere algorithm controlled robot, could perform. The technical details of Godel's proof itself need not concern us; no mathematician doubts its soundness. The controversy all lies in how to harness the theory to prove anything about the nature of mind." Daniel Dennett

"As an evolutionary biologist, I have learned over the years that most people do not want to see themselves as lumbering robots programmed to ensure the survival of their genes. I don’t think they will want to see themselves as digital computers either. To be told by someone with impeccable scientific credentials that they are nothing of the kind can only be pleasing." John Maynard Smith

"On two occasions I have been asked (by members of Parliament), 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." Charles Babbage

"Computers in the future may weigh no more than 1.5 tons." Popular Mechanics, 1949

"There is no reason anyone would want a computer in their home." Ken Olson, founder of Digital Equipment Corp.

"640K ought to be enough for anybody." Bill Gates, 1981

4.6.1 Defining A Computer

As explained in the chapter on the alleged wiring of the brain, for centuries philosophers have used mechanical analogies about springs, cogs, wheels, pumps and wires, to explain how humans exist in relation to machines. Often these analogies are more confusing than helpful. Yet, there is one analogy so striking that we cannot ignore it. When man began the first industrial revolution he built machines in crude images of his beasts of burden. He replaced the horse, ox, and donkey with the car, train and tractor. And he tested the new machines in ways, such as comparing the speed of a train against a horse, or the pull of a tractor against a bullock, that made the analogy fitting. New machines replaced people too, and hundreds of craft trades were wiped out by the factory system. But the challenge of the new machinery was always to the limbs and muscles of humans. Plus the machinery had such diverse applications that we still debate which beast any machine was built in the image of, or what any particular machine was intended to replace.

However, with the new industrial age came a totally new machine, one that could perform not physical but mental labor. Only there was no doubt this time which creature the new device was in the image of, or what it would replace. For a species that has always considered itself the only one on Earth capable of thinking, it is obvious how humans will rate the thinking machine against all other artifacts of their creation, and against themselves. One thing people wanted to know about the new machines, say, was whether they could play chess. If this was a special challenge, it was only because, of all the animal species that might be intelligent, there is still only one species that plays chess.

To this author the analogy is important in another way. An essential factor of human evolution was the learning neurology of the higher cortex. Once neural circuits can learn after birth there is no need to individually design each circuit in the brain by natural selection. A multipurpose neural circuit can be multiplied many times to add brain bulk. In early evolution too, it seemed far easier for animals to evolve a new shape or skin covering or other non-neural functions than to evolve complex neurology. But once the Rubicon of 50,000 genes coding for neurology was crossed in advanced primates, from then on it was the rest of the body that held back neural development. This effectively created a new era of evolution in which, after hundreds of millions of years of expensive neural design, suddenly neural power became cheap within other evolutionary constraints. Interestingly then, this author recalls as a boy the science teacher explaining how carbon, the basis of life, had four electrons in its outer shell, and the next element in this class was silicon. Only while animals can breathe out carbon dioxide, silicon dioxide is sand, so everybody laughed at the idea of silicon-based life. Yet even as we laughed the first circuits were being etched in silicon, and we saw within a lifetime the technological evolution of silicon-based computation. Only the point about silicon was not that it was used, but that it was cheap. Humans evolved because their ancestors were born into an evolutionary era in which brain power became cheap. Now civilization is entering an age in which computing power has also become cheap, and we do not know where this will lead.

For humanity then, the computer brings both opportunities and threats. The opportunity is to test existing theories of intelligence against how the new machines work. The threat is that everything humans had previously believed about possessing intelligence might be wrong. Philosophers, from Plato or earlier onward, had insisted that there was something unique about how humans could think. Now it seems that our thinking process could, even in theory, be emulated by a machine. Of course, the catch is "even in theory". We accept that for practical use a motor car can replace a horse, just as a computer can replace a clerk, and mostly it has. But we do not expect that a robot horse, no matter how perfectly constructed, could replace a living horse emotionally or sentimentally, as a friend, or as living history and culture. Some people expect that it could, and now we even make robot dogs for children. In this sense there is a conceivable market for robot horses or even robot people. Only we would not consider such devices alive or conscious in a philosophical sense, though individual humans display strange, even warped sentimental attachments to all manner of objects that are not bona fide in whatever way. Except false or fetish attachment to things, like worship of false gods, seems more an issue of how humans really do think than of properties we can ascribe to the objects of human attachment. Many people today claim they already "love" their computers, and computers will become more personalized in the future. So, the debate is not over human ingenuity for building devices that emulate nature, but over whether such devices could ever be conscious. Humans not only see images as input, but have a picture of outside reality even with the eyes shut, and feel sentiments of love, jealousy, and pain. So, we want to know if this level of consciousness could be built into machines that we make, even in theory, regardless of present limitations of technology.

Plus, as we saw in the debate over cause-and-effect, the practical impossibility of calculating out every cause in the universe did not stop people debating its theoretical implications, especially for behavior and free will. Those who viewed the human will as completely determined jumped on the mathematical issue of cause-and-effect as justification for their standpoint. Those who saw humans as ultimately free and morally responsible adopted a stranger line of reasoning. This was that the practical limitation on computing every cause-and-effect must also have an analytical expression as a mathematical rule, demonstrable in itself. We could use, say, an argument like Wittgenstein's to show that all computational results can be reduced to a proposition. This proposition could only be logically connected to other propositions, but would infer nothing about the real world. More recently, similar arguments have been made using Gödel's theorem, or its extension in Turing's theorem, which applies to machines. We can also use arguments from complexity theory to demonstrate mathematically that the universe is indeterminate to the extent of being subject to very large changes from very small perturbations. Or the argument can jump to physics, to debate whether quantum theory proves that the world is indeterminate, though critics say that this implication of quantum theory has been misunderstood.

The point perhaps is that outside of specialist fields nobody really understands what all these theories imply, but that does not stop individuals from adopting the view of human nature that they would probably hold anyway. How non-astounding then, that when the debate switches from evolution or cause-and-effect to artificial intelligence, the same people are found holding the views that they always held. Only this time the issue is what type of computer we could ultimately build. If we could build a computer that could "think like humans", somehow this would prove that the human will is not free. Yet, if we cannot build such a computer, it will prove the contrary. The same esoteric arguments previously used to debate evolution, free will, and cause-and-effect are now used to debate the type of machine we can build, and more amazingly, what all this has to do with human moral accountability.

Only while these debates are interesting, this book is about maximizing human options. The relevant issue here is not so much what type of machine humans could ultimately build in some hypothetical future. We need to know what types of machine humans should attempt to build now, to enhance our present options.

So, what is a computer? What types of computers should we be building? And if the brain is a form of computer, what things can we learn from computers that will help us to better understand the brain?

4.6.2 How Computers Work

Basically, a computer is a device for manipulating information, such that from the information we put into it we get a higher quality of information out. Now in this universe you do not get anything for free, so computers enhance information quality by two processes. The first is consumption of energy. Energy, like the oxygen in the blood or the electric charge in a battery, has a higher information order when it is available for useful work than after the work is consumed. All computers consume energy to work, and by doing so they extract the information order available in the energy source and use it to enhance data already available as raw information. Though the human brain is only 2.5% of the body weight, it consumes about 20% of the energy used in the body.

The second way computers enhance data also involves energy consumption, but here it is the energy that went into the design of the computer. Whether by nature via evolution, or by human engineering, designing things consumes energy. The modern microchip has made cheap computers possible because an enormous amount of design "energy" has become concentrated into a very small device, including the energy of the designs of early microchips, recycled into the latest designs. The brain too, designed by evolution, is such a useful computer because the energy of hundreds of millions of years of neural design has been concentrated into it. Again, the human brain uses almost half the total gene count in the genome for its design, though it composes only 2.5% of the total body by weight.

But whether designed by nature or by engineers, or by a process yet unknown to humans, there are two different ways to design a computer, depending on what one wants the device for. The first method is to concentrate the energy of design into the hardware of the computer. So the computer will consist of the three minimal components any computer requires: an energy source, data, and design hardware. This will be the simplest type of computer, because once it is connected up to the energy source and data input it can begin processing data straight away. There is no need for further design input. Such a computer could be installed by a capable mechanic who, so long as he knew how to connect the device up, would not have to know anything further about computers to make it work. Industry uses these types of computers because they are simple and rugged, a mechanic can install them, and they work straight away.

A term used to describe hardware-only computers is to call them analog computers, though this is not precise. Analog means 'analogous to' some process in industry or nature which the computer models. The first computers, such as the famous fly-ball governor used by James Watt in 1786 to control the speed of early steam engines, consisted of data, energy transformation, and hardware design, and were true analog computers. Other simple analog computers can be built from gears, levers, or calibrated sliding scales. At least one analog calculating machine using gears, dating from Greek times, has been discovered, though its function is unknown. In industry, early analog computers used pneumatic pressure or an electrical current of variable strength to control processes. This gave rise to a general meaning for analog: any signal of a continuously variable type, like those used in older telephones. However, analog signals easily suffer degradation, so today it is usual to replace them with more modern digital signals, which transmit information as discrete values rather than as a continuously variable signal strength. This is where the terms become confusing, because most modern analog computers use digital signals.
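
A minimal sketch may make the distinction concrete. The values below are invented for illustration: a noisy channel corrupts both signal types, but only the digital signal can be snapped back to its discrete levels, while the analog values stay degraded.

    import random

    def transmit(signal, noise=0.2):
        """Simulate a noisy channel: each sample picks up random interference."""
        return [s + random.uniform(-noise, noise) for s in signal]

    def restore_digital(signal, levels=(0.0, 1.0)):
        """Snap each noisy sample back to the nearest discrete level."""
        return [min(levels, key=lambda v: abs(s - v)) for s in signal]

    analog = [0.37, 0.82, 0.15]         # continuous values: the noise is unrecoverable
    digital = [1.0, 0.0, 1.0, 1.0]      # discrete values: the noise can be thresholded away

    print(transmit(analog))                    # degraded, and stays degraded
    print(restore_digital(transmit(digital)))  # recovered exactly: [1.0, 0.0, 1.0, 1.0]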

Yet, for all their simplicity, and no matter what we call them, there is a great drawback to all analog-type, hardware-only computers. They remain specialized by the initial hardware design to a particular function. A hardware-only analog device for calculating the orbits of the planets could never be used for playing chess, or for controlling an industrial process, and so on. To perform a different function the analog computer must be redesigned and rebuilt, which is a tedious, inflexible process. So, during World War II hybrid computers arose, featuring two major technical innovations. First, they began using discrete digital signals for the data, to replace the less reliable analog-type data signals. Next, they began using softwiring of the major parts of the design, so the computer function could be rewired quickly when a new process needed to be modeled. Finally, after the war, engineers realized that digital signals, so successful for modeling the data, could be used for another purpose. This would be the electronic softwiring of the design function itself. It requires that the energy of the design process, instead of going into manipulating wires, be arranged instead in an electronic program which could be fed into the computer before or between the data processing runs. The result is the modern, programmable, digital computer. Notice though, that whereas older style computers were three-element, consisting of energy supply, hardware, and data, the new computers were four-element, consisting of energy supply, hardware, electronic softwiring, and the data.

Notice too that in the modern computer the design energy goes into the computer in two stages. First it goes into the hardware, then it goes in again as the computer program. Plus, in the modern computer one set of hardware can run many different programs, so we get maximum utilization of the design energy of the hardware, by effecting changes to computer function through software design.

Are there any disadvantages to the modern, digital computer?

Well, any modern technology is frustrating when it does not work the way it is promised. But there is one huge frustration common to all modern computers. It is the problem surmised by Kant for the brain. The old analog-type computers had the simple charm that they could always process data straight away. This is how the original fly-ball governor worked. The mechanic connected it to the output shaft of the steam engine and the throttle linkages. Once the shaft began rotating the energy of the shaft worked the fly balls, which adjusted the throttle. But programmable computers will not start working so simply. They need Kant's third element of knowledge, his "pure principles of understanding", to work, and they always need it. We do not notice this problem on modern computers because it is disguised beneath layers of innovation. But the original meaning of the term boot up is not to kick the computer to start it; it refers to the expression 'you cannot pull yourself into the air by your own bootstraps'. Early programmers must have felt this, because they literally had to start the first computers by feeding into them an inaugural boot program through electronic switches, a frustrating chicken-and-egg task. On the modern computer the boot process is disguised from the user in several layers. First, a special hard-coded microchip provides the inaugural boot data. Then the operating system program (Windows, UNIX, etc.) is loaded, then other programs, and then finally data can be processed. But whatever operating system is used, the sequence must work in precisely this order or the computer will not run.
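
To make the forced ordering vivid, here is a toy sketch in Python. The stage names are invented and model no real firmware; the point is only that each stage can run solely on the output of the stage before it, so the inaugural program has no predecessor to load it.

    def boot_rom():
        """The hard-coded first stage: the one program nothing else loads."""
        return "firmware ready"

    def load_os(firmware):
        """The operating system loads only once the boot ROM has run."""
        return firmware + " -> os ready"

    def load_programs(os_state):
        """Application programs load only on top of a running OS."""
        return os_state + " -> apps ready"

    def process_data(app_state, data):
        """Only now, at the end of the chain, can data be processed."""
        return f"{app_state} -> processed {data}"

    # The chain must run in precisely this order, or nothing runs at all.
    print(process_data(load_programs(load_os(boot_rom())), "payroll figures"))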

This need to provide a boot program for any programmable computer might seem a frustration of technology, but it is more than that. True, computer manufacturers should do a better job of disguising the boot problem from users. But the problem is a property endemic to the universe, or at least to a universe inhabited by intelligent beings. There are proofs of this. In 1931 a mathematician called Kurt Gödel proved that any consistent axiomatic system rich enough to express arithmetic must rest on assumptions it cannot prove from within, effectively the 'boot' axioms. Later another mathematician, Alan Turing, proved that for any series of machines that could solve algorithms by a program, now called Turing Machines, the first program in the very first machine cannot be written by another Turing Machine. In philosophy, Locke's proposition that all information in the brain arises from the senses is itself a proposition which cannot be proved true by sense data alone. So, it is a boot proposition. Finally, we have the observation of Kant that a stream of data by itself will not provide information without some inaugural organizing principle of how the data should be handled. Such an organizing principle could not arise from the data itself.
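
For reference, the standard modern statement of Gödel's first incompleteness theorem runs as follows (a textbook paraphrase, not the author's own formulation):

    % Standard statement of G\"odel's first incompleteness theorem:
    Let $F$ be a consistent formal system whose axioms can be effectively
    listed and which is strong enough to express elementary arithmetic.
    Then there is a sentence $G_F$ in the language of $F$ such that
    neither $G_F$ nor $\neg G_F$ is provable in $F$.
    % The axioms of $F$ play the role of the 'boot' propositions above:
    % they must be supplied from outside, and $F$ cannot even prove its
    % own consistency (the second incompleteness theorem).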

To illustrate the problem more clearly, we will introduce two other terms to describe computers, based on Kant's original classification of knowledge. Kant said all propositions requiring proof must be either analytical or synthetical, depending on whether they redefined meaning or added extra knowledge. As stated, today the terms digital and analog refer more to signal types than to computer function, so we can redefine computers as analytical or synthetical in function instead. (The term "synthetical" sounds horrible.) Now, synthetical computers are the traditional analog or hardware-only type, and the label fits. Traditionally, analog computers carried out real measurements, like the pressure or temperature of a process, or the speed of a steam engine. So they do add extra information, in the form of data, to the computing process, which is the synthetical function in Kant's logic. Without denying the practical importance of measurement to industry or experimental science, however, the computers of most interest to philosophy and logic are analytical computers, or as we may define them, analytical machines. Broadly, an analytical machine is one that carries out some logical or computational function based on a program fed into it. Analytical machines include Turing Machines, programmable computers, and the higher cortex of the human brain. By contrast, computers that can process data directly by the intelligence designed into their hardware are synthetical machines. Synthetical machines include all simple analog devices like the fly-ball governor, all machines not requiring a separate program to process data, and all the neural motor reflexive circuits of the brain and nervous system. (Again, synthetical machine is a horrible-sounding term!)

Classifying computing devices this way, we can now restate the problem of all intelligent machines.

We have in nature and technology simple devices called synthetical machines. These are machines consisting of energy E, design d, and input-output, say x, so we would characterize them as (E, d, x). Without explaining how the initial design of either natural or artificial synthetical machines came about, we know that once designed, all synthetical machines will function straight away without any need of a further electronic program to boot them up. This is so even when the data these machines process is in electronic form. But we also have in technology other devices called analytical machines. These are machines containing energy E, design d, and input-output x, plus an electronic program p. We would characterize these machines as (E, d, p, x). Only, all analytical machines require a boot program to run. As well, Turing effectively proved that it would be impossible for the boot program of the inaugural analytical machine to be written by itself. In fact, we know human programmers wrote the inaugural boot program for the first artificial analytical machine, because they complained with colloquial humor that the task felt like pulling oneself into the air by one's own bootstraps. But even if the inaugural electronic boot program for the first artificial analytical machine was written with the human brain, this only pushes the problem back one notch, because human brains too are analytical machines. So somewhere there must occur a transformation of synthetical into analytical machines, of the type (E, d, x) → (E, d, p, x). What we want to know then is: where did the first 'p' come from? Or, if the human brain is an analytical machine, from where did it get its inaugural boot program?
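
The two characterizations can be sketched in code. This is only an illustration of the (E, d, x) and (E, d, p, x) notation above, with invented names: the synthetical machine runs straight away, while the analytical machine is inert until some outside source supplies its first program p.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class SyntheticalMachine:
        """(E, d, x): energy plus hard-wired design; processes data immediately."""
        energy: float
        design: Callable[[float], float]   # behavior fixed in the hardware

        def run(self, x: float) -> float:
            return self.design(x)          # no program needed: works straight away

    @dataclass
    class AnalyticalMachine:
        """(E, d, p, x): general hardware, inert until a program p is loaded."""
        energy: float
        program: Optional[Callable[[float], float]] = None

        def run(self, x: float) -> float:
            if self.program is None:
                raise RuntimeError("no boot program: the machine cannot start itself")
            return self.program(x)

    governor = SyntheticalMachine(energy=1.0, design=lambda speed: min(speed, 100.0))
    print(governor.run(120.0))             # 100.0 -- throttles immediately

    computer = AnalyticalMachine(energy=1.0)
    computer.program = lambda x: x * 2     # the first p must come from outside
    print(computer.run(21.0))              # 42.0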

4.6.3 Intelligent Devices

Prior to the evolution of the human brain we are not aware of any other analytical devices on Earth. Because only analytical devices can produce other analytical devices, we have a problem here analogous to the question: if all life evolves from other life, how did the first life begin? Only, as the previous chapter explained, in the brain of the newborn human infant there takes place an amazing transformation of a synthetical machine of reflex into an analytical machine of reason, which is what the new research shows. Thus, the natural neurology of the brain overcomes in newborn infants what would otherwise be a serious problem for developing intelligence of the reasoning type. (This is just how natural technology allows reasoning to form; reasoning of the type humans use to discuss topics like computers arises from a very complex social process.) Even so, developing the data stream into meaningful logic is a difficulty that we have not been able to solve in technology. Simply, there does not exist a practical computer which an engineer could connect to a data source and have that device, merely from the physical act of data flowing through its logic circuits, develop by itself a Kantian boot program of order and relation.

This problem of a self-generating boot program might be the main barrier to genuine intuitive behavior among computers. Modern computers are already good at analytical functions such as logic, mathematics, and sorting data. This is essentially the resolution of tautologies, which computers can now do very well. We feed into computers what is really a constrained set of information, from which there is only one solution; it just takes time to work the solution through. For example, there is only one answer to the question of how many people called Smith live next to a neighbor called Jones in all the streets in America. Yet despite there being only one solution, to work it out by hand would take a massive effort. Computers can resolve this type of problem in seconds, and not just for trivial information. Plus, as computers become faster it takes less time to arrive at solutions to increasingly complex sets of constraints. Only, no matter how fast computers become, all they are achieving so far is a reduction of mental effort by humans. Computers, by their speed and accuracy, can solve problems it would not be practical for humans to solve, and in this way computers bring knowledge into the world that might otherwise not exist in practical form. Yet strangely, all the knowledge any computer has produced so far already existed in potential form. The computers added no new knowledge. They just arrived quickly at solutions that were too tedious for humans to calculate out any other way.
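
A minimal sketch of such a constrained problem, with invented street data: the answer already exists in the data, and the machine merely works it through.

    # "Resolution of a tautology": the data fully determines the one answer.
    streets = {
        "Elm St":  ["Smith", "Jones", "Brown", "Smith"],
        "Oak Ave": ["Jones", "Smith", "Garcia"],
        "Main St": ["Brown", "Lee", "Jones"],
    }

    def smiths_next_to_joneses(streets):
        """Count Smith households with a Jones household immediately next door."""
        count = 0
        for houses in streets.values():
            for i, name in enumerate(houses):
                neighbors = houses[max(0, i - 1):i] + houses[i + 1:i + 2]
                if name == "Smith" and "Jones" in neighbors:
                    count += 1
        return count

    print(smiths_next_to_joneses(streets))  # exactly one answer exists: here, 2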

Where computers fall down is not the resolution of constrained solutions, but the Kantian qualities of learning, judgment, and intuition, which might be a problem of the boot program. Each human infant acquires a unique boot program, driven by a random learning process particular to each individual. A human adult, or child, can always startle us with a totally novel idea, because nobody knows exactly how the learning program was shaped. Modern computers cannot do that. No matter how cleverly we program them they never truly startle us, because we can always find a way to rerun the program, or get another program on another computer to produce the same results that the first computer did. That is why we gain from computers only the resolution of tautologies at great speed, but never gain from them any new intuitive or judgmental knowledge. (The famous Y2K problem might have the potential to startle us with unpredictable results.)

We can, however, program computers to learn, except that for now the learning is concentrated in the electronic program. The hardware itself does not modify with learning the way the brain does, although this might be a practical rather than an ultimate technological limitation. More of a problem is that during learning the brain undergoes historical and random inputs that supply each brain a unique learning process, although this too might only be a problem of technology. In early anti-aircraft gunnery it was difficult to get the automatic controls of a gun to follow the random evasive action of a human pilot. Engineers solved this by feeding a random "white noise" into the input signal of the tracking mechanism. (White noise, like the actions of a pilot when somebody is shooting at him, is an example of a non-hypergeometric function.) In the human brain the randomness of decision making is a result of several processes. One is the learning experience; another is mood affecting the electrochemistry of synapse firing. Another effect would be quantum fluctuation of finely balanced neuron firing states. Perhaps all these effects could be simulated in a computer by a random noise fed into the program. We could also simulate the learning process in hardware, such that as data flowed through a device, decisions would be remembered to influence the outcome of the next switch. Whichever way, there must be sufficient randomness to the process that it could not be repeated exactly. That is, the computer could only be considered to exercise its own judgment when humans could no longer simulate exactly why it reached the decisions it did. Suppose we built two identical computers, maybe for playing chess, and subjected them to a similar learning process. If despite a similar learning process each machine played its game slightly differently, we could assume the machines were intelligent, and we could learn something new from them.
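
The two-machine thought experiment might be sketched as follows, with all numbers invented: two identically built learners receive the same lessons, but each carries its own private "white noise" source, so their final judgments can diverge in ways no rerun will exactly reproduce.

    import random

    class Learner:
        def __init__(self, seed):
            self.rng = random.Random(seed)  # each machine's private noise source
            self.weights = {"attack": 0.5, "defend": 0.5}

        def learn(self, outcome):
            """Nudge preferences by experience plus a little irreproducible noise."""
            for move in self.weights:
                self.weights[move] += outcome.get(move, 0) + self.rng.gauss(0, 0.05)

        def choose(self):
            return max(self.weights, key=self.weights.get)

    a, b = Learner(seed=1), Learner(seed=2)  # identical hardware, different noise
    for lesson in [{"attack": 0.1}, {"defend": 0.12}, {"attack": 0.05}]:
        a.learn(lesson)
        b.learn(lesson)

    print(a.choose(), b.choose())  # the same training can yield different play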

If all these technical barriers were overcome, would humans be able to build machines with true consciousness?

Again, we must be careful of such questions until we know why they are being asked. The question might be leading into a debate that humans are "nothing but" machines, and therefore can be held no more morally accountable than machines. Or it might be a question related to a fetish about machines, that we could make good frustrated emotional relationships with living creatures by transferring affections to a non-living device. In either case we presume that any person asking such questions already has an answer in mind, even if not one of rational possibilities. On the other hand, we can foresee quite readily several areas where the intuitive power of artificial devices should be significantly enhanced. A prime example would be interplanetary robot probes, where today, somewhat scandalously, many probes carry significantly less processor power than is available on home PCs. (One limitation is that the typical commercial processor is not radiation-hardened against cosmic rays.) We also need better autopilot controls, especially for responses in an emergency, and more intelligent response controls for any industrial process, or even the traffic system or economy, or for playing chess. What we envisage is a form of hierarchical learning, with gradated generalizations of response, as the sketch below illustrates. We could call the higher levels of generalization morals or principles, just as in the brain. Generalized responses could be learned by an evolution program. The program would try to solve problems at the lower level first, but defer to higher principles if the situation became complex or unexpected. If we could get such programs to evolve by experience they would offer unique solutions that might not always be apparent to humans. Once we have problem solving in a way not apparent to humans, we are gaining fresh knowledge from the computer, rather than just the resolution of tautology.
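
The hierarchical scheme might look like this sketch, with invented rules: routine situations are handled by cheap learned low-level responses, while complex or unexpected ones defer upward to more general principles.

    LOW_LEVEL_RULES = {
        "lane blocked": "change lane",
        "slow traffic": "reduce speed",
    }

    HIGH_LEVEL_PRINCIPLES = [
        ("avoid harm", "stop safely"),
        ("preserve the mission", "return to a known-good state"),
    ]

    def respond(situation, complexity):
        """Try the specific learned rule first; defer upward when unsure."""
        if complexity < 0.5 and situation in LOW_LEVEL_RULES:
            return LOW_LEVEL_RULES[situation]
        principle, action = HIGH_LEVEL_PRINCIPLES[0]   # most general first
        return f"{action} (deferring to principle: {principle})"

    print(respond("slow traffic", complexity=0.2))     # low-level rule fires
    print(respond("debris on road", complexity=0.9))   # defers to a principle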

If computers were programmed to learn very high-level responses that we might call emotions, morals, or intuition, would the computer then feel in a sentient mode of understanding? This type of question could be debated forever, depending how far into the future we extend the possibilities. Only, all consciousness ultimately evolves to meet a need proportionate to the complexity of the problem. As we saw, the initial sentient responses were 'pain' or 'pleasure', and these were just a generalized grouping of responses such as 'withdraw' or 'do again'. We could similarly program a machine with low-level responses such as 'pain' or 'pleasure' and later build on the lower-level sentience up to full emotional response, the way higher animals do. Only, even very low-level responses evolved in creatures already alive, and even then only in life that was mobile enough to face complex choices. So it seems that life, reproduction, mobility, and adaptation might be prerequisites for any form of natural sentience, even at a very low level. Humans have not yet been technically able to produce organisms capable of responding to the environment in such a multiplicity of ways, even without any sentience. Production of devices able to display advanced sentience such as intelligence or emotion must therefore be very far in the technological future from anything that we presently understand.

Rather than seeing computers as an alternative to natural, biological intelligence then, we should see them as an extension of natural human intelligence, the way radar or telescopes are an extension of natural vision, or machines are an extension of natural muscle power. This has so far been the great impact of computers. Even simple, non-intuitive computers that can only resolve tautologies from constrained information sets can still act very fast and accurately, freeing the human mind for other problems. Today a person can take a data collection of millions of bits of information and quickly resolve it into a graph, picture, or algorithmic representation, something the human mind cannot do quickly, but computers do very well. The development of intuitive or judgmental intelligence in computers should merely extend this type of mental assistance. In the future it might be possible to produce judgmental or intuitive-type computer runs, based on the computer's prior learning experience or on judgmental principles set in the program. We could, say, present the computer with a problem, and ask it to delineate among the scenarios the high moral and the low moral case, based on guiding principles we, or perhaps an institution, have set as guidelines. Perhaps one day we might see judgment, learning, or intuition not as the output of a fantastic super-machine, but as an extension to the common spreadsheet. Or maybe companies or organizations such as hospitals will keep an ethical decision-making database, constantly updated by precedent of how problems were resolved historically, and with what consequences.
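
Such a judgmental run might look like the following sketch, with weights and scenarios invented for illustration: the machine ranks scenarios against principles an institution has set, delineating the high and low case, while the choice itself remains with humans.

    PRINCIPLE_WEIGHTS = {"patient safety": 3.0, "cost": 1.0, "waiting time": 1.5}

    scenarios = {
        "expand ward":  {"patient safety": 0.8,  "cost": -0.6, "waiting time": 0.7},
        "outsource":    {"patient safety": 0.3,  "cost": 0.5,  "waiting time": 0.4},
        "defer a year": {"patient safety": -0.2, "cost": 0.9,  "waiting time": -0.5},
    }

    def delineate(scenarios, weights):
        """Rank scenarios by weighted principle scores; humans make the choice."""
        scored = {name: sum(weights[p] * v for p, v in effects.items())
                  for name, effects in scenarios.items()}
        ranking = sorted(scored, key=scored.get, reverse=True)
        return ranking[0], ranking[-1]   # the high-moral and the low-moral case

    high, low = delineate(scenarios, PRINCIPLE_WEIGHTS)
    print(f"high case: {high}, low case: {low}")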

Still, we would not expect these programs to make choices. Any person who has actually worked with computers soon sees they are devices for presenting choices, not making them. Often when one resolves a complex tautology on a computer, such as a scheduling problem, it is still necessary to remind people that this is only the solution the computer provides. We might feed a computer thousands of dates by which events happen, but the computer will resolve this into a single crucial date on which we must make a certain decision for everything else to follow. All the computer has done is take the given constraints and resolve the available information into its simplest presentable form. Humans must make the choices of what to do with this information, or judge whether the information the computer was fed was complete, so the final choice is still with humans. Computers as analytical machines are also useful mental adjuncts for extending the original task of human abstraction, in the sense of running scenarios. As Karl Popper said, we think the consequences of a scenario through in order to let our hypothesis die in our stead. From this, an important future task of computers will be the modeling of possibilities, to test in electronic image the choices humans face. If intuitive or judgmental experience can be programmed into computers which do this, it will only increase human options, and the ultimate choices about their own future humans alone will have to make.

The problem of artificial intelligence then is another issue in the debate over cause-and-effect. Those anxious to prove that humans do not have free will believe the proof will come when we can build machines with artificial intelligence. Here we take the opposite view. We could only classify a machine as intelligent precisely when it exercised its own "free will", to the extent that its human programmers could no longer be sure exactly why it made the decisions it did. Some of this is just semantics, but ultimately human choices will decide which way we want to take this debate, and what we wish to use artificial intelligence for.

So, from the perspective of the Theory of Options, if machines can be built or programmed in a way that increases human knowledge, they should be. Knowledge increases options by increasing understanding of real choices, while the freedom and knowledge to exercise choice is what makes us human. Ultimately, the machines of an earlier industrial age did not replace humans but relieved them of physical labor and mechanical chores. So, it is hoped, the computer too will eventually relieve humans of rote mental labor. Today, computers are so useful because they relieve the brain of mental tedium, leaving it free to perform the creative tasks that the brain is good at. Computers providing intuitive or judgmental knowledge will likewise not replace the human role, but let humans arrive more quickly at what they should be doing: making choices over what to do next.

Return to the Theory of Options

Previous 4.5 Intuition and Judgment

Next 5.1 The Origin of Culture

  
