Unlocking the Secrets of the Brain

to Neuroscience, Biogenetics and Biology

Give me a fulcrum and I will move the World

Archimedes, c. 250 BC

Over the last century our knowledge of every branch of science has advanced at an ever-increasing pace, yet now, at the beginning of the third millennium AD, we are on the cusp of perhaps the greatest expansion ever, and in almost every sphere of research. The invention of computers, and the breakneck pace of their development, has played a major role in this endeavour: processing research material, speeding up communications and disseminating information. Computing is transforming business, commerce, industry, finance and the fundamentals of both education and economics. Increasingly, however, we are beginning to realise that this is only a part of the potential contribution computing science can make.


Just as Galileo looked through his telescope, and Hooke through his microscope, computers are enabling us to see things we could not see before. The biogeneticist J. Craig Venter says “The once distant domains of computer codes and those that program life are beginning to merge”. The cognitive science philosopher Daniel Dennett includes a chapter on programming in his book Intuition Pumps and Other Tools for Thinking. The Edinburgh physiologist Jamie Davies comments on the valuable insights computing can offer his discipline. UCL is exploring the convergence of carbon- and silicon-based life forms. David Deutsch and Chiara Marletto, at Oxford, are drawing attention to the similarities between human reasoning, gene replication and programming. The evolutionary biologist Andreas Wagner describes the “deep unity between the material world of biology and the conceptual world of computation”. He goes further, suggesting computers are “the microscopes of the 21st century”.

Computers supplement our Brains

While telescopes and microscopes supplemented our eyes and launched the scientific revolution, the invention of steam power and electricity drove the industrial revolution. This massive increase in the supply of energy supplemented our muscles. Now computers are beginning to supplement our brains, and the impact of the computing revolution will be orders of magnitude greater than all that has come before.

Now that we have half a century’s experience of designing and programming computer systems, we can begin to appreciate the contribution that our burgeoning knowledge can make. There are now two information-processing systems in the universe that we know about. Throughout natural history, the brain has evolved as a means of organising all the organs of a life form to behave as one cooperative, coordinated whole; it does this by processing information from all its sensory and other organs. The new kids on the block are our electronic digital computers, which can do one thing and one thing only: process information, and they have just three instructions with which to do it. Yet the way computers have been designed teaches us a great deal about the way the systems of the brain achieve their almost miraculous tasks.

The Duality of the Brain.

The first and most significant contribution of computing is that it helps us understand the duality of the brain’s neural systems, a puzzle that has perplexed humans since the birth of recorded history. The Egyptians used the word ‘ka’ to describe the ‘vital essence’ that observably departed from the body when a person died. The Hindus and Buddhists thought this was a spirit, or ‘atman’. Descartes’ dualism suggested that while the body, of which the brain is a part, works like a machine, the mind is not material. Given the problems that Galileo experienced, it is not surprising that Descartes chose to describe the mind as ‘soul’. Sir Charles Sherrington opined that the mind “was invisible and intangible, and per se cannot even move a finger, let alone play the piano”. By the late 1930s Erwin Schrödinger was sure that both the brain and the mind must operate by the rules of physics.

None had the privilege of programming a computer!

 Computers are exactly the dual systems that everyone sought.

Hardware & Software

The integrated circuits in a computer, built mostly of semiconductors that we can see and touch, are the Hardware; we have coined the term Software for the patterns of ephemeral, electronically coded instructions – the programs – that flow over these circuits. Without Software, all computers are useless; they cannot even make the tea! Without Hardware, Software programs have nothing to operate upon.

The BRAIN is observably the physical hardware: the neuron networks, the family of glial cells, the glands, the neurotransmitters, the messenger molecules, and more. We can open a skull and see and touch them all.

The MIND (or spirit, soul, ka or atman if you prefer) is the mass of ephemeral patterns of electrochemical signals and electromagnetic fields flowing in waves across the brain all bathed in floods of hormones.

Without the Mind (neural Software) the Brain is dead. Without the Brain (neural Hardware) the Mind has nothing to operate upon. We could say to Sir Charles Sherrington that the Mind may, indeed, not be able to play the piano, but nor can the Brain play the piano either. Moving the fingers over the keyboard requires the coordinated activity of both!

We can now say with some confidence that both the Brain and the Mind are equally tangible and measurable, and both operate within the conventional rules of Physics.


The second great contribution of computing is the concept of ‘programming’: from the design of a system to all the coded functions of a complete software application. The crucial essence of software is that it enables a fixed piece of equipment, or hardware, to be used for an unlimited number of applications. We are all familiar with using a laptop to write text one minute, check a spreadsheet the next, then compute some formula, or Skype a friend. Same hardware, different programs.

Similarly, we have one set of neural networks that, for instance, manipulates the muscles of our lungs, vocal cords, mouth, lips and tongue. Across this single neural hardware system we can speak every word we know, sing, and play many musical instruments.

The instructions at the disposal of the computer programmer are remarkably modest. There are just three: add, subtract, and the ability to jump to another instruction in certain circumstances. Alan Turing outlined such a ‘software’ system in his famous 1936 paper to the London Mathematical Society, in which he described a processing ‘machine’. In many ways the Mind, as opposed to the Brain, can be seen as a Turing ‘machine’.
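The point can be made concrete with a toy sketch. The machine below is a hypothetical illustration of the idea, not anything from Turing's paper: its only operations are adding one, subtracting one, and jumping when a register is zero, yet it can still compute something non-trivial, such as multiplication. All the names and the program layout are invented for this sketch.

```python
# A toy 'three instruction' machine: add one, subtract one, jump if zero.
# Register 'z' is kept at zero so that ('jz', 'z', n) acts as an
# unconditional jump.

def run(program, regs):
    """Execute a list of instructions until 'halt'; return final registers."""
    pc = 0  # program counter
    while program[pc][0] != 'halt':
        op = program[pc]
        if op[0] == 'inc':            # add one to a register
            regs[op[1]] += 1
            pc += 1
        elif op[0] == 'dec':          # subtract one from a register
            regs[op[1]] -= 1
            pc += 1
        elif op[0] == 'jz':           # jump to op[2] if register op[1] is zero
            pc = op[2] if regs[op[1]] == 0 else pc + 1
    return regs

# Multiply a * b, leaving the product in r: for every unit of a,
# drain b into r and t, then restore b from t.
multiply = [
    ('jz', 'a', 11),   # 0: outer loop, done when a == 0
    ('dec', 'a'),      # 1
    ('jz', 'b', 7),    # 2: inner loop, drain b
    ('dec', 'b'),      # 3
    ('inc', 'r'),      # 4
    ('inc', 't'),      # 5
    ('jz', 'z', 2),    # 6: unconditional jump back to inner test
    ('jz', 't', 0),    # 7: restore loop, refill b from t
    ('dec', 't'),      # 8
    ('inc', 'b'),      # 9
    ('jz', 'z', 7),    # 10
    ('halt',),         # 11
]

result = run(multiply, {'a': 3, 'b': 4, 't': 0, 'r': 0, 'z': 0})
print(result['r'])  # 12
```

The sketch shows why the poverty of the instruction set is no limitation: everything else is supplied by the pattern of instructions, which is the whole point of software.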

We invented software programming and coding. In the process we have discovered the software of biogenetics, neuroscience and biology et al.

We are increasingly realising that this software/hardware dichotomy is common to many things. It is basic to the operation of the genes and to how we learn from experience; and, in other species, to how bees operate as hives, fish swim in shoals and birds fly in flocks. There is increasing evidence that computing has enabled us to explore a whole new discipline: the science of software. For centuries we have really only known about the ‘hardware’ – what we could see and touch. We are now beginning to realise, in computing itself, but also in biogenetics, in biology and in psychology, that the software actually plays the dominant role. Current computers are largely transistor-based, but researchers are racing to build quantum, biological and light-based processors using newly discovered materials. They all depend on software, and the same basic software architecture will work on whatever hardware is invented.

Creativity and Evolution

Our knowledge of the hereditary transfer of physical information from one generation to the next via DNA is advancing by leaps and bounds. This ‘Darwinian’ evolution accounts for the growth of our physical structures – all our cells and organs, including the brain: the hardware of our body. Computer science is helping us understand that there is a parallel process. We buy a new word-processing program and replace the previous version by loading it onto our laptop.

Learning a new language is rather more difficult. The information that flows over the brain – the software – is transferred between generations by learning and education. Language, for instance, is not inherited through our DNA. It is one of many skills that has to be learned by each individual from their parents and peers. It is part of what can best be described as cultural evolution. Classical Darwinian, or hardware, evolution advances very gradually over many hundreds of generations. Software evolution can advance significantly within a single generation.

Understanding the logic of these systems

Advances in biogenetics, and our growing understanding of the plasticity of the brain in the context of software systems design, are giving us a framework with which to develop a logical architecture for creativity. We are beginning to identify more than one stream of evolution, and a logical explanation for each.



The five great questions in cognitive neuroscience begin with two pairs: Information and Memory; and Intelligence, and Thinking with the family of Creativity. All four contribute to the fifth, the central question that has fascinated mankind throughout recorded history: Consciousness. Computing, and particularly software science, makes a major contribution to all five.

The five are interrelated, but we can study them in this sequence. Then, by comparing and contrasting our electronic brains with our natural ones, we can venture some definitions and outline the potential future of computing.

1 and 2                INFORMATION & MEMORY

Information and memory are the foundations of all our knowledge and understanding of everything. Information is what we store, remember and recall. Without memory we could not recognise that things we are witnessing might have occurred before and react accordingly.  Without memory we could not learn from experience and build up our knowledge.

When we are born we can do almost nothing, but we can learn to do almost anything.

  1. Introduction to Information

A soundbite, catch-all definition of information is that it is order. More helpfully, we can distinguish four types of information. (1) There is the physical information of the natural world all around us: how everything works and how everything happens. (2) There is biological information: the way information is passed from generation to generation – biogenetic information – and how plants and animals are nourished – photosynthesis, for instance. Then there are the ways that animals in general, and human beings in particular, use and process information – neural information. The sensory and other organs generate a continuous flow of information as they monitor what is happening in the external environment; similarly, the autonomic, hormonal and immune systems generate a flow of information within the body. (3) There is human information. We have developed various forms of specialised information systems, most typically speech and language, but also mathematics, and the notations, symbols and descriptors of music, formulae and much else. (4) Lastly, we have invented a complete matrix of artificial, or machine, information, which drives all our communications and computing systems.

Machine information maps very efficiently onto language and mathematics, but we can observe that language does not completely map onto neural information, particularly its internal functions. There are quite large areas of our experience for which, although we have had speech for at least ten thousand years, we have not developed words and phrases: particularly our emotions, perceptions, impressions and sensations. We cannot accurately define many of our feelings of attraction, beauty, leadership, and so forth. We use the word qualia for this family of experiences. Many people have found this a problem in our understanding of ourselves, but actually it is a valuable pointer to the construction of theories of our knowledge and understanding of consciousness. We should ask ourselves: “what is the real barrier to finding words and phrases that accurately define some of our most basic emotions?”

In another dimension, there is information that is valuable of itself, and information that is of little or no direct value but enables us to generate or find other information: formulae, recipes and algorithms, for instance.

To help define information, Claude Shannon argued that information is the unifying principle of science, while John Archibald Wheeler suggested that in due course we will learn to understand and express all of physics in the language of information. Put another way, the laws of physics are the algorithms of information processing.

Perhaps we can go one stage further?

Biogenetics, neuroscience and computing are all about different aspects of processing information.

A recently suggested reason for the apparent weirdness and uncertainty of quantum physics may be that the information about quantum states is incomplete (New Scientist, 11 April 2015, p. 36).

  2. Introduction to Memory

Memory is a complete subject in its own right. At this point in the argument it is important just to note that computer hardware is fixed: everyone can only work with what comes from the manufacturer. The brain – particularly defined as the hardware – is very different. We are born with some two billion neural networks, half connecting up every organ in the body and half concentrated in the brain. A mature brain has grown upwards of a trillion additional links and structures, holding the massive memory systems of all our experiences – of the order of one hundred thousand words for starters, and that is just one language. We grow the neural circuits to learn to walk, talk, run, write, swim, code, count and drive cars virtually automatically. Donald Hebb summed up the process: “neurons that fire together, wire together”. We can discuss later how this forms, then strengthens, memory, and also study pointers to how the memory systems degrade: dementia and other malfunctions. It is noteworthy that the term ‘plasticity’ is back in fashion to describe how the brain is continuously growing, reorganising and replacing worn or damaged functions.
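Hebb's slogan can be caricatured in a few lines of code. This is a deliberately crude sketch of the principle, not a model of real synapses: units that are repeatedly active in the same pattern accumulate a stronger ‘synaptic’ weight than units that are not, and every name here is invented for the illustration.

```python
# Crude Hebbian learning: strengthen the link between any two units
# that are active in the same pattern ("fire together, wire together").

def hebbian_train(patterns, n_units, rate=0.1):
    """Return an n x n weight matrix grown from repeated activity patterns."""
    w = [[0.0] * n_units for _ in range(n_units)]
    for pattern in patterns:
        for i in range(n_units):
            for j in range(n_units):
                if i != j and pattern[i] and pattern[j]:
                    w[i][j] += rate  # co-activity strengthens the link
    return w

# Units 0 and 1 repeatedly fire together; units 2 and 3 fire together once.
experience = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
]
w = hebbian_train(experience, 4)
print(round(w[0][1], 2), round(w[2][3], 2), round(w[0][2], 2))  # 0.3 0.1 0.0
```

The point of the sketch is simply that repetition grows structure: the most-repeated pairing ends up with the strongest link, and unpaired units end up with none.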

The brain mind has primarily evolved as a ‘learning system’. Information and memory are tied intimately together. Indeed, static information, or memory, can be said to be ‘potential information’. Streams of signals transmitted through the system can be said to be ‘kinetic information’. They do work. The latter grows the former. The former regenerates the latter. The software of the mind grows the hardware of the brain!

However, information, whether active or stored in memory, is only of use if we can put it to use. There is the whole subject of converting and translating pure facts into meaning: the implications of pure information, and the extrapolation of simple lists of data to solve problems. As the brain processes information the whole body reacts, and can be instantly ready to respond. No computer has any concept whatever of what it is processing.


3 and 4                INTELLIGENCE, THINKING & CREATIVITY

The third and fourth questions centre on the great debate over what we mean by intelligence. We can observe natural, autonomic intelligence: the great coordinator. Intelligence is the essential skill all brains have evolved to enable the fastest possible response to incomplete information: that is how we survived. The ability to identify patterns in masses of information. To respond to the unexpected.

But then there is the apparently unique human skill of prediction, imagination and invention; of abstract ideas and concepts. The whole family of creativity: not only in the arts, crafts, sciences and entrepreneurship, but in every aspect of life. Creativity, so crucial to us, and, apparently, completely absent from our computers.

In this section we can explore:-

Computer programming

Biogenetic programming

Neural programming

Nature and nurture

Different types of computer programming – software: replicated in biogenetics and the brain

Energy supply and generation


Junk: in DNA and Programs

Self Organising Systems

5                CONSCIOUSNESS

Last, but by no means least, is consciousness.

All four add to our understanding of consciousness; however, the contribution of computing is more subtle. Biogenetics, cognitive neuroscience and psychology, plus software science, can together make a surprisingly massive contribution by highlighting what we would have to design and build into our computers to emulate what nature has endowed us with.

Role of Synapses: Synaptic tension: Implications of variable width synaptic gaps.

Hypothesis: that the Mind – software, generates conscious experiences, while the Brain – hardware, determines when we sleep.

We can finally explore the NEAR FUTURE




We are on the brink of establishing a new area of science.


Some short (soundbite) DEFINITIONS

We have artificial (machine) information and memory.

Now we can begin to extend the parameters of the design goals of artificial (machine) intelligence;

begin to outline the parameters of what we will need to design into software to generate artificial (machine) thinking and creativity;

and, last but not least, explore the horizons of artificial (machine) consciousness.

With these overall tasks in mind let us address in more detail these five great questions.




One of the great conundrums that puzzles cognitive neuroscience is how information is represented in the body in general and the brain in particular – what we termed above neural information. Neural information systems have evolved in all animals, enabling the brain to coordinate all the organs of the body to behave as one coordinated whole. The exclusively human information systems of speech and language, then writing, then mathematics, and so on, have enabled us effectively to connect these internal communication systems to the brains of all the other members of a community. Thus we can pass on what we have experienced and what we are thinking; discuss and debate what other people think; and come to group decisions on how to react, how to behave and how to work together.

The first great breakthrough in evolution came when single-celled organisms grouped together for protection, enabling individual cells to specialise. Speech enabled groups of people to work together and reap the benefits of the division of labour, enabling individuals to specialise. Computing, and its role in communications, is not just accelerating this trend; it is generating a great leap forward.

We have noted that human information does not map onto all neural information, but let us put the problem of describing qualia to one side for now. Similarly, let us temporarily park the natural information of the planet and universe around us, and concentrate on the complete matrix of machine information systems and structures recently invented by the engineers and designers of the telecommunications and computing industries.

Neuroscience can learn a great deal from these inventions.

Artificial (Machine) Information

It comes as a surprise to many people that there is not one single letter, number, picture or sound in any computer that has ever been built. Computers are built of semiconductors – usually transistors at present – that can be in one of two states; hence computers are said to be binary. Samuel Morse showed that messages could be sent along telegraph wires as patterns of binary signals, and Émile Baudot later worked out that patterns of five binary signals would suffice for text: the thirty-two combinations in each pattern could be coded to represent the alphabet and the numbers [2^5 = 32. One code (letter shift) indicated that the following codes were letters; another (figure shift) indicated that the following codes were numbers or punctuation.] Nowadays the entire computing, telegraphy, radio, television and sound industry is based on this principle, and a pattern of eight ‘bits’ is the world standard [2^8 = 256 combinations, which accommodates upper- and lower-case alphabets, numbers, punctuation and symbols]. Pictures and sounds are coded as patterns of these eight-bit ‘bytes’, and streams of these eight-digit bytes flow round the system. Each digit is of equal value; hence computers are said to be digital. This is the structure of artificial, or machine, information. Claude Shannon did seminal work on the speed, capacity and format of the transmission of machine information.
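The arithmetic above is easy to check, and any programming language will show the eight-bit patterns directly. A small illustration (the character codes shown are the standard ASCII ones, not anything specific to this text):

```python
# Five binary signals give 32 combinations; eight give 256.
print(2 ** 5, 2 ** 8)  # 32 256

# Every character stored in a computer is just such an eight-bit pattern.
word = "tree"
bits = [format(b, '08b') for b in word.encode('ascii')]
print(bits)  # ['01110100', '01110010', '01100101', '01100101']
```

Note that the patterns mean 't', 'r', 'e', 'e' only by convention: the agreed code table, not the hardware, carries the meaning.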

If some extra-terrestrials did not know this standard code, nothing on the World Wide Web would have any meaning whatever to them. Similarly, if a nation on our planet decided to promote an alternative coding system, it would be entirely possible to do so – technically, at least.

Neural Information

It comes as an even bigger surprise to many that there are no letters, numbers, pictures or sounds, let alone words, in our heads. Evolution did not have the benefit of an inventive telegraph engineer, so there is no binary code in the brain! There is no coding system, as such, at all. Nor is the brain digital. Information is transmitted by the mind and stored by the brain in analogue patterns – that is, the strength of each part of every pattern is as significant as its form and position.

We can understand this concept more easily by studying the development of language. Our best guess is that when our distant ancestors started to walk upright, this new posture stretched their necks and vocal cords, enabling them to widen their repertoire of sounds. The first sounds – or proto-words – were patterns of sound that were accepted by everyone in a community as referring to specific individual things or actions. Neural patterns evolved to activate the muscles of the lungs, vocal cords, lips, tongue and mouth to produce patterns of sound waves, while the hearing system of the ears evolved to analyse these patterns of sound waves into recognisable streams of neural activity.

Representation of words

The neural pattern to say and hear the word ‘tree’, for instance, was probably an extension of the visual image of trees, the feeling of trees, the smell of trees, the sounds of trees, and the taste of trees. Every experience associated with any aspect of trees symbiotically extended the neural networks both of trees and of the record of these new experiences. Thus there was not, nor is there now, a specific neural network for a specific word. There is a massive network of all the associations and cross references to our perceptions, impressions, sensations and emotions associated with all our experiences of that word, which includes sending streams of signals to the muscles to speak various versions of that word.

However, there is a dichotomy here. We do not inherit language. Not one single child has ever been born able to speak one single word! Language and much else is not inherited through our DNA. It is passed on generation to generation, laboriously learned by each infant from their parents, peers and community. Language is a cultural skill and its fast expansion can be called cultural evolution.

Writing, reading and printing added another dimension. Again, the brain and mind advanced synchronously. The brain had to evolve the capability to manipulate the muscles of the fingers to move a writing instrument with a degree of precision never previously needed, while the eyes had to decode shapes with a new degree of accuracy. Reading and writing involved the mind creating dictionaries of new words, and expanding words into alphabets and spelling; and speech into phonemes and syllables. Every child has to learn to build massive new neural structures to cope with storing, cross referencing, editing, reinforcing, recognising and retrieving a hundred thousand words or more, and that is just in one language. Then there is the meaning of each word and the meaning of strings of words, where the position and sequence is as significant as the words themselves.  Then there is the emphasis placed on words.

In short order – over less than four thousand years – less than two hundred generations – the brain has had to respond to the invention of writing; and every child has to cope over the first decade of their lives with a mass of interleaved, interrelated, associated and connected hierarchies of neural network patterns. We have one verbal input system, ears; one verbal output system, speech; another input system is the recognition of visual patterns, reading; blind people can ‘feel’ braille; we have another output system making precision images with our fingers, writing; or sequencing characters, and more recently, keyboarding.

Interpreting Words

There is another aspect of language that suggests tantalising implications. The western world is used to an alphabet with five vowels and twenty-one consonants. Curiously, we can understand a sentence with the vowels removed relatively easily, but it is nearly impossible to fathom the meaning of a sentence with the consonants removed. For instance: ‘N smll stp fr mn, n gnt lp fr mnknd’ is fairly easy to decode. ‘Oe a e o a, oe ia ea o ai’ is impossible to understand. [Words are easier if the consonant comes first: ‘small’ is easier than ‘one’.]
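The two garbled versions of the quotation are easy to reproduce. A small sketch (the decision to keep spaces and punctuation intact is my own choice for the illustration):

```python
VOWELS = set('aeiouAEIOU')

def drop_vowels(text):
    """Remove vowels, keeping consonants, spaces and punctuation."""
    return ''.join(c for c in text if c not in VOWELS)

def drop_consonants(text):
    """Remove consonants, keeping vowels, spaces and punctuation."""
    return ''.join(c for c in text if not (c.isalpha() and c not in VOWELS))

quote = "One small step for man, one giant leap for mankind"
print(drop_vowels(quote))      # n smll stp fr mn, n gnt lp fr mnknd
print(drop_consonants(quote))  # Oe a e o a, oe ia ea o ai
```

The asymmetry the code exposes is exactly the puzzle: both outputs discard a comparable amount of signal, yet only one of them remains decodable by a human reader.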

What underlying systems account for this? We know that the brain is very good at processing incomplete information fast and reacting quickly – it is a fundamental part of the genius of intelligence – but this ability seems to be in a different class. Our best guess of the origin of the alphabet is that it was an extension of Egyptian hieroglyphs, probably by the Phoenicians, as a decisive move from shapes for things (logograms) to shapes for sounds (phonograms). The first Phoenician alphabet did not include vowels, although the Egyptians had determinative hieroglyphs. It is generally believed that vowels were added by the Greeks. The result was so successful that it is largely unchanged today. But why the division? Is it that the vowels are effectively a musical component, providing form, structure and pattern to the simple phonogram streams? The sounds mammals make to signal warning, fear, food or mating are effectively vowels. Perhaps consonants came later to differentiate gradations of warning, food sources and so on – the opposite of most received wisdom! Or is there some deeper reason to do with the processing and decoding of the neural signals from the ears? Is it possible that vowels are a hereditary ability transmitted by Darwinian evolution, while consonants, and therefore words, are a skill transmitted by learning and education – by cultural evolution? Either way, the fact that the brain can decode some groups of partial information so easily, and other groups not at all, is an important factor to keep in mind. Furthermore, it reinforces the argument that music and words are processed differently.

Silent Speech.

When we are thinking of a word or phrase and wish to communicate it, we have a choice: either to activate the vocal cords to say words or to activate the muscles of our fingers to write them. There is also a third alternative. We have the ability to speak silently in our heads; to say words without making any sound and to hear those words without speaking them. When we read, we hear ourselves say the words in our heads. When we write or keyboard some text, we compose it in our heads, then say it silently as we write or type it. There appears to be a neural short-cut linking the input and output systems directly, bypassing the vocal, hearing, visual and writing systems. Thus we have the means to think in our heads, pursue ideas and abstract concepts, invent imaginary worlds, and dream up solutions to problems – all entirely silently, anywhere, anytime, about anything.

There are very few examples in neural software systems where a neuron, or neural network, associated with some discrete piece of information, like a word, has the choice of any one of three alternative outputs. We can observe ourselves doing it every day, but we do not understand the neural physics of how that choice is executed. There appears to be a switch, but where? And what activates such a switch? The alternative would imply that words destined for each output are stored separately – very unlikely.

The next step in the chain is how raw information can be expanded to generate and deliver meaning, but before moving on it is useful to expand the reference to the relationship between language and music. Our best guess is that language is a derivative of music. Music appears to have a direct line to our emotions, while language intermediates that direct link. Research is progressing into using music as an additional carrier of information under the term Sonorics.


Information is much more than just a string of facts or a list of measurements. The meaning of information suggests both the implications of that information and how it can be put to use by extrapolating one source of information to elucidate another. The Mesopotamians started a house of knowledge. The Greek Egyptians built the Library of Alexandria. More recently, names like Diderot and the Encyclopaedia Britannica, and dictionaries and lexicons, blaze a trail to the World Wide Web. We are on the cusp of having the entire body of human knowledge, art, culture, commerce, instructions, skills and emotions instantly available. However, this gives us only a limited type of meaning: one word defined by a mass of other words, pictures et al. If a dictionary is in a foreign language that we do not recognise, it provides us with no meaning whatever.

This observation draws attention to the fact that the meaning of words and phrases in the mind brain must be wholly other and, while we can and do use words to describe the meaning of other words, the actual meaning to us of a word or phrase must be anchored back, maybe through a number of subsequent experiences, to words directly associated with the sensory and other organs, and with the perceptions, impressions, sensations and emotions generated by the glands and hormones associated with all our experiences of those words. Even when we hear, or just think about, single words, we experience an emotional reaction: fear, delight, suspicion, revulsion and so forth. For instance, hearing the word ‘filthy’ gives us a momentary sense of apprehension.

It would never occur to us that a dictionary or on-line encyclopaedia has any sense whatsoever of the meaning of the words or phrases printed on a page or displayed on a screen. The descriptions are just lists and algorithms attached to each word. Computers are no different. The very fact that a computer has no sense whatever of the ‘meaning’ of the ‘facts’ it is processing helps us clarify what meaning is, and how information is extended into meaning in the brain mind. It can be argued that one principal function of a brain mind is to extract meaning from observations – from information.


Another fundamental difference between computers and the brain mind is how both create, process and access memory.

Information (data) and processing are kept separate in all computing. The processor executes the list of steps set out in the program, manipulating inert information. In the brain mind there is no processor (or homunculus). The brain is driven entirely by the information it receives.

Computer programs input information, mainly from keyboarded texts and measurements, but also from other computers and a variety of scanning and measuring systems. This ‘data’ can be processed, compared with stored records and output to printers and other computers, and new or updated records are made in what programmers usually refer to as databases: patterns of magnetic digits on a variety of media – hard drives, interchangeable disks and memory sticks. The design of databases is a whole industry of its own. Information needs to be filed in a database in a way in which another program can locate it. This is a fundamental point: computer systems are best designed on the basis of how they will need to be used in the future.

The brain mind simply cannot work that way. The neuron networks can have no knowledge whatsoever of how information coming in from the eyes, ears and other senses may be used or needed in the future, if at all. All natural systems are self-organising. They have no concept of what might happen in the future. Perhaps we can coin the term ‘selfortems’ for self-organising systems?

Computer databases are built on preselected keywords, or designed with indexes that are often structured in complex hierarchies depending on the application. Relational databases use cross-references. Whatever referencing systems are used, they are all updated as new data arrives or is edited. If the program cannot find something through these keywords, indexes and cross-references, the only alternative is for the program to examine every item in the database, one after another from the beginning.
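The contrast between the two access paths can be sketched in a few lines of Python. The records and keywords here are invented for illustration; the point is that the index must be designed and built before any query arrives, while the fallback is a scan of every record from the beginning.

```python
# Hypothetical records for illustration only.
records = [
    {"id": 1, "keywords": ["invoice", "march"], "text": "March invoice"},
    {"id": 2, "keywords": ["payroll"], "text": "Payroll run"},
    {"id": 3, "keywords": ["invoice", "april"], "text": "April invoice"},
]

# The index has to be designed in advance, and rebuilt as data is edited.
index = {}
for rec in records:
    for kw in rec["keywords"]:
        index.setdefault(kw, []).append(rec["id"])

def find_by_index(keyword):
    # One dictionary lookup, however large the database grows.
    return index.get(keyword, [])

def find_by_scan(keyword):
    # The only alternative: visit every record, one after another.
    return [rec["id"] for rec in records if keyword in rec["keywords"]]
```

If the designer failed to anticipate a keyword, `find_by_index` returns nothing and the program is forced back onto `find_by_scan` – which is exactly the fundamental design point made above.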

Neural Memory

The structure of neural memory is completely different. The brain has no idea whether a present experience is the beginning of a whole new framework of important information, or something trivial. We know from many experiments that images seen for twenty seconds or so always create a minimal memory ‘trace’. We can reliably demonstrate that this trace is available so that, if a similar image is seen, it can be recognised. However, the trace may not be strong enough to support the more complex function of recall. Repetition of an image, or experience, strengthens existing structures and adds cross-references to everything associated with each repetition. If a whole new neural network develops and is in frequent use, the axon and dendrite filaments are insulated, ‘myelinated’, and the synaptic links are strengthened. More and more usage grows more and more cross-references. Motor networks – learning to ride a bicycle, for instance – often start as whole-brain activities as we struggle to coordinate our hands and feet and react to the balance mechanism in our ears. As we become proficient, this network is gradually edited down to a resilient network, which becomes almost automatic. We no longer have to ‘think’ what to do. The term ‘autopilot’ is often used.

Eric Kandel won his Nobel Prize by demonstrating how the simple sea slug Aplysia grows new neural structures to respond to new situations.

We can observe that the ambient emotional state has a major effect on the formation of memory. Most people have long-term instant recall of traumatic events, or of experiences with loved ones, probably because the brain is flooded with adrenaline to maximise all reactions. Similarly, the formation of memory is enhanced if more than one sense is involved – multimedia acquisition. The cross-references strengthen each individual trace.

We have a fairly clear idea of how these links and structures are grown. In the paragraphs above we quoted Donald Hebb’s axiom. Our best guess is that where two adjacent neuron networks are simultaneously active, their electromagnetic fields overlap and glial cells are attracted to form a temporary, speculative bridge. Messenger molecules may carry out the same bridging task for more distant simultaneously active neurons.

These ‘glia bridges’ are not dissimilar to the structures of glia scaffolding that help the nascent brain develop in the womb (see p 15).

Repetition strengthens these tentative links. The timing of particular movements can be fine-tuned by sliding the synapses of axons along the receiving dendrites to lengthen or shorten the distance, and therefore the transmission time, to a muscle or adjacent nucleus.

Whenever these neural structures are stimulated, they are capable of generating electrochemical signal patterns similar to those that created the structures in the first place. Hence kinetic information flowing through the neural networks automatically causes the growth of new neural structures, or potential information – memories. When these memory structures are activated they generate kinetic information that does work similar to the original activity. Or, in computer language: software grows hardware, and hardware generates software. And so we learn. The process is automatic. We can also learn how to learn consciously – that is what the whole process of education is about – using various techniques and practices, mostly based on repetition, stimulation and encouragement. The strongest drivers are inquisitiveness, the ambition to explore, and competition.
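Hebb’s axiom, quoted earlier, can be sketched as a toy calculation. This is a minimal illustration of the principle only – the unit names, the weight matrix and the learning rate are our assumptions, not anatomy:

```python
# A toy sketch of Hebb's rule: units that are active together have the
# link between them strengthened, and repetition builds the structure up.

def hebbian_update(weights, activity, rate=0.1):
    """Strengthen the link between every pair of co-active units."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j:
                weights[i][j] += rate * activity[i] * activity[j]
    return weights

# Three units; the first two repeatedly fire together ("repetition
# strengthens these tentative links"), the third stays silent.
w = [[0.0] * 3 for _ in range(3)]
for _ in range(5):
    hebbian_update(w, [1, 1, 0])
```

After five repetitions the link between the two co-active units has grown, while no link to the silent unit has formed – a crude picture of a speculative bridge being consolidated by use.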

Accessing Memories

We can access information in many different ways. Computer programs try to use the codes, keywords, indexes and references designed into the system of each application – hence the endless account numbers everyone has to endure. Otherwise the program has to look at each individual entry, one after the other!

The brain finds information in broadly three ways: by association, by a form of triangulation, and by using words as indexes. If we are shown a picture, say of Tower Bridge, and asked which city it is in, we will know the answer if we have a neural link – a previous direct association, historic connection or cross-reference – between that iconic image and London.

If the question is ‘What is the capital of Britain?’, then we have to activate all the networks associated with Britain and all the networks associated with capital cities, and hope that these two networks have an overlap that yields ‘London’. If the networks generate strong signals we can answer confidently. If they generate only weak signals we may answer hesitantly. If they generate very weak responses, these may not have enough energy to penetrate our consciousness and we will conclude we do not know the answer. In this circumstance we may be aware that we do know the answer although it seems just beyond reach: we say the answer is ‘on the tip of our tongues’.
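This form of triangulation can be sketched as the overlap of two activated sets. The association lists below are invented for illustration; the mechanism – activate everything linked to each cue and look for what they share – is the point:

```python
# Toy 'networks' of associations, invented for illustration.
associations = {
    "Britain": {"London", "Manchester", "rain", "Parliament"},
    "capital city": {"Paris", "London", "Tokyo", "government"},
}

def answer(cue_a, cue_b):
    """Activate both association networks and return their overlap."""
    overlap = associations[cue_a] & associations[cue_b]
    # A single strong overlap yields a confident answer; anything else
    # is the 'tip of the tongue' case.
    return overlap.pop() if len(overlap) == 1 else None
```

Here the two cues share exactly one member, so the overlap yields a confident ‘London’; had the overlap been empty or ambiguous, the sketch returns nothing, mirroring the hesitant or failed recall described above.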

This strongly suggests that memory efficiency is at least partly a function of the ambient hormone mix and of the energy employed: initially the strength of the input signals; then the urgency of our need to access something, as in an emergency; or simply the strength of the signal generated when potential information is converted to kinetic information. Thus energy generation, nutrients and the fuel supply clearly play a major role in all aspects of memory. A weak energy supply may well be an important contributor to brain malfunction – dementia among other conditions.

Memory aids

Memory is so important that many people have studied it and invented ways of improving memory skills. Perhaps the earliest known example is Simonides, circa the 5th century BC, who accurately recalled the names of the guests at a dinner by remembering where each of their faces had been around the table.

There is a long and entertaining list of proprietary ‘memory aids’ down the centuries. Interestingly, most of these aids work very well for their promoters, and for those convinced of their efficacy, but for few others. There appears to be a psychosomatic effect.

Various general-purpose aids include associating the things to be remembered with objects in a room or along a pathway. Others use acronyms and sentences. ‘No plan like yours to study history wisely’ has helped many generations of students recall the royal families: Norman, Plantagenet, Lancaster, York, Tudor, Stuart, Hanover and Windsor.

The valuable roles of rhythm, rhyme, cadence, metre, alliteration and onomatopoeia hint that patterns and wave forms, along with music, play a strong role in the systems of the brain. Reciting poetry, or the lines of a play, is interesting. Learning such a sequence of words suggests a ‘metanetwork’ linking the words together. By activating the metanetwork, each word in sequence can be recited. And the profile of each word includes a reference to the fact that it appeared in that poem! Similarly, we have a record of each time we recalled a memory and of the circumstances. Our imaginations enable us to create memories of events that have not happened: the plot of a novel, for instance.

Computers store chunks of information as instructed by their programs. The brain has to be more subtle. It is much easier to learn new information if there is some form of context, or if we can build on something already learned. We tend to deride the Greek belief that the world was made up of earth, air, fire and water. However, it provided a foundation on which to expand ‘earth’ into the whole family of minerals and geography, ‘air’ into gases, ‘fire’ into energy, and ‘water’, without which there would be no life. The first steps into the unknown are always the most difficult. Simple frameworks like this provide a foundation to build on and to teach to others.

The implications for the science of education are profound. Our burgeoning knowledge of how the brain grows memory structures should surely guide the design of the curriculum. We now know what goes with the grain of how the brain learns: first build a framework by small incremental steps, preferably borrowing from existing knowledge, then fill in the detail.

The Dawn of Abstract Concepts and Ideas

Just before we move on from information and memory, there is one important observation that flows from the development of language. Initially, words appear to have been ‘labels’ for things and actions. It is easy to see how the neural structures representing these words were anchored to the neural representations of tactile and visual sensations. One small addition may have been the precursor of much of our human development: people started to describe things as, say, big or small, fast or slow. No one has ever seen a ‘big’ or a ‘fast’. The humble adjective and adverb are not anchored to anything tactile or visible, or to any sensory input. One adjective can apply to many nouns, so the composite ‘phrase’ came into existence, where the meaning of the whole was greater than the sum of its constituents. Adjectives and adverbs can only be described in terms of other words. They exist only in neural structures. They are abstract. Thus we took the first step towards being able to think in the abstract: to reminisce, predict, imagine non-existent worlds, to invent, to create new ideas and concepts – to think.


Serendipitously, computing has thrown new light on Descartes’ other famous insight: ‘I think, therefore I am’. From the very beginning of computing, programmers realised that the computer’s unique capacity to process massive information structures and identify complex patterns had exciting potential. They soon coined the term ‘artificial intelligence’. However, rather like the term ‘electronic brain’, this title has caused more harm than good. Much research has been frustrating, but recently some major and extremely valuable breakthroughs have been achieved, principally in the analysis of massive data sets and the identification of patterns that it is doubtful even an unlimited team of people could ever have found. Many people think that a program that can beat a chess master, win a quiz game or pilot an aeroplane deserves to be called intelligent. It rather depends on the definition of ‘intelligence’. And this is a quagmire because, perhaps surprisingly, we do not have a definition of intelligence, nor are we anywhere near one. The intelligence quotient, or IQ, test has been a source of controversy since its development a century ago. It is, effectively, our only test of mental ability. It clearly measures something – some ability, or abilities – but there is no consensus, and much furious debate, on what.

Attributes of Intelligence

Observably, the whole brain mind system has evolved to respond as fast as possible to incomplete information – and survive. Speed is one essential quality of intelligence (and of IQ tests in particular): speed of recognition of previous experiences and their outcomes; speed of acquisition and storage of information; speed and accuracy of recall; speed of identifying repeated patterns; speed of identifying the implications of information; speed of converting content into comprehension. All these extremely valuable attributes have one thing in common: they exist, to a greater or lesser extent, in all living things, especially our closest relatives among the mammals. They can all be described as executive functions.

As computers get faster, and our logic, system design and programming skills advance, so we can build these types of system with ever greater complexity, sophistication and efficiency. However, our experience of computing in general, and of the quest for artificial – or perhaps we should say ‘machine’ – intelligence, highlights the fact that computers can only process information. Computers can execute what already exists ever more efficiently. The question is: can computers do more?

The great genius of the brain mind system lies in its creative functions: to modify our reactions; to extrapolate past experience to ‘imagine’ the future and predict the possible course of events; to make provision for eventualities and change our environment; to invent new solutions to old problems. In particular, to think: to come up with something completely new.

Given these insights we can begin to see that, while intelligence is a very valuable attribute, it is essentially about the execution of known solutions – responding – as fast as possible. It is about quantity, and so makes a relatively small contribution to our conscious awareness. On the other hand, the ability to respond to situations and events in new and unpredictable ways, and to build whole new worlds, ideas and concepts in our imaginations, is about the quality of our decision-making and behaviour, and so is a relatively large component of our personalities.

Arguably, therefore, we can say that Descartes was correct. To be able to think is an attribute essentially unique to human beings, and largely the basis of our consciousness. It is the source and foundation of who we are. Cogito ergo sum.

Evolution of Thinking

Where have these attributes come from? How have they evolved? The roots are very deep. Physicists tell us that electricity flows; it cannot stay still. Our distant ancestors saw the grass wave threateningly and their legs were running before the danger had consciously registered. Signal input, signal processing, output signal to muscles: end of story. However, to be able to see a predator and accurately judge its direction and speed of travel was very valuable. This required a brain that could process a succession of images, temporarily storing them so that the sequence could be compared, the differences noted, and speed and direction judged. Survival pressure selected for this breakthrough. The ability to hold an image, even temporarily, enabled us to interrupt the automatic learned response, evaluate alternatives and select a better solution. Nobel laureate Gerald Edelman studied this phenomenon in the 1990s and coined the phrase “the remembered present” for this facility to capture and hold a fleeting image. A similar technique is the way a stream of radio signals is used to refresh the images on a television screen so that they appear stable.
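The frame-comparison trick at the heart of this judgement can be sketched in a few lines. The one-dimensional positions and the time interval are invented for illustration; the point is that holding one observation just long enough to compare it with the next is what yields speed and direction:

```python
# A toy sketch of the 'remembered present': hold the previous observation
# briefly, compare it with the current one, and judge speed and direction.

def judge_motion(previous_pos, current_pos, interval):
    """Estimate speed and direction from two briefly held observations."""
    displacement = current_pos - previous_pos
    speed = abs(displacement) / interval
    direction = "approaching" if displacement < 0 else "receding"
    return speed, direction
```

A single observation gives neither quantity; only by retaining the earlier frame can the difference, and hence the predator’s motion, be computed.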

Initially this was of only marginal value, as it requires time. Being chased by a predator is not a good moment to stop and ‘think’ of a better way to escape! After the event, sitting round the fireside, language allowed the tribe to reminisce, ponder and discuss alternative future strategies: time to think and debate, to promote alternative solutions and, in the process, to grow new neural structures. The next time a crisis occurred, these new neural structures were in place to generate new behaviours that outwitted the predators. Mankind had created a new weapon: the ability to think of a better solution and execute it on future occasions – to learn from experience. This introduction of time to reflect into the system started the long journey of dividing intelligence and thinking into the two separate skills we recognise today.

Computers better than humans

Throughout history we have extended our knowledge by painstaking observation and recording of phenomena, then trying to discern patterns that we can use to extrapolate and predict future events to verify our theories. We call this the scientific method. ‘Artificial’, or machine, intelligence is particularly good at accumulating ever larger databases and then searching them for patterns at ever increasing speed. As Schrödinger and others pointed out, this is the basis of much of physics today. It is the basic logic behind the computer programs that can beat chess masters and win quizzes. Of far more significance, however, this ‘data mining’ is the beginning of a truly professional means of identifying trends, able to forecast things as diverse as epidemics of illness and economic behaviour. Arguably, computers can identify patterns – order out of chaos – significantly better than brains.

There are other tasks that, arguably, computers can do better, more quickly and more accurately than humans, especially those that require complex calculations, sustained concentration on monitoring and executing apparently trivial, monotonous detail over long periods, and the identification of patterns in ever greater masses of data. They can also execute complex instructions ever more quickly, often far faster than humans. Thus they are a wonderful resource for us to make the most of.

Humans better than computers

However, computers are completely unable to cope with the unexpected; it is, in fact, a contradiction in terms. It is possible to design programs that can ‘learn’, and programs that are capable of modifying themselves in response to information input. However, this flexibility is limited by the ingenuity of the original designer. We have to invent the learning algorithm in the first place. It can be massively sophisticated, but it is ultimately limited by its parameters. People talk of a ‘singularity’, when a computer will be cleverer than a human. Yet however clever the original program, cleverer humans will always be able to outwit it.

Thinking creatively is about coming up with a ‘different’ solution to a traditional problem. A new solution implies new or modified networks of neurons, which have learned a new algorithm. New concepts and ideas are often not accepted initially because people cannot understand them and so judge them to be aberrant: they have not yet grown the supporting neural networks in their own brains. Gradually our body of knowledge expands. If a computer produces a ‘different’ solution it is considered to have faulted, and is given to the engineer to mend or thrown away! The genius of the human brain is that, as we activate many relevant (or even apparently irrelevant) neural networks, we automatically grow tentative, speculative new neural links and structures. A new network generating a new result may initially be laughably irrelevant, but from time to time the new solution is a breakthrough. To achieve the success of a new idea, inventors first have to accept the new concept themselves. This is often quite hard, but nothing like as difficult as persuading everyone else!

We ‘think’ of the solution to a problem. We do not ‘intelligence’ a solution.

Alan Turing described the process of information processing in 1936 when he showed that, if it is possible to design a formula or algorithm to solve a problem – software – then any information processing system – hardware – can process that algorithm to produce the answer. Both brains and computers can operate as Turing machines. If we inherit, or learn, a procedure, the brain is optimised to execute it. However, go back a century: J. S. Mill started to put thinking in a different perspective – “everything must be open to question” – and later Edward de Bono coined the phrase “lateral thinking”.

Immanuel Kant was one of the first to observe that there are two quite separate skills at play. Jack Fincher started the current debate in 1976 with ‘Human Intelligence’, while more recently Guy Claxton has drawn attention to the difference between intelligence and thinking in his book ‘Hare Brain, Tortoise Mind’. Daniel Kahneman has taken this theme further with the popular ‘Thinking, Fast and Slow’. Not only is there ‘intelligence’ but also ‘thinking’, and the two are quite different skills. This allows us to establish some helpful definitions.

Intelligence is the ability to process existing solutions and algorithms accurately and quickly: to respond.

Thinking is the ability to devise new algorithms to resolve problems: to create.

If we begin to think along these lines, many long-standing problems begin to fall into place. Intelligence can then be seen as residing in the billions of neurons with which we are born. Thinking is more about the trillions of additional neural links and structures that we grow during our lives. Intelligence is the ability to execute these new skills ever more efficiently as we develop and refine them. Thinking is about creating new skills.

We have noted that Intelligence is about executing the best available neural instructions, both inherited and learned, as fast as possible. We recognise a predator from incomplete data, react and survive. In a crisis the brain selects the best available neural solution fast, on the basis that any decision is better than indecision or dithering.

Types of Thinking

‘Thinking’ involves quite different skills. We largely accept that words are the currency of thinking, and to a great extent this is correct, but language is by no means the whole story. The digital codes that support language and mathematical notation act as indexes and algorithms to represent complex structures, but we have to use the remarkable plasticity of the brain to map these onto the analogue electrochemical patterns that the neurons generate.

We think in a multitude of modalities, and many are not easily expressed in words. Architects think in spaces and shapes, musicians in chords and harmonies, novelists in plots, engineers in structures, programmers in patterns, designers in symmetries, entrepreneurs in ideas and innovative solutions, scientists in abstract theories, chefs in tastes and textures, authors in atmosphere… all are analogue abstract patterns, links, relationships and experiences. Words are the communication system for these concepts. Without them we could do little or nothing, but often they do not do full justice to our thoughts. In the same way, and for the same reason, as noted above, we find it difficult, or impossible, to describe ‘qualia’ in words that adequately express complex sensations, impressions and emotions. Intelligence has no time for this sophisticated approach. Intelligence cuts to the easiest available solution, executes it as fast as possible, and we survive to think another day.

Thinking involves reflection, factoring in previous experiences, and possibly debating alternatives with others. Thinking takes time. It is a luxury we cannot afford when faced with a crisis. But provided we survive the crisis, we can reflect on better responses should we face the same problem again. If we imagine a better alternative, then the next time we encounter the same situation we have a better response ready and waiting, and we outwit our enemy. Intelligence is about short-term survival. Thinking is about long-term survival. Thinking is about maximum information: maximum quality.

Advantages of intelligence

Intelligence, that wonderful facility to respond efficiently, to recognise things swiftly, to identify patterns fast, to recall facts accurately and quickly, to execute the best existing solution rapidly, is a function of the hardware brain: neurons having the necessary power to transmit the patterns of electrochemical signals across the synapses and over the neuron networks with the least interference, stimulating the muscles, other organs and neural networks in the correct sequence and with exact timing; and, in parallel, activating the glands to secrete hormones instantly to provide the ideal chemical ambience for the whole body to respond with the best possible chance of success. In addition, intelligence includes automatically monitoring internal reactions, to confirm that patterns of instructions have been executed as intended, and the external reactions of people and the environment, to check that our behaviour is generating the desired result, modifying our further actions accordingly.

Advantages of Thinking

Thinking, that wonderful facility to reflect, to question, to contemplate and evaluate alternatives, to imitate and learn new skills, to control ourselves and our behaviour, to imagine, to dream, to predict, to improve, to speculate, to forecast, to improvise, to invent, to originate new ideas, concepts, art forms and styles, to solve problems, to generate new words, to inspire others, to push out the frontiers of our knowledge – the extraordinary ability to create. This is the domain of the human Mind, the Software of the system. We can think anywhere, anytime about anything.


This has one massive implication. The speed of a computer is built into its design and is unchangeable. The quality of the programs determines the efficiency of every application, but the hardware is fixed by the manufacturer. To a great extent the speed and efficiency of the neural networks with which we are born are equally fixed. How they develop over a lifetime can be significantly affected by the quality of the energy available – the regularity, quantity and quality of the food supply to the body, or the lack of it, and the efficiency of the nutrient path to the brain. However, the speed of the neural networks cannot be increased. We cannot learn to be more intelligent, any more than we can learn to grow much taller than six feet or jump twice our height. We can learn to deploy our skills more effectively, but we cannot learn to be more intelligent.

On the other hand, there is no limit to the software programs we can design, or to their complexity or sophistication. Equally, there is no limit to the ability of people to create. Observably, there is a rainbow spectrum of human ability, but there is no limit or ceiling. Many things affect the creativity of individuals – environment, natural inquisitiveness, background, parental and peer encouragement, opportunity, self-confidence, ambition – but the most powerful are the breadth of education and the stability, freedom and tolerance of the community. Observably, a higher level of inherited intelligence helps, but it is by no means a limiting or determining factor.

There are limits to Intelligence. There is no limit to our ability to think and be creative, both as individuals and as a community. Thinking is also a skill that can be nurtured and learned and it opens a new dimension in human endeavour.


Programming is the art of designing and building software: part systems analysis, part systems architecture, part coding, part logic, part mathematical modelling, part psychology, part pure art. The design of screen interfaces is important, but in addition users need to feel comfortable using the system and confident that it will support them.

The actual commands that drive the electronic circuits are just ‘add’, ‘subtract’ and ‘jump to a new instruction depending on certain conditions’. Using just these three instructions it is possible to write programming code to carry out simple functions like multiply and divide. In addition there are codes to input information from keyboards, scanners and various measuring devices; output information to screens or printers; access or store information in long-term memory devices; and send and receive information from the communication systems. A set of these functions is the ‘machine code’ of an individual computer. These ‘subroutines’, or machine codes, can then be used to build up hierarchies of ever more complex functions.
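The first rung of that hierarchy – building ‘multiply’ from nothing but add, subtract and a conditional jump – can be sketched directly. The Python `while` loop below stands in for the conditional jump back to an earlier instruction:

```python
# A sketch of building 'multiply' from just the three primitive commands
# described above: add, subtract, and a conditional jump.

def multiply(a, b):
    """Multiply two non-negative integers using only add and subtract."""
    result = 0
    while b > 0:              # conditional jump: repeat while b is not zero
        result = result + a   # add
        b = b - 1             # subtract
    return result
```

Every higher-level operation a processor appears to perform is, ultimately, a hierarchy of subroutines bottoming out in primitives as simple as this.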

Programs to help write programs

To help design and build whole ‘applications’, programmers have written programs to help programmers write programs: high-level languages that are easier to use, graphical user interface languages to design screen presentation, and operating systems to control all the input, output and memory functions. Microsoft’s Windows operating system runs to some forty million lines of code. The programs that help programmers write programs then compile these ‘high-level language’ programs back down into the instructions the hardware can execute. Thus the program that finally does the work in every computer is one long string of ‘0’s and ‘1’s – trillions of them. Possibly the best description of how a processor that can only add and subtract can be programmed to beat a chess master is set out in detail in chapter 24 of ‘Intuition Pumps and Other Tools for Thinking’ by Daniel Dennett, published by Allen Lane. A copy of just chapter 24 is also available from Amazon.

In designing the architecture of an application, the programmers – some prefer to be called software engineers – first have to map out how all the input is to be obtained and how the desired outputs are to be computed to achieve the design goals of the project. That is often the easy part. Large parts of a system are ‘defensive programming’, to respond when the data received is incomplete or inaccurate, the operator hits the wrong keys by mistake – or on purpose – the power fails, or the computer develops a fault. Computers can have both positive and negative zeros. In early systems much mirth and ridicule flowed from companies issuing invoices for £0.00, and following these up with demands for payment until the hapless customer despatched a cheque for £0.00. HM Treasury still demands tax payments of a few pence, or they – or rather their computer programs – issue fines of £100 or more! Entirely logical to a programmer; unfathomable to a member of the public.
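The positive-and-negative-zero quirk is not apocryphal: IEEE-754 floating point, which Python uses, really does carry a sign on zero, and naive formatting can print a minus sign on a nil balance. A short sketch, with the normalisation a defensive programmer would add:

```python
# IEEE-754 floating point carries a sign bit even on zero.
balance = -0.0

print(balance)           # -0.0 : startling on a printed invoice
print(balance == 0.0)    # True : yet the two zeros compare equal

# Defensive programming: normalise before printing or demanding payment.
safe_balance = 0.0 if balance == 0 else balance
print(safe_balance)      # 0.0
```

The one-line normalisation is exactly the kind of apparently trivial defensive detail that, left out, produces demands for payment of £0.00.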

A feature of programming is that a routine is either correct or wrong. A program that is 99.99% right can be useless. Precision is all. Generally, human beings feel they are not doing badly if they are right most of the time; computing can be a rude shock. As a result, intelligent users and designers use tried and tested subroutines and existing systems as much as possible.

Biogenetic Programming

One initial single strand of the DNA of our genome operates in much the same way. Small groups of codons are transcribed into RNA, which is translated into twenty amino acids; these are strung together in myriad combinations to build enzymes and other proteins, which fold into appropriate shapes and coalesce into the cells and organs that carry out the myriad tasks of first growing a complete living being, and then maintaining it, replacing damaged or worn-out cells and organs throughout a lifetime. Position and sequence at each stage are as important as the structures themselves. Start and stop points play a major role.
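The role of sequence, start points and stop points can be illustrated with a toy translation routine. Only four entries of the real genetic code are shown, and the function is a sketch of the principle, not of the cellular machinery:

```python
# A minimal sketch of translation: codons read in triplets from a start
# signal until a stop signal. Four real genetic-code entries only.
CODON_TABLE = {
    "AUG": "Met",  # methionine: also the 'start' signal
    "UUU": "Phe",  # phenylalanine
    "GGC": "Gly",  # glycine
    "UAA": None,   # one of the 'stop' signals
}

def translate(rna):
    """Read triplets from the first start codon until a stop codon."""
    start = rna.find("AUG")
    if start == -1:
        return []                     # no start signal: nothing is built
    protein = []
    for i in range(start, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3])
        if amino is None:             # stop signal (or unknown codon) ends it
            break
        protein.append(amino)
    return protein
```

Note how shifting the start point by a single base changes every triplet that follows: position and sequence matter as much as the letters themselves.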

We can observe the same concept of groups of genes, amino acids and enzymes carrying out various functions in a hierarchical structure around the central objective of physically building the organs that make up a complete living organism. (People use the analogy of millions of different Lego-shaped pieces slotting into each other.) Some genes, amino acids and enzymes carry out quite different roles: some generate instructions to the builders, others regulate these activities, and yet more operate in a ‘quality control’ mode. The whole process is remarkably similar to the hierarchies of software subroutines and defensive programs. Different levels of these hierarchies have evolved in different ways at different times.

For instance, some sixty thousand species have a very similar structural architecture based on a backbone and spinal cord. A mass of ‘hox’ genes regulates the development of these two organs in all 60,000 species. One aspect, recently researched, offers a fascinating insight into this world of biogenetic software. The genes that are expressed to build one vertebra plus two extensions able to form the twin bones of a rib cage are remarkably similar down the eons of evolutionary time. At one extreme, hox regulators switch on these vertebra-building subroutines again and again, over a hundred times, building a very long spine. At the other extreme they switch on just twelve such subroutines. The first meta-biogenetic program is constructing the backbone of a snake; the second, the rib-bearing section of the backbone of a human being. Other hox genes produce arms, or fins, or wings – or in snakes are switched off. However, they all appear to be still present in the DNA! There are, of course, myriad other gene structures that have evolved differently for every species, but the basic architecture remains the same. Very small changes at the early stages generate much greater variation later on.
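The analogy in the paragraph above can be made concrete. In this hypothetical sketch the shared, ancient subroutine is unchanged between species; only the regulator’s repeat count differs – a small change early on that produces a large difference later:

```python
def build_rib_bearing_vertebra():
    """Stand-in for the ancient, shared 'one vertebra plus two rib
    extensions' subroutine described in the text."""
    return "vertebra+ribs"

def build_spine(hox_repeat_count):
    # The 'regulator' merely decides how many times to call the
    # shared subroutine; the subroutine itself never changes.
    return [build_rib_bearing_vertebra() for _ in range(hox_repeat_count)]

snake_spine = build_spine(120)   # the regulator fires over a hundred times
human_spine = build_spine(12)    # the very same code, called just twelve times
assert len(snake_spine) == 120 and len(human_spine) == 12
```

The 120 and 12 are illustrative counts, not exact anatomy; the point is that one parameter in a regulator, not a rewrite of the subroutine, separates the two body plans.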

The above is roughly the position up to birth; however, we are also beginning to be aware that there is another phase of variation as organs that are worn out or damaged are replaced. Almost all cells are replaced on the way to puberty and maturity, some quite regularly. Others, like the taste buds on the tongue, are replaced every few weeks throughout life. This turnover enables minor variations to be expressed in response to the environment. There is a strong case that the creation and modification of neurons as part of memory formation can be carried forward when those neurons have to be replaced.

The most minor changes to the biogenetic software very early in life can have powerful long-term evolutionary effects on a species, whereas quite major variations, as a result of lifetime experiences, may only assist that individual. Whether lifetime modifications to the genome as a result of experience and/or changing environments are possible is an open question; whether such modifications can be passed to subsequent generations is very controversial. These possibilities cannot be dismissed out of hand. The jury is out. The existence of two very different types of software, discussed below, may shed some light on this debate.

Long sections of DNA appear redundant, but we may be beginning to understand the reason. Large parts of computer programs are also apparently redundant. There is an evolutionary reason for these long sections of redundant computer coding, and strong hints that there is an analogous evolutionary reason for apparently redundant stretches of genes.

Neural Programming

Neuron structures follow a very similar path. In the womb, glial cells build scaffolding along which neurons grow dendrites to collect information and axons to transmit it. Initially this provides simple links to and from all the organs and muscles. Later, glia and neurons cooperate again to build the myriad new neural links and structures. We noted above that ‘at birth a baby can do almost nothing, but it can learn to do almost anything’. The process is incremental: new skills build on existing abilities. Typically, neural networks connect the ears and vocal cords to the brain, so that a baby can make sounds and hear sounds. As the baby hears variations in sound patterns, new variant neural patterns start to form to carry slightly different electrochemical signal patterns to the brain, which in turn sends slightly different signal patterns to the vocal system. Baby hears different sounds and tries to imitate them: hears and says its first words. Everyone seems very pleased. Similarly, visual patterns become more complex, as do tastes, feelings and smells. They all seem to be linked together to the being that feeds them. Baby begins to be conscious that the sensation of making the sound ‘Mummy’ is linked to the same visual sensation that produces all sorts of other pleasant sensations.

There is no ‘image’ of Mummy, and there is no neural pattern of the word ‘Mummy’, but baby is conscious of the sensations that reliably represent and repeat the sensations, perceptions, impressions and emotions always associated with the image and the word. We become so used to these sensations that when we see Mummy we are convinced that we are seeing her image from the retina of our eyes somehow reproduced in the brain. We are not. We are experiencing the sensations that the stored patterns generate, stimulated and updated by the information from our eyes. These patterns of stored images are very similar to subroutines in a computer or other type of program.

More obvious still are words, which we put together in phrases, and sentences and paragraphs and chapters and books. We can use every word as many times as we like. We can never wear a word out. A word is a pattern of sensations, acting like a catalyst to produce a result without being affected itself. We have noted that the position of a word in a sequence is often as important to the overall meaning of a phrase as its individual meaning on its own. The meaning of a whole phrase is far greater than the sum of the meanings of the individual words – a phenomenon that appears a great deal in all software-based systems.

Similarly we grow the neural circuits to walk, to run, to dance, to write, to balance a bicycle, to drive a car. Words, experiences and activities were once thought to be different kinds of neural structure. This is unlikely. The way they are formed, and the way they generate, then strengthen and edit neural structures – potential information, or memory – are identical. The way potential information can be stimulated to generate kinetic information and so repeat exactly the original activities is identical. Nature rarely, if ever, duplicates an efficient system; it builds on, expands, and evolves whatever system is in place.

Inherited and learned skills

We have observed that language, along with many other cultural attributes, is not inherited through the medium of DNA. These cultural skills are learned through the medium of education, passed on by each generation to the next.

We can begin to see that there are two streams of evolution. The first is classical, Darwinian, evolution through the medium of DNA, RNA, amino acids et al to create the physical structures of our bodies, including the immune system, the autonomic systems and the networks of neurons that connect everything up to our brain.

In addition we have developed a parallel system of transferring our expanding knowledge and experience, principally, but not entirely, via the medium of language. Initially through inquisitiveness, imitation, encouragement, attention and endless repetition, each child starts to populate the bare neural structures with increasing detail. First this entails control of the body – to crawl, stand, walk, run and eat – but progressively to make sounds that become words. Once a child has a basic lexicon, learning changes gear, moving ever faster to explore the world. Up to this watershed we are not much different from the animals we share our planet with: birds learn to fly by copying their parents; monkeys swing in the trees, learn to hunt and keep out of harm’s way, and reproduce at every opportunity.

Language is a massive neural structure that we quite literally grow as we learn to communicate with siblings, parents and the community about us. Many observers have identified these complex neural structures and explored this neural ‘learning’ system: hierarchies of subroutines building into ever more sophisticated structures and facilities.

Donald Hebb referred to cell assemblies, but also suggested engrams and phase sequences. Herbert Simon proposed hierarchies of assemblies as part of the architecture of complexity. Arthur Koestler suggested holons. Daniel Bor quotes the term chunking, using it as both a noun and a verb. John Duncan proposes cognitive or neural enclosures. Arie de Geus and Charles Ross have coined the term neural modules, or ‘Neurules’, to describe mental subroutines, or bioprograms, as part of a family of names to define the five states of neural memory. Richard Dawkins has proposed memes. Terence Deacon has developed symbols as part of a series of concepts specifically associated with algorithms for words. Guy Claxton has opted for minitheories, while Rupert Sheldrake has developed morphic resonance.

Nature or nurture: updated

There has been a long debate over whether we inherit or learn all the various skills we use every day. Aristotle, the Stoics and then John Locke in ‘An Essay Concerning Human Understanding’ popularised the phrase ‘tabula rasa’ – the blank slate – supporting the ‘nurture’ position.

More recently the pendulum has swung the other way, with claims that the brain is a tabula rasa only for certain behaviours, based on various neurological investigations into specific learning and memory functions, such as Karl Lashley’s studies of mass action and serial interaction mechanisms. Psychologists and neurobiologists have shown evidence that, initially, the entire cerebral cortex is programmed and organised to process sensory input, control motor actions, regulate emotion, and respond reflexively under predetermined conditions. These programmed mechanisms in the brain subsequently act to learn and refine the abilities of the organism. For example, the psychologist Steven Pinker argues that the brain is ‘programmed’ to pick up spoken language spontaneously: nature wins.

It is now commonplace to upload a new version of some computer software to our laptops. Two things are necessary for this to work efficiently: the systems must be compatible, and the hardware must be able to accept and process the new software. Back to the brain: it is true that when we are born we can do almost nothing, but what we can do, directly as a result of our inherited DNA, is crucial. We have the basic ability to learn. Monkeys can make and hear sounds but they do not inherit the ability to learn to speak and recognise words – however hard we try to teach them. That propensity to learn is part of ‘classical’ evolution. The neural hardware can accept, process and retain information. Perhaps that special classically evolved skill is the ability to grow massive memory structures: to process the patterns of electrochemical signals flowing from the ears – kinetic information – and automatically construct new neural links and structures – potential information, memory – multiplying the capacity of the brain at birth by many orders of magnitude.

However, the fact that we can learn to do almost anything – for instance, the ability to learn a particular language – is not inherited through our DNA. It is one of the many skills that have to be learned by each individual from their parents, peers and community. It is part of what can best be described as ‘cultural’ evolution. Language is the first great engram, holon, Neurule or meme everyone has to learn. All the skills painstakingly built up generation after generation follow: reading and writing multiply the benefits of language. However, it is worth noting that writing relies on the physical ability to manipulate the fingers precisely, and reading on being able visually to decode symbols, which suggests that the dividing line between classical and cultural evolution is blurred: each has reinforced the other. Other obvious cultural engrams, holons, Neurules or memes passed on from one generation to the next include belief systems (religion); concepts of law, government and behaviour; and notions of ethics, tolerance and liberty. Increasingly, over the last three centuries, they include the whole corpus of science: the complete curriculum. In another dimension lie physical craft skills, music, art and drama.

Darwinian, classical or hardware evolution advances very gradually over many hundreds of generations. Cultural or software evolution can advance significantly within one single generation. Hence civilisation is accelerating in leaps and bounds.

There is an important corollary to this edifice. Because all engrams, holons, Neurules and memes are learned individually by each person, it follows that everyone will learn fractionally differently. These differences enable people to develop different opinions about every subject. This ordered chaos is the basis of the creation of new ideas, solutions and concepts. It also means that the science-fiction idea of transmitting a ‘brain dump’ of one person’s complete knowledge to someone else is probably impossible. Using computing terms again: everyone has a different basic operating system, and no two are compatible.

So we are in the happy position of being able to conclude that John Locke was at least partly correct, three hundred and fifty years ago. In many ways the infant brain is a blank slate, but it is a slate uniquely capable of storing the mass of information written on it.

Different types of Computer software

There is more for us to learn from software, because software comes in two distinct forms: we have developed fundamentally two types of computer program, compiled systems and interpreted systems. In essence, compiled systems are used for the bulk of the infrastructure of the processing and delivery of applications; interpreted systems are used to fine-tune an application to fit individual needs. For example, the bulk of a word-processor program is compiled. It is fixed and we cannot easily modify or interfere with its basic functions. However, every article we write is different. The processor builds up the string of characters as we keyboard them. To introduce different point sizes, different typefaces and much more besides, we just click a font icon. This inserts a programming command into that text stream: ‘change to bold face’, ‘upper case’, ‘12 point’, ‘Times Roman’. When we call for a print, the underlying compiled program accesses every character in the text string and sends it to the printer. When it encounters our command to ‘change to bold face’ it interprets this by switching to a new typeface table, which has the new shapes of each successive character, adjusts the width of the bold characters and so forth, until it finds another embedded command.
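A toy version of that print routine makes the mechanism plain. In this hypothetical sketch (the command markers are invented for illustration; real word processors use internal binary codes) the compiled part is the fixed `render` loop, and the interpreted part is the stream of commands embedded in the text it walks through:

```python
# A text stream with embedded formatting commands, marked as tuples.
stream = ["Hello ", ("CMD", "bold"), "world", ("CMD", "regular"), "!"]

def render(stream):
    """Walk the text stream, interpreting embedded commands as they are
    encountered – a toy version of the compiled print routine."""
    face, out = "regular", []
    for item in stream:
        if isinstance(item, tuple) and item[0] == "CMD":
            face = item[1]            # switch to a new typeface table
        else:
            out.append((face, item))  # emit characters in the current face
    return out

assert render(stream) == [("regular", "Hello "), ("bold", "world"),
                          ("regular", "!")]
```

The fixed loop never changes; every document it prints is different because the commands interleaved with the text are different – which is exactly the division of labour between compiled and interpreted software described above.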

In early word-processing programs the operator could key these ‘program commands’ into the text stream and so see them on the screen. Nowadays, computers are so powerful that, while those commands are still there, they are inserted by a click of the mouse and are not visible on the screen; the text is reformatted instantly as we click these typesetting commands. One basic standard for inserting such mark-up commands into text became known as Hyper Text Mark-up Language, or HTML. Its codes echo the hieroglyphics editors have used to mark up text for typesetters right back to Gutenberg. Sir Tim Berners-Lee combined this mark-up language with a transfer protocol – Hyper Text Transfer Protocol, HTTP – to form the basic standards of the World Wide Web. Thus did computing evolve.

The significance of compiled and interpreted programming is considerable.

In Computing this dual system allows standard applications, which are very expensive to develop, to be used by very large numbers of people, who each have their own version tailored to their individual needs.

In Neuroscience the same principles allow the swift learning of vast amounts of knowledge from parents and the community.

In Biogenetics it enables every person to be, and to develop differently. It may also be the basis of transgenerational epigenetics. More later.

Where Computing can learn from Biogenetics and Neuroscience

At this stage in the computing revolution, even the most complex software systems are a pale shadow of the complexity of the biogenetic program needed to create an Aplysia, or a fruit fly, or the simplest of the 60,000 vertebrates, let alone a human being. Evolutionary biogenetics has ‘learned’ that minute variations in a deeply buried subroutine can wreak havoc higher up the hierarchical chains. Large organisations whose top management increasingly find that their businesses depend on the efficiency of their computer systems are extremely cautious about changing anything; as a result, older and older systems become more and more fragile, generating frightening hostages to fortune. The same applies to the design of complex computer systems. Slowly the profession is learning to segment programs into modules and always to employ tried and tested subprograms rather than design brand-new systems from scratch. Equally, it is wise continuously to make small modifications to modules that can be insulated temporarily from the main system. Thus the whole system can move gradually forward as the environment changes.
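The modular discipline just described can be sketched in miniature. All the names here are illustrative: the idea is simply that the rest of the system calls one stable entry point, so a candidate replacement can be trialled in insulation and swapped in without touching anything else.

```python
def sort_v1(items):
    """The old, trusted subroutine."""
    return sorted(items)

def sort_v2(items):
    """A candidate replacement, exercised in isolation before adoption."""
    return sorted(items, key=lambda x: x)

USE_NEW = False  # flip only after the insulated module has been tested

def sort_records(items):
    """The rest of the system calls only this stable entry point."""
    return (sort_v2 if USE_NEW else sort_v1)(items)

assert sort_records([3, 1, 2]) == [1, 2, 3]
```

Because callers never name `sort_v1` or `sort_v2` directly, the whole system can move gradually forward, one insulated module at a time, exactly as the paragraph above advocates.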


Another major difference between our computers and our brain involves the power supply to each system. Computers can have short-term batteries but essentially are driven by an external electricity supply.

Not just the brain but each neuron generates its own power supply. There is a variety of neurons, but they all have a nucleus. They receive information along dendrite filaments from the sensory and other organs and from other neurons, and send information along axons to the muscles, organs and other neurons. Dendrites and axons transmit information by way of electrochemical signals, and are connected to other cells across a gap known as a synapse. Neurotransmitters pass information across these synaptic gaps. One current opinion is that the width of the synaptic gap varies according to the tension across the synapse, and thus plays a major role in whether a strong or weak signal is transmitted.

Astrocyte glial cells connect the nutrient paths in the bloodstream to the neuron, where mitochondria convert these nutrients into energy; adenosine triphosphate (ATP) is used as a buffer to supply sudden demands for increased energy. Both the neuron filaments and the synapses are powerfully modulated by neurotransmitters, messenger molecules and an array of hormones that can significantly vary both the ambient chemical state and the performance of individual filaments, neurons and synapses. We all have plenty of experience of the dynamic effects on our emotions and behaviour of floods of hormones like testosterone and adrenalin.

It has been known for some time that frequently used dendrites and axons are strengthened and made more electrically efficient by being insulated with a substance called myelin. More recent research is showing that oligodendrocyte cells influence this myelination process and may play a significant role in the memory-strengthening properties of dendrites and axons. Our knowledge of messenger molecules, like exosomes, is expanding all the time. Similarly, we are learning more about the electromagnetic fields that active neurons generate when they transmit information. Both the electrochemical signals and the associated electromagnetic fields fluctuate in waves and generate recognisable rhythms, the circadian rhythm being the best known.

Neurons, therefore, are almost individual computers in their own right. They have their own independent power supply. They can mend damage, and in some regions even reproduce themselves; thus they have a number of the attributes of independent living organisms. One effect of this self-sufficiency is that neurons have some control over their own behaviour. As a result their responses are probabilistic rather than predictable.

We can see that the ‘power supply’ not only affects the direct signalling capability of neurons, but also affects the passage of information around the whole system and across the synapses in three ways. The most direct is the strength of the neurotransmitters at each synapse. Second is the effect of the ambient hormone mix surrounding groups of synapses. Third, and least researched, are the implications of the tension of the synapses across sections, swathes or the whole brain, and the consequent variations in the width of the synaptic gaps, which determine the efficiency of transmission across them. If the power supply weakens, causing the synaptic tension to fall and therefore the gaps to widen, reducing the strength of the traffic flow, is this what we recognise as falling asleep?

It now seems likely that we have a greater degree of control over the myelination process than previously thought, and that this has an added dynamic effect on memory formation. Similarly, neural control over the astrocyte cells supplying nutrients to the neurons would suggest that a greater degree of direct selective control over neuron activity may be possible.


Much work is being directed at Alzheimer’s and other forms of dementia. Almost all research is directed at the ‘hardware’ – for instance, the build-up of plaque degrading the neural networks and inhibiting information transfer. It seems at least equally likely that the problem is in the software – the electrochemical signalling system – yet little or no work is being done in this sphere. One suggested target of research is to investigate the efficiency of the nutrient path. There is the obvious possibility of weaknesses in the diet and digestive system, but a more plausible possibility is that the astrocyte transfer of nutrients to the neurons, and their conversion to energy by the mitochondria, might be the problem. If less energy is available to the neural networks then, even if a question correctly stimulates the right neurons, there may not be sufficient energy for the signals to penetrate consciousness, giving the impression that the answer has not been found. Similarly, insufficient energy might account for new ‘memories’ not being established while more robust memories from the past are still intact – a problem often presented by elderly people.


The implications of compiled and interpreted programs in the science of software are considerable. In biogenetics we know that the nucleotide base pairs in the codons in the genes specify amino acids, which build proteins and finally cells. We can see the double helix of the hardware, and we are learning how the software builds individual stem cells and then expresses them to carry out distinct functions. So far so good, but we are also becoming increasingly aware that the basic inherited genome varies slightly across the generations, and that throughout life small corrections – variations – come into operation to cope with the changing environment in which every individual finds themselves.

Thus DNA has three streams of information, not two as previously thought. There are the billions of nucleotide base pairs – the hardware. There are the instructions to use these patterns of base pairs to grow stem cells, then express them as specific organs, the descendant of the parents – compiled DNA software. Then there is the additional facility to fine-tune this system, modifying the cells to accommodate new information as we monitor the environment in the light of our experiences – interpreted DNA software.

For example: DNA builds the cells of our ears and vocal cords and the basic neural framework that connects everything up. Babies can make and hear sounds. As we learn to fine-tune these circuits to say individual words, the neural networks are updated and edited. Redundant sounds are excised.

In every aspect of our lives DNA is also being modified, so that if neural networks wear out or are damaged, replacement neurons will contain the updated information – inherited compiled structures modified by interpreted experience. Thus we are able to respond to fast-moving changes in our environment quickly and efficiently – and survive.

It is interesting to note that this facility to interpret minor changes, augmenting the slower process of evolution, opens the possibility that relatively minor adjustments learned by one generation can be passed on to subsequent generations – the inheritance of acquired characteristics that Lamarck proposed, now studied as transgenerational epigenetics.

Junk Programming – junk DNA?

There is more. In the design of word-processing systems the string of characters – the text – is stored as it is keyboarded. We have noted that in early, slow processors the operator inserted codes to change typeface, make subsequent characters bold or italic, and so forth. These were visible on the screen and could be edited. Nowadays the operator clicks an icon. This process does not allow the existing commands to be edited, so modern word-processing programs are designed to insert the control codes into the text stream, and the processors are so fast that the text appears on the screen in the new format almost instantly. To cope with multiple rounds of editing, each new control code is inserted immediately before the text characters, so overruling any existing formatting codes. Suppose the author makes various changes: say, the text starts off as 10 point Times, and the author later decides to change to 12 point Garamond, with bold, then italic headings, first in red, later in blue. As the author makes each change, the new code is inserted next to the text; all the existing control codes remain in place. When the program displays the text on the screen, or prints it out, it executes the control codes in the sequence they were requested. What goes to the screen or out to the printer is the final format, but the program has first selected 10 point Times, then 12 point Garamond and so on. As more and more features are added to word processors the control codes get longer and longer. In a final document the actual visible text characters may be as little as 1% of the whole stored text stream. This is remarkably reminiscent of the format of DNA, where the foetus observably passes through stages reminiscent of its ancestors’ evolution: for a while the foetus grows gill-like structures (Times), then legs (Garamond). It is easier, strategically, to add a new function than to edit an existing one. Both DNA and word-processing files contain the evolutionary history of their hosts.
Thus we have a cogent explanation of why various sections of DNA appear to be irrelevant – sometimes called ‘junk’. Evolution modifies existing functions, augmenting, updating and improving them. Learning is about modifying existing neural structures, augmenting, updating and improving them. Information is cumulative.
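The layering of overruled-but-retained codes can be sketched directly. The inline markers here are hypothetical (invented for illustration); the point is that only the most recent code of each kind takes effect, while the earlier ‘junk’ codes sit untouched in the stream as a record of the document’s history:

```python
# Each new choice is inserted in front of the text, overruling – but
# never deleting – its predecessors.
stream = ["<10pt Times>", "<12pt Garamond>", "<bold>", "Chapter One"]

def effective_format(stream):
    """Scan the accumulated codes; the last code of each kind wins.
    The overridden codes remain in the stream as 'junk'."""
    size_face, weight = None, "regular"
    for token in stream:
        if token.startswith("<") and token.endswith(">"):
            body = token[1:-1]
            if body == "bold":
                weight = "bold"
            else:
                size_face = body   # a later size/face code overrules an earlier one
        else:
            return (size_face, weight, token)

assert effective_format(stream) == ("12pt Garamond", "bold", "Chapter One")
```

The `<10pt Times>` code contributes nothing to the final output, yet it is still stored – a small-scale analogue of the apparently redundant stretches of DNA discussed above.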

Self-Organising Systems: ‘Selfortems’.  

All the systems in the universe are self-organising: they can only react to events. A very few animals, like beavers and squirrels, influence their environment by building dams or storing food for the winter. But the very paucity of these examples emphasises the massive breakthrough humans have made in developing an ability to predict the possible future course of events, and so make provision for potential eventualities. This unique talent has enabled us to begin to influence our environment and so take ever-increasing control over our lives. This skill has developed in large part thanks to language, which has enabled us to reminisce, to apply the experiences of the past to help imagine the future, and to debate what actions to take. As discussed above, language has also enabled us to think in the abstract and so imagine, discuss, debate and even commit to memory events that have not happened. Over millennia our neural structures – our brains, our neural hardware – have steadily matured and become remarkably efficient. However, over a relatively short time – mere thousands of years – our ability to think – our neural software, our minds – has expanded at breakneck speed. In just hundreds of years we have invented machines to supplement our eyes, our muscles, our voices. We can see and talk to anyone, anywhere in the world. And now, in the last few decades, we have invented machines that can store, process and distribute unlimited volumes of information. Brains have limitations: minds do not. There is no limit to their capability.

Mankind has wondered down the ages what our purpose might be on earth. Are we entirely an extraordinary result of blind chance? Our growing capacity to take increasing control of our environment suggests, perhaps, that our role is to make ourselves independent of our earth and the power supply of our sun, and roam the universe.

Back to ground zero!

5. The Greatest Contribution of Computing: Clues to Consciousness

Perhaps the greatest contribution computing can make today is to our understanding of the most difficult problem of all: consciousness.

We can now say with some confidence that “the brain is not conscious”. This is a startling statement, but we can see that it is true from a number of perspectives. All our efforts to generate artificial, or machine, intelligence in our computers have been designed into the software. Bigger and better hardware has played its part, but only as the medium on which all these programs operate. Artificial, or machine, intelligence is all about the software coding systems. If we want to try to emulate thinking, let alone consciousness, on our computers, it will have to be done in the software – unless, or until, we can design computer hardware that can reproduce itself.

Descartes opined that the pineal gland might be the seat of consciousness. No less a luminary than Francis Crick was attracted to the idea that the signature of consciousness might be generated by the synchronised oscillations of pyramidal neurons.

We have known for some time that the brain itself does not feel pain. If we stub our toe we see it happen some milliseconds before the message reaches the brain. The brain modulates the message, but the sensations of pain are experienced by the whole body. Brain surgeons regularly operate on brain structures without any need to anaesthetise the brain tissue itself.

It is increasingly clear that consciousness is generated by the mind – the software – the continuous living mass of patterns of electrochemical signals stimulating the glands to output floods of hormones that cause the whole body to experience sensations, perceptions, impressions and emotions.

To support this argument we can observe that conscious experiences are a series of processes. In the eighteenth century David Hume argued that if the brain stopped thinking, the conscious self would vanish, and this has largely been the accepted wisdom ever since. Arguably, he was close to being right, except that the mind never stops processing – that is one definition of being alive. Whether the electrochemical activity generates a conscious experience depends on the state of the neural networks – the brain. Both the processes of the mind and the state of the brain need to be operational. We can look at this the opposite way.

The brain – the mass of neural networks, glial cells, neurotransmitters, messenger molecules – the hardware – remains intact after death, but self-evidently that person has completely lost all sense of consciousness. Observably, something else generates consciousness. Similarly, the billions of nucleotide base pairs in the genome do not, on their own, create the amino acids and the proteins that create a baby. The molecules in an acorn do not grow an oak tree. In every case something else is needed. The neurons of the brain need the electrochemical activity of the mind. The circuits of semiconductors need programs. The base pairs need energy to select sequences of codons to create RNA, which in turn organises the building of amino acids and so forth. The molecules in an acorn need energy – warmth – and water to start building shoots and leaves, and to set in motion the photosynthesis that grows a whole oak tree. In each case the transistors remain unchanged; the neurons are not destroyed – they grow extra links; the genome remains intact. They all act like catalysts. The work is done exclusively by the software, which always makes the maximum possible use of the structure and framework in place.

A good analogy suggests that the transistors in computers, the neurons in the brain, the base pairs in the genome, and the molecules in a seed are like bricks and beams. Carefully assemble these components in the right patterns and structures and together they can create a house. Put some energy and the paraphernalia of everyday life into that house and it begins to take on the characteristics of a home. When our eyes meet a stunningly attractive person across a crowded room, signals flash across the neurons, stimulating the glands to output testosterone, adrenalin… We begin to feel very alive, our inhibitions fade, our muscles brace up to make a good impression, and our whole body is conscious of a growing sense of excitement.

Evolution of Consciousness

Most animals, certainly all mammals, experience the sensations of hunger, thirst, fear, aggression and attraction. The sensory organs monitor the environment, internal and external, and transmit patterns of signals across the neural networks. This activity stimulates the glands to generate a mix of hormones which creates an emotional sense of awareness of our surroundings. To be aware that one was running for one's life gave the opportunity to react unexpectedly and survive, so any advance of this skill was selected for. We react by a combination of muscular activity moderated by the hormone mix. We run away, prepare to fight, search for food and water, reproduce. This behaviour is pretty much standard across the animal world, and can be seen in babies: when a child is conscious of being hungry it learns to yell until a parent comes to feed it.

We are conscious from very early in life. Gradually, gradually our conscious experience expands.

Monitoring and Feedback

The huge advance from such primitive reactions to the multifaceted consciousness we experience is primarily due to our development of language. We have discussed above the ability to interrupt the conditioned response and provide time to pause the system and evaluate alternatives, and then, and only then, decide on the best course of action (the Zeno effect). In parallel we developed the ability to monitor our own behaviour and the reactions of others. We inherited these basic facilities from deep in our ancestry. Every motor pathway from the brain to a muscle is matched by a reciprocal sensory pathway feeding information back to the brain, confirming that the action originally ordered has been executed efficiently. Many mammals have 'mirror' neurons, active when we see others behave as we might wish to behave ourselves. Gradually the executive brain systems have been augmented by a parallel feedback and monitoring system, which enabled our ancestors to begin to moderate their behaviour in the light of the responses it generated. Feedback and monitoring enable any system to become more efficient, but the development of language allowed these reactions and perceptions to be discussed and debated, and the experience of others added to the mix of each individual. A feedback system implies a possible delay between the original act and the processing of the feedback from its effect. This accounts for the phenomenon, much debated recently, that there often appears to be a delay between some executive action of the brain and our apparent consciousness of it. There is no 'free will' issue here of who is in control. It is 'I' deciding and 'me' checking up.
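The delayed-feedback point above can be sketched as a toy loop. This is purely illustrative – the tick counts and action names are invented for the example: the executive channel acts immediately, while the monitoring channel only reports each act back a few ticks later.

```python
from collections import deque

def monitor_loop(actions, delay=2):
    """Toy model of 'I' deciding and 'me' checking up: each action is
    executed at its tick, but its feedback is only processed `delay`
    ticks later."""
    pending = deque()   # (tick_due, action) awaiting feedback
    reports = []        # (tick_processed, action) - the monitoring channel
    for tick, action in enumerate(actions):
        pending.append((tick + delay, action))
        # process any feedback that has now arrived
        while pending and pending[0][0] <= tick:
            due, past_action = pending.popleft()
            reports.append((tick, past_action))
    # feedback still in flight when the actions stop arrives late
    for due, past_action in pending:
        reports.append((due, past_action))
    return reports

# every report lags the act that caused it by `delay` ticks
print(monitor_loop(["run", "pause", "speak"]))
```

Each monitored report trails the act that produced it, which is exactly the kind of lag the passage describes between an executive action and our apparent consciousness of it.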

This parallel second system monitoring our autonomic responses proved to be very valuable, and survival selected for its development. The 'Zeno' ability to interrupt and evaluate alternative solutions, the capacity to monitor behaviour and feed back its effects, and, above all, the ability to discuss and debate individual and group experiences was a very powerful mix. It is working ever more energetically for us to this day. It is one of the major components of all research!


Language provided another boost to these developing facilities. The key value of language is that it necessitates ascribing meaning to words, and then phrases. Meaning is an integral part of consciousness. As many observers have commented, the words in a language started life as labels for things and actions. Thus the visual sensation of a tree was linked to the verbal sensation of the sound 'tree'. We take for granted that we see, hear, feel, smell and taste, and have sounds, images, feelings, smells and tastes stacked up somehow in our brains, along with a series of dictionaries full of words and encyclopaedias full of images, recently supplemented by Wikipedia entries. However, we are beginning to appreciate that neither we, nor our computers, have anything of the sort. We do not hold one single word, or image, or scent, or feeling, or taste. Computers have complex coding structures we have invented to represent what we call 'information'. The Mind represents all this 'information' as a matrix of perceptions, impressions, sensations and emotions. We are so familiar with this whole process that we have the sensation that we see an image. Surely it is the patterns of energy – photons – that strike the retina? Well, no. The patterns of photons are transmitted back to the brain – almost the whole brain is involved in one aspect or another: edges and shapes in one place, colour in another. What we experience – are conscious of – is a representation of those images. Mostly what we experience at any one moment is the neural patterns representing images we have remembered from the past, updated by any modifications, or mosaics of past memory components. If this sounds unlikely, close your eyes, and you will still be conscious of all the components of the images you were looking at. You can even zoom in to detail, follow a path as though you were walking along it, and answer detailed questions about that image: where and when you have seen it before, and what was happening when that occurred. We experience the sensation of seeing, hearing, tasting, touching and smelling. Language allows us to classify and define these sensations, pigeonhole them so that we can discuss them, and describe them in a form others can share. Thus we are consciously aware of the world around us.

Time, past, present and future

However, language has done so much more. Communication widened our horizons and in particular opened up the fourth dimension. We became conscious of the passage of time. We could reminisce about past events, and use those experiences to improve our responses to present situations, but of even greater significance, we could begin to extrapolate these experiences to predict the possible course of future events. We know of no other organism that has this facility. It is, arguably, the most important attribute we have acquired. If we can envisage what may happen in the future we can prepare for possible eventualities. We can begin to influence our environment and take increasing control over our surroundings. We are no longer entirely at the mercy of events.

Abstract Processing

We have observed that we can interrupt the automatic response to evaluate alternatives; we can discuss, debate, interrogate and speculate about the implications of the knowledge we have experienced; and extrapolate our experiences to speculate about the future. We can discuss all this with others, or review and assess it silently to ourselves. We can build vast, complex abstract structures; imagine the plots of novels; iterate possible solutions to problems; compose music in our minds. We can establish memories of all this as though they were reality. This whole process can be entirely abstract, purely a mass of electrochemical signal patterns. As a result 'I' can go one step further and speculate, wonder and guess how every detail of all this will affect 'me'. It is a small step to angle our thinking and planning to benefit ourselves and our prospects. We can begin to build ambitions, and plan the activities that might enable us to achieve these objectives. We are building the characteristics of our personalities: what we think we are, how we might behave, and what we might be able to achieve.

We are increasingly conscious of a sense of self. A sense of being alive. A sense of agency – of being in control of ourselves and our lives, to some extent at least.

This mass of additional facilities has expanded the capacity of our brains. Not just the ability to store, process and access at least one hundred thousand words – in just one language – but also their meanings, and the meanings of groups of words – phrases – and then, in addition, a vast cornucopia of abstract ideas, concepts and imaginings in a myriad of subjects and themes. A staggering mass of electrochemical activity – software – continuously growing, supplementing and extending the neuron networks – the hardware. It seems more than likely that the giant pyramidal cells to which Francis Crick drew attention – possibly the largest and most complex neurons we have identified to date – may well have evolved to help moderate and support all these facilities.

Initiation of neural activity

This leaves us with one last attribute: the ability to initiate neural activity. To instigate a completely new idea – to think of a completely novel line of reasoning – to create a solution to a problem never previously conceived. This is the basis of Leonardo da Vinci's famous remark that "Every single idea, every single concept, and every single piece of information we have about our world and how it operates has been conceived originally in a human brain".

It is not too difficult to see how we might build one idea upon another, or match two thoughts together and realise that together they are more than the sum of the parts. But to think of something absolutely new for the very first time is very difficult. Careful observation suggests that the basis of the evolution of all life forms is incremental. The study of all the biological systems of the brain superficially follows the same logic, with one great difference: abstract thought is mental activity neither stimulated by, nor dependent on, nor tied to the sensory organs.

All computers are driven by programs, originated externally and loaded into their processors. Brains have neither programs nor processors. All living beings are driven by surrounding events. The brain is driven by the signals from all the sensory organs.

No computer invented today, or even in prospect from the most inventive science fiction authors, can think in the abstract: it is a contradiction in terms. However, humans do. Look at the evidence.

We can interrupt the automatic response, pause and evaluate alternatives: stop and think. Gerald Edelman’s ‘remembered present’.

We can create imaginary situations and commit whole parallel worlds to memory, and we can retrieve that memory and carry on with our imagined existence. The emphasis here is on consciously being able to stimulate that memory retrieval: self-stimulation. Business schools decades ago liked to talk about 'self-starters'.


Concentration is a very valuable attribute. It is a hard skill to learn for young children, and a hard skill to learn for adults. Yet most people can concentrate effortlessly on their hobbies. Watching a play or film, we can become completely absorbed and oblivious to everything else. Learning to concentrate is about taking control, consciously or unconsciously, of parts of our brains: giving priority to some activities and downgrading the significance of others. We can observe that we can do two tasks in parallel. A good example is driving a car while carrying on a conversation. If the conversation takes preference we suddenly find we are at our destination! If we see danger we drop the conversation. As noted above, we use the term 'autopilot'.


More recently we have learned about the nutrient path, in particular the astrocyte cells that pass nutrients from the blood supply to individual neurons. Is it possible we have direct neural control over their operation? If we want to give priority to one brain operation, can we accelerate the flow of fuel to selected neurons?

Similarly, we are learning more about the control oligodendrocytes exercise over the myelination of neurons, which appears to affect their memory storage capabilities. Could we have, or learn to exercise, neural control over the selective activities of these cells?

All down through history we have tended to equate consciousness with being awake and unconsciousness with being asleep, and the subconscious as 'sort of' always present in the background. Perhaps we should uncouple these processes of consciousness from the states of being awake, because observation suggests they do not quite fit.

It is quite possible to be asleep and conscious, or at least largely so: we call it dreaming. Sleep walking is less common, but it suggests we can be awake yet not conscious. Let us explore uncoupling consciousness from being asleep or awake. We can observe a gradient, from anaesthesia to deep sleep, rapid eye movement (REM) sleep to drowsiness, daydreaming to being alert: from being unconscious to being aware, up to full concentration. Being asleep and being conscious do not map exactly together.

Process and State

We have set out above the processes by which the neurons and hormones generate the perceptions, sensations, impressions and emotions of consciousness. Whether we experience that consciousness depends on the state of the gradient of being awake or asleep, and that could depend on the very sophisticated way that neurons are connected together – the synapses.


Synapses are very interesting because they seem overly complex for what they apparently do. Unnecessary complexity is not common in the natural world.

All the neurons are connected to the sensory and other organs, the muscles and other neurons by a narrow gap, or synapse. Electrochemical signals flowing along the axons to the synapse stimulate the production of neurotransmitters, which flow across this synaptic gap or cleft stimulating a similar reaction on the other side of this connection. The strength of the signal transmitted across this gap is modulated by the strength of the incoming signal, the volume of neurotransmitters, and the ambient hormone chemical mix around this junction.

Variable Width Synaptic Gaps

But why a gap? How wide is this gap? Are the gaps consistent across the brain? A much more interesting question is whether these gaps are fixed or variable. It is entirely plausible that the width of the synaptic gap is determined by the tension across the synapse, and this tension could well vary in different areas of the brain, at different times, for different reasons, in different circumstances. The most likely determinant of the level of synaptic tension is the strength of the power supply. If the power supply begins to drop – after some hard work, or a tough day – then the tension will begin to drop and the synaptic gaps widen. Neural messages will continue to be transmitted but less efficiently. This is congruent with falling asleep.

As the energy levels rise after a period of inactivity or rest, the tension will begin to rise and the synaptic gaps narrow. Neural messages will begin to be transmitted more efficiently. We wake up. And all the sensations of consciousness flow again. This scenario closely fits our observed experience. Sections of the brain may well be in tension while other parts are not, hence we are only partly conscious when dreaming: the opposite in sleep walking. A sharp knock to the head quite literally knocks the synapses apart, and we are instantly unconscious. In a crisis selected areas are flooded with hormones to maximise our chance of survival. The synaptic tension is highest in these areas and so has the highest concentration of energy to maximise the efficiency of our processing – concentration. The synapses of areas of the brain operating on autopilot have lower tension than others, but only marginally so, thus priority for attention can be switched instantaneously.
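The tension hypothesis above is speculation, not established physiology, but its logic can be made explicit in a toy model. All the numbers below are invented for illustration: energy sets tension, tension sets gap width, and transmission efficiency falls as the gap widens.

```python
def gap_width_nm(energy):
    """Hypothetical mapping: full energy -> maximum tension -> narrowest gap.
    Maps an energy level in [0, 1] onto a gap width between 20 and 40 nm."""
    e = min(max(energy, 0.0), 1.0)
    return 40.0 - 20.0 * e

def efficiency(width_nm):
    """Hypothetical rule: transmission efficiency inversely proportional to
    the width of the synaptic cleft (1.0 at the narrowest, 20 nm)."""
    return 20.0 / width_nm

for state, energy in [("rested", 1.0), ("tired", 0.3), ("exhausted", 0.0)]:
    w = gap_width_nm(energy)
    print(f"{state:9s} gap {w:4.1f} nm  efficiency {efficiency(w):.2f}")
```

Under these invented numbers a rested brain transmits at full efficiency, while an exhausted one still transmits, just less efficiently – which is the "congruent with falling asleep" behaviour the hypothesis describes.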

Two Functions

We are left with a very attractive solution. Consciousness is one function and sleeping is another. They are not the same, nor are they different versions of the same facilities, and it is likely that they evolved for different reasons.  Consciousness evolved to continuously process feedback and to pause and learn to improve the automatic responses. The state of sleep fluctuates to compensate for periods of intense activity when we deploy every scrap of energy to survive. Concentration is when we throw all available resources at that part of the brain that has the best chance of ensuring our survival.

The Mind generates the sensations of the processes of conscious experiences. Tension across the synapses, and therefore the width of the synaptic gaps in the Brain determines the state of wakefulness or sleep. The software does the first job, the hardware does the second. The electrochemical signals of the software Mind stimulate the glands to output the hormones. The synaptic gaps between the hardware neurons widen and narrow. The processes of consciousness are continuous even when we are fast asleep. We know this because if we are asleep and someone calls our name or shouts a warning we instantly awake. This continuous background processing is what people call the autonomic system, which is best described as the ‘operating system’ of the whole body.

Sleep can progressively switch off areas of the brain, leaving enough areas active to experience limited consciousness – dreaming. Usually the motor neuron circuits close down first, so we have the sensation of being unable to move. As we drive a car and hold a discussion concurrently we are fully awake for one activity, yet can easily be completely unconscious of the traffic around us: selective states over which the processes do or do not flow.

The Mind generates Consciousness: The Brain determines when we are asleep.


The Logic of Creativity and Evolution

Not only are computing and the science of software making game-changing advances but so, also, is medical science, which has nearly doubled life expectancy in just a century.

Taken together, medicine, biogenetics and neuroscience will progressively make the last quarter of many people's lives potentially the most intellectually productive and creative. This will balance the near eradication of the lower skilled occupations that will progressively be carried out by computers.

What do we want our computers to do? Over the last fifty years computing power has doubled approximately every eighteen months (known as Moore's law). Individual circuits may be reaching a plateau, but overall computing power is likely to increase a million to a billion fold within the lifetime of a majority of today's population. However, the quality and sophistication of software may only progress by a few orders of magnitude. The science of software can learn a great deal from our developing knowledge of biogenetics and evolution. Extraordinary discoveries are being made about the resilience and robustness of all the systems of the body in general, and the plasticity of the brain in particular. The operation of the immune system provides another dimension of understanding. Sophistication at this level in computing is not even on the horizon.
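The arithmetic behind the "million to a billion fold" claim is simple compounding: one doubling every eighteen months is two doublings every three years.

```python
def moore_multiplier(years, doubling_months=18):
    """Computing-power multiplier after `years` of steady doubling
    every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

print(f"{moore_multiplier(30):,.0f}")  # 30 years = 20 doublings: about a million-fold
print(f"{moore_multiplier(45):,.0f}")  # 45 years = 30 doublings: about a billion-fold
```

So a million-fold increase takes roughly thirty years at this rate, and a billion-fold roughly forty-five – comfortably within the lifetime of most of today's population.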

Reciprocally, software systems analysis of the architecture of evolution – how modifications to the genotype (the complete genome) change the phenotype (the final shape and attributes of our living bodies) – offers the exciting possibility of unravelling the biggest mystery of all, at least on a par with – if even more elusive than – a solution to consciousness: the extraordinary skill of creativity, upon which everything above absolutely depends.

Do we have an outline of the tools to lay down the foundations of the logic of creativity in the Brain Mind?

Leonardo da Vinci made the point that every single concept has been conceived originally in a human brain. But how?

We have noted above that because all engrams, holons, neurules and memes are learned individually by each person, it follows that everyone will learn fractionally differently. These differences enable people to develop different opinions about every subject. This ordered chaos is fertile ground for variations in opinion, and so plays a part in the creation of new ideas, solutions and concepts.

The same is true in molecular biology. In the 1990s biologists were baffled to discover that the vast majority of genes apparently serve no purpose. Researchers at Stanford analysed the yeast genome and created some six thousand yeast mutants by knocking out one gene in each. To their considerable surprise, thousands of these 'defective' mutants prospered unaffected, notwithstanding that their genome had been edited. This is now a common research technique. This resilience and robustness of the genotype suggests that what we originally thought was junk is, in fact, a very powerful resource. This massive redundancy means that a genotype can withstand considerable damage and wear and tear while affecting the phenotype hardly at all. Either the damaged gene is replaced or the next door gene goes in to bat. What applies at the level of a whole gene equally applies at the level of a single protein: change one letter in a protein's amino acid string and some 80% of such edits will have no effect.
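The yeast result can be illustrated with a deliberately simplified model. The numbers of traits and gene copies below are invented, and real genetic redundancy is far subtler, but the principle is the same: if each trait is backed by redundant gene copies, knocking out any single copy leaves the phenotype unchanged.

```python
def phenotype(genome):
    """Toy rule: a trait is expressed if ANY copy in its redundant
    group of genes is still functional."""
    return tuple(any(copies) for copies in genome)

# a toy genotype: 6 traits, each backed by 3 redundant gene copies
genome = [[True, True, True] for _ in range(6)]
wild_type = phenotype(genome)

# knock out each single gene copy in turn, one mutant per knockout
unaffected = 0
for trait in range(6):
    for copy in range(3):
        mutant = [g[:] for g in genome]
        mutant[trait][copy] = False
        if phenotype(mutant) == wild_type:
            unaffected += 1

print(f"{unaffected} of 18 single-gene knockouts leave the phenotype unchanged")
```

Every single knockout is masked by the surviving copies – the toy-model analogue of the 'defective' yeast mutants that prospered unaffected.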

One of the definitions of ‘life’ is the ability to replicate. Now we can observe that living organisms can replicate their genotype, and make errors, which can even be replicated down the generations, without affecting the phenotype. Such organisms are very robust and so can deal with most of the vagaries that life can throw at them.

The young of all members of a community may each carry many slightly different variants of similar genes. Superficially this will have no apparent effect; however, faced with a hostile environment the slight marginal differences may be enough to ensure survival for enough offspring to enable the community to survive. Throughout the plague epidemics of bygone eras, always some succumbed and some survived. As well as a defensive strategy it could also be the foundation of evolution. The combinations of potential variants run into astronomical numbers. This massive variation means that every member of a community is different. No surprise there! But the implication is that this very robustness, this error tolerance, not only wards off problems but provides a strategy to facilitate innovation, to experiment. Most modifications to the genotype do not affect the phenotype, but some may well modify the phenotype, and some of those changes will be beneficial: a new, better offspring will have evolved. This also accounts for the frequently observed fact that certain attributes in a family lie dormant for a number of generations, then suddenly reappear for no apparent reason.
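The "astronomical numbers" claim is easy to check with toy figures. Both numbers below are invented for illustration, and are far smaller than real genomes allow:

```python
# even 1,000 variable genes with 3 common variants each give
# 3**1000 possible genotypes - a number hundreds of digits long
genes, variants_each = 1000, 3
combinations = variants_each ** genes
print(f"{variants_each}**{genes} has {len(str(combinations))} digits")
```

For comparison, the number of atoms in the observable universe is usually estimated at around 80 digits, so even this modest toy genome dwarfs it.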

One of the first observations of early civilised communities was that consanguinity risked giving birth to sub-standard progeny. The rise of the cities has been ascribed to the cross fertilisation of mixed populations. The accelerating expansion of our knowledge matches the mass travel made possible by railways and mass migration by aeroplanes. Variety is good.

So what is the source of these variations? Popular opinion has largely accepted that mutations are caused by damage from X-rays or similar, or are just chance errors. Darwin had a much more robust opinion in the Origin:

“I have hitherto sometimes spoken as if variations had been due to chance. This is a wholly incorrect expression, but it serves to acknowledge our ignorance of the cause of each particular variation”.

It is far more likely that nature makes its own luck. Rather than depending on pure chance to create the living world, it is probable that a basic part of the algorithms of reproduction includes transcription variants. What we call junk DNA is nothing of the kind. Arguably it is created precisely to provide variant alternatives, and forms the basis of at least some aspects of evolution.

Thus we have identified a common theme between the processes of evolution and the generation of new ideas, or creativity, in the brain. Both depend on a mass of variations. What we once identified as errors are, in fact, part of the basis of innovation.

The logic of evolution and the logic of creativity are both based on massive redundancy and massive variety, which together provide stability and resilience, and the potential for new solutions.

This is the exact opposite of contemporary computer software. At this stage in its development, precise accuracy is the paramount objective. As noted above, a feature of coding is that a program is either correct or wrong. A program that is 99.99% right can be useless.

This is not to say that sophisticated systems design in the future could not incorporate alternative choices that, in particular circumstances, would allow the program to follow unconventional routes. However, simply making this point highlights the problem and the significant difference between our natural systems and our invented systems. They are a very long way apart!


We can make one last intriguing observation. In the first few decades of the computing revolution most attention was centred on the hardware: transistors replacing valves; punched cards and paper tape, then keyboards; the invention of the screen and mouse; new types of memory media. Programming tended to concentrate on relatively simple tasks like mathematical formulae or routine processing tasks like book keeping. Everything was processed in batches. The older generation found the concept of programming beyond their grasp. There had never been anything like it before. Machines were machines that did things; they had never come across machines that did nothing! The Home Office bought a computer for the Metropolitan Police as late as 1968 with no funds in the budget for any programming, or software, at all. This fiasco gave rise to the formation of the Central Computing Agency.

Then in the space of two decades the entire situation reversed. The arrival of the ubiquitous personal computer towards the end of the 1970s spawned the design of an underlying 'operating system', enabling bright members of the public to compete with the professionals and write their own programs. Screens, keyboards and dot matrix, then laser and inkjet, printers replaced typewriters. Spreadsheets revolutionised accounting. In a few short years all hardware products had to support a standard software operating system, and every software product had to be compatible with, and operate on, that operating system platform. Software became the dominant power in the whole of computing and its associated industries. Hardware developments came thick and fast but they merely supported the vastly greater software industry. Hardware components will continue to change but will remain merely supportive of the software drive.

Why is this interesting?

For millennia the physical strength of individuals dominated developing civilisations. Might (hardware) was right. Power counted. The sensibilities of consciousness, even conscience (software), were not a high priority. Slowly, oh so slowly, brains bettered brawn. The industrial and scientific revolutions changed all that. Steam, electricity and machines began to replace muscles.

Now in the twenty-first century intellectual power is fast overtaking muscle power, changing the whole basis of economics, industry and government. Medical science is making great strides in replacement organs. 3D printing is enabling replacement organs seeded with the recipient's stem cells to provide more and more 'spare parts'. As our physical bodies wear out perhaps we can increasingly replace them. A recent paper in Surgical Neurology International doi.org/2c7 suggests we may soon be able to transplant a new body onto an old head. Not immortality, but an intriguing possibility nonetheless.

Just as electrically driven machines have taken over many menial jobs so computer controlled robots will do more and more medium and low skilled jobs. Long before the middle of the century we will need to educate the younger generation in completely new ways, perhaps for many into their thirties, even forties, to cope with the explosion of our knowledge. A great time to be alive, but to be young is very heaven.   


Some people argue that describing computers as 'electronic brains', coining the phrase 'artificial intelligence' and forecasting a 'singularity' when computers will take over the planet have done a disservice, raising apprehension and discouraging rational study of arguably the most valuable invention of civilisation.

A question frequently asked is "what are the benefits of a better understanding of the operation of our Brain?" Expanding our knowledge is always an advantage, but there is more at stake. The philosophy, strategy and policies of our whole education system depend on our knowledge of how the brain works. These policies are currently based on some rather vague foundations. For millennia communities could only afford to educate around 10% of the population. The prizes of the industrial revolution went to the nations that realised they had to educate their whole populations. For a hundred years the priority has been to pass on as much information as possible to the next generation. This gets progressively more difficult. Early specialisation risks choosing obsolescence.

The computer revolution is moving the goalposts again. As more and more tasks, not only manual but, increasingly, intellectual, are automated, the future lies with the creative professions, and with those communities that embrace these changes. Teenage education must move in the direction of identifying and nurturing talent in every sphere, then helping each individual to lay down the foundations for lifelong learning: stimulating their curiosity and broadening their interests. We need to help young people recognise the type of brain they have inherited, and then how best to use it: how to learn, how to concentrate, how to think. Subjects not often found in any curriculum!

The future lies with both competition and cooperation. Symbiosis between the genius of the human mind and the processing power of our machines is the way forward to unlock the extraordinary potential of the most exciting invention in the history of civilisation. Neuroscience including Psychology, Biogenetics, Biology and Computing can all teach, learn from, and augment each other. The science of software, the science of learning and the science of creativity are the key sciences of the 21st century. History suggests that cooperation is far more difficult than competition. That is the challenge.

The whole of recorded history will be seen merely as an overture to the future on the cusp of which we now stand.


“I seem to be like a boy playing on the foreshore, diverting myself with shining pebbles, whilst the great ocean of truth lies all undiscovered before me”.

Isaac Newton

Potential for a National Research Program


Expanding our understanding of the synergy, interrelationship and interaction of computing, biogenetics, biology, cognitive neuroscience and cognitive psychology in general, and in doing so widening our understanding of the functions of the Brain Mind in particular.

Considerable resources are being invested in trying to build hardware systems that emulate the neural structure of the brain. Typically, Manchester University is building a system that will emulate a number of neural networks using one million ARM processor cores. The plan is to provide a system that enables researchers to test mathematical models on these advanced systems.

They are all hardware!

DEFINITIONS can be a very useful starting point for further research, but sometimes long debates over lengthy definitions can be a hindrance. We offer here a short set of ‘sound bite’ definitions, which might advance the former without incurring the latter.

The BRAIN is the physical hardware: the neural networks of neurons, each with a nucleus, axons, dendrites and synaptic connections; the family of glial cells, including the astrocyte and oligodendrocyte cells; the glands, the neurotransmitters, the messenger molecules including the exosomes: the hardware. We can open a skull and see and touch them all.

The MIND is the mass of ephemeral patterns of electrochemical signals (and the electromagnetic fields they generate) flowing in waves across the brain, all bathed in floods of hormones.

The biological systems of the brain and mind have evolved as a means of organising all the organs of the body to behave as one cooperative, coordinated whole. The Brain Mind is primarily evolving as a learning system.

Information is repeatable patterns of behaviour, sequences of events, sets of facts and measurements. Information can be a sequence of events that describes how things operate or work, like a formula, recipe or algorithm. Language is a major form of information, as is all notation. A stream of active information – doing work – causing effects – is kinetic information.

Claude Shannon argued that information is the unifying principle of science, while John Archibald Wheeler suggested that in due course we will have learned to understand and express all of physics in the language of information.

Memory is information in a static form and stored where it can be accessed for future use: Memory is potential information.

Intelligence is the ability to recognise patterns and events from incomplete information, and to process existing solutions and algorithms accurately and quickly: to respond.

Thinking is the ability to devise new algorithms, solutions and ways of behaviour, to resolve problems: to create.

Consciousness is to be aware we are alive and of the events occurring in our environment, the effect they have on us and we have on others, and to have some control over our ability to respond and our acceptance of responsibility for our behaviour.
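The definition of intelligence above – recognising patterns from incomplete information – can be illustrated with a toy sketch. The word list and the `?` convention for missing data are invented purely for illustration.

```python
# Toy illustration of recognising a known pattern from incomplete
# information. '?' marks a missing datum; the stored pattern that best
# matches the fragments we do have is returned.

def recognise(partial, known):
    def score(candidate):
        if len(candidate) != len(partial):
            return -1  # patterns of the wrong shape cannot match
        # Count positions that are either unknown or agree.
        return sum(p in ("?", c) for p, c in zip(partial, candidate))
    return max(known, key=score)

words = ["neuron", "axon", "synapse"]
print(recognise("n?ur?n", words))  # → neuron
```

Even with a third of the letters missing, the stored pattern is recovered – a crude echo of how the brain completes a half-heard word.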

Subjects for detailed Study


Great strides have been taken in the field of machine information. We are on the cusp of having the entire body of human knowledge – art, culture, commerce, instructions, skills and emotions – instantly available. We need to focus on the science of the transference of information.


In the age of computing the most important skill of all may be the ability, during a lifetime of learning, to extract meaning from information, and so make the maximum use of it.

There is a strong case to review from first principles our whole concept of the education of the whole community, which over the centuries has been limited to educating a few, or some for just a few years, partly due to economic constraints. It seems very ill-judged that young people are forced to specialise narrowly in their early teens; given the accelerating speed of change, this is surely a hostage to obsolescence. In addition, given the rising longevity of the population and the fundamental changes in the patterns of employment, the community needs to design lifelong education plans, based on the widest curriculum for every member of the community, perhaps up to their early twenties, with individuals only beginning to specialise after a first general degree course that gives equal weight to intellectual, arts, crafts, design and creative subjects. By such time students will have travelled widely, gained work experience of many different professions, and had a far better opportunity to identify their individual talents.


It is remarkable that very little work has ever been done to define and quantify the Rainbow of Human Ability. Without a detailed knowledge of ability it is difficult to see how we can have much faith in our current methods of assessing and measuring aptitude. Public examinations at schools, universities and in some professions, together with IQ tests, are a start on a long journey. Building a database of the parameters and definitions of ability would be extremely difficult, but of enormous benefit. Without a solid basis to work from, how do we know whether we are even travelling in the right direction?

Once again the computing revolution can offer some useful pointers. A study done for the (then) Department of Trade and Industry with the Royal Institution in 2001 showed no pattern or correlation whatever among the credentials, education and conventional skills of some 2,000 people pursuing successful careers in computing. It has taken forty years to get computer science into the national school curriculum. Yet anyone can learn to code – program – in an hour! This has baffled the establishment. Designing computing systems is not a skill that is better or worse than that of conventional lawyers, doctors, accountants, managers, administrators, manufacturers, soldiers … It is just different. Arguably it is the most creative of all the professions we have ever devised.


Much work and many resources have been poured into this subject with varying success. Part of the problem is that there is no agreed definition of ‘intelligence’, and huge controversy over tests like the Intelligence Quotient or IQ test.

There have been three broad approaches.

The first research concentrated on designing systems that attempted to emulate decision-making, selection procedures and forecasting systems. These have tended not to be very successful, because the problems turned out to lie in the specification of the application, rather than in the program to implement these designs. Much effort has been devoted to building mathematical models of the economy. These have been partially successful, because they taught the designers a great deal about the relationships of various statistics as they tried to build them into their models. A very prestigious league table of the participants, including many central bankers, economics institutions and world organisations, is published annually. The most interesting result is that there is no consistent pattern of success (or otherwise). They remain very much more complex guesses about what to input: the computers just crunch the formulae! Not one participant forecast the 2008 crisis. Senior leaders in the financial markets were saying on the public record as early as 1998 that they did not fully understand the ‘financial instruments’ they were beginning to trade, which were being designed largely by maths graduates of our senior universities. The problem was that these mathematicians had little or no experience or knowledge of the curious behaviour of financial markets. The world economy fell into the gap between the two.

This line of research has, however, led to a series of products that had less lofty ambitions but, under the title of ‘Expert Systems’, have had more success.

Secondly, in the 1980s and 90s, as cognitive neuroscience began to be popular, considerable resources were put into designing systems that tried to emulate neural networks, principally as learning systems. They were used to help early robots ‘learn’ to find their way through mazes and the like.
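The learning systems of that era can be illustrated with a minimal sketch: a single artificial neuron (a perceptron) that learns the logical AND function by adjusting its connection weights after each mistake. This is illustrative only, not code from any of the projects mentioned here.

```python
# A single artificial neuron learns AND by trial and error:
# after each wrong answer, every connection weight is nudged
# in proportion to its input (the perceptron learning rule).

def train_perceptron(samples, epochs=20, rate=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # Fire if the weighted sum of inputs crosses the threshold.
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # Strengthen or weaken each connection according to its input.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_gate)
for inputs, target in and_gate:
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    assert (1 if activation > 0 else 0) == target  # all four cases learned
```

Nothing in the neuron ‘knows’ what AND means; the behaviour emerges from many small corrections, which is precisely what made such systems attractive as models of learning.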

Thirdly, considerable success has been achieved by systems that process very large volumes of information, then use complex algorithms to try to identify patterns – sometimes called ‘data mining’. These systems have been used to build software that wins chess matches and quiz games but, far more importantly, they are proving extremely useful in identifying possible medical epidemics very early, and they may well be very useful in analysing vast data sets to forecast other trends in behaviour and economics more accurately. These doors are just opening.
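The pattern-finding idea can be shown with a toy example: scanning a stream of daily case counts and flagging any day that rises well above the recent average – the sort of early-warning signal a real epidemic-detection system looks for at vastly greater scale. The figures here are invented purely for illustration.

```python
# Flag days whose count is far above the recent average: a crude
# statistical early-warning signal of the 'data mining' kind.
from statistics import mean, stdev

def flag_spikes(counts, window=7, threshold=3.0):
    """Return indices of days whose count exceeds the mean of the
    preceding `window` days by more than `threshold` standard deviations."""
    spikes = []
    for i in range(window, len(counts)):
        recent = counts[i - window:i]
        m, s = mean(recent), stdev(recent)
        if s > 0 and (counts[i] - m) / s > threshold:
            spikes.append(i)
    return spikes

daily_cases = [10, 12, 9, 11, 10, 13, 11, 10, 12, 48, 11, 10]
print(flag_spikes(daily_cases))  # → [9]: the jump to 48 stands out
```

Real systems add far more sophistication – seasonality, geography, noisy reporting – but the principle is the same: let the machine sift volumes of data no human could read.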


Almost no work has been attempted in this field as, like intelligence, there is no agreed definition of ‘thinking’. Over the years statisticians have made jokes about the probability of a monkey hitting a keyboard and composing a sonnet or suchlike, which has given the subject something of a bad name.

We are beset on many fronts with complex problems. It would seem intelligent to try to devise algorithms that allow us to input information about difficult problems and suggest alternative courses of action. If we could narrow the odds even slightly, that would be a success. Similarly, the intellectual process of defining the parameters and attaching values might of itself be beneficial. We could try to emulate the brain by first attempting to find ways to identify the implications of information.

The greatest achievement of the human race has been to find ways to predict possible future events. It would seem intelligent to use these machines of ours to help us extrapolate the implications of the information we can glean about current situations, then try to predict the possible range of future developments.

In the political arena, considerable benefit might accrue from building accurate models of the past and comparing what they would have forecast with the reality that occurred.

Some derision has greeted the suggestion that in many complex situations there are often known unknowns and unknown unknowns. Finding ways to identify, quantify and express these would be a very valuable, if modest, beginning.


Building a computer that could be conscious is the Holy Grail. It appears to be an insurmountable task. We cannot conceive of bathing the motherboard of our processors in some mix of hormone like chemicals that would give that processor any sensation of conscious experience. However, as we have explored above, it is neither neurons nor transistors that generate meaning. Neither neurons nor transistors are intelligent. No neuron in history has ever created a new idea. Likewise, the hardware of the brain is only the agent of consciousness.

The patterns of electrochemical signals that stimulate the glands into flooding the system cause the whole body to experience the sensations of hunger, thirst, fear and anger; the perceptions of visualising images, hearing sounds, smelling and tasting; and the emotions of touching and being touched. While hormones like adrenalin gear the muscles up for maximum efficiency, they also cause the neural networks to move into ‘top gear’. A heightened emotional state changes our behaviour: it can strengthen memory formation, assist access, speed up responses and select the best available solution: facilitate action.

It is certainly possible to design software that modifies other programs. While our computers continue to be made of metal, it is doubtful we could cause the cover to tingle with excitement or its heart rate (clock pulse) to race. However, if we could specify what combinations of circumstances should cause a computer to feel angry, for instance, we could set the parameters so that all subsequent programs were modified to respond in an angry fashion.
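A minimal sketch of that idea: a global ‘emotional state’ that every subsequent response is filtered through. The trigger (repeated failed requests) and the ‘angry’ style are invented purely for illustration; nothing here models real affect.

```python
# Every reply passes through the machine's current 'emotional state',
# so one parameter modifies the behaviour of all later responses.

class Responder:
    def __init__(self):
        self.frustration = 0  # rises with each failed request

    def handle(self, request_ok, reply):
        if not request_ok:
            self.frustration += 1
        # Once frustration passes a threshold, all replies turn 'angry'.
        if self.frustration >= 3:
            return reply.upper() + "!"  # terse and emphatic
        return reply

r = Responder()
print(r.handle(True, "done"))   # → done
for _ in range(3):
    r.handle(False, "error")    # three failures raise the frustration level
print(r.handle(True, "done"))   # → DONE!
```

The point is architectural: the ‘emotion’ lives in one shared parameter, not in any individual instruction, just as no single neuron carries a mood.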

We have been writing programs like this since the 1960s. Early ‘trace’ routines were used to diagnose errors in a program. The program to be tested is stored as a data file. The trace routine is loaded into the processor and deconstructs the program under test one instruction at a time, as though it were just data. In this controlled situation the trace program causes each individual instruction to be executed, then prints it out step by step, so the programmer can see exactly what happens. Thus one program can execute another program as though it were simple information and, therefore, by extension modify it. Nowadays, many systems are built using this architecture.
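The trace idea can be sketched in a few lines: the program under test is held as plain data, and the tracer executes it one instruction at a time, printing each step. The tiny three-opcode instruction set here is invented for illustration.

```python
# The 'program' is just a list of data until the tracer interprets it,
# executing and printing one instruction at a time.

def trace(program):
    """Execute a list of (opcode, operand) pairs, printing each step."""
    acc = 0  # a single accumulator register
    for step, (op, arg) in enumerate(program):
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "MUL":
            acc *= arg
        print(f"step {step}: {op} {arg} -> acc = {acc}")
    return acc

result = trace([("LOAD", 2), ("ADD", 3), ("MUL", 4)])
print(result)  # → 20
```

Because the tracer holds each instruction as data before executing it, it could equally well alter that instruction first – which is exactly the step from tracing a program to modifying one.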

Whether such a system could be said to be conscious, or only to be behaving in a conscious way, is a moot point, but it might be a way station on the journey. It is entirely possible to design systems that express an opinion about their own performance or behaviour: how long they took to deal with queries, for instance. If a computer was not used for a period, it could be programmed to announce that it was bored! These suggestions may sound trivial, but they are in the same class as programs designed to hold conversations to test whether they are computers or humans: the famous Turing Test.

These projects are valuable because they help us better understand the underlying architecture of how a computer might be programmed to go about these tasks; they also open windows on how the brain mind operates.

Perhaps more importantly, they help us design better interface systems. The future probably lies in direct physical connections between our machines and our brains, or at least the nervous system: melding together the massive memory and processing capacity of our machines and the unique human capability to extract meaning from information, make decisions, predict possible future events, make judgements and create new ideas, concepts, solutions and better behaviour.

Charles T Ross


© Brain Mind Forum 2015.
