
Data Storage by Element

Author:  David S. Ullery

Date: May 8, 2009.

See time stamp on original version and latest version.

There have been several updates and refinements.


Every element within the periodic table of elements contains unique properties that may be exploited for a new kind of data storage system.

For example, the full electrical charge capacity, mass, reflection of visible light and atomic light spectrum signature combine to form a unique set of values that can be read in up to four different ways simultaneously.

Elements that are solid at room temperature are ideal, because the cost is lower.  A slide, for example, could contain a message in the corner made up of some ideal combination of solid elements such as gold, copper, silver, platinum, carbon, silicon, lead, calcium, or other economical-to-obtain elements.  Perhaps carbon could serve as a kind of “end-of-program” marker, if there are three (or some agreed-upon number N) in a row, for example.

One reader or a combination of readers could be utilized to find and scan the atomic-sized message (data, text, ID tag, program file, etc.).  An amplified electromagnetic signal could help distinguish electrical charge differences among the various elements deployed (“elements deployed” refers to whichever elements are selected after research, ROI analysis, and probably a form of standardization at some level).  Increasing the number of atoms at each “bit”, and/or varying that number based on the specific element type, may further increase the differences between bits and serve as amplification for all of the reader types mentioned.

If gold and silver were utilized, for example, it may be that a three-to-one ratio of gold-to-silver atoms is required to economically or technically distinguish the difference. There are other ways that may be exploited as well.  Perhaps multiple readers could be exploited to do a faster read utilizing statistical analysis, fuzzy logic, an AI “expert system”, or some combination to optimize the accuracy of each ‘READ’.

Some set of elements that are both solid at room temperature, and widely distributed within the periodic table, may be the best set of candidates for scientific research in this area.  If both the masses and electric capacity are quite distinguishable among each of the selected elements, then it may be a good set of elements to select, provided the read/write/production costs are all reasonable.

If five elements may quickly and easily be distinguished in an economical manner, then a base five (radix 5) system could be employed to read and write the system. An ink-jet-printer-like device, with containers of each of the elements utilized (based on research and ROI), could be utilized to write the elements onto whatever material is desired.  A simple converter-mapper interface could be utilized to convert the system to whatever hardware or software interface specifications (including the radix change-over mapping to binary) that may be desired.
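
As a sketch of the converter-mapper idea, assuming a hypothetical digit set of five solid elements (the element names and their digit assignments below are invented purely for illustration), the radix-5 write/read mapping might look like:

```python
# Hypothetical "converter-mapper" between a base-5 (radix 5) element
# encoding and ordinary integers. Which elements are actually used
# would depend on research and ROI, as noted above.
ELEMENTS = ["carbon", "silicon", "copper", "silver", "gold"]  # digits 0..4
DIGIT = {name: value for value, name in enumerate(ELEMENTS)}

def write_base5(number):
    """Map an integer to the sequence of element 'digits' that an
    ink-jet-like writer would deposit, most significant digit first."""
    if number == 0:
        return [ELEMENTS[0]]
    digits = []
    while number:
        number, remainder = divmod(number, 5)
        digits.append(ELEMENTS[remainder])
    return digits[::-1]

def read_base5(deposited):
    """Map a scanned sequence of element digits back to an integer."""
    value = 0
    for name in deposited:
        value = value * 5 + DIGIT[name]
    return value
```

Here `write_base5(42)` would emit the deposit sequence silicon, silver, copper (42 is 132 in base 5), and `read_base5` inverts the mapping back to binary-friendly integers for whatever hardware interface is desired.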

________________________________________________________________________

**Note: © All rights reserved with the following exceptions:

You may reference or copy any of the material on this site as long as you do not make a profit from it, your reference or copy is freely accessible on the Internet, and your URL site is on either the “.com” or the “.edu” domain. Please provide links back to the original Upgrade01A reference and give credit to the author(s). – That is one of the purposes of this blog.

Stick to the spirit of this website and you are on the right track!  Note: many of the musical pieces have left room to improvise on top of with an additional track or two.  For example, none have lyrics or human song yet.  Feel free to add on. An mp3 could be provided upon request as long as I am given credit for my work and this website is referenced, if you have your music on line.  THANKS!!!!  Of course, editing of the original tracks is permitted as well.

Thanks! David S. Ullery ( uPgRaD3 z3R0 0n3 A)

________________________________________________________________________

Several Novel Ways of Storing and Manipulating Data
Version: 3.4

Author: David Saxton Ullery

———————————————————————————————————————————————————

The information in this posting may not be copied or used to create any technology without permission. Not-for-profit uses are permitted. Please comment and include any suggestions and questions that you may have.  Thanks!

———————————————————————————————————————————————————

Introduction

This article briefly outlines a few novel approaches that could potentially lead to dramatic increases in the amount of information that may be stored and manipulated at the nanometer scale, and shift the paradigm in the way information is traditionally manipulated and perceived. Some techniques demonstrate how a large amount of data could be stored directly as symbols or shapes; others outline possible alternative approaches to storing data by exploiting different properties of atomic elements that may offer insight into radically different approaches to the very problems that nanotechnology companies and researchers are working on today.

New thinking about exploiting previously unconsidered yet readily differentiated properties opens the door to new technologies that may be researched and ultimately employed as viable commercial products. Thus, the goal is that reading and pondering the concepts presented here will help trigger new ideas that lead to much more economical approaches, new ways of thinking about computation, and ultimately newer, more powerful computational machines that do not necessarily follow the traditional Von Neumann architecture.

When examining future nanotechnologies for reading and writing information, storing data at a higher symbolic level, rather than only in simple binary format, should be examined as an alternative to the standard architecture of today’s storage technologies. The approaches given here deal with storing information at the nanometer scale, but do not directly exploit quantum mechanical properties, nor do they depend on DNA or wetware.  Instead, they depend upon both exploiting the unique properties of the atomic elements, and our increasingly sophisticated ability to move atoms to form any physical shape we desire, including directly storing symbols in their “natural” form. By purposefully positioning groups of atoms into various patterns, they may be interpreted in new and unique ways by the technology that reads, writes and manipulates the data.

Storing information may be enhanced in another way: More economical and useful ways of reading, writing, and manipulating data can be achieved by exploiting the informational differences inherent in different elements, along with the differences in a single element and its various isotopes.  Different elements, isotopes, and molecules each have properties that could be exploited other than their quantum mechanical properties, and other common approaches that nanotechnology researchers are already examining.  For example, every element has its own unique mass, atomic number, number of electrons, electromagnetic properties, chemical properties,  size, shape, and so on. Shapes are especially interesting when configured in simple molecular structures, crystal structures or when atoms are physically moved in a purposeful manner atom-by-atom to form simple text or other symbols that can later be read and interpreted utilizing relatively simple algorithms.

Mixing and combining each of these and other ideas presented here and extrapolated upon by the knowledgeable reader would enhance all of these approaches in a synergistic manner. It opens up possibilities for alternatives to the traditional Von Neumann, binary-based architecture, yet does not force such a change.

Element Detection

Hydrogen (H) and Deuterium (D or ²H)

Alternative approaches for storing binary data using an element and its isotope.

Note: An element other than Hydrogen may be a better choice, but the concept is the same. However, Deuterium is very stable, not radioactive, and relatively plentiful in ocean water.

Since hydrogen and deuterium have their own unique atomic weight and emission spectrum, it should be eventually possible to detect tiny amounts of either, and use them to represent binary information.  Another element/isotope pair should be considered, if there are known techniques for detection (reading) differences, and more efficient ways of switching states between the element and its related isotope. Other elements and their isotopes may have other properties, such as differing diameters that may be exploited more economically than hydrogen.

Here are a few ideas to consider:

  • Use hydrogen, with mass number 1 to represent the zero (“0” or “off” or “no”) state.
  • Use deuterium with mass number 2 to represent the one (“1” or “on” or “yes”) state.
  • Read the values using mass spectrometry, infrared spectrometry, or other non-destructive spectrometry methods utilizing much shorter wavelengths such as UV, or perhaps bounce a single photon off of each.  A photon bounced off of a single hydrogen atom would behave differently than one bounced off of a single deuterium atom.  Another approach may be to utilize a modified version of the scanning tunneling microscope (STM), if it can be refined to the point where it could read the difference between an element and its isotope. New forms of spectrometry (or other electromagnetic techniques) that use much higher frequencies than ultraviolet may someday be utilized to detect size, position, mass, and electromagnetic properties.
  • To write data: Store the gases of each type and inject the atoms one by one into the bit containers. Another approach may be to find a way to push atoms into place, perhaps utilizing a modified, greatly shrunk down version of STM (see sections that follow for a bit more on this).  Perhaps a neutron beam could be used in a novel way to convert H to D, thus “burning” ones into memory in a manner analogous to PROMs and EPROMs.
  • Each tiny collection of atoms can be stored inside a single carbon buckyball, with each “bit” separated by an empty buckyball or by some other means, such as a tiny number of silicon atoms separating each bit, such that the state of each atom or tiny cluster of atoms is not easily disturbed. Another approach may be to load up a nanotube or a column-like structure created using a few nanotubes.  Each atom or atom cluster would be fed into one end of the column, possibly followed by a separator element (either a string or clump of one or more atoms such as silicon, or a buckyball), followed by another atom or atom cluster.  Each atom or cluster would represent a zero or a one and could be read from one end of the column one at a time until the last atom is read… More on this in sections below.
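
The column read described in the last bullet can be sketched in a few lines, assuming the reader reports one symbol per atom and that silicon (`"Si"`) serves as the hypothetical separator element:

```python
# Hedged sketch of the "column read" idea: a nanotube column holds
# hydrogen (bit 0) or deuterium (bit 1), separated by silicon spacer
# atoms. The symbols stand in for whatever the reader (spectrometry,
# STM, ...) actually detects.
BIT_FOR = {"H": "0", "D": "1"}
SEPARATOR = "Si"

def read_column(atoms):
    """Read atoms off one end of the column, skipping separator
    atoms, and return the stored bit string."""
    bits = []
    for atom in atoms:
        if atom == SEPARATOR:
            continue
        bits.append(BIT_FOR[atom])
    return "".join(bits)
```

For example, a column loaded as H, Si, D, Si, D would read out as the bit string "011".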

Multiple Elements

This idea may be practical for memory storage of the more permanent kind, because writing may prove to be exceedingly slow for rapid computation.  The ability to distinguish between different elements may be more practical for reading, but writing with multiple elements may prove to be difficult.  A technique inspired by an ink jet printer could work – the valves would need to be extremely tiny – perhaps made from carbon nanotubes.

  • Use any two elements that are easy to distinguish when only one, two, or three atoms of each type are present.  Binary numbers would be represented using one element as the zero value and the second element as the one value. Using a large atom such as lead to represent the “1″ value, and a much smaller atom, such as hydrogen, to represent the “0″ may prove beneficial.
  • Use multiple element types, with each element representing a different value. The radix of the system would depend on the number of easily readable elements that can be stored into a tiny space using one, two, three or any tiny number of atoms each.

Using this scheme, hydrogen could represent a “zero”, helium a “one”, …, oxygen a “seven”, and so on (atomic number minus 1) for each element.  The radix may be octal, decimal, base 36, or any base up to the number of elements used. Carbon, silicon, and perhaps gold may need to be skipped, since they are needed to construct the memory containers and may interfere with the readings.  Rare elements may be avoided due to their cost or radioactivity.
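
A minimal sketch of this “atomic number minus 1” digit scheme, using an assumed subset of elements (carbon and silicon deliberately absent, per the note above, so this particular digit set is an illustration only):

```python
# Each element encodes the digit (atomic number - 1); a sequence of
# element "digits" is then read in whatever radix is standardized.
ATOMIC_NUMBER = {"H": 1, "He": 2, "Li": 3, "Be": 4, "B": 5, "N": 7, "O": 8}

def element_digit(symbol):
    """An element's digit value is its atomic number minus one."""
    return ATOMIC_NUMBER[symbol] - 1

def read_value(symbols, radix):
    """Interpret a scanned sequence of element digits in the given radix."""
    value = 0
    for symbol in symbols:
        digit = element_digit(symbol)
        assert digit < radix, "element outside the chosen digit set"
        value = value * radix + digit
    return value
```

In octal, for instance, the sequence oxygen, helium reads as digits 7 and 1, i.e. the value 57.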

Similar to the two-element technique, it may be of benefit to select elements whose atomic numbers (and masses) differ by large amounts, rather than selecting closely related ones.  Selecting elements from different groups within the periodic table may prove to be exploitable and therefore useful.

  • Another binary alternative would be to stick to a single element.  Use one atom, perhaps xenon to represent a “0″, and use two side-by-side atoms of the same element to represent a “1″.  A variation on this theme could be to use zero atoms to represent “0″, and a cluster of one or more atoms to represent a “1″.
  • Another approach is to use atoms of dramatically different size to represent differing values. The heavier elements are much larger than the lighter elements, and technologies may exploit these differences.  Combining a few larger atoms together would increase these differences.  Atoms may either be placed side-by-side or stacked one upon the other to produce a taller, nanometer-scale mountain. Using this approach, the data may be interpreted either digitally or analogically.  An analog interpretation applies if the mountains, the side-by-side clusters, or some combination are made of varying elements with different-sized atoms. One can imagine a nanometer-sized head, not unlike a tiny record needle reading analog data, with an interface taking in the data and then, depending on the architecture of the future computing device and the context of the data, manipulating the data directly as analog data or digitizing it.  Multiple versions of digitized data are envisioned here, depending once again on the context: (a) interpret atomic-sized mountains over a certain threshold as a “one”; (b) interpret varying heights, or other features (total mass, …), as an analog value to be converted to a digitized value; (c) interpret the atomic stack of various element types as a stack of bits; (d) interpret data in any manner where it is economic to read, write and manipulate.
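
Reading (a) above, the threshold interpretation, reduces to a one-liner; the heights and threshold below are invented numbers purely for illustration:

```python
# Digital reading of atomic "mountains": a nano-scale head reports the
# height of each mountain, and heights over a chosen threshold are
# interpreted as "1". Units are arbitrary in this sketch.
def digitize(heights, threshold):
    """Interpret each mountain height as a bit: 1 if over threshold."""
    return ["1" if h > threshold else "0" for h in heights]
```

A track of mountain heights 0.3, 1.2, 0.1, 2.0 with a threshold of 1.0 would thus be read as the bits 0, 1, 0, 1.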

For any approach selected, it may turn out that only elements that are solid at or near room temperature are practical, thus ruling out all of the gases and liquids.  Alternatively, simple molecules, such as NaCl, or other salts could be used, if there are better techniques for reading and/or writing molecules. Even H2O – water molecules could be employed if the data is kept cool enough – erasing this data would obviously be very simple.

Reading and Writing the Atoms for Each Alternative Technique

In either the element/isotope or the multi-element concept, one could “write” the atoms (or molecules) into a nanometer sized tube that is transparent enough for electrons or photons to bounce off of each atom, one at a time. It may be necessary to use one element (may need to be several atoms long to insulate the properties of each “bit”) as a type of tag or marker to separate the information atoms, especially if more than one of each element is needed.

As an alternative to bouncing electrons or photons off of each atom, a reading head and writing head could be created using technology based on already existing scanning tunneling microscopes (STM). This approach may be particularly useful in the single-element approach, where a pattern (0 or 1; 1 or 2 groups) is used.

Exploiting the unique particle velocities or resonance frequencies generated by elements or molecules of different mass may be combined with other technologies that measure differences between atoms or molecules, in order to read atoms of different elements.  Acoustic and electromagnetic waves may be utilized to generate frequencies that induce large-amplitude vibrations within a system of atoms lined up in patterns.

The nanotube structure could be used to keep the atoms or atom clusters (molecules) in place, like so many beads on a string, or more like a line of different color peas inside a transparent straw.  The design close up may resemble chicken wire. It may not be a completed tube, but merely a trough or rain drain-like structure that is “U” shaped from the end instead of “O” shaped.

Rows of these tube or trough-like nanotube structures could be connected together to create a two dimensional matrix, or a single, very long structure could be wound up into a disc, like a CD or DVD disc and read from the inside out.

It is interesting to note here, regardless of the selected alternative, that an STM reader could conceivably read large chunks of atoms at a time, projecting different shapes that could then be decompressed by  shape-recognition software into standard bits, bytes, or any other form, including the original form.  A string of ones and zeros physically represented by atoms or small clusters of atoms would form unique shapes due to the distribution of mass, electromagnetic, and other properties.  To read these shapes as chunks may require the trough to have a certain amount of “wiggle room” so that the atoms may not form a completely straight line. Different elements or molecules may be readily coaxed into specific shapes by subjecting them to different electric charges, magnetic fields, chemicals, or simply by squeezing them into or through other nano-sized machines or templates (like a tiny cookie cutter).

Nanometer Scaled Symbolic Writing

The following outlines the concept of storing data more directly as high-level text or other types of high-level symbols, thus effectively compressing much more information into bit-sized areas for simple text messages. In some cases, depending on the current state of the art, an “atom” may be replaced with “a cluster of atoms”, “a molecule”, or “a cluster of molecules”, but in any alternative the real estate used must be substantially smaller than the space required for a single bit on today’s storage systems.  Given that a nanometer is 10 to the -9th meters, a typical atom’s diameter ranges from about 0.1 to 0.5 nanometer, and today’s memory chips store bits at the 45-nanometer level, it seems we have some room to work with.
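
The “room to work with” claim can be checked with quick arithmetic, taking a mid-range 0.3 nm atom against a 45 nm bit cell:

```python
# Back-of-the-envelope check: how many atoms fit in the footprint of
# one 45 nm memory cell? The 0.3 nm atom diameter is the mid-range of
# the 0.1-0.5 nm figure quoted above.
bit_cell_nm = 45.0        # feature size of a current memory cell
atom_diameter_nm = 0.3    # typical atom diameter (mid-range estimate)

atoms_per_side = bit_cell_nm / atom_diameter_nm   # 150 atoms per side
atoms_per_cell = atoms_per_side ** 2              # atoms per cell area
print(int(atoms_per_cell))
```

So on the order of twenty thousand atom sites fit in the area that today stores a single bit, which is the margin the symbolic schemes below try to exploit.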

Let us examine some potential ways we might represent information at the atomic level.

  1. Store text, including entire computer programs using ordinary text, but write the text at the tiniest possible size. Remember the I.B.M. Logo?  All three letters contain a total of 35 xenon atoms (atomic number 54). Each atom is spaced at what looks like one or two atom widths apart on average.  According to the article from the link above: “In 1989, IBM scientist Don Eigler was surprised to learn that in addition to using an STM to look at tiny things he could also use it like a pair of tweezers, to move things as small as a single atom.”  Suppose that the text could be crushed down to use no more than 8 atoms per character –  the same number of bits used in today’s binary ASCII code, yet still be kept in the same general shape as the actual letters, or perhaps some new, more compressed, yet easily recognizable set of shapes. It may be easier for a technology to read the entire glob as a shape than it would be to read each atom as a single bit.
  2. Use lines of atoms of different lengths to represent different values. Example: “.”=0, “-”=1, “–”=2, “—”=3; where each “-” is one, two or three atoms in length… perhaps larger clumps would be needed or more economical.  Molecules could replace atoms, if kept very tiny (whatever is the least number of atoms or smallest size molecule that can be detected at high speed).  This is simply a variation of “1″ above, but keeping each shape more or less as a line; the length of each symbol would grow with the number of characters represented.  If we kept the number of symbols small, say to 10 or fewer, then the longest line would be only 10 atoms wide. One could imagine building a code based on combining various symbols without necessarily resorting to a number system.  It would be constructed like a kind of shorthand.
  3. Use different shapes to increase the symbol set without increasing the number of atoms.  Example: “+” and “^” could be represented using four and three atoms, respectively.  This is really no different than option “1″, but could be interpreted as a variation on option “2″, or as a hybrid between the two where the atoms are allowed to occupy more than a single row.
  4. Use marker symbols to distinguish the representation of any of the above representations to create a hybrid.  Marker symbols may be actual text-like shapes, or combinations of shapes not unlike XML tags, or they may be atoms of different elements or molecules, as discussed in previous sections. A processor configured to read multiple symbolic representations may have the ability to reconfigure its actual hardware, or load different algorithms into its memory.  It may be that the actual processor is of a traditional silicon/binary type, with a suitable interface that acts as a connector/adapter/translator/mapper between the computer and the storage.  Alternatively, the entire computer may be constructed to directly manipulate these symbols, or at least to readily convert them in a much more tightly coupled manner than a traditional computer could.
  5. The symbols used may not resemble any of the symbols familiar to us like those on our keyboard.  It may be more convenient to exploit the shapes that clusters of atoms tend to form when combined together.  Crystals are one example, but they tend to have several variations.  The point is that the shapes need to be as easy as possible to construct, be stable, yet be consistent and deterministic. If a given element, with a limited number of atoms forms the same set of shapes, it may be possible to filter them so that they can be used to represent a set of symbols.
  6. Utilize binary or some other radix where needed, or where more generalized information is needed.  Binary data may still be stored as a shape rather than utilizing the atoms or clusters of atoms as simple zero-one bits.  More is discussed about this below.
  7. A two-dimensional photograph could be compressed down as a simple black-and-white photo (atom/no atom), or a color photograph (1, 2, or 3 atoms, where one=”red”, two=”blue” and three=”yellow”).  Simple markers could be used to (A) tag that it is photo information, and (B) tag the next row of an array, or simply tag the actual “bit” length of each “row” of pixels – in other words, to mark out how many patterns of “1″, “2″ and “3″ would be needed.  Note, however, that four distinct values can be represented in binary using just two bits (atoms): “00″, “01″, “10″, “11″.
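
Option 2 above, the line-length code, can be sketched as a simple encode/decode pair, where a run of (value + 1) atoms encodes each value and an empty site separates runs (the 0/1 site representation is an assumption of this sketch):

```python
# Line-length symbol code: a value v is written as a run of (v + 1)
# atoms, and runs are separated by one empty site, so a reader only
# needs to measure run lengths.
GAP = 0  # an empty site separating symbols; 1 marks an atom

def write_line(values):
    """Lay out each value as a run of (value + 1) atoms, with a gap
    between consecutive runs."""
    sites = []
    for v in values:
        sites.extend([1] * (v + 1))
        sites.append(GAP)
    return sites[:-1]  # drop the trailing gap

def read_line(sites):
    """Recover the values by measuring each run of atoms between gaps."""
    values, run = [], 0
    for s in sites:
        if s == GAP:
            values.append(run - 1)
            run = 0
        else:
            run += 1
    values.append(run - 1)
    return values
```

A round trip such as `read_line(write_line([0, 3, 2]))` recovers the original values, and the longest symbol in a 10-value alphabet is only 10 atoms wide, as noted above.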

Once again, a modified scanning tunneling microscope (STM) technology can be used to write and read the data using either technique or any hybrid combination.

Rather than storing information as bits, the information is stored and read directly at a  symbolic  level.  Simple software algorithms would be used to translate the characters and shapes to be used  and interpreted as needed.

For example, a Java program may be stored as source code using a tiny number of atoms to represent each character.  The Java program would be read using the STM-based reader, then translated (decompressed) into byte code and run on a conventional computer, if desired.  Alternatively, an entire CPU architecture could be built around the new storage technique that directly manipulates the stored symbols.  Literal XML tags could be used, if desired, to mark code and data sections.

The multiple techniques presented in this section and the previous section could be combined.  Use the one/two-atom-pair technique to store binary code.  Separate the code with special tags (atom-by-atom XML or otherwise).
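
Parsing such a combined stream reduces to splitting on marker atoms; the choice of gold (`"Au"`) as the tag element here is purely a made-up example:

```python
# Hedged sketch of the combined scheme: a stream of atom symbols in
# which a marker element (hypothetically gold, "Au") separates
# sections, in the spirit of XML tags between code and data.
MARKER = "Au"

def split_sections(stream):
    """Split the atom stream into sections at each marker atom."""
    sections, current = [], []
    for atom in stream:
        if atom == MARKER:
            sections.append(current)
            current = []
        else:
            current.append(atom)
    sections.append(current)
    return sections
```

A stream like H, D, Au, D would thus be read as two sections: the bits [H, D] and the bits [D], each to be interpreted in its own context.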

Perhaps the text could eventually be shoved into super long, nanotube-based structures and wound up into a disk storing trillions of times the data currently stored on today’s high definition DVDs.  This would be similar to the device described in the previous sections.

In scenarios where memory reads could be relatively slow, it would make sense to pack more data, using fewer atoms to represent it.  The link above shows that using current technology, at least 3 letters can be written and read using an STM.

Once the concept of reading symbols sinks in, it becomes apparent that the most general form of information can be represented in binary, and it may seem that information could be compressed better if data is always represented this way, but the concept of utilizing symbol recognition could enhance this most general case.  Using just two atoms (or groups or molecules), we can arrange them in the following ways:

( 1 ) “- “      [just one atom followed by no atom, or “10”] ,

( 2 ) “–”     [two atoms next to each other, or “11”],

( 3 ) ” -”      [no atom, followed by one atom, or “01”]

( 4 ) “=  “    [two atoms, one above the other, with no atom next to it, or “1010”],

( 5 ) “\ “      [two atoms at an angle, down and to the left, or “1001”]

( 6 ) ” /”      [two atoms at an angle, down and to the right, or “0110”]

( 7 ) “_ “      [no atom on top and one atom below, or “0010”]

( 8 ) ” _”      [no atom on top, and one atom below and to the right, or “0001”]

Of course, in theory, there could be up to four atoms within the given space, thus allowing for 15 non-empty patterns, but the reader and writer must both be able to distinguish all of those patterns in the same tiny space.  It may be that multiple atoms packed closely together will not retain a stable pattern.
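
The counting claim above can be verified by enumerating the 2x2 occupancy grids; the bit order used here (top-left, top-right, bottom-left, bottom-right) is one plausible reading convention, consistent with four-bit strings such as “1010” for “=”:

```python
# Enumerate the two-row, two-column occupancy patterns: each of the 4
# sites either holds an atom (1) or not (0), giving 16 patterns, 15 of
# them non-empty.
from itertools import product

def pattern_value(grid):
    """Read a 2x2 occupancy grid as a 4-bit value, scanning the top
    row left to right, then the bottom row."""
    (a, b), (c, d) = grid
    return (a << 3) | (b << 2) | (c << 1) | d

all_patterns = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]
non_empty = [g for g in all_patterns if pattern_value(g) != 0]
```

Under this convention the “=” pattern (two atoms, one above the other, empty column beside them) reads as 1010, i.e. the value 10, and exactly 15 non-empty patterns exist, matching the count above.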

Once the technology reaches the level where a single atom could be read, then pure binary representation may be the best technique in 100% of the cases, but using shape recognition may still be the best way to interpret the information. It may be more practical to limit the number of atoms within a given space and interpret the limited number of shapes within that space as a particular value.  It would work in a manner not unlike using braille for the blind. If all of the available space can be filled in with every combination, that is great, but we can still exploit the concept without completely utilizing every conceivable combination and permutation.

Unusual Processing NanoMachines That Eat Data

Another variation on reading data could be of the destructive kind.  Read the atoms by grabbing them off of their storage surface and literally pass the data into the processor.  A machine that works directly with shapes instead of bits could process different symbols by filtering them into different locations.  Using lined-up symbols like those described in “2” above could direct the data based on each symbol’s length: longer symbols could not enter shorter slots.  Data contextualized as numeric or alphabetical could quickly be sorted by length (spaghetti sort), and addition would be fairly straightforward (add the lengths).
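
The spaghetti-sort and add-the-lengths operations can be simulated directly, treating each symbol as a rod whose length is its value:

```python
# Simulation of the physical "spaghetti sort": symbols are rods whose
# length encodes their value; sorting means repeatedly grabbing the
# rod that sticks out furthest, and addition means laying rods end
# to end.
def spaghetti_sort(rod_lengths):
    """Repeatedly pick the longest remaining rod (descending sort)."""
    rods = list(rod_lengths)
    ordered = []
    while rods:
        longest = max(rods)
        rods.remove(longest)
        ordered.append(longest)
    return ordered

def add_rods(a, b):
    """Addition by laying two rods end to end."""
    return a + b
```

The physical machine would perform both operations in effectively constant mechanical steps per symbol; the code above only models the logical result.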

In another context, the data could be interpreted as an algorithm for constructing multiple copies of another nano-mechanical machine.  The symbols may consist of various-length rods, gears, levers and pulleys as the “data” section, intermingled with short instruction sets. The processor may be designed cleverly enough to be capable of understanding how to manufacture thousands or millions of tiny machines.  The symbol “6” followed by the symbol for a gear could indicate that the machine is to grab the next six gears out of the gear repository, or instruct another part of the machine to build six gears to the size of the “data” gear, and perhaps to use the same element in doing so (the data gear may be made out of carbon or gold, for example).

Utilizing the direct, literal “grabbing” of data, the mass of the “bits” could be exploited by a machine designed to take advantage of data in this context.  For example, data could be directly sorted or added together by mass; larger atoms or more massive molecules could be filtered so that the computing machine would be reconfigured to perform different operations.  Suppose nano-sized gears could only be turned by an atomic mass of 18 or greater – one oxygen atom plus two hydrogen atoms, or 18 hydrogen atoms, for example.

Grabbing data may be as straightforward as pushing “end of file” atoms into one end of a nanotube, thus allowing the program to push out the other end in a FIFO manner, down into a slot where the calculating machine sits.

Conclusion

We can utilize shape recognition at the atomic or molecular level to store binary information. If a single element is utilized, then the shape alone could represent an arbitrary value.

We can utilize shape recognition at the atomic or molecular level to store information directly as symbols, potentially packing more information into the same tiny space where a single bit may ordinarily be stored.  This same technology could then be utilized to ultimately reduce the information back down to the binary level, using the same techniques that we use to detect shapes.  It may turn out that multiple shapes are more readily recognized than directly using the atom (or smallest practical “unit”) in the more straightforward way of simply looking at “atom = 1″, “no atom = 0″, or “two atoms = 1 and one atom = 0″ in a linear manner.

We could potentially use different elements or their isotopes to store more information into a single bit without resorting to quantum computer effects, but by exploiting the different spectrum and/or mass, and possibly other differences among the elements. If combinations of elements are used, then atomic number or electromagnetic properties could be utilized to give a single physical shape more than one value.  Two star shapes with different mass could represent two distinct values, for example.  Two squares, made with the same element but with differing numbers of atoms or with differing spaces between the atoms potentially could be exploited.

We could potentially use different elements as markers or tags, not unlike tags utilized in XML.  We could literally create XML tags just like the famous IBM logo was created. Alternatively, we could employ the idea of packing different elements or molecules between sections of data to be interpreted as a change in context.

We could use nanotubes to stuff atoms into – to be read one at a time, or potentially read as chunks whose unique shapes are later decompressed.  It is even conceivable that a highly sophisticated machine could interpret and manipulate the chunks and shapes directly. The shapes of each chunk may be exploited in the design of the processor itself.  Taken further, the tags or markers could be utilized to directly modify the processor.

We could use a plane surface to read shapes or combine this concept with the nanotube or buckyball concept mentioned earlier.

Processing data at the symbolic level may open up new and unique approaches to computing.  The processing techniques could be simulated using ordinary, binary computers by building a virtual machine designed to manipulate symbols. The simulations would be used as a discovery process so that alternative architectures may be explored.

Finally, we may discover that these are not good ideas at all, yet they may toggle the mind of someone else in science, art, or music in some yet unknown way.  Perhaps the idea is on the right track but requires another approach that someone else may come up with. Perhaps someone in a completely different field may look at this posting, sleep on it, and come up with another novel idea that is directly useful or creates yet another tangent.  A fractal-like graph may result, pointing toward some great idea that solves some totally unrelated problem.  The final result may be four or five or six or one hundred people down the chain – it may loop back around to me…

In the futuristic, online, open-source science fiction novel “Upgrade 01A”, computers that utilize nanotechnology (some perhaps similar to what is briefly outlined here), some based on DNA, some based on quantum computers, others based on yet-unheard-of technologies, and some hybrids, are commonplace.  Many tiny, microscopic computers and robots are integrated into the bodies, brains, and clothing of the main characters.  Thus, the characters’ physical abilities, intelligence, and life expectancy are greatly enhanced, or upgraded. New devices implanted in a person’s brain are often referred to as “upgrades” and may include a model number, traditionally denoted as a hexadecimal number. Computers are integrated into virtually every device and object of value.  Please read part one and enjoy…


© All rights reserved, with the exceptions given on the home page. In short, feel free to use this material in any public URL with “.com”, or “.edu” domains for non-profit purposes. Please link back to whatever you reference.

  • DNA computing – From Wikipedia – get an overview and plenty of links …

  • Boffins build artificial DNA – 08 Jul 2008: “Japanese scientists claim to have created the world’s first artificial DNA, a development that brings DNA computing much closer…”, Written by Iain Thomson in San Francisco
  • DNA Computer Puts Microbes to Work as Number Cruncher – “…scientists have made a computer from a small, circular piece of DNA, then inserted it into a living bacterial cell and unleashed the microbe to solve a mathematical sorting problem…”
  • Computer Made from DNA and Enzymes “…Israeli scientists have devised a computer that can perform 330 trillion operations per second, more than 100,000 times the speed of the fastest PC. The secret: It runs on DNA. …”

Version: 1.5

Author: David Saxton Ullery

Suppose that, after thorough analysis of the human brain, free will for us is really proven to be an illusion with emergent properties closely associated with consciousness and human intelligence. Some time later, we take this knowledge to build a sophisticated, powerful artificial brain machine with scores of billions of high-speed neurons and possibly trillions of self-modifying synaptic-like connections,  which eventually designs an even more sophisticated machine on its own. This new machine calculates how to create free will and so it documents a detailed design of a practical free will machine for us.  Should we go ahead and build the free will machine – or more realistically, should we have the machine(s) build it for us?

Suppose the newly built free will machine could be integrated into our brains such that we would have artificial free will. A log entry is recorded every time a true free choice differs from the illusory free choice we would otherwise have made. If the machine has complete knowledge of all of its human host’s needs and wants, it seems possible that it would always decide exactly what we would have decided anyway – it would always do precisely what we “want”. A trillion-dollar do-nothing machine! On the other hand, it might occasionally or frequently select a different choice, which may very much anger or disturb the host, who ironically feels he is no longer in control. The host may begin to feel like a slave: he would want one thing, and his machine would override it, resulting in a different decision.

If free will is not ruled out, then it is possible that we do not possess it in our own brains, but that it is physically possible to construct a device that would enable it. One of its parts might be a type of time machine capable of passing information, in the form of quantum bits, backwards in time through an artificial wormhole. The possible outcome of a future choice could be analyzed and weighed against the current state. A new choice could then be taken, and the process would recursively repeat until a final decision is made. Effectively, we really would go back and change our choice at time “t” (potentially several times). This would satisfy the requirement for free will, since the machine allows us to make a different choice given the same previous causes.  Not only could we make a different choice, but we would occasionally make a different choice for the exact same event, at the exact same time, under the exact same set of circumstances.
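Setting the exotic physics aside, the recursive loop just described can at least be simulated on an ordinary computer by replacing the wormhole with an outcome predictor. Everything in the sketch below is a stand-in for the thought experiment, not a claim about real hardware:

```python
# Toy simulation of the recursive choice loop: evaluate each candidate
# choice with a predictor, revise, and repeat until the choice is stable.
# predict() stands in for the qubits sent back through the wormhole.
def predict(choice, state):
    # Stand-in outcome model: how well the choice fits the current state.
    return -abs(choice - state["preference"])

def decide(choices, state, max_rounds=10):
    current = choices[0]
    for _ in range(max_rounds):
        best = max(choices, key=lambda c: predict(c, state))
        if best == current:   # the decision no longer changes: final
            return best
        current = best        # "go back" and choose again
    return current

print(decide([1, 2, 3], {"preference": 2}))
```

Notably, the loop is fully deterministic, which anticipates the point made below: the time-machine version still has the future qubits as part of the causal chain.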

Logically speaking, we already have such a device… to a degree. We can combine our past, learned experiences with logical projections of future outcomes, and think through “what if” scenarios until we come up with a final decision that we act upon. We never actually act upon more than one of the “what if” scenarios, but we can often logically deduce the outcome (if I eat the cake, it will taste great, but I will gain weight; if I jump off the cliff without a hang glider, I will likely die; …). At time “t”, we actually make our choice.  We end up doing what at least a part of our self “wants”, based on a set of resources within our brain. If we are good at predicting outcomes, then we will often make exactly the same choice that we would have made in the science fiction scenario given above.

With the time machine, we could actually taste an entire piece of chocolate cake, decide that it is well worth it, and finally choose to eat the cake.  Without the time machine, we might decide to eat the cake based on the knowledge that the last cake made by the same chef was really delicious.  We must assume here that the time machine owner may not have her cake and eat it too: the owner forgets that she tried the cake once the final decision is made.  Otherwise, she might conclude that she can both eat the cake and finally not eat the cake, gaining the pleasure of eating it without suffering the extra fat added to her body as a result. The machine would work as if it processed future events in its owner’s subconscious – whether it actually did or not – not unlike a medicine that induces short-term amnesia in a patient.

Even the time machine version is deterministic, because the future qubits are still part of the cause of the agent’s final choice, in an automated way. However, that may only be one of the components.  It may be completely deterministic, yet satisfy all of the requirements for free will.  This deterministic machine allows the agent to change her mind and make a different choice.

Now suppose this same free will machine is kept outside of your brain, or is rewired such that your illusory “free will” references the free will machine only when called upon.  Perhaps you decide to use it every time you play roulette or buy stock.  That machine would no longer be a free will machine – it would be used to figure out what to bet on at the casino or which stock to choose. You would surely end up following the money trail in each and every case.  Even if not used strictly for money, it would no longer effectively be a free will machine, but just another weight for your deterministic decision-making resources – your non-free choice.

It seems an agent could have real free will yet perceive it as slavery, since his choices would not always be what he “wants”; or have the illusion of free will, perceive it as real, and prefer that. Another possibility is that there is no difference between the two. Yet another is that we could have free will but still be determined, in an odd sort of way.  Finally, the illusion of free will could call upon a free will machine, rendering it useless as a free will machine yet making its owner wealthy from it as a time machine.

As long as you can do whatever you “want”, why would you want the ability to freely “will” a choice?  Since you do not possess a time machine, your best bet is to carefully consider all available options and choose the one you “want” – hopefully the one with the most promising outcome. Having free will implies that you may choose something other than what you “want”.  What you “want” is based on a process of weighted variables created by competing resources within the brain. The determined decision is not always rational, or the best option for your future well-being, but it is what you “want” at the time. What you should strive for is to learn to always “want” what is best for you in the long run.  This “want” is determined by a combination of your genetic makeup and your past experiences.

Most of what you want is determined by subconscious processes that the conscious parts of you never have the privilege of seeing.  Many decisions happen too quickly – there is no time for the conscious part of you to rationalize or think about the choice.  If you are threatened by a predator, your “fight or flight” instincts kick in; if you stopped to ponder the outcome and reflect on all possible options, you would likely be eaten.  Decisions that do allow time for thought – for example, “Should we go on vacation next month or not?” – eventually require action: moving your body to make reservations, pick up car keys, drive the car, or call a cab.  Eventually, an action potential must cause a chain of neurons to fire in the motor cortex portion of your brain, triggering a signal down your spine and causing your hands, arms, and/or legs to move – all of which happens without your conscious knowledge or direct control.

Repeatable tests continue to show that this action potential occurs well before you consciously decide that you “want” to do something (see here for a start, then here… feel free to search these and other tests).  The thought that initially came to you – the pondering of a vacation – came into your conscious mind as a result of previous causes in your unconscious resources. Undoubtedly, the fact that the process became conscious has some effect on the overall causal chain, but it is not purposely causal – it is just another set of inputs. Rather, it tends to strengthen the already existing variables in the thought processes revolving around the concept of vacationing – a process already set in motion.

It seems that people who are both very happy and successful most of the time – by the standards of their own minds, their families, and their peers – make choices that would most often match the choices they would make if they actually did possess the time-machine-based free will machine.  The same holds for those who act on hedonistic, short-term gain, as long as they are good at predicting short-term outcomes.  Any person with good predictive abilities will make the same choices most of the time, within the framework of his or her personality and personal philosophy of life.  In cases where the free will machine works better than our own choices, it will only be because it is a better predictor, or at least has the potential to be, assuming the future “trials” do not change the past, present, or future.

In the end, possessing a better predictor mechanism, or a mechanism that adds weight to the well-informed rational resources within our brains, would seem more desirable than possessing true free will.  Having true free will with no benefit of a decent outcome predictor would tend to cause negative outcomes, while having a decent outcome predictor does not require free will at all – better to leave those processes in the hands of the unconscious resources deep inside a brain that took hundreds of millions of years to evolve to do what it already does very well on its own. Free will, if possible, would be a negative mutation unless it were of the time-machine kind – which is oddly deterministic and a very good outcome predictor.  However, do you really want to know the outcome of everything you do?  You may end up perfectly unhappy, never satisfied, and never surprised.

Suppose you had a machine that could be implanted in your brain that would stimulate the pleasure zones in your brain every time you think of it or want it…. oops wrong topic…or is it?

=========== New Stuff to Consider ===========

Enjoy more discussions on Free Will at the following forum:

Free Will II

Another post related to Free will:

Patterns, Design, and Physical Laws

====================================



Building a Network of Intelligent Agents

A Top Down And Bottom Up Approach

Notes:

1. This posting is subject to multiple revisions, as are all posts, stories and music on this blog site. A stream-of-consciousness writing technique is incorporated, so some paragraphs may veer off onto tangents. It is the nature of this blog site. Eventually this posting will be linked into the science fiction story Upgrade 01A, but the content itself is not necessarily fiction – many of the concepts are very possible within the next decade, starting right now before your eyes.

2. There is already a great deal of work being done on creating a framework for what is being discussed here. The W3C Organization, for example, has done extensive work in such areas as:

and various other Semantic Web Activities…

Agents will take advantage of progress made in these areas as the technologies mature. Some progress is already being made. See links throughout the text and at the bottom for a few more ideas to consider. A later posting (Part II) will likely dive into some of this as details are further revealed.

Comments are welcome!

Author: David Saxton Ullery

Step One: Build An Internet Spider Program with Weighted Results

It will start out as a simple Spider Program that can quickly crawl through web sites and follow hyperlinks. In addition, it will have its results weighted, based on a simple interface with its owner. For example: “Good Job” may translate to +1, “Great Job” to +2, “Not what I wanted” to -1, and so on. The weights would be tunable.
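The feedback loop described above can be sketched in a few lines. The phrases and weights are the ones suggested in the text; the crawl itself is stubbed out, since a real spider (fetching pages, honoring robots.txt, following hyperlinks) is beside the point here:

```python
# Sketch of Step One: a stubbed crawler whose results are weighted by
# owner feedback. Real crawling is omitted; only the weighting is shown.
FEEDBACK = {"Good Job": +1, "Great Job": +2, "Not what I wanted": -1}

class SpiderAgent:
    def __init__(self):
        self.weights = {}  # URL -> accumulated owner feedback

    def crawl(self, seed_urls):
        # Stub: a real spider would fetch pages and follow hyperlinks.
        return list(seed_urls)

    def feedback(self, url, phrase):
        self.weights[url] = self.weights.get(url, 0) + FEEDBACK[phrase]

    def ranked(self, urls):
        # Present results in order of accumulated owner approval.
        return sorted(urls, key=lambda u: self.weights.get(u, 0), reverse=True)

agent = SpiderAgent()
results = agent.crawl(["a.example", "b.example"])
agent.feedback("b.example", "Great Job")
agent.feedback("a.example", "Not what I wanted")
print(agent.ranked(results))  # b.example ranks first
```

The "tunable" part is simply that the FEEDBACK table, and later the ranking function itself, can be adjusted per owner.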

The back end would be augmented with technologies such as a simple neural network, heuristics, fuzzy logic, genetic programs and algorithms that adapt to fitness functions, swarm intelligence, and so on. Open source databases would generally be included. The purpose of the back end is to add knowledge to each agent, add personality to the human-agent interface, add new capabilities to the agent-agent interface, gradually improve semantic network pathways, and gradually add seamless context-switching capabilities as the network grows. The human ought to be able to change topics, and the agent ought to know where to look for intelligent-sounding responses so that it may interact in ways that are both useful and entertaining to its owner.

Programming A Spider in Java

Open Source Crawlers in Java

Human interfaces may be safely individualized to the owner’s taste as long as the agent-to-agent interface remains intact.

Like ordinary existing spider programs, these new intelligent Spider Agent hybrids would have HTML access scanners as well, and many or most versions of these programs would start off by scanning web sites for information and tags.

Step Two: Add Semantic Interfaces to Web Sites Using XML

XML tags will be one of the features that are the most highly dynamic within the network of cooperating Agents. WordPress and other blog sites already do this. This posting includes tags and categories.

A nice feature is that each agent ought to be attached to a home-base web site, or a network of the owner’s blogs, websites and so forth. For example, this author owns this site plus access to other sites. Each site is linked on the home page. One site has the owner’s music that may be listened to by anyone.

This site features original science fiction stories that include links to other sites that contain information of interest and relevance to the context of the stories. Such related links could be tagged using an agreed upon standard, making it very easy for the back-end agent heuristics to gather meaningful semantic information. Thus, another network is created in addition to the spider-agent network, adding a kind of synergy on top of the system.

Semantic Web

XML

See more links on the semantic web at the bottom of this post. Thanks!

Step Three: Build a Network of Spider-Agent Programs that Learn From Each other

Make the Agents highly cooperative with dynamic service-oriented architecture hybrids. The interfaces will need to evolve, adapt, or be designed to be highly cooperative. At first, the initial agents may interact with a handful of trusted sites, but later there may be thousands, millions, or a billion. Some agents will evolve specialties, others may be generalists, and others may evolve into a kind of middleman. Still others may act as a kind of immune system or private security guard. There may even be a “queen” agent that produces offspring.

A single human may own several cooperating agents and use self-selecting or breeding techniques to mate the programs. Some of these agents may simply be clones to allow faster searching; others may be specialists of one type or another. This process may lead to venturing out into the community of networks of trusted sites. As with biological systems, too much inbreeding may lead down a bad path, or at the very least to limited imagination – progress on the order of linear. Cooperation, on the other hand, leads to something beyond simple exponential growth: ideas bounce back and forth, and agents “breed” with a larger pool of ideas – “breeding” taken literally with genetic programming techniques, metaphorically with human interaction, or more likely a combination of the two.

Hopefully, researchers with advanced knowledge in the various fields I am mentioning here will cooperate with this effort and contribute by adding interfaces to their own projects. Presumably, they would be able to retain an isolated version of their research projects. If done carefully, the research community and the public at large would stand to benefit. Imagine using such a system for research on virtually any topic – be it scientific, music, philosophy, or pure entertainment!

The Facade pattern may be used as the interface, providing services. The services may include providing entire classes or class sets (Java, perhaps) that implement a function another agent requests. In addition, each web site will provide XML tags. Interfaces will need to be standardized in such a way that the methods may be discovered and queried.

They (the network of hybrid cooperating spider agents) may start off sticking to single topics and spread to new topics over time. For example, a new type of worldwide dictionary may result from an initial effort. PC owners may dedicate their machines’ idle time to such a venture, in a manner similar to the SETI effort. The network of agents involved may actually start understanding words, or at least act as if they do. Such a project may end up branching off into many useful areas, including better automated phone systems, better GPS and music units in the car, better machine-patient interfaces to help in the care of the elderly, and all kinds of fun toys, games, and gadgets. An efficient, mobile multilingual translator may result. Imagine your smart phone with instant access to word phrases translated into any language for you (spoken, text, or both). Suppose each agent were connected to a semantic “dictionary” of only 10,000 words and phrases, but there were 1,000,000 such agents all cooperating on a high-speed network. Each agent’s home base would include its own competing heuristics, neural network, and genetic algorithms. Eventually, the best of these systems would survive, while poor ones would die out, be replaced, or evolve into even better systems.

Currently, a single neural network in a single application is often limited to a hundred or so artificial neurons. Imagine one million agents cooperating with one another, each having several connections (artificial synapses) to the others.

Some agents may evolve entirely through artificial selection, using genetic algorithm techniques (with fitness functions). Others may have Intelligent Design aspects (unlike natural selection, which contains no ID but took billions of years to get where we are today – we do not want to wait that long), where the owner adds in top-down heuristics. Still others may be hybrids of the two, with additional features such as fuzzy logic.
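The artificial-selection idea can be sketched compactly. Here the "genome" is a hypothetical vector of tunable search weights and the fitness function is an arbitrary stand-in (distance to a fixed target), not a real agent objective:

```python
import random

# Toy genetic algorithm: evolve a weight vector toward a target.
# TARGET and the genome layout are stand-ins for an agent's tunable
# search weights and its owner-defined fitness function.
random.seed(0)
TARGET = [0.2, 0.5, 0.9]

def fitness(genome):
    # Higher is better; zero means a perfect match to the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

population = [[random.random() for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                       # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]    # reproduction

best = max(population, key=fitness)
print(best, fitness(best))
```

The owner-added "top-down heuristics" mentioned above would amount to hand-editing genomes or reshaping the fitness function between generations.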

In each step, positive results are reported back to your spider program and the program adds a plus 1 to its database. Negative results are given a negative 1 response.

Once step three is achieved, then your agent could search other agents for similar searches or questions, check their given weights and use these weights as a means of finding the most likely results.

Agents with the best responses to the types of questions a person may ask could be linked to the owner’s agent and replicated for use on that person’s computer or computer network. Owners may choose to retrieve personalities from cooperative agents. Perhaps a Ramona-like Avatar would be added into the mix, or one that plays chess or other games, or both!

Agent programs could be standardized in such a way that modules could readily be replicated. A person’s agent would automatically select the fittest algorithms. The algorithms will evolve “naturally” using various fitness functions. The fitness functions themselves can be shared and distributed. Human augmentation would be encouraged.

Example Scenario

Suppose an individual had a tendency to ask questions about Friday evening flights from LAX to Portland, enjoyed playing chess, had an arm-chair interest in nanotechnology, wanted the latest iPhone apps, enjoyed Aaron Copland, Mozart, and Bob Dylan, and enjoys images of fractals and exotically clad young women.

First, let’s examine the flight information in more detail. After several runs, this person’s agent would come to find out the following:

The owner prefers American Airlines over Alaska, unless American costs more than $20 above the Alaska price (suppose the owner has frequent flier mileage). If the flights are booked at least two weeks in advance, then the owner gets the seats he wants at a reasonable price. The agent learned this on its own.
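The learned preference can be written as a simple rule. This is a sketch of just the scenario above; the $20 threshold comes straight from the text, and a real agent would have discovered the rule rather than had it hard-coded:

```python
# Sketch of the learned booking rule from the scenario: prefer American
# Airlines (frequent-flier miles) unless it costs more than $20 over Alaska.
def choose_airline(aa_price, alaska_price, threshold=20):
    if aa_price - alaska_price <= threshold:
        return "American"
    return "Alaska"

print(choose_airline(aa_price=210, alaska_price=200))  # within $20: American
print(choose_airline(aa_price=240, alaska_price=200))  # $40 more: Alaska
```

The second owner's adjustment described below amounts to flipping the default, and the third owner's to dropping the threshold logic entirely in favor of the minimum price.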

Another owner’s agent discovers this agent while searching for the same information. It requests the data, plus any modifications to the algorithm search order, from your agent. It turns out that the results are pretty good for the other owner, except that this person prefers Alaska Airlines. The adjustments are quickly made by the second agent, and its modified rules are made public.

A third agent with similar requirements finds both of these interfaces, along with Googled information from Delta. Its owner likes the cheapest prices regardless of airline, so the agent grabs the first agent’s strategy for better prices and applies it to a more general algorithm for its master’s purposes.

Suppose information was shared among all of the agents as to how often positive results were obtained directly from each agent. Weights could be added to each agent regarding this information.

As new information is added, new standard XML tags could be created and distributed. At first, there may be very few tags – just the standard information given by all airlines, including names, flight numbers, and pricing information. Pricing information may be the most dynamic, with flight times coming in second, and so on. The information does not have to be a direct, static value, but could serve as a pointer (a URL, for example).
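A sketch of what such shared tags might look like, using the standard fields the paragraph lists. The tag names here are hypothetical, not an actual industry standard, and as noted, the price could just as well carry a URL pointing at the live fare:

```python
import xml.etree.ElementTree as ET

# Hypothetical flight tags with the fields named in the text.
doc = """
<flight>
  <airline>American</airline>
  <number>AA1234</number>
  <price currency="USD">210.00</price>
  <departs>LAX</departs>
  <arrives>PDX</arrives>
</flight>
"""

root = ET.fromstring(doc)
print(root.findtext("airline"), root.findtext("number"))
print(root.find("price").get("currency"), root.findtext("price"))
```

Because every cooperating agent parses the same tag vocabulary, the back-end heuristics can compare fares across agents without scraping each airline's HTML.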

Over time, some agents, or pools of agents, will become robust, generalized intelligent bots able to respond to more and more questions, play chess, research nanotechnology, and more simply by linking into areas of interest by tags. Tags themselves will evolve to have both general and specific qualities about them.

Context-switching agents may evolve, so that topics may change seamlessly and efficiently. Context-switching capability is one of the goals of the back end of the spider-agent hybrid system. It needs to be highly modular, with a well-designed object-oriented architecture, so that new versions can easily be swapped in.

Efficient data trails may be created in a manner similar to an ant colony’s chemical trails to the best food sources (swarm intelligence is but one example). Trails with the highest weights are followed, but with a twist: the weights need to have semantic context. The agents are usually looking for specific types of information for their owners, although sometimes they may be hunting down better algorithms or heuristics for their own purposes. If an agent does better and improves itself, it will have a tendency to survive to the next generation, so this incentive ought to be built into its program(s). It should “want” to survive and have its algorithm cloned or its genes replicated, but only if it improves the colony (only if the owner’s agents will improve as a result). The “food” is whatever data or information the owner likes. To survive, the agent must improve the overall quality of this food for its owner.
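The trail-following idea can be sketched as a pheromone table over data sources. The numbers and source names below are toy stand-ins; a real system would add the semantic context the paragraph calls for:

```python
import random

# Toy ant-colony-style trail selection: sources with higher pheromone
# (accumulated success) are chosen more often, and old trails evaporate.
random.seed(1)
pheromone = {"source_a": 1.0, "source_b": 1.0, "source_c": 1.0}

def pick_source():
    # Roulette-wheel selection proportional to pheromone level.
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    for source, level in pheromone.items():
        r -= level
        if r <= 0:
            return source
    return source

def reinforce(source, reward, evaporation=0.9):
    for s in pheromone:
        pheromone[s] *= evaporation   # unused trails fade
    pheromone[source] += reward       # rewarding trails strengthen

# Only source_b ever yields "food" the owner likes in this toy run.
for _ in range(30):
    s = pick_source()
    reinforce(s, reward=1.0 if s == "source_b" else 0.0)

print(max(pheromone, key=pheromone.get))
```

Evaporation is what keeps the colony adaptive: if a formerly good source goes stale, its trail fades and the agents drift elsewhere.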

The entire Internet may someday evolve into an artificial super-organism, from the bottom up and from the top down. Humans would still create new ideas from scratch and add them into the mix to speed up the process, while the agents themselves share and swap data and algorithms. In the long term, this approach may be more successful than any of the relatively isolated AI projects to date.

Negative intruder spiders will require a defense system of agents (an immune system). What does not kill the system will make it stronger! Competing systems will surely evolve.

Powerset Semantic Web Searching

Artificial Intelligence

The Emotion Machine – Minsky

Ant Algorithms

http://en.wikipedia.org/wiki/Ant_colony_optimization

Fuzzy Tutorial

Fuzzy Logic

http://www.aco-metaheuristic.org/

Free Will and AI forum posting (if you came here from there and want to return…)

Note: this posting is subject to multiple revisions as are all posts, stories and music on this blog site.

Author: David Saxton Ullery


Consider cooperation for a greater gain over theft for short-term smaller gain. If you have good ideas share them using links, comments, original ideas. Make us all wealthy …. thanks!

°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°

Try parallel processing…

Please enjoy some musical patterns inspired by the DNA Pattern

and Flow of Consciousness here, while reading the article

…it will open in a separate window or tab depending on your browser:

>>> D ‘n A Mutations I <<<

°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°

A computer program will behave in exactly the same way at any time on any computer, given the same initial state. A software program is an example of a “pattern”. The program may be copied. Two copies of the program will behave exactly the same way on two different computers even though the two computers, on which the program copies run, are made up of completely different atoms from one another.

Because the computer program is a pattern, it could be translated into an entirely different programming language and run on an entirely new type of computer. The computers are patterns as well. The pattern called “computer” and the pattern called “program” have a degree of independence from the stuff of which they are made. At the physical level, the atoms, and electrons, and the energy states are absolutely 100% different on both machines and both “copies” of the “same” program, yet the programs behave in exactly the same way on the “design” level!

Ultimately, the computer and the program that runs on it are subject to the pre-determined laws at the physical level, but the patterns at the design level are subject to a much stronger force – that imposed by the pattern itself. Even though the atoms, the initial state, and everything else about the two computers and the two programs can be absolutely 100% different in space, time, molecular structure, elements, and so on, the pattern manages to “overrule” the pre-determined physical state that the computer and program will be in at the start and end of the program’s run. We are not even concerned with the “physical” state of the pattern, only its “design” state.

Patterns are at least somewhat independent of the stuff of which they are made. A boat is a pattern. If it has several parts replaced on it, as the parts wear out, it is still recognized as the same boat. A person is a pattern with atoms and cells that are constantly changing.

Similar to the program pattern, identical twins are born with identical copies of a DNA pattern, but each copy is obviously made up of separate molecules. The twins rapidly form separate identities because their initial conditions and environments are different at both the physical and the pattern level.

Unlike boats, DNA, people, computers, and Turing machines are all patterns that contain a type of memory and take input from the environment in which they exist. DNA and people differ from computers and Turing machines, partly due to their relative complexity, but largely because their overall behavior is subject to both physical changes and pattern states. Physical objects independent of the person or the DNA may change their respective pattern behavior in unpredictable ways, but people and DNA can adapt to those changes, within limits. Unless pre-programmed in advance to accept inputs, an ordinary software program will ignore external events, and a computer will simply stop functioning if its parts wear out.

Although, ultimately “it is all physical”, the white blood cell reacts to the germ that it “detects” in the blood stream, because it is “programmed” by the DNA to seek and destroy germs, not merely clumps of amino acids, but the special pattern we may call a “germ”.

DNA mutates in a random way due to being struck by various particles. These mutations are the physical mechanism by which evolution occurs. Mother Nature would not get very far by merely following low-level physical laws: random change must be augmented by a design mechanism – the pattern of natural selection, which selects the most “useful” designs for replication and eliminates the “bad” ones. Natural selection is a very slow process, because it is so closely linked to the physical level. That is why relatively simple life forms do not have many degrees of freedom. However, as the patterns (life forms) become more and more complex, with patterns layering upon patterns, they gain more and more freedom from their physical makeup. Each pattern layer is subject to the laws of the pattern “designs” of the layers “below”, allowing a slightly higher degree of freedom from the underlying physical layer.

The computer program "if" statement "reacts" to the value (pattern, design) of "true" or "false", and does not give one hoot about the makeup of the computer or how the "true" or "false" is ultimately represented at the physical level. An "if" statement embedded inside a computer program follows rules that are designed into the programming language. The program may be of the sort, such as one written in the Java programming language, that is interpreted by a virtual machine, which is yet another program. The virtual machine is said to run "on top of" the computer operating system. Eventually, these multiple layers of programs (patterns) get converted into the machine language that the Central Processing Unit (CPU) "understands". The CPU, in turn, is made up of patterns of circuits. The circuits are made up of a complex pattern of logical "AND", "OR", and "NOT" gates, and are manufactured out of silicon and various other elements, consisting of atoms, and powered by electricity.
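A minimal sketch of this point in Java (my own illustration): the "if" below reacts only to the true/false pattern, and it cannot tell, even in principle, which lower layer produced that pattern or how the bits are physically represented.

```java
// The "if" reacts only to the pattern true/false, never to how the JVM,
// the operating system, the CPU, or the transistors happen to represent it.
public class IfDemo {
    static String react(boolean signal) {
        if (signal) {            // rule defined by the language, not the hardware
            return "seek";
        } else {
            return "ignore";
        }
    }

    public static void main(String[] args) {
        // The boolean may arrive from any lower layer: a numeric comparison,
        // a parsed string, a voltage level in silicon -- the "if" cannot tell.
        System.out.println(react(3 > 2));                         // prints seek
        System.out.println(react(Boolean.parseBoolean("false"))); // prints ignore
    }
}
```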

Each layer outlined above, from the bottom physical layers to the top software layers, has room for multiple design or pattern variations in its makeup. As long as each pattern layer is designed to produce the outputs and accept the inputs that are "expected" by the layers immediately above and below it, the design variations will not affect the state of the program of interest at the top layer. In general, the higher the level in the pattern layers, the more degrees of freedom one has in the design of that level. The original program could be written in several different ways and still produce identical results.
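The same-program-several-ways claim is easy to demonstrate. In this sketch (my own example), two methods with entirely different internal "designs", one iterative, one a closed-form formula, are indistinguishable to any layer above them because they accept the same inputs and produce the same outputs.

```java
// Two different "designs" of the same program. As long as each accepts the
// same inputs and produces the same outputs, the layer above cannot tell
// them apart.
public class TwoDesigns {
    // Design 1: accumulate the sum 1 + 2 + ... + n step by step.
    static int sumLoop(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) total += i;
        return total;
    }

    // Design 2: Gauss's closed-form formula -- a completely different
    // internal pattern with identical external behavior.
    static int sumFormula(int n) {
        return n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(100));     // prints 5050
        System.out.println(sumFormula(100));  // prints 5050
    }
}
```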

A child playing catch with a friend (obviously consisting of many more complex layers than a computer program) may decide to move out of the way of the ball rapidly flying toward his face, because the ball is going to smack him in the face and it will hurt. The physical state of the ball is of no immediate interest. It could be any baseball at any time in any space; it is the "ball", "face", and "pain" patterns that are the important pieces of information stored somewhere in the child's brain. It could be any child playing with any friend. It is not inevitable that the child will be hit by the ball, nor that the child will avoid it.

The child's conscious level is built upon countless pattern-based "design" layers, which provide a degree of freedom that allows the avoidance of the "inevitability" of being struck by the ball. The child's conscious mind is influenced much more by the patterns it is processing, and by the patterns doing the processing, than by the physical stuff the child's brain is made of.

Ultimately, the physical structures of the atoms involved do matter. The physical laws do determine the direction, speed, and force of impact of the ball. The brain is physically made out of atoms, but the baseball player does not care or think about those things. The player is considering the patterns (and avoiding pain).

As Daniel Dennett aptly pointed out (Freedom Evolves, Viking Penguin, 2003), Free Will and Determinism are not necessarily mutually exclusive concepts. Indeed, an agent "pattern" with a memory and feedback from the external world can make better predictions in a fully deterministic universe than it could in a non-deterministic one. Calculations can be made by that agent to avoid dangers, seek energy sources, improve its local environment, self-replicate, communicate (via yet another set of patterns following rules) with other similarly-patterned agents, and so on.
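Here is a deliberately tiny sketch of that idea (my own illustration, not Dennett's): in a fully deterministic toy world, an agent that remembers two past observations can infer the ball's law of motion, extrapolate, and step aside before the predicted impact. In a non-deterministic world the same memory would license no such prediction.

```java
// Sketch: a deterministic world makes an agent's memory useful. From two
// remembered observations the agent infers the rule, predicts, and avoids.
public class AvoiderAgent {
    public static void main(String[] args) {
        int agentPos = 10;                 // where the agent is standing

        int ballPrev = 0, ballNow = 2;     // two remembered observations
        int velocity = ballNow - ballPrev; // inferred (deterministic) law of motion

        // Predict where the ball will be four ticks from now.
        int predicted = ballNow + 4 * velocity;

        if (predicted == agentPos) {
            agentPos += 1;                 // step aside before the impact
        }
        System.out.println("agent ends at " + agentPos); // prints: agent ends at 11
    }
}
```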

Above, I mentioned that "patterns at the design level are subject to a much stronger force – that imposed by the pattern itself". "A much stronger influence" would be a more accurate phrase: the weaker forces seem to have stronger influences at macroscopic levels than the strong forces do.

At the sub-atomic level, there seems to be a non-deterministic quantum state "ruling" this lowest level of all worlds, the center of the onion. However, it is masked by a more deterministic, atomic-level "pattern" just one or a few levels above it.

Neutrons and protons each consist of three quarks, bound by the strong nuclear force, and a residual of that force holds the nucleus together. The weak nuclear force is weaker still and acts over an even shorter range. The electromagnetic force governs the electrons, and its residual binds atoms together to create molecules.

Already, at these lower pattern levels, determinism seems to overcome the quantum effects. The larger the pattern, the more reliable and useful it becomes. For example, on my way to work I do not have to hunt for the local Starbucks where I get my coffee for the long trek to work in the morning. It is a sure bet that I will drive past two or three within the first 20 minutes of my commute.

Gravity is a force so weak that scientists are only now getting to the point where it can be measured at distances down to the millimeter. Between two protons, gravity is roughly 36 orders of magnitude weaker than the electromagnetic force, yet it has a dominant influence over ordinary matter! As I sit here at my computer desk, I can very much feel gravity holding me down on my chair. The patterns of solar systems and galaxies are not "interested" in the stronger forces. Still larger patterns are dominated by Dark Energy (from the vacuum of space?). Is this because the entire Universe is the largest and greatest pattern?
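The actual ratio is a quick back-of-envelope calculation with standard CODATA constants (my own worked example): because both forces fall off as 1/r², the distance cancels and the ratio for two protons comes out near 10³⁶.

```java
// How much weaker is gravity than the electromagnetic force between
// two protons? Both follow an inverse-square law, so r cancels.
public class ForceRatio {
    public static void main(String[] args) {
        double k  = 8.9875517923e9;    // Coulomb constant, N m^2 / C^2
        double e  = 1.602176634e-19;   // elementary charge, C
        double G  = 6.67430e-11;       // gravitational constant, N m^2 / kg^2
        double mp = 1.67262192369e-27; // proton mass, kg

        double ratio = (k * e * e) / (G * mp * mp);
        System.out.printf("F_electric / F_gravity ~ %.2e%n", ratio); // ~1.24e36
    }
}
```

So the hierarchy the paragraph describes is even more extreme than "5-10 orders of magnitude": gravity dominates large-scale structure only because matter is electrically neutral in bulk, leaving the far stronger force canceled out.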

=========== New Stuff to Consider ===========

Enjoy more discussions on Free Will at the following forum:

Free Will II

Another post on Free will:

The Free Will Machine

====================================

References: Daniel C. Dennett, Freedom Evolves (Viking Penguin, 2003)

