Several Novel Ways of Storing and Manipulating Data
Version: 3.4

Author: David Saxton Ullery

———————————————————————————————————————————————————

The information in this posting may not be copied or used to create any technology without permission. Not-for-profit uses are permitted. Please comment and include any suggestions and questions that you may have.  Thanks!

———————————————————————————————————————————————————

Introduction

This article briefly outlines a few novel approaches that could lead to dramatic increases in the amount of information stored and manipulated at the nanometer scale, and shift the paradigm in how information is traditionally manipulated and perceived. Some techniques show how large amounts of data could be stored directly as symbols or shapes; others outline alternative ways of storing data by exploiting different properties of atomic elements, which may offer insight into radically different approaches to the very problems that nanotechnology companies and researchers are working on today.

Thinking about previously unconsidered yet readily differentiable properties opens the door to new technologies that could be researched and ultimately employed as viable commercial products. The goal, then, is that reading and pondering the concepts presented here will trigger new ideas that lead to more economical approaches, new ways of thinking about computation, and ultimately newer, more powerful computational machines that do not necessarily follow the traditional von Neumann architecture.

When examining future nanotechnologies for reading and writing information, storing data at a higher symbolic level, rather than only in simple binary format, should be examined as an alternative to the standard architecture of today's storage technologies. The approaches given here deal with storing information at the nanometer scale, but they do not directly exploit quantum mechanical properties, nor do they depend on DNA or wetware. Instead, they depend on exploiting the unique properties of the atomic elements, and on our increasingly sophisticated ability to move atoms to form any physical shape we desire, including directly storing symbols in their "natural" form. By purposefully positioning groups of atoms into various patterns, the data may be interpreted in new and unique ways by the technology that reads, writes, and manipulates it.

Storing information may be enhanced in another way: more economical and useful ways of reading, writing, and manipulating data can be achieved by exploiting the informational differences inherent in different elements, and between a single element and its various isotopes. Different elements, isotopes, and molecules each have properties that could be exploited beyond their quantum mechanical properties and the other approaches that nanotechnology researchers are already examining. For example, every element has its own unique mass, atomic number, number of electrons, electromagnetic properties, chemical properties, size, shape, and so on. Shapes are especially interesting when configured in simple molecular structures or crystal structures, or when atoms are physically moved in a purposeful manner, atom by atom, to form simple text or other symbols that can later be read and interpreted using relatively simple algorithms.

Mixing and combining these ideas, and others extrapolated by the knowledgeable reader, would enhance all of these approaches in a synergistic manner. It opens up alternatives to the traditional von Neumann, binary-based architecture, yet does not force such a change.

Element Detection

Hydrogen (H) and Deuterium (D or ²H)

Alternative approaches for storing binary data using an element and its isotope.

Note: an element other than hydrogen may be a better choice, but the concept is the same. Deuterium, however, is very stable, not radioactive, and relatively plentiful in ocean water.

Since hydrogen and deuterium each have a unique atomic mass and emission spectrum, it should eventually be possible to detect tiny amounts of either and use them to represent binary information. Another element/isotope pair should be considered if there are known techniques for detecting (reading) the differences, and more efficient ways of switching states between the element and its related isotope. Other elements and their isotopes may have properties, such as differing diameters, that can be exploited more economically than hydrogen's.

Here are a few ideas to consider:

  • Use hydrogen, with mass number 1, to represent the zero (“0” or “off” or “no”) state.
  • Use deuterium, with mass number 2, to represent the one (“1” or “on” or “yes”) state.
  • Read the values using mass spectrometry, infrared spectrometry, other non-destructive spectrometry methods utilizing much shorter wavelengths such as UV, or perhaps by bouncing a single photon off of each atom. A photon bounced off of a single hydrogen atom would behave differently than one bounced off of a single deuterium atom. Another approach may be to utilize a modified version of the scanning tunneling microscope (STM), if it can be refined to the point where it can read the difference between an element and its isotope. New forms of spectrometry (or other electromagnetic techniques) using much higher frequencies than ultraviolet may someday be utilized to detect size, position, mass, and electromagnetic properties.
  • To write data: Store the gases of each type and inject the atoms one by one into the bit containers. Another approach may be to find a way to push atoms into place, perhaps utilizing a modified, greatly shrunk down version of STM (see sections that follow for a bit more on this).  Perhaps a neutron beam could be used in a novel way to convert H to D, thus “burning” ones into memory in a manner analogous to PROMs and EPROMs.
  • Each tiny collection of atoms could be stored inside a single carbon buckyball, with each “bit” separated by an empty buckyball or by some other means, such as a small number of silicon atoms, so that the state of each atom or tiny cluster of atoms is not easily disturbed. Another approach may be to load up a nanotube, or a column-like structure created from a few nanotubes. Each atom or atom cluster would be fed into one end of the column, possibly followed by a separator (either a string or clump of one or more atoms such as silicon, or a buckyball), followed by another atom or atom cluster. Each atom or cluster would represent a zero or a one and could be read from one end of the column, one at a time, until the last atom is read. More on this in the sections below.
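
As a toy illustration of the element/isotope idea, here is a minimal Python sketch. It assumes only that a future reader can discriminate the two masses; the numeric values are ordinary atomic masses, and everything else is invented for illustration:

```python
# Sketch: representing bits as hydrogen (mass ~1) vs. deuterium (mass ~2)
# and "reading" them back by mass discrimination. Purely illustrative;
# the masses stand in for whatever property a real reader would detect.

H_MASS, D_MASS = 1.008, 2.014  # approximate atomic masses (u)

def write_bits(bits):
    """Encode a bit string as a list of atomic masses (H=0, D=1)."""
    return [D_MASS if b == "1" else H_MASS for b in bits]

def read_bits(masses, threshold=1.5):
    """Recover bits by comparing each measured mass to a threshold."""
    return "".join("1" if m > threshold else "0" for m in masses)

column = write_bits("1011")
assert read_bits(column) == "1011"
```

The threshold read is deliberately tolerant: any measurement technique that merely separates the two mass populations would suffice.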

Multiple Elements

This idea may be practical for more permanent memory storage, because writing may prove too slow for rapid computation. The ability to distinguish between different elements may make reading practical, but writing with multiple elements may prove difficult. A technique inspired by an inkjet printer could work; the valves would need to be extremely tiny, perhaps made from carbon nanotubes.

  • Use any two elements that are easy to distinguish when only one, two, or three atoms of each type are present. Binary numbers would be represented using one element as the zero value and the second element as the one value. Using a large atom such as lead to represent the “1” value, and a much smaller atom such as hydrogen to represent the “0”, may prove beneficial.
  • Use multiple element types, with each element representing a different value. The radix of the system would depend on the number of easily readable elements that can be stored into a tiny space using one, two, three or any tiny number of atoms each.

Using this scheme, hydrogen could represent a “zero”, helium a “one”, …, oxygen a “seven”, and so on (atomic number minus 1) for each element. The radix may be octal, decimal, base 36, or any base up to the number of elements used. Carbon, silicon, and perhaps gold may need to be skipped, since they are needed to construct the memory containers and may interfere with the readings. Rare elements may be avoided due to their cost or radioactivity.
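
The radix scheme can be sketched in a few lines. Note that the digit assignment below is positional within the chosen alphabet, so it drifts from the strict "atomic number minus 1" rule once an element (here carbon) is skipped; the element choices are illustrative only:

```python
# Sketch of a multi-element radix code: eight elements, carbon skipped
# because (per the text) it might be reserved for the storage structure.
# Element symbols are just labels; digits are positions in the alphabet.

ALPHABET = ["H", "He", "Li", "Be", "B", "N", "O", "F"]  # radix 8
DIGIT = {el: i for i, el in enumerate(ALPHABET)}

def encode(n):
    """Encode a non-negative integer as a sequence of element symbols."""
    if n == 0:
        return [ALPHABET[0]]
    out = []
    while n:
        out.append(ALPHABET[n % len(ALPHABET)])
        n //= len(ALPHABET)
    return out[::-1]

def decode(symbols):
    """Recover the integer from a sequence of element symbols."""
    n = 0
    for el in symbols:
        n = n * len(ALPHABET) + DIGIT[el]
    return n

assert decode(encode(1234)) == 1234
assert encode(7) == ["F"]  # F is digit 7 in this radix-8 alphabet
```

A larger alphabet of distinguishable elements raises the radix and packs more information per stored atom.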

Similar to the two-element technique, it may be beneficial to select elements that vary in atomic number (and mass) by large amounts rather than selecting closely related ones. Selecting elements from different groups within the periodic table may prove to be exploitable and therefore useful.

  • Another binary alternative would be to stick to a single element.  Use one atom, perhaps xenon to represent a “0″, and use two side-by-side atoms of the same element to represent a “1″.  A variation on this theme could be to use zero atoms to represent “0″, and a cluster of one or more atoms to represent a “1″.
  • Another approach is to use atoms of dramatically different size to represent differing values. The heavier elements are much larger than the lighter elements, and technologies may exploit these differences. Combining a few larger atoms together would increase the differences further. Atoms may be placed side by side, or stacked one upon the other to produce a taller, nanometer-scale mountain. Using this approach, the data may be interpreted either digitally or as analog values: analog if the mountains, the side-by-side runs, or some combination are made of varying elements with different-sized atoms. One can imagine a nanometer-sized head, not unlike a tiny record needle reading analog data, with an interface taking in the data and then, depending on the architecture of the future computing device and the context of the data, manipulating it directly as analog data or digitizing it. Multiple versions of digitized data are envisioned here, depending once again on context: (a) interpret atomic-sized mountains over a certain threshold as a “one”; (b) interpret varying heights, or other features (total mass, …), as an analog value to be converted to a digitized value; (c) interpret an atomic stack of various element types as a stack of bits; (d) interpret data in any manner where it is economical to read, write, and manipulate.
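
Options (a) and (b) above can be sketched in a few lines; the stack heights, threshold, and quantization levels below are invented for illustration:

```python
# Sketch of reading "atomic mountains" either digitally (threshold) or
# as quantized analog values. Heights are counted in stacked atoms.

heights = [1, 4, 2, 5, 1, 3]  # hypothetical stack heights read by the head

def read_digital(hs, threshold=3):
    """Option (a): any stack at or above the threshold reads as a 1."""
    return ["1" if h >= threshold else "0" for h in hs]

def read_analog(hs, max_height=5, levels=4):
    """Option (b): quantize each height into one of `levels` values."""
    return [round(h / max_height * (levels - 1)) for h in hs]

assert read_digital(heights) == ["0", "1", "0", "1", "0", "1"]
```

The same physical row of mountains yields one bit per stack under rule (a), or roughly two bits per stack under rule (b), which is the economic trade-off the text alludes to.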

For any approach selected, it may turn out that only elements that are solid at or near room temperature are practical, thus ruling out all of the gases and liquids. Alternatively, simple molecules, such as NaCl or other salts, could be used, if there are better techniques for reading and/or writing molecules. Even H2O (water) molecules could be employed if the data is kept cool enough; erasing this data would obviously be very simple.

Reading and Writing the Atoms for Each Alternative Technique

In either the element/isotope or the multi-element concept, one could “write” the atoms (or molecules) into a nanometer-sized tube that is transparent enough for electrons or photons to bounce off of each atom, one at a time. It may be necessary to use one element (possibly several atoms long, to insulate the properties of each “bit”) as a type of tag or marker to separate the information atoms, especially if more than one atom of each element is needed.

As an alternative to bouncing electrons or photons off of each atom, a reading head and writing head could be created using technology based on already-existing scanning tunneling microscopes (STMs). This approach may be particularly useful in the single-element approach, where a pattern (0 or 1; one or two groups) is used.

Exploiting the unique particle velocities or resonance frequencies of elements or molecules of different mass may be combined with other technologies to measure differences between atoms or molecules, in order to read atoms of different elements. Acoustic and electromagnetic waves may be utilized to generate frequencies that induce large-amplitude vibrations within a system of atoms lined up in patterns.

The nanotube structure could be used to keep the atoms or atom clusters (molecules) in place, like so many beads on a string, or like a line of different-colored peas inside a transparent straw. Close up, the design may resemble chicken wire. It need not be a complete tube, but merely a trough or rain-gutter-like structure that is “U”-shaped rather than “O”-shaped when viewed from the end.

Rows of these tube or trough-like nanotube structures could be connected together to create a two dimensional matrix, or a single, very long structure could be wound up into a disc, like a CD or DVD disc and read from the inside out.

It is interesting to note here, regardless of the selected alternative, that an STM reader could conceivably read large chunks of atoms at a time, projecting different shapes that could then be decompressed by  shape-recognition software into standard bits, bytes, or any other form, including the original form.  A string of ones and zeros physically represented by atoms or small clusters of atoms would form unique shapes due to the distribution of mass, electromagnetic, and other properties.  To read these shapes as chunks may require the trough to have a certain amount of “wiggle room” so that the atoms may not form a completely straight line. Different elements or molecules may be readily coaxed into specific shapes by subjecting them to different electric charges, magnetic fields, chemicals, or simply by squeezing them into or through other nano-sized machines or templates (like a tiny cookie cutter).

Nanometer Scaled Symbolic Writing

The following outlines the concept of storing data more directly as high-level text or other types of high-level symbols, thus effectively compressing much more information into bit-sized areas for simple text messages. In some cases, depending on the current state of the art, an “atom” may be replaced with “a cluster of atoms”, “a molecule”, or “a cluster of molecules”, but in any alternative, the real estate used must be substantially smaller than the space required for a single bit in today’s storage systems. Given that a nanometer is 10⁻⁹ meters, a typical atom’s diameter ranges from about 0.1 to 0.5 nanometers, and today’s memory chips store bits at the 45-nanometer level, it seems we have some room to work with.
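
The "room to work with" can be made concrete with a little arithmetic, assuming a 45 nm bit cell and a generous 0.5 nm atomic diameter:

```python
# Back-of-the-envelope for the "room to work with" claim: compare one
# bit cell at the 45 nm level to the footprint of a single atom.

CELL_NM = 45.0   # feature size of a 45-nanometer memory cell
ATOM_NM = 0.5    # upper end of typical atomic diameters

atoms_per_cell_side = CELL_NM / ATOM_NM       # 90 atom-widths across
atoms_per_cell_area = atoms_per_cell_side**2  # ~8100 atom sites per cell

assert atoms_per_cell_side == 90.0
assert atoms_per_cell_area == 8100.0
```

In other words, the area used today for one bit could, in principle, hold thousands of atom-sized sites, which is what makes symbolic, multi-atom encodings worth considering at all.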

Let us examine some potential ways we might represent information at the atomic level.

  1. Store text, including entire computer programs, as ordinary text, but write the text at the tiniest possible size. Remember the IBM logo? All three letters contain a total of 35 xenon atoms (atomic number 54), with each atom spaced what looks like one or two atom-widths apart on average. According to the article from the link above: “In 1989, IBM scientist Don Eigler was surprised to learn that in addition to using an STM to look at tiny things he could also use it like a pair of tweezers, to move things as small as a single atom.” Suppose the text could be crushed down to use no more than 8 atoms per character, the same number of bits used in today’s binary ASCII code, yet still be kept in the same general shape as the actual letters, or perhaps some new, more compressed, yet easily recognizable set of shapes. It may be easier for a technology to read the entire glob as a shape than to read each atom as a single bit.
  2. Use lines of atoms of different lengths to represent different values. Example: “.”=0, “-”=1, “–”=2, “—”=3, where each “-” is one, two, or three atoms in length; perhaps larger clumps would be needed or more economical. Molecules could replace atoms if kept very tiny (whatever is the least number of atoms, or smallest molecule, that can be detected at high speed). This is simply a variation of idea 1 above, keeping each shape more or less as a line; however, the length of each symbol would grow with the number of characters represented. If we kept the number of symbols small, say to 10 or fewer, then the longest line would be only 10 atoms wide. One could imagine building a code based on combining various symbols without necessarily resorting to a number system; it would be constructed like a kind of shorthand.
  3. Use different shapes to increase the symbol set without increasing the number of atoms. Example: “+” and “^” could be represented using four and three atoms respectively. This is really no different than option 1, but it could be interpreted as a variation on option 2, or as a hybrid between 1 and 2 in which the atoms are allowed to occupy more than a single row.
  4. Use marker symbols to distinguish any of the above representations and create a hybrid. Marker symbols may be actual text-like shapes, or combinations of shapes not unlike XML tags, or they may be atoms of different elements or molecules, as discussed in previous sections. A processor configured to read multiple symbolic representations may have the ability to reconfigure its actual hardware, or to load different algorithms into its memory. It may be that the actual processor is of a traditional silicon/binary type, with a suitable interface that acts as a connector/adapter/translator/mapper between the computer and the storage. Alternatively, the entire computer may be constructed to directly manipulate these symbols, or at least to convert them in a much more tightly coupled manner than a traditional computer could.
  5. The symbols used may not resemble any of the symbols familiar to us like those on our keyboard.  It may be more convenient to exploit the shapes that clusters of atoms tend to form when combined together.  Crystals are one example, but they tend to have several variations.  The point is that the shapes need to be as easy as possible to construct, be stable, yet be consistent and deterministic. If a given element, with a limited number of atoms forms the same set of shapes, it may be possible to filter them so that they can be used to represent a set of symbols.
  6. Utilize binary or some other radix where needed, or where more generalized information is needed. Binary data may still be stored as a shape rather than utilizing the atoms or clusters of atoms as simple zero-one bits. More is discussed about this below.
  7. A two-dimensional photograph could be compressed down as a simple black-and-white photo (atom/no atom), or a color photograph (1, 2, or 3 atoms, where one = “red”, two = “blue”, and three = “yellow”). Simple markers could be used to (A) tag that it is photo information and (B) tag the next row of an array, or simply tag the actual “bit” length of each “row” of pixels, in other words, to mark out how many of the patterns “1”, “2”, and “3” would be needed. Note, however, that four values can be represented in binary using just two bits (atoms): “00”, “01”, “10”, “11”.
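
Idea 2 above, symbols as lines of different lengths, can be sketched as follows; the "atoms as asterisks" model is purely illustrative:

```python
# Sketch of idea 2: symbols are runs of atoms of different lengths,
# read back by measuring each run. '*' marks an atom, ' ' an empty site.

SYMBOLS = {0: "*", 1: "**", 2: "***", 3: "****"}  # run length 1..4 -> 0..3
VALUE = {v: k for k, v in SYMBOLS.items()}

def write_row(values):
    """Lay the symbols down with a one-site gap between them."""
    return " ".join(SYMBOLS[v] for v in values)

def read_row(row):
    """Recover values by measuring the length of each run of atoms."""
    return [VALUE[run] for run in row.split(" ") if run]

assert read_row(write_row([3, 0, 2, 1])) == [3, 0, 2, 1]
```

The reader never needs to resolve individual atoms, only run lengths, which is exactly the relaxation of precision that makes the scheme attractive.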

Once again, a modified scanning tunneling microscope (STM) technology can be used to write and read the data using either technique or any hybrid combination.

Rather than storing information as bits, the information is stored and read directly at a symbolic level. Simple software algorithms would translate the characters and shapes to be used and interpreted as needed.

For example: a Java program may be stored as source code using a tiny number of atoms to represent each character. The Java program would be read using the STM-based reader, then translated (decompressed) into byte code and run on a conventional computer, if desired. Alternatively, an entire CPU architecture could be built around the new storage technique, directly manipulating the stored symbols. Literal XML tags could be used, if desired, to mark code and data sections.

The multiple techniques presented in this section and the previous section could be combined. For example, use the one/two-atom-pair technique to store binary code, and separate the code with special tags (atom-by-atom XML or otherwise).

Perhaps the text could eventually be shoved into super-long, nanotube-based structures and wound up into a disk storing trillions of times the data currently stored on today’s high-definition DVDs. This would be similar to the device described in the previous sections.

In scenarios where memory reads could be relatively slow, it would make sense to pack more data, using fewer atoms to represent it. The link above shows that, using current technology, at least three letters can be written and read using an STM.

Once the concept of reading symbols sinks in, it becomes apparent that the most general form of information can be represented in binary, and it may seem that information could be compressed better if data is always represented this way, but the concept of utilizing symbol recognition could enhance this most general case.  Using just two atoms (or groups or molecules), we can arrange them in the following ways:

( 1 ) “- “      [just one atom followed by no atom, or “10”] ,

( 2 ) “–”     [two atoms next to each other, or “11”],

( 3 ) ” -”      [no atom, followed by one atom, or “01”]

( 4 ) “=  “    [two atoms, one above the other, with no atom next to it, or “1010”],

( 5 ) “\ “      [two atoms at an angle, down and to the left, or “1001”]

( 6 ) ” /”      [two atoms at an angle, down and to the right, or “0110”]

( 7 ) “_ “      [no atoms on top and one atom below, or “0010”]

( 8 ) ” _”      [no atom on top, and one atom below and to the right, or “0001”]

Of course, in theory, there could be up to four atoms within the given space, allowing for 16 patterns (15 if the all-empty pattern is excluded), but the reader and writer must both be able to distinguish all of those patterns in the same tiny space. It may be that multiple atoms packed closely together will not retain a stable pattern.
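
The pattern count is easy to verify by enumeration:

```python
# Enumerating the "up to four atoms within the given space" remark:
# a 2x2 grid of occupied/empty sites yields 16 distinct patterns,
# 15 if the all-empty grid is reserved (e.g., as a separator).

from itertools import product

# each tuple: (top-left, top-right, bottom-left, bottom-right) occupancy
patterns = list(product([0, 1], repeat=4))
non_empty = [p for p in patterns if any(p)]

assert len(patterns) == 16
assert len(non_empty) == 15
```

So each 2x2 cell could in principle carry four bits, provided the writer can place and the reader can resolve every pattern reliably.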

Once the technology reaches the level where a single atom could be read, then pure binary representation may be the best technique in 100% of the cases, but using shape recognition may still be the best way to interpret the information. It may be more practical to limit the number of atoms within a given space and interpret the limited number of shapes within that space as a particular value.  It would work in a manner not unlike using braille for the blind. If all of the available space can be filled in with every combination, that is great, but we can still exploit the concept without completely utilizing every conceivable combination and permutation.

Unusual Processing NanoMachines That Eat Data

Another variation on reading data could be of the destructive kind: read the atoms by grabbing them off of their storage surface and literally pass the data into the processor. A machine that works directly with shapes instead of bits could process different symbols by filtering them into different locations. Lined-up symbols like those described in idea 2 above could direct the data based on each symbol’s length; longer symbols could not enter shorter slots. Data contextualized as numeric or alphabetical could quickly be sorted by length (spaghetti sort), and addition would be fairly straightforward (add the lengths).
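
The sorting remark can be sketched as a bucket-style "spaghetti sort", where each rod-symbol falls into the slot matching its length; the discrete model below stands in for the physical filtering:

```python
# Sketch of the spaghetti-sort remark: if symbols are physical rods of
# different lengths, a machine can sort them by letting each rod drop
# into the slot it fits. Modeled here as buckets indexed by length.

def spaghetti_sort(lengths, max_len=10):
    """Bucket rods by length; reading slots in order yields sorted data."""
    slots = [[] for _ in range(max_len + 1)]
    for rod in lengths:
        slots[rod].append(rod)  # each rod drops into its matching slot
    return [rod for slot in slots for rod in slot]

assert spaghetti_sort([5, 2, 9, 2, 7]) == [2, 2, 5, 7, 9]
```

Like its physical analogue, this runs in a single pass over the data, at the cost of needing one slot per possible symbol length.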

In another context, the data could be interpreted as an algorithm for constructing multiple copies of another nano-mechanical machine. The symbols may consist of various-length rods, gears, levers, and pulleys as the “data” section, intermingled with short instruction sets. The processor may be cleverly enough designed to be capable of understanding how to manufacture thousands or millions of tiny machines. The symbol “6” followed by the symbol for a gear could indicate that the machine is to grab the next six gears out of the gear repository, or instruct another part of the machine to build six gears to the size of the “data” gear, perhaps using the same element in doing so (the data gear may be made out of carbon or gold, for example).

Utilizing the direct, literal “grabbing” of data, the mass of the “bits” could be exploited by a machine designed to take advantage of data in this context. For example, data could be directly sorted or added together by mass; larger atoms or more massive molecules could be filtered so that the computing machine would be reconfigured to perform different operations. Suppose nano-sized gears could only be turned by an atomic mass greater than or equal to 18 (one water molecule, or 18 hydrogen atoms).

Grabbing data may be as straightforward as pushing “end of file” atoms into one end of a nanotube, thus allowing the program to push out the other end, in FIFO fashion, down into a slot where the calculating machine sits.
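
The FIFO feed might be modeled like this; the element labels and the "EOF" marker atom are invented for illustration:

```python
# Sketch of the FIFO nanotube feed: atoms pushed into one end of the
# tube emerge from the other end in order, with a sentinel "end of
# file" atom marking the end of the program. Modeled with a deque.

from collections import deque

EOF = "EOF"  # hypothetical end-of-file marker atom

tube = deque()
for atom in ["H", "D", "D", "H", EOF]:
    tube.append(atom)          # push atoms into one end

program = []
while True:
    atom = tube.popleft()      # atoms emerge from the other end, FIFO
    if atom == EOF:
        break
    program.append(atom)

assert program == ["H", "D", "D", "H"]
```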

Conclusion

We can utilize shape recognition at the atomic or molecular level to store binary information. If a single element is utilized, then the shape alone could represent an arbitrary value.

We can utilize shape recognition at the atomic or molecular level to store information directly as symbols, potentially packing more information into the same tiny space where a single bit may ordinarily be stored. This same technology could then be utilized to reduce the information back down to the binary level, using the same techniques and technology that we use to detect shapes. It may turn out that multiple shapes are more readily recognized than directly using the atom (or smallest practical “unit”) in the more straightforward way of simply reading “atom = 1”, “no atom = 0”, or “two atoms = 1 and one atom = 0” in a linear manner.

We could potentially use different elements or their isotopes to store more information in a single bit without resorting to quantum-computing effects, instead exploiting the different spectra and/or masses, and possibly other differences, among the elements. If combinations of elements are used, then atomic number or electromagnetic properties could be utilized to give a single physical shape more than one value. Two star shapes with different masses could represent two distinct values, for example. Two squares made of the same element, but with differing numbers of atoms or differing spaces between the atoms, could potentially be exploited.

We could potentially use different elements as markers or tags, not unlike tags utilized in XML.  We could literally create XML tags just like the famous IBM logo was created. Alternatively, we could employ the idea of packing different elements or molecules between sections of data to be interpreted as a change in context.

We could use nanotubes to stuff atoms into, to be read one at a time, or potentially read as chunks whose unique shapes are later decompressed. It is even conceivable that a highly sophisticated machine could interpret and manipulate the chunks and shapes directly. The shape of each chunk may be exploited in the design of the processor itself. Taken further, the tags or markers could be utilized to directly modify the processor.

We could use a planar surface to read shapes, or combine this concept with the nanotube or buckyball concepts mentioned earlier.

Processing data at the symbolic level may open up new and unique approaches to computing.  The processing techniques could be simulated using ordinary, binary computers by building a virtual machine designed to manipulate symbols. The simulations would be used as a discovery process so that alternative architectures may be explored.

Finally, we may discover that these ideas are not good ideas at all, yet they may toggle the mind of someone else in science, art, or music in some yet unknown way. Perhaps the ideas are on the right track but require another approach that someone else may come up with. Perhaps someone in a completely different field may look at this posting, sleep on it, and come up with another novel idea that is directly useful or creates yet another tangent. A fractal-like graph may result, pointing toward some great idea that solves some totally unrelated problem. The final result may be four or five or six or one hundred people down the chain; it may loop back around to me…

In the futuristic, online, open-source science fiction novel “Upgrade 01A“, computers that utilize nanotechnology (some perhaps similar to what is briefly outlined here), some based on DNA, some based on quantum computers, others based on yet-unheard-of technologies, and some hybrids, are commonplace. Many tiny, microscopic computers and robots are integrated inside the bodies, brains, and clothing of the main characters. Thus, the characters’ physical abilities, intelligence, and life expectancy are greatly enhanced, or upgraded. New devices implanted in a person’s brain are often referred to as “upgrades” and may include a model number that is traditionally denoted as a hexadecimal number. Computers are integrated into virtually every device and object of value. Please read part one and enjoy…


© All rights reserved, with the exceptions given on the home page. In short, feel free to use this material in any public URL with “.com”, or “.edu” domains for non-profit purposes. Please link back to whatever you reference.

Building a Network of Intelligent Agents

A Top Down And Bottom Up Approach

Notes:

1. This posting is subject to multiple revisions, as are all posts, stories, and music on this blog site. A stream-of-consciousness writing technique is incorporated, so some paragraphs may tend to veer off onto tangents. It is the nature of this blog site. Eventually this posting will be linked into the science fiction story Upgrade 01A, but the content itself is not necessarily fiction; many of the concepts are very possible within the next decade, starting right now before your eyes.

2. There is already a great deal of work being done on creating a framework for what is being discussed here. The W3C Organization, for example, has done extensive work in such areas as:

and various other Semantic Web Activities…

Agents will take advantage of progress made in these areas as the technologies mature. Some progress is already being made. See links throughout the text and at the bottom for a few more ideas to consider. A later posting (Part II) will likely dive into some of this as details are further revealed.

Comments are welcome!

Author: David Saxton Ullery

Step One: Build An Internet Spider Program with Weighted Results

It will start out as a simple Spider Program that can quickly crawl through web sites and follow hyperlinks. In addition, it will have its results weighted, based on a simple interface with its owner. For example: “Good Job” may translate to +1, “Great Job” to +2, “Not what I wanted” to -1, and so on. The weights would be tunable.
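
A minimal sketch of this feedback loop, assuming a phrase-to-weight table like the one described; all class names, phrases, and URLs below are hypothetical:

```python
# Sketch of the weighted-feedback idea: owner phrases map to score
# deltas, and each crawled result accumulates a tunable weight.

FEEDBACK = {"good job": +1, "great job": +2, "not what i wanted": -1}

class SpiderAgent:
    def __init__(self):
        self.weights = {}  # url -> accumulated score from owner feedback

    def record_feedback(self, url, phrase):
        """Translate an owner phrase into a weight adjustment for a result."""
        delta = FEEDBACK.get(phrase.lower(), 0)
        self.weights[url] = self.weights.get(url, 0) + delta

    def ranked_results(self):
        """Return URLs ordered by accumulated weight, best first."""
        return sorted(self.weights, key=self.weights.get, reverse=True)

agent = SpiderAgent()
agent.record_feedback("http://example.com/a", "Great job")
agent.record_feedback("http://example.com/b", "Not what I wanted")
assert agent.ranked_results()[0] == "http://example.com/a"
```

Making the FEEDBACK table itself adjustable is one simple way to realize the "tunable weights" the text calls for.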

The back end would be augmented to include technologies such as a simple neural network, heuristics, fuzzy logic, genetic programs and algorithms that adapt to fitness functions, swarm intelligence, and so on. Open-source databases would generally be included. The purpose of the back end is to add knowledge to each agent, add personality to the human-agent interface, add new capabilities to the agent-agent interface, gradually improve semantic network pathways, and gradually add seamless context-switching capabilities as the network grows to sufficient size. The human ought to be able to change topics, and the agent ought to know where to look to find intelligent-sounding responses, so that it may interact in ways that are both useful and entertaining to its owner.

Programming A Spider in Java

Open Source Crawlers in Java

Human interfaces may be safely individualized to the owner’s taste as long as the agent-to-agent interface remains intact.

Like ordinary existing spider programs, these new intelligent Spider Agent hybrids would have HTML access scanners as well, and many or most versions of these programs would start off by scanning web sites for information and tags.
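For orientation, a bare-bones version of that crawl loop might look like the following. This is only a skeleton: the `fetch` method is a stub standing in for a real HTTP client, and the regex stands in for a real HTML parser.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Skeleton of the spider's scan-and-follow loop: fetch a page, harvest
// hyperlinks, and visit them breadth-first up to a page budget.
public class SpiderSkeleton {
    private static final Pattern HREF =
        Pattern.compile("href=\"(http[^\"]+)\"");

    // Pull absolute hyperlinks out of raw HTML (crude on purpose).
    public static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) links.add(m.group(1));
        return links;
    }

    public static Set<String> crawl(String seed, int maxPages) {
        Set<String> visited = new HashSet<>();
        Deque<String> frontier = new ArrayDeque<>();
        frontier.add(seed);
        while (!frontier.isEmpty() && visited.size() < maxPages) {
            String url = frontier.poll();
            if (!visited.add(url)) continue;   // skip already-seen pages
            for (String link : extractLinks(fetch(url))) frontier.add(link);
        }
        return visited;
    }

    // Stub: a real spider would issue an HTTP GET here.
    private static String fetch(String url) {
        return "";
    }
}
```

The weighting and semantic back ends described above would hook in where each fetched page is processed.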

Step Two: Add Semantic Interfaces to Web Sites Using XML

XML tags will be among the most highly dynamic features within the network of cooperating Agents. WordPress and other blog sites already do this. This posting includes tags and categories.

A nice feature is that each agent ought to be attached to a home-base web site, or a network of the owner’s blogs, websites and so forth. For example, this author owns this site plus access to other sites. Each site is linked on the home page. One site has the owner’s music that may be listened to by anyone.

This site features original science fiction stories that include links to other sites that contain information of interest and relevance to the context of the stories. Such related links could be tagged using an agreed upon standard, making it very easy for the back-end agent heuristics to gather meaningful semantic information. Thus, another network is created in addition to the spider-agent network, adding a kind of synergy on top of the system.
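One possible shape for such an agreed-upon tagging standard, marking a story's related links with machine-readable context, might look like this (the element and attribute names are purely hypothetical, not an existing standard):

```xml
<!-- Hypothetical markup: names are illustrative, not a real standard. -->
<relatedLinks story="Upgrade 01A">
  <link href="http://en.wikipedia.org/wiki/Semantic_Web"
        topic="semantic-web" relevance="high"/>
  <link href="http://en.wikipedia.org/wiki/Ant_colony_optimization"
        topic="swarm-intelligence" relevance="medium"/>
</relatedLinks>
```

With links tagged this way, the back-end heuristics could weight a link by its declared topic and relevance instead of guessing from surrounding prose.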

Semantic Web

XML

See more links on the semantic web at the bottom of this post. Thanks!

Step Three: Build a Network of Spider-Agent Programs that Learn From Each other

Make the Agents highly cooperative, with dynamic service-oriented architecture hybrids. The interfaces will need to evolve, adapt, and be designed for a high degree of cooperation. At first, the initial agents may interact with a handful of trusted sites, but later there may be thousands, millions, or a billion. Some agents will evolve specialties, others may be more generalist, and others may evolve into a kind of middleman. Still others may act as a kind of immune system or private security guard. There may even be a “queen” agent that produces offspring.

A single human may own several cooperating agents, and may use self-selecting or breeding techniques to mate the programs. Some of these agents may simply be clones to allow faster searching; others may be specialists of one type or another. This process may lead to venturing out into the community of networks of trusted sites. As with biological systems, too much inbreeding may lead down a bad path, or at the very least to limited imagination – progress on the order of linear. Cooperation, on the other hand, leads to something beyond simple exponential growth: ideas bounce back and forth, and agents “breed” with a larger pool of ideas. “Breeding” may be taken literally with genetic programming techniques, metaphorically with human interaction – new ideas quite naturally added into the mix – or, more likely, a combination of the two.
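Taken literally, "breeding" agents with genetic-programming techniques could be as simple as the following toy sketch, where each agent's behavior is reduced to a weight vector and offspring mix parent genes with occasional mutation. The representation is an illustrative assumption, not a proposed design.

```java
import java.util.Random;

// Toy sketch of agent "breeding": uniform crossover of two parents'
// weight vectors plus small Gaussian mutation. A fitness function
// (not shown) would decide which offspring survive.
public class AgentBreeding {
    private static final Random RNG = new Random(42);

    // Uniform crossover: each gene comes from one parent or the other.
    public static double[] crossover(double[] a, double[] b) {
        double[] child = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            child[i] = RNG.nextBoolean() ? a[i] : b[i];
        }
        return child;
    }

    // Perturb each gene with the given probability.
    public static double[] mutate(double[] genes, double rate) {
        double[] out = genes.clone();
        for (int i = 0; i < out.length; i++) {
            if (RNG.nextDouble() < rate) out[i] += RNG.nextGaussian() * 0.1;
        }
        return out;
    }
}
```

Cloning for faster searching is then just copying a vector unchanged, while specialization corresponds to selection pressure from different owners' fitness functions.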

Hopefully, researchers with advanced knowledge in the various fields I am mentioning here will cooperate with this effort and contribute by adding interfaces to their own projects. Presumably, they would be able to retain an isolated version of their research projects. If done carefully, the research community and the public at large would stand to benefit. Imagine using such a system for research on virtually any topic – be it scientific, music, philosophy, or pure entertainment!

The Facade pattern may be used as the interface, providing services. The services may include providing entire classes or class sets (Java perhaps) that will have a function another agent requests. In addition, each web site will provide XML tags. Interfaces will need to be standardized in such a way that the methods may be discovered and queried.
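A minimal illustration of the Facade idea: one narrow, discoverable surface hiding the back-end subsystems. The method names and stubbed subsystems here are assumptions for the sketch, not a proposed standard.

```java
// Facade sketch: other agents see only query() and a discovery hook,
// never the subsystems behind them.
public class AgentFacade {
    // Subsystems behind the facade (stubbed for illustration).
    static class SemanticIndex {
        String lookup(String query) { return "results for " + query; }
    }
    static class WeightStore {
        int weightOf(String query) { return query.length(); }
    }

    private final SemanticIndex index = new SemanticIndex();
    private final WeightStore weights = new WeightStore();

    // The single service another agent would call.
    public String query(String topic) {
        return index.lookup(topic) + " [weight=" + weights.weightOf(topic) + "]";
    }

    // Discovery hook: lets other agents learn what this facade offers.
    public String[] advertisedServices() {
        return new String[] { "query" };
    }
}
```

Standardizing something like `advertisedServices()` is what would let the methods "be discovered and queried" across the network.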

They (the network of hybrid cooperating spider agents) may start off sticking to single topics and spread to new topics over time. For example, a new type of worldwide dictionary may result from an initial effort. PC owners may dedicate their machines’ idle time to such a venture, in a manner similar to the SETI@home effort. The network of agents involved may actually start understanding words, or at least act as if they do. Such a project may end up branching off into many useful areas, including better automated phone systems, better GPS and music units in the car, better machine-patient interfaces to help in the care of the elderly, and all kinds of fun toys, games, and gadgets. An efficient, mobile multilingual translator may result. Imagine your smart phone with instant access to word phrases translated to any language for you (spoken or text or both). Suppose each agent were connected to a semantic “dictionary” of only 10,000 words and phrases, but there were 1,000,000 such agents all cooperating on a high-speed network. Each agent’s home base would include its own competing heuristics, neural network, and genetic algorithms. Eventually, the best of these systems would survive, while poor ones would die out, be replaced, or evolve into even better systems.

Currently, a single neural network in a single application is often limited to a hundred or so artificial neurons. Imagine one million agents cooperating with one another, each having several connections (artificial synapses) to the others.

Some agents may evolve entirely through artificial selection, using genetic algorithm techniques with fitness functions. Others may have Intelligent Design aspects (unlike natural selection, which involves no ID but took billions of years to get to where we are today – we do not want to wait that long), where the owner adds in top-down heuristics. Still others may be hybrids of the two, with additional features such as fuzzy logic.

In each step, positive results are reported back to your spider program and the program adds +1 to its database. Negative results are given a −1 response.

Once step three is achieved, then your agent could search other agents for similar searches or questions, check their given weights and use these weights as a means of finding the most likely results.
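A sketch of that cross-agent consultation, assuming each peer exposes the weights it assigned to results for a similar search (the data shapes are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of step three's weight sharing: poll peers that ran similar
// searches and rank candidate results by the summed weights those
// peers reported, returning the most likely result.
public class PeerWeightedSearch {
    // peerWeights: peer name -> (result URL -> weight that peer assigned)
    public static String bestResult(
            Map<String, Map<String, Integer>> peerWeights) {
        Map<String, Integer> totals = new HashMap<>();
        for (Map<String, Integer> weights : peerWeights.values()) {
            weights.forEach((url, w) -> totals.merge(url, w, Integer::sum));
        }
        return totals.entrySet().stream()
            .max(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElse(null);   // no peers consulted yet
    }
}
```

A refinement suggested later in the text would weight each peer itself by how often its answers worked out, rather than summing all peers equally.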

Agents with the best responses to the types of questions a person may ask could be linked to the owner’s agent and replicated for use on that person’s computer or computer network. Owners may choose to retrieve personalities from cooperative agents. Perhaps a Ramona-like avatar would be added into the mix, or one that plays chess or other games, or both!

Agent programs could be standardized in such a way that modules could readily be replicated. A person’s agent would automatically select the fittest algorithms. The algorithms will evolve “naturally” using various fitness functions. The fitness functions themselves can be shared and distributed. Human augmentation would be encouraged.

Example Scenario

Suppose an individual had a tendency to ask questions about Friday evening flights from LAX to Portland, enjoyed playing chess, had an arm-chair interest in nanotechnology, wanted the latest iPhone apps, enjoyed Aaron Copland, Mozart, and Bob Dylan, and enjoys images of fractals and exotically clad young women.

First, let’s examine the flight information in more detail. After several runs, this person’s agent would come to find out the following:

The owner prefers American Airlines over Alaska, unless AA’s price is more than $20 above Alaska’s (suppose the owner has frequent flier mileage). If the flights are booked at least two weeks in advance, the owner gets the seats he wants at a reasonable price. The agent learned all of this on its own.
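The learned rule could be encoded as simply as this toy sketch; the carriers and the $20 threshold come straight from the scenario, while the class and method names are hypothetical:

```java
// Toy encoding of the learned booking rule: prefer American unless it
// costs more than $20 over Alaska.
public class BookingRule {
    public static String pickAirline(double aaPrice, double alaskaPrice) {
        return (aaPrice - alaskaPrice > 20.0) ? "Alaska" : "American";
    }
}
```

The point of the scenario is that the agent induces a rule like this from feedback rather than having it hand-coded, so the threshold itself would be a learned, tunable parameter.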

Another owner’s agent discovers this agent while searching for the same information. It requests the data, plus any modifications to the search-algorithm order, from the first agent. It turns out that the results are pretty good for the other owner, except that this person prefers Alaska Airlines. The second agent quickly makes the adjustments, and its modified rules are made public.

A third agent with similar requirements finds both of these interfaces, along with Googled information from Delta Air Lines. Its owner likes the cheapest prices regardless of airline, so the agent grabs the first agent’s strategy for finding better prices and applies it to a more general algorithm for its master’s purposes.

Suppose information was shared among all of the agents as to how often positive results were obtained directly from each agent. Weights could be added to each agent regarding this information.

As new information is added, new standard XML tags could be added into the mix and distributed. At first, there may be very few tags, covering the standard information given by all airlines: names, flight numbers, pricing information, and so on. Pricing information may be the most dynamic, with flight times coming in second. The information does not have to be a direct, static value; it could serve as a pointer (a URL, for example).
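A hypothetical tag set for such shared airline data might look like the following; the names are illustrative, not an agreed standard, and the dynamic price field is expressed as a pointer rather than a static value:

```xml
<!-- Hypothetical flight tags; the price element points to a URL
     because pricing changes too quickly to embed statically. -->
<flight carrier="American Airlines" number="AA1234">
  <route from="LAX" to="PDX"/>
  <departure day="Friday" time="18:45"/>
  <price currency="USD" href="http://example.com/fares/AA1234"/>
</flight>
```

Agents could then agree on which fields are stable enough to cache and which must be re-fetched through the pointer on every query.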

Over time, some agents, or pools of agents, will become robust, generalized intelligent bots able to respond to more and more questions, play chess, research nanotechnology, and more simply by linking into areas of interest by tags. Tags themselves will evolve to have both general and specific qualities about them.

Context-switching agents may evolve, so that topics may change seamlessly and efficiently. Context switching is one of the goals of the back end of the spider-agent hybrid system, which needs to be highly modular, with a well-designed object-oriented architecture, so that new versions can easily be swapped in.

Efficient data trails may be created in a manner similar to an ant colony’s chemical trails to the best food sources (swarm intelligence is but one example). Trails with the highest weights are followed, but with a twist: the weights need to have semantic context. The agents are usually looking for specific types of information for their owners, although sometimes they may be hunting down better algorithms or better heuristics for their own purposes. If an agent does better and improves itself, it will have a tendency to survive to the next generation, so this incentive ought to be built into its program(s). It should “want” to survive and have its algorithm cloned or its genes replicated, but only if it improves the colony (only if the owner’s agents will improve as a result). The “food” is whatever data or information the owner likes. To survive, the agent must improve the overall quality of this food for its owner.
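The trail mechanics borrow directly from ant-colony optimization: deposit pheromone on paths that produced "food" and let all trails evaporate each cycle so stale paths fade. A minimal sketch, with illustrative rates and string-keyed trails:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of pheromone-style data trails: reinforcement when a trail
// yields results the owner likes, evaporation so stale trails die out.
public class DataTrails {
    private final Map<String, Double> pheromone = new HashMap<>();
    private final double evaporation;   // fraction lost per cycle

    public DataTrails(double evaporation) {
        this.evaporation = evaporation;
    }

    // Reinforce a trail that produced a liked result.
    public void deposit(String trail, double amount) {
        pheromone.merge(trail, amount, Double::sum);
    }

    // All trails fade each cycle.
    public void evaporate() {
        pheromone.replaceAll((t, p) -> p * (1.0 - evaporation));
    }

    public double strength(String trail) {
        return pheromone.getOrDefault(trail, 0.0);
    }
}
```

The semantic-context twist from the text would make the key richer than a plain string, e.g. trail plus topic, so that a strong chess trail does not mislead a nanotechnology query.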

The entire Internet may someday evolve into an artificial super-organism, from the bottom up and from the top down. Humans would still be free to create new ideas from scratch and add them into the mix to speed up the process, while the agents themselves share and swap data and algorithms. In the long term, this approach may be more successful than any of the relatively isolated AI projects to date.

Negative intruder spiders will require a defense system of agents (an immune system). What does not kill the system will make it stronger! Competing systems will surely evolve.

Powerset Semantic Web Searching

Artificial Intelligence

The Emotion Machine – Minsky

Ant Algorithms

http://en.wikipedia.org/wiki/Ant_colony_optimization

Fuzzy Tutorial

Fuzzy Logic

http://www.aco-metaheuristic.org/

Free Will and AI forum posting (if you came here from there and want to return ….


Author: David Saxton Ullery

© All rights reserved, with the exceptions given on the home page. In short, feel free to use this material in any public URL with “.com” or “.edu” domains for non-profit purposes. Please link back to whatever you reference.

Consider cooperation for a greater gain over theft for short-term smaller gain. If you have good ideas share them using links, comments, original ideas. Make us all wealthy …. thanks!