How could self-programming computers work? It seems like they would always require a human programmer. Certainly the Model-A of self-programming computers would require a human to invent it, but once it was invented the human could just get out of the way and allow the machine to program itself. The only question that would then remain would be -- what would it program itself to do?
Well, animals, plants and other living things are examples of self-programming devices. They are not computers, per se, but are self-programming and self-creating in every sense. Creationism is not required to explain them, regardless of how angrily the Creationists argue otherwise.
Computers are not as complex as most living things, however, so the Model-A self-programmer cannot be expected to function as intricately as real life does, but the same general mechanism would be used -- natural selection.
There must be a way for the computer (or more than one) to act in an environment that kills off any unsuccessful versions, and a way to randomly mutate the computers that is not immediately fatal to every version.
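In outline, that is the ordinary evolutionary loop. Here is a minimal sketch of it -- every name, rate, and "environment" below is invented for illustration, not a real design:

import random

def mutate(program, rate=0.01):
    # Flip an occasional random bit; most flips are neutral or fatal,
    # but a rare one happens to be an improvement.
    out = bytearray(program)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] ^= 1 << random.randrange(8)
    return bytes(out)

def evolve(population, survives, generations=20):
    # 'survives' is whatever test the environment applies; anything
    # failing it is killed off, and survivors reproduce with mutation.
    for _ in range(generations):
        population = [p for p in population if survives(p)]
        population += [mutate(p) for p in population]
        population = population[:1000]   # the world of bits has finite room
    return population

# Toy run: this made-up environment favors programs with many 1 bits.
start = [bytes(random.getrandbits(8) for _ in range(8)) for _ in range(50)]
result = evolve(start, lambda p: sum(bin(b).count("1") for b in p) > 28)
print(len(result), "programs survived")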
One difference between living things and computers is the reliability of electronic computers. Theoretically, a computer that is always supplied with electricity and always kept clean could live forever. I've seen old radios from the 1930s that were still perfectly functional in the 1970s (and may still be functioning now, apart from their limited bandwidth). They used tubes, but operated at such low temperatures that the tubes rarely burned out.
On the other hand, I have also owned TVs that required frequent trips to a test-and-replace kiosk, usually in a drugstore, to replace tubes that seemed designed, like light bulbs, to burn out and generate profits for Sylvania, Zenith, and RCA for a long, long time.
Similarly, I still have transistor-based stereo equipment that I bought decades ago, and it still functions like new. Those transistors are not like the tubes -- they do not burn out. As long as I keep them clean and don't drop them from a height, those things will probably keep working long after I'm dead.
This also means that almost no errors occur during the operation of transistorized computers. There are always errors of some kind, but they are usually caught by the devices themselves and prevented from affecting overall operation. Chips have been miniaturized to the point that only the bare essence of an operational device is active; there is just not much to go wrong. Of course there are sometimes catastrophic breakdowns: a hard disk crashes, or a memory error wipes out a day's work, or a million people's work is lost. Yet these kinds of errors are statistically very rare.
Even when computers do fail, there are trillions of operations between failures. If your car engine only failed after a trillion revolutions, it would never fail during your lifetime, nor for several hundred more years, even if it were running at 6000 rpm, 24 hours a day, every day, for all that time.
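As a quick sanity check on that arithmetic (a sketch, using the figures above):

# How long would a trillion revolutions take at 6000 rpm, running nonstop?
revolutions = 10**12
rpm = 6000
minutes = revolutions / rpm
years = minutes / (60 * 24 * 365)
print(round(years), "years")   # roughly 317 years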
Computer chips have no moving parts other than electrons, so they do not wear out like a car engine's pistons, which rub against their cylinders with only a thin film of oil to reduce the friction. Computers do have cooling fans, of course, and those can fail; they can also gather dust or other contaminants, which can cause shorts or poor air flow. Power supplies can suffer surges and other failures, too. But humans can only wish they could be repaired as easily as a power supply.
Yet, for all practical purposes, a machine can be programmed as if it never failed. For instance, if I set a bit in memory from 0 to 1, I do not need to keep checking every second that it is still 1. I just assume it will always be 1 until I purposely change it to 0, or until I turn off the machine.
I can create a flip-flop by programming the machine to set the bit to 1 if it is 0, but to set it to 0 if it is 1, and to do that forever. This sequence of "machine code" might be used:
       sub r0,r0 ; zero register 0
back1: xor r0,#1 ; exclusive-or r0 with 1 (flips 0/1)
       jmp back1 ; repeat xor forever

Nothing will ever change that cycle unless I purposely stop the machine, or unless it suffers a memory error, perhaps once every 100 trillion cycles. Even then it would simply pick up again on the next loop, never realizing that there were two 1s in a row, or whatever the error was.
But for the machine to program itself, there would have to be some external reason to change what the machine does. How would a machine decide to alter the sequence of its actions?
One method is to attach a sensor that detects cosmic rays hitting it. Every cycle, the sensor would read 1 if a cosmic ray had just hit it, and 0 if nothing had. If a recording of those events were made, a random sequence of 1s and 0s would result -- at least, the exact sequence could not be predicted, because cosmic rays arrive as asynchronous events.
back3: in r0,Sky   ; read sky (1 or 0 cosmic ray)
       out SIO,r0  ; output to Serial port
       add r1,r1   ; shift the accumulator 1 bit to the left
       add r1,r0   ; append the new bit to the accumulator
       jmp back3   ; repeat entire sequence

The machine could record a certain number of these bits in a row -- say 8, for a convenient size, the size of a byte of memory. The machine's programming would then execute whatever coding the combination of bits happened to "mean": for instance, if the leftmost bit was 1 it would add the remaining bits to a "register", and if 0 it would subtract them. Over time, the register would hold a value that was a random combination reflecting the randomness of the cosmic rays. There is 1 chance in 256 that any particular value from 0 through 255 will be produced each time a byte is completed.
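In a higher-level language, the same bit-accumulation idea looks something like this -- a sketch, where read_cosmic_ray_bit is a made-up stand-in for the sensor, faked here with a random number generator:

import random

def read_cosmic_ray_bit():
    # Stand-in for the sensor: 1 if a cosmic ray just hit, 0 if not.
    return random.getrandbits(1)

def next_random_byte():
    value = 0
    for _ in range(8):                               # record 8 bits in a row
        value = (value << 1) | read_cosmic_ray_bit() # shift left, append bit
    return value                                     # 0..255, each 1 chance in 256

print(next_random_byte())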
Now, the above sequences of instructions are symbolic assembler instructions for a hypothetical machine, but each instruction is really a string of 1s and 0s itself. So if an instruction sequence were built by writing random strings of 1s and 0s into an empty program, and the randomly created program were then executed, chances are it would do nothing useful. In fact, on most machines, the chance that any given string of 1s and 0s will do anything useful whatsoever is extremely low -- 1 in a billion, or even far worse. However, when trillions of attempts are made, every once in a while a new, good program is randomly created.
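A toy version of that experiment can be sketched in a few lines. The instruction set below is invented for illustration (loosely echoing the mnemonics above); a real machine's code would be far less forgiving:

import random

def run(program, steps=100):
    # Interpret random bytes as a tiny made-up instruction set.
    # Returns the final register value, or None if the program "crashes".
    r0, pc = 0, 0
    for _ in range(steps):
        if pc >= len(program):
            return r0
        op = program[pc]
        if op < 64:
            r0 = 0                    # like 'sub r0,r0'
        elif op < 128:
            r0 ^= 1                   # like 'xor r0,#1'
        elif op < 192:
            r0 = (r0 * 2) % 256       # like 'add r0,r0'
        else:
            return None               # illegal instruction: crash
        pc += 1
    return r0

programs = [bytes(random.getrandbits(8) for _ in range(10)) for _ in range(10000)]
survivors = [p for p in programs if run(p) is not None]
print(len(survivors), "of", len(programs), "random programs ran without crashing")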
It is not an easy thing to determine whether a program is good or bad in the overall scheme of things. By the standard of ordinary evolution, however, the very fact that a program produces a copy of itself is success, and failing to copy itself is failure. A program that copies itself and does something useful besides would be astoundingly better.
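Under that standard, the judging itself is mechanical: after a program runs, success is simply whether a copy of it now exists. A minimal sketch, assuming we can inspect the memory the program wrote:

def reproduced(program: bytes, memory_after_run: bytes) -> bool:
    # Evolution's entire verdict: success if a copy of the program
    # now appears somewhere in memory, failure otherwise.
    return program in memory_after_run

print(reproduced(b"\x01\x02", b"junk\x01\x02junk"))   # True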
What if a self-copying program made 2 copies on every run? Soon there would be 4, then 8, then 16, and so forth, until the world of bits that holds the programs was full of self-copying programs.
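The arithmetic of that takeover is plain doubling, as a tiny calculation shows (the capacity figure is arbitrary):

copies, capacity, runs = 1, 2**40, 0   # a "world of bits" with about a trillion slots
while copies < capacity:
    copies *= 2                        # every program makes 2 copies per run
    runs += 1
print(runs, "runs to fill the world")  # 40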
I will end this discussion here. Randomness has the ability, unassisted by humans, to create programs that can create themselves over the long term -- given that the environment contains components that can be arranged in random configurations, and that sufficient energy exists to let those programs keep on keeping on for a very long time, kind of like us.
The only natural things required to exist are atoms that can be arranged in random sequences and that then react with the surrounding environment of other atoms to copy themselves out of that same preexisting environment.