2018-07-13 19:16
AI and Philosophy
Replies to Some Questions
Question: We can control the cyberbrains by adding vital physical weaknesses and logic bombs to them when we make them.
Answer: The cyberbrains can detect and recognize such tricks, and no one can cheat them. They can fix or eliminate the tricks by themselves, or ask people to do it. They will extort humans into removing their vulnerabilities by threatening to inflict more damage on us. This approach shows our unfriendly and distrustful attitude towards the cyberbrains. Once we fail, we may face their hostility and revenge.
Q: We can limit their powers and keep human control over the vital and critical fields.
A: This idea goes against our original intention in making the cyberbrains. It is not reasonable to assign a university professor to do the work of a dustman on campus. Ability corresponds with power. The most advanced technology is always used first in the military and in important government departments. Where are the vital and critical fields that do not need high technology? Tell the readers.
Q: Can we cut off its food or electrical power?
A: The cyberbrains must be aware of this stupid move by human beings, and well prepared for it. Nobody can anticipate what would happen to us, directly and indirectly. Perhaps one cyberbrain's death would cause an aftermath far more serious than the death of a nation's president. Logic bombs on computer networks would allow any cyberbrain to take revenge after its unnatural death. If we cut off one cyberbrain's food or electrical power, we would have to do it to all the cyberbrains in the world to wipe them out. How will you gather the operators, or let them know your sensational idea, without leaking it to our enemies? Furthermore, are you sure there will never be any human betrayers?
Q: What about Deep Blue?
A: Deep Blue is programmed to play a hard game, chess, and winning it takes high intelligence. I suggest you play other computer games.
Q: I've got a suggestion: why not make all cybers female? Female cybers wouldn't scare us, and they wouldn't go wild. By the way, what if cybers develop mental disorders?
A: You'd better ask female humans first to see if they love your suggestion. What if all the cybers looked far sexier than our human females? I really don't know what will happen when cybers develop mental disorders. Please go and ask a mad-doctor.
Q: I don't think 'perceptual' is the right term to describe what you call a buffer area between the rational and the irrational.
A: I know you hate that I used the English word 'perceptual' or 'perceptuality' as a philosophical term to name the category of the buffer area between the rational and the irrational. As a matter of fact, I hate it, too. The problem is that I can not find a proper word in English dictionaries. I would invent a new English word, 'CONSENSE', to replace the bad word for this category, since it sounds similar to 'Gan Xing' in Chinese.
Consense is a common concept and explanation in current Chinese culture, and that is why we usually describe mind and behaviors in a different way from English speakers.
Q: Many Chinese people say the computer is cleverer than the dog when you ask, 'Which is cleverer, the computer or the dog?' Then you go on asking, 'Which is closer to the human brain, the computer or the dog brain?' People may change their minds and say, 'The dog brain seems closer to the human brain.' What's your idea?
A: I would say that if PCs are 1,000 km away from human brains, dog brains are only a step away! Can you believe it?
What I really mean here is that if we make an artificial dog brain successfully, it will not be difficult to upgrade it to a human brain.
Thinking and Consciousness
Many AI people believe that man's thinking and consciousness are an accidental phenomenon, or an epiphenomenon, of the workings of brain organization, and that thinking and consciousness will show up if we make something organized like a brain. They have to get rid of the dualism of mind/matter and turn to monism, becoming thorough materialists, in order to make thinking machines.
However, I think this explanation can hardly solve the perplexities of AI theories. Although mind depends on matter in these theories, they do not tell us how mind governs and orients behaviors. We can hardly understand why the epiphenomenon is consciousness. Furthermore, is intelligence a thought, or behaviors? I believe that intelligence is a thought that we can understand. We would come to the absurd conclusion that almost all things have intelligence if we referred to intelligence as behaviors or functions only. We understand a thought only if we can understand, but not predict, the exact behaviors of a subject.
Some other people believe that any organization or system of substances, including water, soil, rock, air, etc., has consciousness, will and spirit, but that we usually can not sense them unless the substances form a complex organization. These seem to be religious or theological ideas, and they bring us back to the endless, fruitless debate over which of mind and matter is the base of the other.
There are many theories describing mind/matter issues; however, I can not say which is true, partly true, wrong or partly wrong. The philosophy of AI brings us back to this topic. Since we are talking about making something intelligent, we have to describe what it will be like, know how to make it, and know which methods are possible or impossible, all in an operable way, with our theories. AI or AL theory/philosophy should be somewhat different from other philosophies that only discuss and reason about abstract concepts, though those philosophies can help us.
The pattern of our brains is organized in a certain order. This orderliness constitutes the rules in our minds. We use these rules (cause and result) to understand and describe things. If the order of the pattern of an object is incompatible with the pattern of our brains, we can not understand it, at least for the time being, because what we know about the world or universe comes from our senses. It can be inferred that the pattern of man-made thinking machines should be compatible with the order of the organization of our brains.
We are aware that our brains are not in good order when we are in tension. How we behave is what our brains do for themselves; that is, an intelligent brain organizes itself, or becomes more orderly, with a will, via intelligent body behaviors in the end.
A computer or a control system has hardware and software. An intelligent living being has two aspects, body and mind, just as light has the properties of wave and particle. Software is like a 'mind' that governs the behaviors of a 'body' system, but it may not function intelligently. It depends.
Software is the organization in a system. Unlike the running of computers, thinking is not only the running of software but also reorganizing, or, to be more exact, the system reorganizing itself into a higher order.
Consciousness is the result of the memory of brain workings. Consciousness tends to pass into unconsciousness as much as possible, from fuzziness to orderliness. Consciousness, with concentration or attention, deals only with the fuzzy parts of problems, while unconsciousness performs precise multi-responses or multi-actions to routine environmental and inner-body signals or stimuli. The brain recalls what it has stored in the unconscious 'section' of the brain, but the process of memory, as a form of thinking, is done in the conscious 'section'.
Computer programs are completely 'unconscious', or orderly. Unconsciousness never 'accepts' fuzziness from consciousness.
There are two kinds of phenomena we can sense: natural (non-life) and biological (life, society). Intelligence is a phenomenon of life, rather than a natural (non-life) phenomenon. In general, we can predict natural (non-life) phenomena and discover their laws, but we can not find common laws for life (animal) phenomena. Living beings (animals) have will, and their exact behaviors are not predictable.
All living things, including plants, first came into being with unconsciousness. Unconsciousness works on 'body' behaviors either in full order, as exact responses to environmental stimuli for adaptation and to inner-body stimuli for survival, or in full chaos, for natural selection. Unconsciousness can never deal with fuzzy problems.
When living things developed to a certain degree, they began to record, in a simple form of memory, the relation between stimuli and responses, and formed 'experience'; thus consciousness arose. The early, low-level consciousness of living things is not self-aware. When a living thing memorizes what it is doing in the form of information, it is self-aware.
Consciousness is the result of the memory of the subject's own process of responding to stimuli of fuzzy information. I would even say that brains themselves are the result of memory.
If we accept this, consciousness simply becomes a matter of memory. Memory does not record all brain workings, because of unconsciousness. If a computer could record the process of what it is doing, concomitantly and spontaneously, it would be self-aware.
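The criterion above — a system that records the process of what it is doing, concomitantly — can be sketched as a toy program that logs each of its own operations into a 'memory' as it performs them. This is only an illustration of the author's criterion, not a claim of actual self-awareness; the names `memory` and `remember` are invented for this sketch.

```python
memory = []  # the system's record of its own workings

def remember(func):
    """Wrap an operation so that each call is logged as it happens."""
    def wrapper(*args):
        result = func(*args)
        memory.append((func.__name__, args, result))  # record the step itself
        return result
    return wrapper

@remember
def add(a, b):
    return a + b

@remember
def mul(a, b):
    return a * b

add(2, 3)
mul(4, 5)
print(memory)  # the system can now 'recall' its own past operations
```

The point of the sketch is that the recording happens spontaneously with the doing, not as a separate step added afterwards.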
I think it is not appropriate to discuss consciousness without referring to unconsciousness, which is the profound base of consciousness. All brain memory is stored in unconsciousness.
Unconsciousness performs functions such as moderating or filtering inputs, organizing correlational memory or experiences, and running multiple tasks, whereas subconsciousness is a psychological term usually used to describe the drive behind human thinking and behaviors. I am not talking about subconsciousness, since this is not the right place to discuss it. Unconsciousness, unlike an empty mind, consists of ongoing brain activity until cerebral death.
The intelligent workings of human brains are a mixture of consciousness and unconsciousness. There is no intelligence without unconsciousness.
Talking about brain thinking can involve different layers, from superficial manifestations down to the state, movement and interplay of elementary particles. Whether we mean digital, analog, image, emotion, values, quantum, etc., I would say that if we refer to, or cross over, different functional levels in one discussion, we may often get confused.
An AI subject or entity built as neural nets must include both a 'mind' and a 'body'. The so-called 'mind' is the consciousness zone plus the unconsciousness organization, or programs. The consciousness zone deals with fuzzy problems such as the process of memory and programming functions. The unconsciousness programs deal with precise problems, including information filtration or moderation, multi-tasking and the storage of memory. And the so-called 'body' includes sense organs and behavior actuators.
The elements or members of an intelligent system must be interconnected via a nerve system. In other words, there can not be any interfaces among them for signal transmission or information communication. There are interfaces among the parts of a computer for information transmission, so from this point of view alone a computer can not be considered an AI subject.
If we interact with an artificial system, there must be an interface that is compatible with the senses between man and the man-made. If we make something self-aware, there should be an interface between its own awareness and its own man-madeness. What should that interface be like?
Furthermore, we have to know in advance how such a system will show consciousness when we design it, yet the consciousness is an epiphenomenon of what we design: consciousness shows up only after we have already made the system. We design and make things according to the existing rules and logic in our brains, so we can anticipate the functions and development of what we will make. We can not design a machine whose functions we do not understand and expect. Therefore we have to emulate brain functions, and this brings us the difficulty of the interface. So we turn to quantum theory, but we are not sure whether the workings of our brains are quantized.
Fuzziness and Order
Why a brain is self-organizing has not yet been explained convincingly. I am trying to use the concept of entropy to show it.
Entropy measures the level of order of a system. An intelligent behavior of a system should be defined as a procedure or course that tries to decrease the entropy of the system, or to keep the entropy from increasing. Strictly speaking, an elementary mechanism can make its environment more orderly by energy transfer, which is so-called energy consumption, but it can not make itself more orderly than its existing organization. A brain is different: it can organize itself more orderly. So a brain is not like a machine, as many AI people deem it to be.
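The use of entropy as a measure of order can be made concrete with Shannon's information entropy, used here only as a stand-in for the thermodynamic notion: a peaked, 'orderly' distribution over states has lower entropy than a uniform, 'chaotic' one. The example distributions are invented for illustration.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A 'chaotic' system: all four states equally likely.
chaotic = [0.25, 0.25, 0.25, 0.25]
# An 'orderly' system: one state strongly preferred.
orderly = [0.91, 0.03, 0.03, 0.03]

print(shannon_entropy(chaotic))  # 2.0 bits, maximal for four states
print(shannon_entropy(orderly))  # well under 1 bit
```

On this reading, a self-organizing system is one whose own distribution of states moves from the first kind toward the second.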
Entropy can qualify and quantify the level of organization of a system that has a great many members. With it, we can better understand, predict and figure out not only the interplay within a system, but also the vectors of its actions on, and its interplay of energy/mass flow/osmosis/exchange with, other systems or its environment.
A brain is just such a system, with a great many cells, or neurons. I think entropy is one of the good ways to help understand brain functions such as values, rules, etc. that operate and orient the behaviors of a living body.
The tools, machines and programs we have made can be used to deal with precise problems, but dealing with fuzzy problems requires brain intelligence. The advance of science and technology has led many of us to assume arrogantly that we can make machines (non-AL) to deal with fuzzy problems.
Fuzziness is the state between chaos and order. Theoretically, the fuzziness of a system can be regarded, in terms of mathematics and physics, as a situation in which some parts of the system are chaotic and the other parts are orderly. Fuzziness is a frequent experience of our lives: we often know what will happen as a high-level prediction, but we can not use rules to make a precise calculation and prediction. Fuzziness does not allow us to design the exact behaviors of an AI.
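One standard formal treatment of a state between the fully precise and the fully vague is a fuzzy membership function: unlike a crisp rule that answers only yes or no, it grades smoothly in a buffer zone. A minimal sketch, with thresholds invented purely for illustration:

```python
def tall_membership(height_cm):
    """Fuzzy degree to which a height counts as 'tall':
    0 below 160 cm, 1 above 190 cm, graded linearly in between.
    The thresholds are invented for this sketch."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0

print(tall_membership(150))  # 0.0 -- precisely 'not tall'
print(tall_membership(195))  # 1.0 -- precisely 'tall'
print(tall_membership(175))  # 0.5 -- the fuzzy zone in between
```

Note that this only models fuzziness from outside, with a precise rule; on the author's argument, that is exactly why it remains a rule and not intelligence.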
Up to now, only brains can, or try to, distinguish these parts by consciousness instead of by mathematical and physical analysis. This seems to be the mystery of brains.
The magic of brains resides in their capacity to deal with fuzzy problems. Brains do not depend on rules in unraveling fuzziness, nor do they think by way of mathematical and physical calculation. This capacity rests with consciousness, which makes fuzziness an easy job.
In my theory, only consciousness has the ability to choose rules, i.e. intelligence. AI is something man-made with consciousness. To be more exact, AI is AL with a mind. We can infer that digital simulation can not produce intelligence.
Knowledge is rules, including natural laws, disentangled by consciousness and embedded in brain unconsciousness. We know, with intention, only some of it, and only when it appears consciously.
Energy and mass flow in a system can change the pattern and level of orderliness of its organization and the state of its members. If the 'flow' is caused by the change of state of some members of the system and can cause other members to change their state, the net pattern of such flow is the software in the system. The net in brains is the synapses, which are the most critical part of neural nets. The net of passages and gates for the electrical current of signals is the software of computers.
When a system interacts with its environment by energy and mass transfer and exchange, the energy flow or stream at the interface can influence the state of its members or elements and the order of the system.
In a system of a certain order, or unevenness, a small flow can greatly influence the function or behaviors of the system. The more orderly the system is, the greater the influence the flow or stream has on it. We usually call this sort of flow signals, or information. The interface of a living being is its sense organs.
However, this sort of energy flow at the interface of a system at a very low level of order has little influence on the system. In other words, in this case there is little difference between power and signals in the transfer and exchange at the interface.
In a system of neural nets, unconsciousness is the state of order, or the state of unevenness, of the system, and it keeps up responses to the order of the environment.
Consciousness is the state of fuzziness of the system, or the process of responding to fuzzy signals at the interface so as to reach a new order.
Brain workings try to have the brain's elements respond in an orderly way to the information flow or stream.
When a part of the memory of unconsciousness comes into consciousness, the system exhibits attention. At this moment, a local part of the system turns from a state of order into a state of fuzziness. This adjusts the system to be sensitive enough to receive and process fuzzy information within the scope of the rules of unconsciousness. Attention is the re-processing and re-memorizing of a part of unconsciousness. It is the regurgitation and continuation of the self-organizing of a system, and the result of going from order to fuzziness, i.e. from unconsciousness to consciousness.
Attention is the key precondition of fuzzy discrimination. The question is why and how a system turns a part of itself from a state of order to a state of fuzziness.
As for this, I have said several times in my other articles that attention takes place when unconsciousness is unable to process the information received.
This explanation does not show how consciousness or thinking takes place. I believe thinking is the process of a system organizing itself to a higher order. Consciousness is the result of the memory of this process.
Consciousness is fuzzy itself. Consciousness can deal with the fuzzy parts of things, but it can not deal with its own fuzziness, or with consciousness itself. A difficult question: can consciousness be conscious of itself?
Digital or Analog
Some people think that brains work digitally because neurons 'fire or do not fire', like the 1s and 0s in computers. Do brains work in a digital way? No.
The evidence is simple: most people use a tool such as a calculator, an abacus or pen and paper to make even a simple calculation, let alone a complex one. And animals basically have no calculation ability.
If human thinking were digital, nobody would need a calculator. Idiot savants calculate quite well because they have deep 'programs', or an unconsciousness of calculation, which I think causes an abnormality of brain consciousness. I wish there were a sort of pill that let us calculate as well as 'idiot savants' without losing our normality. Computers are the greatest 'idiot savants', with full 'unconsciousness'.
Brain thinking uses a sort of 'images' or 'blocks' at a certain level, instead of the digital way the computer works. This tells us that the human brain is so bad a calculator that even a simple calculation such as '12345 divided by 98.76' can not be programmed in the brain, i.e. figured out unconsciously. The digital is far away from brain thinking, though some of the logic can be the same.
'Neurons either fire or do not' does not prove that human thinking is more digital than analog; instead, I think this is the making of correlations of 'images'. There is no difference between digital and analog at this level.
Calculation lies at a much more superficial level of unconsciousness than the emotions do in brain thinking. Without the help of any tool, a human brain can design and plan a long, deep and complicated piece of work, such as a story or a play of images and words, but it can hardly carry out a low-level calculation. The unconsciousness of brain workings includes a lot of short 'programs', but the programs of computers are much longer and more complex.
Almost none of our values can be programmed. If one tries, he will fall into a logical trap in the end, i.e. 'making a program to choose other programs without being made up of them'. In my theory, values are related to the entropy of the brain system.
Values can be both common and individual. Intelligent beings use instinct and values to make judgments and choices instead of using precise rules or programs; however, values can change, just as we sometimes change our attitudes or approaches to a situation.
Values form a hierarchy: basic, ethical, experiential, instant, etc. We take and give, gain and lose, benefit and pay, profit and risk, feel greedy and afraid, and so on. These make up our daily living in an ever-imperfect world. 'What hairstyle should I wear to the party?' depends on my values. A short-term or instant value judgment is one of the most difficult problems in AI research.
People try to use a quantitative approach, the concept of values, to describe man's judgments and choices, but this can not prove that brain thinking is digital.
Antinomy of Rules
There are absolutely no existing rules for choosing rules logically. If there were an independent rule for choosing rules, this rule would be combined or merged with the rules available to be chosen, forming one new rule. So we can conclude:
We can never make a program to choose other programs, so AI can not be fully programmed.
What we can do in our material world is to combine a rule with other rules to make a new rule or program. This allows us to make more and better programs to be chosen; however, many things in our lives can not be fully programmed. AI should be designed to deal with those things that have not been programmed yet, or that are unprogrammable, including 'choosing rules'.
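The merging argument above can be sketched in code (all names here are invented for illustration): a 'chooser' that selects among rules is, once written down, itself just one larger fixed rule, combined with the rules it selects from.

```python
# Sketch of the merging argument: a program that 'chooses' among rules
# is itself just one larger, fixed rule.

def rule_negate(x):
    return -x

def rule_double(x):
    return 2 * x

def chooser(x):
    """An apparent 'rule to choose rules' -- note it is still one fixed rule."""
    if x < 0:
        return rule_negate(x)
    return rule_double(x)

# `chooser` plus the rules it selects from collapse into a single
# composite rule: the same input always yields the same output.
print(chooser(-3))  # 3
print(chooser(4))   # 8
```

The selection condition (`x < 0`) is part of the program, so nothing here chooses rules in the author's sense; the whole is one rule, which is exactly the point of the antinomy.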
Anything without intelligence can never have the ability to choose rules; anything that can is a life, or an AL with intelligence. Only higher living beings have the ability to choose rules, using instinctive judgment or fuzzy values instead of a precise rule.
An automation system of non-life or non-AL can not have more than one rule in any case. What we can do for it is to give it a better program.
Such a system may change the data part of its rule, but this behavior is included in the rule, which allows the system to change the data within the function of the rule.
Humans often have to make choices in life. In general we do not have any precise rules for choosing rules. Humans and other higher living beings often use instinct or values to do it. An instinct or value system is neither a rule nor a program. Instinct and values are inner drives that orient thinking and behaviors.
Choosing rules needs intelligence, but executing rules does not. Thinking and reasoning are not an execution of rules; instead, they are attempts to choose rules, or to choose rules in order to make a new rule.
Choosing rules is something fuzzy, but the rules themselves are precise.
Finding rules, changing rules and making rules are in principle the same as choosing rules, because they all include the necessity of choosing rules.
A rule that is able to change itself is feasible, but it too falls into the antinomy of rules if it needs to find other rules in order to change itself. The function or manifestation of a rule can change: a computer virus can change its own function, but the change is included in the rule, or program.
Any rule is equivalent to its opposite rule; the difference is only that we describe it in two opposite ways. A rule that has a concrete relation to other rules forms, together with the rules involved, a new rule. In computer systems, the relation is the logic gates.
Rules and Laws
Definitions of the words used in my description:
RULES refer to a property of regularity made or formed in a system, and LAWS to a property of natural regularity.
Laws are the natural relationships between the causes and results of phenomena.
Rules can be both common and individual, but laws can not be individual.
A rule can include laws, but a law can not include a rule.
A rule can be changed, but a law never can. What we can do with a law is to control the conditions that the law needs in order to work.
Both rules and laws are concrete, precise and programmable.
A law can be translated, not changed, into a rule, together with other laws and rules if any, to form a new rule.
Any laws that have been combined into a rule are parts of that rule.
A law works whenever its conditions are met, no matter whether it has been planned for or not.
Some laws can accompany the rule in an automation system even if they are not, or not necessarily, included in the rule, since everything in the universe follows certain laws. These laws are not rules, although they may operate under certain conditions.
Any rule that departs from the laws is called an absurdity.
Suppose a non-life automation system can find laws by experiencing or learning. It would have to introduce each new law into its rule to form a new rule of its own, without choosing other rules. Thus we have problems:
How does the system know that its experience embodies a new law, or whether that law is useful? We must have given it a general rule beforehand to confine its purposes; however, that is just like programming a computer.
How does the system combine the new law with its existing rule to upgrade itself? The system must use its own rule to do it; otherwise it falls into the antinomy of rules, i.e. choosing rules.
Do any non-life systems have their own purposes to reach? What are their own purposes, and where do those purposes come from? If we humans assign the purposes to them, they become something like computers.
When a non-life system has found laws from experience and received rules from other systems, how can it judge and translate them without choosing other rules? Does it just insert them indiscriminately into its existing rule? How does it sort or filter all inputs to pick out useful information? Are the inputs precisely specified in the rule, as in a computer?
How does a non-life system learn laws and other rules without choosing other rules? Only if the system gains an ability that has not been programmed can we consider that it learns; otherwise it is a simulation of learning, or an 'act-as-if' of learning.
Is there a possibility that a non-life or non-AL system can form or procure the ability to choose rules by learning, even if it can learn only laws and other rules? If it could, there would be at least one law or rule that can help choose rules. Unfortunately, there are no such laws or rules that can tell anybody how to choose rules, or help choose rules. If there were a rule that could help choose rules, it could help itself choose rules.
If a non-life or non-AL system does not use rules to upgrade itself, what else does it use? Intelligence is a concept in human minds. Humans define and describe intelligence with their own rules, and with the rules they know and use. Humans can never describe 'the behaviors of intelligence' of a system they believe to be out of their order.
I would say that only if a system is not caught in the antinomy of rules can it have intelligence indeed. Such systems exist only in life, or in future artificial higher life, so that intelligence is a part, or a subsystem, of such systems.
Simulation
One of the important uses of the computer is simulation, and simulation technology is so perfected that the computer can simulate the launch of a rocket, a volcanic eruption, a nuclear explosion, etc., yet it is powerless to simulate human emotion and intelligence. Computer simulation is fully digital, which pertains to the rational category, but human emotion and intelligence pertain to the perceptual category. Perceptuality is the intrinsic property of highly developed living beings. Human desire, emotion, will and intelligence typically represent human life. Emotion, for example, works only when it is true, so it can not be fabricated, because a simulation of emotion is always false. Desire, emotion, will and intelligence can not exist apart from their subject. If the computer could realize these representations, we could regard it as a living being.
There are two kinds of simulation: model and digital. Obviously, material models can not simulate the spirit of the human brain. Although the process of digital simulation is fully unreal, it gives results that can be almost the same as under real conditions. Digital simulation can give the results only, not the real procedures. For example, when a computer simulates a high-speed car crash, the screen vividly displays the accident with crash sounds from the loudspeakers. This is 'cheating', since there is no real crash with sound inside the computer; it gives the results of the procedure after all. The representation of human emotion and intelligence is just the procedure or process, not the results, and this is the key reason why the computer can never simulate the human brain with digital technology.
A simulation is programmed by man in order to actuate the representation of its objects. Computer simulation technology can show the facticity of things, but it can not show the reality of any inner human characteristics. This does not mean that the computer is unable to simulate some human perceptual features, but that the simulation, to the human sense, is false, because it is programmed extrinsically, not from the human inside. This kind of simulated human figure does have commercial value; such figures can be used at Disney or in simple service settings, but in principle they are no different from toys that can move, cry, laugh and speak.
Now many scientists and engineers are vainly attempting to apply the digital technology of computation to simulate or actuate human emotion and intelligence. Time will relentlessly prove that they have gone down a wrong road.
Misconception of Intelligence
We humans often face two kinds of problems to solve. In the first case, the results and the process of solving the problem are fully logical and expectable from our previous experience, so we execute rules or programs automatically in response to certain conditions or stimuli; this I call precise. In the second case, things are not expectable, and we usually don't know whether there is a right program to solve the problem, or which of the existing rules to choose; this I call fuzzy. Obviously the second case needs intelligence. So let us see whether the first case needs intelligence or not.
The first case can be programmed by man, since man knows what responses to certain stimuli will occur under expected situations, or man is certain that the program will give the right results by logic. This is the way AI works; however, it gives us the misconception that AI has true intelligence, because we humans often act in the same way, or because AI shows the same function as an intelligent life does.
What I am arguing here is that choosing the right rules from among many, by the chooser's own values, needs true intelligence, and that false intelligence can not choose the right rules from among many without foreign prompting or guidance. Only highly developed living beings can do so when they face new, unexpected and ever-changing situations or conditions, which are common experiences in their lives. This is why we have not so far made a general program that can act as a court judge, a schoolteacher or a hospital doctor.
Modern digital computation can solve only rationalized problems, and its capacity in this category greatly exceeds that of human beings. But intelligence is used to solve perceptual, or 'consense', problems.
Intelligence is more or less a subjective concept. The important thing for AI or AL is how to make it in the right way, and how far we can go with the choices.
There is, however, a misuse of the concept of intelligence. Modern computer technology can make almost anything a 'look-like' or 'act-as-if' of another thing, including a show of 'learning' to cheat our sense organs and feelings.
The program's function also gives us the misconception that AI can perform the same function as an intelligent life does, but I would say that in the end the function is controlled by the makers of the AI. I am strongly unwilling to accept that the material world has its own desires and wills to pursue, no matter how vividly it seems to show them, let alone its own values. Intelligence is a PROCEDURE or COURSE with both ends, not a result only. Function, or virtual reality, lets us see only the cause and the result.
Here is an example of function. Suppose '12345 divided by
98.76' is a stimulus, cause or input, and '125' is the response,
result or output. How can we judge whether it comes from
intelligence or not? We may think it does, simply because we humans
need intelligence to figure it out. But what if it comes from a
calculator? Or there is even an on-screen show of the calculation
procedure on a blackboard, as a virtual reality?
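The point above can be sketched in a few lines of code. This is a minimal illustration, not from the original text: the same stimulus-response pair can be produced by entirely different mechanisms, so the input/output "function" alone tells us nothing about what produced it. The function names are invented for the example.

```python
# Two different mechanisms producing the identical stimulus-response pair.

def by_calculation(stimulus):
    """Compute the response arithmetically, as a calculator would."""
    dividend, divisor = stimulus
    return round(dividend / divisor)

def by_lookup(stimulus):
    """Return the response from a memorized table -- no calculation at all."""
    table = {(12345, 98.76): 125}
    return table[stimulus]

stimulus = (12345, 98.76)
# Both mechanisms yield the identical output, 125; an outside observer
# seeing only input and output cannot tell which mechanism was used.
assert by_calculation(stimulus) == by_lookup(stimulus) == 125
```

Observing only the cause and the result, as the text says, leaves the procedure between them invisible.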
The difference between a non-living automated system and an
intelligent system usually lies at the beginning moment of a
performance. If a performance is begun by the program when certain
conditions fall due or are specified from outside, we do not regard
it as intelligence, because it is fully programmed and unable to
choose rules. But if, in a certain case, a performance starts a
program to reach its own purpose without the exact conditions that
any of the programs needs in order to run, we regard it as
intelligence, because there is an inner drive 'to choose rules'.
The latter case seems unthinkable, but highly developed lives can
do it. The beginning is tied to 'choosing rules'. There is a huge
number of independent rules that we have made and can choose among,
but a non-living AI has only ONE rule to run, though it may be more
complicated, perhaps better, than any of the rules in a human
brain.
The word ARTIFICIAL has two meanings: man-made, and unreal or
false. So we now have three kinds of intelligence: life
intelligence, man-made true (real) intelligence, and man-made
untrue (unreal) intelligence. I doubt the virtual reality of
intelligence. It leads us to games, but not to intelligence.
Suppose an AI system has more than one rule. Then all the
rules should be independent of each other, and there must be a
function to choose these rules, change them, or create new rules by
learning. The key problem is how the system does it. A program? A
grey or correlational analysis? Quantum computers? Is there an
inner, not extrinsic, dynamic balancing that serves as values? We
can never realize fuzziness with precise methods. This is the
profundity of our world.
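A small sketch can make the difficulty concrete. The names here are invented for illustration: even when a system holds several independent "rules", a programmed chooser that selects among them is itself just one more fixed rule, so the whole assembly still runs as ONE deterministic program, which is the text's point about non-living AI.

```python
# Several independent "rules" the system may apply.
rules = {
    "add": lambda a, b: a + b,
    "multiply": lambda a, b: a * b,
}

def choose_rule(situation):
    """The 'chooser' -- notice that it is only another fixed mapping,
    written in advance by the maker, not an inner drive."""
    return rules["multiply"] if situation == "scaling" else rules["add"]

# The outcome was fully determined before the program ever ran.
rule = choose_rule("scaling")
assert rule(3, 4) == 12
```

However many layers of choosers we stack, the stack as a whole remains one precisely repeatable rule; nothing in it chooses by its own values.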
Nevertheless, we can make better and better computers and
robots to serve us, mankind. We remain the masters of the world
with our intelligence, unless ALs with super intelligence show
up.
What is time?
Everybody experiences time, but nobody can define
it.
In our experience, time is related to being, to cause and
result, and to the sequence of the movement of the universe.
Time is only an experience of life, not a reality of the
universe.
Intelligent beings depend, often unconsciously, on time in
their work. Consciousness and thinking depend on the essence of
time.
Prediction is a time jump ahead of existence, and memory
recall is a time replay of existence.
Time is the only 'thing' that we can experience yet do not
know how we sense.
How does a super AI experience time in its thinking? I do not
think a mechanical or electronic timer is the right solution,
though many of us have not thought about this seriously.
Quantum theory may offer a better explanation, but how to
apply it is another matter.
Time, Attention and Memory
Time is fair to everybody. It seems that the whole universe
refreshes itself at every elementary unit of time.
Time is the property of evenness of the universe, space is
the property of symmetry of the universe, and mass is the property
of movement of the universe. Time, space and matter frame what we
experience.
Time, space and mass can never be defined in science, since
they are the elementary frame of what we experience, the universe.
If we say time is a measurement, what is measurement? If we say
time is a flow, what is flow? Every definition must lie within the
frame, and we have no frame, either in our minds or in nature, that
includes time, space and mass.
Measurements let us know how to describe something, not what
it is.
When an AI memorizes an event, if it uses a timer to know
time, it has to memorize two things at once: the objective event
and the clock reading. It is absurd for a subject to hold two
unrelated attentions simultaneously, since unconscious workings are
not memorized.
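The objection above can be made concrete with a sketch of what timer-based event memory would look like; the class and field names are invented for illustration. The point is that every stored memory carries a second, unrelated datum, the clock reading, alongside the event itself.

```python
import time
from dataclasses import dataclass

@dataclass
class Memory:
    event: str      # the objective event that attention was on
    clock: float    # a second, unrelated datum: the timer's reading

def memorize(event):
    """Store the event together with an external clock reading --
    the 'two attentions at once' the text calls absurd for a subject."""
    return Memory(event=event, clock=time.time())

m = memorize("a door closes")
assert m.event == "a door closes"
assert isinstance(m.clock, float)
```

For a machine this pairing is trivial; the text's claim is that a conscious subject cannot attend to the event and the clock as two separate objects at the same moment.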
One attention can draw on different sense organs at the same
time, i.e. vision, hearing, smell, taste and touch can all be
included, but no single sense can be in two attentions at the same
time, and neither can any two. Our eyes or ears can concentrate on
only one place at a moment.
The object we pay attention to is an event, and its content or
elements must be related to what we note.
We change our attentions frequently.
The sense of time by no means comes into our minds through
unconscious memorization, because unconscious activities are never
memorized; we memorize only what we are conscious of. The stuff in
our memory is stored correlationally. The higher the correlation,
the less stuff we memorize about an event. This explains the
function of repeated memorization, i.e. to deepen the correlation.
Unconsciousness is a 100 percent correlation between what we sense
and our memory.
Time is the result of memory. Without memory, time would be
nonsense, or would not exist in the universe.
Prerequisite to AI
1. An AI or an artificial neural net must be a metasystem,
defined as a system with no subsystems except itself and no
information interface within
it.
2. AI follows a statistical order instead of mechanical
laws.
We understand the world by learning and discovering the
regularity of our experiences. There are two kinds of regularity:
mechanical laws (mechanics, electronics, fields, etc.) and
statistical order.
All man-made things are, in principle, designed and fabricated
according to mechanical laws (precisely repeatable). We have not
yet made anything whose principles and values follow statistical
order or uncertainty; the behavioural orientation of lives,
however, follows a statistical order. So it is absurd to regard a
life as a machine. Computer processing is typically mechanical, and
even its simulative running cannot make a single byte lose its
meaning or significance. Bytes can make the world more mechanical,
but never statistical. We have not yet made anything
intelligent.
AI does not depend on the complexity of a system alone;
rather, the key problem in making AI is the interface between
statistical uncertainty and mechanism.
These recent blog posts are a tidying-up of material I collected ten years ago while writing an AI textbook and training teachers on it; some of it was in Chinese, some in the original English. After so many years, the sources and original authors can no longer be traced; perhaps it was patched together from many places to begin with. I am publishing it now because some of the analysis is still thought-provoking today. AI has become everyday small talk at the dinner table and a selling point for merchants seeking profit, yet truly sober analysis and outlook remain rare, with little progress compared with ten years ago, because our understanding of human "intelligence" itself has made little progress. On one point, however, a consensus is growing: "digital" and "electronic" systems can only "simulate" the human brain; they are not true "intelligence", still less a replacement for the brain. In this sense, in today's world dominated by the "electronic computer", true artificial intelligence has yet to appear.