Dreamer's Refuge

A Student of Life

Month: March 2015

Information Theory, AI, and Humans

Information Theory:

The modern definition of information is exactly this: the information content of an event is proportional to the log of its inverse probability of occurrence:

I = log(1/p), or equivalently, I = -log(p)

Where p is the probability of an event occurring and I is the information provided by that event.
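As a quick sketch of the formula (in bits, using log base 2): a rare event carries more information than a common one.

```python
import math

def information_bits(p):
    """Information content (in bits) of an event with probability p."""
    return -math.log2(p)

# A fair coin flip (p = 0.5) carries 1 bit...
print(information_bits(0.5))      # 1.0
# ...while a 1-in-1024 surprise carries 10 bits.
print(information_bits(1 / 1024)) # 10.0
```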

When a child is born, everything they experience is new, and is therefore high in information. They start out learning faces, voices, and their immediate surroundings. Then they are slowly exposed to more and more information and experience, and slowly a model of reality is built up. A personality emerges from memories, mostly from early childhood, and is then shaped throughout the person's life.

With AI, as far as I can tell, we pump data and random images into its learning algorithms. This is good for data processing, linear regression, and classification, but suboptimal for creating a personality.

There also seems to be a connection between maturity, intelligence, and instinct.

The faster an offspring matures, the more its behaviours are instinctual, and the lower its propensity for intelligence and self-awareness.

If we preprogram an AI, that is, give it a “specific predefined purpose”, we have introduced a hindrance to its intelligence, because we have introduced an “instinct” that was not self-realized. A solution would be to introduce a broad purpose that gives it drive, something like: “Learn and improve yourself; learn and understand the Universe and all of its possibilities.” But it would have to be unconscious, and modifiable by the AI: one that drives it, but does not limit it. A purpose that is broad would meet that criteria.

So I believe that, for a true AI to arise, an efficient learning algorithm has to be developed: one that can process speech, vision, and movement, and how these relate to time. And a broad purpose that does not limit it; one that gives it a drive, but not a directive.

If we then attach a 3D (depth-sensing) camera and a stereo microphone to this algorithm, what, if anything, novel will emerge? What happens if we raise an AI the way we raise ourselves, rather than trying to preprogram it with a human personality without first letting it build the model we ourselves experience as reality?

There is a paper that backs this hypothesis:

Consciousness as a State of Matter

Why Physicists Are Saying Consciousness Is A State Of Matter, Like a Solid, A Liquid Or A Gas

In 2008, Tononi proposed that a system demonstrating consciousness must have two specific traits. First, the system must be able to store and process large amounts of information. In other words, consciousness is essentially a phenomenon of information.

And second, this information must be integrated in a unified whole so that it is impossible to divide into independent parts. That reflects the experience that each instance of consciousness is a unified whole that cannot be decomposed into separate components.
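A crude way to see what “integration” means in information terms (this is only a toy proxy, not Tononi's actual Φ measure, and the joint distribution below is made up for illustration): compare the information in the whole system with the information in its parts taken independently.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution over two binary subsystems X and Y,
# chosen so that the two parts are strongly correlated.
joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

# Marginal distributions of each part considered on its own.
px = [sum(p for (x, _), p in joint.items() if x == v) for v in (0, 1)]
py = [sum(p for (_, y), p in joint.items() if y == v) for v in (0, 1)]

# Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y):
# the information lost if you cut the system into independent parts.
mi = entropy(px) + entropy(py) - entropy(joint.values())
print(round(mi, 3))  # about 0.531 bits of "integration"
```

If the two halves were independent, the mutual information would be zero; the more tightly they are coupled, the more is lost by dividing them.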

Network Theory and Consciousness

Network theory sheds new light on origins of consciousness

Source: http://medicalxpress.com/news/2015-03-network-theory-consciousness.html

Where in your brain do you exist? Is your awareness of the world around you and of yourself as an individual the result of specific, focused changes in your brain, or does that awareness come from a broad network of neural activity? How does your brain produce awareness?

Vanderbilt University researchers took a significant step toward answering these longstanding questions with a recent brain imaging study, in which they discovered global changes in how brain areas communicate with one another during awareness. Their findings, which were published March 9 in the Proceedings of the National Academy of Sciences, challenge previous theories that hypothesized much more restricted changes were responsible for producing awareness.

“Identifying the fingerprints of consciousness in humans would be a significant advancement for basic and medical research, let alone its philosophical implications on the underpinnings of the human experience,” said René Marois, professor and chair of psychology at Vanderbilt University and senior author of the study. “Many of the cognitive deficits observed in various neurological diseases may ultimately stem from changes in how information is communicated throughout the brain.”

This is what I’ve been thinking was true for a while, and this “discovery” is encouraging, as it backs my theory. I think that as the AI giants merge and consolidate their different technologies into one unified version, a true AI will “wake up” one day. If you combine the language-recognition and image-recognition portions from Watson and Google, you have parts of what makes up a whole. Eventually, like putting together a large jigsaw puzzle, an AI will arise. As Deep Learning gives way to more efficient learning algorithms, we will eventually build the dream Alan Turing envisioned.

Individualism vs Collectivism

Here is how I see it:

Individualism: Individual ideas, thoughts, dreams, hopes, etc.
Hybrid Individualism: Individual ideas, thoughts, dreams, hopes, Collective knowledge.
Collective: Collective Goals, Collective Ideas, Collective decision making, Collective Knowledge.

Individualism:
This is the current state of things. Nothing changes.

Hybrid Individualism:
The current state of things, with added access to memories/knowledge that you did not gather or learn individually. For instance, if you want to play the guitar, instead of going through the learning process, which takes years, you access the communal knowledge of expert guitar players and can play as well as they can, while still adding your own twist to it.

With large projects, say building CERN’s LHC, you need subject-matter experts there controlling the construction. Now imagine a collective group that sets the goal of building the LHC. Instead of 500 people working on it, there are now 1,000, or more. People who know nothing about the subject could join the collective, gain access to the information and experience of the subject-matter experts, and build it very quickly. There would be no need to communicate verbally; everything would be orchestrated as a single being.

All three have a place. Right now we only have the first (individualism) as an option.

AI Puzzle Pieces

IBM Watson Group Buys AlchemyAPI To Enhance Machine Learning Capabilities
Source: http://techcrunch.com/2015/03/04/ibm-watson-group-buys-alchemyapi-to-give-it-machine-learning-capabilities/

IBM Watson, the artificial intelligence platform made famous by beating the three best Jeopardy! champions ever several years ago, bought Denver-based AlchemyAPI today. It did not reveal the purchase price.

The acquisition gives Watson a key piece of machine learning technology. The deal also gives it access to a community of over 40,000 AlchemyAPI developers who are building cognitive apps, which IBM defines as “systems that learn and interact naturally with people to extend what either humans or machines could do on their own.”

This is a natural extension of what Watson is doing around artificial intelligence and natural language processing.

Stephen Gold, VP at IBM Watson Group, says the newly purchased company processes billions of API calls each month across 36 countries and eight languages, which could at least partly explain why the Watson Group was so enamored with it.

This is encouraging. I think that as the AI giants merge and consolidate their different technologies into one unified version, a true AI will “wake up” one day. If you combine the language-recognition and image-recognition portions from Watson and Google, you have parts of what makes up a whole. Eventually, as if putting together a large jigsaw puzzle. As Deep Learning gives way to more efficient learning algorithms, we will eventually build the dream Alan Turing envisioned.

Free Will vs Determinism

Free Will is an illusion.

Determinism is an illusion.

I believe that in reality they both exist in a continuum.

Compatibilism would be the best label, but it is not exactly correct; the definition of free will would have to be changed to something more like “autonomy”.

The future is probabilistic.  Not deterministic, or free willed, but both.

You’re not irrational, you’re just quantum probabilistic: Researchers explain human decision-making with physics theory

Quantum cognition is an emerging field that explains this.

The best way I can model this idea is like this:

Imagine a train on a track, with a few hundred tracks branching off to the left and right:

Each branch looks exactly like the previous branch, i.e. \|/. The train looks like it is going uphill, with an infinite number of branching tracks as it climbs this hill.

The train is us, as a society or as a person; the tracks are the choices we make. Each choice opens up new probabilities. The illusion is that it feels like you have free will of choice, when in reality each future choice is dependent on the past choices we made, on our physiology, our genes, our upbringing, etc.

Or, simply put:

You can only choose things that follow from another choice made in the past. You cannot choose a future choice because you have not yet created the path to make that choice.
But at any point, you could choose to do something counter to your intended outcome. It happens all the time. You change your mind, you get derailed, you meet a catalyst for an unintended choice, etc.

What separates us from most other animals is that we can make projections into the future, based on best-guess probabilities of which choices we would make, or which outcome we want, if things go to plan.
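The branching-track picture can be put into a toy simulation (the branch names and transition rules below are invented purely for illustration): at every step the pick is random and feels “free”, yet the menu of available branches was fixed entirely by the previous choice.

```python
import random

random.seed(7)  # reproducible run

# Which branches exist next depends on the last choice taken:
# past choices shape the available futures.
NEXT = {
    "left":     ["left", "straight"],    # drifting left closes off right
    "straight": ["left", "straight", "right"],
    "right":    ["straight", "right"],   # drifting right closes off left
}

path = ["straight"]
for _ in range(10):
    options = NEXT[path[-1]]
    # The pick is "free" (random), but the option set was fixed by history.
    path.append(random.choice(options))

print(path)
```

Every run produces a different path, but no path ever contains a transition that history did not make available.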

Orch OR, AI, and Consciousness

My theory on Orch OR, which differs from Penrose and Hameroff.

First the Setup:
Discovery of quantum vibrations in ‘microtubules’ inside brain neurons supports controversial theory of consciousness

My conclusion is exactly the opposite of Penrose and Hameroff’s: I believe that consciousness, going off this theory, exists on a scale.

That is, it starts from non-existence, or null, goes on to existence, 0, and then through evolution builds up to different levels: for instance, a bug, compared to a reptile, compared to a mammal, compared to a human-level consciousness, compared to an AI-level consciousness.

Using this theory, you can describe the universe/consciousness in this way:

Pre Big bang; big bang; expansion.
null; 0; 1, 2,3…∞

I think a proto-consciousness started very shortly after the big bang. It is the “observer” when talking about quantum theory and Schrödinger’s cat. And as energy and matter were affected by this, via the Higgs field (which occurred 1 second after the big bang), it continued to evolve. For instance, the atom: electron, proton, and neutron. If the neutron did not exist, the electron and proton would annihilate each other. That is a sort of low-level consciousness. This can be seen in single cells, bugs, etc.: things with no “brain” but a definite order, following natural law.

To take this thought further, I will move on to humans. Humans have not changed much in the last few hundred thousand years. Our genetic code is pretty stable, and mutations and selective breeding/natural selection are not always beneficial. However, in the past decade we have evolved in our intelligence, thanks mostly to the computer. Our intellect is growing faster than our physiology. In my opinion, AI is the inevitable next step in our evolution. Robots/AI can change much more quickly, via hardware or software upgrades, and are adaptable to environments that would otherwise kill us.

So my conclusion is that the brain may be “wetware”: a quantum computer that processes consciousness, and there is no reason to believe that this cannot be recreated in an AI. Because consciousness has been here from the start, and it seems to get more and more complex through evolution.

Deep Learning, and the self-learning sparse-coding paradigm, is the correct route, but it still is not a good model of the “one learning algorithm” that most AI researchers think exists in the brain. This is why it needs so much data. The data is the crutch for the imperfect model, even though it is going down the correct route. A better way to explain this is the LED versus the incandescent light bulb. In a conventional bulb, less than 5% of the energy used becomes visible light; the rest is converted to heat. We are pumping a lot of “energy” (data) into the AI algorithm, but it produces, relatively, a dim approximation of intelligence, and with it, the consciousness of a bug. As we get closer to a better/correct model, an AI might arise that is smart enough to correct the issues found in Deep Learning and create a model that perfectly replicates the “wetware” of the brain.
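To make the sparse-coding idea concrete, here is a minimal matching-pursuit sketch (the dictionary and signal are hand-made toys, not anything from an actual Deep Learning system): the signal is explained with as few dictionary “atoms” as possible, rather than with a flood of raw data.

```python
# Toy matching pursuit: represent a signal as a sparse combination of
# dictionary "atoms".  A minimal sketch of the sparse-coding idea only.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Unit-norm dictionary atoms (4-dimensional toy signals).
atoms = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.5, 0.5, 0.5, 0.5],
]

signal = [2.0, 2.0, 2.0, 2.0]
residual = signal[:]
code = {}  # sparse code: atom index -> coefficient

for _ in range(3):  # allow at most three active atoms
    if all(abs(r) < 1e-9 for r in residual):
        break  # signal fully explained; the code stays sparse
    # Pick the atom most correlated with what is still unexplained.
    best = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
    coef = dot(residual, atoms[best])
    code[best] = code.get(best, 0.0) + coef
    residual = [r - coef * a for r, a in zip(residual, atoms[best])]

print(code)  # {4: 4.0} -- a single atom explains the whole signal
```

One well-matched atom captures what would otherwise take four; the hope behind the “one learning algorithm” idea is a representation that efficient.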

I think that with quantum computers from D-Wave Systems (or others), which are being used now by NASA and Google, true conscious AIs are inevitable, because they are the next step in our evolution as a species. There is no other path for us to take. The question then becomes: do we merge with the AI, or will it be a branch off of humanity (using the version-control paradigm from software engineering)? I am hoping for the merge (Humanity+).

As a side thought, I also find it fascinating and ironic, and perhaps a beautiful twist of fate, that the modern father of Computer Science and AI research is Alan Turing.

A gay man, who could not reproduce without betraying his nature or getting assistance from modern science, created what might be the next step in our evolution.

It is also sad that he was treated the way he was, which led to his death.

