The modern definition of information is exactly this: the information content of an event is proportional to the log of its inverse probability of occurrence:
I = log(1/p), or equivalently, I = -log(p)
Where p is the probability of an event occurring and I is the information provided by that event.
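The formula can be sketched in a few lines of Python. Here I use log base 2, which is the conventional choice that measures information in bits (the formula above leaves the base unspecified, since any base differs only by a constant factor):

```python
import math

def information_content(p: float) -> float:
    """Shannon self-information in bits: I = -log2(p)."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

# A fair coin flip (p = 0.5) carries exactly 1 bit of information.
print(information_content(0.5))       # 1.0

# A rare event (p = 1/1024) carries far more: 10 bits.
print(information_content(1 / 1024))  # 10.0
```

Note how the rarer event carries more information, which is exactly the intuition the next paragraph builds on: novel, improbable experiences are the information-rich ones.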
When a child is born, everything they experience is new, and is therefore high in information. They start out learning faces, voices, and their immediate surroundings. Then, slowly, they are exposed to more and more information and experience, and a model of reality is built up. A personality emerges from memories, mostly from early childhood, and is then shaped throughout the person's life.
With AI, as far as I can tell, we pump data and random images into its learning algorithms. This is good for data processing, linear regression, and classification, but suboptimal for creating a personality.
There also seems to be a connection between maturity, intelligence, and instinct.
The faster an offspring matures, the more its behaviours are instinctual, and the lower its propensity for intelligence and self-awareness.
If we preprogram an AI, that is, give it a "specific predefined purpose", we have introduced a hindrance to its intelligence, because we have introduced an "instinct" that was not self-realized. A solution would be to introduce a broad purpose that gives it drive. Something like "Learn and improve yourself; learn and understand the Universe, and all of its possibilities." But it would have to be unconscious, and modifiable by the AI: one that drives it, but does not limit it. Purposes that are broad would meet that criterion.
So I believe that, for a true AI to arise, an efficient learning algorithm has to be developed: one that can process speech, vision, and movement, and how these relate to time. And a broad purpose that does not limit it, one that gives it a drive, but not a directive.
If we then attach a 3D (depth-sensing) camera and a stereo microphone to this algorithm, what, if anything, novel will emerge? What would we get if we raised an AI the way we raise ourselves, versus trying to preprogram an AI with human personalities without first creating the model we ourselves experience as reality?
There is a paper that supports this hypothesis:
In 2008, Tononi proposed that a system demonstrating consciousness must have two specific traits. First, the system must be able to store and process large amounts of information. In other words, consciousness is essentially a phenomenon of information.
And second, this information must be integrated into a unified whole, so that it is impossible to divide it into independent parts. That reflects the experience that each instance of consciousness is a unified whole that cannot be decomposed into separate components.