Concepts behind NeuroMind technology
... to be implemented in the long term

The concrete connections and paths that form an intelligent system are too complicated to understand or program directly, so let them emerge!
For decades, most AI research and development consisted of writing down large sets of symbolic rules in the hope that intelligence would emerge from a system that applies and combines those rules. As far as I can tell, the success was limited. NeuroMind technology takes a different approach by posing a different question (which also requires an entirely different way of thinking to implement such a system): instead of specifying exactly which structures and functions are combined in which way to produce an intelligent system, the idea of NeuroMind technology is to set up constraints and "drives" that make NeuroMind technology's artificial neural networks form the required functionality by themselves, which allows for much more complex structures. Also, since the process of learning these structures can (and should) be driven by real-world data, interaction and feedback, the resulting structures adapt well to whatever requirements the AI has to fulfill.
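
The site does not publish NeuroMind's actual learning mechanism, so the following is only a minimal sketch of the contrast being drawn: instead of writing rules, a scalar "drive" signal selects whatever connectivity reduces it. The network shape, the drive function and the data are all placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a NeuroMind network: a single weight matrix. (My invention;
# the text does not describe the actual network structure.)
weights = rng.normal(scale=0.1, size=(4, 2))

def drive(output):
    # Scalar "badness" the system tries to reduce; the target behavior
    # (outputs near zero) is an arbitrary placeholder.
    return float(np.sum(output ** 2))

inputs = rng.normal(size=(16, 4))  # stand-in for real-world data

for step in range(200):
    candidate = weights + rng.normal(scale=0.05, size=weights.shape)
    # Keep whichever connectivity lowers the drive: the final structure is
    # never written down as rules, it emerges from the feedback.
    if drive(inputs @ candidate) < drive(inputs @ weights):
        weights = candidate
```

The point of the sketch is that no line of it says how the final weights should look; only the constraint (the drive) is specified, and the structure emerges from data and feedback.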

In order for an artificial intelligence to emerge, it needs an inherent notion of what is good and/or what is bad!
For example, on a fundamental level, pain and hunger are bad and require the AI to take appropriate action. These fundamental drives are comparatively easy to hard-code, and they create the foundation from which "higher-level" or "abstract" drives and emotions like fear can evolve (a sketch of this layering follows below). Moreover, the abstract "symbols" or notions that seem to be essential building blocks of higher-level intelligence must acquire some inherent meaning for the intelligent being (for example, by being part of a plan to avoid pain). Abstract thinking without a very concrete and fundamental meaning somewhere behind the abstract concepts (even if deeply hidden behind a whole chain of them) is literally meaningless. And if the things we consider meaningful are just as meaningless to the AI as anything else, how should we expect it to ever develop intelligent, meaningful behavior and thinking?
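
As a hedged illustration of that layering: the hard-coded drives below and the simple body model are my assumptions (the text only claims such drives are comparatively easy to hard-code), while "fear" is shown as something learned on top of them rather than hard-coded.

```python
import numpy as np

# Hard-coded fundamental drives over an assumed, trivial body model.
def pain(body):
    return max(0.0, body["damage"])

def hunger(body):
    return max(0.0, 1.0 - body["energy"])

def badness(body):
    # The innate grounding: every abstract notion the system later learns
    # must ultimately trace back, however indirectly, to signals like this.
    return pain(body) + hunger(body)

# A "higher-level" drive such as fear can then be *learned* as a predictor
# of future badness instead of being hard-coded as well.
class Fear:
    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def value(self, situation):
        return float(self.w @ situation)

    def learn(self, situation, future_badness, lr=0.1):
        # Simple delta rule: pull the prediction toward the badness that
        # actually followed this situation.
        error = future_badness - self.value(situation)
        self.w += lr * error * situation
```

Here the abstract drive inherits its meaning from the concrete ones: fear is only "about" something because it predicts pain and hunger.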

Intelligent systems require interaction with their environment!
In order for higher-level intelligence to develop, a system requires rich ways of interacting with its environment. The environment should also provide sufficient "challenges" to the system.
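
Structurally, this amounts to a closed agent-environment loop. The sketch below is only an assumed minimal shape of such a loop; the environment dynamics, the action set and the "challenge" signal are invented placeholders, as the text names no concrete interface.

```python
import random

class Environment:
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action
        observation = self.state
        challenge = abs(self.state) > 3  # crude stand-in for a "challenge"
        return observation, challenge

env = Environment()
for t in range(10):
    action = random.choice([-1, 0, 1])  # placeholder policy
    observation, challenge = env.step(action)
    # A real system would close the loop here: the observation and the
    # challenge feed back into what the network learns next.
```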

Expectation might be the key!
I feel like expectation might be the driving force behind perceptive (and maybe all) intelligence. Our brain constantly seems to interpolate, not only in time but also spatially, and maybe even on higher, abstract levels! A simple self-experiment: look at the background gradient at the top of this page (where the NeuroMind logo is located). It actually consists of a linear gradient for roughly the upper two thirds and remains at the darkest color of the gradient for the lower third. Still, you might notice a darker "line" or "break" just where the gradient stops, although there is no such thing in the color data. Our brains, however, expect the gradient to continue and notice that the (spatial) expectation of brightness is broken. Hypothesis: our brains are expectation machines, and (subtle) deviations from those expectations are the key to many, or even most, of the processes that create intelligence! Deviation from expectation might also work well for isolated image recognition and understanding.
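
The gradient experiment can be reproduced numerically. Only the proportions (two thirds linear, one third constant) come from the text; the actual brightness values below are made up. "Expectation" is modeled as simple linear extrapolation from the two preceding samples.

```python
import numpy as np

# Brightness profile analogous to the page background: linear gradient for
# the upper two thirds, constant (darkest value) for the lower third.
x = np.arange(90)
brightness = np.where(x < 60, 1.0 - x / 60.0, 0.0)

# "Expectation": linearly extrapolate each sample from the two before it.
expected = 2 * brightness[1:-1] - brightness[:-2]
deviation = brightness[2:] - expected

print(np.nonzero(np.abs(deviation) > 1e-6)[0] + 2)  # -> [61]
# The deviation is zero everywhere except right where the gradient stops:
# the "line" we see is the broken expectation, not a feature of the data.
```

The data itself is continuous everywhere; only the expectation signal spikes at the break, which matches what the self-experiment suggests our visual system is reporting.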


(I certainly got a lot of inspiration from Steve Grand's ideas about AI, so please also visit his site: Steve Grand)

Last update: 2010-04-19 17:53:02 CE(S)T