Project page of:
The RoboSense Framework
and
BIOLOGICALLY INSPIRED COMPUTING: The emergence of intelligence through evolving Artificial Neural Networks

Download my BSc Thesis in [PDF] or view it in [HTML]
Hosted on a BudgetDedicated VPS

Table of Contents

Introduction

I'm currently writing a thesis, "The emergence of intelligence through evolving Artificial Neural Networks". What does that mean? I'm studying the way nature learns, on two time-scales: the first is inter-generational (between lifetimes) evolution, the second is intra-lifetime learning behaviour. Higher intelligent behaviour can be attributed largely to the evolution of neural networks. Both are natural phenomena with an artificial counterpart: Evolutionary Programming / Algorithms (EP and EA) and Artificial Neural Networks. The thesis provides insights into the similarities and differences between the natural phenomena and their artificial counterparts. This is the project page of both my thesis and RoboSense. The project is done by me, Erik de Bruijn, and started out in conjunction with my Bachelor's thesis in Information Management at the University of Tilburg.
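To make the two time-scales concrete, here is a minimal sketch of the inter-generational one: a (1+1) evolutionary algorithm treats the weights of a tiny feed-forward network as a genome and evolves it on the XOR task. The network size, mutation scheme and task are my own illustrative choices, not the setup used in the thesis; intra-lifetime learning (e.g. backpropagation) would instead adjust the same weights within one "lifetime".

```cpp
#include <array>
#include <cmath>
#include <random>

// A tiny fixed-topology 2-2-1 feed-forward network whose nine weights
// (including biases) form the genome.
struct Net {
    std::array<double, 9> w{};
    double run(double a, double b) const {
        double h1 = std::tanh(w[0]*a + w[1]*b + w[2]);
        double h2 = std::tanh(w[3]*a + w[4]*b + w[5]);
        return std::tanh(w[6]*h1 + w[7]*h2 + w[8]);
    }
};

// Mean squared error on XOR, with targets -1/+1 to suit tanh output.
double xor_loss(const Net& n) {
    const double in[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    const double out[4]   = {-1, 1, 1, -1};
    double s = 0;
    for (int i = 0; i < 4; ++i) {
        double d = n.run(in[i][0], in[i][1]) - out[i];
        s += d * d;
    }
    return s / 4;
}

Net random_net(std::mt19937& rng) {
    std::uniform_real_distribution<double> u(-1, 1);
    Net n;
    for (double& x : n.w) x = u(rng);
    return n;
}

// (1+1) evolution: mutate the genome with Gaussian noise and keep the
// child only if it is no worse than the parent.
Net evolve(unsigned seed, int generations) {
    std::mt19937 rng(seed);
    std::normal_distribution<double> mut(0.0, 0.1);
    Net cur = random_net(rng);
    double curLoss = xor_loss(cur);
    for (int g = 0; g < generations; ++g) {
        Net child = cur;
        for (double& x : child.w) x += mut(rng);
        double l = xor_loss(child);
        if (l <= curLoss) { cur = child; curLoss = l; }
    }
    return cur;
}
```

Because the selection step only ever accepts improvements, the loss after evolution can never be worse than that of the seed network.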

Based on this work, my primary thesis supervisor wrote this
letter of recommendation.

Read this doc on Scribd: Biologically Inspired Computation: Emergence of Intelligence through evolving Artificial Neural Networks

RoboSense

"There is nothing in the intellect that was not before in the senses."
Aristotle [Quoted in Copleston, Frederick, A History of Philosophy, vol. 2, Doubleday, New York, 1993, 038546844X, page 339]

"Nothing is in the understanding, which is not first in the senses."
John Locke

RoboSense is a framework I've been programming that aids me in my experiments. It is designed to be generically applicable to different physical robotics designs, but currently it only supports interfacing with the Lego NXT Intelligent Brick, and only over Bluetooth (not USB).

 o Robust + rich sensoric experience (5% done)
     - through integration of several sensoric feeds
     - auto-calibrating senses with each other
 o 3D Visualization
     - see the robot's world (5% done)
 o Automatic construction of a dynamic model of the robot + its environment (0% done)
 o Robot controls (wireless)
     - manual (20% done)
     - autonomous (visual goal based)

Required hardware:
- A Lego NXT set (about 300 euro)
- Digital video camera/webcam (9 - 1000 euro)

Strict dependencies:
- ARToolkit
- A working Bluetooth stack
- OpenCV: Intel's Open Computer Vision library

Optional dependencies:
- Firewire and DV libraries

Installation on Ubuntu Feisty Fawn / Debian Etch:

# apt-get install libbluetooth2-dev
$ wget nxtlibc[version].tar.gz
$ tar xzf nxtlibc*
$ cd nxtlib*
$ ./configure
$ make
$ sudo make install
OpenCV (see also http://opencvlibrary.sourceforge.net/CompileOpenCVUsingLinux )
 $ sudo apt-get install libtiff4-dev
 $ wget opencv[??]
 $ tar xzf opencv*
 $ cd opencv*
 $ ./configure
 $ make
 $ sudo make install
Bazar (depends on OpenCV):
 $ wget http://cvlab.epfl.ch/software/bazar/bazar-1.3.1.tar.gz
 $ tar xzf bazar*
 $ cd bazar*
 $ ./configure
 $ make
 $ LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib" ./interactive
$ cd ~/ARToolkit/bin/
$ LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib" ./RoboSense 

"SFERES is a framework that gathers an evolution framework with a simulation framework. It is dedicated to experiments involving the design of artificial agents through artificial evolutionary algorithms, like Genetic Algorithms or Evolution Strategies, for instance."
  -- from: http://sferes.lip6.fr/external/about.php
These dependencies must be resolved (this worked for me on Ubuntu Feisty):
$ sudo apt-get install libbz2-dev byacc flex
$ tar xzf sferes-5.13.tar.gz
$ cd sferes*
$ ./configure
$ make
$ sudo make install

To get dynamic loading of shared libraries working for 'Player', this is required:
$ sudo apt-get install libtool libltdl3-dev

TODO: get Player and ARToolkitPlus to work together.

First we need to install Player:
$ wget http://ovh.dl.sourceforge.net/sourceforge/playerstage/player-2.0.4.tar.bz2
$ tar xjf player-*
$ cd player-*
$ ./configure
(check out options like --enable-sphinx2 to enable speech recognition !)
$ make
$ sudo make install
$ svn co https://playerstage.svn.sourceforge.net/svnroot/playerstage playerstage
$ sudo apt-get install libogre-dev libogre6 scons
$ cd playerstage
$ scons install

We need a physics engine for the 3D simulator Gazebo:
$ cd ode-0.8
$ ./configure --enable-double-precision --enable-release
$ make
$ sudo make install

Installing Gazebo:
$ ./configure
$ make
$ sudo make install

It may not be obvious (it wasn't to me, at least) that you have to specify, for example, "-t 10" when running Gazebo, because otherwise it quits immediately.

Installing the Player module for sferes:
$ tar xzf sferesModulePlayerStage-1.0.tar.gz

Experiment 1: Autonomous discovery of physiology through tracker guided visual feedback

This is an experiment in which I'm trying to teach an artificial neural network about a virtual organism's physiology. The learning process is guided by visual feedback: specifically, the 3D coordinates of a marker mounted on the robot. I'm still working on getting the visual feedback in a usable form. The teaching process will be based on an evolutionary algorithm. The sensoric feed will be processed by several neural networks. The eventual sensorimotor coordination ability will be the basis for higher-order movement and coordination tasks. This bootstrapping process is rooted in the theory of ontogenesis (the biological development of an organism). Below is a video of an example robot, a combination of ARToolkit, OpenCV (Intel's Open Computer Vision) and nxtlibc, created in C++.

Direct link at youtube

Eventually I want to use different tracking mechanisms, such as those in OpenCV or Bazar. This would allow me to drop special-purpose markers and use features from the images of the robot instead.
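As a sketch of how the tracker's 3D coordinates could drive the evolutionary algorithm, the fitness of a candidate controller might be scored by how close the robot-mounted marker ends up to a goal position. The goal position and the exact scoring rule below are my own assumptions, not part of RoboSense:

```cpp
#include <cmath>

// 3D marker position as reported by the tracker (e.g. ARToolkit).
struct Vec3 { double x, y, z; };

// Hypothetical fitness: closer to the goal scores higher, with a
// maximal fitness of 1.0 when the marker sits exactly on the goal.
double fitness(const Vec3& marker, const Vec3& goal) {
    double dx = marker.x - goal.x;
    double dy = marker.y - goal.y;
    double dz = marker.z - goal.z;
    return 1.0 / (1.0 + std::sqrt(dx*dx + dy*dy + dz*dz));
}
```

A smooth, bounded score like this gives the evolutionary algorithm a gradient to climb even when no candidate reaches the goal yet.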

Experiment 2: Evolution of lego structures

I'm thinking of an experiment that generates Lego creatures through a genetic algorithm. Essentially, the code the algorithm evolves is the DNA for the eventual Lego construction. Simulating and scoring for interesting behaviour will allow it to select good DNA over useless DNA and promote good behaviour. I'm not sure yet how to codify this information into feasible structures. This video shows a very primitive way of generating structures from object IDs + coordinates.
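One minimal way to keep the "object IDs + coordinates" encoding feasible is to drop any gene that targets an already occupied grid cell, so only buildable placements survive decoding. This is an illustrative sketch under that assumption, not the actual encoding:

```cpp
#include <set>
#include <tuple>
#include <vector>

// A gene: a brick type plus a grid position, matching the primitive
// "object IDs + coordinates" encoding.
struct BrickGene { int pieceId; int x, y, z; };

// Decode DNA into a buildable structure: a gene that targets an already
// occupied cell is silently dropped, so infeasible placements simply
// never appear in the phenotype.
std::vector<BrickGene> decode(const std::vector<BrickGene>& dna) {
    std::set<std::tuple<int,int,int>> occupied;
    std::vector<BrickGene> built;
    for (const BrickGene& g : dna) {
        auto cell = std::make_tuple(g.x, g.y, g.z);
        if (occupied.count(cell)) continue;   // infeasible: skip
        occupied.insert(cell);
        built.push_back(g);
    }
    return built;
}
```

Because dropped genes stay in the DNA, they can still be inherited and may become expressible in a child whose earlier genes differ.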

Experiment 3: Mirror-Neurons & Imitation

Doing something and seeing it done generate a similar neural experience. This can be explained by mirror neurons. Mirror neurons are thought to play an important role in learning (imitation, gaining experience) and empathy. Intelligence can perhaps be acquired without these neurons functioning properly: people with autism are often very intelligent but less developed socially [Ramachandran].

EvoMorph


Gallery

IMG_1374.JPG
http://www.youtube.com/watch?v=_GK95_hDI68
IMG_1355.JPG
IMG_1356.JPG
IMG_1359.JPG
IMG_1360.JPG
IMG_1361.JPG
IMG_1362.JPG
IMG_1363.JPG
IMG_1364.JPG
IMG_1365.JPG
IMG_1366.JPG
IMG_1372.JPG
IMG_1373.JPG
Misc videos:
- CAD Rendered Lego Tux
- ANN trained AI Fighter (by someone else)
- avida-base: auto-adaptive genetic system for Artificial Life research

From Rob:

For the GA, this is how I would do it:

When a block is placed, spots open up and others close depending on placement. Each connection is different (tops must connect to bottoms, wheels on axles, etc.). So when the 12x4x1 black piece is placed in the beginning, 48 tops and 48 bottoms open up for connections. Those spots can be placed in a list. Each piece has a number associated with the spot on itself, along with a number associated with the spot it connects with in the list, and yet a third value determining direction. So, by your example, a white axle is placed based on its first top to the black piece's 5th bottom in the 2nd direction (all directions would be normalized, so let's assume that this is correct). Upon placing the axle, the 5th, 6th, 7th and 8th bottom spots would be taken out of the list, and 6 more positions would be added to the list (the 4 bottom spots of the white axle, and the 2 axle ends themselves for possible tires). The next axle would be placed in position 13 of the black bottom (notice I didn't say 17, because 4 positions were subtracted from the list). So a gene would look like:

Piece ID
+ Self Position #
+ Connect to Position #
+ Direction Normalized

If any piece has an invalid move, the piece is automatically void from entry (this leaves residual, or rather recessive genes to accumulate in children genetics that may show up later). As pieces are placed, the spots they occupy are removed from the list, and the piece itself adds its own spots to the list as mentioned above.

Genetic crossovers can be done, as well as transposition within the genetic code, allowing patterns from both parents to be derived. Mutation of whole genes, along with point-mutation (of just a single part of a gene, whether piece or position), can be done as well to avoid becoming trapped in a local minimum.

Pieces that move (such as the 2x2x3.2x2x3 swing arm) can have multiple IDs to denote each of its movement positions for starter placement.

If it is not understood, I apologize; I sometimes get an idea and am unable to express it in words.

--Rob
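Rob's open-spot bookkeeping can be sketched as follows. The piece catalogue and the spot-id scheme are my simplifications (the real geometry of tops, bottoms, axles and directions is far richer); the sketch only models the list mechanics: occupied spots leave the list, a placed piece appends its own, and invalid genes are voided but remain in the genome as "recessive" material.

```cpp
#include <map>
#include <vector>

// One gene, following Rob's encoding: which piece, which of the piece's
// own connection spots is used, which open spot on the structure it
// attaches to, and a normalized direction.
struct Gene {
    int pieceId;
    int selfSpot;
    int connectSpot;   // index into the open-spot list
    int direction;
};

// Hypothetical piece catalogue: pieceId -> (open spots it consumes,
// open spots it contributes).
struct PieceInfo { int consumes; int contributes; };

// Decode a genome, returning the number of pieces actually placed.
// Invalid genes (connection spot out of range, unknown piece) are
// skipped rather than aborting the build.
int build(const std::vector<Gene>& dna,
          const std::map<int, PieceInfo>& catalogue,
          std::vector<int>& openSpots) {
    int placed = 0;
    for (const Gene& g : dna) {
        auto it = catalogue.find(g.pieceId);
        if (it == catalogue.end()) continue;               // unknown piece: void
        const PieceInfo& p = it->second;
        if (g.connectSpot < 0 ||
            g.connectSpot + p.consumes > (int)openSpots.size())
            continue;                                      // no room: void
        // Occupied spots leave the list...
        openSpots.erase(openSpots.begin() + g.connectSpot,
                        openSpots.begin() + g.connectSpot + p.consumes);
        // ...and the new piece contributes its own open spots.
        for (int i = 0; i < p.contributes; ++i)
            openSpots.push_back(1000 * g.pieceId + i);     // arbitrary spot ids
        ++placed;
    }
    return placed;
}
```

With Rob's axle example (consumes 4 spots, contributes 6), placing one axle onto the 96 spots opened by the black base piece leaves 96 - 4 + 6 = 98 open spots, and later genes index into this shrunken-and-grown list, which is exactly why "position 13, not 17" in his description.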