To explore how neural adaptation can create and change representations through interaction with the world.
I started by building a bot that follows light, so that I could observe its behaviour directly and experiment with basic directional responses. I then moved the simulation to the computer, which gave me the possibility of multiplying the agent.
What I am showing in this example is agents merely following light with their basic neurons: at each point they check their proximity to the light and behave accordingly. While this is the most common approach today, I questioned whether there could be a way to extend this behaviour so that the agent learns to behave smarter over time.
So I added a couple of new things to my bot. I multiplied its neurons to get more accurate results, and I added what I call an avoid system: if the bot is too close to the light, or is receiving more light at its tail than at its front, it simply runs backwards.
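The avoid system described above can be reduced to a small decision rule. This is a hedged sketch: the function name, the sensor inputs, and the threshold value are hypothetical, chosen only to illustrate the "too close or light at the tail, run backwards" logic.

```python
def choose_action(front, tail, proximity, near_threshold=0.8):
    """Pick a motion command from light readings (illustrative sketch).

    `front` and `tail` are light intensities at the nose and tail
    sensors; `proximity` is closeness to the light in [0, 1].
    The threshold value is an assumption, not a measured constant.
    """
    # Back up when dangerously close, or when the light is mostly
    # behind the bot (tail sensor reads stronger than the nose).
    if proximity > near_threshold or tail > front:
        return "reverse"
    return "follow"
```

For example, `choose_action(0.2, 0.6, 0.1)` returns `"reverse"` because the tail sensor outreads the front one, while `choose_action(0.6, 0.2, 0.3)` returns `"follow"`.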
The most challenging part was making it recognise certain patterns so that it could learn to behave according to them. The main problem I was having is that I had only one bot and no generations, whereas genetic algorithms work across populations of candidates evolved over many generations.
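To make concrete why a single bot with no generations is a problem, here is a minimal genetic algorithm loop. It is a generic textbook sketch under my own assumptions (truncation selection, one-point crossover, point mutation), not the project's implementation; every name and parameter is illustrative.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=40,
           mutation_rate=0.1, seed=0):
    """Minimal GA sketch: a *population* of genomes improved over
    repeated *generations* of selection, crossover, and mutation.

    All parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(genome_len):           # point mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.random()
            children.append(child)
        pop = parents + children                  # elitism: parents survive
    return max(pop, key=fitness)
```

The whole loop hinges on evaluating many candidate controllers per generation, which is exactly what one physical bot cannot provide on its own.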
Results and further questions
It is not easy to implement genetic algorithms in physical devices.
How could this approach be useful in our daily lives and user experiences with devices?
Think of a building whose floors need to be painted: could this approach be applied to that kind of scenario?