First, the link above. And from the papers:
Can an image behave? In other words, if a digital image is a visual representation of colors (i.e. pixels) on a grid (i.e. screen or piece of paper), what if each element of this grid were able to act on its own? A series of experiments in answering these questions led me to create Reactive, a live video installation that amplifies a user’s movements with exploding particle systems in a virtual space.
Reactive began as an experiment in taking a digital image and mapping each pixel in a three-dimensional space. A low-resolution image (80x60 pixels) is mapped to a grid of 2400 pyramid shapes, each colored according to RGB values from the source image, and each with a “z-axis” position according to that color’s brightness. Suddenly, this still image manifests itself as a floating particle system with a one-to-one relationship between pixels and particles.
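The pixel-to-particle mapping described above can be sketched in a few lines. This is a minimal, hypothetical version, not the installation's actual code: the `brightness()` weights are the standard Rec. 601 luma coefficients, and the demo "image" is a synthetic gradient standing in for the low-resolution source frame.

```python
def brightness(r, g, b):
    """Perceived brightness (Rec. 601 luma weights), range 0-255."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def image_to_particles(pixels, width, height, depth=100.0):
    """Map each pixel of a flat RGB list to a particle (x, y, z, color).
    The z position is proportional to the pixel's brightness, so bright
    pixels float forward in the virtual space."""
    particles = []
    for y in range(height):
        for x in range(width):
            r, g, b = pixels[y * width + x]
            z = brightness(r, g, b) / 255.0 * depth
            particles.append((x, y, z, (r, g, b)))
    return particles

# Demo: a tiny synthetic 4x2 "image" -- a black-to-white gradient.
demo = [(i * 36, i * 36, i * 36) for i in range(8)]
cloud = image_to_particles(demo, 4, 2)
```

With a real 80x60 frame, the same function yields the one-to-one pixel/particle cloud: dark pixels sit at z = 0, bright ones push out toward `depth`.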
That is a really interesting topic to dig into: perceiving everything on the screen as a composition of pixels. The latest examples that Shiffman showed us in Nature of Code are basically spawned from this idea as well, but apply algorithms that imitate nature. Those were really slick examples, by the way.
Again, from Swarm:
Swarm is an interactive video installation that implements the pattern of flocking birds (using Craig Reynolds' "Boids" model) as a constantly moving brush stroke.
Swarm is implemented as a system of 120 boids following the rules outlined by Reynolds. In my system, each boid looks up an RGB color from its corresponding pixel location in the live video stream. If the viewer stands still, his or her image will be slowly revealed over time as the flock makes its way around the entire screen. If the viewer chooses to move during the process of painting, more abstract shapes and colors can be generated.
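A minimal sketch of that idea, assuming a flat RGB list stands in for the live video frame. The three steering rules (cohesion, alignment, separation) are a generic textbook implementation of Reynolds' behaviors, and the weights are arbitrary demo values, not Swarm's actual parameters.

```python
import random

class Boid:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx = random.uniform(-1, 1)
        self.vy = random.uniform(-1, 1)

def step(boids, sep=4.0, radius=25.0, max_speed=2.0):
    """One update applying Reynolds' three flocking rules."""
    for b in boids:
        n = [o for o in boids if o is not b
             and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < radius ** 2]
        if n:
            # Cohesion: steer toward the neighbors' center of mass.
            cx = sum(o.x for o in n) / len(n)
            cy = sum(o.y for o in n) / len(n)
            b.vx += (cx - b.x) * 0.01
            b.vy += (cy - b.y) * 0.01
            # Alignment: nudge velocity toward the neighbors' average.
            b.vx += (sum(o.vx for o in n) / len(n) - b.vx) * 0.05
            b.vy += (sum(o.vy for o in n) / len(n) - b.vy) * 0.05
            # Separation: push away from boids that are too close.
            for o in n:
                if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < sep ** 2:
                    b.vx += b.x - o.x
                    b.vy += b.y - o.y
        # Clamp speed so the flock stays coherent.
        speed = (b.vx ** 2 + b.vy ** 2) ** 0.5
        if speed > max_speed:
            b.vx *= max_speed / speed
            b.vy *= max_speed / speed
    for b in boids:
        b.x += b.vx
        b.y += b.vy

def sample_color(frame, w, h, boid):
    """Look up the video pixel under a boid (clamped to frame bounds)."""
    x = min(max(int(boid.x), 0), w - 1)
    y = min(max(int(boid.y), 0), h - 1)
    return frame[y * w + x]

random.seed(1)
W, H = 80, 60
flock = [Boid(random.uniform(0, W), random.uniform(0, H)) for _ in range(120)]
frame = [(x % W * 3, 0, 0) for x in range(W * H)]  # stand-in "video" frame
for _ in range(10):
    step(flock)
colors = [sample_color(frame, W, H, b) for b in flock]
```

Each frame, a boid would stamp its sampled color at its position; standing still lets the flock gradually "paint" your image, exactly as the quote describes.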
Here we can see he is using Reynolds’ Boids as the drawing tool.