This is a workshop that Zachary Lieberman gave back in November in Barcelona, examining image processing and computer vision via the Processing environment.
There is a lot of handy new info about video sensing, and I also learned some new terms. One is image quantization; one explanation is here:
Many people don’t have full-color (24 bit per pixel) display hardware. Inexpensive display hardware stores 8 bits per pixel, so it can display at most 256 distinct colors at a time. To display a full-color image, the computer must choose an appropriate set of representative colors and map the image into these colors. This process is called “color quantization”. (This is something of a misnomer; “color selection” or “color reduction” would be a better term. But we’re stuck with the standard usage.)
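As a minimal sketch of what "mapping a full-color image into 256 colors" can look like, here is uniform 3-3-2 quantization in Python: keep the top 3 bits of red and green and the top 2 bits of blue, packing each pixel into a single byte. This is my own illustration of the idea, not code from the workshop (real quantizers pick a smarter palette than a fixed uniform one):

```python
def quantize_332(r, g, b):
    # Pack a 24-bit color into one byte: rrrgggbb.
    # Keep the 3 high bits of red, 3 of green, 2 of blue.
    return (r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6)

def unquantize_332(q):
    # Map the byte back to an approximate 24-bit color.
    r = q & 0xE0
    g = (q & 0x1C) << 3
    b = (q & 0x03) << 6
    return r, g, b

# Pure white survives as the top of each channel's range:
# quantize_332(255, 255, 255) == 0xFF
```

Every possible pixel lands on one of 256 byte values, which is exactly the "at most 256 distinct colors" constraint the quote describes.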
Quasimondo has a Processing example of this here. There is also something called clustering algorithms, which I still haven't figured out; I'm hoping to dig into that stuff over the summer.
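For what it's worth, the clustering idea connects back to quantization: an algorithm like k-means can group an image's pixels into k clusters in color space, and the cluster centers become the reduced palette. Here is a minimal k-means sketch of my own (deterministically seeded with the first k points for simplicity, not taken from the workshop):

```python
def kmeans(points, k, iters=10):
    # Naive k-means: points are tuples (e.g. RGB triples).
    # Initialize centers with the first k points (simple, deterministic).
    centers = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        # Assignment step: each point joins its nearest center.
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        # Update step: each center moves to its cluster's mean.
        for i, c in enumerate(clusters):
            if c:
                centers[i] = tuple(sum(v) / len(c) for v in zip(*c))
    return centers

# Two obvious color groups (near-black and near-white) should yield
# one dark center and one bright center.
pixels = [(0, 0, 0), (1, 1, 1), (2, 2, 2), (254, 254, 254), (255, 255, 255)]
palette = kmeans(pixels, k=2)
```

To quantize with the result, you would map each pixel to its nearest center, just like the assignment step above.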