Archive for January, 2006

New goodies and ideas

Friday, January 27th, 2006

Again one of those nights when I am lost in the infinite world of the web, jumping from one source to another. I have actually discovered some really interesting stuff.
I am trying to remember how it all started... I found this article about Electronic Music in Turkey after I saw the name Bulent Arel, one of the founders of the Columbia-Princeton Electronic Music Center. I was curious whether he was Turkish, which he turned out to be, and he did quite a few things back in the day.

The other thing, and maybe the most valuable one, is the blog of Trond Lossius, a sound and installation artist living in Bergen, Norway. It is one of the deepest sources I have come across in a long time, especially for a newbie like me who wants to educate himself in spatialisation and sound installations. I am really curious about discovering new concepts in this area; one of them is Ambisonic sound, which I still haven’t quite visualized in my mind. This is Trond’s post about it, which he collected from the Sursound mailing list.

Another thing I was steered towards through his posts is a software environment which, to quote the site, “is an interactive network performance environment invented and developed by composer and computer musician Georg Hajdu. It enables up to five performers to play music over the Internet under the control of a ‘conductor.’”

I believe it could be a good source for the final project in LIPP; I am thinking about a multi-user video composing environment with sound, where different performers perform according to the score they have. I must dig into this later. I am especially curious about Cage’s late pieces such as Five, where he uses time brackets in the same score.
~~~ I haven’t read about this, so I must ~~~

Reading: Machine Visions: towards a poetics of Artificial Intelligence

Thursday, January 26th, 2006

The text Machine Visions: towards a poetics of Artificial Intelligence came just in time, as I was questioning whether we were going to cover the use of text in its more aesthetic sense, leaning more towards typography than towards mining text. So I want to share some thoughts on the subject.

In the four years of my visual communication design degree, we were always taught that we must find the best possible ways to communicate our ideas through visual entities. I feel there are two ways, if not more (and I am sure there are more), of conveying ideas in that sense. One approach is passing the idea along directly without adding any ‘spice’ to it, which could be described as a dull process now; this is what art and literary history was all about until modernism. The second approach came in the 20th century, starting, I assume, from the texts of James Joyce in literature (although it would be unfair not to mention forerunners like Tristram Shandy and maybe Don Quixote here), and finding its way into the different disciplines we memorize in modern art history.

But the person I want to mention here is Paul Rand, a graphic designer who made marvelous magazine-cover designs to convey his ideas, and probably one of the greatest influences in the graphic design world. The thing that most attracts me to this gentleman is his creative approach to the design process long before the computer era [I am totally ranting about computers here], with only scissors and physical paper as his technological devices. I cannot see that happening now; at some point in our processes we are bound to computers. This is not necessarily a bad thing at all, though. Computers open the gates to innumerable computations we wouldn’t have otherwise, and this is basically the first reason I am taking Nature of Code and this class: to explore, possibly apply, and see the results, if there is anything to see at all. But I just cannot accept that David Carson’s or other “new” deconstructivist works could be counted as artificial intelligence.
[OK, I am dramatizing at this point, as I know he is not talking about AI as we all know it.] Still, I cannot see anything other than aesthetically pleasing works when I look at Carson’s stuff. I liked them and was really impressed when I first saw them back in my undergraduate years [I must also admit I had a couple of his posters on my wall back in Istanbul], but it didn’t take me long to realize they were repeating variances of themselves, merely aesthetically pleasing pieces. They were new at some point, but I just couldn’t live at that point for the rest of my life. Personally, I think this was one of the reasons I felt restricted in graphic design. Making something beautiful or sellable should not be the only thing I can do in this life. Some people are fine with that; well, I like to learn new things, that’s all.

I read the text with more or less these kinds of emotions. Apart from that, I would really like to design the text piece I am going to create with Machine Vision anyway. I am just questioning whether Java would be appropriate for that. See what an aesthetically minded bastard I am in the end :) But c’mon, wouldn’t it be awesome to create something in OpenGL, using the z axis and spitting out random words flying around the screen, aesthetically pleasing!!! OK, I am not going to use that word again.

Digital Audio Synthesis Techniques/MIDI

Wednesday, January 25th, 2006

This semester Daniel Palkowski is teaching this class and I am in. Here is the syllabus for the class. In the first class we listened to different pieces from different composers; the album was Early Modulations, Vintage Volts. We saw the differences between composers’ approaches to electronic music; for example, Morton Subotnick was leaning towards sequencing, as opposed to what the electro-acoustic composers of the time were doing with sound. !!!Art Krieger!!! is one example, and he seems not to have enjoyed what Morton was doing back then. Interestingly, I couldn’t find any information on this guy; I am probably spelling his name wrong.

Some of the early electronic devices and milestones:
20s: Theremin (live) – Bernard Herrmann
20s: Vacuum tube
30s: Wire (tape) recorder – magnetic tape
40s: EMC
60s: Synths like the Moog, ARP, and Buchla
70s: Modular synths

I also found this nice link about the electronic music timeline through the EMF Institute.
Then we covered the basics of pitch, octaves, and oscillation, and concepts like what a synth is made of. It looks like an oscillator, a filter, and an envelope generator are enough to make a synth. I am going to cover these topics in detail in the upcoming days.
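The oscillator-plus-envelope idea can be sketched in a few lines of code. This is my own minimal sketch, not anything from the class notes: the class and method names (MiniSynth, render) are made up, and I am leaving the filter stage out for brevity, using a simple linear attack/decay instead of a full envelope generator.

```java
// Hypothetical sketch: one second of a sine oscillator shaped by an envelope.
public class MiniSynth {
    // Render `sampleRate` samples of a sine tone at `freq` Hz,
    // multiplied by a linear attack (first 10%) and decay (rest).
    public static double[] render(int sampleRate, double freq) {
        double[] out = new double[sampleRate];
        for (int i = 0; i < out.length; i++) {
            double osc = Math.sin(2 * Math.PI * freq * i / sampleRate); // oscillator
            double t = (double) i / out.length;                          // 0..1 position
            double env = (t < 0.1) ? t / 0.1 : (1.0 - t) / 0.9;          // envelope generator
            out[i] = osc * env;                                          // amplitude shaping
        }
        return out;
    }

    public static void main(String[] args) {
        double[] samples = render(44100, 440.0);   // A4 for one second
        System.out.println(samples.length);
    }
}
```

A real synth voice would route the oscillator through a filter before the envelope, but even this toy chain shows the basic signal flow.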

So that is more or less what we covered in the first class.

~~~I am going to double check the names soon.~~~

week 1: First Assignment, Perlin Noise

Wednesday, January 25th, 2006


My first week’s assignment for Nature of Code is here. Some background on Perlin noise: the noise function was invented by Ken Perlin back in the 80s and later developed and applied by film production companies. The noise is mostly used for generating clouds, smoke, terrain, etc., and I think the equation is used by most 3D software packages right now. There are 1D, 2D, and 3D versions of the noise; 4D was not impossible, but it was a huge calculation, and back in 2002 Perlin wrote a paper to improve and optimize the equation. There is a great talk by him at this link.
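To get a feel for it, here is a rough sketch of the 1D case in Java. This is my own gradient-noise style approximation, not Ken Perlin’s actual reference implementation; the class name Noise1D and the seeded table of random gradients are my inventions.

```java
import java.util.Random;

// Hedged sketch of 1D gradient ("Perlin-style") noise: random slopes at
// integer points, smoothly interpolated in between, so the curve is zero
// at every integer and wiggles smoothly everywhere else.
public class Noise1D {
    static final double[] grad = new double[256];
    static {
        Random r = new Random(42);                 // fixed seed: repeatable noise
        for (int i = 0; i < 256; i++) grad[i] = r.nextDouble() * 2 - 1;
    }

    // Perlin's smoothstep-like fade curve: 6t^5 - 15t^4 + 10t^3.
    static double fade(double t) { return t * t * t * (t * (t * 6 - 15) + 10); }

    public static double noise(double x) {
        int i0 = (int) Math.floor(x) & 255;        // left integer lattice point
        int i1 = (i0 + 1) & 255;                   // right integer lattice point
        double t = x - Math.floor(x);              // fractional position, 0..1
        double a = grad[i0] * t;                   // contribution from the left slope
        double b = grad[i1] * (t - 1);             // contribution from the right slope
        return a + fade(t) * (b - a);              // smooth interpolation
    }

    public static void main(String[] args) {
        System.out.println(noise(3.5));
    }
}
```

Higher dimensions follow the same pattern with gradient vectors instead of slopes, which is where the cost of 4D comes from.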

Coming to the assignment: I have always had this obsession, for no reason, with applying a wave to a plane, and this assignment was suited to the opportunity. I also explored vertices and the beginShape() function for the first time, creating a plane with them. I see a lot of potential in creating planes with QUADS, QUAD_STRIP, and especially POLYGON. This is a vast area waiting to be explored, considering I can apply our mathematical equations to them.

So future goals could be:

  • Mapping an image’s brightness or RGB values and creating a plane according to them.
  • Creating different variants of Perlin noise using lines and spheres; I am really curious about the results.
  • Playing with the noiseDetail() function to change the properties of the noise; this is another rich area.
  • Also, I rather skipped Random Walk, hoping to apply it to some exercise this week too. It was all Perlin noise week for me.

    Some links below:
    Ken Perlin’s Hypertexture paper, Computer Graphics, Volume 23, Number 3, July 1989 – The paper is about how we can achieve different materials by playing with the density of the noise.
    Ken Perlin’s Improving Noise paper – This is the paper on the improvement of the noise algorithm, which allows building 4D and 5D noises more easily and faster than the previous one.

    – The project Web Wide World by Ken Perlin – This movie is a render of the project, which uses 4D Perlin noise to generate clouds and terrain; as Perlin states, “The land height and the various features (snow, clouds, etc), are all derived from a fractal sum of noise.” Since it is highly CPU-intensive, it was run on an IBM SP2 back in the day. The live recording movie is here.

    Some Java/Processing Examples of Perlin Noise on the Web:
    Filtered Clouds – Toxi
    perlinTerrain – Toxi
    NoiseDetail reference – Toxi
    BumpMap2D – I am ashamed that I don’t know this guy’s name.
    Old School water Ripple by Perlin Noise – err..
    Some discussion and a nice example of source code using it to create colors at Processing forums.

    Reading The Computational Beauty of Nature – Introduction

    Tuesday, January 24th, 2006

    The Computational Beauty of Nature by G. William Flake
    There are really good points that the writer makes in the introduction.

    “… In making a large list of ‘things’ it is easy to forget that the manner in which ‘things’ work more often than not depends on the environment in which they exist…”

    – Actually I believe this is more than “more often than not”, since everything I see around me is somehow related to its environment. Even in the arts, don’t we all lean towards judging work according to its context? (OK, not all of us, but I do :) )

    “…Examples such as economic markets that defy prediction, the pattern recognition capabilities of any of the vertebrates or the evolution of life are all emergent in that they contain simple units that, when combined, form a more complex whole. The whole system being greater than the sum of its parts…”

    – We have come to holism once again. I am trying to remember where I first came across this term; I think it was when studying Gestalt psychology in a Perception class back in the foundation year of my undergraduate degree. Mathematical algorithms that try to explain nature have amazed me from the first time I saw them, and they still do; but, maybe because of my Eastern roots, or maybe not even that and merely being human like every other person, believing that those alone can explain everything is not an approach I am bound to. I really like the way the writer approaches this subject as well.

    “…Nature is frugal… There are different ways to describe the interactions of agents; however, multiplicity, iteration and adaptation by themselves go long way in describing what it is about interactions between agents that make them so interesting….”

    – I agree with all my heart. The quote of this class :)

    week 1: Computational Poetry and Mac Low

    Tuesday, January 24th, 2006

    My code is not working the way I want it to, but then again, if it were, what would be the point of taking this class :) So I am going to post it at the end of this post. By the way, I am curious how Shiffman is using #IDs to skip through the titles on the page, since I haven’t figured that out in WordPress.

    Anyway, today, walking from school, I was wondering: can an algorithm write or read a poem? Hmm. To answer that question we should first address what a poem is, imho. What makes a poem a poem? Could it be the verse, the metrics, the context, or how intense it is? It is a difficult question to answer. I find similarities between poetry and music as I dig deeper: what makes music, music? For me the answer is somewhere between saturation and our past life experience, which builds how you perceive your surroundings and thus gives you your taste. With this approach, yes, an algorithm might write and read a poem.

    Another approach, though, is purely mathematical: can an algorithm write and read a poem? Well, my answer to that would be: no way! I think I am becoming more holistic lately (maybe after I had this operation :) Such an algorithm could give you linguistically perfect output, but that kind of reductionism still couldn’t be compared with the worst poems I know. Maybe. I borrowed Virtual Muse from the library last week; Hartman gives some insight, both conceptually and technically, into how to approach computer poetry, with examples. I got the impression from his words that the best generative poetry is the kind that finds its own way, evolving itself, as opposed to imitating the way humans approach poetry.

    [page 72]
    “AutoPoet embodied an inappropriate idea of poetry. As long as the goal was the imitation of the human poet – or as long as the poem’s reader was encouraged to think that was the goal – I wasn’t likely to get any farther. What’s wrong with the AutoPoetry I’ve quoted here (and all the other reams of it the machine would produce until it was turned off) is exactly that it’s imitation poetry. All our habits of reading are called upon, all the old expectations, and then let down. ‘Monologues of Soul and Body’ had worked because its ‘body’ sections were so different from human poetry. It had successfully demanded its own way of reading…”

    I cannot say he succeeds in finding that working formula for the perfect poem, but you should read the book if you are curious about his projects [since I have neither the English skills nor the memory to recount them]; it is rich in content, and he mentions Mac Low throughout the book along with some of his projects, which was the main reason I got it. There is also some good potential in the book’s projects that might pop up for the final.

    Coming to the assignment part: I was really curious about what Mac Low wanted to achieve through his diastic readings and thought it might be a good exercise to re-write this algorithm. Only with errors :) I am posting the code below; we can talk more about it in class. The main problem is that it screws up on multi-word phrases. I actually realized this tonight… So I am thinking of improving it for next week, hopefully with a couple of new optimizations.

    To Download

    Jackson Mac Low’s Diastic Algorithm

    Sunday, January 22nd, 2006

    I have been trying to implement Jackson Mac Low’s diastic readings [link] in Java as an exercise this week. I got the algorithm partly working: I can choose a phrase before compiling, and my program spits out matching words that it reads from an input file. A couple of things are missing though:

  • I am only delimiting words by spaces; I am trying to get my hands dirty with regexes, but they are damn nonsensical.
  • I need to work on error handling so the program doesn’t go into an endless loop when it searches for a long phrase and cannot find it in the text.
  • Maybe a runtime user-input system.
  • Minor problems running the code, as noted below.
  • Texts to be read:
    Handling Errors Using Exceptions from the Sun docs
    Regular Expressions from the Sun docs
    Java Fundamental Classes Reference: I/O, Input/Output Operations in Java
    Introduction to the java.util.regex Package – Beginning Regular Expressions by Andrew Watt (through the NYU library)
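    While reading about regexes, I sketched how the space-only splitting could be replaced with a regex split. This is just my own illustration (the class name WordSplit is made up); it treats any run of whitespace or punctuation as a word boundary.

```java
import java.util.Arrays;

// Sketch: split a source text into words on whitespace OR punctuation,
// instead of only on single spaces.
public class WordSplit {
    public static String[] words(String text) {
        // \s matches whitespace, \p{Punct} matches punctuation; the +
        // collapses runs like ", " or " -- " into a single delimiter.
        return text.trim().split("[\\s\\p{Punct}]+");
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(words("A poem, a plan -- a canal.")));
        // prints [A, poem, a, plan, a, canal]
    }
}
```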

    // *************** DIASTIC TEXT ALGORITHM ****************
    // Fixed: the word index now wraps cleanly to the start of
    // the text (no more repeated i or skipped last element),
    // and a pass counter stops the search so the program
    // cannot loop forever on a letter it cannot find.
    // *******************************************************
    // to do: break lines according to the punctuation marks,
    // make it look more like a poem.
    // *******************************************************

    import java.io.*;
    import java.nio.*;
    import java.nio.channels.*;

    public class Diasticc {
        public static void main(String[] args) throws IOException {
            FileInputStream fis = new FileInputStream(args[0]);
            FileChannel fc = fis.getChannel();
            ByteBuffer bb = ByteBuffer.allocate((int) fc.size());
            fc.read(bb);                               // actually fill the buffer
            fis.close();
            String content = new String(bb.array());
            String phrase = "poem";
            String[] word = content.split("\\s+");     // split on any whitespace
            int i = 0;                                 // current position in the text
            for (int k = 0; k < phrase.length(); k++) {
                int searched = 0;
                while (searched++ <= word.length) {    // give up after one full pass
                    if (word[i].length() > k && word[i].charAt(k) == phrase.charAt(k)) {
                        System.out.println(word[i]);   // k-th letter matches the phrase
                        i = (i + 1) % word.length;     // continue from the next word
                        break;
                    }
                    i = (i + 1) % word.length;         // wrap at the end of the text
                }
            }
        }
    }

    Reading Golan Levin’s Paper

    Sunday, January 22nd, 2006

    I was reading Golan Levin‘s paper about computer vision, and this area seems to have lots of potential overall. In the beginning of his paper he gives a historical background of computer vision through the works of Myron Krueger, the figure who coined the term “artificial reality”. It looks like his was the first HCI work using a camera and computer vision. Another key point for me about this project: it allows two participants in mutually remote locations to share the same projection. That is a great route I should be aware of.

    The next piece is Levin‘s collaborative work with Lieberman called Messa di Voce. Basically it is a head-tracking system that spits out circles according to the level of your voice coming through a mic. Since it makes good points about HCI and the possible outcomes of computer vision in this sense, it is a good project.

    There are numerous other projects as well, but the one I like, since it uses physical computing, is an installation called Standards and Double Standards (2004) by Rafael Lozano-Hemmer. This work consists of fifty leather belts suspended at waist height from robotic servo-motors mounted on the ceiling of the exhibition room. Controlled by a computer-vision-based tracking system, the belts rotate automatically to follow the public, turning their buckles slowly to face passers-by. In a conceptual sense, it “turns a condition of pure surveillance into an ‘absent crowd’ using a fetish of paternal authority: the belt”.

    In the second part of the paper, Levin explains elementary computer vision techniques: frame differencing, which attempts to locate features by detecting their movements; background subtraction, which locates visitor pixels according to their difference from a known background scene; and brightness thresholding, which uses hoped-for differences in luminosity between foreground people and their background environment.
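    To make frame differencing concrete for myself, here is a tiny toy sketch in Java. It is my own version, not Levin’s code: the class FrameDiff, its method names, and the threshold value are all made up, and the two frames are just arrays of grayscale pixel values.

```java
// Sketch of frame differencing: sum the per-pixel differences between two
// consecutive grayscale frames, and flag motion when the total change
// exceeds a threshold.
public class FrameDiff {
    public static boolean motion(int[] prev, int[] curr, int threshold) {
        long total = 0;
        for (int i = 0; i < curr.length; i++) {
            total += Math.abs(curr[i] - prev[i]);  // per-pixel brightness change
        }
        return total > threshold;
    }

    public static void main(String[] args) {
        int[] a = {10, 10, 10};
        int[] b = {10, 200, 10};                   // one pixel changed a lot
        System.out.println(motion(a, b, 100));     // prints "true"
    }
}
```

Background subtraction would look almost identical, except `prev` is replaced with a stored frame of the empty scene.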

    He has a detailed explanation with code samples here: Link.

    Shiffman’s Reactive and Swarm

    Thursday, January 19th, 2006

    First, the precious link. And from the papers:

    Can an image behave? In other words, if a digital image is a visual representation of colors (i.e. pixels) on a grid (i.e. screen or piece of paper), what if each element of this grid were able to act on its own? A series of experiments in answering these questions led me to create Reactive, a live video installation that amplifies a user’s movements with exploding particle systems in a virtual space.

    Reactive began as an experiment in taking a digital image and mapping each pixel in a three-dimensional space. A low-resolution image (80×60 pixels) is mapped to a grid of 2400 pyramid shapes, each colored according to RGB values from the source image, and each with a “z-axis” position according to that color’s brightness. Suddenly, this still image manifests itself as a floating particle system with a one-to-one relationship between pixels and particles.
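    The brightness-to-z mapping described in that quote could be sketched roughly like this. This is my own guess at the arithmetic (the class name PixelDepth is invented here), assuming pixels packed as 0xRRGGBB integers.

```java
// Sketch of the pixel-to-particle mapping: the brightness of an RGB pixel
// becomes its z position in 3D space.
public class PixelDepth {
    public static double zFromRgb(int rgb, double maxZ) {
        int r = (rgb >> 16) & 0xFF;                      // unpack red channel
        int g = (rgb >> 8) & 0xFF;                       // unpack green channel
        int b = rgb & 0xFF;                              // unpack blue channel
        double brightness = (r + g + b) / 3.0 / 255.0;   // simple average, 0..1
        return brightness * maxZ;                        // brighter = closer
    }

    public static void main(String[] args) {
        System.out.println(zFromRgb(0xFF8040, 120.0));
    }
}
```

Run over an 80×60 image, this gives the 2400 depth values for the pyramid grid the quote describes.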

    That is a really interesting topic to dig into: perceiving everything on the screen as a composition of pixels. The latest examples Shiffman showed us at Nature of Code are basically spawned from this idea as well, but apply algorithms that imitate nature. Those were really slick examples, by the way.

    Again from the Swarm:

    Swarm is an interactive video installation that implements the pattern of flocking birds (using Craig Reynolds’ “Boids” model) as a constantly moving brush stroke.

    Swarm is implemented as a system of 120 boids following the rules outlined by Reynolds. In my system, each boid looks up an RGB color from its corresponding pixel location in the live video stream. If the viewer stands still, his or her image will be slowly revealed over time as the flock makes its way around the entire screen. If the viewer chooses to move during the process of painting, more abstract shapes and colors can be generated.

    Here we can see he is using Reynolds’ Boids as the drawing tool.

    First week notes

    Thursday, January 19th, 2006

    Diastic reading:
    Diastic reading is an arbitrary but not random way of selecting words from one text to create a new text.
    A key phrase(title phrase) guides the selection of words. Let’s say that the key phrase is “red balloon.” Starting at the beginning of the text, we would select the first word that began with an “r” — the first letter of the first keyword. Then, we’d continue from where we stopped in the text until we found a word that had an “e” as its second letter (the second letter of the keyword). Continuing, we’d look for a word that had a “d” as the third letter. Next, we’d look for a word that had a “b” as its first letter (the first letter of the second keyword). If at any point we reach the end of the text, we go back and continue from the beginning.
    When a word is followed by a punctuation mark or ends a line in the source text, the line ends in the generated text.

    from this site.

    Acrostic reading:
    “Acrostic” reading-through procedures draw words and other linguistic units from source texts by “spelling out” their titles with linguistic units that have the letters of the words in the titles as their initial letters. One reads through a source text and finds successively linguistic units spelling out the title as follows: the units spelling out individual words comprise single lines (often long ones) and the series of lines spelling out the whole title comprises a stanza.

    This book looks like a very rich source for creating computer-based poetry: Charles O. Hartman, The Virtual Muse: Experiments in Computer Poetry (Hanover, NH: Wesleyan University Press, 1996), pp. 95-96.

    There is a discussion about the book on Grand Text Auto which is worth reading.

    I must say I am still trying to understand The Young Turtle Asymmetries; actually, the works where Jackson Mac Low takes an approach different from the merely diastic one, both as performance and as narrative, as I understood it. More to be dug into. Here is the link to the explanation.
    And this is an mp3 of the work, which sounds quite interesting.

    That’s all for now.