Archive for February, 2006

Location matters

Thursday, February 23rd, 2006

Nicolas (from Pasta and Vinegar) makes a nice point about the mixed results of location awareness in mobile apps. It turns out that automatically getting a colleague's location sometimes doesn't help communication at all, while the lack of that information can actually push you to communicate more with him/her. I think this is a really good point in an era where we tend to assume that if something can be done technologically, it must be a good thing. Still, as far as the technology itself goes, there is the website Navizon, which offers a location-based system where users periodically log the Wi-Fi access points around them and share that data, so that users without GPS can benefit from it (P2P wireless positioning). It is a good approach, since in the long run it could let us position ourselves without the help of GPS.

NoC midterm.

Wednesday, February 22nd, 2006

[image: noc_midterm]

Screwed up as always :) Well, I have some stuff in hand, but I think I generally dive so deep into the code that I forget to build something on top of it.

This was my proposal, for the record.
Right now I have this: the canvas consists of a 120×120 vector field, and the particles move according to it. Dragging the mouse adds a little spice. Actually, the whole vector field thing started with me trying to figure out how magnetic field forces work. In the end I couldn't quite simulate a magnetic field, but this is what I came up with.
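
The pattern boils down to something like this minimal sketch (a much coarser grid than my 120×120, and placeholder constants, just to show the structure rather than the actual midterm code):

// Minimal vector-field sketch: a grid of angles that particles read
// as steering forces. Grid size and constants are placeholders.
int cols = 24, rows = 24;
float[][] field;                      // one angle per grid cell
int num = 300;
float[] px = new float[num], py = new float[num];
float[] vx = new float[num], vy = new float[num];

void setup() {
  size(480, 480);
  field = new float[cols][rows];
  for (int i = 0; i < cols; i++)
    for (int j = 0; j < rows; j++)
      field[i][j] = noise(i * 0.1, j * 0.1) * TWO_PI;  // smoothly varying angles
  for (int k = 0; k < num; k++) {
    px[k] = random(width);
    py[k] = random(height);
  }
}

void draw() {
  background(0);
  stroke(255);
  for (int k = 0; k < num; k++) {
    int i = constrain(int(px[k] / (width / (float) cols)), 0, cols - 1);
    int j = constrain(int(py[k] / (height / (float) rows)), 0, rows - 1);
    vx[k] = (vx[k] + cos(field[i][j]) * 0.1) * 0.95;  // field acts as force, damped
    vy[k] = (vy[k] + sin(field[i][j]) * 0.1) * 0.95;
    px[k] = (px[k] + vx[k] + width) % width;          // wrap at the edges
    py[k] = (py[k] + vy[k] + height) % height;
    point(px[k], py[k]);
  }
}

void mouseDragged() {
  // the "spice": dragging points the local cell along the drag direction
  int i = constrain(int(mouseX / (width / (float) cols)), 0, cols - 1);
  int j = constrain(int(mouseY / (height / (float) rows)), 0, rows - 1);
  field[i][j] = atan2(mouseY - pmouseY, mouseX - pmouseX);
}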

This is my second example from last week: I just added a sin/cos lookup table, and the particles are now produced with their locations taken from that table. It could be improved into a fireball maybe…
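
The lookup-table trick itself is just paying the trig cost once in setup(). A minimal sketch of the idea (the constants and the wobble are placeholders, not the actual example):

// Minimal sin/cos lookup-table sketch: precompute the tables once,
// then place particles by indexing into them.
int STEPS = 360;
float[] sinTab = new float[STEPS];
float[] cosTab = new float[STEPS];

void setup() {
  size(400, 400);
  for (int i = 0; i < STEPS; i++) {
    float a = TWO_PI * i / STEPS;
    sinTab[i] = sin(a);   // trig computed once, up front
    cosTab[i] = cos(a);
  }
}

void draw() {
  background(0);
  stroke(255, 160, 0);
  for (int i = 0; i < STEPS; i += 4) {
    int idx = (i + frameCount) % STEPS;              // spin the ring over time
    float r = 100 + 30 * sinTab[(idx * 3) % STEPS];  // wobble the radius
    point(width / 2 + r * cosTab[idx], height / 2 + r * sinTab[idx]);
  }
}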

So this is my third example, which I totally screwed up. It was supposed to be this: last week, after I got bored with the magnetic force concept, I thought it would be great to mix particle systems with waves, which here means springs. Then I started searching on the subject and came up with some really good explanations:


Pixar – Physically Based Modeling: Particle System Dynamics (PDF)
Particle System Example – Paul Bourke
and the smooth Traer Physics library.

A normal person wouldn't bother with the underlying equations, but since I am the biggest geek and also the biggest failure in math, I took this as an opportunity to prove how much I suck at math once again. I crashed into the concept of derivatives. It is actually kind of interesting: you can use it to calculate the positions of your objects at different timesteps, and it could come in handy at some point. So I got totally absorbed in it and tried to port the C code into Processing (you can see the result), until yesterday, when Shiffman reminded me about Zachary Lieberman's Madrid workshop website.

It turns out he applied the Euler method to his particle system, and it is in Processing! That's what I had been looking for all week: an implementation of ODEs. So yesterday I sat down and rewrote the C-based code, referencing Lieberman's approach.
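
The core of it is a pattern simple enough to fit in a few lines. A minimal 1D spring sketch of the Euler step (placeholder names and constants of my own, not Lieberman's actual code):

// Euler integration of a damped spring: compute the force, then step
// velocity and position forward by dt.
float x, v = 0;        // position and velocity (1D for clarity)
float restX;           // spring anchor
float k = 0.2;         // spring stiffness
float damping = 0.98;
float dt = 1.0;        // timestep

void setup() {
  size(400, 200);
  restX = width / 2;
  x = 50;
}

void draw() {
  background(255);
  float force = -k * (x - restX);  // Hooke's law: F = -k * displacement
  v = (v + force * dt) * damping;  // Euler step for velocity (mass = 1)
  x = x + v * dt;                  // Euler step for position
  line(restX, height / 2, x, height / 2);
  ellipse(x, height / 2, 20, 20);
}

A smaller dt gives a more accurate simulation, which is exactly the timestep idea from all that derivative stuff above.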

Of course it couldn't all work in one night… so I went ahead and hacked Lieberman's original code to create what I wanted to create. In the end it was a great labor for me overall. I enjoyed it very much, and I also realized that even if I am curious about the math side of these applications, I shouldn't lose sight of the whole picture.

LIPP & NIME collaboration

Friday, February 17th, 2006

Following Josh's great idea of collaborating for the end-of-semester show at Tonic, I am collecting the sources I have found that could be useful as think-tank material. I was reading Joseph Paradiso's paper Electronic Music Interfaces, and he points out good ideas about how to approach building an electronic musical interface.

I think the question we have to ask ourselves before starting any production is how we are going to approach it. Is it going to be an extension of an existing acoustic instrument, or a totally new interface? Personally, I am currently curious about two things: using slit-scanning as a real-time composition tool, and using brainwaves to perform and compose.

I have found a couple of good papers (and pages) around those subjects:
Computer Music Research page of Plymouth University, UK
Interfacing the Brain Directly with Musical Systems: On developing systems for making music with brain signals
Tactile Composition Systems for Collaborative Free Sound
An Informal Catalogue of Slit-Scan Video Artworks by Golan Levin.

Those could be a good starting point for inspiration.

The Dumpster

Thursday, February 16th, 2006

I have been looking through The Dumpster on the Tate's site, and I saw Lev Manovich's article about the project. It was interesting to me, and maybe to you guys too.

I think the article itself is important for how it frames data visualization: not merely as simple representation, but as revealing the details hiding inside the data. And the interest that we, as users, take in it is a great clue to how important data visualization is in our lives. We can see a similar approach at the Seattle Public Library, where six LCD screens on a glass wall show a visualization of the circulation of the library's items. Making the invisible visible.

For Noah Wardrip-Fruin (and I totally agree), what draws the line between these projects and a simple visualization is that their outcome spits out an idea rather than flying numbers or texts.

thoughts and ideas

Tuesday, February 14th, 2006

After reading the paper Direct Manipulation vs. Interface Agents, I think we have already left the era where we were happy just to reach all the information faster; now it has become much more important to get exactly the information we need, free of all the noise it comes packed in. Take RSS as an example: although it changed the way I surf the web and helped me reach information faster, and it was a good thing (which I still believe it is, more than most things), I have subscribed to so many channels (subscribing to a feed is just that easy) that I cannot keep up with them anymore. I am constantly thinking about how to bridge the gap between this information overkill and my needs. I have tried basic solutions like sorting different kinds of feeds into folders, using smart folders, and so on, but the items keep on coming, and they are not going to stop, I guess. I believe an application using an approach similar to Bayesian filtering could be a good way to stop this nonsense: an agent I can train to sort incoming items according to my current (and changing) needs, and where I can watch a visualization, with different cues, of the channels that are relevant to me. Applying these (not only Bayesian but other kinds of sorting and filtering algorithms too) to the channels I am fed by could be an answer to my needs… or maybe not.
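
To make that concrete, the scoring step I imagine would look something like this toy sketch (every name here is hypothetical, and a real agent would still need feed parsing, persistence, and the visualization on top):

// Toy Bayesian-style scoring: count words in items marked "relevant"
// vs. "noise", then score new items by summed log-likelihood ratios.
import java.util.HashMap;

HashMap<String, Integer> relevant = new HashMap<String, Integer>();
HashMap<String, Integer> noise = new HashMap<String, Integer>();
int relevantTotal = 1, noiseTotal = 1;  // start at 1 to avoid division by zero

void train(String text, boolean isRelevant) {
  for (String w : text.toLowerCase().split("\\W+")) {
    if (w.length() == 0) continue;
    HashMap<String, Integer> counts = isRelevant ? relevant : noise;
    counts.put(w, counts.containsKey(w) ? counts.get(w) + 1 : 1);
    if (isRelevant) relevantTotal++; else noiseTotal++;
  }
}

float score(String text) {
  float s = 0;
  for (String w : text.toLowerCase().split("\\W+")) {
    if (w.length() == 0) continue;
    int r = relevant.containsKey(w) ? relevant.get(w) : 0;
    int n = noise.containsKey(w) ? noise.get(w) : 0;
    // add-one smoothing so unseen words don't blow the score up
    s += log((r + 1) / (float) relevantTotal) - log((n + 1) / (float) noiseTotal);
  }
  return s;  // positive: reads like my "relevant" items
}

Train it on a handful of items I actually read versus ones I skipped, then sort the unread queue by score() and see whether the top of the list earns any trust.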

The problem with such an agent is that I am just-not-ready-to-give-control-to-it. Rephrasing: I would have to trust it with my whole heart to make the vital selections for me. I think the biggest problem lies right here. So to avoid it, we must pull our dependence back to a level where there is a balance between our trust and its selections. It is an interesting subject, b/c my habit of subscribing to a shitload of channels becomes obsolete with that kind of approach, doesn't it? Well, I dunno. I guess I should build such an agent and test it…

Controllers

Monday, February 13th, 2006

Max Mathews and Bill Verplank have been teaching a nice class at Stanford for the last couple of years, and I am only now aware of it. The class is about building and experimenting with new musical controllers; the videos seem interesting, and the syllabus is worth checking periodically.

Joseph Paradiso (director of the Responsive Environments Group) has a paper from 1998 in which he describes new electronic music interfaces.

MIT ambient intelligence group

Monday, February 13th, 2006

This looks kind of interesting and could be a path to pursue for a PhD; actually, it looks like it IS the way for me to go. I have checked out the woman behind it, Pattie Maes, and the class she teaches, Ambient Intelligence; the syllabus looks really promising. Lots of doors ready to be opened from there. Thinking briefly: I am interested in building objects that close the gap between the audience and the device, objects that promise to make our lives easier, and this group seems like a nice environment to work in.

Jitter

Monday, February 13th, 2006

The jit.op object lets you perform mathematical operations on all the data in a Jitter matrix at once.

The math operation is determined by the @op attribute, which takes one operator per plane.
ex jit.op @op pass + - * means: pass the alpha plane, add to the first color plane, subtract from the second, and multiply the last plane by the scalar.

You can set the size of a jit.pwindow by sending a size message (e.g. size 320 240) to its left inlet.

stop stops the movie playback; start starts it.
vol controls the volume of the movie.
frame jumps to a specific frame.

You can get important information about the movie currently loaded into the jit.qt.movie object by querying attributes such as duration, timescale, and fps. Querying gettime spits out the current time.
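
For instance, prepending get to an attribute name queries it: sending getduration to jit.qt.movie makes it report something like duration 2400 out of its rightmost outlet, where a route duration object can pick the value out (the numbers here are just an illustration).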

The looppoints message sets loop points for your movie. It takes two arguments: the starting and ending times, in QuickTime time units.
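
e.g. with a timescale of 600, looppoints 600 1200 would loop the section between the one- and two-second marks (illustrative numbers again).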

random, given an argument n, outputs a random number between 0 and n-1 on each bang.
select compares the number (or symbol) arriving at its left inlet with its arguments and bangs the corresponding outlet on a match.
key spits out the ASCII value of the key you press.

jit.unpack splits a multiplane matrix into individual one-plane matrices, e.g. separating a 4-plane matrix into its A-R-G-B planes.

The gate object routes its input to one of several outlets, selected by the number in its left inlet.
The coll object stores lists (here, permutations) for later use, e.g. 0, 1 2 3; 1, 1 3 2; 2, 2 1 3; … You can then unpack a stored list and send its values to gate objects to choose their outlets, and thus swap between the values.
The line object generates ramps and line segments.

We can give a jit.matrix a name, like this: jit.matrix bob 4 char 320 240 (a 4-plane char matrix, 320 by 240, named bob).
The interp message turns on interpolation, which smooths (blurs) the pixels of the movie when it is scaled.

jit.brcosa (brightness / contrast / saturation)
jit.rota (rotation / zoom)

Digital Literature – Interview with Noah Wardrip-Fruin

Saturday, February 11th, 2006

There is a nice interview at artificial.dk with Noah Wardrip-Fruin.

I came to this work through a fascination with possibilities of words, and with the sense that undifferentiated flow down a page wasn't the right medium for the text I wanted to write.

The time of reading and the time of the processes overlap. There's text and then there are processes, and the processes enact something in connection with the actions of the reader, at the time of reading, that is related to the themes of the text but not the same. Text that responds, on a textual level, to things that happen at the time of reading, such as actions on the part of the reader.

That makes me think about form as the transformation of concepts. The work should be transparent and give feedback to the audience in a shape the audience can think something out of.

Also, lately I came across different sites that rely mostly on data [one was about datablogging, and the other was the Seattle library visualization, which takes the books being checked out as its elements and projects them on the wall of the library], and it was dichtung-digital, I guess, where I read about when visualization is art and when it is not. Actually, the question of whether something is an art piece is not my biggest concern; what matters is finding new approaches to visualization, taking the data and transforming it in a way that gives something to your audience, a message…

Camille Utterback's work Text Rain is another use of text that is different from what we are used to. It is, in Wardrip-Fruin's words, a shift from text representing a story to text representing an idea.

He has a good article back in dichtung-digital's previous issue: here.

well said.

Jitter Tutorials Start

Saturday, February 11th, 2006

At last…

jit.qt.movie's primary job is to open a movie and read each frame, one after another, into RAM for easy access by other Jitter objects.
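
For instance, sending a read mymovie.mov message to its inlet loads a movie (the filename is just a stand-in; read with no argument opens a file dialog instead).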

jit.window takes that data from RAM and shows it as colored pixels on the screen (in an external window).
jit.pwindow does the same thing inside the patcher.

mousestate reports the mouse's screen coordinates through its outlets.
scale maps an input range of values onto an output range, e.g.:
scale 0 600 -1 1

rate is the playback speed of the movie: 1 is the default, and -1 plays the movie in reverse.

We can map a mouse coordinate to the playback speed of the movie by sending the value through a rate $1 message to the inlet of jit.qt.movie.

Then there is the setcell message, which lets us set an exact cell of the matrix data and display it in the jit.pwindow.

It works like this: setcell x y val v, where v is 0-255 for a char matrix.
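
In the patch below, the message setcell $1 $2 val $3, bang fills x, y, and the value in from number boxes; hardcoded, it might read setcell 5 3 val 255, bang, which sets the cell at column 5, row 3 to white and then outputs the matrix.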

The pack object takes numbers and spits them out as a single list.
pak is similar to pack, but input to any inlet (not just the left one) causes output.

Here is a small snippet that uses pak and lets you change the matrix data instantly, watching the result in the jit.pwindow plus the Max window.

#P window setfont “Sans Serif” 9.;
#P window linecount 1;
#P message 288 67 60 196617 clear \, bang;
#P comment 111 108 100 196617 value;
#P comment 60 108 100 196617 y;
#P newex 35 173 52 196617 pak 0 0 0;
#P number 110 129 35 9 0 255 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P number 63 129 36 9 0 12 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P number 20 129 35 9 0 15 3 3 0 0 0 221 221 221 222 222 222 0 0 0;
#P message 268 112 129 196617 setcell \$1 \$2 val \$3 \, bang;
#P user jit.pwindow 267 238 162 122 0 1 0 0 1 0;
#P newex 269 192 117 196617 jit.matrix 1 char 16 12;
#P comment 22 108 100 196617 x;
#P newex 191 229 46 196617 jit.print;
#P fasten 2 0 0 0 196 210;
#P connect 2 0 3 0;
#P connect 7 0 8 2;
#P connect 6 0 8 1;
#P connect 5 0 8 0;
#P fasten 8 0 4 0 40 200 167 200 167 108 273 108;
#P fasten 11 0 2 0 293 142 274 142;
#P connect 4 0 2 0;
#P window clipboard copycount 12;