{ Brain } – beta version

Brain is an automaton that shows how the brain works when you think. It is composed of handmade steel wire wheels and knotted rope acting as a pulley system. Brain functions in two ways:

1) Conscious –> Hand-cranking the steel wheel makes the shells fluctuate and also rotates the wheels on the other face of the brain.

2) Subconscious –> An ultrasonic sensor measures distance, so part of the brain functions automatically once the user is in position / wearing the helmet.
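The distance trigger for the subconscious mode can be sketched roughly as follows. This is a hypothetical Python illustration, not the project's actual firmware: the function names and the 30 cm threshold are my assumptions, based on how an HC-SR04-style ultrasonic sensor reports a round-trip echo time.

```python
# Hypothetical sketch of the subconscious-mode trigger: an ultrasonic
# sensor reports the echo round-trip time in microseconds; convert it to
# centimeters and decide whether a wearer is in position. The threshold
# value is an assumption, not taken from the project.

SPEED_OF_SOUND_CM_PER_US = 0.0343  # speed of sound at roughly 20 ºC

def echo_to_cm(echo_us):
    # The echo time covers the trip out and back, so halve it.
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2

def user_in_position(echo_us, threshold_cm=30.0):
    # The motors would run only while this returns True.
    return echo_to_cm(echo_us) < threshold_cm
```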

 

Exposing yourself & Taking in the unknown

The most exciting/weird experience of this project is the moment you stick your head into it: the motor above you starts moving; the vibration and sounds of the pulley system; the view you see through the wood cubes, knotted ropes, and steel wheels. It's a mixed feeling of exposing yourself to vulnerability and taking in the unknown, activating all your senses in the narrow space.

 

2D/paper –> 3D/physical

Inspired by renowned kinetic sculptor Arthur Ganson, I decided to use steel wire as the medium for my Automata final project. Based on photos and videos collected from the internet, I sketched out the shape first, and then tried to figure out how to bend the steel wire into wheels.


It was a trial-and-error process, but thanks to the magical Zoe Logan, I learned to use different pliers and jigs to make proper (at least better than freestyle) steel wheels. The whole journey was a bit of a pain, but the result is satisfying. Besides boosting my stamina with heavy labor, I'm glad the effort accumulated into something weird and terrifying-looking.

 

I learned a lot from this process of transforming 2D ideas into 3D physical form: the pros and cons of wire bending compared to thick steel connected by welding; the limits of connecting the wheels in series with knotted ropes; and the necessity of a middle wheel between two big wheels to increase tension.

 

gamma version

Multiple motors to fully construct the node system of the brain on all faces. Peel the paper off the acrylic sheets so it'd be totally transparent. It could have multiple hand-crank nodes as well, so it would become a cooperative "thinking" system.


 

Related posts

Concept Sketch, Process of Making.

 

5-in-5_Day3_Breathe

tease (a teaser)

More to come, too cold to edit at 4am…

**Update**

Inspired by Wriggles & Robins (check out theirs!).
Set up in my bedroom with the window wide open at night, temperature -12ºC/10ºF,
an LG HX350T projector, a lot of layers, and hot water standing by. Animation made in AE!

For now, it's just a simple one-day projection test. In the future, I'd love to connect this with more interactions, exploring the possibility of making it generative art about emotions and communication. Pam suggested using word recognition to detect what the user is saying, which would easily avoid pre-rendering! It seems to have a lot of potential. Exciting!

Oh, there's one big can't-unsee problem: it's restricted by the temperature and the amount of breath! Hmmm…

Triangulation!

Triangulation… finally! I've wanted it so badly all semester. Luckily I had winter break to do it. There's a lot of room for improvement, but still, one step further!

*UPDATE* (in the middle of writing this post, UGH)

New version. Normal speed!

In the middle of documentation, the poor performance, both the low running speed and the wrong color picking, bothered me a lot, so I went through the whole code again and made adjustments. And then I found what the problem was: the random plotting of points for the triangles! It not only slows down the process but also causes the wrong colors to be chosen for the triangles. OH YEAH, SO HAPPY. It's the kind of "bitter yet sweet moment" I usually have when coding… I think I'm ready for the new semester!

Old version. 3x speed! Slow and chaotic.

Old line version. Slow, but it seems to have a better outcome with the random point-plotting method!

 

Image Gallery


 

Basic idea of code

  1. Capture an image from the webcam and save it as a PImage source.
  2. Iterate through the source, pick up every pixel, and compare either a) colors or b) brightness difference (I found comparing by colors to be more accurate).
  3. If the difference is bigger than a certain threshold, plant a point in the Voronoi class. Voronoi does all the calculation to transform points into triangles.
  4. For each triangle returned by Voronoi, use getCentroid() to pick the color, then fill the triangle built with beginShape(), endShape(), and vertex().
  5. Wipe out the Voronoi and build a new one every 10 frames to speed up performance.
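Steps 1-3 above can be sketched roughly like this. It's a hypothetical Python illustration, not the original Processing code: the function names, the 8-pixel sampling step, and the threshold of 60 are my assumptions.

```python
# Hypothetical sketch of steps 1-3: compare the current frame against a
# reference image (by color, which I found more accurate than brightness)
# and plant a point wherever the difference exceeds a threshold.
# Frames are 2D lists of (r, g, b) tuples; all names/values are assumed.

def color_diff(c1, c2):
    # Euclidean distance in RGB space.
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def plant_points(frame, reference, threshold=60.0, step=8):
    points = []
    for y in range(0, len(frame), step):
        for x in range(0, len(frame[0]), step):
            if color_diff(frame[y][x], reference[y][x]) > threshold:
                points.append((x, y))  # would be fed to the Voronoi class
    return points
```

In the real sketch, each planted point goes into the Voronoi object, which turns the point set into triangles.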

 

References

  • From Robbie Tilton, an ITP alumnus! This is where my base came from! The clear description made me unafraid to try the code out. From him, I got the idea of using Cols and Rows to boost performance, and of plotting points for triangles with a random deviation of 5 pixels to make it look less grid-like (BUT randomness is not good for picking color, and it also slows down performance).
  • From Jan Vantomme. Very good documentation! From him, I learned the difference between getCentroid() and getSites(), and also that, since getCentroid() and getSites() don't return the Voronoi regions and points in the same order, filling the right color at the right position requires looping through them, picking and filling the color at the same time.
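The Cols/Rows grid with a ±5-pixel random deviation described in the first reference might look something like this; `grid_points` and its parameters are my own hypothetical names, not Robbie Tilton's code.

```python
# Hypothetical sketch of grid-based point plotting with random jitter:
# one candidate point per grid cell, offset by up to `jitter` pixels so
# the resulting triangulation looks less grid-like.

import random

def grid_points(width, height, cols, rows, jitter=5):
    pts = []
    for j in range(rows):
        for i in range(cols):
            x = int((i + 0.5) * width / cols) + random.randint(-jitter, jitter)
            y = int((j + 0.5) * height / rows) + random.randint(-jitter, jitter)
            pts.append((x, y))
    return pts
```

The trade-off noted above still applies: jitter breaks up the grid look, but the displaced points no longer line up with the sampled colors, and the extra randomness costs speed.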

 

Inspiration & Further

 


Quick notes after Winter Show 2013

Dope Ropes


Feedback

  • “Is it NIME? Just PComp? Wow good job guys, amazing!”
  • “What does it do?”
  • “Wow I like that, check this out, this is my favorite!”
  • “Why do you choose those sounds?”
  • advice on the rope attachment
  • very satisfying when pulling
  • bell instrument
  • “Are they sex tool ropes?”
  • pretty on and off switch LED
  • “LED is fun.”
  • “Are you a musician?”
  • “Can you compose a song with this?”
  • Use headphones for this.
  • Speakers can be located separately for each set.
  • People are afraid they’re going to pull it down; afraid to pull in the beginning

Aftershow Notes

  • too high to adjust; all the wires need to be glued to be stable
  • XBee data sending –> glitchy!
  • pulling instrument –> potential, because it’s rare; the only association people have is ringing a bell
  • people seem to relate more to recognizable sample sounds
  • the pitch-changing one needs to be more subtle and gentle
  • a pulling demonstration helps people understand

 

 

Glitchtchtchitch


Feedback

  • some just pass by
  • some enjoy it, standing for a long time and coming back later
  • people feel more comfortable when I’m not beside the stand
  • watching –> realizing –> smiling –> seeing EyesMouthes –> laughing
  • “Ha. Big brother is watching.”
  • “How do you say this title?”  “I see. Sounds like what they look like.”
  • “Fun” “Very interesting”
  • “Hmm”
  • “What should I do?” “How does it work?”
  • “Can I take a picture with it?”
  • “Can I record this?”
  • people wave at the camera
  • “It’d be cool if it can detect me smiling”
  • interested in how it works
  • “I want to take this home and put it in my room.”

Aftershow Notes

  • If workable, it would be more interesting to open more sketches
  • Ideally two computers with 4 projectors!
  • could add more interaction functions, e.g. detecting smiles and laughter
  • learn more about surveillance and the psychology of people’s reactions to it

ICM_Glitchtchtchitch

ICM Final – Glitchtchtchitch.

Manipulation and surveillance visualization.

Featured in ITP Winter Show 2013 (see all the pics!).

Glitchtchtchitch is a live interactive installation showing multiple short-lived faults in a system. By bringing out the imperfection of technology with massive pixel manipulation, sound distortion, and multi-screen display, Glitchtchtchitch visualizes transient faults and the incapability of communication. Although it mainly sends serious messages, with the effects of headless illusions, head displacement, and delay, it lets the audience undergo the experience without too much pressure.

Glitchtchtchitch is presented by running more than 10 Processing sketches at the same time, using 2-3 projectors to increase the number of screens, the variety, and the level of distortion.

 

Main idea –> In order to cubify heads, instead of just altering pixels, I made a “Cube” object to get, store, alter, and display the pixels of a specific range. To achieve the headless effect, besides the OpenCV library, I took a background image beforehand and displayed its pixels within certain ranges once a face was detected. ALL THE SOURCE CODES
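The headless trick, overwriting the detected face region with pixels captured beforehand from the empty background, can be sketched like this. It's a hypothetical Python version operating on a flat row-major pixel array, like Processing's pixels[]; the names are mine, not from the source code.

```python
# Hypothetical sketch of the headless effect: overwrite the detected face
# rectangle with pixels from a background frame captured beforehand.
# `pixels` and `background` are flat row-major arrays of width `w`.

def erase_face(pixels, background, w, face_x, face_y, face_w, face_h):
    out = list(pixels)
    for y in range(face_y, face_y + face_h):
        for x in range(face_x, face_x + face_w):
            out[y * w + x] = background[y * w + x]
    return out
```

In the installation, the face rectangle would come from the OpenCV face detector each frame; the Cube object uses the same pixel-range idea to grab, store, alter, and redraw regions.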

 

Notes

  • The speed has room for improvement.
  • The presentation style at different scales (projectors included) looks nice.
  • From user tests and presentation feedback, people love the headless and delay effects the most, because they have the most bizarre, unrealistic, and uncommon visual impact.

Problems with solutions

  • OutOfBounds exception —> constrain(x, 0, numPixels-1)
  • horizontal flip —> sample video.width-fx-1
  • can’t cover an image with pixels[ ] —> solved by using pixels for both
  • improving sketch speed —> P2D, PFrame
  • connecting to a webcam / PS3 Eye —> camera list, example
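The first two fixes boil down to simple index arithmetic. Here is a hedged Python illustration: `constrain` mirrors Processing's built-in of the same name, and `mirror_index` is my own hypothetical name for the horizontal-flip lookup.

```python
# Illustration of the first two fixes: clamp a pixel index into range
# (like Processing's constrain()), and sample the mirrored column for a
# horizontal flip. Function names are my own.

def constrain(v, lo, hi):
    # Clamp v to the closed interval [lo, hi].
    return max(lo, min(hi, v))

def mirror_index(x, y, width):
    # Horizontal flip: read column (width - x - 1) instead of x.
    return y * width + (width - x - 1)
```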

References

  • scale PImage http://stackoverflow.com/questions/17705781/video-delay-buffer-in-processing-2-0
  • Minim noise http://code.compartmental.net/tools/minim/manual-noise/
  • hide menu http://processing.org/discourse/beta/num_1224367967.html

Original proposal –> http://jhclaura.com/Glitchtchtchlitch_proposal/Glitchtchtchlitch_proposal.pdf