{"id":782,"date":"2017-11-16T11:16:30","date_gmt":"2017-11-16T10:16:30","guid":{"rendered":"http:\/\/davikingcode.com\/blog\/?p=782"},"modified":"2022-12-11T16:29:19","modified_gmt":"2022-12-11T15:29:19","slug":"franc-tamponnage-and-creative-coding","status":"publish","type":"post","link":"https:\/\/davikingcode.com\/blog\/franc-tamponnage-and-creative-coding\/","title":{"rendered":"Franc-Tamponnage and creative coding"},"content":{"rendered":"<p>For one month, our Bourgogne-Franche-Comt\u00e9 region was shaken: the music festival <a href=\"https:\/\/www.facebook.com\/Franc.Tamponnage\/\" target=\"_blank\" rel=\"noopener\">Franc-Tamponnage<\/a>, dedicated to alternative, electronic and extreme music, was in full swing. Our friends at <a href=\"https:\/\/www.facebook.com\/MagnaVoxProductions\/\" target=\"_blank\" rel=\"noopener\">Magna Vox<\/a> planned dozens of concerts throughout the region.<\/p>\n<p>For a specific set of concerts, we were in charge of creating a nice visual experience for visitors. It was the perfect opportunity for some creative coding! 
We were 3 coders on this project: Julien experimenting for the first time with <a href=\"http:\/\/openframeworks.cc\/\" target=\"_blank\" rel=\"noopener\">openFrameworks<\/a>, Tamsen playing around with one of his favorite toys, <a href=\"https:\/\/processing.org\/\" target=\"_blank\" rel=\"noopener\">Processing<\/a>, and Aymeric making some good old Flash!<\/p>\n<h4>Here&#8217;s a recap from each of us:<\/h4>\n<p><!--more--><\/p>\n<h5>Aymeric:<\/h5>\n<p>I worked on the poster designed by <a href=\"http:\/\/letachepapier.fr\/\" target=\"_blank\" rel=\"noopener\">Le T\u00e2che Papier<\/a> and added a water-ripple effect in AS3:<br \/>\n<a href=\"http:\/\/davikingcode.com\/blog\/wp-content\/uploads\/2017\/10\/franc-tamponnage.gif\"><img loading=\"lazy\" class=\"alignnone size-full wp-image-799\" src=\"http:\/\/davikingcode.com\/blog\/wp-content\/uploads\/2017\/10\/franc-tamponnage.gif\" alt=\"\" width=\"684\" height=\"508\" \/><\/a><\/p>\n<h5>Tamsen:<\/h5>\n<p>I worked with <a href=\"https:\/\/processing.org\/\">Processing 3<\/a> to create a music-controlled image-distortion &#8216;visualizer&#8217;. The images came from participating artist <a href=\"https:\/\/www.facebook.com\/pierre.berthier.officieux\">Pierre Berthier<\/a> (black-and-white processed scans of sketches), so the base Processing output was black on white, nothing fancy. 
It was then composited live and manipulated by a technician who had control over what was projected and how, using <a href=\"http:\/\/www.millumin.com\/v2\/index.php\">Millumin<\/a>.<\/p>\n<p>Here&#8217;s an example:<br \/>\n<iframe loading=\"lazy\" src=\"https:\/\/player.vimeo.com\/video\/240478366\" width=\"640\" height=\"564\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><br \/>\nThe hope was for live music to control how the images changed and were distorted, and the Processing sketch was ready for such use; but due to technical and time constraints, a prerecorded video of the Processing output was looped instead, used as the technician saw fit during the concert.<\/p>\n<p>Using the Processing library <a href=\"http:\/\/code.compartmental.net\/tools\/minim\/\">minim<\/a> by compartmental, a simple forward <a href=\"https:\/\/en.wikipedia.org\/wiki\/Fast_Fourier_transform\">FFT<\/a> is applied to the input audio. This gives us spectral data from the incoming signal: in short, a stream of values, each varying with how much low, medium or high frequency content the sound contains (the number of &#8216;divisions&#8217; or <a href=\"https:\/\/en.wikipedia.org\/wiki\/Frequency_band\">bands<\/a> extracted from the signal can be chosen in advance; we used 6).<\/p>\n<p>This is the most basic way to react to sound, and it is the same algorithm used to display a familiar spectral visualization such as this one:<\/p>\n<p><a href=\"https:\/\/aymcreations.deviantart.com\/art\/Equalizer-385871495\"><img loading=\"lazy\" class=\"aligncenter size-medium wp-image-810\" src=\"http:\/\/davikingcode.com\/blog\/wp-content\/uploads\/2017\/10\/equalizer_by_aymcreations-d6dqkcn1-300x188.jpg\" alt=\"https:\/\/aymcreations.deviantart.com\/art\/Equalizer-385871495\" width=\"300\" height=\"188\" 
srcset=\"https:\/\/davikingcode.com\/blog\/wp-content\/uploads\/2017\/10\/equalizer_by_aymcreations-d6dqkcn1-300x188.jpg 300w, https:\/\/davikingcode.com\/blog\/wp-content\/uploads\/2017\/10\/equalizer_by_aymcreations-d6dqkcn1-768x480.jpg 768w, https:\/\/davikingcode.com\/blog\/wp-content\/uploads\/2017\/10\/equalizer_by_aymcreations-d6dqkcn1.jpg 1024w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>So, for each band in the spectrum, we can play around with some parameters.<\/p>\n<p>In this case, high frequencies swap the image used; the image itself is displayed on a quad whose vertices are moved in space, as are the UVs themselves. There are also effects such as a faked motion blur (made by not clearing the background of the buffer).<\/p>\n<p>Here&#8217;s a simple sketch to get started: <img loading=\"lazy\" class=\"aligncenter size-medium wp-image-811\" src=\"http:\/\/davikingcode.com\/blog\/wp-content\/uploads\/2017\/10\/snip_20171116102342-298x300.png\" alt=\"\" width=\"298\" height=\"300\" srcset=\"https:\/\/davikingcode.com\/blog\/wp-content\/uploads\/2017\/10\/snip_20171116102342-298x300.png 298w, https:\/\/davikingcode.com\/blog\/wp-content\/uploads\/2017\/10\/snip_20171116102342-150x150.png 150w, https:\/\/davikingcode.com\/blog\/wp-content\/uploads\/2017\/10\/snip_20171116102342.png 596w\" sizes=\"(max-width: 298px) 100vw, 298px\" \/><\/p>\n<pre class=\"brush: java; title: ; notranslate\" title=\"\">\n\nimport ddf.minim.*;\nimport ddf.minim.analysis.*;\n\nMinim minim;\nAudioInput input;\nFFT fft;\n\nint bands = 6; \/\/ number of 'bands' we wish to extract from the audio input\n\nfloat[] spectrum = new float[bands];\n\nvoid setup() {\n  size(600,600);\n\n  minim   = new Minim(this); \/\/create minim\n  input = minim.getLineIn(Minim.STEREO); \/\/start line in (audio input)\n\n  fft = new FFT(input.bufferSize(),input.sampleRate()); \/\/create fft according to input\n  fft.linAverages(bands); 
\/\/setup for linear averages\n  fft.window(FFT.GAUSS); \/\/window algorithm\n\n  background(255);\n  smooth(4);\n\n  colorMode(HSB);\n  noStroke();\n}\n\nfloat getFFT(int i) {\n  float val = spectrum[i];\n  \/\/Here, one could correct values, remap or scale them based on index etc...\n  return val;\n}\n\nvoid doSpectrum() {\n  fft.forward(input.mix);\n\n  for (int i = 0; i &lt; bands; ++i)\n    spectrum[i] = fft.getAvg(i);\n}\n\nvoid draw() {\n  doSpectrum();\n\n  background(255);\n\n  float d = (float)width\/(float)bands;\n  float hueDiv = 255.0\/(float)bands;\n\n  for(int i = 0; i &lt; bands; i++) {\n\n    color c = color(i*hueDiv,255,255);\n    fill(c);\n\n    float h = getFFT(i) * height;\n\n    rect(i*d,height - h,d,h);\n  }\n\n}\n\nvoid stop()\n{\n  input.close(); minim.stop();\n  super.stop();\n}\n\n<\/pre>\n<p>This sample just draws a histogram of the spectrum. You will also notice that the higher the band in the frequency domain, the smaller its values; one should (in getFFT, for example) remap or scale values based on the band index to get more workable numbers. That code is not included here for clarity, and there are many possible approaches.<\/p>\n<p>To be clear, what we call bands here are averages over several FFT bands, since we don&#8217;t need the full raw data the FFT would extract. Anyway, this was enough for our use case. More complex sound analysis, such as timbre analysis, could be done with more audio-oriented software such as Pure Data and the <a href=\"https:\/\/puredata.info\/downloads\/timbreid\">timbreID<\/a> library, which goes beyond the FFT to a finer analysis of the &#8216;nature&#8217; of the sound. 
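<\/p>\n<p>On the band-scaling point above: as a minimal sketch only (the function name and constants are our own assumptions, not the code used at the festival), one possible approach is to boost each band&#8217;s average by a gain growing with its index, written here in C++ for brevity:<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">\n\/\/ Hypothetical helper: compensate for the smaller averages of higher bands\n\/\/ by applying a gain that grows linearly with the band index.\n\/\/ The 0.5f gain per index is an arbitrary starting point to tune by ear.\nfloat scaleBand(float value, int bandIndex) {\n  return value * (1.0f + 0.5f * bandIndex);\n}\n<\/pre>\n<p>With 6 bands, band 0 passes through unchanged while band 5 is boosted 3.5x; calling something like this from getFFT would be the natural place for it. 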
We almost went the Pure Data route, but it would have involved Wi-Fi communication and some hardware we unfortunately had no time to set up.<\/p>\n<h5>Julien:<\/h5>\n<p>I was new to creative coding and wanted to use C++ to program procedural effects on <a href=\"https:\/\/www.facebook.com\/pierre.berthier.officieux\">Pierre Berthier<\/a>&#8217;s pictures, so the famous and well-documented openFrameworks was my choice for the development.<\/p>\n<p>Since the images are black and white, I preferred to add some colors and play with them. In the time available, I developed 3 effects: meta-balls, a liquid filling with particle emission, and a liquid spreading effect.<\/p>\n<p>Here&#8217;s an example on a holographic device during the festival:<br \/>\n<iframe loading=\"lazy\" src=\"https:\/\/player.vimeo.com\/video\/240479114\" width=\"640\" height=\"564\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<p>The effects were not driven by the music, and we wanted control over the generation, so a sequencer was developed to &#8220;randomize&#8221; the display with patterns (effects with different colors, background images and other parameters). The patterns are stored in a JSON file.<\/p>\n<p>Initially, the plan was to generate the video at runtime on an embedded device, so we chose a Raspberry Pi 3. The Raspberry Pi is not a powerful computer, and CPU-side effects in openFrameworks are not very efficient at high resolutions (720p or 1080p): the Raspberry displayed effects at 25-32 FPS at 720p. So, to boost performance, I wrote most of the effects as shaders (meta-balls and liquid spreading).<\/p>\n<p><strong>Metaballs<\/strong><\/p>\n<p>The metaballs are a simple <a href=\"https:\/\/en.wikipedia.org\/wiki\/Implicit_surface\">implicit surface<\/a> built only from spheres, so the shader development wasn&#8217;t complicated. 
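<\/p>\n<p>To make the implicit-surface idea concrete, here is a standalone CPU-side sketch of the classic metaball field (an illustration under our own assumptions, not the actual festival shader): each ball contributes its squared radius divided by the squared distance to the sample point, and the fragment shader simply thresholds the summed field to decide whether a pixel is inside the surface.<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">\n#include &lt;algorithm&gt;\n\n\/\/ Field value at (x, y) for n metaballs.\n\/\/ centers holds flattened x,y pairs, like the shader uniforms below.\nfloat metaballField(const float* centers, const float* squareRadius,\n                    int n, float x, float y) {\n  float sum = 0.0f;\n  for (int i = 0; i &lt; n; ++i) {\n    float dx = x - centers[2 * i];\n    float dy = y - centers[2 * i + 1];\n    float d2 = std::max(dx * dx + dy * dy, 1e-6f); \/\/ avoid division by zero\n    sum += squareRadius[i] \/ d2;\n  }\n  return sum; \/\/ a pixel is 'inside' the surface when sum >= 1.0\n}\n<\/pre>\n<p>For a single ball of squared radius 4 at the origin, a point at distance 1 gives a field of 4 (inside), while a point at distance 4 gives 0.25 (outside). 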
The tricky part was the trails\/fill effects.<br \/>\n<a href=\"https:\/\/en.wikipedia.org\/wiki\/Framebuffer_object\">FBOs<\/a> (framebuffer objects) are used to create layers: one for rendering the metaballs and one for the trails. On the trails layer, we render the metaballs with an alpha to get an accumulation effect.<br \/>\nDuring the effect, the metaballs&#8217; color changes, and so does the trails&#8217; color; the blend between the old color and the new color was hard to get right. Here is the metaballs code; maybe it will save you some time.<\/p>\n<pre class=\"brush: cpp; title: ; notranslate\" title=\"\">\nofSetColor(255);\n\nconst int nbCircle = 10;\nfloat _metaballsCenter[nbCircle * 2];   \/\/ flattened x,y pairs, filled with the current ball positions each frame\nfloat _metaballsSquareRadius[nbCircle]; \/\/ squared radii, filled each frame\n\n_fbo.begin();\nmetaballsShader.begin();\nmetaballsShader.setUniform2fv(\"metaballsCenter\", _metaballsCenter, nbCircle);\nmetaballsShader.setUniform1fv(\"metaballsSquareRadius\", _metaballsSquareRadius, nbCircle);\nmetaballsShader.setUniform3f(\"inputColor\", _currentColor.r*.0039215, _currentColor.g*.0039215, _currentColor.b*.0039215); \/\/ .0039215 = 1\/255\n\n_fbo.draw(0, 0);\n\nmetaballsShader.end();\n_fbo.end();\n\nofEnableAlphaBlending();\nglBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_SRC_ALPHA, GL_ONE);\n\nofSetColor(220, 220, 220, 20); \/\/ semi-transparent: accumulates the trails\n\n_fbo2.begin();\n_fbo.draw(0, 0);\n_fbo2.end();\n\nofEnableAlphaBlending();\nofSetColor(ofColor::white);\n\n_fbo2.draw(x, y); \/\/ x, y: where the effect is drawn on screen\n_fbo.draw(x, y);\n<\/pre>\n<p><strong>Liquid filling with particles<\/strong><\/p>\n<p>The effect combines 3 layers: particles, trails, and the &#8220;liquid&#8221;.<\/p>\n<p>The particles are just circles with basic physics (velocity + mass + gravity).<br \/>\nThe trails are similar to the metaballs&#8217; trails. 
It&#8217;s an FBO with a semi-transparent color applied every frame.<br \/>\nThe liquid is a polygon with 3 borders aligned to the screen edges and the last border shaped by Perlin noise.<\/p>\n<p><strong>Liquid spreading<\/strong><\/p>\n<p>The liquid spreading is a very simplified liquid physics, based on an article by JGallant: <a href=\"http:\/\/www.jgallant.com\/2d-liquid-simulator-with-cellular-automaton-in-unity\/\">http:\/\/www.jgallant.com\/2d-liquid-simulator-with-cellular-automaton-in-unity\/<\/a>.<br \/>\nThe effect is shaped by an obstacle map stored in an image: the green channel is used, and the green value gives the &#8220;height&#8221; of the obstacle at that pixel.<br \/>\nOn another image, sources of colored fluid are placed. The liquid is stored in the pixel channels (r, g, b), so we can manage three different liquids at the same time.<br \/>\nEvery frame, each pixel adjacent to a pixel containing liquid receives a fixed amount of liquid (color); the sources themselves never run dry, and keep emitting until the maximum level (255) is reached. So if a pixel is near sources of different colors, the colors are added in that pixel, blending the primary colors.<br \/>\nWhen a pixel is on an obstacle, we compare the liquid level of the nearest pixel with the obstacle level: if the liquid level is higher than the obstacle, the liquid colors are added to the pixel.<br \/>\nFinally, a replacement color is set for each primary color (r, g, b), and the effect is done.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>For one month, our Bourgogne-Franche-Comt\u00e9 region was shaken: the music festival Franc-Tamponnage, dedicated to alternative, electronic and extreme music, was in full swing. Our friends at Magna Vox planned dozens of concerts throughout the region. For a specific set of concerts, we were in charge of creating a nice visual experience for visitors. 
It was &hellip; <a href=\"https:\/\/davikingcode.com\/blog\/franc-tamponnage-and-creative-coding\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Franc-Tamponnage and creative coding<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_discordance_state":"","_discordance_checked":true},"categories":[15],"tags":[22,49,48,46,47,45,33],"_links":{"self":[{"href":"https:\/\/davikingcode.com\/blog\/wp-json\/wp\/v2\/posts\/782"}],"collection":[{"href":"https:\/\/davikingcode.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/davikingcode.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/davikingcode.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/davikingcode.com\/blog\/wp-json\/wp\/v2\/comments?post=782"}],"version-history":[{"count":34,"href":"https:\/\/davikingcode.com\/blog\/wp-json\/wp\/v2\/posts\/782\/revisions"}],"predecessor-version":[{"id":1237,"href":"https:\/\/davikingcode.com\/blog\/wp-json\/wp\/v2\/posts\/782\/revisions\/1237"}],"wp:attachment":[{"href":"https:\/\/davikingcode.com\/blog\/wp-json\/wp\/v2\/media?parent=782"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/davikingcode.com\/blog\/wp-json\/wp\/v2\/categories?post=782"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/davikingcode.com\/blog\/wp-json\/wp\/v2\/tags?post=782"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}