#victoriaalbertmuseum — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #victoriaalbertmuseum, aggregated by home.social.

  1. Some historical documents from 2009 for #TextureTuesday... A high-res render and some detail crops of the award-winning generative typography & branding system/application I designed & developed for the DECODE exhibition at the Victoria & Albert Museum, London (one of the largest group shows the V&A had organized and curated in decades, and one which later traveled to other museums worldwide)...

    The entire identity system & application was (and still is) open source (a world first at the time) and was built with my toxiclibs toolkit (3D geometry, meshing, voxel-based volumetric modeling, animation) and Processing (GUI), with rendering done in Christopher Kulla's Sunflow. A minimal sketch of the voxel-to-mesh pipeline follows at the end of this post.

    More project information:
    web.archive.org/web/2010030309

    User guide (also explaining the structure of the 3D object):
    web.archive.org/web/2010021301

    Mirror of the old repo:
    github.com/postspectacular/vam

    Flickr set (120+ images):
    flickr.com/photos/toxi/albums/

    #AbstractArt #GenerativeArt #Typography #Branding #Texture #VictoriaAlbertMuseum #OpenSource #Polygons #Color #Voxel
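
    For anyone curious how the voxel-based part of such a pipeline looks in practice, here is a minimal Processing-style sketch using toxiclibs' volumeutils (my own illustration of the general technique, not the actual DECODE code; the grid size, brush strokes and iso threshold are arbitrary assumptions):

      import toxi.geom.*;
      import toxi.geom.mesh.*;
      import toxi.volume.*;

      TriangleMesh mesh = new TriangleMesh("blob");

      void setup() {
        size(640, 480, P3D);
        // 64^3 voxel grid mapped onto a 100x100x100 world-space box
        VolumetricSpace volume = new VolumetricSpaceArray(new Vec3D(100, 100, 100), 64, 64, 64);
        // a spherical brush stamps density values into the grid
        VolumetricBrush brush = new RoundBrush(volume, 10);
        for (int i = 0; i < 5; i++) {
          brush.drawAtAbsolutePos(new Vec3D(-40 + i * 20, 0, 0), 0.5f);
        }
        volume.closeSides(); // seal the grid so the extracted surface is watertight
        // marching-cubes style iso-surface extraction into a triangle mesh
        IsoSurface surface = new HashIsoSurface(volume);
        surface.computeSurfaceMesh(mesh, 0.2f);
        // the mesh can now be smoothed, exported or handed to a renderer such as Sunflow
      }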

  2. @DBG3D @t36s Okay, I found another nice excerpt, a bit more minimal than the above, but maybe also making the approach described earlier clearer to hear. Just to explain once more: all the samples used are one-shot single notes (produced by Simon Pyke/Freefarm). All melodies, chords, chord progressions, rhythms and the overall arrangement are fully generated, mostly (but not exclusively) via cellular automata.

    The composition system also had other means of creation and control, e.g. probabilistically triggering the recording of the notes/events of selected tracks/channels for a few bars and then replaying these phrases later, possibly at a different time scale, transposed, mirrored and/or with different instruments... This proved highly effective (and musical) for building longer progressions and more interesting multilayered compositions. Some phrases were kept in a memory pool for up to 12 hours (the piece ran for 3 months)... A minimal sketch of the CA-and-phrase-pool idea follows at the end of this post.

    As you can hopefully tell, the visuals for that installation were audio-responsive (not reacting to the audio signal per se, but to the events emitted by the composer). Likewise, if the visuals became too agitated/intense, an event would be sent to the composer to quickly dial down/thin out the musical intensity (e.g. trigger a tempo change, mute tracks, lower velocities etc.). This hybrid, two-way coupled feedback worked very well in practice, and there were so many moments I wish I had recordings of...

    #GenerativeArt #GenerativeMusic #MusicComposition #CellularAutomata #AudioReactive #Installation #VictoriaAlbertMuseum #Video
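
    To make the approach a bit more tangible, here is a small self-contained Java sketch of the general idea (my own illustration under assumptions, not the installation's code: an elementary 1D cellular automaton whose live cells are mapped onto a pentatonic scale, a probabilistic phrase pool with transpose/mirror replay, and a crude intensity measure of the kind the visual layer could react to):

      import java.util.*;

      public class CAMelodySketch {
        // one step of an elementary 1D cellular automaton (here: rule 90, wrapped edges)
        static boolean[] step(boolean[] cells, int rule) {
          boolean[] next = new boolean[cells.length];
          for (int i = 0; i < cells.length; i++) {
            boolean l = cells[(i + cells.length - 1) % cells.length];
            boolean c = cells[i];
            boolean r = cells[(i + 1) % cells.length];
            int idx = (l ? 4 : 0) | (c ? 2 : 0) | (r ? 1 : 0);
            next[i] = ((rule >> idx) & 1) == 1;
          }
          return next;
        }

        public static void main(String[] args) {
          int[] scale = {0, 2, 4, 7, 9}; // pentatonic scale degrees
          boolean[] cells = new boolean[16];
          cells[8] = true; // single seed cell
          Random rnd = new Random(42);
          List<List<Integer>> phrasePool = new ArrayList<>(); // phrases recorded for later replay

          for (int bar = 0; bar < 8; bar++) {
            List<Integer> phrase = new ArrayList<>();
            for (int i = 0; i < cells.length; i++) {
              if (cells[i]) { // live cell -> one-shot note event
                phrase.add(60 + scale[i % scale.length] + 12 * ((i / scale.length) % 2));
              }
            }
            // crude intensity measure: fraction of live cells; a coupled visual
            // layer could send an event here to mute/thin out when too agitated
            double intensity = phrase.size() / (double) cells.length;
            System.out.printf("bar %d: notes=%s intensity=%.2f%n", bar, phrase, intensity);
            if (rnd.nextDouble() < 0.3) phrasePool.add(phrase); // probabilistic recording
            cells = step(cells, 90);
          }

          // later: replay a pooled phrase, mirrored and transposed up a fourth
          if (!phrasePool.isEmpty()) {
            List<Integer> replay = new ArrayList<>(phrasePool.get(0));
            Collections.reverse(replay);
            replay.replaceAll(n -> n + 5);
            System.out.println("replayed phrase: " + replay);
          }
        }
      }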