Introductions

Björk: The Gate

Björk: The Gate – NOWNESS from NOWNESS on Vimeo.

For the first release from her forthcoming new album, co-produced by Arca, Björk has teamed up with a super-troupe of contributors to create a hallucinogenic new video. Artist Andrew Thomas Huang lends his tech-savvy hand to envision a kaleidoscopic world inhabited by the singer-songwriter, who is clad in an iridescent, otherworldly garment designed by Gucci’s Alessandro Michele. Read more on NOWNESS – http://bit.ly/2f2N1LQ

More of Björk’s The Gate here – bjork.lnk.to/the-gateYT

News, Updates & Videos

Could There Be Other Explanations?

Could There Be Other Explanations? from Julius Horsthuis on Vimeo.

Make your way through a fractal agnostic temple in glorious 3K.
I tried to keep the post effects to a minimum to give you a pure mandelbulbilicious experience.
julius-horsthuis.com
Music: “Nomasi” by Osanno
soundcloud.com/osanno

News, Updates & Videos

VJLoops.com presents “Spaced Out” Various Artist Promo Mix 2016

Spaced Out- VJLoops.com V.A. Promo Mix 2016 from VJLoops.com on Vimeo.

Visit our lightbox and start your download today http://www.vjloops.com/lightbox/spaced-out-443.html

Special thanks to the following for their contributions:

Anisha
Catmac
Daniel Knight
DocOptic
dominiclyddon
Eclips?
Eternal Spline
Humanizr
Kreativorks
Luminator
Patrick Jansen Design
Rover
SPACEMAKER
Splash Creative Media
VJ Loops
VJ Solly
Yarkus

News, Updates & Videos

Fractalicious 3

Fractalicious 3 from Julius Horsthuis on Vimeo.

Every six months or so, Julius Horsthuis bundles the best parts of his fractal short films into a showreel-style piece. This is the third installment of the Fractalicious series, rendered in Mandelbulb3D.
julius-horsthuis.com
music: “Attica” by Vessels
vesselsband.bandcamp.com/

News, Updates & Videos

Royksopp Tour Visuals. Monument.

Royksopp Tour Visuals. Monument. from alx on Vimeo.

A segment from Röyksopp’s 2015 tour visuals, set to the song ‘Monument’.

Agency 'D-E-F'

My role: VJing and LED-screen content creation

ON THE WEB:

https://www.behance.net/gallery/29329691/Royksopp-2015-Tour-Visuals

http://romanowsky.tumblr.com

https://dribbble.com/romanowsky

Introductions

The Colors of Feelings

The Colors of Feelings from Thomas Blanchard on Vimeo.

“The Colors of Feelings” is an experimental, dreamlike video that rocks us smoothly through circular movements. It is also an analogy for feelings such as anger, love, sadness and joy; they mix and eventually ease.

The visual compositions have been created out of paint, oil, milk, honey and cinnamon.

SYDNEY CHILDREN’S HOSPITAL ART PROGRAM
facebook.com/SydneyKidsArt

The Colors Of Feelings on Discovery Channel
vimeo.com/139934834
discovery.ca/

EXHIBITION:
The Space of the Universe
Moscow, Bolshoy Afanasievskiy pereulok
From 9 September to 9 October 2015
———————————————————-
Music: Max Richter – Lost in space
maxrichtermusic.com
———————————————————-
Canon 24-105L + Macro lenses
———————————————————-
thomas-blanchard.com
———————————————————-

News, Updates & Videos

Tribocycle

Tribocycle from Ben Ridgway on Vimeo.

An infinite design. A moment in time. A moving meditation.
“Tribocycle” is designed as a looping movie for large scale projections and installations.
Sizes: 4K square, 4K, 1080p

Selected Screening: Plums Fest | Moscow | Nov 2014
Selected Screening/Installation: Currents 2014 New Media Festival,
Digital Dome at the Institute of American Indian Arts | New Mexico | June 2014

Selected Gallery Installation | Dreams and Divinities | San Cristobal, Chiapas, Mexico | March 2014

Selected Installation | Animafest 24th World Festival of Animated Film | Museum of Contemporary Art Zagreb, Croatia | 2014

Guest Artist Installation | Boomfest 2014

Tribocycle (c) 2013 by Ben Ridgway

News, Updates & Videos

Journey through the layers of the mind

Journey through the layers of the mind from Memo Akten on Vimeo.

First tests playing with #deepdream #inceptionism

A visualisation of what’s happening inside the mind of an artificial neural network.

In non-technical speak:

An artificial neural network can be thought of as analogous to a brain (immensely, immensely, immensely simplified; nothing like a brain, really). It consists of layers of neurons and connections between neurons. Information is stored in this network as ‘weights’ (strengths) of the connections between neurons. Low layers (i.e. closer to the input, e.g. the ‘eyes’) store (and recognise) low-level abstract features (corners, edges, orientations etc.) and higher layers store (and recognise) higher-level features. This is analogous to how information is stored in the mammalian cerebral cortex (e.g. our brain).

Here a neural network has been ‘trained’ on thousands of images – i.e. the images have been fed into the network, and the network has ‘learnt’ about them (establishing weights / strengths for each neuron). (NB: the specific database of images fed into the network is known as ImageNet – image-net.org/explore )

Then, when the network is fed a new, unknown image (e.g. me), it tries to make sense of (i.e. recognise) this new image in the context of what it already knows, i.e. what it has already been trained on.

This can be thought of as asking the network “Based on what you’ve seen / what you know, what do you think this is?”, and is analogous to you recognising objects in clouds or in ink-blot / Rorschach tests, etc.
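As a concrete illustration of that “what do you think this is?” step, here is a minimal sketch of feeding an image through an ImageNet-trained network and reading off its top guesses. PyTorch / torchvision and a GoogLeNet model are assumptions of the example (the video itself was made with Google’s original #deepdream code, linked at the bottom), and “me.jpg” is just a placeholder filename.

```python
# Illustrative sketch only (PyTorch / torchvision assumed; not the code used for the video).
# Feed an image into an ImageNet-trained network and ask: "what do you think this is?"
import torch
from torchvision import models, transforms
from PIL import Image

# A network pre-trained on the ImageNet database mentioned above.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()

# Standard ImageNet preprocessing: resize, crop, normalise.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("me.jpg").convert("RGB")).unsqueeze(0)  # "me.jpg" is a placeholder

with torch.no_grad():
    probs = model(img).softmax(dim=1)       # confidence over the 1000 ImageNet classes

top5 = probs.topk(5)
print(top5.indices.tolist(), top5.values.tolist())  # the network's five best guesses
```

Whatever comes out is always one of the things the network was trained on – it can only ‘see’ in terms of what it already knows.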

The effect is further exaggerated by encouraging the algorithm to generate an image of what it ‘thinks’ it is seeing, and feeding that image back into the input. Then it’s asked to reevaluate, creating a positive feedback loop, reinforcing the biased misinterpretation.

This is like asking you to draw what you think you see in the clouds, and then asking you to look at your drawing and draw what you think you are seeing in your drawing, and so on.

That last sentence was actually not fully accurate. It would be accurate if, instead of asking you to draw what you think you saw in the clouds, we scanned your brain, looked at a particular group of neurons, reconstructed an image based on the firing patterns of those neurons – based on the in-between representational states in your brain – and gave *that* image to you to look at. Then you would try to make sense of (i.e. recognise) *that* image, and the whole process would be repeated.

We aren’t actually asking the system what it thinks the image is; we’re extracting the image from somewhere inside the network – from any one of its layers. Since different layers store different levels of abstraction and detail, picking different layers to generate the ‘internal picture’ highlights different features.
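To make that feedback loop concrete, here is a rough sketch of the process: pick one layer, repeatedly nudge the input image so that layer’s activations get stronger, and feed the result straight back in. This is only an illustration (PyTorch and the GoogLeNet layer names are assumptions of the sketch); the actual video was made with the Google deepdream code linked below, which adds refinements such as octaves and jitter on top of the same idea.

```python
# Rough sketch of the "biased misinterpretation" feedback loop described above
# (illustrative only; the video uses Google's original deepdream code, linked below).
import torch
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1).eval()

# Capture the activations of one chosen layer with a forward hook.
# Lower layers (e.g. inception3a) tend to amplify edges and textures,
# higher layers (e.g. inception4c) amplify more object-like features.
acts = {}
model.inception4c.register_forward_hook(lambda module, inp, out: acts.update(target=out))

# Start from noise here for simplicity; a preprocessed photo works the same way.
img = torch.rand(1, 3, 512, 512, requires_grad=True)

for _ in range(50):
    model(img)
    acts["target"].norm().backward()        # how strongly does the chosen layer respond?
    with torch.no_grad():
        # Gradient ascent on the *input*: strengthen whatever the layer thinks it sees,
        # then feed the modified image back in on the next pass.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```

Swapping inception4c for a lower or higher layer is exactly the ‘picking different layers’ choice described above: it changes which level of detail the loop amplifies.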

All based on the Google research by Alexander Mordvintsev (Software Engineer), Christopher Olah (Software Engineering Intern) and Mike Tyka (Software Engineer):
googleresearch.blogspot.ch/2015/06/inceptionism-going-deeper-into-neural.html
github.com/google/deepdream
