by Dr. Rivas and SinQuenza
Three weeks ago, researchers at Google® shared new advancements in a deep learning image recognition algorithm. The images produced by their debugging process were so bizarre that they immediately piqued people’s curiosity. The images are already a new “internet thing”: there is a language, there are forums, and of course there are memes. We wanted to take the deep dream phenomenon as a starting point to think about digital art, the internet as its medium, and the process of going from the ‘new’ to the ‘meme’.
Deep learning is a set of new tools built on neural networks that aim to be more flexible and faster at recognizing patterns, used in tasks such as image, handwriting or speech recognition. Researchers have been studying these networks for some years now, and the work has already produced notable advances. In the case of image recognition, neural networks are trained on collections of image/noun pairs, and are then able to recognize with a certain accuracy the existence (and position) of those nouns in never-before-seen images. Because neural networks are so hard to debug, Google® came up with a technique that creates images showing what the algorithm is actually recognizing in the input; when the process is iterated, the results are amazingly strange pictures that were quickly compared to dreams, monstrous illustrations and, most of all, psychoactive hallucinations. The whole process has been named deep dreaming, due to the clear oneiric quality of the images, and is on its way to becoming extremely popular as people explore and share their own results with different inputs and parameters.
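The core idea behind the iterated visualization is simple to sketch: instead of adjusting the network to fit an image, you repeatedly adjust the image itself so that whatever the network already responds to gets amplified. Here is a minimal toy illustration in Python, using a single fixed linear “layer” in place of a real trained network; all names and numbers are our own assumptions for illustration, not Google’s actual code:

```python
import numpy as np

def dream_step(image, weights, lr=0.1):
    """One gradient-ascent step: nudge the image so the layer's
    response (here a simple element-wise product summed) grows."""
    # For activation = sum(weights * image), the gradient w.r.t.
    # the image is just `weights` itself.
    grad = weights / (np.abs(weights).mean() + 1e-8)  # normalize step size
    return image + lr * grad

def deep_dream(image, weights, steps=20):
    """Iterate the amplification; real deepdream does this per layer,
    with jitter and multiple scales ('octaves')."""
    for _ in range(steps):
        image = dream_step(image, weights)
    return image

rng = np.random.default_rng(0)
img = rng.random((8, 8))            # stand-in for an input photo
w = rng.standard_normal((8, 8))     # stand-in for a learned filter
out = deep_dream(img, w)
print((w * out).sum() > (w * img).sum())  # the target response has grown
```

In the real algorithm the “layer response” is a deep convolutional activation and the gradient comes from backpropagation, but the loop is the same: each iteration pushes the image further toward what the network sees in it, which is why the features compound into hallucinatory forms.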
The story of the deep dream phenomenon is illustrative of internet propagation. Before Google’s original blog post detailing the algorithm (17th of June), two images were already circulating the internet, presented as “an image created by an A.I.” The source of the first image is one or two reddit posts that appeared on the 11th of June in the subreddits “woahdude” and “creepy”, each with millions of subscribers. They were effectively leaks, as their authors gave no source or verification. It is easy to assume that the source was either a Google employee or a close friend who smartly saved the images that his geek buddy was showing him. A nice example of how difficult it has become to contain information; leaks are rapidly going from exception to rule, as anonymity becomes ever easier and, more relevantly, massive democratically curated forums are able to select and display images for millions to see almost instantly. The network has become so sensitive, the sharing so effortless, that leaks and their propagation are almost inevitable.
It is interesting to note how reddit serves here both as a historical register and as a source of wrong information: since the original posts are locked, the comments that called the image a fake or asked for more information were never updated. This is historical documentation in the internet era. The choice is not an obvious one for moderators or site creators: either provide knowledge, and thus correct what is wrong, as Wikipedia does, or archive and preserve it as a source of historical information. The latter is terribly lacking on the internet; the former leads us to the terrible situation of an eternal present.
After the initial leak comes the relentless surge. Horizontnews, a Spanish pseudoscience magazine, reported on the image on June 15, four days after the original reddit post. The article, like so many others, is just a translation into journalistic language (and Spanish) of the original reddit post, with further suggestions of a paranoid nature (or maybe not?) commenting on the power of artificial intelligence. It was quickly shared more than 10k times on Facebook. I don’t know of other articles that referred to the image, but this is how I first came to know about deepdream: through the post of a Facebook friend. It is interesting to see the reaction of most people before the original Google® Research blog post, that is, within a five-day window: it is overwhelmingly sceptical about the authenticity of the picture. Self-proclaimed experts doubt that it was created by an algorithm. They even cite a relevant research paper which uses the same principles and shows similar images, but conclude that the new images present so many new features that they “must be a fake”. For once, the sceptics were wrong.
To the surprise of many, the images turned out to be real, and just one of infinite possibilities. It helped the popularity of the phenomenon enormously that the people behind it were Google® researchers, still the popular guys of the hood. Their post is quite technical, but to the joy of many it presented a few other images that showed even more impressive results. The internet, as it so often likes to do, exploded. Connections after the post are so extensive that it makes no sense to follow its trajectories further; it is factually impossible to trace them. When something becomes popular, a “meme”, a “viral”, it ceases to be located. It also loses identity; it is reinterpreted, misinterpreted and overinterpreted. It is, most of all, uncontrollable; information will explore and align with the topology of the internet, only stopping where divisions are huge and mostly a reflection of its human substrate. A potential way of unravelling the architecture of the internet, similar to injecting tracers into the circulatory system. Let us again remind everyone that when something goes “viral”, when the “social networks” are “booming” with the “latest trend”, it is hardly ever a global phenomenon in the sense of encompassing all internet-connected humans (although it may encompass all uncensored places with internet access), but usually a tiny subset; bubbles that usually think of themselves as the only one, and therefore as “the” internet. Basically, the illusion and common mistake of assuming universality from locality is also present on the internet, to a much greater degree than we think.
How, then, was the deep dream art scene born? Of course, with the tool. A code example of the algorithm used to create the images was made public by Google® on the 1st of July, roughly two weeks after the original blog post. At the moment of writing, it has been a little more than a week since the tool became available, and deepdream creations are “everywhere”, and by “everywhere” I mean everywhere in my digital art bubble. Interestingly, the documentation of the code itself urges people to share their creations using the hashtag #deepdream, which effectively named the phenomenon. The first images under the label of art were shared, unsurprisingly, in the Facebook “digital art” groups. The atemporal “popularity” sorting of Facebook does not allow me to see the propagation of the posts through the groups, but I personally saw many first images as early as the 2nd of July. A fresh Facebook group is growing rapidly with active users submitting their #deepdream experiments: it already has more than 500 members. Within four days the subreddit r/deepdream already had over 3,700 subscribers. That was about one hour ago, when we started writing this text; the number has already increased by 500. One hour later another 500 people had subscribed, and now, one day later, it has more than 10,000 subscribers.
One thing to notice: as always, science was first, but nobody cared. The paper, and all the information needed to produce that type of image, had been available for more than a year in the open-access, publicly accessible, Google®-indexed and generally loved arXiv repository. It was probably talked about in academic circles, maybe even at conferences. The extreme difference in impact between the two mediums, or circles, is worrying: through academic channels, the existence and capabilities of the method reached only a very specialized community. On the other hand, when a single image was shared in broad communities defined only by a common human reaction (r/woahdude), it took five days for millions, probably even tens of millions, to become aware of, interested in and amazed by the technological breakthrough. It seems to me a good example of how science (that is, scientists) constantly underestimates the interest and possible impact of its research. Science must realize that the knowledge it publishes is not the only thing of value, and that technology is not the only use of that knowledge. Scientists need to realize that the tools they use to reach their results are also immensely valuable, perhaps more so than the results themselves, and that non-scientists can also find gold in them, even if “just for art.” This is one positive way to walk down from the monasteries of science into the city.
Most of the deep dreams being shared now are made with an online interface, skipping the quite complicated installation procedure of the actual code. Images created with this interface are quickly saturating; it is not possible to change the database of images used for recognition, so the same features appear regardless of the input. That is, there is no space for creativity, a natural consequence of the tool’s limitations. A clear image of a very common phenomenon: the easiest new tool quickly saturates and loses all value. The next step is almost always the “meta”, the symbolization, both visual and verbal. A language is created: a particular way of creating an image can now be referred to simply as “1 low and 5 highs”. The Facebook group defines itself as “Surreal Visions conjured from the depths of an Artificial Collective Consciousness. …and dog caterpillars for everyone”, referring to one of the most common results of the standard parameters. Then follows the creation of memes that both connect the community and announce its saturation, as well as the reworking of popular culture icons with the technique: movies, paintings, photos. The meta-isation is seen here as a direct consequence of the limitations of the tools.
But there is another way. Not everything new is condemned to become a meme and join The Garden of Earthly Delights of the collective imaginary. Instead of the quick-product-to-meta route, there is always the harder option: understand, explore and hack the tool in order to create something qualitatively new. The qualitative nature is always a direct consequence of the tool used, and it is this difference that is usually assigned the most value in art. Fortunately, deep dreams are also starting to go that way, led by the few brave ones who installed, understood and are beginning to tweak the algorithms to produce images that use, for example, a different database of images. Mixing and matching at the tool level is also arising. The future is still unpredictable, exciting.
We believe that digital art shines at these two extremes. At one extreme, the use of the immense power of computation and repetition of computers: explorations of the space of image-generating algorithms. And at the other, the spaghetti bolognese cloud of symbols: the mix and match, the reference, the mutants. The algorithm and the data. The http and the link. The rules and the game.
Thanks to Francisca of primerfoton for valuable info.