A - Inside Generative Production



Since its latest update, FAKEWHALE STUDIO has continued its research into the vast potential of generative images and their profound impact on artistic production. In his seminal essay “The Work of Art in the Age of Mechanical Reproduction,” Walter Benjamin discusses how technological advances such as photography and cinema transformed the aura and authenticity of artworks. His insights remain strikingly relevant today, when the notion of originality in producing a visual image of a given reality is once again challenged by generative technologies.
Benjamin observes that “what withers in the age of mechanical reproduction is the aura of the work of art” (Benjamin, 1936). With the advent of mechanical reproduction, the authenticity of the artwork is lost as it is separated from its original historical and cultural context, yet it assumes a new kind of originality inherent in the derived multiple. This phenomenon is similarly observable in generative images created by artificial intelligence, where reproduction not only copies but also transforms and reinterprets reality.
Another philosopher, Gilles Deleuze, offers a complementary perspective through his work on the simulacrum, addressing the nature of reality and representation in the age of digital media. Deleuze helps us understand how AI-generated images do not merely reproduce but transfigure reality on the basis of vast databases, creating new layers of meaning and reinterpretation. His notions of “folds” and “flows” of information map neatly onto the dynamic, fluid nature of AI-generated images, which are not fixed objects but a pliable material, constantly changing and being reinterpreted.
Prominent researchers, such as Ian Goodfellow, the creator of GANs, and Fei-Fei Li, a pioneer in computer vision, have made foundational conceptual contributions to the production of generative images. Their work relies on deep neural networks that process large amounts of visual data. In a generative adversarial network (GAN), one network generates images while a second network attempts to distinguish them from real ones; trained against each other, the two drive the generation of increasingly realistic and detailed images.
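To make that adversarial mechanism concrete, here is a minimal, hypothetical sketch in PyTorch. The layer sizes, the flattened 28x28 image format, and the random batch standing in for real photographs are illustrative assumptions, not the configuration of any actual published model.

import torch
import torch.nn as nn

latent_dim = 64          # size of the random noise vector fed to the generator
image_dim = 28 * 28      # flattened grayscale image, e.g. 28x28 pixels

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: outputs the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_images: torch.Tensor):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real images from generated ones.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    loss_d = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, latent_dim)
    loss_g = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Example call with a random batch standing in for real training images.
print(training_step(torch.rand(16, image_dim) * 2 - 1))

Repeated over many batches, this loop is what pushes the generated images toward the realism described above.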
Large datasets such as ImageNet offer a vast array of labeled images covering countless categories. These datasets serve as the foundation for training deep neural networks, allowing models to learn the visual features present in the images. In a sense, these collections act as a giant “cauldron” in which the different characteristics of images are mixed and combined according to a logic determined by the machine learning algorithms.
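As a rough illustration of how a labeled collection feeds a network during training, the sketch below uses the small CIFAR-10 dataset as a stand-in for a corpus like ImageNet; the tiny classifier and the single training batch are assumptions made purely for demonstration.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Download a small labeled image dataset (CIFAR-10 stands in for ImageNet here).
transform = transforms.Compose([transforms.ToTensor()])
train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A deliberately small classifier: 32x32 RGB images -> 10 class scores.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                      nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

images, labels = next(iter(loader))       # one batch of labeled images
loss = criterion(model(images), labels)   # how far predictions are from labels
optimizer.zero_grad()
loss.backward()                           # the model learns from the labels
optimizer.step()
print(f"loss on one batch: {loss.item():.3f}")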
As we move forward, our focus remains on the intellectual and aesthetic implications of AI in art, examining how these technologies are redefining creativity and challenging traditional notions of authorship and originality.


B - A New Imaginary



Every work, every action, and every word we express is the product of accumulated knowledge and past experience. This seemingly simple principle reveals a fundamental truth: nothing we consider new is truly devoid of roots in the past. Every idea, every concept, every creative form that emerges is the result of a complex interaction between what we already know and what we are able to synthesize from that knowledge.
Analyzing this phenomenon in more rational terms, we can observe that innovation is not an isolated act but rather a process of evolution. Every new creation is intrinsically linked to existing patterns, models, and structures. These preexisting elements form a sort of matrix that conditions and guides our ability to create something new. Therefore, there is no real discontinuity between the past and the present; instead, there is a continuity that manifests through the transformation and adaptation of existing knowledge.
In this context, art, writing, and every other human practice can be understood as expressions of a progressive synthesis. Every creative act is, in effect, a process of reworking, where known elements are combined in new ways to generate seemingly original results. This process is driven by a constant interaction between past experience and the human ability to imagine new possibilities.
From a theoretical standpoint, one might argue that the idea of a completely original creation is, in a sense, an illusion. What we perceive as new is actually a new configuration of preexisting elements. Thus, every innovation is the result of a complex network of influences, knowledge, and patterns that intersect, creating a continuum between the old and the new.
Generative artificial intelligence, especially in the realm of image creation, is a technology that replicates, in an essentially technical manner, the human creative process. The systems that generate new images are trained on vast amounts of pre-existing material (images, styles, models), and their capacity for creation derives entirely from that training. The process is not unlike the mechanism described earlier: nothing is created ex nihilo; rather, the result is a synthesis of already existing information and patterns.
It is important to emphasize that these creations, although they may appear original, are closely tied to the data used to train the model. The algorithms do not possess creativity of their own (in the human sense of the term), but operate according to precise mathematical rules that determine how to manipulate and synthesize the inputs. This process, while sophisticated, is inherently limited: artificial intelligence generates new variations and solutions, but always within the boundaries imposed by the initial data and the programmed instructions (which define the space of possibilities).
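A small, assumed example can make this bounded, rule-governed character tangible: once a model's weights are fixed, the output is entirely determined by the random seed, and every possible image lies within the space spanned by those weights. The toy generator below is a placeholder, not a real image model.

import torch
import torch.nn as nn

torch.manual_seed(0)
# A toy "generator": fixed weights define the entire space of possible outputs.
generator = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 4))

def generate(seed: int) -> torch.Tensor:
    # The only free choice is the seed of the random input noise.
    noise = torch.randn(1, 8, generator=torch.Generator().manual_seed(seed))
    with torch.no_grad():
        return generator(noise)

# Same seed, same rules, same weights -> exactly the same "creation".
assert torch.equal(generate(42), generate(42))
# A different seed yields a variation, but never anything outside the
# space defined by the trained parameters.
print(generate(42), generate(43))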
This approach, though more technical and less emotional compared to the human creative process, is no less valid. In fact, the strength of artificial intelligence lies in its capacity for processing and synthesis, which grows with the amount and variety of available data. The more data provided, the greater the model's ability to generate complex and diverse solutions (a process we might compare to the human ability to assimilate knowledge from multiple experiences and contexts, albeit in a less flexible and less emotionally nuanced way).
That said, the process differs substantially from the human experience of creation. A human being, during the creative act, does not merely rework information; human creativity is immersed in a complex interplay of experiences, emotions, intuitions, and sensory stimuli that continuously interact (a phenomenon that goes beyond simple data synthesis). Every creative choice is the product of a dynamic process, influenced by a constant and varied accumulation of experiences. In contrast, artificial intelligence operates within a closed system, where the final result is predetermined by the input data and the model's rules (which cannot fully replicate the spontaneity and complexity of human intuition).
However, one cannot deny the allure this technology holds, especially for those who see in creativity a desire for total control over the work. The ability to use artificial intelligence to generate images that faithfully reflect the instructions given (according to predefined parameters) offers a form of control rarely achieved in the human creative process (where unpredictability and instinct play a significant role). This raises an interesting question: if artificial intelligence can produce results very similar to those achievable by a human, albeit through different mechanisms, does it not, in part, realize the ambition of every creative mind to dominate and direct the creative act?
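In practice, this control is exercised through parameters such as the prompt, the random seed, and the guidance settings of a text-to-image pipeline. The following is a hedged sketch using an open Stable Diffusion model through the diffusers library; the model identifier, prompt, and parameter values are assumptions chosen only to show the principle, and any comparable pipeline would serve.

import torch
from diffusers import StableDiffusionPipeline

# Load an open text-to-image model (identifier assumed for illustration).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a baroque still life rendered as a printed circuit board"
seed = torch.Generator("cuda").manual_seed(1234)   # fixes the initial noise

image = pipe(
    prompt,
    num_inference_steps=30,   # number of denoising steps
    guidance_scale=7.5,       # how strictly the image follows the prompt
    generator=seed,           # same seed + same settings -> same image
).images[0]
image.save("controlled_output.png")

Re-running the call with identical settings reproduces the identical image, which is precisely the kind of control over the creative act that the paragraph above describes.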



“In 2024, decades later, we witness the emergence of an unprecedented phenomenon in the field of digital imagery. Just as McLuhan predicted that technologies would expand our communicative abilities, generative artificial intelligence now extends our capabilities in image production, allowing us to explore new aesthetic and conceptual frontiers. The advent of tools such as Midjourney, DALL-E, and other generative models that create detailed images from simple prompts marks a turning point in the history of photography. The ability of these technologies to generate images not only opens new possibilities for the role of the artist but also redefines the very essence of the artwork, narrowing the gap between human creation and algorithmic generation.
A concrete example of this change is already under way in industrial design, where automatic customization from predefined models pushes designers to explore new aesthetic variants. In this process, AI can dynamically generate images based on the viewing context or on the progressive corrections supplied by the prompt's author.

From James Bridle's early definition of the ‘New Aesthetic’ to Hito Steyerl's work with pieces like “In Free Fall” and Lev Manovich's “The Language of New Media,” various figures have explored digital aesthetics and the integration of image-based works into our perception of reality. Finally, Boris Groys, with “In the Flow,” has discussed how art enters digital information flows, redefining authorship and communication in relation to the original object.
Applications and platforms such as Snapchat and Instagram accustom us to the constant use of filters and digital modifications, steadily increasing our daily exposure to digitally produced images and to deeper, more structured alterations, until we are exposed, indistinguishably, to parts generated entirely by neural networks.”