
ORIGINAL RESEARCH article

Front. Digit. Humanit., 01 February 2017
Sec. Cultural Heritage Digitization
Volume 4 - 2017 | https://doi.org/10.3389/fdigh.2017.00003

Deep Creations: Intellectual Property and the Automata

  • Laboratory EA 4375, Centre d’études internationales de la propriété intellectuelle, Strasbourg University, Strasbourg, France

The rapid progress of deep neural network architectures is making it possible both to automate the production of artworks and to extend the domain of creative expression. As such, it is opening new ground for professional and amateur artists alike. A major asset of these new computer processes is their capacity to derive, from a training phase, a generative model from which new artifacts can be produced. This attribute allows for a wide range of novel applications. New music or paintings in the style of famous artists can be produced at the click of a button, or combined to form new artworks. New graphical compositions can be “hallucinated” by the deep algorithmic models to produce striking, unexpected visual forms. By the same token, the dependence on preexisting, protected artworks lays the ground for potential zones of friction with the rights holders of the source data that helped shape the generative model. This articulation, between the popular creative movement initiated by deep neural architectures and the preexisting rights of authors, leads to a confrontation between the present legal framework for the protection of artistic creations and the modalities offered by these new technological objects. The present work addresses the conditions of protection of creations generated by deep neural networks under the main copyright regimes.

Introduction

Algorithmic productions have been part of the artistic landscape for more than half a century: from avant-garde procedural musical creations to new forms of computer graphics languages, they have opened innovative arenas for artistic expression and often served as exploration grounds for introducing and testing new computational tools that have since become mainstream. Yet, the newfound generative capacity of deep-learning processes, catalyzed by the increase in computational power and the access to a wealth of training data, has boosted the expressive capacity of automated creations to unprecedented—and unforeseen—levels. Within the emerging field of “constructive machine learning,” multidisciplinary efforts are now being deployed to explore and formalize the creative potential of these latest generations of algorithms, both extending and specializing the ongoing debate on the broader notion of computational creativity.1

This drastic change prefigures nothing less than a revolution in the modalities of personal expression as deep-learning frameworks permeate the creative toolkit available to professionals and amateurs alike. Examples of algorithmic creations based on deep neural architectures are indeed now spreading beyond the confines of artistic and academic experiments to reach larger audiences. Deep convolutional networks, such as DeepDream, now “hallucinate” new forms of automated graphical creations, both expressionist textures and surreal collages, that are striking in their eccentric yet whimsical character (Berov and Kühnberger, 2016). Other deep neural architectures based on autoregressive generative models, such as WaveNet (Oord et al., 2016), break new ground in sound and music production. From images to music, with forays into the realm of literature—narrative or poetry—and even into the more distant theater of choreography, most artistic fields are now infused with the prowess and potential of deep neural creations.

Yet, new, radical artistic endeavors are often associated with societal frictions [in particular when involving technical mediation: see O’Hear (1995)]. Here, the rise and popularization of novel creative artifacts are bound to raise questions about their legal protection and their ownership as defined by copyright laws.2 Several of these issues have already been the subject of—mostly doctrinal—debates at the intersection between the fields of computational creativity and intellectual property law (see, e.g., Bridy, 2012; Buccafusco et al., 2014, and references therein). However, the hybrid nature of these new constructions (hybrid in the sense that they are rooted in human data but grow and evolve automatically through the interpretative filter of a learnt algorithmic model) raises specific questions that require a detailed analysis of the modalities involved in the underlying generative processes. The opacity of the causal chain leading from the training data to the final product, that is, the lack of interpretability of the model by which the object is created, is a first hurdle that prevents an immediate resolution of questions of copyrightability and ownership. A second is the multiplicity of interactions between the automata and the various actors involved: from the gathering and selection of training sets to the choice of architecture and training methodology, from the setting of parameters to the definition of a stopping criterion. A third stumbling block is the emergence of new forms of interactions between a deep model and an artist’s input: the artist as curator, the stylistic transfer between multiple sources, and the possibility to orient, retrain, and guide the deep models toward the production of artistic objects are but a few of the new roles that an artist can now take on.

Who is the author when the machine creates? What are the rules to delineate the perimeter of ownership in deep-generative art? Piercing through the complexity of the model and disentangling the multiple contributions at play in the generation of such a creation will be required not only to characterize a protectable object but also to identify the putative creators and assign ownership of said object. After reviewing the current status of “deep creations” in the domain of artistic productions (see The Rise of Deep Creations), guidelines for the qualification of the artistic objects derived from these neural architectures in the context of copyright law will be provided (see The Protection of Deep Creations). Doing so will depend upon, firstly, the definition of modalities of identification, within the creative artifact, of a trace, a tangible imprint, of the creator’s intent and of his/her personality and, secondly, the determination of the presumed contributors to the creation. This analysis will be complemented by a discussion of two forms of deep creations: style transfer and training data selection.

The Rise of Deep Creations

The development of computer-generated art follows closely the technological evolution of both the hardware platforms and the algorithmic processes from which it emerges. Deep neural networks are one of the latest trends in more than half a century of digital artistic experimentation marked by an ever increasing automation of the creative process. In this movement of transfer, from human prerogatives to machine-based attributes, it is easy, if not tempting, to mythologize the creative contribution of the automata. As an antidote to this tendency, a basic understanding of the inner workings of the algorithmic intermediaries is required to separate what is due to human action from what relies on machine automation. Before investigating the possibility of protecting artistic creations generated through deep architectures, a foray into the genealogy of algorithmic art therefore seems appropriate, if only to better gauge their technical characteristics and assess the novelty of their contribution.

From Rule-Based Systems to Neural Networks

A constant source of experimentation and stimulation, technical innovation offers an ever new vehicle for artists to explore the reaches of creative expression. As such, cutting-edge technologies are often rapidly integrated into the constantly growing toolset of the artistic community. Computer-based applications have participated in this process since the late 1950s, delving into the random generation of shapes, with Desmond Paul Henry’s drawing machine (O’Hanrahan, 2016), or attempting to channel the production of procedural patterns, as demonstrated by the works displayed at the first exhibitions of “Computergraphik” in February 1965, in Stuttgart, Germany (Klütsch, 2007). Not limited to graphic arts, the expressive capacity of computer programs, anticipated a century earlier by Ada Lovelace,3 was immediately applied to music too, where the mathematical foundation of musical systems, established since Pythagoras and expanded with the canon compositions of the fifteenth century, offered a fertile ground for computer-based automation (Grout and Palisca, 1996). Using new compositional rules, easily transferable into an algorithmic language, new artworks emerged, such as the Illiac Suite, composed in 1957, that unveiled as yet unexplored musical territories (Hiller and Isaacson, 1959). These first procedural approaches were soon improved upon and expanded to incorporate a probabilistic framework, as in Iannis Xenakis’ “Morsima-Amorsima” (1962).4 More than simple tools operating to transfer the artist’s fully fledged intentions onto a material substrate, computers were thus quickly recognized for their potential to open and catalyze new forms of creation. As A. Michael Noll, one of the early computer artists, noted in 1967, “[i]n the computer, man has created not just an inanimate tool but an intellectual and active creative partner that, when fully exploited, could be used to produce wholly new art forms and possibly new aesthetic experiences” (Noll, 1967, pp. 89–95). Yet, the precursors of computer-based automation were forced to concede that “the process of constructing each new system was tedious because each was custom-crafted. The major difficulty was acquiring the requisite knowledge from experts and reworking it in a form fit for machine consumption,” a process of “knowledge engineering” that did not easily translate to the artistic landscape (Buchanan and Duda, 1983, p. 167). Reliant on a combination of stochastic variations and preprogramed formulaic definitions of artistic expression, these pioneering approaches were indeed essentially limited in their generative capacity by the possibility to comprehend and express, in a set of rules, the structural complexity of an artistic framework, be it traditional or avant-garde.
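The generate-and-test logic of these early rule-based systems can be conveyed in a few lines of code. The toy sketch below is a hypothetical reconstruction for illustration only, not any historical system: random pitches are drawn and retained only if they satisfy a hand-written compositional rule, in the spirit of the Illiac Suite experiments.

import random

# Toy sketch of early rule-plus-randomness composition: draw random pitches
# and keep only those that pass a hand-crafted compositional rule.
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale, as MIDI note numbers

def allowed(melody, candidate):
    # A hand-written "rule of composition": forbid melodic leaps beyond a fifth.
    return not melody or abs(candidate - melody[-1]) <= 7

def compose(length=16, seed=42):
    random.seed(seed)
    melody = []
    while len(melody) < length:
        candidate = random.choice(SCALE)
        if allowed(melody, candidate):  # generate-and-test
            melody.append(candidate)
    return melody

print(compose())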

It is only with the advent of machine-learning techniques that it became possible to circumvent the explicit, laborious enunciation of rules associated with a creative intent or an artistic style. Armed with sufficient training data, these systems were indeed able to identify and capture automatically some of the stylistic features common to the examples they were provided with during a training phase. Using these extracted esthetic characteristics as an expressive palette opened new creative avenues, allowing the production of artistic works resembling existing ones or mixing a variety of styles, without requiring the creative process underlying these works to be deciphered and encoded. As one of the most promising early machine-learning techniques, neural network architectures were thus soon assimilated by computer artists. Using data to learn musical structure, neural networks were first used as a mechanism to elaborate a composition by learning note-to-note transitions (Todd, 1989; Mozer, 1994). Other attempts learned to improvise beats and melodies as part of a jazz band (Nishijima and Watanabe, 1993), or to select synthetic images based on an evaluation of their esthetic value (Machado and Cardoso, 1997), the learning set, as much as the neural network training protocols, thereby becoming constitutive elements of the artistic product. However, in spite of a promising debut, and much associated hype, the practical difficulties in training the large multilayered neural networks required to express rich features, coupled with the limitations of the available computing power, hindered their widespread application.5

From Shallow to Deep Neural Networks

For a while, neural networks seemed confined to selected applications where specialized architectures and optimization methodologies had proved effective [such as digit recognition based on convolutional networks and back-propagation, as designed by LeCun et al. (1995)]. They were otherwise superseded by alternative, seemingly more principled machine-learning frameworks, able to cope with a range of large-scale problems and less prone to overfitting (Scholkopf and Smola, 2001). This status quo, however, changed drastically in 2006, when an algorithmic breakthrough allowed the training of very large multilayered, densely connected neural network architectures (Hinton et al., 2006; Bengio et al., 2007). The realization that greedy unsupervised training could be relied on as a means of initializing the network weights and bootstrapping a subsequent supervised fine-tuning back-propagation phase, together with fast-paced technological progress in terms of computational capacity, as well as access to vast resources of training data, all combined to create a new platform where neural networks could, once again, thrive. Since then, these architectures have met with unprecedented success, solving complex problems in a vast array of applications, from image analysis, speech recognition, and natural language processing to automatic robotic control (LeCun et al., 2015). As a matter of fact, these so-called “deep neural networks” are now becoming the de facto one-stop shop of machine-learning solutions.

Why such a landslide? Theoretical results suggest that deep architectures are needed in order to learn complex representations that capture high-level abstractions [see Bengio (2009) for a review]. However, generating such representations proved difficult before the training protocols introduced by Geoff Hinton and collaborators. Just as the early rule-based approaches were ineffective at scaling to new complex problems, so was the labor-intensive “feature engineering” required to translate raw data (e.g., images, speech, and language) into a set of meaningfully formatted inputs digestible by the previous generations of “shallow” machine-learning architectures: a costly process that required a significant investment by highly specialized experts. Deep neural networks, conversely, have been demonstrated not only to be capable of discovering the building blocks that compose the data on which they are trained but also to do so at different levels of abstraction, across multiple layers. Rich features describing the raw inputs are thereby automatically learnt by the network, in a hierarchical fashion, through a layer-wise generative process. Distributed representations emerge as training progresses, each layer representing more abstract concepts formed by the composition of features captured at the lower levels of the architecture (Zeiler and Fergus, 2014). For example, in the case of images, each unit in a given layer can be interpreted as a filter that responds to particular features in an input image: simple ones, such as edges, in the first layers, and more complex aggregates, higher in the network hierarchy. The top layers of the network end up—if sufficiently deep—capturing the content of the image, i.e., forming archetypal representations of the objects on which they have been trained: faces, animals, buildings, etc. (Mahendran and Vedaldi, 2014). In a symmetric movement, the potential of deep architectures to encode complex structures such as images or sound can also be used to produce new expressions of these objects: a property that has initiated a new wave of creative applications in the fine arts.
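By way of illustration, this feature hierarchy can be probed directly on a pretrained network. The minimal sketch below, assuming the torchvision VGG16 model as a stand-in for the architectures discussed, records the activations of an early, a middle, and a late layer: early layers yield large maps of simple, local features, while deeper layers yield coarser maps of many, more abstract channels.

import torch
from torchvision import models

# Minimal sketch: probe the layer-wise feature hierarchy of a pretrained CNN.
vgg = models.vgg16(pretrained=True).features.eval()
image = torch.randn(1, 3, 224, 224)  # placeholder for a preprocessed input image

x = image
with torch.no_grad():
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in (0, 10, 28):  # an early, a middle, and a late convolutional layer
            # Deeper layers: more channels (richer features), smaller maps.
            print(f"layer {i:2d} ({layer.__class__.__name__}): {tuple(x.shape)}")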

Deep Creations in the Field

Graphic Art

Among the most advertised of these recent productions, the striking visual creations of DeepDream have contributed to a public recognition of the “hallucinating” power of deep neural networks (Berov and Kühnberger, 2016). Originally developed in 2014 as part of the ImageNet Large-Scale Visual Recognition Challenge, and based on a convolutional neural network architecture, the DeepDream image generation engine relies on an iterative process of modification of an input image so that the response of specific neurons, characteristic of certain features (e.g., faces or animals), is maximized. Patterns already present in the original image are thus progressively enhanced to let those selected features emerge in the regions of the image where they fit best, giving birth to chimeric constructions in a manner akin to the pareidolia phenomenon that has us seeing familiar shapes while cloud-watching. Processed by the neural network, psychedelic forms and figures combine with the original image to create a landscape evocative of an acid mix between Hieronymus Bosch’s fantastic imagery, Giuseppe Arcimboldo’s portraits, and the slightly oppressive and colorful patchwork paintings of Hervé Di Rosa. Works produced with the DeepDream architecture were exhibited and auctioned in February 2016 at Gray Area, the San Francisco gallery and arts foundation,6 and the code has been released under a Creative Commons license. A public web interface is available at http://DeepDream.com for users to upload and process their own images.
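The activation-maximization loop at the heart of this process can be rendered schematically. The following is a minimal sketch of the general idea, not the released DeepDream code: the input image is modified by gradient ascent so as to amplify the responses of a chosen layer of a pretrained convolutional network (the layer choice and step size are illustrative assumptions).

import torch
from torchvision import models

# A truncated pretrained network; the cut-off layer determines which
# features (textures, eyes, animal parts, ...) will be "hallucinated."
model = models.vgg16(pretrained=True).features[:20].eval()

def deep_dream(image, steps=20, lr=0.05):
    image = image.clone().requires_grad_(True)
    for _ in range(steps):
        model.zero_grad()
        loss = model(image).norm()  # excite the selected layer's activations
        loss.backward()
        with torch.no_grad():
            # Gradient *ascent* on the image itself, normalized for stability.
            image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
            image.grad.zero_()
    return image.detach()

dreamed = deep_dream(torch.rand(1, 3, 224, 224))  # placeholder input image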

Other attempts at integrating deep neural networks within the generative process of visual works have led to exploring the concept of “style transfer.” It was recently shown that a purposely designed deep convolutional network could learn a stylistic content from a reference image as a multi-scale representation of textures. These textures, once applied to a new image, would impart it with the style learnt from the reference image. Producing novel paintings in the style of Van Gogh, Turner, Munch, or Kandinsky then just became a matter of feeding an image to the network and collecting the synthesized outcome (Gatys et al., 2015).7 While largely automated, parameters within the system nonetheless allow some degree of experimentation (beyond the sole choice of input and reference images). Modulating the relative weight between style fidelity and image content allows, for example, a degree of user control over the desired output. This technique has been implemented as part of the http://Instapainting.com web service, where users can produce artworks in their style of preference and commission an artist to physically hand-paint the final product. The works of Vincent Dumoulin and collaborators have recently further extended the control over deep-learning stylistic transfer by allowing multiple styles to be selected and combined at once, opening the way to new forms of visual creation in which a graphical artist could “paint with styles” just as a musician combines sound textures to produce an original mix (Dumoulin et al., 2016). Other approaches to the problem of “non-photorealistic rendering” have relied on a recurrent neural network (RNN) architecture to separate a given style from an image’s content and transform new images accordingly (Zhao and Xu, 2016), or combined convolutional networks with Markov random field priors to better match local feature patches in both images (Li and Wand, 2016). The latter technique has recently been adapted to produce strikingly convincing paintings based on rough doodles or sketches (Champandard, 2016).
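The trade-off between style fidelity and content preservation mentioned above is typically expressed as a weighted sum of two losses. The fragment below is a schematic rendering of the Gatys et al. objective, assuming feature activations have already been extracted from a convolutional network; the weights alpha and beta are the knobs exposed to the user.

import torch
import torch.nn.functional as F

def gram_matrix(features):
    # Style is captured as correlations between feature maps (the Gram matrix),
    # computed here for a single image (batch size of 1 assumed).
    b, c, h, w = features.shape
    f = features.view(c, h * w)
    return f @ f.t() / (c * h * w)

def style_transfer_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    # Schematic Gatys-style objective: total = alpha * content + beta * style.
    # Raising beta favors style fidelity; raising alpha preserves image content.
    content_loss = F.mse_loss(gen_feats["content"], content_feats)
    style_loss = sum(
        F.mse_loss(gram_matrix(g), gram_matrix(s))
        for g, s in zip(gen_feats["style"], style_feats)
    )
    return alpha * content_loss + beta * style_loss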

Musical Composition

In the musical domain, experiments based on deep architectures have prolonged and expanded the corpus-based algorithmic compositions initiated in the 1980s, such as the productions of Emily Howell, David Cope’s latest embodiment of his “Experiments in Musical Intelligence” (Cope, 2005). In these earlier works, a model was trained on selected works (including classical composers, from Palestrina to Rachmaninov, the creations of David Cope and, even, previous compositions by Emily Howell, the generative musical engine itself), leading to musical compositions that integrated the various sources of the corpus they had been trained on. The creations of David Cope, although relying on a training phase, nonetheless required a major human input to prepare, select, and filter the final products (Cope, 2010). More recently, the FlowComposer system has produced startlingly realistic songs. Starting from a corpus of about 13,000 lead sheets from a variety of styles and composers [ranging from jazz to pop and Brazilian music (Pachet et al., 2013)], the system allows a style to be selected [such as “American songwriters” (which contains compositions by Cole Porter, Gershwin, Duke Ellington, and more)], based on which a new lead sheet, composed of a melody and corresponding harmonies, is generated. A musician is then required to give the finishing touches to the composition. “Daddy’s Car,” a pop song in the style of The Beatles, jointly produced by the FlowComposer and French composer Benoît Carré, was part of a set presented at the Gaîté Lyrique concert hall in Paris on the 27th of October 2016. Still, this system too necessitated a significant human effort, both in terms of pre- and post-processing. It is, on the contrary, the capacity of deep architectures to learn structured representations from raw, unprocessed sound data—without requiring an otherwise complex preprocessing phase of “feature engineering” by domain experts—that is, here again, proving effective in further automating the creative process.

Just as images are best treated using specific convolutional network architectures, so musical compositions require bespoke machine-learning architectures to model their complex multi-scale temporal dependencies. RNNs, by taking the output of each hidden layer and feeding it back to itself as an additional input, develop a simple form of memory state that allows them to capture long-term dependencies in input sequences (such as speech or music). This capacity makes them particularly well suited to represent musical samples. Deep versions of RNNs have thus been successfully applied to the generation of a vast range of musical samples, including polyphonic music (Boulanger-Lewandowski et al., 2012; Lyu et al., 2015), Johann Sebastian Bach-inspired piano pieces (Liu and Ramakrishnan, 2014), and Irish folk songs (Colombo et al., 2016). In the same vein as David Cope’s and the FlowComposer’s mixed-source compositions, Daniel Johnson trained a deep RNN on piano pieces from more than two dozen classical composers, spanning from Joseph Haydn to Claude Debussy and Maurice Ravel, producing surprisingly polished results without resorting, this time, to any further human input (Johnson, 2015). A different kind of RNN architecture, the long short-term memory (LSTM) network that had previously been used to generate blues music (Eck and Schmidhuber, 2002), has also been adapted to deep structures. Trained directly on raw data, their productions have often proved more musically plausible than those obtained using other RNN models (Sturm et al., 2016). Other architectures have recently shown promising results in generating satisfying musical compositions. Based on stacks of convolutional layers constrained to follow some essential causality rules (enforcing that the prediction produced by the model at an instant t depends only on past events), the WaveNet network was trained on the MagnaTagATune dataset (Law and Von Ahn, 2009) and the YouTube piano dataset to produce esthetically pleasing music samples (Oord et al., 2016). “Can we use machine learning to create compelling art and music?” is the question that Magenta, another Google Brain research project, tries to answer.8 One of its latest deliveries tackles the problem of producing long-term structure in music: to do so, a standard RNN structure was improved to recognize patterns that occur across longer time intervals. Here again, the results are compelling; so much so that some deep neural network compositions are now considered good enough to be played live in front of a (human) audience. On the 27th of September 2016, the Mark d’Inverno Quintet played a session at the Vortex Jazz Club in London where all of the music had been written by a computer, mostly based on deep-learning architectures. Highlighting the potential of new kinds of interactions between algorithmic creations and human interpretation, d’Inverno noted: “Even if you don’t think machines can be creative by themselves, they can potentially be creative friends. You can imagine a situation when you’re having a conversation with a machine offering prompts as a critical, creative accomplice” (Vincent, 2016).
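To make the mechanics concrete, a next-note generator of this kind can be sketched as follows. This is a minimal illustration, assuming a corpus already encoded as integer note events (e.g., extracted from MIDI files), and not a reconstruction of any of the specific systems cited above: an LSTM predicts the next note, and new material is then sampled note by note from the learnt distribution.

import torch
import torch.nn as nn

VOCAB = 128  # MIDI pitch range

class NoteLSTM(nn.Module):
    # A recurrent model of note-to-note transitions: the hidden state carries
    # a memory of the sequence generated so far.
    def __init__(self, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 64)
        self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, VOCAB)

    def forward(self, notes, state=None):
        x, state = self.lstm(self.embed(notes), state)
        return self.head(x), state

def sample(model, seed_note=60, length=64):
    # Generate a melody note by note from the model's output distribution.
    notes, state = [seed_note], None
    with torch.no_grad():
        for _ in range(length):
            logits, state = model(torch.tensor([[notes[-1]]]), state)
            probs = torch.softmax(logits[0, -1], dim=-1)
            notes.append(torch.multinomial(probs, 1).item())
    return notes

print(sample(NoteLSTM().eval()))  # untrained here; training would precede this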

The Spread of Deep Creation in the Arts

Beyond visual arts and music, applications to textual compositions have also emerged, not only in poetry (Wang et al., 2016; Yan, 2016) and literary creations (Roemmele, 2016) but also in rap song lyrics (Potash et al., 2015) and even in the production of screenplays, where “Benjamin,” an LSTM–RNN “automatic screenwriter” created by Ross Goodwin and Oscar Sharp,9 wrote, autonomously, the sci-fi experimental short “Sunspring” (Newitz, 2016). Generative choreography (Antunes and Fol Leymarie, 2012; Crnkovic-Friis and Crnkovic-Friis, 2016) and creative productions of sculptures (Lehman et al., 2016) have also relied on the expressive potential of deep neural networks.

While a variety of network architectures are employed to tackle the specifics of each mode of expression (including various instances of convolutional networks for images, recurrent networks for audio and text, etc.), a common trait of these approaches is the capacity to detect and encode archetypal representations at different levels of abstraction, thereby capturing some of the natural—or man-made—structures present in the training data. As such, the deep-learning approach to artistic creation moves a step further in the direction of increased autonomy relative to previous generations of computer-assisted or computer-generated artistic tools. This property, together with the reliance on training data to generate an internal model of the artistic form, offers new mechanisms by which a creative artistic vision can manifest itself.

The capacity to produce socially meaningful and artistically relevant objects would have limited societal impact if confined to academic circles. As the public use of automated decision engines starts to raise issues of legal responsibility (as in the case of the algorithms operating drones or self-driving cars) or fundamental rights (as with the use of personal data), the mainstream deployment of automated creative tools is bound to question the notion of authorship and the associated protection of the creations. Indeed, this transition from theory to practice, from toy models to public scrutiny, is already under way. Prompted by the dual effect of the open sourcing of deep-learning development libraries and APIs (which facilitate the practical development of new applications by any computer enthusiast) and the availability of dedicated platforms and services (such as http://DeepDream.com and http://Instapainting.com) where users can upload data and immediately retrieve augmented “deep creations,” the use of deep neural networks to produce artworks is rapidly gaining in popularity. That these new waves of creations may, on their own merit, qualify as “art” is not so much in question (after all, they have been exhibited in galleries and played in concert halls). Still, as they become more popular, issues regarding the protection of deep creations are bound to emerge. What are the rules for an artwork to be copyrightable when the generative process is increasingly automated and emanates from within the arcane traceries of deep architectures? What is the place of the human author when his/her imprint on the creation is less and less traceable? Can deep creations be protected under copyright? And if so, who is the author?

The Protection of Deep Creations

While there is no such thing as a universal copyright law, close to 180 countries are signatories of the Berne Convention,10 an international treaty that defines a common framework for the protection of the rights of the creators of artistic works around the world. The Berne Convention minimum standards award to national jurisdictions the right to prescribe the detailed implementation of the law (as stated in article 2 §2), which leaves room for significant variations in the resulting national copyright legislations (such as in the definition of the subject-matter categories to be brought within copyright, or the exceptions from reproduction rights) (Ginsburg, 2000). A first component of these copyrights or “author’s rights” includes moral rights, consisting of the right of paternity (i.e., the right to be identified as the author of the work) and the right of integrity (i.e., the right for the author to object to any derogatory treatment of his/her work that would be prejudicial to the author’s honor or reputation). A second component relates to exploitation or economic rights. These include the right to communicate the work to the public, the right to make reproductions, the right to make adaptations and arrangements of the work, and the rights of diffusion and exhibition, among others.11 Economic rights are granted to the creator (or to his/her employer for works created by employees in the scope of their employment), with a term of protection that varies according to national law (the Berne Convention fixes it at 50 years after the author’s death, but many countries have adopted a 70-year term instead). While these rights can be assigned to others by license or transferred by contract, moral rights are, on the contrary, not transferable and remain an attribute of the author only (some countries, such as Canada, allow for these rights to be waived, though). As with exploitation rights, their duration differs from country to country: perpetual in France12 or Spain,13 for example, they end with the exploitation right in Germany (that is, 70 years after the author’s death) and are limited to the lifetime of the author in the U.S.14

Before tackling in earnest the specifics of the protectability of “deep creations” and setting the stage for the necessary requirements applicable to these new objects [see The Author’s (Water)Mark in the Deep Creation], the following section will start by stating some of the generally established, fundamental principles governing the attribution of copyright to artistic creations at large (see Deep Creations under Copyright Law).15

Deep Creations under Copyright Law

Article 5 §2 of the Berne Convention explicitly states that the enjoyment and the exercise of copyright shall not be subject to any formality. An official registration procedure is therefore not necessary in order to enjoy protection by copyright, the right emerging, in a sense, from the act of creation itself. Be that as it may, not all artworks are eligible for protection under copyright law. The first stage in the right’s assignment, therefore, lies in assessing whether or not a given work, e.g., a creation generated from a deep neural architecture, is deemed copyrightable. This condition relies in practice on the coexistence of a bundle of essential features: the presence of a “work” (see A Work, Fixed, or Expressed), the manifestation of “originality” in said work [see A (Rather Shallow) Threshold of Originality], and, finally, the existence of one or several individuals responsible for the work (see A Human Presence in the Algorithmic Pipeline).

A Work, Fixed, or Expressed

Article 2 §1 of the Berne Convention allows for the protection of “every production in the literary, scientific and artistic domain, whatever may be the mode or form of its expression,” no other criteria such as merit or destination being taken into account. Article 2 §2 grants national legislations the freedom to prescribe whether or not artworks shall be “fixed in some material form” in order to be protected. In the absence of a precise definition, the notion of “fixation” is open to a wide spectrum of interpretations. In the U.S., for example, works may gain protection only if they are “fixed in any tangible medium of expression, now known or later developed, from which they can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.”16 This applies to a work “when its embodiment in a copy or phonorecord, by or under the authority of the author, is sufficiently permanent or stable to permit it to be perceived, reproduced, or otherwise communicated for a period of more than transitory duration.”17 The same provision applies in the U.K.,18 with the notable difference that, whereas in the U.S. the fixation of the creation must be by or under the supervision of the author, in the U.K. this condition is absent. In civil law countries, a less restrictive prescription of the materialization criterion is applied. The French copyright code states: “[t]he provisions of this Code shall protect the rights of authors in all works of the mind, whatever their kind, form of expression, merit or purpose.”19 Other countries do not set any fixation requirement at all (such as Spain or Germany), the sole requirement being that the work is expressed in a form perceptible by the senses. It is only then, in this process of projection from the internal representation of the author’s vision to a perceptible and tangible (although not necessarily permanent) expression, that the act of creation occurs (Gendreau, 1994). In spite of this large gamut of interpretations, a common denominator of both the Anglo-American “copyright” and the Continental European “author’s right,” underlying the requirement for fixation/expression, is that copyright law does not protect “ideas” but, rather, the form in which ideas are expressed. An artwork should, and must, therefore leave the “closed box” in which it originated, be it the mental state of the artist or the internal configuration of a machine-learning model, in order to be considered for protection under the regime of copyright. The digitally filtered images created on http://DeepDream.com or http://Instapainting.com will all satisfy this criterion as soon as they are displayed and recorded, as will the style-transfer images or videos captured from live scenes and processed in the moment, as demonstrated by Facebook in a recent prototype based on Caffe2Go, a new deep-learning platform porting convolutional neural techniques to mobile devices (Gatys et al., 2016a).

A (Rather Shallow) Threshold of Originality

As article 2 §1 of the Berne Convention does not set any limitations to the protection of artistic works, national jurisdictions have independently specified the minimum threshold that a copyrightable work must meet. A common standard states that a copyrightable work should originate from an author’s creative effort and not be the mere copy of a preexisting work. This principle forms the basis of the second condition for a work to be protected by copyright: the requirement for originality. In the absence of any positive definition in the national laws, the interpretation of this notion has been left to the courts. In this sense, the European Union Council Directive 93/98/EEC held in 1993 that a “photographic work within the meaning of the Berne Convention is to be considered original if it is the author’s own intellectual creation reflecting his personality, no other criteria such as merit or purpose being taken into account” (recital 17 of the preamble). In 2012, a decision of the Court of Justice of the European Union further clarified the condition of originality by stating that “an intellectual creation is an author’s own if it reflects the author’s personality. That is the case if the author was able to express his creative abilities in the production of the work by making free and creative choices.”20 This decision therefore requires E.U. national courts to determine, in each specific case, whether the work is “an intellectual creation of the author reflecting his personality and expressing his free and creative choices.”21 This principle is similarly reflected in article 6 of the E.U. Copyright Term Directive (2006/116/EC). In U.S. law, the notion of originality is considered a constitutional sine qua non of copyright protection. It was invoked in a 1991 decision of the United States Supreme Court which stated that copyright protection could only be granted to “works of authorship” that possess “at least some minimal degree of creativity,” thereby excluding the attribution of a copyright on the sole justification of labor (the “sweat of the brow”). However, as the Court further stated, “the requisite level of creativity is extremely low; even a slight amount will suffice. The vast majority of works make the grade quite easily, as they possess some creative spark.”22 Although the definition of originality varies between national legislations, the imprint of an individual’s “personality” or “creative spark” is commonly required as a minimum threshold to allow protection under copyright.

Implicit in this delineation of the concept of originality, and particularly relevant when addressing the case of computer-generated works, is the necessary presence of a creator from whom the “personality” or the “creative spark” emanates. Creations obtained through deep architectures will have to comply with this fundamental principle: should they result from a purely automated process, independent of any human input, they would be excluded, on this basis, from copyright protection (in most jurisdictions, at least: we will see in the next section that the U.K. and some other countries carry provisions to the contrary). The fact that deep creations rely on a training phase, where man-made examples serve to train a model, would, in most cases, incorporate a human component in the generative process. However, if no specific contribution of any individual source used to train the model is recognizable in the final reaggregated piece, the link to a specific, identifiable “personality” would be missing, thereby hindering the unambiguous attribution of one or more “authors” to the final product.

A particular case worth defining in the context of machine-learning creations concerns “derivative works,” i.e., creations based, in whole or in part, on another work. If, indeed, a significant stylistic component from the training corpus is detected in the final creation, then it may be considered a “derivative” of the original source(s). Article 2 §3 of the Berne Convention provides that “[t]ranslations, adaptations, arrangements of music and other alterations of a literary or artistic work shall be protected as original works without prejudice to the copyright in the original work.” This disposition is followed in most national jurisdictions (U.S. 1976 Copyright Act § 101; German Copyright Act, art. 3; French Intellectual Property Code, art. L.112-3). The wording of the Convention makes it clear that the consent of the author of the source work is required in order to alter it without infringing on the original work (a provision explicitly stated in the U.S.: “protection for a work employing preexisting material in which copyright subsists does not extend to any part of the work in which such material has been used unlawfully”23). Still, U.K. courts have ruled to the contrary, allowing copyright to be granted to a derivative work even though it infringes on the source work.24 While dependent on a training set, the artwork resulting from processing through a machine-learning architecture may still not satisfy the originality rule if no particular individual supplementary creative input can be identified therein. Consider the user of a “style transfer” application, as previously described, having selected two images in the public domain, Leonardo da Vinci’s Mona Lisa, say, and Johannes Vermeer’s Girl with a Pearl Earring. The combined painting processed through the deep model may be unique and reflect somewhat the style of the two artists, but the simple selection of these two classic artworks would certainly not suffice to imprint the user’s “personality” or “creative spark” on the final product. Similarly, a piano sonata produced from a model trained on a generic database of all Johann Sebastian Bach’s works would be void of an additional, copyrightable contribution.

A Human Presence in the Algorithmic Pipeline

The requirement for an original work establishes a particular link between a human creator and a materialized object. This ethereal, watermarked presence of the author’s “personality” in the created form is a cornerstone of the modern principle of artwork copyrightability and is, symmetrically, an essential component that defines the author. That a creator is required for an artwork to be copyrighted appears, therefore, as a legal imperative. In most Continental European copyright systems, the creator is considered the de facto author of the work. Furthermore, only a natural person may be considered an author: the French “droit d’auteur” refers to the copyrighted work as a “creation of the mind” (“une oeuvre de l’esprit,” as defined in art. L.111-1 of the French Intellectual Property Code). Similarly, section 7 of the German Copyright Act states that “the rights holder is the creator of the work” (“Urheber ist der Schöpfer des Werkes”). The requirement of a human author is also made explicit in U.S. copyright law in the context of artwork generated through automated processes: the U.S. Copyright Office has indeed taken the position that, “in order to be entitled to copyright registration, a work must be the product of human authorship. Works produced by mechanical processes or random selection without any contribution by a human author are not registrable.”25 Finding the author, however, may not be so trivial when the intervention of an automaton in the creative pipeline, in the form of a non-linear algorithmic process, blurs the human contribution to the point where it is hardly discernible anymore.

What happens, then, when the work is genuinely produced independently by an algorithmic process, without any significant human input? Most national copyright laws would consider such a work to be in the public domain, in the sense that it belongs to a category of creations not protected by copyright law. However, this position is not universally accepted. In the case of strictly “computer-generated” works (meaning that the work is generated by a computer in circumstances such that there is no human creator of the work), the U.K. copyright code indeed provides that “the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”26 A few other jurisdictions (such as India, South Africa, Hong Kong, or New Zealand) have opted for similar rules in relation to computer-generated works. Under this provision, the author is not the creator (considered here to be the machine), but the individual responsible for the “arrangements necessary for the creation of the work.” While this undoubtedly prevents a computer-generated creation from falling into the public domain, the law nonetheless leaves undefined, and therefore open to interpretation, the exact role of the person by whom the arrangements are made. Could it be the user of the deep neural network? The programmer who implemented that particular instance of a recurrent or convolutional neural network? The individual who selected the training set on which the internal weights were optimized? The investor who paid for the “app” and financed the development of the system?27

From finding the natural person at the source of the creative spark to determining those by whom the creation was made possible, whichever jurisdiction is favored, the matter of copyright attribution to original works produced through deep machine-learning architectures will therefore largely lie in identifying the author(s) in the algorithmic haystack.

The Author’s (Water)Mark in the Deep Creation

The potential scope of novel creative applications opened by machine-learning techniques prevents a general “copyright framework” for their creations from being devised. In order to explore some of the legal issues posed by these new creative tools, we will consider two applications of deep learning in the arts where the technique has led to original creative uses: style transfer of graphical artworks (see Style Transfer, Derivative Work, and Fair Use) and the automated generation of musical compositions from a training corpus (see The Artist as a Database Curator).

Style Transfer, Derivative Work, and Fair Use

The possibility offered by recent deep-learning methodologies to separate an artwork’s style from its subject-matter, and to subsequently transpose this style onto another object, has led to the production of unexpected imagery. Artistic thought experiments (“What if Picasso or Van Gogh had painted the Mona Lisa?28 Or my portrait?”) can now be attempted, and their results visualized on a display, with minimal human interaction. Some experiments have even recreated sequences of movies in the style of famous artists (e.g., Stanley Kubrick’s 2001: A Space Odyssey as if painted by Picasso29), or applied Van Gogh’s brush stroke and palette to a live video capture (Ruder et al., 2016). While this variation on the classical form of “pastiche”30 is not, strictly speaking, a new entry in the microcosm of algorithmic art [previous attempts at “learning style” from images (Drori et al., 2003) had indeed already proved the potential of the technique], deep-learning approaches have brought a qualitative boost, as well as new levels of automation, to this established practice. The process of “style transfer” is now gaining popularity through the availability of web platforms, such as http://Instapainting.com, where users can experiment freely with the technique. Among these services, @DeepForger (a Twitter bot created by @alexjc) boasts “[t]he Secret Manual to Creating Deep Forgeries.” The deep-learning-based bot automatically “paint[s] your photos using techniques from famous artists.”31 With the offering of such tools, the number of automatically generated composite artworks incorporating the contributions of multiple sources is bound to multiply. Since it is precisely the expression of an artwork (i.e., the form through which an artist expresses his/her voice, as opposed to an idea) that falls under the umbrella of copyright, “style transfer” strikes directly at the core of the protection. What, then, will be the status of these hybrid visual creations with respect to copyright laws?

Two mechanisms will participate in specifying the copyright assignment of the composite pieces: firstly, the transitivity of the copyright from an originally protected source to an image derived therefrom, and, secondly, the original selection of images used to produce the composition.

Let us first examine the legal relation with the reference images used for the composition. If the user owns the copyrights to the two images used in the composition (or, similarly, if one is in the public domain), the user will keep a copyright on the derived work once the style-transfer operation has been carried out. Assuming the presence of protected visual sources is recognizable in the combined product (which would depend on the parametrization of the transfer and should be examined case by case), care should then be taken to obtain the authorization of the original copyright owners to avoid infringement.

Could the work be considered to fall within the framework of “fair dealing” or “fair use”? In the European Union, Directive 2001/29/EC on the “harmonization of certain aspects of copyright and related rights in the information society” includes “pastiche” in its exhaustive list of copyright exceptions [art. 5 §3(k)]. Its purpose is to enable artists to make minor use of other people’s copyright material without infringing on the reference material. However, to justify the exception, the public should immediately understand that the object of the composition is not to appropriate the notoriety of the reference artwork’s author.32 In the U.S., this form of pastiche would fall under the doctrine of “fair use” under section 107 of the 1976 Copyright Act. To evaluate whether fair use applies to a composite artwork, the courts would consider the amount of the original work used, whether the reference image has been sufficiently altered, the potential degree of confusion with the original artwork, and the risk of commercial conflict with its rights holder. The fact that, in the “style transfer” mechanics, the whole of the reference image’s style is used to modify a second visual source would assuredly prevent a straightforward application of the “pastiche exception.” As deep-learning style-transfer algorithms become increasingly efficient at separating the stylistic characteristics of an artwork from its content, the resulting composition may not satisfy the requirement that the derived work appear clearly unrelated to its reference. As a result, an uncertainty as to the origin of the art may ensue, as would a risk of interference with the interests of the reference art’s owner. It is only if the combined work offers a significant difference from the source (e.g., forms the basis of a critical commentary) and if the original source is mentioned that it could be considered a form of free expression protected under fair use, provided it is for non-commercial use.

Independently of a potential infringement on the reference art, for copyright protection to be granted to a derivative work (in the sense of an “alteration” of a previous “reference” work), it must include an additional original contribution. In the context of “style transfer,” this contribution lies in the specific selection and subsequent merging of two images. Could such a mix suffice to reach the threshold of originality required by copyright law? As the combination itself is the result of the automated processing in the deep-learning engine, the remaining source of originality would rest solely on the selection of source pictures. Although each case would be judged on its particular merits, this appears rather doubtful, for it would be difficult, in most cases at least, to justify an “imprint of one’s personality” based on this limited choice. With the exception of the U.K. (and other jurisdictions where similar provisions apply), where the user may claim authorship of the final work as “the person by whom the arrangements necessary for the creation of the work are undertaken,” it is probable that most courts would deny copyright on the derived composition and only retain the copyright of the source images. However, recent developments in neural style transfer now enable users to select multiple styles, from a variety of sources, in order to compose a mixed composite image, in a manner reminiscent of musical remix works (Dumoulin et al., 2016; Gatys et al., 2016b). As the number of “free parameters” offered by these new approaches increases, it is expected that the degree of creative choice they offer will soon reach the minimum requirement of an original expression that would justify, as such, copyright protection.

The Artist as a Database Curator

A characteristic of machine-learning-based art is its dependence on a corpus of training data from which a model is created. As with style transfer, the reference to existing, potentially copyrighted sources will raise the possibility of infringement. The selection of source data used to develop a deep-generative model may also provide an opportunity for creative input. In the case of a deep neural architecture, the system would form a model of, say, a set of musical sources by first identifying correlations in their sub-elements (audio frames, if raw musical material is used for training, or elements of a MIDI file, if the compositions are so encoded). Layer after layer, more complex (e.g., long-term) correlations may emerge that are characteristic of the structures observed in the training corpus. Once the internal deep model is stabilized, the generation of novel musical pieces will merge these features in an attempt to produce a composition that conforms to the probability of associations learnt during training. In this process of disaggregation (of the musical source) into subunits, correlation (of said subunits) into features, and reaggregation (of features) into a final composition, the chance of retaining significant sequences of the sources appears slim. But, in practice, whether material may persist through the non-linear processing of the deep neural model will largely depend on the nature of the corpus, e.g., its heterogeneity (or lack thereof), and the parameters used for training (in particular the avoidance of “overfitting,” i.e., when the model generalizes poorly and tends to reproduce the exact samples on which it has been trained, a well-recognized problem in neural network architectures).

The risk of infringement will be directly dependent on the possibility of recognizing, in the final product, the characteristics of the works used in the training of the model. When a deep RNN is trained on the full collection of Bach’s piano pieces, it is expected that the generative model thus constructed will lead to compositions evocative of the master’s works and even reproduce, or imitate closely, small sections of the original works. If the musical training corpus is not in the public domain, the resulting musical pieces would then potentially infringe on the source material. The evaluation of an alleged infringement will depend on measuring a likeness between the machine-generated product and the source data: if the original work has been so transformed that insufficient similarities remain in the final work, no infringement would be found. Conversely, if the work produced through the deep model contains identifiable elements of an original work, it will violate the right of reproduction of the source’s author. In the U.S., the threshold of similarity has been set to a very low level indeed since the controversial decision Bridgeport Music, Inc. v. Dimension Films.33 Arguing that “sampling is never accidental […] [w]hen you sample a sound recording you know you are taking another’s work product,”34 and holding that the reproduction of three notes from an original composition justified infringement, the court got rid, in essence, of the de minimis doctrine in the context of digital music sampling. In the same vein, a 2008 decision of the Federal Court of Justice of Germany (Bundesgerichtshof) stated that the “smallest audio fragments” are copyrightable and that sampling a few bars of a drum beat can be the basis for copyright infringement.35 In France, a court decision of 2000 held that “the personality of the author may transpire independently from the number of notes” and thus, as long as the original composition is recognizable, its unauthorized reproduction may justify infringement.36 Similarly, an older decision stated that the reproduction of four identical notes in a refrain constitutes an act of infringement.37 These extremely low thresholds in the application of the similarity metric used to estimate infringement in the field of music could severely limit the undiscerning use of copyrighted works in a machine-learning training corpus. The generation of musical creations may therefore require some additional cautionary procedures in the training of deep-generative models. In particular, it would be prudent to avoid overfitting the original sources in order to minimize the risk of “memorizing” and reproducing full sequences from the training set. The use of a “dropout” technique (Srivastava et al., 2014) or increasing the number of layers of the network [as shallow architectures are more prone to overfitting than deep models (Ba and Caruana, 2014)] would help prevent such an outcome. Other approaches could rely on incorporating a posteriori procedural measures of similarity in order to detect and avoid plagiarism in computer-generated textual or musical works (Papadopoulos et al., 2014) or in images (Polatkan et al., 2009).
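As an illustration of the cautionary measures just mentioned, the note-generation sketch given earlier could be regularized with dropout along the following lines (the placement and rate of the dropout layers are illustrative design choices, not a prescription).

import torch.nn as nn

class RegularizedNoteLSTM(nn.Module):
    # Adding dropout (Srivastava et al., 2014) to a generative LSTM to reduce
    # the risk of memorizing, and later reproducing, training sequences.
    def __init__(self, vocab=128, hidden=256, p_drop=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab, 64)
        # Dropout applied between the stacked LSTM layers...
        self.lstm = nn.LSTM(64, hidden, num_layers=3, dropout=p_drop,
                            batch_first=True)
        # ...and again before the output projection.
        self.drop = nn.Dropout(p_drop)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, notes, state=None):
        x, state = self.lstm(self.embed(notes), state)
        return self.head(self.drop(x)), state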

Could the generation of artworks based on the selection of a specific training corpus justify the attribution of a copyright? It is now well established that the use of a computer as a tool does not prevent the creation of an original work, the essential test being the measure of “originality” previously discussed. The task will then consist in evaluating the presence of the author’s personality, or his/her creative spark, in the product obtained from a deep-generative process. Here, a useful analogy can be made with David Cope’s creations. Emily Howell, the machine-learning program created by David Cope to automate the generation of musical works, indeed relies on the manual, thoughtful definition of a unique training corpus (consisting, as mentioned earlier, of a mix of works by classical composers, but also including David Cope’s own compositions, as well as a selection of previous hand-picked outputs from Emily Howell itself). When David Cope selected this improbable combination of musical sources, he undoubtedly manifested his creative intent. The generated work, although produced through the transformative pipeline of a computer program, corresponds, then, to a unique piece that only David Cope could have produced. At the opposite end of the spectrum, the use of a deep architecture to produce compositions in the style of J.S. Bach, based on a model trained on all his piano works, would not justify the claim to an “original” contribution (Liu and Ramakrishnan, 2014). Similarly, when Daniel Johnson uses the full collection of preformatted sources from the “Classical Piano MIDI Page”38 to train his RNN model,39 the result, however interesting and musically believable it may be, would hardly qualify as a reflection of Johnson’s personality. The same principle would also apply to the automated visual creations of the DeepDream engine: if the sole input lies in the selection of the source image on which the deep model “hallucinates,” no additional “originality” would result from the processing of said image.40

Whereas most jurisdictions would grant David Cope authorship of the products of Emily Howell, only in the UK (and in the other countries following the same practice) would productions of the latter kind be granted a copyright. Indeed, should the creative process be considered “computer generated,” the individuals who took the necessary steps for the production of the artwork would be considered its “authors,” thereby granting them copyright in all productions generated by the automated process. Offering systematic copyright protection to such automated production is not without risk, though. As machine-learning creative engines progress, the large-scale production of artistic goods on par with human productions is bound to reach an ever-growing audience. A flood of machine-based creations may follow. Take Melomics 109, the program developed by the University of Malaga in Spain, which used a combination of genetic algorithms and composition rules to produce a database of a billion unique songs (Quintana et al., 2013): while these compositions are part of the public domain, others may not be, and could give rise to innumerable litigations.

Imagine an automaton that produces novel objects, each different from the next, be it a text, a three-dimensional shape, a sequence of sounds, or a graphical form. The creator of the machine has devised it so that it is capable of learning from known artworks. No specific selection of the works has been made and, for that matter, the automaton could pick at random from the corpus of preexisting art. The intellectual contribution of the creator of the automaton consisted, exclusively, in defining how the automaton can effectively learn a generative process capable of producing such texts, shapes, or sounds. Should the creator of the automaton hold a right over its output? The creator’s right, as proprietor of the end products, to sell or license them is assuredly not in question. But copyrights and author’s rights bring further prerogatives, both economic and moral, that extend well beyond mere ownership. Should these rights be systematically assigned to the aforementioned creator of the automaton? If a function of copyright is to promote creativity, an avalanche of automatically produced copyrighted creations may deter artists from expressing their voices and publishing their creations, for fear of infringing on the protected material (Jacobson, 2011). Care should be taken, therefore, to limit copyright attribution to the creations that are indeed the locus of a “creative spark,” a human one, that is, and not just the electric glint of a computational engine.

Conclusion

From graphical productions to musical compositions, deep neural network architectures are now the main algorithmic engine driving new forms of creative endeavor. Ever wished Rembrandt had painted your portrait? Style transfer allows one artist’s mannerisms to be applied to a second visual object. How about a radio station that delivers a brand new Coltrane album every hour? Deep learning allows musical pieces to be generated, at the click of a button, in the style of a given composer or band. The increasing ease with which these algorithmic processes produce new artworks that rival human productions, as much as the automation of functions traditionally devoted to the artist, raises fundamental questions about the value of the creative process and the protection of the artifacts generated by it. Copyright laws, as a framework to promote creativity and protect authors, are at the epicenter of this new debate. Who is the author when the machine learns from a corpus of preexisting works? How can the human contributions be identified when the creation results from a complex process of decomposition and re-composition within the opaque construct of a deep neural network model? Answering these questions requires delving into the generative process at the core of the algorithmic engines and seeking the mechanism by which the personality of the author can be instilled in the final creation.

As the community embraces new tools that augment and stimulate the production of artworks, new sources of conflict arise for which existing legal frameworks do not offer systematic, harmonized mechanisms of resolution. Necessarily dependent on training data, deep creations indeed carry in their very fabric the ghostly, yet potentially perceptible, presence of the sources that helped forge the model from which they emerge. If left unchecked, it is the learning process at the center of these “creation machines” that may, therefore, lead to a systemic appropriation of protected material. Given the strict similarity metrics used to compare, for example, musical compositions, these new creations could pave the way to many infringement cases and greatly limit the application of deep architectures to the arts. A second concern lies in granting, as some jurisdictions do, copyright protection to automatically generated artworks. The unbounded production of copyrighted artifacts would end up artificially inflating the protected domains and contribute to stifling the creative ambitions of many artists, locked in a labyrinth of potential infringements. If copyright is to remain an incentive to creativity, an adaptation of some of its underlying principles may be required. Allowing for a fair use of existing artistic corpora, while limiting the attribution of rights to those artworks that reflect an original contribution of human creators, might help maintain the function of copyright as a catalyst of creativity.

Author Contributions

The author confirms being the sole contributor of this work and approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The reviewer FS and handling Editor declared their shared affiliation, and the handling Editor states that the process nevertheless met the standards of a fair and objective review.

Acknowledgments

The author wishes to thank Andrew Thean for proofreading this text and Franck Macrez for his input on an earlier version of this work. The author is grateful to the reviewers for their constructive input, which contributed to improving the quality of the paper.

Footnotes

  1. ^“Neural Information Processing Systems” (NIPS), one of the major machine-learning conferences, hosted a workshop on the very subject of “Constructive Machine Learning” in 2013, 2015, and 2016 (http://www.cs.nott.ac.uk/~psztg/cml/). For a general discussion on the field of computational creativity, the reader is referred to the foundational work of Boden (1999, 2010) as well as the more recent studies by Colton and Wiggins (2012) and Jordanous (2012).
  2. ^Other novel technical tools, such as blockchain (see, e.g., Zeilinger, 2016), are bound to impact the intellectual property landscape, in particular in terms of the traceability of creative objects. The scope of the present study will, however, be limited to the subset of technical intermediaries that may participate in (or substantially influence) the creative generation of artistic artifacts.
  3. ^“Supposing, for instance, that the fundamental relations of pitched sound in the signs of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent” (Bowles, 1970, p. 4). Lady Lovelace was not of the opinion, though, that machines could ever be considered “creative,” on the ground that “they could not originate anything” (Dartnall, 1994, p. 33).
  4. ^See Xenakis and Kanach (1992) and other productions of the “Equipe de Mathématique et Automatique Musicales” (EMAMu), the computer-assisted composition laboratory founded by Xenakis in 1966.
  5. ^Michael Mozer recognized that his early recurrent neural network compositions “were not musically coherent, lacking thematic structure and having minimal phrase structure and rhythmic organization” (Mozer, 1994, p. 280), thereby highlighting the difficulty basic neural network architectures have in memorizing and learning longer musical phrases.
  6. ^See http://grayarea.org/event/deepdream-the-art-of-neural-networks/.
  7. ^Some of the most effective generative models currently rely on “adversarial” training (Goodfellow et al., 2014). These training architectures consist of a pair of models: a “generative model” (that attempts to generate simulated data mimicking the statistical properties of the input data, e.g., works by J. S. Bach, or paintings by V. Van Gogh, that resemble the pieces from the training set) and a second, “discriminative model” that tries to separate true input data (i.e., the real pieces of Bach or Van Gogh) from the simulated ones. During the training phase, the generative model is optimized to fool the discriminative one and, conversely, the discriminative model learns not to be fooled by the artificial data output by the generative model. The outcome of this alternated, iterative training is a generative model far better at simulating data in the style of the training corpus than if it had been trained in isolation (Im et al., 2016). A minimal code sketch of this adversarial setup is given after these footnotes.
  8. ^See https://magenta.tensorflow.org.
  9. ^See Ross Goodwin’s website: http://benjamin.wtf.
  10. ^Berne Convention for the Protection of Literary and Artistic Works, Sept. 9, 1886, as revised at Paris on July 24, 1971 and amended on September 28, 1979, S. Treaty Doc. No. 99–27 (1986), hereinafter Berne Convention.
  11. ^Art. 6bis of the Berne Convention.
  12. ^Art. L.121-1 to L.121-9, French Intellectual Property Code.
  13. ^Art. 14, Spanish copyright law.
  14. ^17 U.S. Code §106A “Rights of certain authors to attribution and integrity” (in the context of visual art).
  15. ^For a general review of the copyrightability of artificial intelligence-generated works in the context of U.S. law, see Bridy (2012).
  16. ^U.S. 1976 Copyright Act, §102.
  17. ^U.S. 1976 Copyright Act, §101.
  18. ^U.K. Copyright, Designs and Patents Act 1988 §3(3).
  19. ^Art. L.111-2, French Intellectual Property Code.
  20. ^Painer v. Standard Verlags GmbH C145/10, 2012, ECDR 6, at 89.
  21. ^Id., at 94.
  22. ^Feist Publications v. Rural Telephone Service, 499 U.S. 340, 345 (1991).
  23. ^U.S. 1976 Copyright Act §103(a).
  24. ^Redwood Music Ltd. v. Chappell & Co. Ltd., [Q.B. 1982] R.P.C. 109, 120.
  25. ^Compendium of U.S. Copyright Office Practices, Third Edition (December 22, 2014), section 313.2 “Works that lack human authorship”.
  26. ^Copyright, Designs and Patents Act (United Kingdom), 1988, ch. 48, §§ 9(3), 178.
  27. ^In one of the rare instances referring to “computer-generated works” in the context of section 9(3) of the U.K. Copyright, Designs and Patents Act, the England and Wales High Court ruled that the image frames displayed by a computer game were held to be authored by the person who “devised the appearance of the various elements of the game and the rules and logic by which each frame is generated and […] wrote the relevant computer program.” (Nova Productions Ltd v. Mazooma Games Ltd, [2006] EWHC 24 (Ch.), 20 January 2006, at 105).
  28. ^See http://www.genekogan.com/works/style-transfer.html.
  29. ^As can be watched at: https://vimeo.com/169187915.
  30. ^Defined as “an artistic work in a style that imitates that of another work, artist, or period” (Merriam-Webster dictionary).
  31. ^See https://nucl.ai/blog/forgeries-user-guide/.
  32. ^Appeal court, Paris, 11 May 1993, RIDA July 1993, p. 340.
  33. ^Bridgeport Music, Inc. v. Dimension Films, 410 F.3d 792 (6th Cir. 2005).
  34. ^Id. at 399.
  35. ^Metall Auf Metall (Kraftwerk et al. v. Moses Pelham et al.) Decision of the German Federal Supreme Court No. I ZR 112/06, November 20, 2008, at 56 Journal of the Copyright Society 1017 (2009).
  36. ^Tribunal de grande instance, Paris, 5 July 2000: Com. comm. électr., March 2001, comm. No. 23, obs. C. Caron.
  37. ^Appeal Court, Paris, 13 Nov. 1969: RIDA Apr. 1970, p. 145.
  38. ^http://www.piano-midi.de.
  39. ^See http://www.hexahedria.com/2015/08/03/composing-music-with-recurrent-neural-networks/; the code for this project is available at https://github.com/hexahedria/biaxial-rnn-music-composition.
  40. ^However, the modification of the model parameters, i.e., the selection of the layers and specific units activated to produce a particular effect (or, as with David Cope, the selection of specific sources used for training the model), may contribute to conferring a sufficient level of “originality” on the final work.
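
As announced in footnote 7, the following is a minimal sketch of an adversarial training loop. It is a hypothetical toy example: the PyTorch framework, the layer sizes, and the random tensor standing in for a batch of encoded training works are all assumptions of this sketch rather than details of the systems cited.

```python
# Minimal sketch of adversarial training (after Goodfellow et al., 2014):
# a generator G learns to produce data that a discriminator D cannot
# distinguish from "real" works, while D learns to tell them apart.
import torch
import torch.nn as nn

d_noise, d_data = 16, 64
G = nn.Sequential(nn.Linear(d_noise, 128), nn.ReLU(), nn.Linear(128, d_data))
D = nn.Sequential(nn.Linear(d_data, 128), nn.ReLU(), nn.Linear(128, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, d_data)  # placeholder for a batch of encoded works

for step in range(100):
    # Discriminator step: label real works 1, generated ones 0.
    fake = G(torch.randn(32, d_noise)).detach()
    loss_D = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator step: try to make D label generated data as real.
    fake = G(torch.randn(32, d_noise))
    loss_G = bce(D(fake), torch.ones(32, 1))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```

After such a training run, only the generator is retained: sampling new noise vectors and passing them through G yields new artifacts in the style of the corpus.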

References

Antunes, R.F., and Fol Leymarie, F. (2012). Generative choreography: animating in real-time dancing avatars. In International Conference on Evolutionary and Biologically Inspired Music and Art, 1–10. Berlin, Heidelberg: Springer.

Ba, J., and Caruana, R. (2014). Do deep nets really need to be deep? Advances in Neural Information Processing Systems 27: 2654–2662.

Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems 19: 153–160.

Bengio, Yoshua (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning 2: 1–127. doi:10.1561/2200000006

Berov, L., and Kuhnberger, K.-U. (2016). Visual hallucination for computational creation. In Proceedings of the Seventh International Conference on Computational Creativity (ICCC 2016), Edited by F. Pachet, A. Cardoso, V. Corruble, and F. Ghedini. (Paris, France: Sony CSL Paris, France), 107–114.

Boden, M.A. (1999). Computational models of creativity. In Handbook of Creativity, Edited by R.J. Sternberg. Cambridge University Press. 351–373.

Boden, M.A. (2010). Creativity & Art. Oxford University Press.

Boulanger-Lewandowski, N., Bengio, Y., and Vincent, P. (2012). Modeling temporal dependencies in high-dimensional sequences: application to polyphonic music generation and transcription. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), Edinburgh, Scotland, 1159–1166.

Bowles, E. (1970). Musicke’s Handmaiden: Or Technology in the Service of the Arts. Cornell University Press.

Bridy, A. (2012). Coding creativity: copyright and the artificially intelligent author. Stanford Technology Law Review 5:1–28.

Buccafusco, C., Burns, Z.C., Fromer, J.C., and Christopher, J.S. (2014). Experimental tests of intellectual property laws’ creativity thresholds. Texas Law Review 93: 1921–1980.

Buchanan, B.G., and Duda, R.O. (1983). Principles of rule-based expert systems. Advances in Computers 22: 163–216. doi:10.1016/S0065-2458(08)60129-1

Champandard, A.J. (2016). Semantic style transfer and turning two-bit doodles into fine artworks. arXiv:1603.01768, preprint.

Colombo, F., Muscinelli, S.P., Seeholzer, A., Brea, J., Gerstner, W. (2016). Algorithmic composition of melodies with deep recurrent neural networks. arXiv:1606.07251, preprint.

Colton, S., and Wiggins, G.A. (2012). Computational creativity: the final frontier? In 20th European Conference on Artificial Intelligence (ECAI), Vol. 12, Montpellier, France, 21–26.

Cope, D. (2005). Computer Models of Musical Creativity. MIT Press.

Cope, D. (2010). Recombinant Music Composition Algorithm and Method of Using the Same. U.S. Patent No 7,696,426. Washington, DC: U.S. Patent and Trademark Office.

Crnkovic-Friis, L., and Crnkovic-Friis, L. (2016). Generative choreography using deep learning. arXiv:1605.06921, preprint.

Dartnall, T. ed. (1994). Artificial Intelligence and Creativity: An Interdisciplinary Approach. Vol. 17. Springer Science & Business Media.

Drori, I., Cohen-Or, D., and Yeshurun, H. (2003). Example-based style synthesis. In Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society Conference on, Vol. 2, Toronto, Canada, 143–150.

Dumoulin, V., Shlens, J., Kudlur, M. (2016). A learned representation for artistic style. arXiv:1610.07629, preprint.

Eck, D., and Schmidhuber, J. (2002). Finding temporal structure in music: blues improvisation with LSTM recurrent networks. In Neural Networks for Signal Processing, 2002. Proceedings of the 2002 12th IEEE Workshop on, Martigny, 747–756.

Gatys, L., Ecker, A., Bethge, M. (2015). A neural algorithm of artistic style. arXiv:1508.06576, preprint.

Gatys, L.A., Ecker, A.S., and Bethge, M. (2016a). Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2414–2423.

Gatys, L.A., Ecker, A.S., Bethge, M., Hertzmann, A., Shechtman, E. (2016b). Controlling perceptual factors in neural style transfer. arXiv:1611.07865, preprint.

Gendreau, Y. (1994). The criterion of fixation in copyright law. Revue Internationale du Droit d’Auteur (RIDA) 159: 110–203.

Ginsburg, J.C. (2000). International copyright: from a bundle of national copyright laws to a supranational code. Journal of the Copyright Society of the USA 47: 265–413.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., et al. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), Montréal, Canada, 2672–2680.

Grout, D.J., and Palisca, C.V. (1996). A History of Western Music. 5th ed. New York: W. W. Norton & Company.

Hiller, L., and Isaacson, L. (1959). Experimental Music: Composition with an Electronic Computer. McGraw-Hill.

Hinton, G.E., Osindero, S., and Teh, Y.W. (2006). A fast learning algorithm for deep belief nets. Neural Computation 18: 1527–54. doi:10.1162/neco.2006.18.7.1527

Im, D.J., Kim, C.D., Jiang, H., Memisevic, R. (2016). Generating images with recurrent adversarial networks. arXiv:1602.05110, preprint.

Jacobson, W.P. (2011). Robot’s record: protecting the value of intellectual property in music when automation drives the marginal costs of music production to zero. Loyola of Los Angeles Entertainment Law Review 32: 31–46.

Jordanous, A. (2012). A standardised procedure for evaluating creative systems: computational creativity evaluation based on what it is to be creative. Cognitive Computation 4: 246–79. doi:10.1007/s12559-012-9156-1

Klütsch, C. (2007). Computer graphic – aesthetic experiments between two cultures. Leonardo 40: 421–5. doi:10.1162/leon.2007.40.5.421

Law, E., and Von Ahn, L. (2009). Input-agreement: a new mechanism for collecting data using human computation games. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Atlanta, Georgia, USA, 1197–1206.

LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature 521: 436–44. doi:10.1038/nature14539

LeCun, Y., Jackel, L.D., Bottou, L., Brunot, A., Cortes, C., Denker, J.S., et al. (1995). Comparison of learning algorithms for handwritten digit recognition. In International Conference on Artificial Neural Networks, Vol. 60, Perth, Australia, 53–60.

Lehman, J., Risi, S., and Clune, J. (2016). Creative generation of 3D objects with deep learning and innovation engines. In Proceedings of the Seventh International Conference on Computational Creativity (ICCC 2016). Edited by F. Pachet, A. Cardoso, V. Corruble, and F. Ghedini. (Paris, France: Sony CSL Paris, France), 180–187.

Li, C., and Wand, M. (2016). Combining Markov random fields and convolutional neural networks for image synthesis. arXiv:1601.04589, preprint.

Liu, I., and Ramakrishnan, B. (2014). Bach in 2014: music composition with recurrent neural network. arXiv:1412.3191, preprint.

Lyu, Q., Wu, Z., and Zhu, J. (2015). Polyphonic music modelling with LSTM-RTRBM. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 991–994.

Machado, P., and Cardoso, A. (1997). Model proposal for a constructed artist. In Proceedings of World Multiconference on Systemics, Cybernetics and Informatics, Vol. 97, Orlando, FL, 521–528.

Mahendran, A., and Vedaldi, A. (2014). Understanding deep image representations by inverting them. arXiv:1412.0035, preprint.

Mozer, M.C. (1994). Neural network music composition by prediction: exploring the benefits of psychoacoustic constraints and multi-scale processing. Connection Science 6: 247–80. doi:10.1080/09540099408915726

Newitz, A. (2016). Movie written by algorithm turns out to be hilarious and intense. Ars Technica. Available at: http://arstechnica.com/the-multiverse/2016/06/an-ai-wrote-this-movie-and-its-strangely-moving/

Nishijima, M., and Watanabe, K. (1993). Interactive music composer based on neural networks. Fujitsu Scientific & Technical Journal 29: 189–92.

Noll, A.M. (1967). The digital computer as a creative medium. IEEE Spectrum 4: 89–95. doi:10.1109/MSPEC.1967.5217127

O’Hanrahan, E. (2016). Leonardo special section: pioneers and pathbreakers: the contribution of Desmond Paul Henry (1921–2004) to 20th century computer art. Leonardo doi:10.1162/LEON_a_01326

O’Hear, A. (1995). Art and technology: an old tension. Royal Institute of Philosophy Supplements 38: 143–58. doi:10.1017/S1358246100007335

Oord, A.V.D., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., et al. (2016). WaveNet: a generative model for raw audio. arXiv:1609.03499, preprint.

Pachet, F., Suzda, J., and Martín, D.A. (2013). Comprehensive online database of machine-readable lead sheets for jazz standards. In 14th International Society for Music Information Retrieval Conference (ISMIR 2013), Curitiba, Brazil, 275–280.

Papadopoulos, A., Roy, P., and Pachet, F. (2014). Avoiding plagiarism in Markov sequence generation. In 28th Conference on Artificial Intelligence (AAAI 2014), 2731–2737. Quebec, Canada.

Polatkan, G., Jafarpour, S., Brasoveanu, A., Hughes, S., and Daubechies, I. (2009). Detection of forgery in paintings using supervised learning. In 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 2921–2924.

Potash, P., Romanov, A., and Rumshisky, A. (2015). GhostWriter: using an LSTM for automatic rap lyric generation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 1919–1924.

Quintana, C.S., Arcas, F.M., Molina, D.A., Rodríguez, J.D.F., and Vico, F.J. (2013). Melomics: a case-study of AI in Spain. AI Magazine, Vol. 34, n° 3, 99–103.

Roemmele, M. (2016). Writing stories with help from recurrent neural networks. In 30th AAAI Conference on Artificial Intelligence, Phoenix, Arizona, USA, 4311–4312.

Ruder, M., Dosovitskiy, A., and Brox, T. (2016). Artistic style transfer for videos. arXiv:1604.08610, preprint.

Scholkopf, B., and Smola, A.J. (2001). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press.

Srivastava, N., Hinton, G.E., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. (2014). Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, Vol. 15, n°1, 1929–58.

Sturm, B., Santos, J.F., Ben-Tal, O., and Korshunova, I. (2016). Music transcription modelling and composition using deep learning. In 1st Conference on Computer Simulation of Musical Creativity, Huddersfield, UK, 1–16.

Todd, P.M. (1989). A connectionist approach to algorithmic composition. Computer Music Journal 13: 27–43. doi:10.2307/3679551

Wang, Q., Luo, T., Wang, D., and Xing, C. (2016). Chinese song iambics generation with neural attention-based model. arXiv:1604.06274, preprint.

Xenakis, I., and Kanach, S. (1992). Formalized Music: Thought and Mathematics in Composition. New York: Pendragon Press.

Yan, R. (2016). I, poet: automatic poetry composition through recurrent neural networks with iterative polishing schema. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI), New York, NY, USA, 2238–2243.

Zeiler, M.D., and Fergus, R. (2014). Visualizing and understanding convolutional networks. In European Conference on Computer Vision, Zurich, Switzerland, 818–833. Springer.

Zeilinger, M. (2016). Digital art as ‘Monetised Graphics’: enforcing intellectual property on the blockchain. Philosophy & Technology 1–27. doi:10.1007/s13347-016-0243-1

Zhao, Y., and Xu, D. (2016). Monet-style images generation using recurrent neural networks. In 10th International Conference, Edutainment 2016, Hangzhou, China, 205–211. doi:10.1007/978-3-319-40259-8_18

Keywords: deep learning, intellectual property, machine learning, computational creativity, copyright

Citation: Deltorn J-M (2017) Deep Creations: Intellectual Property and the Automata. Front. Digit. Humanit. 4:3. doi: 10.3389/fdigh.2017.00003

Received: 15 November 2016; Accepted: 17 January 2017;
Published: 01 February 2017

Edited by:

Frederic Kaplan, École Polytechnique Fédérale de Lausanne, Switzerland

Reviewed by:

Fouad Slimane, École Polytechnique Fédérale de Lausanne, Switzerland
Francois Pachet, Sony Computer Science Laboratory, France

Copyright: © 2017 Deltorn. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Jean-Marc Deltorn, jmdeltorn@etu.unistra.fr
