Introduction
From driverless cars and genetic programming to 3D printing and surgical robots, artificial intelligence (AI) has permeated our lives and is emerging as one of the most revolutionary technologies of the twenty-first century. Yet, as with all new technologies, issues arise over legal regulation, one of which concerns copyright protection. As AI systems become increasingly autonomous, they are becoming capable of producing works with an extraordinarily minimal level of human intervention. In other words, AI challenges the traditional creative process, in which a human is evidently the author of the final work. As such, the question arises as to what type of protection, if any, should be given to works generated by autonomous AI.
First, this article will define AI and its implications for the creation of artwork. Second, it will analyse how AI-generated works interact with the traditional copyright principle of (human-centred) originality. Third, the article will assess how the current UK legal framework — specifically, Section 9(3) of the Copyright, Designs and Patents Act (CDPA) 1988 — deals with these works, noting its limitations. Finally, considering policy reasons, the article will propose entrance into the public domain as the most suitable approach to accommodate AI-generated works.
AI-Generated Works
The definition of AI is essential for contextualising the concepts of human authorship and originality in the copyright regime. As John McCarthy remarks, AI in its most basic form ‘is the science and engineering of making intelligent machines’.1 These systems are designed to analyse large amounts of data, make predictions or classifications, and learn from experience.2
The AI-driven creations discussed in this article refer to works created by programs that are powered by generative AI, which uses artificial neural networks to mimic the thought processes of humans.3 Generative AI is characterised by its ability to produce new and original content, unlike traditional machine learning algorithms that can only analyse or act on existing data. To achieve this, the AI makes predictions about data patterns and optimises its accuracy by comparing its outputs to the training data. Most notably, generative AI models are unsupervised or semi-supervised, which means that they operate with little to no human intervention.4 In fact, their ability to autonomously create new works of sophisticated art, music, and literature has led scholars to coin the term ‘algorithmic creativity’.5
There has been an advent of numerous image models offering different techniques, aesthetics, and styles for modifying AI-produced works.6 For example, DeepDream, created by Google engineer Alexander Mordvintsev, is an image-based program capable of creating original artworks in a psychedelic style. Users can upload any image to DeepDream, which then interprets it and enhances certain features based on its prediction.7
Image: DeepDream 8
More recently, ChatGPT has brought to the forefront the future of generative AI. Launched in November 2022 by leading AI research company OpenAI, the language model chatbot can generate human-like text in various forms including prose, poetry, script, and even code.9 OpenAI has been responsible for many other generative programmes including DALL-E, an art generator similar to DeepDream, and its newest text-to-speech tool, VALL-E.10
These examples indicate the burgeoning era of generative AI creations; these works are increasingly detached from a direct human author, yet they fulfil a standard of creative expression that would normally be awarded copyright.11 The following section will analyse how AI-generated content does not meet the standard of protectable works in the traditional copyright framework in the UK.
Implications of Generative AI in the Copyright Realm
Copyright laws serve as a legal framework to promote the progress of the arts and sciences by incentivising authors and protecting works from being reproduced without permission.12 Copyright is valuable as it grants the creator or owner of an original work the exclusive legal right to control how that work is used and distributed. Yet not all texts or outputs are capable of copyright protection. A fundamental criterion to determine which works are eligible for copyright is originality.13 Originality serves as a tool to delineate which works can be justifiably monopolised, and which works will have to remain in the public domain.
The UK traditionally had a low originality requirement, with originality referring to the notion of independent creation. Case law established that as long as a work is not duplicated from another pre-existing work, and results from a modicum of ‘skill, labour or judgment’, copyright will subsist.14 One could claim that the vast majority of AI-generated works would meet this threshold since generative AI’s very function is to create entirely new outputs based on its novel prompt-based learning approach.15 In fact, as Jung argues, the algorithm would be considered ‘defective’ if the pre-existing input were copied wholesale since it would not actually be ‘generating’ anything.16 However, given that any person can use AI to generate artwork in a matter of seconds with the simple click of a button, it is doubtful whether there exists sufficient ‘skill, labour or judgment’ in the creation of that work.
Nonetheless, the main reason why AI works do not qualify as sufficiently ‘original’ is that the criterion requires human authorship – the explicit role of humans in the creation process.17 This has been established in UK law since its harmonisation with the EU’s 2009 Infopaq test, which confirmed, in the context of new automated technologies, that a work must be ‘the author’s own intellectual creation’.18 Since Infopaq, subsequent EU cases such as Painer19 and Football Dataco20 have elaborated that an intellectual creation must ‘[reflect] the author’s personality’ and that the author must make creative choices that ‘stamp the work created with his personal touch’.21
The notion of human authorship as a prerequisite of copyright was not contested when computer programs were first used to create imaginative works. This can be attributed to the fact that, in their early development, these programs were considered mere instruments that assisted in the creative process, very much like a pen or a camera. The outcome of the work was foreseeable as the programmer was directly involved in each step of the design and creation process.22 However, with the latest advances in generative AI, the computer program is no longer simply a tool – it makes creative decisions autonomously and independently of the original programmer. Consider, for example, DeepDream. The programmers have explained that they do not fully influence the final work:
‘Instead of exactly prescribing which feature we want the network to amplify…we simply feed the network an arbitrary image or photo and let the network analyze the picture. […] Each layer of the network deals with features at a different level of abstraction, so the complexity of features it generates depends on which layer it chooses to enhance.’23
The variance in possible outcomes implies that the algorithm’s programmers cannot foresee the expression of the images the AI churns out. This ‘unforeseeability’ severs the direct connection between human ‘authors’ and the software’s output – although the programmers may start the development of the application, they do not directly control the results of the application’s creative process. Consequently, it cannot be said that AI-generated outputs are ‘the author’s own intellectual creation’. The minimal amount of discoverable human contribution fails to satisfy the standard of originality needed to justify copyright protection.
It should be noted that though this is the current position in UK law, the legislature could possibly reverse its position regarding the UK’s originality test post-Brexit. However, this is unlikely, considering that Infopaq has ‘to some extent been embedded in UK law through domestic jurisprudence in the UK appellate courts’.24
As such, copyright cannot be awarded to AI-generated works under the traditional originality framework. In search of other solutions, many have turned to Section 9(3) of the Copyright, Designs and Patents Act (CDPA) 1988, the UK’s bespoke provision protecting computer-generated works. However, as will be analysed in the following section, there are notable limitations of the CDPA provision in its application to works created by generative AI.
Analysis of s 9(3) of the CDPA 1988
According to the CDPA 1988, a computer-generated work is one that ‘is generated by computer in circumstances such that there is no human author of the work’.25 Protection for computer-generated works is more limited than genuine authorial copyright: the term of protection is shorter (50 years rather than 70), and authors’ moral rights are excluded.
Section 9(3) of the CDPA 1988 states:
‘In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.’26
Section 9(3) creates an interesting legal fiction; it constructs an artificial author for computer-generated works, which, by definition, are authorless.27 In principle, the provision offers all-encompassing protection for computer-generated works, since in the absence of a human author, copyright can always be traced to the most plausible nearby human who made the ‘necessary arrangements’ for the work to come into existence.28
However, the CDPA is problematic because it is not equipped to deal with the complexities of AI-generated works. In 1988, when the provision was enacted, a human author could always be identified due to their clear involvement in programming the computer outputs. Yet, as discussed, emerging AI-generated works are qualitatively different from traditional computer-generated works, for they do not rely on creative human input.29 Despite drastic technological advancements, the CDPA has not been updated since its enactment. Consequently, the provision does not offer much clarity in its legal application to current, increasingly autonomous works, especially regarding the identification of the author who made the ‘necessary arrangements’. As Davis and Aplin remark, the wording of the provision is ‘ambiguous and open to interpretation’.30
Nova Productions v Mazooma Games and Others, the only case to have applied Section 9(3) of the CDPA, suggested that the author may be the software programmer who wrote the algorithm that generated the final output.31 Nova concerned the alleged infringement of a series of composite frames produced onscreen by an arcade game.
Regarding whether the author of the visual display was the programmer or the user, Kitchin J ruled that:
‘In so far as each composite frame is a computer-generated work then the arrangements necessary for the creation of the work were undertaken by [the programmer] Mr Jones because he devised the appearance of the various elements of the game and the rules and logic by which each frame is generated and he wrote the relevant computer program.’32
Kitchin J rejected the claim that the player was an author, for ‘he has contributed no skill or labour of an artistic kind … [a]ll he has done is to play the game’.33 The court ruled that the programmer should be awarded copyright for the composite frames.
As such, identifying the human author depends on their extent of involvement in the creation process. In the context of Nova, it appears self-evident that the programmer made the crucial arrangements and should therefore be the owner of the copyright. However, allocating copyright is not always a simple binary choice between programmer and user, especially when it comes to AI works. Due to the enormous amount of data involved and the complex process of training an algorithm, multiple stakeholders enable the creation of AI software. For instance, DALL-E was not trained on raw data from the internet, but on 650 million images that OpenAI licensed, which were fed into the algorithm.34 Apart from programmers, the AI company, investors, trainers, software engineers, data providers and machine operators are all important stakeholders heavily involved in the design process. It would be arbitrary to assume that the programmer is invariably the person who makes the ‘necessary arrangements’ for the work.35
On the other hand, in some scenarios, end-users do more than ‘press a button’, and play a pivotal role in the operation of the software. Take, for example, ChatGPT, where the user provides the prompt for the software. Their control over the kind of output the software generates based on their given prompts suggests that it is the user who makes the ‘necessary arrangements’ for the work.36 In this case, it could be argued that the programmers generate only a ‘potential for a creation’, and not the AI’s final product.37 However, there also exists a broad range of contributions a user can make — they could provide highly specified prompts based on their own ideas or simply ask for a generic output and let the programme take creative control.
In this regard, the provision, with its vague language of ‘necessary arrangements’, offers no guidance on what the minimum threshold for such a contribution should be.38 Consequently, where multiple stakeholders are involved in creating the AI work, the CDPA is unsatisfactory. Should they all be recognised through joint ownership? While possible, this could raise greater issues of whether ownership is split evenly or proportionally according to each person’s contribution, for example, a 70/30 split. As Ramalho notes, the difficulty in allocating ownership does not favour legal certainty in an already ambiguous area.39 It is foreseeable that these ambiguities will lead to a rise in disputes as people seek to claim ownership over their respective contributions. But perhaps the most problematic part of the provision is that, by identifying the author as the ‘person by whom … arrangements are undertaken’, the CDPA still presupposes some form of creative intervention by humans.40 The generative nature of AI necessarily entails that the stakeholders can neither truly foresee nor control the program’s output. As such, while computer-assisted works with significant human intervention are adequately protected by the CDPA, autonomous AI works are not.41
What then, should be the way forward? Should the CDPA be amended or replaced by new legislation for AI works? Such a proposal rests on the assumption that copyright protection for AI works would, in fact, be favourable. Considering the novelty of AI in the creative paradigm, it is important to take a step back and assess the value of copyright for authorless works. The following section will discuss how copyright protection would be undesirable through a policy analysis of the current context of commercial AI development.
Policy Considerations
One of the main arguments put forward by investors in AI programmes is based on the copyright incentive theory. This utilitarian theory holds that the exclusive economic rights granted by copyright are a necessary incentive for creators.42 The need to incentivise is particularly important within the digital economy, as heavily computer-dependent businesses that contribute to the accumulation of knowledge continue to ask for some form of protection for their investments. In the absence of copyright, innovation and investment in the AI sector could decline.
How, then, should copyright be awarded? Could the AI application itself hold intellectual property rights? While the proposition sounds tempting, as it would resolve copyright allocation issues, awarding such rights is neither desirable nor necessary. At present, the UK copyright regime is built upon human authorship. Awarding a machine copyright would be contrary to the fundamental principles of intellectual property that protect human creativity. Additionally, from a policy perspective, computer programs, unlike human authors, do not need to be incentivised. AI systems will generate content with or without copyright. They do not need exclusive rights; nor do they possess the mind, intellect, or consciousness to claim the fruits of their work.
With regard to the businesses and individuals behind AI, the market already supplies these stakeholders with sufficient incentives. AI programmers can normally obtain copyright for their software code, and developers can maximise profit from their commercially successful software through sales and licensing.43 Moreover, in terms of protection against copyright infringement, they can enjoy sui generis database rights and trade secrets protection.44 Granting copyright for AI-generated outputs in addition to these incentives would amount to over-protection.45 As Samuelson remarks, this excessive protection is further compounded by the fact that the owner would ‘automatically own everything the program was capable of generating’.46 Due to AI’s prolific generative nature, and its ability to continuously evolve with machine learning, AI algorithms can produce near-infinite amounts of works at an unprecedented rate.47 Awarding developers such an extensive amount of copyright would be highly disproportionate to the incentives it provides.48
Alternatively, an argument for patent protection also stems from the reward theory.49 Proponents of this theory argue that creators should be rewarded not only for their labour, but for the societal benefit of their contributions.50 Creators would be rewarded through exclusive rights which serve as ‘a legal expression of gratitude to an author for doing more than the society expects or feels that they are obliged to do’.51 It is undeniable that the increased production of AI works would benefit society at large by spurring both technological and creative developments.
However, it should be noted that these sets of exclusive rights would be in addition to the profit and remuneration that creators enjoy; they would include the right to exclude others from using the methodology and the resulting work. Experts have previously criticised the theory on the basis that it serves to reward creators twice.52 When applied to this context, there seems to be a further lack of justification to over-reward developers to this extent when their involvement in AI-generated works is so remote. Moreover, exclusive rights would only exacerbate the anti-competitive drawbacks arising from greater protection for AI works. Developers and AI companies could hoard access to their technologies in the face of the lucrative opportunity to profit from countless commercially valuable works as copyright holders. The cost of AI and the staggering amounts of data required to train the algorithm would mean that only a handful of software giants would have access to the technology, promoting a ‘grab-all’ environment. This would further strengthen major monopolies while stifling competition and innovation in the field.
Lastly, the protection of AI-generated works would be undesirable as it would devalue human creativity. Jyh-An Lee predicts that if AI-generated works were protected under the CDPA, the majority of authors would be programmers and other stakeholders who made the necessary arrangements for the work.53 This would crowd out non-programmer authors who create their work independently. The combination of the speed at which AI works are generated, and the encompassing protection they receive, may make it harder for traditional artists to compete in the market. As artists behind an online #NotoAIArt campaign shared, AI image generators have the potential to devalue the efforts and skills of human-authored creations.54 Therefore, it is preferable not to award copyright protection to the artistic works produced by AI.
Proposing the Public Domain
This article proposes that immediate entrance into the public domain would be the ideal solution for AI-generated works. The public domain refers to any work or innovation that is not protected by one of the intellectual property law regimes.55 As the name implies, these works would automatically belong to the public as a whole, and not to any individual artist or creator in particular.
To start with, the public domain approach would eliminate the noted legal uncertainties surrounding authorship and rights allocation when applying Section 9(3) to autonomous AI-generated works. From a policy perspective, increased accessibility to AI-generated works would ensure fair access to an unprecedented volume of creative works, creating great social value. In addition, the public domain approach could lead to the proliferation of rich secondary markets, as individuals and organisations would be able to use the works without purchasing them or incurring licensing costs. For example, DALL-E 2 users ‘get full usage rights’ to ‘[commercialise] the images they create with DALL-E, including the right to reprint, sell and merchandise’.56 DALL-E 2 has since been used for commercial projects, like illustrations for children’s books, characters for video games, and storyboards for movies.57 If OpenAI asserted copyright over the outputs of DALL-E 2, users would be limited in their use of the platform. The greater adoption of AI-generated works is favourable as it would, in turn, drive value creation and generate greater economic opportunities.
Moreover, a ‘free-for-all market’ would inspire and protect the value of human creativity. While it is inevitable that AI will affect employment opportunities across industries, we can strive to foster cooperation between AI and humans, rather than a complete takeover. As Palace states, ‘it is imperative to ensure that humans remain an integral part of fields that do not necessarily require complete automation — such as the creative fields’.58 Immediate entrance of AI-generated works into the public domain would help to guarantee this, by ensuring that human artists are not deterred by having to compete against a near-infinite number of advanced AI works.
The biggest argument against the public domain approach is the lost incentive for programmers and AI companies. Opponents assert that if AI works enter the public domain, it would be more difficult for creators to monetise their works, which could impair future innovation in AI applications.59
However, any loss would be mitigated by other factors. Firstly, as discussed earlier, there are sufficient incentives higher up the chain of the creative process to reward developers and companies without copyright for AI output. Secondly, apart from the benefits of social recognition, scientific curiosity, and peer collaboration, many innovators are incentivised to be the first to create or commercialise a product.60 This competitive edge, known as the first-mover advantage, is highly significant within the software industry and provides an additional layer of motivation for innovation.61 Generative AI, hailed as the ‘next era-defining technological innovation’, is expected to create a mini-industry.62 Venture capital firm NFX’s market map tracked more than 500 start-ups in the generative tech ecosystem (excluding OpenAI), which have so far raised more than US$11 billion. Generative AI continues to stir excitement about its potential impact among consumers and investors alike – and there seems to be no sign of slowing down.63 Lastly, the fierce international race to lead AI innovation will spur global investment in the technology sector even further. Global AI investment ‘surged from $12.75 million in 2015 to $93.5 billion in 2021, and the market is projected to reach $422.37 billion by 2028’.64 China and the United States are among the leading nations fighting for global primacy in AI and have created supporting frameworks for businesses, organisations, and personnel training centres to achieve their goals.65 As such, the AI industry will continue to flourish, regardless of copyright, ‘as a matter of national pride and policy’.66
Thus, entrance into the public domain remains the most viable option for AI-generated works. It would eliminate uncertainties regarding rights allocation, foster creative innovation, and increase access to AI, without any significant loss in incentives for developers and AI companies.
Conclusion
The copyright regime has entered uncharted territory with the advent of AI algorithms that can now generate creative works with little or no human intervention.67 Under the traditional UK copyright framework, these works fail to satisfy the requirement of human authorship for ‘originality’. Section 9(3) of the CDPA 1988 is also limited in its protection of AI-generated works, since it was drafted in the 1980s when AI was still an abstract concept. As such, the diminishing role of human authors behind the work has led to legal uncertainties in the provision’s application. From a policy perspective, copyright protection would be undesirable, as it would over-reward software developers and companies, stifle human creativity, and result in unequal access to AI. This article has thus proposed that AI-generated works be accepted as automatically part of the public domain. Indeed, this is the most appropriate solution to address the limitations of the CDPA while supporting the social and economic benefits of freely accessible AI.
I am grateful to Dr Luke McDonagh and the editors of the LSE Law Review for their valuable insights and comments on earlier drafts of my article. All errors remain my own.
[1] John McCarthy, ‘What is Artificial Intelligence?’ (Stanford Education, 12 November 2007) <http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html> accessed 4 March 2023.
[2] Alyssa Schroer, ‘Artificial Intelligence’ (BuiltIn, 19 September 2022) <https://builtin.com/artificial-intelligence> accessed 4 January 2023.
[3] Mark van Rijmenam, ‘What is Generative AI, and How Will It Disrupt Society?’ (The Digital Speaker, 11 November 2022) <https://www.thedigitalspeaker.com/what-is-generative-ai-how-disrupt-society/> accessed 9 January 2023.
[4] ibid.
[5] Enrico Bonadio and Luke McDonagh, ‘Artificial Intelligence as Producer and Consumer of Copyright Works: Evaluating the Consequences of Algorithmic Creativity’ (2020) 2 Intellectual Property Quarterly 112, 112.
[6] Sonya Huang and Pat Grady, ‘Generative AI: A Creative New World’ (Sequoia Capital US/Europe, 6 December 2022) <https://www.sequoiacap.com/article/generative-ai-a-creative-new-world/> accessed 4 January 2023.
[7] Cade Metz, ‘How A.I. Is Creating Building Blocks to Reshape Music and Art’ (New York Times, 14 August 2017) <https://www.nytimes.com/2017/08/14/arts/design/google-how-ai-creates-new-music-and-new-artists-project-magenta.html> accessed 4 January 2023.
[8] Alexander Mordvintsev, Christopher Olah, and Mike Tyka, [photograph of] ‘DeepDream’ (GoogleResearch, 18 June 2015) <https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html> accessed 4 January 2023.
[9] Kevin Roose, ‘The Brilliance and Weirdness of ChatGPT’ (New York Times, 5 December 2022) <https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html> accessed 14 January 2023.
[10] Luke Hurst, ‘After ChatGPT and DALL-E, meet VALL-E – the text-to-speech AI that can mimic anyone’s voice’ (Euronews Next, 12 January 2023) <https://www.euronews.com/next/2023/01/10/after-chatgpt-and-dalle-meet-vall-e-the-text-to-speech-ai-that-mimics-anyones-voice> accessed 13 February 2023.
[11] Bruce Boyden, ‘Emergent Works’ (2016) 39 Columbia Journal of Law & the Arts 377, 378.
[12] Christopher May, ‘The Venetian Moment: New Technologies, Legal Innovation and the Institutional Origins of Intellectual Property’ (2002) 20 Prometheus 159, 161.
[13] Copyright, Designs and Patents Act 1988 (CDPA 1988) s 1(a).
[14] Ladbroke v William Hill [1964] 1 All ER 465 p 7.
[15] Kyle Wiggers, ‘Prompt-based learning can make language models more capable’ (VentureBeat, 4 August 2021) <https://venturebeat.com/business/prompt-based-learning-can-make-language-models-more-capable/> accessed 4 March 2023.
[16] Gia Jung, ‘Do Androids Dream Of Copyright?: Examining Ai Copyright Ownership’ (2020) 35 Berkeley Technology Law Journal 1151, 1164.
[17] University of London Press v University Tutorial [1916] 2 Ch 601.
[18] Case C-5/08 Infopaq International A/S v Danske Dagblades Forening [2009] ECR I-06569.
[19] Case C-145/10 Eva-Maria Painer v Standard Verlags GmbH [2011] ECDR 13.
[20] Case C-173/11 Football Dataco Ltd v Sportradar GmbH [2013] 1 CMLR 29.
[21] Case C-145/10 Painer v Standard Verlags GmbH [2011] ECDR 13 [88]-[94].
[22] Tanya Aplin and Giulia Pasqualetto, ‘Artificial Intelligence and Copyright Protection’ in Rosa Maria Ballardini, Petri Kuoppamäki, Olli Pitkänen (eds), Regulating Industrial Internet Through IPR, Data Protection and Competition Law (Kluwer 2019) 432.
[23] Alexander Mordvintsev and Mike Tyka, ‘Inceptionism: Going Deeper into Neural Networks’ (Google Blog, 18 June 2015) <https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html> accessed 4 January 2023.
[24] Lionel Bently, Estelle Derclaye, Graeme Dinwoodie, and Richard Arnold, ‘IP Law Post-Brexit’ (Judicature, 13 July 2022) <https://judicature.duke.edu/articles/ip-law-post-brexit/> accessed 4 January 2023.
[25] CDPA 1988 s 178.
[26] CDPA 1988 s 9(3).
[27] Bonadio and McDonagh (n 5) 116.
[28] ibid.
[29] Jyh-An Lee, ‘Computer-generated Works under the CDPA 1988’ in Jyh-An Lee, Reto Hilty, and Kung-Chung Liu (eds), Artificial Intelligence and Intellectual Property (Oxford Academic 2021) 178.
[30] Jennifer Davis and Tanya Aplin, Intellectual Property Law: Text, Cases, and Materials (Oxford University Press 2021) 135.
[31] Nova Productions v Mazooma Games and Others [2006] EWHC 24 (Ch).
[32] ibid [105].
[33] ibid [106].
[34] Bobby Allyn, ‘Surreal or too real? Breathtaking AI tool DALL-E takes its images to a bigger stage’ (NPR, 20 July 2022) <https://www.npr.org/2022/07/20/1112331013/dall-e-ai-art-beta-test> accessed 13 February 2023.
[35] Bonadio and McDonagh (n 5).
[36] ibid.
[37] Darin Glasser, ‘Copyrights in Computer-Generated Works’ (2001) 1 Duke Law & Technology Review 24, 29.
[38] Bonadio and McDonagh (n 5) 5.
[39] Ana Ramalho, ‘Will Robots Rule the (Artistic) World? A Proposed Model for the Legal Status of Creations by Artificial Intelligence Systems’ (2017) 21(1) Journal of Internet Law 12, 23.
[40] Brigitte Vézina and Brent Moran, ‘Artificial Intelligence and Creativity: Why We’re Against Copyright Protection for AI-Generated Output’ (Creative Commons, 10 August 2020) <https://creativecommons.org/2020/08/10/no-copyright-protection-for-ai-generated-output/> accessed 14 January 2023.
[41] Lee (n 29) 189.
[42] Eric Priest, ‘An Entrepreneurship Theory of Copyright’ (2022) 36 Berkeley Technology Law Journal 737, 743.
[43] ‘Artificial Intelligence Call for Views: Copyright and Related Rights’, (Intellectual Property Office, 23 March 2021) <https://www.gov.uk/government/consultations/artificial-intelligence-and-intellectual-property-call-for-views/artificial-intelligence-call-for-views-copyright-and-related-rights> accessed 4 March 2023; also see Robert Denicola, ‘Ex Machina: Copyright Protection for Computer-Generated Works’ (2016) 69 Rutgers University Law Review 251, 283–285.
[44] Matt Hervey, ‘UK: Intellectual Property And Artificial Intelligence in The UK’ (mondaq, 9 August 2019) <https://www.mondaq.com/uk/trade-secrets/835214/intellectual-property-and-artificial-intelligence-in-the-uk> accessed 15 March 2023.
[45] Bonadio and McDonagh (n 5) 117.
[46] Pamela Samuelson, ‘Allocating Ownership Rights in Computer-Generated Works’ (1986) 47 University of Pittsburgh Law Review 1185, 1208.
[47] Vivek Muppalla and Sean Hendryx, ‘Diffusion Models: A Practical Guide’ (scale, 19 October 2022) <https://scale.com/guides/diffusion-models-guide> accessed 10 March 2023.
[48] Lee (n 29) 191.
[49] David Keeling, Intellectual Property Rights in EU Law (Oxford University Press 2003) 243.
[50] Edwin Hettinger, ‘Justifying Intellectual Property’ (1989) 18 Philosophy and Public Affairs 31, 36.
[51] Lionel Bently and Brad Sherman, Intellectual Property Law (6th edn, Oxford University Press 2022) 37.
[52] Vallabhi Rastogi, ‘Theories of Intellectual Property Rights’ (Enhelion Blogs, 27 February 2021) <https://enhelion.com/blogs/2021/02/27/theories-of-intellectual-property-rights/> accessed 5 March 2023.
[53] Lee (n 29) 191.
[54] Sarah Shaffi, ‘“It’s the Opposite of Art”: Why Illustrators Are Furious About AI’ (Guardian, 23 January 2023) <https://www.theguardian.com/artanddesign/2023/jan/23/its-the-opposite-of-art-why-illustrators-are-furious-about-ai> accessed 5 March 2023.
[55] Ulla Maija Mylly, ‘Preserving the Public Domain: Limits on Overlapping Copyright and Trade Secret Protection of Software’ (2021) 52 International Review of Intellectual Property and Competition Law 1314, 1316.
[56] ‘Using DALL·E for commercial projects’ (OpenAI, 20 July 2022) <https://openai.com/blog/dall-e-now-available-in-beta/> accessed 26 February 2023.
[57] Edwin Chen, ‘Generating Children’s Stories Using GPT-3 and DALL·E’ (Surge AI, 29 June 2022) <https://www.surgehq.ai/blog/generating-childrens-stories-using-gpt-3-and-dall-e> accessed 26 February 2023.
[58] Victor Palace, ‘What if Artificial Intelligence Wrote This? Artificial Intelligence and Copyright Law’ (2019) 71 Florida Law Review 217, 241.
[59] Rosa Ballardini, Kan He and Teemu Roos, ‘Digital Distribution of AI Generated Content: Authorship and Inventorship in the Age of Artificial Intelligence’ in Taina Pihlajarinne, Juha Vesala & Olli Honkila (eds), Online Distribution of Content in the EU (Edward Elgar Publishing Limited 2019).
[60] Bonadio and McDonagh (n 5).
[61] Ryan Abbott, ‘I Think, Therefore I Invent: Creative Computers and the Future of Patent Law’ (2016) 57(4) Boston College Law Review 1079, 1106.
[62] The Week Staff, ‘Chat GPT, Generative AI and the future of creative work’ (The Week, 7 December 2022) <https://www.theweek.co.uk/news/technology/958787/chat-gpt-generative-ai-and-the-future-of-creative-work> accessed 4 March 2023.
[63] George Simister, ‘How Five Investors View Generative AI’ (UKTN, 24 February 2023) <https://www.uktech.news/ai/investors-generative-ai-20230224> accessed 7 March 2023.
[64] Cindy Gordon, ‘2022: The Year Of AI Hopes And Horrors’ (Forbes, 30 December 2022) <https://www.forbes.com/sites/cindygordon/2022/12/30/ai-hopes-and-horrors/?sh=2be5296f7abe> accessed 4 January 2023.
[65] Deloitte Center for Technology, Media & Telecommunications, ‘Future in the Balance? How Countries Are Pursuing an AI Advantage’ (Deloitte Insights, 28 May 2019) <https://www2.deloitte.com/content/dam/Deloitte/cn/Documents/technology-media-telecommunications/deloitte-cn-tmt-future-in-the-balance-en-190528.pdf> accessed 18 January 2023.
[66] Palace (n 58) 239.
[67] Bonadio and McDonagh (n 5) 113.
Emma Chew
LLB (LSE) ’25 and Junior Notes Editor of the LSE Law Review 2022-23
