I. Introduction
In the music industry, AI tools are machine learning models that generate musical arrangements and associations based on the data used to train them. These models learn the features and musical structures that listeners are expected to find engaging, and can then produce new music by recombining elements from their training data, in this case the melodies, notes, and vocals of specific artists, composers, songwriters, or other musicians. AI has given both amateur and professional musicians a conduit for producing professional-quality music at nominal cost. It has also placed professional mastering techniques within reach of emerging musicians who previously could not afford them.
AI music-generation approaches rely on machine learning software that produces systems capable of learning without direct human intervention. Human programmers set the constraints, and the program or neural network produces the musical work. The logic parallels how individuals reason. Most musical traditions are built on specific notes and scales. In Western music, for example, there are 12 notes and 24 major and minor scales, which, combined with varying tempos, harmonies, and articulations, yield millions of conceivable melodies.
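The scale of this combinatorial space is easy to illustrate. The sketch below is a simplified back-of-the-envelope count: it assumes a melody is merely an ordered sequence of pitches drawn from the 12 chromatic notes, and the 8-note melody length is an arbitrary choice for illustration (real melodies also vary rhythm, octave, and dynamics, which multiplies the space further).

```python
# Back-of-the-envelope count of possible melodies, under the
# simplifying assumption that a melody is an ordered sequence of
# pitches drawn from the 12 notes of the chromatic scale.
NOTES = 12          # pitches in the Western chromatic scale
MELODY_LENGTH = 8   # hypothetical melody length, chosen for illustration

# Each position can take any of the 12 pitches independently.
melody_count = NOTES ** MELODY_LENGTH
print(f"{melody_count:,} distinct {MELODY_LENGTH}-note pitch sequences")
# prints "429,981,696 distinct 8-note pitch sequences"
```

Even under these restrictive assumptions the count runs to hundreds of millions, which is why generative models can emit effectively unlimited novel melodies.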
AI has introduced new tools and platforms for music production, including AI-driven creative, reproduction, and training tools such as Amper Music, Melodrive, Boomy, OpenAI's Jukebox, Google's Magenta, Sony's Flow Machines, and LANDR. These options are now readily accessible through free or paid plans and help artists create new sounds, generating unlimited melodies to stimulate creativity. Some AI tools can also vocalize music, compose, and collaborate. For instance, Yona AI, created by electronic composer Ash Koosha, can produce music and sing lyrics. Lawful AI music technology collects melodies, tones, and lyrics from royalty-free music to train models to generate similar sounds. The unauthorized harvesting or incorporation of copyrighted music in AI-generated works, by contrast, has sparked legal disputes over copyright and ownership.
II. Historical Context of Copyright in Music
In recent years, courts, lawmakers, and regulators have grappled with legal disputes over whether AI-generated content can be copyrighted; whether AI makers are legally responsible for using copyrighted material in algorithm training; and how existing intellectual property (IP) regulations should adapt to AI technologies.
One landmark case shaping AI copyright law involves Stephen Thaler and his "Creativity Machine," an AI system for which Thaler repeatedly sought copyright registration. In August 2023, the federal district court in Washington, D.C., affirmed the U.S. Copyright Office's position that content generated by Thaler's algorithm cannot receive copyright protection, even if the works otherwise satisfied the conditions for protection.
The court grounded its ruling in U.S. copyright law (Title 17 of the United States Code) and the Berne Convention, under which only humans can qualify as authors, and emphasized that human creativity is an essential prerequisite for copyright eligibility. The ruling reinforces prior U.S. Copyright Office decisions rejecting registrations for AI-generated artwork, and it supports the widespread view that copyright law safeguards the expression of human thought, not content generated independently by machines.
Nevertheless, the court's decision does not fully resolve the complexities of human-AI collaboration. The U.S. Copyright Office has made clear that if an individual provides significant creative input, such as editing, arranging, or selecting AI-generated components, a work incorporating AI-generated material may be eligible for copyright protection. The Office emphasizes that the degree of human contribution, and the control exercised by human creators, are critical factors in determining whether such works can be copyrighted. At present, the line between human and machine composition remains blurred, because AI models are trained largely on copyrighted human works, and future legal battles will likely raise the thresholds and requirements for AI artists and companies.
Other copyright infringement cases pit human creators against AI companies. In Andersen v. Stability AI, for example, several artists accused Stability AI, Midjourney, and DeviantArt of scraping billions of images from the internet, including their copyrighted works, to train AI image-generation models without authorization or payment. The plaintiffs alleged that Stability copied their works to train Stable Diffusion and then stored or integrated compressed reproductions of those works within the model. The court found these allegations sufficient for the direct copyright infringement claim against Stability AI to proceed.
Similarly, in December 2023, The New York Times sued OpenAI and Microsoft, arguing that their large language models unlawfully incorporated the paper's copyrighted articles into their training datasets. The lawsuit maintained that the AI companies had used millions of the Times's articles without authorization to train chatbots that deliver information to readers.
These lawsuits present a central question: whether using copyrighted material to train AI models is protected under the fair use doctrine or constitutes copyright infringement. AI artists and companies contend that training AI models is akin to a person learning, and that it falls under fair use because they do not exactly duplicate and distribute the works in their original form. Plaintiffs counter that AI-generated output often closely resembles existing copyrighted works, undermining the fair use argument.
III. Ethical and Economic Implications
The use of AI in music generation has direct ethical implications for human creativity and originality. First, AI-generated music increases the risk of cultural homogenization, since the algorithms are trained on the same publicly available data. Because of the ease of production, AI-generated content may come to outnumber human-generated content, and AI might eventually displace human creators from the creative ecosystem. This raises questions about ethics and the future roles of human artists and publishers in the creative process. Human creativity may diminish over time through the gradual homogenization of artistic expression, as algorithms compress diverse human experiences into unexceptional mathematical models. AI companies treat culture and the arts as a homogeneous and infinite resource to be extracted and exploited, showing little regard for the long-term consequences.
Human creativity is not the byproduct of an engineering process. It is a biological process that derives originality from lived understanding. Creativity is about forging deep emotional connections with other artists and music lovers, and it requires substantial investment of time, energy, capital, and experience for ideas to become evocative. It is the human brain, not an algorithm, that creates meaningful emotional connections. Every human brain is unique, generating its own original combinations of thoughts. By contrast, an AI model is a single trained network replicated millions of times, while each human brain is a distinct network of billions of neurons with its own perspective. This diversity of individual inspiration lets humans avoid the uniform reasoning that can arise when AI artists all draw on the same generative platforms.
IV. Future Directions and Policy Considerations
Proposed legal frameworks for AI-generated music include the Generative AI Copyright Disclosure Act of 2024, introduced in the U.S. Congress on April 9, 2024. The bill would require generative AI companies to disclose the datasets used to train their models, encouraging transparency and potentially giving rights holders more control over their creations.
Another measure is the No AI FRAUD Act (2024), which aims to deter people from using AI to impersonate individuals without their consent. This would help curb deepfake technology, particularly in the entertainment industry, where AI-generated impersonations are becoming increasingly prevalent. Similarly, the Tennessee Ensuring Likeness Voice and Image Security (ELVIS) Act of 2024 enhances protections for a person's name, image, likeness, and voice (NIL+V) against AI misuse, granting every individual a property right in the use of their NIL+V on any platform and in any form.
In the European Union, the AI Act requires AI developers to maintain detailed records of training data to ensure transparent and responsible AI development, with the aim of upholding existing copyright protections while encouraging responsible innovation. The Act also recognizes the importance of complementary copyright rules that promote innovation and research: it permits limited exceptions for text and data mining and acknowledges the need for proportionality in compliance requirements. These provisions principally benefit small and medium-sized enterprises (SMEs), startups, and non-profits, enabling non-commercial research while safeguarding the interests of rights holders. In China, a milestone ruling recognized copyright protection for an AI-generated image, provided the work exhibits originality and reflects a person's intellectual effort.
V. Potential Ownership Models: User, Developer, or Public Domain
User as Owner: where a user supplies the prompts and substantially shapes the output, ownership could be based on that creative input.
Developer as Owner: AI developers, such as OpenAI, could assert rights over works generated by their platforms.
No Ownership (Public Domain): AI-generated music enters the public domain immediately upon creation.



