
As a professional musician, I’m always seeking ways to increase my productivity and deliver higher-quality work. As with any new technology, experimenting with and understanding developments in this space is important, as falling behind could mean missing out on new opportunities.

However, with how quickly things are evolving, it’s not always easy to stay on top of it all.

Definitions are currently confusing: algorithms, machine learning, large language models and generative models are all grouped under the blanket term Artificial Intelligence (AI).

The difference between large language models and generative AI

With assistive large language models, like ChatGPT, you can ask questions and receive — often unreliable — answers, and perform administrative tasks. But they are only useful if you have a prior understanding of the subject. Current models are programmed to supply an answer, regardless of accuracy.

Generative AI (GenAI) music, on the other hand, is largely uneditable, but the vocal quality is outstanding, as is the instrumentation in common styles (less so in niche ones). A year ago, a trained ear could easily hear vocal artefacts (undesired or unintended sounds that appear during a recording), and the music sounded unnatural. Now, however, it can be much harder to recognise these flaws and discern audio fidelity.

From my own experiments generating music, I’ve found it inefficient, gaining nothing that meaningfully enhances my work or saves time. It often takes dozens of attempts to generate anything usable, due to inconsistent quality or incorrect interpretation.

How I use AI in my daily work as a musician

I personally find ChatGPT particularly helpful for audio-to-MIDI conversion, spreadsheet formulas, basic coding, troubleshooting, proofreading, organisation, and research.

It is exceptionally useful for sourcing hard-to-find information and problem-solving, presenting solutions as an alternative to trawling through forums or manuals. I can be more time-efficient now that I’ve learned how to engage properly with assistive AI; as with any tool, it is only as good as the person operating it.

In an industry that forces us to be experts in all fields, AI helps me manage my workload of administrative tasks and allows me to return to skilled creativity. Musicians from any area could benefit from AI’s assistive capabilities, as many struggle to manage non-music-related tasks, especially those who are unsigned, solo practitioners.

Future possibilities

GenAI also holds the potential for integration into a musician's workflow — just as sampled instruments and Digital Audio Workstations did — as we spend much time programming to create realistic demos for clients, such as orchestral mockups.

What if an AI could generate a high-quality demo for us, building upon our work while retaining our intent? It could save us hours, if not days: not as a creative replacement, but as a way to streamline workflows and speed up the commissioning process.

Furthermore, lyrics are the one element that cannot currently be demoed digitally, and GenAI is the only realistic possibility for this. In the future, we could provide our composition, melody and lyrics and get back a realistic representation of our vocal parts.

Ethical concerns

Unfortunately, this technology has a significant environmental cost, and with little industry transparency in data reporting, its true impact is hard to quantify.

Each prompt uses significant amounts of electricity (largely from fossil fuels), and even more water, to power and then cool data centre hardware. An estimated 500ml of freshwater is consumed and evaporated for every 30 responses, and 3 litres per image generated.

Alongside this, AI companies are scraping the internet and using music without consent, accreditation or compensation. As a result, generated music is derived from our collective work. There is no legal precedent, so it is unknown whether GenAI output infringes copyright.

Another concern of mine is the potential stifling of emerging talent. For example, if student filmmakers begin relying on generated music instead of collaborating with musicians, opportunities for newcomers become more limited, potentially stagnating the industry.

That said, a full-blown takeover still seems unlikely, at least as things stand. The study Self-Consuming Generative Models Go MAD showed that quality collapses in just five cycles when AI is trained on its own output. I repeated this experiment using Suno and observed the same result.
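The feedback loop behind this kind of collapse can be sketched with a toy model. The snippet below is not the study's method (or mine with Suno); it is a minimal illustration, assuming a one-dimensional Gaussian stands in for a generative model, of how repeatedly retraining on a model's own output erodes diversity. A crude stand-in like this needs many more cycles than a real model to show the effect clearly.

```python
import random
import statistics

def collapse_demo(cycles=300, n=50, seed=1):
    """Toy self-consuming loop: fit a Gaussian 'model' to data,
    sample fresh data only from the model, and refit. The variance
    (a stand-in for output diversity) tends to shrink each cycle."""
    random.seed(seed)
    data = [random.gauss(0, 1) for _ in range(n)]  # the 'real' training data
    initial_var = statistics.pvariance(data)
    for _ in range(cycles):
        mu = statistics.fmean(data)      # 'train' the model on current data
        sigma = statistics.pstdev(data)
        # regenerate the dataset entirely from the model's own output
        data = [random.gauss(mu, sigma) for _ in range(n)]
    return initial_var, statistics.pvariance(data)

initial, final = collapse_demo()
print(initial, final)  # final variance collapses toward zero over many cycles
```

The mechanism is simple: each refit slightly underestimates the spread of the data it sees, and resampling compounds those losses, which is one intuition for why models trained on their own output drift toward bland, repetitive results.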

Proceeding with caution

In my opinion, we should proceed with much caution. I use assistive AI minimally for priority tasks, relying on my ability first and then using it for support, and I won’t incorporate GenAI into my work until a legal precedent is set.

Most importantly, we need to protect our work and resist the ongoing power imbalance created by technology and streaming companies, who view creativity as a public commodity, at least until copyright holders receive some benefit for contributing, such as royalties.

This industry exploded into life just a few years ago, and its environmental and copyright impact has very quickly spiralled out of our control. Consequently, recreational and unnecessary AI use can be hard to justify, despite the benefits outlined above. The creative industries are already so tough: should we use these tools to help us despite the drawbacks?

The arts are collaborative and fluid by nature. What would happen to culture if music became machine-made? From my experience, generative AI 'creators' are not true creatives; they are curators.

I am cautiously positive about the future of Artificial Intelligence; technology has changed the world before, and we have successfully adapted with it. It’s also worth considering that this technological development may already have plateaued. Ultimately, I believe that creatives want to work alongside collaborators, and will continue to seek out these cherished opportunities.

My MU journey

My path into the industry has been influenced by many individuals, including the Musicians’ Union. I joined as a student and gained access to resources, networking events, and legal support to navigate my way through contracts and working with producers. Their guidance has aided my development, helping me find work and, more importantly, keep it.

Daniel’s further reading suggestions and references

You can read more of Daniel’s research and findings via his official website.

Thanks to

Daniel Finch

Daniel Finch, aka zenith soundscapes, is a music composer, post-production specialist, tutor and flautist. He is a versatile audio professional who works for the love of storytelling, and his work falls at the intersection of music, technology, and the natural world. He challenges the conventions of traditional writing and audio with impressive attention to detail and an emotive, identifiable musical style, developed through creative sound worlds blending live musicians and electronic sounds. Daniel’s work is featured in video games, short films and theatre. Telling stories and influencing emotions through sound are core to his musical philosophy, inspiring him to write authentically. Whatever the field, he crafts a sound unique to the project through instrumentation, harmony, or technology. As well as working professionally in music, Daniel is also an educator, encouraging others to learn, develop their skills and foster passion.

Get support as a music creator through MU membership

The MU has a strong community of songwriters and music composers. We have specialist officials who advise music makers on specific issues, including pay and contracts, careers in composing and songwriting, and employment and legal matters.

Explore our services available to music writing members

Join the MU now




How I Use AI as a Professional Musician — and Why I’m Still Cautious

In this honest and informative blog, composer, post-production specialist, tutor, flautist, and MU member Daniel Finch explores how musicians are using AI in their work, sharing insights from his own research and outlining the potential benefits, risks, and ethical challenges.

Published: 27 May 2025
