Automated music generation has been a topic in Artificial Intelligence since the dawn of computers. With the recent rise of Machine Learning models that can effectively model artistic works, it seems that this quest for automated music generation may finally be coming to an end. This thesis analyzes the long history of algorithmic music generation to help define where recent machine learning attempts succeed and where they fall short. It also defines a framework for algorithmic music generation that can incorporate these approaches along with many others developed over the past 70+ years. The goal of this framework is to provide a lightweight foundation that uses modular components of various music generation algorithms to produce music in real time. We hope that the framework can be used to test and compare different music generation algorithms with one another, to help clarify which approaches are more successful and where gaps remain in existing music generation algorithms. Through the use of this framework, we highlight how combining symbolic and connectionist artificial intelligence approaches can produce a model that is greater than the sum of its parts.