An interesting compendium of some of the players in musical AI, also known as Algo-Rhythm Inspired.
Very interesting post.
Years ago, as the ‘Object Oriented’ lemming-meme took hold in the software sector, vendors started to describe their systems as being ‘OO’. In several cases I knew for a fact that their architectures were about as OO as my butt, but the companies needed to be seen as being in line with the latest fad. It was ‘tick the box’ marketing.
Now we have ‘AI’ and ‘ML’ as the new buzzwords, attached to anything and everything.
Using computers to generate music in a given style is not the cutting-edge AI breakthrough the media might like to convey. Band-in-a-Box was launched in 1990, as was Tim Cole’s ‘Koan’. Similarly, SoundTrek’s Jammer has been around for years, and there have been several others of similar ilk. Koan was aimed at the sort of musical wallpaper suitable for restaurants or around the house, playing unique sequences ad infinitum.
So lots of new developments, but it’s worth noting that these systems have some fine ancestors.
[As an aside, AI (in the guise of what were then called ‘expert systems’) has been around for years, and began to take off with the arrival of mini-computers in the late ’70s. I implemented a system to automate software error diagnosis for our support desk in the very early ’80s using a self-adaptive Bayesian inference approach.]
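For anyone curious what a self-adaptive Bayesian diagnosis loop looks like in miniature: the sketch below is purely illustrative (the original system, its fault names, and its symptom vocabulary are not described in the post). It ranks candidate faults by a naive-Bayes posterior and "adapts" by updating its counts whenever a diagnosis is confirmed.

```python
from collections import defaultdict

class BayesianDiagnoser:
    """Hypothetical sketch: naive-Bayes fault ranking with Laplace smoothing.
    Not the original '80s system, just the general technique."""

    def __init__(self):
        self.fault_counts = defaultdict(int)  # confirmed cases per fault
        self.symptom_counts = defaultdict(lambda: defaultdict(int))  # fault -> symptom -> count

    def learn(self, fault, symptoms):
        """Self-adaptive step: record a confirmed diagnosis."""
        self.fault_counts[fault] += 1
        for s in symptoms:
            self.symptom_counts[fault][s] += 1

    def rank(self, symptoms):
        """Return candidate faults ordered by unnormalised posterior."""
        total = sum(self.fault_counts.values())
        scores = {}
        for fault, n in self.fault_counts.items():
            p = n / total  # prior P(fault)
            for s in symptoms:
                # likelihood P(symptom | fault), Laplace-smoothed
                p *= (self.symptom_counts[fault][s] + 1) / (n + 2)
            scores[fault] = p
        return sorted(scores, key=scores.get, reverse=True)

d = BayesianDiagnoser()
d.learn("disk full", ["write error", "slow response"])
d.learn("disk full", ["write error"])
d.learn("network down", ["timeout", "slow response"])
print(d.rank(["write error"]))  # 'disk full' should rank first
```

The appeal of this approach for a support desk is that it needs no hand-written rules: the ranking improves as a side effect of logging confirmed diagnoses.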
Yeah, the level of marketecture in these topics is pretty deep no doubt. It is always interesting talking to someone about a “big new thing” only to know it has been around under a different name for years or decades. When blockchains started making the nightly news I found myself constantly describing the underlying architecture to people in terms of what we’ve been doing since the 80s (or longer) but a bit faster at larger scale and with different expectations and transparency.
‘Big things’ are most often made up of lots of little ones.
I find your interest in the Scaler permutations fascinating and much in line with my own. Since I’m not a trained (or skilled) musician, much of my time is spent trying to understand the parts and pieces of music creation and then cobbling things together to see if I can make something that interests me. Scaler’s utility when it comes to exploring sonic pathways has been fascinating and deeply engaging. I’ve never been able to “see” music, but now, with its help, it is starting to come into focus.
While I’ve not attempted to structure my exploration like you have, I am drawn towards the recursive process, as well as trying to find patterns within how output is generated and how things like quantization can “calm” unwieldy sections and change their character dramatically.
I recently started applying a performance mode to a 4-chord progression and then feeding that to Scaler for detection. All the resulting chords are then sent to the pad, and a number of patterns are created. Now the pad contains 20-40 chords, all derived from the original 4-chord progression. If I map them all across my keyboard, I have a range of chords I can play that all sound at least OK and maybe great. Not rocket science, but pretty fun.
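Scaler's actual detection and pattern logic isn't public, but the basic idea of blowing a 4-chord progression up into a bigger pad palette can be sketched like this. Everything here is illustrative: the voicings (inversions plus seventh colours) are my own assumptions, not Scaler's algorithm.

```python
# Hypothetical sketch: expand a short progression into a larger chord palette,
# loosely in the spirit of sending a detected 4-chord progression to a pad.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def triad(root, minor=False):
    """MIDI note numbers for a root-position triad, anchored near middle C."""
    r = NOTES.index(root) + 60
    return [r, r + (3 if minor else 4), r + 7]

def variants(chord):
    """Derive playable variants: two inversions plus two seventh colours."""
    out = [chord]
    out.append(chord[1:] + [chord[0] + 12])              # first inversion
    out.append(chord[2:] + [n + 12 for n in chord[:2]])  # second inversion
    out.append(chord + [chord[0] + 10])                  # dominant-seventh colour
    out.append(chord + [chord[0] + 11])                  # major-seventh colour
    return out

# a 4-chord progression: C - Am - F - G
progression = [triad("C"), triad("A", minor=True), triad("F"), triad("G")]
palette = [v for chord in progression for v in variants(chord)]
print(len(palette))  # 4 chords x 5 variants = 20 pad slots
```

Since every variant shares its pitch-class material with one of the original four chords, anything in the palette should sound at least plausible over the progression, which matches the "all sound at least OK" experience described above.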
Eventually I’ll try to map them to my Atom 4x4 pad or the pads on my Arturia, as I see (and remember) patterns in a matrix faster than across a keyboard. This probably explains my interest in the CoF as UX.
I digress… I appreciate you sharing your background and tech interest in these topics, and I look forward to learning more.
The best AI band around
That’s a video game! One of the most adorably creative ones around.
Thanks @TMacD for sharing this! And I always appreciate the perspectives of others with business and technology background, combined with those with musical experience (artistic and/or professional).
My hunch as to where things are going is that music itself will become more of a highly adaptive, situational experience. AI to produce music is just the executing part; the real question is what the consumer will want and go for. I suspect music (and ambient sound) will become more adaptive to the situational experience: mood, surroundings, social context. Likely combined with stimulating purchasing behavior, or other manipulative motivations. This requires not only AI/machine learning to go through historical music data, but also measuring the situational factors that would drive appropriate sound/music creation. “Surveillance capitalism” delivers the data points, beyond the pure musical expressions, to correlate which sounds/music likely evoke which moods and which desired behavioral-modification opportunities. I think the big question, complementary to how music will be created moving forward, is how it will be consumed, for what purpose, and decided by whom.
Yes, I loved it and re-played many times
Too bad Amanita didn’t do the sequel.
I loved the three Samorost games, and Botanicula as well, but not Chuchel, which I hated.
Anyway, I mentioned them here because the soundtracks are amazing.
They were always the best games. A new one, Papetura, by a different company but with the same composer, is both elegant and inspirational. Papetura on Steam
Thanks for the tip, I’ll check it out!
Even Matt Bellamy has done an AI collaboration:
https://hiddenjams.com/2020/06/26/matt-bellamy-muse-3-new-unintended/