I keep running into a limitation I’ve brought up several times: chords played into Scaler from an upstream MIDI source (most often another Scaler managing the master chord progression, with the downstream Scaler(s) handling expression, rhythmic variation, melody etc.) don’t get the same treatment with regard to voicing & expression options as the chords the Scaler instance maintains itself. And my performance setups are too dynamic to rely on the (manual) sync capability. So instead of nagging further about the problem, I wanted to circulate a possible solution approach.
What if Scaler could automatically learn new scales and chords on the fly, which would then influence its execution of expressions, performances, phrases, melodies, rhythms etc., just as it does when it already knows the scale or chord progression? By “learn on the fly” I don’t mean the existing DETECT mode; the gap is that today it only works manually, with user interaction. What if we could instead play in a handful of notes (polyphonically, all at once) in a control octave, say C-2, a note range rarely if ever used for actual music?

So if I wanted to temporarily set Scaler to adjust everything to a C Major scale, I would play C-2, D-2, E-2, F-2, G-2, A-2 and B-2 simultaneously (e.g. from the DAW’s piano roll), essentially a 7-voice chord containing all the notes of my desired scale. Scaler would pick this up live, configure its current scale accordingly, and then adjust every scale-dependent setting to the new information. Like I said, think of it as “Detect” mode, but automatable and remote-controlled. This way you could sidestep any more complicated MIDI CC implementation. I actually got the inspiration for this idea from how the Synthimuse learns new temporary scales.
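To make the proposal a bit more concrete, here is a minimal Python sketch of the kind of logic I have in mind. Everything here is my own assumption, not anything from Scaler’s actual internals: the names (`ScaleLearner`, `CHORD_WINDOW_MS`), the 30 ms simultaneity window, the 5-note minimum before a chord counts as a scale, and the convention that MIDI note 0 corresponds to C-2.

```python
# Hypothetical sketch of control-octave scale learning.
# All names and thresholds are assumptions for illustration only.

CONTROL_OCTAVE_LOW = 0    # MIDI note 0 = C-2 (assuming the C3 = middle C convention)
CONTROL_OCTAVE_HIGH = 11  # MIDI note 11 = B-2
CHORD_WINDOW_MS = 30      # note-ons within this window count as "simultaneous"

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

class ScaleLearner:
    def __init__(self):
        self.pending = []          # (timestamp_ms, pitch_class) pairs
        self.current_scale = None  # set of pitch classes, 0 = C

    def on_note_on(self, note: int, timestamp_ms: float) -> bool:
        """Feed every incoming note-on here. Notes in the control octave
        are consumed for scale learning instead of being played."""
        if not (CONTROL_OCTAVE_LOW <= note <= CONTROL_OCTAVE_HIGH):
            return False  # not a control note; pass through to the engine
        # Keep only pending notes still inside the simultaneity window.
        self.pending = [(t, pc) for (t, pc) in self.pending
                        if timestamp_ms - t <= CHORD_WINDOW_MS]
        self.pending.append((timestamp_ms, note % 12))
        pitch_classes = {pc for (_, pc) in self.pending}
        if len(pitch_classes) >= 5:  # enough distinct notes to call it a scale
            self.current_scale = pitch_classes
        return True  # consumed as a control message

    def describe(self) -> str:
        if self.current_scale is None:
            return "no scale learned yet"
        return " ".join(NOTE_NAMES[pc] for pc in sorted(self.current_scale))

# Example: C major played as a 7-voice chord at C-2, all within a few ms.
learner = ScaleLearner()
for i, note in enumerate([0, 2, 4, 5, 7, 9, 11]):  # C D E F G A B
    learner.on_note_on(note, timestamp_ms=i)
print(learner.describe())  # -> C D E F G A B
```

The appeal of this shape, to me, is that it rides entirely on note data the DAW can already sequence and automate, so no extra MIDI CC mapping layer is needed; the downstream Scaler just treats the control octave as configuration rather than performance.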