BREAKING CHANGE: the order of these operations was inconsistent
across the different VMs. The Go VM was the only one to first
modulate and then apply the note-tracking multiplication, but that
order made the most sense, so now all VM versions work the same way.
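A minimal Go sketch of the unified order (the function and parameter names here are illustrative, not Sointu's actual identifiers):

```go
package main

import "fmt"

// applyParam shows the unified order: the modulation is added to the
// parameter value first, and the note-tracking multiplier is applied
// to the result. Previously some VMs multiplied first and modulated after.
func applyParam(value, modulation, noteTracking float32) float32 {
	return (value + modulation) * noteTracking
}

func main() {
	// With value=0.5, modulation=0.1 and a note-tracking multiplier of 2,
	// the unified order yields (0.5+0.1)*2 = 1.2, whereas the old order
	// in some VMs would have given 0.5*2+0.1 = 1.1.
	fmt.Println(applyParam(0.5, 0.1, 2))
}
```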
This demonstrates a bug found by Virgill:
the x86 templates optimize away the phase
modulation when all phases are set to 0,
but unison oscillators need the phase
modulation internally to offset the phases
of the individual unison voices.
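A minimal sketch of why the phase-modulation path cannot be removed; the per-oscillator offset scheme below is illustrative, not Sointu's exact detune logic:

```go
package main

import (
	"fmt"
	"math"
)

// unisonSample sketches why the phase-modulation input cannot be dropped
// even when the user sets all phases to 0: each unison oscillator is fed
// an internal phase offset through the very same modulation path.
func unisonSample(phase float64, unisonCount int) float64 {
	var sum float64
	for i := 0; i < unisonCount; i++ {
		// internal per-oscillator offset, routed via the phase-mod input
		offset := float64(i) / float64(unisonCount)
		sum += math.Sin(2 * math.Pi * (phase + offset))
	}
	return sum / float64(unisonCount)
}

func main() {
	fmt.Println(unisonSample(0.25, 4))
}
```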
BREAKING CHANGE: The problem with crush was that it had very few usable values. This changes crush to map the value nonlinearly, so that the crush resolution is expressed in bits. The upper portion of the range is still not very usable (bits 12-24, i.e. hardly any crushing), but at least the lower portion now is. Note that the crush resolution therefore has a slightly different meaning than before.
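A sketch of what a bits-based nonlinear mapping could look like; the 24-bit ceiling follows the text above, but the exact curve and parameter range are assumptions:

```go
package main

import (
	"fmt"
	"math"
)

// crush quantizes the signal to the given number of bits. Mapping the
// 0..1 parameter linearly in bits (i.e. exponentially in step size) is
// what makes the low end of the range usable; the 24-bit ceiling matches
// the commit text, the rest is an assumption.
func crush(sample, param float32) float32 {
	bits := 1 + param*23                       // param 0..1 -> 1..24 bits
	steps := float32(math.Exp2(float64(bits))) // quantization levels
	return float32(math.Round(float64(sample*steps))) / steps
}

func main() {
	fmt.Println(crush(0.7, 0.125)) // few bits: heavy crushing
	fmt.Println(crush(0.7, 1.0))   // 24 bits: hardly any crushing
}
```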
Pre-gaining ran into trouble: it could not bring the signal level back to near 0 dB. For example, with an infinite ratio, the pre-gain system capped the signal level at the threshold, which in turn caused trouble with stereo signals.
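A toy illustration of the capping problem, assuming a hypothetical pre-gain compressor with an infinite ratio (the structure is illustrative, not Sointu's actual code):

```go
package main

import "fmt"

// With an infinite compression ratio, the gain computed in a pre-gain
// design is threshold/level for any level above the threshold, so the
// output can never rise above the threshold and cannot be brought back
// to near 0 dBFS afterwards.
func preGainCompress(level, threshold float32) float32 {
	if level <= threshold {
		return level
	}
	gain := threshold / level // infinite ratio: clamp to threshold
	return level * gain       // always exactly the threshold
}

func main() {
	fmt.Println(preGainCompress(0.9, 0.25)) // stuck at 0.25, far below 0 dBFS
}
```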
There is a new "sync" opcode that saves the top-most signal every 256 samples to the new "syncBuffer" output. Additionally, you can enable saving the current fractional row as sync[0], which avoids having to calculate the beat in the shader and also keeps the beat correct when the beat is modulated.
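A rough sketch of the mechanism, with illustrative names (the real syncBuffer layout may differ):

```go
package main

import "fmt"

// Every 256th sample, the top-most signal is appended to the sync buffer;
// optionally the current fractional row goes first, as sync[0].
type synth struct {
	syncBuffer []float32
	rowTime    float32 // fractional row position, advanced by the sequencer
}

func (s *synth) renderSample(sampleIndex int, topSignal float32, saveRow bool) {
	if sampleIndex%256 == 0 {
		if saveRow {
			s.syncBuffer = append(s.syncBuffer, s.rowTime) // sync[0] of the frame
		}
		s.syncBuffer = append(s.syncBuffer, topSignal)
	}
}

func main() {
	s := &synth{rowTime: 3.5}
	for i := 0; i < 512; i++ {
		s.renderSample(i, 0.42, true)
	}
	fmt.Println(s.syncBuffer) // two sync frames: [3.5 0.42 3.5 0.42]
}
```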
the send asm code is quite ugly at the moment (pushf & popf to save the stereo flag), but the new regression test should ensure we don't break it again when we eventually refactor it
gopher had fixed this, but we foolishly removed the fix; this reintroduces it. It could be made optional for those who really care, as the ultimate size optimizers may still want to get rid of it.
The header files are automatically generated during the build. There is no need to #define anything; everything is fixed by the .asm file. This adds Go as a dependency for running the unit tests, but that is probably not a bad thing, as Go is likely needed anyway if one wants to actually start developing Sointu.
Trying to force a specific song length other than the default never quite worked, so we only support the default MAX_SAMPLES and calculate it for the user in the exported .h header file.
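For illustration, a hypothetical calculation of the default song length in samples; the formula and parameter names are assumptions, not necessarily what the exported header uses:

```go
package main

import "fmt"

// maxSamples sketches how the exported header's MAX_SAMPLES could be
// derived for the user from the song parameters.
func maxSamples(sampleRate, bpm, rowsPerBeat, rowsPerPattern, patterns int) int {
	// total rows * seconds per row * sample rate, kept in integer math
	return rowsPerPattern * patterns * sampleRate * 60 / (bpm * rowsPerBeat)
}

func main() {
	// e.g. 44.1 kHz, 120 BPM, 4 rows per beat, 16 patterns of 16 rows
	fmt.Println(maxSamples(44100, 120, 4, 16, 16)) // 1411200 samples = 32 s
}
```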
The old speed scaling of 24 was ill-chosen: triplets resulted in a minor buffer overflow error. This was never caught by anyone until Visual Studio 2019 flagged it in debug mode; presumably all compilers allocate some extra space, so it didn't matter in practice. Now 29 increments double the speed, and alternating speeds of 52 and 81 result in triplets that are just slightly faster than the nominal BPM, i.e. the buffer is slightly underrun, which is probably unnoticeable to the user.
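A worked example of the new scaling, assuming the speed parameter is centered at 64 so that the multiplier is 2^((value-64)/29) (the centering is an assumption; only the 29-increments-per-doubling comes from the text above):

```go
package main

import (
	"fmt"
	"math"
)

// With 29 increments per doubling, values 52 and 81 give multipliers of
// about 0.75 and 1.50, so a pair of alternating rows lasts
// 1/0.75 + 1/1.50 ≈ 1.998 normal rows: a 2:1 long-short triplet feel
// that is just slightly faster than the nominal BPM, hence the slight
// buffer underrun mentioned above.
func speedMultiplier(value float64) float64 {
	return math.Exp2((value - 64) / 29)
}

func main() {
	slow, fast := speedMultiplier(52), speedMultiplier(81)
	fmt.Printf("multipliers: %.4f %.4f\n", slow, fast)
	fmt.Printf("pair duration: %.4f rows\n", 1/slow+1/fast)
}
```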
The main interface is the render_samples function, which renders several
samples in one call to limit the number of calls from Go to C. It is
compiled into a library, which is then linked and called from bridge.go.
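A minimal cgo sketch of this call pattern; the render_samples signature below is an assumption, and the C side is stubbed inline so the example compiles on its own:

```go
package main

/*
// Stand-in for the real library: in Sointu, render_samples lives in the
// compiled synth library; here we inline a stub so this sketch builds.
#include <stddef.h>
static int render_samples(float *buffer, size_t numSamples) {
    for (size_t i = 0; i < numSamples; i++) buffer[i] = 0.0f;
    return 0;
}
*/
import "C"

import (
	"fmt"
	"unsafe"
)

// renderSamples renders a whole buffer with a single Go-to-C call, which is
// the point of the batched interface: the per-call cgo overhead is paid
// once per buffer instead of once per sample.
func renderSamples(buffer []float32) error {
	ret := C.render_samples((*C.float)(unsafe.Pointer(&buffer[0])), C.size_t(len(buffer)))
	if ret != 0 {
		return fmt.Errorf("render_samples failed: %d", ret)
	}
	return nil
}

func main() {
	buf := make([]float32, 4096)
	if err := renderSamples(buf); err != nil {
		panic(err)
	}
	fmt.Println("rendered", len(buf), "samples")
}
```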
The stereo opcode variants have bit 1 of the command stream set. The polyphony is split into two parts: 1) polyphony, meaning that voices reuse the same opcodes; 2) multitrack voices, meaning that a track triggers more than one voice. Both can be flexibly defined in any combination: for example, voices 1 and 2 can be triggered by track 1 and use instrument 1, while voice 3 is triggered by track 2 and uses instrument 2, and voice 4 by track 3, also using instrument 2. This is achieved through bitmasks: in the aforementioned example, bit 1 of su_voicetrack_bitmask would be set, meaning "the voice after voice #1 is triggered by the same track". Correspondingly, bits 1 and 3 of su_polyphony_bitmask would be set to indicate that "the voices after #1 and #3 reuse the same instruments".
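A sketch that decodes the two bitmasks for exactly the example above; voices are 0-indexed here, and the bit order of the real su_* masks is an assumption:

```go
package main

import "fmt"

// assign walks the voices: a set bit i in voicetrack means "the voice
// after voice i is triggered by the same track", and a set bit i in
// polyphony means "the voice after voice i reuses the same instrument".
func assign(numVoices int, voicetrack, polyphony uint32) (tracks, instruments []int) {
	track, instrument := 0, 0
	for v := 0; v < numVoices; v++ {
		tracks = append(tracks, track)
		instruments = append(instruments, instrument)
		if voicetrack&(1<<v) == 0 {
			track++ // next voice starts a new track
		}
		if polyphony&(1<<v) == 0 {
			instrument++ // next voice starts a new instrument
		}
	}
	return tracks, instruments
}

func main() {
	// Voices 0 and 1 share track 0 / instrument 0; voice 2 is track 1 /
	// instrument 1; voice 3 is track 2 but reuses instrument 1.
	tracks, instruments := assign(4, 0b0001, 0b0101)
	fmt.Println("tracks:     ", tracks)      // [0 0 1 2]
	fmt.Println("instruments:", instruments) // [0 0 1 1]
}
```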
The modulation is now always added during value transformation.
With this, a lot of *_MOD defines could be removed.
The waveforms in some tests changed slightly, because saving
the value to memory after modulating it introduces small
rounding errors.
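A toy example of transform-time modulation and the rounding it causes when the modulated value is stored back to 32-bit memory (names are illustrative):

```go
package main

import "fmt"

// transformValue adds the modulation during the value transformation
// itself, so no per-opcode *_MOD handling is needed later. Storing the
// modulated value back to a float32 slot is where the small rounding
// differences in the test waveforms come from.
func transformValue(raw byte, modulation float64) float32 {
	transformed := float64(raw)/128.0 + modulation
	// saving to a float32 slot rounds the 64-bit intermediate result
	return float32(transformed)
}

func main() {
	v := transformValue(64, 1e-9)
	fmt.Printf("%.12f\n", v) // the tiny modulation is lost to float32 rounding
}
```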