The Sointu .asm / library stuff lives in the root folder. There is a folder called "go4k", which is where
all the Go code lives. Following the ideas from https://medium.com/@benbjohnson/standard-package-layout-7cdbc8391fc1
the go4k folder is the "domain model" of the Go side, and should have no dependencies.
It contains Unit, Instrument, the Synth interface etc. Putting go4k in a sub-folder is actually
in the spirit of Ben's layout, as go4k adds a dependency on the Go language.
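For illustration, here is a minimal sketch of what the go4k domain model could look like. The names Unit, Instrument and Synth come from the text above; the fields and the method set are assumptions, not the actual definitions:

    package go4k

    // Unit is one opcode/processing step of an instrument
    // (the fields here are illustrative assumptions).
    type Unit struct {
        Type       string
        Parameters map[string]int
    }

    // Instrument is a chain of units, shared by one or more voices.
    type Instrument struct {
        NumVoices int
        Units     []Unit
    }

    // Synth is the interface the rest of the code programs against;
    // the exact method set here is a guess.
    type Synth interface {
        Render(buffer []float32) error
        Trigger(voice int, note byte)
        Release(voice int)
    }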
Bridge ties the domain model to sointulib through cgo. It returns a C.Synth, but
makes sure that C.Synth implements the Synth interface, so others can use a
Synth regardless of how it is actually implemented. A MockSynth and a WebProxy synth are good
prospects for other implementations of Synth.
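As a sketch of why the interface matters: anything with the same method set satisfies Synth, so a pure-Go MockSynth could stand in for the cgo bridge in tests. This reuses the hypothetical Synth interface from the sketch above; the method bodies are placeholders, not the real bridge behavior:

    package go4k

    // MockSynth satisfies the Synth interface without touching cgo;
    // useful for testing code that only depends on the domain model.
    type MockSynth struct{}

    func (m *MockSynth) Render(buffer []float32) error {
        for i := range buffer {
            buffer[i] = 0 // silence; a real mock might render a test tone
        }
        return nil
    }

    func (m *MockSynth) Trigger(voice int, note byte) {}
    func (m *MockSynth) Release(voice int)            {}

    // compile-time check that MockSynth implements Synth
    var _ Synth = (*MockSynth)(nil)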
It is a bit fuzzy where methods like "Play", which have no dependencies other than the
domain-model structs, should go. They should probably live in the go4k package as well.
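Just to illustrate the idea, such a helper could be a plain function in go4k, depending only on the (hypothetical) Synth interface sketched above; the signature is an assumption:

    package go4k

    // Play renders a buffer of audio using a Synth. It depends only on the
    // domain model, so it could live in go4k. (Hypothetical signature.)
    func Play(synth Synth, samples int) ([]float32, error) {
        buffer := make([]float32, samples*2) // stereo, interleaved
        if err := synth.Render(buffer); err != nil {
            return nil, err
        }
        return buffer, nil
    }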
The file organization on the Go side is not at all finalized. But how packages are broken
into files is mostly a documentation issue; it does not affect the users of the packages at
all.
BTW: The name go4k was chosen because Ben advocated naming the subpackages
according to the dependency they introduce AND because the prototype of 4klang was
called go4k (there are still some defines in the 4klang source revealing this). go4k thus
honors our roots, but it is also not a bad name: it is the main package of a 4k synth tracker,
written in Go.
The LOCALPORT and GLOBALPORT macros just take numeric parameters (unit, port) and (voice, unit, port), respectively, which should now be quite intuitive, as most of the time the port index is one of the parameters visible in the .asm file. Only a few units have extra ports beyond the transformed variables. Overall, this should make parsing the .asm files a lot easier.
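For example, a parser could represent the two argument forms as plain structs; these type names are made up for illustration and are not taken from the source:

    package go4k

    // LocalPort refers to a port within the current voice: (unit, port).
    type LocalPort struct {
        Unit, Port int
    }

    // GlobalPort refers to a port of any voice: (voice, unit, port).
    type GlobalPort struct {
        Voice, Unit, Port int
    }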
The stereo opcode variants have bit 1 of the command stream set.

The polyphony is split into two parts: 1) polyphony, meaning that voices reuse the same opcodes; 2) multitrack voices, meaning that a track triggers more than one voice. Both can be flexibly defined in any combination: for example, voices 1 and 2 can be triggered by track 1 and use instrument 1, voice 3 by track 2 / instrument 2, and voice 4 by track 3 / instrument 2. This is achieved through the use of bitmasks: in the aforementioned example, bit 1 of su_voicetrack_bitmask would be set, meaning "the voice after voice #1 will be triggered by the same track". On the other hand, bits 1 and 3 of su_polyphony_bitmask would be set to indicate that "the voices after #1 and #3 will reuse the same instruments".
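To make the bitmask logic concrete, here is a small sketch that derives which bits should be set from the per-voice track/instrument assignments of the example above. Bit numbering follows the text (bit n refers to the voice after voice #n), and the code only reports the bit numbers instead of committing to a particular binary encoding:

    package main

    import "fmt"

    func main() {
        // Voices 1..4 from the example: (track, instrument) per voice.
        type voice struct{ track, instrument int }
        voices := []voice{
            {track: 1, instrument: 1}, // voice 1
            {track: 1, instrument: 1}, // voice 2
            {track: 2, instrument: 2}, // voice 3
            {track: 3, instrument: 2}, // voice 4
        }
        for n := 1; n < len(voices); n++ {
            if voices[n].track == voices[n-1].track {
                fmt.Printf("su_voicetrack_bitmask: set bit %d (voice %d is triggered by the same track as voice %d)\n", n, n+1, n)
            }
            if voices[n].instrument == voices[n-1].instrument {
                fmt.Printf("su_polyphony_bitmask: set bit %d (voice %d reuses the instrument of voice %d)\n", n, n+1, n)
            }
        }
        // Prints: bit 1 for su_voicetrack_bitmask and bits 1 and 3 for
        // su_polyphony_bitmask, matching the example above.
    }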