Keywords: sound, music, computer music, algorithms, generative models, sequence models

This turned out to be the quickest way to get it done: a graphical dump of my current picture of the relationship between sound, music, sequences, and search, based on a shared modelling approach; incomplete but more or less consistent [claim]. Domain layer in black, modelling layer in green.

Graphical description of the scenery drawing

I would like to argue that by combining coding, latent-space state propagation, and hierarchical composition, most of the phenomena in sound, music, vocalization, spoken language, and written language can be modelled, all with the same few items. Let me know if you think this is obvious. Treating language as variable-length sequences of actions, the approach extends straightforwardly to modelling navigation problems.
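A minimal sketch of what these three ingredients could look like together, under my own assumptions (not the author's actual model): a latent state is propagated step by step by a dynamics map, each state is coded as the nearest entry of a discrete codebook, and a hierarchical upper layer composes the sequence by supplying the dynamics for each step. All names (`propagate`, `decode`, `generate`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(z, A):
    """One step of latent-space state propagation: z_{t+1} = tanh(A @ z_t)."""
    return np.tanh(A @ z)

def decode(z, codebook):
    """Coding step: map the latent state to the index of the nearest code vector."""
    return int(np.argmin(np.linalg.norm(codebook - z, axis=1)))

def generate(z0, dynamics, codebook):
    """Roll out a variable-length sequence of discrete actions.

    Hierarchical composition: a slow upper layer supplies one dynamics
    map per step; the fast lower layer propagates the latent state and
    emits a coded symbol (a note, a phoneme, a movement primitive, ...).
    """
    z, seq = z0, []
    for A in dynamics:
        z = propagate(z, A)
        seq.append(decode(z, codebook))
    return seq

dim, n_codes, steps = 4, 8, 6
codebook = rng.standard_normal((n_codes, dim))
dynamics = [0.5 * rng.standard_normal((dim, dim)) for _ in range(steps)]
seq = generate(rng.standard_normal(dim), dynamics, codebook)
print(seq)  # a sequence of discrete action indices, one per step
```

The same loop reads as music (codebook = pitches), speech (codebook = phones), or navigation (codebook = movement actions); only the codebook and dynamics change, which is the "same few items" point above.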



