Silly question but why is the Pi necessary?
Isn't this a Linux project, or actually a C++ project?
My guess is that device names are a -mess- on Linux (the same device has different names in /dev, in lsusb, and in the ALSA/Pulse/PipeWire device lists; multiple layers of indirection do not make for a great casual experience), and standardizing on one well-known device makes this less of a hair-pulling exercise. Which is telling, because the Pi is sold as a learning kit, and it takes away this most unhelpful level of required knowledge.
I guess looking for a meta-project where the driver was a specification like 'stereo-96+kHz-active' (one that survived reboots and hot-plugging) would just create one more meta-project? Like https://editorconfig.org/. But maybe it's about time, because "Linux" and even "open source" lose a lot of value when getting something running is 90% arbitrary yak-shaving, and people need deep magic to get audio working the way they like. I mean, you have to be a 90s-2020s cruft expert (no other word for it) to play with supposedly user-friendly software.
Not sure about this project, but generally it is not. All it needs is a small board capable of running Linux, drivers for external ADCs/DACs where necessary, plus the digital FX software. As an example, Guitarix also runs on ARM and works on cheaper boards such as the Orange Pi, NanoPi, and many others that are cheaper and easier to obtain than the Raspberry Pi. https://guitarix.org/
In some cases you don't even need to run Linux. There are many effects projects using a cheap Teensy board plus its piggyback audio board; it comes with a really powerful audio library and is compatible with the Arduino IDE.
The Teensy is truly amazing, to the point that one can build synthesizers that just a few years ago would have cost hundreds of bucks. Take a look for example at the TSynth, 100% Open Hardware & Open Source, also available as a kit.
Some have asked how this is different from what an impulse response convolution plugin can do. The simple answer is that IR convolution is a linear, time-invariant system. It can adjust the amplitudes and phases of different frequencies, but it can't create new ones. What this is doing is basically a nonlinear generalization of that, which makes it better able to model overdrive, amp saturation, and so on.
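A quick way to see the LTI limitation is to push a pure sine through both kinds of processing and compare spectra. A minimal NumPy sketch (the impulse response and the drive amount are made up for illustration):

```python
import numpy as np

fs = 48_000                       # sample rate (assumed)
n = fs                            # one second of audio
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000 * t)  # pure 1 kHz tone (integer number of periods)

rng = np.random.default_rng(0)
ir = rng.normal(size=256)         # an arbitrary impulse response

# LTI processing: circular convolution with the IR (roughly what an IR loader
# does). It can only rescale and phase-shift bins that already have energy.
linear = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(ir, n=n), n=n)

# Nonlinear processing: a memoryless soft clipper, a toy stand-in for
# amp/overdrive saturation.
nonlinear = np.tanh(3.0 * x)

def bin_mag(y, hz):
    # with n == fs, each FFT bin is exactly 1 Hz wide
    return np.abs(np.fft.rfft(y))[hz]

# Third-harmonic level relative to the fundamental:
print(bin_mag(linear, 3000) / bin_mag(linear, 1000))       # essentially zero
print(bin_mag(nonlinear, 3000) / bin_mag(nonlinear, 1000))  # clearly nonzero
```

The convolved signal still has energy only at 1 kHz; the clipped one grows odd harmonics, which is exactly the part an IR plugin cannot capture.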
One thing I am curious about is exactly how this works. How do neural networks make black box nonlinear system modeling possible, and how does this relate to something like a Volterra series? Would love an explanation and/or some sources if anyone has any.
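Not an authoritative answer, but for intuition: a Volterra series generalizes convolution by adding kernels over pairs (triples, ...) of delayed input samples, and a neural net trained on input/output recordings plays a similar black-box role without enumerating those kernels explicitly, which matters because the kernel count explodes with order and memory length. A toy truncated 2nd-order version, with made-up kernels:

```python
import numpy as np

def volterra2(x, h1, h2):
    """y[n] = sum_k h1[k] x[n-k] + sum_{k,l} h2[k,l] x[n-k] x[n-l]."""
    M = len(h1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        past = x[max(0, n - M + 1):n + 1][::-1]   # x[n], x[n-1], ...
        past = np.pad(past, (0, M - len(past)))   # zero history at the start
        y[n] = h1 @ past + past @ h2 @ past       # 1st-order + 2nd-order terms
    return y

h1 = np.array([0.8, 0.2, 0.05])  # made-up 1st-order (plain FIR) kernel
h2 = 0.1 * np.eye(3)             # made-up 2nd-order kernel: x[n-k]**2 terms
x = np.sin(np.linspace(0, 4 * np.pi, 128, endpoint=False))  # two sine cycles
y = volterra2(x, h1, h2)
# The quadratic term rectifies: the output picks up a DC offset and a 2nd
# harmonic that no purely linear filter could produce from a pure sine.
```

With h2 set to zeros this collapses back to ordinary FIR convolution, i.e. exactly what an IR plugin does.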
Wonderful project, if you already have a stock of Raspberry Pis at home. I wonder if we'll ever be able to buy a Pi again. They've been out of stock everywhere since forever. I'm starting to consider it a dead platform and am looking at different hardware (e.g. Odroid).
> I wonder if we'll ever be able to buy a Pi again. They've been out of stock everywhere since forever.
You can get them now, if one or more of a few things is true:
1. You're willing to pay exorbitant scalper prices. In that case, just head over to AliExpress and you can pick up a Pi with no problem.
2. You're very patient and willing to watch rpilocator[1] a lot in order to score a Pi. I do that and managed to score one more 8GB model during a 24 hour (or so) long window when Elektor had some in stock last week. Just keep checking, they pop up here and there.
3. You live near a Microcenter store. Apparently they get stock in every now and then as well. Sadly that only benefits the people who live nearby, a group I can't count myself among. sigh
> I'm starting to consider it a dead platform and looking at different hardware (example Odroid.)
Things are going to get better eventually. That's very close to 100% certain. Pis aren't in short supply because of anything the RPi folks are doing wrong; it's down to the global chip shortage that's affecting just about everything.[2][3] But the bullwhip effect will almost certainly play out, like it always does, and at some point there will be a glut of all the chips that are in short supply today, and probably a glut of Raspberry Pis as well. The only issue is predicting when that will happen.
> Pi's aren't in short supply because of anything the RPi folks are doing wrong, it's down to the global chip shortage that's affecting just about everything
Aren't the RPi leaders on record saying that the vast bulk of RPi production is being allocated to commercial entities instead of retail?
It would be interesting to see data on the number of RPis being manufactured over time (to see the impact of and recovery from Covid), and contrast it with retail availability.
They announced something like that, but I don't recall all the details. But still, that doesn't change anything. The ultimate cause of the Pi shortage is the overall chip shortage. How they choose to allocate the ones they do make is a decision that one can quibble over, but it's not the root cause of anything.
And honestly, considering their stated justification (sending parts to places where jobs are on the line, etc.) it makes sense. I'd probably do the same thing.
Nope. There was a press release about the focus on getting enough Compute Modules out to satisfy commercial demand for some SKUs, but they didn't say how much it was impacting other models (and the BOM isn't exactly the same across SKUs, so it's hard to tell).
Eben Upton did mention on Twitter that it was good they got out the Pico W, since it might tide over hobbyists until the crunch lifts, but I can't find that tweet right now.
IMO audio is ripe for the kind of neural-network advances that images got with DALL-E and Stable Diffusion. Even if it's not the same kind of leap, neural networks could at least emulate expensive instruments and improve the quality of digital filters and effects.
I understand that the Peter Jackson Beatles documentary did some very clever things with ML to separate the voices and instruments to make a decent soundtrack from a fairly mediocre input stream.
With those you can either load profiles (models) made by other people, or "profile" your own amps or pedals by playing a 2-3 minute test track (it sounds like various kinds of white and pink noise) through them while recording the output at the same time.
I have one at home, and it sounds incredibly good. Once you've loaded a bunch of amps into a van after an exhausting gig, you come to appreciate having a lightweight option instead.
Although I don't use an amp modeler, I ripped the guts out of my Twin Reverb and replaced the heavy ceramic speakers with neodymium speakers and an external solid-state amp head from Quilter. No more moving cart for gigging, and I still have the pleasure of standing next to an open-backed 2x12!
I’m too much of a minimalist for a modeling amp. I want physical dials and switches that are easily tweaked during performance and I want to be familiar with the entire range of settings along the audio path!
Tone generated by analog circuits is notoriously difficult to reproduce digitally. The conceptual behavior is often easy to model, but hardware deviates from theory in ways that are technically subtle yet audibly apparent.
It's deterministic, but the parameters may be unknown, and approximate values must often be discovered by iterative guess-and-check. Researching and manually modeling an approximation can be incredibly tedious and still fall short. This is exactly the kind of application that machine learning excels at.
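As a toy illustration of that guess-and-check loop: pretend the "hardware" is a tanh clipper with one unknown drive parameter (an assumption for the sketch, nothing like a real circuit), record its response to a probe signal, then sweep candidates for the best match. ML-based modeling is this same idea scaled up to thousands of parameters searched by gradient descent instead of brute force:

```python
import numpy as np

def device(x, drive):
    # stand-in for the real analog circuit under test
    return np.tanh(drive * x)

rng = np.random.default_rng(1)
probe = rng.uniform(-1, 1, 4096)     # test signal fed through the hardware
target = device(probe, 3.7)          # "measured" output; 3.7 is unknown to us

# brute-force sweep over candidate drive values, keep the best fit
drives = np.linspace(0.5, 8.0, 500)
errors = [np.mean((device(probe, d) - target) ** 2) for d in drives]
best = drives[int(np.argmin(errors))]
print(best)                          # lands very close to 3.7
```

With one parameter a sweep works; with a whole circuit's worth of interacting, undocumented parameters it doesn't, which is the gap the neural-net approach fills.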
The analog parts provide some interesting undocumented features. It’s possible to play 4 channels of 8-bit samples, filter them and still have 2 of the 3 SID channels free for other audio.
Not a silly question. One of the things I always miss from these studies is a comparison with more classical inference models. In particular, back in the day we had to train against the outputs of PCA models (linear and nonlinear) in order to get a feasible production version with the best reduced parameter set and to justify equivalent performance. Today we have so much computing power that nobody cares.
Audio is a particularly good application for this. For example, the old ADPCM algorithms have evolved naturally into their ML counterparts. Some even have fewer parameters, and are thus more computationally efficient, thanks to the flexible feedback structures of the training or production models (e.g. RNNs).
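For reference, the classical baseline being evolved here looks roughly like this: predict the next sample from the previous reconstructed one and quantize only the prediction error with an adaptive step size. This is a simplified sketch (not any standardized ADPCM variant); neural codecs essentially swap the trivial predictor for a learned recurrent one:

```python
import numpy as np

def adpcm_roundtrip(x, n_levels=16):
    """Encode and immediately decode, returning the reconstructed signal."""
    step, pred = 0.05, 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        err = float(s) - pred
        code = int(round(err / step))                 # quantize prediction error
        code = max(-n_levels // 2, min(code, n_levels // 2 - 1))
        pred += code * step                           # decoder tracks the same state
        out[i] = pred
        # crude step adaptation: grow on large codes, shrink on small ones
        step = min(max(step * (1.2 if abs(code) > n_levels // 4 else 0.9),
                       1e-3), 0.5)
    return out

t = np.arange(2000) / 8000
x = 0.5 * np.sin(2 * np.pi * 200 * t)
y = adpcm_roundtrip(x)
print(np.sqrt(np.mean((x - y) ** 2)))   # small reconstruction error
```

Only the 4-bit codes would be transmitted; the decoder reproduces `out` by running the identical predictor and step-adaptation state machine.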
Very cool, but can’t we already do this with impulse responses? This seems kind of overly complicated to emulate something that already exists.
However, I can imagine this becoming much more interesting - describing some tone and effects and having it make a custom cocktail of sounds would be nuts.
Cool. There’s a startup working on something very similar, and they also have a fast audio inference engine and raspberry pi implementation: https://waveshaper.ai