The original seems to have been made by creating a feedback loop between one of the outputs and the filter input of an analog synthesizer. The examples I've seen were done on a Yamaha CS-15, but a lot of old monophonic analogs let you do that, like a MiniMoog. However, the filters on a CS-15 are quite distinctive; I don't think you can do this exact patch even on an MS-20.
That feedback loop is normally called overdrive. I broke my Voyager by improperly using the headphone output for that, haha. I've only ever heard the MS-20 cited as the source of the track's lead, though. Random YouTube demos seem to get close enough.
Wow, really? That shouldn't happen; lots of people do that! I do it with every synth that allows me to loop back into the filter.
I don't think it's the MS-20. It does come close, but the MS-20 is slightly more nasal. There are lots of moments when I'm playing with it and think "this sounds ALMOST like Da Funk," because with heavy resonance on both filters it distorts pretty badly/nicely :). But perhaps if I try the same feedback technique it might get fuller.
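For anyone who wants to chase that sound in software, here's a rough Overtone sketch of the resonant-filter-plus-overdrive idea. The instrument name and constants are mine, and a hard clip stands in for true analog feedback, so treat it as a starting point rather than the actual patch:

  (use 'overtone.live)

  (definst nasal-lead [freq 110 cutoff 900 res 0.1 drive 8]
    (-> (saw freq)
        (rlpf cutoff res)   ; rq near 0 gives a screaming resonance
        (* drive)
        (clip2 1.0)         ; hard clipping stands in for analog overdrive
        (* 0.3)))

  (nasal-lead)  ; start it droning
  (stop)        ; kill all running synths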
Well, that's opened up a fascinating new world to me. Is there a name for that regular logarithmic-pitch-by-linear-time music notation in the first image? I suddenly feel like a whole lifetime of musical enjoyment has been denied to me by the utterly, utterly ridiculous staff notation system.
It's a representation of a MIDI step sequencer. It's pretty standard in all DAW (Digital Audio Workstation) software. Staff notation is far more nuanced than this. To represent all the details, the MIDI UI needs layers (like Photoshop) for all the different parameters; in staff notation it's all there on one page. Staff notation might have a bigger learning curve, but it's also a far superior way of communicating musical pieces on paper.
I recall composers' notes such as 'with great vigor' or 'like raindrops' on sheet music. MIDI step sequences would have a hard time representing the "emotion" some composers are looking for.
> MIDI step sequences would have a hard time representing the "emotion" some composers are looking for.
They all allow different amounts of precision and interpretation of the performance. Staff notation is open to interpretation on the performance level. MIDI is an exact recording (or programming) of a performance.
With staff notation you can mark eighth notes and say "staccato, lag behind the beat," and there's an infinite number of subtly different ways to play it, even within the composer's own interpretation. With MIDI, it would be represented as "Note on A3, 12 pulses after the first quarter note, velocity of 87. Note off A3, 54 pulses after the first quarter note, velocity of 0. Note on C3 (etc....)". MIDI is great for computers (or for routing signals during live performance) because it's exact.
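To make that concrete, here's a minimal Clojure sketch of such a recording as plain data. The pulse offsets and the A3 velocities come from the example above; the C3 velocity is an assumption:

  (def recorded-phrase ; pulse offsets relative to the first quarter note
    [{:pulse 12 :msg :note-on  :note :A3 :velocity 87}
     {:pulse 54 :msg :note-off :note :A3 :velocity 0}
     {:pulse 60 :msg :note-on  :note :C3 :velocity 64}]) ; velocity assumed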
Not quite sure what you mean by a different way, or by a sequencer's author. Do you mean the musician, or someone like Roland/Korg/ARP?
MIDI is a pretty exact specification. If a sequencer is changing the timing of anything, then the underlying MIDI structures are not the same. You can have different amounts of swing or different PPQ values, but then I wouldn't consider that "the same MIDI".
You can feed it to different sound generators, of course: different drum machines, synths, a laptop, what have you (most sequencers have both built in together), but that's different from the MIDI itself.
Given a specific MIDI file, with specific quantization applied, they should absolutely play the same way -- the only exceptions are the timing resolution they offer and small latency issues (which in practice should be 100% transparent).
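As a sketch of what "applied quantization" means at the pulse level (96 PPQ and a sixteenth-note grid of 24 pulses are my assumptions, not anything from the thread):

  (defn quantize
    "Snap a pulse offset to the nearest multiple of grid."
    [pulse grid]
    (* grid (Math/round (/ (double pulse) grid))))

  (quantize 54 24) ; => 48: the 54-pulse offset from above snaps to the grid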
So much of the notation is not codified, and is part of the artistry of the musician. Even if all those Italian/French words are included (martelé, staccato, largo, et al.), and crescendos and ritards are marked, one cannot play the tune from sight with the desired emotion without first hearing it or playing it over and over.
Compared to the possibilities for expression in step sequencing, something like a piano, which basically just has velocity and a sustain/damper pedal, is not even close.
I disagree, at least as far as rhythm and pitch are concerned. For classical music, staff notation is great — but once you get into syncopation, soloing, and the general fluidity that comes with practically all popular music from the past century, it becomes untenable. Seriously, writing down guitar solos in staff notation is just gross. (I've previously written a little about it here: http://beta-blog.archagon.net/2016/02/05/composers-sketchpad...)
Quite an on-point explanation; I'd also contend, though, that modern DAWs like Ableton Live 9 have quite nuanced sequencing and MIDI controls/notation for professionals (and can go deeper with Max).
I agree, but to me it is extremely hard to decompress. I've tried many times to get familiar with staff notation, but I never could get an intuition for how it works.
Sure. I remember, when learning to play the piano as a kid, that it sometimes took seconds to decipher the next key when learning a new piece.
I've understood that for a large category of adult learners starting an instrument for the first time in their life, it's the staff notation that creates an insurmountable barrier. So, yeah, it's definitely not 'easy' for most beginners.
What I wish was explained to me at a younger age is the fact that there's nothing "special" about the black keys on a piano.
Musical keys are "mathematically" interchangeable. The physical keys that correspond to notes not in the key of C major (alternatively, A minor) are colored black for practical reasons. That is, for knowing one's place while playing the piano.
Fortunately I sorted this out in college, and I've enjoyed making music for my own enjoyment since.
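A quick sketch of what that interchangeability means in equal temperament: every semitone is the same frequency ratio, 2^(1/12), so transposing a tune just shifts its note numbers and preserves every internal interval.

  (defn midi->hz [note]
    (* 440.0 (Math/pow 2.0 (/ (- note 69) 12.0))))

  (defn transpose [notes n]
    (map #(+ % n) notes))

  (map midi->hz [60 64 67])               ; C major triad
  (map midi->hz (transpose [60 64 67] 2)) ; the same triad in D: same ratios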
OK, here is yet another musical representation method that you may find interesting (especially if you have a modern bent):
Take a tune, and now arrange some graph paper like the "note pattern view" in a sequencer, with one difference. Instead of arranging the pitches in the normal way, arrange them by the order they come in the tune (so the first note of the tune will be the "lowest" pitch, the second note the second "lowest", and so on).
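If you'd rather let a computer do the remapping, here's a minimal sketch (the pitch values are arbitrary MIDI numbers, just for illustration):

  (defn first-appearance-order
    "Re-index each pitch by the order in which it first appears."
    [pitches]
    (let [order (into {} (map-indexed (fn [i p] [p i]) (distinct pitches)))]
      (map order pitches)))

  (first-appearance-order [67 64 64 60 72 67]) ; => (0 1 1 2 3 0)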
You might see some interesting things that you didn't hear!
Probably Mozart would be a good target to do this with, and pop music would not, but I could be wrong on that. Possibly this is over your head, but possibly not.
Then if you want to add a ton more formalism, read a little about serialism, and apply this technique to a twelve-tone piece. You will be rewarded for your efforts, as a previously difficult-to-follow piece makes much more sense. That doesn't mean you'll find it pleasing to listen to, though.
Never seen that site before, very useful indeed! Whenever I see something like this I realise there's a world of sites out on the 'net that I'm missing out on, and a whole load of new ways to learn and demonstrate information like this.
If you have an iPad lying around, I recently made an app called Composer's Sketchpad that uses this sort of notation to its fullest. You "draw" notes directly onto a logarithmic pitch grid, and instead of having to snap to the usual array of discrete piano-key pitches (ABCDEFG), you can start your notes at any point on the y-axis and arbitrarily pitch-bend them as you draw. I've found this to be the perfect interface for playing around with musical ideas, and particularly useful for writing out expressive solos — something staff notation is just terrible for!
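The math behind a grid like that is pleasantly simple: equal vertical distances are equal pitch intervals. A sketch (the constants are illustrative, not the app's actual values):

  (defn y->hz
    "Map a y position on a logarithmic pitch grid to a frequency."
    [y px-per-octave bottom-hz]
    (* bottom-hz (Math/pow 2.0 (/ (double y) px-per-octave))))

  (y->hz 0 100 55.0)   ; => 55.0, the bottom of the grid (A1)
  (y->hz 150 100 55.0) ; => ~155.6 Hz, one and a half octaves up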
If you like the idea of making music by "drawing" it like this, and you have access to a Windows machine, I'd recommend playing with FLStudio. With other DAWs I've found it more difficult to make music with just a mouse, without real instruments, microphones, and (maybe multiple) MIDI controllers.
I've been a Mac user for almost ten years now, but the lack of FLStudio for OSX makes me question that decision every six months or so.
Actually, FLStudio has a beta of version 12 that can run on OS X (natively -- not a port based on Wine, although it previously had that too). They say it will be released sometime soon; mid-to-late 2016 or early 2017 is my guess.
But apart from that, you can do the same things (with less built-in tooling for electronic music, but still enough) in GarageBand and, if you haven't invested in VSTs/AUs, Reason.
Most other DAWs have caught up with their piano roll implementations, but I still like FL's the most. I think what makes it unique is its toolset -- as in, the toolbar up top -- as its modal editing workflow lets you add, delete, and then transform notes incredibly quickly. Most of these modes have a shortcut to get to them, and it's easy to flip between them. It even has a chord option, which functions in much the same way as that hook tool. Other DAWs take a modal approach, but in many cases they complicate their implementation.
It's also worth noting that FL is pattern-based, whereas most DAWs are not. ("Pattern-based" means you generally write musical phrases/patterns and repeat them throughout the track, which is helpful for electronic dance music.) A repeating bassline pattern in FL is more akin to a series of symlinks back to the original file, whereas in Cubase and Logic each of these repeats would be a copy (unless you specify otherwise, but I found the implementation convoluted and troublesome).
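The symlink analogy, spelled out as data (names are illustrative, not FL's internals): the arrangement stores references to a single pattern, so editing the pattern changes every occurrence at once.

  (def patterns
    {:bass-a [{:time 0 :pitch 36} {:time 1/2 :pitch 36} {:time 1 :pitch 43}]})

  (def arrangement
    [{:pattern :bass-a :bar 0}
     {:pattern :bass-a :bar 1}   ; a reference, not a copy
     {:pattern :bass-a :bar 2}])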
Most of the other DAWs I've worked with are much more fiddly and require messing with technical details and audio routing to get a good result. FL also requires this, but again the implementation is seamless, has sensible defaults, and stays out of your way until you need to mess with it. (In the middle of a good flow I have very little patience for routing.) I do not find this to be the case with Live, Cubase, or especially Logic... I put in an effort to learn each of these at some point and ultimately left to return to FL.
I've noticed DAWs tend to optimize around performances (Live), MIDI orchestration (Cubase, Logic), recording (ProTools, Reaper) or 'in-the-box' production (FL). All of the above have tools for composing by mouse but FL's workflow is optimized for this specific use-case.
Perhaps, but Guitar Tab is one of the great equalizers of the internet - access to proper notation has typically cost money, whereas Tab could spread in .txt format, small sizes, etc. Sure, tabs could be off or require the song to play along with, but that's the trade-off. Lastly, though it's a personal opinion, Guitar and "proper notation" really only belong in the same sentence when playing Classical/Orchestra, Real Jazz, or as a Pro Studio Session Musician[1] - that is, a very small, elite group of players. Most will get by just growing along with the instrument over the years, for which Tab helps a lot.
[1] Great studio guitarist Glen Campbell supposedly couldn't read music at all, just went along with what he heard and made it work; Tommy Tedesco by comparison could read sheet music upside down.
Ya, plenty of highly successful musicians and "composers" can't/couldn't read music, Paul McCartney being one of the most famous.
Joni Mitchell apparently didn't even use regular tab notation. She'd detune her guitar and annotate the relative pitch shift per string, e.g. D-2 G+1, etc., or similar.
I agree, I just feel like it ends up as a crutch for a lot of musicians. For example, it isn't very specific in terms of rhythm so you get a lot of beginners who see that part of music as loose or unimportant.
But of course, up to a point, lowering barriers is always a good thing for beginners.
Depending on the style of tablature used, you can convey rhythm and more. I'd say it can actually get more specific than staff notation when you allow for guitar-specific notation, like all the different types of legato possible, harmonics, and specifying which string to play a note on (it makes a subtle difference).
Here's a screenshot I took of a random song in TuxGuitar (a GNU licensed tablature editor/viewer). You can see the rhythm fairly clearly, as well as the difference between pull-offs and slides.
I disagree; guitar has multiple fingerings for each note or chord and it's not necessarily clear which one will be the simplest to use when given regular staff notation.
There's a huge jump in the complexity of how you need to reason about translating the music to the instrument with staff notation. Tabs solve this by explicitly telling the musician what fingering to use.
Guitar players should still learn to read music because it's useful, but anyone who wants a guitarist to play their music (especially by sight) should use a notation similar to tabs, but with rhythm indicated as well. Good software to convert between the two could be generally useful, but I imagine there is a bit of nuance to what fingerings sound the best.
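A sketch of why the conversion is non-trivial: in standard tuning, one pitch usually has several candidate positions, and picking among them is exactly the nuance in question (strings numbered from low E = 1 here, for simplicity):

  (def open-strings [40 45 50 55 59 64]) ; E A D G B E as MIDI notes

  (defn fingerings [midi-note max-fret]
    (for [[idx open] (map-indexed vector open-strings)
          :let [fret (- midi-note open)]
          :when (<= 0 fret max-fret)]
      {:string (inc idx) :fret fret}))

  (fingerings 64 12) ; E4 => three choices:
  ;; ({:string 4, :fret 9} {:string 5, :fret 5} {:string 6, :fret 0})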
Early lute music was mostly written in tablature FWIW. A lot of the renaissance repertoire is still published that way. It uses letters rather than numbers to represent the frets, which takes getting used to, and puts separate timing marks above the staff.
In other words, you haven't yet spent the time learning to read music, so you'd rather trash the system that evolved over several centuries and pretend that it's ridiculous.
Nothing has been "denied to you". You just haven't spent the time it takes to learn. Have foreign languages been "denied to you", too?
What, precisely, is ridiculous about music notation? It represents time, on the x axis, and pitch, on the y axis (on a basic level). Seems reasonably logical, to me.
Where is this coming from? It's pretty clearly G minor to me. The bass is playing G. The main phrase starts and ends on a G.
> This 1.334 is a ratio of adding 5 semitones in hertz. This should sound like this.
This harmonic overtone kills it for me. I know there are some overtones in the original, but not like this. 5 semitones is the interval of a perfect fourth; I think maybe a perfect fifth (7 semitones) would work, but this overtone pretty much destroys it for me.
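For reference, the equal-temperament ratios in question:

  (Math/pow 2.0 (/ 5.0 12)) ; => 1.3348..., a perfect fourth (the 1.334 above)
  (Math/pow 2.0 (/ 7.0 12)) ; => 1.4983..., the perfect fifth suggested instead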
Sonic Pi is another one of the new live music programming languages. Sonic Pi has the nicer GUI at present.
These kinds of tools for live music coding are in their early stages right now. I really look forward to the near future of these new tools. I know we've had programming for music for decades, but this is a different genre, at least to me.
Not knowing about Leipzig[0], I've been trying to come up with my own representation of music in JS to work with the Web Audio API. My biggest puzzle so far is how to a) keep a rhythm and b) allow musical "blocks", which are indeterminate in length, to fit into this rhythm. I was thinking that these blocks would be pre-defined functions representing different aspects of music (bassline, riff, etc.) which could be repeated and inserted at arbitrary times in the composition.
If anyone is interested in working on this, drop me a PM/comment (it's just for fun).
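For what it's worth, Leipzig answers exactly this puzzle by making every block a flat sequence of time-tagged notes, so blocks of any length compose with plain data transformations. A simplified Clojure sketch of the idea (the names phrase and then follow Leipzig's; the bodies here are mine):

  (defn phrase
    "Build a block: durations and pitches -> notes with start times."
    [durations pitches]
    (map (fn [t d p] {:time t :duration d :pitch p})
         (reductions + 0 durations) durations pitches))

  (defn then
    "Append the later block after the earlier one ends."
    [later earlier]
    (let [end (apply max (map #(+ (:time %) (:duration %)) earlier))]
      (concat earlier (map #(update % :time + end) later))))

  (def bassline (phrase [1 1 2] [36 36 43]))
  (def riff     (phrase [1/2 1/2 1] [60 62 64]))

  (->> bassline (then riff)) ; the riff starts right where the bassline ends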
I hacked on this a while ago, and the best libraries I found were one called Wad for playing notes and applying effects, and another called Teoria for building chords and intervals and so on.
I was hindered by not having any idea what I was doing, but here's what I came up with:
Think of it more as emulating a recording studio (instruments, amps, effects, the whole thing) modeled as code.
This works beautifully in the composition-first paradigm of Lisps/Clojure. The APIs are a bit hard to wrap your head around if you intend to use Overtone for the entire production process, but I've had lots of fun using it as the mother of all synthesis engines.
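A taste of that studio-as-code feel, sketched with Overtone's fx helpers (the instrument itself is made up; inst-fx!, fx-reverb, and fx-distortion ship with Overtone):

  (use 'overtone.live)

  (definst pluck-bass [freq 55 dur 0.4]
    (* (env-gen (perc 0.01 dur) :action FREE)
       (lpf (saw freq) (* freq 6))))

  (inst-fx! pluck-bass fx-reverb)      ; hang a reverb off the instrument
  (inst-fx! pluck-bass fx-distortion)  ; then drive it into a distortion

  (pluck-bass 55)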
I was a bit disappointed by the output. Is this the quality of music that is regularly produced by this software? The only thing that sounded good to my ears was the bassline piece, by itself; it sounded true to the original. The rest sounded like a Casio keyboard from the '80s -- not flattering to the depth of Da Funk. I was expecting something closer to the original, not something with the same tones, measures, and BPM but sounding totally flat overall. Granted, this is my only exposure to music produced by this software. I wonder if there are other examples that don't sound like a computer making music? Fruity Loops makes better music than this.
It's incredibly hard to make music with commercial production values with nothing but code.
Commercial studios have racks full of boxes that start from $2000 each that fatten up and sweeten the sound, and a commercial track will patch every element through multiple boxes during mixing. The mixing desk itself will add its own sound.
Most of the boxes have analog circuitry inside them. Modelling it is not easy, and good models can use so many cycles it's no longer possible to listen to their output in real time (at least, not without using external hardware acceleration).
None of the mainstream music code environments - Overtone, SuperCollider, Csound, Max/MSP and so on - pay much attention to this. They mostly come with trivially simple DSP models which don't sound all that great.
Surprisingly, they also make it hard to use more complicated models even if you know what you're doing. Mostly you can't just add the model in user land - you have to add it as an external, and rebuild.
Commercial software from Korg, Yamaha, NI, UA, Access, and most pro and semi-pro VST makers puts more effort into sounding good, but the high quality models are somewhat proprietary and the code isn't often open sourced - although sometimes the models appear in papers from (e.g.) the ICMC.
A significantly better synth sound is nowhere near that far out of reach. The sound he's trying to recreate is not something that came out of a commercial stack of "$2000 boxes".
Da Funk's lead can be reasonably approximated with cheaply available modeling software, or moderately priced analog synths. Here's someone getting decently close with Massive, which is around 200 dollars.
Synthesisers are really complicated. This example takes about 4 steps from a pure saw wave; it's a demonstration of principles.
This is the exact same sort of sound you'd get if you picked up any DAW or hardware studio and tried to recreate the song in a few moments.
It's almost certainly possible to create something true to the original, but there's no way it would be suitable as a "recipe" for showing people how Overtone/SuperCollider works.
I just looked at the original and the copy through my laptop's shitty default equalizer. The copy has unnecessary bass around the 1 kHz range. This is probably the largest contributor to what makes it sound hollow.
Also, just comparing them by ear. Over each note, the copy builds up and then builds down in volume in a <> shape. The original has a sharper attack and then fades gradually, which looks more like a > shape.
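Those two amplitude shapes, written as Overtone envelopes to make the difference concrete (the durations are guesses):

  (def swell  (envelope [0 1 0] [0.2 0.2])) ; builds up, then down: <>
  (def plucky (perc 0.005 0.4))             ; sharp attack, gradual fade: >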
I think the copy also has an unnecessary reverb effect, but I'm not sure.
This is all malleable of course. None of this is the fault of the software. It's just a difficulty inherent to reverse engineering.
You're not wrong, but it was needless to say. It's like going into a freshman physics class and pointing out that a frictionless vacuum is hardly a valid simulation of reality. You're missing the point.
The Daft version is clearly mixed (EQ'd, compressed, etc.) and probably done in a nice, "easy"-to-use DAW. I doubt he could get that sound (without working long hours) when hardcoding everything.
The Daft Punk version is pretty much the raw direct sound you get from the synthesizer. Maybe a touch of reverb, a hint of EQ but nothing that changes the sound.
I disagree. The music made by this software has no depth or emotion. It's like a vocoder from the '80s: I could never get one to say my name right, no matter how I spelled it. And Da Funk just doesn't sound right without depth. It sounds flat. It sounds like it was made by a computer, not a human. Not flattering to the original piece or to the software itself. Total failure.
I once brought an interviewer to the brink of tears after she revealed how overloaded she was and how the EE team had effectively been eliminated in favor of offshore workers. It was curious that all the other interviewers before her were MEs and systems engineers with no knowledge of the job I was interviewing for. They all kept referring to the "old product line" developed in house and how great it was to work on, the implication being that the new line was shit because nobody was around who knew how it worked. All the money went into developing sexy enclosures and nothing into the actual electronics.
There were lots of glossy trade brochures lionizing the founders and how great they were, but it was apparent that they went public to cash out and were riding the growth roller coaster to drive the stock price at the expense of the capable employees who could sustain the company long term.
https://youtu.be/W4PEAKNtbVw?t=184
A good explanation of how it works.
https://www.youtube.com/watch?v=kdN43xfwV2U
Played slightly better.