Recreating Daft Punk's Da Funk with Overtone and Leipzig (overtone-recipes.github.io)
375 points by greenonion on April 7, 2016 | hide | past | favorite | 98 comments



The original seems to have been made by creating a feedback loop between one of the outputs and the filter input of an analog synthesizer. The examples I've seen were done on a Yamaha CS-15, but a lot of old monophonic analogs let you do that, like a Minimoog. However, the filters on a CS-15 are quite unique; I don't think you can even do this exact patch on an MS-20.

https://youtu.be/W4PEAKNtbVw?t=184

A good explanation of how it works:

https://www.youtube.com/watch?v=kdN43xfwV2U

Slightly better played


A feedback loop like that is normally called overdrive. I broke my Voyager doing that improperly with the headphone output, haha. I've only ever heard of the MS-20 being the source for the lead in the track, though. Random YouTube demos seem to get it close enough.


Wow, really? That shouldn't happen, lots of people do that! I do it with every synth that allows me to loop back into the filter.

I don't think it's the MS-20; it comes close, but the MS-20 is slightly more nasal. There are lots of moments when I play with it and think "this sounds ALMOST like The Funk", because with a lot of resonance on two filters it distorts pretty bad/good :). But perhaps if I try it with the same feedback technique it might get fuller.


Amazing – nice find. Never played a CS-15 but it looks like a real powerhouse.


The original was recorded on a Roland SH-101.


Well that's opened up a fascinating new world to me. Is there a name for that regular logarithmic by linear music notation in the first image? I suddenly feel like a whole lifetime of musical enjoyment has been denied to me by the utterly utterly ridiculous staff notation system.


It's a representation of a MIDI step sequencer. It's pretty standard in all DAW (Digital Audio Workstation) software. Staff notation is way more nuanced than this. In order to represent all the details, the MIDI UI requires layers (like Photoshop) for all the different parameters. In staff notation it's all there on one page. Staff notation might have a steeper learning curve, but it's also a far superior way of communicating musical pieces on paper.


I recall some composers' notes such as 'with great vigor' or 'like raindrops' on sheet music. MIDI step sequences would have a hard time representing the "emotion" some composers are looking for.


Satie had some good instructions:

"In the morning on an empty stomach"

"Hypocritically"

"With a lot of difficulty"

"With conviction and a rigorous sadness"

"With a healthy superiority"

"Don't eat too much"

"Shake like a leaf"

"Do not cough"

"Go away"

"Like a nightingale with a toothache"


Should apply that to lines of code in code review.

   x.ugly_hack=blah <- “With conviction and a rigorous sadness”


> Midi step sequences would have a hard time representing the "emotion" some composers are looking for.

They all allow different amounts of precision and interpretation of the performance. Staff notation is open to interpretation on the performance level. MIDI is an exact recording (or programming) of a performance.

With staff notation you can mark eighth notes and say "staccato, lag behind the beat", and there's an infinite number of subtly different ways to play it, even within the composer's own interpretation. With MIDI, it would be represented as "Note on A3, 12 pulses after first quarter note, velocity of 87. Note off A3, 54 pulses after first quarter note, velocity of 0. Note on C3 (etc....)". MIDI is great for computers (or for routing signals during live performance) because it's exact.
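The "exact recording" point can be made concrete with a small sketch. This is not any real sequencer's format, just the A3 example above written out as tick-stamped events, assuming a hypothetical 96 pulses per quarter note (PPQ):

```python
# Sketch of how a MIDI sequencer stores a performance: exact tick offsets
# and velocities, not interpretive markings. Assumes 96 PPQ.
PPQ = 96

# (tick, event, note, velocity) -- the A3 example from above, made concrete
events = [
    (PPQ + 12, "note_on",  "A3", 87),  # 12 pulses after the first quarter note
    (PPQ + 54, "note_off", "A3", 0),   # 54 pulses after the first quarter note
]

def ticks_to_seconds(tick, bpm=112):
    """Convert a tick offset to wall-clock time at a given tempo."""
    seconds_per_quarter = 60.0 / bpm
    return tick * seconds_per_quarter / PPQ

# The note's length is fully determined -- nothing is left to interpretation.
duration = ticks_to_seconds(events[1][0]) - ticks_to_seconds(events[0][0])
```

Change the BPM and every event moves deterministically; there is no "lag behind the beat" unless it was encoded in the ticks.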


I don't think that's quite right. Different MIDI sequencers will play it in different ways, based on the decisions of the MIDI sequencer's author.


Not quite sure what you mean by a different way, or a sequencer's author. Do you mean the musician, or like Roland/Korg/ARP?

MIDI is a pretty exact specification. If a sequencer is changing the timing of anything, the underlying MIDI structures are not the same then. You can have different amounts of swing or different PPQ values, but then I wouldn't consider that "the same MIDI".

You can feed it to different sound generators of course, different drum machines, synths, a laptop, what-have-you, (most sequencers have both builtin together) but that's different from the MIDI itself.


That's not how it works. MIDI is a standard.

Given a specific MIDI file, with specific applied quantization, they should absolutely play the same way -- the only exception is the timing resolution they offer and smallish latency issues (which in practice should be 100% transparent).


Because a machine is playing, so as with any computer code, you have to formalise everything and leave nothing implicit.


So much of the notation is not codified, and is part of the artistry of the musician. Even if all those Italian/French words are included (martele, staccato, largo, et al.), and crescendos and ritards are marked, one cannot play the tune from sight with the desired emotion without first hearing it or playing it over and over.


>Midi step sequences would have a hard time representing the "emotion" some composers are looking for.

Most classically trained players do too.

Besides, we have much more control over step sequences than you probably think, e.g.

http://www.steinberg.net/en/company/technologies/vst_express...

Plus, of course, tons of CC, etc.

Compared to the possibilities for expression in step sequencing, something like a piano, which basically just has velocity and sustain/damper, is not even close.


It can be done, you just need to know the variables to tweak and how.


I disagree, at least as far as rhythm and pitch are concerned. For classical music, staff notation is great — but once you get into syncopation, soloing, and the general fluidity that comes with practically all popular music from the past century, it becomes untenable. Seriously, writing down guitar solos in staff notation is just gross. (I've previously written a little about it here: http://beta-blog.archagon.net/2016/02/05/composers-sketchpad...)


Quite an on-point explanation; I'd also contend, though, that modern DAWs like Ableton Live 9 have quite nuanced sequencing and MIDI controls/notation for professionals (and you can go deeper with Max).


"the utterly utterly ridiculous staff notation system"

I think you mean "dense and efficient".


I think you're both right. It's got some seemingly arbitrary quirks. But at the end of the day, it compresses information really well.


I agree, but to me it is extremely hard to decompress. I've tried many times to get familiar with staff notation but I never could get an intuition of how it works.


Sure. I remember when learning to play the piano as a kid that it sometimes took seconds to decipher the next key when learning a new piece.

I've understood that for a large category of adult learners, starting on an instrument for the first time in their life, it's the staff notation that creates an insurmountable barrier. So, yeah, it's definitely not 'easy' for most beginners.


What I wish was explained to me at a younger age is the fact that there's nothing "special" about the black keys on a piano.

Musical keys are "mathematically" interchangeable. The physical keys that correspond to notes not in the key of C major (alternatively, A minor) are colored black for practical reasons. That is, for knowing one's place while playing the piano.

Fortunately I sorted this out in college, and I've enjoyed making music for my own enjoyment since.


>What I wish was explained to me at a younger age is the fact that there's nothing "special" about the black keys on a piano.

I wish we had pianos with slightly thinner keys (to allow for the same reach) but with same-level black and white keys.

It would make piano playing normalized (like, e.g., bass playing is), and make learning scales, transposing, etc. child's play.

It would probably still need some color or small "bump" on C to know where our hands are, but that would be it.

Of course there are also several hexagonal etc keyboards with similar normalized layouts, but they also haven't caught on.


OK, here is yet another musical representation method that you may find interesting (especially if you have a modern bent):

Take a tune, and now arrange some graph paper like the "note pattern view" in a sequencer, with one difference. Instead of arranging the pitches in the normal way, arrange them by the order they come in the tune (so the first note of the tune will be the "lowest" pitch, the second note the second "lowest", and so on).

You might see some interesting things that you didn't hear!

Probably Mozart would be a good target to do this with, and pop music would not, but I could be wrong on that. Possibly this is over your head, but possibly not.

Then if you want to add a ton more formalism, read a little about serialism, and apply this technique to a twelve tone piece. You will be rewarded for your efforts, as a previously difficult to follow piece makes much more sense. Doesn't mean you'll find it pleasing to listen to though.
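The reordering described above is mechanical, so here's a quick sketch of it (the melody fragment is made up, purely illustrative):

```python
# Sketch of the "arrange pitches by order of appearance" idea: each pitch's
# row in the grid is the order in which it first occurs in the tune.
def order_of_appearance(melody):
    rows = {}
    for pitch in melody:
        if pitch not in rows:
            rows[pitch] = len(rows)  # first note -> row 0, next new pitch -> row 1, ...
    return [rows[p] for p in melody]

# A hypothetical tune fragment: repeated pitches reuse their assigned row
melody = ["G4", "Bb4", "G4", "C5", "Bb4", "D5"]
print(order_of_appearance(melody))  # [0, 1, 0, 2, 1, 3]
```

Plotting those row numbers against time gives the "interesting things you didn't hear": repetition structure becomes visible regardless of how far apart the pitches actually are.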


You might also enjoy Hookpad (https://www.hooktheory.com/hookpad/new) and Theorytab (https://www.hooktheory.com/theorytab/view/daft-punk/get-luck...) for examples. Hooktheory taught me a lot.


Never seen that site before, very useful indeed! Whenever I see something like this I realise there's a world of sites out on the 'net that I'm missing out on, and a whole load of new ways to learn and demonstrate information like this.


I believe it's usually called a 'piano-roll'.


I agree. I used to compose small pieces with soundtrackers [1] and it was a lot of fun. However, I've never been able to learn the staff notation system.

[1] https://en.wikipedia.org/wiki/Ultimate_Soundtracker


I barely learned some staff notation at school, but Soundtracker was my tool of choice! Loved it. So many hours.


If you have an iPad lying around, I recently made an app called Composer's Sketchpad that uses this sort of notation to its fullest. You "draw" notes directly onto a logarithmic pitch grid, and instead of having to snap to the usual array of discrete piano-key pitches (ABCDEFG), you can start your notes at any point on the y-axis and arbitrarily pitch-bend them as you draw. I've found this to be the perfect interface for playing around with musical ideas, and particularly useful for writing out expressive solos — something staff notation is just terrible for!

Homepage: http://composerssketchpad.com

Trailer: https://www.youtube.com/watch?v=ypsLgTY8NXs
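The continuous-pitch idea underlying an app like this is just the standard equal-temperament mapping, applied to a real-valued y coordinate instead of discrete keys. A minimal sketch (A4 = 440 Hz reference; the function name is my own, not from the app):

```python
import math  # not strictly needed here, but handy for extending the sketch

# Sketch of a logarithmic pitch grid: any continuous y position (in semitone
# units above/below a reference) maps to a frequency, so notes need not
# snap to piano keys.
A4 = 440.0

def y_to_freq(semitones_above_a4):
    return A4 * 2 ** (semitones_above_a4 / 12.0)

print(y_to_freq(0))    # 440.0 (A4 itself)
print(y_to_freq(12))   # 880.0 (one octave up)
print(y_to_freq(0.5))  # a quarter-tone above A4 -- impossible to snap to a key
```

Drawing a sloped note on such a grid is then just sweeping `semitones_above_a4` continuously, which is what makes arbitrary pitch-bends natural in this representation.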


It's called a piano roll, after the 19th century player piano systems.

https://en.wikipedia.org/wiki/Piano_roll


If you like the idea of making music by "drawing" it like this, and you have access to a Windows machine, I'd recommend playing with FLStudio. With other DAWs I've found that it is more difficult to make music with just a mouse, without real instruments, microphones, and (maybe multiple) MIDI controllers.

I've been a Mac user for almost ten years now, but the lack of FLStudio for OSX makes me question that decision every six months or so.


Actually, FLStudio has a beta of version 12 that can run on OS X (natively -- not a port based on Wine, although it previously had that too). It will be released sometime soon, they say; mid-late 2016 or early 2017 is my guess.

But apart from that, you can do the same things (with less built-in tooling for electronic music, but still enough) with GarageBand and, if you haven't invested in VSTs/AUs, Reason.


Could you describe what about the FLStudio piano roll makes it more suitable for making scores with a mouse than other piano rolls?


10+ year FL vet here.

Most other DAWs have caught up with their piano roll implementations, but I still like FL's the most. I think what makes it unique is its toolset -- as in, the toolbar up top -- as its modal editing workflow lets you add, delete, and then transform notes incredibly quickly. Most of these modes have a shortcut to get to them, and it's easy to flip between them. It even has a chord option, which functions much in the same way as that hook tool. Other DAWs take a modal approach, but in many cases they complicate their implementation.

It's also worth noting that FL is pattern-based, whereas most DAWs are not. ("Pattern-based" means you generally write musical phrases/patterns and repeat them throughout the track; this is helpful for electronic dance music.) A repeating bassline pattern in FL is more akin to a series of symlinks back to the original file, whereas in Cubase and Logic each of these repeats would be a copy (unless you specify otherwise, but I found the implementation convoluted and troublesome).

Most of the other DAWs I've worked with are much more fiddly and require messing with technical details and audio routing to get a good result. FL also requires this, but again the implementation is seamless, has sensible defaults, and stays out of your way until you need to mess with it. (In the middle of a good flow I have very little patience for routing.) I do not find this to be the case with Live, Cubase, or especially Logic... I put in an effort to learn each of these at some point and ultimately left to return to FL.

I've noticed DAWs tend to optimize around performances (Live), MIDI orchestration (Cubase, Logic), recording (ProTools, Reaper) or 'in-the-box' production (FL). All of the above have tools for composing by mouse but FL's workflow is optimized for this specific use-case.


Reason is optimized for ITB too.


Ah yes, Reason! It's been a while :)


This Note Pattern View is standard in all major DAWs :)


Tablature is another common notation system:

https://en.wikipedia.org/wiki/Tablature


For those that don't want to learn how to read proper notation, usually.


Perhaps, but Guitar Tab is one of the great equalizers of the internet - proper notation has typically cost money to access a copy, whereas Tab could spread in .txt format, small sizes, etc. Sure, they could be off or need the song to play along with, but that's the trade-off. Lastly, though it's a personal opinion, Guitar and "proper notation" are really only needed in the same sentence when playing Classical/Orchestra, Real Jazz, or as a Pro Studio Session Musician[1] - as in, very small, elite group of players. Most will get by just growing along with the instrument over the years, for which Tab helps a lot.

[1] Great studio guitarist Glen Campbell supposedly couldn't read music at all, just went along with what he heard and made it work; Tommy Tedesco by comparison could read sheet music upside down.


Ya, plenty of highly successful musicians and "composers" can't/couldn't read music, Paul McCartney being one of the most famous.

Joni Mitchell apparently didn't even use regular tab notation. She'd detune her guitar and annotate the relative pitch shift per string, e.g. D-2 G+1, etc or similar.


But luckily Paul McCartney had George Martin, who could do it very well :)


I agree, I just feel like it ends up as a crutch for a lot of musicians. For example, it isn't very specific in terms of rhythm so you get a lot of beginners who see that part of music as loose or unimportant.

But of course, up to a point, lowering barriers is always a good thing for beginners.


Depending on the style of tablature used, you can convey rhythm and more. I'd say it can actually get more specific than staff notation when you allow for guitar-specific notation, like all the different types of legato possible, harmonics, and specifying which string to play a note on (it makes a subtle difference).

Here's a screenshot I took of a random song in TuxGuitar (a GNU licensed tablature editor/viewer). You can see the rhythm fairly clearly, as well as the difference between pull-offs and slides.

http://i.imgur.com/bg9MitN.png


I disagree; guitar has multiple fingerings for each note or chord and it's not necessarily clear which one will be the simplest to use when given regular staff notation.

There's a huge jump in the complexity of how you need to reason about translating the music to the instrument with staff notation. Tabs solve this by explicitly telling the musician what fingering to use.

Guitar players should still learn to read music because it's useful, but anyone who wants a guitarist to play their music (especially by sight) should use a notation similar to tabs, but with rhythm indicated as well. Good software to convert between the two could be generally useful, but I imagine there is a bit of nuance to what fingerings sound the best.


Properly written tablature uses spacing to indicate rhythm.


Early lute music was mostly written in tablature FWIW. A lot of the renaissance repertoire is still published that way. It uses letters rather than numbers to represent the frets, which takes getting used to, and puts separate timing marks above the staff.


In other words, you haven't yet spent the time learning to read music, so you'd rather trash the system that evolved over several centuries and pretend that it's ridiculous.

Nothing has been "denied to you". You just haven't spent the time it takes to learn. Have foreign languages been "denied to you", too?

What, precisely, is ridiculous about music notation? It represents time, on the x axis, and pitch, on the y axis (on a basic level). Seems reasonably logical, to me.


> The key is F major.

Where is this coming from? It's pretty clearly G minor to me. The bass is playing G. The main phrase starts and ends on a G.

> This 1.334 is a ratio of adding 5 semitones in hertz. This should sound like this.

This harmonic overtone kills it for me. I know there are some overtones in the original, but not like this. 5 semitones is an interval of a perfect fourth. I think maybe a perfect fifth (7 semitones) would work, but this overtone pretty much destroys it for me.
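The ratios being discussed come straight from equal temperament, where n semitones correspond to a frequency ratio of 2^(n/12). A quick check of both intervals:

```python
# Equal-temperament interval ratios: n semitones = a frequency ratio of
# 2**(n/12). The post's 1.334 is the perfect fourth (5 semitones); the
# comment suggests a perfect fifth (7 semitones) instead.
def semitone_ratio(n):
    return 2 ** (n / 12.0)

print(round(semitone_ratio(5), 3))  # 1.335 -- perfect fourth, the post's ~1.334
print(round(semitone_ratio(7), 3))  # 1.498 -- perfect fifth, close to the just 3/2
```

So multiplying a fundamental's frequency by ~1.498 instead of ~1.334 would move the added overtone from a fourth to a fifth, as suggested.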


Hi, post author here. Thanks for pointing that out!

I know it's wrong and am working on these improvements:

https://www.reddit.com/r/Clojure/comments/4dl454/recreating_...


Cool. Those changes sound pretty dead-on.


Applied the corrections, check the sound now :)


It sounds like a much better approximation than before. Thanks.


Might be relevant to the subject: Sonic Pi Daft Punk sample - https://www.youtube.com/watch?v=cydH_JAgSfg


Sonic Pi is another one of the new live music programming languages. Sonic Pi has a nicer GUI presently.

Right now these kinds of musical tools for live music coding are in their early stages. I really look forward to the near future of these new tools. I know we have had programming for music for decades, but this is a different genre, at least to me.

Here is a video of Overtone live coding from 3 years ago: https://www.youtube.com/watch?v=imoWGsipe4k&nohtml5=False


You might also be interested in how this is made under the hood: https://mxs.sbrk.org/aerodynamic-en.html


Off topic, but Sam Aaron, an Overtone contributor and the creator of Sonic Pi, is currently looking for advice re the funding of his work on Sonic Pi:

https://twitter.com/samaaron


Yes, he has announced on Twitter that his project is running out of funding (it is a research grant).

He has set up a Patreon page [0] for those interested in supporting his live coding and the development of Sonic Pi and Overtone.

[0] https://www.patreon.com/samaaron


Not knowing about Leipzig[0], I've been trying to come up with my own representation of music in JS to work with the Web Audio API. My biggest puzzle so far is how to a) keep a rhythm and b) allow musical "blocks", which are indeterminate in length, to fit into this rhythm. I was thinking that these blocks would be pre-defined functions which represent different aspects of music (bassline, riff, etc.) which could be repeated and inserted at arbitrary times in the composition.

If anyone is interested in working on this, drop me a pm/comment (it's just for fun).

[0]https://github.com/ctford/leipzig
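One common answer to the "fit indeterminate blocks into a rhythm" puzzle is to quantize each block's start to the next bar boundary. A language-agnostic sketch of that idea (all names here are hypothetical, not from Leipzig or the Web Audio API; a JS version would schedule against `AudioContext.currentTime` instead of a beat counter):

```python
import math

# Sketch: lay out variable-length musical "blocks" back to back, but snap
# each block's start to the next bar boundary so the rhythm is preserved.
BEATS_PER_BAR = 4

def next_bar(beat):
    """Round a beat position up to the next bar boundary."""
    return math.ceil(beat / BEATS_PER_BAR) * BEATS_PER_BAR

def schedule(blocks):
    """Given (name, length_in_beats) pairs, return (name, start_beat) pairs."""
    timeline, cursor = [], 0
    for name, length in blocks:
        start = next_bar(cursor)
        timeline.append((name, start))
        cursor = start + length
    return timeline

# A 7-beat bassline ends mid-bar; the riff still lands on the next downbeat.
print(schedule([("bassline", 7), ("riff", 4)]))  # [('bassline', 0), ('riff', 8)]
```

The same quantization trick works for inserting blocks at arbitrary points: whatever time the user asks for, snap it to the enclosing grid before scheduling.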


I was kinda bummed that they weren't referring to the iconic 303 bassline from da funk.


Is there anything like this for JS + WebAudio? Have a mate who wants to be able to embed something like this in a website (endless generative music).


I hacked on this a while ago, and the best libraries I found were one called Wad for playing notes and applying effects, and another called Teoria for building chords and intervals and so on.

I was hindered by not having any idea what I was doing but here's what I came up with:

http://www.aphall.com/random/audio-test/ (clicks weirdly in firefox, no idea why.)

Libraries:

Wad: https://github.com/rserota/wad

Teoria: https://github.com/saebekassebil/teoria


This is clojurescript but maybe it'll be of some use to your mate https://github.com/ctford/klangmeister


This is great, I hope there are more Overtone recipes in the future!


Here's a recreation of One More Time that I saw on HN a few years back:

https://www.youtube.com/watch?v=9sdtOpE_3aQ

And the follow up:

https://www.youtube.com/watch?v=0kZy5qNrlks


I had no idea they open-sourced their shit. That's amazing. They're one of my favourite groups.


How did I not know about Overtone. Clojure + Music, two things I love. Thanks for this.


Haha. This is awesome. I wonder what the final output would be. A full song?


Think of it more as emulating a recording studio (instruments, amps, effects the whole thing) modeled as code.

This works beautifully in the composition-first paradigm of Lisps/Clojure. The APIs are a bit hard to wrap your head around if you intend to use Overtone for the entire production process, but I've had lots of fun using it as the mother of all synthesis engines.


I was a bit disappointed by the output. Is this the quality of music that is produced regularly by this software? The only thing that sounded good to my ears was the bassline piece, by itself. It sounded true to the original. The rest sounded like a Casio keyboard from the 80s. Not flattering to the depth of Da Funk. I was expecting something closer to the original, not something with the same tones, measures, and BPM but sounding totally flat overall. Granted, this is my only exposure to music produced by this software. I wonder if there are other examples that don't sound like a computer making music? Fruity Loops makes better music than this.


It's incredibly hard to make music with commercial production values with nothing but code.

Commercial studios have racks full of boxes that start from $2000 each that fatten up and sweeten the sound, and a commercial track will patch every element through multiple boxes during mixing. The mixing desk itself will add its own sound.

Most of the boxes have analog circuitry inside them. Modelling it is not easy, and good models can use so many cycles it's no longer possible to listen to their output in real time (at least, not without using external hardware acceleration).

None of the mainstream music code environments - Overtone, SuperCollider, Csound, Max/MSP and so on - pay much attention to this. They mostly come with trivially simple DSP models which don't sound all that great.

Surprisingly, they also make it hard to use more complicated models even if you know what you're doing. Mostly you can't just add the model in user land - you have to add it as an external, and rebuild.

Commercial software from Korg, Yamaha, NI, UA, Access, and most pro and semi-pro VST makers puts more effort into sounding good, but the high quality models are somewhat proprietary and the code isn't often open sourced - although sometimes the models appear in papers from (e.g.) the ICMC.


Significantly better sounding synth sound is nowhere near that far out of reach. The sound he's trying to recreate is not something that comes out of a commercial stack of "$2000 boxes".

Da Funk's lead can be reasonably approximated with cheaply available modeling software, or moderately priced analog synths. Here's someone getting decently close with Massive, which is around 200 dollars.

https://www.youtube.com/watch?v=AB1_JxbT_1s

The article's sound is just still too simple. I've not used Overtone, but I'm sure that with some work it could achieve quite a bit more.


Synthesisers are really complicated. This example takes about 4 steps from a pure saw wave, it's a demonstration of principles.

This is the exact same sort of sound you'd get if you picked up any DAW or hardware studio and tried to recreate the song in a few moments.

It's almost certainly possible to create something true to the original, but there's no way it would be suitable for a "recipe" for showing people how Overtone/SuperCollider works.
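The "few steps from a pure saw wave" principle can be sketched outside Overtone too. Here is a deliberately minimal Python version of the idea -- a naive saw oscillator pushed through a one-pole low-pass whose cutoff is swept by an envelope. Real patches use SuperCollider UGens and proper resonant filters; this only demonstrates the signal chain:

```python
import math

# Minimal sketch of the recipe's principle: saw oscillator -> low-pass
# filter with a swept cutoff. Not production DSP (the saw aliases, the
# filter has no resonance); just the chain, in plain Python.
SR = 8000  # a low sample rate keeps the sketch cheap

def saw(freq, n):
    """Naive (aliasing) sawtooth in [-1, 1)."""
    return [2.0 * ((i * freq / SR) % 1.0) - 1.0 for i in range(n)]

def swept_lowpass(signal, start_cut, end_cut):
    """One-pole low-pass whose cutoff sweeps linearly across the signal."""
    out, y = [], 0.0
    n = len(signal)
    for i, x in enumerate(signal):
        cutoff = start_cut + (end_cut - start_cut) * i / n
        a = 1.0 - math.exp(-2.0 * math.pi * cutoff / SR)  # smoothing coefficient
        y += a * (x - y)  # y moves toward x; higher cutoff = faster tracking
        out.append(y)
    return out

# Half a second of A2, with the filter opening from dull to bright.
note = swept_lowpass(saw(110.0, SR // 2), 200.0, 3000.0)
```

Adding resonance, overdrive, and an envelope per note is exactly the "about 4 steps" the comment refers to; each step is one more transformation in this chain.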


I just compared the original and the copy through my laptop's shitty default equalizer. The copy has unnecessary bass around the 1 kHz range. This is probably the largest contributor to what makes it sound hollow.

Also, just comparing them by ear: over each note, the copy builds up and then back down in volume in a <> shape. The original has a sharper attack and then fades gradually, which looks more like a > shape.

I think the copy also has an unnecessary reverb effect, but I'm not sure.

This is all malleable of course. None of this is the fault of the software. It's just a difficulty inherent to reverse engineering.
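The two envelope shapes described above can be sketched as simple functions of normalized note time (values are illustrative, not measured from either recording):

```python
# Sketch of the two amplitude-envelope shapes described: a symmetric "<>"
# swell versus a sharp-attack ">" pluck, over one note (t in [0, 1]).
def swell(t):
    """'<>': rises linearly to the midpoint, then falls symmetrically."""
    return 1.0 - abs(2.0 * t - 1.0)

def pluck(t, attack=0.05):
    """'>': fast linear attack, then a long linear fade."""
    return t / attack if t < attack else (1.0 - t) / (1.0 - attack)

ts = [i / 10 for i in range(11)]
print([round(swell(t), 2) for t in ts])  # peaks in the middle of the note
print([round(pluck(t), 2) for t in ts])  # peaks right after the note starts
```

Swapping the first shape for the second (e.g. shortening the attack segment of the synth's amplitude envelope) is the kind of malleable fix being suggested.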


It is pretty cool that you can program a song in code, but it looks awfully verbose.

Do people really compose this way?


no.

well, live-coders do by definition .. but for everyone else, it's just an ineffective way to work, even if you are a good programmer.

people really do use graphical programming tools like max/msp.


yes, they do.


Is there any synthesizer that would sound like a real violin?



That 303 just isn't a 303.


You're right. It's funny listening to this. They're playing the same song, but it's not even close to as good as the Daft version.

The daft version is loud and has a bite. This version just sounds lethargic and boring.


We detached this subthread from https://news.ycombinator.com/item?id=11447862 and marked it off-topic.


You're not wrong, but it was needless to say. It's like going into a freshman physics class and pointing out that a frictionless vacuum is hardly a valid simulation of reality. You're missing the point.


I think the author is more concerned with hacking and exploring than perfectly duplicating the existing track.


the daft version is clearly mixed (eq'd, compressed, etc.) and probably done so in a nice "easy" to use DAW. i doubt he could get that sound (without working long hours) when hardcoding everything.


The Daft Punk version is pretty much the raw direct sound you get from the synthesizer. Maybe a touch of reverb, a hint of EQ but nothing that changes the sound.


There's also analog filtering from one instrument to another, which has proven very hard to recreate.


Making ones dance music sound loud and biting rather than lethargic and boring requires deep wizardry which lisp alone cannot provide.


Whooosh!

The idea is not to perfectly replicate the existing production, but to showcase the use and possibilities of the systems used.


Then they have failed.


I don't think anybody in here agrees. It's a very good introduction into the platforms.


I disagree. The music made by this software has no depth or emotion. It's like a vocoder from the 80s. I could never get one to say my name right, no matter how I spelled it. And Da Funk just doesn't sound right without depth. It sounds flat. It sounds like it was made by a computer, not a human. Not flattering to the original piece or the software itself. Total failure.


I don't feel that way; it's close enough to my ears. And I'd say my ears are very well educated in that department.





