The key difference is: do you go there to hook up, or do you go there because you like the place, which, as a second-order effect, gives you great common ground for meeting someone?
Putting stickers onto phones, kicking out violators, and fostering a culture around that works miracles on that kind of behavior. Once a majority enjoys the moment and is very openly pissed off by any kind of filming, we go back to being ourselves.
Sometimes, yes. Depends on the venue. In some places, taking a sticker off is a zero-discussion reason to get thrown out (filming or not), and overhead phone filming is hard to hide. The sticker is a clear sign that it's not just words on some code-of-conduct poster.
I've never seen a fight at a club. Nor at one big (80k) festival in Germany's northeastern lake region. In my experience, fighting and aggression are caused mainly by, in decreasing order of importance: alcohol, cocaine, and crammed, overfilled spaces. Then a very long gap before bullies and the like appear on the list.
You're wrong both about raves not being great sober and in your latent prejudice against "drugs". Source: I've been to many raves sober, and I work in psycare at them (a psychiatric ambulance of sorts).
For a festival that's proper shit without drugs, try Oktoberfest here in Munich.
How so? I saw a few vids of EDC and quite frankly it looked worse than Tomorrowland (like, it reminds me of mobile-game patterns), which is quite the feat. Burning Man has a reputation, and it's definitely not a sharing economy to get there. I wouldn't fly halfway around the globe for it, though I think it would be quite interesting.
It’s more of an arts and community festival than a music event. There’s a smattering of respected DJs/producers each year and some good sets, but otherwise there’s a bunch of art cars roaming around thumping generic party music. Absolutely nothing against Donna Summer, but I’m not shitting you when I say I heard her around half a dozen times on the first big party night, for instance. There is no curation there like at a music festival.
That said, absolutely worth going to at least once, for everything else.
There are a couple of smaller burns here I have connections to, and I like the participatory idea, as opposed to pure consumption like at a holiday park. If Burning Man is (or was) like that, I would find it pretty appealing. Not sure about August in the desert some 10,000 km away though :-D
It's how all cool underground things go. Small groups build something new and awesome, new people come and join, the culture shifts, and at some point money is made and the culture drifts off into a business. Very few places resist this; very few stay cool despite it. The rave scene around Berlin is quite alright, but we learned to protect our safe spaces, and many will not mention details online anymore.
It’s fear-mongering, a broad and unfounded generalization. It’s an edgelord comment that serves no useful discussion. It’s not what HN is for, and downvoting is the way to show that.
Ahhh, I interpreted this as an oblique reference to accidental overdoses from impure/laced drugs which is a huge problem in the US and probably a real deterrent.
I realize now from the other responses this was maybe a reference to exploding pagers, which is dependent on (I suppose) the view that raves in Europe are heavily attended by people of middle eastern descent?
That’s not really the “known-as-ravers” crowd here in the US so that reference may have just flown way over my head and I injected the top-of-mind US issue which is drug contamination.
On second thought I think it was meant to be a reference to October 7, 2023. I've seen similar stickers at some, back in summer, with messages like "What if terrorists blew up YOUR music festival? #supportisrael"
Took a while to work it out though. Why can't people just say what they mean?
I find it kind of despicable that someone would spend the effort to highlight a terrorist attack against a rave, but ignore the background of terrorist attacks against important stuff like schools, hospitals and residences.
Not sure why the article has to mention monads? I mean, there's the (mathematically correct) joke that »monads are monoids in the category of endofunctors«, but understanding that requires getting elbow-deep into category theory, and if you're not one of the maybe 10 people in the world gifted that way, it has zero practical use when programming.
> A monad is a monoid over a set of curried functions.
Is that so? Sounds very wrong to me. If we want to go the monad joke way, monads have to have an operation (a -> m b) that composes, but those are just normal functions, and there’s nothing curried about it. It’s a statement that one could bend enough so it’s kind of right, but what it really does is raise eyebrows.
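For concreteness, this is what "composes" means here: the a -> m b arrows (Kleisli arrows) compose via (<=<) from Control.Monad, with return as the identity. A tiny sketch; safeDiv and safeSqrt are made-up helpers for illustration:

    import Control.Monad ((<=<))

    -- two ordinary Kleisli arrows, i.e. plain functions a -> m b
    -- (nothing curried about them beyond what any Haskell function already is)
    safeDiv :: Double -> Double -> Maybe Double
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x / y)

    safeSqrt :: Double -> Maybe Double
    safeSqrt x
      | x < 0     = Nothing
      | otherwise = Just (sqrt x)

    -- they compose associatively via (<=<); return is the neutral element
    halveThenSqrt :: Double -> Maybe Double
    halveThenSqrt = safeSqrt <=< (`safeDiv` 2)   -- halveThenSqrt 8 == Just 2.0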
> Monads force sequential processing because you set up a pipeline and the earlier stages of the pipeline naturally must run first.
No, a counterexample is the (esoteric) reverse state monad, where values flow the normal way but state comes from the results of future computations.
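If you want to see it concretely, here is a minimal sketch of that reverse state monad (the standard construction; RState, get' and put' are just the names I'm using here). It relies on lazy let bindings and only terminates for sufficiently lazy programs:

    import Control.Monad (ap)

    newtype RState s a = RState { runRState :: s -> (a, s) }

    instance Functor (RState s) where
      fmap g (RState m) = RState $ \s -> let (a, s') = m s in (g a, s')

    instance Applicative (RState s) where
      pure x = RState $ \s -> (x, s)
      (<*>)  = ap

    instance Monad (RState s) where
      -- values flow forward (a is produced, then handed to f),
      -- but the state each step receives comes from the *later* step
      RState m >>= f = RState $ \s ->
        let (a, s'') = m s'
            (b, s')  = runRState (f a) s
        in (b, s'')

    get' :: RState s s
    get' = RState $ \s -> (s, s)

    put' :: s -> RState s ()
    put' x = RState $ \_ -> ((), x)

    -- the get' sees the state written by the put' that comes *after* it:
    demo :: (Int, Int)
    demo = runRState (do x <- get'; put' 10; return x) 0   -- (10, 10)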
> Is that so? Sounds very wrong to me. If we want to go the monad joke way, monads have to have an operation (a -> m b) that composes, but those are just normal functions, and there’s nothing curried about it. It’s a statement that one could bend enough so it’s kind of right, but what it really does is raise eyebrows.
I can see where they’re coming from, but they certainly haven’t set the stage for it to be a something you could deduce without already knowing what they’re referencing.
So to me, it seems they’re referencing the Free Monad, recursion schemes and a little of HomFunctor/Yoneda Lemma.
The free monad gives a coproduct, where a value is either a recursive call wrapped in the functor or a plain value (node vs. leaf). To get from a set to a free monad, you need to define a functor over the set, and given most things are representable, this is trivial.
Given this free monad, you can form an algebra over it and collapse it with a catamorphism, where the binary function would indeed be composition.
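Roughly the construction I mean, as a sketch: the standard Free type, with foldFree' as my own name for the catamorphism here:

    import Control.Monad (ap)

    -- a value is either a leaf (Pure) or a functor layer wrapping recursive calls
    data Free f a = Pure a | Free (f (Free f a))

    instance Functor f => Functor (Free f) where
      fmap g (Pure a)  = Pure (g a)
      fmap g (Free fa) = Free (fmap (fmap g) fa)

    instance Functor f => Applicative (Free f) where
      pure  = Pure
      (<*>) = ap

    instance Functor f => Monad (Free f) where
      Pure a  >>= k = k a
      Free fa >>= k = Free (fmap (>>= k) fa)

    -- collapse the structure given an algebra (f b -> b): a catamorphism over Free
    foldFree' :: Functor f => (a -> b) -> (f b -> b) -> Free f a -> b
    foldFree' leaf _   (Pure a)  = leaf a
    foldFree' leaf alg (Free fa) = alg (fmap (foldFree' leaf alg) fa)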
I found it useful to have them mentioned, since people new to this topic (or, e.g., Haskell) tend to bump into monoids when they first try to understand monads.
A 'handwavy' association that somewhat makes sense and gives you some sort of perspective when moving on to monads is better than simply omitting the link to monads completely, just because one can "kind of maybe" find holes in the simplified explanation provided.
(fair enough, the words "this is somewhat oversimplified, but" could have been added, but personally I didn't care)
It's kind of natural that you need to progress from magma/semigroup/monoid (algebra) to functor/applicative/monad (category theory).
Would it help if you defined a monoid as a combination of 3 things?
1) a data type A
2) an associative operation on A
3) an identity (or empty element)
Then you can correctly say that the string data type admits an associative operation (concatenation of two strings) and has an empty element (the empty string).
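In Haskell terms, for what it's worth, that is literally the stock String instance:

    -- strings under concatenation, with "" as the identity element
    combined :: String
    combined = "foo" <> "bar" <> mempty   -- "foobar"

    -- the laws: (x <> y) <> z == x <> (y <> z),  mempty <> x == x,  x <> mempty == x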
I think too many people talking about functional programming really overblow how much you need to understand about mathematics, they seriously do.
Think about my definition and you can quickly understand that there are many monoid instances for numbers (with the associative operation being addition to get a monoid or multiplication to get another monoid instance).
There's an infinite number of monoid instances for various data types.
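Concretely, Haskell exposes those two number monoids as the Sum and Product newtype wrappers, which is its standard way of hanging more than one instance off the same underlying type:

    import Data.Monoid (Sum(..), Product(..))

    total :: Int
    total = getSum (foldMap Sum [1, 2, 3, 4])          -- 10: monoid under (+), identity 0

    factor :: Int
    factor = getProduct (foldMap Product [1, 2, 3, 4]) -- 24: monoid under (*), identity 1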
Haskell has several shortcomings from what I remember, and Haskell developers incorrectly assume that you can only have one semigroup (or equality, monoid, etc.) instance for your data type because they believe they are the closest to math, but in reality their language has its own limitations.
> Haskell developers incorrectly assume that you can only have one semigroup (or equality, monoid, etc.) instance for your data type
They don't assume that. The devs bent the compiler backwards several times trying to support more than one instance, but they still couldn't design an implementation that is actually good to use.
If you know of any language where this works well, it would be nice to know. AFAIK, representing that kind of thing is an open problem, nobody has an answer.
Scala does it well. Implicits make it easy to pass along the instance you mean. You can put a default on the companion object if you want, but you can override where needed. Implicits on the companion object have lower priority than those in current scope.
> tend to bump into monoids when they first try to understand monads
That's unfortunate. They should be bumping onto monoids much earlier, and much more often.
Yeah, IO and do notation put monads in front of people way before they have time to adapt. But monoids are the ones that are extremely valuable, simple, and easy to learn. They also make for a nice step in a progressive adaptation to the "generalized thinking" one needs to understand why Haskell does the things it does.
In the world that I imagine could exist, we'd do away with algebra 2 and pre-calculus in high school, which are a waste of 2 years, and instead do something like algebra -> geometry -> calc1 -> calc2 -> linear algebra -> abstract algebra, with linear algebra being concrete things like adding arrows and solving systems, and abstract algebra introducing basics of monoids, groups, vector spaces, and homomorphisms. It's sort of unfortunate that even the basic ideas of algebraic thinking (i.e. structures and structural transformations) are pretty much not even hinted at to anyone but math majors, and yet we spend years of school on something called "algebra". So even technical people can't see the point of structural modeling.
Why? Serious question, but what's the use of monoids? I encountered the term years ago, when I had an ill-fated ambition to make sense of monads. I've let it go and made peace with the world. But outside that narrow context, I've never even heard the term "monoid". What are people using it for in the real world?
Roughly, something is a monoid exactly when a parallel reduce type of algorithm can be used. The associativity lets you break it into sub-problems, and the unit lets you insert padding where necessary to get same-sized blocks for parallel processors. It's also a useful concept to know for library design. e.g. when there's a "combine" or "reduce" operation on some data type, it should occur to you that your users will probably want a neutral "do-nothing" element and that your operation should give you a monoid. APIs without one are usually annoying to work with and require extra if statements/special casing.
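A sketch of that chunking idea (written sequentially here, but the inner reductions are independent, so they are the part you could hand to separate workers; chunkedReduce and chunksOf are just illustrative names):

    -- associativity lets you split the work into chunks of any size (n >= 1),
    -- reduce each chunk independently, then combine the partial results;
    -- mempty is what you'd pad with if you needed equal-sized blocks
    chunkedReduce :: Monoid m => Int -> [m] -> m
    chunkedReduce n = mconcat . map mconcat . chunksOf n
      where
        chunksOf _ [] = []
        chunksOf k xs = let (h, t) = splitAt k xs in h : chunksOf k t

    -- associativity guarantees this equals plain mconcat, whatever the chunk size:
    --   chunkedReduce 2 ["ab", "c", "", "de", "f"] == "abcdef"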
More generically, named concepts like this give you a way to compress knowledge, which makes it easier to learn new things in the future. You get comfortable with the idea of a monoid, and when you meet a new one in the future, you immediately have an intuitive ground to build on to understand how your new thing behaves.
Thanks. Parallelization of T[] => T operations makes a lot of sense. Monoids seem to introduce exactly the necessary constraint to allow some kinds of operation re-ordering needed for various perf optimizations like parallelization. I get it!
Taking that one step further, a monoid homomorphism is a function that transforms one monoid into another while preserving that essential structure (homo-morph: same shape), so that map-then-reduce is the same as reduce-then-map. Being able to swap the two might be a useful optimization if one is more expensive than the other: e.g. `e^(a+b) = e^a*e^b` turns addition into multiplication, and Fourier transforms turn convolution (expensive, complicated) into multiplication (cheap, simple), if you know what that is.
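The cheapest programming example is probably length, which is a monoid homomorphism from (lists, ++, []) to (Int, +, 0):

    -- length (xs ++ ys) == length xs + length ys, and length [] == 0,
    -- so reduce-then-map equals map-then-reduce:
    check :: Bool
    check = length (concat chunks) == sum (map length chunks)   -- True
      where chunks = ["foo", "ba", "r", ""]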
In some other contexts, it's useful to talk about transforming your problem while preserving its essential structure. e.g. in engineering a Fourier transform is a common isomorphism (invertible homomorphism) which lets you transform your problem into an easier one in the frequency domain, solve it, and then pull that solution back into the normal domain. But to understand what's going on with preserving structures, you need to first understand what structures are even present in your problems in the first place, and what it means to preserve them.
This stuff isn't strictly necessary to understand to get real work done, but without it, you get lots of engineers that feel like the techniques they learn in e.g. a differential equations class are essentially random magic tricks with no scaffold for them to organize the ideas.
Another useful purpose of these concepts is to have the vocabulary to ask questions: A semigroup is a monoid without a unit. Given a semigroup, can you somehow add a unit to make a monoid without breaking the existing multiplication? A group is a monoid where the multiplication has inverses/division (so if your unit is called 1, then for any x, there's a "1/x" where x/x = 1). Can you take a monoid and somehow add inverses to make it into a group? etc. In a programming context, these are generic questions about how to make better APIs (e.g. see [0]). It also turns out that groups exactly capture the notion of symmetry, so they're useful for things like geometry, physics, and chemistry. If the symmetries of the laws of physics include shifts, rotations, Lorentz boosts, and adding certain terms to the electromagnetic potential, can I somehow understand those things individually, and then somehow understand the "group of the universe" as being made out of those pieces (plus some others) put together? Can we catalogue all of the possible symmetries of molecules (which can tell you something about the states they can be in and corresponding energy levels), ideally in terms of some comprehensible set of building blocks? etc.
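The "can you add a unit?" question even has a stock answer in Haskell's standard library: wrapping a semigroup in Maybe adds one, with Nothing as the new identity.

    import Data.Semigroup (Min(..))

    -- Min a is only a semigroup in general (no neutral element unless you reach
    -- for Bounded), but Maybe (Min a) is a monoid, with Nothing as the added unit
    smallest :: Maybe (Min Int)
    smallest = foldMap (Just . Min) [3, 1, 4]   -- Just (Min {getMin = 1})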
It's a type of nice structure. Lists with concat, strings with append, etc. "Friendly chunkability", if you like. For instance, map reduce is a monoid homomorphism - when I see 'monoid homomorphism' in the wild, I think 'parallelizable' etc. It's a handy concept.
As the article explains, fold functions are a simpler way to think about monoids. So in recursive pure functional programming, many important iterative processes (pipelines, accumulators, machine states...) can be expressed as an application of foldl or foldr.
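For example, a running machine state as a left fold over events (Event and step are made-up names for illustration):

    data Event = Deposit Int | Withdraw Int

    -- one step of the machine: current state plus an event gives the next state
    step :: Int -> Event -> Int
    step balance (Deposit n)  = balance + n
    step balance (Withdraw n) = balance - n

    -- the whole iterative process is a single application of foldl
    finalBalance :: [Event] -> Int
    finalBalance = foldl step 0

    -- finalBalance [Deposit 100, Withdraw 30, Deposit 5] == 75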
The value of monoids is being a single concept that generalizes the ideas behind several different things like arrays/lists/strings, integral numbers, or parallelizable computation.
Pretty much every time you see people start talking about monoids/monads/endofunctors/etc... they are trying to get their compiler to automatically vectorize their program.
A combinator is just a term without free variables. All the rules can be expressed as combinators; it's just that they don't have common names like S, K, I, C, B (…).
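For reference, the named ones are nothing more than closed terms; in Haskell they would look like this:

    -- classic named combinators, i.e. terms with no free variables
    i :: a -> a
    i x = x

    k :: a -> b -> a
    k x _ = x

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    b :: (b -> c) -> (a -> b) -> a -> c
    b f g x = f (g x)   -- composition, i.e. (.)

    c :: (a -> b -> c) -> b -> a -> c
    c f x y = f y x     -- argument swap, i.e. flip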