Why I'm donating $150/month (10% of my income) to the musl Libc project (2019) (andrewkelley.me)
451 points by Tomte on Oct 7, 2021 | hide | past | favorite | 160 comments



2021 update:

* The Zig Software Foundation now exists

* My total income, including ZSF salary, GitHub Sponsors, and Patreon, is now about 4000 USD per month. Legally, my ZSF salary is 160K per year, but I have been consistently donating it back to the org since frankly we just don't have that much revenue at this point in time, and I've been prioritizing getting that money to other Zig contributors.

* The $150/month to Rich Felker now comes from ZSF instead of my personal account.

* Zig 0.5.0, 0.6.0, 0.7.0, 0.8.0 have all been released. 0.9.0 is coming soon.

* I no longer really have an opinion about V; it's been two years since I paid any attention to it.


Sorry, I'm not American so this may be a dumb question, but why set your salary to $160k instead of what it actually is? Does it have to do with US tax law or non-profit org law?


This is just me taking a guess, but one benefit to him setting a real salary and donating the money back is that for the purpose of credit checks and consumer loans (e.g. a home mortgage for a primary residence), he can qualify for appropriate amounts. It also appropriately models what's happening: he's paying himself a "market salary" (in quotes because his market value is probably much higher than that, but I'm speaking relatively to a "stipend salary" here) and he is working full-time on the project, but donating back the portion he doesn't need to spend on disposable income (due to his own personal frugality). Actually quite an admirable way to do it -- hopefully he has figured out some way to not suffer extra taxes from that setup.


He is definitely paying extra taxes. Donations don't reduce Social Security and Medicare tax liability, and depending on the arrangement, either he or the foundation is paying for the "other half" of the tax.


But he will also qualify for a higher SS payout since his income is higher, if my understanding is correct.


SS contributions cap out at around $143k of annual income (the cap is indexed to wage growth). Beyond that, extra income has no impact on your eventual SS payout. Even below that threshold, the returns on Social Security tax associated with higher income are pretty small once you meet the 40-credit threshold for any benefit[1].

[1]: https://rootofgood.com/early-retirement-social-security/


~$400k in lifetime earnings fills out the 90% AIME bracket; ~$2M in lifetime earnings fills out the 32% bracket. I think that, so long as you're still in that 32% bracket, you're more likely than not getting (small) positive returns from paying SS taxes.

Though I agree, it'd be silly to continue working or to declare higher income purely for that.


To clarify, my comment was about annual income, not lifetime. Income in any given year beyond the annual cap does not affect your SS lifetime earnings. (I think you probably know that, just adding it for other readers.)


It's semi-complicated and depends on what happens later in his career (SS uses your 35 highest-earning years to determine your payout), but in the general case, yes, it will increase his SS payout.


It also depends on how much future politicians neuter the benefits, via some combination of devaluing the currency and/or making Social Security more means-tested. I would bet dropping birth rates will certainly mean some people will need to take a haircut in the coming decades.


Never really thought about it. I’d love for someone with knowledge to weigh in on this: Does it ever make sense to pay more SS tax, based on a present-value analysis?

I guess it’s really three questions:

1) For each dollar paid in SS taxes, do I expect to get more than a dollar from SS in the future?

2) Same question as 1, but in inflation-adjusted future dollars.

3) If the expected return is positive, does it beat reasonable market returns?

Guessing the answer depends on age, income levels, etc. I just don’t know how to do the math myself.


(not a financial professional, not advice)

You need at least 40 "credits" over your working life to be eligible for Social Security benefits. Each calendar year you can earn up to 4 "credits"; in 2021, one credit corresponds to $1,470 in taxed earnings, so four credits takes about $5,880. It definitely makes sense to earn the credits every year, i.e. roughly $6,000 of earnings in 2021. SSDI looks at the last 10 years of earnings if you're older than 30, which still incentivizes hitting that minimum target.

As for the actual income in retirement, that is a complex subject. For example, the optimal money move is to continue working to full retirement age, but you may want to take the early retirement option (even if the payout is smaller) to enjoy the extra healthy years.


> the optimal money move is to continue working to full retirement age

Only your 35 highest-earning years of SS-taxable income matter, so it's conceivable (and probably not too rare for many HN readers) to have had 35 max-income years before FRA.


After the first "bend", the ROI for SS tax is super-duper low. If you're working for 35 or more years, that first bend point corresponds to roughly $12k/year of income in 2021 dollars. Thus, if your income will put you over the poverty line, paying more SS tax beyond that won't help very much. (I'd put a number on the ROI, but it depends on how old you are.)

It's not very hard to make a spreadsheet to play with numbers if you want. I'm considering retiring soon, decades before eligibility, and I've enjoyed playing with the numbers, as I have the opportunity to contribute more or less in the coming years fairly easily.
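For anyone who wants to play with those numbers, here's a minimal Python sketch of the benefit formula under a couple of stated assumptions: it hard-codes the published 2021 bend points ($996 and $6,002 of monthly AIME) and the standard 35-year averaging, and it ignores wage indexing, COLAs, and claiming age, so treat it as a toy model rather than a real calculator.

    # Rough sketch of the 2021 Social Security benefit formula (PIA from AIME).
    # Assumes the published 2021 bend points ($996 and $6,002/month) and the
    # standard 35-year average; ignores wage indexing, COLAs, and claiming age.

    def aime(lifetime_indexed_earnings: float) -> float:
        """Average Indexed Monthly Earnings: top-35-years total divided by 420 months."""
        return lifetime_indexed_earnings / (35 * 12)

    def pia(monthly: float) -> float:
        """Primary Insurance Amount: 90% of the first $996, 32% up to $6,002, 15% above."""
        first, second = 996.0, 6002.0
        benefit = 0.90 * min(monthly, first)
        if monthly > first:
            benefit += 0.32 * (min(monthly, second) - first)
        if monthly > second:
            benefit += 0.15 * (monthly - second)
        return benefit

    # ~$418k of lifetime earnings fills the 90% bracket; ~$2.5M fills the 32% bracket.
    for total in (418_000, 2_500_000, 5_000_000):
        print(f"${total:>9,} lifetime -> AIME ${aime(total):,.0f}/mo -> PIA ${pia(aime(total)):,.0f}/mo")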


FWIW, given my history and current payout schedules, assuming that I die at 85 (when the actuarial tables say I will), counting the employer portion of the SS tax as tax paid, and assuming I retire very soon, the SS tax paid by me has a 1.4% real rate of return.

That being said, SS has some benefits that make it more attractive than other investments with that return (in that it's an inflation-indexed annuity backed by the US government), especially when mixed with my other investments.

Still not a fantastic deal for me, and it will only get worse in the likely event I get more SS income over the years, but like all welfare programs it isn't meant to help someone in my current situation. Part of getting into my current situation was benefiting from welfare when I was part of a household that wasn't in such an amazing situation financially.


> Does it ever make sense to pay more SS tax, based on a present-value analysis?

Of course it does. Social Security is social, not individual. If you're a rich man - and a person who makes 160K USD per annum probably is - it makes a lot of sense to contribute more to social security, whether voluntarily or involuntarily.


They specified "based on a present-value analysis". They were interested in SS as a personal money-returning interest for that question.

That doesn't mean they don't think that people should pay taxes for welfare programs like social security or for other governmental functions, it just wasn't what the question was about.

Certainly if you're after the second bend (very realistic for a lot of folks) there is minimal return on contributing.


Yes, that's one of the reasons why, in some countries, if you have the ability to set your own salary, you aren't allowed to take a salary and instead must set yourself up as a "freelancer".


It's also just good optics. "$160K is a reasonable developer salary." Even if in practice it's not how they want to use the money right now, it's setting expectations correctly.


One difference: you have control over when you give yourself a pay raise instead of having to go through the org chart/board. May not be the most efficient strategy with taxes though.


There are two benefits to showing a higher income on paper:

1) Some situations like mortgage applications and child adoption will require some sort of income verification. It's a lot harder when you aren't showing as much top-line income.

2) If the project goes south and the author has to find another job, showing a "market rate" salary is better than the true rate (sadly many people and companies still talk in terms of percentage increases)


How would a future employer know what his salary was?


He could show them W-2s from his current employer.


Has anyone (OK, any SW dev in the US) ever done this? I've been in the industry for almost 30 years and no prospective employer has ever asked me what my salary was.


Not in the US, but in the UK all my jobs have asked about previous salary. What was more ridiculous was one position where the recruiter was trying to convince me to take a job with a company at a lower salary than I wanted. His argument was that my last salary was X, so the new company wasn't going to pay me Y, because Y was too much higher than X...


They've always asked me. But they've never asked for a W-2 to prove it.


I’ve had a few recruiters ask me. I don’t tell them anymore.


Yes, I've done this once. I was moving from the SW industry to another industry that doesn't pay nearly as well. I was asking for less than 1/5th of my salary, and they thought the number was too high. I offered my current salary to prove "we're both experiencing quite a bit of pain here..."


I think the poster was probably asking why anyone would tell a future employer this information.

Employers should pay what they value the position at, not what someone else was paying for it.


Would you refuse an employer that wanted a past W2 as a non-negotiable?


I feel like there's a bit of trust involved in the whole hiring process. If they think I'm lying and can't pay me what I want, then why would they believe anything I said during the interviews? I don't want to work with people who think I'm lying to them after a few conversations.


Yes.


But why would he? It's none of their business


I know nothing about it in the US, but in the UK it would be their business once he started and they set him up on payroll. You could still argue it's none of their business and can only harm you in negotiating salary, of course - and in a large enough org the same people would never know - but otherwise it could be a short-lived deception, and maybe you'd feel weird about that once you were colleagues.


In the US, it's asked but I learned a while ago to dodge the question if they ask. If they're genuinely unsure, that tends to be a red flag. If they have a number in mind, we just negotiate and see if we can't come to a common understanding. I say that with the understanding that negotiating isn't something that comes easy for a lot of folks. I personally have found it an extremely useful skill over the years.

Different markets will bear different negotiation tactics. California, for example, made it illegal for a potential employer to ask a candidate this information.


I meant that, negotiation aside, here it literally is their business (via form P45) in setting you up on payroll. (Unless you time it or take a break such that the new job is your only one since 6 April, i.e. in the current tax year.)

Thinking about it - I do know/can infer a little about the US - everyone does (or has someone do) their own taxes there, so it's not needed. The purpose of this in the UK is that the second employer within a tax year can set up the new employee correctly so tax isn't underpaid as a result of ignoring the previous income.

(PAYE - Pay As You Earn - income tax, national insurance contributions, and student loan repayments are handled by the employer. People only 'do taxes' (self-assessment) here if they're self-employed, have capital gains over the allowance or a sale price over the reportable amount, or similar extra stuff; especially because of the allowances before tax kicks in, most people don't need to. I would guess the most common reason is pension contributions made with post-tax money (an employer's pension scheme is of course paid into primarily through payroll, not that way) for those earning enough to have a higher marginal rate of income tax - you get the basic rate back on contributions automatically, since everyone pays it, but claiming any extra relief requires self-assessment.)


It is never an employer's business to know previous salaries. It's wildly inappropriate that they would have the stones to actually inquire.


It really depends on how the business entity is set up. The line between what your salary is and what the business makes blurs. There are tax law implications, but it's less about tax avoidance and more about not screwing yourself. Depending on the business structure, one reason I know for setting a salary is because you're forced to. Certain taxes in the US (FICA) are based off of your income. They don't want you to set a salary of $1, for example, and then pay yourself via dividends to avoid taxes. The IRS states it has to be 'reasonable.'


I don't know if this applies to the States, but at least in some European countries, if you ever move to another country you are taxed on what you would have been expected to pay in taxes, based on the profit the company made in recent years.

Paying yourself a high salary (as long as it's reasonable, so as not to arouse suspicion of hidden profit distributions) allows the business entity to not make any profit on paper and therefore allows you to move to another country without having to pay this "exit tax".

Disclaimer: This is my understanding of the situation, I'm neither a lawyer nor a tax professional and this is not financial advice.


This is inspiring and refreshing.


Could you clarify (I'm just confused) how your income is 4K a month when you say you make 160K a year from ZSF? Is the significant difference all donated back?


I've been following V for a few years now and it's pretty awesome. Development is very active.


... and developers are very clueless.

And just about nothing works as advertised.


Can you elaborate or provide any references for those statements? What in [0] doesn't work as advertised?

Also, something I'd like to know, since otherwise it would be really dishonest on the V devs' part: is their memory management actually innovative? Oortmerssen, creator of the Lobster language, said that in their Discord they mentioned[1] that the memory model was taken from Lobster.

[0]: https://github.com/vlang/v#key-features-of-v

[1]: https://news.ycombinator.com/item?id=25514198


> * I no longer really have an opinion about V; it's been two years since I paid any attention to it.

Good to see that you finally got rid of your jealousy

I will be honest: seeing everyone getting "mad" at the V project owner was a very "dishonest" move, to stay polite and within HN's rules.


> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.


Yes, it is hard to balance. The V hate episode on Hacker News is the perfect example: a lot went against the rules and yet was "allowed", including personal attacks towards V's author, because that just is the nature of discussions in general, I guess.


I think HN should be free to call out scams and bullshit.


Scams?

So far, V is the only one that delivered on its promises: a safe, self-hosted language that compiles super quickly.

I'm yet to see a self-hosted Zig or a fast Zig compiler.

So far Zig has been only blabla, while V delivered.


I guess there's no better post than this to remind everyone that the Zig Software Foundation is halfway through its current donation goal, which would allow us to hire two more full-time developers.

https://github.com/sponsors/ziglang

On a separate note, I'm learning a lot about how different software foundations operate and I'm very surprised to see that very few prioritize the goal of paying their core developers. I encourage everyone to check out the mission of the Zig Software Foundation and to compare it with how we're spending the money we're getting. Everything is available here:

https://ziglang.org/zsf/


I've had a look and it's hard to see how the foundation's money is being spent. The accounts suggest most of the income is donations from Andrew Kelley, and most of the outgoings are Andrew's salary.


How do you find your full-time developers and assign them projects?


Contributors who can bill hours to the foundation are people who have been involved for a while and who have a history of quality contributions. The foundation doesn't assign projects; people tend to have an affinity for a specific topic, but they are free to use their billable time to contribute to any part of the codebase.


Consider joining Giving What We Can - an international community of people who have pledged to give at least 10% of their income to causes that they think are most effective at eliminating what they see as the world's biggest problems. (I've been a proud member for 10 years.)

https://www.givingwhatwecan.org/


I like the first half of the idea (donating 10% of one's income), but I feel strongly against their definition of "effective" and "biggest problems". This typically happens when someone multiplies a tiny probability by an infinite value like "time till the heat death of the universe".

In particular, their top four recommended charities are managed by "Effective altruism funds", with the 3rd best-ranked charity spending money to prevent "potential risks from advanced artificial intelligence" which I strongly consider a waste of money.

Of course, everyone's values are different. I'd just encourage people to take 5 minutes and choose their own charities instead of blindly choosing their suggestions.


I'd strongly recommend looking at GiveWell specifically: they avoid the weird "AI risk" stuff and mostly focus on malaria prevention and other very well-studied, high-value third-world interventions.

In particular, they also list where your money is going, so you can just pick and choose which of their causes you want to support (although I personally donate directly to them so they can decide which charity they think currently maximizes impact-per-dollar): https://www.givewell.org/charities/top-charities

> I'd just encourage people to take 5 minutes and choose their own charities instead of blindly choosing their suggestions.

The problem here is that some organizations have a very well proven track record of solving real problems; some organizations instead focus on very nebulous "social messaging and awareness"; and some organizations exist primarily to pay their executives well.

A quick 5 minute search won't really yield a lot of information on effectiveness, but an organization like GiveWell can afford to hire actual researchers to answer these questions.

(I'm sure there's other decent sources to help evaluate that question, of course. GiveWell is just the one I'm most familiar/fond of)


Note also that if you spend all of your funds allocated to Lord Bezos at smile.amazon.com instead of the base 2LD, you can specify GiveWell as the beneficiary and they get a small cut of each purchase.

It's per purchase though so you have to update your bookmarks to point to smile.amazon.com. Uninstall the app to help you remember.


This is old - you can now set the app to use smile. And when you share product links it will use the smile link.


The setting in the app expires after a period of time, and requires that you remember to set it again, just like Twitter's algorithmic timeline opt-out.



> This typically happens when someone multiplies a tiny probability by an infinite value like "time till the heath death of the universe".

This is absolutely not what they are doing. Most of their charities are working on immediate problems and are evaluated on their short term impact. The obvious exceptions are the Long-Term Future category and the Climate Change category, which are looking at around 50-100 years from now. They are not multiplying some tiny utilitarian value by an extreme number of years.

Their top 4 charities by total donation amount are Against Malaria Foundation, GiveWell, Global Health and Development Fund, and GiveDirectly. All are fighting immediate problems.

There are different categories of charities because people will never agree on which categories are most important. Does animal suffering matter? Is preventing a life of extreme poverty better or worse than preventing an early death? Everyone has different answers to these questions, so the site gives donors a variety of suggestions. They suggest charities that are well evaluated for their category. E.g. Climate change charities are evaluated on their climate impact per dollar donated, and human health charities are evaluated by lives saved and disease prevented.


It's not always that random (not saying it couldn't be). GiveWell, the best known research outfit, publishes their process: https://www.givewell.org/how-we-work/process

Peter Singer (who also runs The Life You Can Save), has lots of thoughts about why one should go with "effective" altruism instead of always donating to your local charity, and I would say they are worth listening to.


> which I strongly consider a waste of money

Do you consider all research on AI ethics and bias to be a waste of money? The EA folks are doing some of the best work in that field, and have been focusing on it for quite a while. People used to consider biosecurity and pandemic preparedness a waste of money too, but hopefully we know better.


No, I do think research on AI ethics is important and necessary. But I also think there's a time and a place, and "third most pressing issue for humanity" is not it.

Biased AI algorithms are a problem for the 10% of countries where AI is part of their daily decision process. For the other 90%, "I don't have enough nutrients for my brain to develop" and "I'm severely disabled due to a preventable but unprofitable illness" are much more pressing issues.

I support basic research, even in areas whose economic benefit may not be clear. But if someone asks "how can I help humanity the most?", I consider "AI research" to be a really bad answer, to the point of being almost a lie.


Their concern isn't biased AI algorithms. Their concern (well, one of their main concerns) is that a super-intelligent AI could wipe out humanity sometime in the next 100 years. They have arguments that this could happen even if the AI wasn't malicious.

I'm a bit skeptical too (and I haven't looked into that much), but it's not obviously stupid. They'd point out that even if there's just a 1% chance of humans being wiped out in the not-so-distant future, that's a big deal (and a large expected value for number of deaths) and we should work on reducing those odds.

A lot of this rests on the assumption that super-intelligence is really powerful. Like, take-over-the-world type of powerful.


That's what I had in mind with my "tiny probability, infinite timescale" multiplier, and also why I said "waste of money". The comment I replied to had a more nuanced point about AI research, which I think merits a more nuanced approach.

Having said that, I repeat what I said at some point: once they show that they can stop the dumbest DDOS, then and only then will I listen to what they have to say about a super-intelligent AI. If they can't do even that, then I don't know why I would listen to anything else they have to say.


I'm not sure if you're referring to a DDOS attack on one of their charities. I hadn't heard of that.

Cloudflare prevents DDOS attacks. Why would effective altruists work on DDOS attacks?


No, what I mean is: some effective altruists invest their time and money in stopping a possible future AI from enslaving humanity.

So, my challenge to them is: if you think that you can stop a super-intelligent AI from taking over, show me that you can stop the dumbest possible malicious intelligence that we know of, which right now would be a DDOS. And the incentives to develop either are pretty much the same, too.

If they can't, and my guess is that they can't, then I don't see why I should believe that they can stop anything bigger.


People stop hundreds of DDOS attacks daily.

It's weird to demand AI safety researchers should stop working on AI safety and prove they can mitigate DDOS attacks. The two are nothing alike. A DDOS attack isn't an intelligence. DDOS protection services work largely by having more resources and infrastructure than the attackers. Anyone can do that with enough money.

They also aren't researching how to fight malicious AIs. They'd be researching how to program safe AIs. Largely the stuff discussed here: https://en.wikipedia.org/wiki/AI_control_problem


First, I want to state that I appreciate your comment - I'm about to use some strong words in my reply and I wanted to say that I'm using them for conciseness, and not out of disrespect towards your (valid) points.

If someone were to argue "our strategy for survival is not to fight our enemies, but rather to convince them to use non-lethal weaponry", you would read their research in the rubble of their country after an enemy laid waste to them. I feel the same way towards that line of research: why would the US Army program their AI to be less aggressive when they could... not? How about the Russian army? China? Iran? You point out that anyone with money can make a DDOS attack, and I feel exactly the same way here: malicious AI could come up from anywhere.

If those researchers truly believe that a malicious AI is possible (and again, the website that started my comment chain puts it as a top-3 priority), they have to assume that it will be developed by someone with no interest in playing nice, just like that factory that started spewing ridiculous amounts of CFCs in 2019. Why would anyone use anti-bias correction in their NN embeddings when the biased ones exploiting harmful stereotypes get them higher profits?

If those researchers cannot stop the most likely scenario, then I consider their research little more than wishful thinking. And we have seen how well "everyone will surely play nice" has worked - spam, pop-ups, phone scams, the list goes on. That's why I like DDOS as an application: it's the world's stupidest AI causing a lot of trouble. Can you outsmart that? Good, then we can talk about outsmarting Skynet.


Okay, I see your point about hostile states, and I appreciate the respectful debate.

I'm not so sure about AI safety research myself, because it's very hard to evaluate how effective it is (I guess if an AI ever wipes us out, we'll have a data point), and because I don't know very much about AI safety research.

But let's suppose that super-intelligence is really powerful and dangerous (suppose there's a 10% chance it could kill all humans, despite our intentions). Now what? Is there anything more productive we could be doing to prevent that than just waiting?

Let's further suppose that the US and Chinese militaries want to make aggressive AIs that will subdue, or worse, exterminate their rivals.

As outlined in the AI Control Problem wikipedia article, there's a lot of concern that we'll make a super-intelligence that harms us completely by accident. For example, if you build one with the goal of protecting citizens, it might reason that to protect people, it must continue to exist. Therefore it undermines and outwits anyone who wants to shut it down. Worse, it might reason it could better protect people if it had more political power and more physical resources. Or even worse, if we really mess up how we programmed its goal, it might reason we're best protected if we're all put in a permanent coma and stored in a concrete bunker.

So even if the US and Chinese militaries were fully evil and wanted to exterminate all other countries, they might still want to use results from AI safety research, just to ensure they don't accidentally destroy themselves. And to some extent, for humanity's sake, making an AI with safety that exterminates every country except the one that created it is still slightly better than an unsafe AI that exterminates everyone.

I don't think the US or China or Russia would want to exterminate other countries. But whatever they choose to do with their AI, the AI safety research is there to ensure they don't lose control of their own AI. If anything, those safety features should be even more desirable if you're building an aggressive military AI.

Finally, how do you fight a hostile super-intelligence? I think I know what these AI researchers might say: you need to have an even smarter friendly super-intelligence. What's more, a friendly AI could look for and destroy other AIs while they're still in development. So maybe the key to avoiding hostile super-intelligences is just to be the first to make a super-intelligence and ensure it's safe and friendly. And if we ever get in an arms race to build the first super-intelligence, we better hope AI safety is well researched and understood by then. Because if not, someone may create an unsafe AI just to have it before their rivals.


> pandemic preparedness

There's a big difference between a threat that has existed for all of human history (pandemics), and a completely novel threat model that might exist in the future (AI)


I'm interested: why join this when you can simply donate 10% to charities without joining this?


Sense of community. I used to have the belief that being an anonymous donor, giving money to charities quietly, was the right and noble thing to do. Then there was the Ice Bucket Challenge and I realized that an individual donation is nothing compared to the multiplying effect that comes from having others within your community and network join in, and that requires finding creative ways of signaling your donations to others. Some find it distasteful to signal this, but the problem is that the end result of signaling donations is so much more effective that it's worth it.

Joining a community of people who also pledge to participate in charity and promote charities to one another could be a way to eliminate the distasteful part of signaling to others that you donate to charities. It can also work as a way to integrate charities and donations as a routine part of your expenses and budgeting.


Giving as a group activity feels weird, as if without an audience these folks might just not be interested.


If true, then that's exactly what makes it good. We can save lives for the small cost of giving donors some recognition and a sense of community.

Edit: To add to the above, the median annual donation for someone earning $60k-$80k in 2010 was $107 - roughly 0.15% of their income. It feels weird to criticize people donating 10% for having not-pure-enough motives while letting the majority, who donate almost nothing, escape criticism.


Performative altruism, so hot right now. I used to have this old book by this guy Matthew something or other that talked about good deeds being done for the purposes of being seen. Might have to revisit that one again, personally.


I kind of wish you'd stop shaming people for donating to effective charities. What if these comments discourage someone from donating?


Performative altruism is still altruism.


This seems like a really funny name for the org. Some people can give a whole lot more than 10%. Others cannot give nearly that much. Seems... contrary to the idea of "what you can"


Note the 10% is a minimum, not a prescription. As for "cannot give nearly that much": they don't believe everyone should pledge to give money. The FAQ is actually very cautious about who should do it: https://www.givingwhatwecan.org/about-us/frequently-asked-qu...


Does it count if you work part-time for pay, and write FOSS the rest of your time? I mean, you're contributing over 10% of your potential-employment time instead of over 10% of the higher salary of a full-timer.


Per https://www.givingwhatwecan.org/pledge/:

"Our pledges are in no way legally binding. They are commitments made voluntarily and enforced solely by your own conscience."

"Our pledges do not restrict you to give to registered charities, specific organisations, or organisations working in a particular cause area. The only requirement is that you give to the organisations which you sincerely believe to be among the most effective at improving the lives of others."

---

So, if you genuinely believe working on FOSS results in a better world than pulling in a paycheck and donating to charity, I'd say: yes, absolutely?

It's certainly not the default expectation, mind you. I think you'd probably get less out of the community since they focus in a different direction. But if you still see value in being part of the pledge, I'd definitely encourage you to take the leap and count that FOSS work :)


From [1]:

> However, Zig Software Foundation will never have big tech companies on the board of directors. We are grateful to Pex, for example, for donating $5,000/month, even though they have no board seats. Our goal is to maintain independence by keeping the mix of donations balanced among many parties.

Seems relevant given the recent hubbub over Rust leadership/governance.

Another interesting model is LF and Linus Torvalds. (NS)BDFL who has the best interest of the project in mind, but doesn't seem improperly influenced by LF supporter companies.

[1] https://ziglang.org/news/jakub-konka-hired-full-time/


Yeah, it's a choice. The Linux Foundation and Rust Foundation are 501(c)(6) (i.e., business association) non-profits; the FreeBSD Foundation is a 501(c)(3) (charitable) non-profit. Sounds like ZSF is also a 501(c)(3). There are pros and cons to both approaches. I think a large enough ecosystem (e.g., Linux) might benefit from having both types of non-profit organization around.


Somehow I loved this article and the response, even though it was admittedly a funding stunt. There was something very endearing about how straightforward and innocent the stunt was.

Two very deserving projects got funding (musl and Zig). I have used musl, mainly through Alpine Linux. I have not used Zig, but it seems to be a very interesting answer to the question of what a systems programming language that is not C or C++ looks like. We have one recent answer in the form of Rust, and Zig is another answer with another set of priorities and tradeoffs.

It is a very exciting time in terms of real-world language development.

All the best to the open source contributors of all these languages and libraries!


My ignorance is showing but, why does Zig deserve to be funded?


Because a "better C" is long overdue.

(And no, Rust is not it. It's a great language, but its design philosophy is very different.)

Ironically, today, Zig is also one of the best C/C++ compilers around, chiefly because it "just works", even for advanced scenarios such as cross-compilation:

https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...

(Yes, of course, the actual compiler is Clang. But Zig wraps it in a way that makes it so much easier to install and use, especially on Windows.)


> Because a "better C" is long overdue.

I owe you 1 beer, this is so true. If I can just get something that treats a string politely I'll be over the moon. If that's Zig, it has earned my contribution, simple as that.


Rust is it. The sooner everyone figures that out, the better.


We can argue over whether Rust is a better low-level systems programming language than Zig; but that's not what I meant. C didn't shove you into a straightjacket, and neither does Zig - that's why the latter can meaningfully claim to be the true spiritual successor of the former. Rust is something else entirely (again, whether that "something else" is preferable is a different argument!).


Dude, people are dumb and code has to work. We would even like it to scale per core. Rust is it. This Zig seems like a personal project, not the C replacement for the next 50 years.


Even disregarding the language entirely, the toolchain innovations that have come out of Zig are pretty exciting and worth the price of admission.

I hope Clang and Rustc learn a few tricks from them.


Can you link to an elaboration about this point specifically?


Consider reading this thread and its parent article discussion - https://news.ycombinator.com/item?id=27873193


That's my thought. I get people having pet projects, but I'm over the hard sales pitch for why I should care.


If you're a founder or a developer, you're more likely than not taking advantage (either your business, or the business you work for, or you personally for your own projects) of tons of such "pet projects", from Python and Node to Postgres and Zig.

So, there's that...


I think you both are reading "deserved" as meaning something stronger, more like "entitled", than it does. In this context it just means the OP thinks the projects mentioned merit their funding.


I'll donate $1000 worth of cryptocoin today (Monero, Nano, Bitcoin etc) if Zig Foundation (or Andy directly) publishes an address.


Shameless self plug - if Zig or Andy doesn't comment here, you can still donate crypto to Zig via their profile on every.org - https://www.every.org/zig-software-foundation-inc

edit: we charge no fees since our ops are supported by our donors - the only fee that gets taken out is that charged by our brokerage for selling the crypto


This looks great, but did ZSF opt in, or are you including them here in the hopes they'd take donations from you?

Likely the latter?

> https://support.every.org/hc/en-us/articles/360059840294-Why...

Also, skimming the terms:

> If circumstances make Giver to that particular Nonprofit impossible or inappropriate (such as if the selected Nonprofit ceases operations or loses its tax-exempt status), we will attempt to contact you first for additional preferences on where you would like your Donation to go. If we do not receive your preferences before we must disburse the funds, then we may in our sole discretion select an alternate Nonprofit to receive the Donation. We will do our best to direct your Donation to a Nonprofit in a similar space and/or with similar goals. Except in cases of fraud, we do not issue refunds.

If I can get Andy or a ZSF representative to say they'll claim it from every.org, I'd consider that. I can't find any obvious disclosure about margin/overhead from Every.org, but I might still consider it despite that.


Every.org looks awesome to me; I am signing up as we speak. Thanks for indirectly helping me discover it and thanks in advance for the donation.


> Also, skimming the terms:

This not a surprising term. By accepting donations to be passed on to someone else, they become trustees, which comes with hefty legal constraints. By having terms saying "we will try our best, but if we fail, we will try to honour your intentions", they can avoid winding up with little piles of money that can't be repurposed and simply ... exist.


No, I don't think it's surprising and I'm glad it's there. But it's the reason that I was nervous about donating absent an affirmative response that ZSF would claim the money. It's nice they'd try to apply it to similar projects but I'd prefer it go to Zig or nothing (or I can donate to another project of my choice).


Just to note - unless we can't reach you, we usually end up just crediting the donor's account with a balance that they can use to support any of the other nonprofits on the platform, so usually you do get to donate to another project of your choice :)


Great question - we have more info on how we disburse at https://www.every.org/disbursements - but in short, ZSF hasn't claimed their profile and connected a bank account, so we'd disburse via our partner NFG. We treat crypto donations like those made via bank account, meaning we cover the 2.25% fee that NFG charges, so (100% of your crypto donation - broker fee (<=1%)) goes to ZSF.


Hey, just a heads up: you said 1 hour ago "you can donate via their profile here", and then 30 mins ago Andy commented and said he's making an account as we speak...

Did you just solicit donations without them even being signed up? If so, that feels weird, and I'd appreciate at least (as I can't speak for others) if you didn't do that without some heavy disclaimer text making clear that they've shown no interest yet. I love the concept of what your service does, but you can get users in other, more honest ways than this.


How it works according to their website is that they will mail a check to the registered address of the charity. That's actually an amazing service - even charities which are not aware of Every.org still benefit from its existence.

I'm honestly really impressed.


I can appreciate that you derive value from this; however, envision it from the perspective of Joe Casual walking by: in my case it has discouraged me from using their service, because to me this feels dishonest in a lie-of-omission way. I'm not calling them dishonest, and I'm not saying anybody lied; I'm saying the marketing and recruitment strategy should encourage people to use it, not use gray patterns like the ones random VoIP texters use about "claiming your funds" and such. I'm just relating my personal experience and perception of it for the purposes of helping them make a better product, that's all.

I have a text message from just today after lunch with a link to click to claim my funds. No name, no context, just a random text that I can only imagine isn't in fact free money waiting for me.


Thanks for this feedback, and I definitely think we could do a better job of making it clear which orgs we have a direct relationship with vs not.

That being said, Network for Good, our partner for disbursing to nonprofits who haven't connected directly with us, also handles disbursements for a lot of Facebook Giving and other large donation platforms, so I don't think the checks coming from there are necessarily unexpected by many nonprofits - see their support article on this at https://networkforgood.zendesk.com/hc/en-us/articles/1150073...


Their terms are pretty clear that it's how they work. I don't know that it's dishonest but it is slightly tricky IMO. It seemed like it was the case when I clicked on it and it would be nicer if it were obvious before donating.

But bottom line they'd have gotten the money delivered to ZSF anyways.


True, in this case it goes to a worthy cause and it seems as if Andy is thrilled by the ability to get a check mailed, so obviously there's utility, there's no disputing that... It's just you don't need gray patterns to attract people to things that are awesome.


Wow, thank you for sharing this. I was not aware of Every.org until today. I checked out the platform just now and am delighted. I love that it is a 501(c)(3) and that it is crystal clear with the promise to remain that way. We desperately needed something like this.

I also love that you will mail checks to organizations even if they do not sign up for electronic disbursement. However I will sign up ZSF now of course :)


Ok sounds like I can go ahead because you will accept from every.org and you do not plan to publish an address to accept donations directly?

I'll match other HNers' donations of up to $100 each, to a total of an additional $1000. Not sure how to substantiate every.org donations, but maybe I'll take HNers at their word.

EDIT: scoreboard as of 22:20 UTC: $400 out of $1000 to be matched: easymuffin, slimsag, tav, _hl_.

Because we need some kind of time bound, I'll leave this matching window open for the next 48 hours, so ~21:20 UTC on Saturday (this is ~5pm US Eastern time IIRC).


Ryan Worl is offering to match up to 1K as well: https://twitter.com/ryanworl/status/1446216478416584706

slimsag up to 500: https://news.ycombinator.com/item?id=28792172

So I think that means if someone donates and then replies to the parent comment noting the amount, that amount will be quadrupled!


Not nearly as much as others, but I just donated $5 directly through GitHub: https://imgur.com/a/5LIUu0y



Is that conditional on donating via some cryptocoin? I'd be up for donating a well-deserved $100 (can't really afford much more right now sadly) but prefer traditional methods.


No, it's not. Donate to ZSF directly or via every.org and I'll match it.


Done. Thanks for being so generous.


Thanks for being so generous! Here's $500 from me:

https://imgur.com/gallery/qGBGwXc



Thanks everyone. Donated $100 https://i.imgur.com/BasBMKg.jpg


Matched your $100 :) https://i.imgur.com/GCuXArQ.png



Sure, why not: here's my $100 https://i.imgur.com/XwhkmzQ.png

And just to make it interesting, I'll also match donations up to $100 each to a total of an additional $500 - starting now for the next 48 hours. :)


Toaster King and aveao bring up the match to $455! Keep it going, HNers!


Here's another $100: https://imgur.com/a/uoQdmqg





Poster: match amount

compscidr: $100

easymuffin: $100

slimsag: $100

tav: $100

_hl_: $100

Toaster King: $50

Foxcoditrad54: $10

aveao: $5

Total match: $565! Great job, team!


Thank you for such a kind comment, and I'm glad we can support your and the foundation's work! I've just shared this with my team, it makes everyone's day when we get feedback like this <3


Deno currently requires glibc. Anyone have insight into it? Would getting musl more funding help make it so it can run on stock Alpine?


A year and a half ago kesor was able to get Deno to build without glibc (so it would work on stock Alpine). It seems to be GN args/configs that need fiddling.

Unfortunately this work has never been brought into main (a couple of attempts have been made subsequently).

https://github.com/denoland/deno/issues/3711#issuecomment-62...


Deno is built on V8, and V8 requires glibc. So it's down to Google supporting musl, and I don't think it's a money issue.


> V8 requires glibc

I don't _think_ this is true.

https://github.com/denoland/deno/issues/3711#issuecomment-62...


The aunt of your comment applies here. If anyone has gotten Deno to work without glibc, it means V8 can be made to work without glibc. That doesn't mean it's properly supported.


People have been building V8 on musl for a while now, but it's not supported by upstream. Last time I looked into it, a couple of years ago, such builds were also not particularly stable.


I see. I love Deno and using a Debian or Ubuntu image is a small compromise.


Any good documentation on the language itself?

This looks like an easier Rust to me.


It's probably closer to "a more modern, better C" than "an easier Rust".

The core features of Rust are the borrow checker, traits, and algebraic data types, and Zig is missing all of those. On the other hand, Zig has a lot of things that C doesn't have, and a few that Rust doesn't have as well - and some people gravitate towards the C-like simplicity of the language.


I'd say Zig is a more strictly low-level language than Rust, with less safety (it's not memory-safe) and a less sophisticated type system, but with more emphasis on being low-level (malloc is fallible) and with more metaprogramming capabilities.


It's not an "easier Rust": you can't really claim to be Rust without the borrow checker, and you can't really claim to be Zig without comptime and explicit allocation. They are different languages that made different decisions on how to do things, for different reasons.


https://ziglang.org/documentation/master/

That's a place to get a feel for things. There's also a generated doc set that is similar to Rust's, but I just recommend looking at the source if you are curious about something; it's concise and readable. If you have more questions, you can pop into IRC or the Discord; they'll have better resources at the ready.

https://github.com/ziglang/zig/wiki/Community


Another good resource: https://zig.news/


Previous discussion from two years ago:

https://news.ycombinator.com/item?id=20268087


musl is also key for the Emscripten project.


The Computer Science god does not actually require this kind of works righteousness for salvation.


10% is IMO too much; take care that you sustain yourself first, so that you can continue.


Where are you? You’re a poor man.


This person makes $1500 a month?


The Zig foundation is super transparent about where their money goes, modulo a few months of delay. Technically he's drawing a salary of something like 12k/month but he donates most of it back to the foundation (initially 100% of it was being donated back, but he's scaled back gradually so it's more like 80% last I checked). I don't know if he has any other income but it seems like Zig is really a labor of love.


If you're in charge why would you pay yourself a salary only to donate some portion of it back?


Probably for insurance and tax purposes.


He lives from donations to Zig, so that he can work full time in it.


The literal first sentence of the article will explain this one for you!


The literal first sentence is as follows:

>One year ago, I quit my day job to work on Zig full time.

So is it the case he only makes $1500 a month? I mean it's none of my business what he makes but he did bring it up and I am curious to know how much a prominent individual like him earns for working on his own open source passion project. I find it commendable that someone would work on their own project for such a low salary owing to a belief and desire to get their product out there.

That said if he makes $15000 a month that's cool too and great for him, but it's not clear to me what the truth here is.


Above he wrote that he is now up to $4k/month.


[flagged]


Yes, which is great! He's drawn attention to an important project and given it funding.

> Now if I'm being honest about my motivations for this blog post, it's that I want to prove that open source funding is not a zero-sum game.



