This post is so interesting to me, esp. the build-vs-buy spectrum.
As Dan notes, a lot of software is just...not very good. It either isn't upfront with flaws (as in the case of the Postgres -> Snowflake tool), has too much scope, or is abstracted poorly. Finding things to buy/use (as in the case of open source) can often eat a lot more time than you anticipate.
I've been dipping my toes into the JS ecosystem, and I keep bumping into the fact that using mentally cheap signals of quality (such as stars or DL counts) almost never indicates the quality of the thing itself. Winners seem to be randomly chosen, almost! The only way to assess is to read the code and try integrating it in.
I'd go further and argue that the larger an ecosystem/market is, the more untrustworthy it behaves as a whole, simply due to its size and the types of people it attracts who want influence/money. See also: appliances that everyone needs.
For me, what this post illustrates most is the cost of information. By making hasty decisions, buyers are trading present time that might be spent shopping and comparing with future time spent struggling with the wrong product. They're discounting future time. But they're also doing something very rational -- they're making a decision to see what happens. That is, they're testing a hypothesis in the only way the market allows them to. Because people are bad at predicting their own future needs and behavior, and products are bundles of features whose importance is often unknown until you have to use them in high-dimensional futures. So buying is an empirical test.
Unfortunately, most consumers, recruiters, and sometimes hiring managers are in a position of information asymmetry vis-à-vis the people selling them something. That is, consumers rely on the self-reporting of vendors who purport to be experts.
It's worse than that: nobody knows! How are we supposed to know whether all the investment vendor A made in the "reliability" of their appliance will actually pay off, or whether it was just spent wastefully? And oops: past results don't predict future outcomes, so you can't even really bank on a brand or a reputation: who knows if they just cut all their QA staff? Welcome to entropy, my friends!
Theory (and practice too) says that whatever was used in the proof-of-concept also stays. I mean you are right the initial thinking is to try and evaluate the fitness, but then you're already invested and unless the fitness is abysmal (it rarely is) everything else will be rounded and squared to fit the already implemented hole...
Consumers rely on advertised claims being truthful.
In most cases it is not a matter of people badly predicting their own needs, though some do have problems with this. It is a matter of people being misled by false information and trying to course-correct after that information comes to light.
In a world where lies of omission and ambiguity bordering on malice aren't considered outright lies, where sales reps do make outright lies, and where fake reviewers go unpunished, there are real problems, especially when the presumption is that nobody is doing this (when in many markets this is exactly the standard behavior, and academic studies show these things happen regularly).
Presumptions are just assumptions. Someone will take advantage of the grey, unenforced area to push a product that may not be as professionally tested as they claim (or even finished). I've certainly run into a lot of these bait-and-switch types over my long professional career. The general term for this is snake oil, and with the concentration of the market over time (increasing market share, fewer participants), it only gets worse.
You’re giving pmarca too much credit: he’s just lying. Ya know, for personal profit.
Starving for talent my ass. His portfolio companies have infinite appetite for talent at zero cost, the minute one person wants one point of the upside they’re right back to starving for free talent.
Silicon Valley is the ultimate thought experiment in how wealth inequality plays out when resources are effectively inexhaustible. You don’t have to speculate about a post-scarcity world, this is a post-scarcity world. Ballers in SV ship a billion in revenue on a Tuesday. And yet somehow it’s Andreessen who ends up with all the chips at the end of the night.
These talking heads don’t have a plan, they don’t know what their next big payday will be, they don’t code, they don’t design, they don’t sell anything other than their own personal brand, they don’t add value.
They’re just patient and connected like zen spiders sitting on a web: they’ll learn first about a new big thing, they’ll be there immediately, their friends will wire up the deal in their favor, and they’ll do a TED talk a few years later about how making themselves absurdly wealthy with no real effort is somehow the future of humanity.
They openly advertise their glee at the (ridiculous) idea that soon some autoregressive language model will do all the work and the owners of NVIDIA cards can just pocket it all.
In the 90s there was this meme of a yuppie couple running a business from their couch via "The Information Superhighway": outsource everything, all you need is a laptop, a glass of Chardonnay, and a lot of cheek.
pmarca should spend less time yakking about AI on Lex's stream and more time learning AI on geohot's stream.
Most of “the people selling something” have little to no credibility, so their words have no value beyond a very low bar.
It would be different if it’s someone e.g. very high up at a F500 selling something, even with a huge information asymmetry, because it’s still possible to bank on their credibility. (Assuming they offer sufficiently many guarantees signed by sufficiently many people.)
>I've been dipping my toes into the JS ecosystem, and I keep bumping into the fact that using mentally cheap signals of quality (such as stars or DL counts) almost never indicates the quality of the thing itself. Winners seem to be randomly chosen, almost! The only way to assess is to read the code and try integrating it in.
I wish more people understood the "Kardashian effect," as I like to call it: the most popular thing is only the most popular because it was already popular. I think in almost everything in my life, in every domain, #2 or #3 is better suited (for my preferences and needs).
A year or two ago on HN I read a short blog post about omitting the word "best" from internet searches and being more specific in your criteria (e.g. "car with best resale value" instead of "best car"), and it has made my life and way of thinking a lot better.
> I think in almost everything in my life and in every domain, #2 or #3 is better-suited (for my preferences and needs).
I like to explore alternatives to the most popular choice, but more often than not I end up back at the #1 consensus choice.
I have some friends who simply must pick the #2 or #3 choice in every domain. They always have an elaborate justification for why it’s better. From my point of view it seems driven by contrarianism.
Sometimes they pick some interesting alternatives that I explore. Most of the time they end up with also-ran purchases that die off. I joke that one friend of mine is the best predictor of impending product-line cancellation that I know. He used a Zune when everyone went iPod. He went Windows Phone when iPhone and Android were the front runners. He even eschewed Instagram for some other platform that he was sure was going to win the social media wars, but that was actually so unnoteworthy I can't even remember its name right now.
The same group of consumers has an outsized tendency to purchase all kinds of failed products, time after time, flop after flop.
You really don't want these people as customers:
In a key part of the study, the researchers studied consumers whose purchases flop at least 50 percent of the time, and saw pronounced effects when these harbingers of failure buy products. When the percentage of total sales of a product accounted for by these consumers increases from 25 to 50 percent, the probability of success for that product decreases by 31 percent. And when the harbingers buy a product at least three times, it's really bad news: the probability of success for that product drops by 56 percent.
I'm somewhat this way, but more than somewhat when it comes to TV shows. I think something like 80% of my favorite shows were cancelled early. Firefly, Freaks & Geeks, Shadow & Bone, ...
Perhaps I should set up a Patreon where people can give me money to not watch things.
(Admittedly, this may be a mathematical artifact. Cancelled shows have less opportunity to decline in quality. At one point, The Dragon Prince was my all-time favorite show. I haven't even bothered to watch the last several seasons. It is even possible that the people cancelling the show are able to accurately predict that something is going to go downhill in the future, though I doubt it.)
For what it's worth, I watched the show while working/cleaning and the later seasons felt pretty decent. Its seventh and (I believe) final season is premiering in two days. I can definitely feel the influence of Aaron Ehasz, and although I've probably aged out of the target demographic, the ATLA-like worldbuilding/character writing is nostalgic. Not bad to have on in the background IMO if you ever get curious about what happens.
Oh lord. My father also always seemed to pick gifts that appealed more to him and his impulsivity.
I got the Vectrex too, and an Atari STM, I think; it wasn't quite the same as an Atari STFM. When I wanted to learn piano/keyboard, he bought me a frickin' keytar, a Yamaha SHS-10, instead of lessons or the simple full-size keyboard I asked for.
Instead of a gift certificate to get some clothes, he got me a gimmicky Canon SLR that ate batteries and that I couldn't afford to develop the film.
He was a very strange person. Sometimes incredibly funny and generous, other times hateful and selfish.
Edited to add: sorry for the trauma dump. I have no idea what point, if any, that I was originally trying to make.
Well, I can understand your father better than you do in this case. I bet a time will come when you'll start appreciating what he did for you.
Fathers are not like mothers, dude... Geez, dreaming of clothes and not appreciating a Canon SLR instead... Time and maturity will help you appreciate at some point that you had an amazing father.
Added context: my father didn't pay child support. I wanted clothes because all I had was school uniform and old hand me downs.
I would have liked to have eaten something other than boiled potatoes and peas on my birthday. I would have liked to go to the ice rink with my friends. Instead I got a camera I couldn't afford to use.
What the hell does that have to do with this topic?
The problem you describe, now, is totally different and it wouldn't matter what the gifts were, whether they were things most people valued or things that ended up being flops or things you liked or didn't.
I might be one of these people. I don't have to pick #2 or #3, but I will give them a thorough reviewing as I will critically for #1. Sometimes I just want something different for the sake of it, but I want it to at least do the job reasonably well. Something about a fork in the road and taking the one less travelled by...
Since that can often mean extra effort/support, I won't recommend such things to others. I'll try to pick something that will be the least trouble for them.
Fascinating observation, and now that you point it out I know several people like this. It's like they are pathologically contrarian consumers. Then they often complain about the suboptimal situations they get themselves into.
Not to say that every #1 popular item is the best, but it is definitely a lot more often than never.
> I like to explore alternatives to the most popular choice, but more often than not I end up back at the #1 consensus choice.
Popularity is only a decent proxy for consensus if people actually look at the other options. I've been burned by trusting this metric more times than I can count.
Hah, plus one on this one. I once went as far as buying a French car famous for suspension problems caused by the terrible pavement in my country, mainly to prove everyone wrong about the unreliability claims (and it was unreliable, btw). I guess I often felt I was outsmarting the dumb crowd... it got me screwed so many times.
[TL;DR: Hindsight is 20/20, but if you did a good job with your requirements and you had good information about if a product meets each one, then it doesn't always matter that other products which didn't meet the requirements as completely eventually win out.]
I may be one of these types, but at least in many of my cases, I don't really know that it mattered in the end?
Maybe after a review I pick something that didn't win in the long term or even eventually exited the market because it wasn't popular enough. But, my requirements are almost never strictly that it's popular. What I end up with does typically do the job very well for the time I have it, and after few years the requirements may change or the need may go away completely. If one of my requirements is that a device is built with metal instead of plastic, maybe I never have to replace it.
Another example: your friend had a Zune, but then I'm guessing they moved on to a phone, because phones eventually became better music players. If the Zune did all the things they wanted while they had it, especially if they had a unique need, maybe they were happy with it. (Although that isn't necessarily always the case.)
This doesn't seem quite as applicable for selecting software, though. Popularity often is part of what I look at there, because I want to know dependencies won't need replacing and support will be available. Additionally, you can potentially work with the developers so the selection iterates and grows into your requirements.
I have friends who routinely go for #2 or #3 (occasionally even further down). The typical justification they give is that one pays a big price premium/marketing cost for #1, whereas with #2/3 one gets comparable or slightly worse quality for a lot less.
Part of the problem, though, is that winners of the popularity contest get more support from the rest of the ecosystem, which doesn't take that long to turn into a moat.
This really depends. Some examples where the number one choice is IMO justified:
- ZFS. There are other filesystems that provide checksumming, CoW, etc., but ZFS is the proven solution in this space. I am happy that work on Btrfs and bcachefs continues, but they are not there yet.
- Debian. There are distros that do things better and differently, but nobody else offers quite as smooth an experience, hardware support, security, and software availability all in one package. The alternatives are really good, so it really says something about Debian that it is as popular as it is.
- ssh. There are some alternatives but when was the last time you heard an argument made in their favor?
I could go on, but hopefully I've made my point. Sometimes the standard choice is the right one. And yes, I know that picking Debian as a good distro can start a flame war :)
It makes me really appreciate tools that DO work. Things like: the Linux kernel, Vim, PostgreSQL, the Golang compiler, etc. Interestingly, the aforementioned tools come from different ecosystems, and levels of financial backing, but all of them have been reliable tools for me for many years, all are complex, and yes... they all have bugs, but of acceptable severity and manageability.
For me the most interesting case is HeidiSQL. I find it easily the most useful SQL GUI client. It crashes pretty frequently, though not frequently enough for me to stop using it over the alternatives.
I often wondered how to strike the balance right on these things, since apparently all options can lead to success.
Might depend on the quality of the crashes. Losing hours or days of work would quickly sour me. HP RGS crashing twice or thrice daily is just 'meh': reconnect and nothing is lost beyond 15 seconds and retyping auth tokens. Maybe some 'flow' too, but I've become resilient there.
Are you actually applying some objective standard for "works" here? Or are you just deciding that the bugs in things you like are "of acceptable severity and manageability" and the bugs in things you don't like aren't?
Right, but which bugs prevent you from completing a task is probably a function of how much you like those tools and/or how useful you find their capabilities. Generally people get used to large classes of bugs in their tools and work around them without even consciously thinking about it.
More directly, I have never encountered a bug with any of these apps, aside from Linux kernel panics due to buggy drivers, which was almost always with long tail hardware back in the 90s.
Basically everything Dan posts online is deeply insightful, fearlessly honest, thoroughly footnoted, and dryly humorous.
I think even he is a little averse to straight up saying it out loud (we’ve discussed it many times): software and product outcomes have gone to shit because we don’t enforce anti-trust law already on the books, let alone update it in light of 50 Moore doublings.
You want a smartphone? Here are two vendors with the same App Store vig at payday-loan-shark rates. You want to rent cloud compute? Here are several vendors where the 4th-place vendor charges the same as the 1st-place one. You built a disruptive encrypted messaging app? You'd better live on a boat and watch your six like Jason Bourne, Mr. Marlinspike.
Is it even a little surprising that everything from Google search to Netflix is a shittier version of what it was ten years ago? There's no incentive to make it compelling!
This is a tangent for sure, but I doubt you can find payday loans at 30% APR (the App Store takes 30%). People really have no idea what realistic rates are for people with bad credit. If you have a sub-580 credit score, the average APR for unsecured loans is 100%.
> payday loans at 30% APR (the App Store takes 30%)
You have to flip the percentage. The fraction of money that goes to the intended destination after a 30% tax is 70%. The fraction of money that goes to the intended destination with a one year 100% APR loan is 50%.
Though there's no reason to assume a year in particular. If you take a six month loan at 100% APR, you have to pay back 141%. Paying back 141% is equivalent to a 29% tax. And if the best comparison is "six months of compounding payday loans" that's even worse than the initial comment suggested.
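For the curious, the arithmetic being argued over can be sketched in a few lines. This is a toy model that follows the comment's convention of compounding the APR over the loan term (closer to an APY than to how payday lenders actually quote fees):

```python
def apr_to_equivalent_tax(apr: float, years: float) -> float:
    """Equivalent 'tax' rate of a loan: the fraction of the repayment
    that does not reach the borrower as principal."""
    repay = (1 + apr) ** years  # total owed per dollar borrowed, compounded
    return 1 - 1 / repay

# One-year loan at 100% APR: repay 200%, equivalent to a 50% tax.
print(round(apr_to_equivalent_tax(1.0, 1.0), 3))  # 0.5

# Six-month loan at 100% APR: repay ~141%, equivalent to a ~29% tax.
print(round(apr_to_equivalent_tax(1.0, 0.5), 3))  # 0.293

# For comparison, a flat 30% store cut corresponds to repaying
# 1/0.7 ≈ 143% of the amount that reaches the developer.
print(round(1 / (1 - 0.30), 3))  # 1.429
```

So under this (simplified) convention, a six-month 100% APR loan and a flat 30% cut come out roughly equivalent, which is the point being made above.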
I think that we should either refer people to serious treatments of bond pricing or say nothing on the matter: it’s very easy to confuse everyone with “sort of” explanations of important math.
But Tony Soprano sucks the blood out of pre-existing economic activity while Apple created a brand-new playing field for billions of $ of new activity, complete with the hardware platform and app hosting.
At what point does a "new" playing field become an "existing" playing field? Surely they don't deserve an outsized cut forever, and we're more than 16 years in.
Their hardware sales shouldn't entitle them to a cut of the software used on it, and the hosting is not worth particularly much.
All property is public property in the sense that the duly constituted government informed by the wishes of the electorate has an iron monopoly on the use of force and in this case enough force to make anyone do anything.
Some internet platform thing stops being socially useful from competitive innovation and starts being an extractive rent?
The public has the power to dictate terms to the people running it. It’s a power the public hasn’t exercised a lot recently, but it’s only been about 30 years since LA 92, a little longer to Watts and Detroit.
It would be a grave error to mistake the public’s kindness for weakness.
> Of course they do? Just because they created a platform long ago doesn’t make it public
If that's what you think matters above all else, would you support a literal mafia group taking a cut if they had started the market in that particular city?
> public property
It's a market, not property.
But Apple owns none of the iPhones, so why do they get a say?
> The value of their platform is in both hardware and software by the way.
They shouldn't be artificially tied together by DRM.
> If that's what you think matters above all else, would you support a literal mafia group taking a cut if they had started the market in that particular city?
This is called being a "landlord" and it's actually completely legal.
You buy some land, you build a mall, you invite shopkeepers to set up shops and sell to customers visiting the mall, and the shopkeepers give you $$$$ every month, forever. If they ever stop paying, they lose their shops.
Even though it's shopkeepers that draw customers to the mall in the first place, and even though the shopkeepers are covering all the maintenance costs of the mall, you get paid anyway. Because you own the mall.
So if a mafia group created a market, took a cut from every shop, and threw out anyone who wouldn't pay them their cut? That's actually a legitimate business.
That doesn't quite hold when talking about a piece of hardware someone can own outright, like a phone. If the mall allowed shop owners to buy their store, it would have to function more like an HOA, collecting dues rather than rent.
That analogy still doesn't quite hold, as you legally agree to pay HOA dues when buying the property and agree that the property can eventually be seized if you don't pay. No such agreement is made between phone owners and Apple; Apple just sells you the thing and puts some of its functionality behind ongoing fees.
That's the point - the mall doesn't allow shop owners to buy their store. And Google doesn't let you own your phone outright. If they did, they'd have to give up that recurring revenue stream!
I don't like the app store model at all, but it's a stretch to call it rent. Google can't repossess your phone simply because you don't buy any apps.
I have heard the rent argument made for property tax and it holds better there. Own your house outright but fall behind on property taxes and the government can take your house and land - now that feels a lot more like rent.
Analogies between physical commerce and what the Cartel is up to are offensively wrong by 19 orders of magnitude or so.
There is no useful analogy from an idyllic high street with people shopping at leisure to what happens when you give sociopaths unbounded compute and legal carte blanche.
I struggle with this as a meme and I have a more and less charitable theory. My more charitable theory is that people never learned finance seriously. My less charitable theory is that people stand to benefit personally and are indifferent what it costs, as long as someone else pays the cost.
None of these concepts are in the bedrock law of the land: shopping mall, home owner’s association? Which amendment protects HOAs?
It is completely possible to change all of this, and while we should deliberate thoughtfully before making sweeping change, I mistrust anyone who just asserts dubious shit as bedrock laws of nature.
Maybe we’ve got a pretty sickening system composed of dubiously legal and strictly unethical status quos and it’s time to make some changes at arbitrary effort and cost.
I’d contend that the term “rentier” is more general, and basically no economist from Smith and Ricardo up to Art Laffer ever lobbied for extractive rent seeking.
Profit margins and low-friction competition go hand in hand; show me someone who endorses the one without demanding the other, and I’ll show you a pretty small-minded person with distressingly little empathy.
A legitimate business created an asset. There’s no good reason to believe they should cede ownership simply because 16 years have elapsed.
> But Apple owns none of the iPhones, so why do they get a say?
This is an interesting point. But the key thing is people buy iPhones in part because of the walled garden, and Apple bakes it into the price. Without that App Store revenue, those $999 iPhones would simply see a price increase.
At one time we had somewhat reasonable sounding protections via copyright and patent.
Today that’s so captured that Mickey Mouse is still in copyright and insulin for diabetics costs more than the saline it’s in.
It defies both the law as written and the human sense that wrote it to advocate for unproductive rent seeking: sixteen years? With a few point patches along the way? Thirty percent.
The Europeans find it ridiculous, the FTC finds it ridiculous, anyone who likes innovation finds it ridiculous.
But it’s mostly ridiculous because history is unambiguous on this point: you tell the peasants to eat cake long enough and you’ll swing from a dockyard crane.
> Without that App Store revenue, those $999 iPhones would simply see a price increase.
Why would that happen instead of Apple taking less profit overall?
I suspect that Apple's pricing for phones is based primarily on what the customer is willing to pay rather than on the amount needed to be profitable.
I assume you mean "buy an Android and do nothing else", but in that case I think it's unreasonable to suggest I'm not allowed to want companies to be honest 99.9% of the time, and I can only express it once every few years while I'm choosing what phone to buy.
If you didn't mean that, I'm not sure why you gave me that advice. I'll keep it in mind but I'd prefer we keep the discussion focused on what Apple is doing and whether it should be accepted by the general public, both in terms of popularity and legality. And Google too, because this discussion started about both app stores.
On economic activity? Apple has generated a lot in the last few decades.
On human welfare? Every other day a new study comes out on the destructive force that heavily marketed smartphones represent: Philip Morris and Enron put together couldn’t collapse the birth rate of a nation.
This debate takes economic activity as a good a priori. A lot of people assume that.
Some are ill-informed: they haven’t read the designer of GDP talk about the perils of GDP as a metric.
A small few know how mathematically comical the fucking Laffer Curve is and who thought it up and where and prey on the good intentions of the former group.
They’re going to catch a guillotine no matter how much caution I or anyone else advocates for.
No we’re talking about Apple and the ostensible economic impact as opposed to the humane assessment of outcomes friend, and we can talk about anything from child labor in Foxconn factories to union busting at Apple stores to parts pairing around right to repair before we even get into how fucking illegal the App Store shit is.
Nice, thanks. Making similar strides myself. Moved over to a Framework machine (which has been surprisingly pleasant) with Linux (also not too bad as a desktop OS in 2024), coming from an M1 Pro. Photos is my current sticking point though; do I really push everything into Google Photos...
Endorsing this, because an upvote is invisible. Also consider: is it the phone that’s the problem, or is it TikTok/Twitter/Instagram/whatever? And even for them, is it the app itself, or is it the other people?
The problem with X is… the other people who use X.
Anti-trust laws, and their enforcement, are only feasible in the very-large-monopoly space. Even in smaller monopolies, where not a lot of money is moving around, you'll get a crappy product. Do you see Lina Khan going after Workday?
The retailer has to pay a lot in shipping and stocking that a digital storefront doesn't. More importantly, the retailer doesn't prevent me from buying it directly for $1 or $2.
Indeed. There is a nascent alliance forming between socialists like myself and libertarians like many I respect: a “market” that has profit margins to increase prices but little or no competition to put downward pressure is a despotism. It’s a vampiric wealth transfer on the scale of great power GDP and many of us have fucking had it.
>I think even he is a little averse to straight up saying it out loud (we’ve discussed it many times): software and product outcomes have gone to shit because we don’t enforce anti-trust law already on the books, let alone update it in light of 50 Moore doublings.
None of the examples of crappy software from the OP are Magnificent 7 companies, are they? Those companies seem to be capable of hiring the best talent.
A critical input into the build vs buy decision is whether your company is actually capable of building. Suppose Dan was advising the company that made the crappy Postgres/Snowflake sync software on whether to build vs buy a supporting component. Given that their main product is already crap, can they really be expected to build a better supporting component in-house? Seems like they might as well buy, and focus their energy on fixing the main product.
Dan advocates vertical integration here, and elsewhere says engineers get better compensation from FAANG than startups. This suggests that big companies made up of lots of talented engineers have natural efficiencies, relative to the alternatives.
A natural interpretation of "antitrust" is: break companies up into their smaller business units. That basically replaces an in-house "build" relationship with an inter-business "buy" relationship. So it seems antithetical to the OP.
Dan, if you're reading this, I encourage you to write a reply here on HN tearing me apart, like you did to all the other HN commenters.
Your build/buy analysis is pretty astute if you accept the axioms: when reasonable people disagree firmly it’s almost always their starting assumptions.
I’ll challenge the notion that enriching mediocre people at the expense of society is either reasonable or useful. Enlightenment ideals around the sanctity of personal property fall apart when you extrapolate that to millions of people in passive index funds who don’t vote on governance.
Neither the United States Constitution nor the Universal Declaration of Human Rights nor the Geneva Conventions have any protections for rent seeking assholes as a bloc.
Don’t shoot the messenger: in my reading of history either one group backs down or the other slaughters them.
Your analysis seems really handwavey. Of course, rent seeking is not considered particularly valuable. But some amount is inevitable. You haven't been very specific about the rule change that you think would reduce rent seeking, or why you believe it would be helpful.
I see very little downside in an extreme form of campaign finance reform where basically any interaction between industry and legislature is deemed bribery and treason and is a capital crime.
I’ll also advocate for a punitive wealth tax: not to raise revenue (who cares) but to cripple billionaires who aspire to nation state power. 50 million seems like plenty to live in arbitrary luxury but not enough to buy policy. Seize everything above that.
>basically any interaction between industry and legislature
If the legislature doesn't talk to industry before regulating, expect lots of incompetent, ignorant, economy-crippling regulation.
You don't have to change anything in the US if you want this. Just move to Germany.
>cripple billionaires
You might be able to persuade Trump of this, actually, if you sell it as increasing his power over the other billionaires.
More seriously -- It sounds like you have a strong desire to live in an authoritarian country, where the state possesses unchallenged primacy over its citizens. I support your right to do that. There are many available to choose from.
I'll bet if you do enough research, you can even identify an authoritarian country which also has crippling regulations. That should be perfect.
These talking points went mainstream around the time Reagan got elected, around the time stuff like the Laffer curve got taken seriously.
The United States was mounting a vigorous defense against the Warsaw Pact in 1960, and again in 1970, and straight up through 1979 before Reagan was elected and we took the Laffer Curve straight off the back of a napkin and to the legislature.
There is a way to run a participatory democracy without a dystopian upwards wealth transfer, without unlimited political spending, without a catastrophic class segmentation that will lead to war. And we can do it all while opposing unbounded statism.
It sounds like you’re doing well under the current system and trying to frame it like that makes a system good.
This will be my last reply in this thread (unless you apologize).
>These talking points went mainstream around the time Reagan got elected, around the time stuff like the Laffer curve got taken seriously.
Unfashionable isn't the same as wrong. It's telling that you're responding by saying "this is why you're unfashionable", rather than "this is why you're wrong".
>There is a way to run a participatory democracy without a dystopian upwards wealth transfer, without unlimited political spending, without a catastrophic class segmentation that will lead to war. And we can do it all while opposing unbounded statism.
Are you aware that median wages in the US are some of the highest in the world?
I actually care more about global inequality than inequality within the USA. I would sooner support effective foreign aid than welfare.
But really -- It's not your specific policy ideas that disturb me, so much as your attitude. Your politics aren't the politics of benevolence or charity. They're the politics of envy, ego, bitterness, and resentment. That never ends well:
>It sounds like you’re doing well under the current system and trying to frame it like that makes a system good.
Currently surviving off of savings due to a chronic illness. I haven't had a good source of income for multiple years. My Medicaid coverage just renewed. I rent a room in an undesirable area. I don't have a car. I won't get a chance to see my family for Christmas.
But you know what? I am "doing well". That's because "doing well" is more about your mentality than your material resources.
I hope you get the support you need in that area. Merry Christmas.
One example is Google Maps. There are two ecosystems: Google and Apple. Apple is kinda meh and lacks user information (e.g. menu photos for a restaurant). Google Maps has become simply horrendous, but here is the catch: there is no third viable alternative. There used to be Foursquare for finding stuff nearby, but that's gone. It's a very legitimate market. Businesses want to be found. Users want to find information and legitimate ratings. Money can be made, as Google is monetizing Maps.
Sometimes I wonder if the free market has stopped working.
Markets sometimes experience market failure: markets are a great tool but they’re no more divinely infallible than any other human institution.
High-paying work in the US right now, and software in particular, is in a bit of a doldrums where the locally useful needles are like, friction to leave the platform, and the absolute limit of remnant ad load before people throw their phone at a wall, and taking a big cut out of everything, and lying constantly about what foundation models can and can’t do and shit.
I suspect it feels this way because this is a pretty exceptional run where the Valley just doesn’t have shit for the LPs. It was Web3 and then the Apple Watch (or vice versa) and then it was WeWork and then it was the Metaverse and now it’s “AI”.
But it will get better: dumb, lazy, corrupt management eventually gets rotated out. This crop is really hanging on by their fingernails, but the writing is on the wall.
2GIS is fantastic, it's light years ahead of both on map quality, and has very detailed information about companies, various points of interest, restaurants, etc. But it only covers a few ex-Soviet countries and is very unlikely to come to your side of the world under the current political climate.
How, specifically, has Google Maps become horrendous?
>Sometimes I wonder if the free market has stopped working.
The "free market" means people have the freedom to compete. It doesn't mean they are guaranteed to do so. You might as well blame society for getting lazy and complacent.
It’s got the same disease as the rest of Google? What gained monumental trust by being a useful and accessible library of high-quality information is now a place to shove higher oCPM because fuck everyone?
I really admire Google over time, it’s a special place that still has engineers you can’t find many places.
But once they let climber TPMs write their own performance review by just ransacking the dwell? An even worse crop (Altman, Amodei) was inevitable.
Can you give me three recent instances where you were trying to accomplish something with Google Maps, and you weren't able to because it's gone to shit?
Without concrete examples your comment is just vibes.
You don't seem to be arguing that Google is a bad company actually. You're arguing that it was once a good/altruistic company, and now it's more ordinary. This strikes me as an entitled attitude. A company behaves in a good/altruistic way, people get used to it, they don't show appreciation, but they do start cursing if the altruism diminishes even a little. "No good deed goes unpunished"
The reason we like the free market is that there is guaranteed to be competition in a free market and that's guaranteed to lower prices and improve features, which are not things that happened in communism. If those things aren't actually true then why do we like the free market - how's it any better than communism?
"Guaranteed to happen" vs "doesn't happen at all" is a false dichotomy.
If you think a company is doing a bad job of serving its customers, compete with it.
If you're right, and you can do a better job of serving those customers, you are likely to become rich.
If you aren't interested in competing, maybe you don't actually believe it is possible to do much better.
I agree that in principle, there can be a role for the government in removing structural barriers to competition. I haven't seen that role particularly well-articulated in this thread, though.
> Here are several vendors where the 4th place vendor charges the same as the first place vendor.
That, uh, is kind of what you expect in a competitive commodity market, isn't it? I don't think that's the example you want. (No disagreement about app stores).
You raise a good point but I think it’s actually a great argument for the corruption. Modern cloud compute is priced at such absurd margins that AWS subsidizes the price dumping of the whole Amazon retail business: there’s a reason Jassy is second only to Bezos in clout.
When vendors 1-5 are all running 20, 30, sometimes 50 percent margins while their share varies from like 40% to like 5%?
The high profit margins, if those figures are correct, are a much more compelling argument than merely having similar prices. Those can be deceptive too, but I haven't looked into it...
Luckily both the US and EU are working on breaking Google's app store monopoly. Though the US might stop that if Google manages to convince the new top guy that it's anti-woke.
Many of the best engineers that I have worked with over the years have a discerning constitution which seems to innately allow them to identify high quality software, which is essentially a matter of taste.
The problem is that this disposition is not the norm for a technician, which is why I tend to prefer hiring and training engineers that are artistic, creative, and obsessive.
I once thought the same thing. Over time I realized those engineers weren’t necessarily good at picking the right thing up front. They were good at making it work despite any deficiencies.
Software will continue to be garbage until we expect more out of it. And that extends to the people writing it. This isn't a profession that is kind to people who want to do things by the book.
We incinerate the book every few years, but the fundamentals remain the same.
I've seen this same problem with many so-called low-code/no-code application creation tools (e.g. Betty Blocks). In their quest to cover every use case, they cover none of them well, forcing compromises and creating more real-code work for the actual application developers whose systems have to be accessed by these tools.
It would have been quicker and cheaper if the company had just hired more actual developers to integrate properly with existing systems (and would have resulted in more featureful, less buggy applications), but the prospect of paying lower salaries for less qualified people to achieve the same end result (as promised by the slopware vendors) seems to be a siren song of sorts to management.
You're missing the point of low-code/no-code solutions. Those are intended to be sold to executives who don't actually understand software, as proven by a prior history of buying other crap software. Whether it actually works or saves any money is irrelevant.
My utterly cynical take is that the way to win with software is to put as much money and effort into marketing and sales as possible and as little as possible into the actual product. Especially B2B seems to do this a lot in practice because that software is bought based on checklist items and demos by salespeople.
You're starting to realize why eng is a cost center.
The goal of the business is to make money, not to make a great product.
They will make the product better if and only if it will lead to greater profits.
If you can make more money hiring more sales and marketing people to sell a broken product than hiring more engineers to fix the product, you're going to hire more sales and marketing people.
So don’t work at a company that sells a software product to other businesses. Work at a company that has a business of its own, which runs more efficiently when the software is better.
That's an observation opposite to mine. Having worked at several B2B and several B2C companies, I've found that the former products were better and more interesting because they served an actual need. Sure, the B2C apps may have had better UX and design and maybe even more flashy tech, but they were frequently engaging in dark patterns and generally not that concerned about whether the users were actually benefiting (e.g. a language learning app with no research into how well users learn, but a big marketing budget and all sorts of conversion and retention metrics).
It's a balancing act. You may or may not agree that Oracle products are a good value but they generally have some good engineering behind them. Oracle also puts a ton of money into sales and marketing.
I think it's been true for some time that Oracle Database has largely been in maintenance mode, with relatively little new engineering or refactoring being done and more resources devoted to sales, marketing, and of course license compliance enforcement. "More lawyers on staff than engineers", as the anecdote goes.
I have definitely heard someone say, "well I tried Drizzle because the creators are good at shitposting on X."
I do not understand that at all, but maybe my reality is just substantially less mediated by "the conversation", and that's why I don't get it. (Drizzle is fine BTW)
I think that's actually not unusual or even wildly unreasonable shopping behavior in a market where:
1. Many if not most "customers" (JS devs) don't understand how to evaluate quality deeply (thus, quality signals aren't trustworthy, making evaluation more difficult and resulting in further acceptance of using unevaluated products)
2. There's an overwhelming number of choices, with deprecation, replacement, and eclipsing all being fairly common - potentially even year-to-year
You kinda just have to try stuff out and work with what you've got.
Winners are chosen because they tend to solve a problem easily for the average person. 99% of people don't care if it's written in rust or has the proper abstractions. It's, "I need to do X" and the winner does X easily. People here are very out of touch.
It's a little worse than that. Given two products that seem to do the same job on paper, the one with glossier marketing will win even if it is an unreliable pile of crap in practice.
Cost as well. Most people don't want to invest in a high quality product unless it's something that is clearly miles above the competition (e.g. Vitamix, Herman Miller or Leap Chair, etc.)
The JS community has no sense of quality. The community doesn't value things that are well abstracted or work well. I dread every moment I have to work in JS because everything is so badly done.
A lot of people blame the JavaScript language itself, but the longer I'm around in the world of web development, the more I think that the quality of JavaScript applications is dictated by the economics of said applications.
Off the top of my head, the best software I use seems to fall into two categories:
- Closed source software that requires buying a license to use
- Open source software that is specifically made for developers and promises to do one job well
Whatever falls in the middle of those two categories tends to suffer, in my experience.
If you think about it, web-based software tends not to fit neatly into either category. Most of it is one of the following:
- Closed source but are either too cheap or are free
- Open source but promises to do way too many things, and also too cheap or free (describes a lot of frameworks and design tools)
Web technology and JavaScript became the dumpster slut of software ecosystems. The end users are not given a big enough reason to pay for them adequately or at all, product owners care little about quality and reliability because it's way too easy to get a zillion low quality users to look at ads, and the barrier of entry for new JavaScript programmers is so low that it's full of people who never think philosophically about how code should be written.
> Web technology and JavaScript became the dumpster slut of software ecosystems.
I think an additional problem with the JavaScript ecosystem specifically is that external resources are extremely easy to access and their cost is usually borne by end user resources. Therefore they're too tempting for many developers to resist. Unfortunately the runtime environment of the end user rarely matches that of the developers, and seemingly "cheap" resource access at development/test time isn't cheap for the end user.
JavaScript is happy to pull in some library hosted on some third party service at runtime. For the developer/tester this ends up cached by the browser and/or at the edge of the CDN. A developer may also have topologically close CDN endpoints. This inspires them to pull willy nilly from third party libraries because to them they're cheap to access and they save time writing some utility function.
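A minimal sketch of the two delivery patterns described above (the CDN URL is hypothetical, and `debounce` is just a stand-in for any small utility a developer might be tempted to pull in):

```javascript
// (1) Runtime third-party import: nearly free for the developer to serve,
//     but every end user fetches it over their own connection, from a
//     server the developer doesn't control:
//
//   const { debounce } = await import("https://cdn.example.com/debounce.js");
//
// (2) Writing (or vendoring) the few lines yourself, so the delivery cost
//     shows up in your own bundle where you can see and optimize it:
function debounce(fn, ms) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Usage: a burst of calls collapses into a single invocation.
let calls = 0;
const bump = debounce(() => { calls += 1; }, 10);
bump(); bump(); bump();
setTimeout(() => console.log(calls), 50); // prints 1
```

The point isn't that option (2) is always right; it's that option (1) hides its cost from the person making the decision.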
The same goes for CSS, APIs, or media resources. With JavaScript the delivery is a client issue and costs can be pushed entirely onto the client. If pulling in an external resource(s) costs a developer non-trivial money to store and serve they'll put more effort into tree shaking or other delivery optimizations. They may omit it entirely.
I think this massively contributes to the LEGO piece construction of a lot of web apps. It also contributes to performance robbing things like tag managers that insert a bunch of incongruent plug-ins only at runtime from an unbounded number of sources.
Ehh... not sure I want to go in graphic detail here, but my understanding is that term denotes cheap and low grade jetsam. I think it was a more common term on the internet back when I was in high school in 2005.
I agree that many Frontend libraries are pretty intimidating to step into if you don't have a background in it.
Don't agree that JS community is bad, it is the largest community of any language by far, and it has the most money invested into it by a huge margin. There is a lot of trash but there is some seriously good stuff, and you can find 10+ packages trying to do pretty much anything you can think of.
When it's "the largest community of any language by far" - which is true enough - having "some good stuff" is a very low bar. The dev culture around JS and Node is notorious for cranking out poorly written libraries.
And yes, you can find 10+ packages to do pretty much anything... of which 8 are abandoned and no longer work on up-to-date Node versions or depend on other packages with known vulnerabilities.
JavaScript has the unfortunate situation of having years upon years of terrible standard library design, leading to people building lots of small libraries on top of those libraries to get things that were basic functionality in other languages.
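The canonical example of this is `left-pad`, a real npm micro-dependency that thousands of packages pulled in for what is now one line of built-in standard library (`String.prototype.padStart`, added in ES2017):

```javascript
// What used to require an npm package is now built in:
console.log("42".padStart(5, "0")); // prints "00042"

// Roughly what the micro-dependency did internally (a sketch, not the
// package's exact source):
function leftPad(str, len, ch = " ") {
  str = String(str);
  while (str.length < len) str = ch + str;
  return str;
}
console.log(leftPad("42", 5, "0")); // prints "00042"
```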
Then people started stacking more and more things on top of those libraries, creating a giant dependency morass of microdependencies that larger frameworks are now built on top of. And because all these frameworks do things just differently enough from each other, every larger library that a dev would want to integrate with those frameworks now needs a specialized version for each one.
In most languages, if you want to know how something works, you can usually dig into your dependency tree and by the time you hit the stdlib's optimization spaghetti/language internals, you'll probably have figured out what's causing your snag/what the library is expecting out of you. In JavaScript, you hit the dependency morass and have to give up. Most competent devs then decide to pick another language.
You can write very legible JavaScript these days, even without a framework, but it looks nothing like JavaScript used in a framework.
The other language I know of with this issue is, ironically, Rust.
Similar "lots of microdependencies" issue, born in Rust's case from the desire to keep a conservatively sized standard library. It's a smaller problem in the sense that Rust has stronger API contracts, as opposed to the absolute disaster that NodeJS is, but in terms of code comprehension you hit a similar dependency morass.
The one thing salvaging Rust for now is a lack of similar frameworks, but who knows how long that will last.
After a couple of years of doing this, you've built up a backlog of your own, bespoke library code that makes you into a wizard. People are amazed at what you can do and perplexed with how little time it takes you to do it.
Nobody else can understand how it's built, but for some reason that's not their problem? It's not like they're taking the time to understand how React is built, either. But as soon as you do something on your own, whoooa buddy. Cowboy programmer alert. It's not good engineering if it's a single, coherent, vertically integrated system. It's only good engineering if it's a mishmash of half-solutions teetering on top of each other.
You are about 4 years behind the curve, everyone uses JS Frameworks that bundle most of the libraries you will need for general dev together now.
I don't understand why people get so up in arms over npm modules, as if you could stand up code that does the same things in another language without having to manage dependencies.
Because most of the stuff in NPM sucks. I'm not going to keep going back to a store that has sold me nothing but shit so far just on your promise that somewhere, buried deep in the back, is a not-turd.
This feels like a knee-jerk false dichotomy. But in a sense, it's kind of right. I don't work in teams anymore. I manage them.
I still do a lot of programming. And I expect my developers to be competent enough to read other people's code and figure out how it works, what it does, how to use it, based on the tests and plenty of extant examples.
I don't want developers who can only be productive in libraries that everyone else is using, posting YouTube tutorials on, or feeding into LLM training corpora.
The problem with adopting other people's software is that you have to make it work for your purposes, all while accepting it was only ever originally designed for their purposes. And if that's open source and you contribute to it, then you have to make sure all your changes don't break other people's work.
But with my own libraries, I can break anything I want. I have, like, 5 projects using them. It's not a big deal to discover an architectural problem and completely refactor it away to a newer, better design, propagating the change to all the others that use it in fairly short order.
And I don't have to argue with anyone about it. I can just do it and get the work done and prove it was the right thing to do long before any Github PR would ever get out of review.
It's more that the low-quality people are way more numerous than in other ecosystems.
For example, in the 2000s and 2010s, javascript had the lowest barrier to entry by far thanks to browser dev tools, so a lot of what you saw was people with no prior experience teaching people with no prior experience. These people are still around, and while plenty have improved, they still form the basis of modern javascript practice.
This is exactly it. There is probably some bad embedded C floating around, but the barrier to entry is higher and thusly that world seems to be a lot more rigorous than the JS flavor of the day.
Embedded C usually isn't much better quality than JS, it's just less public. There's very little overlap with the relatively high quality OSS C available.
I never even look at stars. I hardly have any on my work, and that's fine with me. My stuff is of extremely high Quality, because I use it, myself. I'd actually prefer as few others as possible, use it, because then I'm Responsible to ensure that it works for them, and I can't just go in and do whatever I want.
I also use almost no software that has been written by others. I use two or three external dependencies, in my work. Two PHP ones, and one Swift one. All are ones that I could write, myself, if they got hit by a bus, but they do a great job on it, and, as long as they remain bus-free, I'm happy to use their stuff.
The one exception is an app that I just released, using SwiftUI. I needed an admin tool that displays simple bar charts of app usage data. I was going to write my own UIKit bar chart widget, but SwiftUI has a fairly effective library, so I figured I'd use it, and see how SwiftUI is doing, irt shipping apps.
I think that I'll avoid using SwiftUI for a while longer. It's still not ready, but it has come a long way, since my first abortive attempts at using it. The app works, but I did some customization, like pinch-to-zoom, and that's where SwiftUI kicks you in the 'nads. As long as you stay in your lane, things are sick easy, but start driving on the shoulder, and you are in for some misery.
And that's the biggest problem with relying on someone else's code. They usually punish you for any "off-label" use. Apple has always been like that, but they usually let you get away with it. I go "off-label" all the time, because I don't want my apps to look like Apple Settings App panels. SwiftUI doesn't suffer deviance at all. Just adding pinch-to-zoom was a bit of a misery (but I got it going, after several days of banging my head, and it now works fine). Some frameworks and libraries won't let you deviate at all. You can't have any pudding, if you don't eat yer meat.
I'm finding similar results. I used to just write everything myself when a need arose, but in trying to better spend my time I find myself checking what's out there. And inevitably I'll find something that will claim to do exactly what I need.
From there, I find two problems rather often. One, it doesn't actually do what it claims. Googling is fruitless because I must have been the first one to actually try it. Gotta love Googling an error and finding exactly one result - the source code. I sometimes open a bug report depending on how the project seems, and that gets anything from deaf ears to "oh thanks, we'll fix it in the April release" which is of little use to me now.
The other is something you touched on briefly, that the API or contract changes in unexpected ways over time. You can stave this off for a little while by pinning, but then you're missing out on bug fixes and new features. Which, especially for a middleware type thing, is usually a death sentence by bitrot.
The reason why software is crappy compared to physical products is that products come with a warranty and software doesn't. If you buy a blender and it makes a strange noise, you don't think twice about returning it, and maybe buying a different brand. If it breaks later, you can get it repaired for free under the warranty, or else get it replaced.
You have none of these recourses with software. Yes, there are trial periods but that's not the same as buying and returning, and they're usually shorter than return windows.
And there's never any guarantee that it will "just work". We have managed to convince consumers to put up with software that "breaks" in a way that they would never do with hardware.
Also there's no ceiling to the debt that can accrue from integrating a certain piece of software with a product, and when that debt persists for a while it just becomes accepted as part of the process. Whereas a blender has a debt ceiling of one blender, and the labor cost of maintenance boils down to placing an order for a new one occasionally.
RE: stars and DL counts, I'd say the best measure is checking the creator's other repos, their number of followers and the issue churn rate. Stars can be useful but only as a very rough process of elimination.
If the project appears to have several active contributors that would be good too.
I'm a star inflater and I feel a little bad about it. AFAIK GitHub does not have a good "bookmark" mechanism besides stars. So when I come across an interesting/useful project I'll star it to be able to find it later. My browser bookmarks have become a bit of a black hole where URLs go to get lost.
So hopefully I didn't bookmark a project someone else is trying to judge the quality of based on stars. Who knows how much technical debt or damage I've caused because GitHub doesn't have a bookmark feature that isn't gamified.
> I keep bumping into the fact that using mentally cheap signals of quality (such as stars or DL counts) almost never indicates the quality of the thing itself.
I find that docs are typically a really good proxy for quality. Solid docs with a clear expression of intent (design, usage, features) are usually a good sign.
Astro.js, VueUse, Quasar (it's ugly, but amazing).
The components are the most complete set I've come across and cover a really broad gamut of use cases. Docs are really good. Great toolset for building fast (though AI now may bias towards Tailwind and just generate whatever you need).