The wording of the experiment's main driver, their especially bad emails, leads website operators to think there is a problem where there is none. This, on top of the research being entirely devoid of consent between the human parties involved, makes it a _very_ bad study, one that could well cost both the university and the research team money if some of the 'subject' parties actually went and got a lawyer to take a look at those shoddy emails.
In better studies, what is supposed to happen is: you propose taking part in the experiment, you get a signed agreement of some sort, and only then do you actually start experimenting. What happened here is more like some kind of YouTube prank than a useful information-gathering procedure.
In some countries, government support networks grow out of Christian charity schemes where a powerful person will discretionarily share a tiny portion of their time and wealth with the destitute of their choosing, and get to be enthusiastically cheered on for this sacrifice.
This often does not translate well to actual country-scale structural problems.
Especially if individuals in some of those countries also happen to consider paying taxes a way into government-scale decision-making.
This is kinda sorta equivalent to "from Apache HTTP Server to Phoenix Framework" though.
Like, yes of course an actual RAD framework has more things built into it than a server runtime. That's kind of why they exist.
Your programming environment should more or less reflect your requirements, strengths, and priorities, and starting with plain Node makes sense for a development style that OP just doesn't seem to want to follow, so he's spending a ton of time building 'boilerplate' to try and shim his process into his tools when the tools should be the ones doing the work for him.
Like, all that stuff about CRUD and auth. Just use Prisma and Passport. No need to twist yourself into a knot building the same abstractions over and over by hand; the things exist and are right there.
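For instance (a hypothetical sketch, not from OP's project; the model and its fields are made up), the usual Prisma starting point is a schema file from which the whole CRUD client gets generated:

```prisma
// prisma/schema.prisma -- hypothetical minimal setup
datasource db {
  provider = "sqlite"
  url      = "file:./dev.db"
}

generator client {
  provider = "prisma-client-js"
}

// A made-up User model; a Passport local strategy would
// look records up through the generated Prisma client.
model User {
  id           Int    @id @default(autoincrement())
  email        String @unique
  passwordHash String
}
```

After `npx prisma migrate dev`, calls like `prisma.user.findUnique({ where: { email } })` come for free instead of being hand-rolled boilerplate.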
It does not need /negative matter/ or energy, though.
What is needed, in the original, non-scammer-friendly version by Alcubierre is /something/ that can push spacetime in the opposite direction that mass and energy do.
At least in the original Alcubierre solution, he basically lays out the mechanism and says, paraphrasing: "hey, the numbers do work out. We just need something that doesn't make sense, or something that makes /the thing/ happen."
> We see then that, just as it happens with wormholes, one needs exotic matter to travel faster than the speed of light. However, even if one believes that exotic matter is forbidden classically, it is well known that quantum field theory permits the existence of regions with negative energy densities in some special circumstances (as, for example, in the Casimir effect [4]). The need of exotic matter therefore doesn't necessarily eliminate the possibility of using a spacetime distortion like the one described above for hyper-fast interstellar travel.
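For reference, the metric from Alcubierre's original paper (in units with c = 1; x_s is the bubble center's trajectory, f a smooth top-hat shaping function):

```latex
ds^2 = -\,dt^2 + \left[\,dx - v_s(t)\, f(r_s)\, dt\,\right]^2 + dy^2 + dz^2,
\qquad
v_s(t) = \frac{dx_s(t)}{dt},
\qquad
r_s(t) = \sqrt{\left(x - x_s(t)\right)^2 + y^2 + z^2}
```

The energy density Eulerian observers measure in this geometry works out negative wherever f varies, which is exactly the "something that doesn't make sense" part.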
The excitement here is partly due to the fact that this is both a trekkie thing and one that was previously not even supposed to numerically make sense.
It's likely this "not requiring any specific fringe-physics thing" is what's causing the idea to catch on and continue to be elaborated: with the mechanics specified well enough, then maybe, and only maybe, it might be possible to create at least an analogue of it, and that would be awesome.
Any real warp drive is going to be kind of like setting off a bomb in the middle of a lake and then surfing the water to the center on a jetski, and hoping you can jump off the jetski before you hit the other stuff that is also rushing in with the water on the other side.
I like this as a placeholder look and feel? Looks fairly pretty, lets me focus on building the thing, and is easy to throw away when actual theming takes place later.
Yes. Here's where I did it with IPFS libs a few years back: https://github.com/cretz/tor-dht-poc. The libs have improved since so it's probably easier.
This link here https://aris.iaea.org/PDF/MSTW.pdf seems to provide a little bit more info on how they manage to keep the corrosion within acceptable parameters.
It involves adding an unnamed chemical compound, functioning as a reducing anode, straight into the fuel, which is a fairly awesome thing I wasn't aware was possible.
Morels grow under a whole host of trees, some of them of very high commercial value, so I wouldn't bet on that.
What's more, one simple way of getting a bunch of morels to grow on a specific patch of land turns out to be setting it on fire, so growing them in a basement would probably even end up being a net positive.
Google famously sucks at customer service, mostly because they've figured that they'd have to pay a moderation team more money than whatever they'll lose from the few and far between problem cases there might be.
There is no fixing that without massive corporate reorganization, which won't be voluntary, and thus will take 10x the time with worse results. At this point this is just the Google way of doing things, and making smaller fry follow these rules will probably only end up making Google bigger.
The one real way out, if there is one, might be making one-click self-hosting viable again.
The problem is their monopoly power over video distribution on the internet: disgruntled users have nowhere else to go. All alternatives are stifled by the monopoly.
People have tons of alternatives. The problem is that ad subsidies pay more than most viewers are willing to pay after generations of internet users have been conditioned to think of content as free.
2.) In addition to being convenient, you don't need to pay to host on YouTube. It would be pretty trivial for people to host their own content--much of which is never going to pay anything to speak of in ads. But then they'd have to pay for the hosting.
#1 is definitely true — I was thinking of that mostly as an artifact of the cost point. There's a really strong ratchet effect: YouTube gets a ton of content because they not only provide hosting for free but also incentivize a ton of creators to direct traffic to YouTube since they get paid that way, which further encourages people to think of YouTube as where they go for video. Even places which self-host tend to report much higher numbers from YouTube than their own website, and that's a very hard thing to compete against.
From what I understand, what YouTube does is super, super difficult to do at the scale and quality they do it; I do not envy their competitors. Given that, I don't believe hoping for an actual competitor to pop up is reasonable; instead we should start thinking about how we can ensure fairness, inclusivity, and democracy(?) in what exists.
I'm not sure what the best way to go about it is, but I know that advertiser and corporate pressure being the only thing that matters is not working (see: dislike count removal). I would not be surprised if, with YouTube continuing to be unable to regulate itself, we see governments stepping in and regulating it for them.
There are two reasons why P2P has been generally less successful than a lot of us hoped around the turn of the century. The first is largely technical, namely the small number of people with plenty of uplink bandwidth, and something like municipal fiber would actually help a lot there. When people have 1,000/25 Mbps connections they have less capacity to share in general, and with things like video chat and gaming being common, the amount of idle capacity consistently available is relatively limited.
The other is harder: some ISPs outright blocked or throttled P2P protocols entirely (much to the annoyance of, say, Linux users torrenting ISOs) and most others will have some mechanism to respond to copyright claims. The latter is really hard for a YouTube competitor if it allows anyone to upload content — if the network attempts to auto-mirror content, you are potentially at risk if someone uploads something illegal; if it doesn't, only the most popular public content will be well-replicated.
The problem is an incumbency/network effect problem. Sure, the ad-supported model helped them get off the ground, but it's not what's keeping them #1. The larger your library, the greater your value to viewers, and, thereby, the greater your value to content creators, increasing your library, increasing your viewership, etc. Virtually every tech company out there has massive incumbent advantages. Unless you can offer a differentiated service (eg: Tik Tok vs Instagram), your business is DOA.
Nobody has been "conditioned" to think anything. If they opted to make things free at the start of the internet era then it's now their problem to figure out how to pivot to actually paying their expenses and turn profit.
It's not the users' problem per se (unless they get aggressively pushed out but then they go to alternatives so that company still doesn't make profit out of them).
Elementary psychology -- the kind that's taught in 9th grade where I live -- could have told them that it's super hard to give people something for free and then take it away. People don't react well to that and never have.
> Nobody has been "conditioned" to think anything.
…
> Elementary psychology -- one that's studied at 9th grade where I live -- could have told them that it's super hard giving people something for free and then taking it away. People don't react well to that and never have.
That was in fact my point: since the mid-to-late 90s, Internet users have become accustomed to thinking that they pay for internet connectivity but that most of the content they view will be free either because it's been subsidized by investors or advertisers. There are exceptions but they generally tend to be either connected to the physical world or a handful of areas like commercial movies or gaming where the major rightsholders were largely successful at preventing people from getting used to free content.
That puts you in exactly the dynamic I described: very few people think that YouTube-level content is something they pay for, but their expectations are set by the kind of resources that billions in ad profits can support.
From what I read they made well over $19 billion in revenue in 2020. They are making plenty of money they can afford to put towards the problem. They just don’t want to take from their own bottom line.
More Plates More Dates, a workout science YouTuber who sometimes talks about 'enhancement', recently got banned off of Bitchute. So it isn't a bastion either.
I've heard of all of those - mostly through my friends.
However, as to your larger point - you're right, most people don't know about these.
I would argue that those who want to escape Google's grasp should be willing to encourage person-to-person sharing as an alternative to the recommendation algorithm. Sure, 99.99% of the population isn't browsing PeerTube, but I bet you that most of them would be willing to watch a video hosted there if it was suggested to them by their friends.
This might be a good idea in general. Think of how much better ecosystems we would have if people relied on their friends to make personalized recommendations to content on any platform, as opposed to engagement-optimizing automated algorithms that exclusively recommend content on a single platform.
By and large, hosting isn’t the issue whether text, audio, images, or video. Yes, video is more intensive than the others but hosting taken by itself is a solved problem. While podcast directories are centralized the hosting is pretty distributed. Not so easy is finding things and the fact that some level of moderation is probably needed somewhere in the system. And you’ll have to pay.
Unless you have a following, probably no one will see it on YouTube either. I generally post on YouTube but, if I were concerned about being blocked, I'd probably just host on a VPS.