There shouldn't have to be any political reasons at all to enforce safety testing listings, and I would actively discourage anyone from taking that path regardless of their views on any given administration.
Far too many Chinese vendors just treat the "UL circle" as a required marking to forge, along with everything else they're forging on the items. http://www.righto.com/2016/03/counterfeit-macbook-charger-te... is a good teardown that highlights the problems with the fakes.
Amazon has had more than enough chances to solve this problem somehow or another, and it's clear they do not care about it at this point. They cannot claim ignorance after a decade of people highlighting the problems to them. As much as I don't like Walmart, I'd go to Walmart over Amazon for anything electronic (realistically, I'll buy from Newegg or B&H Photo for most things), because I have somewhat more faith in Walmart's supply chains to actually get me the thing I'm buying, vs the "binned product fraud" that seems to be Amazon's bread and butter these days.
"Fraudulent markings on poorly designed, unsafe electronics that lack all the safety systems that the markings indicate exist" isn't a political problem. It's a basic consumer safety problem.
Walmart has the same exact marketplace sellers and the same exact problems as Amazon (unless perhaps you go to their brick-and-mortar stores). The only solution is to shop only for brands that are known to be good and have a reputation to protect, which of course doesn't scale very well beyond our personal preferences.
While they make it a pain and constantly reset it, it is still possible to set the filter to Walmart-only on their website. I'm not sure shopping for known-good brands works, since counterfeits are so prevalent and many third-party sellers on Walmart seem to just be running an Amazon reshipping service.
I'll go to one of their physical locations if I want something that plugs into the wall. It's likely to be actually what it says on the box, if I do that.
If you tell them that they'll go and ask him for a bribe to avoid tariffs and regulation enforcement.
Better to stick to basic racism and conspiracy thinking and say the China batteries are leaked from a lab to intentionally burn down American patriots' homes.
No this is an example of pointing out hypocrisy. He says that others should do what he is not willing to do. If he were to practice what he preached, the bottom 50% of income earners would have access to his beach.
I have come to realize that over the years, though I still believe that wealthier companies like Apple, NVIDIA, Facebook, and the like could fund curiosity-driven research, even if it’s not at the scale of Bell Labs or Xerox PARC.
On a smaller scale, there is The Institute for Advanced Study where curiosity-driven research is encouraged, and there is the MacArthur Fellowship where fellows are granted $150,000 annual stipends for five years for them to pursue their visions with no strings attached. Other than these, though, I’m unaware of any other institutions or grants that truly promote curiosity-driven research.
I’ve resigned myself to the situation and have thus switched careers to teaching, where at least I have 4 months of the year “off the clock” instead of the standard 3-4 weeks of PTO most companies give in America.
If Zuck's obsession with VR isn't curiosity-driven research, then nothing is.
$10 billion in yearly losses for something that, by all accounts, isn't close to magically becoming profitable. It honestly just seems like something he thinks is cool and therefore dumps money into.
It's an example of Zuck's curiosity. When I refer to curiosity-driven research, I mean curiosity driven by the researchers, where the researchers themselves drive the research agenda, not management.
To be fair, though, Facebook (I mean, Meta) is a publicly traded company, and if the shareholders get tired of not seeing any ROI from Meta's VR initiatives, then this could compel Zuck to stop. Even Zuck isn't free from business pressures if the funding is coming from Meta and not out of Zuck's personal funds.
Back to Bell Labs and Xerox PARC, my understanding of how they worked is that while management did set the overall direction, researchers were given very wide latitude when pursuing this direction with little to no pressure to deliver immediate results and to show that their research would lead to profits. Indeed, at one point AT&T was forbidden by the federal government from entering businesses outside of their phone business, and in the case of Xerox PARC, Robert Taylor was able to negotiate a deal with Xerox executives where Xerox's executives wouldn't meddle in the affairs of PARC for the first five years. (Once those five years ended, the meddling began, culminating with Bob Taylor's famous exit in 1983.)
I bet (literally: I founded an XR game development company in February) that XR/VR games will indeed become a mainstream gaming platform in the next 5 years, maybe even next year. If or when that happens, VR may become as ubiquitous as the smartphone and replace a lot of monitors, especially if they succeed in shrinking headsets down to smartglasses, which they are clearly progressing toward.
If that happens, Meta gets 30% of the revenue associated with it.
If it does not, I'm pretty sure they can now make good smartphones, and they even have a dedicated OS. Either way, I'm pretty sure they can find a way to make money with it.
A Meta Quest 3S is in itself an insane experience for €330, and its current main disadvantages for gaming are the lack of players and the catalogue size. Even using it as a main monitor with a Bluetooth keyboard is "possible".
I would have found that improbable a few years ago, even as an enthusiast; now I can totally imagine a headset replacing my screen in a few years, with a few more improvements.
What about Musk and his push to reach Mars? While I stopped liking Musk long ago, SpaceX has given some steely-eyed rocket men and women a pretty successful playground.
But since the days of Bell Labs, haven't we greatly improved our ability to connect a research concept to the idea of doing something useful, somewhere?
And once you have that, you can be connected to grants or some pre-VC funding, which might suffice, given that the tools we have for conceptual development of preliminary ideas (simulation, for example) are far better than what they had at Bell?
I believe this depends on the type of research that is being done. There are certain types of research that benefit from our current research grant system and from VC funding. The former is good when the research has a clear impact (whether it is social or business), and the latter is good when there is a good chance the research could be part of a successful business venture. There are also plenty of applied research labs where the research agenda is tightly aligned with business needs. We have seen the fruits of applied research in all sorts of areas, such as self-driving vehicles, Web-scale software infrastructure (MapReduce, Spark, BigTable, Spanner, etc.), deep learning, large language models, and more.
As big of a fan as I am of Xerox PARC and Bell Labs, I don't want to come across as saying that the Bell Labs and Xerox PARC models of research are the only ways to do research. Indeed, Bell Labs couldn't convert many of its research ideas to products due to the agreement AT&T made with the federal government not to expand into other businesses, and Xerox PARC infamously failed to successfully monetize many of its inventions; many of its researchers left Xerox for other companies who saw the business potential in their work, such as Apple, Adobe, and Microsoft, to name a few.
However, the problem with our current system of grants and VC funding is that they are not a good fit for riskier avenues of research where the impacts cannot be immediately seen, or the impact will take many years to develop. I am reminded of Alan Kay's comments (https://worrydream.com/2017-12-30-alan/) on how NSF grants require an explanation of how the researchers plan to solve the problem, which precludes exploratory research where one doesn't know how to attack the problem. Now, once again, this question from the NSF is not inappropriate; there are different phases of research, and coming up with an "attack plan" that is reasonable and is backed by a command of the prior art and a track record of solving other problems is part of research; all PhD programs have some sort of thesis proposal that requires answering the same question the NSF asks in its proposals. With that said, there is still the early phase of research where researchers are formulating the question, and where researchers are trying to figure out how they'd go about solving the problem. This early phase of research is part of research, too.
I think the funding situation for research depends on the type of research being done. For more applied research that has more obvious impact, especially business impact, then I believe there are plenty of opportunities out there that are more appropriate than old-school industrial research labs. However, for more speculative work where impacts are harder to see or where they are not immediate, the funding situation is much more difficult today compared to in the past where industrial research labs were less driven by the bottom line, and when academics had fewer "publish-or-perish" pressures.
I thought I had read somewhere that 2 weeks of vacation was more common in the USA, at least for software companies, before things like "unlimited vacation". Which is right, 3-4 weeks or 2 weeks?
I think most "tech/bay" companies offer 3-4 weeks of vacation + holidays. Some have mandatory minimums a year and ability to accrue up to 30 days of PTO at a time in my experience. (e.g. not "unlimited", but specific amounts of PTO earned/used)
I agree with you that the modern corporate world seems to be allergic to anything that doesn't promise immediate profits. It takes more than a monopoly to have something like Bell Labs. To be more precise, monopolies tend to have the resources to create Bell Labs-style research labs, but it also takes another type of driving factor to create such a research lab, whether it is pleasing government regulators (I believe this is what motivated the founding of Bell Labs), staying ahead of existential threats (a major theme of 1970's-era Xerox PARC was the idea of a "paperless office," as Xerox saw the paperless office as an existential threat to their photocopier monopoly), or for purely giving back to society.
In short, Bell Labs-style institutions not only require consistent sources of funding that only monopolies can commit to, but they also require stakeholders to believe that funding such institutions is beneficial. We don't have those stakeholders today, though.
That's my conclusion as well, since now the closest thing we have to Bell Labs is Google R&D, which has a virtual monopoly on Internet search and is able to hire excellent, well-paid researchers [1].
[1] US weighs Google break-up in landmark antitrust case:
Oh wow - I saw this story a few weeks ago (absolutely horrifying) and ended up binging on caving incident stories. My takeaway was that regardless of how experienced the spelunkers are, something can go wrong.
In the midst of my binge, I also found this awesome(ly horrifying) YouTube channel of cave explorers. They have explored some amazing caves, but here's a video of them in some really tight spaces to illustrate the risk these explorers take (warning, may induce some anxiety): https://youtu.be/Us-XA2BRLgg?si=Lb62ZE1IHG4MD6K3&t=677
I'm scared of heights but I'm attracted to rock climbing, so I climb anyway and just deal with it. Caves also have a certain attraction, but those stories terrify me far more than the prospect of a serious fall. The idea of dying trapped in a tight space underground makes dying on a rock climb sound downright comforting.
Admittedly the main lesson I take away from that one is "if you can only make progress by going vertically down with no space to move your arms...maybe give that cave a miss".
It's kinda related to something I always try to drill into people about going outdoors in my country (NZ) "never descend something you can't ascend" and "never climb up something you can't climb down".
I know it sounds stupidly repetitive but both are true in related manners.
1) Downclimbing is far harder than climbing up.
2) If you made a mistake, you need to be able to turn back.
The descending rule matters primarily when you have options other than downclimbing, like jumping into a small waterfall's plunge pool.
Are you jumping into the pool because it's fun, or easy? Or are you jumping down because you can't climb down?
If you can't climb down, then how do you know you can climb back up if you made a mistake?
Jumping into plunge pools in a canyon is a good way to get bluffed, that is, trapped by cliffs and waterfalls: you hit a waterfall too high to jump down, and you can't climb out of the gorge you're in.
But the same goes when ascending: it is easy to climb up something and harder to climb down it. What's your exit strategy if you made a mistake?
So yeah, anything that involves compressing my ribs and immobilising an arm feels like it's far too committed, no ctrl-Z on this.
I find it hard to say whether that is more terrifying or the Sterkfontein accident: a cave diver gets lost, finds an exit to a non-submerged part of the cave, and waits for rescue, alone and in complete darkness. Until he starves to death after three weeks. He is found a few days too late.
Injun Joe’s death in Tom Sawyer comes to mind. He actually found the exit but it had been sealed shut to prevent more kids from getting lost inside the cave. Starved to death 3cm from freedom. (Apologies for the spoiler if anyone reading this had not read Tom Sawyer!)
There’s another massive entrapment from 1925: Floyd Collins. It captivated the nation over the radio for the duration. It's not as well known because of media gaps over time. Floyd also didn’t make it out, but the engineering efforts were largely similar to those for John Jones.
Disagree completely. OS package managers are one of the biggest sources of problems.
Basically, once you have an OS-level package manager, you have issues of versioning and ABI. You have people writing to the lowest common denominator: see, for example, being limited to the compiler and libraries available on an old Red Hat version. This need to maintain ABI compatibility has been one of the biggest obstacles to evolving C++.
The OS package manager ends up being a Procrustean bed forcing everything into its mold whether or not it actually fits.
And this doesn't even touch the issue of supporting multiple operating systems, and even multiple distros, each with a different package manager.
Rust and Go having their own package managers has helped greatly with real world usage and evolution.
This is a weird opinion, but I think that the OS package manager's complexity is largely owing to the Unix directory structure, which just dumps all binaries in /bin, all configuration files in /etc, and all libraries in /lib. It comes from a time when everything on the OS was developed by the same group of people.
By dumping all the same file types into massive top-level directories, you need a separate program (the package manager) to keep track of which files belong to which packages and to deal with their versions, ABIs, and so on. Each package represents code developed by a specific group with a certain model of the system's interoperability.
GoboLinux has an interesting play on the problem by changing the directory structure so that the filesystem does most of the heavy lifting.
I don’t care who is researching this: we won’t have AGI by 2027, or superintelligence by the 2030s.
> Based on trends in AI capabilities research since GPT-2, we are on course to expect AGI by 2027. Once AGI capability is available, if labs focus on automating AI research itself, progress in AI should accelerate. If similar progress can be achieved as the phase from GPT-2 to GPT-4, or GPT-4 to AGI, we should expect Superintelligence before the end of the decade.
IIRC OpenAI has a clause in their agreement with Microsoft that they can terminate sharing models if they develop AGI. So it might not be a good bet unless you believe there will never be shenanigans (either "we developed AGI but Microsoft won't let us declare it" or "we developed 'AGI', now we're free from Microsoft")
These “soft sciences” have actually been responsible for 100s of millions of deaths in the 20th century.
Many times, the practitioners of these "soft sciences" are cloistered away in the ivory towers of academia, away from the regular people whom they actually look down on because those people don't match their theories.
JetBrains IDEs can annotate your source with what the actual inferred type is.
And auto can help future maintainability if you need to change concrete types that share the same API surface.