Sorry, these are actually terrible. Some of the biggest questions off the top of my head:
1) How do we build machines that repair themselves?
2) How can we get human bodies to automatically repair and regrow damaged limbs and organs?
3) How do we build an energy source that is the same volume and mass as a litre of petroleum, is as stable and portable as petroleum but possesses 10x the energy density?
4) How can we get the human immune system to automatically develop suitable antiviral antibodies to combat a viral infection?
5) When will heated toilet seats become standard in North American homes (or at least north of Virginia)?
Put human consciousness into silicon. Now all problems become one-dimensional. (2), (4), and (5) become moot, (3) is quite unnecessary because time is now more or less expendable, and we can just work on (1).
Agreed. The questions specific to my field do not seem interesting to me at all as a researcher. They are either way too vague and ill-defined to qualify as serious research, or way too niche to have a "big social impact".
Also, IANAB (I am not a biologist), but it surprises me that unravelling the genetic architecture of complex phenotypes is not represented on this list in any way, shape or form.
It doesn't really seem like an important question anyway. Suppose 150 is a large enough population for humans to reproduce naturally without introducing too many genetic defects. It's a cool fact, but 150 is nowhere near enough to have people who specialize in all the skills needed to support modern life.
A more interesting question might be how many humans are necessary for some target rate of quality-of-life improvements (and how that rate responds to increasing or decreasing the population). This is probably more of a political/economic question though.
The year 2020 is providing all the Big Questions we can handle. How do we stop a pandemic? Climate change driven fires the size of a continent? Reforming the police to stop racial violence? Fighting bots and trolls that threaten democracy?
Being ad-driven is an important component because a big part of the incentive to maximize engagement is to drive ad impressions. If social media companies were compensated in some other way, the content might be different.
There will always be trolls. What the ads fund are armies of cognitive scientists making you addicted in the name of “engagement” aka ad revenue, consequences be damned.
I detest ad-driven models and especially ad-driven social media, but this could actually make the problem worse, unless they can profit by a different means. It takes tons of expensive resources and talent to identify, stop, and prevent information/psychological warfare campaigns like these in a haystack of hundreds of millions to billions of users, especially when your adversaries may be superpowers' world-class intelligence agencies and their affiliates, or dedicated intelligence firms like Cambridge Analytica.
It's hard enough for the existing companies. (And yes, I think they're genuinely trying. Or at least I think Twitter is.)
Just in case you’re not trolling: this is an impossible ask. Advertising is a vital step in the success of a business and has been going on for literal millennia.
I'm not saying you're wrong, but this type of argument, where I'm asked to care about someone's business model, is really unconvincing in general. Someone else telling me that they need something sketchy to make money isn't persuasive - it just makes them seem like a sketchy business or person.
What I'm certain will never happen even though I think it would be great would be to ban any sort of emotional appeal whatsoever in advertising. It's just manipulative.
I agree with this sentiment. However, I think advertising is necessary, because what if you want to make a new candy bar? How will people know about it? The flashy wrapper you put it in? A fancy end cap store display? A billboard outside the shop you sell in? A television commercial? Is Steve Jobs talking about iPhones an ad? Is Bill Gates talking about diseases an ad for Microsoft, because they are intrinsically linked? There's no easy way to articulate what should and shouldn't be allowed.
This makes the argument that the need to convey product information is now fully accomplished by search engines, and advertising only adds harm / manipulation.
Ah, I was thinking the west coast of the U.S. Sorry for the U.S. centric comment.
From my understanding, the large fires in Australia probably are climate-change driven, given that they seem perennial, but I don't actually know enough to say for sure.
The U.S. was practically begging for huge forest fires though. Thus my comment.
All the computer science questions are about ML. Are there no impactful computer science problems left outside of ML? I also get the impression that the author did not really understand the ML/AI research questions. Otherwise, they would probably have chosen more general, high-level questions.
As a psychologist/cognitive scientist (my identity depends on my mood/day), I was equally disappointed with the sections on psychology and neuroscience. Some of the questions have been answered (at least partially, such as the importance of domain knowledge versus "processing power").
Overall, it was a very odd look into a discipline that I am familiar with, which leads me to think that the ideas are not very promising for any of the fields I don't have expertise in.
The "effective altruism" movement (of which 80,000 Hours is a part) has long been preoccupied with preventing a malevolent, superintelligent AI from killing or enslaving humanity (they call this fostering "friendly AI"). Their position is that this is a low-probability but extremely severe risk that few people are working on preventing.
Whether AI is really more dangerous than, say, pandemics or asteroids, is left as an exercise for the reader.
AI safety isn't an EA "preoccupation"; it's just weird enough and noticeable enough that it's easy to mistake existence and prevalence. It's also not even their weirdest position.
The first question on their list is about the 'problem' of wild animal suffering - and I've personally seen EAs argue that, because some animals are carnivorous, nature should be destroyed.
That's not even the weirdest position EAs take. Look up Brian Tomasik. Specifically, his paper about the possibility that electrons might suffer.
Concern about superhuman AI is one thing; bullet-biting utilitarianism is another entirely.
(This isn't the only place where their philosophical framework is stuck in the British Empire; they also tend to take a teleological view of history and moral development, and believe that their views are the self-evident progression of ethical development that every culture and civilization will come to eventually. They may not be as bad about this now as they used to be - there are questions about China now - but I don't think they're quite to the point of coming to terms with cultural contingency yet.)
It's a preoccupation because EA is mostly a rationalist thing, and Eliezer Yudkowsky has had tremendous influence on that movement through his involvement with Less Wrong. His views on AI have more or less become a mainstream position among them.
80k hours is more a cultural snapshot of the rationalist movement than anything.
> Are there no impactful computer science problems left outside of ML?
Well, do Arden Koehler or Howie Lempel have 80,000 hours of CS research experience? It looks like they have 80K hours of experience thinking about the best way to spend 80K hours. Whoops.
On the contrary, the massive influx of applied research driven by deep learning hype means there's probably a lot more high-impact stuff outside of ML than inside of ML. Quantum computing, bullet-proof crypto implementations, safety engineering for robots, and even hardening the electric grid/key compute infra against solar flares all seem much more important. Also much less "sexy".
...and even most of the AI/ML problems are scraped from a few papers from corporate AI labs with the best PR skills, with an eye toward AGI risk. But 1) AGI risk is massively overblown and 2) trying to solve those problems in general has a "silver bullet" vibe. All of the solutions to "AI Safety" questions are going to boil down to "do good software engineering", which is often more about processes and org incentives than technology or science. A really good automated HiL testing setup is going to be a lot more important than "preventing reward hacking". If you don't believe me, go ask anyone who's actually built a robot.
So, I think maybe "AI Safety/Robustness broadly construed" could be one or two entries in a list of important problems in CS. But certainly not half the list as they are here.
TBH, one of the highest impact problems in CS is convincing young PhDs to not waste their time applying deep learning to every god damn thing regardless of whether it's the right tool for the job.
Needs a section on engineering research, specifically energy. Cleaner energy and energy distribution is the #1 global concern, in my opinion, and has been for a long time.
Ha, such a list is a great thing to argue about. But nevertheless, I am disappointed by the politics section.
The Big Question in politics is, as it has been for 2 centuries now, how do we deploy the brightest minds on the biggest problems.
Our biggest problem in politics is that decisions have to make sense to the median voter. If someone comes up with a process that can transcend that without the well understood failings of dictatorship, that would be game changing beyond game changing.
The world has too many problems with well known solutions. People keep reinventing square wheels instead of being honest about what worked and didn't in the past.
"The Big Question in politics is, as it has been for 2 centuries now, how do we deploy the brightest minds on the biggest problems."
I am sorry to have to inform you, but this is a naively meritocratic view of modern societies, one that every single datapoint points against. That is not the biggest problem.
The biggest problem is: what do you do when there is an ongoing state capture of your country by oligopolistic, pan-continental corporate structures?
Stop assuming that we just have to find a solution. Solutions exist. We can have co-owned drone farms sustaining entire villages. We COULD have that. We already have tens of thousands of bright scientists who work on drugs where the patents go to the corporations, not to those who actually did the research. Profits, however, are more important to those who make decisions about our ways of living.
But the second biggest problem is that we can't agree on anything, including what the biggest problem is; however this is likely a symptom of the biggest problem.
This seems to me like a fundamental inability to coordinate without hierarchy, which probably has something to do with a limit on the number of trust relationships each human can maintain. The rest may be a consequence of that.
For instance, no matter how much you hate the President, you have no idea who you would appoint to be deputy secretary of interior farm management, so you’re not really offering an alternative solution.
> We already have tens of thousands of bright scientists who work on drugs where the patents go to the corporations, not for those who actually did the research anyway.
That's because corporations do the part of drug research that actually matters, viz. translational research and safety/effectiveness studies. And it would be just as expensive if non-profits were doing it, so talking about "profits" is just not relevant.
Both parts matter, but you are right that it wouldn't be significantly cheaper to do the translational and regulatory work publicly or through non-profits.
What probably would change is the choice of which conditions and drugs to target, and to what degree. It could conceivably shift from an ability-to-pay focus to an impact focus.
Exercise left to the reader to decide if that would be desirable and/or likely to be more successful overall.
I am mostly countering the libertarian "innovative hero" talking point that paints progress as an inherent part of profit-driven late stage capitalism.
Humans are cooperative by nature. We would have scientists even under different production systems. Therefore, I think it is a default failure state to just hope for heroes to emerge. Obama was portrayed as such, and he failed to tackle most of the US's pressing social issues.
There is no evidence to indicate that we can reliably identify the brightest people, or that they would do better at solving political problems. For example, Donald Rumsfeld and Dick Cheney both seemed quite bright in 2003, at least to the extent that they probably would have scored highly on standardized IQ tests. And yet they made major decisions which seem obviously stupid in retrospect. So how much did their intelligence really help?
High intelligence in political leaders too often leads to hubris. It must be tempered by some level of emotional intelligence, humility, and wisdom.
Yeah, it's the classic confusion of expertise vs wisdom. Wisdom, I'd say, shouldn't even be considered an individual trait, as it comes from the aggregation of different perspectives towards a more holistic understanding. Only the mix of time, intellectual honesty and being observant distills that. A single individual has too many blindspots; a whole society, too much noise.
If you mean raw IQ, the brightest minds often lack the wisdom and good judgment necessary for sound political leadership. Orwell wrote eloquently about this, but unfortunately I don't remember where so I can't find a link.
In my opinion, the big question in politics is: "What is the big question in politics?" We spend so much time trying to figure out what's even important in the first place, and when there's this much disagreement on what the big question is, figuring out which big question is the really important one seems like a good place to start.
That's a good normative standard, and one I wholeheartedly endorse, but the problem in politics is also that your normative standard there is far from universally endorsed. There is a significant (and, during times such as 1930s Germany and pre-1860s USA, very powerful) group that does not believe that the suffering of all individuals should be minimized. Less extreme but perhaps more pervasive is a value structure that discourages developing "potential" to the fullest in lieu of knowing your place and not going outside of your lane. That means that if you hold to that normative standard, your political life is likely going to be less about coordinating society to meet that standard and more about trying to convince other people that your standard is correct.
Yes, and the harsh part is that it's all the same problem. To "convince other people that your standard is correct" is the same as coordinating society, which is to repeatedly "convince other people that our solutions/approaches/ideas are acceptable (or determine if they really are)" for each problem we face or proposal we want to try to apply (unless you are a dictator or capable of mass deception, of course). It might be an impossible task. But with all this in mind, and as a personal preference, I can say that I prefer to spend my time trying to come up with better solutions for coordination than trying to succeed at coordination for a single point in my catalog of ideas. Of course I'll share them publicly and all that, but my political life won't fall into that. Which wouldn't be that terrible either, but I personally prefer to fail at the big problem instead.
Lots of competing opinions on what the big question in politics is, so I figured I'd add my own :)
To me, the biggest question in politics is: how do we build political / societal systems whose impact on human society is consistently positive and equitable?
This implies _fundamental_ changes in those systems. To wit: publicly-funded R&D returning value to the public purse, and not just to private companies (see: Mazzucato's "The Entrepreneurial State"); effective and highly coordinated wealth / corporate tax regimes (see: Piketty's "Capital in the Twenty-First Century"); complete transformation of education and its relationship to both human psychology and technology (see: Papert's "Mindstorms"); eradicating the very concept of "good" and "bad" neighborhoods through massive and equitable investment in schools, housing, mixed-use zoning, mixed-income development, multi-modal transit, functional green space, etc. (see: Alexander, Ishikawa, and Silverstein's "A Pattern Language"); and, backing it all, a modern, responsive, truly digital government that is empowered to invest in tools and technologies to continually improve both internal work and external service delivery (see: Cyd Harrell's "A Civic Technologist's Practice Guide").
I'm also a bit disappointed by the philosophy section, even though I work on two of the topics mentioned. The one on moral uncertainty is quite trendy, though (not one I work on). I wonder how they came up with the list? There's lots of AI in it, too much in my opinion. It's not as if real AI is coming anytime soon...
In my opinion, the Big Question in politics is: what are corporations not doing well, and how do we do the best job of doing that? (if not directly, then by proxy/with incentives)
Structurally, corporations tend to underinvest in R&D with big future payoffs and societal benefits/positive externalities that are hard to capture. That's where the government should step in, and it has been slacking on that front for the last 20 years.
Another job of government is to ensure the proper functioning of markets. Anti-trust is an important part of that. The government has been slacking on that front for the last 40 years.
That's without getting into education, transportation, and inequality, all of which are very important problems that are either unaddressed by or exacerbated by corporations. I don't think we have a major problem with identifying existing talent, and if we do it is a comparatively lesser issue.
There's a simple explanation for that though: the government has more money, and socialises the profits.
At one point the US government was spending 4% of GDP on NASA. Not everything NASA did was possible to commercialise, but entire industries from satellite communications and air purifiers to baby formula and memory foam were created based on solving difficult problems for spacemen. All of these now mass market industries pay tax on their US operations.
Even if a corporation could spend that sort of money on a space program it couldn't hope to collect a return from every industry that grew up out of that space program.
Which paradigm-shifting innovations are you counting there and what was the ROI on the programs which produced those innovations? Why would it be better to have corporations doing that R&D instead of government?
This is a fair question, but it should be accompanied with another one: what do the corporations do well? Will our alternative option make some things worse? What's our criteria of success and what's our backout plan if the criteria of success is not met by a certain timeframe?
I don't disagree with any of that. Corporations do many more things much better than government. That is always a premise of mine, and I usually don't state it explicitly in these kinds of debates.
If anything, corporations themselves should be begging the government to spend more on basic research. If it doesn't, it will not reduce taxes by the amount that it doesn't spend. It will merely reallocate research funds to pure defense or some other lower return endeavor.
There is a symbiosis between capitalism, innovation, and government. Capitalism without good governance turns into crony capitalism. Innovation would decline without adequate and competent* government expenditure (*historically there's no reason to believe that the government is incapable of profound basic research - quite the opposite). Good governance without a culture which appreciates market economies can slowly devolve into bad governance and a basket case country.
- Degree of divergence between popular will and policy (thanks Citizens United)
- Level and direction of inequality
- Bargaining power of labor (automation does not fully explain the cratering of this bargaining power)
- Level of corporate profits (compared to the past)
Generally, I'd also look at the correlation between increased productivity and happiness. If, in aggregate, we're producing more year by year, and yet people are becoming more miserable and anxious, something is wrong.
Unfortunately, the ‘brightest minds’ often underperform the naive median, sometimes disastrously. I see many problems that arise from your solution, that is, well-considered research and reasoning, dumbed down to coercive propaganda for ‘the masses’ until it no longer resembles the original substance, but somehow now benefits the messenger. As in: “...If you can bear to hear the truths you’ve spoken, twisted by knaves to make a trap for fools...”
Let's say we've got the brightest minds on the biggest problems.
They come to you with the solutions to the largest problem: nationalize the energy industry globally, convert nearly entirely to mass transit systems, dramatically decrease consumption in the global north, and convert to a state-run economy because capitalism will collapse under decreased consumption.
What now? Is that going to happen just because the smart people are working on the tough problems? Of course not. The real biggest problems are not actually a matter of smart people knowing the solutions, but of competing powers (that are much stronger than you as an individual), propaganda, exploitation and human subjugation that prevent balancing against those powers, war, etc.
Worse, while it may seem like a good idea to have the wonks working on the biggest problems, consider the immediate precursor: who decides on what the biggest problems are? Right now, that will end up being heavily influenced by the moneyed interests that create the biggest problem. You'll stick entire generations of the biggest and most clever nerds on making better batteries and electric cars, but ever-growing consumption will make all of that moot and the descent into collapsing ecosystems and agriculture all the more inevitable because they were focused on a lack of tech (stuff to sell / consume) being the biggest problem rather than questioning core structures (an imposed economic requirement that consumption always increase lest we all go hungry and die).
Extreme life extension looms large in my mind and I don’t have much trouble imagining that many people hope for this as well. My gut tells me that it’s possible but my mind says we aren’t the ones to get it.
I feel like the barrier here is more of a political one than a matter of research or feasibility. I never quite understood why we are not throwing money at this issue. I mean, I get that we collectively doubt it could happen, but the alternative is the big sleep anyway, so why the hell not try?
Every day we feel tantalizingly closer to solving each of these problems but each decade seems to pass by quicker than the last. Perhaps we are a terminal species.
Maybe the real big problem lies in social science or psychology. Better stewardship or collective modeling. Maybe we should start force-feeding lsd into people who actively vote against and stand opposed to scientific progress. There’s got to be more in life than just withdrawing a paycheque or not and dying eventually. Sinking into the past and eventually erased from living memory. I once saw a picture of a guy in the 1920s who bred horses in the Greek mountains. This vague memory is all that’s left of my great grandfather. I don’t even know his name.
As a practical matter the most reliable way to extend lifespan today is by delaying the onset of chronic diseases. People who live a long time tend to be the ones who start suffering from chronic disease later in life. Genetic and environmental factors play a role, but most chronic diseases can be delayed through lifestyle changes.
From a microbiology perspective it's really tough to prevent cellular senescence without also increasing the risk of malignancy. Those two factors are in direct conflict.
“It's really tough to prevent cellular senescence without also increasing the risk of malignancy.”
For sure it is. But even the very language you use gives me hope. Our understanding of disease and aging is very much advanced compared to 100 years ago. We are making progress, that’s for sure. Our understanding of exactly why extreme life extension is difficult is becoming more sophisticated.
I think the topic of prolonged life is really difficult. On one side I am curious like, what does the future look like. On the other side I am like, why even stick around? What would the future bring for a 190 years old version of myself? Will anyone care about what I have to say? And will it even be relevant what I have to say?
Think of it the other way around like, would a bunch of people from the 1800s really be of much use to society today? Or are we better off that we get to stake out the near future for ourselves and likewise that our descendants are better off getting to stake out their future? I don’t have an answer.
At the same time, it's like, what about the other way around? Clearly 30 years of life, for example, is way too short. So just because 70 or 80 years or so is the norm now in developed countries, that doesn't mean this is the ideal either.
Like I say, I find it really difficult to try and decide upon.
“Would a bunch of people from the 1800s really be of much use to society today?”
Maybe not everyone, but go back another hundred years. Would Newton be of negative value today? I seriously doubt it. I adore James Joyce, who was around in the 1800s. But let's not consider people like that for a second; consider the random guy on the bus last year. A no-name, non-contributing zero. I want him to travel the stars. I don't need a random guy on the bus to produce or display worth in order to not only live, but live well - live better than any king ever has. I would like that for you. You don't need to show your papers to stick around another year as far as I'm concerned. I would love for you to experience unending pure hedonistic pleasure for centuries, free of charge. You don't need to say anything if you don't want to or can't. I don't think anyone should have to work for it if they don't want to.
I strongly recommend reading The Structure of Scientific Revolutions by Thomas Kuhn for an excellent treatise on why the answer to "Would Newton be of negative value today?" is almost certainly yes (in the context of scientific progress). It's a dense book but it's a seminal work on the philosophy of science and Kuhn goes to great lengths to dig into how scientific revolutions developed and entered the mainstream.
Long story short, Max Planck was on the money when he said: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”
Ah, but we are also judging Newton’s future and value where he would still act as a person with a short lifespan. Living for hundreds of years would have a profound effect on the psyche. I’m prone to think he would be a little bit cooler than we are giving him credit for regardless. I’m not demanding calculus part two or off with his head. Also, would Newton really be a tyrant? A lot of this has a particularly precrime vibe to it. But I’ll accept your point about Newton. He was a bit of a dick.
Part of the reason why old fuddy-duddies are obstructionist toward science is the fact that brain plasticity drops off sharply somewhere in your mid-twenties. There's zero reason to believe that we won't also find a solution to this issue if we manage to fix aging.
There is zero evidence that keeping neural plasticity at adolescent or young adult levels will help, especially considering it's mostly an issue of power dynamics (preserving social status), not the merits of the science. Based on our studies of childhood trauma, increased neural plasticity is likely to hurt as much as help.
Don't worry about it, there's no need for you to decide upon anything. There is zero chance of seeing a dramatic breakthrough in life extension within our lifetimes. So the issue is entirely a matter of hypothetical philosophy.
I wouldn't say zero chance. And even if, for the sake of argument, we take it as true, both the statement "There is zero chance of seeing a dramatic breakthrough in life extension within our lifetimes" and "People alive today may live to be biologically immortal" can be true at the same time.
Every minor advance in medical technology that effectively increases lifespan gives those living more time on the clock to hit one of those major breakthroughs. If we had a sudden breakthrough that allowed us to reverse some aspects of aging, resulting in a lifespan increase of 20 or perhaps even 10 years, I would think that those additional years, plus the new public attention, would give enough time to find further improvements. Eventually you reach the point where the average human lifespan increases by more than one year per year. At that point, even if you don't yet know how to make people biologically immortal, you're going to drastically decrease the chance of death, to the point that many of those alive will live to be biologically immortal.
To a lesser degree this applies to all sorts of medical advancements that increase lifespan. All sorts of things could combine to allow those currently living to survive long enough to make use of true longevity treatments. Cures for cancer, Alzheimer's, and other such symptoms of aging bring us closer to this possibility. At this point it's a lottery game, and I wouldn't bet against at least some people winning the jackpot.
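The "more than one year per year" threshold can be sketched as a toy model. To be clear, the numbers and the `years_survived` helper below are made up for illustration, not any real actuarial model:

```python
# Toy model of "longevity escape velocity": each calendar year a person
# burns one year of remaining life expectancy, while medical progress
# adds `annual_gain` years back. If annual_gain >= 1, expectancy never
# runs out. All values are illustrative assumptions.

def years_survived(remaining_expectancy, annual_gain, horizon=1000):
    """Count calendar years lived before remaining expectancy hits zero,
    capped at an artificial horizon so the loop always terminates."""
    lived = 0
    while remaining_expectancy > 0 and lived < horizon:
        remaining_expectancy -= 1.0          # one year of aging
        remaining_expectancy += annual_gain  # medical progress that year
        lived += 1
    return lived

# Below escape velocity: progress slows the decline, but the clock
# still runs out (50 years of expectancy / 0.5 net loss per year).
print(years_survived(50, 0.5))   # -> 100
# At or above escape velocity, expectancy never reaches zero, so the
# loop only stops at the artificial horizon.
print(years_survived(50, 1.0))   # -> 1000 (capped by horizon)
```

The point of the sketch is that nothing dramatic has to happen in any single year: a steady net gain at or above one year per year is enough, which is why a chain of minor advances can matter.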
That's not how it works. There are multiple limiting factors which all conspire to limit human lifespan to an absolute maximum of about 120 years. Minor advancements in any of those individual factors won't significantly increase lifespan. One of the other limiting factors will just kill you soon after.
Look at the actual evidence. For centuries life expectancy in developed countries was on a general upward trend. Now it has flattened out and is no longer increasing. There is no scientific basis to think that curve will trend up at a rate of more than one year per year in our lifetimes. If you believe that will happen then it's based on faith, not science.
Can you imagine: on the one hand, post-industrial countries inhabited by a bunch of multi-centenarians; on the other hand, poor countries overpopulated by youngsters.
In the post-industrial countries, the minuscule pool of young people has no chance to dislodge the entrenched old people who hold the power.
There is less rejuvenation and less change in these multi centenarian societies and they become calcified, ossified. Old buggers hanging on for what? Will their minds be plastic? Sure, I don’t always agree with my young self or today’s young on all things, but I believe they should have the opportunity to shape their society in their time without obstructionist old timers.
Well, the good news is that historically, cultural change has happened faster than generational turnover. It is still a slow process, but it is clear that cultural change doesn't happen only because old people die and young people with new ideas replace them.
I don't think that's so clear because historically we've never been in a situation where the average age of the population moves towards 40-50, or already is there in some countries. The median age in the US in 1900 was 23.
If the oldest members of the population constitute a majority, and you're living in a democracy, cultural change is pretty hard to achieve.
Here in Germany, where we have one of the oldest populations on the planet, politics consists essentially of one issue: social services and pensions.
At least here in America it seems like culture has shifted very quickly over the last 5, 10, 20, 50 years. I can't speak much for German culture, but I can say with high certainty that German culture is extremely different from what it was 30 years ago. Yes, there are similarities, but things have changed quite a bit.
It's kind of a false dilemma? Old persons who don't change versus young ones who do? Could be ageist, not to mention neglecting the real issue of experience.
I agree with the "room for new people" conundrum but you could approach that as a population control issue from a couple of different directions.
Or you could approach lifespan extension simultaneously with research on its consequences, like how to structure society (economics, politics, etc) so as to optimize well-being.
With longer time spans compared to regular folks who don’t have the multi-centenarian means, these people would accumulate more wealth and more power and would distort the power imbalance even further (think the disparaging “boomer” remark, but make it “multi cent”).
Good news is that those youngsters will have plenty of time to plot and wait with their extreme lifespans. People change. Life is change. Why would a longer life span be more boring and unchanging? I’m not convinced. It’s a big galaxy.
Bad news: we probably won’t be getting extreme lifespan anyways. It’s very political.
One of the reasons power structures can be so commonly screwed up is that they are usually inherited in some way or another from a time before anybody living was even born.
Young and old, so that's like a century right there, and some things can get out-of-date a lot faster than that.
That's what makes it such a long-term problem.
The equivalent in a corporate culture is a structure inherited from a time before any current employees were there.
Either way you can end up with some misguided systems which are not relevant enough for a modern situation, and it can be very painful to correct.
>Can you imagine: on the one hand post-industrial countries are inhabited by a bunch of multi-centenarians, while on the other hand poor countries are overpopulated by youngsters.
Since World War 2 this has been the trend, not people living that much longer, but average & median ages diverging widely.
American, German, British, Japanese, Russian, and Italian WW2 participants are finally almost completely gone. The descendants of these participants as a group skew older than the general populations, which are themselves trending less youthful than developing countries overall.
Remember these entire countries (their population at the time) participated, not just the warfighters, that's one reason they call it a World War.
So these older-populated countries still have a significant cohort whose fathers & grandfathers were the participants in WW2, and the legacy of the devastation has been forgotten by fewer generations since then compared to developing countries, some of which had much smaller percentages of WW2 participants to begin with; there it's now largely their great-grandfathers & great-great-grandfathers.
Seems to me those are the ones most likely to start WW3. In recent decades some have known more intense fighting than there actually was in their location during WW2 anyway.
>In the post-industrial countries their minuscule pool of young people has no chance to dislodge entrenched old people who have the power.
It could be worse, you could have a teenage monarch, then you'd have to wait a really long time before you could expect a royal change.
>they should have the opportunity to shape their society in their time without obstructionist old timers.
This is so important to youthful accomplishment: the sooner people can have a society more suitable for its time, the better off everyone would be, especially young people with a long future ahead of them. It's amazing what you can accomplish in a few short decades if you start early.
Fortunately not all old timers are obstructionist, but it does tend to be a technique for seeking & holding on to undue legacy power, and they're the ones with the most time for consolidation.
I think we do not throw money at the problem for the same reason we do not throw money at anti-gravity: we probably do not have a promising idea of how to attack the problem. And you need a promising idea, since you cannot throw random substances at random people and see if aging slows down; because aging takes so long, you can only test a few substances.
There would be obvious commercial interest in anti-gravity, but to the average person it would be a hard sell as a prime focus for the human race. If I were to triage a list of research problems with huge social impact, then anti-gravity would be B-tier at best. Obviously super cool, with huge ramifications for our understanding of reality, but I’d argue that we could, and often do, live very well without it.
What if instead of anti-gravity, we examine medical treatment for diseases correlated with ageing? We place a huge amount of money into research and treatments for cancer and cardiovascular disease - these feel like band-aids, since the root cause is usually senescence. Is curing these (to whatever degree you like) a more attainable goal than slowing/stopping aging? Probably. Maybe. I don't know, but I'm not convinced this isn't a case of treating the symptoms when we could attack the disease.
Extreme, blind individualism is, in my opinion, the clearest root cause for many of our current problems. It's what drives our obsession with personal propriety, economic growth, and immaterial fantasies of immortality (scientific and religious). It also falls apart on the slightest inspection! As you say, we all die and are quickly forgotten (ironically, we are forgotten first by those of our progeny who share our self-obsession).
Unfortunately, it's also the core of Western culture.
I think the idea of psychedelic-assisted reeducation is a promising and humane strategy if we want to avoid total social collapse in the West over the next 3 decades. But it's also extremely, extremely radical and deeply offensive to vague interpretations of "liberty". After all, why should someone tell me what to think or do, as I willingly earn $13/hr to barely make ends meet while consuming corporate media products in my rare leisure hours...
“ It also falls apart on the slightest inspection!”
Does it though? You are failing to provide an adequate argument for death. I see it as more than just a physical revulsion of dying or a fear of letting go. A western perversion or as a crisis of individualism. Those are present, to be sure, (I don’t agree with the western part but let’s go with that too) but is dreaming for immortality immature? Immortality is the very essence of maturity, it’s purest expression. But, is immaturity also so bad, like let’s be adult here and like fall into an abyss without struggling? To be certain, I am not blind to the charms of being courageous in the face of death. But this isn’t what I’m driving at.
It’s not just fear of death, it’s also a collective refusal to live exponentially better than we do today. So many of our problems as a species stem from a joint agreement that life is inherently bad when life really is, maybe this is just me talking here, everything we could ever have in this place. And could we also dare to improve?
Quickly forgotten is a bit of a stretch when I often think of my grandparents. But your thinking rests on this notion of poor memory as a, what now, good reason to die? Much of what you say seems to depend on this idea that we should die because we are selfish anyway, so good riddance.
If blind selfishness is a prime cause of our ills, then wouldn’t our slow rate of progress be a result of that? It’s not like blind selfishness is directly causing immortality.
You mention western culture, which raises the question: are you culturally opposed to extreme life extension?
“Psychedelic re-education”: I’m not actually serious. The bottom of the rabbit hole is just more rabbit hole. LSD can inspire, but it’s not an end goal.
If we assume that our particular individual human consciousness is the only form of consciousness, then many of your positions follow logically.
However, life and behavioral sciences show that this is a weak assumption. Humans operate in predictable ways, as do other forms of life. And "humans" are in fact merely a collection of trillions of individual cells with individual needs and capabilities. And so on.
If we break free of individual and anthropocentric notions of life and consciousness, we can reexamine what it means to be "immortal". For example, as a species we are virtually immortal from the perspective of a single human lifetime. Reproduction can be viewed as an endosomatic technology for endless life; doubly so for sexual reproduction.
I posit that the endless vertical scaling of human consciousness is a dead end. Just as we are the product of billions of years of evolution and increasing organisation of life forms, we have reached a point at which horizontal scaling of human consciousness--to larger and more intimately connected social organisms--is extremely promising, and the proof is staring us in the face (the explosive success of collectivist China over the last 4 decades). Exosomatic "information technology" is the primary catalyst of this evolution. The potential for humanity and terrestrial life is truly staggering if we give ourselves permission to think outside the box of individualism.
> but is dreaming of immortality immature? Immortality is the very essence of maturity, its purest expression
Assumption of immortality is pretty much the epitome of childish narcissism, coming from a lack of having contextualized oneself in the real world, wanting to be everything, never making a choice, never making sacrifices to matter for others and future generations. It is literally encoded in the stories like Peter Pan, the infinite child, Puer Aeternus.
Humans have lifespans past reproductive age mostly because there is a survival advantage to elders sticking around and helping to raise new generations with their wisdom (memetic transfer). Advanced medicine only protracts that a little bit, but we have no evidence that individuals staying alive much longer is beneficial to the survival of the species or meaningful for the individuals. I am not saying that because survival of the species is an absolute normativity, but just in order to live as individuals, we also need to survive as a species, a type of calculation many of us fail at when it comes to problems like covid or climate change.
> You are failing to provide an adequate argument for death.
Death is the norm. I think the burden is on the immortalist to make an argument why they should live forever. How is their existence justified beyond prolonging their consumptive egocentrism a little longer? What is their value to humanity and themselves? If they are in competition for the same resources with the young ones, who should win?
I think this is precisely the pathology of our times; narcissistic children of narcissistic parents can't let go of their meaning-lacking existences, because they are terrified of death, because their narcissism prevented them from placing themselves in a meaningful life that mattered for more than just themselves. This is a tragedy and not necessarily their fault, but they know that they will stop mattering the moment they stop consuming (death), so few, if anyone, will remember them or what they did beyond one generation.
“because there is a survival advantage to elders sticking around and helping to raise new generations with their wisdom (memetic transfer).”
“I think the burden is on the immortalist to make an argument why they should live (longer)”
I would like to tie these two strings together here because there is a natural precedent for old age. You said it yourself, the previous generation can instruct the new one.
We could extend the period of doing serious lifelong work. Engage in more pleasure and leisure as well as personal development and research. You seem to hang on this notion that people are merely consumers, especially those who want to live longer, so they must die. A tortoise can live for 200 years. Why not yourself?
You approach this in utilitarian terms and I still ask: so what? People want more pleasure? Why not? I won’t deny that I have a hedonistic streak in me, but I’ve yet to see a convincing argument against hedonism which doesn’t take an extreme view of it. Your vocabulary suggests an appreciation of, and maybe even training in, science, but the feeling is puritanical and punitive. I don’t share your misgivings, and the burden of proof is not on the immortalist, because there are no gods to judge us. Life extension would be our own decision to make. To deny it for your reasons infantilizes humanity, judging us as incapable of agency in our own lives. You would in fact render people as short-lived consumers who are unable to choose. Peter Pan is on you after all. You want the natural order to crystallize as it is now, an eternal infancy of the human race, short-lived by choice.
I don’t demand for you to be remembered nor do I expect you to earn your keep for eternity. I want everyone to live the life they want for as long as they want. Life extension is a part of that.
> You seem to hang on this notion that people are merely consumers, especially those who want to live longer, so they must die.
That is not an intrinsic property of people. But so far, at this scale, the only normativity we have been able to organize human endeavour around has been the normativity of the market. Religions are gone, particular states don't matter much, and little else is left with purpose. Just the market. This naturally makes consumers of us. Pretending we will use the extra time to "engage in more pleasure and leisure as well as personal development and research" is self-deception, in my opinion. Longer lifespans also amplify the accumulatory dynamics of rent-seeking, causing greater inequality. Whatever potential we have right now that we are wasting will go on to be wasted at larger scale, while a select few, distributed by power laws, will "enjoy" that time of leisure and pleasure.
In short, we can't even organize the time and talent we have wisely, and the dynamics that create this imbalance would only be amplified with longer lifespans.
> People want more pleasure? Why not? I won’t deny that I have a hedonistic streak in me but I’ve yet to see a convincing argument against hedonism which doesn’t take an extreme view of it.
Here is my best argument: pleasure itself is a poor goal to organize one's life around. It is ephemeral, it is prone to confusing proximal rewards with the actual goals behind them, and therefore the meaning it can provide is a very narrow one. The life of any addict is proof of this dynamic. Porn (proximal reward) is not intimacy. Hyperpalatable junk food is not nutrition. Cocaine is not a real win in life. In fact, all of those prevent the actual goals from being attained.
> the feeling is puritanical and punitive. I don’t share your misgivings and the burden of proof is not on the immortalist because there are no gods to judge us.
I am not suggesting any purity codes, nor complete exclusion of erotic consumption. But existence itself is punitive if you optimize for the wrong criteria. Reality is the god that judges all of us through the consequences of our choices, and the lesser gods we thought we could identify with, like the god of pleasure or the god of immortality, potentially mislead us into thinking we can cheat reality.
At the bottom of that reality is that we are humans, all of whom have had and will have a taste of death.
> I think the idea of psychedelic-assisted reeducation is a promising and humane strategy if we want to avoid total social collapse in the West over the next 3 decades
...when your idea to fix things, as well as your dialogue, is way too close to a Marvel supervillain's plot to control the world, you might want to rethink it.
Maybe you've read the book Lifespan, but the research talked about in it feels far more optimistic than what you've outlined. [1] Certainly there is already a decent amount of money being spent on it, and more and more each year with promising results. [2]
Reading this book, aging feels like an area where we might see huge advances in the next 15-20 years.
There are already really strong results for extending healthy lifespan by significant margins in other species, including rhesus monkeys. There are already drugs people take that seem to have some of this effect, though I think currently they come with some dangerous side-effects that might not be worth it if you didn't have the underlying disease the drug was intended for.
Great book. David Sinclair makes a good case for why aging is the primary disease, and increased risk of cancer, heart disease, neurological decline, etc. are symptoms of the same disease. Even a universal cure for cancer would not increase lifespan drastically, because you'd just die from another collection of age-related ailments a few years later. We should try to find a direct cure for aging. Aging research in the past 5-10 years has been promising.
IMHO I'd rather see more social/economic justice, so that all of the increases in healthcare and expected lifespan are more evenly distributed.
Otherwise, with whatever extreme life extension stuff might happen, it'll just be unjustly distributed based on wealth and race, as it too often is today.
Also, man, let's not fear death too much : ). I'm not looking forward to it, but my gut sense is that this whole extreme life extension thing is 50% pharaonic: afraid of death and wanting to hold on to the toys you have, as opposed to living a full life.
We barely throw money at existential threats like nuclear de-escalation, pandemics, and climate catastrophe. If we can't even try to ensure the lives of those here now are regular length, what makes you think anyone in government is going to search for a fountain of youth?
Science needs its own legs to stand on here before government swoops in. Right now, it's mostly hype and smoke and mirrors; there's nothing I've seen that warrants government funding.
> There’s got to be more in life than just withdrawing a paycheque or not and dying eventually.
Prolonging life, working for eternity while continuously having children, the planet rotting away because we refuse to die? That may be more (at least in terms of years), but is it much better?
Experience brings wisdom. I get it, this notion of death as a release, but release from what? I think we can fix many of those whats with time and enjoy living longer and healthier. Would you really say no to 200 years because of labour? I'm not convinced people would stay the same over time, watching reality TV well into their 400s.
Really though, I don’t believe we’d ever get extreme life extension without getting many other ducks in a row such as sustainable living in massive numbers.
That is not a given. It is easy to imagine medicine being able to prolong life in terms of, well, preventing death, but the person would still suffer from some of the downsides of senescence like excessive credulity (i.e. the elderly being easily taken in by con-artists), slower cognitive function, etc.
Until people stop living poorly and dying young, making the oldest people live longer just isn't very important, and the extreme difficulty makes it a bad investment.
It may be that advances and improvements that help young lives also help old lives. If so, great. But there's no need to focus on the extreme corner case.
> Maybe we should start force-feeding lsd into people who actively vote against and stand opposed to scientific progress.
Maybe we should stop assuming that every selfish impulse, gluttonous instinct, and absurd demand on the universe is somehow "scientific" and "good". Who wants to live longer if you can't appreciate the one, short, life you do have? Why waste it trying to get more? Life extension has at its core an unquestionable assumption that parades itself as morality--the sanctity and worth of human life!--but it is really the basest narcissism and selfishness that is utterly alien to the functioning of every single component of this vast ecosystem. Things die.
> Sinking into the past and eventually erased from living memory. I once saw a picture of a guy in the 1920s who bred horses in the Greek mountains.
Yeah, well, the atoms that make up the molecules in the proteins in the cells in your body were manufactured in the massive explosions of supernovae billions of years ago that no one remembers. You think your great grandfather is lost to the sands of time for all eternity, when in reality he is only lost to you. It's just a further expression of unquestioned selfishness, wanting more life for yourself by coopting his into your experience. Let your great grandfather and his entire life be as it was. There is nothing required more of him now.
> Who wants to live longer if you can't appreciate the one, short, life you do have? Why waste it trying to get more?
Why is the arbitrary length of human life sacred? Less is seemingly bad (because murder is illegal), but more would also be bad... so we got exactly the correct amount now, regardless of the fact that it is a different number for different people. Sounds like Stockholm syndrome.
If you like your life, you still have to prioritize. You would like to do X and Y, time is short, you choose X. With more time, you could experience Y, too. Why not?
If you don't want to live, I respect your decision, just please don't force it on other people.
My preferred option would be an unlimited life, with perfect health, and the ability to choose quick painless death if you decide so.
“If you don't want to live, I respect your decision, just please don't force it on other people.”
Isn’t that the funny part? It’s fascinating to me that the people arguing against life extension here have two main points:
1. People don’t deserve it because they are inherently bad: they are poorly optimized, selfish agents who would exhaust their value, incapable of managing the impact of life extension, and so on.
2. Life is meaningless without death.
I am willing to entertain this second idea, but the first has a disciplinary puritanical feeling to it which I really don’t relate to at all. This profound pessimism is alien to me. I can’t help but feel that there are ultimately some religious undertones at play here.
> has a disciplinary puritanical feeling to it which I really don’t relate to at all. This profound pessimism is alien to me. I can’t help but feel that there are ultimately some religious undertones at play here.
You're reading into this what you want to. That is some kind of bizarre binary thinking that doesn't exist.
As for "alien", there have been literally thousands of human cultures that have been erased and replaced with the current spur of "scientific" advancement that holds the world in exactly one way, a conveniently self-centered and tragically misguided one, IMHO.
> If you don't want to live, I respect your decision, just please don't force it on other people.
This is exactly the problem. In the default state of nature, the entirety of erosive processes leads to a natural cycle of decay and death. Life extension is the alteration of this: bending all of society's energy, focus, and scientific endeavor to stave off the absolutely natural erosive process that is the engine of Darwinian evolution, so that some gluttonous individuals can spend "centuries in hedonistic pleasure".
> The replication crisis has cast into doubt important research findings, such as the Stanford prison experiment. What other socially important findings have been undermined? How should we interpret scientific literature post-crisis?
This should start early, in middle school and high school. I'm surprised this idea of the replication crisis fails to get more attention.
99.9% of teens do not have the knowledge, skills or capability to critically examine academic research. Hell, most secondary and high school teachers don't have those skills. Thus, all you can say is "be skeptical". But that is next to useless if the student has no basis to make an assessment -- all it will do is erode confidence in the process of science. Instead, IMO, students should be 1) given a firm foundation of well accepted knowledge and 2) taught how the process of science works.
As an aside, I'd ask those who promote the idea that children should be taught "how to think" instead of "facts they can just look up" to think a bit more deeply about what they are saying. IMO, you cannot reach valid conclusions without facts. Facts are the foundation of knowledge, synthesis, and analysis. Without facts, one cannot understand a topic or even know what questions to ask about it. Note, I'm not saying that children should be force-fed facts via rote learning. But consider: what happens when someone believes they can think for themselves but is ignorant of the topic they're thinking about?
The reason we have a replication crisis now is because people took the Stanford prison experiment as a "fact" rather than something that should've been challenged.
You can absolutely teach critical thinking skills to teens. Force-feeding facts which are often questionable at best leads to poor outcomes.
I'd expect them to iteratively learn the facts of the topic, just as the people who first discovered those facts would have had to. Many areas of knowledge are traditionally introduced that way. Of course it can't be the only teaching strategy, since many things couldn't be learned within a human lifespan without explicit instruction on the facts, but it plays an important pedagogical role.
The replication crisis indicates, frankly, that confidence in the process of science should be eroded a bit. Many people I know treat scientific studies as true until proven otherwise, and the replication crisis demonstrates this isn't an accurate assessment.
There are two different kinds of confidence in science. There's the type that uncritically accepts an article at face value (oranges cause/cure cancer! Time to ban/require orange consumption!), and there's the type that says "the scientific community thinks evolution is true, so even if individual articles can be criticized, on the whole I shouldn't discount it as a theory based on a single critique or mistake." The former is good to undermine; the latter bad. But the boundary between them can be murky.
I’m not surprised. Science is subject to political control like anything else. Why does a study use this or that data? What flaws or assumptions do we explain away as permitted trade-offs rather than rejecting as lacking enough basic fidelity to even proceed? There are Overton windows in science (when’s the last time a scientific paper suggested that a bloody revolution should be used to address climate change?), and ultimately what gets published is a media channel that only exists to suit the purposes of those funding or permitting it: wealthy think tanks with a particular agenda, political parties, or specific administrations or legislators who need reelection or need to foment support for their favored plan or outrage at their opponent’s plan.
A replication crisis is by design: the system doesn’t exist to publish the best available science, it exists to create a credential and status hierarchy in which to control scientific authority and shape it as a political tool.
To address this issue you’d have to first address general plutocracy which even 80000 Hours itself is enmeshed in.
Yes, academia is no better suited to handling a major internal crisis than any other institution. The incentives are just too skewed towards publishing. Every major player out there almost certainly has an un-reproducible paper they published under duress.
Indeed, there should also be better data and research literacy. On the hopeful side: look at how the field of medicine evolved over the centuries, from outright quackery to better data and ethical disclosures from those who do the research. It’s still not perfect, but it’s getting better.
The prison experiment is a terrible example here because it is exactly the sort of thing that experts have discounted for ages but held sway in popular discourse. It isn't an example of the replication crisis, since experts correctly diagnosed its flaws. It is an example of failing to listen to current academics in social science, which is the opposite of what most people who complain about the replication crisis would tell us to do.
Can someone tell me more about the ideological (theological?) assumptions underpinning these questions? They seem to come from a very peculiar understanding of "social impact".
But in practice, this list of questions reflects less the principles of Effective Altruism and more the overlap between people calling themselves Effective Altruists and other personal and career preoccupations. That's how you get four of the six "most impactful" questions in politics and international relations being about AI, and one of the others involving representation for 'sentient nonhumans'. (Suffice it to say this is not a list a political scientist, or even your average consequentialist utilitarian cosmopolitan humanist who reads more newspapers than AI papers, would propose as research priorities in politics and IR. Probably great research questions to get you a popular blog and a job in a Silicon Valley research institute, though.)
Although all organizations are influenced by their particular constituency, the people at 80k have been led to these causes by particular philosophical and empirical arguments, which they explain on the website at great length, not (say) because they all used to be ML researchers. (They do not have technical backgrounds.) You can certainly disagree with the arguments, and they would be the first to agree that their conclusions (although not their premises) are radical.
But more importantly, I think if you just went and talked to random political scientists, you would find they actually had not thought very hard about what the most important causes are. If you asked them, they would come up with something on the spot. Indeed, political scientists don't at all seem to be the correct experts for this question.
I'm going to go out on a limb and say that political scientists who spend their entire working lives studying political science and teaching students the most important questions in political science (including ones which aren't their specialism!) might have put more thought into which political science questions have impact and which aren't studied enough than was required to pick three questions from Allan Dafoe's paper on AI, one from a CNAS paper on AI, and one from a list already on their website, before moving on to the next field in the alphabet. The list covers 19 academic disciplines, and a third of their sources are papers mainly about AI!
To be fair to the authors, they openly state right at the beginning of the article that the scope of the questions they're listing is based entirely on questions being asked within their community. I'm not saying the questions aren't worth answering, but a community with less overlap with LessWrong's 'politics is the mind killer' singularity-believers would perhaps choose a set of 'important questions in politics' somewhat less narrowly focused on AI. And this community overlap clearly has more of an impact on the authors' conclusion that the most impactful questions in politics nearly all concern AI than pure philosophical commitment to maximising the utilitarian value of their time.
> I'm going to go out on a limb and say that political scientists that spend their entire working life studying political science and teaching students the most important questions in political science
Political scientists will know the most important questions in political science as judged by intellectual interest, but that's just not the question being asked here. They are asking which questions in political science will have the biggest impact on the world as judged by a utilitarian and long-termist framework, and I don't see why political scientists would have a confident answer to that.
Likewise, I am a physicist with expertise in what questions are of intellectual interest to physicists, but I don't think that I or my colleagues have good ready answers to which physics questions will have the biggest impact on the world.
> And this community overlap clearly has more of an impact on ...
But the community overlap wasn't a random event that is just now influencing this investigation. The members of the community were attracted to each other because they were convinced by certain abstract arguments. (LessWrong is concentrated in Berkeley while 80k Hours is UK based and the members mostly hail from there and Australia. They found each other through the internet and through the Oxford philosophy department.) You can certainly disagree with the arguments, but chalking this up to having overlap with some dorky community is a cheap ad hominem.
> They are asking which questions in political science will have the biggest impact on the world as judged by a utilitarian and long-termist framework, and I don't see why political scientists would have a confident answer to that.
> Likewise, I am a physicist with expertise in what questions are of intellectual interest to physicists, but I don't think that I or my colleagues have good ready answers to which physics questions will have the biggest impact on the world.
While answering the question involves predicting the future in ways no one should be overly confident in, it's worth noting that social impact questions of that type are, in fact, within the domain of political science in a way they are not within the domain of physics, so practitioners within the two fields aren't exactly similarly situated with regard to the question.
Sort of. Some political scientists are certainly more likely to estimate the impact of their policy suggestions than physicists are, mostly because physicists rarely make policy suggestions. But I don't think they try to survey all political science questions and systematically compare them along some measure of impact. I expect the hypothetical disagreement between Dafoe and a random political scientist is based on a disagreement outside their expertise (the importance and long-term impact of AI in general). Likewise, physicists couldn't tell you much about the impact of their work because it hinges on things (e.g., econ) outside their expertise.
> But more importantly, I think if you just went and talked to random political scientists, you would find they actually had not thought very hard about what the most important causes are. If you ask them, they would come up with something on the spot. Indeed, political scientists don't at all seem to be the correct experts for this question.
Did you just come up with this on the spot? ; )
But, I agree! Personally, I'd ask the historians! ;)
No, the questions you're raising have been asked (reasonably) by many. I really do think the EA folks are doing something novel, and there is a reasonable but incorrect intuition that all of this must have been sorted out before (so that if they're getting unconventional answers they must have made a mistake in their assumptions or process).
Step 1 in finding the highest impact thing to do is being serious about establishing cause and effect, and historians by their own admission rarely try to do that. (Cliometrics is a bit better.)
Interesting. If you have time could you share some articles / sources / examples / something to lend some solidity to what you're arguing, and also so that I can follow up and learn more?
TBH I'd make an argument for the elimination of war being a huge way to better the world. Imagine:
- No invasion of Iraq.
- No bungling of the invasion of Afghanistan (or no invasion at all).
- No Syrian war and millions of refugees.
War has huge negative ripple effects in addition to all of the deaths it causes; societies and communities are destroyed, whose effects echo for generations. It consumes insane amounts of resources. If we want to think big, that's what I'd think about. Maybe even focus on exclusively. I mean, who do you think is going to buck against checks and balances on scary intelligent AI research? Defense-funded research.
Sure, EA talks about international relations, etc., but where's the study on the costs/benefits of worldwide pacifism, if we're theorizing here?
Part of my skepticism of all this is because there is so much thought put into bettering the world over time:
- All the religions, which still undergird the various systems of morals/laws we have.
- To name a few philosophers / social thinkers who directly discussed making things "better":
- Plato
- Hobbes
- Marx
- Habermas (I haven't read him, but I've read about him; big on liberal democracy and the post WWII European system, I think)
That's a pretty hefty list right there. That's not even a full accounting of the Western philosophers/thinkers. Not to even mention the non-Western philosophers/thinkers.
And, tbh, I think it's easy for non-historians, or scholars who don't focus on political science or political philosophy/theory to just write the field off; and, IMHO the "hard" sciences and tech/CS folk in particular have a tendency to do this. So I'm always a bit skeptical.
It's also how you get a list at all. I don't like EA's "Jeremy Bentham solved philosophy forever" worldview, but at least it gets them thinking big, you know? Is anyone else making these lists?
The list seems pretty off. It lists the claim that creatine makes vegetarians smarter under cog sci and psychology, yet the article that question links to kills the idea in its headline.
Sentience of animals and non-humans comes up a lot, which looks less important than a lot of other areas of research: fusion, physics, etc.
In biology and genetics, only one question focuses on genetic engineering, and it's about "crops that could thrive in the tropics during a nuclear winter scenario". This seems to reflect someone's biases more than anything.
There are a lot of questions where I would have trouble assessing whether an answer is a correct/valid one. I'd even argue that different people would accept different answers as answers, which in turn makes them bad questions to begin with.
It's frustrating that "Computer Science" is prefixed with "Machine Learning, Artificial Intelligence, and", ignoring the many notable research questions that have nothing to do with intelligence. Ditto the "Statistics and" prefix to mathematics. Both ignore the 90% of the field that isn't the prefix.
I would say that the absence of any overarching anthropological, epistemological or philosophical basis for these questions is the fundamental observation that needs to be made. "Big social impact" is hardly a defined category, let alone necessarily representative of desirable outcome.
As we move from a multiplicity of disparate cultures to a single common denominating quagmire of internet-unified global capitalism, how do we ensure that alternative perspectives on nature, society and purpose continue to exist and are granted reasonable space and resources to self-sustain?
Alternatively stated: how can homo sapiens as a group reliably value things such as the commons (nature, intellectual heritage, freedom of choice, individual dignity, etc.) that exist outside of conventional private ownership and economic rationalism, without politically centralizing objective value and thereby forfeiting the great strengths known to be associated with heterogeneity?
> What future possibilities are there for brain-computer interfacing and how does this interact with issues in AI safety?
It all comes down to scarring. Current attempts typically end in scarring, at least in the central nervous system. We've not got a good way around it yet, at least for the little tetrode-like probes you have to insert.
The most exciting (to me) is to use Clarity and Optogenetics. Clarity to make the brain, well, clear-er. And Optogenetics to stimulate cells with light. Shine light, nerves fire.
Trouble is, Clarity makes the brain swell ~3x, and most nerves are smaller than the diffraction limit of light. Maybe attach some iron atoms to the optogenetic pore and then pull on them with a magnetic gradient, thereby separating out the frequencies needed. Still, it's a long way off.
> What do we know from animals about how cognitive abilities scale with brain size, learning time, environmental complexity, etc.?
There's been some good work out of the Hercule lab (U. Chile, I think?) about this. The trouble is in getting a reliable brain marker for 'learning ability'. You can use markers for the number of layers of the cortex and their cross-connectivity, but it's still tough to interpret. It turns out the 'wrinklage' of a mammalian brain is correlated with increasing 'learning-ness', but atmospheric/hydrodynamic pressure plays a huge role in the 'wrinklage'. Sorry, my google-fu is not good today and I don't have the paper citation.
> Why has Mohism almost died out in China, relative to other schools of thought?
Damn! Now that is a good question. For those seeing Mozi for the first time: he was a defensive siege engineer during the early Warring States period (~470 BC). The guy's ethos would really fit well in Silicon Valley; he was a kinda engineery-hippy dude, kinda Kevin Kelly-esque: https://en.wikipedia.org/wiki/Mozi
"Improve our modelling of impact winter scenarios, especially for 1–10 km asteroids. Work with experts in climate modelling and nuclear winter modelling to see what modern models say. "
I think impact scenarios are a definite concern, but nuclear winter is actually an extremely questionable "theory". Many people are not aware, but the Russian KGB boasted that they literally made it up in order to turn public opinion in the West against nuclear weapons as NATO rolled out medium-range missiles in Europe back in 1982: http://www.rationaloptimist.com/blog/nuclear-winter/
The mushroom clouds from nuclear detonations do indeed dissipate over a few days. I believe subsequent modeling has come up empty even in the worst-case scenarios.
"Develop more reliable and tamper-proof measures for so-called ‘dark tetrad’ traits — psychopathy, Machiavellianism, sadism, and narcissism."
I actually like this question and have been thinking the same thing recently. These traits are very dangerous, especially when exhibited by those at the top of the food chain in both politics and business; they need to be identified early and treated, not allowed to flourish among those in power. (Some good reads on this are The Sociopath Next Door and The Psychopath Test.) Bad actors have incredibly outsized influence globally.
"Why have certain aspects of Chinese civilization been so long-lasting? Are there any lessons we can draw from this about what makes for highly resilient institutions, cultures, or schools of thought?"
I'd be curious to know what they are referring to here in modern China. Is this question even relevant post-Cultural Revolution? I think it's a more relevant question for Taiwan, which preserved the "old" culture as best it could.
The personal benefits of power may not be great enough for anybody but full dark triads to pursue competitively. You can see this easily in first-line managers. And of course we are programmed to filter for this, so imagine how deep the deception must go for politicians.
I imagine that some people would still seek positions of authority for its own sake. Not to mention actual sociopaths who would make use of it the same way that they do now.
Did you mean Taiwan, the place which adopted Western notions of governance and has implemented de-sinicization policies for the past 30 years, including the recently condensed history textbook?
What if every scientist stopped what they are currently doing and started working on general AI? After they complete that task, all the other problems on the list will take 0.1 sec to solve.
This isn't true. NP-completeness and Turing-completeness both limit what an AI can reason about, and there's no reason to expect that AGI will spontaneously break those barriers.
To use the meme, an AI won't be smarter than us; it'll be faster at being stupid.
I don't see how being smarter than us, or solving any of the problems on the list, is an NP-hard problem. For example, computers beat humans at chess by a wide margin; nothing NP-hard about it.
Bounded proof search is NP-complete, and you can see this if you imagine transforming a bounded proof search into a satisfiability problem; go from "what, if any, proof starts at these assumptions and reaches these conclusions in less than N steps?" into "what, if any, assignment of Boolean values will encode a proof in O(N) values?" This is precisely what got both Gödel and Nash interested in what would become P vs NP.
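The "encode it as satisfiability" move above can be sketched concretely. A minimal toy, assuming nothing beyond the standard CNF convention (+i for "variable i true", -i for "variable i false"): a brute-force SAT check over n Boolean variables may examine all 2^n assignments, which is the exponential blowup behind NP-completeness. This is an illustration of the search space, not the proof-search reduction itself.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Try every assignment of num_vars Booleans against a CNF formula.

    clauses: list of clauses; each clause is a list of nonzero ints where
    +i means variable i is true and -i means variable i is false (1-indexed).
    Returns a satisfying assignment as a dict, or None if unsatisfiable.
    Worst case examines all 2**num_vars assignments.
    """
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        # A clause is satisfied if any of its literals holds;
        # the formula is satisfied if every clause is.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
model = brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]])
```

Replacing "does a proof of at most N steps exist?" with "is this formula satisfiable?" trades one exhaustive search for another of the same shape, which is the point of the reduction.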
Computers beating humans at chess is a great example of not being smarter, but being stupid faster. Both humans and computers approach PSPACE-complete problems like chess by approximating their searches and looking at trees of possible moves. We have no special insights into how to consider or evaluate moves; we have psychological profiles on which moves might be connected during practical play but no reason to require people to make such moves, and those profiles are largely limited to "openings", classifying the relatively few initial trees.
How do computers beat humans at chess? They do exactly what we do, but faster. That isn't being smart; it's being stupid fast.
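The "stupid faster" claim can be sketched with plain minimax over a toy game tree: the brute-force core that real engines then accelerate with alpha-beta pruning, move ordering, and evaluation heuristics. The tree and its leaf scores here are made up for illustration, not a real chess evaluator.

```python
def minimax(node, maximizing):
    """Exhaustively search a game tree. Leaves are numeric scores;
    internal nodes are lists of child subtrees. No insight is involved,
    just systematic enumeration of every line of play."""
    if isinstance(node, (int, float)):  # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Depth-2 toy tree: the maximizer picks a branch, the minimizer replies.
# Branch values after the minimizer's reply: min(3,12)=3, min(2,4)=2, min(14,5)=5.
tree = [[3, 12], [2, 4], [14, 5]]
best = minimax(tree, True)  # -> 5
```

Nothing in the routine resembles understanding chess; make it fast enough and deep enough and it beats us anyway.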
We've got eugenics, corporatism, sci-fi bait, and worse. I do like the idea of a field of "China studies" through the very specific lens that they've sketched, which focuses on the experience of Hong Kong throughout the past century, but otherwise, ugh, what a series of shallow tropes. Their "climate studies" list is papering over the fact that those studies were done decades ago and the conclusions were well-understood. Here's one question in each field which is better than any of theirs:
Genetics: To what extent does the RNA world (leadzyme, ribozyme, RNA bases) influence our DNA's actions; is there more beyond genome and epigenome?
Business Development: Enumerate all models of cooperation.
Climatology: Classify the climates of Venus and Mars under anthropogenic climate change; what will terraforming do to their climates?
Earth Science: Classify the minerals and rocks of Venus and Mars.
Neuroscience: Is demyelination preventable or treatable?
Economics: Are markets efficient? Prove that either P<NP (no) or FP=PPAD (yes).
Medicine: Can phage therapy prevent an antibiotic-resistant bacterial doom scenario?
History: Just let historians critique history books for a few hours live on stage, and broadcast the results. This will be more socially impactful than any specific question or research agenda.
Law: I do have a soft spot for "What rights might digital minds have under current law?" It's a good question. However, a better one, in terms of social impact today, might be: How can the law establish effective oversight over its executors and legislators?
Statistics: Are there tighter bounds on neural net performance than PAC/VC theory? Alternatively, are neural nets essentially polynomial regression?
Philosophy: The Hard Problem.
Physics: Scale quantum computing to today's classical computational regimes.
Astronomy: Finish building planned telescopes; do the planned experiments have results which agree with current theories?
Political Science: How can two-party democracies be converted by the will of the people to multi-party democracies; can Duverger's Law be bent?
Psychology: The Hard Problem.
Philosophy of Science: The Demarcation Problem.
Sociology: How do societies collapse; how are technologies lost? Case studies won't cut it; we need models.
Mathematics: The Riemann Hypothesis. If metamathematics is allowed, then whether P<NP, P=NP, etc.
Research on these lines of thought does have a big social impact today just by consideration and exploration, and will have big social impacts if they ever actually answer the questions.
Its just wonderful that I am being downvoted for asking why the parent post was being downvoted. Great going mods.
Asking why isn't trolling and deserves a response. If you don't want to respond it is fine, but downvoting? really? It implies that you (the mods) think you are beyond any questioning from the hoi polloi.
Well, saying "Here's one question in each field which is better than any of theirs" will sound presumptuous to a lot of people, but I agree that there was some serious effort in it and many good questions, which should more than compensate if one reads from start to end.
This is fair. I started my comment by responding to each section, but I quickly realized that I wouldn't be able to make a constructive contribution.
First, the most important thing: Note that one of the responses to my original comment is feeling intense emotional relief at having some uncanny unease put directly into words. This is not the first time I've responded to this sort of article, and not the first time I've seen this sort of reply. I gather that it is difficult for folks to articulate the ambient horror of what this sort of writing is sketching.
Eugenics is recognizable from phrenology. They want to know why brain size matters and how to measure intelligence in different parts of the brain. They also want to know whether FAAH-OUT can make people happier and less sensitive to pain, and whether "extreme" human life extension is possible. They want to know how small the human race can get, and what the marginal utility of each additional person is. Indeed they ask whether utilitarianism isn't the ultimate ethical method.
Corporatism is most obvious in the framing. Various already-established consensus positions are turned into but-really or what-about questions, so that it seems as if climate change is not something that definitely exists, that we definitely could do something about, and that we definitely are not doing enough about. Corporations are portrayed as the leaders in innovation, while governments and the public are incompetent and slow to change.
Why do I say that these positions are tropes? Because I've read enough Dark Enlightenment literature to recognize its fingerprints, mostly. The authors are very slanted towards a meritocratic transhumanist utopia, where their very intelligent observations about the coming utopia will be rewarded with high stations and praise in the coming utopia. Meanwhile all of the implications about classism and injustice are carefully worded away so as not to be blatant and repulsive.
I recognize that you might not like my questions. But you also admit that they're substantive, and that's good enough for me. Honestly, they're barely my questions at all; each of them has been open for decades, I think. Maybe quantum computing is the most recent one to have something interesting happen, and that's changed our trajectory from asking "can we?" to "can we, bigger?" and "can we, cheaper?" and "can we, easier?"
Thank you for clarifying. I still think those aren't good reasons to apply the labels. But at least now you've spelled out your justifications for those strong claims, which is more productive than the first paragraph of your original comment, since it allows others to respond. So I do not consider it trollish when phrased as your most recent comment does.
Also,
>I recognize that you might not like my questions.
Patrick Collison - https://patrickcollison.com/questions
Gwern - https://www.gwern.net/Questions
Alexey Guzey - https://guzey.com/personal/research-ideas/
Anybody have more?