Always enjoy reading you, but I feel somewhat obliged to comment given that I’m in a PhD program now! A couple of quick thoughts: what field do you imagine one might go into that will be safe? Granted, I can conceive of a future where we still work but it looks really different - like managers are largely just AI tamers and start-ups proliferate like crazy - but surely we should wonder when the human begins to just be adding noise there, too? I value my training now not because I want to be an academic (not what I’m optimizing for) but because being in my program actually does give me a lot of intellectual freedom and the ability to explore these sorts of things. That can be pretty hard to come by in most typical post-grad jobs. Like, yeah, if you think you’re going to be the last guy to get the tenure-track job you shouldn’t go for it, but equivalently, going for a consulting job that’s going to be obsolete in four years is probably not a great alternative either. In that light I do think an asterisked PhD kind of makes sense.
So my first impression is that the jobs relying on person-to-person interaction, rather than manual tasks or knowledge production, are probably the safest. I guess I assume most PhD students are angling for a job in academia (which is what the data show), but I suppose the minority who aren't might have the right idea if they've really thought it through given their individual circumstances. However, it definitely depends on the field, and there are many fields where you should just go straight into industry or consulting or whatever. That's especially true if you think AGI is coming within a decade or so, because when it does you'd probably rather have spent that time actively making money instead of grinding away in poverty as a grad student.
Yeah, fair enough. I think most grad students don’t think very hard about what comes next, when “what exactly are you optimizing for?” should be front of mind. So your critique is broadly correct. We really have no clue what the brave new world is going to look like, though, and there are some serious AI folks who are like “well, one way or another I can’t envision a world where I need retirement savings in 50 years, so no more 401k contributions.” In that light, should you really spend the last, highest-leverage years of human knowledge work sitting at a desk making PowerPoints at EY? Maybe wealth will just get all the more entrenched, maybe not. If we’re living in post-scarcity in twenty years, I think I’ll be bummed for not having shot my shot at being a scientist while I could.
As a person with a masters in geosciences from about 20 years ago, followed by 20 years of consulting (mostly), I know there is a lot of truth to the old saying: there is more than one way to get a PhD.
The design of institutions for global governance, international cooperation, and AI governance is a set of political science topics for which a PhD program might be useful. One would want a program that emphasizes formal political theory / game theory (or just a PhD in econ or CS theory), though there's probably room for some historian-style work on these things too.
Unfortunately, most programs have a lot of requirements, especially in the first year, so it takes a while before you get the freedom to do what you want.
You could also just study these things on your own, but access to one-on-one expert advising is valuable.
I feel like this rests on a mistaken idea of what knowledge production is and what it does. Experts are needed because "AI" still hallucinates, meaning its mistakes are of a different kind than humans', and that doesn't seem likely to change any time soon. As for audience, if your field is at all developed, laypeople are not supposed to _understand_ what you do, let alone care.
I'm having mixed feelings here: you would do great in academia. Academics of the future will benefit from using LLMs (a skill you already have), and AGI might be delayed (Claude still can't beat Pokémon).
There's something odd about how AGI is used in this context. Does it mean better text generation? You can already get AI models to write papers for you right now. How much better do they have to be before it gets labeled AGI?
Aren't the AI papers being generated now mostly low quality? Like, they don't drive any field forward; they just pull together data that's already out there and don't generate new ideas. I could be wrong, but I think that's the hurdle they have to overcome.
They can do somewhat decent literature reviews on relatively mainstream topics. Aside from that, yeah, they're Harvard-undergrad-tier midwits at best. When they start performing at the level of a bright student at a lower-tier state school, that's when it's time to start packing the go bag. When they hit _____ Institute of Technology level, everybody's fucked, and probably not in the fun way.
When AI is able to create new ideas and theories is what he means, or so I think. I don’t think it’s coming, but if you do, it’s a compelling reason to steer away from academia. To be frank, if you think it’s coming, it’s a compelling reason to steer away from college/secondary education in general.
Soon afterward, the mechanics who fix broken robots will be replaced by robots who can fix broken robots.
As someone contemplating a postgrad degree in econ, I appreciate you posting this when you did. One reservation I have... if AGI (and the ensuing robotics revolution) can swoop in and replace that much knowledge and technical capability, wouldn’t the returns just be exponential economic growth, potentially resolving employment and wealth-distribution issues and making the question of whether we pursue fun science and humanities degrees seem trivial by comparison?
If abundance, along with a large unemployed constituency pushing for an AI dividend income, reaches a point where employment and wealth concerns fade, wouldn’t the price tag on a PhD become insignificant in the grand scheme?
It’s also possible that human economists in the future will work more on overseeing and guiding AGI models rather than being outright replaced. I remember Noah Smith mentioning that AGI could turn many of us into managers (I could see academics essentially becoming 'knowledge managers') rather than eliminating jobs in the field, which seems to be the presumed outcome here.
I don’t think the UBI is going to happen, at least not immediately. So job security probably matters somewhat.