It's wild that there are as many jobs in the category "Top Executives" as in the category "Retail Sales Worker".
This makes sense given both automation and the US's role in the global economy, but it runs somewhat contrary to standard ideas of class and inequality.
That category has a median pay of $105,350, and includes "general and operations managers" as well as "chief executives". I assume it includes executives of very small enterprises.
Remember that tech exec salaries are extreme outliers. I worked for an exec in manufacturing. He had full P&L responsibility for a business segment with ~150 employees, $27 million in revenue at 40% gross margins, and a production plant. His total comp was ~$300k.
Now just think of the comp levels in sectors like government, education, etc.
If AI produces surplus where does it go? Not talking about investment backed datacenter buildout and AI labs. Talking about the results of AI work...
I think AI outcomes distribute to contexts where it is used, and produce a change in how we work, what work we take on. Competition takes care of taking those surpluses and investing them in new structure, which becomes load bearing and we can't do without it anymore.
In the end it looks like we are treading water, just like when computers got a million times faster over a couple of decades but we felt very little improvement in earnings or reduction in work.
Surplus becomes structure, and the changed structure is something you can't function without. Like the cell and the mitochondrion: after they merged, they couldn't be apart anymore, couldn't pay their costs individually. Surplus is absorbed into the baseline cost.
> If AI produces surplus where does it go? Not talking about investment backed datacenter buildout and AI labs. Talking about the results of AI work...
The 1%'s pockets. That's where the vast majority of the extra productivity that computers/the internet/automation brought has gone for the last 50 years: https://www.epi.org/productivity-pay-gap/
> In the end it looks like we are treading water, just like it was when computers got 1M times faster in a couple of decades, but we felt very little improvement in earnings or reduction in work.
I think this is a very important point. The hedonic treadmill means real gains are discounted. The novelty information cycle is like an Osborne Effect for improvements, like the semi-annual Popular Mechanics flying-car covers, where an enticing future is perpetually nearly here and at the same time disappointingly never materializes.
If AI being a million billion zillion times more productive at doing bullshit jobs nets very little economic gain, then that lays bare the net economic value of all our bullshit jobs.
But given that the stock market hasn't panicked, this must mean at least one of these premises is false:
1. Economic activity is relatively flat.
2. AI makes us a million billion zillion times more productive than we used to be.
> lays bare the net economic value of all our bullshit jobs.
This was already obvious; the more important question is: what are we (collectively, society & our governments) going to do about it?
We (should have) already known most of our jobs were bullshit jobs, especially white collar jobs. The difference is now we might have something coming that will eliminate the bullshit jobs.
But society will always need bullshit jobs or the whole system collapses. Not everyone can go dig ditches, so what do we do?
I think it's gonna mirror how the white collar classes, coastal elites, professional managerial class, whatever you want to call them, sold the country's industrial base to the far east. They got a little bit of money out of it, but the biggest gains were in material wealth: $1 widgets instead of $2 widgets. All the people who weren't hurt by it got to live with more material plenty. Of course the nominal values of things didn't go down, but that's just inflation, which is a somewhat separate effect.
This time the jobs most in the crosshairs of AI are the ones that constitute the paper-pushing overhead of modern society. Instead of $1 widgets from China replacing $2 domestic widgets, it's gonna be $1 AI services replacing $2 services that require a real human.
This is hard to reason about because people tend to consume these kinds of services in big multi-hundred or multi-thousand dollar increments, but in practice what it means is that when you have to engage an accountant or an engineer, or have something planned out in accordance with some standard, it will be substantially cheaper because of the reduced professional labor component.
And of course, as usual, the string-pulling and investor class will get fabulously wealthy along the way.
The data comes from the BLS. Their data lags the true state of affairs, and their growth projections are never reliable. Remember when they touted, from 2000-2010, that actuary was the hottest-growing field with the best forward-looking outlook?
BLS forward looking guidance means nothing when technology revolutionizes the nature of work.
lol, I always wondered how actuary ever crossed my partner's radar in college, and this must have been it. They just finished up their FCAS cert and are riding quite high and quite comfy. But it is for sure a very small pool of people, just due to the immense work needed to get to that point.
Right, I'm a Computer Programmer but any job with that title is likely horrible. But having the title Software Engineer doesn't magically make me an engineer. All word games.
- Analyze users' needs and then design and develop software to meet those needs
- Recommend software upgrades for customers' existing programs and systems
- Design each piece of an application or system and plan how the pieces will work together
- Create a variety of models and diagrams showing programmers the software code needed for an application
- Ensure that a program continues to function normally through software maintenance and testing
- Document every aspect of an application or system as a reference for future maintenance and upgrades
Ignoring the sentence that admits they can be the same ("Programmers work closely with software developers, and in some businesses their duties overlap.").
A programmer is like a translator: somebody else came up with what to do, and you're doing the mechanical work of converting words into C++.
It may look the same from the worker's side, but if you're a corporation hiring an H-1B worker, the difference between "computer programmer" and "software developer" is a notable difference in the budget line items.
The BLS classifies them as different roles. In essence: Software developers plan, computer programmers implement. Which in many cases might be the same person, but it has always been true that one person can hold multiple jobs.
They're saying that programmers will be declining, while developers and, crucially, testers and QA people will be increasing. That testers and QA become more important sounds plausible to me in a hypothetical future world of ubiquitous AI.
All of that doesn't necessarily imply that the Developer class of employees will grow at the same rate as the Tester and QA classes of employees.
Interestingly, it seems from these statistics that the median wage for individuals with a Master's is lower than for those with a Bachelor's. I wonder if that's because immigrants who pursue higher education for visa reasons skew the data.
Anecdotally, many people get a bachelor's degree to check a box for job applications, whereas many people get a master's degree because they love the field and/or are afraid to leave school.
My friends and I who have a bachelor's degree in CS make more money than my friends who have or are working towards master's degrees in CS, because the former are working in the private sector and the latter are in academia making peanuts.
Another possible reason could be that many or most Master's degrees don't confer additional pricing power, and those people's Bachelor's degrees also conferred lower pricing power.
Edit: Another possible reason is that Master's degrees were less common in the past, so the Bachelor's pay statistics skew towards people with more work experience in their higher-earning years, whereas the Master's pay statistics skew towards younger people with less work experience.
A Master's seems to be a common theme in a few lower-paying but large fields like social work and education. I don't think someone with a Master's is typically making less in the same field, all else equal.
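The composition effect described above can be sketched numerically (all salary figures below are hypothetical, chosen only to illustrate the mechanism): within each field the Master's holders earn more, yet the pooled Master's median comes out lower because Master's degrees cluster in the lower-paying field.

```python
from statistics import median

# Hypothetical $k/year salaries. Within each field, Master's > Bachelor's.
software_ba, software_ma = [110, 120, 130], [120, 130, 140]
education_ba, education_ma = [45, 50, 55], [55, 60, 65]

# Suppose Bachelor's holders cluster in software and Master's in education
# (weights are illustrative, not BLS data).
bachelors = software_ba * 3 + education_ba
masters = software_ma + education_ma * 3

assert median(software_ma) > median(software_ba)    # Master's wins in-field
assert median(education_ma) > median(education_ba)  # Master's wins in-field
assert median(masters) < median(bachelors)          # yet loses overall
```

This is just Simpson's paradox applied to degree-level medians: the in-field comparison and the pooled comparison can point in opposite directions.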
My takeaway here: the $3.X trillion of US salaries is the TAM for AI companies.
Apple, a very successful company, makes roughly $300B/year in revenue.
Capturing ~10% of that TAM is all you need to be Apple.
And it can work either by taking 10% of the jobs entirely and collecting the whole salary (the AI employee -- a dubious proposition),
or by taking 10% of everyone's salary and automating part of everyone's job (the AI "tool" -- much more plausible).
If "part" being automated is >10%, we all win in the long run, every company gets productivity growth without cost growth, etc etc.
If you add in data center costs, and multiple competing AI companies, and then expand the TAM to all white collar work worldwide, you can make everyone successful beyond their wildest dreams with a "20% of work for 20% of the cost" model. Again, how you distribute that 20% remains to be seen (20% new unemployment, or 0% unemployment with "tools").
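The back-of-envelope math above can be written out explicitly (using a hypothetical $3 trillion salary base and the ~$300B/year Apple revenue figure cited above):

```python
salary_tam = 3.0e12      # hypothetical ~$3T US salary base, per the comment above
apple_revenue = 3.0e11   # Apple's ~$300B/year revenue, as cited above

# Path 1: the AI "employee" -- fully replace 10% of jobs, collect whole salaries.
revenue_replace = salary_tam * 0.10

# Path 2: the AI "tool" -- automate part of every job, charge 10% of each salary.
revenue_tool = salary_tam * 0.10

# Either path yields roughly one Apple's worth of revenue.
apples = revenue_replace / apple_revenue
```

Both paths hit the same revenue number; they differ only in how the 10% is distributed across workers, which is the unemployment-vs-tools question.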
The replace-work TAM is overstated because it fails to address transaction costs, which are astronomical when refactoring work and dislodging stakeholders with sunk costs. Coding is now the leading app for AI because it had already been factored to support division of labor, outsourcing, and remote work.
It's also understated, because the real value of AI is not in replacing work, but making new products possible either because it's finally cheap enough to make them, or because -- AI.
Your math is missing the fact that Apple products are differentiated from their competitors. If AI becomes a ubiquitous commodity, it's not worth 300B/y.
The bosses already hate their workers and are mad that they have to pay them a cent. Would they really accept paying another 10% on their wages to make their workers 10% more productive? When there is significant active competition between the providers of core models and huge pressure to reduce prices?
Frequently seen as a big fun number in pitch decks. "The TAM for our new Coca-Cola killer is $1.6T: all humans who imbibe liquids on a regular basis. You simply MUST invest."
Small business is the majority of employment. Think of an indie coffee shop: the person taking your order may very well technically be the CEO. So there are a lot of "top executives".
Are childcare and kindergarten teachers really exposed to AI? In theory, we could put a class of 30 children in front of chatbots with one supervisor, but I doubt we would choose to do this as a society. If office work becomes more automated, early childhood education is actually one area I'd expect to take up the slack. I can't imagine a situation where we have millions of unemployed former office workers but we leave them idle and let our children waste away in front of screens.
Childcare and education requires a specific tolerance, mindset and passion to be effective though. I'd be curious how many previously-PMs or HR drones or email jockeys would be adequate (let alone thrive) in an environment where there are next-to-nonexistent budgets, and you're servicing literal babies and tiny children lol
On second thought, client service folks might do extremely well here!
In which theory? If you can do anything "in theory", then there is no justifiable "but" or any excuse; the only problem is your own ability to realize it, or an unexpected situation. A theory is a fact, a proven hypothesis, with all its parts such as formulas, laws, or a force, as in the THEORY of gravitation. And no, you don't have one, and I assure you that you've never had a theory in your life.
There are a lot of education and curriculum companies pitching basically this: replace those "expensive" teachers with aides making minimum wage, since all they need to do is recite the curriculum and help students log in to be evaluated.
> You are an expert analyst evaluating how exposed different occupations are to AI. You will be given a detailed description of an occupation from the Bureau of Labor Statistics.
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
Are LLMs good at scoring? In my experience, using an LLM for scoring things usually produces arbitrary results. I'm surprised to see Karpathy employ it.
The fact that the LLM appears to never assign an actual 0 or 10 makes me suspicious. Especially when the prompt includes explicit examples of what counts as a 10.
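One cheap way to probe the "arbitrary results" worry above is to sample the same rating prompt many times and look at the spread. A minimal sketch, where `rate_occupation` is a hypothetical stand-in for a real LLM API call (here it just simulates a noisy 0-10 scorer):

```python
import random
from statistics import mean, stdev

def rate_occupation(description: str, seed: int) -> float:
    """Hypothetical stand-in for an LLM rating call; returns a noisy
    0-10 AI-exposure score. Swap in a real API call in practice."""
    rng = random.Random(seed)  # a real model's answer would depend on description
    return max(0.0, min(10.0, rng.gauss(6.5, 1.2)))

def score_with_spread(description: str, n: int = 20):
    """Sample the same rating prompt n times; report mean and spread."""
    samples = [rate_occupation(description, seed) for seed in range(n)]
    return mean(samples), stdev(samples)

m, s = score_with_spread("Bookkeeping, Accounting, and Auditing Clerks")
# A stdev that is large relative to the 0-10 scale would support the
# "arbitrary results" worry; a small one suggests the ranking is stable.
```

Averaging repeated samples at least turns a single arbitrary number into an estimate with an error bar, though it can't fix a systematic bias like never assigning 0 or 10.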
Insights from a real estate perspective: Most of the jobs that have the highest AI exposure are office jobs. Clerks, assistants, secretaries, software developers, bookkeepers, customer service, lawyers, etc. There has been a narrative the past couple years that office real estate was recovering as companies returned to office. If AI job losses materialize, it looks like there may be a second hit to that sector.
Like, IT helpdesk? Yes. Almost all of the tickets I create as a "knowledge worker" for my enterprise helpdesk are solved by an AI assistant: group ownership, adding me to an app, etc.
I'm colorblind as well, and what's fascinating to me is that this is the second AI-created chart in a week I've seen that I can't read. Surprisingly, I've found such aggressively colorblind-unfriendly charts to be far less common when created by humans.
I don't have any color discrimination deficiencies, but it is my understanding that for various types of signage, the move has been towards RED=bad/danger/etc, and BLUE (instead of green)=good/safe/etc.
For color deficiencies, different lightnesses are safe e.g. dark for loss and light for gain (could be dark reds for loss and light greens for gain, but don't mix the lightnesses). Other options are icons/shapes (like up/down arrows) or pattern fills (like stripes for loss).
The general trick is you can rely on differences in color lightness, patterns, text and icons, but not differences in color hue. The page should be usable in grayscale.
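The "usable in grayscale" rule above can be checked mechanically with the WCAG relative-luminance formula: a good loss/gain palette has a large lightness gap, while a hue-only palette does not. A small self-contained sketch (the hex colors are illustrative picks, not from the chart in question):

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.x relative luminance of an sRGB color like '#8b0000'."""
    def linearize(c8: int) -> float:
        c = c8 / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)

def contrast_ratio(c1: str, c2: str) -> float:
    hi, lo = sorted((relative_luminance(c1), relative_luminance(c2)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Dark red (loss) vs light green (gain): big lightness gap, survives grayscale.
good = contrast_ratio("#8b0000", "#b7e4c7")
# Mid red vs mid green of similar lightness: hue-only, fails for red-green CVD.
bad = contrast_ratio("#d62728", "#2ca02c")
assert good > bad
```

Running colors through a check like this (or simply viewing the chart in grayscale) catches the hue-only palettes that trip up colorblind readers.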
Does the LLM understand or consider "rent seeking"? Lots of high-paying jobs and entire industries seem to be propped up by those same people who already have the power.
Cool site and Andrej is the man. But the BLS data...
> Taxi Drivers, Shuttle Drivers, and Chauffeurs
> Overall employment of taxi drivers, shuttle drivers, and chauffeurs is projected to grow 9 percent from 2024 to 2034, much faster than the average for all occupations.
I'd like to see this (not sure if it already is) adjusted by total pay, i.e., # employed × average salary.
A -4.0% hit to cashiers may have less of an impact than -4.0% to lawyers or another category that is propping up the middle of the economy with spending.
It's the other way around. Cashiers spend their 4 percent, whereas the lawyers probably save it. Though of course the different median salaries for the two categories mean a 4 percent change is different in absolute dollars.
This (tech) career has proven to be so disappointing, and it's all the stuff around the actual work. I love working on computers.
Started my career in the decade of offshoring and didn't think we'd have anything close to an "AI" taking our jobs before we potentially unionized or had a government that would protect its labor force from being replaced by literal robots.
2020-2022 felt like the US tech ship was finally growing into something really great. All gone now.
When I worked in devops, I always worried that my job was automating away other engineers; it definitely had a "when will this come for me" feeling, because it really was. Now the dev and the ops are both getting automated away.
This is my first time looking at HN in practically a year. Tech is just so uninteresting to me now. Nobody is hiring SDE/SWE/SREs except for the problem makers, like Anthropic, Meta, etc. Anthropic has pages and pages of $300k-$600k roles open right now. But do you go help the rest of your colleagues lose their jobs?
I guess lets talk about kubernetes or something...
It's kinda cool to see a whole lot of otherwise intelligent people who are so dogmatically and ideologically opposed to anything AI that they're going to willfully dismiss anything that AI produces regardless of utility.
It's not great for them, but it's a definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit, instead of running an "AI bad" rubric as part of their filter.
AI has already won. It's taking over.
It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
Maybe it was linked from a comment somewhere on HN but just today I saw a post saying “Microwaves are the future of all food: if you don’t think so, you better get out of the kitchen”
Microwaves have already won. There will be a microwave in every home over the next few years.
Re: kitchen appliance analogies, I stand by my "AI is a dishwasher" analogy.
It's annoying that the dishes still have some pooled water in them when the cycle finishes; it doesn't always get everything perfectly clean; I have to know not to put the knives or the wooden stuff or anything fancy in it. But in spite of all of that, I use it every day, it's a huge productivity boost, and I'd hate to be without it.
And other people choose to wash dishes by hand and they're fine with it and not significantly less productive. The use of a dishwasher wasn't forced on everyone.
It is significantly less productive to hand wash dishes. But that’s fine to do manually if you wish for something that takes up maybe half an hour of your own time every several days. It’s not fine if washing dishes is your job. No company is going to hire an artisanal dish hand washer that refuses to use a dishwasher.
I can tell you that I didn't observe a single hand-wash-only holdout.
Perhaps such holdouts existed at a point, but a restaurant can only flatter the ego of their performatively-unproductive seniors for so long. Competition exists.
> Hand-washing dishes also, from what I understand, uses more energy and water than the dishwasher does.
Correct, more energy, detergent, and water. Dishwashers are more efficient than what you can do by hand because they effectively manage their water usage.
A modern dishwasher will use 3 to 4 gallons on a run. By comparison, my kitchen sink holds about 10 gallons of water on each side. When I wash by hand, I'll fill one side with soapy water and rinse each dish individually. Easily more than 10 gallons of water get used in the whole process.
Dishwashers are so efficient because they rinse everything off the dishes with about a gallon of water, drain it, then run a second pass with detergent that gets off the tougher food stains (another gallon), then rinse with a final gallon.
Dishwashers maximize getting food particulates into dirty water in a way that you can't really sanely do by hand.
Ten gallons to hand wash is crazy. I have and use a dishwasher but when I hand-wash I use maybe two gallons of straight hot water. I wash everything, give it a minimal rinse with the sprayer and then hand dry to remove any remaining soap suds or water.
If I hand wash, I wash as I go. It takes maybe 5 minutes to wash up dishes from breakfast or lunch, maybe a little more for a big dinner, maybe not.
Dishwashers let you accumulate dirty dishes for a day or two which is the real advantage in water savings. But I've noticed a lot of people pre-wash by hand and then load the dishwasher. I don't understand that, if I'm going to "pre-wash" anything I'll just wash it completely and put it away.
> A modern dishwasher will use 3 to 4 gallons on a run. By comparison, my kitchen sink holds about 10 gallons of water on each side. When I wash by hand, I'll fill one side with soapy water and rinse each dish individually. Easily more than 10 gallons of water get used in the whole process.
I'm pro-dishwasher, but you could use much less water handwashing.
If I don't have a dishwasher, my normal method is to stopper one side of my sink, squirt some dish soap on the first few dishes, and run just enough water to wet the dishes. Then I scrub some dishes, run the water (into the stoppered sink) just to rinse them as I transfer to the dish rack, then turn off the water and repeat. The dirtiest dishes that have the most food stuck on get done last so they get the most time soaking in the soapy rinse water from the rest of the dishes. I can do a full dishwasher load with one side of my sink maybe 1/4 full of water.
Time how long you run the sink while washing and rinsing. If you run it for more than 1.5 to 2 minutes, you've used more water than the dishwasher would have.
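The 1.5-to-2-minute break-even above follows directly from typical faucet flow rates. A quick check, assuming a common ~2 gallons/minute kitchen faucet and the 3-4 gallon dishwasher figure cited earlier in the thread:

```python
faucet_gpm = 2.0        # assumed typical kitchen faucet flow, gallons per minute
dishwasher_gal = 3.5    # midpoint of the 3-4 gallon dishwasher figure above

# Minutes of running tap water after which hand-washing uses more water
# than a full dishwasher cycle.
breakeven_minutes = dishwasher_gal / faucet_gpm  # 1.75 minutes
```

With a low-flow 1.5 gpm aerator the break-even stretches to about 2.3 minutes, which is why the basin-soak methods described above can still beat older dishwashers.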
This is in fact true (in the US at least), but part of why it is true is that people don't wash dishes the way they used to (with multiple bins of soapy + rinse water) and instead just run a bunch of hot water.
Modern high-efficiency dishwashers probably beat the most efficient humans now, but that's relatively recent and not a huge margin (and may not get the same results).
I use the time I spend to hand-wash my dishes as a time to pause and to let my mind wander.
Having the hands in water is soothing.
And its a pleasant feeling, where cleaning is part of the food workflow : I cook, I eat, I clean (the kitchen, the dishes, my teeth).
I hate home dishwashers: you have to play Tetris after each meal to fill them, trying not to get your hands/arms dirty, then you have to let it do the work, and now you have to spend a few minutes to get the dishes out and store them where they should be, even though most of them are not linked to a meal you just had.
Maybe worse, you could unload the dishwasher at a time completely unrelated to food, so that breaks the link.
On the other hand, having worked in restaurants, industrial dishwashers are awesome.
From my experience, restaurants hand-wash some stuff (anything that needs scrubbing such as cookware) and use dishwashers for light-soil service items (plates, glasses, cutlery). But these aren't dishwashers like you have at home. They run very hot water and complete a wash/rinse in just minutes.
This is a great analogy, because just like AI, microwaves are good for quick fixes, tasks where you don't really care about the quality and would rather minimise the effort.
A better analogy might be computers, self-driving cars, or humanoid robots, since unlike microwaves, they can actually improve. Meanwhile microwaves were more or less the same since their invention.
I know it's not the point of the comment, but it's a bit of a flawed analogy. Microwaves have won to a large extent, such that people without them are a bit of an oddity, and cooking with an oven is more of a special-occasion thing than the default cooking method it used to be.
> cooking with an oven is more of a special occasion thing than the default cooking method that it was before.
This is an incredible self-report. If you consider microwaved meals to be your default method of cooking and not something primarily for reheating leftovers or defrosting frozen meat, I sincerely hope you've gotten your cholesterol and blood pressure checked recently. That is not normal.
Not to mention the amount of plastic they're adding to their body and the amount of trash they're creating. I know cooking for one can be arduous, but meal prep is a thing.
I haven't used my oven since buying a counter top air fryer (and a sous vide) a couple years ago. I can't think of a single reason why anyone needs a full size oven on a daily basis unless you're cooking for a large family.
Owning a counter top air fryer requires you to have enough counter space for one, I have been in kitchens where there is an oven built into the stove but counter space is at a premium.
I’d also say that while I like my air fryer oven, I would prefer to do some of the bigger things like a whole bird in the oven. It’s cheaper to buy a whole bird for meal prep.
I’m from northern Europe. I might use the micro to heat up leftovers or a cup of water for tea or whatever in a pinch, but in this household (and at all my friends’), the stove and the oven cooks the food. I know literally no-one who could say they cook most meals in the micro.
I didn’t have a microwave oven before we bought a house. It took up too much space to justify, for such a relatively rarely-used appliance.
Same. Microwave is mainly used for defrosting or warming up leftovers. Maybe baking a potato in a rush, it works and it's faster but it's not as good as oven-baked.
Seems like a lot of people are dunking on this comment with anecdata.
Thankfully there is real data if we want to know how microwaves are used. Survey below says they are used a bit more than ovens, but half as much as cooktops/stoves. Varies by cohort and meal.
Most houses still have ovens. Microwaves are pretty widespread as well. But, their main job is to warm up food which was cooked in an oven (either locally or at a centralized oven in a food manufacturing factory). Microwave and ovens are mostly complementary tools.
Although, the analogy seems sort of useless, in that the food preparation ecosystem is really not any less complex than the program creation ecosystem, so it doesn’t offer any simplification.
When I had neither I found it convenient to buy a small oven - the size of a microwave. It performs both functions. It doesn't reheat things as quickly as a microwave.
I've lived without a microwave for a long time and it's only a little bit inconvenient because things take longer to reheat.
> and cooking with an oven is more of a special occasion thing than the default cooking method that it was before.
That really only makes sense for households with a toaster oven, single adults, childless couples, and retired people. A toaster oven makes a lot more sense for small meals, in part because it can heat up much faster than a full oven.
Otherwise, a daily family meal isn't a special occasion.
Ovens are a special occasion thing in my house because our oven is huge and I can usually do the same thing in the air fryer, which is just a small convection oven.
There’s a bit of irony here. A lot of commercial kitchens already rely heavily on microwaves and rapid heating equipment. In many restaurants the microwave is a very important tool in the workflow rather than something unusual. Do your friends not eat out much?
Sort of, although there's important nuance. One would be surprised how often microwaves get used in proper commercial kitchens, as in places making their own food and not reheating stuff from a central commissary. But it's not being used in the way one likely pictures when they hear this. An example is that microwaves are great for par-cooking vegetables, especially potatoes.
> [...] and cooking with an oven is more of a special occasion thing than the default cooking method that it was before.
Not true in my household, in my parent's, in my in-laws, or any of my closest friends'. And none of us are cooks, so it's not a niche thing.
I'm sure in a lot of households the microwave oven is the primary form of cooking, but it's important to look outside the bubble before reporting trends.
It’s a great analogy because it is something that is everywhere, that everyone does use from time to time, but the idea that it magically displaces everything forever (with no downsides) is naively optimistic
(The original phrase was not just made up, it was sourced from actual news articles and marketing about microwave ovens, that’s why it feels relevant to a hype cycle like this)
You also see this kind of naive optimism if you go look at illustrations from the early 1900s. People believed everything would eventually be a machine: that a machine would feed you, wake you up in the morning, physically move everything within your home etc. And yeah those things are possible to do, but in reality they aren’t practical and we do not actually use machines to do everything because it has costs
So, you know how people talk about AIs as dumb pattern matchers?
So, you know how looking at one pattern and then just saying "this one will be like that one?" without considering the similarities and differences is similar to what people complain about AIs doing?
Consider: Unlike my Microwave, Claude can work on Claude. Unlike my Microwave, Claude gets better at more things. Unlike my microwave, we do not know what causes Claude to work so well. My Microwave cannot improve the process that makes my microwave.
Also, um.
I'm not sure if you noticed?
But machines are everywhere.
I'm typing on one while another one (a microwave, in fact!) heats my breakfast, while another one washes my clothes, while another one vacuums my floor, while another one purifies the air in my room, while another one heats the air in my room, while another one monitors my doors and windows for unauthorized entry and another one keeps my food cool and another one pumps the Radon gas out of my basement and another one scoops my cat's poop.
> I'm typing on one while another one (a microwave, in fact!) heats my breakfast, while another one washes my clothes, while another one vacuums my floor, while another one purifies the air in my room, while another one heats the air in my room, while another one monitors my doors and windows for unauthorized entry and another one keeps my food cool and another one pumps the Radon gas out of my basement and another one scoops my cat's poop.
You’re kind of missing the point a bit. Yes, machines are everywhere but the details are very different.
The machines don’t magically do that stuff for you. You have to buy them, plug them in, turn them on and off. Lots of people don’t have any at all. They can’t do most things unsupervised. There are still lots and lots of tasks for which a machine exists, that people will still do entirely manually
There is a naivety to these predictions that is chipped away by the mundane details of having to exist in the real world. Cost, effort etc
AI being bad isn't in conflict with AI winning or taking over. I think all of those things are true. I think what we currently call social media is bad. And it's won. No conflict there either.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
No, AI has not "already" won. And phrasing it as you do, "It's taking over. It might be a year or two, or five, or ten" is an admission of that.
People may indeed not pause, but there's never any guarantee that the next step of progress is possible; whatever we reach may be all we can do, and we'll only find out when we get there. Or it might go hyperbolic and give us everything.
I'm not certain, but I suspect Jevons paradox is probably the wrong thing to bring up here; that's about cheaper stuff revealing more latent demand. Sure, that's possible: it may reveal a latent demand for everyone to build their own 1:1 scale model of the USS Enterprise (any of them) as a personal home. But we may also find that AI ends the economic incentives for consumerism, which in turn removes a big driver to constantly have more stuff, and demand goes down to something closer to a home being a living yurt made out of genetically modified photovoltaic vines that also give us unlimited free food.
(I mean, if we're talking about the AI future, why not push it?)
What I do think is worth bringing up is comparative advantage. Again, this is just an "I think", I'm absolutely not certain here, but if AI can supply all demand at unlimited volumes*, I think the assumptions behind comparative advantage break.
> It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
Yes, and I think they've also not even managed to figure out the internet yet.
* And AI may well be able to, even if all models collectively "only" reach the equivalent of a fully-rounded human of IQ 115. Yes, I know IQ tests are dodgy, but we all know what they approximate; by "fully rounded" I mean the thing their steel-man form tries to approach, not test passing itself, which would have AI already beating that IQ score despite struggling with handling plates in a dishwasher.
Ah, the classic, forever-untestable "it's just around the corner" hypothesis.
I've lived through multiple "it's gonna be over in 12-18 months" arguments since November 2022. It's a truism for any technology to say that it's going to get better over time. But if you're convinced that "AI has already won", why not make a specific prediction? What jobs are going to be obsolete by when?
There are a lot of things that people were saying were fads that ended up being fads. There are also a lot of things that people were saying were fads that weren't. Nobody knows. Anyone who confidently says "AI is inevitable" or "AI is just a fad" is full of shit. They don't have a crystal ball, and they don't know what the future holds.
Just because it was wrong once doesn't mean it's always wrong. And was it really that wrong? The internet is great, but would it be the worst thing in the world if we didn't live our lives around it?
I don’t doubt the intelligence of the OP, though I question their wisdom, and I doubt they know how to surf. They are more or less correct in their assessment of the current state of things and where things are heading, but this would entail a significant existential risk. Having a natural aversion to our own destruction is probably a sensible approach going forward.
again, grateful for the better words :) it's funny, I'm pretty charismatic in my community spaces IRL, but I constantly displease the HN hivemind
i think i need more patience -- i seem to fall into a certain tone due to my low expectations, and it's likely a self-fulfilling process which i am complicit in
>It's kinda cool to see a whole lot of otherwise intelligent people who are so dogmatically and ideologically opposed to anything AI that they're going to willfully dismiss anything that AI produces regardless of utility.
You'd probably put me into that bucket, although I'd disagree. I'm not at all against using AI to do something like: type up a high level summary of a product featureset for an executive that doesn't require deep technical accuracy.
What I AM against is: "summarize these million datapoints into an output I can consume".
Why? Because the number of times I've already witnessed, in the last year, someone using AI to build out their QBR deck or financial forecast, only to find out the AI completely hallucinated the numbers, makes my brain break. If I can't trust it to build an accurate graph of hard numbers without literally double-checking all of its work, why would I bother in the first place?
In the same way, if you tell me you've got this amazing dataset that AI has built for you, my first thought is: I trust that about as much as the Iraqi Information Minister, because I've seen first hand the garbage output from supposedly the best AI platforms in the world.
*And to be clear: I absolutely think businesses across the board are replacing people with AI, and they can do so. And I also think it'll take 18+ months for someone to start asking questions only for them to figure out they've been directing the future of their company on garbage numbers that don't reflect reality.
Asking an LLM to analyze data directly doesn’t work. But they’re great at writing scripts to analyze (and visualize) data. Anthropic just figured this out last week and gave Claude a mode that does that for you.
This. I only ask LLMs to summarize non-critical stuff, i.e. just give me a general summary of all the work done over the past week.
If I were in need of hard analytics you can be damn sure I'd have it build a tool with a solid suite of tests following a rigorous process to ensure the outputs are sound. That's the difference between engineering and vibing.
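The difference between "ask the model for the numbers" and "have it build a testable tool" can be made concrete. A minimal sketch, assuming illustrative data (the `summarize` function and `amount` field are hypothetical, not from any real pipeline): the LLM writes a deterministic script, and a small test suite pins its outputs to known inputs, so hallucinated numbers can't slip through.

```python
# Sketch: instead of asking an LLM "what do these datapoints say?", have it
# write a deterministic script whose outputs you can verify with tests.
from statistics import mean

def summarize(rows):
    """Compute verifiable aggregates from raw datapoints."""
    values = [r["amount"] for r in rows]
    return {
        "count": len(values),
        "total": round(sum(values), 2),
        "average": round(mean(values), 2),
    }

if __name__ == "__main__":
    # The test pins outputs to known inputs: the script either passes
    # or it doesn't, with no room for confabulated figures.
    sample = [{"amount": 100.0}, {"amount": 250.5}, {"amount": 49.5}]
    assert summarize(sample) == {"count": 3, "total": 400.0, "average": 133.33}
    print("all checks passed")
```

The model can still write a buggy script, but a bug is discoverable and fixable in a way a hallucinated summary is not.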
AI is great for searching, I'll give you that. And that itself is a big deal. In software development, there is also real value provided by AI if you use it for code reviews. But I am not sure how much it would be worth if you have to retrain a model with new information just to give better search results and code reviews...
Maybe that will be subsidized by all the people like you who want everything to be done by AI, so the rest of us can use it as a better search tool and for quick reviews... who knows!
He is talking about the same thing as you, no? As you point out, the more AI exposure (red), the more likely to have higher wages (green). Which suggests that those who are embracing AI are those who are thriving the most. Same as what he suggested.
Whether people are adopting AI or not, everybody doing the same kind of job gets the same number for exposure to AI.
You can claim that AI is creating a Jevons paradox situation and making companies hire like crazy the very people it nominally replaces. But then you would have to point to an instance of that happening, because it's clearly not there either.
Uh huh.. but the data in Andrej's visualizer is showing software development growth outlook is at 15% (much faster than average)
Over the past year (where Opus has supposedly changed the game), we're seeing ~10% more job postings for software developers compared to this time last year [1,2]
A huge amount of our work is not easily verifiable, therefore it's extremely hard to actually train an LLM to be better at it. It doesn't magically get better across the board.
AI HAS WON. SURF OR DROWN. YOU DONT KNOW WHATS COMING!!!?!?!
Stop with this doomer drivel. It's sick. It's not based in reality and all it does is stress innocent people out for no reason.
Ah, HN's favorite strawman: the "dogmatically and ideologically opposed to anything AI" person who, in my experience, largely doesn't exist.
However I was completely unimpressed with this tool when I saw it this weekend for two reasons:
The first is directly related to how this is built:
> These are rough LLM estimates, not rigorous predictions.
This visualization is neat (well except for reason number two), but it's pretty much just AI slop repackaged. There's no substance behind any of these predictions. Now I'm perfectly open to the critique that normal BLS predictions are also potentially slop, but I don't see how this is particularly valuable.
And the second: like 8% of the male population, I'm colorblind, so I can't read this chart.
For the record, I do agentic coding pretty much everyday, have shipped AI products, done work in AI research, etc.
Ironically, it's comments like yours that keep me the most skeptical. The fact that an attack on a strawman is the top comment really makes me feel like there is some sort of true mania here that I might even be a bit caught up in.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term.
I think AI is not going anywhere.
I also don't think the future will play out as you envision. AI is a very poor replacement for humans.
And I say this as a misanthrope who doesn't have a particular beef against AI.
I think a lot of the pushback comes down to your attitude. The way you're talking about AI is like how the crypto bros talked about bitcoin. Just being very insistent on your point of view is a red flag. Either you can present new data to convince people, or your insistence will just look like it's emotional rather than rational.
I use AI every day as part of my work, and it's very unclear to me where it's going; we have no idea if we're on an exponent or an S-curve. Now, normally people talk with conviction because they have more data. But one of the breakthroughs of crypto was this social convention of just having very strong opinions based on nothing. A lot of that culture has come over to AI.
Your comment typifies this, it's all about I need to get on board, AI has already won, you've got an advantage over me because you realise this.
Go back and look at the actual article you're commenting on. Did the AI analysis of job exposure provide anything of value? I'm not totally convinced it did, and you didn't even think about it. What critical thinking did you do about the data that came out of this dashboard?
What doesn’t make sense to me about the AI Inevitabilism Embrace Or Die trope is the idea that there's going to be a sudden trap door which will eliminate all the naysayers, avoidable only by Embracing. Because that doesn't cohere well with how autonomous AI is or will be.
I could understand if all the naysayers doing old fashioned stuff like work all of a sudden have no more work to do. But the AI Embracers will have what, in comparison? Five years of experience manipulating large language models that are smarter than them by a thousand fold?
> definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit
This cuts both ways...
> there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
What work do you think AI is going to replace? There are whole categories of people who are going to drown in the hubris of "AI being able to do the job" when it can't.
The moment one stops pretending that it's going to be AGI we're getting, and views it as another tool, the perspective changes. Strip away the hype and there is a LOT there... The walls of the garden are gonna get ripped down (agents force the web open, and create security issues). They end lots of dark patterns: you can't make your crappy service hard to cancel, because an agent is more persistent than that. One-size-fits-all software is going to face a reckoning (how many things are jammed into Salesforce sideways... that don't have to be). These things are existential threats to how our industry is TODAY, and no one seems to be talking about the impact on existing business models when the overhead of building software gets cut in half (and how it leads to more software, not less).
I'm very confused how you can put up such an obvious strawman, say all these wildly unsubstantiated things, and yet still get engagement. Who are you even talking to?
It's been several years and nothing has changed except the AI grift is crumbling as we get out of the post-covid slump.
It is free for you to say this, because if you're wrong, there will be no consequences. Words are cheap. No different from various CEOs saying "AI will replace these workers" and now having to hire back those they laid off (Klarna, Salesforce, etc.). This will be a great comment to reference in the future to capture the exuberance of the times.
> Some companies that announced large headcount reductions because of AI have since revised their talent strategies or have faced public criticism. Klarna, for example, the Swedish fintech that offers “buy now, pay later” e-commerce loans, reduced its human workforce by 40% between December 2022 and December 2024 as it invested in AI. (The company used a hiring freeze and natural attrition, not layoffs to achieve this cut.) But in 2025 the company’s CEO told Bloomberg that Klarna was reinvesting in human support, explaining that prioritizing lower costs had also led to “lower quality.” A spokesman told HBR that the company has hired about 20 people to deal with customer service cases the AI assistant can’t handle, and that the use of AI “changes the profile of the human agents you need in the customer support role.” The language-learning company Duolingo announced that AI would be used to replace many human contractors, and it faced considerable criticism on social media.
> For one, AI typically performs specific tasks and not entire jobs. As an example, Nobel laureate Geoffrey Hinton stated in 2016 that it was “completely obvious” that AI would outperform human radiologists within five years. A decade later, there is no evidence that a single radiologist has lost a job to AI—in part because radiologists perform many tasks other than reading scan images. Indeed, there is a substantial shortage of them.
* Companies are "AI washing" layoffs, blaming artificial intelligence for workforce reductions they would have made anyway, according to OpenAI CEO Sam Altman.
* A Resume.org survey found that 59% of hiring managers say they emphasize AI's role in layoffs because it "is viewed more favorably by stakeholders than saying layoffs or hiring freezes are driven by financial constraints".
* The stated reason for the layoff matters more than the fact of the layoff, and framing cuts as proactive restructuring around AI can result in a valuation boost, even if the technology doesn't actually work.
> The AI premium isn’t even reliable. By late 2025, Goldman Sachs group Inc. found that investors were actually punishing AI-attributed layoffs, with shares falling an average of 2%. The analysts concluded that investors simply didn’t believe the companies. But Block’s surge shows the incentive hasn’t vanished. It’s just a lottery instead of a sure thing. And executives keep buying tickets.
> The broader data confirms the gap between narrative and reality. A National Bureau of Economic Research study published in February surveyed thousands of C-suite executives across the US, UK, Germany and Australia. Almost 90% said AI had zero impact on employment over the past three years. Challenger, Gray & Christmas tracked 1.2 million layoffs in 2025, and AI was cited in fewer than 55,000 of them. That’s 4.5%. Plain old “market and economic conditions” accounted for four times as many.
So! Sophisticated capital market participants don't believe this; why do people here?
> You are an expert analyst evaluating how exposed different occupations are to AI. You will be given a detailed description of an occupation from the Bureau of Labor Statistics.
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
The sad part isn't that this is low-effort AI slop, but that intelligent people and policy makers are going to see it and probably make important decisions impacting themselves and others based on these numbers.
This is 99.44% slop! You are completely correct. The "exposure" is based entirely on vibes and does not correspond to observable reality. Down here in the real world the very first sector that is being disrupted is manual farm labor. They are out here with machine vision and quadcopters picking fruit. But according to the prompt that produces the treemap, manual labor has an exposure rank of zero.
> You can be an expert in one field, and have no idea what you're doing in another.
And for whatever reason a lot of people in startup/tech seem to have a huge Dunning-Kruger effect blind spot where they believe knowing a lot about one thing makes them an expert in everything.
This used to just be funny, but when it started to intersect with politics it began to actively contribute to destroying society. It isn't funny anymore.
(I don't think Karpathy's job data here is destroying society, this is a more generalized observation).
It is wild that y'all are hating on a website that visualizes data.
That's like table stakes standard common practice for software engineers for decades.
This is the equivalent of telling a designer that they can't create infographics on anything but principled design subjects -- or else they're out of line. Any research or data they might use isn't relevant because they're not experts? lol?
THIS:
> And for whatever reason a lot of people in startup/tech seem to have a huge Dunning-Kruger effect blind spot where they believe knowing a lot about one thing makes them an expert in everything.
It's especially(!) common among people who made an exit and are now "wealthy" -- sure, they can afford to have an opinion on everything, but very often they are just talking bullshit, thinking: "hey, I made it in field X, so why not try field Y?"
Especially the "MBA crowd" is famous for this: for whatever reason they think they are more intelligent than an engineer who filed a patent, e.g. (while most of the MBA bobos would fail just at acquiring all the documents required for that).
Other example: if you once wrote a book and it got traction, even if you are not a proven expert you will be invited to television shows etc. (and MORE than the people who are real experts with proven track records).
The VIEW could be AI slop, but underlying CONTENT has some meaning.
There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs, companies are squeezing every bit of dev slack time to produce more stuff with AI.
> The VIEW could be AI slop, but underlying CONTENT has some meaning. There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs
Is that notion supported by this content? The BLS Outlook for most software engineering jobs is most in the "much faster than average" growth range.
I'm not saying that your assessments are wrong. But you were talking about how valuable this content is, and I don't understand how the insight you claimed to get from the visualization ("There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs") could at all be discernible from the visualization.
BLS outlook is comically bad. For example, BLS had pharmacists' outlook as amazing all throughout the 2010s, while /r/pharmacy and sdnforums had a constant stream of posts complaining about declining pay and quality of life at work, all while the pharmacy business' profit margins and number of employers declined.
What would be useful is tracking the change in minimum pay per hour from legitimate job listings, now that there are quite a few states that require posting pay ranges on job listings.
This makes sense given both automation and the US's role in the global economy, but it runs somewhat contrary to standard ideas of class and inequality.
https://www.bls.gov/ooh/management/top-executives.htm
Apparently "top executive" median pay is $105,350 per year: https://www.bls.gov/ooh/management/top-executives.htm
Now just think of the comp levels in sectors like government, education, etc.
Can you elaborate?
I think AI outcomes distribute to contexts where it is used, and produce a change in how we work, what work we take on. Competition takes care of taking those surpluses and investing them in new structure, which becomes load bearing and we can't do without it anymore.
In the end it looks like we are treading water, just like it was when computers got 1M times faster in a couple of decades, but we felt very little improvement in earnings or reduction in work.
Surplus becomes structure and the changed structure is something you can't function without. Like the cell and mitochondrion, after they merged they can't be apart, can't pay their costs individually anymore. Surplus is absorbed into the baseline cost.
The 1% pockets, this is where the vast majority of the extra productivity computers/internet/automation brought goes to for the last 50 years: https://www.epi.org/productivity-pay-gap/
For a business, the question is whether you can make more money by doing more ambitious things.
Agriculture is a good example of that: http://www.johnhearfield.com/History/Breadt.htm
I think this is a very important point. The hedonic treadmill means real gains are discounted. The novelty information cycle is like an Osborne Effect for improvements, like the semi-annual Popular Mechanics flying-car covers, where there is an enticing future perpetually nearly here and at the same time disappointingly never materialized.
But given that the stock market hasn't panicked, this must mean at least one of these premises is false:
1. Economic activity is relatively flat.
2. AI makes us a million billion zillion times more productive than we used to be.
3. The stock market is rooted in reality.
This was already obvious, the more important question is what are we (collectively, society & our governments) going to do about it?
We (should have) already known most of our jobs were bullshit jobs, especially white collar jobs. The difference is now we might have something coming that will eliminate the bullshit jobs.
But society will always need bullshit jobs or the whole system collapses. Not everyone can go dig ditches, so what do we do?
This time the jobs most in the crosshairs of AI are the paper-pushing jobs that constitute the overhead of modern society. Instead of $1 widgets from China replacing $2 domestic widgets, it's gonna be $1 AI services replacing $2 services that require a real human.
This is hard to reason about because people tend to consume these kinds of services in big multi-hundred or multi-thousand dollar increments, but in practice what it means is that when you have to engage an accountant or engineer, or have something planned out in accordance with some standard, it will be substantially cheaper because of the reduced professional-labor component.
And of course, as usual, the string-pulling and investor class will get fabulously wealthy along the way.
BLS forward looking guidance means nothing when technology revolutionizes the nature of work.
Putting aside the slop facade placed atop the data... why would we trust the data?
Yay!
>Computer Programmers: -6%
Oh no
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Computer Programmers median pay according to BLS: $98,670 per year
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Software developers typically do the following:
- Analyze users’ needs and then design and develop software to meet those needs
- Recommend software upgrades for customers’ existing programs and systems
- Design each piece of an application or system and plan how the pieces will work together
- Create a variety of models and diagrams showing programmers the software code needed for an application
- Ensure that a program continues to function normally through software maintenance and testing
- Document every aspect of an application or system as a reference for future maintenance and upgrades
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Computer programmers typically do the following:
- Write programs in a variety of computer languages, such as C++ and Java
- Update and expand existing programs
- Test programs for errors and fix the faulty lines of computer code
- Create, modify, and test code or scripts in software that simplifies development
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
A programmer is like a translator: somebody else came up with what to do, and you're doing the mechanical work of converting words into C++.
A developer comes up with what to do.
Hence programmer is the lower-paid position.
Reason for hope
They're saying that programmers will be declining while developers, and crucially, testers and QA people, will be increasing. That testers and QA become more important sounds plausible to me in a hypothetical future world of ubiquitous AI.
All of that doesn't necessarily imply that the Developer class of employees will grow at the same rate as the Tester and QA classes of employees.
My friends and I who have a bachelor's degree in CS make more money than my friends who have or are working towards master's degrees in CS, because the former are working in the private sector and the latter are in academia making peanuts.
Edit: Another possible reason is that Master's degrees were less common in the past, so the Bachelor's pay statistics skew toward people with more work experience in their higher-earning years, whereas the Master's pay statistics skew toward younger people with less work experience.
Apple, a very successful company, makes 300B/y revenue? (ish)
~10% is all you need to be Apple.
And it can work either by taking 10% of the jobs entirely and collecting those whole salaries (the AI employee -- a dubious proposition),
or by taking 10% of everyone's salary and automating part of everyone's job (the AI "tool" -- much more plausible).
If "part" being automated is >10%, we all win in the long run, every company gets productivity growth without cost growth, etc etc.
If you add in data center costs, and multiple competing AI companies, and then expand the TAM to all white collar work worldwide, you can make everyone successful beyond their wildest dreams with a "20% of work for 20% of the cost" model. Again, how you distribute that 20% remains to be seen (20% new unemployment, or a new 0% unemployment with "tools").
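The two capture models above can be sketched as a back-of-the-envelope calculation. All of the inputs (workforce size, average salary) are assumptions for illustration, not sourced figures:

```python
# Back-of-the-envelope for the two capture models. Inputs are assumed
# round numbers, not sourced figures.
WORKERS = 100_000_000   # assumed addressable white-collar workforce
AVG_SALARY = 60_000     # assumed average salary, USD/yr

# Model A: the "AI employee" -- replace 10% of the jobs outright and
# collect those salaries whole.
employee_model = (WORKERS // 10) * AVG_SALARY

# Model B: the "AI tool" -- automate part of everyone's job and charge
# 10% of each salary.
tool_model = WORKERS * (AVG_SALARY // 10)

# Both routes reach the same revenue pool; they differ only in how the
# displaced work (and any unemployment) is distributed.
assert employee_model == tool_model
print(f"${employee_model // 10**9}B per year")  # → $600B per year
```

Under these assumed numbers, either route yields Apple-scale revenue, which is the point: the difference is distributional, not financial.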
I formalized my thoughts here: https://jodavaho.io/posts/ai-jobpocolypse.html
It's also understated, because the real value of AI is not in replacing work, but making new products possible either because it's finally cheap enough to make them, or because -- AI.
Potable water is far more important than AI or iPads ever will be, but the world's most valuable water company only does about 5B/year in revenue: https://en.wikipedia.org/wiki/American_Water_Works
Frequently seen as a big fun number in pitch decks. "The TAM for our new Coca-Cola killer is $1.6T: all humans who imbibe liquids on a regular basis. You simply MUST invest."
On second thought, client service folks might do extremely well here!
What you mention here is exactly why my earlier relationship went bust -- I didn't have any of these, and then the children arrived :-X
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
Are LLMs good at scoring? In my experience, using an LLM to score things usually produces arbitrary results. I'm surprised to see Karpathy employ it.
What's the outlook like?
Thank you!
Needs
- [utility] add filter by keyword / substring match; e.g. the majority of visualized blocks are unlabeled, requiring hovering with a mouse pointer
- [improve discovery] add sort by demographic / population impact; e.g. the largest block is 7M ('Hand laborers and movers') and is sorted to the bottom-left by default
The general trick is you can rely on differences in color lightness, patterns, text and icons, but not differences in color hue. The page should be usable in grayscale.
If you turn on the color filters in accessibility settings in macOS you can see what the contrast could look like to a colorblind person.
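The grayscale rule of thumb can be checked programmatically. A small sketch, using the BT.601 luma weights and two illustrative hex values (matplotlib's default red and green, chosen here as assumptions, not the chart's actual palette):

```python
# Sketch: test whether two hues survive conversion to grayscale, a rough
# proxy for red-green colorblind safety.

def grayscale(hex_color):
    """Perceived lightness via the BT.601 luma weights, range 0-255."""
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.299 * r + 0.587 * g + 0.114 * b

red, green = "#d62728", "#2ca02c"
delta = abs(grayscale(red) - grayscale(green))
# Both land in the mid-gray range; the gap is small relative to 0-255,
# which is why hue alone can't carry the encoding.
print(f"red={grayscale(red):.0f} green={grayscale(green):.0f} delta={delta:.0f}")
```

Here the two hues differ by only about 20 lightness steps out of 255, so a viewer who can't distinguish the hues gets almost no signal; pairing the colors with distinct lightness, patterns, or labels fixes that.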
Stand in front with a gun while mobs come to burn down the data center that took their jobs.
(I think I'm half joking).
1: https://www.businessinsider.com/robot-dogs-quadruped-data-ce...
> Taxi Drivers, Shuttle Drivers, and Chauffeurs
> Overall employment of taxi drivers, shuttle drivers, and chauffeurs is projected to grow 9 percent from 2024 to 2034, much faster than the average for all occupations.
...word?
A -4.0% hit to cashiers may have less of an impact than -4.0% to lawyers or another category that is propping up the middle of the economy with spending.
I guess that was to be expected...
Started my career in the decade of offshoring and didn't think we'd have anything close to an "AI" taking our jobs before we potentially unionized or had a government that would protect its labor force from being replaced by literal robots.
2020-2022 felt like the usa tech ship was finally growing into something really great. All gone now.
When I worked in devops I always worried that my job was automating away other engineers; it definitely had a "when will this come for me" feeling, because it really was coming. Now the dev and the ops are both getting automated away.
This is my first time looking at HN in practically a year. Tech is just so uninteresting to me now. Nobody is hiring SDE/SWE/SREs except for the problem makers, like Anthropic, Meta, etc. Anthropic has pages and pages of $300k-$600k roles open right now. But do you go help the rest of your colleagues lose their jobs?
I guess lets talk about kubernetes or something...
https://news.ycombinator.com/newsguidelines.html
It's not great for them, but it's a definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit, instead of running an "AI bad" rubric as part of their filter.
AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
Maybe it was linked from a comment somewhere on HN but just today I saw a post saying “Microwaves are the future of all food: if you don’t think so, you better get out of the kitchen”
Microwaves have already won. There will be a microwave in every home over the next few years.
It’s time to start microwave cooking or drown
It's annoying that the dishes still have some pooled water in them when the cycle finishes; it doesn't always get everything perfectly clean; I have to know not to put the knives or the wooden stuff or anything fancy in it. But in spite of all of that, I use it every day, it's a huge productivity boost, and I'd hate to be without it.
It is significantly less productive to do both, and yet…
I can tell you that I didn't observe a single hand-wash-only holdout.
Perhaps such holdouts existed at a point, but a restaurant can only flatter the ego of their performatively-unproductive seniors for so long. Competition exists.
Hand-washing dishes also, from what I understand, uses more energy and water than the dishwasher does.
Correct, more energy, detergent, and water. Dishwashers are more efficient than what you can do by hand because they effectively manage their water usage.
A modern dishwasher will use 3 to 4 gallons on a run. By comparison, my kitchen sink holds about 10 gallons of water on each side. When I wash by hand, I'll fill one side with soapy water and rinse each dish individually. Easily more than 10 gallons of water get used in the whole process.
Dishwashers are so efficient because they rinse everything off the dishes with about a gallon of water, drain it, then run a second pass with detergent that gets off the tougher food stains (another gallon), then rinse with a final gallon.
Dishwashers maximize getting food particulates into dirty water in a way that you can't really sanely do by hand.
If I hand wash, I wash as I go. It takes maybe 5 minutes to wash up dishes from breakfast or lunch, maybe a little more for a big dinner, maybe not.
Dishwashers let you accumulate dirty dishes for a day or two which is the real advantage in water savings. But I've noticed a lot of people pre-wash by hand and then load the dishwasher. I don't understand that, if I'm going to "pre-wash" anything I'll just wash it completely and put it away.
I'm pro-dishwasher, but you could use much less water handwashing.
If I don't have a dishwasher, my normal method is to stopper one side of my sink, squirt some dish soap on the first few dishes, and run just enough water to wet the dishes. Then I scrub some dishes, run the water (into the stoppered sink) just to rinse them as I transfer to the dish rack, then turn off the water and repeat. The dirtiest dishes that have the most food stuck on get done last so they get the most time soaking in the soapy rinse water from the rest of the dishes. I can do a full dishwasher load with one side of my sink maybe 1/4 full of water.
Modern high-efficiency dishwashers probably beat the most efficient humans now, but that's relatively recent and not a huge margin (and may not get the same results).
I use the time I spend to hand-wash my dishes as a time to pause and to let my mind wander. Having the hands in water is soothing.
And it's a pleasant feeling, where cleaning is part of the food workflow: I cook, I eat, I clean (the kitchen, the dishes, my teeth).
I hate home dishwashers: you have to play Tetris after each meal to fill them, trying not to get your hands/arms dirty, then you have to let it do the work, and now you have to spend a few minutes to get the dishes out and store them where they should be, even though most of them are not linked to a meal you just had. Maybe worse, you could unload the dishwasher at a time completely unrelated to food, so that breaks the link.
On the other hand, having worked in restaurants, industrial dishwashers are awesome.
Fridge OTOH, not so much.
LLMs require a lot more effort.
This is an incredible self-report. If you consider microwaved meals to be your default method of cooking and not something primarily for reheating leftovers or defrosting frozen meat, I sincerely hope you've gotten your cholesterol and blood pressure checked recently. That is not normal.
this is nuts! I use an oven every day, dude - so it's a special occasion, is it?
The default method for cooking is using an oven or a stove. Microwaving is for heating up leftovers, for the most part.
One of the dangers of people who are too close to programming is that they think of life as binary.
I’d also say that while I like my air fryer oven, I would prefer to do some of the bigger things like a whole bird in the oven. It’s cheaper to buy a whole bird for meal prep.
Or you're batch cooking
I’m from northern Europe. I might use the micro to heat up leftovers or a cup of water for tea or whatever in a pinch, but in this household (and at all my friends’), the stove and the oven cooks the food. I know literally no-one who could say they cook most meals in the micro.
I didn’t have a microwave oven before we bought a house. It took up too much space to justify, for such a relatively rarely-used appliance.
I think OP is just an outlier.
Thankfully there is real data if we want to know how microwaves are used. Survey below says they are used a bit more than ovens, but half as much as cooktops/stoves. Varies by cohort and meal.
Source: https://indoor.lbl.gov/publications/residential-cooking-beha...
Although, the analogy seems sort of useless, in that the food preparation ecosystem is really not any less complex than the program creation ecosystem, so it doesn’t offer any simplification.
I've lived without a microwave for a long time and it's only a little bit inconvenient because things take longer to reheat.
That really only makes sense for households with a toaster oven: single adults, childless couples, and retired people. A toaster oven makes a lot more sense for small meals, in part because it can heat up much faster than a full oven.
Otherwise, a daily family meal isn't a special occasion.
Ovens are a special occasion thing in my house because our oven is huge and I can usually do the same thing in the air fryer, which is just a small convection oven.
The food has been cooked in industrial ovens at the factory.
Not true in my household, in my parents', my in-laws', or any of my closest friends'. And none of us are cooks, so it's not a niche thing.
I'm sure in a lot of households the microwave oven is the primary form of cooking, but it's important to look outside the bubble before reporting trends.
You think "there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term" is wrong?
(The original phrase was not just made up, it was sourced from actual news articles and marketing about microwave ovens, that’s why it feels relevant to a hype cycle like this)
You also see this kind of naive optimism if you go look at illustrations from the early 1900s. People believed everything would eventually be a machine: that a machine would feed you, wake you up in the morning, physically move everything within your home etc. And yeah those things are possible to do, but in reality they aren’t practical and we do not actually use machines to do everything because it has costs
So, you know how looking at one pattern and just saying "this one will be like that one," without considering the similarities and differences, is exactly what people complain about AIs doing?
Consider: unlike my microwave, Claude can work on Claude. Unlike my microwave, Claude gets better at more things. Unlike my microwave, we do not know what causes Claude to work so well. My microwave cannot improve the process that makes my microwave.
Also, um.
I'm not sure if you noticed?
But machines are everywhere.
I'm typing on one while another one (a microwave, in fact!) heats my breakfast, while another one washes my clothes, while another one vacuums my floor, while another one purifies the air in my room, while another one heats the air in my room, while another one monitors my doors and windows for unauthorized entry and another one keeps my food cool and another one pumps the Radon gas out of my basement and another one scoops my cat's poop.
You’re kind of missing the point a bit. Yes, machines are everywhere but the details are very different.
The machines don’t magically do that stuff for you. You have to buy them, plug them in, turn them on and off. Lots of people don’t have any at all. They can’t do most things unsupervised. There are still lots and lots of tasks for which a machine exists, that people will still do entirely manually
There is a naivety to these predictions that is chipped away by the mundane details of having to exist in the real world. Cost, effort etc
No, AI has not "already" won. And phrasing it as you do, "It's taking over. It might be a year or two, or five, or ten" is an admission of that.
People may indeed not pause, but there's never any guarantee that the next step of progress is possible; whatever we reach may be all we can do, and we'll only find out when we get there. Or it might go hyperbolic and give us everything.
I'm not certain, but I suspect Jevons paradox is probably the wrong thing to bring up here; that's about cheaper stuff revealing more latent demand. And sure, that's possible: it may reveal a latent demand for everyone to build their own 1:1 scale model of the USS Enterprise (any of them) as a personal home. But we may also find that AI ends the economic incentives for consumerism, which in turn removes a big driver to constantly have more stuff, and demand goes down to something closer to a home being a living yurt made out of genetically modified photovoltaic vines that also give us unlimited free food.
(I mean, if we're talking about the AI future, why not push it?)
What I do think is worth bringing up is comparative advantage. Again, this is just an "I think," I'm absolutely not certain here, but if AI can supply all demand at unlimited volumes*, I think the assumptions behind comparative advantage break.
> It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
Yes, and I think they've also not even managed to figure out the internet yet.
* and AI may well be able to, even if all models collectively "only" reach the equivalent of a fully-rounded human of IQ 115; and yes I know IQ tests are dodgy, but we all know what they approximate, by "fully rounded" I mean that thing their steel-man form tries to approach, not test passing itself which would have the AI already beat that IQ score despite struggling with handling plates in a dishwasher.
Ah, the classic, forever-untestable "it's just around the corner" hypothesis.
I've lived through multiple "it's gonna be over in 12-18 months" arguments since November 2022. It's a truism for any technology to say that it's going to get better over time. But if you're convinced that "AI has already won", why not make a specific prediction? What jobs are going to be obsolete by when?
1. Brick and mortar is dead.
2. The internet will die.
3. What is the business model? (this one still seems to exist to this day to some extent, lol)
Reality fell between 1 and 2.
https://en.wikipedia.org/wiki/Eternal_September
Just because it was wrong once doesn't mean it's always wrong. And was it really that wrong? The internet is great, but would it be the worst thing in the world if we didn't live our lives around it?
Jevons paradox was never relevant to cognitive surplus. That isn't what it's about.
Cognitive surplus only strengthens Jevons paradox. Humans are a competitive advantage for businesses in a world dominated by human needs.
OP comment is not clever
i think i need more patience -- i seem to fall into a certain tone due to my low expectations, and it's likely a self-fulfilling process which i am complicit in
You'd probably put me into that bucket, although I'd disagree. I'm not at all against using AI to do something like: type up a high level summary of a product featureset for an executive that doesn't require deep technical accuracy.
What I AM against is: "summarize these million datapoints into an output I can consume".
Why? Because the number of times I've witnessed, in the last year alone, someone using AI to build out their QBR deck or financial forecast, only to find out the AI completely hallucinated the numbers, makes my brain break. If I can't trust it to build an accurate graph of hard numbers without literally double-checking all of its work, why would I bother in the first place?
In the same way, if you tell me you've got this amazing dataset that AI has built for you, my first thought is: I trust that about as much as the Iraqi Information Minister, because I've seen first hand the garbage output from supposedly the best AI platforms in the world.
*And to be clear: I absolutely think businesses across the board are replacing people with AI, and they can do so. And I also think it'll take 18+ months for someone to start asking questions only for them to figure out they've been directing the future of their company on garbage numbers that don't reflect reality.
If I were in need of hard analytics you can be damn sure I'd have it build a tool with a solid suite of tests following a rigorous process to ensure the outputs are sound. That's the difference between engineering and vibing.
Published AI generated code is a mild negative signal for quality, but certainly not a fatal one.
Published AI generated English writing is worthless and should be automatically ignored.
Could you elaborate on this? Is it just a claim, or is there some consensus out there based on something that it doesn't/shouldn't apply?
AI is great for searching, I'll give you that, and that itself is a big deal. In software development there is also real value in using AI for code reviews. But I am not sure how much it would be worth if you have to retrain a model with new information just to get better search results and code reviews.
Maybe that will be subsidized by all the people like you who want everything to be done by AI, so the rest of us can use it as a better search tool and for quick reviews... who knows!
a. "Has already won"
b. "Might be a year or two, or five, or ten"
So... What exactly are you talking about?
Whether people are adopting AI or not, everybody doing the same kind of job gets the same number for exposure to AI.
You can claim that AI is creating a Jevons paradox situation and making companies hire like crazy the very people it nominally replaces. But then you would have to point to an instance of that happening, because it's clearly not there either.
Over the past year (where Opus has supposedly changed the game), we're seeing ~10% more job postings for software developers compared to this time last year [1,2]
A huge amount of our work is not easily verifiable, therefore it's extremely hard to actually train an LLM to be better at it. It doesn't magically get better across the board.
AI HAS WON. SURF OR DROWN. YOU DONT KNOW WHATS COMING!!!?!?!
Stop with this doomer drivel. It's sick. It's not based in reality and all it does is stress innocent people out for no reason.
1: https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE
2: https://trueup.io/job-trend
However I was completely unimpressed with this tool when I saw it this weekend for two reasons:
The first is directly related to how this is built:
> These are rough LLM estimates, not rigorous predictions.
This visualization is neat (well except for reason number two), but it's pretty much just AI slop repackaged. There's no substance behind any of these predictions. Now I'm perfectly open to the critique that normal BLS predictions are also potentially slop, but I don't see how this is particularly valuable.
And the second: like 8% of the male population, I'm colorblind, so I can't read this chart.
For the record, I do agentic coding pretty much everyday, have shipped AI products, done work in AI research, etc.
Ironically, it's comments like yours that keep me the most skeptical. The fact that an attack on a strawman is the top comment really makes me feel like there is some sort of true mania here that I might even be a bit caught up in.
I think AI is not going anywhere.
I also don't think the future will play out as you envision. AI is a very poor replacement for humans.
And I say this as a misanthrope who doesn't have a particular beef against AI.
I use AI every day as part of my work, and it's very unclear to me where it's going; we have no idea if we're on an exponential or an S-curve. Normally, people talk with conviction because they have more data. But one of the "breakthroughs" of crypto was the social convention of holding very strong opinions based on nothing. A lot of that culture has come over to AI.
Your comment typifies this, it's all about I need to get on board, AI has already won, you've got an advantage over me because you realise this.
Go back and look at the actual article you're commenting on. Did the AI analysis of job exposure provide anything of value? I'm not totally convinced it did, and you didn't even think about it. What critical thinking did you do about the data that came out of this dashboard?
I could understand if all the naysayers doing old-fashioned stuff like work suddenly had no more work to do. But what will the AI embracers have in comparison? Five years of experience manipulating large language models that are smarter than them a thousandfold?
brainbroken by chatbots lmao
Man.. I suggest you touch some grass. You are living in a bubble.
This cuts both ways...
> there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
What work do you think AI is going to replace? There are whole categories of people who are going to drown in the hubris of "AI being able to do the job" when it can't.
The moment one stops pretending that we're getting AGI, and views it as just another tool, the perspective changes. Strip away the hype and there is a LOT there. The walls of the garden are going to get ripped down (agents force the web open, and create security issues). They end lots of dark patterns: you can't make your crappy service hard to cancel, because an agent is more persistent than that. One-size-fits-all software is going to face a reckoning (how many things are jammed into Salesforce sideways that don't have to be?). These things are existential threats to how our industry works TODAY, and no one seems to be talking about the impact on existing business models when the overhead of building software gets cut in half (and how that leads to more software, not less).
It's been several years and nothing has changed except the AI grift is crumbling as we get out of the post-covid slump.
Companies Are Laying Off Workers Because of AI’s Potential - Not Its Performance - https://news.ycombinator.com/item?id=47401368 - March 2026
> Some companies that announced large headcount reductions because of AI have since revised their talent strategies or have faced public criticism. Klarna, for example, the Swedish fintech that offers “buy now, pay later” e-commerce loans, reduced its human workforce by 40% between December 2022 and December 2024 as it invested in AI. (The company used a hiring freeze and natural attrition, not layoffs to achieve this cut.) But in 2025 the company’s CEO told Bloomberg that Klarna was reinvesting in human support, explaining that prioritizing lower costs had also led to “lower quality.” A spokesman told HBR that the company has hired about 20 people to deal with customer service cases the AI assistant can’t handle, and that the use of AI “changes the profile of the human agents you need in the customer support role.” The language-learning company Duolingo announced that AI would be used to replace many human contractors, and it faced considerable criticism on social media.
> For one, AI typically performs specific tasks and not entire jobs. As an example, Nobel laureate Geoffrey Hinton stated in 2016 that it was “completely obvious” that AI would outperform human radiologists within five years. A decade later, there is no evidence that a single radiologist has lost a job to AI—in part because radiologists perform many tasks other than reading scan images. Indeed, there is a substantial shortage of them.
The 'AI-Washing' of Job Cuts Is Corrosive and Confusing - https://news.ycombinator.com/item?id=47401499 - March 2026
* Companies are "AI washing" layoffs, blaming artificial intelligence for workforce reductions they would have made anyway, according to OpenAI CEO Sam Altman.
* A Resume.org survey found that 59% of hiring managers say they emphasize AI's role in layoffs because it "is viewed more favorably by stakeholders than saying layoffs or hiring freezes are driven by financial constraints".
* The stated reason for the layoff matters more than the fact of the layoff, and framing cuts as proactive restructuring around AI can result in a valuation boost, even if the technology doesn't actually work.
> The AI premium isn’t even reliable. By late 2025, Goldman Sachs group Inc. found that investors were actually punishing AI-attributed layoffs, with shares falling an average of 2%. The analysts concluded that investors simply didn’t believe the companies. But Block’s surge shows the incentive hasn’t vanished. It’s just a lottery instead of a sure thing. And executives keep buying tickets.
> The broader data confirms the gap between narrative and reality. A National Bureau of Economic Research study published in February surveyed thousands of C-suite executives across the US, UK, Germany and Australia. Almost 90% said AI had zero impact on employment over the past three years. Challenger, Gray & Christmas tracked 1.2 million layoffs in 2025, and AI was cited in fewer than 55,000 of them. That’s 4.5%. Plain old “market and economic conditions” accounted for four times as many.
So! Sophisticated capital market participants don't believe this; why do people here?
AI is making CEOs delusional [video] - https://www.youtube.com/watch?v=Q6nem-F8AG8
https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
The sad part isn't that this is low-effort AI slop, but that intelligent people and policy makers are going to see it and probably make important decisions impacting themselves and others based on these numbers.
All the "research" on the site comes from a single LLM prompt.
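For context, the methodology being criticized is roughly one prompt per occupation. A hypothetical sketch of what that pipeline looks like (the prompt wording beyond the quoted line, and the `ask_llm` stand-in, are my assumptions, not the site's actual code):

```python
# Hypothetical reconstruction of a one-prompt-per-occupation "research" pipeline.
# ask_llm is a stand-in for whatever chat-completion API call the site used.
def ask_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return "7"

def ai_exposure_score(occupation: str) -> int:
    prompt = (
        f"Occupation: {occupation}. "
        "Rate the occupation's overall AI Exposure on a scale from 0 to 10. "
        "Reply with a single integer."
    )
    raw = ask_llm(prompt)
    score = int(raw.strip())
    if not 0 <= score <= 10:
        raise ValueError(f"score out of range: {score}")
    return score

print(ai_exposure_score("Software Developers"))
```

Which is the point of the complaint: the entire dataset is one model's unvalidated guess per row, dressed up as a visualization.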
And for whatever reason a lot of people in startup/tech seem to have a huge Dunning-Kruger effect blind spot where they believe knowing a lot about one thing makes them an expert in everything.
This used to just be funny, but when it started to intersect with politics it began to actively contribute to destroying society. It isn't funny anymore.
(I don't think Karpathy's job data here is destroying society, this is a more generalized observation).
This is the equivalent of telling a designer that they can't create infographics on anything but design subjects -- or else they're out of line. Any research or data they might use isn't relevant because they're not experts? lol?
It is a website that visualizes the output of an LLM prompt and passes it off as data. Big difference between the two.
It's especially(!) common among people who made an exit and are now "wealthy" - sure, they can afford to have an opinion on everything, but very often they are just talking bullshit, thinking: "hey, I made it in field X, so why not try field Y?"
Especially the "MBA crowd" is famous for this: for whatever reason they think they are more intelligent than an engineer who filed a patent, e.g. (while most of the MBA bobos would fail just at acquiring the documents required for it).
Another example: if you once wrote a book and it got traction, even if you are not a proven expert, you will be invited to television shows etc. (and MORE often than the people who are real experts with a proven track record).
There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs, companies are squeezing every bit of dev slack time to produce more stuff with AI.
Is that notion supported by this content? The BLS outlook for most software engineering jobs is mostly in the "much faster than average" growth range.
* Yes, software engineering jobs can grow - coding agents unlock increased demand for custom software
* AI can still impact them - by turning software engineers into LLM code approvers
What would be useful is tracking the change in minimum pay per hour from legitimate job listings, now that there are quite a few states that require posting pay ranges on job listings.
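A minimal sketch of that idea (the listings data here is invented for illustration): group postings by month and track the lowest posted pay-range floor over time.

```python
from collections import defaultdict

# Invented sample postings: (month, posted_min_hourly, posted_max_hourly).
# Real data would come from scraping listings in pay-transparency states.
postings = [
    ("2025-01", 52.0, 78.0),
    ("2025-01", 48.5, 70.0),
    ("2025-02", 50.0, 75.0),
    ("2025-02", 45.0, 68.0),
]

floor_by_month = defaultdict(list)
for month, pay_min, _pay_max in postings:
    floor_by_month[month].append(pay_min)

# Lowest posted floor per month; a falling series would suggest
# downward pressure on entry-level pay.
trend = {m: min(v) for m, v in sorted(floor_by_month.items())}
print(trend)
```

Posted ranges are noisy (employers low-ball the floor), but unlike an LLM's 0-10 guess, it's at least grounded in observable behavior.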