In October, I gave a short talk titled “Transformative AI, risk, and you” to over 300 freshmen studying Computer Science in an intro seminar class at Georgia Tech. The talk asked them to extrapolate AI progress and internalize that AI will very soon impact their careers and lives in ways (perhaps extremely negative ways) they should start thinking about now.

It seemed well-received, and recently I was interviewed by Jason Armesto at the AJC about the talk and how college students are thinking about job loss from AI. The article came out well - it does a good job of capturing lots of different perspectives and getting across the varied vibes on campus. There was one quotation in the article from Prof. David Joyner that I think is worth responding to in full, because I hear this argument a lot:

“Technology historically has always created more jobs than it’s replaced,” said Joyner, who sometimes finds himself trying to reassure students. “Look at all the jobs technology has displaced in the past, and yet the fields that these technologies were in continued to grow, and new jobs continued to come out on top of them.”

The economic counterargument is pretty simple: AI will be a very close substitute for human labor. Automating lots of knowledge work will increase unemployment in the short-to-medium term, especially if that automation happens much faster than labor reallocation, which seems likely. In the long term, there is no reason why AI can’t outcompete humans on most economically valuable tasks across most sectors, meaning unemployment will stay high.

I find that even when folks agree, the argument doesn’t really stick, so here are some frames for internalizing it:

  1. There’s no rule that says previous technologies are like future technologies. Previous technologies, like the computer, haven’t led to widespread unemployment because humans still had some comparative advantage. Computers couldn’t code themselves or generate their own infrastructure. This may not be true of AI.
  2. AI is not a normal technology. It’s highly general and scalable, and it autonomously creates and executes plans. A calculator can’t decide to also do your email. Excel doesn’t, unprompted, learn to handle your customer service calls. AI is not a tool; it is best thought of as an entity - an agent - that can do work the same way a human can, just much better in some areas and (for now) much worse in others.
  3. OpenAI’s explicit goal is to build “highly autonomous systems that outperform humans at most economically valuable work.” Employment exists only for economically valuable work. What will we do when we’re outcompeted by something that doesn’t need health insurance and can run 100x faster? Yes, there will be new jobs, but there are trillions of dollars in capital trying to automate those as well.

Maybe this happens in 5 years, or maybe this happens in 20. Either way, students should be planning for this, and academics should be aware of the implications.