When it comes to AI’s potential future impact on jobs, Camp Automation tends to jump to the conclusion that most jobs will be automated away into oblivion. The progressive arm of Camp Automation then argues for the need for versions of universal basic income and other social services to ensure survival in a job-less world. Of course, this being the US… most in Camp Automation tend to panic and refuse to engage with how their views might intersect with late-stage capitalism, structural inequality, xenophobia, and political polarization.
The counterweight to Camp Automation is Camp Augmentation, with which I am far more analytically aligned. Some come to Camp Augmentation because they think that Camp Automation is absolutely nutsoid. But there are also plenty of folks who have studied enough history to have watched how fantasies of automation repeatedly turn into an augmented reality sans ugly headwear.
Mixed into Camp Automation and Camp Augmentation is a cultural panic about what it means to be human anyways. I find this existential angst-ing exhausting for its failure to recognize that this question has always been at the core of philosophy. It’s also a bit worrying given how most attempts throughout history to resolve this have involved inventing new religions. Oh, the ripples of history.
While getting into what it means to be human is likely to be a topic of a later blog post, I want to take a moment to think about the future of work. Camp Automation sees the sky as falling. Camp Augmentation is more focused on how things will just change. If we take Camp Augmentation’s stance, the next question is: what changes should we interrogate more deeply? The first instinct is to focus on how changes can lead to an increase in inequality. This is indeed the most important kind of analysis to be done. But I want to noodle around for a moment with a different issue: deskilling.
Moral Crumple Zones
Years ago, Madeleine Elish decided to make sense of the history of automation in flying. In the 1970s, technical experts built a tool that made flying safer, a tool that we now know as autopilot. The question on the table for the Federal Aviation Administration and Congress was: should we allow self-flying planes? In short, folks decided that a navigator didn’t need to be in the cockpit, but that all planes should be flown by a pilot and copilot who should be equipped to step in and take over from the machine if all went wrong. Humans in the loop.
Think about that for a second. It sounds reasonable. We trust humans to be more thoughtful. But what human is capable of taking over from a failing machine during a high-stakes situation? In practice, most humans took over and couldn’t help the plane recover. The planes crashed and the humans got blamed for not picking up the pieces left behind by the machine. This is what Madeleine calls the “moral crumple zone.” Humans were placed into the loop in the worst possible ways.
This position for the pilots and copilots gets even dicier when we think about their skilling. Pilots train extensively to fly a plane. And then they get those jobs, where their “real” job is to babysit a machine. What does that mean in practice? It means that they’re deskilled on the job. It means that the pilots at the front of every commercial plane become less skilled, less capable of taking over from the machine as the years go by. We depend structurally on autopilot more and more. Boeing took this to the next level with the 737 MAX, where software could override the pilots, to deadly effect.
To appreciate this in full force, consider what happened when Charles “Sully” Sullenberger III landed a plane in the Hudson River in 2009. Sully wasn’t just any pilot. In his off-time, he trained commercial pilots to fly when their equipment failed. Sully was perhaps the best-positioned pilot out there to take over from a failing system. But he didn’t just have to override his equipment — he had to override the air traffic controllers. They wanted him to go to Teterboro. Their models suggested he could make it. He concluded he couldn’t. He chose to land the plane in the Hudson instead.
Had Sully died, he would’ve been blamed for insubordination and “pilot error.” But he lived. And so he became an American hero. He also became a case study because his decision to override air traffic control turned out to be justified. He wouldn’t have made it. Moreover, computer systems that he couldn’t override prevented him from achieving a softer impact.
Sully is an anomaly. He’s a pilot who hasn’t been deskilled on the job. Not even a little bit. But that’s not the case for most pilots.
And so here’s my question for our AI futures: How are we going to prepare for deskilling on the job?
How are Skills Developed?
My grandfather was a pilot for the Royal Air Force. When he signed up for the job, he didn’t know how to fly. Of course not. He was taught on the job. And throughout his career, he was taught a whole slew of things on the job. Training was an integral part of professional development in his career trajectory. He was shipped off for extended periods for management training.
Today, you are expected to come to most jobs with skills because employers don’t see the point of training you on the job. This helps explain a lot of places where we have serious gaps in talent and opportunity. No one can imagine a nurse trained on the job. But sadly, we don’t even build many structures to create software engineers on the job.
However, there are plenty of places where you are socialized into a profession through menial labor. Consider the legal profession. The work that young lawyers do is junk labor. It is dreadfully boring and doesn’t require a law degree. Moreover, a lot of it is automate-able in ways that would reduce the need for young lawyers. But what does it do to the legal field to not have that training? What do new training pipelines look like? We may be fine with deskilling junior lawyers now, but how do we generate future legal professionals who do the work that machines can’t do?
This is also a challenge in education. Congratulations, students: you now have tools at your disposal that can help you cut corners in new ways (or outright cheat). But what if we deskill young people through technology? How do we help them make the leap into professions that require more advanced skills?
There’s also a delicate balance regarding skills here. I remember a surgeon telling me that you wanted to get scheduled surgery on a Tuesday. Why? Because on Monday, a surgeon is refreshed but a tad bit rusty. By Tuesday, they’re back in the groove but not yet exhausted. Moreover, there was a fine line between practice and exhaustion: the more surgeries that surgeons are expected to do each week, the more of them they’ll perform badly. (Whether that holds up to evidence-based scrutiny, I don’t know, but it seems like a sensible myth of the profession.)
Seeing Beyond Efficiency
Efficiency isn’t simply about maximizing throughput. It’s about finding the optimum balance between quality and quantity. I’m super intrigued by professions that use junk work as a buffer here. Filling out documentation is junk work. Doctors might not have to do that in a future scenario. But is the answer to schedule more surgeries? Or is the answer to let doctors have more downtime? Much to my chagrin, whenever we introduce new technologies, we tend to optimize towards more intense work schedules while downgrading the status of the highly skilled person. Why? And at what cost?
The flipside is also true. When highly trained professionals now babysit machines, they lose their skills. Retaining skills requires practice. How do we ensure that those skills are not lost? If we expect humans to be able to take over from machines during crucial moments, those humans must retain strong skills. Loss of knowledge has serious consequences locally and systemically. (See: loss of manufacturing knowledge in the US right now…)
There are many questions to be asked about the future of work with new technologies on the horizon, many of which are floating around right now. Asking questions about structural inequity is undoubtedly top priority, but I also want us to ask questions about what it means to skill — and deskill — on the job going forward.
Whether you are in Camp Augmentation or Camp Automation, it’s really important to look holistically at how skills and jobs fit into society. Even if you dream of automating away all of the jobs, consider what happens on the other side. How do you ensure a future with highly skilled people? This is a lesson that too many war-torn countries have learned the hard way. I’m not worried about the coming dawn of the Terminator, but I am worried that we will use AI to wage war on our own labor forces in pursuit of efficiency. As with all wars, it’s the unintended consequences that will matter most. Who is thinking about the ripple effects of those choices?