Not all artificial intelligence is created equal. Founders of AI companies will need to build their services differently depending on whether their technology augments human capabilities (cranes) or replaces them (weaving looms).

"Robots are coming!" Yet again, a wave of fear. Will this AI help us or harm us? Will robots replace Hollywood scriptwriters, teachers in schools, accountants or illustrators or management consultants or all of us? This is a range of other impacts of AI: national security threats, runaway destroyers, high bias, etc. At the forefront of these concerns right now in our global conversation is the fear that AI will take away jobs.

It is reasonable to be afraid. Imagine a hardworking person who has spent years building a skilled career, only to find themselves and their colleagues out of work. Technology often takes away jobs, and while the overall economy improves and new jobs eventually seem to replace those lost, that is little comfort to someone who can no longer support their family.

And, at the same time, as Noah Smith points out, no one quite knows what it means for a job to be automated. We have an intuition that some technology harms workers, yet we still struggle to pinpoint exactly what kind of technology we do (and do not) want.

Some want to change government policy (for example, a guaranteed income so that basic existence is less tied to work) or build stronger institutions (unions and other forms of organized labor) to protect workers. As a startup investor who often supports tech founders at the moment they start building, I see another opportunity: AI creators can choose what kind of AI to create.

AI is not a force of nature, and different technologies have different effects on people. If we build technology that makes a person's effort more valuable (specifically, that raises their marginal productivity), then that person is likely to be paid more.

When AI predicts the structure of a protein, no one says, "Wait, that's my job," the way they do when it draws an illustration for a magazine cover. People worry less when AI routes an airplane accurately than when it drives a car.

For years, economists have tried to draw a distinction between technologies that replace human effort and those that augment it, and have called on tech creators to build more of the kind that enhances human effort.

So far, those pleas have had little effect. Tech creators remain enamored with the idea of a machine that can replace a person: witness the human shape of robots, and the ambition to build AI that can write like a human.

To be fair, it is hard to draw a clean line between technologies that replace humans and those that augment them. If I replace some of a person's tasks with AI (for example, by summarizing legal contracts), I have also augmented that person by freeing up their time. If I augment them by speeding up a task (as LLMs do for software engineers), I may need fewer people to perform that task.

Different types of augmentation also have different effects. Some simply make our work more convenient, while others give us capabilities we could never achieve on our own (for those old enough to remember, think of the gadgets built into Inspector Gadget).

So, I propose we (over)simplify the discussion and consider AI technologies in three categories:

  1. Weaving looms: these can often replace a person outright, as a fully automated loom can replace a weaver. For example, AI that handles customer-support troubleshooting, approves expenses, or drives a car.
  2. Slide rules: these assist a person, the way a slide rule makes a calculation faster (again, for those old enough to remember). Software tools that write first-draft code can speed up a developer's work, and grammar checkers can improve a person's writing.
  3. Cranes: these allow a person to do something they would be entirely unable to do on their own. For example, translating from one language to another, indexing millions of web pages and predicting where you want to click, discovering new molecules for medicine, or predicting the quality of applicants responding to job postings.

As builders and advocates of AI, we often seem preoccupied with looms (and, to a lesser extent, slide rules). Why? Looms flatter the self-image of tech creators: humans creating something that can replace themselves. Looms also solve easily recognizable problems and reduce one of the largest costs a company incurs, labor. When we are not imagining looms, we often imagine slide rules: in his essay "Why AI Will Save the World," nearly every one of the many examples Marc Andreessen gives is a slide rule, a world in which each person has an AI assistant (aside from passing references to curing all diseases and traveling between the stars).

And looms, slide rules, and cranes can sometimes blur together. Some technologies act as more than one type (for example, generative AI like ChatGPT can both do a task a high school student could do and synthesize more data than any human could read in a lifetime), and a technology built in one category can later expand into others.

***

How do we get more cranes? Founders of tech companies can simply decide to create different technologies, and to think about them this way. They can ask whether people today could do what they are building without technology, and focus on creating AI that gives people entirely new powers. They can stretch their imaginations to invent better ways of solving problems than any person has found. Why limit themselves to what a human can already do?

AI builders can also recognize that the different natures of looms, slide rules, and cranes call for building AI services in different ways. How people choose to use a technology, how companies pay for it and rearrange their organizations around it, and how we fund it can all vary significantly depending on whether the technology has loom, slide-rule, or crane elements.

Workers can also demand more cranes and use them to make their own work more effective. PhD students can shape their research accordingly. Academics can pose different problems to their students. Governments can decide what research to fund. (In fact, we shared this framework a few weeks ago with lawmakers in DC.)

We still want weaving looms, of course, and that is part of the problem.

If we want AI to help people in the workplace, consider creating more cranes and fewer weaving looms. And, as founders of companies, recognize that different AI services are different: you will need to build your company differently depending on how your technology interacts with the ones we actually care about, namely, people.

Note: For disclosure, some links here lead to companies in which Bloomberg Beta is an investor. Thank you to the many people who offered comments.
