There has been a lot of speculation lately about prompt engineering. People have started to claim that ‘prompt engineer’ is the job of our AI-enabled future. I don’t think this is correct. Prompt engineering is a skill. It will be an increasingly valuable one, but it will be subsumed into approximately all white collar knowledge work over the next few years. It will be a skill that you’re just assumed to have, akin to how being able to operate an email program or a spreadsheet or a browser is today. Skillfully interrogating large language models (LLMs), which is to say, skillfully engineering prompts, will be table stakes for any knowledge worker.
Consider this job post, from Anthropic:
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe for our customers and for society as a whole.
Anthropic’s AI technology is amongst the most capable and safe in the world. However, large language models are a new type of intelligence, and the art of instructing them in a way that delivers the best results is still in its infancy—it’s a hybrid between programming, instructing, and teaching. You will figure out the best methods of prompting our AI to accomplish a wide range of tasks, then document these methods to build up a library of tools and a set of tutorials that allows others to learn prompt engineering or simply find prompts that would be ideal for them.
Given that the field of prompt engineering is arguably less than 2 years old, this position is a bit hard to hire for! If you have existing projects that demonstrate prompt engineering on LLMs or image generation models, we’d love to see them. If you haven’t done much in the way of prompt engineering yet, you can best demonstrate your prompt engineering skills by spending some time experimenting with Claude or GPT-3 and showing that you’ve managed to get complex behaviors from a series of well-crafted prompts.
This is all very interesting, and I can understand why Anthropic is interested in hiring for this position. But to the extent that LLMs pervade and suffuse all knowledge work over the next several years, prompt engineering will become just another skill that any white collar knowledge worker will need to have in her quiver. It is not, by itself, a job.
New knowledge and skills spread quickly, and any competitive advantage that an advanced prompt engineer currently has will disappear as prompt engineering knowledge and skills become commoditized. Lawyers will be prompt engineers. Equity analysts will be prompt engineers. Sales people will be prompt engineers. Professors will be prompt engineers. Everyone whose job requires, in some form, the manipulation of language—words, symbols, etc.—will engineer prompts to get some LLM to generate output which helps that worker get some task done more efficiently than she could have prior to the wide availability of LLMs.
So when I read pieces like this one, which assert that prompt engineering is the ‘career of the future,’ I have to wonder how well such a prophecy will age. There’s nothing inherently wrong with the article, but its headline prediction doesn’t seem to match reality. I expect prompt engineering to be subsumed into nearly all white collar knowledge work, where it will become table stakes. If you want to get ahead of the curve, by all means, become an expert in prompt engineering. But the alpha in being a prompt engineering expert will disappear pretty quickly.
I’d encourage people to think of prompt engineering as a skill to acquire, not a career to pursue. Here’s a Reddit thread about whether prompt engineering is a ‘job of the future,’ and a useful thread on YCombinator about how to improve one’s prompt engineering skills. Good prompt engineering requires both knowledge of the underlying model being queried and, simply, practice. Yours will be a more fruitful career if you reframe prompt engineering away from ‘hot new job’ to ‘very useful skill to use as a complement to all the other skills I bring to bear on the job.’
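To make the ‘skill, not job’ framing concrete: one of the basic techniques a practiced prompt engineer reaches for is few-shot prompting, where you show the model a few worked examples before your actual query so it infers the task and the output format. The sketch below is generic and not tied to any particular model’s API; the classification task and examples are invented purely for illustration.

```python
# A minimal sketch of few-shot prompting: an instruction, a handful of
# worked examples, and the real query, assembled into a single prompt
# string you would send to whatever LLM you are using.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, (input, output) example pairs, and a
    final query into one prompt, ending at 'Output:' so the model
    completes it."""
    parts = [instruction, ""]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}")
        parts.append(f"Output: {example_output}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    instruction="Classify the sentiment of each sentence as positive or negative.",
    examples=[
        ("The meeting ran long but was productive.", "positive"),
        ("The report was riddled with errors.", "negative"),
    ],
    query="The new dashboard saves me an hour a day.",
)
print(prompt)
```

The craft lies less in the code than in the choices it encodes: which examples to show, how many, and how to phrase the instruction, all of which vary by model and task and improve with practice.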