If I’d gotten my degree in Engineering, I’d be annoyed at how that word is being inserted into job titles with no requirement for numeracy or formal education. I question whether Prompt Engineer is the latest in a wave of job title inflation or if the position has any right to bear the “engineer” moniker.
What does a Prompt Engineer actually do?
Until the Illuminate AI meet-up last week, I imagined that a Prompt Engineer probably sat around trying to create meme-worthy images by typing hilarious word mashups into ChatGPT.
If the purpose of enabling semantic queries is to allow business users with no coding skills to retrieve the data they need, then inserting a role to broker and script those colloquially-worded requests seems akin to hiring someone on Task Rabbit to build your IKEA furniture.
I now realize that Prompt Engineering is probably more like “reverse engineering,” adding interpretability and transparency to Large Language Model algorithms. With applications of the past, outputs were the result of explicit programming, whereas with deep learning in LLMs, much of what the algorithm is doing is opaque, even to the people who set it up initially.
Anthropic has a job post up for a Prompt Engineer and Librarian
The pay range is $250k–$375k USD, and they describe the role as:
“…A hybrid between programming, instructing, and teaching. You will figure out the best methods of prompting our AI to accomplish a wide range of tasks, then document these methods to build up a library of tools and a set of tutorials that allows others to learn prompt engineering, as well as work with high value partners to directly solve their challenges.”
Semantic search is hard, mainly because it’s hard to be precise
I wondered to what extent it is necessary to have a mental model of the underlying data model and table structures to be effective at semantic search.
Current semantic search programs have trouble with some of the more complex cases (which, in fairness, are hard for programmers to elicit full requirements from laypeople on):
– Multiple tables and/or complex joins
– Field names in multiple tables are identical or poorly labeled (Csmr.EffectiveDate, Employee.EffectiveDate)
– Natural language doesn’t capture the nuance of calculations. (Is “Last Week” a rolling 7-day lookback period or the last full week? Which day of the week do you start on?)
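To make the “Last Week” ambiguity concrete, here is a minimal sketch in Python of the two readings; the function names and the reference date are my own illustrations, not anything a particular semantic search tool does:

```python
from datetime import date, timedelta

def rolling_7_day_window(today: date) -> tuple[date, date]:
    """'Last week' as a rolling lookback: the 7 days ending yesterday."""
    end = today - timedelta(days=1)
    start = end - timedelta(days=6)
    return start, end

def last_full_week(today: date, week_starts_on: int = 0) -> tuple[date, date]:
    """'Last week' as the most recent complete calendar week.

    week_starts_on: 0 = Monday (Python's weekday() convention) -- the
    'which day do you start on?' question has to be answered explicitly.
    """
    days_since_week_start = (today.weekday() - week_starts_on) % 7
    this_week_start = today - timedelta(days=days_since_week_start)
    start = this_week_start - timedelta(days=7)
    return start, start + timedelta(days=6)

today = date(2023, 4, 12)  # a Wednesday
print(rolling_7_day_window(today))  # 2023-04-05 through 2023-04-11
print(last_full_week(today))        # 2023-04-03 through 2023-04-09
```

Same two words, two different date ranges, and neither answer is wrong until someone states which one the requester meant.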
When you watch a trained software engineer do a demo of these features, it seems incredible, but then you realize they already think in SQL and understand the order of operations of programming languages and the level of specificity required to produce meaningful and relevant results.
What does a business analyst do?
As a business analyst, I’ve written quite a bit of pseudocode, and sometimes even code, that runs in Excel to create test scripts and communicate user requirements to development teams.
The greater my understanding of the underlying data, the constraints of the software platform, and the coding language, the less that gets lost in translation from business English to code.
You can ask clarifying questions like:
– “What timeframe should we include?” (Order Date filter)
– “Do you want the average of each SKU or the Product Category average?” (before or after GROUP BY?)
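The second question matters because “average” can legitimately mean three different numbers. A quick sketch using Python’s built-in sqlite3 module with made-up sales data (table and column names are hypothetical) shows how the answer changes depending on where the GROUP BY lands:

```python
import sqlite3

# Hypothetical sales table: SKU-level prices under two product categories.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (sku TEXT, category TEXT, price REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("A1", "Widgets", 10.0), ("A1", "Widgets", 12.0),
     ("A2", "Widgets", 20.0), ("B1", "Gadgets", 5.0)],
)

# Reading 1: average price of each SKU.
per_sku = conn.execute(
    "SELECT sku, AVG(price) FROM sales GROUP BY sku ORDER BY sku"
).fetchall()
# [('A1', 11.0), ('A2', 20.0), ('B1', 5.0)]

# Reading 2: average price per category, over raw rows
# (A1's two rows each count once, so Widgets = (10+12+20)/3).
per_category = conn.execute(
    "SELECT category, AVG(price) FROM sales GROUP BY category ORDER BY category"
).fetchall()
# [('Gadgets', 5.0), ('Widgets', 14.0)]

# Reading 3: category average of SKU averages -- GROUP BY sku first,
# then average those, so Widgets = (11+20)/2.
avg_of_avgs = conn.execute(
    "SELECT category, AVG(sku_avg) FROM "
    "(SELECT sku, category, AVG(price) AS sku_avg FROM sales GROUP BY sku) "
    "GROUP BY category ORDER BY category"
).fetchall()
# [('Gadgets', 5.0), ('Widgets', 15.5)]
```

Widgets comes out as 14.0 or 15.5 depending on whether you average before or after grouping by SKU. A business user asking in plain English has no reason to know those are different queries; the clarifying question is the whole job.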
What should humans do vs software in the future?
I’m not sure if I think the semantic search tools need to have template structures like Mad Libs to help users consider the important nuances of their requests OR if Prompt Engineer is just a renaming and a subset of business analyst duties.
The existence of Prompt Engineers suggests that much of the application development process is working backward from more traditional software design, developing the test cases after the baseline code is baked instead of as acceptance criteria.
If you want to delve more into how to get LLMs to write SQL, I highly recommend checking out:
If those two sources aren’t frivolous enough for your mood, then maybe this Bullshit Job Title Generator is more up your alley.