Natural Language Understanding is a productivity unlock
Chances are, if you’re reading this, you’re a knowledge worker. And if you’re a knowledge worker, you probably go through the following process many times a day:
1. Read a specific set of text documents
2. Find the piece of information you are looking for
3. Take action with that information
4. Repeat from step 1
My general sense is that going through this loop quickly is not only a major productivity boost for a knowledge worker but also leads to more satisfaction. After all, I doubt many people enjoy going through a bunch of PDFs or Google Docs to find the sentence they’re seeking.
The recent advances in NLU (Natural Language Understanding), especially around search and summarization, are going to accelerate the loop outlined above for the following reasons:
Natural language tools can be adopted with no training
Little to no engineering work is required to implement them
No Training Required
When technology meets people where they are, that’s when the magic truly happens.
Natural language is the most user-friendly API there is. A search API that lets people ask naturally worded questions like “how do we reimburse users when they drop the device in the toilet?” is going to get used far more than one that requires SELECT reimbursement_clause FROM policies WHERE reason = 'toilet_drop', especially if it doesn’t require engineering help.
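To make the interface contrast concrete, here is a deliberately tiny sketch. The policy sentences, the question, and the naive word-overlap scoring are all made up for illustration; a real NLU system would use a learned language model, not keyword matching. The point is only what the user types: a plain-English question instead of a structured query.

```python
import re

# Hypothetical raw policy text a knowledge worker might search (illustrative only).
POLICY_SENTENCES = [
    "Devices damaged by liquid, including toilet drops, are reimbursed at 80%.",
    "Lost devices are not eligible for reimbursement.",
    "Claims must be filed within 30 days of an incident.",
]

def answer(question: str, sentences: list[str]) -> str:
    """Return the sentence sharing the most words with the question.

    Toy stand-in for NLU search: real systems match on meaning,
    not literal word overlap.
    """
    tokenize = lambda text: set(re.findall(r"[a-z0-9%]+", text.lower()))
    q_words = tokenize(question)
    return max(sentences, key=lambda s: len(q_words & tokenize(s)))

# The worker types the question the way they would say it aloud:
print(answer("how do we reimburse users who drop the device in the toilet",
             POLICY_SENTENCES))
```

No schema knowledge, no query syntax: the question itself is the whole interface, which is why adoption needs no training.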
No Engineering Required
There’s an inherent tradeoff: the more refinement data has gone through, the harder it is to trust it without assistance.
For tabular data, you need to know SQL or ask an analyst to set up the databases and write the queries. Furthermore, you have to have a deep understanding of how the data stored in the SQL database was generated in the first place to make decisions using it. The data likely went through many refinement steps. You need assistance to answer questions like:
Where did the data originate from?
What data did we choose to filter out?
What other assumptions are baked in?
You can get an answer, but you may not have the context to know whether it’s actually correct.
If you’re dealing with raw text sources such as call notes, long contracts, insurance policies, or help desk articles, the context is all there in its unprocessed form, so the extracted answer is more likely to be trusted. Until recently, it wasn’t technologically feasible to ask questions of large text sources without refinement from engineering and data teams.
Wrap Up / Prediction
NLU will help knowledge workers find answers faster because there’s no training involved in extracting them and far less help is needed in verifying them. My general sense is that most enterprise software will gain an NLU layer that makes information retrieval far more efficient. Like adopting better developer tools and APIs, adopting software that speeds up the loop of extracting knowledge from documents is going to make the workforce more productive and happier.