I've been experimenting with using it to build a human-language-to-SQL tool, so that people can ask questions of their data like "What country had the highest GDP in 2019?" and it will turn the question into the correct SQL query, given a table schema. I'm still iterating on this, but it's shown some very promising initial results.
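The usual approach to a tool like this is few-shot prompting: put the schema and a couple of question/SQL pairs in the prompt and let the model complete the next query. Here's a minimal sketch of that prompt construction; the schema, example pair, and `build_prompt` helper are all invented for illustration, not the author's actual code, and the completion call itself is omitted.

```python
# Hypothetical schema and few-shot example for illustration only.
SCHEMA = "countries(name TEXT, year INT, gdp_usd REAL)"

FEW_SHOT = [
    ("What country had the highest GDP in 2019?",
     "SELECT name FROM countries WHERE year = 2019 ORDER BY gdp_usd DESC LIMIT 1;"),
]

def build_prompt(question: str) -> str:
    """Assemble a schema-plus-examples prompt; the model completes after the final 'SQL:'."""
    lines = [f"Table schema: {SCHEMA}", ""]
    for q, sql in FEW_SHOT:
        lines += [f"Question: {q}", f"SQL: {sql}", ""]
    lines += [f"Question: {question}", "SQL:"]
    return "\n".join(lines)

print(build_prompt("Which country had the lowest GDP in 2019?"))
```

The string this returns is what you'd send to the completions endpoint; the model's continuation after "SQL:" is the generated query.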
I use it a lot when I need to get something small working in a language that I don't have day-to-day familiarity with. "Write a bash script that loops through every MOV file in this folder and extracts the audio as MP3" is a good example of that kind of prompt.
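For that prompt, a plausible script (my own sketch of the kind of output you'd get back, not verbatim model output) looks like this, assuming ffmpeg is installed:

```shell
#!/bin/sh
# Loop through every MOV file in the current folder and extract the audio as MP3.
for f in *.mov; do
  [ -e "$f" ] || continue            # skip the loop body if no .mov files match
  # -vn drops the video stream; -q:a 0 asks for the highest-quality VBR audio
  ffmpeg -i "$f" -vn -q:a 0 "${f%.mov}.mp3"
done
```

Even when the generated script isn't perfect, it usually gets the ffmpeg flags and the filename substitution close enough to fix in a minute.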
That is the type of application that I am also interested in.
But how does one "train" GPT-3 on your business schema? How does one train it on any custom domain?
Maybe not for a developer, but for an AI-based startup:
1. Generate synthetic data that is well aligned to your needs. With careful prompting + ensembling + after-the-fact human filtering, you can generate a lot of very particular human-like data that you can then use to train your product.
2. Generate labels. GPT-3 can give pretty good NLU results through appropriate prompting. You can do multiple prompts + ensembling to get very good labels on free text (sentiment, entity linking, intent, etc.).
In both of the above use cases you can actually avoid deploying GPT-3 as part of a client-facing product, and instead leverage GPT-3 to train smaller "on-rails" models/rules/etc.
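The ensembling step in both use cases can be as simple as a majority vote over the labels returned by several differently-phrased prompts. A minimal sketch (the function name and labels are made up for illustration):

```python
from collections import Counter

def ensemble_label(candidate_labels):
    """Majority vote across labels returned by several prompt variants.

    Ties are broken arbitrarily by Counter's internal ordering, so in
    practice you'd want an odd number of prompts or a tie-break rule.
    """
    return Counter(candidate_labels).most_common(1)[0][0]

# e.g. three differently-phrased sentiment prompts run on the same review
print(ensemble_label(["positive", "positive", "negative"]))  # → positive
```

The voted labels then become training data for the smaller on-rails model, so GPT-3 never has to serve live traffic.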
I wonder if anyone has successfully used it to create library documentation. Obviously you'd have to tweak whatever output you get, but can GPT-3 provide a substantial starting point?