Why use GPT-4? The latency is significantly worse than 3.5's, and this seems simple enough that the performance delta is marginal. If I were going for robustness, I probably wouldn't be using AI in the first place.
Edit: I noticed they support both but I’m assuming by the speed all the demos are using 3.5?
We do have a switcher in the UI that lets you run against either GPT-3.5-turbo or GPT-4.
We've mainly been using GPT-4 for internal tests because in the "Make it work, make it right, make it fast" development flow, we're still firmly in the "make it work" phase. ;)