He's hiring data scientists, not building a service, though. This might realistically be a one-off analysis of those 6 TB. At that point you're happy your data scientist has returned statistical information instead of spending another week making sure the pipeline works when someone puts a Greek character in a field.
Even if I'm doing a one-off, depending on the task it can be easier, faster, and more reliable to load 6 TiB into a BigQuery table than to wait hours for some local job to complete while fiddling with parallelism and memory management.
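For what it's worth, the BigQuery side really can be a few lines. A minimal sketch with the Python client, where the project, dataset, bucket, and event_date column are all placeholder assumptions:

    from google.cloud import bigquery

    client = bigquery.Client()  # uses application-default credentials

    table_id = "my-project.scratch.events"  # placeholder project/dataset/table
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        autodetect=True,  # let BigQuery infer the schema
        time_partitioning=bigquery.TimePartitioning(field="event_date"),
    )

    # Load straight from Cloud Storage; BigQuery parallelizes the load itself.
    load_job = client.load_table_from_uri(
        "gs://my-bucket/events/*.csv", table_id, job_config=job_config
    )
    load_job.result()  # block until the load finishes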
It's a couple hundred bucks a month for storage and $36 to query the entire dataset; after partitioning, that's not terrible.
A 6 TB hard drive and Pandas will cost you a couple hundred bucks, a one-time purchase, and then last you for years (and several other data analysis jobs). It also doesn't require an Internet connection, doesn't require trusting third-party services, and is often faster (even in execution time) than spooling up BigQuery.
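The local version doesn't need to fit in RAM, either. A rough sketch, assuming a CSV on the drive with a numeric amount column (both hypothetical): chunked reads keep memory flat, so the drive's capacity is the only real limit.

    import pandas as pd

    # Stream the file in chunks so only ~1M rows are in memory at a time.
    total = 0.0
    rows = 0
    for chunk in pd.read_csv("/mnt/bigdrive/events.csv", chunksize=1_000_000):
        total += chunk["amount"].sum()
        rows += len(chunk)

    print("mean amount:", total / rows)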
You can always save an intermediate dataset, partitioned and massaged into whatever format makes subsequent queries easy. But that's usually application-dependent, so you want control over how you actually store your intermediate results.
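As a sketch of that, assuming the massaged intermediate is much smaller than the raw 6 TB (paths and column names are made up): partitioned Parquet lets later reads skip everything outside the partitions they filter on.

    import pandas as pd

    # Toy stand-in for the massaged intermediate; in practice this would
    # come out of a chunked pass over the raw data.
    intermediate = pd.DataFrame({
        "event_date": ["2021-06-01", "2021-06-01", "2021-06-02"],
        "amount": [1.0, 2.0, 3.0],
    })

    # One directory per event_date value on the drive.
    intermediate.to_parquet(
        "/mnt/bigdrive/events_parquet",
        partition_cols=["event_date"],
        engine="pyarrow",
    )

    # Later queries touch only the partitions they need:
    subset = pd.read_parquet(
        "/mnt/bigdrive/events_parquet",
        filters=[("event_date", "=", "2021-06-01")],
    )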
I wouldn't commit to either without knowing a bit more about the lifecycle and requirements.
If you only need this once, the BQ approach requires very little setup, and many places already have a billing account. If it's recurring, then you need to figure out the ownership plan for the hard drive (what it's connected to, who updates that computer, what happens when it goes down, etc.).
This brings up a good point: why is the data scientist being asked architecture questions anyway? This seems more like the desired answer for a posting like "hiring for a scrappy ML engineer / sysadmin".