There was a LOT of discussion of this in the Q&A after the talk. Currently we have 4 main approaches:
(1) There's some stuff that's pretty much a bug for every program. If it segfaults, exits with a nonzero code, OOMs, triggers a TSAN error, fills the disk with fatal error messages, and so on, that's pretty easy to qualify as a bug automatically.
(2) You can use our SDK to define additional custom test properties. Think of it like a normal assertions library, but you can also do existential quantification ("this code is reachable" / "this situation can happen") and soon temporal assertions ("this should never happen without this other thing happening first, possibly on a different node").
(3) We store all the output of your system in every timeline in a giant analytic database and support ad-hoc querying against it. Think of it as "pre-observability": observability, but for mirror universes. You can then do all the spelunking and analysis you would do with your production traces, but before your real customers are exposed to any issue.
(4) We have some very cool ML approaches in the pipeline that I can't talk about quite yet.
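To make (2) concrete, here's a minimal sketch of what invariant plus existential ("sometimes") assertions could look like. All names here (`always`, `sometimes`, `check_run`, `lookup`) are hypothetical illustrations, not the actual SDK API:

```python
# Hypothetical sketch of the two kinds of properties described in (2):
# ordinary invariants that must hold every time they're reached, and
# existential properties that the test run must witness at least once.
_witnessed: dict[str, bool] = {}

def always(condition: bool, message: str) -> None:
    """Invariant: must hold on every call that reaches this assertion."""
    if not condition:
        raise AssertionError(f"invariant violated: {message}")

def sometimes(condition: bool, message: str) -> None:
    """Existential quantification: must be true on at least one call."""
    _witnessed.setdefault(message, False)
    if condition:
        _witnessed[message] = True

def check_run() -> list[str]:
    """At end of run, list existential properties that were never witnessed."""
    return [m for m, seen in _witnessed.items() if not seen]

# Toy system under test: a cache lookup that can hit or miss.
def lookup(cache: dict, key: str) -> str:
    value = cache.get(key, "default")
    always(value is not None, "lookup never returns None")
    sometimes(key not in cache, "the cache-miss path is reachable")
    return value

cache = {"a": "1"}
lookup(cache, "a")  # hit
lookup(cache, "b")  # miss: witnesses the existential property
assert check_run() == []  # every "sometimes" was seen at least once
```

The key difference from a plain assertion library is `check_run`: an unwitnessed `sometimes` is a test failure even though no individual call ever raised.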
Can you define equivalence classes (mutations that shouldn't change the result), e.g. timing, order of events, idempotence, etc.? That way you could use (3) to define the correct result for all members of the class.
(Sorry if this is explained in the talk - I'll watch it but it's now too late in the day in my timezone)