
Think of Stable Diffusion, but without needing the expensive multi-step diffusion, yet still achieving similar results.
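
To make the cost difference concrete, here is a minimal sketch (not the submission's actual code) of why the number of sampling steps matters: each step is one full forward pass through the network, so a 50-step diffusion sampler does roughly 25x the work of a 2-step generator. `TinyDenoiser` and the `sample` helper are hypothetical stand-ins, assuming a PyTorch-style interface.

  import torch
  import torch.nn as nn

  class TinyDenoiser(nn.Module):
      """Stand-in network: one conv layer, just so the loops below actually run."""
      def __init__(self, channels=4):
          super().__init__()
          self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

      def forward(self, x, t):
          # A real model would also condition on the timestep t.
          return self.conv(x)

  def sample(model, steps, shape=(1, 4, 64, 64)):
      """Each step is one forward pass, so cost scales linearly with `steps`."""
      x = torch.randn(shape)                # start from pure noise
      for t in reversed(range(steps)):      # iteratively refine
          x = model(x, t)
      return x

  model = TinyDenoiser()
  slow = sample(model, steps=50)   # Stable-Diffusion-style multi-step sampling
  fast = sample(model, steps=2)    # the few-step regime discussed in this thread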


Why did they open source it?


It's one thing to come up with the technique; quite another to source the dataset of images on which the model will be trained, train it up, and then run it as a service for millions of users.


It takes huge computation just to train on red, blue, and yellow colors and generate shapes. I wonder what would come out with 100B parameters' worth of training data, and even then 2 steps require a lot of computation, so it's not efficient (if it would work at all?).


For other researchers. It doesn't look like the pre-trained models will be good enough to use directly?



