
Cool video. I would say that the clothes-folding task in the UBTECH video is much, much easier than the ones in Google's videos. In fact, it could potentially be performed by simple replay of scripted motions with no sensing at all (albeit with low reliability). Here are some things I always look for when watching robot demo videos:

1. Are there cuts in the video? If so, the robot may not be able to perform the entire task by itself without help. UBTECH's video has a couple of cuts. Google's videos have none.

2. Is the video sped up? If so, the robot may be very slow. UBTECH's video is 1x, which is good, but you can see that the robot does move somewhat slowly and does not switch fluidly between actions. Google posted both 1x and 2x-20x videos so that you can easily see both real-time speed and long-duration reliability. In the 1x videos Google's robot is also somewhat slow, but it seems to switch more fluidly between actions than UBTECH's.

3. Is the initial state at the start of the video realistic? If not, the robot may not be able to start the task without help. UBTECH's video starts with a carefully folded and perfectly flat shirt already in the hands of the robot. Google's videos start with shirts relatively messily placed on tables and somewhat crumpled.

4. Is the task repeated? If not, the robot may be very unreliable at finishing the task. Google's videos show a lot of repetition without cuts. UBTECH's video shows only one instance (with cuts). You could still produce that video even if the UBTECH robot fails 90% of the time.

5. Is there variation in the repeated tasks? If not, the robot may fail if there is any variation. Google shows different colors and initial states of shirts, and also a much larger sweater. That said, almost all the shirts are small polo shirts and the robot would certainly not generalize to anyone's real closet of varied clothes.

6. Does the robot react to mistakes or other unexpected events? If not, it may be mostly playback of pre-recorded motions, with little or no sensing influencing the robot's behavior. UBTECH's video shows alleged sensing but doesn't show any unexpected events or mistakes. Google's videos show the robot recovering from mistakes.



good points! it's a demo video, so I assume it's staged, as they all are, but I assume it's not CGI.



