
Question:

"Build a household robot" is high up the list. That doesn't seem inherently 'general'.

Certainly, people have been working on that for years; there are all sorts of subproblems like vision, contextual reasoning etc.

It could be treated as a general problem, requiring a lot of 'common sense'.

But a team which sets out to optimize that particular goal, could spend years on relatively narrow tasks that get good performance returns on household chores (e.g. developing version 10 of the floor cleaning algorithm), but don't really make progress towards the problem of general intelligence.

For me, what was really interesting about the benchmarks that Deepmind chose (the choice of a selection of Atari games) was that they were inherently somewhat general.

Are you not worried that by putting a narrow domain fairly high up, you'll get distracted by narrow tasks rather than making progress towards what's really interesting - generality? Won't it create tension, trying to keep the general focus in the presence of a narrow goal where you can get good returns by overfitting?



This isn't a particularly meaningful issue.

The problem you describe, falling into the trap of brute-force optimizing a narrow task, also applies to the Atari games. In fact it applies even more so: it would be trivial for a lot of HN programmers to brute-force code an AI for challenging Atari games that deep learning still struggles with (like Montezuma's Revenge). But it would be nontrivial (difficult but still possible) to brute-force code a program for many narrow household robot tasks. You avoid this problem by... not hard-coding brute-force solutions/heuristics! The research community can smell BS very easily (HN, not so much).

A household robot is substantially (probably an order of magnitude) more general than Atari games, even for narrow tasks (though obviously nowhere near the generality of AGI). The perception problem is tremendously more complex, and the control/planning problem is similarly so.


>But it would be nontrivial (difficult but still possible) to brute-force code a program for many narrow household robot tasks.

How difficult are you talking about here? Similar to training game agents? I really doubt it.

I feel like the training problem is VERY hard in the case of real-world handling of arbitrary objects (a fixed, mechanical, factory-like setting can be hard-coded outright and outperform a human). In the case of virtual games you can just throw a bunch of GPUs at it and accelerate the process. But it is a much more difficult problem in the real world. The grasping ability we have with everyday objects is a marvel once you try to get a computer to do it.

This might be of interest: http://spectrum.ieee.org/automaton/robotics/artificial-intel...


Atari game agents are trivial if you hard-code / do traditional brute-force search AI, because there is no noise in observation and no noise in control, and the control is very simple (usually just up/down/left/right, no torques or anything physically complex).
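To make the determinism point concrete, here's a minimal sketch (a hypothetical 5x5 grid game of my own, not an actual Atari environment): with noise-free dynamics and a four-direction action set, plain breadth-first search over action sequences already finds an optimal plan.

```python
from collections import deque

ACTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def step(state, action):
    """Deterministic transition: the same input always yields the same next state."""
    x, y = state
    dx, dy = ACTIONS[action]
    return (min(4, max(0, x + dx)), min(4, max(0, y + dy)))

def plan(start, goal):
    """Breadth-first search over action sequences; feasible only because
    the dynamics are noise-free and the action set is tiny."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for a in ACTIONS:
            nxt = step(state, a)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [a]))
    return None

print(plan((0, 0), (4, 4)))  # shortest plan: 8 moves on a 5x5 grid
```

Add observation noise or continuous torques and this search blows up immediately, which is the whole point about household robots.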

This is the state of the art of "traditional AI" (not deep learning) robotics: https://www.youtube.com/watch?v=8P9geWwi9e0

Most decent HN programmers could code an AI for an Atari game in a few weeks.

Again, I would encourage you to read the literature instead of speculating.


It seems like you've misunderstood what I'm trying to say. I'm saying your statement

>But it would be nontrivial (difficult but still possible) to brute-force code a program for many narrow household robot tasks.

is very wrong. Atari games are not even in the same class of difficulty as household chores; they can't meaningfully be compared. You've just agreed with my point while saying that I'm speculating.

>Atari game agents are trivial if you hard-code / do traditional brute-force search AI because there is no noise in observation and no noise in control, and the control is very simple (usually just up down left right, no torques or anything physically complex)

I never said they are trivial. The point I'm making, again, is that you can't claim we can brute-force even narrow household chores. The real world has a level of complexity - friction (which is a huge problem), elasticity, and even air flow can mess up an action - and computers lack the power to account for everything. We, on the other hand, have something called intuition (which, I may add, everyone interested in AGI should properly read up on, starting with Brain Games S4: "Intuition", which is on Netflix).

And it seems like you don't consider brute-forced solutions to be proper solutions. I agree, as will anyone with common sense who has read a couple of Wikipedia articles. But RL is not exactly brute force as we usually think of it, even if it might look like it. We all employ trial-and-error learning in our own lives to some extent; our feedback and thought processes are just so much more complex that we feel we are acting out of pure intelligent deduction. We still need a few 'brute force' attempts, though given how few iterations we need, you can hardly call them that.
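For what it's worth, the "not exactly brute force" point can be sketched with the classic tabular Q-learning update from the RL literature (a toy 1-D chain of my own invention, not the DQN from the paper below): repeated trial-and-error plus a value estimate converges to a policy without enumerating every action sequence.

```python
import random

# Hypothetical toy environment: a chain of 6 states; only reaching the
# rightmost state yields reward. Actions: 0 = left, 1 = right.
N = 6
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def env_step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N)]  # Q[state][action] value estimates

def greedy(s):
    best = max(Q[s])
    return random.choice([a for a in (0, 1) if Q[s][a] == best])  # random tie-break

for _ in range(200):  # episodes of trial and error
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, occasionally explore
        a = random.randrange(2) if random.random() < EPS else greedy(s)
        s2, r, done = env_step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy(s) for s in range(N - 1)]
print(policy)  # greedy action per non-terminal state after training
```

The "attempts" look like brute force, but the learned value estimates generalize across visits to the same state, which exhaustive search never does.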

I suggest you read some literature too, and please point out where I'm speculating.

1. DeepMind's reinforcement learning paper: http://www.readcube.com/articles/10.1038/nature14236?shared_...


If you're agreeing with my original assertion that household robot tasks are more general and more difficult than Atari games, great.


Almost. I'm saying the difficulty (of household tasks) is so much greater that they are in a different class of problems, and cannot be compared using a mere comparative adjective.



