Hacker News
AlphaCode Attention Visualization (deepmind.com)
144 points by MurizS on Dec 8, 2022 | hide | past | favorite | 45 comments


Has Google/Alphabet publicly released any of their AI models yet? I’ve seen plenty of hype surrounding Imagen [1] and Parti [2], but as far as I know, they’re still vaporware.

Normally I wouldn’t think too hard about this, but two things come to mind here:

Firstly, the speed at which competitors are launching and developing new AI models. Imagen/Parti both seem to rival Stable Diffusion and DALL-E… why can’t we use them yet?

Secondly (and perhaps this is clouding my judgment), the fact a Midjourney founder mentioned during an Office Hours session that [paraphrasing]: “it’s widely known in the industry that 90% of AI research is completely made-up garbage”…

[1] https://imagen.research.google/

[2] https://parti.research.google/


> Has Google/Alphabet publicly released any of their AI models yet?

You mean NLP field changing models from Google like BERT [1]? or Transformers paper [2]? or T5 model [3] (used by company doing ChatGPT like search currently on the front page on HN)?

1. https://arxiv.org/abs/1810.04805 code+models: https://github.com/google-research/bert

2. https://arxiv.org/abs/1706.03762

3. https://arxiv.org/abs/1910.10683 code+models: https://github.com/google-research/text-to-text-transfer-tra...


I read a lot on Twitter about the so-called "culture" of Google that prevents them from making AI based products, meanwhile OpenAI has made ChatGPT and is going to replace Google search within the next two months or so. I think this is the same narrative that's being expressed by the GP comment.

To add to your comment, Google has been using BERT to power Search since 2019: https://blog.google/products/search/search-language-understa...

I'm going to guess that the only reason they don't use larger models is because of the compute cost. ChatGPT at 4 billion users with today's hardware is an unsustainable business. However, that thought leads me to imagining: if Google offered Search "Premium" using the latest LLMs, how much would people pay for it?


> if Google offered Search "Premium" using the latest LLMs, how much would people pay for it?

I think they need to solve the hallucination problem first. They are already working on optimizations, e.g. FLAN-T5 (smaller LLM models, same performance) and RETRO (a retrieval transformer that can use a data index outside the model), that take them closer to using this in Search.
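RETRO's trick, at a very high level, is to condition generation on passages fetched from an external index instead of relying only on what's baked into the weights. A toy retrieve-then-prompt sketch of that grounding idea (the bag-of-words "embeddings", corpus, and prompt format here are made up for illustration; RETRO itself uses frozen BERT embeddings and cross-attention, not prompt concatenation):

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a neural encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    # Rank corpus passages by similarity to the query, return the top k.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    # Prepend retrieved passages so the model can ground its answer in them
    # rather than hallucinate from parametric memory alone.
    context = "\n".join(retrieve(query, corpus, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Abraham Lincoln was born on February 12, 1809.",
    "The Transformer architecture was introduced in 2017.",
]
print(build_prompt("When was Lincoln born?", corpus))
```

The retrieval index can be updated without retraining, which is part of why it's attractive for search.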


Apparently you can avoid hallucination by basically reading the model’s mind instead of asking it questions.

https://arxiv.org/abs/2212.03827

Raises some ethical issues…
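The linked paper (Burns et al., "Discovering Latent Knowledge in Language Models Without Supervision") trains an unsupervised linear probe on hidden states with a contrast-consistency objective, which is the "mind reading" part. A minimal sketch of that objective in plain Python (the probe shape and all numbers below are illustrative, not the paper's implementation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def probe(hidden, w, b):
    # Linear probe on a hidden-state vector: read off a "truth" probability
    # from activations alone, without ever asking the model a question.
    return sigmoid(sum(wi * hi for wi, hi in zip(w, hidden)) + b)

def ccs_loss(p_pos, p_neg):
    # p_pos: probe output on a statement; p_neg: on its negation.
    consistency = (p_pos - (1.0 - p_neg)) ** 2  # want p(x+) ≈ 1 - p(x-)
    confidence = min(p_pos, p_neg) ** 2         # rules out the p = 0.5 shortcut
    return consistency + confidence

# A confident, consistent probe scores far better than an evasive one:
print(ccs_loss(0.95, 0.05))  # ≈ 0.0025
print(ccs_loss(0.5, 0.5))    # 0.25
```

Because the probe is read off the activations rather than the model's stated answer, it can in principle disagree with what the model says, which is where the ethical questions start.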


Right, so that’s a no. Google releases research papers but can’t productize anything. They are like a modern-day Xerox PARC.


I must be missing something. Did you see the GitHub links in the post you’re replying to?


Some of the commenters in this thread are remarking on the lack of functional examples they can interact with, while others feel the existence of published technical papers is sufficient. They're talking past each other a bit.


> Google releases research papers but can’t productize anything

If you mean actual products and not just open sourcing models and code then:

https://cloud.google.com/products/ai

They also implement a lot of what they publish (the most interesting stuff) inside Google to power their own products.


The arrogance of this comment is astoundingly funny.

> Firstly, the speed at which competitors are launching and developing new AI models. Imagen/Parti both seem to rival Stable Diffusion and DALL-E… why can’t we use them yet?

Because Google doesn't want you to have access to them? Why do you feel like you're entitled to their internal research?

Google releases papers on robots[0] as well. Do you expect them to ship you a free robotic arm? Or give you the ML model for it?

0: https://ai.googleblog.com/2022/12/talking-to-robots-in-real-...


I'm not the user you replied to, but I have the same view as them. And it's not that we are entitled to anything, it's that they're losing the race, or at least the image race.

For example, as an AI researcher, I can't consider Imagen/Parti to be the state of the art if all we have are cherry-picked examples and we can't verify anything. For all practical intents and purposes, they are just vaporware, and the state of the art are models like Stable Diffusion or DALL-E.

Of course they are free to keep them that way, but they risk losing their reputation as AI/ML/NLP leaders.


They show what it can do, people at Google can use it, and there are papers.

Google publishes plenty of other ML-based papers.

Assuming in any way that Google might lose its reputation because of Imagen is very far-fetched.


Not losing its reputation as a whole, but losing the reputation of being the leader in this field? Sure.

When they released BERT there was no doubt that they were the leaders. Even laymen heard about it.

What AI advances do laymen most talk about now? DALL-E, Stable Diffusion and ChatGPT. AlphaCode and LaMDA gave some headlines but not even close. Everyone is too busy trying ChatGPT to pay attention to those.


Does elite AI talent really care what laymen new to AI think? If so, that's a problem for Google, but I doubt they do.


Google has no incentive to share its AI research with the public. If it does release a tool like ChatGPT, it will just increase calls to incorporate the tool into regular search. If Google does that, it willingly eats into its own ad-driven business.

Companies have often ignored newer tech because it eats into their existing business (Kodak and digital cameras, for example).

Google CAN'T adopt AI at this point, at least not without drastic changes to its revenue model.


> If Google does that, it willingly eats into its own ad-driven business.

I don't see why. They could still incorporate ads in the output of such a tool.


Impressions and CTRs for ads will go down drastically. CTRs for ads are already non-existent for any queries for which Google shows direct answers (such as "when was lincoln born").


I completely disagree; I think it's very unclear. They could show one direct answer plus one or a few related ads, which might increase CTR a lot.
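Whether a direct answer raises or kills ad revenue is ultimately an empirical question: the disagreement above reduces to what numbers you assume. A toy revenue model (all figures are made up purely for illustration) shows either side can be right:

```python
# Toy model: ad revenue = impressions * CTR * cost-per-click.
def revenue(impressions, ctr, cpc):
    return impressions * ctr * cpc

# Hypothetical scenario A: a classic ten-blue-links page, 2% ad CTR.
classic = revenue(impressions=1_000_000, ctr=0.02, cpc=0.50)

# Hypothetical scenario B: one direct answer with a single,
# better-targeted ad at 3.5% CTR.
direct = revenue(impressions=1_000_000, ctr=0.035, cpc=0.50)

print(classic, direct)  # 10000.0 17500.0
```

Flip the assumed CTRs and the conclusion flips too; the argument hinges entirely on whether a single well-placed ad next to a trusted answer outperforms a page of links.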


The cultures at the various leading AI research organisations are wildly divergent.

Google is full of people who, for want of a better word, are simply arrogant. They think the purpose of AI is for them to show off their skills and... that's it. At best they'd use it internally for selling you more ads; they don't seem to think other people are worthy of using the output of their efforts in any shape, way, or form.

OpenAI is full of boy scouts who think that AI should be carefully censored so that it represents black, brown, Asian, and white people equally. They deliberately skew the training data to enshrine wokeness into the product, while also trying to prevent anyone from using their models to generate anything vaguely like porn. Basically, they're digital Mormons. No fun.

Stability AI / Stable Diffusion is a bunch of people that had money thrown at them with no guard rails. Anything goes. Download our models and have fun! Make porn if you want to. Whatever.

To nobody's surprise, only the last of these is of any interest or use to the general public.

The sad part is that Google had the most resources to spend on training their models, and it's the least accessible.

It's like Tony Stark inventing cold fusion and then using it only to power his suit instead of... you know... changing the world for the better.


> Google is full of people who, for want of a better word, are simply arrogant. They think the purpose of AI is for them to show off their skills and... that's it. At best they'd use it internally for selling you more ads; they don't seem to think other people are worthy of using the output of their efforts in any shape, way, or form.

I don't think the papers they put out are to "show off." They're for advancing the entire field. Imagine if they had kept the Transformer paper, which everyone uses and is basically the standard in AI now, in house: AI wouldn't be anywhere close to where it is today. Also, I think it's a little ridiculous to think Google would spend $100B over the past 10 years on AI research and not expect this stuff to show up in important products.


I can understand why OpenAI were initially hesitant to be true to their name — it made sense whilst these models were in their infancy.

But as we enter 2023, OpenAI no longer requires approval to use GPT-3, and DALL-E 2 is publicly accessible.

Google has yet to release anything publicly… In light of Midjourney’s comments, it’s hard not to be a little suspicious.


In a world of papers with code, OpenAI is not open.


What were the Midjourney comments?

Edit: Nm, missed it in the top level reply


Language models don’t work unless you “censor” them like OpenAI does with reinforcement learning. You get the opposite result - it starts writing erotica as soon as it sees a woman’s name.


I mean, you say that, but I keep seeing impressed responses to ChatGPT, hearing stories of people integrating it into their workflow, etc.


The OP is saying, correctly, that OpenAI is bad because they think brown, black, and Asian people are as good as him. Anything that makes him feel lesser is woke and bad.


Not being horribly racist is woke now?


Yes - that is literally why “woke” is a pejorative term.


What I'm interested in are the military-grade models they already have deployed against the Russians and the Chinese. We're witnessing a war but cannot acknowledge it publicly.


What do you mean by this? How are they being deployed? Disinformation campaigns, or more than that?


Imagine a platform that shadowbans people but then has them interacting with a ChatGPT-based bot, so they keep busy and don't notice their shadowbans any time soon.

If I can have this idea, so can others, and it's not even a difficult idea to have right now. Somebody should be testing this on Reddit (or Facebook), or in a "foreign country" if you feel queasy about the ethics of it.


I cannot wait for the true downfall of Google. Many friends of mine began working there and fell into a black hole of arrogance. Meanwhile, nothing from a technical perspective, aside from BERT, has been contributed by them in quite a while. Their technical open source (TensorFlow, Angular, Kubernetes) has all followed the same pattern of overly complex garbage. Facebook's open source (PyTorch, React) blows the doors off Google's.

And exactly where are the models? Where is the AI? BERT came out, what, five years ago?

It's certainly not in Search, given how unusable the results are. They are busy using models to structure everyone's content so they can display it on the search page and capture value from content they didn't create.

Not one mind-blowing AI product from them. The results of OpenAI are exposing the FAANGs as has-beens.


I guess it depends on your point of view. DeepMind, as far as I am concerned, is the one true success that Alphabet is funding. The work they do is truly cutting-edge. They may not be good at making products out of their breakthrough research, but you can't argue they are "has-beens" on that basis.


At some point they need to either release the models or ship a product that leverages AI in an impressive way. When other companies start releasing more, and they don't respond for years, it means they were all hype.


Their products are used internally at Google, and they don't need to release anything to anyone if they feel their use of AI gives them an advantage in their own business field.


For context on what this is, here is the associated blog post: https://www.deepmind.com/blog/competitive-programming-with-a...


One previous thread:

Competitive Programming with AlphaCode - https://news.ycombinator.com/item?id=30179549 - Feb 2022 (397 comments)


"As part of DeepMind’s mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding."


Worth noting that in many programming competitions online, a large chunk of competitors either don't submit anything, or only submit a little example.


The real question is whether DeepMind can create a model which can outrun the goalposts.


The fact that the amount of text you have to write is greater than the amount of code AlphaCode writes should reassure programmers worried that their jobs will be taken over by AI.


We should really let go of words per minute or typing effort as a useful metric.

Most of our time is spent thinking about what to write not the actual writing.

Also nothing stopping it from being a voice input rather than typing.


That's not what I meant. I'm saying that it is more overall effort to code this way than to code by just writing code.


Will we ever see the day when AlphaIOCCC submits a winning entry?


I don't know much about AI or anything, but I would really love it if AI could replace competitive programming. I really hate it. It's not programming, and it's a very bad merit to judge people by; it's one of the things I hate the most. I hope this kind of thing makes us rethink how we judge people, and how the rank-one competitive programmer cannot necessarily design a full system that is maintainable and easy to reason about.



