Has Google/Alphabet publicly released any of their AI models yet? I’ve seen plenty of hype surrounding Imagen [1] and Parti [2], but as far as I know, they’re still vaporware.
Normally I wouldn’t think too hard about this, but two things come to mind here:
Firstly, the speed at which competitors are launching and developing new AI models. Imagen/Parti both seem to rival Stable Diffusion and DALL-E… why can’t we use them yet?
Secondly (and perhaps this is clouding my judgment), the fact a Midjourney founder mentioned during an Office Hours session that [paraphrasing]: “it’s widely known in the industry that 90% of AI research is completely made-up garbage”…
> Has Google/Alphabet publicly released any of their AI models yet?
You mean field-changing NLP models from Google like BERT [1]? Or the Transformers paper [2]? Or the T5 model [3] (used by a company doing ChatGPT-like search, currently on the front page of HN)?
I read a lot on Twitter about the so-called "culture" of Google that supposedly prevents them from making AI-based products; meanwhile, OpenAI has made ChatGPT and is going to replace Google search within the next two months or so. I think this is the same narrative being expressed by the GP comment.
I'm going to guess that the only reason they don't use larger models is the compute cost. ChatGPT at 4 billion users on today's hardware is an unsustainable business. However, that thought leads me to wonder: if Google offered Search "Premium" using the latest LLMs, how much would people pay for it?
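To make "unsustainable" concrete, here's a back-of-envelope sketch. Every number below is a made-up assumption for illustration (per-query cost and usage are not published figures); only the 4 billion users comes from the comment above.

```python
# Back-of-envelope sketch of LLM serving cost at search scale.
# All per-query and usage numbers are assumptions, not real figures.

cost_per_query = 0.01          # assumed: dollars of compute per LLM query
queries_per_user_per_day = 5   # assumed average usage
users = 4_000_000_000          # the 4 billion users mentioned above

daily_cost = cost_per_query * queries_per_user_per_day * users
yearly_cost = daily_cost * 365

print(f"Daily compute cost:  ${daily_cost:,.0f}")   # ~$200M/day
print(f"Yearly compute cost: ${yearly_cost:,.0f}")  # ~$73B/year
```

Even if the assumed per-query cost is off by 10x in either direction, the result is the same order of magnitude as Google's entire annual revenue or a rounding error on it, which is why per-query cost dominates the question of whether LLM search can be free.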
> if Google offered Search "Premium" using the latest LLMs, how much would people pay for it?
I think they need to solve the hallucination problem first. They are already working on optimization, e.g. FLAN-T5 (smaller LLMs with the same performance) and RETRO (a retrieval transformer that can use a data index outside of the model), which takes them closer to using it in search.
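The retrieval idea is worth making concrete. This is a toy sketch of the general retrieval-augmented approach, not the actual RETRO architecture; the index contents and the word-overlap scoring are my own illustrative assumptions. The point is that facts live in an external index the system can look up, rather than only in model weights.

```python
# Toy retrieval-augmented lookup: ground answers in an external index
# instead of relying only on what's baked into model weights.
# Index contents and scoring are illustrative assumptions.

def retrieve(query, index, k=2):
    """Return the k passages sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        index,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query, index):
    """Fetch grounding passages for a query."""
    # A real system (RETRO, retrieval-based search) would feed these
    # passages to a language model as context; returning them is the
    # grounding step that lets claims be traced to a source.
    return retrieve(query, index)

index = [
    "Lincoln was born on February 12, 1809.",
    "RETRO conditions a transformer on retrieved text chunks.",
    "FLAN-T5 is an instruction-tuned T5 model.",
]
print(answer("when was lincoln born", index))
```

Because the answer is tied to a retrieved passage, a wrong answer can at least be traced to its source, which is the main lever against hallucination in search.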
Some of the commenters in this thread are remarking on the lack of functional examples that they can interact with, and others are saying they feel the existence of published technical papers is sufficient. They're talking past each other a bit.
The arrogance of this comment is astoundingly funny.
> Firstly, the speed at which competitors are launching and developing new AI models. Imagen/Parti both seem to rival Stable Diffusion and DALL-E… why can’t we use them yet?
Because Google doesn't want you to have access to them? Why do you feel like you're entitled to their internal research?
Google releases papers on robots[0] as well. Do you expect them to ship you a free robotic arm? Or give you the ML model for it?
I'm not the user you replied to, but I have the same view as them. And it's not that we are entitled to anything, it's that they're losing the race, or at least the image race.
For example, as an AI researcher, I can't consider Imagen/Parti to be the state of the art if all we have are cherry-picked examples and we can't verify anything. For all practical intents and purposes, they are just vaporware, and the state of the art are models like Stable Diffusion or DALL-E.
Of course they are free to keep them that way, but they risk losing their reputation as AI/ML/NLP leaders.
Not losing its reputation as a whole, but losing the reputation of being the leader in this field? Sure.
When they released BERT there was no doubt that they were the leaders. Even laymen heard about it.
What AI advances do laymen most talk about now? DALL-E, Stable Diffusion and ChatGPT. AlphaCode and LaMDA gave some headlines but not even close. Everyone is too busy trying ChatGPT to pay attention to those.
Google has no incentive to share its AI research with the public. If it does release a tool like ChatGPT, it will just increase calls to incorporate the tool into regular search. If Google does that, it willingly eats into its own ad-driven business.
Companies have often ignored newer tech because it eats into their existing business (Kodak and digital cameras, for example).
Google CAN'T adopt AI at this point, at least not without drastic changes to its revenue model.
Impressions and CTRs for ads will go down drastically. CTRs for ads are already non-existent for any queries for which Google shows direct answers (such as "when was lincoln born").
I completely disagree; I think it is very unclear. They could show one direct answer plus one or a few related ads, which I think might significantly increase CTR.
The cultures at the various leading AI research organisations are wildly divergent.
Google is full of people who, for want of a better word, are simply arrogant. They think that the purpose of AI is for them to show off their skills and... that's it. At best they'd use it internally for selling you more ads; they don't seem to think other people are worthy of using the output of their efforts in any shape, way, or form.
OpenAI is full of boy scouts who think that AI should be carefully censored so that it represents black, brown, Asian, and white people equally. They deliberately skew the training data to enshrine wokeness into the product, while also trying to prevent anyone using their models to generate anything vaguely like porn. Basically, they're digital Mormons. No fun.
Stability AI / Stable Diffusion is a bunch of people that had money thrown at them with no guard rails. Anything goes. Download our models and have fun! Make porn if you want to. Whatever.
To nobody's surprise, only the last of these is of any interest or use to the general public.
The sad part is that Google had the most resources to spend on training their models, and it's the least accessible.
It's like Tony Stark inventing cold fusion energy and then using it only to power his suit instead of... you know... changing the world for the better.
> Google is full of people that for a want of a better word are simply arrogant. They think that the purpose of AI is for them to show off their skills and... that's it. At best they'd use it internally for selling you more ads, they don't seem to think other people are worthy of using the output of their efforts in any shape, way, or form.
I don't think the papers they put out are to "show off." They're for advancing the entire field. Imagine if they had kept the Transformer paper in house, which everyone uses and which is basically the standard in AI now. AI wouldn't be anywhere close to where it is today. Also, I think it's a little ridiculous to think Google would spend $100B over the past 10 years on AI research and not think this stuff will be seen in important products.
Language models don't work unless you "censor" them the way OpenAI does with reinforcement learning. Otherwise you get the opposite result: the model starts writing erotica as soon as it sees a woman's name.
The OP is saying, correctly, that OpenAI is bad because they think brown, black, and Asian people are as good as him. Anything that makes him feel lesser is woke and bad.
What I'm interested in are the military-grade models they already have deployed against the Russians and the Chinese. We're witnessing a war but cannot acknowledge it publicly.
Imagine a platform that shadowbans people but then has them interacting with a ChatGPT-based bot, so they keep busy and don't notice their shadowbans any time soon.
If I can have this idea, so can others, and it's not even a difficult idea to have right now. Somebody should be testing this on Reddit (or FB); or in a 'foreign country' if you feel queasy about the ethics of this.
I cannot wait for the true downfall of Google. Many friends of mine began working there and fell into a black hole of arrogance. Meanwhile, nothing from a technical perspective, aside from BERT, has been contributed by them in quite a while. Their technical open source (Tensorflow, Angular, Kubernetes) has all followed the same pattern of overly complex garbage. Facebook open source (Pytorch, React) blows the doors off Google open source.
And exactly, where are the models? Where is the AI? BERT came out what, five years ago?
It's certainly not in Search, given how unusable the results are. They are busy using models to structure everyone's content so they can display it on the Search page and capture value from content they didn't create.
Not one mind-blowing AI product from them. The results of OpenAI are exposing FAANGs as has-beens.
I guess it depends on your point of view. DeepMind, as far as I am concerned, is the one true success that Alphabet is funding. The work they do is truly cutting edge. They may not be good at making products out of their breakthrough research, but you can't argue they are has-beens on the basis of that.
At some point they need to either release the models or release a product that leverages AI in an impressive way. When other companies start releasing more, and they don't respond for years, it means they were all hype.
Their products are used internally at Google, and they don't need to release anything to anyone if they feel their use of their AI gives them an advantage in their own business field.
"As part of DeepMind’s mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding."
The fact that the amount of text you have to write is greater than the amount of code AlphaCode writes should reassure programmers worried that their jobs will ever be taken over by AI.
i don't know much about ai or anything, but i would really like it if ai could replace competitive programming. i really hate it. it's not programming and a very bad metric to judge by. it's one of the things i hate the most.
i hope this type of thing makes us rethink how we judge people, and how a cp rank one cannot design a full system which is maintainable and easy to reason about.
[1] https://imagen.research.google/
[2] https://parti.research.google/