Hacker News | Jan454's comments

The article says: "nuclear power produces very little waste"

Is that true? Does it take into account all the nuclear waste that takes millions of years to decay?

Wouldn't it be fair to calculate "waste produced" as "amount x (duration to be fully recycled)"?

Example calculation based on my amateurish guesstimates: residual waste takes maybe 500 years until it's organic earth fit for planting again. Nuclear waste takes 5 million years, so does a nuclear plant really produce 500 years / 5,000,000 years = 1/10,000th the waste of comparable power plants?
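For what it's worth, plugging the guessed numbers into that weighting (both durations are the comment's rough guesses, not measured figures):

```python
# Proposed metric: waste score = amount produced x time until harmless.
residual_years = 500          # guessed: household waste -> soil again
nuclear_years = 5_000_000     # guessed: spent-fuel decay horizon

# For an equal weighted score, a nuclear plant could only produce this
# fraction of the mass a conventional plant does:
ratio = residual_years / nuclear_years
print(ratio)   # 0.0001, i.e. 1/10,000
```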


> Is that true?

> (1) The U.S. generates about 2,000 metric tons of used fuel each year. This number may sound like a lot, but it’s actually quite small. In fact, the U.S. has produced roughly 83,000 metric tons of used fuel since the 1950s—and all of it could fit on a single football field at a depth of less than 10 yards. https://www.energy.gov/ne/articles/5-fast-facts-about-spent-....

For comparison, a single coal plant produces nearly as much toxic (and radioactive) ash in an hour: http://energyforhumanity.org/en/nuclear/what-do-we-do-with-a...

And this ignores recycling (which the U.S. doesn't do, but the French are known for with modern reactor technology), which reduces the waste to 10%.


Maybe it's time for a public, company-independent complaints platform where all companies are obliged by law to respond. Each complaint would of course be hidden behind some anti-bot captcha/protection so that the companies themselves can't use AI to answer your complaints.


https://ec.europa.eu/info/law/law-topic/data-protection/refo...

I dunno, where's the evidence this case was even done by AI? People just say Google does everything with computers when they actually use armies of contractors.


That's basically what the author suggests when they say:

> Anyhow, I truly believe humanity has to rollback to operating at a human scale.

It's impossible to operate a complaints-platform on a global scale if it's run by humans. According to GMI [0], 500 hours of content are uploaded every minute.

Given an average video duration of about 12 minutes [1], that would be 2500 videos per hour. That's just too much to manually review and handle complaints.

[0] https://www.globalmediainsight.com/blog/youtube-users-statis...

[1] https://www.statista.com/statistics/1026923/youtube-video-ca...


Is it though? Let's do a very rough estimate: 500 hours of content per minute in 2019. Let's say 1 reviewer can review 3 hours of video each hour (by increasing play speed, skipping, etc.) and we have a global workforce in lower-income countries working 3x 8-hour shifts plus weekends. That's 500*60/3*3 = 30,000 reviewers; at a monthly cost of $1,000 that's 30,000*1,000*12*7/5 = $504m/year. YouTube had $15b in revenue in 2019, so this represents around 3.5% of revenue. Now this assumes that we actually need to 100% review every video before releasing it (which is not the case), and one reviewer can probably review more than 3 hours of content per hour with the right AI assistance, so the real cost would be quite a bit lower. Even then, spending less than 5% of revenue on content review, moderation and support sounds very reasonable to me.
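For anyone who wants to poke at the assumptions, the arithmetic works out like this (all numbers are from the comment's 2019 figures; the $1,000/month reviewer cost is its assumption):

```python
# Back-of-envelope reviewer estimate, reproducing the comment's math.
hours_uploaded_per_minute = 500
hours_reviewed_per_reviewer_hour = 3       # assumed review speed
shifts_per_day = 3                         # 3 x 8-hour shifts for 24/7 coverage

content_hours_per_hour = hours_uploaded_per_minute * 60                 # 30,000
concurrent = content_hours_per_hour // hours_reviewed_per_reviewer_hour # 10,000
total_reviewers = concurrent * shifts_per_day                           # 30,000

# Weekday staffing scaled by 7/5 to cover weekends, at $1,000/month each:
annual_cost = total_reviewers * 1000 * 12 * 7 // 5                      # $504,000,000

print(total_reviewers)                     # 30000
print(annual_cost)                         # 504000000
print(annual_cost / 15_000_000_000)        # ~0.034 of 2019 revenue
```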


The flaw is that you think humans would do a better job than AI, which is definitely not the case. Especially hiring 30k people in a low-income country, what could go wrong... This is the kind of scale problem that can't be fixed by human review.


> Is it though?

Yes, it is - because it's actually 2500 videos per MINUTE, not per hour, mea culpa. So your 30,000 reviewers would actually have to be at least 1.8 MILLION.


They didn't use your videos-per-hour/minute figure; they used hours of content per minute, so it still comes out to 30,000 reviewers.


It's not about the viewing time, though, it's about the videos.

The misconception is that it's the review process that's the problem - it isn't. That can be automated just fine.

The problem arises as soon as there are complaints or issues with the content and that depends on the number of videos, not the duration.

So if there's a problem with a video it can get flagged, de-monetised or even taken down automatically by software (as is the case now). This is a non-issue. It gets complicated as soon as one party has a dispute over this and that scales with the number of videos, not their length.


> that scales with the number of videos, not their length.

That seems plausible, but if so, the entire calculation would have to be redone from scratch, with qualitatively (not quantitatively; different units, not just different values) different numbers, so bringing "1.8 million" into it is still a misleading non sequitur.


It takes a lot longer to make a video than to watch it. It therefore stands to reason that if humanity is capable of making all that content, humanity is capable of watching it - if it decided that were a priority.


2500 videos per minute doesn't equal 500 hours of original content per minute, which is part of the problem.

Just look at all the reaction channels and compilations that simply reuse the same content over and over again. You have one funny or shocking clip (often from 3rd party sources such as TikTok) and you'll find the same video snippet in at least 10,000 remix/compilation/reaction videos. Not to mention reuploads and straight up copies.

Algorithms have a hard time catching up with this and cropping, mirroring, tinting, etc. are often used to confuse ContentID. Asymmetry is the problem. Bots and software can both spam and flag content at superhuman rates.

The inverse - e.g. deciding whether a complaint is legit, fair use applies, whether monetisation is possible, etc. - is actually a really hard problem and therein lies the dilemma.

Certain parties are gaming the system and the scale is just too much to handle manually.


I don't have any data to back this up, but I believe that of those 2500 videos per minute, the AI could classify 2450 or so as safe, requiring no human interaction. The other 50 are scored on a badness scale from 0 to 100. The ones closer to "not that bad" (ToS violations and such) get put through automatically while awaiting review. The illegal content (rape, gore, child porn) gets blocked automatically until reviewed by a human. Doesn't sound that far-fetched to implement with $50B a year in profits?
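That triage could be sketched roughly like this (the threshold and labels are invented for illustration, not anything YouTube actually uses):

```python
def triage(is_flagged: bool, badness: int) -> str:
    """Route a video given the classifier's verdict.

    is_flagged: whether the AI thinks human review is needed at all.
    badness: 0-100 score for flagged videos, higher = worse.
    """
    if not is_flagged:
        return "publish"                 # the ~2450 of 2500 safe videos
    if badness < 80:                     # "not that bad": ToS issues etc.
        return "publish_then_review"     # goes live, queued for a human
    return "block_until_reviewed"        # likely illegal content

print(triage(False, 0))    # publish
print(triage(True, 40))    # publish_then_review
print(triage(True, 95))    # block_until_reviewed
```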


But how would that help with complaints, ContentID and copyright claims?

The problem isn't the automated review process, the problem is complaints and disputes.

Even if only 1% of all videos had any issues of this sort at all, that'd still be 25 complaints per minute about the most complex topic in media no less - copyright law and fair use.

The problem lies in the asymmetry - bots and automated flagging campaigns can scan, mark and take down thousands of videos per minute no problem.

But it's impossible for creators to get their issue reviewed in time by a human, because we just don't have AI capable of handling such decisions yet. And even then it's often still not as clear-cut as one might think and both sides need to be heard, etc.


And right now it's essentially impossible to be heard by a human, which is the problem. They don't have enough humans employed.


I've thought that something like the SpamAssassin model would be sufficient: calculate a 0.0-1.0 likelihood-to-block score, set one cutoff near the 0 end to auto-approve and another toward the 1 end to auto-block, and moderate the middle.

Was good for spam for a long time.
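A minimal sketch of that two-cutoff routing (the 0.2/0.8 thresholds here are placeholders, not SpamAssassin's actual defaults):

```python
AUTO_APPROVE_BELOW = 0.2   # scores under this are published unreviewed
AUTO_BLOCK_ABOVE = 0.8     # scores over this are blocked unreviewed

def route(block_likelihood: float) -> str:
    """Map a 0.0-1.0 likelihood-to-block score to an action."""
    if block_likelihood < AUTO_APPROVE_BELOW:
        return "approve"
    if block_likelihood > AUTO_BLOCK_ABOVE:
        return "block"
    return "human_review"   # only the ambiguous middle reaches moderators

print(route(0.05))  # approve
print(route(0.50))  # human_review
print(route(0.95))  # block
```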


Maybe we don't need to let people upload that much video content.


*2500 videos per minute


Yes, thanks.


A small claims court for the internet. I like that.


That sounds like something the government should oversee. Maybe they could also make some regulations so that, I dunno, corporations might need to be responsible for their actions.


What do you think this will achieve? We know what Google thinks; they don't want this content on their platform.

Forcing them to state that in a specific forum as some sort of power play isn't going to help.


Time to install GPT-3 on your own server and unleash it on Google Support, YouTube Support, Alpha Support, etc. to complain about your situation ;-) You just need to answer the captchas to keep it going.


With AI-based support automation, it might end up like https://www.youtube.com/watch?v=WnzlbyTZsQY


Combine that with 2captcha.com, what do we have? Lol


The good side of this witch-hunt is that the bad and the ugly revealed themselves. E.g. Red Hat and Mozilla stopped their support for the FSF [1], using this hunt as a scapegoat.

Really sad about Red Hat, though.

[1] https://www.theregister.com/2021/03/26/redhat_fsf_funding_ri...


Do you mean IBM? :-D


Red Hat is very independent from IBM.


IBM could snap their fingers at any moment and dissolve the entity now called “Red Hat” into dust.


And explain to their shareholders how they burned billions of dollars in brand awareness...


Like CentOS?


I don't understand why Google (and Apple) are allowed to deny apps on their own at all. This is self-administered justice! They are monopolies. If they say "there are alternatives, just go to Apple/Google/China/F-Droid", that's just a farce that should be punished hard.


I agree. Apple and Google should be forced to use their resources to distribute apps for any developer under any terms the developer wants.


Oh what a burden it is to host 10 MB APKs for people to download and use..

Let's stop the pretense that Google and Apple are doing us a big favor by hosting our APKs for download. They are not paying for my cloud storage or cloud functions, which are what my users actually use; I pay for that personally. The only role Apple and Google have is to be the troll taking the toll to "allow" me to put up a link and have my app downloaded from their stores.

I would love to just put the same APK on my website and have people install it directly.


Unlimited free downloads is not an inconsequential benefit. Imagine if Spotify tried to host their downloads on a SquareSpace account; how many microseconds would it be before that account was banned? Even a 10 MB APK isn't cheap to host when you have millions of monthly downloads.

And then you can just ignore all the ancillary services that add value to being on an App Store, such as app review, marketing promotion, international market access, etc, etc.

It’s as if you somehow think all those services appear out of the air instead of requiring thousands of expensive employees to design, build, test, and maintain.


That's not the problem with Google, as there are other methods of distributing apps for Android. But with Apple it's a different story.

Also, AFAIK Android's browser is more advanced than Apple's, so you can create web applications with more capabilities on Android than on iOS. But web applications don't seem very popular, for some reason I don't understand.


>I don't understand

put that on a tshirt


As a programmer, please first define the words "programming" and "easy" ;)

If you instead mean the whole process of everything involved, then split it up into the 50 distinct skills that all come together to 'imagine, design, mock, develop, run and support production-ready software'.

To most people some of the needed skills are 'easy' and others are 'hard'.

For example, just writing 'compilable code' with the help of Stack Overflow is probably easy for most. That's not what I call 'programming', but pure beginners might.


> As a programmer, please first define the words "programming" and "easy"

I came here to say pretty much the same thing.

As far as I'm concerned, "programming" is just the art of writing instructions. For software developers, that means writing instructions for computers in a machine-readable language. But really any form of recording instructions is "programming", regardless of who the instructions are meant for.

I've taught my wife a little programming and for her the hard part is understanding how to translate instructions into a machine-readable language. Her biggest struggle is understanding how to decompose each step until it can be accomplished using the limited features and tools provided by the language.

When I think back on my education, we really didn't talk very much about the process of logically decomposing instructions. I've also had a hard time finding any resources to help me better explain the process to her. I wouldn't be surprised if that hurdle is responsible for most people giving up on programming.


I really hope they now have to pay that 4% ransom due to violation of the GDPR .. for each stolen account of course ;-)


The important question (imho) would be: What can be done with this still-open connection after the client has closed the tab?

* Might the server use it for anything without my consent?

(e.g. hacking the sandbox, since this state of the browser might exercise a code path with yet-unknown security bugs, or might still-running JS scripts keep going in the background even after the tab is closed?)

* Might the browser use it for anything without the server's consent?

Yeah, and the US took almost 2 more months to do the same, so no difference here...


Which western doctors were arrested for raising awareness about covid?


Seems like the same behaviour as any other country. Can you name just one that would behave differently? So this is no accusation, just business as usual.

Besides, why try to blame anyone for this at all? It's a natural disaster that could happen to anyone anywhere. The real culprit is that as long as the wealthy don't try to vaccinate everyone in the world as quickly as possible, the virus still has the potential to mutate into something current vaccines don't protect against..


Which other governments arrested & jailed people for reporting on the outbreak?

https://www.bbc.com/news/world-asia-china-54969682

