Hacker News | drunkencoder's comments

Still don’t get it. Is it to be able to show the user a local time for the timestamp? If that is a requirement, why not just also store the timezone along with the UTC timestamp? Should the date object really know this?


The problem is that the result of toUTCTimestamp(datetime: January 1st 2077 1:15am, timezone: New York) may be X today, but timezone rules could change soon, and then it becomes Y tomorrow. If you've only persisted (unixTime: X, timezone: New York) in the database and you try to turn X back into a local time, maybe now you get December 31, 11:45pm. If you're a calendar application or storing future times for human events, people will think your software is broken and buggy when this happens, and they will miss events.
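A small Python sketch of that failure mode, using only the stdlib `zoneinfo` module (the event date and zone here just mirror the example above):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A future event, specified as local wall-clock time in New York.
event_local = datetime(2077, 1, 1, 1, 15, tzinfo=ZoneInfo("America/New_York"))

# Converting to a Unix timestamp bakes in *today's* timezone rules:
unix_ts = event_local.timestamp()

# Reconstructing the local time from (unix_ts, zone) re-applies whatever
# rules are current at read time. If New York changes its UTC offset
# before 2077, this no longer yields 1:15am.
reconstructed = datetime.fromtimestamp(unix_ts, tz=ZoneInfo("America/New_York"))
print(reconstructed.hour, reconstructed.minute)

# Storing the wall-clock fields plus the zone name instead means the
# UTC conversion can be redone lazily, with up-to-date rules:
stored = ("2077-01-01T01:15", "America/New_York")
```

With today's tzdata the round trip is lossless, which is exactly what makes the bug invisible until the rules actually change.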


I wonder how this would have played out using deep fake technology


Mandalorian Season 2 spoilers below.

This article looks relevant: “How Star Wars Deepfake Seriously Improves Luke Skywalker Cameo in The Mandalorian” - the YouTuber who made it was eventually hired by ILM. There is also an example with Tarkin.

https://www.denofgeek.com/tv/star-wars-deepfake-luke-skywalk...


It is definitely better, but it still has the problem of the original where his face just looks so stiff when he speaks.


It also looks like his face is sort of floating superimposed over the front of his head. I feel like I notice that a lot with deepfake stuff, it’s like the generated patch they’re compositing overtop doesn’t quite move at the same rate as the rest of the thing.


Yeah looking at it again I see what you’re saying


Being a father of two (the oldest 3.5 years old), this kind of thread works as a friendly reminder of thankfulness. It suddenly opens up that deep void, as thinking of your own death or the edge of the universe might do. I will hug my family and appreciate life and current happiness, knowing it can all end anytime. Thanks for the reminder, and love to you all.


You’re right. That’s why scraping must be unlimited and legal for all. Any information accessible from the internet should be legal to refine. That includes us using GPT services to train our own models, scraping anything that’s publicly accessible. Our only defense is competing services that refine the data even more than any general LLM. The solution is almost never regulation but competition. Fair competition.


> You’re right. That’s why scraping must be unlimited and legal for all.

Unlimited scraping makes some privacy regulations moot, such as the right to erasure (the ability to delete personal data from a platform).


I don't think that's true. "Right to erasure" still works just as well as it always has, but you might need to ask the folks who have scraped and are re-sharing your information to also delete your personal data. That's not an unreasonable thing to have happen, nor is it an unreasonable thing to expect.

Let's suppose an embarrassing image of Person X is shared on Facebook and Person X uses their right to erasure with Facebook to delete their profile. Facebook has no control over the folks who may have downloaded or screenshotted that photo and turned it into subsequent memes. Likewise, if someone straight up scrapes and re-shares, that's not Facebook's responsibility.

What I don't want to see happen is for:

1. Facebook to make it somehow impossible for anyone to ever copy or screenshot that or any photo, preventing anyone from ever doing anything with photos on Facebook without Facebook's explicit permission. This would seem to be quite the loss of user agency for very little society-wide benefit (also, how would they do this?)

2. Facebook to somehow "control" that photo so closely that Facebook is able to remotely revoke folks' copies and screenshots of said photo in the spirit of "abiding by a person's right to erasure"; that'd be a huge overreach, but seems like the only other way to approach this (though "how" is also an open question).

Even asserting that "unlimited scraping makes some privacy regulations moot" seems like an implication that we can only have privacy laws by going towards situation #1, and that doesn't seem accurate given that folks can use existing privacy laws to remove content from any distributor (as long as they're compliant).


Not exactly. You can request a site to erase all the data it has on you, but not that they erase the memories of everyone who has seen this data. How is this any different?


Your tone implies you're serious, but I struggle to believe anyone could possibly equate persisting digital media with recalling a memory.

In case you really need an example to elucidate, consider reproducing an image. A scraper can quite literally accomplish that, trivially; a great artist would still be limited in multiple facets of the recreation, such that even one with the best memory and hand would find themselves far short of pixel-perfect.


I wonder how we would regard a person who could reliably perform such a feat whenever he pleased. Would we sterilize him, lest he give rise to a bunch of cute little privacy-invading monsters?


If the feat you mean is to perfectly recall disparaging information they see about people on web sites, we already have people with quite good memories. Irrelevance usually keeps them from bringing up the details of strangers' lives on a regular basis. If the juicy details are about friends or acquaintances, well, it's very easy to destroy one's social position - at least, with non-toxic people - by endlessly and tiresomely discussing other people's misfortunes or mistakes.


How many people who have seen that data are acting as a service to share it, at scale?


How many of them saved it and then reuploaded it elsewhere? Sorry, but talking about protecting the privacy of people who upload things for anyone to see just seems silly to me.


scale


So at which scale does the copying of data lower privacy, such that humans looking at it and potentially screenshotting it doesn't, but automated processes copying it does?


A fuzzy boundary doesn't make the two sides equivalent.


No, but since we are talking about laws, it is important to define the point beyond which a kind of behavior becomes unacceptable, or at least some set of criteria to determine when a specific instance is beyond that point.


You're making an ideological argument but not confronting any of the business problems raised in the other comment.


Yes, recently scaled up research for my iOS app. Have made a pipeline for fact extraction from various sources, including some sentiment analysis. Would have taken me years to do it manually. It will mainly be value-adding for the end user. Not directly client-facing, since this is done offline and shipped as data into the final product.
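The comment doesn't describe the pipeline's internals, so here is only a toy sketch of the general shape of such an offline enrichment step. The extraction rule and the sentiment lexicon below are placeholder assumptions, not the commenter's actual approach:

```python
# Toy offline pipeline stage: pull candidate "fact" sentences out of raw
# text and attach a crude lexicon-based sentiment score. Both the lexicon
# and the "contains a digit" heuristic are illustrative placeholders.

POSITIVE = {"good", "great", "improved", "safe"}
NEGATIVE = {"bad", "poor", "declined", "dangerous"}

def sentiment(sentence: str) -> float:
    """Score in [-1, 1]: positive minus negative hits, normalized by length."""
    words = sentence.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

def extract_facts(text: str) -> list[dict]:
    """Treat any sentence containing a digit as a candidate fact."""
    facts = []
    for sentence in text.split("."):
        sentence = sentence.strip()
        if sentence and any(ch.isdigit() for ch in sentence):
            facts.append({"text": sentence, "sentiment": sentiment(sentence)})
    return facts

facts = extract_facts("Sales improved 12% this year. The weather was nice.")
print(facts)
```

A real pipeline would swap the toy lexicon for a proper sentiment model and serialize the resulting records as the data file shipped inside the app.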


So it will work for all historical earthquakes :) Sounds a bit like some climate research: tune model parameters until they perfectly match historical data. This should be possible to debunk in a shorter time, though.


Or a man, or a cat, or a person who identifies themselves as a cupboard. :)


Thanks. Learned a lot

