wwhitlow's comments | Hacker News

For what it's worth, I have really appreciated this thread. I 100% agree with the comments you have made and try to do the same. Unfortunately, I doubt I will ever make the time to create a project like yours, so I just ignore certain media instead.


Thanks! If it makes you feel better, you can already use Calibre to do basic ebook editing (bulk find/replace profanity, etc.) - your ebook just needs to be DRM-free.

All my project allows me to do is export a set of changes on a book to a JSON-based metadata format, so the changes can be shared. I'm afraid to monetize it because I don't think I could deliver a good UX without breaking DRM (which would get me into legal hot water), but perhaps making it into a context-free Calibre plugin might be safe...
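A shareable change set like the one described could be as simple as a list of find/replace edits serialized to JSON. This is only a hypothetical sketch of such a format (the field names and schema are invented for illustration, not the project's actual metadata format):

```python
import json

# Hypothetical change-set format: a list of find/replace edits that could
# be shared and re-applied to the text of a DRM-free ebook.
change_set = json.loads("""
{
  "book": "Example Title",
  "edits": [
    {"find": "darn", "replace": "oh no"},
    {"find": "heck", "replace": "gosh"}
  ]
}
""")

def apply_edits(text, edits):
    """Apply each find/replace edit to the text, in order."""
    for edit in edits:
        text = text.replace(edit["find"], edit["replace"])
    return text

chapter = "Well, darn. What the heck happened?"
print(apply_edits(chapter, change_set["edits"]))
# -> Well, oh no. What the gosh happened?
```

Applying edits in list order keeps the format deterministic, so two readers sharing the same change set get identical results.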


Alright, thanks! One more thing to add to the future tech list.


They get added to the carousel if they are AMP pages, which for breaking news stories is life and death now.


Thank you for this site - the collection of data here is amazing!


Anyone coming out of that site without thinking climate change is real is lost forever.


At what percentage would you allow an AI to consider a game unwinnable? An AI that behaves erratically when the odds are low might be worth allowing to forfeit, but the thing about humans is that we make mistakes. Therefore an ideal AI that can continue to execute reasonable moves should have a lower percentage threshold at which it decides to forfeit. See this match[0] for an example of a spectacular comeback that I feel an AI might have considered forfeit-worthy if the threshold were not well defined.

[0]: https://youtu.be/LwSQv_sNZBI
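The threshold question can be made concrete with a minimal sketch. Game engines such as AlphaGo are known to resign once their estimated win probability falls below a fixed cutoff; the 10% figure and the function name here are purely illustrative, not taken from any particular engine:

```python
RESIGN_THRESHOLD = 0.10  # illustrative cutoff, not any engine's real value

def should_resign(win_probability, threshold=RESIGN_THRESHOLD):
    """Forfeit only when the estimated chance of winning drops below
    the threshold. A lower threshold means the AI keeps playing through
    positions a human might already consider lost, which leaves room
    for a comeback if the opponent makes mistakes."""
    return win_probability < threshold

print(should_resign(0.25))  # False: keep playing, the opponent may blunder
print(should_resign(0.05))  # True: concede, almost certainly lost
```

The whole debate in the thread is effectively about where to set `threshold` for an agent that stays competent even in lost positions.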


Maybe this is a limitation of self-play. If the opponent an AI faces during training is always optimal, then there's no surface area for mistakes. The losing AI, in its model/mind, knows that the game is over after a specific threshold. So it hasn't learned how to capitalize on mistakes.

I wonder if this situation can be fixed by adding more randomness. For example, force AI 1 into a losing position against AI 2, but then suddenly switch the power level of AI 2 to be much weaker (where mistakes happen), so that AI 1 learns how to fight its way out of tough situations.
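That idea could be sketched as a self-play loop in which one agent's strength is occasionally degraded mid-game, so its opponent encounters exploitable blunders during training. Everything below is a hypothetical toy (the strength model, probabilities, and function names are invented), not OpenAI's actual training setup:

```python
import random

def pick_move(agent_strength, good_move, blunder):
    """Toy stand-in for a policy network: play the good move with
    probability equal to the agent's strength, otherwise blunder."""
    return good_move if random.random() < agent_strength else blunder

def self_play_episode(strength_a=1.0, strength_b=1.0, switch_prob=0.3):
    """Run a toy 10-turn episode. With probability switch_prob, agent B's
    power level drops partway through, so agent A sees (and can learn to
    exploit) mistakes instead of only ever facing an optimal opponent."""
    moves = []
    for turn in range(10):
        if turn == 5 and random.random() < switch_prob:
            strength_b = 0.2  # sudden power-level drop: B starts blundering
        moves.append(pick_move(strength_a, "good", "blunder"))
        moves.append(pick_move(strength_b, "good", "blunder"))
    return moves
```

Randomizing when (and whether) the drop happens keeps the stronger agent from overfitting to a fixed script of opponent errors.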


One of the most interesting takeaways from the post-game interview for me was that the AI can be very stupid if you just blindly throw it into a self-play setting, but with clever use of randomization (modifying power levels) and action restrictions (for example, only allowing the agent to spend an anti-invis item when a nearby enemy goes out of sight), it is possible to provide better learning opportunities for the AI.


> for example, only allowing anti-invis items when an enemy goes out of sight

These are the kind of actions you specifically don't want to code in because you're throwing in human knowledge. You want the AI to learn by itself that using anti-invis when everyone is visible is a low-value move.

The purist in me was even mad that they had a hand-crafted evaluation function. (e.g. prefer gold, prefer taking towers, each given some arbitrary value)
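The kind of hand-crafted evaluation being criticized might look something like the sketch below. The features and weights are invented for illustration (OpenAI has not published these exact values); the point is that every number is a human-chosen prior rather than something the agent learned:

```python
# Hypothetical shaped-reward evaluation: each game feature gets an
# arbitrary hand-tuned weight - exactly the injected human knowledge
# the parent comment objects to.
WEIGHTS = {
    "gold": 0.001,        # small bonus per unit of gold
    "towers_taken": 1.0,  # large bonus per tower destroyed
    "kills": 0.5,
}

def evaluate(state):
    """Score a game state as a weighted sum of hand-picked features."""
    return sum(WEIGHTS[feature] * state.get(feature, 0) for feature in WEIGHTS)

print(evaluate({"gold": 2000, "towers_taken": 2, "kills": 3}))
# 0.001*2000 + 1.0*2 + 0.5*3 = 5.5
```

A purist setup would instead reward only the win/loss signal and let the agent discover for itself that gold and towers matter.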


That's great - I'm always glad to hear progress updates from them. Setting their service up on my personal website was one of the easiest improvements I have ever made, thanks mostly to the fact that I have shell access to the server I host on. The auto-renew works perfectly - I haven't had to put any thought into renewing my cert since last year, and Let's Encrypt certs don't last that long. We are getting very close to there being no reason not to have an HTTPS connection to any website, which is great progress.

Thanks for everything you guys have done to accomplish this, Let's Encrypt!


That is interesting to me - I haven't actually read anything that could be considered a modern best seller on my Kindle, so the books I generally order are cheaper in their Kindle versions. That being said, I do agree with your point: if I had to pay more for the Kindle version, I would immediately buy the physical book.


So my experience with e-readers has varied over the years, and I had essentially abandoned the idea of an e-reader over the past 5 years. However, I have been in NYC this summer, riding the subway every day, so I eventually dusted off my Kindle and have loved it! I still think you are right that a physical book provides a better experience, but the ability to pull out a book, hold it in one hand, highlight passages I like, and take notes has been amazing. Not only that, but I have enjoyed the fact that once I finish a book I just download a new one and don't have to wait for Amazon or find a bookstore (not that it is difficult to find a bookstore in NYC). I do believe from my anecdotal survey as well that more people have physical books, but that does not mean Kindles are nonexistent - I have seen several people with them too. In reality it is probably closer to 60/40, because the vast majority of people I see on the subway are playing smartphone games or watching Netflix.


Yes, I do think this is a tremendous PR campaign.


I agree as well - the Intercept post has way more content.


I like the intent behind this idea and think it is important for ethics to become part of Computer Science. I'd be curious if someone with more knowledge of medicine could explain how those publications handle these dilemmas, especially with regard to CRISPR-Cas9, as that is probably the most famous recent discovery that needs serious ethical consideration.

My fear is that if Computer Science doesn't start acknowledging the ethical consequences of the work being done, it will lead to a sharp increase in regulation. I hold this fear primarily with respect to self-driving cars, some of which seem to have been rushed into production and have led to serious consequences.


CRISPR doesn't really change the ethical considerations. It's just a tool that makes genomic changes easier (and the jury is still very much out on whether it can do so safely). Other tools already existed to do this; they were just more expensive and more challenging to work with.


I won't claim to speak for others, but my formal education in Computer Science included a course on ethics and professionalism. It presented several ethical frameworks.

None of those ethical frameworks would lead me to conclude that, say, self-driving vehicles are net-negative for society.

My personal experience conducting research in computer science is that at least some of it is too abstract to have clear social consequences. What are the social consequences of making it marginally easier and faster to produce chip designs that are easy to manufacture?

