Hacker News | DominikPeters's comments

Large cars impose many heavy negative externalities on other people: they take up more space, make it difficult to get through a narrow street when parked there, cause higher mortality when they drive into pedestrians or cyclists, reduce visibility for others, and are aesthetically offensive. Policy is slow to shift these costs onto the people causing the externalities, but it is predictable that it will happen eventually.

Are you using Opus 4.5? Sounds more like Sonnet.


Yes I'm using Sonnet 4.5. Thanks for the tip, will try Opus 4.5, although costs might become an issue.


> although costs might become an issue.

If you have a ChatGPT subscription, try Codex with GPT-5.2-High or 5.2-codex High? In my experience, while being much slower, it produces far better results than Opus and seems even more aggressively subsidized (more generous rate limits).


This seems like a very basic Overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.


Loads of researchers have only used LaTeX via Overleaf and even more primarily edit LaTeX using Overleaf, for better or worse. It really simplifies collaborative editing and the version history is good enough (not git level, but most people weren't using full git functionality). I just find that there are not that many features I need when paper writing - the main bottlenecks are coming up with the content and collaborating, with Overleaf simplifying the latter. It also removes a class of bugs where different collaborators had slightly different TeX setups.

I think I would only switch from Overleaf if I was writing a textbook or something similarly involved.


Getting close to the "why Dropbox when you can rsync" mistake (https://news.ycombinator.com/item?id=9224)

@vicapow replied to keep the Dropbox parallel alive


Yeah, I realized the parallel while I was writing my comment! I guess what I'm thinking is that a much better experience is available, and there is no in-principle reason why Overleaf and Prism have to be so much worse, especially in the age of vibe-coding. Prism feels like the result of two days of Claude Code, when they should have invested at least five days.


I can see why it might seem that way because the UI is quite minimalist, but the AI capabilities are very extensive, imo, if you really play with it.

You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all fits together), but most researchers don't want to, and really shouldn't have to, figure all of that out for their specific workflows.


> Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.

I have a phd in economics. Most researchers in that field have never even heard of any of those tools. Maybe LaTeX, but few actually use it. I was one of very few people in my department using Zotero to manage my bibliography, most did that manually.


Accessibility does matter


If you inspect the Chain of Thought summaries, the LLM often knows full well what it is doing.


That's not knowing. That's just parroting in smaller chunks.


As an arXiv author who likes using complicated TeX constructions, the introduction of HTML conversion has increased my workload a lot, as I try to write fallback macros that render okay after conversion. The conversion is super slow, and there is no way to faithfully simulate it locally. Still, I think it's a great thing to do.


I believe dginev's Docker image https://github.com/dginev/ar5ivist is very close to what runs on arXiv and can be run locally. It uses a recent LaTeXML snapshot from September.


Given that the warming impacts of contrails are short-lived (roughly a day), I think it is a good idea to do research now on the weather forecasting needed to avoid producing contrails. But I don't really see a reason to actually start avoiding them now, with the associated costs in terms of fuel, CO2 emissions, and time. We can start avoiding them in a few decades when it might have become urgent to have cooling.


Aren't the impacts perpetual if we're creating new contrails every single day?

Taken from another comment, this seems pretty clear:

> Contrail cirrus may be air traffic's largest radiative forcing component, larger than all CO2 accumulated from aviation, and could triple from a 2006 baseline to 160–180 mW/m2 by 2050 without intervention.

[1] https://en.wikipedia.org/wiki/Contrail#Impacts_on_climate

The original article describes associated costs in time and fuel usage in the realm of 1% increase.


Not sure how you haven't noticed, but climate change is already affecting precipitation and drought patterns, it exacerbates heatwaves, cold snaps, and flooding, it affects harvests, disrupts ecosystems etc. etc. Reducing warming is an urgent matter.


There was a really good section of the article that went into great detail on the math and how avoidance would easily outweigh the extra CO2. It would only require diverting something like 2% of all flights, because that small share of flights produces the majority of contrails. The average diversion would add something small like 2 minutes of flight time to shorter flights and about 6 minutes to longer ones, which the article states is not much of an increase in fuel consumption, nor enough of a time increase to dissatisfy customers. So if the article's math is accurate, the associated costs in all three of fuel, CO2, and time are not an issue.
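The claim above can be sanity-checked with quick arithmetic. All inputs below are rough figures as recalled from the article in this comment, plus an assumed typical flight duration, so treat this as an illustration rather than the article's actual calculation:

```python
# Back-of-envelope check: fleet-wide cost of diverting contrail-heavy flights.
# The percentages and extra minutes come from the comment above; the flight
# durations are assumptions for illustration.

DIVERTED_SHARE = 0.02     # ~2% of flights cause most warming contrails
EXTRA_MIN_SHORT = 2       # extra minutes for a diverted short flight
SHORT_FLIGHT_MIN = 90     # assumed typical short-flight duration
EXTRA_MIN_LONG = 6        # extra minutes for a diverted long flight
LONG_FLIGHT_MIN = 600     # assumed typical long-flight duration

# Fuel burn scales very roughly with flight time.
per_flight_short = EXTRA_MIN_SHORT / SHORT_FLIGHT_MIN  # ~2.2% for that flight
per_flight_long = EXTRA_MIN_LONG / LONG_FLIGHT_MIN     # ~1.0% for that flight

# Fleet-wide, only the diverted 2% of flights pay any penalty at all.
fleet_wide = DIVERTED_SHARE * max(per_flight_short, per_flight_long)

print(f"per diverted short flight: {per_flight_short:.1%}")
print(f"fleet-wide upper bound:    {fleet_wide:.3%}")
```

Under these assumptions the fleet-wide fuel penalty comes out well under a tenth of a percent, which is consistent with the article's conclusion that the costs are small.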


Given the feedback loops associated with climate change, I'd expect early interventions to have a larger impact on the climate than later ones.


It is already urgent.


It was urgent 40 years ago.


No. Addressing CO2 production was urgent then but the actual impacts of heat were not. They are now.


But the warming started already back then, and contrails contributed. So fewer contrails would have meant less warming over the last 40 years.


They have worded things dishonestly to make you think that POP can be replaced by IMAP. The IMAP support is only available in the mobile app (not gmail.com) and isn't a "fetch" that integrates fetched emails to your Gmail inbox. It's kept as a separate inbox.


It is not supported. You can only add an IMAP mailbox on the mobile app and not on gmail.com. The IMAP account is then displayed as an inbox completely separate from your gmail inbox. There is no pull and no integration.


Ah, I see, I think you're right. I misread the Google doc.


[flagged]


This isn't the same thing. Yes Gmail provides IMAP so you can read it from other clients. The issue here is that Gmail cannot use IMAP to ingest email from other accounts, as it can (or could) using POP3.


But you can't pull from a third party into gmail via IMAP.


Correct


So why did you say "Not true at all"?

Is it AI spam to advertise your product?


Claude 3.7 was released in February 2025.


It will include many URLs that are semi-private, like Google Docs that are shared via link.


If some URL is accessible via the open web, without authentication, then it is not really private.


What do you mean by accessible without authentication? My server will serve example.com/64-byte-random-code if you request it, but if you don’t know the code, I won’t serve it.


Obfuscation may hint that it's intended to be private, but it's certainly not authentication. And the keyspace for these goo.gl short URLs is much smaller than that of a 64-byte alphanumeric code.
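The difference in keyspace size is easy to quantify. A rough sketch, assuming a 62-character alphanumeric alphabet, a 6-character short code (typical for URL shorteners), and treating the 64-byte code from the earlier example as 64 alphanumeric characters:

```python
import math

ALPHABET = 62          # a-z, A-Z, 0-9
SHORT_CODE_LEN = 6     # assumed goo.gl-style short code length
RANDOM_CODE_LEN = 64   # the 64-character random code from the example

short_space = ALPHABET ** SHORT_CODE_LEN
random_space = ALPHABET ** RANDOM_CODE_LEN

print(f"short-code keyspace: {short_space:.2e} (~2^{math.log2(short_space):.0f})")
print(f"64-char keyspace:    ~2^{math.log2(random_space):.0f}")

# At a (generous) 1 billion guesses per second:
RATE = 1e9
print(f"exhaust short codes:   {short_space / RATE:.0f} seconds")
print(f"exhaust 64-char codes: {random_space / RATE:.1e} seconds")
```

The short-code space (~2^36) can be swept in under a minute at that rate, while the 64-character space (~2^381) is far beyond any conceivable brute force, which is why one is realistically enumerable and the other is not.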


Sure, but you have to make executive decisions on behalf of people who aren't experts.

Making bad actors brute-force the keyspace to find unlisted URLs could be a better scenario for most people.

People also upload unlisted Youtube videos and cloud docs so that they can easily share them with family. It doesn't mean you might as well share content that they thought was private.


I'm not seeing why there's a clear line where GET cannot be authentication but POST can.


Because there isn't a line? You can require auth for any of those HTTP methods. Or not require auth for any of them.


I mean, going by that argument a username + password is also just obfuscation. Generating a unique 64-byte code is even more secure than that, IF it's handled correctly.


That's not any better than what ArchiveTeam is doing. They're brute-forcing the URLs to capture all of them. So privacy won't really matter here.


Then use something like argon2 on the keys, so brute-forcing them all takes a long time, similar to how it is today.
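One way such a scheme could look: the archive publishes slow_hash(short_code) → URL instead of the codes themselves, so anyone who knows a code can still resolve it, but enumerating the keyspace costs one expensive hash per guess. A minimal sketch, using scrypt as a stand-in for argon2 since it is in Python's standard library; the salt, short code, and URL below are all hypothetical:

```python
import hashlib

SALT = b"public-archive-salt"  # a fixed, published salt (assumption)

def slow_key(code: str) -> str:
    """Memory-hard hash of a short code; slow by design to resist brute force."""
    digest = hashlib.scrypt(code.encode(), salt=SALT, n=2**14, r=8, p=1)
    return digest.hex()

# The archive stores hashed codes, not the codes themselves.
archive = {slow_key("aB3xYz"): "https://example.com/secret-report.pdf"}

# A lookup with the known code still succeeds...
url = archive[slow_key("aB3xYz")]
print(url)
# ...but sweeping the whole keyspace now pays the scrypt cost per candidate.
```

With these parameters each guess costs around 16 MiB of memory and a noticeable fraction of a second, which turns a seconds-long sweep of a short-code keyspace into years.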


So exclude them


How?

How will they know a short link to a random PDF on S3 is potentially sensitive info?


I meant Google Docs, whose share settings they know.

