Large cars impose many negative externalities on others (they take up more space, block narrow streets when parked there, cause higher mortality when they hit pedestrians or cyclists, reduce visibility for other road users, and are aesthetically offensive). Policy has been slow to shift those costs onto the people causing them, but it is predictable that it will happen eventually.
If you have a ChatGPT subscription, try Codex with GPT-5.2 High or GPT-5.2-Codex High. In my experience, while much slower, it produces far better results than Opus and seems even more aggressively subsidized (more generous rate limits).
This seems like a very basic overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.
Loads of researchers have only used LaTeX via Overleaf and even more primarily edit LaTeX using Overleaf, for better or worse. It really simplifies collaborative editing and the version history is good enough (not git level, but most people weren't using full git functionality). I just find that there are not that many features I need when paper writing - the main bottlenecks are coming up with the content and collaborating, with Overleaf simplifying the latter. It also removes a class of bugs where different collaborators had slightly different TeX setups.
I think I would only switch from Overleaf if I was writing a textbook or something similarly involved.
Yeah I realized the parallel while I was writing my comment! I guess what I'm thinking is that a much better experience is available and there is no in-principle reason why overleaf and prism have to be so much worse, especially in the age of vibe-coding. Prism feels like the result of two days of Claude Code, when they should have invested at least five days.
I could see why it might seem that way because the UI is quite minimalist, but the AI capabilities are very extensive, imo, if you really play with it.
You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all fits together), but most researchers don't want to, and really shouldn't have to, figure all of that out for their specific workflows.
> Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.
I have a phd in economics. Most researchers in that field have never even heard of any of those tools. Maybe LaTeX, but few actually use it. I was one of very few people in my department using Zotero to manage my bibliography, most did that manually.
As an arXiv author who likes using complicated TeX constructions, the introduction of HTML conversion has increased my workload a lot: I now have to write fallback macros that render okay after conversion. The conversion is super slow and there is no way to faithfully simulate it locally. Still, I think it's a great thing to do.
I believe dginev's Docker image https://github.com/dginev/ar5ivist is very close to what runs on arXiv and can be run locally. It uses a recent LaTeXML snapshot from September.
Given that the warming impacts of contrails are short-lived (roughly a day), I think it is a good idea to do research now on the weather forecasting needed to avoid producing contrails. But I don't really see a reason to actually start avoiding them now, with the associated costs in terms of fuel, CO2 emissions, and time. We can start avoiding them in a few decades when it might have become urgent to have cooling.
Aren't the impacts perpetual if we're creating new contrails every single day?
Taken from another comment, this seems pretty clear:
> Contrail cirrus may be air traffic's largest radiative forcing component, larger than all CO2 accumulated from aviation, and could triple from a 2006 baseline to 160–180 mW/m2 by 2050 without intervention.
Not sure how you haven't noticed, but climate change is already affecting precipitation and drought patterns, it exacerbates heatwaves, cold snaps, and flooding, it affects harvests, disrupts ecosystems etc. etc. Reducing warming is an urgent matter.
There was a really good section of the article that went into great detail on the math and how the benefit would easily outweigh the CO2. It would only require diverting something like 2% of all flights, because that small percentage of flights produces the majority of the contrails. The average diversion would be small: roughly an extra 2 minutes of flight time on shorter flights and about 6 minutes on longer ones, which the article states is not much of an increase in fuel consumption, nor enough of a delay to dissatisfy customers. So if the article's math is accurate, the associated costs in fuel, CO2, and time are all a non-issue.
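A rough back-of-the-envelope check on those numbers (the 2% diversion rate and 2-minute detour are from the article as summarized above; the 90-minute baseline flight time and fuel-burn-proportional-to-time assumption are mine, purely for illustration):

```python
# Sketch: fleet-wide extra fuel burn if 2% of flights take a small detour.
# Assumes fuel burn is roughly proportional to flight time, and assumes
# a 90-minute baseline duration for a "short" flight (my assumption).
diverted_fraction = 0.02   # 2% of flights rerouted (from the article)
extra_minutes = 2          # extra flight time per diverted short flight
baseline_minutes = 90      # assumed baseline flight duration

fleet_fuel_increase = diverted_fraction * (extra_minutes / baseline_minutes)
print(f"fleet-wide fuel increase: ~{fleet_fuel_increase:.4%}")  # ~0.0444%
```

Under those assumptions the fleet-wide fuel (and CO2) penalty is on the order of a twentieth of a percent, which is consistent with the article's claim that the costs are small relative to the contrail benefit.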
They have worded things dishonestly to make you think that POP can be replaced by IMAP. The IMAP support is only available in the mobile app (not gmail.com) and isn't a "fetch" that integrates fetched emails into your Gmail inbox. It's kept as a separate inbox.
It is not supported. You can only add an IMAP mailbox on the mobile app and not on gmail.com. The IMAP account is then displayed as an inbox completely separate from your gmail inbox. There is no pull and no integration.
This isn't the same thing. Yes Gmail provides IMAP so you can read it from other clients. The issue here is that Gmail cannot use IMAP to ingest email from other accounts, as it can (or could) using POP3.
What do you mean by accessible without authentication? My server will serve example.com/64-byte-random-code if you request it, but if you don’t know the code, I won’t serve it.
Obfuscation may hint that it's intended to be private, but it's certainly not authentication. And the keyspace for these goo.gl short URLs is much smaller than that of a 64-byte alphanumeric code.
Sure, but you have to make executive decisions on the behalf of people who aren't experts.
Making bad actors brute force the key space to find unlisted URLs could be a better scenario for most people.
People also upload unlisted YouTube videos and cloud docs so that they can easily share them with family. It doesn't mean you might as well expose content they thought was private.
I mean, going by that argument a username + password is also just obfuscation.
Generating a unique 64 byte code is even more secure than this, IF it's handled correctly.