I come from that world myself: jailbreaking my first iPod Touch 4G with Cydia at age 9 is what got me interested in how things work under the hood. So the question makes me smile.
I've never tested on E Ink. What I can tell you is that from a computer you can export the dark PDF by clicking the download icon in the toolbar; the file is a standard PDF you can send to any device. If you try it, let me know how it turns out. I'm curious to see the result on a reflective display.
Thanks, I deeply agree. It's Alan Cooper's concept from About Face: to design well, you take the role of the apprentice alongside the master, observing how they work and understanding their frustrations before proposing solutions. In my case I was the master and the apprentice at the same time, but it's an approach I want to carry with me.
Thanks for the report; you found a real bug. That document is a scan processed with Adobe Paper Capture, which adds an invisible OCR text layer on top of the scanned image. Veil sees that text and treats the PDF as native, so it protects the image from inversion instead of inverting it. The dark border you see is the PDF background margin between the page edge and where the raster image starts; that margin gets inverted by the CSS. I'll probably need to cross-check text detection against image coverage: if a single image covers almost the entire page, it's a scan even if it has native text. Thanks for the specific document, it'll be very useful for reproducing the issue.
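A minimal sketch of that coverage check, assuming a flat list of image rectangles per page (the function name and the 0.9 threshold are illustrative, not Veil's actual code):

```javascript
// Treat a page as a scan when one raster image covers nearly the whole page,
// even if an invisible OCR text layer (e.g. Adobe Paper Capture) is present.
function isLikelyScan(pageWidth, pageHeight, imageBounds, threshold = 0.9) {
  // imageBounds: array of { x, y, w, h } rectangles for raster images on the page
  const pageArea = pageWidth * pageHeight;
  const maxCoverage = imageBounds.reduce(
    (max, { w, h }) => Math.max(max, (w * h) / pageArea),
    0
  );
  return maxCoverage >= threshold;
}

// A Paper Capture page: one image spanning almost the full US Letter page
// plus an OCR text layer → classified as a scan despite the native text.
isLikelyScan(612, 792, [{ x: 6, y: 8, w: 600, h: 780 }]); // → true
// A native PDF with a small embedded figure → not a scan.
isLikelyScan(612, 792, [{ x: 100, y: 100, w: 200, h: 150 }]); // → false
```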
Thanks, really appreciate it. Hearing that it solves a real problem means a lot to me. Running locally is definitely about privacy, but it partly comes from my past working in a factory where the network was unreliable, so I was basically forced to build tools that worked offline. Knowing that habit is appreciated fills me with joy.
This means a lot, thank you. "The original vision for apps" is exactly the philosophy I built it with. I invested a lot in the service worker and iOS rendering (smaller canvas pool, DPR capped at 2, periodic engine reset to stay under Jetsam's memory limits), so hearing that the experience holds up in real use is the most valuable feedback I could get. iOS was the hardest platform to optimize, glad to know it was worth it. If you notice anything that doesn't work well on longer documents, let me know.
That really means a lot, ainch. I hope it makes your late-night sessions a little more bearable. If you find anything that doesn't work well with the papers you read, keep me posted.
That's actually exactly where I started. The initial idea involved a YOLO nano model to classify images, deciding what to invert and what not to. It worked as a concept, but during the feasibility analysis I realized that for native PDFs it wasn't necessary: the format already tells you where the images are. I walk the page's operator list via getOperatorList() (PDF.js public API, no fork) and reconstruct the CTM stack (the save, restore, and transform operations) until I hit a paintImageXObject. The current transformation matrix gives me the exact bounds. I copy those pixels from a clean render onto an overlay canvas with no filters, and the images stay intact. It's just arithmetic on transformation matrices; on a typical page it takes a few milliseconds.
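For the curious, the matrix bookkeeping can be sketched like this. PDF.js actually delivers the operator list as parallel `fnArray`/`argsArray` arrays; the simplified operator objects and helper names here are illustrative. Each image XObject is painted into the unit square, so mapping that square through the CTM yields its page-space bounds:

```javascript
// Compose two PDF matrices [a, b, c, d, e, f] (row-vector convention):
// mul(m1, m2) applies m1 first, then m2.
function mul(m1, m2) {
  const [a1, b1, c1, d1, e1, f1] = m1;
  const [a2, b2, c2, d2, e2, f2] = m2;
  return [
    a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
    c1 * a2 + d1 * c2, c1 * b2 + d1 * d2,
    e1 * a2 + f1 * c2 + e2, e1 * b2 + f1 * d2 + f2,
  ];
}

// Walk a simplified operator stream, tracking the CTM stack; on each
// paintImageXObject, map the unit square's corners through the CTM.
function imageBounds(ops, base = [1, 0, 0, 1, 0, 0]) {
  const stack = [];
  let ctm = base;
  const bounds = [];
  for (const op of ops) {
    if (op.fn === 'save') stack.push(ctm);
    else if (op.fn === 'restore') ctm = stack.pop();
    else if (op.fn === 'transform') ctm = mul(op.matrix, ctm); // new matrix applies first
    else if (op.fn === 'paintImageXObject') {
      const pts = [[0, 0], [1, 0], [0, 1], [1, 1]].map(([x, y]) => [
        ctm[0] * x + ctm[2] * y + ctm[4],
        ctm[1] * x + ctm[3] * y + ctm[5],
      ]);
      const xs = pts.map(p => p[0]);
      const ys = pts.map(p => p[1]);
      bounds.push({
        x: Math.min(...xs), y: Math.min(...ys),
        w: Math.max(...xs) - Math.min(...xs),
        h: Math.max(...ys) - Math.min(...ys),
      });
    }
  }
  return bounds;
}

// A 200×150 image placed at (100, 300):
imageBounds([
  { fn: 'save' },
  { fn: 'transform', matrix: [200, 0, 0, 150, 100, 300] },
  { fn: 'paintImageXObject' },
  { fn: 'restore' },
]); // → [{ x: 100, y: 300, w: 200, h: 150 }]
```

Taking the min/max over all four transformed corners keeps the bounds correct even when the CTM includes rotation, not just scale and translation.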
Your approach with a classifier makes a lot more sense for the generic web, where you're dealing with arbitrary <img> tags with no structural metadata, and there you have no choice but to look at what's inside. PDFs are a more favorable problem.
A case where a classifier like yours would be an interesting complement is purely vector diagrams, drawn with PDF path operators, not raster images. Veil inverts those along with the text because from the format's perspective they're indistinguishable. In practice they're rare enough that the per-page toggle handles them, but it's the honest limitation of the approach.
> In practice they're rare enough that the per-page toggle handles them, but it's the honest limitation of the approach.
I don't understand how you handle raster images. You simply cannot invert them blindly. So it sounds like you just bite the bullet of never inverting raster images, and accept that you false-positive some vector-based diagrams? I don't see how that can justify your conclusion "it wasn't necessary". It sounds necessary to me.
Actually, raster images are never inverted; they're protected. The CSS filter: invert() hits the entire canvas (text and images together), then the overlay paints the original image pixels back on top, restoring them. The result: inverted text, images with their original colors.
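A pixel-level toy of that compositing, on a single-channel grayscale buffer (illustrative only; the real thing happens between a CSS-filtered canvas and an unfiltered overlay canvas):

```javascript
// Per-channel inversion, like CSS filter: invert(1).
function invert(pixels) {
  return pixels.map(v => 255 - v);
}

// Invert the whole page, then restore the clean pixels inside each image
// rectangle, so text comes out inverted and images keep their original values.
function composite(clean, width, imageRects) {
  const out = invert(clean);
  for (const { x, y, w, h } of imageRects) {
    for (let row = y; row < y + h; row++) {
      for (let col = x; col < x + w; col++) {
        const i = row * width + col; // grayscale: one channel per pixel
        out[i] = clean[i]; // restore the original image pixel
      }
    }
  }
  return out;
}

// 4×2 page: left half is "text" (gets inverted), right half is an image
// covered by the rect { x: 2, y: 0, w: 2, h: 2 } (stays untouched).
composite(
  [200, 200, 100, 100,
   200, 200, 100, 100],
  4,
  [{ x: 2, y: 0, w: 2, h: 2 }]
); // → [55, 55, 100, 100, 55, 55, 100, 100]
```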
The choice to never invert raster images isn't a compromise; it's the design decision. The problem Veil solves is exactly that: every dark mode reader today inverts everything, and the result on photos, histology, color charts, and scans is unusable. Preserving all images is the conservative choice, and for my target (people reading scientific papers, medical reports, technical manuals) it's the right one.
It's absolutely true that there's a subset of raster images, like diagrams with white backgrounds and black lines, that would benefit from inversion. I could be wrong, but in my experience they're a minority, and the cost of accidentally inverting the wrong one (a medical photo, a color chart) is much higher than the benefit of inverting a black and white diagram, from my point of view. For now the per-page toggle covers those cases.
> It's absolutely true that there's a subset of raster images, like diagrams with white backgrounds and black lines, that would benefit from inversion. I could be wrong, but in my experience they're a minority, and the cost of accidentally inverting the wrong one (a medical photo, a color chart) is much higher than the benefit of inverting a black and white diagram, from my point of view. For now the per-page toggle covers those cases.
OK, so I did understand, but this sounds very hand wavy to me. You say it's a 'minority'; well sure, I never claimed that was >50% of images, so I suppose yes, that's technically true. And it is also true that a false positive on inverting is usually nastier than a false negative, which is why everyone defaults to dimming rather than inverting.
But you don't sound like you have evaluated it very seriously, and at least on my part, when I browse my dark-mode Gwern.net pages, I see lots of images and diagrams which benefit from inverting and where I'm glad we have InvertOrNot.com to rely on (and it's rarely wrong).
It may be nice to be able to advertise "No AI" at the top of the page, but I don't understand why you are so committed to biting this bullet and settling for leaving images badly handled when there is such a simple easy-to-use solution you can outsource to, and there's not a whole lot else a 'dark mode PDF' can do if 'handle images correctly' is now out of scope as acceptable collateral damage and 'meh, the user can just solve it every time they read every affected page by pushing a button'. (If Veil doesn't exist to save the user effort and bad-looking PDFs, why does it exist?)
It's not resistance toward AI. Machine learning isn't among my current skills and I preferred to build with tools I could maintain and debug on my own, but the door isn't closed. Thank you for pushing on this point.
FWIW, we did consider a histogram heuristic, and I believe GreaterWrong still uses one rather than InvertOrNot.com. But I regularly saw images on GW where the heuristic got it wrong but ION got it right, so the accuracy gap was meaningful; and that's why we went for ION rather than port over the histogram heuristic.
Really appreciate this, AbanoubRodolf, thank you. The brightness analysis code and the image bounds are both already in the project; I just never connected the two. The distance between where I am and where you're suggesting I go is really short. Feedback like this is exactly why I posted here. Thanks again.
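For context, a brightness heuristic of the kind being discussed might look like this: treat an image as an inversion candidate when its grayscale histogram is overwhelmingly near-white with few mid-tones, the signature of a black-on-white line diagram. The function name and all thresholds are illustrative, not the project's actual code:

```javascript
// Classify a grayscale pixel array (values 0–255): mostly near-white pixels
// with few mid-tones looks like a line diagram → safe to invert; a broad
// mid-tone distribution looks like a photo → leave it alone.
function shouldInvert(gray, lightFloor = 200, darkCeil = 60,
                      minLightFrac = 0.7, maxMidFrac = 0.1) {
  let light = 0, mid = 0;
  for (const v of gray) {
    if (v >= lightFloor) light++;
    else if (v > darkCeil) mid++; // mid-tones: the photo signature
  }
  return light / gray.length >= minLightFrac && mid / gray.length <= maxMidFrac;
}

// A diagram-like histogram: 80% white, 15% black, 5% mid-tone.
shouldInvert([...Array(80).fill(255), ...Array(15).fill(0), ...Array(5).fill(128)]); // → true
// A photo-like histogram: all mid-tones.
shouldInvert(Array(100).fill(128)); // → false
```

Pairing a check like this with the per-image bounds already computed for the overlay would give per-image decisions rather than per-page ones.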