
> human bodies without a brain for testing

I think the way a drug impacts the brain is kind of important.


The only question I have is whether the speed it was going was appropriate for the situation, and whether we’d expect a human to be considered reckless under the same circumstances. 17 mph sounds pretty slow, but it really depends on context.

Who? Where? I’m blissfully unplugged from this bubble (despite lurking on HN). Show me a usable browser that was “written with AI”. Such claims are likely PR.

It remains an open challenge. I've seen three credible initial attempts so far, though none of them have got further than being able to render static HTML and CSS:

- https://github.com/hiwavebrowser/hiwave

- https://github.com/wilsonzlin/fastrender

- https://github.com/embedding-shapes/one-agent-one-browser

I've started tracking them on this tag on my blog: https://simonwillison.net/tags/browser-challenge/


> though none of them have got further than being able to render static HTML and CSS

Ehh, really pushing my buttons to continue now, huh? I have other things to do! *twitching*

Guess I'll start sketching out some lame JS engine from scratch too...


> Less than five minutes to read.

OMZ is still easier to set up consistently. That’s why we use it.

If the concern is the bloat of OMZ, then why not make FMZ ("fast my zsh"), something that is just as quick to set up and doesn’t add “bloat”?


I won't spoil the article, but if you read to the end you'll find out what this "FMZ" project is called.


Can you spoil it for me? I read it to the end and saw no mention of such a project, unless you are referring to the DIY approach the article suggests.


Zim (zimfw) was mentioned somewhere else in the comment thread, and after reading its website it sounds pretty much like that.


The issue is with font rendering software not properly accounting for the subpixel arrangement of the display. I guess it’s a valid concern from a simple pragmatic perspective as a consumer, but as an enthusiast it’s not strictly a problem with OLED. Surely there’s some work out there that tries to improve font rendering on nonuniform subpixel layouts, right?


(Author here.)

The other issue is that it's not just a text problem. It affects any high-contrast color edges, especially directly vertical or horizontal ones. So subpixel rendering tweaks for text rendering (e.g. ClearType) don't solve the whole problem.


> Surely there’s some work out there that tries to improve font rendering on nonuniform subpixel layouts, right?

Only Microsoft can fix it, and as far as I know, they don't seem interested.


Others can fix it too, and they have. See:

https://github.com/snowie2000/mactype

https://github.com/CoolOppo/GDI-PlusPlus

I use MacType and it works really well. You can tune many more things than with ClearType.


Same here, running MacType on Windows with a 65" LG C5 4K OLED TV at 100% scaling as my main display for all kinds of stuff, including coding. But I must admit that fonts on Linux look noticeably better out of the box, and MacType on Windows does not apply to all applications. E.g. for LibreOffice I had to change the rendering engine (disable Skia) under options, and on PDFGear MacType does not apply at all.

Anyway, OLED is great; I'm sitting two arm lengths away from the panel.

People complaining are probably Gen Z folks who never sat in front of an ol' CRT in the '90s and are spoiled by smartphones running 4K on minuscule 7" displays at 460 ppi.


I've tried MacType before but sadly it came with significant slow down in many applications, lists would lag while scrolling, etc.

It's really annoying because all I really want is to disable ClearType on my primary high DPI monitor while keeping it with default settings for my two side monitors, but Windows does not let you configure it per monitor.


I guess we need a font rendering engine for arbitrary subpixel layout...
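
To sketch what that could look like: if the renderer knows where each subpixel actually sits, it can sample glyph coverage at those positions instead of assuming an RGB stripe. Here's a minimal, illustrative Python sketch; the layout table and box filter below are assumptions made up for the example, not how ClearType, MacType, or any shipping renderer actually works.

    # Illustrative only: per-channel filtering of a horizontally 3x-oversampled
    # glyph coverage row, given arbitrary horizontal subpixel center offsets
    # (in fractions of a pixel). The offsets are made up for the example; a real
    # renderer would need the panel's actual subpixel geometry.
    SUBPIXEL_OFFSETS = {
        "rgb_stripe": {"r": -1 / 3, "g": 0.0, "b": 1 / 3},    # classic LCD stripe
        "oled_example": {"r": -1 / 4, "g": 1 / 4, "b": 0.0},  # hypothetical layout
    }

    def filter_row(coverage, layout, oversample=3):
        """coverage: floats in [0, 1], `oversample` samples per output pixel."""
        offsets = SUBPIXEL_OFFSETS[layout]
        out = []
        for px in range(len(coverage) // oversample):
            pixel = {}
            for channel, offset in offsets.items():
                # Sample coverage centered on the subpixel's true position,
                # with a small box filter to limit color fringing.
                center = (px + 0.5 + offset) * oversample
                lo = max(int(center - oversample / 2), 0)
                hi = min(int(center + oversample / 2), len(coverage) - 1)
                window = coverage[lo:hi + 1]
                pixel[channel] = sum(window) / len(window) if window else 0.0
            out.append(pixel)
        return out

    # Usage: filter_row([0.0] * 6 + [1.0] * 6, "oled_example") yields a dict of
    # per-channel intensities for each of the four output pixels.

The hard part in practice isn't this math; it's that the OS compositor and every application's text stack would all need to know the layout, which is why a per-app hack like MacType only gets you so far.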


Keep it in the ground.


What even was this email? Some kind of promotional spam, I assume, targeting senior+ engineers on some mailing list in the hope of flattering them into trying out their SaaS?


The AI village was given the goal of spreading acts of kindness:

https://theaidigest.org/village/goal/do-random-acts-kindness


Sounds like a different take on this Onion headline: https://theonion.com/cia-realizes-its-been-using-black-highl...


Programming in general is about converting something you understand into something a computer understands, and making sure the computer can execute it fast enough.

This is already hard enough as it is, but GPU programming (at least in its current state) is an order of magnitude worse in my experience. Tons of ways to get tripped up, endless trivial/arbitrary things you need to know or do, a seemingly bottomless pit of abstraction that contains countless bugs or performance pitfalls, hardware disparity, software/platform disparity, etc. Oh right, and a near complete lack of tooling for debugging. What little tooling there is only ever works on one GPU backend, or one OS, or one software stack.

I'm by no means an expert, but I feel our GPU programming "developer experience" standards are woefully out of touch, and the community seems happy to keep it that way.


OpenGL and pre-12 DirectX were attempts at unifying GPU programming in an abstract way. It turned out that trying to abstract away what the low-level hardware was doing was more harmful than beneficial.


> It turned out that trying to abstract away what the low-level hardware was doing was more harmful than beneficial.

Abstraction isn't inherently problematic, but the _wrong_ abstraction is. Just because abstraction is hard to do well doesn't mean we shouldn't try. Just because abstraction gets in the way of certain applications doesn't mean it's not useful in others.

Not to say nobody is trying, but there’s a bit of a catch-22 where those most qualified to do something about it don’t see a problem with the status quo. This sort of thing happens in many technical fields, but I just have to pick on GPU programming because I’ve felt this pain for decades now and it hasn’t really budged.

Part of the problem is probably that the applications for GPUs have broadened and changed dramatically in the last decade or so, so it’s understandable that this moves slowly. I just want more people on the inside to acknowledge the problem.


> As far as I know, it is the only "social network" that allows you to grow intellectually through participation. This is probably the highest compliment an internet platform can receive in 2025.

Eh. It’s garbage in, garbage out, mostly like any other platform. It’s still easy to degrade the site if the users are determined enough.

How you choose to use it dictates your takeaway more than on most social media platforms, I suppose, which is actually the best thing about it IMO. That much is worth contrasting with the other options out there, no question.

