
Well we had RCS, CVS and Usenet groups.

And before that, compressed archives on BBSes, or type-in listings sent by snail mail.


Khronos is trying to entice scientific folks with ANARI, because, as you mention, there was zero interest in moving away from OpenGL.

https://www.khronos.org/anari/


There is also the issue that it is designed with JavaScript and the browser sandbox in mind, and is thus at the wrong abstraction level for native graphics middleware.

I am still curious how much uptake WebGPU will end up having on Android, or if Java/Kotlin folks will keep targeting OpenGL ES.


You are missing one important detail: an Amiga alongside NewTek's Video Toaster.

https://en.wikipedia.org/wiki/Video_Toaster


24 Amiga 2000s, each with a 68040, 32 MB of RAM and a Video Toaster, managed by a 486 server with 12 GB of storage.

[1] https://www.atarimagazines.com/compute/issue166/68_The_makin...


I just had to add more, because I remember they used DEC Alpha systems at some point.

" Alphas for design stations serving 5 animators and one animation assistant (housekeeping and slate specialist). Most of these stations run Lightwave and a couple add Softimage. VERY plug-in hungry. PVR's on every station, with calibrated component NTSC (darn it, I hates ntsc) right beside.

P6's in quad enclosures for part of the renderstack, and Alphas for the rest, backed up 2x per day to an optical jukebox.

Completed shots output to a DDR post rendering and get integrated into the show.

Shots to composite go to the Macs running After Effects, or the SGI running Flint, depending on the type of comp being done, and then to the DDR (8 minutes capacity on the SGI)."[0]

[0] http://www.midwinter.com/lurk/making/effects.html


You are right, I left this detail out... but it went somewhat together with the Amiga & the LightWave software :))

Interesting read; however, as someone from the same age group as Casey Muratori, this does not make much sense to me.

> The "immediate mode" GUI was conceived by Casey Muratori in a talk over 20 years ago.

Maybe he made it known to people not old enough to have lived through the old days; however, this is how we used to program GUIs on 8- and 16-bit home computers, and it has always been a thing on game consoles.


I think this is the source of the confusion:

> To describe it, I coined the term “Single-path Immediate Mode Graphical User Interface,” borrowing the “immediate mode” term from graphics programming to illustrate the difference in API design from traditional GUI toolkits.

https://caseymuratori.com/blog_0001

Obviously it’s ludicrous to attribute “immediate mode” to him. As you say, it’s literally decades older than that. But it seems like he used immediate mode to build a GUI library and now everybody seems to think he invented immediate mode?


Is Win16/Win32 GDI, which goes back to 1985, an immediate-mode GUI?

Win32 GUI common controls are a pretty thin layer over GDI and you can always take over WM_PAINT and do whatever you like.

If you make your own control you must handle WM_PAINT, which seems pretty immediate to me.

https://learn.microsoft.com/en-us/windows/win32/learnwin32/y...

The difference between a game engine and, say, GDI is just the window buffer invalidation: WM_PAINT is not called for every frame, only when Windows thinks the window's rectangle has changed and needs to be redrawn, independently of the screen refresh rate.
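
Roughly, taking over WM_PAINT in a custom control looks like this (a minimal sketch only; the procedure name is made up, and class registration and the message loop are omitted):

    #include <windows.h>

    // Window procedure for a hypothetical custom control: everything it
    // shows is redrawn from scratch whenever Windows delivers WM_PAINT.
    LRESULT CALLBACK MyControlProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg)
        {
        case WM_PAINT:
        {
            PAINTSTRUCT ps;
            HDC hdc = BeginPaint(hwnd, &ps);   // also validates the dirty region

            RECT rc;
            GetClientRect(hwnd, &rc);
            FillRect(hdc, &rc, (HBRUSH)(COLOR_WINDOW + 1));       // clear background
            DrawTextA(hdc, "Hello, GDI", -1, &rc,
                      DT_CENTER | DT_VCENTER | DT_SINGLELINE);    // draw current state

            EndPaint(hwnd, &ps);
            return 0;
        }
        default:
            return DefWindowProc(hwnd, msg, wp, lp);
        }
    }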

I guess I think of retained vs. immediate at the graphics library/driver level, because that allows the GPU to take over more, storing the objects in VRAM and redrawing them. At the GUI level that's just a user-space abstraction over the rendering engine, but the line is blurry.


No, that is event-based programming, and also the basis of retained rendering, because you already have the controls that you compose or subclass.

Handling WM_PAINT is no different from something like OnPaint() on a base class.

This was actually one of the mindset shifts when moving from MS-DOS to Windows graphics programming.


Event-based vs. loop-based is separate from retained vs. immediate.

The canvas API in the browser is immediate mode, driven by events such as requestAnimationFrame.

If you do not draw in WM_PAINT it will not redraw any state on its own within your control.

GDI is most certainly an immediate-mode API, and if you have been around long enough to remember DOS, you would remember using WM_PAINT to write a game-loop renderer before Direct2D existed on Windows. Remember BitBlt for off-screen rendering with GDI in WM_PAINT?

https://learn.microsoft.com/en-us/windows/win32/direct2d/com...
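
For anyone who has forgotten it, the pattern looked roughly like this (a sketch only: error handling is omitted and the ellipse is a stand-in for a real scene):

    #include <windows.h>

    // Classic GDI double buffering inside WM_PAINT: render the whole frame
    // into a memory DC, then BitBlt the finished image to the window.
    void PaintFrame(HWND hwnd)
    {
        PAINTSTRUCT ps;
        HDC screen = BeginPaint(hwnd, &ps);

        RECT rc;
        GetClientRect(hwnd, &rc);
        int w = rc.right - rc.left;
        int h = rc.bottom - rc.top;

        // Off-screen surface compatible with the window's DC.
        HDC back = CreateCompatibleDC(screen);
        HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
        HBITMAP old = (HBITMAP)SelectObject(back, bmp);

        // Re-render the whole frame every time, game-loop style.
        FillRect(back, &rc, (HBRUSH)GetStockObject(BLACK_BRUSH));
        Ellipse(back, 10, 10, 110, 110);

        // Copy the finished frame to the screen in one blit.
        BitBlt(screen, 0, 0, w, h, back, 0, 0, SRCCOPY);

        SelectObject(back, old);
        DeleteObject(bmp);
        DeleteDC(back);
        EndPaint(hwnd, &ps);
    }

Driving it at a steady frame rate was then typically just a matter of calling InvalidateRect on a timer so that WM_PAINT kept arriving.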


It's like the common claim that data-oriented programming came out of game development. It's ahistorical, but a common belief. People can't see past their heroes (Casey Muratori, Jonathan Blow) or the past decade or two of work.

I partly agree, but I think you're overcorrecting. Game developers didn't invent data-oriented design or performance-first thinking. But there's a reason the loudest voices advocating for them in the 2020s come from games: we work in one of the few domains where you literally cannot ship if you ignore cache lines and data layout. Our users notice a 5 ms frame hitch, while web developers can add another React wrapper and still ship.

Computing left game development behind. Whilst the rest of the industry built shared abstractions, we worked in isolation with closed tooling. We stayed close to the metal because there was nothing else.

When Casey and Jon advocate for these principles, they're reintroducing ideas the broader industry genuinely forgot, because for two decades those ideas weren't economically necessary elsewhere. We didn't preserve sacred knowledge. We just never had the luxury of forgetting performance mattered, whilst the rest of computing spent 20 years learning it didn't.
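
For readers outside games, a toy illustration of the cache-line and data-layout point (hypothetical particle fields, not anyone's real engine code):

    #include <cstddef>
    #include <vector>

    // Array-of-structures: updating positions drags colours and material
    // IDs through the cache even though the loop never touches them.
    struct ParticleAoS {
        float x, y, z;
        float vx, vy, vz;
        float colour[4];
        int   material;
    };

    void update_aos(std::vector<ParticleAoS>& ps, float dt) {
        for (auto& p : ps) {
            p.x += p.vx * dt;
            p.y += p.vy * dt;
            p.z += p.vz * dt;
        }
    }

    // Structure-of-arrays: positions and velocities are packed contiguously,
    // so the same loop streams through exactly the bytes it needs.
    struct ParticlesSoA {
        std::vector<float> x, y, z;
        std::vector<float> vx, vy, vz;
        // colours, materials, ... live in their own arrays, untouched here
    };

    void update_soa(ParticlesSoA& ps, float dt) {
        for (std::size_t i = 0; i < ps.x.size(); ++i) {
            ps.x[i] += ps.vx[i] * dt;
            ps.y[i] += ps.vy[i] * dt;
            ps.z[i] += ps.vz[i] * dt;
        }
    }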


> I think you're overcorrecting.

I don't understand this part of your comment; it seems like you're replying to some other comment, or to something that isn't in mine. How am I overcorrecting? A statement of fact, that game developers didn't invent these things even though that's a common belief, is not an overcorrection. It's just a correction.


Ah, I read your comment as "game devs get too much credit for this stuff and people are glorifying Casey and Jon" and ran with that, but you were just correcting the historical record.

My bad. I think we're aligned on the history; I was making a point about why they're prominent advocates today (and why people are attributing invention to them) even though they didn't invent the concepts.


I don't really like this line of discourse, because few domains are as ignorant of computing advances as game development. Which makes sense: they have real deadlines and different goals. But I often roll my eyes at some of the conference talks and Twitter flame wars that come from game devs, because the rest of computing has more money resting on performance than most game companies will ever make in sales. Not to mention, we have to design things that don't crash.

It seems like much of the shade is tossed at web front-end, as if it were the only other domain of computing besides game dev.


I mean... fair point? I'm not claiming games are uniquely performance-critical.

You're right that HFT, large-scale backend, and real-time systems care deeply about performance, often with far more money at stake.

But those domains are rare. The vast majority of software development today can genuinely throw hardware or money at problems (even HFT and large backend systems). Backends are usually designed to scale horizontally, data science rents bigger GPUs, embedded gets more powerful SoCs every year. Most developers never have to think about cache lines because their users have fast machines and tolerant expectations.

Games are one of the few consumer-facing domains that can't do this. We can't mandate hardware (attempts at doing so cost sales and attract community disgust), we can't hide latency behind async, and our users immediately notice a 5 ms hitch. That creates different pressures: we're optimising for the worst case on hardware we don't control, whilst most of the industry optimises for the common case on hardware they choose.

You're absolutely right that we're often ignorant of advances elsewhere. But the economic constraint is real, and it's increasingly unusual.


I think we as software developers are resting on the shoulders of giants. It's amazing how fast and economical stuff like Redis, nginx, memcached and other 'old' software is: written decades ago, mostly in C, by people who really understood what made it run fast (in a slightly different way to games, less about caches and data layout, and more about how the OS handles low-level primitives).

A browser like Chrome also rests on a rendering engine like Skia, which has been optimized to the gills, so performance can at least theoretically be fast.

Then one tries to host static files on an Express web server, and is surprised to find that a powerful computer can only serve files at 40 MB/s with the CPU at 100%.

I would like to think that a 'Faustian deal' in terms of performance exists: you give up 10, 50 or 90% of your performance in exchange for convenience.

But unfortunately experience shows there's no such thing: arbitrarily powerful hardware can be arbitrarily slow.

And as you contrast gamedev with other domains that get to hide latency, I don't think it's OK that a simple 3-column gallery page takes more than 1 second to load; people merely tolerate this, they don't enjoy it.

And ironically, I find that a lot of folks end up spending more effort optimizing their React layouts than it would have cost to render naively with a more efficient toolkit.

I am also not sure what advances game dev is missing out on; I guess devs are somewhat more reluctant to write awful code in the name of performance nowadays, but I'd love to hear what gamedev could learn from the broader software world.

The TL;DR version of what I wanted to say is that I wish there were a linear performance-convenience scale, where we could pick a certain point and use techniques conforming to it, trading two-thirds of the max speed for developer experience, knowing our performance targets allow for that.

But unfortunately that's not how it works: if you choose convenience over performance, your code is going to be slow enough that users will complain, no matter what hardware you have.


It clearly didn't come out of game dev. Many people doing high-performance work on either embedded or "big silicon" (amd64) in that era were fully aware of the importance of locality, branch prediction, etc.

But game dev, in particular Mike Acton, did an amazing job of making it more broadly known. His CppCon talk from 2014 [0] is IMO one of the most digestible ways to start thinking about performance in high throughput systems.

In terms of heroes, I’d place Mike Acton, Fabian Giesen [1], and Bruce Dawson [2] at the top of the list. All solid performance-oriented people who’ve taken real time to explain how they think and how you can think that way as well.

I miss being able to listen in on gamedev Twitter circa 2013 before all hell broke loose.

[0] https://youtu.be/rX0ItVEVjHc?si=v8QJfAl9dPjeL6BI

[1] https://fgiesen.wordpress.com/

[2] https://randomascii.wordpress.com/


There are also good reasons that immediate-mode GUIs are largely only ever used by games: they are absolutely terrible for regular UI needs. Since Rust gaming is still largely non-existent, it's hardly surprising that things like 'egui' are similarly struggling. That isn't (or shouldn't be) any reflection on whether or not Rust GUIs as a whole are struggling.

Unless the Rust ecosystem made the easily predicted terrible choice of rallying behind immediate mode GUIs for generic UIs...


>Unless the Rust ecosystem made the easily predicted terrible choice of rallying behind immediate mode GUIs for generic UIs...

That's exactly what they did :D


They didn't. The biggest Rust GUI library by popularity is Dioxus.

I mean, fair enough, but [at least] Wikipedia agrees with that take.

> Graphical user interfaces traditionally use retained mode-style API design,[2][5] but immediate mode GUIs instead use an immediate mode-style API design, in which user code directly specifies the GUI elements to draw in the user input loop. For example, rather than having a CreateButton() function that a user would call once to instantiate a button, an immediate-mode GUI API may have a DoButton() function which should be called whenever the button should be on screen.[6][5] The technique was developed by Casey Muratori in 2002.[6][5] Prominent implementations include Omar Cornut's Dear ImGui[7] in C++, Nic Barker's Clay[8][9] in C and Micha Mettke's Nuklear[10] in C.

https://en.wikipedia.org/wiki/Immediate_mode_(computer_graph...
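
For context, the DoButton() style described there looks roughly like this (a toy sketch; every name is hypothetical, not Dear ImGui's or any other real library's API):

    // Per-frame input snapshot the immediate-mode calls read from.
    struct UIContext {
        int  mouseX = 0, mouseY = 0;
        bool mouseClicked = false;
    };

    // Called every frame: draws the button *and* reports whether it was
    // clicked. No button object is created or retained between frames.
    bool DoButton(UIContext& ui, int x, int y, int w, int h, const char* label)
    {
        bool hovered = ui.mouseX >= x && ui.mouseX < x + w &&
                       ui.mouseY >= y && ui.mouseY < y + h;
        (void)label;   // rendering of the rect and label elided in this sketch
        return hovered && ui.mouseClicked;
    }

    void BuildUI(UIContext& ui)
    {
        // The UI is just this code path, re-run every frame.
        if (DoButton(ui, 10, 10, 120, 30, "Save")) {
            // SaveDocument();
        }
    }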

[Edit: I'll add an update to the post to note that Casey Muratori simply “coined the term” but that it predates his video.]


Dig out any source code for Atari, Spectrum or Commodore 64 games, written in Assembly, or early PC games, for example.

And you will see which information is more accurate.


Yeah no doubt you're correct. I wasn't disagreeing - just establishing the reasonableness of my original statement. I must have read it in the Dear ImGui docs somewhere.

I am pretty sure there are people here qualified enough to edit that Wikipedia page in a proper way.

Wikipedia clearly has never been shown to have faults regarding accuracy.

{{cn}}

> Maybe he might have made it known to people

Yes, he coined the term rather than inventing the technique.


He definitely did not name it. IRIS GL was termed “immediate mode” back in the 80’s.

He coined the term in the context of UI, by borrowing the existing term that was already used in graphics. Drawing that parallel was the point.

It might be more accurate to say that he repopularized the term among a new generation of developers. Immediate vs Retained mode UI was just as much a thing in early GUIs.

It was a swinging pendulum. At first everything was immediate mode because video RAM was very scarce: initially there was only enough VRAM for the frame buffer, and hardly any system RAM to spare. But once both categories of RAM started growing, there was a movement to switch to retained-mode UI frameworks. It wasn't until the early '00s that GPUs and SIMD extensions tipped the scales in the other direction: it was faster to just re-render as needed rather than track all these cached UI buffers, and it allowed for dynamic UI motifs "for free."

My graying beard is showing, though, as I did some game dev in the late '90s on 3Dfx hardware, and learned UI programming on Win95 and System 7.6. Get off my lawn.


I can't be bothered to go hunting for digital copies of 1980s game development books, but I have my doubts about that.

It is more like:

Hardware first, software for iDevices second, macOS when time is available.


Which is one of the reasons I keep being a Windows/UNIX/Linux person, and only use Apple hardware when it gets assigned to me for specific project deliveries.

The stuff with Objective-C and Swift is cool, but not enough to justify fully migrating into Apple land.


Tiling window managers used to be a thing in the old days; they predate the invention of overlapping windows. There is a reason only a minority reaches out to them nowadays.

Tiling is no better or worse than floating. There are many users who would benefit from it (people who typically keep all their windows maximized) but have had literally zero exposure to it due to MacOS and Windows.

Complaints about the lack of window snapping (a loose copy of tiling) in MacOS vs. Windows are consistent across the internet. If MacOS and Windows had native tiling support, you'd see a fight fiercer than tabs vs. spaces.

The reason floating windows are used is because "that's the way it is done." Windows 95 wowed the world and established the status quo.

Not to mention the direction that the likes of Paper and Niri are going: these are things that very few users get to experience, and therefore they couldn't possibly make an informed decision on what they prefer.


> Not to mention the direction that the likes of Paper and Niri are going: these are things that very few users get to experience, and therefore they couldn't possibly make an informed decision on what they prefer.

niri is great because it gives you the best of all worlds.

Scrolling by default but you can easily float and tile things as needed. It feels so intuitive for how I use computers.

I've created a few posts and videos on using niri while going over my workflows in https://nickjanetakis.com/blog/how-is-niri-this-good-live-de... and https://nickjanetakis.com/blog/day-to-day-window-management-....

Having used Windows for 25 years, I can say there's no chance I'll ever go back. This environment is already substantially better, and that's after tricking Windows out with virtual desktops, global hotkeys, window positioning tools, launchers, multiple clipboards, a heavily WSL 2 driven workflow, etc.

I tried to switch a few times over the last decade but was always blocked by hardware issues on this machine; those blockers are gone now.


Windows does tiling just fine; it even has layout suggestions.

Yes, which is why people complain about MacOS vs Windows. People wouldn't complain about the lack of quasi-tiling in MacOS if they didn't care about it (which is the gist of your gp comment). The only reason they have experienced it is because Windows has quasi-tiling.

Not in a GUI, though. Sun Windows was overlapping, GEM was overlapping, and so has been almost everything else since then.

I'm on a 5120x2160 monitor and tiling is super perfect.

Can't recommend it enough.


There were others out there, e.g. Oberon, Lilith,...

That's where I learned the power of tiling. Years of using the Acme text editor.

That's also when you learn that drop-down menus are not needed either.


Yup, but "normies" do need menus or at least some way to do things that has some degree of visual affordance (e.g. a persistent cmd/ctrl+p, which I think Office has/had).

Not everyone can drive a Ferrari.

That reason being that only a minority of people reach out to anything instead of just using what they're given. Compounded by baby duck syndrome, of course.

Depends on the window manager.

This is the newer generations rediscovering why various flavours of shareware and trial demos have existed since the 1980s, even though sharing code under various licenses is almost as old as computing.
