a rabbit hole, at the end of which is an imgui theme, and me was^H^H^Hspending entirely too much time extracting actual fonts, color codes and other minuscule details.
what's better, i have absolutely no issue with that theme being my new default!
it uses a lightly modified @mori2003/jsimgui[1] and renders to webgpu. i can change that to webgl2 if anyone's browser fails because of that.
the actual fonts appear to be tahoma and verdana. however, my imgui bindings couldn't bake the fonts at a specific size.
what i found interesting is that running msiextract[2] on the above-linked steam.msi revealed a TrackerScheme.res file with exact RGBA colors and layout configuration (borders, scrollbars, etc.) for many widgets.
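if anyone wants to mine the colors themselves: valve's .res scheme files are (to my knowledge) plain-text KeyValues, with colors stored as quoted "R G B A" quads. a quick sketch of pulling them out, assuming that shape (the key names below are made up for illustration, not taken from the actual file):

```python
import re

# hypothetical snippet mimicking the `"name" "R G B A"` entries found in
# KeyValues-style scheme files; these key names are invented for the example
sample = '''
"Colors"
{
    "White"       "255 255 255 255"
    "DullWhite"   "216 222 211 255"
    "Border"      "78 88 72 255"
}
'''

# match a quoted name followed by a quoted quad of 0-255 integers
COLOR_RE = re.compile(r'"([^"]+)"\s+"(\d{1,3}) (\d{1,3}) (\d{1,3}) (\d{1,3})"')

def extract_colors(text):
    """Return {name: (r, g, b, a)} for every RGBA quad in the text."""
    return {m[0]: tuple(map(int, m[1:])) for m in COLOR_RE.findall(text)}

colors = extract_colors(sample)
print(colors["DullWhite"])  # (216, 222, 211, 255)
```

from there it's trivial to translate the quads into imgui style colors.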
there's a lot left in there but i need to climb out of this hole for now. have fun!
keep in mind that people who point out a regression and measure the actual #tok, which costs $money, aren't just "being loud": someone diffed session context usage and found 4.6 burning >7x the amount of context on a task that 4.5 did in under 2 MB.
As a moderately frequent user of Opus, and having spoken to people who use it actively at work for automation, I can say it's a really expensive model to run. I've heard of it burning through a company's weekend credit allocation before Saturday morning. I think using almost an order of magnitude more tokens is a valid consumer concern!
I have yet to hear anyone say "Opus is really good value for money, a real good economic choice for us." It seems we're trying to retrofit every possible task with SOTA AI that still severely lacks solid reasoning and reliability/dependability, so we throw more money at the problem (cough, Opus) in the hopes that it will clear that barrier of trust.
sounds reasonable to me. i've been wondering about encoding detailed AI disclosure in an SBOM.
on a related note: i wish we could agree on rebranding the current LLM-driven never-gonna-AGI generation of "AI" to something else… now i'm thinking of when i read the in-game lore definition for VI (Virtual Intelligence) back when i played mass effect 1 ;)
The CH Mach III looks tight!
I was messing around with a DOS game, Sword of the Samurai, and was wondering what kinds of sticks enthusiasts were using at the time, since that game supported a joystick.
What kinds of things were you playing with it when you purchased it new? I'm in this industry in no small part due to watching my brother play Descent with a joystick in the '90s. I think his was made by the same company: the F-16 Flightstick. I remember it having suction cups on the bottom, though.
New would have been Chuck Yeager's Advanced Flight Trainer and various EPYX sports games (e.g. Winter Games & California Games). A bit later would be Wing Commander and Traffic Department 2192. Even later (besides several Wing Commander sequels and spinoffs): Tyrian, One Must Fall 2097, and Duke Nukem 3D (keyboard & joystick instead of keyboard & mouse).
That works quite well, but I had to repeat it a few months later. I recommend also cleaning the other stick, as the plastic in the DS4 casing can break from opening it too often.
i thought i could enjoy woodworking, but i was wrong.
i attended two 1-day woodworking courses: hand tools and machine tools. i made a nice bench/stool thingy that i still enjoy daily.
however, every time i work with wood, my mind immediately goes to CAD, 3d printing and wishing i had a CNC. i look at my soft baby hands (i can’t go bouldering for shit) and walk to the keyboard, create a parametric model in a notebook (build123d is great!), slice it and send it off to one of my printers…
I'm looking forward to the first native optimized WebGPU implementation of 3DGS rendering. I'm also curious how scene data could be compressed and decompressed efficiently.
I'm also looking forward to it. One of the big challenges is the sorting, for which I'm unaware of a good WebGPU implementation. I have some more notes on this question in a Zulip thread[1].
It needs to be done in the renderer. I think it's doable though, the FidelityFX library looks like it can be ported, it'll just run a bit slow because of the lack of subgroups. This particular library isn't based on a fancy scan implementation, as the state-of-the-art CUDA implementations are. There's a bit more followup in the linked Zulip thread.
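Until a proper GPU sort is available, the fallback is to order splat indices back-to-front by view-space depth on the CPU each frame (assuming back-to-front alpha compositing). A minimal sketch of that ordering, in plain Python for clarity; a real renderer would do this in WGSL with a radix sort over quantized depth keys:

```python
def depth_sort(positions, cam_pos, cam_forward):
    """Return splat indices sorted back-to-front (farthest first),
    using depth along the camera's forward axis."""
    def depth(p):
        # view-space depth = (p - cam_pos) . cam_forward
        return sum((pc - cc) * f for pc, cc, f in zip(p, cam_pos, cam_forward))
    return sorted(range(len(positions)),
                  key=lambda i: depth(positions[i]),
                  reverse=True)

# three splats along +z, camera at the origin looking down +z
splats = [(0, 0, 1.0), (0, 0, 5.0), (0, 0, 3.0)]
order = depth_sort(splats, (0, 0, 0), (0, 0, 1))
print(order)  # [1, 2, 0]: farthest splat drawn first
```

The per-frame re-sort is exactly why a fast GPU implementation matters: for millions of splats, a CPU sort plus index upload becomes the bottleneck.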
Thank you for saying those nice things! I am not really on any social media platforms at the moment. I used to post updates to that blog I linked but it’s been a long time since I could work on that project. Hopefully someday I will have a playable level to release. There is a demo available at the bottom of that page if you are interested. Anyway, thanks!
cd path/to/vits/monotonic_align
mkdir monotonic_align
python setup.py build_ext --inplace
Then back to fairseq:
cd path/to/fairseq
PYTHONPATH=$PYTHONPATH:path/to/vits python examples/mms/tts/infer.py --model-dir checkpoints/eng --wav outputs/eng.wav --txt "As easy as pie"
(Note: on macOS, I had to comment out several .cuda() calls in infer.py to make it work. But then it generates high-quality speech very efficiently. I'm impressed.)
https://archive.org/details/steam_10-08-2004