This is a fantastic discovery! Displaying azimuth in my ascii-side-of-the-moon [0] sounds useful, but then I would need to explain the symbol. I am displaying altitude/elevation below the horizon, but there doesn't appear to be a standard symbol for it; I checked the tables linked from the article and couldn't find one there either.
Maybe this is the opportunity to invent and suggest a symbol for Altitude?
Yes, the angle above the horizon is usually what is most useful, because it is used to find something small but visible. In the case of my ascii moon, the angle below the horizon is there to explain why something is not visible. The Moon is large enough that people can easily find it on their own if it is not obstructed by the Earth itself.
Consider the Moon as viewed from NYC at the time of this comment [0]: it is hiding below the horizon. If you were to look at my website and then at the sky, you might become upset that I am reporting the shape of the Moon when it obviously can't be seen. That is why the website reports the angle below the horizon for the roughly half the time the Moon isn't visible.
Adding azimuth and elevation when the Moon is above the horizon would be for completionism only, and not for the real enterprise use-cases served by ANSI-compliant renderings of the Moon.
Shouldn't it be the same symbol but turned 90 degrees? Seems to mimic the sextant operation if so. I've always used some set of greek symbols (theta, phi, maybe psi) for these kinds of angles.
Great work! While I was building ascii-side-of-the-moon [0][1] I briefly considered writing my own ascii renderer to capture differences in shade and shape of the Lunar Maria[2] better. Ended up just using chafa [3] with the hope of coming back to ascii rendering after everything is working end to end.
Are you planning to release this as a library or a tool, or should we just take the relevant MIT licensed code from your website [4]?
No plans to build a library right now, but who knows. Feel free to grab what you need from the website's code!
If I were to build a library, I'd probably convert the shaders from WebGL 2 to WebGL 1 for better browser compatibility. Would also need to figure out a good API for the library.
One thing that a library would need to deal with is that the shape vector depends on the font family, so the user of the library would need to precompute the shape vectors with the input font family. The sampling circles, internal and external, would likely need to be positioned differently for different font families. It's not obvious to me how a user of the library would go about that. There'd probably need to be some tool for that (I have a script to generate the shape vectors with a hardcoded link to a font in the website repository).
I have a 24GB M5 MacBook Pro. In ComfyUI, using the default z-image workflow, generating a single image just took me 399 seconds, during which the computer froze and my AirPods lost audio.
On replicate.com a single image takes 1.5s at a price of 1000 images per $1. Would be interesting to see how quick it is on ComfyUI Cloud.
Overall, running generative models locally on Macs seems like a very poor time investment.
If you want to get a single entry point into your repo's tasks, also consider my tool: dela [0]. It scans a variety of task definition files like pyproject.toml, package.json, Makefile, etc. and makes them available on the CLI via the bare name of the task. It has been very convenient for me so far across diverse repos, and the best part is that I didn't have to convince anyone else working on the repos to adjust their structure.
Dela doesn't currently support mise as a source of tasks, but I will happily implement it if there is demand. As of [1], I saw mise used in 94 out of the 100,000 most starred GitHub repos.
Thank you for allowing this moment of self promotion.
Sounds great but does it support listing all tasks?
Whenever I enter a repository for a node project the first thing I do is "npm run" to list the scripts. When I enter a repository with a Makefile I look at it. If I see make targets where both the target and dependencies are variables I exit the repository again real quick though.
The view warrant canaries [0] link at the bottom of the page goes to a Cloudflare 502 page. Bitrot is indistinguishable from a subpoena, but neither is a good indicator.
I have been using SVGs for charts on my blog for a couple of months [0] now. I'm satisfied with using SVGs, but in all honesty, I don't think anyone else cares. For completeness, the benefits are below:
* The charts are never blurry
* The text in the chart is selectable and searchable
* The file size can be small compared to PNGs
* The charts can use a font set by a stylesheet
* The charts can have a builtin dark mode (not demonstrated on my blog)
Additionally, as the OP has shown, the text in SVG is indexed by Google, but it comes up in the image section [1].
The downside was hours of fiddling with system fonts, webfonts, and font settings in matplotlib. Also, the sizing of the text in the chart and how it is displayed on your page are tightly coupled, which requires some forethought.
That's totally correct! I once replaced some blurry scans from the 6502 manual with SVG versions, and, while I was at it, I coded them by hand (really, because for this particular job it seemed easier than doing it in a drawing program). While nobody will notice, it's satisfying.
Nobody will notice, because that's how it should be... personally I often notice when it's bad: blurry plots, JPEG noise that should not be there, and so on, and think "oh no, another one who has no idea how to do images properly..."
90% of your users will benefit from it without even realising. 9.9% will silently appreciate it. If you're lucky, the remaining 0.1% will tell you they appreciate it!
Another thing to watch out for with SVGs is how they appear in RSS readers or browser reader views. If you're using external SVG files then it should be fine. If you've optimized them by embedding into the HTML, then you need to be careful. If they rely on CSS rules in the page's CSS then it's not going to work well. For my website I try to make the SVGs self-sufficient by setting the viewBox, width, and height attributes, using a web safe font, and only relying on internal styles. You can still get some measure of light/dark mode support by setting fill or stroke to currentColor.
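A minimal sketch of such a self-sufficient inline SVG (the dimensions, font stack, and label are just illustrative, not taken from the actual site):

```html
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 400 200"
     width="400" height="200">
  <!-- currentColor inherits the surrounding text colour, so the chart
       follows the page's light/dark mode without any external CSS. -->
  <line x1="0" y1="180" x2="400" y2="180" stroke="currentColor"/>
  <text x="10" y="20" font-family="Georgia, serif" fill="currentColor">
    Example axis label
  </text>
</svg>
```

Because everything it needs (viewBox, explicit size, a web safe font, currentColor) travels with the markup, the same fragment renders sensibly inline, in an RSS reader, or in a browser reader view.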
My advice, for web pages: always specify the <svg> width and height attributes, or the width and height properties in a style attribute, because if non-inline CSS doesn’t load (more common, for various reasons, than most people realise), the SVG will fill the available width. And you probably don’t want your 24×24 icon rendered at 1900×1900.
(For web apps, I feel I can soften a bit, as you’re more likely to be able to rely on styles. But I would still suggest having appropriate width/height attributes, even if you promptly override them by CSS.)
To complete the test, the website needs an HTML page that is mostly SVG. I think that might stand a chance of getting into the main search results rather than just the image search.
Also of interest for me would be whether SVG description markup gets picked up in the index.
To cover all the possibilities, having the SVG generated by JavaScript on page load would also be of interest, for example with some JSON object of data that then gets parsed to plot the SVG images.
Your SVG graphs are very neat, and nobody caring is a feature, not a bug. If they were blurry PNGs then people might notice, but nobody notices 'perfection', just defects.
I noticed you were using 'NASA numbers' in your SVGs. Six decimal places for each point on a path is a level of precision that you can cut down with SVGOMG or with the export features of Inkscape. I like to go for integers when possible in SVG.
The thing with SVG is that the levels of optimisation go on forever. For example, I would set the viewBox coordinates so that (0, 0) is where the graph starts. Nobody would ever notice or care about that, but it would be something I would have to do.
Oh man, this is a deep mine to dig. I haven't even thought about svg size optimization. The default blog template I used really wants me to use hero images, and the jpgs are already hefty. I just looked at my network panel, and it seems the font files are loaded once per svg on initial load and then are cached.
What is the motivation for the viewBox coordinates starting at (0,0)? I have been thinking about setting chart gutters so that the graph is left-aligned with the text, but this seems like an orthogonal issue.
Rather than use Matplotlib to create your bar charts, you could do something like this.
Here I am assuming you don't want standalone images that others can steal but you do want maximal SVG coolness.
Move the origin to 0,0 with viewBox voodoo witchcraft.
Add a stylesheet in your HTML just for your SVG wizardry.
Create some CSS custom properties scoped to SVG for your colours, for example svg { --claude-code: red; --cursor: orange; --github-copilot: yellow; } and so on.
Put them in the stylesheet, and add some styles, for example .claude-code line { stroke: var(--claude-code); } and so on.
Rather than use paths in groups with clip paths and whatnot, just use a series of lines, made nice and fat. Lines have two points, and, since the viewBox is zeroed out to the origin, you only need to specify the y2 value, with y1, x1 and x2 taking their defaults of zero. The y2 value could be whatever suits: the actual value divided by 1000, 10000 or something.
Put each line in a group with the group having a class, for example claude-code.
Add the label to the group with its own transform to rotate the text 45 degrees.
Add a transform to the group to move the fat line and its label along the y axis using a translate.
Rinse and repeat for all entries on the graph.
Now do some labels for the other axis.
As for the title of the graph, move that out of the SVG file. Put the SVG file in a figure element and put the title in a figcaption element. Add CSS for the figcaptions.
With SVG in HTML there is no need to do xlink and version things, just keep it simple, with just the viewBox and no width/height. Scale your figures in CSS with the SVG set to fill the space of the figure, so we are going full width.
You can also use some title elements for mouseovers, so, hover over a bar and you get the actual data number.
Why do it this way?
Say you don't like the colours or you want to implement dark mode. You can do the usual prefers media query stuff and set your colours accordingly, for all the graphs, so they are all consistent.
Same goes with the fonts, you want all that in the stylesheet rather than baked into every SVG, so you can update them all with one master change.
As for the last graph with lots of squares, those squares are 'rect' not path, for maximum readability. The rectangles can be put in a defs container as symbols, so you have veryLightBlueSquare, lightBlueSquare, blueSquare and so on. Then, with your text, you can put each value in a group that contains a text node and a use tag to pull in the relevant colour square.
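A rough sketch of this scheme, with everything hypothetical: the class names, colours and values are illustrative, and the bars here are separated with a translate along x, which may differ from the exact layout you have in mind:

```html
<style>
  /* Colour tokens scoped to SVG, shared by every chart on the page. */
  svg { --claude-code: red; --cursor: orange; }
  svg line { stroke-width: 12; }
  .claude-code line { stroke: var(--claude-code); }
  .cursor line { stroke: var(--cursor); }
</style>

<figure>
  <svg viewBox="0 0 120 50">
    <!-- x1, y1 and x2 default to 0, so each bar only needs y2. -->
    <g class="claude-code" transform="translate(20 5)">
      <line y2="40"/>
      <title>40</title> <!-- hover tooltip with the actual value -->
      <text transform="rotate(45)" y="-2">claude-code</text>
    </g>
    <g class="cursor" transform="translate(50 5)">
      <line y2="25"/>
      <title>25</title>
      <text transform="rotate(45)" y="-2">cursor</text>
    </g>
  </svg>
  <figcaption>Example chart title lives here, not in the SVG</figcaption>
</figure>
```

Since all the colour and stroke rules live in the page stylesheet, a prefers-color-scheme media query there restyles every chart at once.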
> Also the sizing of the text in the chart and how it is displayed in your page is tightly coupled and requires some forethought.
I used to make a lot of charts with R/ggplot, and the big disadvantage is, as you mentioned, the sizing of elements, especially text. So I wrote a small function that would output the chart at different sizes and a tiny bit of JS to switch between them at different breakpoints. It worked pretty well, I think; the text was legible on all devices, though I still had to check that everything looked fine and elements weren't suddenly overlapping or anything.
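If the pre-rendered sizes are standalone image files, a similar breakpoint switch can also be done declaratively with the picture element instead of JS (the file names and breakpoint here are placeholders):

```html
<picture>
  <!-- The browser picks the first source whose media query matches,
       and falls back to the img below otherwise. -->
  <source media="(max-width: 600px)" srcset="chart-small.svg">
  <img src="chart-large.svg" alt="Chart description">
</picture>
```

The trade-off is that each size is a separate request, whereas the JS approach can swap between already-inlined SVGs.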
Another advantage of SVGs is that they can have some interactivity. You can add tooltips, hovers, animation and more. I used ggiraph for that: https://ardata.fr/ggiraph-book/intro.html
It does come up in normal results for me; I don't need to go to the images section. The page has the testing keyword lmtbk4mh, see the result at https://www.google.com/search?q=lmtbk4mh
I have been excited about bun for about a year, and I thought that 2025 was going to be its breakout year. It is really surprising to me that it is not more popular. I scanned the top 100k repos on GitHub, and for new repos in 2025, npm is 35 times more popular and pnpm is 11 times more popular than bun [0][1]. The other up-and-coming JavaScript runtime, Deno, is not so popular either.
I wonder why that is? Is it because it is a runtime, and getting compatibility there is harder than just for a straight package manager?
Can someone who tried bun and didn't adopt it personally or at work chime in and say why?
It’s a newer, VC-funded competitor to the open source, battle-tested, dominant player. It has incentives to lock you in and ultimately is just not that different from Node. There’s basically no strategic advantage to using Bun; it doesn’t really enable anything you can’t do with Node. I have not seen anyone serious choose it yet, but I’ve seen plenty of unserious people use it
I think that summarizes it well. It's not the 10x improvement that would make the risky bet of vendor lock-in with a VC-backed company worth it. Same issue with Prisma and Next for me.
Considering how many people rely on a tailwind watcher to be running on all of their CSS updates, you may find that bun is used daily by millions.
We use Bun for one of our servers. We are small, but we are not goofing around. I would not recommend it yet for anything except where it has a clear advantage, but there are areas where it is noticeably faster or easier to set up.
I really want to like Bun and Deno. I've tried using both several times and so far I've never made it more than a few thousand lines of code before hitting a deal breaker.
Last big issue I had with Bun was streams closing early:
The Bun team uses Discord to kick off the Claude bot, so someone probably saw the comment and told it to do it. That edit doesn't look particularly good, though.
I am also very curious what people think about this. To me, as a project, Node gives off a vibe of being mature, democratic and community driven, especially after successfully navigating the io.js fork drama a few years ago. Clearly neither Bun nor Deno are community-driven democratic projects, since they are both VC funded.
I am Bun's biggest fan. I use it in every project I can, and I write all my one-off scripts with Bun/TS. That being said, I've run into a handful of issues that make me a little anxious to introduce it into production environments. For instance, I had an issue a bit ago where something simple like an Express webserver inside Docker would just hang, but switching bun for node worked fine. A year ago I had another issue where a Bun + Prisma webserver would slowly leak memory until it crashed. (It's been a year, I'm sure they fixed that one).
I actually think Bun is so good that it will still net save you time, even with these annoyances. The headaches it resolves around transpilation, modules, workspaces etc, are just amazing. But I can understand why it hasn't gotten closer to npm yet.
I think part of the issue is that a lot of the changes have been fairly incremental, and therefore fairly easy to include back into NodeJS. Or they've been things that make getting started with Bun easier, but don't really add much long-term value. For example, someone else in the comments talked about the sqlite module and the http server, but now NodeJS also natively supports sqlite, and if I'm working in web dev and writing servers, I'd rather use an existing, battle-tested framework like Express or Fastify with a larger ecosystem.
It's a cool project, and I like that they're not using V8 and trying something different, but I think it's very difficult to sell a change on such incremental improvements.
This is a long term pattern in the JS ecosystem, same thing happened with Yarn.
It was better than npm with useful features, but then npm just added all of those features after a few years and now nobody uses it.
You can spend hours every few years migrating to the latest and greatest, or you can just stick with npm/node and you will get the same benefits eventually
I have been using pnpm as my daily driver for several years, and am still waiting for npm to add a symlink option. (Bun does support symlinks.)
In the interim, I am very glad we haven't waited.
Also, we switched to Postgres early, when my friends were telling me that eventually MySQL would catch up. In many ways it did, but I still appreciate that we moved.
I can think of other choices we made - we try to assess the options and choose the best tool for the job, even if it is young.
Sometimes it pays off in spades. Sometimes it causes double the work and five times the headache.
There are still a few compatibility sticking points... I'm far more familiar with Deno and have been using it a lot the past few years; it's pretty much my default shell scripting tool now.
That said, for many work projects I need to access MS-SQL, and the way its driver does socket connections isn't supported by the Deno runtime, or some such. That limits what I can do at work. I suspect there are a few similar sticking points with Bun for other modules/tools people use.
It's also very hard to overcome the inertia. Node+npm had over a decade and a lot of effort to build an ecosystem that people aren't willing to just abandon wholesale.
I really like Deno for shell scripting because I can use a shebang, reference dependencies and the runtime just handles them. I don't have the "npm install" step I need to run separately, it doesn't pollute my ~/bin/ directory with a bunch of potentially conflicting node_modules/ either, they're used from a shared (configurable) location. I suspect bun works in a similar fashion.
That said, with work I have systems I need to work with that are already in place or otherwise chosen for me. You can't always just replace technology on a whim.
To beat an incumbent you need to be 2x better. Right now it seems to be a 1.1x better (for any reasonably sized projects) work in progress with kinks you’d expect from a work in progress and questionable ecosystem buy-in. That may be okay for hobby projects or tiny green field projects, but I’m absolutely not gonna risk serious company projects with it.
There are some rough edges to Bun (see sibling comments), so there's an apparent cost to switching, namely wasted developer time dealing with Node incompatibilities. Being able to install packages 7x faster doesn't matter much to me, so I don't see an upside to making the switch.
Bun is much newer than pnpm, looking at 1.0 releases pnpm has about a 6 year head start.
I write a lot of one off scripts for stuff in node/ts and I tried to use Bun pretty early on when it was gaining some hype. There were too many incompatibilities with the ecosystem though, and I haven't tried since.
That's an amazing addition! Once I read about Simpson's paradox [0], I couldn't help seeing it, or suspecting it, everywhere. Luckily, it is not a true paradox, and it can be resolved if the underlying data is available and not just summary statistics.
I recommend putting the quintet together in one image, so that the original 4 charts plus the new one are all visible and interpretable together. It will be a learning aid for decades to come.
Yes, I'm not saying the data dinosaur isn't cool. But for real-world applications, the quartet with the addition of this fifth dataset is more useful for pedagogical purposes.
Setting aside the question of whether hydrating Django templates in Rust from Django is useful in ways that hydrating Jinja templates in Rust isn't, Petcat's comment could be useful, and the author may not be aware of existing prior art. As engineers, we sometimes have a huge urge to build without looking around first. I am guilty of this myself: when I started on dela [0], I didn't know about two alternatives to it; I only learned about them through comments.
[0] https://aleyan.com/projects/ascii-side-of-the-moon