The nuclear share (red) declines through the 2000s. Wind and solar (light blue and yellow) overtook nuclear's peak share by the end of the period; it seems there is much more wind than sun in Germany ;-). Fossil fuels (the dark colors below red) are still very high.
The medium (web or desktop) is irrelevant: a file format must exist for backup and interoperability. I barely use office documents myself, but I work on software that produces and parses many spreadsheets every day.
An open standard is even more relevant in public administrations, where processes follow legal constraints and ISO standards. The Document Foundation's article reacts to a German institutional decision.
I hope that Germany mandating ODF over OOXML will enhance the whole ecosystem.
As a programmer, finding decent ODF libraries is far from a given. Last year I had to output spreadsheets from a Go program, but I could not find any maintained library for ODS, so I had to output XLSX files instead. Recently, I was luckier while programming in Rust.
You missed an easier alternative that was in the article: ctrl-u saves and clears the current line, then you can input new commands, then use ctrl-y to yank the saved command.
With zsh, I prefer to use alt-q which does this automatically (store the current line, display a new prompt, then, after the new command is sent, restore the stored line). It can also stack the paused commands, e.g.:
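Something like the following hypothetical session (in zsh the widget is push-line; the default binding may vary with your setup):

```
$ vim some_long_filename.txt    # halfway through typing, you realise you need another command first
<alt-q>                          # the line is stashed and the prompt clears
$ mkdir -p notes                 # run the intermediate command
$ vim some_long_filename.txt     # the stashed line reappears, ready to edit or run
```

Pressing alt-q again before running the intermediate command pushes another line onto the stack, and the lines pop back in reverse order.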
When you're killing (C-u, C-k, C-w, etc) + yanking (C-y), you can also use yank-pop (bound to M-y in bash and zsh by default) to replace the thing you just yanked with the thing you had killed before it.
```
$ asdf<C-w>
$ # now kill ring is ["asdf"]
$ qwerty<C-a><C-k>
$ # now kill ring is ["qwerty", "asdf"]
$ <C-y>       # "yank": pastes the thing at the top of the kill ring
$ qwerty<M-y> # "yank-pop": replaces the thing just yanked with the next
              # thing on the ring, and rotates the ring until the next yank
$ asdf
```
I've contributed a few optimisations to some implementations in these benchmarks, but as I read the code of many other implementations (and some frameworks) I lost most of the trust I had in these benchmarks.
I knew that once a benchmark becomes famous, people start optimising for it or even gaming it, but I didn't realise how much that made the benchmarks meaningless. Some frameworks were just not production ready, or had shortcuts added just for a benchmark case. Some implementations were supposed to use a framework, but the code was skewed in an unrealistic way. And sometimes the algorithm was different (IIRC, some implementations converted the "multiple SQL updates" requirement into a single complex update using CASE).
I would ignore the results in most cases, especially for emerging software, but at least the benchmarks suggested orders of magnitude in a few cases, e.g. the speed of JSON serialization in different languages, or that PHP Laravel was roughly twice as slow as PHP Symfony, which could be twice as slow as Rails.
I'm not the GP, but I've seen "rebase lies" in the wild.
Suppose a file contains a list of unique strings, one per line. A commit on a feature branch adds an element to the list. Later on, the branch is rebased onto the main branch and pushed.
But the main branch had added the same element at another position in the list. Since there was a wide gap between the two positions, Git's rebase reported no conflict. So the commit on the feature branch breaks the uniqueness constraint of the list.
For someone who pulled the feature branch, the commit seems stupid. But the initial commit was fine, and the final (rebased) commit is a lie: nobody actually created a duplicate item.
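The scenario is easy to reproduce in a throwaway repo (hypothetical file and branch names): the two hunks are far enough apart that their contexts don't overlap, so Git merges them silently.

```shell
#!/bin/sh
# Minimal repro: both branches add "iota" to a unique list, at positions
# far apart, and the rebase completes without any conflict.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git checkout -qb main
git config user.email "you@example.com"
git config user.name "you"
printf 'alpha\nbeta\ngamma\ndelta\nepsilon\nzeta\neta\ntheta\n' > list.txt
git add list.txt
git commit -qm "initial list"
git checkout -qb feature
echo iota >> list.txt                                  # feature: add "iota" at the end
git commit -qam "feature: add iota"
git checkout -q main
{ echo iota; cat list.txt; } > tmp && mv tmp list.txt  # main: same item at the top
git commit -qam "main: add iota"
git checkout -q feature
git rebase -q main                                     # hunks are far apart: no conflict
grep -c '^iota$' list.txt                              # the list now holds a duplicate
```

The final `grep -c` counts two occurrences of `iota`, even though each individual commit added it exactly once.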
Thanks for that. I'm definitely familiar with that kind of situation, but what I'm not seeing is how that leads to history "collapsing under its own weight" in larger teams. That seems like a relatively straightforward rebase error that is easily corrected. (Also, if it is important for that list to only include unique items and you were able to merge it anyway, maybe that also reveals a gap in the test suite?)
Git is so established now that it's sensible for an alternative VCS to have a mode where it can imitate the Git protocol - or even without that, you can still check out the latest version of your repo and git push it on a periodic basis.
Git is not a protocol, it is a data format. That only makes sense when your VCS is similar enough to Git to easily allow converting between the two representations.
It solves problems that you don't encounter if you are asking that question. I've lost a literal year or more of my life, in aggregate, to rebasing changes against upstream that could have been handled automatically by a sufficiently smart VCS.
An alternative explanation is that I already have a tool that helps me with these situations. The question was a bit rhetorical, because the vast majority of devs don't care what language many of their tools are written in or what algos are used.
A different example: Go's MVS algorithm can be considered much better for dependency management. What are your thoughts on the SAT solver being replaced in your preferred language's tooling? It would mean the end of lock files.
```
while read -r HASH; do
    pijul apply "$HASH"
    pijul reset                 # sync working copy to channel state
    git add -A
    git commit -m "pijul change: $HASH"
done < all_changes.txt
```
```
git remote add origin git@github.com:you/pijul-mirror.git
git push -u origin main
```
I agree, though the list contains "L'œuvre au noir", another wonderful novel by Marguerite Yourcenar.
I think some of the books on this list had very few readers, but were selected because of their relative fame among a list of 200 books. For instance, how many people have read the full "Gulag archipelago"? Or writings by Lacan or Barthes? Or the "Journal" by Jules Renard?
> I find this other list more deserving of this title
How is a list spanning the last 40 centuries deserving of the title "Books of the Century by Le Monde"?
Why would the "Epic of Gilgamesh" or the "Book of Job" be on a list of 20th century books?
> ... it starts with one of my favorite.
From that same Wikipedia page: “The books selected by this process and listed here are not ranked or categorized in any way;”
All this would be true if Linux and FreeBSD had similar exposure. But there are obviously fewer users and less hardware in the BSD world, so we must expect higher variance.
For instance, searching in recent FreeBSD issues, some hardware is compatible but 3× slower, as in "NFS is much too slow at 10GbaseT"[^1].
Or upgrading FreeBSD to v14 could sink NFS performance, as in "Write performance to NFS share is ~4x slower than on 13.2".
Of course, such bugs happen with Linux too, but there are vastly more resources to detect and fix them in the Linux world.