one limitation of Bazzite, for instance, is that some titles requiring anti-cheat won't work. But just like OP, the only use cases I have for Windows are gaming and one banking app that won't run on a non-Windows device
love to see more and more users realize they can game just fine on linux
It's time to stop buying such games and send game studios a signal that we won't tolerate rootkits and/or closed platforms. Anti-cheats should run server-side, or better yet, servers should be community-operated. I would probably have bought BF6, but since I exclusively use Arch, EA lost a sale -- too bad for them there are thousands of other games that work flawlessly on Linux.
I want to echo a previous comment of mine on this topic:
With the rise of mainstream-compatible Linux-first systems (as in, a standard gamer can get them running and use them with a similar frustration level as Win11) like the Steam Deck, Steam Machine, and even Steam Frame, there is real, even if currently low, pressure on big publishers to support Linux/SteamOS. I somewhat hope/fear there will be a blessed SteamOS version that supports anticheats well enough for publishers like EA, Epic, and Riot to accept the risk.
Rumour has it that after the Crowdstrike fiasco future versions of Windows won't allow kernel level modules. I can only hope this is true if it kills off the main reason titles don't work on Linux as a side effect. I'd have bought BF6, some version of EAFC, and more.
Unfortunately the rumors were misinformed. Microsoft's official response states that while they will be moving towards allowing more security functions to be run outside of the kernel, "It remains imperative that kernel access remains an option for use by cybersecurity products to allow continued innovation and the ability to detect and block future cyberthreats" [1].
It's not going to happen. If anything they would just add functionality so things like Crowdstrike don't need to run in ring-0 but they won't remove the access.
(I had to make a HN account to reply to this, but…)
If only Riot, Epic, BE, whoever else knew about this wondrous approach! That way they wouldn’t have to reverse half the Windows kernel to figure out ways to stop & detect hacks.
Valve (mostly) does serverside analytics for CS2 and the success of their approach can be measured by one of FaceIT’s benefits being “we have a working anticheat”.
This. It should actually be easier to catch offenders - you're leaning on hundreds of years of applied statistics, rather than racing versus sneakier exploits.
> [...] or better yet, servers should be community-operated.
I'm conflicted about this one. I've wanted to host a game server at home since 2003, but couldn't get a public, static IP. The landscape hasn't changed much, perhaps even for the worse: a Quake 3 dedicated server could be run from a mid-range laptop while playing the game; Minecraft and Factorio (both great games with fantastic communities), by that measure, have unreasonable hardware requirements.
So, you pay a host.
OTOH there's many ways for a studio to build and operate an ethical live service. Check out Warframe: it's 100% F2P, the main source of revenue is cosmetics, and it's easy for people to gift stuff (whales spill their pockets reinforcing community goodwill, rather than gambling).
It's best when a game offers both, e.g. Brood War. StarCraft II isn't "simply" dying; lack of LAN play actively hinders on-site, professional tournaments. And we can do nothing about it.
There are lots of neat tricks for DHT peer discovery and NAT hole punching these days. It wouldn't be hard to make a local game server manager that lets you share join information with your friends and automatically resolves all the networking needed, with no VPN, static IP, or DNS required.
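A minimal sketch of the hole-punching piece, assuming a rendezvous server has already told each peer the other's public `ip:port`. The `punch` helper and the `b"punch"` message format are made up for illustration:

```python
import socket

def punch(sock, peer_addr, attempts=5):
    """Classic UDP hole punching: both peers fire datagrams at each
    other's public endpoint at roughly the same time, so each NAT
    creates an outbound mapping that lets the other side's packets in."""
    sock.settimeout(1.0)
    for _ in range(attempts):
        sock.sendto(b"punch", peer_addr)
        try:
            data, addr = sock.recvfrom(64)
            if addr == peer_addr:
                return True  # we heard the peer directly: the hole is open
        except socket.timeout:
            pass
    return False
```

Symmetric NATs defeat this trick; that's where fallbacks like TURN-style relays or a mesh VPN come in.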
I normally recommend Tailscale, and that's a great starting point, for games that do not support any of the neat tricks natively. The problem: it's more difficult to build a community around that. You're introducing a point of friction, and a lot of newcomers will bounce. It's difficult enough to guide a non-technical friend, and Tailscale is top among the absolute easiest solutions.
How would a server-side anti-cheat work? You wouldn't be able to detect ESP or other information leaks. Best you can do is see how good they are vs. everyone else but how do you know if someone is cheating or just really good? Most cheaters are not blatantly cheating so it is hard to know for sure. Even something like aimbotting is almost always adjustable in cheat software to have varying levels of accuracy.
SSAC is already widely deployed for many games. I'm not a professional backend gamedev (just an enthusiast), so I don't know all the approaches / tricks, but here's off the top of my head:
> [...] see how good they are vs. everyone else [...]
It's called Elo or MMR. You match players with a similar rating. An unfair advantage in one area (e.g. aimbot, map hack) turns into a significant disadvantage in all of the other areas (strategy, team play, mechanics, situational awareness, decision making). In SC2 you can regularly see mid-high masters or low GMs play against map hackers and just destroy them. Match making simply works as intended.
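The standard Elo update can be sketched in a few lines (a minimal sketch; real ladders use variants like Glicko or MMR with uncertainty, but the shape is the same):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """Update two players' ratings after a match.
    score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    # Zero-sum: whatever one player gains, the other loses.
    return r_a + delta, r_b - delta

# An upset (1200 beats 1400) moves ratings more than an expected win.
print(elo_update(1200, 1400, 1.0))
```

The point for anti-cheat: a one-dimensional advantage only carries a player until the ladder matches them with opponents who outplay them everywhere else.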
As a cheater - aside from it being a different (not more difficult, just different) kind of challenge - how do you gain a material advantage from this? Streaming the game? If you attract a community that cherishes cheaters? Well.
This is of course on top of normal AC.
> [...] how do you know if someone is cheating or just really good?
In versus - it will surface, as noted above. You will plateau, just like any other player. If you're "really good", you will become an outlier and get attention.
In a game like Warframe (PvE, you can farm goods that you can sell for in-game currency), the main limiting factor is your time. A very good loadout will shorten an exterminate mission from 4 to 3 minutes, and you can build a decent loadout within ~2 months of starting to play the game. To further shorten it to 2min, you need good mechanics, or - as noted - to cheat. That's assuming you run solo - but since this is a co-op game, there's often someone on your team who will clear the mission for you in 2min anyway. Choosing to cheat is your own risk.
I'd consider AC a core part of game design.
> Most cheaters are not blatantly cheating so it is hard to know for sure. Even something like aimbotting is almost always adjustable in cheat software to have varying levels of accuracy.
It depends on how high you want to go - you don't know where the radar is, and it only needs to spot you once. The problem space isn't just aimbotting, it's highly multidimensional. An arms race like any other, except your "enemy" (the host) has significantly more information.
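As a toy illustration of that "radar", here's a single-dimension z-score check on one per-player statistic. This is a hypothetical sketch, not any real anti-cheat's method; a production system would combine many dimensions over long time windows:

```python
import statistics

def flag_outliers(stats_by_player, z_threshold=2.5):
    """Flag players whose per-match statistic (e.g. headshot ratio)
    sits far above the population distribution."""
    values = list(stats_by_player.values())
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)  # sample standard deviation
    return [p for p, v in stats_by_player.items()
            if stdev and (v - mean) / stdev > z_threshold]
```

Note that a blatant outlier inflates the standard deviation and partially hides itself, which is one reason real systems use robust statistics and longitudinal data rather than a single snapshot.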
You must combine client-side with server-side AC either way. A CS exploit will circulate the same way a regular aimbot will.
> You must combine client-side with server-side AC either way.
I should have clarified - this is exactly what I meant. Client-side anti-cheat cannot be replaced by server-side anti-cheat. You need both.
I work on an FPS game that is heavily targeted by cheaters (Rust). We do both, but we are probably limited in what we can do server-side because it's a PvP sandbox game. There is no matchmaking and no defined winner or loser to simplify ranking players against each other. It's also high stakes, because a cheater can ruin a legit player's hours of preparation in moments. Drawing a line in the data to detect cheaters will catch outliers, but there's a world of "legit cheaters" out there who use cheats but limit them so they don't stand out and avoid being banned.
It's a sandbox game. You play on the same server over the course of a wipe (up to one month long) and then the server's map is cleared/changed. Hundreds of players gather resources, build bases, craft weapons, etc. to fight each other and defend themselves from others. Players often team up but you never know who you can really trust unless you're actually friends with them.
Other than aiming there's just game and map awareness, including understanding the current meta. Base design is a whole other area relevant to defending against raiding while you aren't online or away from your base.
We use EAC but also have our own layers of protection: some of our own anti-tampering in the client, a player reporting system with staff to investigate, server-side antihack to guard against all kinds of weird state modified clients send, and a lot of data collection and analysis. If you look up Rust or any other popular FPS game, you'll see it's still not enough.
The most effective anticheat tool really is game design. Games can be designed to limit or even eliminate the worst of cheating... but only by significantly changing the games. It'd be simple if everything was like the Civilization games because they're turn based and have well-defined actions. All input can be 100% verified without the need for tolerances and hidden state (fog of war) can be networked only when necessary.
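A sketch of why the turn-based case is easy (a hypothetical grid game, not Civilization's actual rule set): the server can enumerate the exact legal set and reject anything else, with no tolerances:

```python
def legal_moves(unit_pos, board_size):
    """All one-step orthogonal moves for a unit. The server can
    enumerate these exactly, so any other input is provably invalid."""
    x, y = unit_pos
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return {(cx, cy) for cx, cy in candidates
            if 0 <= cx < board_size and 0 <= cy < board_size}

def validate_move(unit_pos, requested, board_size=8):
    """Server-side check: accept a move only if it is in the legal set.
    Contrast with aim in a shooter, where 'legal' is a fuzzy continuum."""
    return requested in legal_moves(unit_pos, board_size)
```

In a real-time shooter there is no such finite set to check against, which is exactly why the tolerance/hidden-state problems the comment describes appear.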
OK so it's like Minecraft, but with a lot more combat. I can see the appeal.
> The most effective anticheat tool really is game design.
Yep, that's always been my idea, and why I brought up Elo.
IMO second most effective is to play with people you already trust. Or like on many public Factorio servers: strangers get limited permissions until proven trustworthy. But none of this works in a game with just a couple hundred players.
It has been time for a long time, and I support your stance, but the big publishers only speak money. I gather they still have enough customers for their mainstream AAA titles.
But I would like to think that Valve is indirectly putting pressure on them. I too am not far from removing Windows and making the full jump to Linux for my gaming needs.
To add some more context to your comment: one of the big attacks was the 2021 Kaseya ransomware attack, which left one of the larger grocers (Coop) essentially unable to operate. It made national news as they had to give away product for free in some places.
> And yes, we use cash so seldom that most people cannot from memory recall what the bills/coins look like!
It didn't help that the Riksbank replaced all bills and coins over a relatively short period, and did it badly. People used up or deposited their old cash and didn't get the new.
The new coins and bills have unnecessary denominations and bad design that made cash bothersome to use.
They introduced an unnecessary 2 SEK coin, that is almost indistinguishable from the 1 SEK coin — especially if you are unused to them.
They also introduced an unnecessary 200 SEK bill, that was just too big to be useful for small purchases. Several times I've seen people at ATMs withdrawing 100 SEK over and over again, just because they wanted the more useful 100 SEK bills.
I liked the article so I wanted to give you some feedback. Hope it is useful to you!
- I don't think the definitions of error and failure are 100% correct as stated. Looking at the IEEE definition that you reference, I interpret error meaning the difference between the value that is stored in the program, and the correct/intended value. For example if we expect to have a value of 100, but in fact have 110, the error is 10. I don't think that whether the value is observed or not is what categorizes it as either an error or a failure. If I run my program in the debugger and find that a value is off from what it is supposed to be, does that shift it from an error to a failure?
- One point I think you should have leaned more into is how language constructs and tools can help prevent failures, or cause more of them if they are bad. You bring up the point with Haskell and Rust, and how they systematically reduce the number of faults a programmer can make. You also bring up the point of Exceptions introducing a lot of complexity. I think these two examples are great individually. I think putting them together and comparing them would have been powerful. Maybe a section that argues why Rust omitting exceptions makes it a better language.
- A side note since I also hate exceptions: did you know that the most common (and accepted?) way to communicate exceptions in C# is via doc comments written manually by humans. Good luck statically analyzing that!
- A lot of the text revolves around the terms error, failure, and fault and how people use them in communication, often with different ideas of what the words mean. Even the titles (jokingly? "correctingly"?) reference this. Even with the definition at the start, the ambiguity of these terms was not dispelled. I think a major part of that was the text using both the terms as you defined them and the common "misunderstood" versions. A strategy you could have deployed here is to pick less overloaded words and stick to them throughout the article. For example (without claiming these are the best terms for the job), instead of fault, error, and failure: defect, deviation, and detected problem.
- A note on the writing style. Many words are quoted, and many sentences use parenthesis to further explain something. At least to me, these things make the text a bit jumpy when overused. I would try to rewrite sentences that end with a parenthesis by asking myself "what is missing in the sentence so I don't need to resort to parenthesis?". Don't be afraid to break a long sentence into many!
Hope my comments come off as sincere; if not, that's on me! Good luck with your continued writing.
> A side note since I also hate exceptions: did you know that the most common (and accepted?) way to communicate exceptions in C# is via doc comments written manually by humans. Good luck statically analyzing that!
Java having checked exceptions is the primary reason I’m sticking with that language. Many libraries don’t use them, unfortunately, but an application that embraces them systematically is bliss in terms of error handling, because at any place in the code you always know exactly what can fail for what non-bug reasons.
They had the right idea but implemented it poorly (overly verbose to work with, as is much of Java). The end result was people taking too many shortcuts.
The only thing I'll comment on is the IEEE stuff. I was taught these terms in a university course on fault tolerance. You'll find slides from various courses using them like this or similar if you search on Google, and that particular IEEE standard was mentioned as the source (I never personally read it). I have read a later standard that rather than defining error specifically, mentions all the various ways in which the term is used.
The thing is, the actual standard is irrelevant, it wasn't meant as an appeal to authority. Rather, it's a source of 3 related terms (fault/error/failure) that can be used to refer to the 3 distinct ideas discussed throughout the post.
Your suggestions for alternative names are just as valuable and just as useless, neither the ones in the standard nor your own are generally agreed upon. My hope was that by using a somewhat common triple I would have avoided pointless discussion on the terms themselves, rather than the ideas discussed in the post.
As this Hacker News comment section demonstrates, it was all for naught ;)
> did you know that the most common (and accepted?) way to communicate exceptions in C# is via doc comments written manually by humans.
Well, the accepted way to communicate them in Python is "we don't". I think C++ follows that same principle, but the ecosystem is extremely disconnected, so YMMV.
Java tried to do a new and very good thing by forcing the documentation of exceptions. But since generics sucked for the first ~20 years of the language, and nobody ever applied them to exceptions, it got bad results that discouraged anybody else from trying.
I think for dynamic languages exceptions are just a fact of life and it doesn't really make much sense to worry about them, you can't rely on the type system to remind the programmer of all the cases they need to handle.
So thinking in terms of failure handling is the way to go.
There is a cost in trying to force the language to find bugs for you. More is not always better. Unlike a linter, ignoring false positives from a compiler requires more work to work around them.
Not having exceptions in the language creates a tradeoff as well. This may lead to either ignoring errors or adding non-linear boilerplate between where the issue is detected and where the code can handle it, negatively impacting readability and refactoring.
Yup, see the section on handling failures in the post. Though note that I use "exceptions" to refer to a very particular language feature, rather than the mechanism. Rust panics and Go panics work like exceptions but are meant to be used differently. Panics are good as are exceptions when used like panics.
I've been using this as my standard font for maybe 1-2 years now (no, I am not joking). While I don't think that the font is any more legible than other fonts, the quirkiness and the character of the font makes it rather enjoyable to look at.
If legibility is an issue then I would seriously recommend increasing the font size, I think that will do much more than choice of "most optimal" font. And if increased font size makes your code "harder to read", consider that someone else might be unable to use a smaller font and will be forced to read code with a larger font size.
Daily driver for me as well, but only for terminal. People laugh sometimes while pair programming, but usually by the end they begin to really like it. Can't use anything else at this point.
I have a similar feeling. For some work I need to focus, but the "why don't we scrap this idea for one that does the same thing in 1/10 the code"-moments have mostly been in and between meetings and office chatter.
> The only solution is for the regulator, in this case the central banks, to issue guidance for the banks to create credit for only new productive investments, whether that be new housing, factories, machinery, or firms, because those are not inflationary and increase the size of the GDP pie.
I struggle to see exactly what you are advocating for. If a family wants to buy a house, they will generally have to take out a loan to cover the upfront cost and pay off the loan over a long period of time. However, the loan is not a "productive investment" (no new assets are being created, only traded) and as such the central bank should regulate normal banks to not be allowed to issue loans for existing houses.
Without the ability to take out a loan for a house, I think we can all see how no normal family without 20-40 years of combined salary payments would be able to afford a house. Is this in line with what you are suggesting, or is it something else?
I'm not trying to be asinine, this is just my interpretation of your suggestion and I am trying to understand what you are suggesting.
I have one example and an anti-example. Both are related to algorithms and computer science.
A) Packing a binary tree into an array. Anyone who has attended an algorithms course has likely created a binary tree with nodes, leaf nodes, left and right children, etc., and seen the pine-tree-like sketch of a larger example where every node except the leaves has a left and a right child. So how do you pack this tree into an array and traverse it efficiently?
Well, you turn it 90 degrees sideways, slightly shift all nodes on the same level so that none align, and put them into an array going from leftmost to rightmost (or the other way, depending on whether you rotated 90 or -90 degrees). Congrats, you've packed nodes into an array. How do you traverse it? The root node is at index 1, and if you packed the array correctly then `idx = (idx * 2) + 1` moves down one side and `idx = (idx * 2) + 0` moves down the other. I don't have a good visual explanation, but you can think of the index as a bit-sequence describing where in the tree a left vs. right path was taken (with the exception of the leading bit for the root node).
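That traversal can be sketched like this (1-based, level-order array; `None` marks absent children; the letters are an arbitrary example):

```python
def child(idx, go_right):
    """Children of node idx (root at index 1) live at 2*idx and 2*idx + 1.
    The binary digits of idx after the leading 1 record the exact
    sequence of left (0) / right (1) turns taken from the root."""
    return idx * 2 + (1 if go_right else 0)

# heap = [None, root, L, R, LL, LR, RL, RR]  -- level order, 1-based
heap = [None, "F", "B", "G", "A", "D", None, "I"]

idx = 1                            # start at the root "F"
idx = child(idx, go_right=True)    # index 3 -> "G"
idx = child(idx, go_right=True)    # index 7 -> "I"
print(heap[idx])                   # prints "I"
print(bin(idx)[3:])                # "11": the two right turns, as bits
```

This is the same layout binary heaps use, which is why heap implementations get away with a flat array and pure index arithmetic.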
B) Anti-example: the Floyd-Warshall algorithm for finding shortest paths between all pairs of nodes in a graph. The algorithm is basically just three for-loops stacked on top of each other, but I still can't grasp why it works. Something with dynamic programming and incrementally building on previously established intermediate paths. The algorithm is truly the product of a beautiful mind.
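For reference, those three stacked loops (a sketch of Floyd-Warshall; `dist` is an adjacency matrix mutated in place):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths. dist is an n x n matrix with
    dist[i][j] = edge weight (inf if no edge) and dist[i][i] = 0.
    Invariant: after iteration k of the outer loop, dist[i][j] is the
    shortest path from i to j using only nodes 0..k as intermediates."""
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

The invariant in the docstring is the "why": each pass asks whether routing through the newly allowed intermediate node k beats the best path found so far.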
A)
I've seen this before but I've never thought of the "index as a bit-sequence describing when in the tree a left vs. right path was taken" before! This is a very nice intuitive explanation that'll really help in describing this to others.
This summer I read "On Writing Well" by William Zinsser, which was an eye-opener for me. I'm far from an expert in writing clear texts, but I am definitely noticing more texts which are just... big balls of blurb that don't actually say anything. All because of that book.
It sounds dumb, but this year it clicked for me how big the difference is between a poorly written text and a well written one.
Zinsser's "On Writing Well" is one of my favorite books, I like how clear and concise its prose is.
I was recommended Joseph M. Williams's "Style: Toward Clarity and Grace" [1], so you might be interested in it. I've read the introduction (I'm planning to read the rest this year), and with just a few examples the author sells the idea that prose doesn't need to be utterly complex to communicate ideas and concepts succinctly and clearly.
This is the first book I recommend to anyone who wants to improve their writing. With Minto's Pyramid Principle as a follow-on.
My top 3: 1/ Edit ruthlessly. Every single word is reduced to its simplest form and pulls its weight--it has a damn good reason for being there. 2/ Aspire to write at a third-grade reading level. Readers prefer simple writing even when reading deeply technical content. 3/ Put your most important conclusions up front, not at the end. You're not writing The Sixth Sense. Do your reader a favor and tell them the big reveal first. You can then follow up and persuade the reader why your conclusions are right.
The definition of working with people vs. working with things doesn't seem obvious to me. As a software engineer I am working a lot with my computer. Therefore I must be working with things! But... I am just as much working with my soft skills. Scrum retros, talking to stakeholders, discussions on design with fellow engineers, testing with end-users. Some days I don't spend even a minute working with "things".
I don't think the distinction of working with people vs working with things is clear enough to say whether my job means I am doing one or the other. So how could the participants in the study do the same? I'm assuming there are lots of occupations in this grey zone, and even situations where in one company an occupation is considered working with people while in another it would be considered working with things.
I can see multiple issues the study runs into, and it would be interesting to see how/if it answers them. A) Does it measure people's perception of whether they work with people or things? B) Do the authors make their own interpretation of which occupations land in which category? How do they then eliminate their own biases about what working in that occupation means? C) Did the authors observe the participants and make a judgement call based on their day-to-day activity? This would likely be the most accurate, but I can't imagine they did it because of the sheer cost of such an experiment.
I think it’s pretty clear to me. I am much happier to be quietly doing research, writing code, testing, writing documentation, fixing bugs, writing emails, etc. Now stick me in a meeting and I soon become very unhappy. Why? Because I’m really not interested in listening to people complain about stuff unrelated to my tasks or giving updates about random stuff or whatever.
Now I don’t mind socializing with people at work and making a bit of small talk but I have zero interest in the sort of collaborative work that involves daily meetings. I’m far more productive when I can be left alone to focus on the task and it drives me crazy when people constantly interrupt me.
Here I find some support for the parent's point: there is a lot of room for interpretation.
To me, writing an email can be the most intense "working with people" thing, more so than any amount of small talk or doing manual labor with people. The amount of stress that goes into a hard to write email – mostly because it will also be a hard to read email and then also the anticipation of some sort of unpleasant reaction – and the amount of time you have to wallow in that stress, rivals few other social interactions in its intensity.
See, I find writing an email to be a totally straightforward, impersonal task. Like writing code, but in English rather than a programming language. Just the facts, no sugar-coating or other nonsense!
I know the emails they're talking about. They're not as simple as writing down the facts. Writing an extremely complex technical email for people who aren't familiar with any of the subject - even slightly - and explaining each detail concretely is quite the process.
It’s annoying when I have to write a 6,000 word email that is easily misunderstood because most others barely have an idea of the subject matter but it happens more often than I’d like to say.
Tbh - I find the emails pointless but this is what happens in low trust (dysfunctional) organizations.
It depends on the quality of the meetings. At my last job I hated them for all the reasons you said. At my current job they are enjoyable and productive. That's because the agenda is a "living" document that anyone can edit during the week, and the meetings stop when we've nothing left to discuss or need to get back to work.
That's true. At my last job the meetings basically served as a way for the manager to broadcast stuff that could've just been sent out in an email and then collect updates on what everyone has been working on which also could be done with email. There was almost no back-and-forth to it at all, yet the damn thing took 2 hours per week. A huge waste of time!
A bricklayer works with things. Daycare staff work with people. These are very clear-cut.
Then there are a lot of points along that continuum -- nurses and HR reps work mostly with people. Economists a little of both. Software engineers and carpenters mostly with things.
We may not agree on a complete ordering but we will get reasonably close to each other, I suppose.
Yeah, I guess my point is that the categorization "prefer working with people" vs "prefer working with things" seems very weird. If I were a solo IT entrepreneur, I would likely work a lot with both! It doesn't even seem to be a scale between working more with things or more with people, if you ask me.
And if there is so much grey zone that is open to interpretation, wouldn't that mean the study is rather measuring ones own perception of what they are working with?
Neither daycare nor bricklaying is something people often choose as their profession; rather, it's something they end up doing because of a lack of options. Of course women would prefer bricklaying less, as they are generally weaker. And as a man, good luck getting hired in daycare in the first place.
You cannot not work with people; only the degree varies. And as soon as you are part of a larger organisation, and/or working on complex things, you work with people as much as you work with things. Including bricklayers.
Nurses work with people more than things? I'm not sure how much face time a nurse gets in a day, but I've talked to a few and it's a lot of paperwork… do accountants work with things?
Are you serious? Nurses are not secretaries. They work with people. Having to fill in forms doesn't take away from the fact that their primary job is to tend to and care for people, help in operations, etc. - mostly interacting with people.
An accountant can stay deep down in Quicken for days, and that's OK. Occasionally interacting with a client doesn't make theirs a people job.
On the other hand, most sales jobs are all about people.
I spent about 10 days a few years ago working in a medical team at a hospital, assisting nurses. They have a lot of workflow paperwork to follow, but that doesn't mean they chose the job for that reason - they mostly hate it. And they are still running left and right for patients all day. Plus, their desktop is usually not hidden in an office but at a counter in the center of the ward, where they multitask between doing paperwork and talking to patients, visitors, and other members of the medical and technical teams.
>As a software engineer I am working a lot with my computer. Therefore I must be working with things! But... I am just as much working with my soft skills. Scrum retros, talking to stakeholders, discussions on design with fellow engineers, testing with end-users. Some days I don't spend even a minute working with "things".
Yes but did you join the profession for the Scrum meetings or for the programming?
software is an industry that supports both types of workers, and you probably can pick out amongst your working group which ones are the people who prefer things vs people. your job may ask you to do soft work, but that doesn’t mean you prefer it
I know some of my friends would never want to work with people through things (like computers or the Internet). They prefer a real office or retail setting with actual people around them, and don't like working in physical solitude (a co-working space doesn't cut it for them).
This is a comment only an engineer could come up with. It's not a "grey zone" ffs. You compare talking to people in order to do your job with teaching a class of pubescent kids or wiping old people's asses. Those are not even remotely comparable.
It sounds to me like you did a full rewrite by replacing the app piece by piece, sprint by sprint, releasing changes quite often and bringing that value all the way to the user. I think that is really clever.
My impression from others in this thread is that they mean "start from scratch and build until features are on-par with current product" when they say full rewrite.
Your version of full rewrite seems like it is generally applicable, but I have very little faith in the latter approach.
ProtonDB is a goldmine when a game doesn't work. Oh, and switching from an Nvidia GPU to an AMD GPU worked great for getting games to "just work".