
Where I live (Pacific Northwest), it's not snow that's the problem, but windstorms - presumably knocking over trees, which in turn take down power lines - which of course implies said trees are tall, in proximity to the power lines, and not cut down. I average maybe 24 hours of outage per year (frequently less, but occasionally spiking to a multi-day outage.)

I don't think that's something that can be solved with just "build quality"... but it presumably could be solved through "maintenance" (cutting down or trimming trees, although that requires identifying the problem, permissions, a willingness to have decreased tree coverage, etc.)


> It'll mean GOG has to do less work

[citation needed]

GOG's launcher team is presumably already familiar with their codebase, already has a checkout, already has a codebase that's missing 0 features, has a user interface that already matches their customers' muscle memory, and presumably already has a semi-decent platform abstraction layer, considering they have binaries for both Windows and OS X. Unless they've utterly botched their PAL and buried it under several mountains of technical debt, porting is probably going to be relatively straightforward.

I'm not giving Linux gaming a second shot merely because of a bunch of anecdata about Proton and Wine improvements - I'm giving it a second shot because Steam themselves have staked enough of their brand and reputation on the experience, and put enough skin in the game with official Linux support in their launcher. While I don't have enough of a GOG library for GOG's launcher to move the needle on that front for me personally, what it might do is get me looking at the GOG storefront again - in a way that some third party launcher simply wouldn't. Epic? I do have Satisfactory there, Heroic Launcher might be enough to avoid repurchasing it on Steam just for Linux, but it's not enough to make me want to stop avoiding Epic for future purchases on account of poor Linux support.


Phase Alternating Line? What's "PAL" here?


Given the context probably Platform Abstraction Layer.


You can specify:

    "runOptions": { "runOn": "folderOpen" }
In tasks.json, which I use for automatically `git fetch`ing on a few projects. While I don't recall its interaction with first run / untrusted folder dialogs, it's entirely automatic on second run / trusted folders.
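For reference, a fuller sketch of what that looks like in context - a minimal `.vscode/tasks.json` where the label, fetch flags, and presentation settings are all illustrative choices, not prescribed:

```json
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "git fetch on open",
            "type": "shell",
            "command": "git fetch --all --prune",
            "runOptions": { "runOn": "folderOpen" },
            "presentation": { "reveal": "silent" }
        }
    ]
}
```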


> Wanting to be able to use anybody's machine is very strange, agreed.

Very useful if people are struggling to create reliable repro steps that work for me - I can simply debug in situ on their machine. Also useful if a coworker is struggling to figure something out, and wants a second set of eyes on something that's driving them batty - I can simply do that without needing to ramp up on an unfamiliar toolset. Ever debugged a codegen issue that you couldn't repro, that turned out to be a compiler bug, that you didn't see because you (and the build servers) were on a different version? I have. There are ways to e.g. configure Visual Studio's updater to install the same version for the entire studio, which would've eliminated some of the "works on my machine" dance, but it's a headache. When a coworker shows me a cool non-default thing they've added a key binding for? I'll ask what key(s) they've bound it to if they didn't share it, so we share the same muscle memory.


I bucket Eclipse under "heavyweight IDE". I used to use it, plus the CDT plugin, for my C++ nonsense.

Then Visual Studio's Express and later Community SKUs made Visual Studio free for ≈home/hobby use in the same bucket. And they're better at that bucket for my needs. Less mucking with makefiles, the ability to debug mixed C# and C++ callstacks, the fact that it's the same base as my work tools (game consoles have stuff integrating with Visual Studio, GPU vendors have stuff integrating with Visual Studio, the cool 3rd party intellisense game studios like integrates with Visual Studio...)

Eclipse, at least for me, quickly became relegated to increasingly rare moments of Linux development.

But I don't always want a heavyweight IDE and its plugins and load times and project files. For a long time I just used notepad for quick edits to text files. But that's not great if you're, say, editing a many-file script repository. You still don't want all the dead weight of a heavyweight IDE, but there's a plethora of text editors that give you tabs, and maybe some basic syntax highlighting, and that's all you were going to get anyways. Notepad++, Sublime Text, Kate, ...and Visual Studio Code.

Well, VSC grew some tricks - an extension API for debuggers, spearheading the language server protocol... heck, I eventually even stopped hating the integrated VCS tab! It grew into a "lightweight IDE" bucket, and it serves that niche well for me.

In doing so, it's admittedly grown away from the "simple text editor" bucket. If you're routinely doing the careful work of auditing possibly malicious repositories before touching a single build task, VSC feels like the wrong tool to me, despite measures such as introducing the concept of untrusted repositories. I've somewhat attempted to shove a square peg into a round hole by using VSC's profiles feature - I now have a "Default" profile for my coding adventures and a "Notes" profile with all the extensions gone for editing my large piles of markdown, and for inspecting code I trust enough to allow on disk, but not enough to autorun anything... but switching editors entirely might be a better use of my time for this niche.


> perhaps the downvoters can tell me why they are downvoting?

Not one of the actual downvoters, but:

Lack of proper indenting means your code as posted doesn't even compile - e.g. I presume there was a `char* p;` that had its `*` eaten by markdown formatting.

Untested AI slop code is gross. You've got two snippets doing more or less the same thing in two different styles...

First one hand-copies strings character by character, has an incoherent explanation about what `pwbuf` actually is (comment says "root::", code actually has "root:k.:\n", but neither empty nor "k." are likely to be the hash that actually matches a password of 100 spaces plus `pwbuf` itself, which is presumably what `crypt(password)` would try to hash.)

Second one is a little less gross, but the hardcoded `known_hash` is again almost certainly incorrect... and if by some miracle it was accurate, the random unicode embedded would cause source file encoding to suddenly become critical to compiling as intended, plus the `\0`s written to `*p` mean su.c would hit the `return;` here before even attempting to check the hash, assuming you're piping the output of these programs to su:

        while((*q = getchar()) != '\n')
                if(*q++ == '\0')
                        return;
A preferable alternative to random nonsensical system-specific hardcoded hashes would be to simply call `crypt` yourself, although you might need a brute force loop, as e.g. `crypt(password);` in the original would presumably overflow and need to self-referentially include the `pwbuf` and thus the hash. That gets messy...


crypt is defined in assembly at s3 crypt.s and it would appear to use the same family of "cryptographic machine" as V6's crypt.c but it is even shorter and I can't tell if it has bounds checks or not — V6 limits output size to 512.

edit: if the hash output length is variable, it may be impossible to find a solution, and then a side-channel timing attack is probably the best option.


Someone liked this, but note that someone else had already determined (in a previous HN post) that it is limited to 64 bytes, so the overflow hack does work.


This is something I do wish Rust could better support. A `#![no_std]` library crate can at least discourage allocation (although it can always `extern crate alloc;` in lib.rs or invoke malloc via FFI...)


Is the juice worth the squeeze to introduce two new function colors? What would you do if you needed to call `unreachable!()`?

It's a shame that you can't quite do this with a lint, because they can't recurse to check the definitions of functions you call. That would seem to me to be ideal, maintain it as an application-level discipline so as not to complicate the base language, but automate it.


> Is the juice worth the squeeze to introduce two new function colors?

Typically no... which is another way of saying occasionally yes.

> What would you do if you needed to call `unreachable!()`?

Probably one of e.g.:

    unsafe { core::hint::unreachable_unchecked() }
    loop {}
Which are of course the wrong habits to form! (More seriously: in the contexts where such no-panic colors become useful, it's because you need to not call `unreachable!()`.)

> It's a shame that you can't quite do this with a lint, because they can't recurse to check the definitions of functions you call. That would seem to me to be ideal, maintain it as an application-level discipline so as not to complicate the base language, but automate it.

Indeed. You can mark a crate e.g. #![deny(clippy::panic)] and isolate that way, but it's not quite the rock solid guarantees Rust typically spoils us with.
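As a concrete sketch of that isolation pattern (the `checked_div` function is purely illustrative; the comments note exactly why the guarantee falls short):

```rust
// Crate-level lint: any literal panic!() in THIS crate becomes a
// compile error when built under clippy. It does not see panics
// hidden inside functions this crate merely calls - which is
// precisely the "not quite rock solid" gap.
#![deny(clippy::panic)]

pub fn checked_div(a: u32, b: u32) -> Option<u32> {
    // A panic!("division by zero") here would be rejected by clippy;
    // returning Option pushes the decision to the caller instead.
    a.checked_div(b)
}

fn main() {
    assert_eq!(checked_div(10, 2), Some(5));
    assert_eq!(checked_div(1, 0), None);
}
```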


> Typically no... which is another way of saying occasionally yes.

You might be able to avoid generating panic handling landing pads if you know that a function does not call panic (transitively). Inlining and LTO often help, but there is no guarantee that it will be possible to elide them; it depends on the whims of the optimiser.

Knowing that panicking doesn't happen can also enable other optimisations that wouldn't have been correct if a panic were to happen.

All of that is usually very minor, but in a hot loop it could matter, and it will help with code size and density.

(Note that this is assuming SysV ABI as used by everyone except Windows, I have no clue how SEH exceptions on Windows work.)

> Indeed. You can mark a crate e.g. #![deny(clippy::panic)] and isolate that way, but it's not quite the rock solid guarantees Rust typically spoils us with.

Also, there are many things in Rust which can panic apart from actual calls to panic or unwrap: indexing out of bounds, integer overflow (in debug), various std functions if misused, ...
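A quick sketch of a few of those implicit panic paths next to their non-panicking siblings (plain Rust, nothing beyond the standard library):

```rust
fn main() {
    let v = [1u8, 2, 3];

    // v[10] would panic at runtime with "index out of bounds";
    // slice::get returns an Option instead:
    assert_eq!(v.get(10), None);

    // 200u8 + 100 panics in debug builds (overflow) and wraps in release;
    // the checked_*/wrapping_*/saturating_* families make the choice explicit:
    assert_eq!(200u8.checked_add(100), None);
    assert_eq!(200u8.wrapping_add(100), 44);

    // integer division by zero panics in BOTH debug and release:
    assert_eq!(1u32.checked_div(0), None);
}
```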


FWIW, this is what I do, although typically with the prefix `dead/`, unto which I abandon ideas and misdirected refactoring with reckless abandon.


GPS started as a U.S. Department of Defense project, and they had qualms about freely giving the high accuracy positioning information they found so very useful for e.g. targeting bombs and missiles, to every unverified third party in the world. Depending on your preferred flavor of jadedness, one could say it was because of security concerns... or one could say it was because said third parties hadn't paid off the military industrial complex enough!


> Is it?

Yes.

> What happens if you remove that one last reference to a long chain of objects?

A mass free sometime vaguely in the future, based on the GC's whims and knobs and tuning, when doing non-refcounting garbage collection.

A mass free there and then, when refcounting. Which might still cause problems - but they are at least deterministic problems. Problems that will show up in ≈any profiler exactly where the last reference was lost, which you can then choose to e.g. ameliorate (at least when you have source access) by choosing a more appropriate allocator. Or deferring cleanup over several frames, if that's what you're into. Or eating the pause for less cache thrashing and higher throughput. Or mixing strategies depending on application context (game (un)loading screen probably prioritizes throughput, streaming mid-gameplay probably prioritizes framerate...)
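To make "deterministic" concrete, a small Rust sketch (the `Tracked` type and its drop counter are purely illustrative):

```rust
use std::cell::Cell;
use std::rc::Rc;

thread_local! {
    // Counts how many Tracked values have been freed so far.
    static FREED: Cell<usize> = Cell::new(0);
}

struct Tracked;
impl Drop for Tracked {
    fn drop(&mut self) {
        FREED.with(|f| f.set(f.get() + 1));
    }
}

fn main() {
    let a = Rc::new(Tracked);
    let b = Rc::clone(&a);

    drop(a); // one reference still alive: nothing freed yet
    assert_eq!(FREED.with(|f| f.get()), 0);

    drop(b); // LAST reference gone: the free happens right here,
             // every run, not at some future point chosen by a collector
    assert_eq!(FREED.with(|f| f.get()), 1);
}
```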

> You might unexpectedly be doing a ton of freeing and have a long pause. And free itself can be expensive.

Much more rarely than GC pauses cause problems IME.

