hnthrowaway0315's comments (Hacker News)

> Scientists from China, Russia, Iran, North Korea, Cuba, Venezuela, and Syria are considered “high risk.”

I think this makes sense from a national security perspective (although I doubt there are any scientists from these countries working on sensitive projects, except maybe from China). Since it is too much trouble to figure out who is a spy, they might as well ban all of them for the moment.

I do feel a strong nostalgia for the globalization era between the 90s and the 2010s, when I spent most of my life. But I understand it has come to an end, and I'm going to spend the second half of my life in a much more splintered world.


This list of high-risk countries is not new (with the possible exception of Venezuela, which may have been added recently; I'm not sure). Researchers with these citizenships have faced extra security review before joining NIST for years, and last year the lab increased the level of security review for everyone (not just this list).

I can understand a clearly communicated need for additional security requirements. But NIST operates almost totally in open-science mode, with the main exception being industry cooperative agreements. I don't think this move to shed international researchers by reneging on commitments from the lab has been justified at all from a security standpoint.


So as to not mislead anyone who didn't read the article, the section following your quoted text is:

> Researchers from lower risk countries have been told they could lose access beginning in either September or December if at that point they have been at the lab more than 2 years or, under a waiver, 3 years.

In other words: they're also looking to bar foreign nationals outside of that quoted list, which to my mind is less understandable.


It makes sense to stop poaching talented scientists and instead let them continue working for your adversaries? I don't understand how this improves national security. The proposed rule is actually worse than this:

> The changes are part of proposed rules aimed at increasing security that would limit, to 3 years, the maximum length of time visiting international researchers can work at NIST.

If researchers know that they cannot stay in the US permanently and will be forced to return to their home country in a few years, it guarantees that they must maintain ties to that home country and dramatically increases their incentive to spy. What would you do if your government asked you to spy during a temporary stay abroad, and threatened you with arrest upon your return if you refuse?


It doesn’t make sense from a national security perspective actually.

A better plan would be to encourage skilled immigration and offer compelling benefits and stability like family visas, free movement, and so on. That way, the best people would make their contributions to science and society here. It’s actually a masterstroke because it deprives other countries of their best people.

The current administration is filled with weak men and therefore chose policies that look “strong” but are actually rooted in personal insecurity


There have been many cases of US-born citizens selling secrets to foreign powers (same here in the UK).

As a side note (tangentially related), I wonder if the US would have gained nuclear capabilities if it weren't for foreign scientists.


>makes sense from a national security perspective

Does it? AFAIK NIST doesn't work on national security relevant research.


Oh my god the national security! Someone make up the hypothetical situations the national security might be compromised without proof of any of it! Let me pull out my wallet and take out my national security detector…yep it’s lower than before! Quick pile on the propaganda!

It's an institute that's about setting international standards. It's not secret, by definition. You can just visit their web page and read their publications.

Just feels like side effects of poorly thought out rules from above.


Exactly how many North Korean scientists are working in the U.S. right now?

Not a lot, but what is your point exactly? There are a lot of Chinese scientists working in the US, and the ones who are postdocs and research scientists at NIST are apparently being pushed out at the end of this month. They've already been vetted for security concerns, so that justification is kind of thin.

How many Taiwanese, German, Indian, French, South Korean, etc scientists are working in the US? The ones working at NIST are facing being pushed out at the end of September.


[flagged]


That was indeed the logic then. Keep in mind, though, that the internment was based on 'race' and 'ethnicity'. This action is based on citizenship, and it's a job limitation, not a forcible relocation into an open-air prison.

I'm with you on the difference between labour limitations and imprisonment but

> Keep in mind though that the internment was based on 'race' and 'ethnicity'. This action is based on citizenship

You say this like it's a meaningful distinction?


Surely it is? It would be a very different policy to say "anyone vaguely asian is banned from the lab, even if they're an American citizen"

Yes, we reserve additional scrutiny, roundup, and detention for people who are vaguely Latino.

It's a massive distinction. I live in Germany and if the government said I couldn't have a job because of my race or ethnicity I think that would be majorly problematic. If they said I couldn't because I'm American I would think that restriction should be reserved for narrow circumstances (secure projects) but it's generally acceptable.

Why?

I think the same method might be used again in a future conflict with China, when the question of life and death becomes serious. Not saying that I LIKE it, but I think it is at least plausible, and with a non-insignificant (note the double negation) possibility.

Man, if there were only something more reasonable... something in-between letting them spy at will and concentration camps. Hmmm, maybe we will think of something eventually.

Ok, then let them spy continuously, I guess, and carry the know-how home. Even to countries openly hostile to you.

I mean, it is unfair for sure, but it's not a given right. If, for example, Chinese citizens are literally breaking their own law when they refuse to spy, what else can you do?


But aren't they happy you bring them democracy? I am confused..

I think Windows 95/2000 and the contemporary MacOS (including the then-future MacOS X) have the best UI of everything I used in my 30+ years of tech life.

I sincerely hope that one day we could go back to that road. If you want that achieved, please support me in joining Apple/Microsoft to become the UI boss: fire all the flat-design people, hire a small team to implement the older UI, then give a few passionate talks on edX and at conferences so the people who supported flat UI magically support the older UI. They always follow whoever the lead is, like headless flies.

LOL.


After nearly 30 years of tech life myself, I've come to the realization that the best UIs are not graphical. They can have graphical elements mostly for visualization purposes, but all of them should be as minimal and unobtrusive as possible. Any interactivity should be primarily keyboard-driven, and mouse input should be optional.

Forcing users to click on graphical elements presents many challenges: what constitutes an "element"; what are its boundaries; when is it active, inactive, disabled, etc.; if it has icons, what do they mean; are interactive elements visually distinguishable from non-interactive elements; and so on.

A good example of bad UI that drives me mad today on Windows 11 is something as simple as resizing windows. Since the modern trend is to have rounded corners on everything, it's not clear where the "grab" area for resizing a window exists anymore. It seems to exist outside of the physical boundary of the window, and the actual activation point is barely a few pixels wide. Apparently this is an issue on macOS as well[1].

Like you, I do have a soft spot for the Windows 2000 GUI in particular, and consider it the pinnacle of Microsoft's designs, but it still feels outdated and inefficient by modern standards. The reason is that it follows the visual trends of its era, and it can't accommodate some of the UX improvements newer GUIs have (universal search, tiled/snappable windows, workspaces, etc.).

So, my point is that eschewing graphics as much as possible, and relying on keyboard input to perform operations, gets rid of the graphical ambiguities, minimizes trend-following (making the UI feel timeless), and makes users feel more in command of their experience, which makes them quicker and more efficient.

This UI doesn't have to be some inaccessible CLI or TUI, although that's certainly an option for power users, but it should generally only serve to enable the user to do their work as easily as possible, and get out of the way the rest of the time. Unfortunately, most modern OSs have teams of designers and developers that need to justify their salary, and a UI that is invisible and rarely changes won't get anyone promoted. But it's certainly possible for power users to build out this UI themselves using some common and popular software. It takes a bit of work, but the benefits far outweigh the time and effort investment.

[1]: https://news.ycombinator.com/item?id=46579864


The issue with this type of design is that it completely tanks discoverability. Every visual UI element trimmed is another pit of confusion for less-technical computer users.

Modern UIs aren't great with discoverability either, however, and are not an example that should be followed.


That's not necessarily the case. In fact, if implemented well, keyboard/command-driven UIs can be much easier to discover than GUIs.

Consider the "Command Palette" and similar features that are part of many UIs (VS Code, Obsidian, Vim, Emacs, etc.). It allows the user to search all possible actions using natural language, and see or assign key bindings to them, so that they can get to their most commonly used actions faster. This search can be global for the entire program, or contextual for the current view.

It is far easier to search for what you want to do than to learn what action every GUI element is associated with, or to navigate arbitrarily nested menu hierarchies. This does require the user to be somewhat familiar with the domain language in order to know what to search for, but this too can be simplified: actions can have different names, etc. It also makes the program more accessible for speech navigation, screen readers, and so on.


> The issue with this type of design is that it completely tanks discoverability.

There are still ways to help, such as having a menu bar, and having good documentation. (Documentation is more important, in my opinion; but both are helpful.)


Pointers are still very useful for many paradigms. Think about something like Blender or a game level editor: there can be _a lot_ of controls visible at once, trying to navigate them all with the keyboard is just unfeasible. And doing a fully context sensitive setup to limit visible controls, like the MS Office Ribbon, is also infeasible because the changes would be happening almost continually as different objects are selected and modes are chosen.

Your bad UI example of resizing windows is less about the round corners or the lack of an obvious grab area (handle) than about the handle being way too small. It's a couple of pixels (maybe just one?) wide/tall on screens that are thousands of pixels wide! It's just too easy to overshoot. I'd say it comes from the obsession with minimalism and flat design, such that there is almost no visible separate border to act as a target. Combined with trying to remove ambiguity as to which window the click should go to (if you click two pixels "outside" a window, should the click go to the window beneath, or be interpreted as trying to grab the border?), the grab handles are tiny, almost matching the actual (lack of) pixels of the border, instead of being a usable target to click on.
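A toy hit test makes the trade-off concrete. The threshold value and every name here are made up for illustration; real compositors do something far more involved:

```python
# Toy window-edge hit test illustrating the grab-area problem described
# above. BORDER is the resize handle's half-width in pixels; the complaint
# is that modern compositors effectively set it to 1-2 px.

BORDER = 6  # a humane handle width; shrink to 1 to feel the problem

def hit_test(x, y, win):
    """Classify a click against a window rect given as (left, top, width, height)."""
    l, t, w, h = win
    if not (l - BORDER <= x <= l + w + BORDER and
            t - BORDER <= y <= t + h + BORDER):
        return "outside"           # click belongs to the window beneath
    on_left   = abs(x - l) <= BORDER
    on_right  = abs(x - (l + w)) <= BORDER
    on_top    = abs(y - t) <= BORDER
    on_bottom = abs(y - (t + h)) <= BORDER
    if on_left or on_right or on_top or on_bottom:
        return "resize"            # cursor is on, or just outside, an edge
    return "client"                # ordinary click inside the window

win = (100, 100, 800, 600)
print(hit_test(97, 300, win))   # "resize": 3 px outside the left edge
print(hit_test(400, 300, win))  # "client"
```

Letting the handle extend slightly past the visible border (as above) is one way to resolve the ambiguity you describe in favor of the frontmost window.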

To me it points to a lack of usability testing, or at least a lack of generalized usability testing, i.e., they tested their own workflows, which seem to involve leaving windows as the OS creates them or maximizing everything, with not much resizing at all. Similarly, thoroughly testing a [mostly] keyboard interface is tough without providing a thorough cheat sheet. You know the commands because you made them, so it's easy to test how you work, but others need to learn them first.


> After nearly 30 years of tech life myself, I've come to the realization that the best UIs are not graphical. They can have graphical elements mostly for visualization purposes, but all of them should be as minimal and unobtrusive as possible. Any interactivity should be primarily keyboard-driven, and mouse input should be optional.

I agree that interactivity should be primarily keyboard-driven. However, mouse input is useful for many things as well: if there are many things on the screen, the mouse can be a useful way to select one, even if the keyboard can also be used (if you already know what something is, you can type it in without having to know where on the screen it is; if you do not, you can see it on the screen and select it with the mouse).

> Forcing users to click on graphical elements presents many challenges: what constitutes an "element"; what are its boundaries; when is it active, inactive, disabled, etc.; if it has icons, what do they mean; are interactive elements visually distinguishable from non-interactive elements; and so on.

At least older versions of Windows had a more consistent way of indicating some of these things, although sometimes they did not work very well, often they worked OK. (The conventions for doing so might have been improved, although at least they had some that, at least partially, worked.)

> A good example of bad UI that drives me mad today on Windows 11 is something as simple as resizing windows. ... it's not clear where the "grab" area for resizing a window exists anymore

I had just used ALT+SPACE to do things such as resize, move, etc. I have not used Windows 11, so I don't know if it works there, but I would hope it does if Microsoft wants to avoid confusing people. On other, older versions of Windows, even when they moved everything around, I was able to cope because most of the keyboard commands still work the same as in older versions, which is helpful (for example, you can still push ALT+TAB to switch between full-screen programs, ALT+F4 to close a full-screen program, etc.; I don't know whether there is any other way to do such things). Still, many of the changes will cause confusion despite this, or other problems, in that they removed useful things in favor of less useful or more worthless ones.


> Forcing users to click on graphical elements presents many challenges: what constitutes an "element"; what are its boundaries; when is it active, inactive, disabled, etc.; if it has icons, what do they mean; are interactive elements visually distinguishable from non-interactive elements; and so on.

There are standards and common conventions for a lot of this in the Windows 9X/2000 design language, and even in basic HTML. These challenges could have been solved (for some values of "solved") by using them consistently, and I think we might have been there for a little while, at least within the Windows bubble. The fact that we threw all of those out the window with new and worse design, then did that again a few more times just to make sure users learned never to bother actually learning the UI (since it will just change on them anyway), doesn't entail that this is an unsolvable problem (well, it might be now, but I doubt it was back in 1995).

> Like you, I do have a soft spot for the Windows 2000 GUI in particular, and consider it the pinnacle of Microsoft's designs, but it still feels outdated and inneficient by modern standards. The reason for this is because it follows the visual trends of the era, and it can't accomodate some of the UX improvements newer GUIs have (universal search, tiled/snappable windows, workspaces, etc.).

I fail to see why any of these features couldn't be implemented within the design constraints of the Windows 9X/2000 design language. There are certainly technical constraints, but I can't see any design constraints. They were never implemented at the time, and those features didn't become relevant until we'd gone through several rounds of different designs, so we never had the opportunity to see how it would work out.


> There are standards and common conventions for a lot of this in the Windows 9X/2000 design language, and even in basic HTML. These challenges could have been solved (for values of) by using them consistently [...]

The thing is that GUIs naturally have to evolve to cater to their user base. The "office" metaphor was useful in the 1980s and 90s for making computing familiar to people who were used to "desktops", "folders", "files", etc. Some of these terms still exist today, but the vast majority of users can't relate to it, so it's meaningless to them.

This is why GUIs will always have to change and adapt to trends, which will always cause friction for existing users.

My point is that by minimizing the amount of graphical elements (note: not completely eliminate them), we minimize the amount of this friction. The difficult thing is, of course, maintaining the appropriate balance of all elements while optimizing for usability, which is ultimately very subjective.

But consider that CLIs are effectively timeless. The friction comes from their lack of discoverability, arcane I/O, every program can have a different UI, etc. And yet this interface has persisted and has largely remained the same for decades. Most programs rarely change their CLI, so the user only needs to learn a few commands to be productive.

So I think that the most usable UI is somewhere in the middle. It should avoid the constant churn of GUIs, and be more accessible than CLIs. This is possible to build for power users, but it can also be made approachable for less technical users.

> I fail to see why any of these features couldn't be implemented within the design constraints of the Windows 9X/2000 design language.

That's true. But then again, what exactly is the Windows 9x/2000 design language, and what makes it better than the modern Windows GUI? Is it the basic Start Menu? The taskbar with labeled blocks for each window instead of icons? The square instead of rounded windows? The lack of smooth transitions, transparency, and graphical effects? The overall brutalist theme?

We can certainly add all the features I mentioned to Windows 9x/2000, and we had some of them even back then via 3rd party tools, but isn't that essentially what modern Windows has become? There are ways to revert some Windows 11 features today with alternative shells and such, so is that the ideal UI then?

When I think of Win2k, I think of the overall simplicity. This is mostly due to nostalgia rather than any practical reason. I'm sure I couldn't stand using its barebones UI today, as much as I would enjoy the simplicity for a brief moment.


> The thing is that GUIs naturally have to evolve to cater to their user base. The "office" metaphor was useful in the 1980s and 90s for making computing familiar to people who were used to "desktops", "folders", "files", etc. Some of these terms still exist today, but the vast majority of users can't relate to it, so it's meaningless to them.

We still 'dial' with our phones, even though phones haven't had dials in over 50 years by this point. Nobody would even explain phones using that metaphor anymore. Even just having a foundation of common terminology is helpful in teaching people new systems.

> This is why GUIs will always have to change and adapt to trends, which will always cause friction for existing users.

I fail to see the connection.

> My point is that by minimizing the amount of graphical elements (note: not completely eliminate them), we minimize the amount of this friction. The difficult thing is, of course, maintaining the appropriate balance of all elements while optimizing for usability, which is ultimately very subjective.

This is true in today's world, but not necessarily in a world where the UI language of computers is stable and users can trust their computers not to change out their understanding of the system from underneath them. If all buttons had the same hints to tell a user 'I'm a button', in the same way default HTML links tell users 'I'm a link', then we could trust users to have this understanding.

> But consider that CLIs are effectively timeless. The friction comes from their lack of discoverability, arcane I/O, every program can have a different UI, etc. And yet this interface has persisted and has largely remained the same for decades. Most programs rarely change their CLI, so the user only needs to learn a few commands to be productive.

It's remained true in a small niche of power users, while for the rest of the world this environment might as well not exist (beyond the functionality it provides to them after it's been filtered through several layers). CLIs are an irrelevant dead end in the story of user-accessible design; one there are probably some lessons to take from, but not one to entertain in any serious manner.

> That's true. But then again, what exactly is the Windows 9x/2000 design language, and what makes it better than the modern Windows GUI? Is it the basic Start Menu? The task panel with blocks for each window instead of icons? The square instead of round windows? The lack of smooth transitions, transparency, and graphical effects? The overall brutalist theme?

Yes.

> We can certainly add all the features I mentioned to Windows 9x/2000, and we had some of them even back then via 3rd party tools, but isn't that essentially what modern Windows has become? There are ways to revert some Windows 11 features today with alternative shells and such, so is that the ideal UI then?

The classic theme survived up until Windows 7, and I'll give that a pass, since although there still are holes where the newer design language of Windows peeks through, it's stayed mostly consistent, and even managed to add new features without breaking the design language to fit them.

Then that died with Windows 8, and there's been no hope for consistency in UI language since. The dream of a casual user being able to learn a UI and stick to it is dead, since even if they do, it will just change out from underneath them. That's why they don't even bother. Heck, even I barely bother.

> I'm sure that I couldn't stand using its barebones UI today, as much as I would enjoy the simplicity for a brief moment.

I disagree. I don't use many modern UI features, and the few that I do use, like snappable windows, are things I can imagine working within the old design language. I still write documents using a copy of Word 2000 in a Win2K VM every now and then, and when I don't use that, I use LibreOffice, a program many people refuse to use because it looks ancient to them. That's a feature for me. It not changing and thus not breaking my workflow is a huge feature that nothing in Windows 11 can even hope to compare with.


This might work well for power users, but common users may get frustrated. I do think the classic UI plus keyboard-driven menu clicks (e.g. ALT + some key combination) is the best case, where power users can mostly use keystroke combinations to navigate the menus, while common users can click with a mouse.

Whatever the UI it is, being consistent is the most important. But sadly as you said, UI designers need to eat, too.


> I think Windows 95/2000 and the contemporary MacOS (including the then future MacOS X) have the best UI in everything I used in my 30+ years of tech life.

Agreed. I do wonder how much of it is personal, in that that UI hit at a certain formative time in my life. But ever since then it's been the benchmark by which I evaluate all other UIs. The lack of a "classic" mode in Win10 was one thing that motivated me to switch fully to Linux. To make the switch, I spent a good amount of time trawling themes to find one that mimics the look of Win95/98/2000. (The one I use is a KDE theme called "Reactionary".)


I think it is both nostalgic and pragmatic. I don't have to second-guess what the point of a widget is, or even whether there is a widget. In the default MacOS setting, the scrollbar is invisible, which in some cases literally cost me many minutes just to discover that I could actually scroll down to find more options. That was completely crazy!

> I do wonder how much of it is personal, in that that UI hit at a certain formative time in my life. But ever since then it's been the benchmark that I evaluate all other UIs by.

I know some of my preferences for UIs are informed by what I first really learned how to use. But I also have preferences that are informed by decades of heavy computer use.

I despise UI widgets that just look like the window background, with no borders or shadows. I can't stand massive amounts of useless white space. UI widgets don't require oxygen to survive, so they don't need to fucking "breathe" that much. I also despise mystery-meat UIs that change their arrangement because I clicked one button more often than another.

Everything that increases my cognitive load and doesn't allow me to build up muscle memory in a UI is supremely frustrating. I might like the "look" of Mac System 7, it was a great intersection of functional and whimsical in my opinion. The consistent behaviors and learnable interface go beyond subjective visual appeal however.


Another thing is that part of my liking for the UI of that time is connected with the fact that it was consistent across all apps. Like you could set the font and color for a menu bar and every app would have that. This era of web apps drives me nuts because now it's switched to "this app should look the same for all viewers", when I want it to be "all apps should look the same when I use them".

The lack of consistency in web apps and Electron apps is infuriating. The inconsistency increases my cognitive load switching apps. Then you've got in-browser web apps aliasing system keyboard shortcuts.

Modern software feels so primitive in terms of UI compared to software from twenty and thirty years ago.


Yep. I always cite XP as being Windows's peak, but I forgot that it shipped with their insulting Fisher-Price motif enabled by default. Step 1 was to switch the UI to "classic" (essentially Windows 95) mode, and all was well.

Windows 95 is a great case study because with that release, Microsoft did more for GUIs than Apple did through the entire decade of the '90s... and beyond.

All of it is now out the window (pun invited). It's a race to the bottom between Microsoft and Apple, with Microsoft having a HUGE head-start. But Apple has really stepped up to the plate with Tahoe, crippling it with big enough UI blunders to keep them in the enshittification game.


XP, in its early betas, had a slightly upgraded 9x interface called Watercolor [1], and if they'd kept it, surely the majority would have picked it over the plastic Luna.

Early experiments with a totally new theme were rather unpleasant [2], and Watercolor was abandoned in favor of keeping the more familiar 9x-looking theme as an option. W11 still ships with that old 9x widget look, slightly flattened because of the trend, but still buried beneath for compatibility reasons. And I'm pretty sure they won't get away from it the way Apple got away from Platinum with Aqua.

[1] - https://betawiki.net/wiki/Watercolor

[2] - https://betawiki.net/wiki/Windows_XP_build_2416#Gallery


I always installed Watercolor on a new computer. It's still beautiful, and definitely the look they should have chosen; it played to their strengths.

I think they were so caught off guard by how incredible Mac OS X _looked_, that they didn't realize it wasn't just veneer, but a genuine evolution and improvement of how Mac OS _worked_. This became Apple's competitive advantage for over a decade as Microsoft chased different styles while consistently botching how it would impact usability.


I really liked XP (and 7) because for me, having a capable theming engine built in that didn't take a ton of extra resources or cause instability (unlike Stardock's WindowBlinds) was a real value add. There were some absolutely gorgeous third party XP/Vista/7 themes on sites like DeviantArt that worked extremely well within the limits of the engine, had a unique look and feel, and were just as usable as the "classic" theme.

When MS gutted the theming engine with the release of Windows 8 (flat rectangles only) I was devastated.


Absolutely. I always hated the inverse color scheme that Windows defaulted to, but that was OK because Windows had its color-scheme editor that let anyone create a global scheme he liked. I created a charcoal-toned one that was right in line with today's "dark" themes, and used it throughout the '90s.

Then Microsoft buried and ultimately removed the color-scheme editor... just in time for people to realize that inverse schemes suck.

So now Microsoft and Apple have dribbled out hard-coded "dark" themes, which every application developer has had to cobble together support for separately. Windows had this problem solved more than 30 years ago. Think about it. But then they deleted the solution from their product.

At least Apple NEVER had a proper global color palette for its UI. The fact that Microsoft did, but shitcanned it, stands testament to its complete abandonment of anything resembling good design. Hell, you can't even select multiple PNGs in Explorer and say "Open with..." anymore. The option is just totally GONE. Windows is rife with regressions like this. It's unbelievable.

Design is getting shittier and more ignorant daily. It's depressing.


The engine itself isn’t gutted - it’s full of functionality that was never lost. MS just (correctly) reasoned that transparency effects in the UI - introduced in Vista simply to show-off the capabilities of the DWM compositor - ultimately detract from a good UI.

From what I remember, it lost the ability to render rounded window corners: while Windows 8 msstyle themes existed, they all had hideous boxed corners that clashed hard with many looks.

I don’t agree that transparency is always a detractor. Judicious use can be a net positive, but it doesn’t work for all themes and there should be an option to turn it off. Personally I didn’t find the W7 variation of Aero to be bad at all.


> From what I remember it lost the ability to render rounded window corners,

...I'm guessing you haven't used Windows 11?

--------

By "rounded corners" are you referring to rounded-off corners in the nonclient area (such that the hWnd's rect is not clipped at all)? If so, then no: those would be rendered using a 9-grid[1] and have always been supported.

If you're referring to how so many fan/community-made msstyles for Windows 10 retain the sharp corners, I understand that's not a limitation of DWM or msstyles; it's just that you need to do a lot of legwork when defining nontrivial corners in an msstyles theme. It can be done (there are plenty of examples online, e.g. look for Windows XP's style ported to Windows 10); most people just don't go that far.

-----

[1] In msstyles, the 9-grid defines how a rectangular bitmap is stretched/scaled/tiled to fill a larger area; it's very similar to how CSS image borders are defined with `border-image-slice`.
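For illustration, here is a rough sketch of how a 9-grid partitions a source rectangle. The function name, parameter names, and margin sizes are my own, not the msstyles format:

```python
# Sketch of the 9-grid (nine-slice) partition described above: a source
# rectangle is cut by four inset margins into 4 fixed corners, 4
# stretchable edges, and 1 stretchable center region.

def nine_grid(width, height, left, top, right, bottom):
    """Return the nine sub-rects as (x, y, w, h) tuples, row by row."""
    xs = [0, left, width - right]              # column origins
    ws = [left, width - left - right, right]   # column widths
    ys = [0, top, height - bottom]             # row origins
    hs = [top, height - top - bottom, bottom]  # row heights
    return [(x, y, w, h) for y, h in zip(ys, hs) for x, w in zip(xs, ws)]

cells = nine_grid(32, 32, 8, 8, 8, 8)
print(cells[0])  # top-left corner (0, 0, 8, 8): drawn unscaled
print(cells[4])  # center (8, 8, 16, 16): stretched or tiled to fill
```

When rendering to a larger target, the four corner cells keep their pixel size while the edge and center cells are scaled, which is exactly what lets a small bitmap dress an arbitrarily large window frame.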


I’m speaking specifically about Windows 8/8.1. Obviously 11 and the new Fluent design language it brought don’t suffer the same issue.

Whatever the case, rounded corners on the titlebars and window chrome were common in XP/Vista/7 custom msstyles but were nowhere to be seen for 8/8.1 custom msstyles. It was one of the most frustrating aspects of that era of Windows for me.


Hmm, yes; I think you're right. I honestly don't know the explanation behind that, sorry.

I think the reformists may be able to hold on now that the IRGC is being hammered. There might be more internal bloodshed, but chances are Iran will become a bit more open and modern. Of course I have zero knowledge of how Iranian politics works, so that was just a guess, not even an intelligent one.

BTW I don't actually think even the reformists will "accept Western ideas".


If the hard-line IRGC generals went with him, then it might be a good thing for the economy. I have heard rumors that China was frustrated that the IRGC pushed against the deals and was not willing to accept foreign investment in key oil/infra projects because they sit on them -- and that was why China never put down any real investment after signing the deals.

Why would a regime that came to be, ultimately, precisely because of foreign meddling in resource extraction ever entertain more foreign meddling in resource extraction, especially when it's levered with "or else we'll kill you."?

The IRGC, or whoever succeeds them, should wise up and stop hedging on whatever the next deal with the US/EU turns out to be.

I think the biggest problem of the IRGC is that they grabbed a large share of the economy but spent a lot of it on geopolitical expansion over the last 1-2 decades. This in turn contributed to a more fragile Iranian economy and high inflation, which makes them extremely unpopular among the people.

No matter who steps up, Trump's actions make clear they have only two options:

1. Permanent subjugation to western countries, to be unilaterally abused whenever they felt like it, or

2. Race to the nuke as fast and as secretly as possible


Ah, is it the time when Skynet starts to manifest itself...

Ah, I was supposed to read Hyperion for a long, long time :/

This. It is just a mental drug.


so is love

this level of reductive thought termination goes nowhere


Thank you. I love the wallpapers of Paged Out and always set one as my default wallpaper on macOS.


What's the point of this web page?


I have given the topic some thought. I concluded that the ONLY way for ordinary people (non-genius, IQ <= 120) to get really good, to get close to the geniuses, is to sit down and condense the past 40 or so years' tech history of three topics (Comp-Arch, OS and Compiler) into 4-5 years of self-education.

Such an education is COMPLETELY different from the one offered in most schools, but closer to what the premium schools (MIT/Berkeley) offer. Basically, I'd call it "software engineering archaeology": students take on ancient software, compile it, and figure out how to add new features.

For example, for the OS kernel branch:

- Course 0: MIT xv6 lab, then figure out which subsystem you are interested in (fs? scheduler? drivers?)

- Course 0.5: System programming for modern Linux and NT, mostly to get familiar with user space development and syscalls

- Course 1: Build Linux 0.95, run all of your toolchains in a docker container. Move it to 64-bit. Say you are interested in fs -- figure out the VFS code and write a couple of fs for it. Linux 0.95 only has Minix fs so there are a lot of simpler options to choose from.

- Course 2: Maybe build a modern Linux, like 5.9, and then do the same thing. This time the student is supposed to implement a much more sophisticated fs, maybe something from SunOS or WinNT that isn't there.

- Course 3 & 4: Do the same thing with leaked NT 3.5 and NT 4.0 kernel. It's just for personal use so I wouldn't worry about the lawyers.

For reading, there are a lot of books about Linux kernels and NT kernels.
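For Course 0.5, getting comfortable with syscalls can start very small. A sketch in Python (the `os` module's file-descriptor functions are thin wrappers over the corresponding Linux syscalls - open(2), write(2), lseek(2), read(2), close(2)):

```python
import os
import tempfile

# Work in a throwaway directory; "demo" is just an illustrative filename.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "demo")

    # os.open/os.write/os.lseek/os.read/os.close operate on raw file
    # descriptors, mirroring the underlying syscalls one-to-one.
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.write(fd, b"hello, kernel\n")
    os.lseek(fd, 0, os.SEEK_SET)   # rewind to the start of the file
    data = os.read(fd, 64)
    os.close(fd)
```

Running the same sequence under `strace` is a nice bridge to Course 1: you can watch each wrapper turn into the syscall you'll later be implementing the other side of.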

