Just following basic rules of thumb (minimizing current loop area, length matching, impedance matching through trace width, series termination, etc.) may yield a working device, but how do you learn from that?
At some point you're already applying all those rules of thumb, but how do you then actually measure what works and what doesn't so that you can improve beyond that?
It seems difficult to find resources teaching that, and the equipment needed becomes very specialized and expensive fast.
The answer, unfortunately, is you can't. Once you start talking about USB3 or HDMI, the equipment needed to test it 'properly', as you would professionally, easily runs $10k, and can be upwards of $100k depending on the interface.
Those topics were the subject of the 3rd and 4th years of my EE degree (which was not cheap, and which involved some very specialized and expensive lab equipment).
But after 10 years in the field, I've sadly forgotten most of that and learned pragmatism instead.
I have read that even though there are no hard rules, after getting an employment-based green card, one should stay at the company for another 6 months to avoid problems during naturalization. Is this true?
On two occasions, once after skiing for a week and once after playing flight sims an entire Saturday, I had an intense (and very nice) feeling of gliding down a (glide)slope as I fell asleep.
The website actually started as a sort of joke by someone in the kernel community. Then it stuck around.
For example, I found it useful professionally. We're releasing a new hardware model. We're doing some in-depth performance tuning/evaluation. To understand our performance characteristics, I needed us to break down the performance change into improvement due to new hardware and reduction due to new mitigations.
Linking this page in the Jira issue was the fastest way to get across what we needed to look into.
It is not, at all, something I would ever recommend to lay people, or use in the actual shipping product.
Depends on the criteria. Do automated reviews count?
I set up build+test via GH Actions and have GitHub run them on PRs. And I make all changes through PRs.
This not only prevents "forgot to run tests" type accidents during regular development, it also helps me when I come back to a hobby project months later, and it lets me confidently make small changes (think version bumps) directly through GitHub's editor, which saves a bunch of time.
And when, against the odds, I actually do get a contribution, the infra is already there and it really cuts down my turnaround time to get it merged, bump the version and deploy the release.
Of course if a single direct push yields you the badge then I guess pretty much every solo dev gets it sooner or later. But that indirectly turns it back into a good badge, right? "Independent enough to have a life outside a standardized corporate process"?
Actions can still provide the feedback even without a PR if you trigger on push, you just don't have a gate like PR for contribution. I do similar to what you're describing and would recommend a feedback loop for build/test 100% of the time.
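For anyone curious, a setup along these lines (PR-gated build+test, plus a push trigger for feedback without the gate) can be sketched as a minimal workflow. The filename, branch name, and test command here are assumptions to adapt to your project:

```yaml
# .github/workflows/ci.yml — hypothetical example; swap "make test" for your project's test command
name: CI

on:
  pull_request:          # runs on PRs, so merges can be gated on the result
  push:
    branches: [main]     # also runs on direct pushes, for feedback without a PR gate

jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and test
        run: make test   # assumed entry point for the project's build+test
```

With branch protection requiring this check, a direct push to main is blocked and everything funnels through PRs; without it, you still get the red/green feedback on every push.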
I just don't think merging without review is an absolute negative; it depends on the situation. In this case no reviewers exist besides me, and from my perspective this is how a lot of projects start: someone just goes for it and starts committing! Yolo...
These contacts are occasionally scheduled with organizations (schools etc.), but unscheduled for the general public.
Exchanges tend to also be one short question (how does earth look from above right now?) and one short answer (earth looks beautiful), and then it's on to the next person. Not just due to congestion, but also because the ISS is only visible for a few minutes.
Most hams use commercially made equipment; this is just renting instead of buying.
What is protected is the actual radio spectrum. If a ham rented out his rig to a commercial entity for the purpose of transmitting business information over ham frequencies, that would be wrong and illegal. But here, the actual transmissions are ham radio transmissions with no commercial value (they contain no economically useful information), so it's (most likely) fine.
Why? Most people don't have the space, money and time to ever put up big antennas and kilowatt transmitters.
This is great for them. Especially since there's practically no setup overhead and no long term commitment - so you can use this one evening, then go back to doing SOTA with a portable QRP rig (and getting talked over by the folks with kilowatt output) the next weekend. It really opens up big station operation to entire new demographics - young people or others with not a lot of money, and urban people or others with no great location or not a lot of space. Ham radio has historically not been that diverse, and if this helps with diversity that's great.
I've used a big station before (at my university. Big tower, big antenna stack, big amplifier). Sure I didn't get to build it, but just operating that thing in a busy contest environment is an experience and made me appreciate how much skill there is in the actual operation of it. I wouldn't want to do it all the time, but I actually know people who do, and myself I don't want to miss the experience of having done it.
I worked in an office meant for 80 that only ever had 35. Room to grow! It was great. Then 30 people quit.
Later I worked in a skyscraper. Our floor was never exactly full, then something like 80% of the people on the floor were laid off. Entire teams gone.
Eerie is a good way to describe it. Being more or less alone with so much space, especially when you remember better days, is not fun.
I had the same experience - facilities dutifully kept doing their thing. They also worked through one-week business shutdowns. The first time, food kept coming too.
Now I work from home, which can be a different kind of isolating. If people left (one way or the other) I'd hardly notice unless they were on my team. (I still prefer the home office overall, but I'm eager to see my coworkers again occasionally.)