Focusing on "grabbing references": it's as easy as drag-and-drop if you use Zotero, which can copy/paste references in BibTeX format. You can even customize the format through the BetterBibTeX extension.
If you're not a Zotero user, I can't recommend it enough.
I have a terrible memory for details, so I'll admit the idea of an LLM I can just tell "Find that paper by X's group on Method That Does This And That" and that finds me the paper is enticing. I say this because I abandoned Zotero once the list of refs became large enough that I could never find anything quickly.
To be maximally pedantic, sine waves (or complex exponentials through Euler's formula), ARE special because they're the eigenfunctions of linear time-invariant systems. For anybody reading this without a linear algebra background, this just means using sine waves often makes your math a lot less disgusting when representing a broad class of useful mathematical models.
Which to your point: You're absolutely correct that you can use a bunch of different sets of functions for your decomposition. Linear algebra just says that you might as well use the most convenient one!
>They're eigenfunctions of linear time-invariant systems
For someone reading this with only a calculus background, an example of this is that you get back a sine (times a constant) if you differentiate it twice, i.e. d^2/dt^2 sin(nt) = -n^2 sin(nt). Put technically, sines/cosines are eigenfunctions of the second derivative operator. This turns out to be really convenient for a lot of physical problems (e.g. wave/diffusion equations).
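As a quick numerical sanity check of that eigenfunction claim (a minimal sketch; n, t, and the step size h are arbitrary choices for illustration):

```python
import math

# Check that sin(n t) is an eigenfunction of d^2/dt^2 with eigenvalue -n^2,
# using a central finite difference to approximate the second derivative.
n, t, h = 3.0, 0.7, 1e-4

f = lambda x: math.sin(n * x)
second_deriv = (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

print(second_deriv)    # approximately...
print(-n**2 * f(t))    # ...the same value
```

The finite-difference result matches -n^2 sin(nt) to within the truncation error of the stencil.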
Interesting, I had somewhat of an opposite reaction, although I am certainly not a mathematician. Once everything became definitions, my eyes glazed over -- in most cases the rationale for the definitions was not clear, and the definitions appeared over-complicated.
It took me some time, but now it's a lot better -- like a little game I somewhat know the rules of. I now accept that mathematicians are often worrying about maximal abstraction or addressing odd pathological corner cases. This allows me to wade through the complexity without getting overwhelmed like I used to.
My dad always told me growing up that math was like a game and a puzzle, and I hated that. I also hated math at the time. It felt more like torture than a game.
I didn't fall in love with math until Statistics, Discrete Math, Set Theory and Logic.
It was the realization that math is a language that can be used to describe all the patterns of the real world, and help cut through bullshit and reckon real truths about the world.
Not the original poster, but I want to push back on one thing -- being capable of something and being one of the best in the world at something are hugely different. Forgive me if I'm putting words in your mouth -- you mentioned "placing the bar for mathematical skill pretty low" but also mentioned running a sub-10s 100m. If, correspondingly, your notion of mathematical success is being Terence Tao, then I envy your ambition.
I do broadly agree with your position that some people are going to excel where others fail. We know there trivially exist people with significant disabilities that will never excel in certain activities. What the variance is on "other people" (a crude distinction) I hesitate to say. And whatever the solution is, if there is even a solution, I'd at least like for the null hypothesis to be "this is possible, we just may need to change our approach or put more time in".
On a slightly more philosophical note, I firmly believe that it is important to believe some things that are not necessarily true -- let's call this "feel-good thinking". If someone is truly putting significant dedicated effort in and not getting results, that is a tragedy. I would, however, greatly prefer that scenario to the one in which people are regularly told, "well, you could just be stupid." That is a self-fulfilling prophecy.
Five or six years ago my family went through all the old recipes - from old newspapers, cookbooks, etc. - that were in homes across my extended family. They then decided which to keep, and printed a new cookbook from the compilation of these recipes.
Now if we find (or author) a recipe that we really like, we send it, with any additional annotations, to my parents so that they can include it in the next print edition. It's a relatively time-intensive and expensive process, but from this point forward we should be able to maintain our family's recipes in a physical, living document form.
Maybe we don't get the yellowed pages and flour from grandma's hands on the cover, but I think it's a good system.
I've been doing this, but on my personal blog ( https://www.bbkane.com/recipes/ ). I'm really glad I got to get some of these from my grandma before she passed, and it's been a huge hit to just send a link when someone wants a recipe.
The 600 MHz-6 GHz range is a rough approximation for some of the most used bands in telecommunications, e.g. Wi-Fi and 5G NR FR1. It's worth noting that the article explicitly mentions that this filter will be useful for FR3, which is "7 GHz to 24 GHz". They do not claim full 600 MHz-6 GHz operation, and as the previous poster noted, the filter was demonstrated from 3.4-11.1 GHz.
More critically: you want to be very very careful about trying to extrapolate this filter down to lower frequencies. We're dealing with "weird physics" here. I am not an expert on spin-wave devices by any means, but a guy in my lab during grad school was working with them, so I do know that the resonant frequencies of the spin-waves are a function of the magnetic bias and the material. The researchers here are tuning the filter by tuning the magnetic bias. Someone more knowledgeable can correct me, but I believe YIG would have trouble propagating spin-waves down at 600 MHz, and so this kind of filter would not be practical.
That's true -- Laplace corresponds to a basis of complex exponentials that can grow or decay in time, instead of purely imaginary exponentials. We restrict the Ae^[(a+jb)t] domain just to Ae^(jbt) for Fourier.
From a circuit analysis standpoint (your problem may be different): exponentials that decay over time ("a" negative) correspond to loss in a circuit, whereas exponentials that grow over time ("a" positive) correspond to something blowing up (this is really a nonphysical result, but generally means a circuit is going to oscillate on its own, without a source driving that response). I mostly do electromagnetics/passive RF types of problems, in which you generally want everything to be low-loss. In that case Fourier is perfect, especially since I typically care most about steady-state behavior.
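A minimal sketch of that s = a + jb split (the R, C, and frequency values here are illustrative, not from any real circuit): for a first-order RC circuit the natural response decays like e^(-t/RC), i.e. a = -1/RC < 0, the lossy case; the Fourier restriction keeps only the non-decaying s = jb part.

```python
import cmath
import math

R = 1e3          # 1 kOhm (illustrative value)
C_farads = 1e-6  # 1 uF (illustrative value)

a = -1.0 / (R * C_farads)   # real part: decay rate set by the loss
b = 2 * math.pi * 1e3       # imaginary part: a 1 kHz oscillation

s = complex(a, b)
t = 2e-3
envelope = abs(cmath.exp(s * t))  # |e^(st)| = e^(a t), ignores the oscillation
print(envelope)                   # less than 1: the envelope has decayed
```

The magnitude of e^(st) depends only on "a", which is exactly why a pure-Fourier (a = 0) basis captures steady-state behavior but not the transient decay.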
I'm surprised you're one of the only commenters to bring this up. I have an electrical engineering background -- for analysis, lots of systems are assumed to be either linear or very weakly nonlinear, and a lot of our signals are roughly periodic. Fourier transforms are a no-brainer.
Convolution turns into multiplication, differentiation wrt time of the complex exponential turns into multiplication by j*omega. I don't know about you, but I'd rather do multiplication than convolution and time derivatives.
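A small numerical sketch of the convolution-to-multiplication property, for the discrete case: transforming a circular convolution gives the pointwise product of the transforms. A naive O(N^2) DFT is used only to keep this self-contained; in practice you'd reach for numpy.fft.

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform, X[k] = sum_n x[n] e^{-j 2 pi k n / N}
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circular_convolve(x, y):
    # Circular convolution of two equal-length sequences
    N = len(x)
    return [sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 0.0, -1.0]   # arbitrary test sequences
y = [0.5, 0.0, 1.0, 0.0]

lhs = dft(circular_convolve(x, y))              # DFT of the convolution
rhs = [a * b for a, b in zip(dft(x), dft(y))]   # product of the DFTs

print(max(abs(p - q) for p, q in zip(lhs, rhs)))  # ~0 (floating-point noise)
```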
As a corollary, once you accept "we use the Fourier representation because it's convenient for a specific set of common scenarios", the use of any other mathematical transform shouldn't be too surprising (for other problems).
I've done a ton of low-budget analog hardware debugging, and the major problem with hardware debugging is that each attempt to fix the problem takes a long time. If I tried to test every idea I had, I could easily waste a week. Not to mention that I can't just run some automated test suite after the fact. For hardware, approaching debugging methodically is a necessity, not just best practice.
We don't typically have log files for hardware, but I'm always surprised when otherwise extremely intelligent people start debugging by applying "fixes" that shouldn't have any causal effect on the weird observations we've seen. I have no problem with people coming up with theories, but because each modification takes time, each theory should ideally explain the data...
What they're saying is that the geometrical interpretation of an outwardly expanding spherical shell of power shouldn't depend on frequency. In this respect they are correct and they have a good intuition for the problem.
Now here's the catch:
If the effective receive area did not change as a function of frequency when the receive antenna gain is kept constant (it does), this would break physics (it doesn't). The effective area of an antenna with fixed gain scales as lambda^2 (A_e = G*lambda^2/(4*pi)), so it shrinks as frequency increases. The geometric interpretation is still correct, but the variation of antenna area with frequency resolves the seeming paradox and saves physics.
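A minimal sketch of that bookkeeping (the function name, power, gains, and distance are all illustrative): the spherical spreading term has no frequency dependence at all; the frequency dependence enters only through the fixed-gain effective aperture A_e = G*lambda^2/(4*pi).

```python
import math

C = 299_792_458.0  # speed of light, m/s

def received_power(p_tx, g_tx, g_rx, freq_hz, dist_m):
    # Friis link budget split into its two physical pieces
    lam = C / freq_hz
    power_density = p_tx * g_tx / (4 * math.pi * dist_m**2)  # W/m^2, frequency-independent
    a_eff = g_rx * lam**2 / (4 * math.pi)                    # m^2, shrinks as 1/f^2
    return power_density * a_eff

# Doubling frequency with the same gains quarters the received power:
p1 = received_power(1.0, 1.0, 1.0, 1e9, 1000.0)
p2 = received_power(1.0, 1.0, 1.0, 2e9, 1000.0)
print(p1 / p2)  # -> 4.0
```

The expanding spherical shell (power_density) never sees the frequency; only the capture area does.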
> the geometrical interpretation of an outwardly expanding spherical shell of power shouldn't depend on frequency
I think nobody says that it does. I believe the problem is calling the Friis transmission equation "free-space loss". Actually, the Friis formula is composed of three terms: the receiving and transmitting antenna gains, and the actual free-space loss, which has the 1/R^2 dependency (which isn't really a "loss" in energy-balance terms, since the energy isn't lost, just not received at a certain point, so we could argue about that term too...)