Hacker News | NoiseBert69's comments

You’re considering whether it would be possible - and perhaps quite elegant - to use an XY‑scanner to raster‑scan the end of an optical fiber across a prism, disperse the light, and then capture the resulting spectrum with a CCD line sensor.

With that setup, the line sensor would record the full spectrum of the light at each scanned position, one position per acquisition.


You could probably use just an X-scanner, and instead of a CCD line sensor, use a regular 2D image sensor if you used a "1 pixel wide" slit aperture to crop the image perpendicularly to the direction that the prism disperses the light. So instead of a single pixel being dispersed, you disperse a line.

You would reduce the time required by the square root of the number of pixels you want (assuming a square image): an N×N image needs N line scans instead of N² point scans.

(This is what we do in momentum-resolved electron energy loss spectroscopy. In that situation we have electromagnetic lenses that focus the electrons that have been dispersed, so we don't have as bad a chromatic aberration problem as the other response mentions).
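The line-scan idea above can be sketched as a toy NumPy model (a simulation of the geometry only, not real optics): each slit position yields one full (y, wavelength) sensor frame, so nx frames rebuild the cube instead of ny·nx point spectra.

```python
import numpy as np

def pushbroom_scan(scene):
    """Toy pushbroom scan of a scene hypercube with shape (ny, nx, nbands).

    The slit selects one column x; the prism disperses that line across
    the 2D sensor, so each scan step captures an (ny, nbands) frame.
    nx frames rebuild the full cube, vs ny*nx single-point acquisitions.
    """
    ny, nx, nbands = scene.shape
    frames = [scene[:, x, :] for x in range(nx)]  # one sensor frame per slit position
    return np.stack(frames, axis=1)               # reassembled (ny, nx, nbands) cube
```

Stacking along axis 1 puts each frame back at its slit position, so the reconstructed cube matches the original scene exactly in this idealized model.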

I would love to see e.g. a butterfly image with a slider that I could drag to choose the wavelength shown!!


> I would love to see e.g. a butterfly image with a slider that I could drag to choose the wavelength shown!!

Here[1] are some 31-band hyperspectral images of butterflies. NumPy/Pillow can unpack the .mat files into normal images. Then perhaps vibecode a slider, or just browse the band images?

[1] http://www.ok.sc.e.titech.ac.jp/res/MSI/MSIdata31.html (includes 8 butterfly 31-band hyperspectral visible-light images). These butterflies are also in their VIS-SNIR dataset, and others.
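A minimal sketch of such a band-slider viewer, assuming SciPy can read the .mat files and that the cube is the first non-metadata variable in the file (the variable name and the `butterfly.mat` filename here are guesses, not taken from the dataset):

```python
import numpy as np

def band_to_u8(cube, i):
    """Normalize band i of an (h, w, nbands) cube to a uint8 grayscale image."""
    band = cube[:, :, i].astype(float)
    lo, hi = band.min(), band.max()
    band = (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)
    return (band * 255).round().astype(np.uint8)

def show_slider(mat_path):
    """Browse the bands of a .mat hyperspectral cube with a matplotlib slider.

    The cube is taken to be the first non-metadata variable in the file;
    check with scipy.io.whosmat(mat_path) if that guess is wrong.
    """
    from scipy.io import loadmat
    import matplotlib.pyplot as plt
    from matplotlib.widgets import Slider

    data = loadmat(mat_path)
    cube = next(v for k, v in data.items() if not k.startswith("__"))
    fig, ax = plt.subplots()
    im = ax.imshow(band_to_u8(cube, 0), cmap="gray")
    sl = Slider(fig.add_axes([0.2, 0.02, 0.6, 0.03]), "band",
                0, cube.shape[2] - 1, valinit=0, valstep=1)

    def update(v):
        im.set_data(band_to_u8(cube, int(v)))  # redraw the selected band
        fig.canvas.draw_idle()

    sl.on_changed(update)
    plt.show()
```

Usage would be something like `show_slider("butterfly.mat")` after downloading one of the .mat files from the linked page.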

I knew of the site from having explored "First-tier physical-sciences graduate students are often deeply confused about color. Color is commonly taught, starting in K... very very poorly. So can we create K-3 interactive content centered around spectra, and give an actionable understanding of color?"


Very nice idea! That makes it much easier!

A problem for multispectral imagery (even within visible RGB) is that the wavelengths of light are different, so the lens cannot be in focus for all of the spectrum at once. I have tested this out with a few of my SLR lenses. If you have the blue channel perfectly in focus, red isn't just a little out of focus, it is actually noticeably way out.

This is called chromatic aberration, for those who are intrigued.

Given that regular phone cameras have sensors that detect RGB, I wonder if one could notice improved image sharpness if one had three camera lenses (and used single-color sensors) next to one another laterally, with a color filter for R, G and B for each one respectively. So that the camera could focus perfectly for each wavelength.


The next issue would be the perspective distortion in the merged image.

There are lenses out there designed for apochromatic performance across the UV-Vis-IR band, but they tend to be really pricey.

The Coastal Optical 60mm is a frequently cited one. UV in particular is challenging, because glass that works well in the visible range can transmit UV quite poorly. Quartz is better, but drives up the cost a lot, and comes with other tradeoffs.


I've had this problem as well, but it's just due to optical properties of the lens and extremely consistent from image to image, so you can calibrate and correct for it as long as you focus each wavelength and collect data separately.

I don't think you can properly calibrate for it unless you also move the camera to compensate for focus breathing. I'm not sure if that would fully account for it either. That being said, these things are only really noticeable when pixel peeping.

Focus breathing can be compensated for. The "breathing" only changes the effective focal length, not the location of the camera, so you can map the pixels to match where they should be and bilinear/bicubic interpolate appropriately.

Shoot a checkerboard at both wavelengths each focused properly and then compute the mapping.

If you're shooting macro stuff then maybe you are changing the effective location of the camera slightly depending on the exact mechanics of the lens and whether the aperture slides with the focusing, but the couple of mm shift in camera location won't matter for landscapes.
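A minimal sketch of that calibration, assuming focus breathing is a pure magnification about the image center (corner detection, e.g. with OpenCV's `findChessboardCorners`, is left out, and the function names here are made up for illustration): fit the scale factor from matched checkerboard corners, then resample one channel to undo it with bilinear interpolation.

```python
import numpy as np

def estimate_scale(ref_pts, breathed_pts, center):
    """Least-squares magnification s such that breathed ~ center + s*(ref - center).

    ref_pts, breathed_pts: (n, 2) arrays of matched (x, y) corner positions.
    """
    r = ref_pts - center
    b = breathed_pts - center
    # minimize ||b - s*r||^2  ->  s = sum(b . r) / sum(r . r)
    return float((b * r).sum() / (r * r).sum())

def undo_breathing(img, s, center):
    """Resample `img` to cancel a magnification s about (cx, cy) = center.

    A feature at reference position p lands at center + s*(p - center) in the
    breathed image, so we sample there with bilinear interpolation.
    """
    h, w = img.shape
    cx, cy = center
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sy = cy + (ys - cy) * s
    sx = cx + (xs - cx) * s
    y0 = np.clip(np.floor(sy).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(sx).astype(int), 0, w - 2)
    fy, fx = sy - y0, sx - x0
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bot = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

This only models the magnification component; a full per-channel calibration would fit a richer mapping (e.g. a homography or radial model) from the same checkerboard shots.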

Alternatively, use cine lenses which are engineered not to breathe, but they are typically more expensive for that reason.


I haven't seen a single functional real-world Reticulum network in the wild.

fefe = Felix von Leitner

fx = Felix Lindner

Please don't mix up the names and nicknames.

Both are highly renowned security experts.


Oh oops, my apologies :(

Feels like they had to develop around their component shortage. Very weird microcontroller/DSP mix.


It may even be a good thing, from a PoV of learning resiliency and adaptation to supply chain changes. They probably ended up very hard to disrupt.

I've seen an Orange Pi 5+ in a drone (which I coincidentally wrote the upstream DTS for), a Raspberry Pi, etc.

Despite the Opi5+ having a sophisticated ISP and camera interfaces, they just used some USB/analog camera capture card. Probably because if you're using a generic interface that just works, you can throw in any SBC that has a so-so working Linux build somewhere, so long as USB and GPIO/I2C/SPI or any of these generic interfaces work, and you're golden. Your other software can then stay the same, because everything that uses these interfaces from userspace is well abstracted away from platform details by Linux and works identically across all SBCs.


Restricted?

Even the monthly consumption of toilet paper on a base has this classification.


You could possibly infer an estimate of base staffing from that info; not like a great number, but a number.


More importantly, you could see how it changes over time.


It also depends on the food eaten and diseases.


We avoid censorship by ⸻ more often and talking to ⸻ about ⸻.


Bruh...

Can it mine bitcoins or run worms?


Due to the lack of memory leaks, which will stop RAM prices from increasing?


Because it's more memory efficient than most other languages. So you can achieve the same result with lower RAM requirements.



I see that's from almost 10 years ago; it would be interesting to see how that's changed with the improvements to V8, Python and C# since.

Also, TypeScript 5 times worse than JavaScript? That doesn't really make sense, since they share the same runtime.


Why is that so unbelievable? TypeScript isn't JavaScript, and while they share the same runtime, compiled TypeScript often doesn't look like how you'd solve the same problem in vanilla JS, where you'd leverage the dynamic typing rather than trying to work around it.

See this example as one demonstration: https://www.typescriptlang.org/play/?q=8#example/enums

The TS code looks very different from the JS code (which obviously is the point), but given that, it's not hard to imagine they have different runtime characteristics, especially for people who don't understand the ins and outs of JavaScript itself and have only learned TypeScript.


Enums are one of only a few places where there is significant deviation; I don't believe that makes it 400% less efficient.


Maybe read the paper and see if you can figure out their reasoning/motivation :) https://dl.acm.org/doi/10.1145/3136014.3136031

One thing to consider is that with JavaScript you put it in a .js file, point an HTML page at it, and that's it.

TypeScript's toolchain involves a ton more than that, which would impact the energy usage too, not to mention everything running the package registries and whatnot. Not sure if this is why the difference is bigger, as I haven't read the paper myself :)

But if you do, please do share what you find out about their methodology.


This image comes from running the different versions of the Benchmarks Game programs. Some of the difference between languages may actually be just algorithmic differences, and those programs are in general not representative of most of the software that runs.


That, and also because the Rust compiler is a very good guardrail and feedback mechanism for AI. I made 3 little tools that I use for myself without knowing how to write a single line of Rust myself.


I can see that being a reality, but I'm more comfortable using Go than Rust, given its fast compile times, and I've found it much easier to create no-dependency/few-dependency projects. And even though I wouldn't consider myself a master in Go, maybe mediocre, I feel much more at ease playing with Go than with Rust.

The difference in resource consumption between Rust and Go would be pretty minimal for most use cases, IMHO.


Yes and no. It's a lot of manual work to get WireGuard to behave like Tailscale.


As a computer engineer I usually copy reference schematics and board layouts from the datasheets the vendors offer. 95% of my hardware problems can be solved with that.

Learning KiCad took me a few evenings with YT videos (greetings to Phil!).

Soldering needs much more practice. Soldering QFN with a stencil, paste and oven (or just a pre-heater) can only be learned by failing many times.

Having a huge stock of good components (sorted nicely with PartsDB!) lowers the barrier for starting projects dramatically.

But as always: the better your gear gets, the more fun it becomes.


Even as a professional EE working on high speed digital and mixed signal designs (smartphones and motherboards), I used reference designs all the time, for almost every major part in a design. We had to rip up the schematics to fit our needs and follow manufacturer routing guidelines rather than copying the layout wholesale, but unless simulations told us otherwise we followed them religiously. When I started I was surprised how much of the industry is just doing the tedious work of footprint verification and PCB routing after copying existing designs and using calculators like the Saturn toolkit.

The exception was cutting edge motherboards that had to be released alongside a new Intel chipset but that project had at least a dozen engineers working in shifts.

