From a technical perspective: prices change constantly, they are inconsistently expressed, and the volume of products is massive. Scraping that amount of data, then processing it and keeping it up to date, is a substantially difficult task.
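To give a flavour of the "inconsistently expressed" problem, here is a minimal Python sketch of the kind of price normalisation any aggregator ends up writing. The input formats shown are only illustrative assumptions, not any particular site's output, and a real pipeline would need far more cases.

    import re

    def normalize_price(raw):
        """Best-effort parse of a scraped price string into a float.

        Illustrative only: handles shapes like '$1,299.99',
        '1 299,99 EUR' and 'USD 1299'; returns None otherwise.
        """
        cleaned = re.sub(r"[^\d.,]", "", raw)   # keep digits and separators
        if not cleaned:
            return None
        # Treat the last separator as the decimal mark if exactly two
        # digits follow it; otherwise assume thousands separators only.
        last_sep = max(cleaned.rfind("."), cleaned.rfind(","))
        if last_sep != -1 and len(cleaned) - last_sep - 1 == 2:
            integer_part = re.sub(r"[.,]", "", cleaned[:last_sep])
            cleaned = integer_part + "." + cleaned[last_sep + 1:]
        else:
            cleaned = re.sub(r"[.,]", "", cleaned)
        try:
            return float(cleaned)
        except ValueError:
            return None

    # normalize_price("$1,299.99")    -> 1299.99
    # normalize_price("1 299,99 EUR") -> 1299.99
    # normalize_price("USD 1299")     -> 1299.0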
From a business point of view, an independent operation would require a very large investment. Recouping even part of that requires advertising and affiliate revenue, which in turn creates perverse incentives.
That is a good point: there are so many products all over the web that you would need a Google-scale search operation just to cover a small fraction of them. Thanks for your feedback!
As a project lead, I have worked on dozens of projects and with hundreds of contractors. Those contractors have ranged from brilliant and productive to struggling and incompetent. In almost every case, I worked with the teams I was given and had no say in the hiring process: a typical case of management thinking they know better than their tech lead.
I very much believe that comprehensive domain knowledge and technical proficiency are both essential. Actual code production can be mostly delegated. If AI produces better quality code than the contractors available to you, then it is the preferable option.
IMHO a small team of experienced engineers using AI is the optimal choice.
Vibe-coded startups without competent technical oversight are tech debt on steroids.
Unlikely. There is just too large an installed base of software that only runs on Windows and is still in wide use.
However, we have to acknowledge that if you count Android as Linux-based and include all the servers in the cloud, then Linux has already supplanted Windows.
When people receive any written material from me, it is I who wrote it. My thoughts and my expression. If they wanted to read some AI-generated drivel, they wouldn't need me or my contribution.
My process is very simple. I just write from top to bottom. Of course, the first pass is rough. But the focus is on capturing the material in approximately the logical sequence. For really complex and lengthy materials, I might write an outline with mostly headings and snippets as they come to me.
For emails and other time-critical writing, I go back to the top and edit/rewrite; I neither need nor use AI for this. Then send. Rarely do I make a third pass.
For reports and papers, I tend to put the first pass aside for a day or more.
When I return to it, I edit viciously and re-write. Depending on the importance of the writing, I might repeat this process 2 or 3 more times.
I see you as having a unique opportunity. There is a scarcity of strong SWE skills in earth-sciences-related industries and businesses. You could revisit your reasons, motivations, and interests for choosing earth sciences for your PhD. Perhaps your thesis and research connections reveal further dimensions.
In any earth-sciences-based work, there will be requirements for data science or AI-related processing, improvements, and so on. Those are the sorts of areas where you might want to pitch your SWE skills as delivering value.
Awesome! I wouldn't have thought it was possible to make ICs in a garage. Of course it requires a lot of knowledge, etc. But still, no multi-billion-dollar clean room with specialist equipment.
In a garage you could make some decent analog integrated circuits, e.g. audio amplifiers, operational amplifiers, or even radio-frequency circuits for not-too-high frequency ranges.
However, you cannot make useful digital circuits. For digital circuits, the best you can do is be content with only designing them and buying an FPGA to implement them, instead of attempting to manufacture a custom IC.
With the kind of digital circuits you could make in a garage, the most complex thing you could build would be something like a very large table or wall digital clock, made not with a single IC as today, but with a few dozen ICs.
Anything more complex than that would need far too many ICs.
Not true. You are confusing "digital" with "microprocessor". You wouldn't be able to do any single-chip microprocessor, of course, but something like the 74181 is very doable at this scale, and building a 1970s-era computer out of a few dozen of these is something enthusiasts still do. The main problem isn't logic, it's memory: memory needs density (thin-film magnetics, anyone?).
Then, of course, if by "useful" you mean "commercially viable", it is indeed never going to be competitive with either TSMC or your local 500 nm foundry.
A CPU built from ALUs like the 74181 would by itself fill an ATX- or eATX-sized PCB densely populated with integrated circuits, consume far more power than an entire computer does today, and still be slower than a tiny microcontroller that costs less than a dollar and also includes enough memory for a practical application.
I call such a CPU not useful.
It can be a very useful experience to design such a CPU, but you can simulate the design in a logic simulator and you gain nothing by building it.
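To make the "simulate it" point concrete: a toy 4-bit ALU slice, loosely in the spirit of the 74181 mentioned above, is only a few lines of Python. This is deliberately not the real 74181 function table, just a sketch showing how slices cascade via the carry.

    def alu4(a, b, select, carry_in=0):
        """Toy 4-bit ALU slice (NOT the real 74181 function table).

        select: 0 = A plus B, 1 = A AND B, 2 = A OR B, 3 = A XOR B.
        Returns (result, carry_out); operands are masked to 4 bits.
        """
        a &= 0xF
        b &= 0xF
        if select == 0:
            total = a + b + carry_in
            return total & 0xF, (total >> 4) & 1
        if select == 1:
            return a & b, 0
        if select == 2:
            return a | b, 0
        return a ^ b, 0

    # Cascading two slices gives an 8-bit add, the same way boards
    # full of 74181s were chained together:
    lo, carry = alu4(0xA, 0x7, 0)        # low nibble of 0x3A + 0x27
    hi, _ = alu4(0x3, 0x2, 0, carry)     # high nibble plus ripple carry
    assert (hi << 4) | lo == (0x3A + 0x27) & 0xFF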
As a computer-building experience, it is more valuable to use much older components than digital integrated circuits, inside which you can see nothing without special instruments. For example, you can build interesting computer blocks, such as adders, registers, and counters, out of electromechanical relays or neon glow lamps, where you can watch with your own eyes how they function.
You can do lithography that is small, but it is slow and expensive. And small means you need a stack, which is even more expensive. At small sizes, defectivity and variation become really difficult.
So if you want a paradigmatic shift, you need low cost patterning, and the best way I can see is to use clever chemistry and a much different design style.
Don't you think that a lot more improvement in variability and integration could be achieved with better optics? (For the photolithography, of course; I don't remember what they use for plasma etching and ion implantation.) I don't believe they have explored that front much yet.
> So if you want a paradigmatic shift, you need low cost patterning, and the best way I can see is to use clever chemistry and a much different design style.
Is that speculation, or do you have a more concrete idea of what needs improvement and how? I'm especially curious about the 'much different design style' part. Could you elaborate on that?
I heard of one intriguing alternative to photolithography: microfluidic channels in an injection-molded plate. I saw a couple of research papers on it in 2021.
I think Deno's management has been somewhat distracted by its ongoing lawsuit with Oracle over the release of the JavaScript trademark.
I started out with Deno, and when I discovered Bun, I pivoted. Personally, I don't need the Node.js/npm compatibility. I wish there were a Bun-lite freed of the backward compatibility.
The bloat. I prefer lean designs with plug-in modules for additional functionality. Not only do unused sub-systems take up memory, but they also increase the potential attack surface.
Removing the licence and/or the authors' attribution from a FOSS project would generally be a violation of the licensing terms. The tool(s) you use don't change the legal principles.
Of course, the big AI companies blithely ignore moral and legal issues.
RGB LEDs (e.g. WS2812) connected to an ESP8266 running MicroPython are great for experimenting. I have several "installations" that respond to broadcast datagrams (UDP); a minimal sketch of the idea follows below. Let your imagination run riot. Add sensors, even a MIDI interface.
I do doubt that you could actually discern 16M colors. But even many thousands is entertaining.
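For anyone who wants to try the UDP idea, a minimal MicroPython sketch of the sort of thing I mean is below. The pin number, pixel count, port, and 3-byte R,G,B payload are arbitrary choices for this example, and the board is assumed to already be on Wi-Fi (e.g. connected from boot.py).

    import socket
    import neopixel
    from machine import Pin

    NUM_PIXELS = 8                              # length of the WS2812 strip
    np = neopixel.NeoPixel(Pin(2), NUM_PIXELS)  # data line on GPIO2

    # Listen for broadcast datagrams on an arbitrary port.
    addr = socket.getaddrinfo("0.0.0.0", 5005)[0][-1]
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(addr)

    while True:
        data, sender = sock.recvfrom(16)
        if len(data) >= 3:
            # First three bytes of the datagram set the whole strip.
            colour = (data[0], data[1], data[2])
            for i in range(NUM_PIXELS):
                np[i] = colour
            np.write()

On the sending side, any machine on the LAN can broadcast to that port with a plain UDP socket and SO_BROADCAST enabled.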