You'd be surprised how many people fall into the Venn overlap of being technical enough to be doing stuff in a Unix shell, yet willing to follow instructions from a website they googled 30 seconds earlier telling them to paste a command that downloads a bash script and immediately executes it. Which is itself a surprisingly common suggestion in how-to blog posts and software help pages.
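To make the pattern concrete, here's a sketch of what those pages suggest and a slightly safer habit. The URL and the installer contents are hypothetical stand-ins (a local file substitutes for the download so the safer steps actually run):

```shell
# The pattern those pages suggest (URL is a hypothetical stand-in):
#   curl -fsSL https://example.com/install.sh | bash
# i.e. whatever the server happens to send is executed immediately, unseen.

# A slightly safer habit: download, read, checksum-pin, then run.
# A locally created file stands in for the downloaded script here.
printf '#!/bin/sh\necho "installing..."\n' > install.sh
cat install.sh                            # actually read it before running
sha256sum install.sh > install.sh.sha256  # pin what you inspected
sha256sum -c install.sh.sha256 && sh install.sh
```

The checksum step matters because the server can serve different content on the next request; pinning means a changed script fails loudly instead of running silently.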
Are LLMs too expensive, not reliable enough, or just something you haven't considered?
It's not something I generally need to do, so I haven't been keeping up with how good LLMs are at this sort of conversion. But your question made me curious, so I took a couple of examples from https://www.json.org/example.html and gave them to the default model in the ChatGPT app (GPT 5.2 - at least that's the default for my ChatGPT Plus account). It seemed to get each of them right on the first attempt.
I don't have time right now to watch the video and will come back to it later, but here are a couple of snippets from the text on that page that made me want to bother watching (either they're overhyping it, or it sounds interesting and significant):
> The identified vulnerabilities may allow a complete device compromise. We demonstrate the immediate impact using a pair of current-generation headphones. We also demonstrate how a compromised Bluetooth peripheral can be abused to attack paired devices, like smartphones, due to their trust relationship with the peripheral.
> This presentation will give an overview over the vulnerabilities and a demonstration and discussion of their impact. We also generalize these findings and discuss the impact of compromised Bluetooth peripherals in general. At the end, we briefly discuss the difficulties in the disclosure and patching process. Along with the talk, we will release tooling for users to check whether their devices are affected and for other researchers to continue looking into Airoha-based devices.
[...]
> It is important that headphone users are aware of the issues. In our opinion, some of the device manufacturers have done a bad job of informing their users about the potential threats and the available security updates. We also want to provide the technical details to understand the issues and enable other researchers to continue working with the platform. With the protocol it is possible to read and write firmware. This opens up the possibility to patch and potentially customize the firmware.
> Step 1: Connect (CVE-20700/20701) The attacker is in physical proximity and silently connects to a pair of headphones via BLE or Classic Bluetooth.
> Step 2: Exfiltrate (CVE-20702) Using the unauthenticated connection, the attacker uses the RACE protocol to (partially) dump the flash memory of the headphones.
> Step 3: Extract Inside that memory dump resides a connection table. This table includes the names and addresses of paired devices. More importantly, it also contains the Bluetooth Link Key. This is the cryptographic secret that a phone and headphones use to recognize and trust each other.
> Note: Once the attacker has this key, they no longer need access to the headphones.
> Step 4: Impersonate The attacker’s device now connects to the target's phone, pretending to be the trusted headphones. This involves spoofing the headphones' Bluetooth address and using the extracted link key.
> Once connected to the phone the attacker can proceed to interact with it from the privileged position of a trusted peripheral.
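The extract-and-impersonate steps above boil down to parsing a connection table out of a flash dump and reusing the link key found there. A minimal sketch of that idea in Python, with a loudly hypothetical entry layout (the real Airoha table format is not given in the quoted text):

```python
import struct

# Hypothetical, simplified entry layout for illustration only; the
# actual Airoha connection-table format is not described above.
# Each entry: 6-byte Bluetooth address, 16-byte link key, 32-byte name.
ENTRY = struct.Struct("<6s16s32s")

def parse_connection_table(dump: bytes):
    """Yield (address, link_key, name) tuples from a dumped table region."""
    for off in range(0, len(dump) - ENTRY.size + 1, ENTRY.size):
        addr, key, name = ENTRY.unpack_from(dump, off)
        yield (addr.hex(":"), key.hex(),
               name.rstrip(b"\x00").decode(errors="replace"))

# Build one fake entry to show the idea: once the attacker has this
# address/key pair, steps 1-3 are done and step 4 is "just" presenting
# the spoofed address and extracted link key when connecting.
entry = ENTRY.pack(bytes.fromhex("aabbccddeeff"),
                   bytes(range(16)),
                   b"My Phone".ljust(32, b"\x00"))
for addr, key, name in parse_connection_table(entry):
    print(addr, key, name)
```

The point of the sketch is the quoted "Note": the link key is a bearer secret, so once it's out of the flash dump the headphones themselves are no longer needed.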
Keep in mind that "making money" doesn't have to be from people paying to use uv.
It could be that they calculate the existence of uv saves their team more time (and therefore expense) in their other work than it cost to create. It could be that recognition for making the tool is worth the cost as a marketing expense. It could be that other companies donate money to them, either ahead of time to get uv made, or after the fact to encourage more useful tools. Etc.
> I don't want to charge people money to use our tools, and I don't want to create an incentive structure whereby our open source offerings are competing with any commercial offerings (which is what you see with a lot of hosted-open-source-SaaS business models).

> What I want to do is build software that vertically integrates with our open source tools, and sell that software to companies that are already using Ruff, uv, etc. Alternatives to things that companies already pay for today.

> An example of what this might look like (we may not do this, but it's helpful to have a concrete example of the strategy) would be something like an enterprise-focused private package registry. A lot of big companies use uv. We spend time talking to them. They all spend money on private package registries, and have issues with them. We could build a private registry that integrates well with uv, and sell it to those companies. [...]

> But the core of what I want to do is this: build great tools, hopefully people like them, hopefully they grow, hopefully companies adopt them; then sell software to those companies that represents the natural next thing they need when building with Python. Hopefully we can build something better than the alternatives by playing well with our OSS, and hopefully we are the natural choice if they're already using our OSS.
Agree with you, but would add that even on the professional/managerial side it is indeed a luxury - yes, for many people it would be possible, but there are also many people (in startups, in small businesses, or in not-small but struggling businesses) whose options are as limited as teachers'.
Some of them might have good options for changing jobs, or good hopes of things improving in the near future, but for many, staying would be the lesser evil compared to trying to find a different job with the same positives (whether salary or other motivation) but without those negatives.
Not quite the same as VTuber avatars, but what you said about their software makes me think (hope) you might be able to answer a question I was wondering about the other day: is any software or model good enough yet to replace the face of someone talking into a webcam with a different, photorealistic face - either that of another real person, or an entirely fictitious one - in real time, such that it could be used to pretend to be a different person on a live video call? Or, if not in real time, is there a tool that can do it for non-live videos convincingly enough without needing any manual editing?
And if the answer is no, how far away might it be?
(I'd be curious to play with it myself if such a thing exists and is publicly available, but the main reason I'd like to know is to keep an eye on how soon we might see faked video calls joining faked voice phone calls in the toolbox of financial scammers.)
It’s not something I’ve looked into, so I’m not sure. VTuber software output can be set up to appear as a webcam, which can then be used in Zoom and the like - that’s the closest thing I know of.
If you see my message quickly enough (I can't remember how long the green lasts, so I'm not sure when the deadline is for this example), here's the most recent green I saw, as an example:
jonathan7977, https://news.ycombinator.com/item?id=46264260 (linked directly to their comment but it shows green on the main thread also - https://news.ycombinator.com/item?id=46263317)
Agree with you over OP - as well as Qwen there are others like Mistral, Meta's Llama, and from China the likes of Baidu ERNIE, ByteDance Doubao, and Zhipu GLM. Probably others too.
Even if all of these were considered worse than the "only 5" on OP's list (which I don't believe to be the case), the scene is still far too young and volatile to look at a ranking at any one point in time and say that if X is better than Y today then it definitely will be in 3 months' time, let alone in a year or two.