>Why does everything have to make money? People like to build things as a hobby.
The gp asked a reasonable question. Your admonition about making money is misplaced because your assumption about it being a hobby is incorrect.
The website was developed by Flighty LLC. To answer the gp's question: Although the website itself doesn't have direct monetization, it acts as "inbound marketing" for the paid iOS app. Clicking on "Download Flighty" takes the user to the Apple App Store:
In-App Purchases
Week-to-Week Flighty Pro $4.99
Annual Savings Flighty Pro $59.99
Month-to-Month Flighty Pro $9.99
Annual Savings (Family Plan) $119.00
Lifetime Flighty Pro $299.00
Flexible Monthly (Family Plan) $15.99
Week-to-Week Flighty Pro $4.99
Week-to-Week Flighty Pro $7.99
Pro Family Lifetime $449.00
Annual Savings Flighty Pro $59.99
The website's link to the App Store page also carries a tracking id so the company can attribute downloads/sales back to the webpage. This lets them see how well the "free website" converts to paid customers. As a vehicle to generate sales leads, it seems to work very well. To wit... Wikipedia says the company has been in business for 7 years, and it's been upvoted to the HN front page and we're discussing it. (The Flighty website is an example of the old saying, "The best advertising is free advertising.")
It's not just a $5/month VPS. Some cursory googling says Flighty gets data from the FlightAware Firehose API, which costs a lot of money. The cost would exceed what most people could spend on an equivalent free hobby website. (https://www.flightaware.com/commercial/firehose/documentatio...)
Funnily enough, I just went to download the app, checked the in-app purchases and saw the list you've posted here, then promptly closed the App Store.
How much does this app cost? Who knows?! Does a "Week-to-Week Flighty Pro" subscription cost $4.99 or $7.99? Why is Week-to-Week Flighty Pro $4.99 in the list twice? Same for Annual Savings Flighty Pro $59.99. Apple have made such a fucking mess of in-app purchases that we end up with this kind of rubbish, and I can't place any trust in a developer who allows their in-app purchases list to look like this. So they just lost a sale.
drawvg is very useful. Before drawvg, I was always fine using the stable FFmpeg releases such as 8.0.1 but when I saw drawvg added to the master branch[1], I didn't want to wait for the next stable release and immediately re-built ffmpeg from master to start using it.
My main use case is modifying youtube videos of tech tutorials where the speaker overlays a video of themselves in a corner of the video. drawvg is used to black out that area of the video. I'm sure some viewers like having a visible talking head shown on the same screen as the code but I find the constant motion of someone's lips moving and eyes blinking in my peripheral vision extremely distracting. Our vision is very tuned into paying attention to faces, so the brain is constantly fighting that urge in order to concentrate on the code. (A low-tech solution is to just put a yellow sticky note on the monitor to cover up the speaker, but that means you can't easily resize/move the window playing the video ... so ffmpeg to the rescue.)
If the overlay is a rectangle, you can use the older drawbox filter and don't need drawvg. However, some content creators use circles and that's where drawvg works better. Instead of creating a separate .vgs file, I just use the inline syntax.
The inline drawvg call puts a black filled circle on the bottom right corner of a 4k vid to cover up the speaker. Different vids from different creators will require different x,y,radius coordinates.
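For the simpler rectangular case mentioned earlier, the older drawbox filter alone is enough. A rough sketch for a 3840x2160 video, with made-up coordinates (the drawvg inline script itself isn't reproduced here):

  # black out a 640x480 region flush with the bottom-right corner; adjust x/y/w/h per video
  ffmpeg -i input.mp4 \
    -vf "drawbox=x=3200:y=1680:w=640:h=480:color=black:t=fill" \
    -c:a copy output.mp4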
(The author of the drawvg code in the git log appears to be the same as the author of this thread's article.)
The quick way I'd solve that is to open any program window (like calculator or whatever), mark it as being on top of other windows, and resize it and place it on top of that area. It seems quick and easy enough for the effort.
Cline just added a useless Note icon animation where it used to say "Thinking..." or whatever to signal content incoming. I think it's nothing about faces. Our UI/UX designs put action items in the lower right, YouTube's skip button being the most prominent example.
Couldn't you use a video player like mpv to achieve a similar effect? Not sure if you can cover a specific part of the image but you sure can crop the video however you want and bind the commands/script to a key.
>mpv to achieve a similar effect? Not sure if you can cover a specific part of the image but you sure can crop the video
mpv doesn't run on iPad so it's better for my situation to just burn the blackout into a new video. I actually do a lot more than drawvg (also rescale, pts, framerate, etc.) in filter_complex but left the rest out of the HN comment so the example is more readable.
I suppose it might be possible to use mpv with a custom shader mask glsl code to blackout circular areas of the screen.
>you sure can crop the video
Cropping the video is straightforward with no information loss when the presentation and the speaker are laid out like these: https://www.youtube.com/@MeetingCPP/videos
But cropping the following video by shrinking the boundaries of the rectangle until the circle overlay is not visible would result in too much of the text being cut off: https://www.youtube.com/watch?v=nUxuCoqJzlA
Scrub that video timeline to see the information that would be chopped off. For that, it's better to cover up only the circle overlay with a blacked out disc.
> mpv doesn't run on iPad so it's better for my situation to just burn the blackout into a new video.
I would love to stop using the YouTube client on iPadOS. Do you just d/l the video with yt-dlp+ffmpeg and then post process it based on your needs and then watch it from the Files app from iCloud or whatever?
>If I loved King Crimson, I might create a site expressing that love and also host lyrics to their songs. Not to generate ad revenue. Not with any expectation of being reimbursed for hosting costs. I did it because it was fun and because sharing knowledge felt like the point.
Unfortunately, music lyrics are protected by copyright, so your site hosting King Crimson lyrics would not be authorized unless you paid for a license. The music publisher may not expend the effort to have a lawyer send you a "Cease & Desist" letter to make you take it down because your personal website is small fish but they wouldn't ignore a popular website that tried to show all lyrics for free with no ads.
The legitimate ongoing licensing costs from Gracenote/Lyricfind for their catalogs of millions of song lyrics will cost significantly more than the hosting bill. The cost is beyond the resources of typical hobbyists who like to share information for free.
EDIT: I have no idea what the downvotes are about. If you think my information about lyrics licensing is incorrect, explain why. Several decades ago, volunteers were sharing guitar tabs for free on the internet and that also got shut down by the music publishers because of copyright violations. Previous comment about that: https://news.ycombinator.com/item?id=24598821
> The music publisher may not expend the effort to have a lawyer send you a "Cease & Desist" letter to make you take it down because your personal website is small fish but they wouldn't ignore a popular website that tried to show all lyrics for free with no ads.
Exactly. Now what if there wasn't one popular website with all the lyrics, but a million different small fanpages?
There's a tension here: fan engagement is what really makes entertainers rich. The industry has every right to crack down, but if they do, they are arguably cutting their own legs off.
I think if there's any negative phrasing in your first three words, those reading from the Philosopher's Chair (bathroom) are primed to take what immediately follows as Bad Vibes and downvote accordingly. They're not in this for accuracy.
Well my problem isn't with the writing in its original form, it's with the downvoting in response to it. I am fine with someone bringing bad news if it's helpful info.
>when it's as easy as just using a small USB-C to 3.5mm audio jack converter to use wired headphones.
As someone who uses wired earphones exclusively and must use the USB-C adapters you suggest, I can say it's not quite "just as easy", because there are several problems:
- it's an extra $10 dongle to buy and potentially lose. I've lost several of them over the years.
- adds more mechanical stress to the USB-C jack. The official Apple USB-C to 3.5mm adapter protrudes from the phone and I've had several close calls with the wire getting snagged on a door knob, which can damage the USB-C port. I've never been comfortable with this Rube Goldberg dongle contraption that adds more risk of damaging a $1000 phone. It's a fear I never had with the built-in 3.5mm jack on my old iPhone 5. There are 3rd-party right-angle USB-C to 3.5mm adapters on Amazon (including magnetic ones) but the ones I tried interfere with phone cases and they don't sound as good. (Apparently Apple uses a more premium DAC chip in their USB-C adapter.)
- can't simultaneously charge the phone while listening unless you buy a different USB-C adapter that has both 3.5mm input and a USB-C passthrough charging port. These are bulkier.
- it's an extra dongle that's easy to forget. I once got on a transatlantic flight and realized that I had forgotten my USB-C earphone adapter at home. I panicked and dreaded the idea of nothing to listen to for 8 hours, but I was luckily saved by a friend who didn't need hers and let me borrow it. Why can't I just leave the USB-C dongle connected to the earphones' 3.5mm plug 100% of the time so there's nothing to forget?!? Because I often need to connect the earphones to things that don't need the adapter.
With all those drawbacks, I still use the USB-C adapters because I have to. But it has definitely made life more complicated.
>, but why would the term private credit bring to mind anything to do with retail specifically?
If a layman is unfamiliar that "private credit" is about business debts, and therefore only has intuition via previous exposure to "private X" to guess what it might mean, it's not unreasonable to assume it's about consumer loans.
"private insurance" can be about retail consumer purchased health insurance outside of employer-sponsored group health plans
"private banking" is retail banking (for UHNW individuals)
But "private credit" ... doesn't fit the pattern above because "private" is an overloaded word.
No, 'private' means that the transaction is between the lender and the borrower without a public rating agency involved (Moody's, etc...). This used to be for niche things like a data center where a rating agency might have trouble coming up with a reasonable rating. Then the data center company would go to somebody like Apollo who could do custom analysis of the risk.
But now those private loans are being syndicated to affluent investors who probably don't understand that while some of this debt is solid, a lot of it is not. And without a rating agency involved, nobody knows how much risk is in there.
>, we simply have to make a cultural change where non-technical people do more for themselves. I don't even think it's about technical difficulty (most of the time). I think people just want someone else to take care of their shit.
The above includes us highly technical people on HN. We really can't expect (or lecture) the normal mainstream population to make a cultural change to adopt decentralized tech when most of us don't do it ourselves.
E.g. Most of us don't want to self-host our public git repo. Instead, we just use centralized Github. We have the technical knowledge to self-host git but we have valid reasons for not wanting to do it and willingly outsource it to Github. (Notice this thread's Show HN about decentralized social networking has hosted its public repo on centralized Github.)
And consider we're not on decentralized USENET nodes discussing this. Instead, we're here on centralized HN. It's more convenient. Same reason technical folks shut down their self-hosted PHP forum software and migrate to centralised Discord.
The reason can't be reduced to just "people being lazy". It's about tradeoffs. This is why it's incorrect to think that futuristic scenarios of a hypothetical easy-to-use "internet appliance" (possibly provided by the ISP) to self-host email/git/USENET/videos/etc, combined with a worldwide rollout of IPv6 to avoid NAT, will remove the barriers to decentralization.
The popular essay "Protocols Not Platforms" about the benefits of decentralization often gets reposted here but that doesn't help because "free protocols" don't really solve the underlying reasons centralization keeps happening: money, time, and motivation to follow the decentralized ethos.
"But you become a prisoner of centralized services!" -- True, but a self-hosted tech stack for some folks can also be a prison too. It's just a different type. To get "freedom" and escape the self-hosted hassles, they flee to centralized services!
The cost ($$$, opportunity cost, and mental toll) of maintenance is very real. It can be hugely advantageous to outsource that effort to a professional, PROVIDED the professional is trustworthy and competent. To ensure that most professionals are trustworthy and competent, two things need to be present:
1. A very high degree of transparency, so that it's very difficult for a service provider to act contrary to their users' interests without those users knowing about it.
2. Very low switching costs, so that if the service provider ever does act against their users' interests, they will be likely to lose their users.
As long as our laws encourage providers to operate in black-box fashion, and to engineer artificially high switching costs into their products, I believe there will continue to be a case for self-hosting among a minority of the population. And because they are a minority, they will be forced to also make use of centralized services in order to connect to the people who are held hostage by those high switching costs.
Somewhere in the multiverse, there's a world in which interoperability and accountability have been enshrined as bedrock principles and enforced since the beginning of the internet. It would be very interesting to compare that world with the one we inhabit.
It depends a lot on how accessible those services are. I tried to host some git repos 5 years ago and it was a hassle (I needed mostly private git and reviews, nothing fancy). I tried again this year and using forgejo was extremely easy. I don't remember exactly what problems I had before, so maybe I got better at finding things, but this time it felt more polished. Containers, reasonable defaults, a good tutorial on how to start; in total it took less than one hour. In the meantime I also did an upgrade and that really was 5 minutes (check the changelog, apply it, and go).
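For context, the container route nowadays is roughly this small. This is a sketch only; the image tag, port mapping, and volume name are my assumptions, so check the current Forgejo docs:

  # web UI on 3000, git-over-SSH on 2222, persistent state in a named volume
  docker run -d --name forgejo \
    -p 3000:3000 -p 2222:22 \
    -v forgejo-data:/data \
    codeberg.org/forgejo/forgejo:9   # pin whatever version tag is current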
Of course, lots of work was done in the background to reach this point, but I think it is possible. Will I make the effort to make that happen for a social network? No, because I am not using them that much.
Technically things become simpler (in the sense that you can do it "at home", and if you add LLMs to answer you when you don't know some obscure option it is even easier), but identifying the use case well, deciding defaults, writing documentation, and juggling trade-offs will remain as hard as before.
Note/edit: something being possible does not mean one should do it, so I think it will depend on everybody's priorities and skills. I wish though good luck to anybody trying...
Not the poster, but: use ZFS or LVM + XFS on your machine, do a snapshot, use restic or kopia to back it up to cheap object storage in the cloud, such as R2. If it's too technical, run syncthing and mirror it to a USB-connected external disk, preferably a couple of meters away from your machine.
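A minimal sketch of the restic-to-R2 route, assuming an S3-compatible bucket; the endpoint, bucket name, and paths are placeholders:

  # credentials for the bucket plus a passphrase for the encrypted repo
  export AWS_ACCESS_KEY_ID=<r2-access-key>
  export AWS_SECRET_ACCESS_KEY=<r2-secret-key>
  export RESTIC_PASSWORD=<long-random-passphrase>
  export RESTIC_REPOSITORY=s3:https://<accountid>.r2.cloudflarestorage.com/my-backups

  restic init             # once, to create the repository
  restic backup /home/me  # run on a schedule (cron / systemd timer)
  restic check            # periodic integrity check
  restic snapshots        # list what is stored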
The best backup is a proper 3-2-1, with regular testing of integrity, and regular restoration from a backup as an exercise. But most people cannot be bothered to care quite so much.
So, keeping a half-assed backup copy on a spouse's machine in a different room is still better than not keeping any copy at all. It will not protect from every disaster, but it will protect against some.
My own backups progressed from manual rsync to syncthing to syncthing for every machine in the house + restic backups (which saved my bacon more than once).
>And consider we're not on decentralized USENET nodes discussing this. Instead, we're here on centralized HN. It's more convenient. Same reason technical folks shut down their self-hosted PHP forum software and migrate to centralised Discord.
You're contradicting yourself. Why is HN centralized, while a phpBB forum is decentralized? Are you conflating decentralization and being open source?
>Why is HN centralized, while a phpBB forum is decentralized?
There's a spectrum of decentralized <--> centralized for different audiences.
For this tech demographic here where installing some type of p2p or federated discussion tech (Mastodon? Matrix?) is not rocket science, it's more convenient for us to avoid that and just be on a "centralized" HN. I used to be very active on USENET and HN is relatively more centralized than a hypothetical "comp.programming.hackernews" newsgroup. This is not a complaint. It's an observation of our natural preferences and how it aggregates. (Btw, it's interesting that Paul Graham started this HN website but doesn't post here anymore. Instead, he's more active on Twitter. He's stated his reasons and it's very understandable why.)
For the phpBB forums where a lot of non-tech people discuss hobbies such as woodworking, guitar gear, etc., the decentralized side is the phpBB forums themselves and the centralizing pull is towards big platforms such as reddit / Discord / Facebook Groups.
I see similar decentralized --> centralized trends in blogs. John Carmack abandoned his personal website and now posts on centralized Twitter.
My overall point is that a lot of us techies wish the general public would get enlightened about decentralization but that's unrealistic when we don't follow that ideal ourselves. We have valid reasons for that. But it does create a cognitive dissonance and/or confusion as to why the world doesn't do what we think it should do.
EDIT add reply: >Wouldn't comp.programming.hackernews concentrate discussion under a single heading and also be hosted from a single specific computer?
> I used to be very active on USENET and HN is relatively more centralized than a hypothetical "comp.programming.hackernews" newsgroup.
How so? Wouldn't comp.programming.hackernews concentrate discussion under a single heading and also be hosted from a single specific computer? This confuses me even further; I don't understand what you mean by centralization.
>For the phpBB forums where a lot of non-tech people discuss hobbies such as woodworking, guitar gear, etc., the decentralized side is the phpBB forums themselves and the centralizing pull is towards big platforms such as reddit / Discord / Facebook Groups.
Surely by this interpretation HN is decentralized. It's a special interest (if relatively broad) forum just like those phpBB forums were. I ask again: is HN "centralized" just because you can't spin up your own copy of the software to use it to talk about gardening?
>Creating a new URL with effectively the same info but further removed from the primary source is not good HN etiquette.
I'm going to respectfully disagree with all the above and thank the submitter for this article. It is sufficiently different from the primary source and adds new information (meta commentary) that I like. The title is also catchier, which may explain its rise to the front page. (Because more of us recognize "Github" than "Cline").
The original source is fine but it gets deep into the weeds of the various config files. That's all wonderful but that actually isn't what I need.
On the other hand, this thread's article is more meta commentary of generalized lessons, more "case study" or "executive briefing" style. That's the right level for me at the moment.
If I was a hacker trying to re-create this exploit -- or coding a monitoring tool that tries to prevent these kinds of attacks -- I would prefer the original article's very detailed info.
On the other hand, if I just want some highlights that raises my awareness of "AI tricking AI", this article that's a level removed from the original is better for that purpose. Sometimes, the derived article is better because it presents information in a different way for a different purpose/audience. A "second chance pool" doesn't help a lot of us because it still doesn't change the article to a shorter meta commentary type of article that we prefer.
The thread's article consolidated several sources into a digestible format and had the etiquette of citations that linked back to the primary source urls.
How it works
Hardware detection -- Reads total/available RAM via sysinfo, counts CPU cores, and probes for GPUs:
NVIDIA -- Multi-GPU support via nvidia-smi. Aggregates VRAM across all detected GPUs. Falls back to VRAM estimation from GPU model name if reporting fails.
AMD -- Detected via rocm-smi.
Intel Arc -- Discrete VRAM via sysfs, integrated via lspci.
Apple Silicon -- Unified memory via system_profiler. VRAM = system RAM.
Ascend -- Detected via npu-smi.
Backend detection -- Automatically identifies the acceleration backend (CUDA, Metal, ROCm, SYCL, CPU ARM, CPU x86, Ascend) for speed estimation.
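As a rough illustration of the kind of local probing that list describes -- these are common commands such a tool could shell out to, not code taken from its repo:

  # total VRAM across all detected NVIDIA GPUs, in MiB
  nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | awk '{sum+=$1} END {print sum " MiB VRAM"}'
  # CPU cores and total system RAM on Linux
  nproc
  free -m | awk '/^Mem:/ {print $2 " MiB RAM"}'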
Therefore, a website running Javascript is restricted by the browser sandbox, so it can't see the same low-level details such as total system RAM, exact count of GPUs, etc.
To implement your idea so it's only a website and also work around the Javascript limitations, a different kind of workflow would be needed. E.g. run the macOS system report to generate a .spx file, or run Linux inxi to generate a hardware devices report... and then upload those to the website for analysis to derive a "LLM best fit". But those OS report files may still be missing some details that the github tool gathers.
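If one went that route, the report-generating commands might look like the following; whether the website could parse these exact outputs (or a saved .spx) is an assumption on my part:

  # macOS: dump hardware/GPU/RAM info as XML, approximating what a System Information report contains
  system_profiler -xml SPHardwareDataType SPDisplaysDataType SPMemoryDataType > hardware.xml
  # Linux: full hardware summary via inxi
  inxi -Fxz > hardware.txt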
Another way is to have the website with a bunch of hardware options where the user has to manually select the combination. Less convenient but then again, it has the advantage of doing "what-if" scenarios for hardware the user doesn't actually have and is thinking of buying.
(To be clear, I'm not endorsing this particular github tool. Just pointing out that a LLMfit website has technical limitations.)
No, I'm asking why a website where someone could fill in a few fields and get the optimized LLM for them would need to run in a container? It's a webform.
Not to disagree with anything the article talks about but to add some perspective...
The complaint about "code nobody understands" because of accumulating cognitive debt also happened with hand-written code. E.g. some stories:
- from https://devblogs.microsoft.com/oldnewthing/20121218-00/?p=58... : >Two of us tried to debug the program to figure out what was going on, but given that this was code written several years earlier by an outside company, and that nobody at Microsoft ever understood how the code worked (much less still understood it), and that most of the code was completely uncommented, we simply couldn’t figure out why the collision detector was not working. Heck, we couldn’t even find the collision detector! We had several million lines of code still to port, so we couldn’t afford to spend days studying the code trying to figure out what obscure floating point rounding error was causing collision detection to fail. We just made the executive decision right there to drop Pinball from the product.
This underlines the argument of the OP, no?
The argument presented is that the situation where nobody knows how and why a piece of code is written will happen more often and appear faster with AI.
Indeed, it’ll just result in legacy code faster. We’d need AI to be much better at reliably maintaining code quality, architecture, and feature-rationale documentation than the average developer in the average software project. And that may be indistinguishable from AGI.
I've been doing a version of this in a side project. Instead of saving the prompt directly, I have a road map. When implementing features, I tell it to brainstorm implementation for the road map. When fixing a bug, I tell it to brainstorm fixes from the roadmap. There's some back and forth, and then it writes a slice that is committed. Then, I look it over, verify scope, and it makes a plan (also committed). Then it generates work logs as it codes.
My prompts are literally "brainstorm next slice" or "brainstorm how to fix this bug" or "talk me through trade-offs of approach A vs B", so those prompts aren't meaningful on their own.
I wonder how scalable that is. After the twentieth feature has been added, how much connection will the conversation about the first feature still have with the current code? And you’ll need a larger and larger context for the LLM to grok the history; or you’ll have to have it rewrite it in shorter form, but that has the same failure modes that keep us from just having it maintain complete documentation (obviating the need to keep a history) in the first place.
Things like MemGPT/Letta, ToM-SWE, and Voltropy have made long context documentation pretty manageable. You could probably build some specialized tooling/prompts for development artifacts specifically too. But I’ll be the first to admit this is basically “Throw more agents at the problem”
I agree with this, I like spec-driven-development tooling partially for this reason. That being said, what I’ve found is often that I don’t include enough of the “why” in my prompt artifacts. The “what” and “how” are pretty well covered but sometimes I find myself looking back at them thinking “Why did I do this?” I’ve started including it but it does sometimes feel weird because I feel like “Why would the LLM ‘care’ about this story?”
It can be about both meanings. The additional senses of democratize that describe "more accessible" are documented in the Oxford and Merriam-Webster dictionaries.