>I think I like the idea, but I can't help wondering if it would have unforeseen consequences.
As I said in a sibling comment, quickie comments on HN should be taken more as mental stimulation and kickoff points for further discussion as opposed to "final bill that has been revised in committee and is going to the floor for a full vote". The details of implementation are certainly critical, and not trivial either! I'm fully in support of thinking through various use cases. But part of why I'm interested in alternate approaches is that they might give us finer grained tools.
>Could this approach undermine the protections afforded by open-source licenses? (IANAL.)
I have actually considered that as well but didn't add it to a quickie comment. If we take the second path of approaches I listed there, then thinking it through, all open source software would fall under a special, even more permissive class of tier 3, in that it already has "fair, reasonable and non-discriminatory" licensing for everyone, right? Except that it's also free. The motivation here is the "advancement of the useful arts & sciences" and the public good, so it could be made explicit that "if you're releasing under an open source license, and thus giving up your standard first, second, and part of your third period of IP rights and monopoly, you're excluded from needing to pay a license fee, because you've already enabled the public to make derivative works for free for decades when they wouldn't otherwise anyway."
All that said, I'll also ask, FWIW: would it even be that big a deal, given the pace of development? I do think it'd be both ideal and justified if OSS had a longer free period; that's still a square deal to the public IMO. But even if an OSS work aged out of protection after 10 years (and keep in mind that a motivated community that could raise even a few thousand dollars would be able to just pay for an extra decade no problem; the cost doesn't really ramp up for a while [which might itself be considered a flaw?]), how bad would proprietary forks of 2016-era OSS (and no changes since remember, it's a constantly rolling window) really be, weighed against 10-year-old proprietary software all getting pushed into the public domain far faster? That's worth some contemplation. Maybe requiring that source/assets be deposited with the Library of Congress or something, and released at the same time the work loses copyright, would be a good balance; having all that available down the road would be a huge win versus what we've seen up until now.
> quickie comments on HN should be taken more as mental stimulation and kickoff points for further discussion
Agreed, and my comment was aimed at exactly that. :)
An example of my concern: What would happen to GPL-licensed software if the copyright expired quickly? Would that allow someone to include it in a proprietary product and (after the short copyright term ended) deny users the freedoms that the GPL is supposed to guarantee? I think those freedoms remain important for much longer than 10 years.
> (and no changes since remember, it's a constantly rolling window)
Do you mean that the copyright term countdown would reset whenever the author makes changes to their work? (I'm not sure if this is the case today.) If so, couldn't someone simply use an earlier version in their proprietary product in order to escape GPL obligations early?
> "if you're releasing under an open source license and thus giving up your standard first, second, and part of your third period of IP rights and monopoly, you're excluded from needing to pay a license fee because you've already enable the public to make derivative works for free for decades when they wouldn't otherwise anyway."
Yes, I think this makes sense. Thanks for sharing your thoughts.
> quickie comments on HN should be taken more as mental stimulation and kickoff points for further discussion
Indeed.
Setting aside variable details like time frames and cost structures, which can be debated separately, what I found interesting about your suggestion is that it's a mechanism to create an escalating incentive for copyright holders to relinquish copyrights even sooner than the standard copyright period. Currently, no matter what the term length, it costs nothing to sit on a copyright until it expires - so everyone does - even if they never do anything with the copyright. And the copyright exists even if the company goes bankrupt or the copyright holder dies. Thus we end up with zombie copyrights which keep lurking in the dark for works which are almost certainly abandonware or orphan works, simply because our current system defaults to one-and-done granting of "life of the author + 70 years" for everything.
Obviously, we should dramatically shorten the standard copyright length, but no matter what we shorten it to (10, 15, 20 yrs, etc.) we should consider requiring some recurring renewal before expiration as a separate idea. Even if it's just paying a small processing fee and sending in a simple DIY form, it sets the do-nothing default to "auto-expire" for things the inventor doesn't care about (and may even have forgotten about). That's a net benefit to society we should evaluate separately from debates about term lengths.
I see your suggestion about automatically escalating the cost of recurring renewal as another separate layer worth considering on its own merits. My guess would be just requiring any recurring renewal would cause around half of all copyrights to auto-expire before reaching their full term - even if the renewal stayed $10. The idea of having recurring renewal costs escalate, regardless of when the escalation kicks in, or how much it escalates, is a mechanism which could achieve even more net positive societal benefits by increasing the incentive to relinquish copyrights sooner.
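To put toy numbers on the escalation idea (these figures are invented for illustration; nothing in the thread proposes specific amounts), here's a sketch of a schedule where the first renewal is cheap and the fee doubles with each subsequent renewal, so sitting on an unused work gets steadily more expensive:

```go
package main

import "fmt"

// renewalFee returns a hypothetical fee for the nth renewal (1-based):
// a flat $10 for the first renewal, doubling every renewal after that.
// The base amount and doubling rate are arbitrary placeholders.
func renewalFee(renewal int) int {
	fee := 10
	for i := 1; i < renewal; i++ {
		fee *= 2
	}
	return fee
}

func main() {
	// With 10-year renewal periods, a holder who renews five times
	// pays $10, $20, $40, $80, $160 over fifty years.
	for r := 1; r <= 5; r++ {
		fmt.Printf("renewal %d: $%d\n", r, renewalFee(r))
	}
}
```

The point of the curve shape is exactly what the comment above describes: early renewals stay trivially cheap for anyone who cares about their work, while the default for everyone else is auto-expiry.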
The common gaming-focused Wine/Proton builds can also use esync (eventfd-based synchronization). IIRC, it doesn't need a patched kernel.
The point being that these massive speed gains will probably not be seen by most people as you suggest, because most Linux gamers already have access to either esync or fsync.
Maybe you're right about esync, but I'd also gather a lot of people don’t have that either. At least personally I don’t bother with custom Proton builds or whatever, so if Valve didn’t enable that on their build then I don’t have it.
> if Valve didn’t enable that on their build then I don’t have it.
The Proton build is Valve's build. It supports both fsync and esync, the latter of which does not require a kernel patch. If you're gaming on Linux with Steam, you're probably already using it.
I would assume most of them? I'd be surprised if distros like Debian, Ubuntu, Fedora, etc. would ship non-mainline kernel features like that.
Sure, gaming-focused distros, or distros like Arch or Gentoo might (optionally or otherwise), but mainstream? Probably not.
Of course, esync doesn't require kernel patches, so I imagine that was more broadly out there. But it sounds like fsync got you performance pretty close to what ntsync can do, but esync was quite a bit behind both? With vanilla being quite a bit behind esync?
(Also, jeez, fsync, what a terrible name. fsync is a syscall that has to do with filesystem data. So confusing.)
Last I checked, every distro of note had its own patchset that included stuff outside the vanilla kernel tree. Did that change? I admit I haven't looked at any of that in... oh, 15 years or so.
> He fed only the API and the test suite to Claude and asked it to reimplement the library from scratch.
From GPL2:
> The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable.
Is a project's test suite not considered part of its source code? When I make modifications to a project, its test cases are very much a part of that process.
If the test suite is part of this library's source code, and Claude was fed the test suite or interface definition files, is the output not considered a work based on the library under the terms of LGPL 2.1?
Legally, using the tests to help create the reimplementation is fine.
However, it seems possible you can't redistribute the same tests under the MIT license. So the MIT-licensed reimplementation might need to be distributed as source code only, without the tests. Or the tests can be distributed in parallel, but still under the LGPL rather than MIT. It doesn't really matter much, since compiled software won't include the tests anyway.
Sorry, I misspoke. Transformation is what makes the LLM itself legal -- its training data is sufficiently transformed into weights.
And so, a work being sufficiently transformative is one way in which copyright no longer applies, but that's not the case here specifically. The specific case here is essentially just a clean-room reimplementation (though technically less "clean", but still presumably the same legally). But the end result is still a completely different expression of underlying non-copyrightable ideas.
And in both cases, it doesn't matter what the original license was. If a resulting work is sufficiently transformative or a reimplementation, copyright no longer applies, so the license no longer applies.
The library's test suite and interfaces were apparently used directly, not transformed. If either of those are considered part of the library's source code, as the license's wording seems to suggest, then I think output from their use could be considered a work based on the library as defined in the license.
Google LLC v Oracle America assumed (though didn't establish) that APIs are copyrightable... BUT that developing against them falls under fair use, as long as the function implementations are independent.
Test suites are again generally considered copyrightable... but the behavior being tested is not.
So no, it's not considered to be a work based on the library. This seems pretty clear-cut in US law by now.
Also, the LGPL text doesn't say "work based on the library". It says "If you modify a copy of the Library", and this is not a "combined work" either. And the whole point is that this is not a modified copy -- it's a reimplementation.
In theory, a license could be written to prevent its tests from being run against software not derived from the original, i.e. clean-room reimplementations. In practice, it remains dubious whether any court would uphold that. And it would also be trivial to get around it, by taking advantage of fair use to re-describe the tests in e.g. plain English (or any specification language), and then re-implementing those descriptions back into new test code. Because again, test behaviors are not copyrightable.
> Google LLC v Oracle America assumed (though didn't establish) that API's are copyrightable... BUT that developing against them falls under fair use, as long as the function implementations are independent.
That was only one prong of the four fair use considerations in that case. Look at Breyer's opinion, it does not say that copying APIs is fair use if implementations are independent, just that Google's specific usage in that instance met the four fair use considerations.
There are likely situations in which copying APIs is not fair use even if function implementations are independent, Breyer looked at substantiality of the code copied from Java, market effects and purpose and character of use.
If your goal is to copy APIs that make up a substantial amount of code, and to reimplement functions in order to skirt licenses and compete directly with the source work or replace it, those three considerations might not be met and it might not be fair use. Breyer said Google copied a tiny fraction of code (<1%), its purpose was not to compete directly with Oracle but to build a mobile OS platform, and Google's reimplementation was not considered a replacement for Java.
Google v Oracle assumed, without deciding, that APIs fall under copyright (the Federal Circuit had ruled that they do; before that, the contrary was widely thought). However, it ruled that, in that specific case, fair use applied, because of interoperability concerns. That's the important part of this case: fair use is never automatic; it is assessed case by case.
Regarding chardet, I'm not sure "I wanted to circumvent the license" is a good way to argue fair use.
Yes. Specifically: The use of words to express something different from and often opposite to their literal meaning, and not some knifey spoony confusion.
Rather than indulging the inevitable argument that most users never read log messages, I hope we can remember a more important fact:
Some users do read log messages, just as some users file useful bug reports. Even when they are a tiny minority, I find their discoveries valuable. They give me a view into problems that my software faces out there in the wilds of real-world use. My log messages enable those discoveries, and have led to improvements not only in my own code, but also in other people's projects that affect mine.
This is part of why I include a logging system and (hopefully) understandable messages in my software. Even standalone tools and GUI applications.
(And since I am among the minority who read log messages, I appreciate good ones in software made by other people. Especially when they allow me to solve a problem immediately, on my own, rather than waiting days or weeks to get the developer's attention.)
I was disappointed by Go's poor support for human-focused logging. The log package is so basic that one might as well just use Printf. The slog package technically offers a line-based text handler, but getting a traditional format out of it is painful at best, it lacks features that are common elsewhere, and it's somehow significantly slower than the JSON handler. I can only guess that it was added as an afterthought, by someone who doesn't normally do that kind of logging.
To be fair, I suppose this might make sense if Go is intended only for enterprisey environments. I often do projects outside of those environments, though, so I ended up spending a lot of time on a side quest to build what I expected to be built-in.
I haven't explored enough of the stdlib yet to know what else I might expect that isn't there. If you have a wish list, would you care to share it?
> it turned out some kind of modem manager service was messing with the port, and needed to be disabled.
Curious. What service was that?
I have an on-board serial port that's only working in one direction, which is something I've never encountered before. I wonder if the service you're referring to could be causing my problem.
ModemManager. You need to set the variable ENV{ID_MM_PORT_IGNORE}="1" in a udev rule.
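For reference, such a rule looks something like the following. The vendor/product IDs here are just an example (an Arduino-style board; get yours from `lsusb`), and the filename is arbitrary, e.g. /etc/udev/rules.d/99-serial-ignore.rules:

```
# Example only: tell ModemManager to leave this USB serial adapter alone.
# Replace the idVendor/idProduct values with your device's IDs from `lsusb`.
SUBSYSTEM=="tty", ATTRS{idVendor}=="2341", ATTRS{idProduct}=="0043", ENV{ID_MM_PORT_IGNORE}="1"
```

Then reload rules with `sudo udevadm control --reload-rules` and replug the device.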
Standard usb serial ports show up as ttyACM#, whereas nonstandard ports that require a driver like ftdi show up as ttyUSB#. Modems tend to be standard usb devices, so ModemManager by default scans all serial ports as if they were modems. This involves sending some AT commands to them to try and identify them.
Software implementations of serial devices tend to follow the standard, so they show up as ttyACM#.
Thanks for the tip. Unfortunately, it doesn't seem to be the cause of my one-way serial port issue. Adding the udev environment variable makes no difference, nor does stopping the ModemManager service.
ModemManager used to open() and probe every tty device attached to the system. I had an 8-channel relay card with an Arduino Nano wired up at my desk to control the lights and disco ball, interfaced with a custom ASCII-based serial protocol. Connecting it to an Ubuntu machine (where ModemManager was active in the default install) turned the 2nd or 3rd channel on.
This was generally infuriating, there are many arduino forum posts about modemmanager messing up DIY setups.
Upstream fix was changing ModemManager to work on a whitelist / opt-in approach instead of blacklist / opt-out. My fix was to switch to Debian.