Where is this worry coming from? (I'm curious, not shutting it down)
I might be biased from having worked with production F#, but it feels more like functional programming is making its way into C#, as the general industry sees value in functional principles. So F# feels like it's here to stay?
C# has incomplete and often compromised versions of the constructs F# mostly took from OCaml, and as you extend those exhaustiveness guarantees towards formal verification you bump into F*.
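For what it's worth, the exhaustiveness guarantee in question looks like this in F# (a minimal sketch with an invented Shape type):

```fsharp
// With a discriminated union, the compiler warns about any missed case.
type Shape =
    | Circle of radius: float
    | Square of side: float

let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Square s -> s * s   // remove this case and the compiler warns
```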
C#'s adoption of those language features shows their utility, but it isn't a replacement, per se. Without a clear functional answer in certain language-design and parallel-computing scenarios, MS would be ignored. Scala and Kotlin are comparable answers to comparable pressures on the JVM, and even keeping pace there with new and exciting tools/libraries requires some proper functional representation on the .NET platform.
F# will disappear when/if those other languages do, and it already has lots of what C# is chasing, with a more elegant syntax. It inherits VM and project improvements from C#, so the biggest threat to long-term investment is something like the crippling changes made to F# Interactive (FSI) during the .NET Core transition. Otherwise it seems to be in a safe place for the foreseeable future.
Supporting it as in maintenance mode, at least in VB.NET's case. Thankfully F# is more community-driven, but the CLR ecosystem is definitely getting C#-centric in its use of idioms and features from newer C# versions, which increasingly affects F# interop while they catch up.
.NET has always been both the biggest blessing and the biggest curse for F#.
We have access to millions of libraries. I look at BEAM languages and OCaml every once in a while but can’t quite drag myself over there, knowing that in .NET, just as an example, I can choose between a dozen JSON serialisation libraries that have been optimised and tuned comprehensively for decades.
But then, those libraries are also our curse. If you consume them, everything is OO, so you either give up on functional purity and start writing imperative F# code, or you have to spend time writing and maintaining an F#-idiomatic wrapper around it.
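For example, here's a minimal sketch of the kind of wrapper I mean, using System.Text.Json as the OO library (the Json module and tryDeserialize names are just illustrative):

```fsharp
open System.Text.Json

module Json =
    /// Wrap the exception-throwing OO API in a Result so callers can
    /// pattern match instead of sprinkling try/with everywhere.
    let tryDeserialize<'T> (json: string) : Result<'T, string> =
        try
            Ok (JsonSerializer.Deserialize<'T>(json))
        with :? JsonException as ex ->
            Error ex.Message

// Usage: a plain pattern match instead of imperative exception handling.
match Json.tryDeserialize<int[]> "[1, 2, 3]" with
| Ok xs -> printfn "parsed %A" xs
| Error msg -> printfn "parse failed: %s" msg
```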
Similarly, I was recently working on a project to develop a library that was going to have downstream consumers. The problem lent itself really well to domain modelling in F#. But I knew that my downstream users would be C# devs. I could have invested the time and written my library as “functional core, imperative shell”. But then I decided that since the interface would be OO anyway, I might as well just write it in C#.
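To make the tradeoff concrete, here's a rough sketch of what that split might have looked like (the order domain here is invented for illustration):

```fsharp
// Functional core: immutable domain types and pure transitions.
type OrderStatus =
    | Pending
    | Shipped of trackingId: string

module Order =
    let ship trackingId status =
        match status with
        | Pending -> Ok (Shipped trackingId)
        | Shipped _ -> Error "order was already shipped"

// Imperative shell: a C#-friendly, mutable facade over the core.
type OrderService() =
    member val Status = Pending with get, set
    member this.Ship(trackingId: string) : bool =
        match Order.ship trackingId this.Status with
        | Ok next ->
            this.Status <- next
            true
        | Error _ -> false
```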
Thankfully what keeps F# going is the wonderful community around it, not Microsoft. I know some people (outside of Microsoft) have worked on a standalone F# compiler, but it's still at a very early stage. Maybe one day.
Although you inevitably end up writing some OOP code in F# when interacting with the .NET ecosystem, F# is a really good OOP language. It's also concise, so I don't spend as much time jumping around files switching my visual context. It feels closer to writing Python.
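As a small illustration (all the types here are invented), an interface plus an implementing class takes only a few lines:

```fsharp
type IGreeter =
    abstract Greet: name: string -> string

type ConsoleGreeter(prefix: string) =
    interface IGreeter with
        member _.Greet name = sprintf "%s, %s!" prefix name

let greeter = ConsoleGreeter("Hello") :> IGreeter
printfn "%s" (greeter.Greet "world")
```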
The C# team admits to looking at how F# features work, but it also keeps trying to make it clear that C# doesn't have a goal of entirely eating F#.
C# still doesn't see itself as a functional programming language, even as it has added so many functional features. It may never get first-class currying or broader ideas like generalized computation expressions, for instance. It certainly won't get F#'s cleaner syntax, with fewer mandatory semicolons and whitespace-based nesting rather than curly brackets.
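A quick sketch of what those two features mean in practice (using only built-in language machinery; the names are mine):

```fsharp
// Every F# function is curried by default, so partial application
// needs no special syntax.
let add a b = a + b
let addTen = add 10          // partially applied
printfn "%d" (addTen 32)     // 42

// A computation expression: the same let!/return machinery works for
// async, option, result, and any builder you define yourself.
let total =
    async {
        let! x = async { return 40 }
        let! y = async { return 2 }
        return x + y
    }
printfn "%d" (Async.RunSynchronously total)
```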
F# probably isn't going to disappear for a lot of similar reasons that GHC (the Glasgow Haskell Compiler) didn't disappear when F# was started (nor when key contributors left Microsoft). F# often already sees more outside open source contributors than contributions from Microsoft employees.
They killed off VB, and if I recall the announcement correctly, it noted that VB statistically had a larger user base (by Microsoft's metrics) than F#. There are a number of companies relying on F# for critical operations, and MS has some internal use of F# which, as I understand it, there are no plans to replace, which helps balance out the fear.
The task sounds similar to descriptions in the API space. People figured LLMs would be awesome at annotating API specs with descriptions that are so often missing.
Truth is, everyone is realising it's a bit the opposite: the LLMs are "holding it wrong", making a best guess at what the interfaces do without doing even slightly deeper analysis. So instead, you want humans writing good descriptions specifically so the LLM can make good choices about how to piece things together.
It's possible you could set it off on the labelling task, but anecdotally, in my experience it will fail when you need to look a couple of levels deep into the code to see how functions play with each other. And again, IMO, the big risk is getting a label that _looks_ right but is actually misleadingly wrong.
With regard to API specs, if you have an LLM take a swing at it, is it adding value, or is it a box-ticking exercise because some tool or organization wants you to document everything in a certain way?
If it's easy to generate documentation, and/or if documentation is autogenerated, people are also less likely to actually read it. Worse, if that comment is then fed to another LLM to generate code, it could get things even more wrong.
I think that at this stage, all of the programming best practices will find a new justification: LLMs. That is, a well-documented API will give better results when an LLM takes a swing at it than a poorly documented one. The same goes for code and programming languages: use straightforward, non-magic code for better results. This was always true, of course, but for some reason people have pushed it into the background or think of it as a box-ticking exercise.
My take on AI for docs: it's good, but you need to have a human review it.
It’s a lot easier to have someone who knows the code well review a paragraph of text than to ask them to write that paragraph.
Good comments make the code much easier for LLMs to use, as well. Especially in the case where the LLM generated docs would be subtly misunderstanding the purpose.
LLM-assisted reverse engineering is definitely a hard problem but very worthwhile if someone can crack it. I hope at least some "prompt engineers" are trying to make progress.
From the wording, they are comparing against all electric vehicles, with no specific model year, called in for the routine inspection (~MOT).
There is a small note explaining that the Model 3 was introduced in Denmark in 2019 (MY 2020), and therefore 2024 is the first time the M3 has been called to the routine inspection (periodiske bilsyn), hence the specific focus on that model year for this article.
A pleasant surprise seeing an Alastair Donaldson-supported paper. What I liked most at Imperial was the number of professors pushing for things that sat between the fully theoretical world and the more practical engineering world. From a first quick skim, this paper seems to hit that right on the head. I've been wanting to see how I could introduce more of the fun of formal verification into my job, so I'll be digging deep into this one for some inspiration.
Well worth a quick trip to the source to see how it's implemented. After all, how would the author go about packaging 4B if statements, like the OP?
He could split it into packages up to the size limit, each covering a range of numbers it can detect. Add a note to download the relevant package if the number is out of range of every installed package.
One package should probably handle a range of one million. Then it's just 4,000 packages to install. You wouldn't even notice that in an average JS project.
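A tongue-in-cheek sketch of the range-to-package mapping (all names invented, in the spirit of the joke):

```fsharp
// Map a number to the name of the range package that can classify it.
let packageFor (n: uint32) =
    sprintf "is-even-range-%d" (n / 1_000_000u)

// 4,294,967,295 / 1,000,000 -> roughly 4,295 packages for the full uint32 range.
printfn "%s" (packageFor 123_456_789u)   // "is-even-range-123"
```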
Almost identical pathway here, except with some Spybotics thrown in around the same time as Bionicle. I sometimes wish Mindstorms had that level of world building...
At Unity, Joachim Ante always stayed very close to the underlying engine and its tech, rather than the broader market strategies and moves. Of course, every situation is unique and difficult to compare 1:1.
And we don't necessarily need to step outside the Western world to see it. I still remember Authorized Retailers in France before the expansion of Apple Stores beyond the US. In Denmark, there are still exclusively these "Premium Authorized Resellers".
There are lots of them in Poland even though Apple now has an official presence in the country (although online only; I don't think they have any official Apple Stores yet).
Thanks - but isn't this more like "reallocation"? I thought redistribution was more about taking money from some people and giving it to (or spending it on) other people, of which taxation is one method.
You take money that would have ostensibly gone to the citizens of Tokyo and instead provide it to the citizens of a town of your choice.
Reallocation and redistribution are synonymous here; one is just used in more headlines (probably because people use "distribute" more regularly than "allocate"). Nothing different should be inferred from either word choice, IMO.
Our attention spans have been ruined by the modern age of the internet! If an audience needs to hold a train of thought for more than three minutes, they're gone. Surprise, anecdotes, humor, problem statements, etc. are great tools that stoke the fire of focus.
Another issue is the amount of attention you get to work with. Each surprise moment spurs a spike of attention, which then decays. So one introduction to the problem is not enough; you have to break things down! Introduce a complication to the original problem, then immediately resolve it.
To each their own, but thinking in this pattern for my talks has worked really well for me. Even with mixed audiences (engineers + PMs + management), everyone gets something out of it and finds their own questions to ask. It also makes it easier for them to connect it all to their own problems.