Do the big updates to Elixir's type system help at all? AFAIK the most recent update added a huge amount of coverage that should extend to older code automatically.
I don't want to go into details of my work project too much, but the fundamental issue is that ElixirLS only supports Elixir 1.12+ (at least, last time I checked).
> JOSE: Yeah, so what happened is that it was the old concurrency story in which the Clojure audience is going to be really, really familiar. I’ve learned a lot also from Clojure because, at the time I was thinking about Elixir, Clojure was already around. I like to say it’s one of the top three influences in Elixir, but anyway it tells this whole story about concurrency, right?
I work with Elixir daily and I would concur. Elixir's semantics line up nearly 1:1 with the Clojure code I used to write a few years ago. It's basically what you'd get if you replaced the Lisp brackets with Ruby-like syntax. The end result is a language that is much easier to read and write day to day, with the disadvantage of making macros more difficult. I would argue that they should be difficult, since you should avoid macros until absolutely necessary. Lisps, on the other hand, practically beg you to use macros, as the entire language is optimized for their use.
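A rough sketch of what I mean (made-up example, not from any real codebase):

```elixir
# The Clojure pipeline (->> users (filter :active) (map :email))
# reads almost identically with Elixir's pipe operator:
users = [
  %{active: true, email: "a@example.com"},
  %{active: false, email: "b@example.com"}
]

users
|> Enum.filter(& &1.active)
|> Enum.map(& &1.email)
#=> ["a@example.com"]

# Macros still exist, but the quote/unquote ceremony makes them feel
# heavier than in a Lisp, where the code already is the data structure:
defmodule Control do
  defmacro unless_nil(value, do: block) do
    quote do
      case unquote(value) do
        nil -> nil
        _ -> unquote(block)
      end
    end
  end
end
```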
Wouldn't that still add a lot of value? The person in the loop (sadly, usually) becomes little more than a verifier, but can process a lot more work.
Anecdotally, that's pretty much what I'm hearing about how LLMs are helping programmers get more done, including that the work becomes less enjoyable because it involves more verification and rubber-stamping.
For the business owner, it doesn't matter that the nature of the work has changed, as long as that one person can get more done. Even worse, the business owner probably doesn't care much about the quality of the result, as long as it works.
I'm reminded of how much of my work has involved implementing solutions that took less careful thought, where even when I outlined the drawbacks, the owner wanted it done the quick way. And when the problems arose, often quite a bit later, it was as if that initial decision had never been made.
For my personal tinkering, I've all but defaulted to having the LLM return suggested actions at logical points in the workflow, leaving me to confirm or cancel whatever it came up with. This definitely still makes the process faster, just not as magically automatic.
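In case it's useful, the shape of that loop is roughly this (hypothetical sketch; `apply_action/1` is a stand-in for whatever the action actually does):

```elixir
# Pause on each LLM-suggested action and let a human confirm or
# cancel it before anything is executed.
defmodule ReviewLoop do
  def run(suggestions) do
    Enum.each(suggestions, fn suggestion ->
      IO.puts("Suggested action: #{suggestion}")

      answer = "Apply? [y/N] " |> IO.gets() |> to_string() |> String.trim()

      case answer do
        "y" -> apply_action(suggestion)
        _ -> IO.puts("Skipped.")
      end
    end)
  end

  # Stand-in for actually performing the suggested action.
  defp apply_action(suggestion), do: IO.puts("Applied: #{suggestion}")
end
```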
Also, don't forget the 'memory' feature. As LLM providers get better at tailoring the LLM to the user, and probably at obfuscating the details of how this user-specific memory works, it will be harder to switch to another provider.
The area I was briefly interested in was the fintech/lending space. I'm not an expert, though. I saw some cool ideas come out of it at the time and interviewed several rounds with a company in that space.
After using language models for quite a while, I get the impression that perhaps the riskiest thing to anthropomorphise is the conversational UI that has become the default for many people.
A lot of the issues I'd have when 'pretending' to have a conversation largely disappear when I either keep things to a single Q/A pairing or, at the very least, heavily edit/prune the conversation history. Based on my understanding of LLMs, this seems to make sense even for models trained for conversational interfaces.
So, for example, an exchange with multiple messages where at the end I ask the LLM to double-check the conversation and correct 'hallucinations' works less well than asking for a thorough summary at the end and feeding that into a new prompt/conversation. Repeating those falsities, or 'building' on them with subsequent messages, seems to give them a stronger 'presence' in the context, which in turn can undermine the corrections.
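A minimal sketch of that pattern, assuming a generic chat API (`chat/1` here is a made-up placeholder, not a real client):

```elixir
defmodule FreshContext do
  # Close out the old conversation with a summary request, then seed a
  # brand-new one with only the summary, so earlier wrong turns aren't
  # sitting verbatim in the context.
  def restart_with_summary(history) do
    summary =
      chat(history ++ [%{role: "user", content: "Write a thorough summary of this conversation, keeping all decisions and code."}])

    [%{role: "system", content: "Context from a previous session: " <> summary}]
  end

  # Placeholder: call whatever LLM client you use and return the reply text.
  defp chat(_messages), do: "summary text goes here"
end
```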
I haven't tested any of this thoroughly, but at least with code I've definitely noticed how one wrong snippet can 'infect' the rest of the conversation.