It's difficult to take comments like this in good faith when the Github profile linked on your account prominently features your signature on a letter calling for Richard Stallman to be reinstated to the FSF after his resignation, following his comments defending sex with minors and child pornography.
To an extent, you get this with Datalog, which is an easily embeddable subset of pure Prolog.
I've been spending a ton of time with the language and its implementation through my day job, and I recently spoke about its use as a DSL for embedded knowledge bases: https://www.youtube.com/watch?v=lYLkaOq7WbU
Agreed. I also wish fewer libraries started their own supervision tree, and instead gave you a child spec to drop into your supervision tree. There's definitely use-cases where shipping libraries as an application makes sense, but oftentimes that sort of design causes problems for me, because it means not being able to start multiple copies of the dependency with different configurations.
I think Phoenix PubSub is a perfect example of how libraries should be structured, in that you just need to drop the module + options into your supervision tree, and you have the freedom of starting multiple independent copies of the tree, in different contexts, and with their own configurations: https://hexdocs.pm/phoenix_pubsub/Phoenix.PubSub.html#module...
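To illustrate that drop-in style (the instance names below are invented for the example), two independent PubSub instances can live side by side in one tree, each with its own configuration:

```elixir
# Two independent Phoenix.PubSub instances in one supervision tree,
# each with its own name and options.
children = [
  {Phoenix.PubSub, name: MyApp.InternalPubSub},
  {Phoenix.PubSub, name: MyApp.ExternalPubSub, pool_size: 4}
]

Supervisor.start_link(children, strategy: :one_for_one)
```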
Alternatively, the dependency can start its own supervision tree with any global processes/tables hanging off it from the beginning, and then export a Mod:start_link/1 function for clients to call, which will 1. start a child tree owned/managed by the dep's supervision tree; but then 2. link that child subtree's root into the caller as well.
Such deps are integrated by adding a stub GenServer that calls Mod:start_link/1 in its init/1 callback, and then adding a child spec for that stub GenServer to your client app's supervision hierarchy.
The ssh daemon module in the stdlib works this way. Most connection-pooler abstractions (e.g. pg2, gproc) do as well.
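A minimal Elixir sketch of that pattern (module names like `MyLib` are hypothetical, and `Agent` stands in for a real worker): the library owns a DynamicSupervisor, and its client-facing `start_link/1` starts the worker under that supervisor while also linking it to the caller.

```elixir
defmodule MyLib.Worker do
  use Agent

  # Holds whatever state the caller passes in.
  def start_link(state), do: Agent.start_link(fn -> state end)
end

defmodule MyLib do
  # Normally started from the library's own Application callback.
  def start_tree do
    DynamicSupervisor.start_link(name: MyLib.Supervisor, strategy: :one_for_one)
  end

  # Client-facing entry point: the worker is supervised by the
  # library's tree, but also linked to the caller, so both sides
  # observe its lifecycle.
  def start_link(state) do
    {:ok, pid} =
      DynamicSupervisor.start_child(MyLib.Supervisor, {MyLib.Worker, state})

    Process.link(pid)
    {:ok, pid}
  end
end
```

In a real library you'd wrap the `Process.link/1` call in the stub GenServer described above rather than linking the bare caller.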
Yes! This is a great approach, and I'd be happy to see more examples like this in the wild. It's similar to how Phoenix PubSub works, with the PubSub application starting a pg scope as part of its supervision tree, which client PubSub servers can join if configured to use the pg adapter.
I was a little bit flippant in my initial comment, but my main criticism was of libraries that don't support any sort of hooks like this into their supervision strategy, and instead rely entirely on a global and static supervision tree, usually configured using app config.
I'm 50-50 on that one (I used to agree with you more, but have since retreated a bit). This may be an overly nitpicky detail, but you sort of want your own sup tree to not necessarily have a differently scoped "microservice" tied to it, both in terms of failure domains and for plain visual organization in your observer/livedashboard. For the 90% use case (e.g. http process pools) an independent sup tree is correct, but to your points:
1. it would be nice to have a choice. The library-writer should think about their users and choose which case is more correct. And make it opt-out and easy (let's say 2-3 loc) to implement the "other case", and spelled out explicitly in the readme/docs landing page.
2. PubSub indeed made (IMO) the correct choice when it migrated over from being its own sup tree to moving into the app's sup tree.
If I understand you correctly, you're suggesting that what I am depending upon here is convenient but problematic?
My understanding is not yet sophisticated enough to follow your point about "not being able to start multiple copies of the dependency with different configurations".
Do you have any explanatory examples that could help me (and presumably others like me)? Thanks. m@t
Problematic is probably too strong of a term, and I think I'd use the word inflexible instead.
I want to be clear though: my issue isn't with applications -- the functionality you're talking about is powerful and useful -- it's purely with the tendency of starting a static and global supervision tree as part of a dependency: see some of the other comments in this thread for some neat examples of how applications like ssh and pg2 handle supervision.
When libraries are written like this, they usually start everything up automatically, and pull from their application environment in order to configure everything. This means that this configuration is global and shared amongst all consumers of the library.
Imagine an HTTP client, for example, that provides a config key for setting the default timeout. This key would be shared among all callers, and so if multiple libraries depended on this client, their configurations would override each other.
Fortunately, Elixir now recommends against libraries setting app config, so this problem is partially mitigated, but it's still a concern within your app: if I'm calling two different services, I want to use different timeouts for each, based on their SLA, so having a global timeout isn't helpful.
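As a concrete (hypothetical) illustration of the collision, imagine two of your dependencies both setting the same global key in config — whichever loads last silently wins, and every caller shares the survivor:

```elixir
# config.exs — a global key for a hypothetical :http_client app.
config :http_client, timeout: 5_000   # what library A wants
config :http_client, timeout: 30_000  # what library B wants; A loses
```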
Instead, in this situation, I'd prefer something like what Finch provides, where I'm able to start different HTTP pools within my supervision tree, for different use-cases, and each can be configured independently: https://github.com/keathley/finch#usage
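A sketch of what that looks like (pool names and URLs are invented; see the Finch docs for the full set of options):

```elixir
# Two independently configured HTTP pools in your own supervision tree.
children = [
  {Finch, name: MyApp.FastFinch, pools: %{default: [size: 10]}},
  {Finch, name: MyApp.SlowFinch, pools: %{default: [size: 2]}}
]

# Per-call, per-pool timeouts instead of one global setting:
Finch.build(:get, "https://fast.internal/health")
|> Finch.request(MyApp.FastFinch, receive_timeout: 1_000)
```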
Another approach would be to do something like what ssh does, and have the Finch application start a pool supervisor automatically, but then provide functions for creating new pools against that supervisor, and linking or monitoring them from the caller.
There's a few other techniques you can use too, with different tradeoffs and benefits: like Ecto's approach of requiring that you define your own repo and add that to your tree. Chris Keathley describes some of those ideas here: https://keathley.io/blog/reusable-libraries.html
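For reference, Ecto's version of that approach: you define the repo module yourself and place it in your own tree, so nothing starts until you decide where and how it starts.

```elixir
defmodule MyApp.Repo do
  use Ecto.Repo, otp_app: :my_app, adapter: Ecto.Adapters.Postgres
end

# In your application's start/2:
children = [MyApp.Repo]
Supervisor.start_link(children, strategy: :one_for_one)
```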
Global trees like this are also harder to test, especially if they rely on hardcoded unique names, and usually restrict you to synchronous tests, since you can't duplicate the tree for every test and run them independently of each other.
Again though, I want to stress that running processes in the library's application is not my problem: it's just not having any control over when or how those processes are started.
I'm just responding on my phone, and I need to run for a few hours, but feel free to ask for more info or reach out. I'm always happy to talk about this stuff! I enjoyed your article, and I apologize if my initial comment came across as an attack on your core points.
No indeed, I did not perceive it as an attack, rather as hinting at concerns that I am not aware of and I'm grateful for your comment and the links (and thank you for your compliment).
Reading what you've written I wonder if this is about configuration rather than the nature of a library starting a process per se.
In my case there is no configuration; the agent state is a pure counter, and I think firing it off is harmless, as other users of the library would just bump the counter value. Your point about testing is a subtle one; I'm not 100% sure I have the right mental picture yet (something I struggle with most of the time anyway).
What I think you are getting at is that a library starting a process that does have configuration around how it works should be less automatic, giving the user a chance to make choices about how it works.
A lot of what I'm talking about has to do with configuration, but reuse is another big element. Your example has no configuration, and so is good in that regard, however your example is not reusable, in the sense that it's only possible for a single counter to exist.
I realize this is a contrived example, because you were trying to keep things simple, but if I needed two distinct atomic counters in my app, then I wouldn't be able to use Ergo, as it's currently implemented, because the application only starts a single counter, and doesn't provide any capabilities for starting additional counters.
You could change Ergo to get around this, possibly by instead running a dynamic supervisor that can start named counters under it, using something like `Ergo.create_counter/1`, but this would only address this specific use case.
To go back to my last comment, if you instead exposed, for example, a `__using__` macro that modules could use to define new counters, then callers would be able to integrate as many counters as they needed, wherever and however they required, into their supervision trees.
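A minimal sketch of that `__using__` approach (the `Counter` module and its API are made up for illustration, with an Agent holding the count): every module that uses it becomes an independent, supervisable counter.

```elixir
defmodule Counter do
  defmacro __using__(_opts) do
    quote do
      use Agent

      # Each using module is its own named counter process,
      # with its own child spec for the caller's supervision tree.
      def start_link(_opts \\ []) do
        Agent.start_link(fn -> 0 end, name: __MODULE__)
      end

      # Increment and return the new value.
      def bump, do: Agent.get_and_update(__MODULE__, &{&1 + 1, &1 + 1})

      # Read the current value.
      def value, do: Agent.get(__MODULE__, & &1)
    end
  end
end

# Callers can define as many distinct counters as they need:
defmodule RequestCounter, do: use Counter
defmodule ErrorCounter, do: use Counter
```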
This ties back to the testing point too: if the process is a singleton, managed by the application, then you can only run one test against that process at a time in order to isolate the state for those tests, and you need to ensure you properly clean up that state between tests. If the library instead allows you to start the processes yourself, then each test can use `start_supervised!` to start its own isolated copy of the process, which will be linked to the test's process and automatically cleaned up once the test finishes.
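A sketch of that test setup (the inline `Counter` agent is a stand-in for whatever process the library exposes): because each test starts and owns its own copy, the tests can run async.

```elixir
defmodule MyLib.CounterTest do
  use ExUnit.Case, async: true

  # Hypothetical stand-in for the library's process.
  defmodule Counter do
    use Agent
    def start_link(_opts), do: Agent.start_link(fn -> 0 end)
  end

  test "each test gets its own isolated counter" do
    # start_supervised! links the process to this test and tears it
    # down automatically when the test exits.
    pid = start_supervised!(Counter)
    assert Agent.get(pid, & &1) == 0
  end
end
```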
If a library is not written like that, it's poorly designed. It could provide one globally started version as a convenience, but not being able to start it multiple times would be a big no.
This has historically been fairly common among a lot of the early Elixir libraries, and I'd imagine that's a byproduct of many of the early adopters coming from the Ruby ecosystem, and not having prior experience with the patterns used in Erlang. I think some of the early confusion surrounding how application config should be used also led to some misguided decisions early on.
Fortunately it's something that I've seen improve over time, but it's a pain-point I've run into with a lot of dependencies, so I try to call it out when I see it.
I don't think this is a fair argument. Sure, the shirt probably isn't appropriate for a business casual environment, but that doesn't change the fact that this behaviour is sexual harassment: made worse by the fact that the recruiters are acting from a position of power.
It's also worth pointing out that these events occurred at Black Hat, in Las Vegas: what passes as acceptable business attire there is not the same as what would fly in an office, and I guarantee there were plenty of people wearing far more risqué shirts without facing any harassment.
It's easy to try to pin some responsibility on the woman here, but that ignores the fact that this sort of language and culture is extremely common at Black Hat and DEFCON, and a shirt like she was wearing would not have been out of place at the conference. Hell, I wouldn't be surprised if she won the shirt at the conference.
From the year before, here [0] is a sign from the vendor area at Black Hat, featuring an underwear clad model with the caption: "You know you're not the first... but do you really care?"
Similarly, to this day, DEFCON, held one week after Black Hat, and likely the largest security conference in the world, still holds a "Hacker Jeopardy" competition, featuring strippers who remove their clothing as contestants answer questions correctly.
I only say all of this because I think a lot of context is being lost in this article, by people who haven't been to these conferences: the women's shirt wouldn't have been what singled her out here, her gender was, and for the recruiters to harass her for that is unacceptable.
Ms. Mitchell, by wearing the t-shirt with overtly sexual language, was participating in and engendering the ostensible 'sexualized culture' herself, and absolving her of complicity in her own actions is maybe actually the sexist part.
If a man wore that shirt, I think we would declare it 'demeaning and sexist' without a doubt.
I fully agree, it's all too much, people should be more professional, and the Blizzard guys should not have referenced it at all.
Your third paragraph doesn't add up. If she is wearing the 'dirty t-shirt', then she is not a 'victim' of what she herself is perpetuating, unless you think this person is unintelligent? I don't understand.
I don't think this is the story we are looking for, and I don't think legal action is warranted against people for comments referring to someone's own t-shirt. It's also upsetting that this nuanced information is not in the article.
'Sexism' is real, it happens, and it's important, and so we can't just flail around with bad information and journalism trying to push narratives. Facts matter and if people want to 'move the needle' it would behove us all to get the story straight.
Finnegans Wake absolutely exists to be read, or even sung, out loud. If you've never gotten the appeal, just try listening to the start of an audiobook version. There's a YouTube series covering the first few chapters that I adore: https://www.youtube.com/watch?v=6HgCjtd2iPU
Plenty and plenty, yes. Now, what's a solution that would work for everyone, males, females, and others alike?
This is very clearly a professor acknowledging the problems that arise due to interindividual biology in work environments. Unlike PC supporters hiding the issue under the blanket of professionalism.
I think an obvious solution would be to exclude anyone who is incapable of managing the bare minimum level of professionalism that's required in a workplace.
If someone is incapable of managing their feelings in a workplace then maybe they don't belong in one, and their colleagues should not be the ones who are punished for that.
When you're, like, 2 years old, and you want a toy in the sandpit, you grab it. And if some other kid has it, too bad for them, because you want it, and that's what's important.
Somehow, most people manage to figure out that this kind of behavior isn't appropriate, and hide the issue under the blanket of being-a-decent-human-being.
I don't understand how dealing with sexual or romantic feelings is any different.
My highschool physically removed the right mouse button from the mice, because we were right clicking to make text files that we'd rename as batch files to get a command prompt open.
Some people would just bring their own mouse in to get past the defences.
Oh and it wasn't. That was the least of the school's problems though.
They also used a surveillance system called LanSchool, which sent out all of its commands entirely unencrypted and unauthenticated, so people would spoof the remote takeover command and steal exams from teachers' accounts. It ended up being a whole thing my senior year.
This thread is making me remember how incompetent my old school district used to be with technology. Once upon a time, a friend of mine got detention for opening a command line and running `tree`. The teacher said that he was hacking the computer.
And all of this is to say nothing of the frankly embarrassing problems that have plagued JWT as a result of algorithm agility (alg=none). Removing agility from JWT wouldn't make it a good specification, but it would certainly make it a better specification.
This is a really tough question to answer, because the answer depends on what you're using JWT for. JWT crams as much functionality into the format as possible, and most of that functionality isn't needed for most use cases. This means that offering an alternative requires knowing some context about what you need out of JWT in the first place.
That being said, for most purposes, you can do worse than using either mutual TLS or Macaroons [0]. As always with cryptography though, the devil is in the details, so for a more thorough discussion, check out @tptacek's "A Child's Garden of Inter-Service Authentication Schemes" [1]. It's one of my favourite treatments of the topic, and discusses the tradeoffs of a few different techniques for different use-cases.