Hacker News | new | past | comments | ask | show | jobs | submit | tobias2014's comments

For any pair of space-like separated events you can find reference frames in which they happen in different orders. For the time-like situation you described, the order does exist within the light cone, which is to say that causality is preserved.


You can still order them by their spacetime interval relative to a reference event, even for space-like separated events.

It allows distinct elements of the set to share the same value, but so does using time alone. It also lets every observer agree on the ordering.

Assigning a distance function to elements of a set is in fact a common way to do that. It doesn't work with just a time coordinate or a space coordinate, because that's effectively a Euclidean metric.

You just have to contend with a few nonintuitive aspects but it’s not so bad.


I think you meant compared to a reference observer? Events are not really independent of observers. Consider the case in baseball where a runner and the baseman tag the base at the "same" time from opposite sides of the base. Assume they move at equal speeds. If the umpire is closer to the baseman, then the baseman tagged it first; if he is closer to the runner, then the runner tagged it first. The "event" of "touching the base" has two possible outcomes depending on where the observer stands, and there is no "view from nowhere", no observer-free view, that we can reference.


No, I mean a reference event, though you bring up an interesting subtlety. Essentially I just mean an event that definitely happened (a particle decay, a supernova, an omnidirectional radio signal, etc.) which serves as an origin point on the spacetime manifold. You are right that, technically, we need at least one observer to define the coordinates of that event initially. Once that's done, however, ALL observers can order events according to the spacetime interval between any event they observe and the reference point (transformed into their coordinates), and they will ALL agree on that ordering.

A "good" reference event here is one that observers can compare. I think pulsar pulses counted from some epoch are a perfectly good reference, assuming we could communicate that omnidirectionally. The spacetime interval between any event in any observer's reference frame and a reference event in their past light cone is something that ALL observers who can communicate will always agree on. Observers may disagree about how many pulses have occurred by a particular time in their coordinate time, but that doesn't matter: as long as they compare spacetime intervals against a particular pulse count, no disagreement will occur. That is, the spacetime interval between the 3rd pulse and some event will always be the same, since it's a Lorentz-invariant scalar quantity (a rank-zero tensor).

Your baseball analogy has a flaw: no properly defined "event" in spacetime has dual outcomes. The events in that case are "the baseman tagged the base" and "the runner tagged the base". "X tagged the base first" is NOT an event; it's a comparison between events, and one done in a particular observer's time coordinate, which is not the correct procedure here. No Lorentz transformation between observers within the light cone will produce disagreement that those events happened, though observers may disagree about which happened first in their coordinate time.

(Note the issue of observers needing to be in the same light-cone is a superficial one. I haven't defined that precisely, but I don't need to: If observers can communicate at all, they will agree, upon communication, that an event is within their past light cone. In the context of server synchronization, this will always be true.)
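For concreteness, here's a minimal numerical sketch of that invariance claim (Python, 1+1 dimensions, units with c = 1, arbitrary made-up events, sign convention s² = Δt² − Δx²): every boosted frame computes the same interval to the reference event, and hence the same ordering.

```python
import math

def boost(t, x, v):
    """Lorentz-boost event (t, x) into a frame moving at velocity v (c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval2(e1, e2):
    """Squared spacetime interval s^2 = dt^2 - dx^2 between two events."""
    dt, dx = e1[0] - e2[0], e1[1] - e2[1]
    return dt * dt - dx * dx

reference = (0.0, 0.0)  # the agreed-on reference event (e.g. a particular pulsar pulse)
events = [(1.0, 3.0), (2.0, 0.5), (4.0, 4.5)]  # arbitrary events, some space-like separated

# Each observer computes intervals to the reference event in their own frame.
for v in (0.0, 0.5, -0.9):
    coords = [boost(t, x, v) for t, x in events]
    ref = boost(*reference, v)
    intervals = [interval2(e, ref) for e in coords]
    order = sorted(range(len(events)), key=lambda i: intervals[i])
    print(v, [round(s, 9) for s in intervals], order)
```

Every frame prints the same intervals and the same ordering, even though the coordinate times of the events differ frame to frame.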


I guess when you use it for generic "problem solving" and brainstorming solutions, this is great. That's what I use it for, and Gemini is my favorite model. I love when Gemini pushes back and suggests I am wrong while explaining why. Either it's right, and I'm happy for that, or I can re-prompt based on the new information in a way that rules out the mistake Gemini made.

On the other hand, I can also see why Claude is great for coding, for example: by default it is much more "structured". One can probably change these default personalities with some prompting, and many of the complaints in this thread about either side assume that you can use the same prompt for all models.


And who believes that the difference between 91.9% and 92.4% is significant in these benchmarks? These clearly have margins of error that are swept under the rug.
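As a back-of-the-envelope check (assuming, hypothetically, a benchmark of 1000 items; real sizes vary), the 95% binomial confidence intervals around 91.9% and 92.4% overlap almost entirely:

```python
import math

def ci95(p, n):
    """95% normal-approximation confidence interval for a pass rate p on n items."""
    se = math.sqrt(p * (1 - p) / n)
    return p - 1.96 * se, p + 1.96 * se

n = 1000  # hypothetical benchmark size
for p in (0.919, 0.924):
    lo, hi = ci95(p, n)
    print(f"{p:.1%}: [{lo:.1%}, {hi:.1%}]")
```

With n = 1000 each interval is roughly ±1.7 percentage points wide, more than three times the 0.5-point gap between the two scores.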


How is "If" being a function even a drawback? It is largely seen as something desirable, no? I would see it as a huge advantage that enables very powerful programming and metaprogramming techniques.


One potential issue is that unlike most other languages, it doesn't create a new scope. But almost nothing in Mathematica introduces a new scope, and Python also uses unscoped "if"s, so it's rarely much of a problem in practice.
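For comparison, a quick sketch of the Python behavior mentioned above: an "if" body binds names directly in the enclosing scope.

```python
x = "outer"
if True:
    x = "rebound"   # same name: rebinds the enclosing variable
    y = "leaked"    # new name: created in the enclosing scope, not a local one
print(x)  # -> rebound
print(y)  # -> leaked

if False:
    z = "never bound"
try:
    print(z)
except NameError:
    print("z was never bound")  # -> z was never bound
```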

But with pattern matching, you almost never need to use "If[]" anyways:

  fib[0] := 0
  fib[1] := 1
  fib[n_ /; n < 2] := Undefined
  fib[n_Integer] := fib[n - 1] + fib[n - 2]

  fib[8]
  (* Output: 21 *)

  fib /@ Range[10]
  (* Output: {1, 1, 2, 3, 5, 8, 13, 21, 34, 55} *)

  fib[-1]
  (* Output: Undefined *)

  fib["a string"]
  (* Output: fib["a string"] *)


This is not true. Mathematica has the concept of contexts. Each notebook can have its own unique context, and Mathematica packages create their own contexts too; we are not talking about Module here, which is for local variable scoping. Packages and contexts provide the isolation you are looking for, and they have been around since the initial Mathematica 1.0 in 1988 (!). https://reference.wolfram.com/language/ref/Context.html

The same goes for your criticism of error handling and control flow: https://reference.wolfram.com/language/guide/RobustnessAndEr...


I've been a postdoc at a few universities and labs, and a Mathematica license always came as part of either the university or the department. It might not be relevant in some disciplines, but such broad licensing suggests it must be used a lot (it is a tool I use daily as a theoretical physicist).


Meanwhile, companies exist that have essentially built layers in front of chatbots: masking or filtering sensitive data, forwarding the masked query, then unmasking the response before returning it to the user (e.g. https://www.liminal.ai/ ).

Ideally you shouldn't paste sensitive information into the chat in the first place. But when such companies can guarantee certain types of compliance, it might be better to offer this than to let people inside companies use chatbots uncontrolled.
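The mask/forward/unmask pattern is simple to sketch (this is a hypothetical toy, not how liminal.ai actually works; a real system would cover far more PII types than the single email regex here):

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text):
    """Replace emails with placeholder tokens; return masked text plus the mapping."""
    mapping = {}
    def repl(m):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = m.group(0)
        return token
    return EMAIL.sub(repl, text), mapping

def unmask(text, mapping):
    """Restore the original values in text returned from the chatbot."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

masked, mapping = mask("Contact alice@example.com about the invoice.")
print(masked)                   # the chatbot only ever sees the token
print(unmask(masked, mapping))  # original restored on the way back
```

The chatbot sees only `<PII_0>`; the mapping never leaves the masking layer.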


Oniux seems like an "officially" supported tool similar to orjail (which hasn't received a commit in four years but still works great as a shell script built on iptables/iproute tools [1]). Orjail also has an option to run with firejail for further isolation, which seems to be a feature Oniux still lacks.

[1] https://github.com/orjail/orjail/blob/master/usr/sbin/orjail



You can use firejail for network isolation; it can run applications in a new network namespace [1]. I'm using this to run applications over Tor and make sure that nothing leaks.

[1] https://firejail.wordpress.com/documentation-2/basic-usage/#... "A network namespace is a new, independent TCP/IP stack attached to the sandbox. The stack has its own routing table, firewall and set of interfaces."


I saw there's an option to match on a cgroup among nft meta expressions (though I've never tried it). It could be enough if you just want per-process firewall rules, but not if you want to configure an additional namespace with its associated interfaces, routing, and NAT.


Yes. You could match packets based on username or even SELinux labels.

You could also set a special mark on packets from each container and then filter based on that. The Internet is surprisingly thin on nft resources; I spent a few weeks learning how to write them. Definitely not for the average consumer.


This might turn into a debate over defining "simplest", but I think the ensemble/statistical interpretation is really the most minimal in terms of fancy ideas: it needs neither "wavefunction collapse" nor "multiverses".

