
A lot of people have bought real, legal, government IDs from places like Palau[1], which sells them for around $300, and no crime is committed by doing so.

[1] https://rns.id/app/palauidinfo


ProGuard can also apply optimizations while it obfuscates. I think a good JVM will eventually do most of them itself, but they can help code size and warm-up. I'm guessing that as JVMs get better and everyone becomes less sensitive to file sizes, this will matter less and less.

> The comptime feature is probably the most exciting thing I've seen in a while

Just please do not make the mistake of believing that it is unique to Zig. Factor brings the best of Forth and Lisp together, so meta-programming, i.e. extending the language, is quite easy: you can extend the syntax or add new constructs with little effort. An example can be found here: https://rosettacode.org/wiki/Compile-time_calculation#Factor but it barely scratches the surface. It does not mention `<< ... >>`, which evaluates code at parse time; that is, you can execute code before the words in a source file are compiled.

https://docs.factorcode.org/content/article-literals.html

https://docs.factorcode.org/content/article-syntax-literals....

https://docs.factorcode.org/content/article-syntax-immediate...

https://docs.factorcode.org/content/word-flags{,literals.htm...

I remember when I did something like:

  SYMBOL: aligned-16-char

  <<
  : 16-byte-alignment ( c-type -- c-type )
      16 >>align 16 >>align-first ;

  char lookup-c-type clone 16-byte-alignment \ aligned-16-char typedef
  >>
when I was working on some binding.

You could use it in a struct like:

  STRUCT: foo
    { bar aligned-16-char[16] } ;
Or something like this is pretty typical (when writing bindings/ffi):

  << "libotr" {
      { [ os windows? ] [ "libotr.dll" ] }
      { [ os macosx? ] [ "libotr.dylib" ] }
      { [ os unix? ] [ "libotr.so" ] }
  } cond cdecl add-library >>
Those are just some examples, but it is pretty powerful. It supports (and encourages) interactive development. Profiling and debugging are a breeze, with highly detailed and useful output; you can easily disassemble words (functions), you can get a list of how many times malloc has been called in some circumstances, there is runtime code reloading (a vocabulary implements automatic reloading of changed source files[1]), and so on. And on top of all this, you can compile your program to an executable that is less than 4 MB!

And of course you do not have to do stack shuffling at all; you can easily use locals, which is useful for math equations and whatnot. Plus, did you know that the Factor compiler supports advanced optimizations that take advantage of the type information it can glean from source code? The typed vocabulary (yes, it is not part of the language, but implemented as a vocab) provides syntax that lets words declare checked type information about their inputs and outputs, improving the performance of compiled code.
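
For instance, a minimal sketch of what a typed definition looks like (the word name is mine, not from the docs):

  USING: typed math ;

  ! Hypothetical example: the declared float types let the compiler use
  ! unboxed float arithmetic inside the word, while callers that pass
  ! anything else get a checked type error at the word boundary.
  TYPED: add-floats ( a: float b: float -- c: float ) + ;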

I would like to repeat this, because if it were not the case I would never have bothered with it: you can create a single executable file that is less than 4 MB in size if you wish! Of course it encourages interactive development, but still, it is great to have an optimizing compiler that can do all this easily. And mind you, this part is also written in Factor itself and is available as a vocabulary (vocab).

[1] There is a vocabulary named io.monitors, and loaded source files across all vocabulary roots are monitored for changes. You can read more about it here: https://docs.factorcode.org/content/article-vocabs.refresh.h...

---

So all in all, I think Factor is great. I was shocked at how modern its libraries are (and how many there are), especially considering that only a handful of people have been working on it. Slava Pestov created the language, and some people joined him later on. If you want to learn more about it, start here: https://concatenative.org/wiki/view/Factor. There are videos, there are papers, there are lots of resources to get started. :) The language is missing a couple of things, but they are being worked on.


I just compiled "zed" with "cargo build --release" and not only did it pull in >2000 dependencies, the resulting executable file is literally 1.4G. The debug build is 1.2G.

  $ pwd
  /tmp/zed/target/release
  $ ls -lh ./zed
  -rwx------ 2 john john 1.4G Aug 28 17:10 zed
---

  $ dut zed/ | sort -h
   598M    0B   | | /- webrtc-sys-0a11149cbc74bc90
   598M    0B   | | | /- out
   598M    0B   | | |- webrtc-sys-090125d01b76a5e8
   635M  160M   | |   /- s-hal7osjfce-1h7vhjb-4bdtrsk93m145adnqs17i9dxe
   635M  160M   | | |- project-06kh4lhaqfutk
   641M  161M   | | /- project-1ulvakop54j8y
   641M  161M   | | | /- s-hal5rdrth3-0j8nxqq-d0wsc7qnin39797z4e8ibhj4w
   1.1G  1.1G   | | /- zed-ed67419e7a858570
   1.1G  1.1G   | |- zed
   1.3G  1.3G     | /- zed-64b9faeefdf3b7df
   1.3G  1.3G     |- zed
   1.4G    0B     |- build
   2.2G    0B   | |- build
   7.9G  1.4G     /- deps
   9.4G    0B   |- release
    14G  2.9G   | |- incremental
    19G  4.2G   | /- deps
    33G    0B   /- debug
    42G    0B /- target
    42G    0B zed
Summary:

  $ du -h ./target/debug/deps/
  20G     ./target/debug/deps/
  $ du -h ./target/release/deps/
  8.0G    ./target/release/deps/

  $ du -h ./target/debug/zed
  1.2G    ./target/debug/zed
  $ du -h ./target/release/zed
  1.4G    ./target/release/zed
This is on a whole new level of bloat, both with regard to dependencies AND the resulting executable file(s) (EDIT: the executables are unstripped).

Any explanations as to why "cargo" does not seem to re-use libraries (dependencies) in a shared directory, why it needs >2000 dependencies (which I see being downloaded and compiled), or why the release executable is 1.4G unstripped while the debug one is smaller?


You made several good points. I agree with some of them and I think the remainder is a matter of framing. Below is a hopefully direct, precise reply that (a) acknowledges the valid objections, (b) restates the crux of your original argument cleanly, and (c) points the discussion toward the practical/architectural distinction that matters for verification and certification.

Chronology does not overturn the substantive claim I was making. My position is not: "SPARK is merely an older tool and therefore superior".

My position is:

  1. Ada and SPARK represent a continuity of purpose: Ada was designed from early on for high-integrity, safety-critical systems (strong typing, conservative semantics, explicitness), and SPARK is the engineering of that design into a verifiable subset and toolchain. SPARK was not an afterthought bolted onto Ada to make it provable; it is the outgrowth of Ada's long-standing high-integrity philosophy and of work done in the 1980s/1990s to make that philosophy formally checkable[1][2][3][4][5].
  
  2. The practical distinction that matters in real-world certification and high-assurance engineering is not merely "does the language have an escape hatch" or "did a verification subset exist earlier or later". It is: does the language + toolchain provide an integrated, auditable, certifiable workflow that produces machine-checked caller-facing proofs and supplier artifacts acceptable to certification authorities today? SPARK + GNATprove + Ada toolchain do that now. The Rust ecosystem offers "promising" research tools and engineering efforts, but it does not yet provide the same integrated, certification-proven toolchain and supplier artifacts out of the box.

  3. From an academic or R&D viewpoint, your point about "you could build a Rust subset with verification primitives" is correct and constructive: people are doing exactly that. But the practical, operational gap between "research-verified components" and "industry-auditable, whole-program certification" is material. That gap is where the claim "SPARK is still the practical option for whole-program certifiable verification today" rests.
A few clarifications that may help clear up confusion in the thread:

  - When I say SPARK provides "industrial-strength, auditable" verification I mean the whole artifact chain: language contracts or annotations, a prover that produces verification conditions and discharge evidence, and vendor/supplier materials practitioners can use in DO-178C / EN-50128 / ISO 26262 certification efforts. That is different from saying "SPARK can verify X and Rust cannot verify X in any research setting".

  - When you observe that SPARK originally used annotations/comments and later adopted Ada 2012 syntax, that is historically correct and expected. The fact that the syntax later became native to Ada strengthens the point that Ada and SPARK are conceptually aligned, not orthogonal.

  - When you say "nothing prevents Rust from gaining a formal spec or evolving toward verification", that is also true; there are active efforts in that direction. The question I keep returning to is: how much engineering, process, and qualification effort is required before Rust + tools equals SPARK's current production story? My answer is: a lot. It may not be impossible, but it is definitely not trivial, and not the same as "Rust already provides that".
TL;DR:

The relevant point remains that Ada+SPARK is an integrated, production-ready verification ecosystem today, whereas Rust is a promising base for verification research that has not yet produced the same integrated, certifiable toolchain.

---

> Good further discussion might involve looking at the Ada 83 rationale to see if you can find support for a claim that it was designed for verification. It's a fair bit of text to look through and interpret and a quick search didn't turn up anything obvious, but you might be better equipped to handle that than me.

Ada 83 Rationale: it explains the design goals (reliability, maintainability, suitability for real-time/embedded and defense systems) and the language design decisions (strong typing, modularity, explicitness)[1].

Origin and requirements driving Ada: describes the DoD process (HOLWG / Steelman) that produced a set of requirements targeted at embedded, safety-critical systems and explains why a new language (Ada) was commissioned. It shows Ada was created to serve DoD embedded-system needs.[2]

Standardization process and the language rationale across revisions (shows continuity of safety/verification goals)[3].

Guide for the Use of the Ada Programming Language in High-Integrity / Safety-Critical Systems (technical guidance / conformance): this guide and related validation/ACATS materials describe expectations for Ada compilers and their use in safety-critical systems, and explain the certification-oriented aspects of Ada toolchains. It is useful for showing the ecosystem and qualification emphasis, as per your request[4].

[1] https://archive.adaic.com/standards/83rat/html/ratl-01-01.ht...

[2] https://archive.adaic.com/pol-hist/history/holwg-93/holwg-93...

[3] https://www.adaic.org/ada-resources/standards/ada-95-documen...

[4] https://www.open-std.org/JTC1/SC22/WG9/n350.pdf

[5] See my other links with regard to SPARK's history.


Ada has that too, for what it is worth: https://learn.adacore.com/courses/intro-to-ada/chapters/cont... and https://en.wikibooks.org/wiki/Ada_Programming/Contract_Based.... These contracts can be verified at compile time.
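
A minimal sketch of what such a contract looks like (hypothetical function, not taken from the linked pages); the checks can be enabled at run time via assertions, or proved statically with SPARK/GNATprove:

  -- Hypothetical example: the contract is part of the specification.
  function Clamp (X, Lo, Hi : Integer) return Integer
    with Pre  => Lo <= Hi,
         Post => Clamp'Result in Lo .. Hi;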

But what I love the most is: https://news.ycombinator.com/item?id=43936007

Instead of:

  const MIN_U32 = 0;
  const MAX_U32 = 2 ** 32 - 1;
  
  function u32(v) {
    if (v < MIN_U32 || v > MAX_U32) {
      throw Error(`Value out of range for u32: ${v}`);
    }
  
    return leb128(v);
  }
You can do this, in Ada:

  subtype U32 is Interfaces.Unsigned_64 range 0 .. 2 ** 32 - 1;
or alternatively:

  type U32 is mod 2 ** 32;
and then you can use attributes such as:

  First  : constant U32 := U32'First; -- = 0
  Last   : constant U32 := U32'Last;  -- = 2 ** 32 - 1
  -- U32'Range denotes the range 0 .. 2 ** 32 - 1; it is a range, not a
  -- value, so it is used in loops and declarations, e.g. "for I in U32'Range loop"
Does D have anything like this? Or do any other languages?

> No it's not, Rust is very well amenable to formal verification, despite, as you said, not being designed for it (due to the shared xor mutable rule, as I said), Perhaps even more amenable than Ada.

I would like to add a few clarifications that I may not have mentioned in my other reply.

You are correct that Rust's ownership/borrow model simplifies many verification tasks: the borrow checker removes a great deal of aliasing complexity, and that has enabled substantial research and tool development (RustBelt, Prusti, Verus, Creusot, Aeneas, etc.). That point is valid.

However, it is misleading to claim Rust is plainly more amenable to formal verification than Ada. SPARK is a deliberately restricted subset of Ada designed from the ground up for static proof: it ships with an integrated, industrial-strength toolchain (GNATprove) and established workflows for proving AoRTE and other certification-relevant properties. Rust's type system gives excellent leverage for many proofs, but SPARK/Ada today provide a more mature, production-proven path for whole-program safety and certification. Which is preferable therefore depends on what you need to verify - research or selected components versus whole-program, auditable certification evidence.

SPARK/Ada is used in many mission-critical industries (avionics, rail, space, nuclear, medical devices) for a reason: the language subset, toolchain, and development practices are engineered to produce certifiable evidence and demonstrable assurance cases.

Rust brings superior language ergonomics and strong compile-time aliasing guarantees, but it faces structural barriers that make SPARK's level of formal verification fundamentally unreachable. These are not matters of tooling immaturity, but of language design and semantics:

- Rust allows pervasive unsafe code, which escapes the borrow checker's guarantees. Every unsafe block must be modelled and verified separately, defeating whole-program reasoning. SPARK forbids such unchecked escape hatches within the verified subset.

- Rust's semantics include undefined behavior and panics, which cannot be statically ruled out by the compiler. SPARK, by contrast, can prove statically that such run-time errors are impossible.

- Rust's rich features (lifetimes, traits, interior mutability, macros, async, etc.) greatly complicate formal semantics. SPARK deliberately restricts such constructs to preserve provable determinism.

- Rust lacks a single, formally specified, stable verification subset. SPARK's subset is precisely defined and stable, with a formal semantics that proofs can rely on across versions.

- Rust's verification ecosystem is fragmented and research-oriented (Prusti, Verus, Creusot, RustBelt), whereas SPARK's GNATprove toolchain is unified, production-proven, and already qualified for use in DO-178C, EN-50128, and IEC-61508 workflows.

- Certification for Rust toolchains (qualified compilers, MC/DC coverage, auditable artifacts) is only beginning to emerge; SPARK/Ada toolchains have delivered such evidence for decades.

In short, Rust's design - allowing unsafe code, undefined behavior, and a complex evolving feature set - makes SPARK-level whole-program, certifiable formal verification structurally impossible. SPARK is not merely a verifier bolted onto Ada: it is a rigorously defined, verifiable subset with an integrated proof toolchain and an industrial certification pedigree that Rust simply cannot replicate within its current design philosophy.

If your objective is immediately auditable, whole-program AoRTE proofs accepted by certifying authorities today, SPARK is the practical choice.

I hope this sheds some light on why SPARK's verification model remains unique and why Rust, by design, cannot fully replicate it.


You're correct on the technical details: Pre/Post/Type_Invariant were added in Ada 2012, and Global/Depends are SPARK-specific annotations. Fair enough.

But saying "SPARK first appeared in 2009" is incorrect. SPARK dates to the mid-1980s at the University of Southampton.[1][2][3] SPARK was already in industrial use by the early 1990s, selected for Risk Class 1 systems on the Eurofighter programme.[3] The 2009 date is when Praxis/AdaCore released SPARK Pro under GPL[4] - that's commercialization, not creation.

This completely undermines your [irrelevant] evolution argument. SPARK didn't appear after Ada 2012 added contracts. It existed 25 years before them using special comments. When Ada 2012 added contracts, SPARK adopted the native syntax.

And there's no sleight of hand. My original comment explicitly said "Ada / SPARK" - not Ada alone. When discussing safety-critical development, you use them together. That's the deployed ecosystem.

Whether Ada's contract syntax was in the 1983 spec or added in 2012 is irrelevant. Ada 83 was designed for safety-critical embedded systems with strong typing, clear semantics, and explicit data flow - design goals that made formal verification feasible. That's why SPARK could exist starting in 1987.

The practical reality is that Ada/SPARK provides production-ready formal verification with qualified toolchains for DO-178C, EN-50128, and ISO 26262. Rust has experimental research projects (Prusti, Verus, Creusot, Kani) with no certification path and limited industrial adoption. SPARK-level formal verification in Rust is largely theoretical - Rust's design (unsafe blocks creating unverifiable boundaries, no formal specification, evolving semantics, procedural macros, interior mutability) makes whole-program certifiable verification structurally impossible, not just immature. It's almost a dream at this point.

[1] B.A. Carre and T.J. Jennings, "SPARK - The Spade Ada Kernel", University of Southampton, 1988, https://digital-library.theiet.org/content/journals/10.1049/...

[2] B. Carre and J. Garnsworthy, "SPARK - an annotated Ada subset for safety-critical programming", TRI-ADA 1990, https://dl.acm.org/doi/10.1145/255471.255563

[3] https://cacm.acm.org/research/co-developing-programs-and-the...

[4] https://www.adacore.com/press/spark-pro


> It actually fails to do what I want here and download h264 content so I have it re-encoded

I struggled with that myself (yt-dlp documentation could use some work). What's currently working for me is:

    yt-dlp -f "bestvideo[width<800][vcodec~='^(avc|h264)']+bestaudio[acodec~='^((mp|aa))']"
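
(That selector merges the best video at most 800 pixels wide whose codec begins with "avc" or "h264" with the best audio whose codec begins with "mp" or "aa", e.g. mp4a/mp3/aac; the `~=` operator does a regex match.)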

Looks very promising. One of the biggest issues in Go for me is profiling and constant memory leaks/pressure. Not sure if there is an alternative that people use now.

Physically I am (although not at the moment). Psychologically, I am not. I just have an aversion to pain, depression, anxiety, and incontinence. If you love any of these things, you do you.

If you are asking because I mentioned lack of euphoria, then no. I NEVER experienced euphoria from opiates, ever. Not even the first time I took them, and I did take a dose large enough that it should have given me euphoria or the feeling of warmth and bliss, which I have never experienced.

Why? There could be many reasons, but I suspect it has a lot to do with how psychiatric medications have messed with my brain, AND with the brain lesions I have due to MS.


Here is the code that should solve your output issue: https://slexy.org/view/s21DoMQaHF


FYI you can search your comment history with hn.algolia.com:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


https://en.wikipedia.org/wiki/Estimated_number_of_civilian_g...

https://en.wikipedia.org/wiki/List_of_countries_by_firearm-r...

Check out both tables and you will see that the facts do not say what you think they say, at all.

Homicide rates by firearm per 100,000 inhabitants (2017):

  Jamaica - 47.857
  United States - 3.342
  Serbia - 0.415
Civilian-held firearms per 100 population, by country (2017):

  Jamaica - 8.8
  United States - 120.5
  Serbia - 39.1
Those are just to compare three countries, but you will see a similar trend for all other countries.

It shows that Serbia has loads of guns yet barely any firearm-related homicides, whereas Jamaica has far fewer guns yet a firearm homicide rate way higher than that of the US.

Thus, the statement that "More guns -> More gun-related violence" is evidently false.


The reason you're getting the interactions you are is because you set up a false dichotomy. Kirk's moral calculus involves accepting that possibly some more people will die, beyond what would happen otherwise, in order to guarantee what he considers an essential right to everyone. This is perfectly compatible with "caring about lives".

It's interesting that you mention driver's licenses. Would you say that intellectual consistency would require a "pro-lifer" to be in favour of nobody being allowed to own a car? After all, sometimes fatal driving accidents occur.


Not necessarily.

Ever heard of linkable systems? They can detect when multiple proofs come from the same person, even if they can't identify who that person is. The system can also force reuse of the same secret, which stops the "infinite proof factory" problem.

Unique secrets can also be tied directly to identity. For example, if the ZKP is about knowledge of a secret key bound to your identity, then you can't just mint 5000 independent proofs unless you also have 5000 identities.

There's also the concept of nullifiers, used in privacy-preserving identity protocols. A nullifier is basically a one-time marker derived from your identity secret that prevents double-use of a proof.

On top of that, zk-SNARK-based credentials or verifiable credentials can prove "I am a unique registered person" without revealing which one. These systems enforce uniqueness at registration, so you can't magically spawn 5000 ZKPs that all look like 5000 humans. Similar ideas exist with linkable ring signatures and even biometric-based ZK proofs.
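
To make the nullifier idea concrete, here is a minimal sketch (the hash-based construction and all names are mine, not any specific protocol); in a real system, a ZKP would additionally prove the nullifier was derived correctly from a registered identity secret, without revealing it:

  import hashlib

  def nullifier(identity_secret: bytes, scope: bytes) -> str:
      # One deterministic marker per (secret, scope) pair: reusing the same
      # secret in the same scope yields the same nullifier, so double-use
      # is detectable without ever learning the secret itself.
      return hashlib.sha256(identity_secret + b"|" + scope).hexdigest()

  seen = set()

  def accept(n: str) -> bool:
      if n in seen:
          return False  # second proof from the same identity: rejected
      seen.add(n)
      return True

  n1 = nullifier(b"alice-secret", b"vote-2025")
  n2 = nullifier(b"alice-secret", b"vote-2025")
  assert accept(n1)      # first use goes through
  assert not accept(n2)  # a Sybil attempt with the same identity fails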

So there are plenty of ways to counteract your "5000 ZKPs per human" story (what's usually called a Sybil attack).

If you're being pedantic, yes: a bare ZKP alone doesn't enforce "one proof = one person", but ZKP + uniqueness enforcement (nullifiers, credentials, commitments, etc.) does, and that's what I had in mind. I thought it was obvious, but then again, nothing is obvious, and I should have specified. My bad.

In any case, people ought to know just how powerful and useful these ZKP-based systems can be when designed properly. I think this is the only way forward if we want to preserve our privacy while still being able to prove we're human without sacrificing anonymity, or verify we know a password without revealing it, or prove we're eligible to vote without revealing our identity, or demonstrate we meet age requirements without showing our birthdate, or verify we have sufficient funds without disclosing our balance, or show we're authorized to access something without revealing our credentials, or verify our qualifications without exposing personal details, and so on.

Edit: excuse the technical brain dump, I literally just woke up. I hope this helps to clear up some things, however.

Happy to dig deeper if you want.


> It's very difficult to review history. I stopped using it a while ago, but since everything's encrypted `git diff` won't give you anything useful and IIRC the command line tools were very hard to use for reviewing/restoring passwords when you mess up updates, etc.

pass sets up a .gitattributes and configures git to convert gpg files to text via a custom driver. This enables a text-diff of the encrypted contents out of the box (at least for a store I've just set up to test this).

  ~/.password-store # cat .gitattributes
  *.gpg diff=gpg
  ~/.password-store # cat .git/config
  # ...
  [diff "gpg"]
          binary = true
          textconv = gpg2 -d --quiet --yes --compress-algo=none --no-encrypt-to --batch --use-agent
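
(The `binary = true` setting marks the .gpg files as binary for storage, while `textconv` decrypts them just for diff display; `git diff` and `git log -p` both honor it, which is where the readable diffs come from.)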

This is the reason why I refrain from commenting on medications, especially opioids. Why would anyone be against, say, someone taking Kratom? Let people with chronic pain and/or anxiety and/or depression take it. Why would I be against them improving their quality of life as long as they are not posing any harm to society, or even to themselves? Some people really just want others to suffer, it seems.

And yeah, I do not think there is much harm in drinking alcohol socially either. Get a buzz, take a cab / Uber home, etc. If someone starts a fight because they are piss-poor drunk, or drives under the influence, that does pose harm to society, which changes a lot, IMO. But if you go home and grab a beer, why would I be against that? Not my business. I especially hate it when people think it is their business and they want to control other people's lives.

Like damn, come swap with me (not you), have chronic pain with severe anxiety and depression, and we will see whether you could just think it away, like I am sometimes told to do; or, you know, just let me take what works for me without harming anyone, myself included.


Yes, absolutely.

Just like any weapon, "security" is only good if it's in your control. When the noose is around your neck, you'd better hope it easily breaks.


I regularly use local LLMs at work (full-stack dev) due to restrictions, and occasionally I get results comparable to GPT-5 or Opus 4.

The process is described here: https://www.unicode.org/emoji/proposals.html#process

The list of past proposals is here: https://www.unicode.org/emoji/emoji-proposals-status.html Most have been declined.


I think the stronger point of what you're saying is that if you can set yourself up to avoid addiction (you have a time-limited dose, you have no means of acquiring more), then opiate painkillers are the safest option in terms of potential damage to your body.

There's no avoiding it when it comes to some people's chronic pain, but it's a tragedy that we've ruined the reputation of opiate painkillers because they were prescribed for long periods, which all but guarantees addiction. Folks in US hospitals have to suffer short-term acute pain unnecessarily because of squeamishness around prescribing effective painkillers in a situation where there's virtually no risk.


XRay / XTLS-Reality / VLESS work rather well, and are said to be very hard to detect, even in China.

I followed [1] to set up my own proxy, which works pretty well. More config examples may be helpful, e.g. [2].

[1]: https://cscot.pages.dev/2023/03/02/Xray-REALITY-tutorial/

[2]: https://github.com/XTLS/Xray-examples/blob/main/VLESS-TCP-XT...


OP: look into VLESS (and similar). And read up on ntc.party (through Google translate). There are certain VPN providers that offer the protocol.

It doesn't matter; he should look into the open-source protocols that these services use. He doesn't have to use the services themselves.

VLESS / v2ray works in Russia, as far as I know.



> You have not posted any data indicating otherwise.

False. I have. See below.

> The Crime Survey data shows that crime in London, and the rest of the UK, has generally decreased over the past ten years

Your own source contradicts your claim.

The latest ONS "Crime in England and Wales: year ending March 2025" bulletin (the most recent data available) shows headline crime rose to 9.4 million incidents, a 7% increase from the previous year (8.8 million). This is the opposite of the decrease you're claiming.

---

The crimes affecting daily safety have surged:

- Fraud: +31% (4.2 million incidents, the highest since records began in 2017)

- Shoplifting: +20% (530,643 offences, the highest since 2003)

- Theft from the person: +15% (151,220 offences, also a record high)

---

You're conflating timeframes.

Yes, the 10-year trend shows overall decreases, but the ONS explicitly states there have been "increases across some crime types in the latest reporting period." The current trend shows London getting less safe, not more.

These aren't abstract statistics - fraud, shoplifting, and theft from the person are exactly the crimes that make London feel unsafe day-to-day. While homicides (-6%) fell slightly, that's a low-volume crime compared to millions of property offences hitting residents.

---

So... your own data source proves crime is rising in the categories that matter most for everyday safety.

---

PS: with regard to:

> and then it turns out that you don't even live in London, people are going to wonder how you ended up holding such strong opinions on crime in London.

We have the internet. I can communicate with Londoners, visit regularly, read local London news sources, follow Metropolitan Police crime statistics, and so forth. The list is quite long.

By your logic, crime researchers, policy analysts, journalists, and statisticians could only study cities where they personally reside.

Your attempt to dismiss the data by questioning my location rather than addressing the statistics themselves suggests you're more interested in ad hominem attacks than substantive discussion.


FWIW, SBCL is pretty good at optimizing away dynamic type checks if you help it out.

Here are some examples under:

    (declaim (optimize (speed 2)))
The first example is a generic multiplication; x and y could be _any_ type at all.

    (defun fn (x y) (* x y))
If we disassemble this function, we get the following:

    ; disassembly for FN
    ; Size: 34 bytes. Origin: #x1001868692                        ; FN
    ; 92:       488975F8         MOV [RBP-8], RSI
    ; 96:       4C8945F0         MOV [RBP-16], R8
    ; 9A:       498BD0           MOV RDX, R8
    ; 9D:       488BFE           MOV RDI, RSI
    ; A0:       FF142540061050   CALL [#x50100640]                ; SB-VM::GENERIC-*
    ; A7:       4C8B45F0         MOV R8, [RBP-16]
    ; AB:       488B75F8         MOV RSI, [RBP-8]
    ; AF:       C9               LEAVE
    ; B0:       F8               CLC
    ; B1:       C3               RET
    ; B2:       CC0F             INT3 15                          ; Invalid argument count trap
Note that it calls `GENERIC-*` which probably checks a lot of things and has a decent overhead.

Now, if we tell it that x and y are bytes, it's going to give us much simpler code.

    (declaim (ftype (function ((unsigned-byte 8) (unsigned-byte 8)) (unsigned-byte 16)) fn-t))
    (defun fn-t (x y) (* x y))
The resulting code uses the imul instruction.

    ; disassembly for FN-T
    ; Size: 15 bytes. Origin: #x1001868726                        ; FN-T
    ; 26:       498BD0           MOV RDX, R8
    ; 29:       48D1FA           SAR RDX, 1
    ; 2C:       480FAFD7         IMUL RDX, RDI
    ; 30:       C9               LEAVE
    ; 31:       F8               CLC
    ; 32:       C3               RET
    ; 33:       CC0F             INT3 15                          ; Invalid argument count trap

> You're welcome to try to invent your own language.

What I'm doing with "news-teller" (or news teller) is not inventing my own language at all, and I don't appreciate dismissive or snarky remarks, especially when they're incorrect.

From a grammatical standpoint, "news-teller" is perfectly valid; English freely allows such noun-noun compounds. It's not an arbitrary invention but a natural, transparent compound formed from two common, easily understood words. English has always created new terms this way (storyteller, truth-teller, lawgiver, bookseller), and while "news-teller" is not standard, it follows established morphological patterns. Because both components are familiar, most speakers would grasp its meaning immediately, arguably more so today than with the now-archaic "herald". You can even check Google's Books Ngram Viewer: compare "news teller" and "herald".

The fact that "news-teller" isn't in widespread use does NOT make it "invented language". It makes it an uncommon but legitimate formation under the existing rules of English. If you doubt that, you could ask people whether they understand "herald" and then ask if they understand "news-teller". If you ask "news-teller" first, they will probably infer it means the same thing, so avoid that.

In any case, I already said I am fine with "reporter" or "announcer", and that I was providing a literal translation that everyone understands over "herald". This is literally the essence of my point, which is not the one being argued. Was I wrong in believing this literal translation is better (i.e. easier to understand) than "herald"?

