Hacker News | roqi's comments

> The problem being tackled here is link time in debug builds. This affects all platforms.

I've worked on many projects, big and small, and the link time of debug builds was never a problem that was worth fixing.

In fact, just today I was discussing our project's build-process inefficiencies with a coworker, and we remarked that when all we need to do is relink a huge subproject, it's a major blessing.


For Google at least, link times are sufficiently important that the company has rewritten the linker from scratch twice, both times as open source: first the gold linker, which ships as part of GNU binutils, and subsequently the ELF version of LLVM's lld.

So although you might not encounter issues with link times (debug or otherwise), it is a multi-million dollar problem for big companies like Google and Apple. Both in resources and engineer time.


> So although you might not encounter issues with link times (debug or otherwise), it is a multi-million dollar problem for big companies like Google and Apple. Both in resources and engineer time.

I appreciate your appeal to authority, but I worked at a FAANG on a project that had over 20 GB worth of static and dynamic/shared libraries.

Linking was never an issue.


Err, "Google has rewritten the linker twice. Both times with the stated goal to make link times much faster." isn't an appeal to authority. It's evidence that the company has found speeding up linking to be worth millions of dollars. Otherwise it wouldn't have done it.

They surely weren't doing it for fun.


> Err, "Google has rewritten the linker twice. Both times with the stated goal to make link times much faster." isn't an appeal to authority.

It is, and a very weak one considering Google has a history of getting people to work on promotion-oriented projects.

https://news.ycombinator.com/item?id=31261488

> It's evidence that the company has found speeding up linking to be worth millions of dollars.

It really isn't. You're buying into the fallacy that a FAANG can never do wrong and that all their developers are infallible and walk on water. All you're able to say is that Google did this and Google did that, and you're telling it to a guy who has first-hand experience of how this and that is actually made. You never mentioned any technical aspect or, more importantly, any performance differences. You only name-dropped Google, and to a guy who has already worked at a FAANG.

Linking was never an issue.


You want the technical aspects and performance differences? Judge for yourself:

On initial contribution to the gnu binutils project, the Google developer for Gold claimed a 5x speed improvement: https://sourceware.org/pipermail/binutils/2008-March/055660....

In a public talk on llvm lld, the Google developer for LLD claimed another 5x speed improvement:

https://llvm.org/devmtg/2017-10/slides/Ueyama-lld.pdf


There are many FAANG customers who care about link time; some of them are also FAANGs, but certainly not all. You're falling into the libertarian trap of thinking that because it didn't happen in your experience, it could not possibly happen to anyone.


If you had 20GB of dynamic and static libraries as part of your build, the only thing that could be a bottleneck is linking.


> But I still think it's important to note that this isn't really advancing the state of the art like Apple would like you to believe. It's just a new middle-of-the-road approach with its own possibly-positive tradeoffs and pitfalls.

It also seems that this new library format barely solves any problem and in the process bumps up the number of library types that developers need to onboard and possibly support.


> "It also seems that this new library format ... bumps up the number of library types that developers need to onboard and possibly support."

No, a mergeable library is just a special type of dynamic library, which adds some additional features and metadata. I can't imagine why you'd distribute both mergeable and non-mergeable dylibs.

In fact, rather than having to support more library types, developers may actually have to support fewer, because it may eliminate the need to ship both a static lib and a .dylib with your framework.


> I'm pretty sure you were the problem

What problem is that?

Your comment reads like a puerile "no u" and adds nothing to the discussion. It's quite ironic given the topic.


> Um, I don’t think I was advocating any particular level of moderation, was I? More like visibility into its processes and motivations, and that providing those convincingly and to an appropriate extent is the moderators’ responsibility.

If you are not advocating for a particular type of moderation, then why are you so bothered about how any type of moderation is applied? What would be the point of your suggestion?


> I have to agree with GP, I am part of several subreddits where the moderators clearly enjoy being kings of their precious little fiefdom.

You've never used IRC, have you?


> And nowadays you can deploy apps really close to users so latency is really low.

That sounds like blindly throwing money at a software architecture problem you created for yourself. Arguably SPAs became popular precisely because that line of reasoning is absurd: you don't mitigate the penalty of a network call by micro-optimizing the cost of a network call.


> Making a 'server call' on each interaction is also what web apps used to do before SPAs were a thing.

One of the reasons behind the popularity around webapps was that they would no longer need to make 'a server call' on each interaction.

> And in many cases it's what SPAs do as well.

I feel you're grossly misrepresenting what SPAs do. SPAs do calls to send and receive data, not to fetch server-side rendered content.


> One of the reasons behind the popularity around webapps was that they would no longer need to make 'a server call' on each interaction.

Ironically, many modern SPAs are significantly slower than the traditional apps they replaced. Try using Twitter's webapp on a non-premium phone, for example.

Sometimes it's regrettable that the webdev truck has no rear-view mirror.


The entire internet is slower because it's being squeezed to oblivion for monetization and tracking purposes. No matter what technology you choose to render HTML with, your company is going to have a slew of systems for injecting 3rd party scripts, running A/B tests, collecting analytics that include recording user sessions, etc etc. Back "before SPAs" we just weren't doing as much crap in the browser.


Tracking doesn't help, but modern frontend development practices themselves lead to slowness.

Rerendering and recomputing too much (ever encountered those chains of map() and filter() in render()?), many API calls, huge icon sets, huge custom fonts, CSS frameworks and JS frameworks, huge dependency trees with many instances of the same lib, sometimes running in several versions, big bundles... This heaviness in the name of convenience and branding is not free: it forces people to ditch perfectly fine hardware, which has a high environmental cost (not directly paid by the people funding the code).

One of my pet peeves is managed text inputs, where render() is called each time you type a character, for a whole component tree if you are not careful and happen to pass the content to some parent for some reason. Typing a message in Mattermost on the PinePhone is painful for this reason. There's just no reason typing should be slow, even on slow hardware, yet managed components are considered good practice.
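For what it's worth, the "recomputing too much" point can be sketched in a few lines of plain JavaScript. All the names here are hypothetical, and the tiny memoizer merely stands in for something like React's useMemo with a dependency array:

```javascript
// A minimal sketch of why chained map()/filter() in render() hurts:
// the derived list is recomputed on every render, even when its
// input has not changed.
let computeCount = 0;

function visibleNames(users) {
  computeCount++;
  // The kind of chain that often runs on every keystroke.
  return users.filter(u => u.active).map(u => u.name);
}

// A tiny memoizer keyed on reference identity, similar in spirit to
// React's useMemo with a [users] dependency array.
function memoizeByRef(fn) {
  let lastArg;
  let lastResult;
  return arg => {
    if (arg !== lastArg) {
      lastArg = arg;
      lastResult = fn(arg);
    }
    return lastResult;
  };
}

const memoVisibleNames = memoizeByRef(visibleNames);

const users = [
  { name: "ada", active: true },
  { name: "bob", active: false },
];

// Simulate three renders triggered by keystrokes. The users array
// has not changed, so the chain runs only once.
memoVisibleNames(users);
memoVisibleNames(users);
memoVisibleNames(users);
console.log(computeCount); // 1
console.log(JSON.stringify(memoVisibleNames(users))); // ["ada"]
```

In a managed input the same trick, plus keeping the input's state local to the input component, is what stops a single keystroke from recomputing derived data for the whole tree.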

Making people buy newer, more powerful hardware, making them wait, and using too much energy should be considered bad practice.


I agree. I recently went to great lengths in the React app I own to make it so that only the affected components would re-render on each input keystroke, and the result is gross but performant. My opinion is relatively nuanced: I am a proponent of SPAs in situations where they make sense, but if I had a choice I would never use a framework. They always impose at least as many problems as they solve, if not more.

I attempted to build a startup product using a vanilla JS SPA when I was a founding engineer, and the result was predictable: it worked great for me, but nobody I hired wanted to learn some random guy's vanilla JS codebase. I've since resigned myself to using React simply because that's what developers expect, despite all the headaches.

For what it's worth, we ended up migrating to Mithril in the V2 of that startup's product. We all enjoyed Mithril and it scaled really well for us, but my team did have some apprehension about their React skills falling behind.


> Rerendering and recomputing too much (ever encountered these chains of map() and filter() in render()?), many API calls, huge icon sets, huge custom fonts, CSS frameworks and JS frameworks, huge dependency trees with many instance of the same lib running sometime in several versions, big bundles...

This hits the nail on the head.

A single high-res background image weighs more than all the code in a complex webapp. If that image is required before the first contentful paint, the page will feel slow.

It doesn't matter if your JavaScript is just a hello-world console log; that background image is in the critical path.


> The entire internet is slower because it's being squeezed to oblivion for monetization and tracking purposes.

Bullshit. All you need to track a user is data you send as part of an HTTP request. Pushing metrics is a fire-and-forget HTTP request away.

The internet feels slower because we're using way more of it, not only in the increased complexity of webapps to improve user experience and implement features but also in the volume of data we're transferring around.

> running A/B tests

A/B tests just mean settings/feature flags plus metrics. Feature flags are used for far more than behavioral studies.

> include recording user sessions

User sessions have always been recorded, with zero performance penalty. The very same Hacker News page you're now browsing tracks your user session whenever you log in. That's not it.

> Back "before SPAs" we just weren't doing as much crap in the browser.

Right, and life sucked back then. Why do you think Flash was so popular?

It's trendy to shit on the status quo but it also is low-effort and lacks any insightfulness.


> All you need to track a user is data you send as part of a HTTP request.

Browsers impose a limit on concurrent requests (usually 6 per hostname) for a reason: HTTP requests, whether too many or too large, are one of the largest performance problems in a typical site. Just because it's easy to fire-and-forget an HTTP request doesn't mean it has no performance impact, especially in apps that are already network-chatty. I think you know this, because you go on to list "the volume of data we're transferring" as a performance bottleneck, and obviously that data is transferred through HTTP requests. Also, in the real world, analytics installations virtually always involve installing a third-party library, e.g. Google Analytics, so you take hits to download the library, interpret the source, and run the code alongside your app, usually in the same thread, all before you've fired any of those HTTP requests.
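To make the connection-limit point concrete, here is a toy model in plain JavaScript (makePool and all the names are hypothetical; real browser scheduling is more involved): a pool that caps in-flight requests per host, so fire-and-forget analytics calls still queue behind application traffic.

```javascript
// Toy model (hypothetical API) of the per-host connection limit:
// at most `limit` tasks run at once; the rest wait in a queue,
// just as extra requests to one host queue inside the browser.
function makePool(limit) {
  let active = 0;
  let peak = 0;
  const waiting = [];

  const run = async (task, resolve) => {
    active++;
    peak = Math.max(peak, active);
    const result = await task();
    active--;
    if (waiting.length > 0) {
      const next = waiting.shift();
      run(next.task, next.resolve);
    }
    resolve(result);
  };

  return {
    submit: task =>
      new Promise(resolve => {
        if (active < limit) run(task, resolve);
        else waiting.push({ task, resolve });
      }),
    peak: () => peak,
  };
}

const pool = makePool(6);
const delay = ms => new Promise(r => setTimeout(r, ms));

// Ten simulated requests (app traffic plus analytics) against one
// host: no more than six are ever in flight at the same time.
const requests = Array.from({ length: 10 }, (_, i) =>
  pool.submit(async () => {
    await delay(10);
    return i;
  })
);

Promise.all(requests).then(() => console.log(pool.peak())); // 6
```

Browsers do offer navigator.sendBeacon for low-priority telemetry, but even that traffic shares the same network budget as the rest of the page.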

> A/B tests just means settings/feature flags and metrics

I'm aware of the definition. If you are implying that conditionally loading settings or feature flags and tracking associated metrics doesn't have a performance impact, I would disagree. It is usually possible to implement a given A/B test with minimal performance impact, but in practice these are absolutely a contributor to the kind of bloat I'm frustrated with.

> sessions are recorded since ever with zero performance penalty

It sounds like you're talking about logs. I'm talking about "session replay" features provided by companies like FullStory and Sentry that allow you to replay every mouse move and keystroke of your users.

> It's trendy to shit on the status quo but it also is low-effort and lacks any insightfulness.

By this point in your comment I think you forgot what we were talking about. I was arguing that the architecture of modern SPAs shouldn't bear the blame for bad performance; my point being that whether or not we had continued to use server-generated HTML, we would still be suffering from similar problems of bloat today. The "status quo" I'm shitting on is taking a lazy approach to performance management and injecting way too much cruft; unless you work for an analytics provider, I'm not sure why you would be opposed to that stance.


Hell, try using the Twitter web app on something with 8GB of memory, or on a device more than five years old.


> Ironically, many modern SPAs are significantly slower than the traditional apps they replaced.

All software becoming slower is already a meme. This isn't specifically an SPA issue.

> Sometimes it's regrettable that the webdev truck has no rear-view mirror.

The "software is getting slower" meme was the hallmark of Java on the server when it was released in the '90s. This is nothing new, nor specific to web dev.

Also, I feel you're grossly misrepresenting the problem. Reddit's mobile page is considerably slower than the old Reddit page, but its perceived performance is quite good: all posts are cached, and instead of full page reloads it switches content virtually instantly.

It might be fashionable to shit on everyone else's work, but that only happens when you lack objectivity and sincerity.


> (...) leads you to the conclusion that they’re not spiritual gateways. I find this surprising as those are one in the same to me.

How does that surprise you? I mean do spiritual gateways exist at all? Is there even a concrete definition and concept? Or are they just a conjecture without any basis whatsoever?

It seems you start from baseless beliefs, blindly assume they require no validation, and are then surprised that others don't share your wild assumptions.


My definition would be that spiritual gateways are tools that help you have spiritual experience.

Plenty of people have had that shared effect from psychedelics.

I’m not sure what baseless belief you see.


> The products do come at a bit of a premium, but, in my experience, it's well earned and is a premium experience as well.

As a long-time Linux and macOS user, I don't agree. Even though macOS is more polished than your average Linux desktop environment, it's really very hard to ignore the unjustified markup. Nowadays I can buy a mini PC with a Ryzen 5 and 32GB of RAM for around 400€, but the cheapest Mac mini sells for over 700€ and comes with 8GB of RAM and an absurd 256GB SSD. Moreover, a Mac mini tops out at 24GB of RAM, and for that you need to pay an additional 460€ on top of your weak 256GB box.


I think to compare prices you need something with adequate GPU performance (which the M2 has).

The M2 GPU is roughly the performance of the notebook edition of the RTX3060.

To get a MiniPC with one of those it is over $1200! https://www.aliexpress.com/item/1005004771159431.html


> I think to compare prices you need something with adequate GPU performance (which the M2 has).

I feel your comment reads too much like fanboy excuses. The CPU is not the only, or even the main, requirement. I personally want to max out RAM and storage. I can buy a mini PC with 32GB of RAM and a 500GB NVMe SSD for 400€. With a Mac, I need to spend almost twice that to get only 25% of the RAM and 50% of the storage.

This was the norm since Apple shipped Intel core i5 Mac minis.

There is no way around this. Apple price-gouges their kit. It's irrelevant how you feel they fare in artificial benchmarks.


>Apple price gouges their kit

How do you define price gouging?

Why is it wrong for a company to sell their product for the highest price people are willing to pay?


It's a bit of a different design point.

The M2 Mac mini's RAM is integrated into the SoC package, which has some advantages (good memory bandwidth, no copying between CPU and GPU RAM) and disadvantages (expensive, non-upgradable DRAM tiers.) Internal flash storage is basically non-upgradable as well (though you can easily plug in external thunderbolt m.2 storage.)

It also doesn't currently run Windows natively, nor does it support eGPUs.

I'm not sure any Mac mini model was ever much of a competitor to cheaper PCs, but mini PCs have gotten a lot better over time, probably inspired somewhat by the Mac mini, while the mini has followed in the footsteps of other Mac models by adopting Apple Silicon and unified memory.

The mini is a perfectly decent Apple Silicon Mac, and compares favorably with the older intel Mac minis in terms of performance, but I'd spring for 16GB of RAM (at least) for my use cases.


> It's a bit of a different design point.

I don't see the point of your comment. Underlining design differences matters little if, in the end, you can get a cheaper mini PC that's upgradeable and ships with more memory, while you can't do anything about your Mac mini other than scrap it and buy a more expensive model.

> The mini is a perfectly decent Apple Silicon Mac

That's all fine and dandy if you artificially limit comparisons to Apple's product line.

Once you step out of that artificial constraint, you get a wealth of mini PCs which have a smaller form factor, are cheaper, have more RAM and storage, are upgradeable and maintainable, and in some cases have more computational power overall.


Flash is upgradeable, but the only source of proprietary NAND chips with special firmware is Apple itself :/ https://www.youtube.com/watch?v=yR7m4aUxHcM


Where can you buy that miniPC?


After doing a quick search, a couple brands that have miniPCs that meet these specs include "Beelink" or "Minisforum"


> I know a genuine Panaphonics when I see it. And look, there's Magnetbox and Sorny!


Both of these companies are huge in the mini PC space. Most people have never heard of them because all they do is mini PCs.

Minisforum's latest 7940HS line is better than the M2: more powerful, fairly close on power efficiency, better GPU, cheaper, and without all the nonsense that comes with buying Macs. Their fully juiced model is $800 (and doesn't lock you into a model that milks cash from you like a sow).


The M2 GPU is a fair bit better performing than the 780M in that though isn't it?

I've been trying to work it out and the best comparison I've seen is this: https://i.imgur.com/MhM1fap.png

Although https://www.reddit.com/r/hardware/comments/138lari/comment/j... indicates it is a reasonable match.


It isn't. As the Reddit link explains, the only benchmarks it wins in are synthetics.

You can play Red Dead Redemption 2 on the 780M at 1080p at over 60fps. You can produce all the synthetic benchmarks in the world, but this thing is as powerful as a console. It is the most powerful iGPU out there, about as powerful as a GTX 1060.


There is also the fact that the AMD CPU is newer, has a 50% higher TDP, and is built on a smaller node, but is far from providing 50% better performance than the M2 [0].

On the other hand, I don't own a Mini, nor am I in the market for a mini PC, but I wouldn't feel like dropping $800 on a prebuilt PC unless the maker provided stellar support and warranty, and Chinese OEMs aren't really known for that.

[0] https://www.notebookcheck.net/M2-vs-R9-7940HS-vs-M1_14521_14...


I'm not sure what Beelink is supposed to be mistaken for. I only know them for their micro-PCs, and I'm not familiar with another brand name that it is supposed to remind me of.

I hadn't heard of Minisforum, though. But the same goes there: not sure what it is supposed to be mistaken for.


I just priced the Ryzen Lenovo ultra small form factor, which is smaller than a Mac Mini and only slightly larger than a Playstation 2 slim, and it was £500 rather than £400 but other than that those numbers didn't seem far off the mark.


The Mini comes with an internal PSU, though. All these mini PCs come with external PSUs, some hilariously large at over half the size of the PC itself.


This is true about it being an external PSU, but it is nowhere near 1/2 the size of the computer.

I have one because I was able to get a second-hand, passively cooled i5 for only £100, which is great for a Plex server. The PSU is more like 1/8th the size, maybe smaller.


I happen to own a Lenovo Thinkcentre. The PSU is perhaps 1/3 the volume of the computer itself, which I think is crazy for a computer with a mobile chip inside.

I know that some Intel NUCs have monstrous PSUs [0], which I think should constitute false advertising regarding the actual size of the computers.

[0] https://www.servethehome.com/intel-nuc-11-pro-review-tiger-c...


Fair enough, if I had got that Intel one I would probably have a similar opinion. The power brick I have is about the size of the small Lenovo travel power supplies, maybe about 25% of that Intel one, at a guess without seeing it in person.

It's definitely smaller than 1/3 the volume of the unit, unless mine is an incredibly-small-form-factor rather than an ultra-small-form-factor unit? But I don't think so. It is very small.


>Even though macOS is more polished than your average Linux desktop environment, it's really very hard to ignore the unjustified markup.

It's justified if people pay it.

>Nowadays I can buy a miniPC with a Ryzen5 and 32GB of RAM for around 400€, but the cheapest Mac mini nowadays sells for over 700€ and comes with 8GB of RAM and an absurd 256GB SSD.

So what's the problem?

If you like running Linux on your Ryzen5 miniPC with 32GB of RAM for 400€, you're more than free to do that. Apple's not stopping you.


The original point of this thread is that they'd like to be able to get an M2 PC like that. So that's the problem.


I, too, would like to pay Ford prices for a Rivian.


I'll gladly pay Apple processor prices for an Apple processor, but I'm not going to pay Apple PC prices for an Apple PC when all I really want is the CPU.


I guess Apple doesn’t want to be Intel. It's like the old Kellogg's adverts: “we don’t make cereal for anyone else”, alluding to manufacturers that sell their branded cereal at a marked-up price while also selling the same cereal to a supermarket brand, which sells it at a lower price in a plain box.


It's an SoC; what do you think you're paying for besides the CPU? The whole rest of the computer (for a Mac Mini, at least) is worth maybe $50.


Except if you want 32GB of RAM and 2TB of disk space, they charge well in excess of what anyone else (for the most part) charges for similar upgrades.


> The original point of this thread is that they'd like to be able to get an M2 PC like that.

Not really. The point is that Apple's products are overpriced, and in particular the Apple Mini underperforms and simply isn't competitive when compared to today's alternatives. I repeat, a mini PC with 8GB of RAM and a 256GB HD on the market for over 700€ simply doesn't compete with miniPCs with Ryzen or Intel i7 or even i5 that ship with 500GB HDs and at least 16GB of RAM which can cost 200 or 300€ less.


Neither of them really have HDs, rather SSDs, but more importantly the memory numbers are not directly comparable because macOS has a memory compressor. Which matters a very large amount for many workloads.


The markup is because I can go to apple.com, pick the RAM and storage I want, and not really have to figure out anything about flavors or distros, knowing that it comes with the best chip on the market. This is leaving out that it just works with my phone and tablet.


The markup is because Apple can command it based on a long history of quality and building a tech luxury brand. It's solely about keeping margins high and the brand status high.

Even if they could make the same money by lowering prices (and increasing volume), it'd be a terrible idea based on how consumer behavior and status seeking actually works. If the quality were the same and the product were far cheaper, consumers wouldn't want it as much. The absolute worst place to be in any market tends to be the middle, you go to the middle to die. High margins provide a margin for error in business, it's invaluable.


The worst mistake these naive proponents of free speech absolutism make is assuming that everyone argues in good faith, rather than exploiting founding liberal democratic values in order to undermine them.

Time and again we see fascists demanding that their fascist and outright racist views are entitled to all the air time they can possibly get, because "to defend free speech you must defend our right to defend our views everywhere we can". Yet when the subject of defending views they oppose comes up, they are quick to try to silence their opponents with threats of violence and intimidation, feeling entitled to do so.

Worse, when their threats of intimidation are met with threats of violence, they cynically hide behind the very liberal and democratic values they undermine, arguing that their oppression campaign should not itself be subjected to any form of oppression because they are only defending what they believe.

This loophole has been known for ages, and so has the antidote.

https://en.wikipedia.org/wiki/Paradox_of_tolerance


I often see "Paradox of Tolerance" cited along the lines of "if we tolerate the intolerant (e.g. racists), they will silence everyone else and this will destroy society; hence, we shouldn't tolerate the intolerance". -- In this interpretation, it makes sense to put "fascist and racist" in the same bucket.

But, Popper's "Paradox of Tolerance" is about what should be tolerated at the government/society level. You want "what the state silences" to be very small. The threshold for "who the government should silence" should be a high threshold of "society would be destroyed if they're not silenced"; and silencing should be a last resort, only after reasoning/argument fails. -- In that sense, putting "fascist and racist" in the same bucket is absurd.


Regardless of how it's framed, I just cannot get behind that whole concept. When I point out the extremely obvious (to me, at least) hypocrisy, people get hand-wavy. "Well obviously, it's different." No. It's not. You're making a judgment call about what's "good" and what's not and still somehow manage to do enough mental gymnastics to convince yourself that your intolerance is righteous and fundamentally different.

And yeah, I get that there are terrible ideologies out there that really deserve no respect. But this "don't tolerate the intolerant" thing gets trotted out and applied to such huge swaths of the population that it just loses any shred of credibility it might have had.

