Apple's New Mac Pro Begins Showing Up in Benchmarks (macrumors.com)
56 points by zekers on June 20, 2013 | 55 comments


As someone who's never owned a Mac Pro, can someone give an indication of how often "power" users upgrade their CPUs?

When I was a Windows desktop user (up until about 8 years ago), I found that every time I wanted to upgrade the internals, the CPU socket had changed along with the motherboard chipset, necessitating an upgrade of not just the CPU, but the motherboard and memory as well.

I can understand swapping out graphics cards fairly regularly, but are there that many users who completely gut their Mac Pro on a regular basis?


I am a workstation user.

I've used both Mac Pro and HP Z800 workstations, and currently use a HP Z800 that is probably 3 or 4 years old, but also have used a Mac Pro as recently as 2 years ago.

I've never upgraded a CPU in a workstation.

The CPUs I buy are chosen for performance at a sane (relative) price. I'm currently running a couple of Xeon X5560 CPUs, I think they cost £1,600 each at the time.

I also buy as much RAM as is reasonable for the box, I started this Z800 on 24GB ECC RAM which was dirt cheap (relatively - as many slots on the motherboard meant I could get lots of the smaller sizes which were cheap). When the price later dropped on the larger modules I upped it to 192GB RAM.

As for graphics card... now that is something I upgrade every 2 years or so. I started on a Quadro FX1800 and am now on a Quadro 6000.

Aren't the names of these things wonderful? You know you get more when the number is HUGE!

I think this is basically the norm: Upgrade RAM, upgrade GPU... leave CPU until you replace the whole box.

PS: And whilst I'm here: the Mac Pro is a good design... but for the wrong product. It's a design done right with airflow, cooling and silence in mind, and it would suit desktop users well... but desktop requirements are not workstation requirements, and not being able to replace GPUs is a non-starter. I'll skip this design of Mac Pro entirely, or at least until a very wide range of upgradeable GPUs is available. The RAM is also going to be costly, given the few slots available and the higher price for denser modules. I love the design, but this is the wrong class of computer for the constraints that come from the execution of that design.


It's a shame Mavericks only supports 128GB of RAM.


4 slots, 128GB max = 4 x 32GB modules @ ~£1k per module (price based on average observed price for 32GB ECC RAM).

£4k for 128GB RAM, compared to 12 * 16GB modules @ £200 = £2.4k for 192GB RAM.

Hmm. Yeah, I'll probably be sticking with the PC route and running Linux still.

Even with these purely speculative prices, the Mac Pro design forces fewer, denser modules and is really going to hit the pocket hard.
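Spelling out that comparison (using the thread's speculative per-module prices, not real quotes):

```python
# Speculative module prices from the comment above, not actual quotes.
mac_pro_cost = 4 * 1000   # 4 slots x 32GB ECC modules @ ~£1,000 each -> 128GB
z800_cost = 12 * 200      # 12 slots x 16GB ECC modules @ ~£200 each  -> 192GB
print(f"Mac Pro: £{mac_pro_cost} for 128GB vs Z800: £{z800_cost} for 192GB")
```

So roughly £1.6k more for 64GB less, driven entirely by the slot count.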


Apple does not support CPU upgrades in the Mac Pros. You need non-standard tools to do it, it's not as easy as in a typical PC, and you occasionally have to hack a firmware update depending on which Mac Pro model and which new CPU you want to use. The firmware hack is needed because Apple does not update the firmware in older models to support newer CPUs, even though the hardware is otherwise capable of it.

I think people who upgrade the CPU in their Mac Pro are a minority of Mac Pro users. I'm probably not a typical Mac Pro user (a "mid-range" tower would've been fine for me), but I bought a 2007 Mac Pro with the least expensive CPU option, then in 2012 upgraded the CPUs in it to extend its life. The Xeon CPUs I used were ones Apple never offered, but they happen to work in the machine I had.

Also: I upgraded the graphics card several years ago to an official Apple-supported card (ATI 5770), though that card in my Mac Pro model isn't officially supported by Apple (it works fine). I also did a hack to get the current OS X version, Mountain Lion, running on my machine (it isn't supported by Apple). My Mac Pro borders on being a Hackintosh at this point.


What did you change to get Mountain Lion running?


I followed these instructions to install Mountain Lion on my 2007 Mac Pro 2,1: http://www.jabbawok.net/?p=47

Note: For a Mac Pro 1,1 (2006), you may also want to update to the 2,1 firmware depending on what CPU you are using (you have to create an account and log in on these forums to download the file): http://forum.netkas.org/index.php/topic,1094.0.html (The 1,1 and 2,1 are identical hardware except for the firmware.)


Xeons never shipped in sufficient volume to make up for the huge discount that the system assemblers like Apple, Dell, HP &c were getting; I remember looking at upgrading a ~2009 era MP CPU, and you couldn't get the fastest CPUs for significantly less money than just upgrading the entire box.


I could be wrong, as I've been out of the powerful-beige-box world for quite a few years. However, when I was in it, the upgrade-and-replace thing was always done with consumer-grade CPUs, and it was like this for everyone I knew. The Xeon isn't really targeting the same market. Rock-solid stability and sustained performance wasn't something the gaming crowd I was in was after: massive overclocks, super-hot chips (that died hot deaths) and a GPU clocked to the threshold of showing artefacts. The frame rate must stay high at all costs. This is nothing like what the Xeon targets. Just a thought, and again, I may be wrong.


Not a Mac guy, but I believe the Mac Pro normally uses a Xeon chip, which I believe have used fairly stable sockets over the years. For example, socket 604 was used by the Xeon for 5 years, and LGA771 has been supported from 2006 to present.


The speedup is lacking. 12 cores is only 2x faster than 4 cores from 2010. I wonder how the GPU factors into these benchmarks.


Geekbench doesn’t use the GPU; Geekbench also doesn’t use AVX, so half of the floating-point performance of recent cores is left on the table.

Actually, looking at disassembly of Geekbench, they barely even use SSE vectors, so it’s really nearly 3/4 of the floating-point performance that goes unmeasured by the benchmark.

One could perhaps argue in the defense of the benchmark that many consumer applications don’t take advantage of these ISA extensions, but you definitely cannot argue that pro apps don’t use them; this alone makes Geekbench (and in fairness, most benchmarks, which tend to be pretty naive) extremely misleading for evaluation of “pro” hardware. Ultimately there’s only one benchmark that’s useful for such evaluation: actually exercising the workload that you intend to run on the machine.

Geekbench also doesn’t use available libraries that offer high-performance implementations of the computations in question (of which any serious application developer would avail themselves). Looking at numbers for released hardware, the “LU decomposition” results in Geekbench make clear that they are either using far too small of a problem size or using a naive C implementation of the operation and measuring the performance of whatever code their compiler generated. If they wanted to show the performance of the system, they would use one of the available library implementations.

Such benchmarks tell you something, but it isn’t really how good hardware is; it’s closer to a measure of how well a compiler can salvage naive C code when it isn’t allowed to take advantage of the available hardware features. I’m not sure what that’s representative of, but I know that it’s not representative of professional workloads.
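To give a feel for the gap being described, here's a rough sketch (my own illustration, not Geekbench's actual code) comparing a textbook LU factorization loop against a LAPACK-backed library call; the matrix size and names are arbitrary:

```python
import time
import numpy as np
from scipy.linalg import lu_factor  # LAPACK dgetrf: vectorized and cache-blocked

def naive_lu(a):
    """Textbook in-place LU (Doolittle, no pivoting): roughly the kind of
    straightforward loop a benchmark's own C implementation amounts to."""
    a = a.copy()
    n = a.shape[0]
    for k in range(n - 1):
        a[k + 1:, k] /= a[k, k]
        a[k + 1:, k + 1:] -= np.outer(a[k + 1:, k], a[k, k + 1:])
    return a

rng = np.random.default_rng(0)
n = 500
m = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant: no pivoting needed

t0 = time.perf_counter(); naive_lu(m);   t_naive = time.perf_counter() - t0
t0 = time.perf_counter(); lu_factor(m);  t_lapack = time.perf_counter() - t0
print(f"naive: {t_naive:.4f}s, LAPACK: {t_lapack:.4f}s")
```

On typical hardware the library call wins by a large factor, which is the point above: the benchmark ends up measuring its own loop, not the machine.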


I guess a LINPACK benchmark would be more suitable? http://www.top500.org/project/linpack/


LINPACK has its own issues as a benchmark; it only exercises extremely predictable floating-point multiply add loops. If a well-tuned implementation of the operation is used (as you see in the scientific computing community), this is representative of the peak arithmetic throughput that the hardware is capable of (so it is a good proxy for raw compute power), but it is still not representative of the way most real workloads (even professional or scientific computing) drive hardware. Even the scientific computing community is slowly starting to move away from LINPACK (see, for instance, http://www.sandia.gov/~maherou/docs/HPCG-Benchmark.pdf).

LINPACK as you typically see reported for consumer and mobile devices is far worse; the implementations used there are often naive scalar implementations that do not do any form of cache tiling. This means that for small problem sizes they are actually measuring how clever the compiler is, and for large problem sizes they are actually measuring memory throughput (both of which are interesting, but they are not what a “CPU benchmark” should be reporting).

The set of operations covered by Geekbench is actually fairly interesting (with some exceptions). If they simply used the best available libraries instead of their own naive implementations it would almost be a respectable benchmark.
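To make the cache-tiling point concrete, here's a minimal sketch (my own illustration, not code from any LINPACK implementation) of a blocked matrix multiply, where each tile is reused while it's hot in cache instead of re-streaming whole rows and columns from memory:

```python
import numpy as np

def tiled_matmul(a, b, tile=64):
    """Blocked (cache-tiled) matrix multiply. Each tile x tile sub-block
    of a and b is reused many times while still in cache; a naive triple
    loop instead re-fetches operands from main memory for every output
    element, so for large n it measures memory throughput, not arithmetic."""
    n = a.shape[0]
    c = np.zeros_like(a)
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                c[i:i+tile, j:j+tile] += a[i:i+tile, k:k+tile] @ b[k:k+tile, j:j+tile]
    return c

rng = np.random.default_rng(1)
a = rng.standard_normal((256, 256))
b = rng.standard_normal((256, 256))
print(np.allclose(tiled_matmul(a, b), a @ b))  # True
```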


Also, LINPACK doesn't do I/O, which frequently sets the max speed.


Apple also has its own heavily optimised version of LINPACK available in Accelerate.framework.


Geekbench does not measure GPU performance at all AFAIK.


> Geekbench does not measure GPU performance at all AFAIK.

Nor does it measure I/O. I'm interested to see how that PCIe SSD performs.


PCIe SSDs are a terrible disappointment. I was able to achieve the same performance as 'http://www.newegg.com/Product/Product.aspx?Item=N82E16820227... with 3 Samsung drives in RAID 0.


I fixed the link for you: http://www.newegg.com/Product/Product.aspx?Item=20-227-724&T...

But I don't understand. That card alone is $3,500.

Here are two links which come to a different conclusion from you.

http://www.barefeats.com/mba13a.html

http://www.macrumors.com/2013/06/11/macbook-airs-pcie-based-...


Uhm... your link 404s, but I very much doubt that you were able to achieve 1.25GB/s (that's gigaBYTES per second) reads with 3 Samsung spinning-platter disks, right?

Because that's what Apple says the 2013 Mac Pro SSD is capable of.


For one, I doubt you got the numbers Apple states for the Pro with 3 HDDs in RAID 0.

Maybe you beat some other PCIe SSD product, but that's not saying much. Capabilities differ between SSDs, even if they are all using PCIe.

But even if you did, how does what you write make them "disappointing"?

Seeing that you need 3 (THREE) drives in RAID0 to beat ONE SSD.

And of course that won't help your random access times.


I'm just excited the older models will be getting less expensive soon so I can afford one.


Don't be too sure of that... plenty of pros want upgradable systems. Most of the Mac Pro users I know tend to keep them 4-5 years or more, with an upgrade cycle every 12-24 months (faster CPU, more RAM, SSD, etc.) in the same box. The new form factor really limits that, and would probably be better served with a consumer-grade CPU etc. This is much closer to the mid-range desktop a lot of people have been asking for, except it's going to cost an arm and a leg.


Yeah, as much as I love the aesthetics, I'm predicting poor sales unless the price is comparable to other desktops. It's precisely the reason I avoided the Cube back in 2000/2001 and opted for the Tower.


Comparable to whose desktops? The server edition of the Mac Mini is a thousand dollars and it's absolutely nothing special in hardware. I haven't seen this much disappointment amongst people who actually use the hardware when it comes to an Apple product, ever. Well, we have the cheerleaders for the new hardware, but it's pretty obvious they don't use the current hardware. You get these types in any conversation.

No internal expansion, no built-in CD/DVD drive, limited memory, no apparent means of upgrading the video, and if one of the components breaks it's off to the store. Yet if the predictions are right, this will be even more expensive than current solutions. Worse, Thunderbolt enclosures are not cheap, if you can even find them.

Unless those price predictions are way off base, I and a few others I know have even started serious talk of going Hackintosh.

Apple had every opportunity to produce a slimmer tower that still gave options for expansion. Instead it's a replay of the "Cube" days.

It's not even good looking; the Cube at least could pass as a conversation piece.


I think it's pretty good looking myself... just that it would be better with desktop-class parts, and sold as a mid-range option (between the Mini and the Pro). As it stands, as a new Pro, I think it's a failure. I'm with you on the Hackintosh route.


this is much closer to a mid-range desktop a lot of people have been asking for, except it's going to cost an arm and a leg.

Kind of like Apple is pulling something like this:

Mid-range desktop you say? Hmm... Yes, I've got just the thing- How about a mid-range desktop with the price of a top-end desktop?


I don't see how a machine that has almost 1GB/s disk transfer speeds, 60GB/s memory bandwidth and 7 teraflops GPU power (almost double the top of the line geforce) is "mid-range" unless you're from the future.


If you do something that is fairly CPU bound, such as large project C++ compiles for iOS software, it matters, a lot.

GPUs are not going to make those compiles any faster, and Apple has abandoned distcc. The more CPU cores the better.

I'm seriously disappointed that they are not offering a 2 or even 3 socket version of the new mac pro. They're all, fuck you iOS developers, the only people who are practically stuck on our platform.


I don't think slow C++ compilation was the problem Apple thought it was addressing with the new Mac Pro.


I don't think performance numbers have been a good way to distinguish between mid-range and high-end for a while now. Especially for users who are looking for a single machine that covers a range of uses including games.

ECC memory is a high-end feature. As is the specific choice of GPUs. Arguably, the fast PCIe SSD counts, too, although I suspect that won't last -- everyone benefits from I/O bandwidth, and there's no particular reason, other than market segmentation, for PCIe SSDs to be more expensive.


So what makes the Mac Pro a mid-range computer? Bear in mind that it's not a consumer product, it's squarely aimed at audio/video/graphics professionals, even though it'll blow any current gaming rig out of the water with those specs.


It is a mid-range machine and it is a consumer product.

It's more a Mac Mini Pro than a Mac Pro.

This is a high end machine. Go compare:

http://www.hp.com/united-states/campaigns/workstations/z820_...


They're in the exact same ballpark. That workstation has 2 x 8 cores and a higher RAM ceiling in its favor. On the other hand, it's limited to 7 PCI slots for expansion, three of them slow/legacy, vs. a max of 36 chained devices on the Pro. The Quadro GPU it comes with is 3.5x less powerful than the one in the Mac Pro, and it will come in at a higher price tag.

Do you by any chance drive a mid-range Bentley?


Except in the HP it all goes inside, which is where I need it so it can be locked and powered from one supply; every part is replaceable, the GPU can be upgraded, it has a hot-swap enclosure built in, and it comes with guaranteed support for 5 years.

Plus some kit I use won't go on the end of a thunderbolt bus...


The Thunderbolt bus is faster than 4 out of the 7 PCIe ports on the HP. You're trying hard not to get it.


Well, being limited to a 256GB mSATA SSD might hold it back a little. It's not the hardware I have issue with so much as the form factor... I think the form factor screams low-to-mid range, and not upgradable. I think the form factor would be great with mid-range consumer hardware, a Core i7 and two consumer-class GPUs.


I don't see mention of a 256GB limit anywhere. You also have 6 ports capable of 20Gbps each, enough for way more external storage than you'll ever need during the machine's lifetime.

The key to the new Mac Pro is extensibility (those 6 ports, each daisy-chainable). Thunderbolt is like a PCIe cable: you could add a better GPU, storage or whatever you want as external devices. It's much more open to third-party manufacturers than the current model of Apple-sanctioned upgrades.


I'm saying that the form factor would be better suited as a mid-range product with more consumer class hardware inside. As it stands, I don't think it will do very well.


That's definitely how this looks.

If it turns out that there are actually a range of machines, with GPU options other than Firepro, this could still end up as a very compelling mid-to-high-end personal desktop.


Pretty sure that MacBook Pros are cheaper and faster than that already.


Not even close. They come out at about 1/2 of the new Mac Pro: http://browser.primatelabs.com/geekbench2/search?utf8=%E2%9C...


Smoking graphics performance on the MacBook Pro too.


Registered Apple developers should watch WWDC 2013 session 109 to get a taste of what the new Mac Pro is capable of.


What's the title? (The index[1] is currently hard to search by session number.)

[1]:https://developer.apple.com/wwdc/videos/


It's a lunch talk: Painting the future


Ah, cool. Thank you. I actually saw that at the event. Excellent demo.


>> "and includes the latest four-channel ECC DDR3 memory running at 1866 MHz to deliver up to 60GBps of memory bandwidth.* "

It should be "running at 933⅓ MHz". A DDR3-1866 module runs at 933⅓ MHz, does 1866⅔ MT/s, and delivers ~15GB/s per channel; they have quad-channel DDR3-1866. (If you disagree, check the DDR3 spec. The article at apple.com is simply wrong in stating "running at 1866 MHz". I know it is a minor point, but when you are listing specs, you have to be precise.)

Update: sorry, saying "running at 1866 MHz" is Ok. I stand corrected.
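For what it's worth, the quoted 60GBps figure does check out from the numbers above, assuming a quad-channel configuration with the standard 64-bit channel width:

```python
# Back-of-envelope check of Apple's "up to 60GBps" claim, assuming
# quad-channel DDR3-1866 with the standard 64-bit (8-byte) channel width.
io_clock_hz = (933 + 1 / 3) * 1e6      # DDR3-1866 I/O clock: 933 1/3 MHz
transfers_per_sec = io_clock_hz * 2    # double data rate -> 1866 2/3 MT/s
bytes_per_transfer = 8                 # 64-bit channel
channels = 4
bandwidth_gb_s = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")    # ~59.7, i.e. "up to 60GBps"
```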


Absolutely nobody uses that scheme for referring to memory speeds. Everybody uses the effective clock speed, and there's really nothing technically wrong about labeling that with MHz, since Hz isn't restricted to referring to only sinusoidal clock signals. If you're going to be pedantic, then you ought to make clear when you're referring to the memory clock speed or the I/O clock speed or the transfer rate. For anyone who is interested only in using a memory module and not implementing the memory bus, 1866 is the only number that matters.


Hz refers to the number of cycles per second. Everybody uses this as the effective transfer clock, but hey, everybody is wrong.

It's correct to use MT/sec instead, but you're right that it's being pedantic and it's unlikely to confuse anyone who really did care. On the other hand, it does stink of specmanship to quote the doubled frequency. I would have hoped a Pro spec machine would rise above that kind of thing.


Hz is just 1/sec. If you want to be technically correct, don't add any implicit meaning to it. Frequency is the rate of something happening per unit of time. That "something" can be "number of data transfers" or "number of clock cycles" or "number of times a ball hits the floor". The frequency of all of those can be measured in Hz. You should be clear about what quantity you're measuring, but the unit is Hertz nevertheless.

(And MT/sec is really "some unit-less quantity/sec", so it can be accurately called MHz.)


The doubled frequency is just the way the market works. It is actually kind of nice when you are trying to compare sticks that are double sided vs. sticks that are single sided and actually run at the listed frequency.


You can also add the dual naming scheme that gives us gems like "PC3-14400", where the latter number is the MB/s of bandwidth or something similarly confusing to 95% of people, myself included.


"PC3-14400" wouldn't be half so bad if there wasn't also "PC3-14440", "PC3-14444", "PC3-14000", and maybe "PC3-14402" depending on which manufacturer you are buying from. To make things even more dandy, they aren't interchangeable.



