The lack of a 32GB option in the MacBook Pros is because Intel's memory controllers cannot handle LPDDR4 (except in U SKUs, which Apple does not use for the MacBook Pro), and Apple decided that using regular DDR4 to reach 32GB would consume too much power. [1]
Apparently Intel will fix this in their 2018 mobile SKUs, but until then, Apple has chosen not to kneecap battery life or build a separate logic board for the people who want 32GB.
This situation has a lot of parallels, for me, with the situation Apple faced with IBM and the G5. Everyone at the time wanted the G5 in a laptop to replace the G4, but IBM couldn't get the power consumption down.
Now, over a decade later, Apple is taking shit for Intel's delays in supporting LPDDR4. I bet this is going to accelerate their plans to migrate the Mac to their own ARM designs.
> This situation has a lot of parallels, for me, with the situation Apple faced with IBM and the G5.
That's a great observation, and having witnessed the PowerPC -> Intel migration, I'm disappointed I didn't make it myself. Both Motorola and IBM, who were supplying PowerPC CPUs to Apple, sold off their microprocessor divisions after a litany of manufacturing difficulties. IIRC, that was what drove Apple to abandon PowerPC in the first place. It would be ironic if Intel, having benefitted so greatly from the manufacturing shortfalls of past competitors, were to find itself in a similar situation.
> It would be ironic if Intel, having benefitted so greatly from the manufacturing shortfalls of past competitors, were to find itself in a similar situation.
Intel is already in this position, though not because they sold off their fabs.
All the money these days is going into mobile SoC manufacturing by the likes of TSMC and Samsung. Intel simply lost because Samsung and TSMC are able to outspend Intel on fab R&D, and it shows in Intel's numerous node-shrink delays.
People will argue that TSMC/Samsung 7/10nm is not the same as Intel's, and they're probably right, but only for now. TSMC/Samsung are eventually going to surpass Intel's fab technology because they're killing it in volume-manufacturing chips for Apple, Qualcomm, Nvidia, AMD, and others.
Meanwhile, Intel is fabbing for... Intel. Plus some Altera FPGA IP they don't seem to be integrating very well into their product stack. If Intel wants to survive the next 20 years, the only option I see is for them to start fabbing for other people too.
ARM is moving into servers, and once the perf/watt surpasses Intel's, it won't be long before hyperscale cloud companies like Amazon, Google, Microsoft, and Facebook migrate away from Intel. For those guys, a 10% TCO reduction is a big deal, and while some of the perf/watt is due to the design, a lot of it comes from having the better process. If Intel loses their process lead, which is happening right now, then they're going to be second tier.
People will look back in 15 years at Intel snubbing Apple for the original iPhone SoC and mark that decision as the beginning of the end for Intel.
From Paul Otellini, Intel CEO at the time:
> At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.
Yeah, the current Intel chipsets support DDR4, which can go beyond 16GB, or LPDDR3, which is capped at 16GB (LP = low power). So Apple had the choice between a higher RAM ceiling and longer battery life, and they chose longer battery life.
The forthcoming Cannon Lake chips will support LPDDR4, enabling MacBook Pros with a higher RAM ceiling without sacrificing power efficiency.
It was: 32GB required LPDDR4 memory, but Skylake didn't support that. It sounds like that's dependent on shipping Cannon Lake, which was originally slated for 2017 but has been pushed back to 2018.
Intel. Currently, if you want to use low-power RAM with an Intel mobile chip, you have to use LP-DDR3 in a config that maxes out at 16GB. You can also use non-LP DDR4 up to 32GB (Dell makes a laptop that does, for instance), but at that point you have a somewhat increased power draw when the system is running, and a _dramatically_ increased power draw on standby (IIRC about five times the power). Apple laptops have traditionally had excellent standby battery life; they're presumably not willing to sacrifice that.
Upcoming Intel mobile chips will resolve this, allowing use of LP-DDR4.
That claims to support up to 32GB of memory and, among other things, LPDDR3-1866. Are you saying that if you want to use LPDDR3-1866 with that CPU, you're limited to 16GB? I can't find anything about that through some quick googling, but if it's true, I retract my snarky comment.
Note "(dependent on memory type)". It can do LPDDR3, and it can do 32GB, but not both at once. You'll note that any laptops which do 32GB will list DDR4 RAM.
Also, I wish Intel would just list the maximum supported memory configuration for each memory type, instead of just having a worthless "(depending on memory type)".
With the old chipset generation, you had the choice between a low-power chipset that topped out at 16GB or a desktop chipset that could do more, but also drew a lot more power. Windows laptops often chose the latter, but Apple, on the path to a 1mm-thin MBP, chose to sacrifice RAM for thinness and battery life.
But maybe think about replacing the 4MB background video with a static image or something. It felt a bit overpowering to me (it moves too fast), and it also doesn't scale right with the viewport, leaving white text on a white background.
Ditch the video altogether. It is sluggish at best and also detracts from the material the user actually interacts with. And after a minute or so, the jump back to the start of the loop is jarring. If they want the whole moving/sparkling-water effect, use an animated image with a transition effect between a few frames rather than a full-motion video; something like the sketch below.
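If the site happens to be a React app (pure assumption on my part; the component name and frame paths below are made up), here's a minimal sketch of the frames-plus-crossfade idea:

```tsx
// Minimal sketch: crossfade a few still frames instead of looping a 4MB video.
// Assumes a React frontend; the FRAMES paths are hypothetical stills.
import React, { useEffect, useState } from "react";

const FRAMES = ["/bg/water-1.jpg", "/bg/water-2.jpg", "/bg/water-3.jpg"];

export function CrossfadeBackdrop() {
  const [current, setCurrent] = useState(0);

  // Advance to the next frame every 5s; the modulo wraps around,
  // so there's no jarring jump back to the start like a video loop.
  useEffect(() => {
    const id = setInterval(
      () => setCurrent((i) => (i + 1) % FRAMES.length),
      5000
    );
    return () => clearInterval(id);
  }, []);

  return (
    <div style={{ position: "fixed", inset: 0, zIndex: -1 }}>
      {FRAMES.map((src, i) => (
        <img
          key={src}
          src={src}
          alt=""
          style={{
            position: "absolute",
            inset: 0,
            width: "100%",
            height: "100%",
            objectFit: "cover", // scales with the viewport, no white gaps
            opacity: i === current ? 1 : 0,
            transition: "opacity 2s ease-in-out", // the crossfade itself
          }}
        />
      ))}
    </div>
  );
}
```

Three or four stills with a 2s opacity transition gets you most of the shimmer for a tiny fraction of the bytes, and there's no loop seam.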
On my work laptop with an i5 and 8GB of memory, typed characters lagged 5-10 seconds behind. I did an "asdf" check and thought the browser had completely frozen, it took so long to appear. The video is killing this site for me.
Thank you, we'll look for alternatives to the video, ha :) Our mobile view is also a bit confusing at the moment, so we're trying to make that look a bit better too!
Yep, you're right, our Markdown documentation describes a lot of GitLab Flavored Markdown that will (currently) only work inside GitLab itself, not on the docs site, which uses a completely different Markdown renderer.
I've been doing exactly this too over the last few months. Safari iOS is so fast now on the iPhone 6s that I find the mobile sites faster than the apps in most cases.
Context is not a good feature, and even the developers didn't want to write documentation for it because they didn't want people to use it. https://github.com/facebook/react/issues/580
As people adopt external state containers like Flux and Redux, it becomes increasingly impractical to bucket-brigade chunks of data and callbacks to the leaves of the component tree via props. Some sort of delivery mechanism that doesn't couple in intermediate components is pretty critical to avoid maintaining an unbounded amount of glue code. Context isn't without its kinks, but it's pretty useful for cutting boilerplate and decoupling components. The one thing I don't buy is that it's somehow worse than props from an architectural perspective (it's often compared to global variables, which makes little sense to me).
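For what it's worth, here's a minimal sketch of the decoupling I mean, using createContext/useContext rather than the undocumented legacy API the linked issue is about (all the names here are invented for illustration):

```tsx
import React, { createContext, useContext } from "react";

// Hypothetical session data; in a real app this might live in a Redux store.
type Session = { userName: string };

const SessionContext = createContext<Session>({ userName: "anonymous" });

// Toolbar neither knows nor cares about the session. Without context, it
// would have to accept a `session` prop purely to forward it downward;
// that's the "bucket brigade" glue code.
function Toolbar() {
  return <Greeting />;
}

// The leaf pulls what it needs straight from the context.
function Greeting() {
  const { userName } = useContext(SessionContext);
  return <span>Hello, {userName}</span>;
}

export function App() {
  return (
    <SessionContext.Provider value={{ userName: "ada" }}>
      <Toolbar />
    </SessionContext.Provider>
  );
}
```

Unlike a global variable, the value is scoped to the provider's subtree, and updates rerender consumers through React as usual, which is why the comparison never made sense to me.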
https://beta.companieshouse.gov.uk/company/01591116/filing-h...