Friday, February 22, 2019

How Elastic is NAND demand?



When memory prices drop dramatically, we are always told that demand is elastic, so demand will pick up enough to make up for the drop. I am not sure this always happens.











When I worked in memory manufacturing, I would say, "Whenever people are talking about elastic demand, we are about to lose money." The memory market is more profitable today, but I wonder how elastic demand really is and in which markets.

Some thoughts:
  • Smartphones are a leading consumer of NAND memory. I just checked Apple's website, and they are currently charging $200 to add 256GB of storage. That NAND costs Apple about $32 and dropping, so it is pretty clear that a change in the price of NAND is not what's driving 256GB vs. 512GB sales (a quick sketch of the math follows this list). Side note: this is also the reason I refuse to let my kids get storage upgrades on their iPhones... I can't bring myself to pay Apple 80c/GB for NAND!
  • In 2017, NAND prices were flat to slightly up. While client SSD sales slowed, overall NAND bit growth was still 35% despite the price increase, when a decrease of 20%+ had been expected.
  • In Q4 2018, client SSD unit sales were reported up 35% YoY. Client SSD pricing dropped over 40%, more than double the consensus ASP reduction predicted at the beginning of 2018, while the prediction for SSD unit growth was 30-35%. So did the crash in SSD pricing dramatically help client sales? Revenue dropped 5% YoY.
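For reference, here is a minimal sketch of the per-GB math behind the smartphone bullet above. The $200 upgrade price and ~$32 NAND cost are the figures quoted in the bullet; everything else is simple arithmetic for illustration.

```python
# Rough per-GB math for the smartphone storage-upgrade example above.
upgrade_price = 200.0   # $ Apple charges the customer for a +256GB step
nand_cost = 32.0        # $ approximate NAND cost to Apple for 256GB (and falling)
capacity_gb = 256

price_per_gb = upgrade_price / capacity_gb   # ~$0.78/GB to the customer
cost_per_gb = nand_cost / capacity_gb        # ~$0.125/GB to Apple
markup = upgrade_price / nand_cost           # ~6x

print(f"Customer pays ~${price_per_gb:.2f}/GB; NAND costs ~${cost_per_gb:.3f}/GB")
print(f"Markup over NAND cost: ~{markup:.1f}x")
# Even a large drop in the NAND price barely moves the $200 retail step,
# which is why handset storage demand is largely insulated from NAND ASPs.
```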
We are always challenged with the unknowable "what if" of "if prices didn't crash, units would have dropped."

A possible model: there is elasticity, but it has two characteristics.

1) Elasticity is delayed by a year or more. AWS is not going to redesign their datacenters in a month based on low SSD costs. They need time to redesign, show the financial benefit, and ensure it is a sustainable change. Also, people base future designs and architectures on the expectation that ASP will drop. AWS is planning for a NAND ASP of about $25/TB in 2025 and is making plans based on that.

2) When it happens, elasticity is smaller than people think. Obviously an instantaneous 50% drop in ASP could easily lead to people buying 2x the chip size or 2x the capacity in an SSD. But do more units get sold? Prices drop on average and bits grow on average. A simple proposed metric: an incremental 15% ASP reduction leads to an incremental 5% increase in bit growth. So if ASP was predicted to drop 25% with bits growing 30%, and ASP instead drops 40%, we will see 35% bit growth (a small sketch of this rule follows).
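As a minimal sketch of the proposed rule of thumb above: the 15%-ASP-for-5%-bits ratio and the 25%/30% baseline come from the paragraph, and the linear form is my own simplification for illustration.

```python
def bit_growth(asp_drop, baseline_asp_drop=0.25, baseline_bit_growth=0.30):
    """Rule of thumb from the post: each incremental 15% of ASP reduction
    beyond the baseline adds roughly 5% of incremental bit growth.
    All values are fractions (0.40 means a 40% ASP drop)."""
    extra_asp_drop = asp_drop - baseline_asp_drop
    return baseline_bit_growth + extra_asp_drop * (0.05 / 0.15)

# Example from the text: the plan was -25% ASP / +30% bits; an actual 40%
# ASP drop then implies roughly 35% bit growth.
print(f"{bit_growth(0.40):.0%}")   # -> 35%
```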

I have data to back these ideas up, plus anecdotal stories from purchasing discussions with PC, hyperscale, and handset OEMs. We can also talk about how this affects memory revenue and margins in 2019. Call for more info and discussion.

Mark Webb






Monday, January 28, 2019

Five Thoughts from SNIA 2019 Persistent Memory Summit

Last week I attended the Persistent Memory Summit in Santa Clara. This is a great one-day conference held each year that brings together experts on persistent memory examples, system support, and applications. The presentations are posted, and there is a video as well (thank you SNIA!).








5 thoughts:
  1. Now that persistent memory has moved from a "wouldn't it be great if we had this?" concept to a "we have some options, now what?" debate, we need to define "persistent memory" based on the new reality. Rob Peglar and Stephen Bates reminded us that using the term SCM is not politically correct and can only be used in a safe space miles away from a SNIA conference (Starbucks Milpitas worked for me). This is good, since SCM was way too vague and theoretical. Andy Rudoff offered a simple definition: it needs to be addressed with loads and stores like memory (not blocks and pages), and it needs to be persistent. Speed is in the eye of the beholder, but a year ago there was a definition of <2us latency in applications, which I liked. The NVDIMM-N and NVDIMM-P definitions would indicate that it does not need to be one type of memory but can be a DIMM or a system. These simple definitions would seem to eliminate some products that are often referred to as "persistent memory" (a side discussion). A short sketch of the load/store distinction follows this list.
  2. The most common persistent memory today is arguably NVDIMM-N, which provides up to 32GB DIMMs that can be written to like DRAM but never lose data. The challenge is that using DRAM for the entire capacity, plus NAND, plus energy support, leads to a high cost: 3x or more per bit compared to DRAM. As a result, a small number of systems (typically SANs) use them today. Multiple providers were at the conference, and you can buy this persistent memory whenever you wish.
  3. Frank Hady presented Intel Optane Persistent Memory and its applications. There are two modes: App Direct, which is persistent memory, and Memory Mode, which loses data on a power cycle. Memory Mode is great for adding tons of memory that is somewhat slower and cheaper, but it is not persistent per Intel documentation. This is poised to grow rapidly with Intel's backing, but it is off to a slow start. From talking to customers, most say they still can't get Optane PM to build their own systems; the availability today is running apps on cloud systems. I have details on modes and projected revenue in other publications.
  4. NVDIMM-P is proposed as an open, standards-based option similar to Optane PM, where the architecture supports some DRAM plus NAND or another memory type to optimize for cost. This will allow DIMMs that are LESS expensive than DRAM, higher density, and more non-proprietary options. We need this ASAP! When can I get one?
  5. From the conference, it feels like infrastructure support and application drivers are ahead of the actual hardware. That is probably not entirely true, but there is a push from Intel and SNIA to get all the support in place, the OS support exists, and we have applications. Once Intel ships significant volume and competitors start shipping their versions of PM, we can test out all the applications.
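To make the load/store part of Andy Rudoff's definition concrete, here is a minimal sketch using Python's mmap. The /mnt/pmem path is hypothetical, and true persistence would also require a DAX-capable filesystem plus cache flushes that plain Python does not expose, so treat this as an illustration of byte-addressable access rather than a production recipe.

```python
import mmap
import os

# Hypothetical file on a persistent-memory-backed (DAX) filesystem.
PATH = "/mnt/pmem/example.bin"

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, 4096)

# Load/store style: the mapping is byte addressable. Writing an 8-byte
# slice is effectively a CPU store into the mapped region; no block-sized
# read/modify/write is needed.
region = mmap.mmap(fd, 4096)
region[0:8] = (12345).to_bytes(8, "little")    # "store"
value = int.from_bytes(region[0:8], "little")  # "load"

# Block/page style, for contrast: a write() syscall moves data through the
# kernel in block-sized units, which is how SSDs are normally accessed.
os.lseek(fd, 0, os.SEEK_SET)
os.write(fd, (67890).to_bytes(8, "little"))

region.close()
os.close(fd)
```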

See more info on our blogs or website. Thanks to Chris Mellor of The Register fame for republishing some of my FMS work on persistent memory and Optane with all the gory details and numbers.

Mark Webb
www.mkwventures.com


Tuesday, January 22, 2019

Jan 2019 Intel Optane Revenue Update

At the 2018 Flash Memory Summit we presented models for 3D XPoint/Optane revenue, costs, performance, and endurance. We update these here.










In the six months since, Intel has updated product roadmaps and provided details on the Optane DIMMs. New projections are based on these changes.

Revenue From FMS2018 and this Blog



UPDATE:
Most likely, 2018 sales did not quite meet expectations. DIMM sales were fairly low pre-production (read: samples). Non-DIMM SSDs sell, but at lower prices and in lower volumes than expected.

2020 can still meet our projections from FMS... but Intel is off to a slow start in 2019.

  • Cascade Lake launch and volumes are later and lower than planned.
  • Our attach rate projection for Optane DIMMs on Cascade Lake was low, and we lowered it even more based on customer reports and timing. Also, as Intel showed and we reviewed in this blog, when used as main memory expansion, the Optane DIMM is not persistent. To be persistent it must be a separate memory region... more like an SSD on the DRAM bus.
  • Server DRAM demand is down and prices are down. This is not good for 2019.
  • The Lehi factory is transitioning to Micron ownership. Intel has plans to ramp XPoint internally, but those will be in progress for most of 2019.
  • Optane Memory for desktops has not taken off. Intel now plans a notebook version with Optane Memory + a QLC SSD, which we have shown to be a cost-effective performance SSD.
Unless Intel gives us data at the Persistent Memory Summit or in its earnings announcement, we have to model the revenue (Intel can always correct me!). We project 2019 DIMM revenue to be about $100M below the 2018/2020 midpoint, and overall 2019 Optane revenue to be about $200M below the 2018/2020 midpoint (a parameterized sketch of this midpoint model is below). Micron will have no measurable revenue in 2019. As you can see, the revenue is driven by DIMM sales; if those continue to slip, the numbers will get lower and competition is enabled.
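For reference, here is a parameterized sketch of the midpoint model described above. The 2018 and 2020 revenue inputs are placeholders (the actual figures live in the FMS chart); only the ~$100M and ~$200M shortfalls come from the text.

```python
def projected_2019(rev_2018, rev_2020, shortfall):
    """Model 2019 revenue as the 2018/2020 midpoint minus a shortfall.
    All values in $M. rev_2018/rev_2020 stand in for the FMS-model figures."""
    return (rev_2018 + rev_2020) / 2.0 - shortfall

# Hypothetical inputs for illustration only; substitute the FMS-model numbers.
example_overall = projected_2019(rev_2018=300, rev_2020=1500, shortfall=200)
print(f"Illustrative overall Optane 2019 revenue: ${example_overall:.0f}M")
```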

We have detailed data on GB shipments, DIMM vs. SSD sales, pricing, cost, performance, and endurance for 3D XPoint and Optane. We can also discuss the JV agreement changes and their implications. Call to discuss. We will be at the 2019 Persistent Memory Summit this week to discuss details as well.

Mark Webb





Wednesday, January 16, 2019

2019 Persistent Memory Summit and Reports


SNIA is hosting the 2019 Persistent Memory Summit next week. We will have detailed updates and reports on the hot topics in persistent memory shown below.


I highly recommend that people attend this, as it provides a great vision for the technology and markets.


While I am not presenting, I will attend and have updated data from my FMS presentations on:
  • 2019 Persistent Memory Revenue numbers
  • Optane revenue for Intel overall and specifically the "quasi-launched" Optane DIMMs
    • Hint: both applications and the market are changing
    • Sometimes persistent memory is not persistent
  • 3D XPoint technology roadmap, specifications, endurance, and challenges for both Micron and Intel
  • Competition to Optane from ReRAM, NVDIMM-P, low-latency NAND, Z-NAND, etc. Can they match or even surpass Optane?
Plus, I will provide commentary on the technology, market, and application presentations in real time.

Hope to see you there; text or email me to set up a meeting.

#Persistent_Memory 
#SNIA

Mark Webb
www.mkwventures.com





Thursday, November 1, 2018

Intel's Announcement of Beta Deployment of Optane Persistent Memory




Intel announced that Optane Persistent Memory is shipping to select customers and that those companies will deploy solutions "soon".





What was announced:
Optane Persistent Memory is shipping as a beta program to select customers.
Widespread shipping comes in 2019.
More importantly, Intel announced two modes in which it will work.

1) Memory Mode: Big and Affordable, but Volatile
This is 100% lined up with what we forecast. Optane Memory's main value proposition is adding tons of memory, at slower speed, in DIMM format on the memory bus: half the price of DRAM, probably 7x higher latency, much higher density. 6TB is possible. You make the speed acceptable by using a DRAM cache. Example: 1.5TB of Optane with 192GB of DRAM, which only appears as 1.5TB of memory since the DRAM is all used as cache (a quick sketch of the math is below). The downside? The memory is not persistent. Since you use a volatile cache, you can lose the data.
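A minimal sketch of the Memory Mode capacity and cost math, using the 1.5TB Optane + 192GB DRAM example above. The DRAM $/GB figure is a placeholder; the only pricing assumption taken from the post is Optane at roughly half the price of DRAM.

```python
# Memory Mode capacity/cost sketch for the example above.
optane_gb = 1536       # 1.5TB of Optane DIMMs
dram_cache_gb = 192    # DRAM used entirely as cache, invisible to the OS

visible_memory_gb = optane_gb  # the OS sees only the Optane capacity

dram_price_per_gb = 8.0                       # hypothetical placeholder, $/GB
optane_price_per_gb = dram_price_per_gb / 2   # "half the price of DRAM"

memory_mode_cost = optane_gb * optane_price_per_gb + dram_cache_gb * dram_price_per_gb
all_dram_cost = optane_gb * dram_price_per_gb  # same visible capacity in pure DRAM

print(f"Visible memory: {visible_memory_gb} GB")
print(f"Memory Mode: ${memory_mode_cost:,.0f} vs. all-DRAM: ${all_dram_cost:,.0f}")
# Tradeoff: much cheaper per visible GB and far denser, but ~7x higher media
# latency (mostly hidden by the DRAM cache) and not persistent, because the
# cache is volatile.
```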

Summary: Optane Persistent Memory in Memory Mode is not persistent. 

2)  App Direct Mode: Big, Affordable, and Persistent

In this mode, the Optane is addressable memory. You have DRAM as well, and each is addressed separately. If you want to write to the 192GB of DRAM, fine... just like always. If you want to write to the Optane, fine... it's persistent. But you have to explicitly decide where to write.

Summary: Optane Persistent Memory in App direct mode is persistent. 


The actual value proposition will depend on a few items:
A) Do you need tons of memory but don't need persistence?
B) Do you need persistence, and do you have the application software to manage it?
C) Are you OK that the persistent memory is about 7x slower but 50% cheaper?

We have more data on actual speed, cost, and models for the next-gen Optane. We also previously published models for revenue from Optane SSDs and Optane persistent memory.

Final thought: if Memory Mode were persistent, we would actually have a simple memory solution. And something tells me there will be limitations built into the drivers on writing Optane in App Direct mode, since it does not have infinite endurance. I am thinking those details will come out when we see the end products... we will let you know.

Mark Webb
www.mkwventures.com



Intel announcement

https://newsroom.intel.com/news/intel-optane-dc-persistent-memory-readies-widespread-deployment/

Explanation of Modes
https://itpeernetwork.intel.com/intel-optane-dc-persistent-memory-operating-modes/



Tuesday, October 2, 2018

MKW Ventures Consulting Reports and Presentations

Our recent reports and presentations from Flash Memory Summit are published here:

MRAM
Persistent Memory
3D Xpoint Optane
ReRAM
Emerging Memory


Also included is a detailed presentation on our cost model, which is available for DRAM, NAND, 3D XPoint, and other memory technologies.

Future reports detailing China's memory technologies and plans are coming.

Many reports are listed below:

http://www.mkwventures.com/reports.html

http://www.mkwventures.com/





Saturday, September 29, 2018

XPoint Optane Performance and Revenue


At the 2018 Flash Memory Summit, I presented a summary of 3D XPoint status. It includes applications, a model for the chip performance, and revenue projections. The summary is attached here.



Some of my estimates, as a model for what 3D XPoint chips could be:


  • 128Gbit chip with >10% overprovisioning on the chip itself
  • Read latency: ~125ns; write latency: "higher" (why it is higher is a long story)
  • Endurance: ~200K cycles spec with management techniques. Note: cycling capability is quoted at a certain fail rate, which is managed by the controller (a rough drive-level illustration follows this list).
  • 2nd gen will have 2x the density at about 30% lower cost. It will be available in 2020.
  • It is a very fast, high-endurance, byte-addressable NVM replacement (at much higher cost than NAND)
  • It is not a DRAM replacement.
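As a rough illustration of what the ~200K-cycle spec could imply at the drive level (my arithmetic, not an Intel rating): dividing the cycle count by a service life gives an upper bound on full-drive writes per day, before write amplification and derating are accounted for.

```python
# Upper-bound drive-writes-per-day (DWPD) implied by a ~200K cycle spec.
# Illustration only: real ratings also factor in write amplification,
# derating, and the fail-rate management handled by the controller.
cycles = 200_000
service_years = 5
days = service_years * 365

dwpd_upper_bound = cycles / days   # ~110 full-drive writes/day at most
print(f"Upper-bound DWPD over {service_years} years: ~{dwpd_upper_bound:.0f}")
```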

The model is based on Optane Memory electrical analysis, Intel announcements, and PCM physics.


So how high could sales be? Revenue model:




While high-performance, high-price SSDs are great, the volumes and revenue from this market are too low to pay for the technology's development and manufacturing costs. Memory economics are all about volume efficiently paying for expensive development and fab capital expenditures.

Optane DIMMs are the best opportunity for Intel to achieve the scale needed to make 3D XPoint successful. The DIMMs require Cascade Lake to work, so we expect yet another "launch"* in December to signal Optane (Apache Pass) shipping with the Cascade Lake processor.

The detailed presentation from the Flash Memory Summit page is below... I have other presentations there on persistent memory and a comparison of the performance and cost of all the new memories.

Side note: after FMS, Intel announced that it is moving 3D XPoint development to Rio Rancho, New Mexico, a site I worked at for the better part of 20 years. There is a separate discussion on why this was done and the implications for Micron and Intel. For now, it's just great to have memory development 20 minutes from my home!

Flash Memory Summit page

*The definition of "launch" for many companies is quite different from what you might expect. Intel is no exception!


Mark Webb
www.mkwventures.com

#Optane
#3DXpoint
#NVDIMM