65 Comments

  • Pork@III - Sunday, December 21, 2014 - link


    This will be a new era of computing devices! I await 2017 with great hope!
  • jjj - Sunday, December 21, 2014 - link

    ""Faster, more durable and cheaper SSDs and other storage devices are a win for everyone and ultimately even 3D NAND is just an interim solution until something better comes around, which may very well be RRAM"
    This phrase seems a bit misleading since 3D RRAM following 3D NAND has been the default scenario for some years now. Timing might be a bit unclear but timing always is.
    Are you aware of any of the big NAND guys aiming for a different path as the default not just exploring other options? Guess maybe extending NAND's life with charge trap might be an option (maybe Spansion makes that work, or better said Cypress since they are merging).
  • Kristian Vättö - Monday, December 22, 2014 - link

    I remain doubtful until RRAM is in high-volume production by a major fab and is being adopted in mainstream products, and RRAM still has a long way to go before it reaches that state. While its physics are certainly alluring, there are all sorts of hurdles when it comes to mass production and scaling the design, so I don't want to be that sensational guy who says NAND is dead. Besides, you never know what happens in the next decade anyway -- some companies (e.g. Micron) haven't even revealed their next-gen memory plans, which could be something other than RRAM (their roadmap lists next-gen A and B technologies, so they seem to be developing more than just one tech).

    As for charge trap, Samsung has already deployed that in their 3D NAND design and we may see others following that path.
  • jjj - Monday, December 22, 2014 - link

    As far as I know, Micron is pretty much on the RRAM train; they used to be less certain, but I'm pretty sure that's the thinking now. Just a couple of years ago 3D RRAM was expected this decade by SanDisk, for example. Now it seems it's getting pushed back a bit; it's always about costs, but hopefully it will be less than a decade.
    I guess my point was that by not presenting it as being in a clear pole position, you make this news item seem like more than it is.
    What I would like to know about Crossbar is what exactly would allow them to compete with the big guys, or at least whether they have enough IP to be bought by some Chinese entity (China has no major player in RAM/NAND, so they would be likely to try to get into that; it would be a big deal for us consumers since they would likely be aggressive and pricing would decline a lot more than with just the current players).
  • squngy - Monday, December 22, 2014 - link

    The word memristor comes to mind.
  • jjj - Monday, December 22, 2014 - link

    And who is doing what there? HP and Hynix are doing RRAM with it.
  • Jaybus - Monday, January 5, 2015 - link

    They can be used in applications other than simple binary switching, such as IBM's neurosynaptic chips. See http://www.research.ibm.com/cognitive-computing/ne...
  • MikhailT - Sunday, December 21, 2014 - link

    Sounds too good to be true. What's the catch?
  • sTITh - Sunday, December 21, 2014 - link

    It only works when you lick it?
  • LukaP - Sunday, December 21, 2014 - link

    It will be expensive at first since it's an emerging technology, just as NAND was.
  • WatcherCK - Sunday, December 21, 2014 - link

    A super-speed solid state drive that can scale up to multiple TBs in size and (after some evolution from the initial production models) effectively lasts a human lifetime... big storage may not like selling something the consumer/prosumer/OEM only ever needs to buy one of :)

    As an aside, TechReport's SSD torture test has the Samsung 840 Pro at 2 PB of written data and still trucking along... http://techreport.com/review/27436/the-ssd-enduran...
  • arnavvdesai - Sunday, December 21, 2014 - link

    Really looking forward to the article. Does the name RRAM actually mean that the idea is to replace the RAM in your PC with these (with the added benefit of it being non-volatile)? My assumption was that RAM was the fastest memory accessible to a CPU (outside of any cache it has built in). If that is the case, then is the assumption that for general computing devices the OS would be loaded entirely onto it?
  • tuxRoller - Sunday, December 21, 2014 - link

    Can't replace RAM due to limited write cycles.
    Should be quite interesting in the embedded space.
  • Tchamber - Sunday, December 21, 2014 - link

    I understand that this technology is an alternative to NAND storage. Even at 50 microseconds it would have too much latency to replace conventional RAM. And even with its high durability, I'd guess DDR3/4 have orders of magnitude more read/write durability...
  • iwod - Sunday, December 21, 2014 - link

    It is 50 nanoseconds, not microseconds. And DDR memory operates at around 10ns. At first I was guessing this would be the solution for superfast embedded storage, like in a smartphone, where current NAND at that size pretty much limits the performance. But then 3D NAND should allow current best desktop-class SSD performance on a phone, and possibly even better.
    So what benefits does RRAM bring to the table? Is the latency improvement worth the price?
  • nirwander - Sunday, December 21, 2014 - link

    In the long run it's technologically cheaper to produce 3D RRAM than 3D NAND, and RRAM is better in every other aspect, so hardware companies will invest in it anyway.
  • beginner99 - Monday, December 22, 2014 - link

    And price is what matters most. If it were more expensive than NAND, it would not be adopted just because it is faster, or it would only be adopted in the enterprise segment with pricing to match.
  • Gigaplex - Friday, December 26, 2014 - link

    So why did we get SSDs in the first place if they cost more than HDDs? Sometimes performance is enough.
  • hpglow - Monday, December 22, 2014 - link

    Try reading the article to find the benefits.
  • Kristian Vättö - Monday, December 22, 2014 - link

    It's certain that RRAM won't be adopted by the mainstream space until its price is competitive with NAND. However, there are more benefits than just latency and endurance, because RRAM doesn't have the same block structure as NAND. I.e. it doesn't have to be erased in blocks (actually, it doesn't need erasing at all, as data can be overwritten), which means that there is practically no need for garbage collection and wear-leveling. That in turn allows for simpler controller designs as less processing power is needed, which is excellent for mobile devices since power is always a big limitation there. But I'll cover the basics of RRAM and its pros and cons in more depth soon :)
  • iwod - Monday, December 22, 2014 - link

    Thanks, looking forward to it. I guess the processing overhead of RRAM will be much smaller compared to NAND if those features aren't needed by the controller.
  • FunBunny2 - Monday, December 22, 2014 - link

    -- which means that there is practically no need for garbage collection and wear-leveling.

    I don't see wear leveling becoming unnecessary. Less of it, maybe, but not none.
  • Guspaz - Monday, December 22, 2014 - link

    Wear levelling implies a performance impact, since part of effective wear levelling is moving previously written data around to expose lower-cycle blocks. When you're talking about 100 million cycles or more, wear levelling isn't really important anymore. If you assume 4KB blocks (even if RRAM isn't block-based, the underlying filesystem will be), you're talking about writing 400 gigabytes to a single block before you'd have to start worrying about wear. At those kinds of endurances, you can effectively assume that there is no cycle limit at all.
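
    To put rough numbers on that (a quick sketch; the 4 KB block size and the 100-million-cycle rating are just the assumptions above, not figures from the article):

    ```python
    # How much data must hit ONE block before it wears out, given the assumptions above.
    block_size_bytes = 4 * 1024          # assumed filesystem block size
    endurance_cycles = 100_000_000       # assumed per-cell write endurance

    bytes_per_block = block_size_bytes * endurance_cycles
    print(f"~{bytes_per_block / 1e9:.0f} GB written to a single 4 KB block before wear-out")
    # -> ~410 GB to one block, the "hundreds of gigabytes per block" ballpark in the comment
    ```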
  • FunBunny2 - Monday, December 22, 2014 - link

    -- for the early designs Crossbar is aiming at more conservative ~100K cycles.

    Not to mention leakage and such.
  • tuxRoller - Wednesday, December 24, 2014 - link

    Using a regular fs on this type of device doesn't make much sense. Filesystems are designed around constraints which aren't present for this type of device.
    I know the Linux community has been discussing what such an fs should look like. New APIs will need to be created.

    http://lwn.net/Articles/547903/
  • Gigaplex - Friday, December 26, 2014 - link

    "Using a regular fs on this type of device doesn't make much sense."

    Under Windows, there's pretty much only one option for file systems. They're not going to move away from NAND to RRAM if the SSDs don't work on Windows.
  • Jaybus - Saturday, January 3, 2015 - link

    In many ways, a "regular" fs isn't a good fit for NAND either. To compensate, low level features such as trim were inserted into existing the existing fs. On the other hand, there are existing paradigms utilizing traditional fs's that could more readily match RRAM. Memory-mapped i/o is a perfect fit. But this requires application software rewrites and will take time.
  • JonnyDough - Wednesday, December 24, 2014 - link

    "Wear levelling implies a performance impact, since part of effective wear levelling is moving previously written data around to expose lower-cycle blocks"

    Not necessarily, if it is done in the background when nothing is being read from or written to the drive and the processor's cores/cycles are not all in use. It's just a matter of properly written firmware and OSes.
  • FunBunny2 - Wednesday, December 24, 2014 - link

    -- Not necessarily...

    It is generally agreed that, once a threshold level is filled, an SSD will stall on GC, wear leveling, and general NAND management. One trick often used is to increase spare area, thus delaying reaching the threshold and leaving more scratch space to do the management.

    But yes, if RRAM/etc. behaves like NOR (byte addressable) and is read/write in place, then we have a winner.
  • extide - Monday, December 22, 2014 - link

    Seems like you would still need wear leveling, as there is a limited endurance (although it is quite high) -- but not the whole garbage collection and trim stuff that we have now.

    Although I still see TRIM as a good thing for enterprise storage: for example, when using thin-provisioned storage it can actually release blocks for use by something else, instead of the current situation where once a block is written to it is used forever, or at least until that image or volume is deleted.
  • metayoshi - Tuesday, December 23, 2014 - link

    You wouldn't need TRIM anymore since Kristian said that blocks can be overwritten in a previous comment. The reason TRIM was needed on an SSD was because once you write to a NAND block, it would need to be erased before writing to it again, and the erasing caused a huge performance degradation to future writes. TRIM tells the NAND controller that these specific blocks are not being used by the OS or firmware anymore, so that it can preemptively erase the block in the background (or during the TRIM command) in order for future writes to that block to be as fast as the first write. For RRAM, if overwriting a block has no performance penalty compared to writing data from a "clean" block, like in HDDs, TRIM has no benefit at all.
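
    A toy model of that difference (the millisecond figures here are invented purely for illustration, not measurements of any real NAND or RRAM part):

    ```python
    # Time to rewrite a block that already holds stale data, in a crude model.
    NAND_ERASE_MS = 2.0      # invented: block erase cost
    NAND_PROGRAM_MS = 0.3    # invented: page program cost
    RRAM_WRITE_MS = 0.001    # invented: direct overwrite

    def nand_rewrite_ms(pre_erased: bool) -> float:
        # Without TRIM, the stale block must be erased in the write path;
        # TRIM lets the controller do the erase ahead of time in the background.
        return NAND_PROGRAM_MS if pre_erased else NAND_ERASE_MS + NAND_PROGRAM_MS

    def rram_rewrite_ms() -> float:
        return RRAM_WRITE_MS  # overwrite in place, nothing to erase, so TRIM buys nothing

    print(nand_rewrite_ms(pre_erased=False))  # 2.3   -- no TRIM, erase on the critical path
    print(nand_rewrite_ms(pre_erased=True))   # 0.3   -- TRIMmed ahead of time
    print(rram_rewrite_ms())                  # 0.001
    ```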
  • JonnyDough - Wednesday, December 24, 2014 - link

    HDDs lose capacity over time as well; they just mark sectors as bad if they can't repair them. SSD TRIM functionality does the same, doesn't it? What happens if a block goes bad? What happens if an entire chip dies to overvoltage or something? Why should you lose all your data? Isn't parity built into the drive? If each chip isn't being "RAID 5'd", then it should be for consumer drives at the 1TB level, I think.
  • JonnyDough - Wednesday, December 24, 2014 - link

    You'd still utilize it simply to maintain fuller capacity. Regardless of overall life expectancy, parts of the drive will undoubtedly go bad with use, as with current NAND technology. Wear leveling will help to minimize these failures. Just because it may happen much less frequently does not mean it won't happen at all - although the more durable the storage medium, the less important wear leveling becomes.
  • JonnyDough - Wednesday, December 24, 2014 - link

    Apple will adopt it. Elitists don't care about cost. LOL. Once Apple buys in and economies of scale lower the cost, everyone else will follow suit. The first to successfully fab this is going to make a LOT of money - I just wish I could tell who it was going to be so I could sell everything I own and invest at the right moment.
  • p1esk - Tuesday, December 23, 2014 - link

    DDR memory access time is about 100ns, not 10ns.
  • rkcth - Sunday, December 21, 2014 - link

    50 ns as in nanoseconds, each 1/1000 of a microsecond. It's still several times slower than typical RAM I think, but MUCH faster than NAND (by about 100X).
  • Jaybus - Saturday, January 3, 2015 - link

    The 50 ns is the switching time, or the time it takes to switch a cell from a 0 to a 1 or a 1 to a 0. Actual write time is going to be longer due to addressing overhead, just as with DRAM. It will still be at least 20x faster than NAND, so a significant performance advantage. The biggest advantage, of course, is that it requires only a tiny 4F² cell size and is expected to function at sub-10 nm, meaning it will be much more dense than NAND. Another advantage is that it can operate at a lower voltage than NAND, so it should use less power.
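
    Rough arithmetic on what a 4F² cell buys you (my own back-of-the-envelope numbers, ignoring all array overhead, not anything Crossbar has published):

    ```python
    # Ideal single-layer crosspoint density for a 4F^2 cell at feature size F.
    def gbit_per_mm2(feature_nm: float) -> float:
        cell_area_nm2 = 4 * feature_nm ** 2      # 4F^2 crosspoint cell
        return 1e12 / cell_area_nm2 / 1e9        # 1 mm^2 = 1e12 nm^2, result in Gbit

    for f in (20, 10):
        print(f"F = {f} nm: ~{gbit_per_mm2(f):.2f} Gbit/mm^2 per layer (ideal, no overhead)")
    # -> ~0.63 Gbit/mm^2 at 20 nm, ~2.5 Gbit/mm^2 at 10 nm, before stacking layers
    ```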
  • CaedenV - Sunday, December 21, 2014 - link

    Very interesting. Hybrid Memory Cube tech is supposed to act as a RAM+flash replacement, but is super expensive. Would RRAM be fast enough to use as both internal memory and storage? Or would this stick to the more traditional model of still needing DRAM for active memory? 50ns may be slow for a desktop PC, but for a phone or tablet it would not be all that bad, plus just having to flag memory between storage and active RAM would dramatically reduce load times. Not to mention one less part is one less part... I would much rather have something cheap like RRAM than something expensive like HMC.
  • menting - Monday, December 22, 2014 - link

    HMC isn't targeted as a RAM+flash replacement. It's targeted at high-performance memory in a limited space, as well as less energy used per bit, which RRAM is definitely not targeting in the short term.
  • p1esk - Tuesday, December 23, 2014 - link

    50ns is not slow for a traditional PC. DDR memory access time is about 100ns.
  • kyuu - Tuesday, December 23, 2014 - link

    Not sure what your source is, p1esk, but you're way off.

    Even for old-school SDRAM, the highest latency was 90ns to transfer 8 words. With current DDR3, the latency is about 20ns maximum, again for all 8 words and at the slowest frequency (1066 MHz). Given that these are worst-case numbers, average latency is significantly lower. As we move into DDR4, even worst-case latencies are going to be under 10ns.
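
    Part of the gap in this sub-thread is probably what's being measured: CAS latency alone vs. the full trip through the memory controller. A quick calculation, under the usual simplifications:

    ```python
    # CAS latency in ns for a DDR3 speed grade: data rate in MT/s, CL in clock cycles.
    def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
        clock_mhz = data_rate_mts / 2            # DDR transfers twice per clock
        return cl / clock_mhz * 1000             # cycles / MHz = microseconds, *1000 = ns

    print(cas_latency_ns(1066, 7))    # ~13 ns: column access alone
    print(cas_latency_ns(1600, 11))   # ~14 ns
    # A full cache-miss trip (row activation, controller, queueing) is several times
    # higher, which is where platform-level figures in the 60-100 ns range come from.
    ```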
  • p1esk - Tuesday, December 23, 2014 - link

    Pretty much every source I could find states the access time of RAM is on the order of 100ns. Even L3 cache latency is around 20ns, so your 10ns is strictly in the realm of L2 caches.
    Here's the Anandtech review of Haswell, scroll down the page to the nice graph of memory latency, where you can see the 100ns latency when requests go to RAM.
  • p1esk - Tuesday, December 23, 2014 - link

    Forgot the link: http://www.anandtech.com/show/7003/the-haswell-rev...
  • WaitingForNehalem - Sunday, December 21, 2014 - link

    "At first Crossbar is aiming at the embedded market and is licensing its technology to ASIC, FBGA and SoC developers with first samples..."

    *FPGA
  • Kristian Vättö - Monday, December 22, 2014 - link

    Fixed, thanks for the heads up! :)
  • stephenbrooks - Monday, December 22, 2014 - link

    Field Ballgrammable Gate Array
  • mkozakewich - Wednesday, December 24, 2014 - link

    Sounds like something that happens when Steve Ballmer and Bill Gates collide.
  • capawesome9870 - Monday, December 22, 2014 - link

    When on the Linus Tech Tips forum telling everyone about the 2011-v3 board with 10GbE, I forgot the "E" at the end to signify that it was Ethernet, and some dummy thought it was a board with a 10GB SSD on it.

    But think of it: you buy a board that comes with 120+GB of RRAM hooked up to the chipset, and there's no need to buy an SSD for the system to boot from.
  • Eidigean - Monday, December 22, 2014 - link

    It comes with an M.2 slot, hooked right up to the chipset. Throw one in! I'd rather have the choice of SSD vendor than have the motherboard vendor choose an SSD and drive up the price because of something I'd want to replace.
  • hojnikb - Monday, December 22, 2014 - link

    Interesting technology. But what about data retention? Does this also have limited data retention like NAND, or is it better/worse?

    And what about MRAM? It also seems like an interesting technology that has been in development for quite some time.
  • zodiacfml - Monday, December 22, 2014 - link

    Hmm... no mention of the anticipated capacity of shipping products. I highly doubt this will be of any use yet for devices such as phones or tablets due to low capacity and high price.
    DDR RAM sizes in devices would stagnate if this has success similar to NAND's.
  • Eidigean - Monday, December 22, 2014 - link

    He mentions 1 terabit per package when discussing 16 layer 3D stacks. 8 chips could lead to 1 terabyte drives.
  • zodiacfml - Tuesday, December 23, 2014 - link

    Yes, but how many layers are they capable of putting out at first? If a single layer is their starting point, that is approximately a 64 GB drive?
  • Eidigean - Tuesday, December 23, 2014 - link

    "first commercial standalone chips are expected to feature 16 layers"
  • Eidigean - Monday, December 22, 2014 - link

    A lot of software would have to be rewritten to take advantage of this tech. There's not much need for caching gigabytes of static content in DRAM if the long-term storage medium is fast enough. Games, for instance, could load faster by no longer copying everything to DRAM - just load it from RRAM on demand. It was cached because of the latency in accessing spinning discs.

    Perhaps compression is also something to reconsider. This could read uncompressed data faster than reading a compressed version and decompressing it on demand. It used to be that the decoder could keep up with the spinning disc, so the less you read, the better. Not so with 50 ns access times.
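
    A sketch of that trade-off with made-up bandwidth figures (the crossover depends entirely on how fast the storage is relative to the decompressor, and this crude model ignores overlapping the two):

    ```python
    # Time to get N bytes of usable data: read raw vs. read compressed then decompress.
    def load_time_s(size_bytes, storage_bw, compression_ratio=1.0, decompress_bw=None):
        t = (size_bytes / compression_ratio) / storage_bw   # bytes actually read off storage
        if decompress_bw is not None:
            t += size_bytes / decompress_bw                  # serial decompression, no overlap
        return t

    GB = 1e9
    # Invented figures: slow (HDD-class) vs fast (RRAM-class) storage, 2:1 ratio, 1 GB/s decompressor.
    print(load_time_s(1 * GB, 0.15 * GB, 2.0, 1 * GB))  # ~4.3 s -- slow storage, compressed wins
    print(load_time_s(1 * GB, 0.15 * GB))               # ~6.7 s -- slow storage, raw
    print(load_time_s(1 * GB, 5 * GB, 2.0, 1 * GB))     # ~1.1 s -- fast storage, decompression dominates
    print(load_time_s(1 * GB, 5 * GB))                  # ~0.2 s -- fast storage, raw is faster
    ```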
  • kyuu - Tuesday, December 23, 2014 - link

    Even as fast as RRAM is, it's still significantly slower than DDR3 RAM. Also, the data loaded into RAM is modified quite a lot, I do believe, so that's a lot of additional wear on the storage.
  • FunBunny2 - Monday, December 22, 2014 - link

    Crossbar isn't first, by any means. Rambus bought Unity at the end of 2012, and Unity had been developing CMOx (their name for an RRAM) for years. And there are others. Let the patent fight begin.
  • Kumouri - Monday, December 22, 2014 - link

    I think you meant to say Crossbar in the last sentence, i.e. "...more of a heads up about the state of RRAM and [Crossbar's] recent developments..."

    And I think EVERYTHING is just an interim solution in computers, haha!
  • Witchunter - Monday, December 22, 2014 - link

    A very nice article indeed.
    PS. That's probably "Crossbar's recent developments" near the end of the article.
  • name99 - Monday, December 22, 2014 - link

    Commercialization means what, exactly?
    For example --- the ability to place 256KiB of persistent storage on a SoC may be valuable for many purposes, but that's very different from the ability to create a 1TB storage device at a price comparable to flash, let alone HD.
  • Gigabob - Monday, December 22, 2014 - link

    I too have followed the Crossbar story for a few years. Meanwhile I hear bupkis from RRAM "leader" HP - who expects to lead with this technology for "The Machine". I don't see any reason our friends at Micron/Intel can't move this into production once they feel the time is right.

    I hope Crossbar can go from design win to commercial products - but I'm not holding my breath for a Crossbar SSD. I appreciate the need to scale up, but with volume 3D NAND production at 40-44nm for Intel, the production cost hurdle for RRAM to undercut within 3 years will be nontrivial.

    I used to worry Crossbar would be another A123 and end up in China, but doubt they can capture that kind of imagination or funding - which explains the SoC and embedded focus.
  • HisDivineOrder - Tuesday, December 23, 2014 - link

    This article reminds me of OLED, which was going to supplant LCD in our monitors.

    Doubt it. Maybe if (or rather when) "Crossbar" is bought up by Samsung or Toshiba.
  • FunBunny2 - Friday, December 26, 2014 - link

    Crossbar, by that name, has been licensed from IBM, although not by Crossbar the company, so far as I know.
  • MRFS - Saturday, January 3, 2015 - link

    Keeping things simple, I can see RRAM fitting into the existing workstation and server ecosystems with the following:
    (a) an RRAM replacement for SanDisk's UltraDIMM (but OS changes may be required)
    (b) a 1-for-1 replacement for existing 2.5" NAND flash SSDs, but with a "SATA-IV" interface using the PCIe 3.0 128b/130b "jumbo frame" and 8GHz clock rate, scaling up sooner (or later) to 12G and 16G either with jumpers, option ROMs or auto-detection
    (c) pre-loading the OS once into an upper region of very large main memory subsystems, e.g. 1TB, and leaving it there, e.g. for an "instant ON" Windows Desktop (just like a light switch :)
    (d) my favorite is to integrate RRAM chips on the SODIMM form factor for high-density solutions
    (e) don't forget, RAID technology is very mature, and speed can be easily configurable with 4x, 8x, 12x and 16x SSDs: choose your RAID mode
    (f) tons of additional applications I haven't even thought of yet.
    NTFS will not need changing if existing SATA and SAS protocols are implemented at the storage end of the data cables.
  • MRFS - Saturday, January 3, 2015 - link

    A while back we filed a provisional patent application for loading Windows directly into a ramdisk: this can be implemented more easily now with UEFI technology. Just add a "Format RAM" option so as to make that NTFS partition transparent to Windows Setup, and then install Windows directly into that ramdisk, e.g. call it "C:". The key here, of course, is that the RRAM is non-volatile, just like HDDs and NAND flash SSDs, so the OS will still be there whenever the user invokes Shutdown.

    I can also see motherboards that support hybrid DIMM socket regions -- one subset for super high-speed operations, e.g. DDR4+, and the other subset for storing the OS at run-time. Properly implemented, the latter subset operates just like the C: system partition in billions of NTFS implementations now running AOK all around the world, but withOUT any of the penalties of HDDs and existing NAND flash SSDs. Think "RamDisk Plus" from SuperSpeed, LLC, but without the penalty of pre-loading each ramdisk at every Startup.
