Promise VTrak J300s

by Jason Clark & Dave Muysson on 2/2/2007 8:00 AM EST

  • LordConrad - Sunday, February 4, 2007 - link

    They may not be used much in corporate environments, but I think it would be interesting to see where the Raptors fall on these charts considering their higher rotational speeds.
  • dropadrop - Tuesday, February 6, 2007 - link

    quote:

    They may not be used much in corporate environments, but I think it would be interesting to see where the Raptors fall on these charts considering their higher rotational speeds.


    Yeah, I never saw a commercial product offered with Raptors. SATA always seems to come with 500GB 7200RPM drives. I guess the logic is that people will only go with SATA to get 'cheap' space; the price/capacity ratio would fall quite drastically as soon as you moved to Raptors, negating the advantage.
  • bob4432 - Saturday, February 3, 2007 - link

    How can you compare older 10K SCSI with a brand new Fujitsu MAX 15K SAS? You do know that they make a U320 version of the MAX drive? Or the industry leader at the moment, the Seagate 15K.5 (which I currently own, and which has both a sustained transfer rate and a burst of 96MB/s on a single-channel U160 card due to 32-bit PCI limitations)? Why would you compare apples to oranges when you could compare apples to apples? Why not add some 5400RPM HDDs to the mix too?
  • JarredWalton - Saturday, February 3, 2007 - link

    Sometimes you have to test with what you have available. Obviously, the SCSI setup is going to perform better with a 15K spindle, and we mention this numerous times in various ways. However, the realizable throughput is not going to come anywhere near SAS. The sequential tests show maximum throughput, and while having a SCSI setup with two connections rather than one would improve throughput, SCSI's parallel design is becoming outdated. It can still hold its own for now, but most drive companies are putting more effort into higher capacity, higher performance SAS models now.
  • shady28 - Sunday, February 4, 2007 - link


    I agree; your approach to SCSI is tabloid-like. You are looking at a JBOD array on a single SCSI channel using obsolete three-year-old drives. Moreover, I have yet to see a production SCSI system utilize only one SCSI channel. A setup like that is the mark of a newbie, and a dangerous one if handling critical data.

    There is a huge difference between the performance of new 15K SCSI drives and the old 10K drives. Check StorageReview.com and look at their IOPS readings - a critical measure for databases and OLTP applications. The top two ranked drives are SCSI; you don't even see SATA until you get down to the Raptor, a drive with an IOPS rating more than 1/3 lower than the top-rated Atlas 15K II 147GB's. Even the SCSI JBOD array you used was pulled from the market some 7 months ago.
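    To see why spindle speed dominates IOPS, here is a rough back-of-the-envelope sketch (the average seek times are my assumptions for illustration, not published specs):

        # Theoretical random IOPS for a single drive: each I/O costs roughly an
        # average seek plus half a rotation (the average rotational latency).
        def iops(rpm, avg_seek_ms):
            half_rotation_ms = 0.5 * 60000.0 / rpm
            return 1000.0 / (avg_seek_ms + half_rotation_ms)

        print(round(iops(15000, 3.8)))  # ~172 IOPS, 15K-class drive (assumed 3.8 ms seek)
        print(round(iops(10000, 4.6)))  # ~132 IOPS, 10K-class drive (assumed 4.6 ms seek)

    Queued workloads with command queueing will do better than this, but the ordering between drive classes holds.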

    If that doesn't convince you of how silly your SCSI approach is, consider this:

    The Seagate Cheetah 15K.5 U320 single drive has a sequential transfer rate that is better than your entire array of 14 10K RPM SCSI drives. I have seen two drives on the even older U160 interface do better in sequential reads than your array.

    None of this is really a good way to benchmark arrays. A much better and more informative method would be to utilize benchmarks with Oracle and MS-SQL server under Linux and Windows with various disk configurations.
  • yyrkoon - Sunday, February 4, 2007 - link

    Guys, you completely missed the whole point of WHY they used those drives in the comparison. They already had those drives, so that's what they used. In other words, they couldn't afford whatever the latest and greatest SCSI drive costs x14 (and to be honest, why even bother buying SCSI drives when you already have a goodly amount of SAS drives?).

    Some of you guys, I really don't know what to think about you. You seem to think that reviewers have endless amounts of cash to drop on stuff they don't need and would most likely never use, because they already have something better. Regardless of whether you accept it or not, SAS is far superior to SCSI, and has a very visible road map compared to SCSI's 'shaky' and uncertain future. Yes, SCSI has proven itself many times in the past, and for a long time it was the fastest option without going solid state, but now a NEW technology, BASED on SCSI and SATA, has emerged, and I personally think that SCSI's days are drawing to an end. Who knows, though; maybe I'm wrong, and it wouldn't be the first time either . . .
  • JarredWalton - Monday, February 5, 2007 - link

    I can't say that we purchase most of the hardware that we review, simply because it would be too expensive. In this case, however, why would a manufacturer want to send us SCSI hard drives when they already know SAS is going to be faster in many instances? Basically, SCSI and SAS 15K RPM drives cost about the same amount, but the enclosures either cost more for SCSI (in order to get multiple SCSI channels) or else offer lower total throughput. In random access tests where seek times take precedence over throughput, SAS and SCSI are going to perform about the same. With most storage arrays being used for a variety of purposes, however, why would you want a SCSI setup that offers equally good performance in a few areas but lower performance in others?

    At this point, the only major reason to purchase SCSI hard drives is because of existing infrastructure. For companies that have a lot of high-end SCSI equipment, it would probably make more sense to upgrade the hard drives rather than purchasing Serial Attached SCSI enclosures and hard drives, at least in the short-term. The long-term prospects definitely favor SAS over SCSI, however -- at least in my book.
  • yyrkoon - Monday, February 5, 2007 - link

    Oh, hey Jarred, while you guys are still paying attention to this thread: something I personally would like to see is minimum hardware requirements for certain storage 'protocols'. I don't suppose you guys plan on doing something like this?

    Let me clarify a little. Lately, I've been doing a LOT of experimentation with Linux / Windows file / block level storage. This includes AoE, iSCSI, CIFS, NFS, and FTP. Between two of my latest systems, I seem to be limited to around ~30MB/s (megabytes/second). The hardware I'm using isn't server grade, but it isn't shabby either, so I'm a bit confused as to what is going on. Anyhow, the network is point-to-point GbE, and I've used multiple different drive configurations (including a 4x RAID0 array capable of 210MB/s reads). My primary goal is a very reliable storage server, with as much speed as possible as a secondary goal. I don't think I was expecting too much in thinking that ~30MB/s is too slow (I was hoping for ~80-100MB/s, but would settle for ~50-60MB/s).
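    For what it's worth, here is the kind of minimal raw-TCP check I'd use to take the disks and the file protocols out of the equation entirely (a Python sketch; the port number and transfer size are arbitrary choices):

        # Minimal raw-TCP throughput check between two boxes, no disks involved.
        # Run "python netcheck.py server" on one machine and
        # "python netcheck.py client <server-ip>" on the other.
        import socket, sys, time

        PORT  = 5001                 # arbitrary test port
        CHUNK = 64 * 1024            # 64 KiB per send
        TOTAL = 256 * 1024 * 1024    # push 256 MiB per run

        def server():
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            received = 0
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            print("received %.0f MB" % (received / 1e6))

        def client(host):
            sock = socket.create_connection((host, PORT))
            buf, sent, start = b"\0" * CHUNK, 0, time.time()
            while sent < TOTAL:
                sock.sendall(buf)
                sent += CHUNK
            sock.close()
            print("%.1f MB/s" % (sent / (time.time() - start) / 1e6))

        if __name__ == "__main__":
            client(sys.argv[2]) if sys.argv[1] == "client" else server()

    If this also tops out near ~30MB/s, the bottleneck is the network stack or NIC rather than the storage protocol.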

    Anyhow, some food for thought?
  • JarredWalton - Monday, February 5, 2007 - link

    I actually don't do too much with high-end storage. I've had transfer rates between systems of about 50 MB/s, which is close to my HDD's maximum, but as soon as there's some fragmentation it drops pretty quickly when doing network transfers; 20-30 MB/s seems typical. I don't know how the OS, NIC, switch, etc. will impact things - I would assume all of them can have an impact, depending on the hardware and situation. The motherboard and CPU could also play a role.

    Best theoretical performance on GbE tends to be around 900-920 Mbps, but I've seen quite a few NICs that top out at around 500-600 Mbps. That also creates a CPU load of 20-50%, depending on the CPU. Depending on your hardware, you might actually be hitting a bottleneck somewhere that caps you at ~30 MB/s, but I couldn't say much about the cause without knowing a lot more about the hardware and doing lots of testing. :|
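    For quick reference, the bits-versus-bytes math behind those numbers (framing overhead ignored):

        # GbE line-rate sanity math: 8 bits per byte.
        print(920 / 8.0)   # ~115 MB/s: best realistic GbE payload rate
        print(550 / 8.0)   # ~69 MB/s: a mediocre NIC topping out at ~550 Mbps
        print(30 * 8)      # a 30 MB/s transfer is only ~240 Mbps on the wire

    So a ~30 MB/s cap sits well below even a weak NIC's ceiling, which points at something other than raw GbE bandwidth.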

    Maybe Jason or Dave can respond - you might try emailing them, though.
  • yyrkoon - Monday, February 5, 2007 - link

    I understand that you guys do not buy most of your hardware - well, the hardware that you review - but that's part of my point. I'm assuming Promise either 1) gave you the SAS enclosure for the review, or 2) 'lent' you the system for review. Either way, in my book, it doesn't really matter. Anyhow, Promise sent you guys hardware, you reviewed it, and compared it to whatever else you had on hand (no?).
  • yyrkoon - Saturday, February 3, 2007 - link

    Many 'old timers' are going to claim SCSI is better than anything because it's been around a long time and has a proven track record. What these 'old timers' don't realize is that SAS and SCSI drives share the same ancestry, except that SAS also shares a history with SATA. *shrug*
  • mino - Sunday, February 4, 2007 - link

    Yes, there are those people.
    However, most posters here do NOT dispute the benefits and superiority of SAS over U320.
    The point is that, according to the published numbers, even 7.2K SATA is on par with 10K SCSI U320!
    Those numbers simply couldn't be much further from reality than they are.

    Artificially more than halving the performance of a tested platform simply is not acceptable.

    Also, framing the tables as SATA I vs. SATA II vs. SCSI vs. SAS is in itself seriously misleading, to the extent that the best thing for AT to do (provided they were not paid for it!) would be to recall the article, either indefinitely or for a rewrite.

    Actually, the reality is:
    SATA I and SATA II drives do not exist; there are only SATA drives in existence as of now.
    Performance-wise, on systems with one to six drives:
    SATA(7.2k) < SATA(10k) < SCSI(10k) = SAS(10k) < SCSI(15k) = SAS(15k)
    On systems with 8+ drives:
    SATA(7.2k) < SATA(10k) < SCSI(10k) < SAS(10k) < SCSI(15k) < SAS(15k)

    For a 12-drive test the results should be:
    SATA(7.2k) << SCSI(10k) << SAS(15k), which is obvious even before any testing.

    However, a much more beneficial test would be:
    SATA(10k Raptor) vs. SCSI(10k) vs. SAS(10k), with the SCSI and SAS drives ideally from the same product line.
  • mino - Saturday, February 3, 2007 - link

    Yes, one sometimes has to make compromises.
    But benchmarking a SCSI setup with 12 drives on a SINGLE cable is a plain stupid, tabloid-like approach.
    This arrangement seriously cripples performance and is NEVER used unless there is some very serious reason for it.
  • mino - Saturday, February 3, 2007 - link

    If you had no SCSI option other than the one you used, you should not have published those "SCSI" numbers at all. Those numbers, as they stand, have nothing to do with SCSI being poor; they simply showcase that three-year-old 10K drives are slower than new 15K drives. Nothing new here.
  • Googer - Friday, February 2, 2007 - link

    That chart is missing the old 5.25 inch drives. The most famous of those was probably the Quantum Bigfoot. Quantum was bought out by Maxtor.

    http://images.anandtech.com/reviews/it/2007/promis...

    http://www.pcguide.com/ref/hdd/op/formIn525-c.html
  • Justin Case - Friday, February 2, 2007 - link

    Maybe the article author should read this...

    http://www.sata-io.org/namingguidelines.asp
  • monsoon - Friday, February 2, 2007 - link

    Hello,

    I change computers frequently, and I have lots of data to store.
    Currently I've got 4 external 300GB drives and 4 external 400GB drives, all of them connected through FireWire.

    I've been looking eagerly for solutions similar to the NORCO DS-1220, but I need to connect the storage unit to laptops as well, so it has to control RAID5 all by itself.

    I can't find alternatives on the market, and while the UNRAID solution looks interesting, it's neither safe nor easy to implement.

    Looking forward to external storage device reviews for home users with big archives.
    Units need to stand the test of time and be there while PCs come and go.
    Ideally, I should be able to replace drives with higher-capacity ones as they get cheaper, without having to replace all of them at the same time.

    It better be silent; well, at least not loud...

    Any ideas?

    Thanks
  • mino - Saturday, February 3, 2007 - link

    Look for some reliable NAS solution (plus a Gbit switch - now dirt cheap).
  • yyrkoon - Friday, February 2, 2007 - link

    When are you guys going to do some reviews of consumer grade equipment? Well, let me clarify: 'consumer grade' with on-card RAID processor(s). For instance, right now I'm in the market for an 8+ port RAID HBA, but would like to know if buying a Highpoint 16-port SATA RAID HBA would really be any worse than getting an Areca 8-port HBA for ~$200 USD more. 3Ware, from what I understand, offers the best Linux/Unix support - or does it? If so, would it really make much of a difference in a SOHO application?

    I personally would like to see a comparison of the latest Promise, Highpoint, Areca, 3Ware, etc. controllers. In short, there is a lot out there for a potential buyer, such as myself, to get lost in, and basically I am interested in reliability first, speed second (to a point).

    Anyhow, I just thought I'd point out that while you guys do cover a lot in this area, you seem to have a gap where I think it matters most to your readers (the home PC / enthusiast / SOHO crowd).
  • mino - Saturday, February 3, 2007 - link

    I would stay away from Highpoint.
    We have had several issues with a RAID HBA (a new one!) consistently going down AND screwing up the whole RAID5 under some workloads. For the money, one is better off with a QuadFX ASUS board than going with Highpoint-like solutions.
    Areca is pretty much on a different level, of course...
  • yyrkoon - Sunday, February 4, 2007 - link

    Again, this only reinforces what I've said: we need a good article on which HBAs are good for reliability, etc.
  • mino - Sunday, February 4, 2007 - link

    Any 3Ware, Areca, LSI, or Adaptec solution should be just fine.

    Most people do not actually need RAID5 for home use, and it is usually cheaper to go _software_ RAID1 with each drive in the array attached to a different controller. In such a scenario, even the cheapest onboard controller offers fault tolerance comparable to high-end RAID5 solutions.
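    As a minimal sketch of that idea on Linux (the device names are placeholders - here /dev/sda1 is assumed to sit on the onboard controller and /dev/sdc1 on a cheap add-in card; mdadm must be installed):

        # Software RAID1 across two different controllers with mdadm, so a
        # single controller failure cannot take out both halves of the mirror.
        import subprocess

        subprocess.run(["mdadm", "--create", "/dev/md0", "--level=1",
                        "--raid-devices=2", "/dev/sda1", "/dev/sdc1"],
                       check=True)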

    However, the simplest way to go is really two NAS RAID5 boxes mirroring each other.
  • dropadrop - Tuesday, February 6, 2007 - link

    quote:

    Any 3Ware, Areca, LSI, or Adaptec solution should be just fine.


    I would rule out Adaptec and the older LSI chipsets still available (under several brands like Intel for example). We replaced a bunch of Intel 6 & 8 port controllers with top of the line 8-port Adaptec SATA II controllers.

    The performance of the Intel controllers (with LSI chipsets) was terrible. We got about 8-13MB/s sequential writes with RAID 10 arrays, tested using a lot of different drives. The Adaptec products are a lot better in regard to speed, but keep dropping drives. This seems to be a common problem, but they have no solution.

    I've previously used 3ware without any problems, and would gladly test Areca if they were available here.
  • yyrkoon - Sunday, February 4, 2007 - link

    Why would I want to spend $1300+ per 5-disk array (minus drives) when I could build my own system much cheaper and use the hardware/software I wanted? Just because I don't know which HBAs are more reliable than others (because I obviously can't afford to buy them all) doesn't mean I'm an idiot ;)
  • Bob Markinson - Friday, February 2, 2007 - link

    Interesting review!
    I would have liked to see a comparison with latest-generation 15K SCSI drives rather than 10K SCSI drives, to see the true SAS interface performance advantage over SCSI. Furthermore, the ServeRAID 6M comes in two versions - one with 128 MB of cache and the other with 256 MB. Also, there were performance issues with the early 7.xx firmware/software revisions on the 6M at high I/O loads - hopefully you ran the tests with the most recent firmware. Write-back cache was enabled on the 6M, right?

  • Lifted - Tuesday, February 6, 2007 - link

    Based on the title of the article, Promise VTrak J300s, you are expecting too much. The "comparison" was more like an ad for the product. What is the point in comparing 10K U320 vs. 15K SAS? What exactly is it supposed to tell us? You clearly need to look elsewhere for a SAS vs. U320 comparison if that's what you were expecting here. This was more for kicks, I think, and perhaps to make the J300s look better than ____ ??? I don't get it; it's just a storage enclosure. The RAID adapters and drives are what determine performance, so why was this apples-to-oranges "performance" review thrown into an enclosure article?

    Odd, quite odd.
  • fjeske - Friday, February 2, 2007 - link

    Isn't it a bit unfair to use old IBM 10K SCSI drives in this comparison? None of the now-Hitachi drives show good performance on StorageReview.com. Compare them to Seagate's Cheetah 15K.5 and I think you'll see a difference.

    Also, how was the SCSI setup done? Attaching 12 drives to one U320 bus will obviously saturate it. Servers usually pair channels when connecting this many drives.
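    The bus math makes the saturation obvious (assuming a ballpark ~80 MB/s sustained per modern 10K drive - my figure, not one from the article):

        # U320 offers 320 MB/s shared across every device on the channel.
        drives, bus_mb_s, per_drive_mb_s = 12, 320, 80
        print(bus_mb_s / float(drives))           # ~27 MB/s of bus per drive
        print(drives * per_drive_mb_s / 320.0)    # ~3x oversubscription on one channel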
  • cgaspar - Friday, February 2, 2007 - link

    SAS and SCSI drives have disk write caches disabled by default, as the drives' caches are not battery-backed. IDE and SATA drives frequently have write caching enabled by default. This makes writes much faster, but if you lose power, those writes the drive claimed were committed will be lost, which can be a very bad thing for a database. I'd suggest disabling the write cache on the SATA drives and re-testing (if you still have the gear); I suspect the results will be illuminating.
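    On Linux that takes one hdparm call per drive; a minimal sketch (the device names are placeholders, and hdparm must be installed):

        # Disable the on-drive write cache before re-running the write tests.
        import subprocess

        for dev in ["/dev/sda", "/dev/sdb"]:      # placeholder device names
            # hdparm -W0 turns the drive's write caching off
            subprocess.run(["hdparm", "-W0", dev], check=True)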
  • shady28 - Sunday, February 4, 2007 - link

    Here are some graphs the author should look at:

    http://www.storagereview.com/articles/200609/ST330...

    http://www.storagereview.com/articles/200601/250_i...
