RAID with SSDs: Reddit discussion roundup

UNVR with 3x 2TB SSDs. There is no point in running Docker images from a separate SSD. When ordering a Legion 7i they give you a couple of different options. This RAID type allows you to set a "master" SSD which will receive more reads/writes than the others, with a view to allowing this drive to fail earlier. The RAID5 option: RAID5 all 4TB drives, but use RAID1c3 for metadata. Our MSP is small, so I always feel it's a bit overkill, as these 2TB SSDs can add up quickly. Typically drives are fully written to twice to ensure wear leveling, garbage collection and the like have kicked in. My favorite is now the Lexar 790. Both SSDs put in. UNVR with 3x 4TB Purple drives. As for gaming, I play mostly online games that are bottlenecked by network speed, so loading times are barely affected by a single SSD, let alone doubling its speed.

Can anyone point me to a best-practices guide for ZFS SSD pool setup? RAID 6 SSD, HPE disks. SSDs have much lower failure rates than HDDs for a few simple reasons. Split the four drives into two RAID 0 arrays. If you're using the drives' own caching (since you don't have a dedicated hardware cache on the card) and you have a power drop, you'll lose all the data and possibly corrupt or puncture the whole array. No, I had a RAID 0 setup with two 840 EVOs, benched at about 1.1 GB/s transfer speeds; I used it as my boot drive and didn't see a noticeable difference in boot times vs a non-RAIDed SSD. All your media will live on the SATA drives. RAID 5 is likely going to wear out your SSDs too quickly, but I would have to do some research to confirm. Being that the socket supports both SATA and PCIe SSDs.

So after a day, here are my first thoughts. 6x 2TB Crucial BX500 SSD pool, RAIDZ2 = 8TB storage (2 drives for data protection). Thanks. Scenario: you have 6 SAS hard drives configured in RAID 5 on a Dell server. That worked, and I was able to install Windows to the SSD, thinking I could combine them again later. But not THAT much more reliable; maybe 2-3 times, not 10 times. I have Samsung Pro drives in mdraid 1 running without issues. …M.2 drives (SK hynix 1TB), 4th gen. Thank you in advance. To note, you have 8x 4TB HDDs. Depends on the OS you are going to use. …a 6.5 host whose most important VM is an SQL 2012 server.

Hello! I recently built a new PC with a single 500GB SSD as storage. Still 4-drive RAID10. Memory is 64GB. If you would use RAID1 with 10 drives, you would still lose 50%. While unRAID won't stop you from doing this, SSDs are only supported for use as cache devices due to TRIM/discard and how it impacts parity protection. A larger SSD will usually last longer than a smaller SSD because there are more blocks to spread the wear around. It doesn't hurt your data to lose the read cache. Second, go into the BIOS and make sure the SATA controller for the drive you WANT to show up is set to AHCI and not RAID; AHCI means "use the onboard controller", which Windows should see just fine. I use both motherboard RAID (RAID 0 on my motherboard NVMe system drives and an additional SSD pool) and Windows Storage Spaces. …is M.2 the best choice considering that I plan to keep my drives running 24/7? The drives will mainly be used for backup, so mostly reads and occasional writes. Video/photo editing in a business setting.
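One comment above mentions running a pair of Samsung Pro drives in mdraid 1 without issues. For anyone curious what that looks like on Linux, here is a minimal sketch using mdadm; the device names, mount point, and config path are hypothetical placeholders, so adapt them to your own system and distro.

```bash
# Create a two-disk RAID 1 (mirror) from two SSDs (hypothetical device names).
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Watch the initial resync progress.
cat /proc/mdstat

# Put a filesystem on the array and mount it (mount point is a placeholder).
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/ssd-mirror

# Persist the array definition so it assembles at boot (path is Debian/Ubuntu-style).
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

With SSDs you would also want periodic TRIM on the filesystem (for example a weekly fstrim timer), which most current distros enable by default.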
…root drive, virtual machines and… There are a half dozen RAID calculators; Google around, enter your drive size and quantity of drives, scroll through the different RAID types, and it will tell you what to expect. If you are slightly computer savvy, I have been thinking you could buy a TB3 20Gbps M.2 enclosure and a cheap 256GB M.2 SSD and use RAID 0 to combine the internal and external drives into a fast 512GB SSD for about $60 ($20 for the SSD, $40 for the enclosure) as opposed to Apple's $200, and save a bit of money. For me RAID-10 is the best. AHCI is more of an operating mode for SATA drives, while RAID is a mechanism that provides performance enhancements by using multiple drives in different configurations. If you lose the write cache, you lose the whole volume, not just cached data. HPE P440ar RAID controller with 2GB flash-backed cache. Of course YMMV here.

In the case of upgrading to RAID 0 from a single SSD already, no. It's fast and it's only $199 for 4TB. With more blocks, the controller of the larger SSD can spread writes out more evenly amongst all the available blocks. When ordering the 2TB option, the reason the 2nd SSD option disappears… Some games, especially… The RAID controller is a PERC H740P with 8GB cache. Yes, you should be able to. The performance difference barely matters in basic daily tasks or gaming. Your backups need to be OUT of the server's control, not in the same chassis. RAID 5 also requires a minimum of 3 drives and gives you n-1 drives' worth of capacity. You can even partition it for read and write caches separately. HPE, VMware 7.02.

With the advancements in storage technology and the availability of high-capacity HDDs and SSDs, I wanted to get some feedback from the community before making a decision. When a document is opened it takes ages, or sometimes the server… To clarify, it is $400 ($200 x 2) for two 1TB drives or $350 for one 2TB. I personally use the Linux command 'dd' for this sort of thing. I am now downsizing my home lab. To greatly reduce my power costs, I am considering moving to a 10th-gen i3-10105 CPU with 16GB of RAM. You are not using that SSD group for anything except the operating system. …SSD in a RAID 5 configuration? Is M.2 vs… What are the lifetimes? I have the option of going with enterprise NVMe SSDs (U.2, or M.2 on a PCIe adapter card), or consumer 2.5" SATA SSDs (RAID-1 attached to a supported RAID controller). Opinions on SSDs in servers.

Legion Slim 5 14 first thoughts: the good, the bad and the weird bugs. In short, I have an LSI 2208 RAID card with a bunch of HDDs that I'd like to replace with the same or larger SSDs. Swapping each HDD with an SSD one by one (and waiting for the array to rebuild each time). All tests were done with the exact same parameters: fio (read 70% / write 30%), RAID10, Intel 530 SSD drives. Right now I use an HP P410 RAID card with both SATA and SAS hard drives. On some rare occasions you can achieve very high write performance on RAID-5 if you hit a full stripe write.
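The tests mentioned above used fio with a 70% read / 30% write mix against a RAID 10 volume of Intel 530 SSDs. As a rough sketch of that kind of job (the target device, block size, queue depth and runtime below are placeholders, not the original test parameters), something like this works:

```bash
# 70/30 random read/write mix against a RAID volume.
# WARNING: pointing fio at a raw device is destructive; /dev/md0 is a placeholder.
sudo fio --name=mixed-7030 \
    --filename=/dev/md0 \
    --rw=randrw --rwmixread=70 \
    --bs=4k --iodepth=32 --numjobs=4 \
    --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```

As noted elsewhere in the thread, preconditioning matters: fill the SSDs completely a couple of times before benchmarking, otherwise a fresh drive will report unrealistically high numbers.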
…2-in-a-PCIe-slot cards: you'll need to enable bifurcation on that PCIe slot in the BIOS to allow each SSD to use 4x lanes. Running two M.2… Just put your current actively played games on an SSD. I am using 2 RAID-5 volumes (12.22TB each), combined into an LVM volume group with striping, so the RAID volumes are used in parallel. It is possible to run it in a Windows environment and configure either ZFS or mdadm. Using SSDs as data/parity devices is unsupported and may result in data loss at this time. I will install it in a Z690 MSI motherboard, which comes with five M.2 slots, of which only one feeds from the CPU and the other four from the chipset. For sure, if you need more performance and redundancy, you can move forward with RAID 10 or 6. But yes, if your SSD RAID gets to a point where it cannot recover and must fail, you will have a degraded array of one disk.

As for RAM, as far as I know, Lenovo officially supports up to 32GB. You're more likely to run into HDDs at RAID 0 failing as opposed to SSDs; sure, speed vs reliability is a personal concern, but the reliability isn't bad with SSDs. But for a lower number of drives, 5 is fine. Future plan is to purchase another one and add a DAS to it so I can use it as my "NAS". Possible. I use Premiere, editing off a 2TB Samsung NVMe SSD hitting 3,500 MB/s via a PCIe port. If that is the case, does using this device wear out both SSDs at the same time if it is configured with RAID1? Yes, and not really. SSDs do not keep data in the same persistent physical locations. Server is an HPE Gen 9 DL360. Built-in 4x 1GB NIC. CrystalDiskMark tests: some virtual machine scenarios.

Not to mention the performance difference is night and day with load times, especially in games with lots of small files; no disk RAID 0 can match a single SSD. You can't RAID an M.2 NVMe with a 2.5"… It supports RAID 0/1/5/6 configuration. If you're spending the money on SSD space for a root or application drive, then buying 2 separately at that time, for a few £ more than a larger single one, is worth it. If you did the same arrangement with 1Gbit SATA mechanical drives you would still easily saturate 10Gb for sequential access. This is further complicated by the fact that upgrading the first SSD from 1TB to 2TB costs $470, whereas adding a 1TB to the 2nd drive costs only $280, although both are still 2TB in total. Granted, with all that said, there's nothing saying you can't have your cake and eat it too: SSDs can be run in RAID as well. This only applies to the Legion 7i with an 11th-gen Intel CPU, as the AMD Legion 7 only supports PCIe Gen3. Performance tests on a clean drive are always…

Out of these options, which would you recommend (or suggest another option): UNVR re-using the 5TB hard drive? It depends on your operating system, but if you decide to run the NAS system as a VM, consider OpenMediaVault or StarWind SAN and NAS. While presenting as one disk to the OS, internally it'll have three levels: slow HDD mass storage, fast SSD storage, and a dedicated part of the SSD storage as cache. I like ZFS. RAID 10 is overkill; you're burning through drives for performance that you won't use at that scale. You'll always trade off speed for redundancy with RAID. Configure RAID-1 on the SSD drives for the OS and, depending on your requirements, configure either RAID-5, 6 or 10 for the rest. RAID0 is fast but unreliable, and if one of your drives dies, you lose everything. HPE 10GB NIC with 2 SFP+ ports. After choosing to upgrade the first SSD to 2TB, the 2nd SSD option is no longer available. Hi Reddit community :) I am about to build a new gaming PC. With RAID5 on 3 HDs (the minimum number of drives), you only lose 33% for 1 drive of redundancy.
I like ZFS. Raid 50 is a great alternative because you trade reliability for performance. If you're concerned about IOPS, you can do similar calculations. And the way failure math works in RAID0 vs Basic vs SHR/RAID1/RAID5, redundant drives are an order of magnitude… The ones I've seen using RAID 0 only used it because they didn't want to have two separate drives. Then, make a Windows VM and use that. If you want to use a graphics card with your Windows VM, pass it through using VFIO. Stripe element size: 64KB / 128KB / 256KB / 512KB / 1MB. "Wear" isn't a uniform wear on all the cells. Using a good pair of NVMe SSDs as a special vdev for metadata would improve search speeds dramatically. From LimeTech themselves: do not assign an SSD as a data/parity device.

HDD vs SSD: the same as HDD to SSD, really. Settings -> IO Ports -> SATA Configuration -> Chipset SATA Port Enable = Enabled. For $280 you have a 4TB enclosure at 2,500 MB/s, which is the max for these 40Gbps USB3/4 enclosures. Where is my mistake? I'm considering investing in a RAID storage system for my data storage needs, but I'm not sure if it's still worth it in 2023. Everything is just instant. My previous machine was a 2020 MacBook Pro 13-inch Intel. We began trialing with consumer Samsung EVO 840 SSDs and Intel consumer SSDs about three years ago. RAID 0 provides the best performance with zero redundancy. It's not the biggest upgrade, but there are certain times where it is notable.

I've read conflicting information on using only a single NVMe without any RAID at all and relying on backup. Unless it's a Supermicro board with an embedded LSI/Avago controller with battery and cache, NEVER use or rely on the motherboard's built-in "RAID" controller, because 9 times out of 10 it's software-based and GUARANTEED to fail. With RAID5 on, let's say, 10 HDs, you only lose 10% for 1 drive of redundancy. As per choosing between RAID-10 and RAID-5 on 4 drives, just choose whichever is more needed: performance or capacity. At what point do you guys start using SSDs when it comes to servers and hosts? We have a lot of customers who run RAID 5 on SSD on big enterprise systems. You can build "Tiered Storage Spaces" in Windows, which will probably do what you want. A single 512GB, a single 1TB, a 1TB in the primary and 1TB in the secondary, and lastly a 2TB option. Disadvantage: most RAID5 bugs regarding spurious errors still apply (see the mailing-list link from before). But for RAID1, mdadm should also be good. Advantages: more capacity than the previous option. UNVR upgrading to a new 8TB Purple hard drive. The main box runs Linux, but your keyboard, mouse, and monitor control appear to be… No, RAID 0 is not worth it. …2 TB with RAID 5, or 9.6 TB with RAID 6 (raw = 14.4 TB, normal).
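Several comments in the thread recommend ZFS for SSD pools, and one suggests using a good pair of NVMe SSDs as a special vdev for metadata. As a rough illustration of what that layout looks like (device names and pool name are hypothetical, and the drive counts are just an example), something like this is the usual shape:

```bash
# Hypothetical layout: six SATA SSDs as a raidz2 data vdev plus two NVMe drives
# as a mirrored "special" vdev holding pool metadata.
# NOTE: losing the special vdev loses the pool, which is why it is mirrored.
sudo zpool create tank \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally route small records (here <= 64K) to the special vdev as well.
sudo zfs set special_small_blocks=64K tank
```

The special vdev mainly speeds up metadata-heavy operations (directory listings, searches); bulk sequential throughput still comes from the data vdev.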
As per choosing between RAID-10 and RAID-5 on 4 drives, just choose whichever you need more of: performance or capacity. 1 GB/s transfer speeds… A single 512GB… It is common to have a 64k or 128k stripe size for common workloads and a bigger stripe size if large files will be written. First, turn off Fast Boot in Windows; Google it, it's needlessly complicated. Is it worthwhile to put… Opinions on SSDs in servers? My boss builds all the servers and loves SSDs in everything. Look up RAID F1. A decent hardware-based RAID controller… If you're concerned about IOPS, you can do similar calculations.

The first pair of drives for active project files, the second RAID 0 array for cache. So I don't use RAID 1. There isn't a RAID configuration that uses 2 drives with one as a boot drive and one for storage. The whole point of RAID is that if a drive fails, nothing happens to the rest of the system or the user's data. Your question has a very simple answer: you cannot use SSDs in the Unraid parity array. We now use the Samsung EVO 860 SSDs. There's no advantage to two separate SSDs. SSDs are such a significant performance increase that even a single SSD will outperform a high-end RAID0 volume of hard disks. …M.2 NVMe SSDs for good-sized datastores, or up to 4 smaller ones in each server. My use case: work-from-home web developer (with some content creation); it handles my day-to-day tasks well and can handle light… But up to now I have mostly used hard disks on older hardware. I would recommend adding 2 more SATA SSDs (small ones, 120GB, $60-80 each).

…M.2 drive with a 2.5"… I started my research, setting up the BIOS to manage the RAID mode, making sure the 2 SSDs are the same and usable for a RAID 1, but when I'm in the partition center, right-clicking on one of my SSDs there is… So I can get 4x 256GB hard drives for cheap (like $5 each) and a PCIe 2.0 RAID controller that I will put in a 1x slot for $25. I was wrong. I have been googling, but can't find any setups explaining if you need a ZIL, SLOG or even ARC with an SSD setup. If you create a RAID 0 and one of the SSDs fails, then you can't use your QNAP. I also can't find anyone providing expected read/write stats on their setup. (Loss in performance, especially if your SSD has high speeds.) Edit: thanks for all of the responses! I will be going with a single 2TB SSD. RAID 5. On an Intel platform, 4x 860 EVO 1TB in RAID 0 = 1x NVMe PCIe 3. SSDs are more reliable than HDDs, especially if they're enterprise-grade with very high TBW ratings, and especially if they're underprovisioned or less than half-full. Meaning if you lose one drive, the whole array is kaput. SSDs may run at SATA 2 (3Gbps) as a result. To counter this, Synology introduced RAID F1 for a large number of enterprise NASes. People who say RAID is unnecessary make me laugh. The motherboard has an Intel built-in RAID controller. It popped up in my Google search. You would get 11,000 MB/s reads and 4,800 MB/s writes in RAID 1. One of my coworkers asked me to install a new server that we will use at our workplace, asking me to set up a RAID 1 between 2 brand-new SSDs. Replace drives in an HDD RAID array with SSDs. We are using SSD RAID 6 but cannot seem to find the cause why. Video/photo editing… 2.5" SATA SSDs (RAID-1 attached to a supported RAID controller). RAID1 - (2) NVMe Gen4 SSDs. My boss… I run a couple of Dell servers with 16x 1.92TB NVMe SSD drives with 2x PERC H755N front NVMe cards (8 drives for each card). Despite the age, they perform very well. There are 2 types of M.2 SSDs, SATA and NVMe. …5" SATA drive; you also can't RAID an NVMe M.2 with a SATA M.2. 6TB Intel DC S3510 SSDs totalling 11.x TB.
A 1 Gbps copper link is capable of 125MB/s. If using Windows, use Storage Spaces in mirror mode to make good use of miscellaneous drives. I'm gonna chime in here even though this is a year old. You probably won't care with 4 disks. Once you replace this disk, you'll then select another disk to be the master. So I have to choose between NVMe 2x 2TB RAID-10 and SSD 4x 1TB RAID-10. With RAID1 you always lose 50% to redundancy, the worst proportion. At that point RAID 10 is the better option. A 3x8 raidz2 of SATA SSDs should be able to do nearly 100Gbit read and maybe 80Gbit write. Little can tax the endurance of an SSD faster than video editing. Nothing is impossible. I can get 3x 1TB SATA (DRAM-less) SSDs pretty cheap, and I have one in the system already.

Besides, since the OS has to read both SSDs for each I/O, the SSD lifespan will be shorter than usual, as if the usual lifespan is not short enough. The whole point of creating the RAID 1 is in case one of the SSDs fails, since RAID 1 mirrors the data to both drives. If you're tight on capacity, and are keeping up with… Settings -> IO Ports -> SATA Configuration -> NVMe RAID Mode = Enabled. As you may already know, if one of your SSDs dies, then all of your files are gone. (Due to my lack of knowledge I have no idea if running TrueNAS as a VM or the hardware is the bottleneck.) Planning to add another pool: 6-port SATA3 card. Double the read performance of a single drive. One 2TB SSD, in case you want more storage in the future, since you said there are only 2 slots on the motherboard. Also, the article might help with general understanding, including pros and cons. NVMe is around 2… It is used for either a performance increase (striping) or as a failsafe.

Two SSD drives in RAID 0 vs one SSD with bigger capacity. In my free time I tested a couple of cards and logged the results. After doing some research it appears to be something related to the RAID and SSDs. …9 TB, Normal. I can't seem to find any sort of options in memory management or in the BIOS to re-enable RAID between the two, and now the computer screams at me because the C drive is full. …M.2 slots, of which only one feeds from the CPU and the other four from the chipset. RAID1 makes a mirror copy to the other drive, so you basically lose all the space in the second drive. I run a couple of Dell servers with 16x 1… They are stupid fast, and a 500GB NVMe drive is about the same price as a 500GB SATA SSD, so you might as well get the better one. SSDs are already fast, so putting them in RAID0 is near useless and it's just asking for trouble in terms of data stability. A simple HDD RAID array will easily saturate a 1Gbps link. Plus, with RAID we'll have hot-swap… This is quite hard to say. Ubisoft games won't work in Storage Spaces for REASONS. For a 5-disk unit I would choose RAID 5, as you get minimal redundancy with good performance and a large storage size. If you don't want to deal with an adapter, the 500GB MX500 SSD has been on sale quite a bit for about $50. I'd leave one NVMe for your OS and background programs and then maybe RAID the SSDs. The BTRFS allocation hint option: RAID1 all 4TB drives plus the 120GB SSD, and buy another 120GB SSD. You can make a read-only cache with a single SSD or two in RAID0. You only really need a SATA SSD (NVMe not needed), and you can pick up a 2TB for about $180. …M.2 drive with a 2.5"…
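Since several comments above point out that a 1 Gbps link tops out around 125 MB/s and that even an HDD array can saturate it, it can be worth confirming the network, not the disks, is the bottleneck before paying for SSDs. A quick sketch using iperf3 (the hostname is a placeholder):

```bash
# On the NAS / server side: start a throughput listener.
iperf3 -s

# On the client: run a 30-second test against the NAS (hostname is hypothetical).
iperf3 -c nas.local -t 30
# Roughly 112-118 MB/s (~0.94 Gbit/s) is the practical ceiling of a 1 Gbps link,
# which a single modern SSD, or even a small HDD array, can already exceed.
```

If the link is already saturated by spinning disks, faster drives on the NAS will not show up over the network until the link itself is upgraded.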
I started my research: set the BIOS to the RAID mode, made sure the 2 SSDs are the same and usable for a RAID 1, but when I'm in the partition center, right-clicking on one of my SSDs there is… So I can get 4x 256GB drives for cheap (like $5 each) and a PCIe 2.0 RAID controller that I will put in a 1x slot for $25. I was wrong. I have been googling, but can't find any setups explaining if you need a ZIL, SLOGs or even ARC with an SSD setup. If you create a RAID 0 and one of the SSDs fails, then you can't use your QNAP. I also can't find anyone providing expected read/write stats on their setup. (Loss in performance, especially if your SSD has high speeds.) Edit: thanks for all of the responses! I will be going with a single 2TB SSD. RAID 5. On an Intel platform, 4x 860 EVO 1TB in RAID 0 = 1x NVMe PCIe 3.

SSDs are more reliable than HDDs, especially if they're enterprise-grade with very high TBW ratings, and especially if they're underprovisioned or less than half-full. Meaning if you lose one drive, the whole array is kaput. SSDs may run at SATA 2 (3Gbps) as a result. To counter this, Synology introduced RAID F1 for a large number of enterprise NASes. People who say RAID is unnecessary make me laugh. The motherboard has an Intel built-in RAID controller. It popped up in my Google search. You would get 11,000 MB/s reads and 4,800 MB/s writes in RAID 1. One of my coworkers asked me to install a new server that we will use at our workplace, asking me to set up a RAID 1 between 2 brand-new SSDs. Replace drives in an HDD RAID array with SSDs. We are using SSD RAID 6 but cannot seem to find the cause why. The first pair of drives for active project files, the second RAID 0 array for cache. So I don't use RAID 1.

OWC Express 4M2 4-slot M.2 NVMe SSD enclosure with Thunderbolt 3 ports: https://a.co/ehrMONn. Despite the age, they perform very well. There are 2 types of M.2 SSDs, SATA and NVMe. …5" SATA drive; you also can't RAID an NVMe M.2 with a SATA M.2… I have the option of going with enterprise NVMe SSDs (U.2 or M.2 on a PCIe adapter card) or consumer 2.5" SATA SSDs. If you don't care about this, the LEVEN is the absolute cheapest, $140, and it's very fast.
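On the question above about whether an all-SSD ZFS pool still needs a ZIL/SLOG or L2ARC: often it doesn't, since a SLOG only helps synchronous writes and only when it is meaningfully faster than the pool itself. If you do want to experiment, this is a sketch of how the devices are added (pool name and partition names are hypothetical):

```bash
# Add a mirrored SLOG (separate intent log) - only benefits synchronous writes.
sudo zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1

# An L2ARC read cache is added similarly with `zpool add tank cache <device>`;
# losing a cache device does not lose data, so it does not need to be mirrored.

# Check how the vdevs are laid out.
zpool status tank
```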
If I have one PCIe4 M.2 SSD and one PCIe3 M.2 SSD, will running a RAID configuration slow down the speed/performance of the PCIe4 SSD? If there is a performance difference with the RAID configuration, are we talking imperceptible differences during real-world use? RAID 5 out of SSDs should do it, and a standalone HDD can be used for local backups or some cold data. As a former SSD firmware engineer now working with storage for streaming servers, I have a couple of things to add. Let's say you had transfers of 5500/4800 for your NVMe drive specifications. …M.2 SSDs in RAID configuration. Well, it turns out 500GB won't be enough and I'm looking for an upgrade path. In any case, you should have separate backups with any drives you have in a RAID, especially consumer SSDs. RAID 0 also means lower reliability. OWC makes a Thunderbolt 3 enclosure that I use. I think Windows Storage Spaces can fit drives of any size you add to a pool, although from what I remember the performance is much slower than a real RAID 0.

Create a ZFS raidz1 pool of the 3 disks, and then attach the SSD as cache. I'd leave one NVMe for your OS and background programs and then maybe RAID the SSDs. So I went into the BIOS and turned off RAID between the two drives. I would do the calculations, because if the data you are accessing and storing is constantly being accessed, you will use 50% of your usable space on parity. As mentioned, the main issues with consumer-grade SSDs are lack of power-loss protection and low TBW compared to enterprise-grade drives. If you have a need for SSDs, then you also need to consider your network and end devices. I was thinking 1-2TB of SSD storage just for games and other stuff that is expendable. I work at an MSP as the main technical lead. EDIT: I would like a minimum of 7 days of retention and would shoot for 14 days. The main advantages SAS has over SATA (assuming apples-to-apples comparisons on speed) are better bus management and higher numbers of ports / enclosure options. Just use a single NVMe drive and back up to the cloud or a cheap spinning HDD (or ideally both). You should be able to clone the RAID array block device straight to an appropriately sized SSD. Setup is ESXi 7.x.

I'm configuring a new RAID 10 array out of 8 SSD drives, looking for help configuring for best performance on an ESXi 6.x host whose most important VM is an SQL 2012 server. I'm planning our next VMware virtualization servers with vSAN. Any links will be appreciated. …3.5" SATA… but you could RAID a SATA M.2 drive with a SATA 2.5" drive. Configure… First, drive preconditioning is very important when testing. …RAID1 - (2) NVMe Gen4 SSD. My boss builds all the servers and loves SSDs in everything. Look up RAID F1. A decent hardware-based RAID controller… Raid 50 is a great alternative because you trade some reliability for performance. I have two internal 4TB SSDs in mine. I recently got my hands on a Dell XPS 8500 equipped with an Intel Core i7-3770 CPU along with three Intel SSD 530 Series 180GB SSDs. Cache isn't important, so you wouldn't need fault tolerance there. Two SSD drives in RAID 0 vs one SSD with bigger capacity. In the BIOS, I built a RAID 0 stripe array out of all the drives above to use as a single, fast SSD RAID volume. I'm seeing not very high performance, even when only moving files inside the pool: network I/O at 19 MiB/s. The write speeds of a pair of 970 Pros in RAID 0 should exceed 5 gigabytes per second. Installing and running games is significantly faster on RAID 0. But it would stall writing 2TB. In short, I wouldn't. Spec: Ryzen 7 7840HS, 32GB RAM, 1TB SSD, RTX 4060 mobile 8GB. They just wanted to have one C: drive and throw everything on there and be done. Instead, data is constantly shuffled around to avoid write amplification and also during garbage collection and TRIM. I would consider keeping the disks in the array, getting a bigger (RAID 1) cache pool, and tweaking the mover to keep files on cache longer (since you are more likely to watch new things than archived ones, I guess), and spinning down your disks. Though you said reducing disk locations was a main reason in the first place. You need to be aware of each drive's durability (TBW)… Yeah, that last one happened. Looking for best performance on the SQL server. Generally, if you hear the words "USB" and "RAID" in one sentence: RUN. Like someone else mentioned, you can use a PCIe adapter also. If you had 12 or 20 drives then I would go RAID 6 or 10. Reads should be a little faster than a single drive, although writes are slower. Hi guys, we are facing an extremely slow server. Is RAID 0 with NVMe SSDs still considered unnecessary? I just tested it for fun on a new rig, and it looks promising. Sounds like a terrible idea: in RAID 0, if one drive has a problem you lose all data effectively off both drives, and using an SSD, depending on the type of caching, would exacerbate any data loss. A single SSD averages around 400-500MB/s. Edit: SATA 3 (6Gbps) on this particular card only seems to be supported with hard drives and not SSDs, according to another post. Storage Spaces basic RAID 0 configuration vs a single M.2 SSD. I know that it would still be like 100MB/s slower than an SSD, and the 4K random read/write won't even be comparable. I know that in the case of a RAID 5 the wear is spread, but a single parity drive for that many SSDs… whether your RAID should do write-through or write-back.
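One reply above suggests creating a ZFS raidz1 pool from the three disks and then attaching the SSD as a cache device. A minimal sketch of that layout, with hypothetical device and pool names:

```bash
# Three-disk raidz1 pool (one disk's worth of parity) with an SSD attached as L2ARC read cache.
sudo zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
sudo zpool add tank cache /dev/nvme0n1

# Verify the layout; the cache device is listed separately from the raidz1 vdev.
zpool status tank
```

As noted earlier in the thread, losing the read cache does not hurt your data, so a single unmirrored cache device is fine.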
Maybe we have been lucky, but we have set up at least 15+ SSD-only RAID arrays over the past three years and only had a single drive failure (out of roughly 120 deployed SSDs; kind of eerie). Well, when you have 20TB of data, the extra reliability of RAID 5 becomes quite nice. You don't need "speed" for the OS. Are the performance improvements of NVMe enough to not use RAID-10? Synology will only allow a read/write cache if you have two SSDs in RAID1. In the BIOS, I built a RAID 0 stripe array out of all the drives above to use as a single, fast SSD RAID volume. We, for example, completely abandoned hardware RAID, as CPU-based… If you want some redundancy plus extra speed and don't mind the cost, RAID10 is a pretty good choice. Get a 500GB WD Blue NVMe drive if you can use NVMe. You can rebuild so fast on SSDs that you won't have to worry about multiple failures. RAID 6 is RAID 5 with an extra parity disk. The 4K sample will be unchanged, but others such as 4-64K and sequential will increase.