Over the past year, I received several great compliments regarding these postings, and for that I just wanted to say thanks. There have been a few questions floating around regarding the RAID configuration possibilities and today, I have something I found interesting to share with everyone.
You can in fact change the volume and array structure to fit your needs. I found a really nice command line tool on the Intel® Download Center called the RST CLI. The tool corresponds with a particular driver release, but it allows you to configure the controller and drives for any array and volume layout you choose. In my initial tests, I was able to successfully use the 12.9 driver and matching RST CLI tool to change my 4-drive array into a RAID 10 configuration (*see note at the end). I have since changed back to a RAID 5 configuration and modified the stripe size to fit my particular data storage and performance needs.
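For context on what the two layouts cost you in capacity, here is a minimal sketch. The 4-drive count matches the DX4000; the per-drive size is a hypothetical example value, not necessarily what is in my unit:

```python
# Sketch: usable capacity for a 4-drive array at RAID 5 vs RAID 10.
# The per-drive size below is a hypothetical example, not my actual drives.
def usable_tb(level: int, drives: int, drive_tb: float) -> float:
    if level == 5:            # one drive's worth of space goes to parity
        return (drives - 1) * drive_tb
    if level == 10:           # mirrored pairs: half the raw space
        return (drives // 2) * drive_tb
    raise ValueError("only RAID 5 and RAID 10 sketched here")

for level in (5, 10):
    print(f"RAID {level}: {usable_tb(level, 4, 4.0):.0f} TB usable of 16 TB raw")
```

RAID 5 gives you more usable space and survives one drive failure; RAID 10 trades capacity for mirror redundancy and (typically) better write behavior, which is why I wanted it for my workload.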
To get started, download the drivers and toolkit from the links above, or just visit the download center and search for RST drivers and RST CLI. Note: they do have a newer driver available (version 13.1); however, as of the time of this writing they do not have a matching toolkit available (version 13.0 is the latest for RST CLI) so use the 12.9 versions.
If you have been following along with my articles to this point, congratulations on being able to decipher my sometimes cryptic process. As in Step 3 of Part 1, you will need to inject the driver into the boot.wim; however, make sure to remove the older version from the image first. It is really simple: after mounting the image, get a list of the drivers you currently have injected by executing the following command:
dism /get-drivers /image:E:\WINPE\MOUNT\WINRE
You should see a list of “OEM” drivers you have previously injected that you can selectively remove. For example:
If I needed to update the drivers related to the RST controller, I would remove oem0.inf and oem1.inf as listed in Figure 1 by executing the following command:
dism /remove-driver /image:E:\WINPE\MOUNT\WINRE /DRIVER:oem0.inf /DRIVER:oem1.inf
and then inject the new drivers (note the location where you extracted them):
dism /add-driver /image:E:\WINPE\MOUNT\WINRE /DRIVER:E:\WINPE\WDRST\12.9\f6flpy-x64
You do not have to list the driver specifically, unless you want to. You could just list the directory you extracted the files into, and DISM will pick up any INF files in the directory and add them to the image. In the case of our RST driver download, this adds both the iaahcic and iastorac files into the image.
Add RSTCLI toolkit:
This is pretty simple: extract the CLI tool you downloaded to a directory, making sure to use the x64 version. The archive also contains some scripts, which you can ignore for now, along with the RSTCLI64.EXE command line tool.
Create a directory for the tool at the root of the WIM image you mounted (E:\WINPE\MOUNT\WINRE in my examples above). I created a directory simply called RSTCLI and copied the file into it. Close any Explorer windows you may have open to prevent DISM issues with open file handles, then commit the changes to your updated boot.wim for use on your bootable USB.
Setting up your ARRAY:
This is where things can get a bit complex, so I’ll try to cover only what I know for fact and boil it down as simply as possible. The most important thing to remember when using the RST CLI tool: it is destructive! It will in fact wipe out anything and everything related to your volumes and arrays, so make absolutely sure you’ve got backups of your data before using the tool to reconfigure the array. Unless you have a good friend at a data recovery company (I don’t), be aware that the changes made are not reversible.
Of course I do have to say: try to read a document called “Intel® Rapid Storage Technology OEM Technical Guide.” If you do a Bing or Google search, you may be able to find it for download. I am not an OEM, and I’m nearly certain the document is hard to get outside of that circle… but there are versions floating around out there. Otherwise, you’ll need to fall back on the old-school method of using the command line tool’s help feature (rstcli64.exe --help). Unfortunately, the help feature has only limited examples, while the document provides several showing how to use the tool for many configurations.
To get to the basics though, there are really only 4 steps to deconstruct and build an array with the tool:
- Clear the metadata on the disks
- Create the volume and type with your specific parameters
- Modify the cache policy if so desired
- Initialize the volume (note: this is not required, but is important as I’ll explain a little later)
Ok, so maybe there are five steps, the initial one being: get some information about your controller and array, which can be done with the tool’s information mode:
X:\RSTCLI\rstcli64.exe --information
You should be presented with a plethora of details that may or may not thrill you. The important things to note are the ID values provided for each disk in the subsystem. Make a mental note, or jot them down. You’ll find it a bit odd, as I did, that the drive in Port 1 has an ID of 0-2-0-0.
If you are thinking like I did, yes indeed… the onboard controller is missing two slots. As you and I have now discovered, the Max Disks supported on this system is 6, not 4. However, 0-0-0-0 and 0-1-0-0 are not accessible to us because the PCIe x4 riser card that feeds the bus only supports the last four ports. Wouldn’t it be great if someone made a riser card that supported the first two internal ports? I thought so.
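Based on the IDs the tool reports, the disk identifiers appear to follow a four-field controller-port-target-lun pattern; that field breakdown is my reading of it, not something the tool documents on screen. A quick sketch that builds the ID strings for the four usable ports:

```python
# Sketch: build RST CLI disk IDs for the DX4000's reachable ports.
# The controller-port-target-lun field layout is my assumption based on
# the IDs the information mode prints; only the port field varies here.
def disk_id(port: int, controller: int = 0) -> str:
    return f"{controller}-{port}-0-0"

# Ports 0 and 1 exist on the controller but are unreachable through the
# riser card, so only ports 2 through 5 hold drives we can address.
usable_ids = [disk_id(p) for p in range(2, 6)]
print(" ".join(usable_ids))
```

Those four strings are exactly the disk arguments you will pass to the create command below.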
To clear the metadata, execute the following command in WINPE shell in the RST CLI directory:
X:\RSTCLI\rstcli64.exe --manage --delete-all-metadata
The text there is red for a reason, not because I’m angry but because you really need to make sure you want to do this.
To create the volume, you will need to decide what kind of configuration your heart desires. In this example, I’ll show you how to rebuild the subsystem for RAID 5 with a stripe size of 128K.
X:\RSTCLI\rstcli64.exe -C -l 5 -n VolumeName 0-2-0-0 0-3-0-0 0-4-0-0 0-5-0-0 --stripe-size 128
Valid unit sizes for the stripe are 4, 8, 16, 32, 64, and 128. These apply to the subsystem and are not related to the allocation unit size you will choose when you format the volume using DISKPART. This property controls how the controller reads and writes data blocks across the drives. It is actually a really important value to get right, and it truly depends on how the system is going to be used. In simple terms, if you read and write a lot of small files, use a smaller stripe size. However, if you are like me and store rather large files like video or ISO files, then perhaps you can benefit from using a larger stripe.
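To make the tradeoff concrete, here is a minimal sketch of how many stripe units (and therefore per-drive requests) a single contiguous read touches at different stripe sizes. It models plain striping only and ignores RAID 5 parity rotation; the file sizes are illustrative:

```python
# Sketch: number of stripe units a contiguous read spans.
# Plain striping only; RAID 5 parity rotation is ignored for simplicity.
import math

def units_touched(file_bytes: int, stripe_kb: int) -> int:
    stripe_bytes = stripe_kb * 1024
    return math.ceil(file_bytes / stripe_bytes)

small = 16 * 1024           # a 16 KB file
large = 700 * 1024 * 1024   # a 700 MB ISO

for stripe in (4, 128):
    print(f"{stripe}K stripe: 16 KB file -> {units_touched(small, stripe)} units, "
          f"700 MB file -> {units_touched(large, stripe)} units")
```

With a 4K stripe a 16 KB file spreads across four units, engaging multiple spindles for small I/O; with a 128K stripe it fits in one, while a large sequential file is broken into far fewer, larger requests per drive, which suits my video and ISO workload.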
For further information, I recommend reading an article posted by Kendal Van Dyke regarding “Disk Performance Hands On”. It is a good article.
To modify the cache policy execute the following:
X:\RSTCLI\rstcli64.exe --manage --volume-cache-policy WB --volume VolumeName
Just note, this sets the policy to Write-back, which in my opinion is great for performance, but you must have a backup power supply for the server. The risk is too great without one.
To initialize the array is a really simple command; however, you should know that this step is optional and should probably only be done once you have settled on your configuration and want to solidify it. The tool does not force you to do it, but I strongly recommend it: if you do not initialize the array, there is no guarantee that the redundancy data can be verified. I noticed that on my DX4000, the array was not initialized from the factory.
You can safely work with the subsystem while it is being initialized. You can even power down the system, and on startup it will continue the initialization process from where it left off. There may be some performance hit during this period, but so far my 12TB configuration has been initializing for over 24 hours with practically zero impact, and I’ve rebooted multiple times.
To begin the initialization, execute the following command for the volume you’ve structured:
X:\RSTCLI\rstcli64.exe --manage --initialize VolumeName
Those are the nuts and bolts of reconfiguring your very own array subsystem on the Western Digital DX4000. This also provides a way to customize the stripe size for optimal performance, which is nice, as we do not all use our storage systems the same way. And it gives you a way to properly initialize a volume to ensure some data integrity if you ever need to reconstruct or verify a drive.
It would be great if I had time to do some benchmarks on various stripe sizes versus allocation unit sizes, but unfortunately I can’t right now. I will leave you with a few notes on my discoveries, I hope this post has been helpful. And again, good luck out there!
- While Dynamic Storage Accelerator is a feature which can be enabled, it seems actual use of acceleration is not possible on this system. Why? I have no clue, but after several attempts at seating an SSD in ports 1 and 2, I could not get the system to successfully do storage acceleration. It is unfortunate, and it goes back to the “Max Disks” thing I mentioned earlier. The technology requires internal ports, and even though drives in ports 1 and 2 show up as internal, they are not usable as such. Perhaps there is a magical way to get it done, but I could not figure it out.
- Regardless of the OROM built in to the UEFI, you can in fact use the 12.9 driver and CLI to modify the array. I was not able to find any OROM updates that could potentially allow me to enable DSA (dynamic storage acceleration).
- I may have missed, in an earlier article, a requirement for getting the LCD PowerShell script working properly. If (like me) you are moving up to Server 2012 R2, you will need to install the VC++ 2010 x64 redistributable package along with the original WD LPC driver from their recovery ISO to get the script working. Also note that in order to get the PowerShell script working in the scheduler on 2012 R2, you need to remove the quotes from around the path to the script… why? No idea.
- TIP: I believe I mentioned in the video that I was experiencing an issue related to the cache policy for the array. Well, it turns out that since the volume contained both the C: and D: partitions, and I had promoted the system to a domain controller, Microsoft has a built-in policy that disables write-back caching. To solve this, I rebuilt the array after taking one drive out, and placed an SSD into port 1 to be used for the EFI, System, and Windows partitions. I took the remaining drives and built my array with the CLI tool in RAID 5. The RAID volume can now utilize caching. While the SSD loses the ability to use write-back caching, it really isn’t necessary. The system boots considerably faster and is far more responsive in RDP sessions. Oh, and here’s a great review on the product I used for the SSD, thanks to the folks over at Home Server Show for this little treat. Here’s what I used from Newegg: ICY DOCK MB882SP.
- ** It seems that RAID 10 may not be working. I’m checking into it; however, it appears another feature of the subsystem is perhaps crippled. †
- † After several attempts to rebuild the array for RAID 10, it now seems pretty clear that the ROM has been feature-crippled. The controller is fully capable of both DSA and RAID 0 and 10; however, without being able to gain access via IPMI or some type of remote UEFI shell, WD has built a brick wall for any enthusiast. That’s unfortunate. I was able to get two RAID 1 volumes built, but that’s just silly. I have some serious data storage needs for rendering, images, and video, so gaining the RAID 10 benefit of performance and redundancy is important to me. However, I guess I’ll just have to build my own solution for that purpose and keep the DX4000 as a simple client storage dump.
- On these notes, I will say that after building the array myself and updating the RST drivers to 12.9, enabling the write-back cache policy on the array, and installing Windows Server 2012R2… the write-back cache policy has held even after promoting the system to a domain controller. This is more likely due to the fact that pulling one of the drives and replacing it with a SSD and using it as the primary boot drive allows the system to function with that caching policy on the array.