QSAN iSCSI subsystem P300H61 / P300H71
GbE iSCSI to SATA II / SAS RAID subsystem
User Manual, Version 7.79 (March 2011)
QSAN Technology, Inc.
http://www.QsanTechnology.com
QUM201106-P300H61_P300H71...
About this manual
This manual introduces the QSAN P300H61 / P300H71 subsystem and aims to help users operate the disk array system easily. Information contained in this manual has been reviewed for accuracy, but not for product warranty, because of the variety of environments, operating systems, and settings.
4.4 Volume configuration
4.4.1 Physical disk
4.4.2 RAID group
4.4.3 Virtual disk
4.4.4 Snapshot
4.4.5 Logical unit
4.4.6 Example
4.5 Enclosure management
4.5.1 Hardware monitor
4.5.2 UPS
4.5.3 SES
Event notifications
How to get support
Appendix
Compatibility list
Microsoft iSCSI initiator
The QSAN subsystem can provide non-stop service with a high degree of fault tolerance by using QSAN RAID technology and advanced array management features. The P300H61 / P300H71 subsystem connects to the host system through an iSCSI interface and can be configured to numerous RAID levels. The subsystem provides reliable data protection for servers by using RAID 6.
Highlights
• QSAN P300H61 / P300H71 feature highlights:
Host interface: 8 x iSCSI GbE ports
Drive interface: 16 x SAS or SATA II (P300H61); 24 x SAS or SATA II (P300H71)
RAID controllers: dual-active RAID controllers
Scalability: SAS JBOD expansion port
Connection availability / protection: load balancing and failover support on the 8 iSCSI GbE ports
Dimensions (W x D x H): 447 x 490 x 130 mm (P300H61); 447 x 490 x 171 mm (P300H71)
Power: 2 x 500W PSU (P300H61)
Virtual Disk. Each RG can be divided into several VDs. The VDs from one RG have the same RAID level, but may have different volume capacities. Logical Unit Number. A logical unit number (LUN) is a unique identifier used to differentiate among separate devices (each one a logical unit).
S.M.A.R.T. Self-Monitoring, Analysis and Reporting Technology.
WWN World Wide Name.
HBA Host Bus Adapter.
SES SCSI Enclosure Services.
NIC Network Interface Card.
BBM Battery Backup Module.
• Part 2: iSCSI
iSCSI Internet Small Computer Systems Interface.
LACP Link Aggregation Control Protocol.
MPIO Multi-Path Input/Output.
MC/S Multiple Connections per Session.
MTU Maximum Transmission Unit.
1.2.2 RAID levels
There are different RAID levels with different degrees of data protection, data availability, and performance for the host environment. The descriptions of the RAID levels are as follows:
RAID 0 Disk striping. RAID 0 needs at least one hard drive.
RAID 1 Disk mirroring over two disks.
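As a worked example of what these levels mean for capacity, the sketch below estimates the usable size of a RAID group from its member disks. This is illustrative Python, not anything shipped with the subsystem, and it assumes all member disks are the same size:

```python
# Illustrative sketch: estimate usable capacity per RAID level.
# Assumes all member disks have the same size; not QSAN firmware code.

def usable_capacity(raid_level: int, disks: int, disk_gb: float) -> float:
    """Return approximate usable capacity in GB for a RAID group."""
    if raid_level == 0:                # striping, no redundancy
        return disks * disk_gb
    if raid_level == 1:                # mirroring over 2 disks
        return disk_gb
    if raid_level in (3, 5):           # one disk's worth of parity
        return (disks - 1) * disk_gb
    if raid_level == 6:                # two disks' worth of parity
        return (disks - 2) * disk_gb
    if raid_level == 10:               # striped mirrors
        return (disks // 2) * disk_gb
    raise ValueError("RAID level not covered by this sketch")

# Example: a RAID 5 group of 3 x 1000 GB disks yields about 2000 GB.
print(usable_capacity(5, 3, 1000))  # 2000.0
```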
The graphic below shows the volume structure that QSAN has designed. It describes the relationship of the RAID components. One RG (RAID group) consists of a set of VDs (virtual disks) and owns one RAID level attribute. Each RG can be divided into several VDs. The VDs in one RG share the same RAID level, but may have different volume capacities.
The target is the storage device itself or an appliance which controls and serves volumes or virtual volumes. The target is the device which performs SCSI commands or bridges them to an attached storage device.
(Figure: Host 1 and Host 2 act as iSCSI initiators connecting to the iSCSI target.)
Memory: 2GB DDRII 533 DIMM, maximum 4GB support per controller Hardware iSCSI off-load engine 2 x UARTs: serial console management and UPS Fast Ethernet port for web-based management use Backend: 16 x SAS or SATA II drive connections (P300H61) Backend: 24 x SAS or SATA II drive connections (P300H71)
LCM for quick management
Hot-pluggable BBM support (optional)
SAS JBOD expansion port for expansion
QSATA board support for SATA drives (optional)
Two power supplies (P300H61); three power supplies (P300H71)
Redundant fans
• RAID and volume operation
Management UI via serial console, SSH telnet, HTTP Web UI, and secured Web (HTTPS)
Notification via email, SNMP trap, browser pop-up windows, syslog, and Windows Messenger
iSNS support
DHCP support
• iSCSI features
iSCSI jumbo frame support
Header/Data digest support
CHAP authentication enabled
Load-balancing and failover through MPIO, MC/S, Trunking, and LACP
Up to 32 multiple nodes support
Dimensions 3U16 19 inch rackmount chassis (P300H61) 4U24 19 inch rackmount chassis (P300H71) 447mm x 490mm x 130mm (W x D x H) (P300H61) 447mm x 490mm x 171mm (W x D x H) (P300H71) 1.4.2 FCC and CE statements...
Mechanical Loading - Mounting of the equipment in the rack should be such that a hazardous condition is not achieved due to uneven mechanical loading. D. Circuit Overloading - Consideration should be given to the connection of the equipment to the supply circuit and the effect that overloading of the circuits might have on overcurrent protection and supply wiring.
Chapter 2 Installation Package contents The package contains the following items: P300H61 / P300H71 subsystem (x1) HDD trays (x16) (P300H61) HDD trays (x24) (P300H71) Power cords (x2) (P300H61) Power cords (x3) (P300H71) RS-232 cables (x2), one is for console (black color, phone jack to DB9 female), the...
LCD display.
Power LED: Blue: power on. Off: power off.
Access LED: Orange: a host is accessing. Off: no host access.
Status LED: indicates whether the system is good or has failed.
Mute button.
Up button.
Down button.
Enter button.
ESC button.
2.3.3 Install drives
Remove a drive tray.
HDD tray description:
HDD fault LED: lit: HDD failure. Off: HDD good.
HDD activity LED: Blue: HDD present and active. Violet blinking: HDD is being accessed. Off: no HDD.
Latch for tray kit removal.
HDD tray handle.
2.3.4 Rear view
Figure 2.3.4.1 (P300H61)
Figure 2.3.4.2 (P300H71) • PSU and Fan module description: Power supply unit (PSU3). Fan module (FAN2). Power supply unit (PSU2). Power supply unit (PSU1). Fan module (FAN1). Controller 1. Controller 2. Figure 2.3.4.3...
• Connector, LED and button description:
Gigabit ports (x4).
LEDs (from left to right):
Controller health LED: Green: controller status is normal or the controller is booting. Not green: status other than the above.
Master/slave LED: Green: master controller. Off: slave controller.
Dirty cache LED: Orange: data in the cache is waiting to be flushed to disks.
Figure 2.4.1
The BBM (Battery Backup Module) is hot-pluggable, regardless of whether the subsystem is turned on or off. Remove the cover of the BBM. Insert the BBM. Tighten the BBM and use screws to lock both sides. Done.
Deployment
Please refer to the following topology and have all the connections ready.
In addition, installing an iSNS server is recommended for a dual controller system. Power on the P300H61 / P300H71 and J300H61 / J300H71 (optional) first, and then power on the hosts and the iSNS server.
Figure 2.5.2
The following topology shows the connections for console and UPS (optional).
Figure 2.5.3
Use the RS-232 console cable (black, phone jack to DB9 female) to connect the controller directly to the management PC. Use the RS-232 UPS cable (gray, phone jack to DB9 male) to connect the controller to the APC Smart-UPS serial cable (DB9 female side), and then connect the serial cable to the APC Smart-UPS.
Chapter 3 Quick setup
3.1 Management interfaces
There are three methods to manage the QSAN subsystem, described in the following:
3.1.1 Serial console
Use the console cable (null modem cable) to connect the console port of the QSAN subsystem to the RS-232 port of the management PC. Please refer to figure 2.3.1. The console settings are as follows: baud rate 115200, 8 data bits, no parity, 1 stop bit, and no flow control.
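The same console settings can be expressed programmatically, which is handy for scripted management. Below is a minimal sketch using the third-party pyserial library; the device name /dev/ttyUSB0 is an assumption and varies by OS and adapter:

```python
# Illustrative sketch: open the subsystem's serial console with pyserial.
# /dev/ttyUSB0 is an assumed device name; adjust for your system.
import serial  # pip install pyserial

console = serial.Serial(
    port="/dev/ttyUSB0",
    baudrate=115200,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,   # no software flow control
    rtscts=False,    # no hardware flow control
    timeout=1,
)
console.write(b"\r\n")  # wake the console prompt
print(console.read(256).decode(errors="replace"))
console.close()
```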
192.168.10.50 Qsan P300H61 ← Figure 3.1.3.1
192.168.10.50 Qsan P300H71 ← Figure 3.1.3.2
Press the Enter button; the LCM functions “System Info.”, “Alarm Mute”, “Reset/Shutdown”, “Quick Install”, “Volume Wizard”, “View IP Setting”, “Change IP Config” and “Reset to Default” rotate by pressing (up) and (down).
Default gateway: 192.168.10.254
• LCM menu hierarchy:
[System Info.]: [Firmware Version x.x.x] / [RAM Size xxx MB]
[Alarm Mute]: [Yes / No]
[Reset/Shutdown]: [Reset] [Yes / No] / [Shutdown] [Yes / No]
[Quick Install]: RAID 0 / RAID 1 / RAID 3 / RAID 5, then [Apply The Config] [Yes / No]
Caution
Before powering off, it is better to execute “Shutdown” to flush the data from cache to the physical disks.
3.1.4 Web UI
The QSAN subsystem provides a graphical user interface (GUI) for operation. Be sure to connect the LAN cable. The default IP setting is DHCP; open the browser and enter: http://192.168.10.50 (please check the DHCP address on the LCM first). A dialog will then pop up for authentication.
Figure 3.1.4.2
There are seven indicators and three icons at the top-right corner.
Figure 3.1.4.3
• Indicator description:
RAID light: Green: RAID works well. Red: RAID fails.
Temperature light: Green: temperature is normal. Red: temperature is abnormal.
Voltage light: Green: voltage is normal. Red: voltage is abnormal.
Controller light: Green: controller alive and well.
Return to home page.
Log out of the management web UI.
Mute the alarm beeper.
Tips
If the status indicators in Internet Explorer (IE) are displayed in gray rather than blinking red, please enable “Internet Options” / “Advanced” / “Play animations in webpages” in IE. The default value is enabled, but some applications disable it.
Figure 3.2.1.2
Step 2: Confirm the management port IP address and DNS, and then click “Next”.
Figure 3.2.1.3
Step 3: Set up the data port IP and click “Next”.
Figure 3.2.1.4 Step 4: Set up the RAID level and volume size and click “Next”. Figure 3.2.1.5 Step 5: Check all items, and click “Finish”.
Figure 3.2.1.6
Step 6: Done.
3.2.2 Volume creation wizard
“Volume create wizard” has a smarter policy. When HDDs are inserted in the system, “Volume create wizard” lists all possibilities and sizes for different RAID levels; it will use all available HDDs for the RAID level the user chooses. When the system has different sizes of HDDs, e.g., 8 x 200GB and 8 x 80GB, it lists all possible combinations for different RAID levels and sizes, as illustrated in the sketch below.
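As a rough illustration of that listing, the hypothetical sketch below enumerates candidate RG combinations from a mixed set of disks. The exact selection policy of the wizard is not documented here, so this is a guess for illustration only:

```python
# Illustrative sketch: enumerate candidate RG combinations from mixed
# disk sizes, as a volume wizard might. The policy shown is a guess.
disks_gb = [200] * 8 + [80] * 8

def candidates(disks: list[int]) -> list[tuple[int, int]]:
    """Return (disk_count, per-disk usable GB) options.

    For each distinct size, one option uses every disk of that size or
    larger, truncated down to that size (RAID members must match).
    """
    options = []
    for size in sorted(set(disks)):
        count = sum(1 for d in disks if d >= size)
        options.append((count, size))
    return options

for count, size in candidates(disks_gb):
    print(f"{count} disks x {size} GB usable")
# 16 disks x 80 GB usable
# 8 disks x 200 GB usable
```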
Figure 3.2.2.1 Step 2: Please select the combination of the RG capacity, or “Use default algorithm” for maximum RG capacity. After RG size is chosen, click “Next”. Figure 3.2.2.2...
Step 3: Decide the VD size. The user can enter a number less than or equal to the default number. Then click “Next”.
Figure 3.2.2.3
Step 4: Confirmation page. Click “Finish” if all setups are correct. Then a VD will be created.
Step 5: Done. The system is available now.
Figure 3.2.2.4
(Figure 3.2.2.4: A virtual disk of RAID 0 is created and is named by the system itself.)
Chapter 4 Configuration
4.1 Web UI management interface hierarchy
The table below is the hierarchy of the web GUI.
System configuration
System setting: System name / Date and time / System indication
Network setting: MAC address / Address / DNS / Port
Login setting: Login configuration / Admin password / User password
Maintenance
System information: System information
Event log: Download / Mute / Clear
Upgrade: Browse the firmware to upgrade
Firmware synchronization: Synchronize the slave controller’s firmware version with the master’s
Reset to factory default: Sure to reset to factory default?
Import and export: Import/Export / Import file
Reboot and shutdown: Reboot / Shutdown
Figure 4.2.1.1
Check “Change date and time” to set up the current date, time, and time zone before use, or synchronize the time from an NTP (Network Time Protocol) server. Click “Confirm” in System indication to turn on the system indication LED. Click again to turn it off.
4.2.2 Network setting
“Network setting”
Figure 4.2.2.1
4.2.3 Login setting
“Login setting” can set single admin, auto logout time, and the admin / user passwords. Single admin prevents multiple users from accessing the same system at the same time. Auto logout: the options are (1) Disabled; (2) 5 minutes; (3) 30 minutes; (4) 1 hour. The system will log out automatically when the user is inactive for a period of time.
Figure 4.2.3.1
Check “Change admin password” or “Change user password” to change the admin or user password. The maximum password length is 12 characters.
4.2.4 Mail setting
“Mail setting” can enter up to 3 mail addresses for receiving event notifications. Some mail servers check the “Mail-from address”.
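For context, event notification by mail is plain SMTP. The sketch below shows what such a message involves using Python's standard smtplib; the subsystem performs this step itself, and every host name, address, and credential here is a placeholder:

```python
# Illustrative sketch: send a test event notification by SMTP.
# All addresses and credentials below are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "storage@example.com"   # the "Mail-from address" some servers check
msg["To"] = "admin@example.com"
msg["Subject"] = "[QSAN] WARNING event"
msg.set_content("VD VD-01: snapshot space threshold reached.")

with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls()                  # use TLS if the relay requires it
    server.login("storage@example.com", "password")
    server.send_message(msg)
```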
Figure 4.2.4.1
4.2.5 Notification setting
“Notification setting” can set up SNMP traps for alerting via SNMP, pop-up messages via Windows messenger (not MSN), alerts via the syslog protocol, and an event log filter for web UI and LCM notifications.
Figure 4.2.5.1
“SNMP” allows up to 3 SNMP trap addresses. The default community setting is “public”. The user can choose the event log levels; the default setting enables ERROR and WARNING event logs in SNMP. There are many SNMP tools. The following web sites are for your reference:
SNMPc: http://www.snmpc.com/
Net-SNMP:
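Alerting via the syslog protocol amounts to sending UDP datagrams, conventionally to port 514 of the configured server. Below is a minimal sketch of a receiving side, for testing only; a real deployment would use a proper syslog daemon such as rsyslog:

```python
# Illustrative sketch: minimal UDP listener for syslog messages.
# Binding to port 514 usually requires root privileges; use a proper
# syslog daemon in any real deployment.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 514))

while True:
    data, addr = sock.recvfrom(4096)
    # A typical RFC 3164 message starts with a <priority> field, e.g. "<134>...".
    print(f"{addr[0]}: {data.decode(errors='replace').strip()}")
```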
Figure 4.3.1
4.3.1 NIC
“NIC” can change the IP addresses of the iSCSI data ports. The P300H61 / P300H71 has four gigabit ports on each controller to transmit data. Each of them must be assigned an IP address and set up in multi-homed mode, unless link aggregation or trunking mode has been set up.
(Figure 4.3.1.1: There are 4 iSCSI data ports on each controller. The 4 data ports are set with static IPs.)
• IP settings:
The user can change the IP address by checking the gray button of the LAN port and clicking “IP settings for iSCSI ports”. There are 2 selections: DHCP (get the IP address from a DHCP server) or static IP.
The following describes the multi-homed / trunking / LACP functions.
Multi-homed: the default mode. Each iSCSI data port is connected independently, without link aggregation or trunking. This function also serves multipath use. Selecting this mode also removes any Trunking / LACP setting at the same time.
Trunking: defines the use of multiple iSCSI data ports in parallel to increase the link speed beyond the limits of any single port.
For example, LAN1 and LAN2 are set to Trunking mode, and LAN3 and LAN4 are set to LACP mode. To remove a Trunking / LACP setting, check the gray button of the LAN port and click “Delete link aggregation”. A message will then pop up to confirm.
Figure 4.3.2.1
4.3.3 Node
“Node” can view the target name for the iSCSI initiator. The P300H61 / P300H71 supports up to 32 multi-nodes. There are 32 default nodes created for each controller.
Figure 4.3.3.1
• CHAP:
CHAP is the abbreviation of Challenge Handshake Authentication Protocol. CHAP is a strong authentication method used in point-to-point protocols for user login.
To use CHAP authentication, please follow the procedures. Select one of 32 default nodes from one controller. Check the gray button of “OP.” column, click “Authenticate”. Select “CHAP”. Figure 4.3.3.2 Click “OK”. Figure 4.3.3.3 Go to “/ iSCSI configuration / CHAP account” page to create CHAP account. Please refer to next section for more detail.
Figure 4.3.3.4 Click “OK”. In “Authenticate” of “OP” page, select “None” to disable CHAP. • Change portal: Users can change the portals belonging to the device node of each controller. Check the gray button of “OP.” column next to one device node. Select “Change portal”.
Check the gray button of “OP.” column next to one device node. Select “Rename alias”. Create an alias for that device node. Click “OK” to confirm. An alias appears at the end of that device node. Figure 4.3.3.6 Figure 4.3.3.7 Tips After setting CHAP, the initiator in host should be set with the same CHAP account.
Figure 4.3.4.2
(Figure 4.3.4.2: iSCSI Connection.)
4.3.5 CHAP account
“CHAP account” can manage a CHAP account for authentication. The P300H61 / P300H71 can create multiple CHAP accounts. To set up a CHAP account, please follow the procedures. Click “Create”. Enter “User”, “Secret”, and “Confirm” the secret again. “Node” can be selected here or later.
Figure 4.3.5.1 Click “OK”. Figure 4.3.5.2 Click “Delete” to delete CHAP account. 4.4 Volume configuration “Volume configuration” is designed for setting up the volume configuration which includes “Physical disk”, “RAID group”, “Virtual disk”, “Snapshot”, “Logical unit”, and “QReplica” (optional).
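For background, CHAP is standardized in RFC 1994: the initiator never sends the secret itself, but returns MD5(identifier + secret + challenge) computed over the values the target sent. Here is a small sketch of that calculation, with made-up values:

```python
# Illustrative sketch: the CHAP response calculation defined in RFC 1994.
# The target sends an identifier and a random challenge; the initiator
# answers with MD5(identifier || secret || challenge).
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Example with made-up values:
challenge = os.urandom(16)                       # sent by the target
resp = chap_response(1, b"my-chap-secret", challenge)
print(resp.hex())
```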
Figure 4.4.1
4.4.1 Physical disk
“Physical disk” can view the status of the hard drives in the system. The following are the operational steps: Check the gray button next to the number of the slot; it will show the functions which can be executed. Active functions can be selected, while inactive functions are grayed out and cannot be selected.
Figure 4.4.1.3 (Figure 4.4.1.3: Physical disks in slot 1,2,3 are created for a RG named “RG-R5”. Slot 4 is set as dedicated spare disk of the RG named “RG-R5”. The others are free disks.) Step 4: The unit of size can be changed from (GB) to (MB). It will display the capacity of hard drive in MB.
Status The status of hard drive: “Online” the hard drive is online. “Rebuilding” the hard drive is being rebuilt. “Transition” the hard drive is being migrated or is replaced by another disk when rebuilding occurs. “Scrubbing” the hard drive is being scrubbed. Health The health of hard drive: “Good”...
Command queuing: newer SATA and most SCSI disks can queue multiple commands and handle them one by one. Default is “Enabled”.
• PD operation description:
Set Free disk: make the selected hard drive free for use.
Set Global spare: set the selected hard drive as a global spare for all RGs.
Set Dedicated spare: set a hard drive as a dedicated spare of the selected RG.
Figure 4.4.2.1 Step 2: Confirm page. Click “OK” if all setups are correct. Figure 4.4.2.2 (Figure 4.4.2.2: There is a RAID 0 with 4 physical disks, named “RG-R0”. The second RAID group is a RAID 5 with 3 physical disks, named “RG-R5”.) Step 3: Done.
Total (GB) Total capacity of this RAID group. The unit can be displayed in (MB) GB or MB. Free (GB) Free capacity of this RAID group. The unit can be displayed in (MB) GB or MB. The number of physical disks in a RAID group. The number of virtual disks in a RAID group.
Activate Activate the RAID group after disk roaming; it can be executed when RG status is offline. This is for online disk roaming purpose. Deactivate Deactivate the RAID group before disk roaming; it can be executed when RG status is online. This is for online disk roaming purpose.
Step 1: Click “Create”, enter “Name”, select RAID group from “RG name”, enter required “Capacity (GB)/(MB)”, change “Stripe height (KB)”, change “Block size (B)”, change “Read/Write” mode, set virtual disk “Priority”, select “Bg rate” (Background task priority), and change “Readahead” option if necessary. “Erase” option will wipe out old data in VD to prevent that OS recognizes the old partition.
Figure 4.4.3.2 (Figure 4.4.3.2: Create a VD named “VD-01”, from “RG-R0”. The second VD is named “VD-02”, it’s initializing.) Step 3: Done. View “Virtual disk” page. • VD column description: The button includes the functions which can be executed. Name Virtual disk name.
“Online” The virtual disk is online. “Offline” The virtual disk is offline. “Initiating” The virtual disk is being initialized. “Rebuild” The virtual disk is being rebuilt. “Migrate” The virtual disk is being migrated. “Rollback” The virtual disk is being rolled back. “Parity checking”...
RG name The RG name of the virtual disk • VD operation description: Create Create a virtual disk. Extend Extend the virtual disk capacity. Parity check Execute parity check for the virtual disk. It supports RAID 3 / 5 / 6 / 30 / 50 / 60.
“Disabled”: disable AV-media mode. (Default)
Type: “RAID”: the virtual disk is normal (default). “Backup”: the virtual disk is for clone usage.
Attach LUN: attach a LUN.
Detach LUN: detach the LUN.
List LUN: list attached LUN(s).
Set clone: set the target virtual disk for clone.
Step 1: Create snapshot space. In “/ Volume configuration / Virtual disk”, Check to the gray button next to the VD number; click “Set snapshot space”. Step 2: Set snapshot space. Then click “OK”. The snapshot space is created. Figure 4.4.4.1 Figure 4.4.4.2 (Figure 4.4.4.2: “VD-01”...
Figure 4.4.4.4 Figure 4.4.4.5 (Figure 4.4.4.5: This is the snapshot list of “VD-01”. There are two snapshots. Snapshot VD “SnapVD-01” is exposed as read-only, “SnapVD-02” is exposed as read-write.) Step 5: Attach a LUN to a snapshot VD. Please refer to the next section for attaching a LUN.
Health The health of snapshot: “Good” The snapshot is good. “Failed” The snapshot fails. Exposure Snapshot VD is exposed or not. Right The right of snapshot: “Read-write” The snapshot VD can be read / write. “Read-only” The snapshot VD is read only. #LUN Number of LUN(s) that snapshot VD is attached.
Figure 4.4.5.1 Figure 4.4.5.2 (Figure 4.4.5.2: VD-01 is attached to LUN 0 and every host can access. VD-02 is attached to LUN 1 and only the initiator node which is named “iqn.1991-05.com.microsoft:qsan” can access.) • LUN operation description: Attach Attach a logical unit number to a virtual disk. Detach Detach a logical unit number from a virtual disk.
4.4.6 Example
The following is an example of creating volumes. This example creates two VDs and sets a global spare disk.
• Example
This example creates two VDs in one RG; each VD shares the cache volume. The cache volume is created automatically after the system boots up.
Figure 4.4.6.2 (Figure 4.4.6.2: Creating a RAID 5 with 3 physical disks, named “RG-R5”.) Step 2: Create VD (Virtual Disk). To create a data user volume, please follow the procedures. Figure 4.4.6.3 Select “/ Volume configuration / Virtual disk”. Click “Create”. Input a VD name, choose a RG Name and enter a size for this VD;...
Figure 4.4.6.4 (Figure 4.4.6.4: Creating VDs named “VD-R5-1” and “VD-R5-2” from RAID group “RG-R5”, the size of “VD-R5-1” is 50GB, and the size of “VD-R5-2” is 64GB. There is no LUN attached.) Step 3: Attach a LUN to a VD. There are 2 methods to attach a LUN to a VD.
Tips
The matching rules of access control follow the LUNs’ creation time; an earlier-created LUN has priority in the matching rules.
Step 4: Set a global spare disk. To set a global spare disk, please follow the procedures.
Select “/ Volume configuration / Physical disk”.
To delete the virtual disk, please follow the procedures: Select “/ Volume configuration / Virtual disk”. Check the gray button next to the VD number; click “Delete”. A confirmation page will pop up; click “OK”. Done. The VD is then deleted.
Tips
When deleting a VD directly, the LUN(s) attached to this VD will be detached as well.
Temperature sensors: 1 minute. Voltage sensors: 1 minute. Hard disk sensors: 10 minutes. Fan sensors: 10 seconds; when there are 3 consecutive errors, the system sends an ERROR event log. Power sensors: 10 seconds; when there are 3 consecutive errors, the system sends an ERROR event log.
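The “3 consecutive errors” rule is a debounce: one bad reading raises nothing, only a sustained fault does. Here is a sketch of that logic with invented names, not the firmware's actual implementation:

```python
# Illustrative sketch of the "3 consecutive errors" debounce rule.
class SensorMonitor:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.consecutive_errors = 0

    def report(self, reading_ok: bool) -> bool:
        """Feed one poll result; return True when an ERROR event should fire."""
        if reading_ok:
            self.consecutive_errors = 0
            return False
        self.consecutive_errors += 1
        return self.consecutive_errors == self.threshold  # fire once, on the 3rd

fan = SensorMonitor()
for ok in [True, False, False, False, False]:
    if fan.report(ok):
        print("ERROR event: fan sensor failed 3 consecutive polls")
```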
Figure 4.5.1.1
If “Auto shutdown” is checked, the system will shut down automatically when the voltage or temperature is out of the normal range. For better data protection, please check “Auto Shutdown”. For better protection and to avoid a single short period of high temperature triggering auto shutdown, the system uses multiple condition judgments to trigger auto shutdown; below are the details of when auto shutdown will be triggered.
First, connect the cable(s) between the system and the APC Smart-UPS via RS-232. Then set up the following values to define the actions the system will take when power fails.
Figure 4.5.2.2
(Figure 4.5.2.2: With Smart-UPS.)
• UPS column description:
UPS Type: select the UPS type. Choose Smart-UPS for APC, or None for other vendors or no UPS.
“Communication lost”
“UPS reboot in progress”
“UPS shutdown in progress”
“Batteries failed. Please change them NOW!”
Battery level: current power percentage of the battery.
The system will shut down when either “Shutdown battery level (%)” or “Shutdown delay (s)” reaches its condition. Users should set these values carefully.
4.5.3 SES
SES represents SCSI Enclosure Services, one of the enclosure management standards.
This is much better than a hard drive crashing while it is writing data or rebuilding a failed hard drive. “S.M.A.R.T.” can display the S.M.A.R.T. information of hard drives. The number is the current value; the number in parentheses is the threshold value. Threshold values differ among hard drive vendors; please refer to the vendors’ specifications.
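Interpreting a S.M.A.R.T. attribute is then a comparison of its current normalized value against the vendor threshold; the attribute is healthy while the current value stays above the threshold. Below is a minimal sketch of that reading, with example values that are made up:

```python
# Illustrative sketch: interpret S.M.A.R.T. attributes as shown in the UI,
# where "current (threshold)" means the attribute is healthy while the
# current value stays above the vendor threshold. Values are examples only.
ATTRIBUTES = {
    "Reallocated Sector Count": (100, 36),   # (current, threshold)
    "Spin Up Time":             (97, 21),
    "Temperature":              (45, 0),
}

for name, (current, threshold) in ATTRIBUTES.items():
    status = "OK" if current > threshold else "FAILING"
    print(f"{name}: {current} ({threshold}) -> {status}")
```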
“Maintenance” allows operation of the system functions, including “System information” to show the system version and details, “Event log” to view the system event logs recording critical events, “Upgrade” to upgrade to the latest firmware, “Firmware synchronization” to synchronize the firmware versions on both controllers, “Reset to factory default”
“Event log” can view the event messages. Check the checkboxes of INFO, WARNING, and ERROR to choose the level of event log to display. Click the “Download” button to save the whole event log as a text file with the file name “log-ModelName-SerialNumber-Date-Time.txt”. Click “Clear”
Tips
Please plug in any of the first four hard drives; event logs can then be saved and displayed at the next system boot. Otherwise, the event logs cannot be saved and will disappear.
4.6.3 Upgrade
“Upgrade” can upgrade the controller firmware and the JBOD firmware, change the operation mode, and activate the QReplica license.
Please prepare the new controller firmware file named “xxxx.bin” on the local hard drive, then click “Browse” to select the file. Click “Confirm”; a warning message will pop up; click “OK” to start upgrading the firmware.
Figure 4.6.3.2
While upgrading, a progress bar is shown. After the upgrade finishes, the system must be rebooted manually for the new firmware to take effect.
“Reset to factory default” allows the user to reset the subsystem to the factory default settings.
Figure 4.6.5.1
After resetting to default, the password is 1234 and the IP address reverts to default DHCP.
Default IP address: 192.168.10.50 (DHCP)
Default subnet mask: 255.255.255.0
Default gateway: 192.168.10.254
4.6.6 Import and export
“Import and export”
“Reboot and shutdown” can “Reboot” and “Shutdown” the system. Before power off, it’s better to execute “Shutdown” to flush the data from cache to physical disks. The step is necessary for data protection. Figure 4.6.7.1 4.7 Home/Logout/Mute In the right-upper corner of web UI, there are 3 individual icons, “Home”, “Logout”, and “Mute”.
Chapter 5 Advanced operations
5.1 Volume rebuild
If one physical disk of an RG that is set to a protected RAID level (e.g., RAID 3, RAID 5, or RAID 6) is FAILED or has been unplugged / removed, the status of the RG changes to degraded mode and the system will search for a spare disk to rebuild the degraded RG back to a complete one.
Sometimes rebuild is called recover; they have the same meaning. The following table shows the relationship between RAID levels and rebuild.
• Rebuild operation description:
RAID 0: disk striping. No protection for data. The RG fails if any hard drive fails or is unplugged.
RAID 1: disk mirroring over 2 disks.
5.2 RG migration and moving
To do migration, the total size of the RG must be larger than or equal to the original RG. It is not allowed to expand to the same RAID level with the same hard disks as the original RG. There is a similar function, “Move”,
Tips
The “Migrate” function migrates the member disks of the RG onto the same physical disks, but it must increase the number of disks or change to a different RAID level. The “Move” function moves the member disks of the RG to totally different physical disks.
Figure 5.2.4 5.3 VD extension To extend VD size, please follow the procedures. Select “/ Volume configuration / Virtual disk”. Check the gray button next to the VD number; click “Extend”. Change the size. The size must be larger than the original, and then click “OK” to start extension.
5.4 QSnap
Snapshot-on-the-box (QSnap) captures the instant state of data in the target volume in a logical sense. The underlying logic is copy-on-write: copying out the original data from a location before it is overwritten, whenever a write occurs there after the time of data capture.
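In other words, the first write to a block after a snapshot copies the original block into the snapshot space before the new data lands, and unmodified blocks stay shared. Below is a compact, purely conceptual sketch of this copy-on-write idea (not QSnap's implementation):

```python
# Illustrative copy-on-write snapshot sketch; conceptual only.
class Volume:
    def __init__(self, blocks: list[bytes]):
        self.blocks = blocks
        self.snapshot: dict[int, bytes] = {}   # preserved original blocks

    def take_snapshot(self):
        self.snapshot = {}                     # empty: all blocks still shared

    def write(self, index: int, data: bytes):
        # First write after the snapshot: save the original block first.
        if index not in self.snapshot:
            self.snapshot[index] = self.blocks[index]
        self.blocks[index] = data

    def read_snapshot(self, index: int) -> bytes:
        # Snapshot view: preserved original if copied out, else shared block.
        return self.snapshot.get(index, self.blocks[index])

vol = Volume([b"A", b"B", b"C"])
vol.take_snapshot()
vol.write(1, b"X")
print(vol.blocks)            # [b'A', b'X', b'C']  (live data)
print(vol.read_snapshot(1))  # b'B'                (point-in-time view)
```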
There are two methods to take snapshot. In “/ Volume configuration / Virtual disk”, check the gray button next to the VD number; click “Take snapshot”. Or in “/ Volume configuration / Snapshot”, click “Take snapshot”. Enter a snapshot name, and then click “OK”. A snapshot VD is created. Select “/ Volume configuration / Snapshot”...
There are two methods to set auto snapshot. In “/ Volume configuration / Virtual disk”, check the gray button next to the VD number; click “Auto snapshot”. Or in “/ Volume configuration / Snapshot”, click “Auto snapshot”. The auto snapshot can be set monthly, weekly, daily, or hourly. Done.
The data in a snapshot VD can be rolled back to the original VD. Please follow the procedures. Select “/ Volume configuration / Snapshot”. Check the gray button next to the snapshot VD number whose data the user wants to roll back; click “Rollback”. Done: the data in the snapshot VD is rolled back to the original VD.
Caution
Before executing rollback, it is better to dismount the file system first, to flush the data from cache to disks in the OS.
VSS, please refer to http://technet.microsoft.com/en-us/library/cc785914.aspx. The QSAN P300H61 / P300H71 supports Microsoft VSS.
• What if the snapshot space is full?
Before using snapshots, snapshot space is needed from the RG capacity. After a period of snapshot use, what happens if the snapshot size grows beyond the user-defined snapshot space?
If two or more snapshots exist, the system will try to remove the oldest snapshots (to release more space for the latest snapshot) until enough space is released. If only one snapshot exists, the snapshot will fail, because the snapshot space has run out.
Check the firmware versions of the two systems first. It is better that both systems have the same firmware version, or that system-2’s firmware version is newer. All physical disks of the RG should be moved from system-1 to system-2 together. The configuration of both RG and VD will be kept, but the LUN configuration will be cleared in order to avoid conflicts with system-2’s original settings.
Figure 5.6.2
Here are the objects: a source VD and a target VD. Before starting the clone process, the VD clone rule needs to be deployed first. Click “Configuration”.
Figure 5.6.3
There are three clone configurations, described in the following.
Figure 5.6.4
Snapshot space:
Figure 5.6.5
This setting is the ratio of the source VD to the snapshot space. The default ratio is 2 to 1: when the clone process starts, the system will automatically use free RG space to create a snapshot space whose capacity is double that of the source VD.
Threshold: (this setting takes effect after enabling schedule clone)
Figure 5.6.6
The threshold setting monitors the usage amount of the snapshot space.
Figure 5.6.7
When the snapshot space runs out, the VD clone process stops because there is no more available snapshot space. If this option has been checked, the system will clear the snapshots of the clone in order to release snapshot space automatically, and the VD clone will restart the task after an hour.
Figure 5.6.9
Now the clone target “TargetVD_R6” has been set.
Figure 5.6.10
Click “Start clone”; the clone process will start.
Figure 5.6.11
The default setting will automatically create a snapshot space whose capacity is double the size of the VD space. Before starting the clone, the system will initialize the snapshot space.
Figure 5.6.12
10. After initializing the snapshot space, it will start cloning.
Figure 5.6.13
11. Click “Schedule clone” to set up the clone by schedule.
Figure 5.6.14
12. There are “Set Clone schedule” and “Clear Clone schedule” on this page. Please remember that “Threshold”
Figure 5.6.15
• Running out of snapshot space during VD clone
While the clone is in progress, the incremental data of this VD may exceed the snapshot space. The clone will complete, but the clone snapshot will fail. The next time the clone is started, a warning message “This is not enough of snapshot space for the operation” will appear.
Figure 5.6.16
5.7 SAS JBOD expansion
5.7.1 Connecting JBOD
The QSAN controller supports SAS JBOD expansion to connect an extra SAS dual-controller JBOD. When a connected dual JBOD is detected, it will be displayed in “Show PD for:” of “/ Volume configuration / Physical disk”, for example: Local, JBOD 1 (QSAN J300H), JBOD 2 (QSAN J300H), etc.
Figure 5.7.1.1 (Figure 5.7.1.1: Display all PDs in JBOD 1.) “/ Enclosure management / Hardware monitor” can display the hardware status of SAS JBODs.
Figure 5.7.1.3 “/ Enclosure management / S.M.A.R.T.” can display S.M.A.R.T. information of all PDs, including Local and all SAS JBODs.
Figure 5.7.1.4
(Figure 5.7.1.4: Disk S.M.A.R.T. information of JBOD 1; S.M.A.R.T. is supported for SATA disks only.)
SAS JBOD expansion has some constraints, as described in the following: users can create a RAID group spanning multiple chassis; the maximum number of disks in a single RAID group is 32.
5.7.2 Upgrade firmware of JBOD
Before upgrading, it is better to use the “Export” function to back up all configurations to a file. To upgrade the firmware of the JBOD, please follow the procedures. Please log in to the subsystem as username admin first, and then go to “/ System maintenance / Upgrade”.
Figure 5.8.1 MC/S: MC/S (Multiple Connections per Session) is a feature of iSCSI protocol, which allows combining several connections inside a single session for performance and failover purposes. In this way, I/O can be sent on any TCP/IP connection to the target. If one connection fails, another connection can continue processing I/O without interruption to the application.
Figure 5.8.2
Difference: MC/S is implemented at the iSCSI level, while MPIO is implemented at a higher level. Hence, all MPIO infrastructure is shared among all SCSI transports, including Fibre Channel, SAS, etc. MPIO is the most common usage across all OS vendors. The primary difference between these two is the level at which redundancy is maintained.
Link aggregation is the technique of taking several distinct Ethernet links and making them appear as a single link. It has a larger bandwidth and provides fault tolerance. Besides the advantage of wider bandwidth, I/O traffic keeps operating until all physical links fail.
Figure 5.9.2
Caution
Before using trunking or LACP, the gigabit switch must support trunking or LACP, and the feature must be enabled. Otherwise, the host cannot connect to the storage device.
5.10 Dual controllers
5.10.1 Perform I/O
Please refer to the following topology and have all the connections ready. To perform I/O on dual controllers, the server/host should set up MPIO.
Figure 5.10.1.1
5.10.2 Ownership
When creating an RG, it will be assigned a preferred owner; the default owner is controller 1. To change the RG ownership, please follow the procedures. Select “/ Volume configuration / RAID group”. Check the gray button next to the RG name; click “Set preferred owner”. The ownership of the RG will be switched to the other controller.
Figure 5.10.2.2
(Figure 5.10.2.2: The RG ownership is changed to the other controller.)
5.10.3 Controller status
There are four statuses, described in the following. They can be found in “/ System maintenance / System information”.
Normal: dual controller mode; both controllers are functional.
Degraded: dual controller mode.
Tips
An iSNS server is recommended for a dual controller system.
5.11 QReplica
The QReplica function helps users replicate data easily through LAN or WAN from one subsystem to another. The procedures of QReplica are as follows:
1. Copy all data from the source VD to the target VD at the beginning (full copy).
2.
Figure 5.11.1
2. The setting can be reverted by selecting “Disable QReplica” in the operation menu.
Figure 5.11.2
• Create a backup virtual disk on the target subsystem
1. Before creating the replication job on the source subsystem, the user has to create a virtual disk on the target subsystem and set the type of the virtual disk to “BACKUP”.
Figure 5.11.3
2. The backup virtual disk needs to be attached to a LUN ID before creating the replication job. A virtual disk of “BACKUP” type can only be attached with “Read-only” permission, to prevent it from being modified incautiously.
Figure 5.11.4
• Create replication job on the source subsystem 1. If the license key is activated on the subsystem correctly, a new QReplica tab will be added on the Web UI. Click “Create” to create a new replication job. Figure 5.11.5 2.
Figure 5.11.7
4. QReplica uses the standard iSCSI protocol for data replication. The user has to log on to the iSCSI node to create the iSCSI connection for the data transmission. Enter the CHAP information if necessary and select the target node to log on to. Click “Next” to continue.
Figure 5.11.8
5.
Figure 5.11.9
6. A new replication job is created and listed on the QReplica page.
Figure 5.11.10
• Run the replication job
1. Click the “OP.” button on the replication job to open the operation menu. Click “Start” to run the replication job.
Figure 5.11.11
2.
Figure 5.11.12
3. The user can monitor the replication job from the “Status” information; the progress is expressed as a percentage.
Figure 5.11.13
• Create multi-path on the replication job
1. Click “Create multi-path” in the operation menu of the replication job.
Figure 5.11.14
2.
Figure 5.11.15 3. Select the iSCSI node to log on and click “Next”. Figure 5.11.16 4. Choose the same target virtual disk and click “Next”.
Figure 5.11.17 5. A new target will be added in this replication job as a redundancy path. Figure 5.11.18 • Configure the replication job to run by schedule 1. Click “Schedule” in the operation menu of the replication job.
Figure 5.11.19
2. The replication job can be scheduled to run by hour, by day, by week, or by month. The execution time is configurable per the user’s need. If the scheduled execution time arrives but the previous replication job is still running, the scheduled execution will be skipped once.
QReplica uses QSnap, the snapshot technique of QSAN, to help the user replicate the data without stopping access to the source virtual disk. If the snapshot space is not configured on the source virtual disk in advance, the subsystem will allocate snapshot space for the source virtual disk automatically when the replication job is created.
the source virtual disk to the target virtual disk. Next time, when the remaining snapshot space has been used over 50% (in other words, when the total snapshot space has been used over 75%), the subsystem will start the replication job again, as sketched below.
“Restart the task an hour later if failed”
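Put as numbers: with snapshot space S, the first automatic run triggers at 0.5 S used; after that, the trigger is half of the remaining space, i.e. 0.75 S in total, then 0.875 S, and so on. The sketch below encodes this halving-threshold reading of the description above; it is an interpretation, not firmware code:

```python
# Illustrative sketch of the snapshot-space trigger described above:
# replicate when half of the *remaining* snapshot space is used,
# i.e. at 50% total, then 75%, then 87.5%, ...
def next_trigger(total_space: float, last_trigger: float) -> float:
    """Return the usage level (same unit as total_space) that starts the next job."""
    return last_trigger + (total_space - last_trigger) / 2

level = 0.0
for run in range(1, 4):
    level = next_trigger(100.0, level)
    print(f"replication run {run} starts at {level:.1f}% of snapshot space")
# run 1 -> 50.0%, run 2 -> 75.0%, run 3 -> 87.5%
```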
Figure 5.11.23
Here are more details about these two methods for setting up the replication job the first time.
Method 1: Skip the full copy of the replication job for a new, clean virtual disk. For a newly created virtual disk that has not been accessed, the subsystem will recognize it and skip the full copy automatically when the replication job is created on this virtual disk for the first time.
It is better if no host is connected to the source virtual disk. Then run “VD Clone” to synchronize the data between the source and target virtual disks. After the data is synchronized, change the cloning job to a QReplica job by selecting “Change to QReplica”
Figure 5.11.26
Select the replication job to rebuild.
Figure 5.11.27
Follow the steps as when creating a new replication job. If a wrong target virtual disk is selected when rebuilding the replication job, an error message “The rebuilding source and target VDs are mismatched” will pop up and stop the user from finishing the creation.
Chapter 6 Troubleshooting
6.1 System buzzer
The system buzzer features are listed below:
The system buzzer alarms for 1 second when the system boots up successfully.
The system buzzer alarms continuously when an error occurs. The alarm stops after the error is resolved or the buzzer is muted.
The alarm is muted automatically when the error is resolved.
ERROR SATA PRD mem fail Failed to init SATA PRD memory manager ERROR SATA revision id fail Failed to get SATA revision id ERROR SATA set reg fail Failed to set SATA register ERROR SATA init fail Core failed to initialize the SATA adapter ERROR SATA diag fail SATA Adapter diagnostics failed...
• RMS events Level Type Description INFO Console Login <username> login from <IP or serial console> via Console INFO Console Logout <username> logout from <IP or serial console> via Console INFO Web Login <username> login from <IP> via Web UI INFO Web Logout <username>...
ERROR VD move failed Failed to complete move of VD <name>. INFO RG activated RG <name> has been manually activated. INFO RG deactivated RG <name> has been manually deactivated. INFO VD rewrite started Rewrite at LBA <address> of VD <name> starts. INFO VD rewrite finished Rewrite at LBA <address>...
INFO VD erase started VD <name> starts erasing process. • Snapshot events Level Type Description WARNING Snap mem Failed to allocate snapshot memory for VD <name>. WARNING Snap space Failed to allocate snapshot space for VD <name>. overflow WARNING Snap threshold The snapshot space threshold of VD <name>...
INFO PD upgrade started JBOD <name> PD [<string>] starts upgrading firmware process. INFO PD upgrade JBOD <name> PD [<string>] finished upgrading firmware finished process. WARNING PD upgrade failed JBOD <name> PD [<string>] upgrade firmware failed. INFO PD freed JBOD <name> PD <slot> has been freed from RG <name>. INFO PD inserted JBOD <name>...
• System maintenance events Level Type Description INFO System shutdown System shutdown. INFO System reboot System reboot. INFO System console System shutdown from <string> via Console UI shutdown INFO System web System shutdown from <string> via Web UI shutdown INFO System button System shutdown via power button shutdown...
• Clone events Level Type Description INFO VD clone started VD <name> starts cloning process. INFO VD clone finished VD <name> finished cloning process. WARNING VD clone failed The cloning in VD <name> failed. INFO VD clone aborted The cloning in VD <name> was aborted. INFO VD clone set The clone of VD <name>...
QSAN Support Form version: 1.1 Customer information Customer name Contact email Target information Model name (*) Hardware version MB (Main board): DB (Daughter board): Serial number (1) Firmware version (*)(2) Backplane / Chassis model Backplane version Target configuration (*) RAID Attached file name: configuration (*)(3)
Connect diagram: attached file name:
Problem description (*)
Reproduce steps (*)
Screenshot: attached file name:
QSAN description
Fields marked with (*) are mandatory; at a minimum, this information must be provided. For fields marked (1)(2)(3)(4), you can obtain the values as described below.
(1) In / Maintenance / Info / Controller serial no.
This form can be obtained from the QSAN FTP site: ftp://ftp.qsan.com.tw/QSAN_Support_Form.doc
The iSCSI disk can now be connected.
• MPIO
If running MPIO, please continue. Click the “Discovery” tab to connect the second path. Click “Discover Portal”. Input the IP address or DNS name of the target.
Figure B.4
Figure B.5
Click “OK”.
Figure B.6
Figure B.7
Click the “Targets” tab, select the second path, and then click “Connect”.
10. Check the “Enable multi-path” checkbox. Then click “OK”.
11. Done; it can connect to an iSCSI disk with MPIO.
• MC/S
12. If running MC/S, please continue.
13.