About this manual This manual introduces the D-Link DSN-64x0 IP SAN storage and aims to help users learn the operation of the disk array system easily. The information contained in this manual has been reviewed for accuracy, but not for product warranty, because of the variety of environments, operating systems, and settings.
Chapter 1 Overview 1.1 Features D-LINK DSN-6000 series IP SAN storage provides non-stop service with a high degree of fault tolerance by using D-LINK RAID technology and advanced array management features. DSN-6410 & 6410w/640 IP SAN storage connects to the host system through an iSCSI interface.
1.1.1 Highlights D-LINK DSN-6410 & 6410w/640 feature highlights
• Host interface: 4 x 10GbE iSCSI ports (DSN-6410 with DSN-640); 2 x 10GbE iSCSI ports (DSN-6410)
• Drive interface: 12 x SAS or SATA II
• RAID: Dual-active RAID controllers (DSN-6410 with DSN-640)
1.2 RAID concepts RAID is the abbreviation of “Redundant Array of Independent Disks”. The basic idea of RAID is to combine multiple drives together to form one large logical drive. This RAID drive delivers better performance, capacity, and reliability than a single drive. The operating system detects the RAID drive as a single storage device.
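The capacity side of this idea can be sketched in a few lines. The function below is purely illustrative (it is not D-LINK firmware logic) and assumes equal-size drives and the common textbook definitions of each RAID level:

```python
# Illustrative sketch: usable capacity of a RAID group built from
# n equal-size drives, per common RAID level definitions.
def usable_capacity(level: str, n_drives: int, drive_gb: int) -> int:
    if level == "RAID 0":            # striping, no redundancy
        return n_drives * drive_gb
    if level == "RAID 1":            # mirroring over 2 disks
        return drive_gb
    if level == "RAID 5":            # one drive's worth of parity
        return (n_drives - 1) * drive_gb
    if level == "RAID 6":            # two drives' worth of parity
        return (n_drives - 2) * drive_gb
    raise ValueError(f"unsupported level: {level}")

# A fully populated 12-bay enclosure with 1000 GB drives:
print(usable_capacity("RAID 5", 12, 1000))  # 11000
```

The trade-off the manual describes follows directly: RAID 0 maximizes capacity and performance with no protection, while the parity levels give back one or two drives' worth of space in exchange for fault tolerance.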
Write-Through: a cache-write policy in which the completion of a write request is not signaled until the data is safely stored on both the data cache and the accessed physical disks. Write-Back: a cache-write policy in which the completion of a write request is signaled as soon as the data is in cache; the actual writing to non-volatile media occurs at a later time.
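The difference between the two policies can be modeled in a toy sketch (a conceptual model only, not the controller's implementation): write-back acknowledges as soon as data is in cache and destages to media later, while write-through updates the media before acknowledging.

```python
# Conceptual model of the two cache-write policies described above.
class Cache:
    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.cache = {}      # data held in the cache
        self.disk = {}       # non-volatile media

    def write(self, block: int, data: str) -> str:
        self.cache[block] = data
        if self.write_back:
            return "ack"             # signaled as soon as data is in cache
        self.disk[block] = data      # write-through: media updated first
        return "ack"

    def flush(self):
        self.disk.update(self.cache) # write-back destages at a later time

wb = Cache(write_back=True)
wb.write(0, "A")
assert 0 not in wb.disk   # acknowledged, but not yet on media
wb.flush()
assert wb.disk[0] == "A"
```

This is why write-back improves latency but depends on the battery backup module to protect cached data against power loss.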
• MPIO: Multi-Path Input/Output.
• MC/S: Multiple Connections per Session.
• MTU: Maximum Transmission Unit.
• CHAP: Challenge Handshake Authentication Protocol. An optional security mechanism to control access to an iSCSI storage system over the iSCSI data ports.
• iSNS: Internet Storage Name Service.
Part 3: Dual controller •...
1.2.3 Volume relationship The graphic below shows the volume structure that D-LINK has designed. It describes the relationship of RAID components. One RG (RAID group) consists of a set of VDs (Virtual Disks) and owns one RAID level attribute. Each RG can be divided into several VDs. The VDs in one RG share the same RAID level, but may have different volume capacities.
1.3 iSCSI concepts iSCSI (Internet SCSI) is a protocol which encapsulates SCSI (Small Computer System Interface) commands and data in TCP/IP packets for linking storage devices with servers over common IP infrastructures. iSCSI provides high performance SANs over standard IP networks like LAN, WAN or the Internet.
initiators use the standard TCP/IP stack and Ethernet hardware, while iSCSI HBAs use their own iSCSI and TCP/IP stacks on board. Hardware iSCSI HBAs provide their own initiator tools. Please refer to the vendors’ HBA user manuals. Microsoft, Linux, Solaris and Mac provide iSCSI initiator drivers. Please contact D-LINK for the latest certification list.
• Write-through or write-back cache policy for different application usage
• Multiple RAID volumes support
• Configurable RAID stripe size
• Online volume expansion
• Instant RAID volume availability
• Auto volume rebuilding
• On-line volume migration with no system down-time
Advanced data protection
• D-Link writeable snapshot...
• S.E.S. inband management
• UPS management via dedicated serial port
• Fan speed monitors
• Redundant power supply monitors
• Voltage monitors
• Thermal sensors for both RAID controller and enclosure
• Status monitors for D-LINK SAS JBODs
Management interface
• Management UI via serial console ...
• Host access control: Read-Write and Read-Only
• Up to 128 sessions per controller
• One logical volume can be shared by as many as 16 hosts
OS support
• Windows
• Linux
• Solaris
Drive support
• SATA II (optional)
• SCSI-3 compliant
• Multiple IO transaction processing
• Tagged command queuing...
This device has been shown to be in compliance with and was tested in accordance with the measurement procedures specified in the Standards and Specifications listed below and as indicated in the measurement report number: xxxxxxxx-E Technical Standard: EMC DIRECTIVE 2004/108/EC (EN55022 / EN55024) UL statement FCC statement...
Reliable Earthing - Reliable earthing of rack-mounted equipment should be maintained. Particular attention should be given to supply connections other than direct connections to the branch circuit (e.g. use of power strips). Caution The main purpose of the handles is for rack mount use only. Do not use the handles to carry or transport the systems.
Chapter 2 Installation 2.1 Package contents The package contains the following items:
• DSN-6410 & 6410w/640 IP SAN storage (x1)
• HDD trays (x12)
• Power cords (x4)
• RS-232 cables (x2); one is for the console, the other is for UPS
• CD (x1)
• Rail kit (x1 set)
• Keys, screws for drives and rail kit (x1 packet)
• SFP and 5-meter cable
2.2 Before installation...
The drives can be installed into any slot in the enclosure. Slot numbering will be reflected in the web UI. Tips It is advisable to install at least one drive in slots 1 ~ 4. System event logs are saved to drives in these slots; if no drives are fitted in these slots, the event logs will be lost in the event of a system reboot.
2.3.3 Install drives Note : Skip this section if you purchased a solution populated with drives. To install SAS or SATA drives with no Bridge Board use the front mounting holes: To install SATA drives with Bridge Board (DSN-654), fit the Bridge Board first Then install the drive using the rear mounting holes:...
Figure 126.96.36.199 HDD tray description:
• HDD power LED: Green means the HDD is inserted and good; Off means no HDD.
• HDD access LED: Blue blinking means the HDD is being accessed; Off means no HDD.
Controller 2. (only on DSN-6410 with DSN-640) Controller 1. Power supply unit (PSU1). Fan module (FAN1 / FAN2). Power supply unit (PSU2). Fan module (FAN3 / FAN4). ...
Figure 188.8.131.52 (DSN-6410 SFP+) Connector, LED and button description:
• 10GbE ports (x2). Link LED: Orange is asserted when a 1G link is established and maintained; Blue is asserted when a 10G link is established and ...
BBM Status Button: When the system power is off, press the BBM status button. If the BBM LED is green, the BBM still has power to keep data in the cache. If not, the BBM power has run out and it can no longer keep the data in the cache.
2.5 Deployment Please refer to the following topology and have all the connections ready. Figure 2.5.1 (DSN-6410 with DSN-640) Figure 2.5.2 (DSN-6410) Set up the hardware connections before powering on the servers. Connect the console cable, management port cable, and iSCSI data port cables in advance.
In addition, installing an iSNS server is recommended for a dual controller system. Power on the DSN-6410 or DSN-6410w/DSN-640 and the DSN-6020 (optional) first, and then power on the hosts and the iSNS server. It is suggested that the host server log on to the target twice (both controller 1 and controller 2); MPIO should then be set up automatically.
Figure 2.5.4 Use the RS-232 cable for the console (black color, phone jack to DB9 female) to connect from the controller to the management PC directly. Use the RS-232 cable for the UPS (gray color, phone jack to DB9 male) to connect from the controller to the APC Smart UPS serial cable (DB9 female side), and then connect the serial cable to the APC Smart UPS.
3.1.1 Serial console Use the console cable (NULL modem cable) to connect from the console port of the D-LINK IP SAN storage to the RS-232 port of the management PC. Please refer to figure 2.3.1. The console settings are as follows: Baud rate: 115200, 8 data bits, no parity, 1 stop bit, and no flow control.
3.1.3 Web UI The D-LINK IP SAN storage provides a graphical user interface (GUI) for operation. Be sure to connect the LAN cable. The default IP setting is DHCP; open the browser and enter: http://192.168.0.32 A dialog will then pop up for authentication.
Indicator description:
• RAID light: Green: RAID works well. Red: RAID fails.
• Temperature light: Green: Temperature is normal. Red: Temperature is abnormal.
• Voltage light: Green: Voltage is normal. Red: Voltage is abnormal.
Mute alarm beeper. Tips If the status indicators in Internet Explorer (IE) are displayed in gray instead of blinking red, please enable the “Internet Options” → “Advanced” → “Play animations in webpages” option in IE. The default value is enabled, but some applications disable it. 3.2 How to use the system quickly The following sections provide a quick guide to using this IP SAN storage.
Figure 184.108.40.206 Step 2: Confirm the management port IP address and DNS, and then click “Next”. Figure 220.127.116.11 Step 3: Set up the data port IP and click “Next”.
Figure 18.104.22.168 Step 4: Set up the RAID level and volume size and click “Next”. Figure 22.214.171.124 Step 5: Check all items, and click “Finish”.
Figure 126.96.36.199 Step 6: Done. 3.2.2 Volume creation wizard “Volume create wizard” has a smarter policy. When HDDs are inserted in the system, “Volume create wizard” lists all possibilities and sizes of different RAID levels; it will use all available HDDs for the RAID level the user chooses. When the system has different sizes of HDDs, e.g., 8*200G and 8*80G, it lists all possible combinations of different RAID levels and sizes.
Figure 188.8.131.52 Step 2: Please select the combination of the RG capacity, or “Use default algorithm” for maximum RG capacity. After RG size is chosen, click “Next”. Figure 184.108.40.206...
Step 3: Decide the VD size. The user can enter a number less than or equal to the default number. Then click “Next”. Figure 220.127.116.11 Step 4: Confirmation page. Click “Finish” if all setups are correct. Then a VD will be created. Step 5: Done. The system is available now. Figure 18.104.22.168 (Figure 22.214.171.124: A virtual disk of RAID 0 is created and is named by the system itself.)
Chapter 4 Configuration 4.1 Web UI management interface hierarchy The below table is the hierarchy of web GUI. System configuration System setting System name / Date and time / System indication Network MAC address / Address / DNS / Port ...
Maintenance
System information: System information
Event log: Download / Mute / Clear
Upgrade: Browse the firmware to upgrade
Firmware synchronization: Synchronize the slave controller’s firmware version with the master’s
Reset to factory default: Sure to reset to factory default?
Import and export: Import/Export / Import file...
Figure 126.96.36.199 Check “Change date and time” to set up the current date, time, and time zone before use, or synchronize the time from an NTP (Network Time Protocol) server. Click “Confirm” in System indication to turn on the system indication LED. Click again to turn it off. 4.2.2 Network setting “Network setting”...
Figure 188.8.131.52 4.2.3 Login setting “Login setting” can set the single admin, auto logout time, and admin / user passwords. The single admin setting prevents multiple users from accessing the same system at the same time. Auto logout: The options are (1) Disabled; (2) 5 minutes; (3) 30 minutes; (4) 1 hour. The system will log out automatically when the user has been inactive for a period of time.
Figure 184.108.40.206 Check “Change admin password” or “Change user password” to change admin or user password. The maximum length of password is 12 characters. 4.2.4 Mail setting “Mail setting” can enter 3 mail addresses for receiving the event notification. Some mail servers would check “Mail-from address”...
Figure 220.127.116.11 4.2.5 Notification setting “Notification setting” can set up SNMP trap for alerting via SNMP, pop-up message via Windows messenger (not MSN), alert via syslog protocol, and event log filter for web UI and LCM notifications.
Figure 18.104.22.168 “SNMP” allows up to 3 SNMP trap addresses. Default community setting is “public”. User can choose the event log levels and default setting enables ERROR and WARNING event log in SNMP. There are many SNMP tools. The following web sites are for your reference: SNMPc: http://www.snmpc.com/ Net-SNMP:...
Most UNIX systems have a built-in syslog daemon. The “Event log filter” setting can enable event log display on “Pop up events” and “LCM”. 4.3 iSCSI configuration “iSCSI configuration” is designed for setting up the “Entity Property”, “NIC”, “Node”, “Session”, and “CHAP account”. Figure 4.3.1 4.3.1 “NIC”...
Figure 22.214.171.124 Default gateway: • The default gateway can be changed by checking the gray button of a LAN port and clicking “Become default gateway”. There can be only one default gateway. MTU / Jumbo frame: • The MTU (Maximum Transmission Unit) size can be enabled by checking the gray button of a LAN port and clicking “Enable jumbo frame”.
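The benefit of a larger MTU can be estimated with simple arithmetic. The sketch below uses typical Ethernet/IP/TCP header sizes (illustrative figures, not measured values from this product) to compare the payload efficiency of a standard 1500-byte MTU against a 9000-byte jumbo frame:

```python
# Rough payload-efficiency estimate for standard vs jumbo frames.
# Header sizes are typical values (Ethernet incl. FCS, IPv4, TCP),
# used here only for illustration.
ETH_HDR, IP_HDR, TCP_HDR = 18, 20, 20

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_HDR - TCP_HDR          # bytes of SCSI data per packet
    return payload / (mtu + ETH_HDR)          # fraction of wire bytes that is data

print(round(payload_efficiency(1500), 3))  # 0.962  (standard frame)
print(round(payload_efficiency(9000), 3))  # 0.994  (jumbo frame)
```

Jumbo frames also reduce the per-packet processing load, which is often the larger win for iSCSI; note that every device on the data path (NIC, switch, storage port) must be configured for the same jumbo frame size.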
LACP packets to the peer. The advantages of LACP are (1) increased bandwidth and (2) failover when the link status fails on a port. The Trunking / LACP setting can be changed by clicking the button “Aggregation”. Figure 126.96.36.199 (Figure 188.8.131.52: There are 2 iSCSI data ports on each controller; select at least two NICs for link aggregation.) Figure 184.108.40.206 For example, LAN1 and LAN2 are set as Trunking mode.
Figure 220.127.116.11 (Figure 18.104.22.168 shows that a user can ping the host from the target to make sure the data port connection is good.) 4.3.2 Entity property “Entity property” can view the entity name of the system and set up the “iSNS IP” for iSNS (Internet Storage Name Service).
Figure 22.214.171.124 CHAP: • CHAP is the abbreviation of Challenge Handshake Authentication Protocol. CHAP is a strong authentication method used in point-to-point connections for user login. It is a type of authentication in which the authentication server sends the client a challenge; the client combines the challenge with its secret to compute a response that proves its identity without sending the password itself.
Figure 126.96.36.199 Go to the “/ iSCSI configuration / CHAP account” page to create a CHAP account. Please refer to the next section for more detail. Check the gray button of the “OP.” column and click “User”. Select the CHAP user(s) which will be used. It is a multi-select option; one or more users can be chosen. If none is chosen, CHAP cannot work.
Rename alias: • User can create an alias to one device node. Check the gray button of “OP.” column next to one device node. Select “Rename alias”. Create an alias for that device node. Click “OK” to confirm. An alias appears at the end of that device node. Figure 188.8.131.52 Figure 184.108.40.206 Tips...
DataSequenceInOrder (Data Sequence in Order) DataPDUInOrder (Data PDU in Order) 10. Detail of Authentication status and Source IP: port number. Figure 220.127.116.11 (Figure 18.104.22.168: iSCSI Session.) Check the gray button of the session number and click “List connection”. It can list all connection(s) of the session. Figure 22.214.171.124 (Figure 126.96.36.199: iSCSI Connection.) 4.3.5
Figure 188.8.131.52 Click “OK”. Figure 184.108.40.206 Click “Delete” to delete CHAP account. 4.4 Volume configuration “Volume configuration” is designed for setting up the volume configuration which includes “Physical disk”, “RAID group”, “Virtual disk”, “Snapshot”, “Logical unit”, and “Replication”. Figure 4.4.1...
4.4.1 Physical disk “Physical disk” can view the status of the hard drives in the system. The following are the operational steps: Check the gray button next to the number of a slot; it will show the functions which can be executed. Active functions can be selected, while inactive functions are shown in gray and cannot be selected.
Figure 220.127.116.11 (Figure 18.104.22.168: Physical disks in slot 1,2,3 are created for a RG named “RG-R5”. Slot 4 is set as dedicated spare disk of the RG named “RG-R5”. The others are free disks.) Step 4: The unit of size can be changed from (GB) to (MB). It will display the capacity of hard drive in MB.
“Failed” the hard drive is failed. “Error Alert” S.M.A.R.T. error alert. “Read Errors” the hard drive has unrecoverable read errors. Usage The usage of hard drive: “RAID disk” This hard drive has been set to ...
Set Dedicated Set a hard drive to dedicated spare of the selected RG. spares Upgrade Upgrade hard drive firmware. Disk Scrub Scrub the hard drive. Turn on/off the Turn on the indication LED of the hard drive. Click again to turn indication LED off.
Step 2: Confirm page. Click “OK” if all setups are correct. Figure 22.214.171.124 (Figure 126.96.36.199: There is a RAID 0 with 4 physical disks, named “RG-R0”. The second RAID group is a RAID 5 with 3 physical disks, named “RG-R5”.) Step 3: Done.
Health The health of the RAID group: “Good” the RAID group is good. “Failed” the RAID group fails. “Degraded” the RAID group is not healthy and not complete. The reason could be a missing or failed disk. RAID The RAID level of the RAID group.
property Write cache: “Enabled” Enable disk write cache. (Default) “Disabled” Disable disk write cache. Standby: “Disabled” Disable auto spin-down. (Default) “30 sec / 1 min / 5 min / 30 min” Enable hard drive ...
Figure 188.8.131.52 Caution If the system is shut down or rebooted while a VD is being created, the erase process will stop. Step 2: Confirm page. Click “OK” if all setups are correct. Figure 184.108.40.206 (Figure 220.127.116.11: Create a VD named “VD-01” from “RG-R0”. The second VD is named “VD-02”; it is initializing.) Step 3: Done.
VD column description: • The button includes the functions which can be executed. Name Virtual disk name. Size (GB) Total capacity of the virtual disk. The unit can be displayed in GB or MB. Write The cache-write policy of the virtual disk: “WT”...
Clone The target name of virtual disk. Schedule The clone schedule of virtual disk: Health The health of virtual disk: “Optimal” the virtual disk is working well and there is no failed disk in the RG. “Degraded” At least one disk from the RG of the Virtual ...
Stop clone Stop clone function. Schedule Set clone function by schedule. clone Set snapshot Set snapshot space for taking snapshot. Please refer to next space chapter for more detail. Cleanup Clean all snapshots of a VD and release the snapshot space. snapshot Take Take a snapshot on the virtual disk.
Figure 18.104.22.168 (Figure 22.214.171.124: “VD-01” snapshot space has been created, snapshot space is 15GB, and used 1GB for saving snapshot index.) Step 3: Take a snapshot. In “/ Volume configuration / Snapshot”, click “Take snapshot”. It will link to next page. Enter a snapshot name. Figure 126.96.36.199 Step 4: Expose the snapshot VD.
Step 5: Attach a LUN to a snapshot VD. Please refer to the next section for attaching a LUN. Step 6: Done. Snapshot VD can be used. Snapshot column description: • The button includes the functions which can be executed. Name Snapshot VD name.
Delete Delete the snapshot VD. Attach Attach a LUN. Detach Detach a LUN. List LUN List attached LUN(s). 4.4.5 Logical unit “Logical unit” can view, create, and modify the status of attached logical unit number(s) of each VD. User can attach LUN by clicking the “Attach”. “Host” must enter with an iSCSI node name for access control, or fill-in wildcard “*”, which means every host can access the volume.
LUN operation description: • Attach Attach a logical unit number to a virtual disk. Detach Detach a logical unit number from a virtual disk. The matching rules of access control follow the LUNs’ creation time; the earlier-created LUN takes priority in matching. For example: there are 2 LUN rules for the same VD, one is “*”, LUN 0;...
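The creation-time matching rule described above can be sketched as a small function (a conceptual model, not the controller's code; the IQN used is hypothetical). Rules are evaluated oldest first, and a wildcard "*" matches any host:

```python
# Sketch of the LUN access-control matching rule: rules are checked in
# creation order (oldest first), and the first match wins.
def match_lun(rules, host_iqn):
    for host, lun, perm in rules:              # rules listed oldest first
        if host == "*" or host == host_iqn:
            return lun, perm
    return None                                # no rule matches: no access

# Two rules for the same VD: the wildcard rule was created first.
rules = [("*", 0, "read-write"),
         ("iqn.2004-05.example:host1", 1, "read-only")]   # hypothetical IQN
assert match_lun(rules, "iqn.2004-05.example:host1") == (0, "read-write")
```

Because the wildcard rule was created earlier, even the specifically named host matches LUN 0 first, which is exactly the pitfall the manual's example warns about.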
Figure 188.8.131.52 Select “/ Volume configuration / RAID group”. Click “Create“. Input a RG Name, choose a RAID level from the list, click “Select PD“ to choose the RAID physical disks, then click “OK“. Check the setting. Click “OK“ if all setups are correct. Done.
Figure 184.108.40.206 Select “/ Volume configuration / Virtual disk”. Click “Create”. Input a VD name, choose a RG Name and enter a size for this VD; decide the stripe height, block size, read / write mode, bg rate, and set priority, finally click “OK”. Done.
Figure 220.127.116.11 Select a VD. Input “Host” IQN, which is an iSCSI node name for access control, or fill-in wildcard “*”, which means every host can access to this volume. Choose LUN and permission, and then click “OK”. Done. Figure 18.104.22.168 Tips The matching rules of access control are from the LUNs’...
Figure 22.214.171.124 (Figure 126.96.36.199: Slot 4 is set as a global spare disk.) Step 5: Done. Delete VDs, RG, please follow the below steps. Step 6: Detach a LUN from the VD. In “/ Volume configuration / Logical unit”, Figure 188.8.131.52 Check the gray button next to the LUN;...
To delete a RAID group, please follow these procedures: Select “/ Volume configuration / RAID group”. Select an RG whose VDs have all been deleted; otherwise the RG cannot be deleted. Check the gray button next to the RG number and click “Delete”. A confirmation page will pop up; click “OK”.
If “Auto shutdown” is checked, the system will shut down automatically when the voltage or temperature is out of the normal range. For better data protection, please check “Auto Shutdown”. For better protection, and to avoid a single short period of high temperature triggering auto shutdown, the system uses multiple condition judgments to trigger auto shutdown; below are the details of when auto shutdown will be triggered.
Figure 184.108.40.206 (Figure 220.127.116.11: With Smart-UPS.) UPS column description: • UPS Type: Select the UPS type. Choose Smart-UPS for APC; choose None for other vendors or no UPS. • Battery Level (%): When the battery falls below this setting level, the system will shut down. Setting the shutdown level to “0” will disable the UPS. If a power failure occurs and system power cannot recover, the...
Battery Level (%) Current power percentage of the battery. 4.5.3 SES SES stands for SCSI Enclosure Services, one of the enclosure management standards. “SES configuration” can enable or disable the management of SES. Figure 18.104.22.168 (Figure 22.214.171.124: Enable SES in LUN 0; it can be accessed from every host.) The SES client software is available at the following web site: SANtools: http://www.santools.com/
Figure 126.96.36.199 (SAS drives & SATA drives) 4.6 System maintenance “Maintenance” allows the operations of system functions which include “System information” to show the system version and details, “Event log” to view system event logs to record critical events, “Upgrade” to the latest firmware, “Firmware synchronization”...
Status description: • Normal: Dual controllers are in a normal state. Degraded: One controller fails or has been plugged out. Lockdown: The firmware of the two controllers is different, or the memory size of the two controllers is different. Single: Single controller mode. 4.6.2 Event log “Event log”...
The event log is displayed in reverse order which means the latest event log is on the first / top page. The event logs are actually saved in the first four hard drives; each hard drive has one copy of event log. For one system, there are four copies of event logs to make sure users can check event log any time when there are failed disks.
4.6.3 Upgrade “Upgrade” can upgrade controller firmware, JBOD firmware, change operation mode, and activate Replication license. Figure 188.8.131.52 Please prepare new controller firmware file named “xxxx.bin” in local hard drive, then click “Browse” to select the file. Click “Confirm”, it will pop up a warning message, click “OK”...
master’s, no matter whether the firmware version of the slave controller is newer or older than the master’s. In normal status, the firmware versions in controllers 1 and 2 are the same, as in the figure below. Figure 184.108.40.206 4.6.5 Reset to factory default “Reset to factory default”...
Import: Import all system configurations excluding volume configuration. Export: Export all configurations to a file. Caution “Import” will import all system configurations excluding volume configuration; the current configurations will be replaced. 4.6.7 Reboot and shutdown “Reboot and shutdown” can “Reboot” and “Shutdown” the system. Before power off, it’s better to execute “Shutdown”...
However, rebuilding onto the same failed disk may impact customer data if the status of the disk is unstable. D-LINK suggests that customers not rebuild onto a failed disk, for better data protection.
Rebuild operation description: • RAID 0: Disk striping. No protection for data. The RG fails if any hard drive fails or is unplugged. RAID 1: Disk mirroring over 2 disks. RAID 1 allows one hard drive to fail or be unplugged. A new hard drive needs to be inserted into the system for the rebuild to be completed.
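The rebuild rules above reduce to a fault-tolerance count per RAID level. The table below is a summary sketch (standard RAID definitions, not firmware behavior) of how many simultaneous drive losses each level survives before the RG fails:

```python
# Summary sketch: how many drive failures each RAID level tolerates
# before the RAID group fails (per standard RAID definitions).
FAULT_TOLERANCE = {
    "RAID 0": 0,   # no protection; RG fails on any drive loss
    "RAID 1": 1,   # mirroring over 2 disks
    "RAID 5": 1,   # single parity
    "RAID 6": 2,   # double parity
}

def rg_survives(level: str, failed_drives: int) -> bool:
    return failed_drives <= FAULT_TOLERANCE[level]

assert rg_survives("RAID 5", 1)        # degraded but alive; rebuild possible
assert not rg_survives("RAID 5", 2)    # second failure during rebuild is fatal
```

This is also why spare disks matter: the sooner a rebuild starts after the first failure, the smaller the window in which a second failure can exceed the level's tolerance.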
5.2 RG migration To migrate the RAID level, please follow the procedures below. Select “/ Volume configuration / RAID group”. Check the gray button next to the RG number; click “Migrate”. Change the RAID level by clicking the down arrow to “RAID 5”. A pop-up will indicate that there are not enough HDDs to support the new RAID level setting; click “Select PD”...
5.3 VD extension To extend VD size, please follow the procedures. Select “/ Volume configuration / Virtual disk”. Check the gray button next to the VD number; click “Extend”. Change the size. The size must be larger than the original, and then click “OK” to start extension.
any unfortunate reason (e.g. virus attack, data corruption, human error and so on). Since the snap VD is allocated within the same RG in which the snapshot is taken, we suggest reserving 20% of the RG size or more for snapshot space. Please refer to the following figure for the snapshot concept.
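The 20% guideline above is simple arithmetic; the helper below just makes it explicit (a starting point for sizing, not a value the system enforces):

```python
# The suggested snapshot-space reservation: 20% of the RG size or more.
def suggested_snapshot_space_gb(rg_size_gb: float, ratio: float = 0.20) -> float:
    return rg_size_gb * ratio

print(suggested_snapshot_space_gb(1000))  # 200.0 GB reserved for a 1000 GB RG
```

Workloads with a high change rate consume snapshot space faster, so a larger ratio may be appropriate when many writes are expected between snapshots.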
Figure 220.127.116.11 Check the gray button next to the Snapshot VD number; click “Expose”. Enter a capacity for snapshot VD. If size is zero, the exposed snapshot VD is read only. Otherwise, the exposed snapshot VD can be read / written, and the size is the maximum capacity for writing.
Figure 18.104.22.168 (Figure 22.214.171.124: It will take snapshots every month, and keep the last 32 snapshot copies.) Tips Daily snapshot will be taken at every 00:00. Weekly snapshot will be taken every Sunday 00:00. Monthly snapshot will be taken every first day of month 00:00.
5.4.4 Snapshot constraint The D-LINK snapshot function applies the Copy-on-Write technique to the UDV/VD and provides a quick and efficient backup methodology. When taking a snapshot, no data is copied at first; only when a data modification request comes in does the snapshot copy the original data to the snapshot space and then overwrite the original data with the new changes.
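The copy-on-write behavior just described can be captured in a minimal sketch (a conceptual model only, not the storage firmware): nothing is copied when the snapshot is taken, and the original block is saved to snapshot space only on the first write that would overwrite it.

```python
# Minimal copy-on-write snapshot model.
class CowSnapshot:
    def __init__(self, volume: dict):
        self.volume = volume               # live volume (block -> data)
        self.saved = {}                    # snapshot space; empty at creation

    def write(self, block: int, data: str):
        if block not in self.saved:        # first modification after snapshot:
            self.saved[block] = self.volume.get(block)  # copy original out
        self.volume[block] = data          # then overwrite with new data

    def read_snapshot(self, block: int):
        # Unmodified blocks are read from the live volume; modified ones
        # come from the snapshot space.
        return self.saved.get(block, self.volume.get(block))

vol = {0: "old"}
snap = CowSnapshot(vol)
snap.write(0, "new")
assert vol[0] == "new" and snap.read_snapshot(0) == "old"
```

This also explains the constraint the section goes on to describe: snapshot space is consumed in proportion to how much of the volume is modified after the snapshot, not to the volume's total size.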
On Linux and UNIX platforms, a command named sync can be used to make the operating system flush data from the write cache onto disk. For the Windows platform, Microsoft also provides a sync tool, which does exactly the same thing as the sync command in Linux/UNIX.
When a snapshot has been rolled back, the snapshots which are earlier than it will also be removed. The remaining snapshots will be kept after the rollback. If a snapshot has been deleted, the other snapshots which are earlier than it will also be deleted. The space occupied by these snapshots will be released after deleting.
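The dependency rule above (rolling back to or deleting a snapshot removes everything older than it) can be sketched over an ordered snapshot list. This is an illustrative model of the rule, not the system's internal bookkeeping:

```python
# Sketch of the snapshot dependency rule: keep the target and everything
# newer; all snapshots older than the target are removed.
def after_rollback(snapshots, target):
    """snapshots are listed oldest first."""
    idx = snapshots.index(target)
    return snapshots[idx:]

snaps = ["mon", "tue", "wed", "thu"]   # oldest -> newest
assert after_rollback(snaps, "wed") == ["wed", "thu"]
```

The rule follows from copy-on-write: older snapshots depend on data held on behalf of newer ones, so they cannot survive once a newer point-in-time is rolled back or deleted.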
Figure 5.6.1 Create two virtual disks (VD) “SourceVD_R5” and “TargetVD_R6”. The raid type of backup target needs to be set as “BACKUP”. Figure 5.6.2 Here are the objects, a Source VD and a Target VD. Before starting clone process, it needs to deploy the VD Clone rule first.
Figure 5.6.4 Snapshot space: Figure 5.6.5 This setting is the ratio of the source VD to the snapshot space. The default ratio is 2 to 1, which means that when the clone process starts, the system will automatically use free RG space to create a snapshot space whose capacity is double that of the source VD. Threshold: (The setting will be effective after enabling schedule clone) ...
Caution The default snapshot space allocated by the IP SAN storage is two times the size of the source virtual disk. That is D-LINK’s suggested best value. If the user manually sets the snapshot space lower than the default value, the user takes the risk that the snapshot space may not be enough and the VD clone job will fail.
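The default allocation in the caution above is just a fixed 2:1 ratio; the helper below spells it out (illustrative arithmetic only):

```python
# Default snapshot space allocated when a VD clone starts:
# twice the capacity of the source virtual disk.
def default_clone_snapshot_space_gb(source_vd_gb: float) -> float:
    return 2 * source_vd_gb

print(default_clone_snapshot_space_gb(500))  # 1000.0 GB for a 500 GB source VD
```

So before scheduling a clone, it is worth checking that the RG has at least twice the source VD's size free, or the automatic allocation cannot be satisfied.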
Figure 5.6.9 Now the clone target “TargetVD_R6” has been set. Figure 5.6.10 Click “Start clone” and the clone process will start. Figure 5.6.11 The default setting will automatically create a snapshot space whose capacity is double the size of the VD space. Before starting the clone, the system will initiate the snapshot space.
Figure 5.6.12 10. After initiating the snapshot space, it will start cloning. Figure 5.6.13 11. Click “Schedule clone” to set up the clone by schedule. Figure 5.6.14 12. There are “Set Clone schedule” and “Clear Clone schedule” in this page. Please remember that “Threshold”...
Figure 5.6.15 Running out of snapshot space while cloning a VD • While the clone is processing, if the incremental data of this VD exceeds the snapshot space, the clone will complete but the clone snapshot will fail. The next time the clone is started, a warning message “This is not enough of snapshot space for the operation” will appear.
(Flowchart: VD clone behavior when snapshot space runs out. A scheduled clone checks the threshold every hour, automatically deletes the old clone snapshot, and automatically restarts an hour later. A clone started manually by the user requires the user to release the snapshot space and restart the clone, after which the clone process starts with a full copy.)
5.7.1 Connecting JBOD The D-LINK controller supports SAS JBOD expansion to connect extra SAS dual-controller JBODs. When a connected dual JBOD is detected, it will be displayed in “Show PD for:” of “/ Volume configuration / Physical disk”, for example, Local, JBOD 1 (D-LINK DSN-6020), JBOD 2 (D-LINK DSN-6020), …etc.
Figure 126.96.36.199 Figure 188.8.131.52 “/ Enclosure management / S.M.A.R.T.” can display S.M.A.R.T. information of all PDs, including Local and all SAS JBODs. Figure 184.108.40.206 (Figure 220.127.116.11: Disk S.M.A.R.T. information of JBOD 1, although S.M.A.R.T. supports SATA disk only.)
The following table lists the maximum number of JBODs and the maximum number of HDDs that can be cascaded with different chassis (dual controllers + dual JBOD: DSN-6120 RAID storage system with DSN-6020 JBODs). 5.7.2 Upgrade firmware of JBOD To upgrade the firmware of a JBOD, please follow the procedures.
5.8 MPIO and MC/S These features come from the iSCSI initiator. They can be set up from the iSCSI initiator to establish redundant paths for sending I/O from the initiator to the target. MPIO: In Microsoft Windows server based systems, the Microsoft MPIO driver allows initiators to log in to multiple sessions to the same target and aggregates the duplicate devices into a single device.
Figure 5.8.2 Difference: MC/S is implemented at the iSCSI level, while MPIO is implemented at a higher level. Hence, all MPIO infrastructure is shared among all SCSI transports, including Fibre Channel, SAS, etc. MPIO is the most common usage across all OS vendors. The primary difference between the two is the level at which the redundancy is maintained.
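What "aggregating duplicate devices into a single device" buys can be illustrated with a toy path selector (this is a conceptual sketch, not the Microsoft MPIO DSM or any D-LINK component): I/O can be spread across paths in round-robin fashion, and a failed path is simply skipped.

```python
# Toy multipath selector: round-robin over healthy paths, skipping
# any path marked down (conceptual illustration only).
from itertools import cycle

class MultiPath:
    def __init__(self, paths):
        self.paths = {p: True for p in paths}   # path -> healthy?
        self._rr = cycle(paths)

    def pick(self):
        for _ in range(len(self.paths)):
            p = next(self._rr)
            if self.paths[p]:
                return p
        raise IOError("all paths down")

mp = MultiPath(["ctrl1:lan1", "ctrl2:lan1"])    # hypothetical path names
mp.paths["ctrl1:lan1"] = False                  # simulate a link failure
assert mp.pick() == "ctrl2:lan1"                # I/O fails over transparently
```

Whether this logic lives at the iSCSI layer (MC/S) or above the SCSI layer (MPIO) is exactly the difference the paragraph above describes.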
LACP packets to the peer. Theoretically, an LACP port can be defined as active or passive. D-LINK IP SAN Storage implements it in active mode, which means that the LACP port sends LACP protocol packets automatically. Please make sure to use the same configuration on both the D-LINK controller and the gigabit switch.
Figure 5.9.2 Caution Before using trunking or LACP, the gigabit switch must support trunking or LACP and have it enabled. Otherwise, the host cannot connect the link with the storage device. 5.10 Dual controllers (only for DSN-6410 with DSN-640) 5.10.1 Perform I/O Please refer to the following topology and have all the connections ready. To perform I/O on dual controllers, the server/host should set up MPIO.
Figure 18.104.22.168 5.10.2 Ownership When creating an RG, it will be assigned a preferred owner; the default owner is controller 1. To change the RG ownership, please follow these procedures: Select “/ Volume configuration / RAID group”. Check the gray button next to the RG name; click “Set preferred owner”. The ownership of the RG will be switched to the other controller.
Figure 22.214.171.124 (Figure 126.96.36.199: The RG ownership is changed to the other controller.) 5.10.3 Controller status There are four statuses described on the following. It can be found in “/ System maintenance / System information”. Normal: Dual controller mode. Both of controllers are functional. Degraded: Dual controller mode.
5.11 Replication Replication function will help users to replicate data easily through LAN or WAN from one IP SAN storage to another. The procedures of Replication are on the following: 1. Copy all data from source VD to target VD at the beginning (full copy). 2.
3. If you want the replication port to be in a specific VLAN, you may assign a VLAN ID to the replication port. The setting will automatically be duplicated to the other controller. Create a backup virtual disk on the target IP SAN storage •...
Figure 5.11.4 Create replication job on the source IP SAN storage • 1. If the license key is activated on the IP SAN storage correctly, a new Replication tab will be added on the Web UI. Click “Create” to create a new replication job. Figure 5.11.5 2.
Figure 5.11.7 4. The Replication function uses the standard iSCSI protocol for data replication. The user has to log on to the iSCSI node to create the iSCSI connection for the data transmission. Enter the CHAP information if necessary and select the target node to log on to. Click “Next” to continue.
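For reference, CHAP authentication (as used on iSCSI data ports) follows RFC 1994: the responder proves knowledge of the shared secret by returning MD5(identifier || secret || challenge), so the secret itself never crosses the wire. The sketch below shows that computation; the identifier, secret, and challenge values are made up for illustration.

```python
import hashlib

def chap_response(identifier, secret, challenge):
    """RFC 1994 CHAP: response = MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Both sides compute the same 16-byte digest from the shared secret,
# so the target can verify the initiator without receiving the secret.
resp_initiator = chap_response(1, b"shared-secret", b"\x01\x02\x03\x04")
resp_target = chap_response(1, b"shared-secret", b"\x01\x02\x03\x04")
print(resp_initiator == resp_target)  # True
```

A wrong secret produces a different digest, and the target rejects the login.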
Figure 5.11.9 6. A new replication job is created and listed on the Replication page. Figure 5.11.10 Run the replication job • 1. Click the “OP.” button on the replication job to open the operation menu. Click “Start” to run the replication job. Figure 5.11.11 2.
Figure 5.11.12 3. The user can monitor the replication job from the “Status” information; progress is expressed as a percentage. Figure 5.11.13 Create multi-path on the replication job • 1. Click “Create multi-path” in the operation menu of the replication job. Figure 5.11.14 2.
Figure 5.11.15 3. Select the iSCSI node to log on and click “Next”. Figure 5.11.16 4. Choose the same target virtual disk and click “Next”.
Figure 5.11.17 5. A new target will be added in this replication job as a redundancy path. Figure 5.11.18 Configure the replication job to run by schedule • 1. Click “Schedule” in the operation menu of the replication job. Figure 5.11.19...
Configure the snapshot space • The Replication function uses D-LINK's snapshot technique to help users replicate data without stopping access to the source virtual disk. If the snapshot space is not configured on the source virtual disk in advance, the IP SAN storage will allocate snapshot space for the source virtual disk automatically when the replication job is created.
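Why a snapshot lets replication proceed while the host keeps writing can be shown with a copy-on-write sketch. This is conceptual Python, not D-LINK firmware; the class and block values are hypothetical, but the mechanism (preserve the old block on first overwrite, serve the point-in-time image from the preserved copies) is the standard copy-on-write technique the text describes.

```python
# Conceptual copy-on-write snapshot: the replication job reads a stable
# point-in-time image while the host keeps writing to the live VD.
class SnapshotVD:
    def __init__(self, blocks):
        self.blocks = blocks          # live data the host reads/writes
        self.snap = {}                # original blocks, saved on first write

    def write(self, i, data):
        # Copy-on-write: preserve the old block before overwriting it.
        if i not in self.snap:
            self.snap[i] = self.blocks[i]
        self.blocks[i] = data

    def snapshot_read(self, i):
        # The replication job sees the point-in-time image.
        return self.snap.get(i, self.blocks[i])

vd = SnapshotVD(["a", "b", "c"])
vd.write(1, "B-new")                  # host write after the snapshot
print(vd.snapshot_read(1))            # b      (snapshot image unchanged)
print(vd.blocks[1])                   # B-new  (live data updated)
```

The snapshot space holds those preserved original blocks, which is why it must be sized relative to how much the source changes during replication.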
Figure 5.11.21 There are three settings in the Replication configuration menu. Figure 5.11.22 “Snapshot space” specifies the ratio of snapshot space automatically allocated to the source virtual disk when the snapshot space has not been configured in advance. The default ratio is 2 to 1.
5.12 VLAN A VLAN (Virtual Local Area Network) is a logical grouping mechanism implemented in switch software rather than hardware. VLANs are collections of switch ports that comprise a single broadcast domain, and they allow network traffic to flow more efficiently within these logical subgroups.
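The VLAN ID entered in the steps below is carried on the wire in the IEEE 802.1Q tag: a 16-bit tag control field holding 3 bits of priority, 1 DEI bit, and a 12-bit VLAN ID (valid IDs 1-4094). The sketch below is illustrative only; the function name is ours, not from any D-LINK tool.

```python
# Illustrative encoding of the IEEE 802.1Q tag control information (TCI)
# field that carries a VLAN ID: priority (3 bits) | DEI (1 bit) | ID (12 bits).
def make_tci(vlan_id, priority=0, dei=0):
    if not 1 <= vlan_id <= 4094:      # 0 and 4095 are reserved
        raise ValueError("VLAN ID must be 1-4094")
    return (priority << 13) | (dei << 12) | vlan_id

tci = make_tci(66)          # VLAN ID 66, as in this section's example
print(hex(tci))             # 0x42
print(tci & 0xFFF)          # 66 -- a switch extracts the low 12 bits
```

This is why the UI rejects IDs outside 1-4094, and why the same VLAN ID must be configured on the switch ports the iSCSI ports connect to.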
Figure 5.12.2 4. VLAN ID 66 for LAN2 is set properly. Figure 5.12.3 Assign VLAN ID to LAG (Trunking or LACP) • 1. After creating the LAG, press the “OP” button next to the LAG, and select “Set VLAN ID”. Figure 5.12.4 2. Enter the VLAN ID and click OK. The VLAN ID of LAG 0 is properly set.
Figure 5.12.5 3. If iSCSI ports are assigned a VLAN ID before the aggregation is created, the aggregation will remove the VLAN ID. You need to repeat steps 1 and 2 to set the VLAN ID for the aggregation group. Assign VLAN ID to replication port •...
Chapter 6 Troubleshooting 6.1 System buzzer The system buzzer features are listed below: The system buzzer alarms for 1 second when the system boots up successfully. The system buzzer alarms continuously when an error occurs. The alarm can be muted manually, and it stops automatically once the error is resolved.
ERROR  SATA PRD mem fail  Failed to init SATA PRD memory manager
ERROR  SATA revision id fail  Failed to get SATA revision id
ERROR  SATA set reg fail  Failed to set SATA register
ERROR  SATA init fail  Core failed to initialize the SATA adapter
ERROR  SATA diag fail  SATA Adapter diagnostics failed...
RMS events •
Level  Type  Description
INFO  Console Login  <username> login from <IP or serial console> via Console
INFO  Console Logout  <username> logout from <IP or serial console> via Console
INFO  Web Login  <username> login from <IP> via Web UI
INFO  Web Logout  <username>...
ERROR  VD move failed  Failed to complete move of VD <name>.
INFO  RG activated  RG <name> has been manually activated.
INFO  RG deactivated  RG <name> has been manually deactivated.
INFO  VD rewrite started  Rewrite at LBA <address> of VD <name> starts.
INFO  VD rewrite finished  Rewrite at LBA <address>...
INFO  VD erase started  VD <name> starts erasing process.
Snapshot events •
Level  Type  Description
WARNING  Snap mem  Failed to allocate snapshot memory for VD <name>.
WARNING  Snap space overflow  Failed to allocate snapshot space for VD <name>.
WARNING  Snap threshold  The snapshot space threshold of VD <name>...
INFO  PD upgrade started  JBOD <name> PD [<string>] starts upgrading firmware process.
INFO  PD upgrade finished  JBOD <name> PD [<string>] finished upgrading firmware process.
WARNING  PD upgrade failed  JBOD <name> PD [<string>] upgrade firmware failed.
INFO  PD freed  JBOD <name> PD <slot> has been freed from RG <name>.
INFO  PD inserted  JBOD <name>...
System maintenance events •
Level  Type  Description
INFO  System shutdown  System shutdown.
INFO  System reboot  System reboot.
INFO  System console shutdown  System shutdown from <string> via Console UI
INFO  System web shutdown  System shutdown from <string> via Web UI
INFO  System button shutdown  System shutdown via power button...
Level  Type  Description
INFO  VD clone started  VD <name> starts cloning process.
INFO  VD clone finished  VD <name> finished cloning process.
WARNING  VD clone failed  The cloning in VD <name> failed.
INFO  VD clone aborted  The cloning in VD <name> was aborted.
INFO  VD clone set  The clone of VD <name>...
Appendix A. Certification list iSCSI Initiator (Software) • Software/Release Number Microsoft Microsoft iSCSI Software Initiator Release v2.08 Windows System Requirements: Windows 2000 Server with SP4 Windows Server 2003 with SP2 Windows Server 2008 with SP2 Linux The iSCSI initiators differ between Linux kernels. For Red Hat Enterprise Linux 3 (Kernel 2.4), install linux-iscsi-3.6.3.tar For Red Hat Enterprise Linux 4 (Kernel 2.6), use the built-in iSCSI initiator iscsi-initiator-utils in kernel 2.6.9...
Vendor Model Seagate Constellation, ST9500530NS, 500GB, 7200RPM, SATA 3.0Gb/s, 32M (F/W: SN02) B. Microsoft iSCSI initiator The following steps set up the Microsoft iSCSI Initiator. Please visit the Microsoft website for the latest iSCSI initiator. This example is based on Microsoft Windows Server 2008 R2. Connect •...
Figure B.2 Figure B.3 It can now connect to an iSCSI disk. MPIO • If running MPIO, please continue. Click the “Discovery” tab to connect the second path. Click “Discover Portal”. Enter the IP address or DNS name of the target.
Click the “Targets” tab, select the second path, and then click “Connect”. 10. Check the “Enable multi-path” checkbox. Then click “OK”. 11. Done; it can now connect to an iSCSI disk with MPIO. MC/S • 12. If running MC/S, please continue. 13. Select one target name, click “Properties…”. 14.
Figure B.10 Figure B.11 17. Select Initiator IP and Target portal IP, and then click “OK”. 18. Click “Connect”. 19. Click “OK”. Figure B.12 Figure B.13 20. Done.
Disconnect • 21. Select the target name, click “Disconnect”, and then click “Yes”. Figure B.14 22. Done; the iSCSI device disconnects successfully.
C. From single controller to dual controllers This SOP applies to upgrading from the DSN-6410 to the DSN-6410 with DSN-640. Before you do this, please make sure that the DSN-6410 is properly installed according to the manuals, especially the HDD trays. If you are using hard drives, you need to use HDD trays with either a multiplexer board or a bridge board to install your HDDs in order to...
Please follow the steps below to upgrade to dual controller mode. Step 1 Go to “Maintenance\System”. Copy the IP SAN storage serial number. Step 2 Go to “Maintenance\Upgrade” and paste the serial number into “Controller Mode” section. Select “Dual” as operation mode.
Step 3 Click “Confirm”. The system will ask you to shut down. Please shut down the IP SAN storage. Click OK.
Go to “Maintenance\Reboot and shutdown”. Click “Shutdown” to shut down the system. Click OK.
Step 4 Power off the DSN-6410. Insert the second controller into the IP SAN storage, and then power on the system. The IP SAN storage should now be in dual controller mode as the DSN-6410 with DSN-640. You may go to “Maintenance\System information” to verify. The IP SAN storage is now running in dual controller mode.