
Dell PowerEdge C5220 Test Report page 10



8. On the next screen, select Yes to accept this license.
9. The auxiliary server now has a Web interface for managing Hadoop. Select OK on the Next Step screen.
10. On the installer's Finish screen, select OK to exit the installer.
11. Using a Web browser, open the page http://<clouderamanager>:7180/, where <clouderamanager> is the IP
address of the auxiliary server.
12. Copy the SSH keys, previously generated in the Configuring the auxiliary server section, from the auxiliary server
to the local desktop running the Web browser.
13. Log onto the Cloudera manager with username admin and password admin.
14. On the Thank you for choosing Cloudera Manager and Cloudera's Distribution including Apache Hadoop (CDH)
screen, click Continue.
15. On the Register your CDH3 installation screen, click Skip Registration.
16. On the Specify hosts for your CDH3 cluster installation screen, enter a list of hostnames or IP addresses in the
text box at the bottom, such as 192.168.1.3[1-8]. Click Find Hosts.
17. The upper text box will contain a list of potential Hadoop hosts. Select the applicable nodes, and click Continue.
18. On the Provide SSH login credentials screen, select All hosts use the same public key, click Choose file for the
Public Key File, and browse to and select the SSH public-key file for the auxiliary server's public key (see step 12).
19. Repeat step 18 to select the auxiliary server's Private Key File.
20. Click Start Installation.
21. After the Hadoop installation on the nodes has finished, switch to a console session on the auxiliary server.
22. Format two of the remaining four disks on each node as part of the Hadoop file system.
# Note: in the following example, the contents of disks /dev/sdc and /dev/sdd
# will be destroyed
# partition /dev/sdc and /dev/sdd and create EXT4 file systems
for i in $(seq 31 38); do
  ssh 192.168.1.$i parted -s /dev/sdc mklabel gpt \; parted -s /dev/sdd mklabel gpt
  ssh 192.168.1.$i parted /dev/sdc mkpart primary '"1 -1"' \; \
    parted /dev/sdd mkpart primary '"1 -1"'
  ssh 192.168.1.$i mkfs.ext4 /dev/sdc1 \; mkfs.ext4 /dev/sdd1
done
# Modify fstab so that these file systems are mounted at boot time
for i in $(seq 31 38); do
  ssh 192.168.1.$i '(echo "/dev/sdc1 /dfs/d1 ext4 defaults,noatime 1 2"; \
    echo "/dev/sdd1 /dfs/d2 ext4 defaults,noatime 1 2") \
    >> /etc/fstab'
done
# Create the default Hadoop HDFS directories and mount the data file systems
for i in $(seq 31 38); do
  ssh 192.168.1.$i mkdir -p '/dfs/{d1,d2,n1,n2,s1,s2}' '/mapred/{local,jt}'
  ssh 192.168.1.$i mount /dfs/d1 \; mount /dfs/d2
done
# Restrict the HDFS directories and open the MapReduce directories
for i in $(seq 31 38); do
  ssh 192.168.1.$i chmod 700 '/dfs/{d1,d2,n1,n2,s1,s2}' \; \
    chmod 755 '/mapred/{local,jt}'
done
Dell PowerEdge C5220: Hadoop MapReduce Performance
A Principled Technologies test report 10
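The quoted brace patterns in the step 22 commands (such as '/dfs/{d1,d2,n1,n2,s1,s2}') are deliberately single-quoted so that each node's remote shell, not the local one, performs the expansion. As a quick sanity check, not part of the original procedure, the resulting paths can be previewed locally with echo (brace expansion assumes a bash shell):

```shell
# Preview the paths that the quoted brace patterns in step 22 expand to on
# each node. Brace expansion is a bash feature, so run this under bash.
echo /dfs/{d1,d2,n1,n2,s1,s2}
# -> /dfs/d1 /dfs/d2 /dfs/n1 /dfs/n2 /dfs/s1 /dfs/s2
echo /mapred/{local,jt}
# -> /mapred/local /mapred/jt
```

Because the patterns expand remotely, a single mkdir -p invocation per node creates all six HDFS directories and both MapReduce directories at once.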
