Monday 2 May 2016

RAID definition and configuration


What is RAID 

RAID stands for Redundant Array of Independent Disks. RAID is a method of combining several hard disks into one unit or group. It is a data storage virtualization technology that combines multiple disk components into a logical unit for the purposes of data redundancy or performance improvement. Depending on the level, it offers fault tolerance and higher throughput than a single hard disk or a group of independent hard disks. RAID levels 0, 1, 5, 6 and 10 are the most popular configurations.


RAID Levels and Configurations

RAID 0

RAID 0 splits data evenly across disks, resulting in higher data throughput. The performance of this configuration is extremely high, but the loss of any disk in the array results in the loss of all data. This level is commonly referred to as striping.
Minimum number of disks required: 2
Performance: High
Redundancy: Low
Efficiency: High

Advantages:

  • High performance
  • Easy to implement
  • Highly efficient (no parity overhead)

Disadvantages:

  • No redundancy
  • Limited business use cases due to no fault tolerance
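
To make the striping idea concrete, here is a minimal Python sketch of round-robin block placement (purely illustrative; a real controller stripes fixed-size chunks in firmware or driver code):

    # Minimal RAID 0 striping sketch: data blocks are spread round-robin over the disks.
    def stripe_blocks(blocks, num_disks):
        """Distribute data blocks across disks round-robin (RAID 0)."""
        disks = [[] for _ in range(num_disks)]
        for i, block in enumerate(blocks):
            disks[i % num_disks].append(block)
        return disks

    data = [f"block{i}" for i in range(8)]
    for disk_id, contents in enumerate(stripe_blocks(data, 2)):
        print(f"disk {disk_id}: {contents}")
    # disk 0: ['block0', 'block2', 'block4', 'block6']
    # disk 1: ['block1', 'block3', 'block5', 'block7']
    # Every disk holds a slice of the data, so a single failed disk breaks the whole set.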

RAID 1

RAID 1 writes all data to two or more disks for 100% redundancy: if either disk fails, no data is lost. Compared to a single disk, RAID 1 tends to be fast on reads and slow on writes. This is a good entry-level redundant configuration. However, since an entire disk is a duplicate, the cost per MB is high. This level is commonly referred to as mirroring.
Minimum number of disks required: 2
Performance: Average
Redundancy: High
Efficiency: Low

Advantages:

  • Fault tolerant
  • Easy to recover data in case of disk failure
  • Easy to implement

Disadvantages:

  • Highly inefficient (100% storage overhead for the mirror copy)
  • Not scalable (becomes very costly as the number of disks increases)
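
As a counterpart to the striping sketch above, this small Python illustration (again just a model, not a driver) shows why RAID 1 halves usable capacity with two disks and why recovery is trivial: every disk holds a full copy of the data.

    # Minimal RAID 1 mirroring sketch: every disk receives a full copy of the data.
    def mirror_blocks(blocks, num_disks=2):
        """Each disk gets an identical copy (RAID 1)."""
        return [list(blocks) for _ in range(num_disks)]

    data = [f"block{i}" for i in range(4)]
    disks = mirror_blocks(data, 2)

    # Simulate losing disk 0: disk 1 still holds everything.
    surviving = disks[1]
    assert surviving == data
    print("recovered:", surviving)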

RAID 5

RAID 5 stripes data at the block level across several disks, with parity distributed equally among the disks. The parity information allows recovery from the failure of any single disk. Write performance is rather quick, but because parity data must be skipped on each disk during reads, reads are slightly slower. The low ratio of parity to data means low redundancy overhead.
Minimum number of disks required: 3
Performance: Average
Redundancy: High
Efficiency: High

Advantages:

  • Fault tolerant
  • High efficiency
  • Best choice in multi-user environments which are not write performance sensitive

Disadvantages:

  • Disk failure has a medium impact on throughput
  • Complex controller design
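
The parity trick behind RAID 5 is plain XOR: the parity block of each stripe is the XOR of its data blocks, so any single missing block can be recomputed from the remaining ones. The following Python sketch shows the principle only; the rotating block placement of a real RAID 5 layout is ignored here.

    # RAID 5 parity sketch: parity = XOR of the data blocks in a stripe,
    # so any single lost block can be rebuilt by XOR-ing the survivors.
    def xor_blocks(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def make_parity(blocks):
        parity = blocks[0]
        for block in blocks[1:]:
            parity = xor_blocks(parity, block)
        return parity

    stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks of one stripe
    parity = make_parity(stripe)           # parity block stored on another disk

    # The disk holding the second block fails: rebuild it from the rest plus parity.
    rebuilt = make_parity([stripe[0], stripe[2], parity])
    assert rebuilt == stripe[1]
    print("rebuilt block:", rebuilt)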

RAID 6

RAID 6 is an upgraded version of RAID 5: data is striped at the block level across several disks, with double parity distributed among the disks. Whereas RAID 5 can recover from the failure of any single disk, the double parity allows RAID 6 to survive the failure of any two disks. This additional redundancy comes at the cost of lower write performance (read performance is the same), while the redundancy overhead remains low.
Minimum number of disks required: 4
Performance: Average
Redundancy: High
Efficiency: High

Advantages:

  • Fault tolerant – increased redundancy over RAID 5
  • High efficiency
  • Remains a great option in multi-user environments which are not write performance sensitive

Disadvantages:

  • Write performance penalty over RAID 5
  • More expensive than RAID 5
  • Disk failure has a medium impact on throughput
  • Complex controller design

RAID 0+1

RAID 0+1 is a mirrored (RAID 1) array whose segments are striped (RAID 0) arrays. This configuration combines the security of RAID 1 mirroring with the performance boost of RAID 0 striping.
Minimum number of disks required: 4
Performance: Very High
Redundancy: High
Efficiency: Low

Advantages:

  • Fault tolerant
  • Very high performance

Disadvantages:

  • Expensive
  • High Overhead
  • Very limited scalability

RAID 10

RAID 10 is a striped (RAID 0) array whose segments are mirrored (RAID 1) arrays. It is a popular configuration for environments where both high performance and security are required. In terms of performance it is similar to RAID 0+1, but it has superior fault tolerance and rebuild performance.
Minimum number of disks required: 4
Performance: Very High
Redundancy: Very High
Efficiency: Low

Advantages:

  • Extremely high fault tolerance – under certain circumstances, a RAID 10 array can sustain multiple simultaneous disk failures
  • Very high performance
  • Faster rebuild performance than 0+1

Disadvantages:

  • Very expensive
  • High overhead
  • Limited scalability

RAID 50

RAID 50 combines RAID 5 parity with RAID 0 striping: multiple RAID 5 arrays are striped together. Although higher in cost and complexity, its performance and fault tolerance are superior to RAID 5.
Minimum number of disks required: 6
Performance: High
Redundancy: High
Efficiency: Average

Advantages:

  • Higher fault tolerance than RAID 5
  • Higher performance than RAID 5
  • Higher efficiency than RAID 5

Disadvantages:

  • Very expensive
  • Very complex / difficult to implement

RAID 60

RAID 60 combines RAID 6 double parity with RAID 0 striping: multiple RAID 6 arrays are striped together. Although higher in cost and complexity, its performance and fault tolerance are superior to RAID 6.
Minimum number of disks required: 8
Performance: High
Redundancy: High
Efficiency: Average

Advantages:

  • Higher fault tolerance than RAID 6
  • Higher performance than RAID 6
  • Higher efficiency than RAID 6

Disadvantages:

  • Very expensive
  • Very complex / difficult to implement
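
To wrap up the comparison, here is a rough Python helper, my own simplification based on the level descriptions above (equally sized disks, two-way mirrors, evenly split nested groups), that estimates usable capacity for the levels covered in this post:

    # Rough usable-capacity estimates for equally sized disks, following the
    # level descriptions above. Nested levels (10, 50, 60) assume evenly split groups.
    def usable_capacity(level, disks, disk_tb, groups=2):
        if level == "0":
            return disks * disk_tb                  # striping only, no redundancy
        if level == "1":
            return disk_tb                          # every disk mirrors the same data
        if level == "5":
            return (disks - 1) * disk_tb            # one disk's worth of parity
        if level == "6":
            return (disks - 2) * disk_tb            # two disks' worth of parity
        if level == "10":
            return (disks // 2) * disk_tb           # striped two-way mirrors
        if level in ("50", "60"):
            per_group = disks // groups             # RAID 5/6 sets striped together
            parity = 1 if level == "50" else 2
            return groups * (per_group - parity) * disk_tb
        raise ValueError(f"unknown RAID level: {level}")

    for level, disks in [("0", 2), ("1", 2), ("5", 3), ("6", 4),
                         ("10", 4), ("50", 6), ("60", 8)]:
        cap = usable_capacity(level, disks, disk_tb=3)
        print(f"RAID {level:>3} with {disks} x 3 TB disks -> ~{cap} TB usable")
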
- See more at: http://www.ittechpoint.com/2015/04/raid-definition-and-configuration.html

Installing VMware ESXi 5.5 on an HP ProLiant MicroServer Gen8

Introduction

In this tutorial we will set up VMware ESXi 5.5 as a test lab for serving virtual machines on a small-scale server system from Hewlett Packard, a ProLiant MicroServer G8, aka Gen8.
Two methods are presented below. I strongly advise you to go with the second variant: the first one, which should have worked, ended in a Red Screen of Death!

Hardware modifications

The base machine is already a nice one, but its stock hardware is not powerful enough, so I upgraded the server with additional components right when buying the Gen8. I therefore went for the smallest available model of the HP ProLiant MicroServer Gen8 line, the G1610T.
In addition to the G1610T, I bought the following hardware:
  • 1x Intel Xeon E3-1265L v2, a quad-core processor (2.5 GHz, Socket 1155, L3 cache, 45 W, BX80637E31265L2) with a turbo frequency of up to 3.5 GHz
  • 2x Kingston KTH-PL316E/8G DDR3 RAM modules with 8 GB each (1600 MHz, PC3-12800, ECC!)
  • 2x Seagate Barracuda ST3000DM001 SATA III hard drives with 3 TB each (7200 rpm, 64 MB cache)
  • 1x USB thumb drive (found one in the drawer…)
The main pros of this server are:
  • It includes a cheap but capable integrated disk controller, an HP Smart Array B120i with a throughput of 91.4K IOPS.
  • The form factor! – It is in fact an Ultra Micro Tower.
  • Less than 150 W power consumption – even with four HDDs it stays below 100 W!
  • Two 1Gb Ethernet ports and one extra dedicated iLO 4 network port.
  • Internal microSD and USB ports that can be used for additional drives.
I will not cover the hardware installation here in detail, but just link to other pages that mention working CPU/RAM upgrades. To date I have not seen anyone who managed to get 32 GB of RAM working in the Gen8 server, which would be a nice opportunity for hosting. But I am quite sure that, with wider availability of 16 GB ECC memory modules, someone will give it a try and make it work. HP might also provide some kind of BIOS update to officially support more total memory.
See the following pages for more information about the servers:

Additional preparatory work

Configure HP Integrated Lights-Out (iLO)

You should set up iLO before the actual installation, as this will make your server life easier later on, and of course because this tutorial makes use of iLO. That does not mean you cannot do without iLO, but I suggest you give it a try. Just check my previous post about “iLO on the HP ProLiant Microserver Gen8” if you need any help regarding iLO.

Download the VMware ISO image

Download the current HP-customized version of ESXi from the VMware web page. You will be forwarded from HP's to VMware's web page and have to log in or create an account during this process. The ISO's name should be similar to: VMware-ESXi-5.5.0-Update1-1746018-HP-5.74.27-Jun2014.iso

Upgrading to the latest available firmware

My server was delivered with version 1.3 of HP Intelligent Provisioning. At the time of writing this tutorial, the current version is already 1.5, so we will cover the firmware upgrade here as well.
Open the Remote Console, found under “Remote Console” > “Java Integrated Remote Console” (Java IRC), which provides remote access to the system through iLO.
HP Smart Deployment
Open the “Maintenance” section on the right and locate “Firmware Update” there.
Maintenance section
Click on “Firmware-Update” here and install all available updates. This process will take a while.

Creating a disk array

In the “HP Smart Storage Administrator” (SSA), also available in the Maintenance section, you have to create a hard disk array. I went for 2×3 TB HDDs as RAID 1 here. As I am not planning to use the disk array as a boot volume but for hosting virtual machines, and will install ESXi on the USB thumb drive, we can exceed the 2 TiB limit here.
Smart Storage Administrator
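
A quick side note on the sizes you will see later: disk vendors count a 3 TB drive in decimal terabytes (10^12 bytes), while the installer reports binary tebibytes (2^40 bytes), which is why the 2×3 TB RAID 1 array shows up as roughly 2.73 TiB. A one-line Python check:

    # Why a "3 TB" disk (and a RAID 1 of two of them) is reported as ~2.73 TiB:
    size_bytes = 3 * 10**12                  # vendor "3 TB" = 3 * 10^12 bytes
    print(f"{size_bytes / 2**40:.2f} TiB")   # -> 2.73 TiB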

Installation of VMware ESXi 5.5
(Method 1 – a non-working solution!)

After installing the additional hardware, upgrading iLO and creating the disk array, we can now start the actual installation of our hypervisor system.
  1. Log in to your iLO web interface again and open the remote console mentioned above.
    Server iLO Management
    HP Intelligent Provisioning

  2. This time, choose “Config and Install”
    “Config and Install” or “Service”?
  3. In the next step, choose “minimum power consumption”, “skip update” and “Keep Array configuration”, as we already took care of these settings before.
  4. On the next screen, choose “VMware ESXi/vSphere Image”, “Manual Install” and “Drive media”.
    Install from Image file
  5. From the “Virtual Drives” in your Remote Console, just check “Image File CD-Rom/DVD” and select the downloaded ISO file there.
    Mounting media
  6. Confirming the settings from step 4 will install the VMware ESXi server on your HP system. This takes a few minutes to complete.
  7. After the installation the server will reboot, which takes another three minutes or more. You can stay connected to iLO during that time!
  8. The server will boot directly into ESXi, displaying the IP address it is reachable at.
This is how it should work! In my case it did not work properly: after rebooting, I ended up on a Red Screen of Death. So I went for the second variant, which is explained in the next section.

Installation of VMware ESXi 5.5
(Method 2 – the working solution!)

If you encounter errors with the first method, please try the following variant. It might even be the better one, as you are actually performing a manual installation.

Choosing your boot device

There are several ways to boot the installation media; I will outline just two of them.
  1. You may connect a USB thumb drive preloaded with the installation media, as explained in the very short article “Preparing ESXi boot image for USB Flash drive”. Just plug it in.
  2. Choose to add the .iso file as a virtual drive to the iLO remote console. Go to “Virtual Drives” > “Image File CD-Rom/DVD” and select the installation .iso file you downloaded before.
    Choosing an image file as virtual drive
Either way should work for the following install process.

The main installation process

  1. Boot your server and hit “F11” to go to the boot menu.
    HP ProLiant Boot Screen
  2. In the menu, you can choose
    1. “USB DriveKey”, if you prepared a USB thumb drive with the ISO file beforehand.
      For this option, you might take a look at the “Red Screen of Death” information to select the right boot device (it is in fact the first external drive!).
    2. “One Time Boot to CD-ROM”, if you added the virtual drive before.
    Boot Menu
  3. After selecting your device, the pre-installation screen is shown. If you made it this far, the rest of the process should work properly. Just hit “Enter” to proceed, or wait a couple of seconds for the installer to continue on its own.
    Pre-Install Screen
    Loading the necessary data for ESXi installation
  4. In the next step, you will be welcomed to the “Installation of ESXi 5.5.0”. Just hit “Enter” to proceed.
    Installation of ESXi starts here
  5. Accept the EULA by pressing “F11”.
    Accept the EULA
  6. Now we have to select a disk to install the ESXi system to. Two types of storage devices are listed: local and remote ones. The local devices include the following volumes:
    1. HP Logical Volume with 2.73 TiB (on the RAID controller)
    2. USB 2.0 Flash Disk – the internal USB thumb drive with 3.73 GiB, where we will install ESXi
    3. An HP iLO device “LUN 00 Media 0”, which is the virtual CD-ROM drive we mounted the ISO to.
    As we will use the RAID logical volume as data storage for the virtual machines later, we select the USB 2.0 Flash Disk instead.
    Choose the destination
  7. Now we choose our language; in my case this is German.
    Language settings
  8. The installer asks for a root password. Just choose one here – it is suggested to add a new account later in the vSphere Client after the installation.
    Providing root password
  9. With “F11” we can now start the installation; cancel with “ESC” or go back to make changes with “F9”. Double-check that you chose the right device, then proceed.
    Confirm Install
  10. The installation itself took about 10-12 minutes. Just wait and go for a coffee.
    Installation in progress
  11. Next, you will see that the installation completed successfully. At this stage you should remove the installation media: either unplug the external USB thumb drive or unmount the virtual CD-ROM drive, then press “Enter” to reboot.
    Installation complete

Post-installation

First boot

The server will reboot, which takes a couple of minutes. When the “Starting ESXi Server 5.5” screen is shown, we are almost done. When the last (yellow) screen is presented, the server is ready for the deployment of virtual machines. You will see the URL you can access it at in the middle left of the screen.
Rebooting server
Starting ESXi Server 5.5
ESXi Server ready

Installing vSphere Client

Open the presented URL. It offers a link to download the vSphere Client. Download and install the vSphere Client, then open it and enter the server's IP address. The user is “root” and the password is the one you entered during the installation process.

Creating a Data Storage

After logging in to the server, you need to add a datastore. The following message (in my case in German) should look similar for you:
“The ESXi host does not provide persistent storage”, and a bit below, “To add storage, click here”, as shown in the following picture.
Create a storage device
Choose VMFS-5 during this process to get support for volumes larger than 2 TB.
Add a new storage on the RAID controller
Choose VMFS-5
You are now able to work with your ESXi server. Have fun with your test-lab virtualization server!
- See more at: http://blog.ittechpoint.com/2015/10/installing-vmware-esxi-55-on-hp-ProLiant-Microserver-Gen8.html