
Manage the new volume(s)

When you mount a brand-new iSCSI-based volume on your server, Windows treats it the same as if you had added a new hard drive to your computer. Take a look at this: Open Computer Management (Start | Right-click My Computer | Manage) and choose the Disk Management option. If the volume you are using is still blank — that is, newly created on your iSCSI target and does not contain data — Windows will pop up the Disk Initialization wizard (figure: The Windows Disk Initialization wizard). Note that Disk 1 is not yet initialized and has a size of 1,020 MB; this disk is a small target I created on my iSCSI host. An iSCSI-based volume follows the same rules as any other Windows volume. You can create this volume as basic or dynamic (although dynamic is not recommended for iSCSI) or even as a GPT (GUID partition table) volume, which supports volumes in excess of 2 TB. Just as is the case with any Windows volume, you need to initialize the new drive, create a partition, and format the volume.
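If you prefer to script this step, the same initialize/partition/format sequence can be done with diskpart. A minimal sketch, assuming the new iSCSI disk arrived as Disk 1 (as in the example above) and that the E: drive letter is free; check the list disk output before selecting anything:

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> create partition primary
DISKPART> format fs=ntfs quick label=iSCSI1
DISKPART> assign letter=E

Add convert gpt before create partition primary if the volume will exceed 2 TB. Note that the online disk and format subcommands are available in the Windows Server 2008 version of diskpart; on older versions, use Disk Management as described above.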

Bind the targets

Now, you have successfully connected to a shared target on your iSCSI array. If you selected the Automatically Restore This Connection When The System Boots check box as explained in the previous step, you can now add the target to the iSCSI service's binding list. By doing so, you make sure that Windows does not consider the iSCSI service fully started until connections are restored to all volumes on the binding list. This is important if you have data on an iSCSI target that other services depend on. For example, if you create a share on your server and that shared data resides on an iSCSI target, the Server service that handles the share depends on the iSCSI service's complete availability to bring up the shares. Note: With older versions of the iSCSI initiator, creating this kind of dependency structure required you to reconfigure individual service dependencies — a process that could get complicated. With the iSCSI Initiator version 2, Microsoft has fixed this issue, but you still have to place each volume on the binding list yourself, as shown below.
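The binding list can also be managed with the iscsicli utility that ships with the initiator. A minimal sketch; E: is a placeholder for the drive letter your iSCSI volume received:

C:\> iscsicli ReportPersistentDevices
C:\> iscsicli AddPersistentDevice E:\
C:\> iscsicli BindPersistentVolumes

ReportPersistentDevices lists what is currently on the binding list, AddPersistentDevice adds a single volume to it, and BindPersistentVolumes asks the service to bind the currently mounted persistent volumes in one go.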

Install the iSCSI initiator

If you're running an operating system on which the iSCSI initiator software is not installed, execute the file you downloaded and follow the installation instructions. The installer will ask you to decide which components you would like to install:

Initiator service — This is the Windows service that coordinates the actual work.
Software initiator — The software initiator is the driver component that carries iSCSI traffic over your standard network adapters.
Microsoft MPIO Multipathing Support for iSCSI — MPIO is a way that you can increase the overall throughput and reliability of your iSCSI storage environment. See Step 6 for more information about how MPIO can be of benefit. If you have a target that supports Microsoft's MPIO (check with your manufacturer), you should enable this option. Otherwise, if your target supports MPIO through the use of a proprietary device-specific module (DSM), obtain that DSM from your array manufacturer and follow the manufacturer's installation recommendations.
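After the installer finishes, you can confirm that the initiator service (service name MSiSCSI) is present and set to start automatically. A quick check from a command prompt:

C:\> sc query msiscsi
C:\> sc config msiscsi start= auto
C:\> net start msiscsi

The space after start= is required by sc's syntax.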

Connect to the iSCSI array

Now that you have the initiator software installed, you need to tell it where to look for mountable volumes. Start the initiator configuration by going to the Control Panel and choosing the iSCSI Initiator option. From the initiator, choose the Discovery tab (figure: The iSCSI initiator's Discovery tab). On the Discovery tab, click the Add button under the Target Portals box. This will open the Add Target Portal dialog box (figure: The Add Target Portal dialog box). In the Add Target Portal dialog box, provide the name or IP address of your iSCSI array. The default communication port for iSCSI traffic is 3260; unless you have changed your port, leave this as is. If you have configured CHAP security or are using IPSec for communication between your client and the array, click the Advanced button and make the necessary configuration changes (figure: The Advanced Settings dialog box, showing advanced options for connecting to your iSCSI array). Back on the Add Target Portal dialog box, click the OK button.
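The discovery step can be scripted as well. A minimal sketch with iscsicli, assuming the array answers at the placeholder address 192.168.50.20 on the default port 3260:

C:\> iscsicli QAddTargetPortal 192.168.50.20
C:\> iscsicli ListTargets

QAddTargetPortal registers the portal with default settings; ListTargets should then show the targets the array exposes to you.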

Connect to a target/volume

Even though you're connected to the array itself, you still need to tell the initiator exactly which target or volume you want to mount on your local machine. To see the list of available targets on the array you selected, choose the Targets tab (figure: the iSCSI initiator Targets tab, which in this example has only a single volume available). To connect to an available target, choose the target and click the Log On button. A window pops up with the target name and two options from which you can choose (figure: iSCSI target Log On options). The two options are important. If you want your server to connect to this volume automatically when your system boots, make sure you choose the Automatically Restore This Connection When The System Boots check box. Unless you have a good reason otherwise, you should always select this check box. If you do not, the iSCSI target will not persist across a reboot and you will need to reconnect it manually. To enable high availability and to boost performance, select the second option, Enable Multi-path, if you installed MPIO support earlier.
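The log-on step has a command-line equivalent, too. A sketch with a placeholder IQN; in the PersistentLoginTarget call, T reports the LUNs to Plug and Play, the asterisks accept the defaults for the optional login parameters, and the trailing 0 means no explicit LUN mappings:

C:\> iscsicli QLoginTarget iqn.2000-01.com.example:storage.target1
C:\> iscsicli PersistentLoginTarget iqn.2000-01.com.example:storage.target1 T * * * * * * * * * * * * * * * 0
C:\> iscsicli ListPersistentTargets

PersistentLoginTarget is what the Automatically Restore This Connection check box configures under the hood.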

Set up your target and communications infrastructure

Before you install the iSCSI initiator on any of your servers or workstations, you must have something to which the initiator will connect. This can be an enterprise-class array, such as those available from LeftHand, EqualLogic, Dell, or EMC, or, if you're on a tighter budget and want to build your own array, a server running iSCSI target software, such as StarWind. I recommend that, whenever possible, you use either a physically separate infrastructure or a separate IP network/VLAN for your iSCSI traffic. By doing so, you simplify troubleshooting and configuration later on.

Configure your local iSCSI network adapter

One best practice is to assign either a dedicated gigabit Ethernet NIC or a TCP offload engine (TOE) adapter in each server to handle iSCSI traffic — in other words, don't share your user-facing network connection for storage traffic. If you've created a separate physical network or VLAN for storage traffic, assign this adapter an IP address that works on the storage network. By placing storage traffic on its own network that is routed separately from the main network, you increase the overall security of your storage infrastructure and simplify the overall configuration.
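If you build servers remotely, the storage NIC can be addressed from the command line as well. A sketch using netsh; the connection name "Storage" and the 192.168.50.0/24 range are placeholders for your own storage network:

C:\> netsh interface ip set address name="Storage" static 192.168.50.11 255.255.255.0
C:\> ping 192.168.50.20

No default gateway is set on purpose: a storage network that is not routed to the main LAN has no use for one, and leaving it off keeps Windows from sending general traffic out the storage interface.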

Install, configure, and use Microsoft's iSCSI initiator

Internet SCSI (iSCSI) has taken the storage world by storm. No longer is shared storage a niche enjoyed by only large, wealthy corporations. Internet SCSI is leveling the playing field by making shared storage available at a reasonable cost to anyone. By leveraging the ubiquitous Ethernet networks prevalent in most organizations, IT staff training costs for iSCSI are very low, resulting in quick, seamless deployments. Further, operating system vendors are making it easier than ever to get into the iSCSI game by making iSCSI initiator software freely available. iSCSI networks require three components:

An iSCSI target — A target is the actual storage array or volume, depending on how you have things configured.
An iSCSI initiator — An iSCSI initiator is the software component residing on a server or other computer that is installed and configured to connect to an iSCSI target. By using an iSCSI initiator, target-based volumes can be mounted on a server as if they were local volumes and are managed just like local disks.
An Ethernet network — the standard IP network that carries iSCSI traffic between initiator and target.

iSCSI initiator configuration in RedHat Enterprise Linux 5

[root@rhel5 ~]# rpm -ivh /tmp/iscsi-initiator-utils-6.2.0.871-0.16.el5.x86_64.rpm
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
[root@rhel5 ~]#
[root@rhel5 ~]# rpm -qa | grep iscsi
iscsi-initiator-utils-6.2.0.871-0.16.el5
[root@rhel5 ~]# rpm -qi iscsi-initiator-utils-6.2.0.871-0.16.el5
Name        : iscsi-initiator-utils         Relocations: (not relocatable)
Version     : 6.2.0.871                     Vendor: Red Hat, Inc.
Release     : 0.16.el5                      Build Date: Tue 09 Mar 2010 09:16:29 PM CET
Install Date: Wed 16 Feb 2011 11:34:03 AM CET   Build Host: x86-005.build.bos.redhat.com
Group       : System Environment/Daemons    Source RPM: iscsi-initiator-utils-6.2.0.871-0.16.el5.src.rpm
Size        : 1960412                       License: GPL
Signature   : DSA/SHA1, Wed 10 Mar 2010 04:26:37 PM CET, Key ID 5326810137017186
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
URL         : http://www.open-iscsi.org
Summary     : iSCSI daemon and utility programs
Description : The isc…
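With the package installed, the usual next steps on RHEL 5 are to start the service, discover targets, and log in. A sketch using open-iscsi's iscsiadm; the portal address and target IQN are placeholders:

[root@rhel5 ~]# chkconfig iscsi on
[root@rhel5 ~]# service iscsi start
[root@rhel5 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.50.20:3260
[root@rhel5 ~]# iscsiadm -m node -T iqn.2000-01.com.example:storage.target1 -p 192.168.50.20:3260 --login

After a successful login, the new LUN shows up as an additional SCSI disk; check dmesg or /var/log/messages for the device name, for example /dev/sdb.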

Installing the Virtual SCSI Controller Driver for Virtual Server 2005 on Windows Server 2008

You can install the virtual SCSI controller driver during the installation of the guest operating system by performing the following steps. The description and screenshots were made while installing Windows Server 2008 on Virtual Server 2005 R2 SP1; however, the same instructions apply to the installation of Windows Vista. For Windows 2000/2003/XP you will need to press the F6 key during the text phase of the installation process, then press "S" to specify additional drivers, and then provide the driver floppy image.

1. Begin the installation by inserting the appropriate Windows Server 2008 installation media into your DVD drive.
2. Continue with the installation process until you reach the point where you're prompted for the location of the system partition. Click on the Load Driver link.
3. Now you need to load the driver files as a virtual floppy image. The image's name is "SCSI Shunt Driver.vfd", and it is located in the C:\Program Files\Microsoft Virtual Server\Virtual Machine Additions folder.

vicfg-mpath35 - configure multipath settings for Fibre Channel or iSCSI LUNs

SYNOPSIS
  vicfg-mpath35 [OPTIONS]

DESCRIPTION
  vicfg-mpath35 provides an interface to configure multipath settings for Fibre Channel or iSCSI LUNs on ESX/ESXi version 3.5 hosts. Use vicfg-mpath for ESX/ESXi 4.0 and later hosts.

OPTIONS
  --help
      Prints a help message for each command-specific and each connection option. Calling the command with no arguments or with --help has the same effect.
  --list | -l
      Lists all LUNs and the paths to these LUNs through adapters on the system. For each LUN, the command displays the type, internal name, console name, size, paths, and the policy used for path selection.
  --policy | -p
      Sets the policy for a given LUN to one of "mru", "rr", or "fixed". Most Recently Used (mru) selects the path most recently used to send I/O to a device. Round Robin (rr) rotates through all available paths. Fixed (fixed) uses only the active path. This option requires that you also specify the --lun option.
  --state | -s
      Sets the state of a given LUN path to on or off.
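A quick usage sketch, run from a system with the VMware remote CLI installed and assuming its standard --server/--username connection options; the host name and the vmhba LUN identifier are placeholders:

vicfg-mpath35 --server esx35.example.com --username root --list
vicfg-mpath35 --server esx35.example.com --username root --policy rr --lun vmhba1:0:1

The first command lists every LUN with its paths and current policy; the second switches one LUN to round-robin path selection.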

Dynamically rescan the SCSI bus (applicable to CX Clariion SAN infrastructure)

I've been working for a while with a Dell/EMC Clariion CX-300, and the best way to add newly attached LUNs was always to reboot the server. However, that procedure is not always acceptable if you're in a hurry or just want to run some tests. I found the procedure described here on an outdated website, but it worked very well in my case. I also recommend using the rescan-scsi-bus.sh script with the options -lwc. Type rescan-scsi-bus.sh --help to see the description of each option.

/root/rescan-scsi-bus.sh
Host adapter 1 (qla2xxx) found.
Host adapter 2 (qla2xxx) found.
Scanning for device 1 0 0 0 ...
OLD: Host: scsi1 Channel: 00 Id: 00 Lun: 00
     Vendor: DGC      Model: LUNZ     Rev: 0208
     Type:   Direct-Access            ANSI SCSI revision: 04
Scanning for device 2 0 0 0 ...
OLD: Host: scsi2 Channel: 00 Id: 00 Lun: 00
     Vendor: DGC      Model: LUNZ     Rev: 0208
     Type:   Direct-Access            AN…
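If the script is not available on a system, the same rescan can be triggered by hand through sysfs. A sketch for the two QLogic hosts shown above; host numbers differ per system:

echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan
cat /proc/scsi/scsi

The "- - -" wildcard means all channels, all targets, and all LUNs on that host adapter; /proc/scsi/scsi (or dmesg) then shows any newly discovered devices.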


Inventory and Catalog in Backup Exec

What is an Inventory?

An Inventory is the process of mounting media in the drive and reading the media label, which is then displayed in the Devices view. If this is the first time that Backup Exec (tm) has encountered this media, the media label is also added to the Media view.

Note: Each time a new tape is introduced in the tape drive or robotic library, it must be inventoried so that the Backup Exec database gets updated with the new tape information.

To Inventory a Tape/Robotic Library:
1. Insert the tape.
2. Click the Devices tab.
3. Select the correct tape drive/robotic library slot.
4. Right-click on the tape drive/robotic library slot and select Inventory (Figure 1).

The inventory will complete and should display the correct tape name.

What is a Catalog?

When cataloging a tape, Backup Exec reads the header information from the tape and stores it in a file on the hard drive. The information contained in the catalog is then used when you browse the tape for restore operations.

Adding virtual disk units to a Linux logical partition

You can add virtual disk units dynamically to a Linux® logical partition that uses IBM® i resources. This allows you to increase the storage capacity of your Linux logical partition when needed. Virtual disks simplify hardware configuration on the server because they do not require you to add additional physical devices to the server in order to run Linux. You can allocate up to 64 virtual disks to a Linux logical partition. Each virtual disk supports up to 1000 GB of storage. Each virtual disk appears to Linux as one actual disk unit. However, the associated space in the i integrated file system is distributed across the disks that belong to the i logical partition. Distributing storage across the disks provides the benefits of device parity protection through i, so you do not have to consume additional processing and memory resources by setting up device parity protection in Linux. IBM i provides the ability to dynamically add virtual disks to a Linux logical partition.
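On the IBM i side, the storage space is typically created and linked with CL commands. A sketch with placeholder object names (LNXDISK2 for the storage space, LINUXNWSD for the partition's network server description) and a 10 GB size; treat it as an outline rather than exact syntax for your release:

CRTNWSSTG NWSSTG(LNXDISK2) NWSSIZE(10240) FORMAT(*OPEN) TEXT('Additional disk for Linux partition')
ADDNWSSTGL NWSSTG(LNXDISK2) NWSD(LINUXNWSD)

NWSSIZE is given in megabytes; after the link is added, Linux sees the storage space as one new virtual disk once the bus is rescanned.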

Clariion CX, CX3, CX4 – How to add Storage Capacity to Attached Host

Microsoft Windows

1. If needed, add new drives to the storage system.
2. If needed, create a hot spare using Navisphere Manager.
3. If needed, create an additional RAID group on the drives using Navisphere Manager.
4. Create (bind) additional LUNs on the RAID group using Navisphere Manager.
5. Assign the new LUNs to the server using Navisphere Manager.
6. Verify that the new LUNs were assigned to the server using Navisphere Manager.
7. On the server, verify that the server has access to the LUNs:
a. On the Windows desktop, right-click My Computer and click Manage.
b. In the left pane of the Computer Management dialog box, double-click the storage icon.
c. Click Disk Management.
d. Verify that the LUNs that you added are listed in the right pane.
8. Start or restart the Navisphere Host Agent to push the server's LUN mapping and operating system information to the storage system.
9. On the server, start PowerPath and verify that PowerPath sees the new LUNs and all paths to them (see the sketch below).
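Step 9 can be checked from a command prompt with PowerPath's powermt utility; a minimal sketch:

C:\> powermt display
C:\> powermt display dev=all

Each new LUN should appear as one pseudo device with the expected number of live paths; dead paths usually point at a zoning, masking, or cabling problem.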