

Showing posts with the label Storage

Troubleshooting Storage Performance in vSphere

When we troubleshoot performance-related issues, the first thing that comes to mind is "storage". So let's take a quick look at basic troubleshooting of storage-related issues. Poor storage performance is generally the result of high I/O latency. vCenter or esxtop will report the various latencies at each level in the storage stack, from the VM down to the storage hardware. vCenter cannot report the actual latency seen by the application, since that includes latency in the guest OS and the application itself, which are not visible to vCenter. vCenter can report on the following storage stack I/O latencies in vSphere. Storage Stack Components in a vSphere environment. GAVG (Guest Average Latency): the total latency as seen from vSphere. KAVG (Kernel Average Latency): the time an I/O request spent waiting inside the vSphere storage stack. QAVG (Queue Average Latency): the time spent waiting in a queue inside the vSphere St
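As a rough sketch of how these counters relate (assuming the usual esxtop relationship GAVG = DAVG + KAVG, where DAVG is the device latency; the device names and numbers below are made up, not real esxtop output), you could post-process esxtop batch-mode data like this:

```shell
# Compute GAVG from DAVG + KAVG for each device (sample data, values in ms).
awk 'NR > 1 { printf "%s GAVG=%.2f\n", $1, $2 + $3 }' <<'EOF'
device DAVG KAVG
naa.600a0b80001 5.20 0.10
naa.600a0b80002 18.40 2.30
EOF
```

A consistently high KAVG relative to DAVG points at queuing inside the vSphere storage stack rather than at the array.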

vVOLS (Virtual Volumes)

# Jagadeesh Devaraj #vVols is a hot and trending topic on the internet right now. By now you have probably heard a lot about vVols, whether at VMworld 2014 or through various forums, and the reason it is important is end-to-end infrastructure management. vVols covers the infrastructure end to end, from compute to storage, at the virtual machine (VM) and VMDK (vDisk) level. Virtualization made VMs and vDisks the unit of management at the compute layer; VMware® Virtual Volumes is meant to bridge the gap by extending that paradigm to storage, specifically on VMware vSphere® deployments. What is vVols? VVOLs is a provisioning feature for vSphere 6 that changes how virtual machines (VMs) are stored and managed. VVOLs is an out-of-band communication protocol between vSphere and storage. It allows VMware to associate VMs and vDisks with storage entities, and allows vSphere to offload some storage management functions, like provisioning of VM's

Connect to the iSCSI array

Now that you have the initiator software installed, you need to tell it where to look for mountable volumes. Start the initiator configuration by going to the Control Panel and choosing the iSCSI Initiator option. From the initiator, choose the Discovery tab, shown in Figure B. Figure B: The iSCSI initiator's Discovery tab. On the Discovery tab, click the Add button under the Target Portals box. This opens the Add Target Portal dialog box, shown in Figure C. Figure C: The Add Target Portal dialog box. In the Add Target Portal dialog box, provide the name or IP address of your iSCSI array. The default communication port for iSCSI traffic is 3260; unless you have changed your port, leave this as is. If you have configured CHAP security or are using IPsec for communication between your client and the array, click the Advanced button and make the necessary configuration changes. The Advanced Settings dialog box is shown in Figure D. Figure D: Advanced options for conn
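The same discovery step can also be scripted with Windows' iscsicli command-line tool instead of the Control Panel applet. This is only a sketch: the portal address is a placeholder, and the helper function below exists purely to show the command sequence, not as a real tool.

```shell
# Hypothetical helper that prints the iscsicli commands for a given portal
# address (iscsicli is the Windows iSCSI initiator CLI; port defaults to 3260).
iscsi_discovery_cmds() {
    printf 'iscsicli QAddTargetPortal %s\n' "$1"
    printf 'iscsicli ListTargets\n'
}
iscsi_discovery_cmds 192.168.1.50
```

Run the printed commands from an elevated prompt on the Windows host; verify the exact flags against your Windows version before relying on them.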

Linux Configuration

The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical bonded interface. The behavior of the bonded interface depends on the mode; generally speaking, modes provide either hot-standby or load-balancing services. Additionally, link integrity monitoring may be performed. You need to install ifenslave, a tool for attaching and detaching slave network interfaces to a bonding device:     sudo apt-get install ifenslave Configuring your network interfaces and modules: edit the /etc/network/interfaces file (sudo nano /etc/network/interfaces) and make it look like the following (this is just an example; enter your own IP details):     # This file describes the network interfaces available on your system     # and how to activate them. For more information, see interfaces(5).     # The loopback network interface     auto lo     iface lo inet loopback     # The primary network interface     auto eth0     iface
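Since the excerpt's /etc/network/interfaces listing is cut off, here is a minimal, hypothetical bond stanza in the same ifupdown syntax (addresses and interface names are placeholders; bond-mode active-backup gives hot standby, while balance-rr gives load balancing):

```
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-mode active-backup
    bond-miimon 100
    bond-slaves eth0 eth1
```

bond-miimon sets the MII link-monitoring interval in milliseconds, which is the link integrity monitoring mentioned above.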

Microsoft Hyper-V will not boot virtual SCSI devices

“Each IDE controller can have two devices. You can not boot from a SCSI controller. This means an IDE disk will be required. The boot disk will be IDE controller 0 Device 0. If you want a CDROM it will consume an IDE device slot.” Source: MSDN Blog. The hypervisor that runs the virtual BIOS does not currently support booting from a SCSI controller, but it does support the following boot devices: CD, IDE, Legacy Network Adapter, and Floppy. The root reason is that SCSI is a synthetic device, and there is no VMBus until after boot. One might think this shouldn't be a problem; after all, the virtual machines can still boot from regular IDE-based virtual disks. So where's the catch? The main problem is that in Virtual Server, virtual SCSI controllers have major performance benefits over virtual IDE controllers. In Virtual Server, it is recommended to attach the virtual disks to one or more SCSI controllers to improve disk input/output (I/O) performance. IDE is limited to

Inventory and Catalog in Backup Exec

What is an Inventory? An inventory is the process of mounting media in the drive and reading the media label, which is then displayed in the Devices view. If this is the first time Backup Exec™ has encountered this media, the media label is also added to the Media view. Note: each time a new tape is introduced into the tape drive or robotic library, it must be inventoried so that the Backup Exec database gets updated with the new tape information. To inventory a tape/robotic library: 1. Insert the tape. 2. Click the Devices tab. 3. Select the correct tape drive/robotic library slot. 4. Right-click the tape drive/robotic library slot and select Inventory (Figure 1). Figure 1. The inventory will complete and should display the correct tape name. What is a Catalog? When cataloging a tape, Backup Exec reads the header information from the tape and stores it in a file on the hard drive. The information contained in the catalog includes, but is not limited

What is SNTP?

The Simple Network Time Protocol (SNTP) is a simpler version of the Network Time Protocol (NTP). SNTP synchronizes the time between networked computer systems and is relied on when data is being transferred via the Internet. NTP is one of the most established protocols still in use on the Internet; it can discipline a clock from reference sources such as GPS or radio clocks and is accurate to fractions of a second. Why is SNTP Necessary? The need for precise time synchronization has continued to increase with the evolution of computer technology over the past several decades. In the networking field, network servers and their client computers require precision to the millisecond and beyond in order to ensure data file transfers occur without errors. Computers also require accurate time synchronization to ensure data packets and email are delivered in the proper sequence to destination networks and email recipients. The importance of the SNTP and NTP protocols grows with the number of
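One concrete detail worth knowing when working with NTP/SNTP data: NTP timestamps count seconds since 1900-01-01, while Unix time counts seconds since 1970-01-01, which is 2,208,988,800 seconds later. A small shell sketch of the conversion (the sample timestamp is arbitrary):

```shell
# Convert an NTP timestamp (seconds since 1900) to Unix time (seconds since 1970).
ntp_to_unix() { echo $(( $1 - 2208988800 )); }
ntp_to_unix 3913056000   # prints 1704067200 (2024-01-01 00:00:00 UTC)
```

Note this covers only the integer seconds field; full NTP timestamps carry an additional 32-bit fraction for sub-second precision.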

How to Scan new LUNs on Linux with QLogic driver

Q: I am using the QLogic driver and would like to know how to scan for new LUNs on a Linux operating system. A: You need to find the driver's proc file, /proc/scsi/qlaXXX. For example, on my system it is /proc/scsi/qla2300/0. Once the file is identified, type the following commands (logged in as the root user): # echo "scsi-qlascan" > /proc/scsi/qla2300/0 # cat /proc/scsi/qla2300/0 Now use the script to pick up the new LUN as a device. Run the script as follows: # ./ -l -w 
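On newer kernels the same rescan can be done through sysfs instead of the /proc/scsi/qlaXXX interface. A hedged sketch follows: the /sys/class/scsi_host path is the standard location, but verify it exists on your system and run as root. The three wildcards in "- - -" mean all channels, all targets, all LUNs.

```shell
# Rescan every SCSI host found under the given sysfs root for new LUNs.
rescan_scsi_hosts() {
    for host in "$1"/host*; do
        [ -d "$host" ] || continue
        # Writing "- - -" to the scan file triggers a full channel/target/LUN scan.
        echo "- - -" > "$host/scan"
    done
}
rescan_scsi_hosts /sys/class/scsi_host
```

After the rescan, new devices should appear in dmesg and under /dev.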

HBA & multipathing on RHEL

Introduction: The firmware gets updated by the driver each time the "qla2300" or "qla2400" module is loaded. Drivers need specific firmware versions. Nevertheless, here's the QLogic firmware repo : Note: it's OK to have a more recent BIOS than firmware, but not the contrary. Driver & firmware installation: the driver should be included in the RHEL distribution. If not, use the vendor-provided one, e.g., - HP Approved Software : - IBM Supported Software : Make sure you have the gcc package, rpm -q gcc and install the driver, ./INSTALL -h ./INSTALL -f -a Note: make sure the default gcc binary isn't a link to gcc 2.95 (as is sometimes the case on Oracle installs), ll /usr/bin/gcc or check that the gcc version matches the distribution build, dmesg |
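To confirm which of the qla2300/qla2400 modules named above is actually loaded, you can filter lsmod output. The sketch below runs against a canned sample so the pattern is clear; in practice, pipe real lsmod output through the same grep (module sizes and use counts here are invented):

```shell
# Match either qla2300 or qla2400 at the start of an lsmod line (sample input).
grep -E '^qla2(3|4)00' <<'EOF'
qla2300 123456 0
e1000 98765 0
EOF
```

If nothing matches, the module isn't loaded; modprobe it (or check dmesg for load errors) before hunting for firmware mismatches.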

Multipathing Support in Windows Server 2008

Windows Server 2008 includes many enhancements for connecting to a storage network. One notable feature is the inclusion of native multipathing (Microsoft MPIO) in the box. Microsoft MPIO delivers high availability by establishing multiple sessions/connections from a Windows Server host to an external storage array through iSCSI, Fibre Channel, or SAS (Serial Attached SCSI). Microsoft MPIO uses redundant physical path components (adapters, cables, and switches) to create logical "paths" between the server and the storage device. In the event that a device in the path fails, Microsoft MPIO automatically redirects I/O to an alternate path for continued application availability. Each NIC (in the case of the iSCSI software initiator) or HBA (in the case of Fibre Channel, SAS, or an iSCSI HBA) should be connected through redundant switch infrastructures to provide continued access to storage in the event of a failure in a storage fabric component. Note: failover times can vary by storage vendor and