Friday, 19 October 2012
Now that you have the initiator software installed, you need to tell it where to look for mountable volumes. Start the initiator configuration by going to the Control Panel and choosing the iSCSI Initiator option. From the initiator, choose the Discovery tab, shown in Figure B.
The iSCSI initiator’s Discovery tab.
On the Discovery tab, click the Add button under the Target Portals box. This will open the Add Target Portal dialog box, shown in Figure C.
The Add Target Portal dialog box.
In the Add Target Portal dialog box, provide the name or IP address of your iSCSI array. The default communication port for iSCSI traffic is 3260; unless you have changed your port, leave this as is. If you have configured CHAP security or are using IPSec for communication between your client and the array, click the Advanced button and make the necessary configuration changes. The Advanced Settings dialog box is shown in Figure D.
Advanced options for connecting to your iSCSI array.
Back in the Add Target Portal dialog box, click the OK button to make the initial connection to the iSCSI array. Note that, at this point, you’re not connecting to an actual volume, but only to the array in general. (Figure E)
The target portal has been added to the initiator.
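If you would rather script this step than click through the GUI, the iscsicli command-line tool that ships with the Microsoft iSCSI initiator can do the same job. A minimal sketch, using 192.168.1.50 as a stand-in for your array's address (QAddTargetPortal assumes the default port 3260):
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
The second command lists the targets the array exposes through the new portal; run iscsicli with no arguments to see the full help text for your Windows version.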
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical bonded interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
First, install ifenslave, a tool for attaching and detaching slave network interfaces to a bonding device:
sudo apt-get install ifenslave
Configuring your network interfaces and modules
Next, edit the /etc/network/interfaces file:
sudo nano /etc/network/interfaces
Add the following (this is just an example; substitute your own IP details):
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface (example addresses; use your own)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
# The slave interfaces; no addresses of their own
iface eth1 inet manual
iface eth2 inet manual
# The bonded interface
auto bond0
iface bond0 inet static
    address 192.168.2.10
    netmask 255.255.255.0
    up /sbin/ifenslave bond0 eth1 eth2
    down /sbin/ifenslave -d bond0 eth1 eth2
Save and exit the file
Now edit the /etc/modprobe.d/aliases.conf file:
sudo nano /etc/modprobe.d/aliases.conf
Add the following lines
alias bond0 bonding
options bonding mode=0 miimon=100 downdelay=200 updelay=200
Save and exit the file
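If you want to pick up the new options without rebooting, you can load the bonding module by hand and confirm it is present; this is an optional check, since restarting networking below will also pull the module in via the alias:
sudo modprobe bonding
lsmod | grep bonding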
Here is more detail on each of the available bonding modes (an example of selecting a mode by name follows the list):
mode=0 (balance-rr) Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
mode=1 (active-backup) Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond’s MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.
mode=2 (balance-xor) XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
mode=3 (broadcast) Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.
mode=4 (802.3ad) IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.
Prerequisites:
* Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
* A switch that supports IEEE 802.3ad Dynamic link aggregation. Most switches will require some type of configuration to enable 802.3ad mode.
mode=5 (balance-tlb) Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
* Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.
mode=6 (balance-alb) Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
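Note that modes can be given by name as well as by number. For example, to switch the configuration above from round-robin to active-backup, the options line in aliases.conf would become (a sketch; keep whatever timer values suit your links):
options bonding mode=active-backup miimon=100 downdelay=200 updelay=200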
Restart network services using the following command
sudo /etc/init.d/networking restart
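Once networking is back up, you can confirm that the bond is running and that both slaves joined it by reading the bonding driver's status file:
cat /proc/net/bonding/bond0
The output shows the bonding mode, the MII status of the bond, and a section for each slave interface.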
A new white paper about Windows Storage Server 2008 R2 Architecture and Deployment (including the Microsoft iSCSI Software Target 3.3) has just been published.
Here's an outline of this content:
Windows Storage Server 2008 R2 Overview
Comparing Windows Server Operating System Storage Offerings
Comparing Windows Storage Server with Windows Server
Identifying Windows Storage Server Features
What’s New in Windows Storage Server 2008 R2
Comparing Windows Storage Server 2008 R2 with Windows Server 2008 R2
Windows Storage Server 2008 R2 Editions
Identifying Storage Challenges
Identify Scalability Storage Challenges
Identify Availability Storage Challenges
Identify Security Storage Challenges
Identify Manageability Storage Challenges
Identify Data Recovery Storage Challenges
Identifying Windows Storage Server Solution Benefits
Identifying Scalability Benefits
Identifying Availability Benefits
Identifying Security Benefits
Identifying Manageability Benefits
Identifying Data Recovery Benefits
Exploring Windows Storage Server Features and Capabilities
Providing Access to File Services Workloads
Supporting File Services Workloads Using CIFS, SMB, or SMB2
Supporting File Services Workloads Using NFS
Supporting File Services Workloads Using WebDAV
Supporting File Services Workloads Using Windows SharePoint Services
Providing Access to iSCSI Block I/O Workloads
Supporting iSCSI Block I/O Workloads Using Microsoft iSCSI Software Target
Supporting iSCSI Boot
Providing Access to Web Services Workloads
Providing Access to FTP Services Workloads
Providing Access to Print Services Workloads
Providing Reduction in Power Consumption
Improve the Power Efficiency of Individual Servers
Processor Power Management
Storage Power Management
Additional Power Saving Features
Performing Highly Automated Installations
Managing Windows Storage Server
Management Tools for All Workloads
Managing Power Consumption for All Workloads
Remote Manageability of Power Policy
In-Band Power Metering and Budgeting
Managing File Services Workloads
Managing File Services Using File Server Resource Manager
Managing File Services Using Share and Storage Management
Managing DFS Namespaces and DFS Replication
Managing Single Instance Storage
Managing iSCSI Block I/O Workloads
Managing the Microsoft iSCSI Software Target for iSCSI Block I/O Workloads
Managing the Microsoft iSCSI Software Initiator for iSCSI Block I/O Workloads
Managing iSCSI Block I/O Workloads Using Windows PowerShell
Managing Web Services Workloads
Managing Print Services Workloads
Protecting Windows Storage Server Workload Data
Using Windows Server Backup to Protect Data
Using Shadow Copies of Shared Folders to Protect Data
Using the Volume Shadow Copy Service to Protect Data
Using LUN Resynchronization to Protect Data
Comparison of LUN Resynchronization and Traditional Volume Shadow Copy Service
Comparison of LUN Resynchronization and LUN Swap
Benefits of Performing Full Volume Recovery Using LUN Resynchronization
Process for Performing Full Volume Recovery Using LUN Resynchronization
Using DFS Replication to Protect Data
Using Automated System Recovery to Protect Data
Using System Center Data Protection Manager 2007 to Protect Data
Using Virtual Disk Snapshots to Protect Data
Using the Appcmd.exe Tool to Backup IIS Configuration
Using the PrintBRM.exe Tool to Backup Printer Information
Securing Windows Storage Server Workloads
Securing Windows Storage Server for All Workloads
Securing File Services Workloads
Securing iSCSI Block I/O Workloads
Securing Web Services Workloads
Securing Print Services Workloads
Improving Availability of Windows Storage Server Workloads
Improving Availability of File Services Workloads
Improving Availability of iSCSI Block I/O Workloads
Creating Highly-Available iSCSI Targets
Creating Highly-Available iSCSI Initiators
Improving Availability of Web Services Workloads
Improving Availability of Print Services Workloads
Improving Performance and Scalability for Windows Storage Server Solutions
Improving Performance and Scalability for All Workloads
Improvements in Processor and Memory Capacity
Improvements in the Next Generation TCP/IP Protocol
Improvements in Network Adapter Performance
Reduction in Processor Utilization for I/O Operations
Improving Performance and Scalability for File Services Workloads
Review Improvements in the SMB2 Protocol
Review SMB-based File Services Workload Test Results
Reviewing Performance Improvements in SMB Version 2.1 in Windows Server 2008 R2
Improving Performance for Branch Offices Using BranchCache
Improving Performance for Folder Redirection and Offline Files
Improving Performance and Scalability for iSCSI Block I/O Workloads
Identify Methods for Improving iSCSI Block I/O Workload Performance and Scalability
Review I/O Storage Test Results
Improving Performance and Scalability for Web Services Workloads
Identify Methods for Improving Web Services Workload Performance and Scalability
Review Web Services Workload Test Results
Improving Performance and Scalability for Print Workloads
Windows Storage Server Deployment Scenarios
Overview of Windows Storage Server Configurations
Using Windows Storage Server in a Stand-Alone NAS Configuration
Using Windows Storage Server in a Highly-Available NAS Configuration
Using Windows Storage Server in a NAS Gateway Configuration
Using Windows Storage Server in iSCSI Block I/O Configuration
Creating Branch Office Solutions
Creating Highly-Available Solutions
Creating Solutions for Storage Consolidation
Creating Small to Medium Business Solutions
Creating Solutions for Heterogeneous Environments
Creating Application Consolidation Solutions
Creating Unified Storage Solutions
Creating Virtualization Solutions
Connecting Virtual Machines to iSCSI LUNs
Running Virtual Machines on Windows Storage Server
Creating iSCSI Boot Solutions
Instead of configuring a virtual machine as a DHCP server, you can use the virtual DHCP server for your virtual network.
To configure the virtual DHCP server:
1. Open the Virtual Server Administration Website.
2. Under Virtual Networks, select Configure, and then click the virtual network.
3. In Virtual Network Properties, click DHCP server.
4. Check the Enabled checkbox, then configure the necessary DHCP server options.
5. Click OK.
Wednesday, 1 August 2012
To avoid the rather complex set of instructions that you needed to follow in vSphere 4.1, VMware introduced new detach and unmount operations in both the vSphere UI and the CLI.
As per KB 2004605, to avoid an APD condition in 5.0, all you need to do now is detach the device from the ESX host. This will automatically unmount the VMFS volume first. If there are objects still using the datastore, you will be informed. You no longer have to mess about creating and deleting rules in the PSA to do this safely. The steps now are (a CLI sketch follows the list):
- Unregister all objects from the datastore including VMs and Templates
- Ensure that no 3rd party tools are accessing the datastore
- Ensure that no vSphere features, such as Storage I/O Control or Storage DRS, are using the device
- Detach the device from the ESX host; this will also initiate an unmount operation
- Physically unpresent the LUN from the ESX host using the appropriate array tools
- Rescan the SAN
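For hosts you manage from the command line, the same sequence maps onto esxcli. A minimal sketch, where MyDatastore and naa.ID are placeholders for your datastore label and device identifier (check the exact flags against esxcli --help on your build):
esxcli storage filesystem list
esxcli storage filesystem unmount -l MyDatastore
esxcli storage core device set -d naa.ID --state=off
esxcli storage core adapter rescan --all
The first command shows the datastore labels and their backing devices; the unmount and set --state=off calls perform the unmount and detach; the rescan picks up the change after the LUN has been unpresented on the array.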