

Scavenge.exe tool to delete cached content from secondary cache drive

Prerequisites

This article assumes that you are familiar with the overall functionality of ARR and know how to deploy and configure ARR with a disk cache. If you have not already done so, it is strongly recommended that you review the following walkthrough before proceeding: Configure and Enable Disk Cache in Application Request Routing.

If Application Request Routing Version 2 has not been installed, you can download it here:

- Microsoft Application Request Routing Version 2 for IIS 7 (x86)
- Microsoft Application Request Routing Version 2 for IIS 7 (x64)

Follow the steps outlined in this document to install ARR Version 2. This walkthrough also assumes that a secondary cache drive has been added to ARR for caching. If not, please follow the Configure and Enable Disk Cache in Application Request Routing walkthrough.

Scavenge.exe tool in ARR

Scavenge.exe is a command-line tool that administrators can use to manage the secondary cache drive. The exe is

Connect to the iSCSI array

Now that you have the initiator software installed, you need to tell it where to look for mountable volumes.

1. Start the initiator configuration by going to the Control Panel and choosing the iSCSI Initiator option.
2. From the initiator, choose the Discovery tab, shown in Figure B.

Figure B: The iSCSI initiator's Discovery tab.

3. On the Discovery tab, click the Add button under the Target Portals box. This opens the Add Target Portal dialog box, shown in Figure C.

Figure C: The Add Target Portal dialog box.

4. In the Add Target Portal dialog box, provide the name or IP address of your iSCSI array. The default communication port for iSCSI traffic is 3260; unless you have changed your port, leave this as is.
5. If you have configured CHAP security or are using IPsec for communication between your client and the array, click the Advanced button and make the necessary configuration changes. The Advanced Settings dialog box is shown in Figure D.

Figure D: Advanced options for conn
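Before adding a target portal, it can be useful to confirm that the array is actually listening on the iSCSI port. A minimal sketch of such a reachability check; the helper function and the example host name are illustrative assumptions, not part of the initiator tooling:

```python
import socket


def portal_reachable(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    3260 is the default iSCSI port; pass a different port if your
    array has been configured with a non-standard one.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example (hypothetical host name):
# portal_reachable("iscsi-array.example.com")
```

A False result here means the Add Target Portal step will fail too, so it is worth checking firewalls and the array's portal configuration first.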

Linux Configuration

The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical bonded interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot-standby or load-balancing services. Additionally, link integrity monitoring may be performed.

First, install ifenslave, a tool for attaching and detaching slave network interfaces to a bonding device:

    sudo apt-get install ifenslave

Configuring your network interfaces and modules

Edit the /etc/network/interfaces file:

    sudo nano /etc/network/interfaces

Add the following (this is just an example; enter your own IP details):

    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    auto eth0
    iface
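For reference, a complete bonding stanza on Debian/Ubuntu with ifenslave might look like the following. This is a sketch under assumptions: the active-backup mode, the interface names eth0/eth1, and all addresses are placeholders, not values from the article.

```
# Hypothetical example: enslave eth0 and eth1 to bond0 in
# active-backup (hot-standby) mode. Replace names and addresses
# with your own.
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-mode active-backup
    bond-miimon 100
    bond-slaves none
```

Here bond-miimon 100 enables MII link monitoring every 100 ms, which is the link-integrity monitoring mentioned above; bond-mode selects hot standby versus one of the load-balancing modes.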

New white paper: Windows Storage Server 2008 R2 Architecture and Deployment

A new white paper about Windows Storage Server 2008 R2 Architecture and Deployment (including the Microsoft iSCSI Software Target 3.3) has just been published. Here's an outline of this content:

    Introduction
    Windows Storage Server 2008 R2 Overview
        Comparing Windows Server Operating System Storage Offerings
            Comparing Windows Storage Server with Windows Server
            Identifying Windows Storage Server Features
            What's New in Windows Storage Server 2008 R2
        Comparing Windows Storage Server 2008 R2 with Windows Server 2008 R2
        Windows Storage Server 2008 R2 Editions
        Identifying Storage Challenges
            Identify Scalability Storage Challenges
            Identify Availability Storage Challenges
            Identify Security Storage Challenges
            Identify Manageability Storage Challenges
            Identify Data Recovery Storage Challenges
        Identifying Windows Stora

Getting WINS-like computer name resolution over VPN in SBS 2008

One of these was something that I used for my convenience over a VPN connection from home. You see, the internal order-processing application that I wrote uses some shared folders to store temporary data, such as e-mails that are generated but not yet released to Exchange, or a local copy of images that are available on the Web site. This software, and our users, are used to referring to Windows file shares as \\COMPUTER-NAME\SHARE-NAME; for example, \\CYRUS\Pickup Holding, because for some reason some of the older servers are named after my boss's dead cats. When connecting through VPN to SBS 2008, however, that "suffix-less" name resolution was not working. So while \\CYRUS\Pickup Holding failed to resolve to anything, \\\Pickup Holding would work fine. This was super annoying. The reason this worked previously with our SBS 2003 installation is that it was acting as a WINS server, which provided this type of computer name resolution for us. SBS 2008
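The failure mode is easy to reproduce outside of Windows file sharing: without WINS (or a connection-specific DNS suffix on the VPN adapter), a bare computer name simply does not resolve, while a fully qualified name does. A small sketch of that fallback logic; the function and its dns_suffix parameter are illustrative, not part of any Windows API, and the suffix value would be your own domain:

```python
import socket


def resolve_host(name, dns_suffix=None):
    """Try the suffix-less name first, then the fully qualified form.

    Returns (candidate, ip) for the first name that resolves, or None
    if nothing does. This mimics what a WINS server (or a DNS suffix
    search list) would otherwise do transparently.
    """
    candidates = [name]
    if dns_suffix:
        candidates.append(f"{name}.{dns_suffix}")
    for candidate in candidates:
        try:
            return candidate, socket.gethostbyname(candidate)
        except socket.gaierror:
            continue
    return None
```

On a VPN client without WINS, the first lookup fails and only the suffixed candidate succeeds, which is exactly the behavior described above.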

Virtual Server 2005: How To Configure the Virtual DHCP Server

Instead of configuring a virtual machine as a DHCP server, you can use the virtual DHCP server for your virtual network. To configure the virtual DHCP server:

1. Open the Virtual Server Administration Website.
2. Under Virtual Networks, select Configure, and then click the virtual network.
3. In Virtual Network Properties, click DHCP server.
4. Check the Enabled checkbox, then configure the necessary DHCP server options.
5. Click OK.

Microsoft Hyper-V will not boot virtual SCSI devices

“Each IDE controller can have two devices. You can not boot from a SCSI controller. This means an IDE disk will be required. The boot disk will be IDE controller 0, Device 0. If you want a CDROM it will consume an IDE device slot.” Source: MSDN Blog

The hypervisor that runs the virtual BIOS does not support booting from a SCSI controller today, but it does support the following boot devices:

- CD
- IDE
- Legacy Network Adapter
- Floppy

The root reason is that SCSI is a synthetic device, and there is no VMBus until after boot. One might think that this shouldn't be a problem; after all, the virtual machines can still boot from regular IDE-based virtual disks. So where's the catch? The main problem is that in Virtual Server, virtual SCSI controllers have major performance benefits over virtual IDE controllers. In Virtual Server, it is recommended to attach the virtual disks to one or more SCSI controllers to improve disk input/output (I/O) performance. IDE is limited to