Clustering Server – Windows Server 2008 R2

Windows Server 2008 Clustering Options

Clustering in Windows Server 2008 R2 is used for scalability, load balancing and high availability. There are three main clustering methods available.

Round-Robin Clustering Server

The easiest to configure as it only requires DNS to provide basic load balancing by distributing the network load over various servers.

One serious drawback of Round-Robin is that if one of the servers goes down, DNS will continue to direct clients to the inactive server until the DNS record is manually removed.

Client caching is another problem with this configuration, as the client will always connect to the server held in its local cache.

Last but not least, each record is given equal weight, which means that you cannot set a preference for clients to connect to a more powerful server.
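The equal-weight rotation described above can be sketched in a few lines of Python. The hostnames and addresses are hypothetical; the point is that a round-robin DNS server simply rotates its answer list and never checks server health:

```python
from itertools import cycle

# Hypothetical A records returned for one hostname. Round-robin DNS
# rotates the answer order, giving every record equal weight.
records = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
rotation = cycle(records)

# Nine client lookups: each server receives the same share of traffic,
# whether or not it is actually up -- DNS performs no health checks.
answers = [next(rotation) for _ in range(9)]
print(answers.count("10.0.0.1"))  # each record is returned 3 times
```

If `10.0.0.2` were offline, it would still be handed out to a third of the clients until its record was deleted, which is exactly the drawback noted above.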

Network Load Balancing (NLB)

Adds scalability on top of the load balancing feature of Round-Robin.

Unlike Round-Robin, NLB automatically detects servers that have failed and redistributes client requests over the remaining servers.

As far as load balancing is concerned, the big advantage of NLB over Round-Robin in Windows Server 2008 clustering is that you can specify the share of the load each server will handle.
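A weighted distribution like NLB's per-host load setting can be simulated as a weighted random choice. The host names and percentages below are made up for illustration; the sketch only shows why weighting matters, not NLB's actual hashing algorithm:

```python
import random

# Hypothetical per-host load percentages, similar in spirit to the
# load weight you can set in an NLB port rule. Round-Robin DNS has
# no equivalent -- every record gets equal weight.
weights = {"BIGSERVER": 60, "SMALLSERVER1": 20, "SMALLSERVER2": 20}

random.seed(1)  # deterministic output for the example
hosts = list(weights)
picks = random.choices(hosts, weights=[weights[h] for h in hosts], k=10_000)

share = picks.count("BIGSERVER") / len(picks)
print(f"BIGSERVER handled {share:.0%} of requests")  # roughly 60%
```

Over many requests the powerful server ends up with close to its configured 60% share, while each smaller server takes about 20%.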

The scalability of NLB means that you can add additional servers to the farm with little administrative overhead.

Failover Clustering in Windows Server 2008

Provides availability in the case of a server or application failure.

Failover clustering in Windows Server 2008 usually involves a Storage Area Network (SAN) fabric with Logical Unit Numbers (LUNs), storage volumes that are physically connected to all servers in the cluster.

To improve fault tolerance, a Witness disk (also called a Quorum disk), which contains a copy of the cluster configuration database, is often deployed when clustering in Windows Server 2008.
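The role of the witness disk comes down to simple vote counting: the cluster keeps running only while a majority of votes are online, and with an even number of nodes the witness supplies the tie-breaking vote. This is a simplified sketch of that majority rule (the function name and signature are invented for illustration):

```python
# Simplified quorum arithmetic: each node contributes one vote, and a
# configured witness disk contributes one more. The cluster stays up
# only while more than half of all votes are online.
def has_quorum(nodes_online: int, total_nodes: int,
               witness_configured: bool = False,
               witness_online: bool = False) -> bool:
    votes_online = nodes_online + (1 if witness_configured and witness_online else 0)
    total_votes = total_nodes + (1 if witness_configured else 0)
    return votes_online > total_votes / 2

# Two-node cluster with one node down: the witness disk decides.
print(has_quorum(1, 2))                                              # False
print(has_quorum(1, 2, witness_configured=True, witness_online=True))  # True
```

Without the witness, losing one node of two means 1 of 2 votes, which is not a majority; with the witness online it becomes 2 of 3, so the surviving node keeps the cluster running.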

Windows 2008 Server Clustering Quorum Settings – http://technet.microsoft.com/en-us/library/cc770620%28v=ws.10%29

Network Load Balancing Cluster Setup and Configuration

  • Install two or more servers with identical configurations and applications
  • Install the NLB feature on all servers that will participate in the cluster
  • Launch the Network Load Balancing Manager from administrative tools to configure the cluster

The NLB manager allows you to set a default IP address and network interface for the cluster. This interface uses virtual IP addresses to load balance client traffic over the different servers. Priority is set by assigning a unique host ID to each server; the host with the lowest ID acts as the default host and handles all traffic not covered by a port rule.
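The default-host selection can be pictured as picking the lowest host ID among the servers still participating in the cluster. The host names and IDs below are hypothetical:

```python
# Hypothetical NLB host-ID table (unique IDs, 1-32). Traffic that
# matches no port rule goes to the default host: the lowest ID among
# the hosts still active in the cluster.
priorities = {"NODE1": 2, "NODE2": 1, "NODE3": 3}
online = {"NODE1", "NODE3"}  # NODE2 has failed and dropped out

def default_host(priorities, online):
    # Lowest host ID among the surviving hosts wins.
    return min((h for h in priorities if h in online), key=priorities.get)

print(default_host(priorities, online))  # NODE1 takes over as default host
```

While NODE2 (ID 1) is up it is the default host; once it fails, NODE1 (ID 2) automatically becomes the default.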

Port rules let you define specific ports or port ranges for load balancing. Any traffic not covered by a port rule is handled by the default host. You can apply port rules to a single host or to multiple hosts. The Affinity option in multiple-host filtering mode lets you decide whether you want NLB to direct multiple requests from the same client to the same cluster host or not (No Affinity).
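The effect of affinity can be sketched as a hashing decision: with Single affinity only the client's source IP feeds the hash, so repeat requests stick to one host; with No Affinity the source port is included too, so separate connections can spread out. This is an illustrative model only, not NLB's actual hashing algorithm:

```python
import hashlib

HOSTS = ["NODE1", "NODE2", "NODE3"]  # hypothetical cluster members

def pick_host(client_ip: str, client_port: int, affinity: str) -> str:
    # Single affinity hashes only the source IP; No Affinity hashes
    # the (IP, port) pair, so each connection may land elsewhere.
    key = client_ip if affinity == "single" else f"{client_ip}:{client_port}"
    digest = hashlib.md5(key.encode()).digest()
    return HOSTS[digest[0] % len(HOSTS)]

# Same client IP, two different source ports:
a = pick_host("203.0.113.7", 50001, "single")
b = pick_host("203.0.113.7", 50002, "single")
print(a == b)  # True -- Single affinity pins the client to one host
```

Single affinity is what you want when the application keeps per-client session state on the host, at the cost of a less even traffic spread.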

Data Storage Methods

Direct Attached Storage (DAS)

  • Directly connected to the local bus, either an internal disk or SCSI/FC (Fibre Channel) connected disks in a rack
  • Fast block-based performance, as opposed to file-based storage, which must copy over the whole file again when changes are made
  • Good performance in smaller networks requiring only a single server

Network Attached Storage (NAS)

  • Self-contained and uses Ethernet, USB, FireWire or eSATA to make a connection to the PC
  • Uses web-based management tools for configuration
  • File-based and limited by network speeds or connection media type

Storage Area Network (SAN)

  • Used in most large corporations/enterprises
  • Multiple server connections requiring extensive and expensive hardware such as dedicated switches, fiber optic cabling and host bus adapters
  • Divided into Logical Unit Numbers (LUNs) to represent an individual drive
  • Very fast and block-based, limited to 10 km
  • All of the above is referred to as the SAN Fabric

Fibre Channel SAN

  • Based on serial SCSI, expensive and complex
  • Most widely adopted but lacking built-in security

iSCSI SAN

  • SCSI over Ethernet instead of over a traditional SCSI cable
  • The iSCSI initiator (available in Windows Server 2008 R2 Administrative Tools) is used to set up a connection to the iSCSI target device
  • Internet Storage Name Service (iSNS) allows for the discovery and zoning of the iSCSI SAN in larger environments
  • Less expensive and simpler than FC, and supports long distances depending on your network
  • Can be secured by using CHAP, IPsec and configuring the appropriate firewall rules

Virtual Disk Specification (VDS) API

  • SAN appears as local storage as opposed to a mapped network drive
  • Allows an administrator to install the Storage Manager for SANs (SMfS) as well as Storage Explorer
  • More friendly and consolidated user interface than the iSCSI Initiator
