Thursday, 26 January 2012

Are SANs “old hat”? Bring on the DAS


Here goes my first “opinion” post, as opposed to one detailing a useful command or script.

I’m in the middle of a greenfield infrastructure redesign at the moment, and a topic that has been playing on my mind is SAN vs. DAS. When I say SAN, I’m talking about a storage area network: several trays of disk attached to a SAN head unit, which is then connected to a pair of fibre switches, or to a 10Gb Ethernet switch. Servers are then connected to the switches (via fibre or Ethernet). When I say DAS, I’m talking about direct attached storage: several trays of disk attached to a DAS head unit, which is then connected to a number of servers via SAS cables, or indeed just a dumb tray or trays of disk connected directly to the server.

So, what are SANs traditionally used for? In a basic sense, they present a large (or small) amount of scalable storage to a number of servers. Why do these servers need this storage? Either because the server hosting the application needs more disk space or spindles than you can fit into the server chassis, or because you are utilising clustering and need shared storage between two or more servers.

Clustering is what I think changes things. It has always struck me that you build clusters with multiple nodes, NICs, power supplies and so on to offer high availability, and yet the data is still in one place. The SAN is therefore effectively a single point of failure.

[Diagram: a traditional cluster - multiple nodes, one set of data on shared storage]

Although a SAN itself will have no single point of failure (dual controllers, multiple paths and so on), the data still sits on a single RAID volume, so it could fall victim to bit rot or to the RAID group having a hole punched in it. There is also the obvious risk that the physical file could become corrupt. Software vendors are clearly thinking the same: Exchange 2003/2007 was made highly available in the traditional cluster sense (multiple nodes with the database on shared storage).

In Exchange 2010, however, you have the concept of DAGs (Database Availability Groups). With DAGs the database itself is replicated between nodes rather than shared. This means a SAN is not required to provide a highly available Exchange environment. If you can find a server with enough capacity, you can run two (or three, or four) Exchange servers in a DAG and have mailbox databases fail over between them. This is actually more resilient than a traditional Exchange cluster, because the databases are replicated rather than shared, which gives you protection against a corrupted database as well as against hardware failure.

[Diagram: an Exchange 2010 DAG - database copies replicated between nodes]
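As a rough sketch of what’s involved (the server, share and database names below are placeholders, not from a real build), the EMS commands to stand up a DAG look something like this:

# Create the DAG, pointing the witness at a plain file server - no shared storage involved
New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS1 -WitnessDirectory C:\DAG1

# Add the mailbox servers to the DAG
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX1
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer EX2

# Replicate an existing mailbox database to the second node
Add-MailboxDatabaseCopy -Identity DB1 -MailboxServer EX2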


The upcoming SQL Server 2012 “AlwaysOn” feature works in a very similar way to DAGs: the selected databases are replicated between cluster nodes. This means you can now have two core business systems (Exchange and SQL) made highly available without needing any kind of shared storage.

Failover clustering itself is also moving forward with “shared-storage-less” clusters. You can create a cluster and use a file share as the quorum witness, which means that’s another requirement for shared storage out of the window!

If you have services you would like to make highly available and they don’t require a common area to write to, you can easily make them highly available in failover clustering by using a file share witness (see the sketch below). If a service does require a common area to read or write data, you could always create the directory locally on each server and use DFS Replication to keep them in sync.
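To give a flavour of the “no shared storage” cluster side of this, here is a minimal sketch using the FailoverClusters PowerShell module (Server 2008 R2 onwards); the node, cluster and share names are made up:

Import-Module FailoverClusters

# Build a two-node cluster with no shared disks at all
New-Cluster -Name CLUSTER1 -Node NODE1,NODE2 -NoStorage

# Use a file share as the witness instead of a shared quorum disk
Set-ClusterQuorum -Cluster CLUSTER1 -NodeAndFileShareMajority "\\FS1\ClusterWitness"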

This brings me on nicely to DAS. With applications moving to a model where shared storage isn’t required, the only real reason you would need a SAN is to present more storage or spindles to a server. Because there is no need for multiple servers to all access a common piece of storage, DAS comes into play. You can buy a dumb tray of 12 disks, daisy-chain additional trays off it to reach around 120 disks, and pay about £7k per tray (the Dell PowerVault MD1200, for example); these can be dual-connected to a single host. If you want to connect more hosts to the DAS solution, you can get an “intelligent” DAS head unit that can have multiple dumb trays connected to it, supporting around 192 disks. These usually support four dual-connected hosts and can be picked up for about £12k (the Dell PowerVault MD3200, for example).

There are still applications that require shared storage, Hyper-V or VMware for example. In that scenario, an MD3200 (intelligent) with a few MD1200s (dumb) connected to it would be ideal: you can have four nodes in the cluster sharing the storage.

The initial reaction I get to this suggestion is shock, because it isn’t scalable in the way a SAN is. I understand the argument, but on the flip side, do you really want 10 to 20 hosts sharing the backplane of your SAN (6 to 12 gig)? With the DAS solution, those four hosts are sharing a dual 6-gig backplane. If you need more servers then you’ll probably need more storage, so buy another head unit instead of a dumb tray. This method leaves you with two clusters of four nodes, each with its own 12-gig backplane (2x 6 gig), as opposed to potentially eight nodes sharing the SAN’s backplane.

I’m a big fan of DAS over a SAN for several reasons:

- The physical trays are cheaper than trays for a SAN.

- There is no requirement for fibre switches, which are eye-wateringly expensive, not only for the tin but also for the port licensing.

- DAS is really simple: the cable goes from the head unit to the server. Simple is fast, and also easy to support and fix when it goes wrong.

- DAS removes a single point of failure. It’s affordable to build two SQL clusters attached to two DAS arrays; unless you’re a Fortune 500 company, you wouldn’t be able to do this with a SAN.

I can also see the downsides of DAS vs. SAN:

- The physical limit of SAS cable length means your servers need to be near the DAS head unit.

- There is the administrative overhead of managing many storage arrays vs. one SAN.

- DAS lacks some of the mirroring features that SANs offer.

Based on the above, though, I think the cost savings of going DAS, both in financial terms and in simplicity, outweigh the disadvantages.

I’m open to constructive feedback on this; I still have an open mind on the subject. At the moment, however, I think SANs are a thing of the past in 90% of situations.

Thursday, 27 October 2011

Getting service tag / BIOS info using PowerShell

Following on from my post “Script to get service tag from Dell device”, I felt a bit “dirty” that I was using VB as opposed to my new favourite thing in the world, PowerShell!

You can use the Get-WmiObject cmdlet to pull all sorts of information from WMI, including the BIOS.

Therefore, this very simple one-liner will return not only the service tag (or the serial number for non-Dell devices), but the BIOS version and a raft of other information.
Here is the command:

Get-WmiObject Win32_BIOS | Format-List *


The result will look something like this:
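(The output below is trimmed to the more useful properties, and the values are illustrative rather than from a real machine; Format-List * returns a couple of dozen properties.)

SMBIOSBIOSVersion : A11
Manufacturer      : Dell Inc.
Name              : Phoenix ROM BIOS PLUS Version 1.10 A11
SerialNumber      : 4Z7X8Y1
Version           : DELL   - 15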



If you have WinRM remoting configured, you can run this on a remote device by starting an interactive session, and then running the command

PS> Enter-PSSession servername
[servername]: PS> Get-WmiObject Win32_BIOS | Format-List *
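Alternatively, Get-WmiObject can talk to a remote machine directly over RPC/DCOM (no WinRM needed) using its -ComputerName parameter:

Get-WmiObject Win32_BIOS -ComputerName servername | Format-List *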

If you don’t have WinRM remoting enabled, run this command on the host to enable it.

PS > winrm quickconfig
WinRM already is set up to receive requests on this machine.
WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:

Create a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
Enable the WinRM firewall exception.

Make these changes [y/n]? y

WinRM has been updated for remote management.

Created a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
WinRM firewall exception enabled.



Wednesday, 28 September 2011

Script to get service tag from Dell device

I needed to get the service tag off my Dell laptop today, but I was in the middle of doing a million things, so didn’t fancy undocking it to look underneath.

So I put this quick VB script together to get the service tag.

If you’re not running any kind of management system like SCCM, SCOM or SCE (which would gather the service tags for you), this may be useful if you need the tag from a remote host.
Enjoy!

' Query WMI on the target machine ("." means the local machine; use a hostname for a remote device)
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")

' The chassis serial number is the service tag on Dell hardware
Set colSMBIOS = objWMIService.ExecQuery _
    ("Select * from Win32_SystemEnclosure")

For Each objSMBIOS In colSMBIOS
    WScript.Echo "Dell Service Tag: " & objSMBIOS.SerialNumber
Next
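Save it as something like servicetag.vbs (the file name is arbitrary) and run it from a command prompt with cscript:

cscript //nologo servicetag.vbs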

Sunday, 25 September 2011

Configuring default FTP logon domain

If you’re still stuck in the dark, insecure age of the internet and using FTP, you may want users to log in to your FTP site using their domain credentials.

By default, the FTP service will authenticate against the local user database on the server itself (unless you enter your username in the domain\username format); you can, however, configure IIS to use a domain by default.

Take caution in doing this, though. If you’ve ever put an FTP server on the internet, take a look at the event logs: it will have had a ton of brute-force attacks on it within minutes.
By default FTP will be trying to authenticate locally, which is a much smaller attack surface (fewer users); as soon as you point it at your domain, it’s going to have a much larger attack surface (more users).

You need to make sure you don’t have any accounts such as “test”, or users like “mary” with passwords of “password” or any dictionary word at all. You should also tie the FTP site down to the specific users that need access, so that if an account does get compromised it can’t be used to put data in the FTP directory.
With the above in mind, use an elevated command prompt to run the following on the FTP server:

cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set MSFTPSVC/DefaultLogonDomain "YourDomainName"
This will set the default logon domain for all FTP sites.
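If you only want to change a single FTP site rather than the whole service, I believe you can target the site by its metabase ID (1 being the first site), along these lines:

cscript %SystemDrive%\Inetpub\AdminScripts\adsutil.vbs set MSFTPSVC/1/DefaultLogonDomain "YourDomainName"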

Wednesday, 21 September 2011

Keeping up to date with technology (Specifically Microsoft)

There is plenty going on with Microsoft technology at the moment: Windows 8, Windows Server 8, cloud, Configuration Manager 2012; the list goes on.

Keeping up to date with all of this while still doing a day job is a struggle.
I use the RSS feed functionality in Outlook with feeds from a select few blogs, so when something interesting comes along it’s dropped straight into my Outlook.

Below is a list of feeds that I use:

Ctrl P - The Data Protection Manager Blog! - http://blogs.technet.com/b/dpm/rss.aspx

Windows Server Division WebLog - http://blogs.technet.com/b/windowsserver/rss.aspx

Windows Virtualization Team Blog - http://blogs.technet.com/b/virtualization/rss.aspx

Forefront Team Blog - http://blogs.technet.com/b/forefront/rss.aspx

System Center Configuration Manager Team Blog - http://blogs.technet.com/b/configmgrteam/rss.aspx

Microsoft Forefront Unified Access Gateway Product Team Blog - http://blogs.technet.com/b/edgeaccessblog/rss.aspx

Microsoft Server and Cloud Platform Blog - http://blogs.technet.com/b/server-cloud/rss.aspx

TechNet Blogs - http://blogs.technet.com/b/MainFeed.aspx?Type=BlogsOnly

The Configuration Manager Support Team Blog - http://blogs.technet.com/b/configurationmgr/rss.aspx

The Microsoft Application Virtualization Blog - http://blogs.technet.com/b/appv/rss.aspx

The WSUS Support Team Blog - http://blogs.technet.com/b/sus/rss.aspx

Enterprise Strategy UK - http://blogs.technet.com/b/enterprise_strategy_uk/rss.aspx

Friday, 19 August 2011

Viewing queues on all hub transport servers in one handy PowerShell command

I can’t take any credit for this: a colleague and I came up with the idea that we needed a way of viewing the queues on all of our hub transport servers in one place, as opposed to having to connect to each one individually; it just so happened that he came up with the goods quicker than I did!

So what is the problem? The queue viewer in the EMC will only display the queues on the server you have selected, and the same goes for the PowerShell cmdlet Get-Queue: you have to specify a hub transport server.

The solution: pipe the results of Get-ExchangeServer, filtered to return only hub transport servers, into Get-Queue.
Here it is - enjoy!

Get-ExchangeServer | Where-Object { $_.IsHubTransportServer -eq $true } | Get-Queue | Sort-Object MessageCount -Descending
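A small variation on the same idea: if you only care about queues that actually have mail in them, filter the empty ones out before sorting:

Get-ExchangeServer | Where-Object { $_.IsHubTransportServer -eq $true } | Get-Queue | Where-Object { $_.MessageCount -gt 0 } | Sort-Object MessageCount -Descending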

Thanks Jon!

Monday, 1 August 2011

Creating a dynamic distribution group based on any Active Directory attribute in Exchange 2010

A common requirement for most businesses, I’m sure, is to be able to send a mail to all users who are located in a specific building.

A dynamic distribution group based on the office attribute is surely the answer. Well, yes it is, but not using the Exchange Management Console.

I have the office attribute set for each user within Active Directory.




However, if you use the Exchange Management Console to build your query, the options are limited and do not include the office attribute.



Although it isn’t possible using the EMC, it can be done in PowerShell.

The New-DynamicDistributionGroup cmdlet doesn’t natively support anything other than the attributes you see listed in the EMC; however, you can use a RecipientFilter to specify any attribute you like.

The command below will create a dynamic distribution group called “Users in Example Office Name”, containing any user with the office location set to “Example Office Name”:

New-DynamicDistributionGroup -Name "Users in Example Office Name" -OrganizationalUnit "domain.net\users" -RecipientFilter { ((RecipientType -eq 'UserMailbox') -and (Office -eq 'Example Office Name')) }

This command can be extended further using the -and operator. The command below would create the same dynamic distribution group, only the members would be those who are in the “Example Office Name” building AND whose manager is James Bond (note that the Manager property in a recipient filter is matched on the manager’s distinguished name, not the display name):



New-DynamicDistributionGroup -Name "Users in Example Office Name" -OrganizationalUnit "domain.net\users" -RecipientFilter { ((RecipientType -eq 'UserMailbox') -and (Manager -eq 'CN=James Bond,OU=Users,DC=domain,DC=net') -and (Office -eq 'Example Office Name')) }
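Once the group exists, it’s worth previewing who will actually receive mail sent to it. Get-Recipient can evaluate the filter the group stores:

$group = Get-DynamicDistributionGroup "Users in Example Office Name"
Get-Recipient -RecipientPreviewFilter $group.RecipientFilter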

Wednesday, 1 June 2011

A quick way to set calendar permissions using PowerShell

A common request from users is to grant others access to their calendars.
You can either talk the user through doing it themselves, or set up a new Outlook profile to open their mailbox and set it yourself using the GUI; both are time consuming.
This simple PowerShell command allows you to set the permissions with ease:


Add-MailboxFolderPermission -Identity "USERNAME:\Calendar" -User "Username of person who needs access" -AccessRights Reviewer



The -Identity parameter is the mailbox whose calendar is being shared; the -User parameter is the person you are granting access to.
The -AccessRights parameter is the level of access you wish to grant; the link below lists the other values you can use:


http://technet.microsoft.com/en-us/library/dd298062.aspx
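To check what is already set on a calendar (or to confirm what you’ve just granted), the matching Get- cmdlet does the reverse:

Get-MailboxFolderPermission -Identity "USERNAME:\Calendar"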

Friday, 13 May 2011

Using a PAC file to set proxy settings

There are many ways to configure proxy settings: via a GPO, as part of a build, or with an application.

Proxy settings can cause issues for mobile users when they use their device away from the corporate LAN, as the proxy server will not be reachable; this renders the internet browser unusable (unless, of course, DirectAccess has been implemented).

There are many solutions to this problem; some common ones are:

1. Teach users to enable and disable proxy settings. This is not the most elegant solution, is likely to cause a fair number of support calls, and also means proxy settings cannot be enforced.

2. Run a third-party app that users can click to turn the proxy on or off. I’m not a fan of applications that sit there using up resources for no real reason.

3. Run a login script that sets the proxy settings if you are connected to the corporate LAN, and doesn’t if you are not. This is a long-winded way of doing it, and is not 100% effective.

In my opinion, the most effective and efficient way of configuring proxy settings is to use a proxy auto-config (PAC) file.
A PAC file contains a JavaScript function, FindProxyForURL(url, host), which returns a string of one or more access method specifications; these tell the browser to use a particular proxy server or to connect directly.

You configure your browser (this works in all popular browsers) to use a script to configure its proxy settings, and that setting remains in place permanently. If the PAC file is placed on a web server accessible only from within the corporate LAN, then when the user is away from the LAN the config file is not found, and therefore no proxy is used.


When the user is within the LAN, the file is found and the proxy settings are applied.
Some say that a login script can achieve this too; however, a login script requires you to log in for it to take effect.


Take a scenario where a user is in the office, closes the lid on his or her laptop, gets on the train, then opens the lid and connects via 3G.
If proxy settings were configured with a login script, the office proxy settings would still be present unless the user logged off and on again.
With the PAC method in place, the browser looks for the settings each time a page is requested, so it would fail to find the config file and connect directly.

Below is an example PAC file which can be modified to suit your needs. It could be further extended to look at the current IP address of the client and return a different proxy depending on where the client is: if the client is within an IP range associated with the Paris office, the Paris proxy would be returned, and if the client is on a New York range, the New York proxy (there’s a sketch of this after the example).


function FindProxyForURL(url, host)
{
    // Direct connections to these hosts
    if (isPlainHostName(host) ||
        (host == "127.0.0.1") ||
        (host == "www.a-whole-domain.com") ||
        (shExpMatch(host, "*.a-entire-domain.com")) ||
        (shExpMatch(host, "10.20.30.*"))) {
        return "DIRECT";
    } else {
        return "PROXY proxy-server.domain.com:8080";
    }
}



Within this file, the IP range 10.20.30.0 - 10.20.30.255 is accessed directly (bypassing the proxy), as is the host www.a-whole-domain.com; anything under the domain a-entire-domain.com also bypasses the proxy. Everything else is directed at the proxy server "proxy-server.domain.com" on port 8080.
Add additional sites to the proxy bypass list by copying an existing line and pasting it below.
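As for the per-office idea mentioned above, here is a rough sketch using the standard PAC helper functions myIpAddress() and isInNet(); the IP ranges and proxy names are made up for illustration:

function FindProxyForURL(url, host)
{
    // Pick a proxy based on which office range the client's IP falls in
    if (isInNet(myIpAddress(), "10.1.0.0", "255.255.0.0")) {
        return "PROXY paris-proxy.domain.com:8080";    // Paris range
    }
    if (isInNet(myIpAddress(), "10.2.0.0", "255.255.0.0")) {
        return "PROXY newyork-proxy.domain.com:8080";  // New York range
    }
    return "DIRECT"; // unknown range, probably off the corporate LAN
}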


Although a WPAD file could also offer similar functionality, in my experience a PAC file is much more flexible and will enable changes to take effect instantly.

Tuesday, 25 January 2011

Using PowerShell to grant access to all user mailboxes, or a whole Exchange database

You may have a requirement to be able to open any user’s mailbox in your Exchange 2010 environment.

The first thing to consider is how you will control access: will you add individual users, or a security group with users in it?
A security group is by far the most efficient and tidiest, so this post will assume you are using a security group.

Method one

The first option is to give the security group full access to all user mailboxes

Advantage

The permissions will follow the mailbox around when it is moved between databases

Disadvantage

You will have to apply the permission to all new users you create

To use this method, use the Exchange Management Shell (also known as the EMS) to get all the mailboxes in your organisation, and then pipe each one into a command that sets the permissions:




# Load the Exchange snap-in if the shell doesn't already have it
# (this is the Exchange 2010 snap-in; the 2007-era name was Microsoft.Exchange.Management.PowerShell.Admin)
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010 -ErrorAction SilentlyContinue

# Get every mailbox in the organisation
$userAccounts = Get-Mailbox -ResultSize Unlimited

# Grant the security group full access to each mailbox
ForEach ($user in $userAccounts)
{
    Add-MailboxPermission -Identity $user -User "Your Security Group Name" -AccessRights FullAccess
}
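To spot-check the result on any one mailbox, something like this should do (substitute whichever group name you used above):

Get-MailboxPermission -Identity "SomeUsername" | Where-Object { $_.User -like "*Your Security Group Name*" }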


Method two

The second option is to apply the permissions to the Exchange mailbox database, so all mailboxes within that database will inherit those permissions.
Advantage

All new users will automatically inherit the permissions you set on the database

Disadvantage

If different permissions are set on different databases, when users are moved between databases they will not be subject to the permissions that were assigned to the original database.

Use the EMS to run the following command:


Add-ADPermission -Identity "YourDatabaseName" -User "Your Security Group Name" -AccessRights GenericAll