Thursday, 27 August 2015

Find Hyper-V Virtual Machine by IP Address

Sometimes you need to find a VM by IP address. This could be for various reasons; maybe the end user of a VM doesn't know what the machine is called in Hyper-V, for example.

I wrote the function in this script to do just that. Simply load the function and call:

find-vmip 10.20.30.40

and the VM that has this IP will be returned.


find-vmIP -ip 10.20.30.40

VMName                        Status  IPAddresses
------                        ------  -----------
(244) - Marc Turner Lab - DC  {Ok}    {10.20.30.40, fe80::851a:7585:a4bd:ce93}


function find-vmIP
{
    <#
       .Synopsis
      
       Finds the virtual machine on a Hyper-V server that has the IP address specified
       .Description
        Queries every VM on the local Hyper-V host and returns any whose
        network adapter holds the IP address specified.
       .Example
       find-vmIP -ip 10.20.30.40
        VMName                        Status    IPAddresses
        ------                        ------    ----------- 
        (244) - Marc Turner Lab - DC  {Ok}      {10.20.30.40, fe80::851a:7585:a4bd:ce93}


        AUTHOR: Marc Turner
        LASTEDIT: 26/08/2015
       .Link
        http://www.marcturner.co.uk
    #>
    param($IP)
   
    # Clear variables used previously
    $vms = $null
    $FoundHost = $null
    # if the IP address was specified, carry on, otherwise throw an error
    if ($IP)
    {
        # Get a list of all VMs and pipe it to get network adapter details
        try
        {
            $vms = get-vm | Get-VMNetworkAdapter
        }
        catch
        {
            throw $_.Exception.Message
        }
       
        # if VMs were found carry on, otherwise throw an error (this could be being run on a client without Hyper-V)
        if ($vms)
        {
            # Search through the list of VMs and find the match for the IP address; warn the user if not found.
            try
            {
                $FoundHost = $vms | Where-Object { $_.IPAddresses -like $IP } | Select-Object VMName, Status, IPAddresses
            }
            catch
            {
                throw $_.Exception.Message
            }
           
            if ($FoundHost)
            {
                return $FoundHost
            }
            else
            {
                Write-Warning "VM with the IP address '$IP' was not found"
            }               
        }
        else
        {
            throw {"No Virtual machines were found on this host"}
        }
    }
    else
    {
        throw {"The IP address to search for was not specified, use find-vmIP -ip 10.20.30.40"}
    }
}
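
The function as written only queries the local host. If you need to search a remote Hyper-V server, the Hyper-V cmdlets take a -ComputerName parameter (Server 2012 onwards), so the same technique works remotely; a minimal sketch, with HV01 as a made-up host name:

# Same technique against a remote Hyper-V host (HV01 is a placeholder)
Get-VM -ComputerName HV01 | Get-VMNetworkAdapter |
    Where-Object { $_.IPAddresses -contains '10.20.30.40' } |
    Select-Object VMName, Status, IPAddresses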

Thursday, 2 July 2015

Does the Active Directory user have an Exchange Mailbox?

Part of a script I built to deal with starters and leavers is to hide a leaver's mailbox from the GAL.

I do this because:

• We do not re-use user objects; we keep them in a disabled state so that references to sAMAccountNames in audit logs remain valid.

• Leavers' mailboxes stay online for 3 months after the leave date, as the line manager frequently requires access.

• After 3 months we archive and remove leavers' Exchange mailboxes, but as above the user object stays.

All leavers' accounts are in a generic “leavers OU”.

To hide the account from the GAL, the script loops through each user in the leavers OU and, if the hidden-from-GAL attribute on the mailbox isn't true, sets it.

Simple enough, but there will be users in there who no longer have Exchange mailboxes because theirs have been archived, so the script errors all over the place: the Get-Mailbox $user part fails for those objects.

So I want to wrap an IF statement in the loop so it only calls Get-Mailbox if the user has an Exchange mailbox.

How would I know? There are lots of obvious attributes I can think of, but how do I know which of them are removed when the mailbox is disabled or gone?


So, quite simply, I took a dump of Get-ADUser $user -Properties * (you need -Properties * for the Exchange attributes to show up) BEFORE disabling the mailbox, then again after, and compared them.
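
If you want to do the same comparison, something along these lines works (the file names are just for illustration):

# Dump the full attribute set before and after, then diff the two
Get-ADUser $user -Properties * | Out-File before.txt
# ...disable the mailbox, then...
Get-ADUser $user -Properties * | Out-File after.txt
Compare-Object (Get-Content before.txt) (Get-Content after.txt)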


The following attributes have data in them when a mailbox is present, and are null once the mailbox is disabled:


EmailAddress
homeMDB
legacyExchangeDN
mail
mailNickname
mDBUseDefaults
msExchDumpsterQuota
msExchDumpsterWarningQuota
msExchELCMailboxFlags
msExchHomeServerName
msExchMailboxGuid
msExchMailboxSecurityDescriptor
msExchMailboxTemplateLink
msExchMobileAllowedDeviceIDs
msExchMobileMailboxFlags
msExchOWAPolicy
msExchPoliciesIncluded
msExchRBACPolicyLink
msExchRecipientDisplayType
msExchRecipientTypeDetails
msExchTextMessagingState
msExchUserAccountControl
msExchVersion   
proxyAddresses
showInAddressBook
textEncodedORAddress
 



I used msExchMailboxGuid in my script:
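
For the check to work, $leavers has to be fetched with that attribute included; a minimal sketch, with a made-up OU path you would replace with your own:

# Hypothetical leavers OU - substitute your own distinguished name
$leavers = Get-ADUser -Filter * -SearchBase "OU=Leavers,DC=yourdomain,DC=local" `
    -Properties msExchMailboxGuid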


Foreach ($user in $leavers)
{
      # msExchMailboxGuid only has a value while a mailbox exists,
      # so skip users whose mailbox has already been removed
      If ($user.msExchMailboxGuid)
      {
             $mailbox = Get-Mailbox $user.SamAccountName
             # Only set the attribute if it isn't already set
             If ($mailbox.HiddenFromAddressListsEnabled -eq $False)
             {
                    Try
                    {
                           Set-Mailbox -Identity $User.SamAccountName -HiddenFromAddressListsEnabled $True
                    }
                    Catch
                    {
                           $_.Exception.Message
                    }
             }
      }
}

Sunday, 12 October 2014

Hyper-V Memory and Disk Allocations - Common Values

This post is more of a reminder for myself than something you will struggle to find elsewhere on the internet.

I work with Hyper-V a lot. Bizarrely, memory allocation is done in MB (who assigns less than a gig of RAM nowadays!) and disk space in GB (fair enough, but I find myself creating 1TB+ VHDs more often than sub-TB ones).


The table below lists some common conversions


MB to GB
Typical RAM allocations

MB       GB
1024     1
2048     2
4096     4
8192     8
12288    12
16384    16
32768    32
65536    64

GB to TB
Typical Disk allocations

GB       TB
1024     1
2048     2
3072     3
4096     4
5120     5
10240    10
15360    15
20480    20

Friday, 17 May 2013

ASP.NET fails to detect internet explorer 10 – The patches


We all know about the bug in the .NET 2 and .NET 4 browser definition files that prevents them from recognising certain browser types (namely IE10).

There are hotfixes available for this, but not via Microsoft Update; you have to request them, and the link is emailed to you.

This is an easy enough process; they can be requested from:


.NET 2.0

http://support.microsoft.com/kb/2600100 - for Win7 SP1/Windows Server 2008 R2 SP1, Windows Vista/Server 2008, Windows XP/Server 2003

http://support.microsoft.com/kb/2608565 - for Win7/Windows Server 2008 R2 RTM
 

Or, if you run Server 2008 R2 SP1, here are the direct download links to save time:

 

.NET 4


 

.NET 2.0

Thursday, 18 October 2012

Granting users rights to run SQL profiler without SA rights

If you have a group of users (say, software developers) who may occasionally need to run SQL Profiler but you do not wish to grant excessive rights such as SA, you can grant the “trace” right to a security group, or indeed to an individual user (but why would you do that?!).

Personally, I create a security group called “SQL Profiler Users” and grant the trace permission to that group. If a user needs to run Profiler, they can simply be placed in this group.
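
With the Active Directory module loaded, dropping a user into (or out of) the group is a one-liner; the user name here is made up:

Add-ADGroupMember -Identity "SQL Profiler Users" -Members jbloggs
Remove-ADGroupMember -Identity "SQL Profiler Users" -Members jbloggs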

To grant the permission (assuming the group already has a login on the SQL instance; if not, create one with CREATE LOGIN ... FROM WINDOWS first), run the following query:

USE master
GO
GRANT ALTER TRACE TO [YourDomain\SQL Profiler Users]

Tuesday, 10 July 2012

Changing IP Settings on an SQL Cluster

This simple five-minute job during an implementation really threw me.

To paint a picture: there is a three-node SQL cluster with two instances (two active nodes, one passive), isolated from clients behind a firewall.

To facilitate a hardened firewall policy permitting only TCP 1433 to the instance resource group IP addresses, and to ensure that only the instance resource group IP address listens on that port (as opposed to the default “All IPs” setting), some changes are required to the network settings in SQL Server Configuration Manager.

On a standalone SQL server it's simply a matter of changing the settings in the Configuration Manager GUI and restarting the SQL service, and the change takes effect. In a cluster, however, the changes revert to the previous values immediately after you click OK.

After digging into this issue a bit more, I discovered that what I was trying to do wasn't really documented anywhere, but some other articles pointed me in the general direction of the joys of quorum in clustering. In a nutshell, I was making a change on one box, but as the registry settings being changed are managed by the cluster service, the other two nodes in the cluster won quorum and overwrote the settings.







To change these settings, the cluster reservation checkpoint for the registry path needs to be removed, the changes made in the registry, and the cluster reservation checkpoint added back again.

The first step is to get the checkpoint name of the instance you are going to modify; run the following command:

Cluster res /checkpoints




Once you have the instance name, take the SQL server offline in failover cluster manager and run the following command:

cluster res "SQL Server (INSTANCENAME)" /removecheck: "Software\Microsoft\Microsoft SQL Server\MSSQL.INSTANCENAME\MSSQLSERVER"



You should now edit the registry, or use SQL Server Configuration Manager, to make the changes you wish to make.
Personally I prefer to edit the registry, as this enables you to delete the unused IP addresses and leave just the cluster IP in place, which is much tidier.
The registry path to edit is:



HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.INSTANCENAME\MSSQLServer\SuperSocketNetLib\Tcp\

Delete any of the IPx keys you don't need, but leave IPAll.
To specify the port for the IP address to listen on, simply set the TcpPort value and clear the TcpDynamicPorts value.
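
If you're scripting it, the same edit in PowerShell looks something like this (the instance name in the path is a placeholder; adjust to suit your version and instance):

# Set a static port of 1433 on IPAll and clear the dynamic port range
$tcp = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.INSTANCENAME\MSSQLServer\SuperSocketNetLib\Tcp'
Set-ItemProperty -Path "$tcp\IPAll" -Name TcpPort -Value '1433'
Set-ItemProperty -Path "$tcp\IPAll" -Name TcpDynamicPorts -Value ''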

Once you are happy with the changes, run the following command to add the checkpoint back into clustering:


cluster res "SQL Server (INSTANCENAME)" /addcheck: "Software\Microsoft\Microsoft SQL Server\MSSQL.INSTANCENAME\MSSQLSERVER"


Bring the SQL Server resource back online and check SQL Server Configuration Manager; the changes should have taken effect.

As a result of this change your firewall rules will be more secure, as the massive dynamic port range doesn't need to be permitted, and if need be both SQL instances can be failed over to the one server without ports conflicting. There is also the added bonus that the IP configuration in SQL Server Configuration Manager looks a whole lot tidier.



Thursday, 26 January 2012

Are SANs “old hat”? Bring on the DAS


Here goes my first “opinion” post, as opposed to one detailing a useful command or script.

I’m in the middle of a greenfield infrastructure redesign at the moment; a topic that has been playing on my mind is SAN vs. DAS. When I say SAN, I’m talking about a storage area network. That’s several trays of disk attached to a SAN head unit, which is then connected to a pair of fibre switches, or to a 10gig switch via Ethernet. Servers are then connected to the switches (via Fibre or Ethernet). When I say DAS, I’m talking about Direct Attached Storage. That’s several trays of disk attached to a DAS head unit, which is then connected to a number of servers via SAS cables, or indeed just a dumb tray or trays of disk connected directly to the server.

So, what are SANs traditionally used for? In a basic sense they present a large (or small) amount of scalable storage to a number of servers. Why do these servers need this storage? Either because the server hosting the application needs more disk space or spindles than you can fit into the server chassis, or because you are utilising clustering and need shared storage between two servers.

Clustering is what I think changes things. It has always struck me that you build clusters with multiple nodes, NICs, power supplies etc. to offer high availability, and yet the data is still in one place. Therefore the SAN is effectively a single point of failure.

Traditional Cluster

Although a SAN itself will have no single point of failure (dual controllers, multiple paths etc.), the data is still on a single RAID volume, so it could potentially be a victim of bitrot or of the RAID group having a hole punched in it. There is also the obvious risk that the physical file could become corrupt. Software vendors are obviously thinking the same: Exchange 2003/2007 was made highly available in the traditional cluster sense (multiple nodes with the DB on shared storage).

In Exchange 2010, however, you have the concept of DAGs. With DAGs the database itself is replicated to nodes rather than being shared. This means a SAN is not required to provide a highly available Exchange environment. If you can find a server with enough capacity you can run two (or three, or four) Exchange servers in a DAG and have mailbox databases fail over between them. This is actually more resilient than a traditional Exchange cluster, because the databases are being replicated rather than shared, which means you have protection against a corrupted database as well as hardware failure.

Exchange 2010 DAG.


The upcoming SQL 2012 “AlwaysOn” feature works in a very similar way to DAGs: the selected databases are replicated between cluster nodes. This means you can now have two core business systems (Exchange and SQL) made highly available without needing any kind of shared storage.

Failover clustering itself is also moving forward with “shared-storage-less” clusters. You can create a cluster and use a file share as a witness, which means that's another requirement for shared storage out of the window!

If you have services you would like to make highly available and they don't require a common area to write to, you can easily make them highly available in failover clustering by using a file share witness. If a service does require a common area to read from or write to, you could always create the directory locally on each server and use DFS replication to keep them in sync.

This brings me on nicely to DAS. With applications moving to a model where shared storage isn't required, the only real reason you would need a SAN is to present more storage or spindles to a server. Because there isn't the need for multiple servers to all access a common bit of storage, DAS comes into play. You can buy a dumb tray of 12 disks that can have additional trays daisy-chained off it to provide around 120-ish disks, for about £7k per tray (the Dell PowerVault MD1200, for example); these can be dual-connected to a single host. Or, if you want to connect more hosts to the DAS solution, you can get an “intelligent” DAS head unit that can then have multiple “dumb” trays connected to it to provide 192-ish disks. These can usually support four dual-connected hosts and can be picked up for about £12k (the Dell PowerVault MD3200, for example).

There are still applications that require shared storage, such as Hyper-V or VMware for example. In this scenario the MD3200 (intelligent) with a few MD1200’s (Dumb) connected to it would be ideal. You can have four nodes in the cluster sharing the storage.

The initial reaction I get to this suggestion is shock, as it's not very scalable like a SAN. I understand the argument, but on the flip side, do you really want 10 to 20 hosts sharing the backplane of your SAN (6 to 12 gig)? With the DAS solution, those four hosts are sharing a dual 6-gig backplane. If you need more servers then you'll probably need more storage, so buy another head unit instead of a dumb tray. This method leaves you with two clusters of four nodes, each with their own 12-gig backplane (2x 6 gig), as opposed to potentially eight nodes sharing the SAN's backplane.

I’m a big fan of DAS over a SAN for several reasons:

• The physical trays are cheaper than trays for a SAN.

• There is no requirement for fibre switches, which are eye-wateringly expensive, not only for the tin but also the port licencing.

• DAS is really simple: the cable goes from the head unit to the server. Simple is fast, and also easy to support and fix when it goes wrong.

• DAS removes a single point of failure. It's affordable to build two SQL clusters attached to two DAS arrays; unless you're a Fortune 500 company you wouldn't be able to do this with a SAN.

I can also see the downsides of DAS vs. SAN:

• The physical limits of SAS cables mean your servers need to be near the DAS head unit.

• The administrative overhead of many storage arrays vs. one SAN.

• DAS lacks some of the mirroring features that SANs offer.

Based on the above, though, I think the cost savings of going DAS, both in financial terms and in simplicity, outweigh the disadvantages.

I'm open to constructive feedback on this; I still have an open mind on the subject. However, at the moment I think SANs are a thing of the past in 90% of situations.

Thursday, 27 October 2011

Getting service tag / bios info using powershell

Following on from my post “Script to get service tag from Dell device”, I felt a bit “dirty” that I was using VB as opposed to my new favourite thing in the world, PowerShell!

You can use the cmdlet Get-WmiObject to pull all sorts of info from WMI, including the BIOS.

Therefore, this very simple one-liner will return not only the service tag (or serial number for non-Dell devices), but the BIOS version and a raft of other information.
Here is the command:

Get-WmiObject win32_bios | fl *


The result will look something like this



If you have WinRM remoting configured, you can run this on a remote device by starting an interactive session and then running the command:

PS > Enter-PSSession servername
[servername]: PS > Get-WmiObject win32_bios | fl *
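
Alternatively, Get-WmiObject can query a remote machine directly with its -ComputerName parameter (this goes over DCOM, so it doesn't need WinRM at all):

PS > Get-WmiObject win32_bios -ComputerName servername | fl *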

If you don’t have WinRM remoting enabled, run this command on the host to enable it.

PS > winrm quickconfig
WinRM already is set up to receive requests on this machine.
WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:

Create a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
Enable the WinRM firewall exception.

Make these changes [y/n]? y

WinRM has been updated for remote management.

Created a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
WinRM firewall exception enabled.
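
If you'd rather stay in PowerShell, Enable-PSRemoting does the same job as winrm quickconfig (plus a little more):

PS > Enable-PSRemoting -Force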



Wednesday, 28 September 2011

Script to get service tag from Dell device

I needed to get the service tag off my Dell laptop today, but I was in the middle of doing a million things, so didn't fancy undocking it to look underneath.

So I put this quick VB script together to get the service tag.

If you're not running any kind of NMS like SCCM, SCOM or SCE (which would gather the service tags for you), this may be useful if you need the tag from a remote host.
Enjoy!

' "." targets the local machine; replace with a hostname for a remote device
strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
' On Dell hardware the chassis serial number is the service tag
Set colSMBIOS = objWMIService.ExecQuery _
    ("Select * from Win32_SystemEnclosure")
For Each objSMBIOS in colSMBIOS
    Wscript.Echo "Dell Service Tag: " & objSMBIOS.SerialNumber
Next

Sunday, 25 September 2011

Configuring default FTP logon domain

If you're still stuck in the dark, insecure age of the internet and using FTP, you may want users to log in to your FTP site using their domain credentials.

By default the FTP service will authenticate against the local user database on the server itself (unless you enter your username in the domain\username format); you can, however, configure IIS to use a domain by default.

Take caution in doing this though: if you've ever put an FTP server on the internet, take a look at the event logs; it will show a ton of brute-force attacks within minutes.
By default FTP will be trying to authenticate locally, which is a much smaller attack surface (fewer users); as soon as you point it at your domain, it's going to have a much larger attack surface (more users).

You need to make sure you don't have any accounts such as “test”, or users like “mary” with passwords of “password” or any dictionary word at all. You should also tie the FTP site down to the specific users that need access, so that if an account does get compromised it can't be used to put data in the FTP directory.
With the above in mind, use an elevated command prompt on the FTP server (adsutil.vbs lives in C:\Inetpub\AdminScripts on IIS 6) to run the following:

cscript adsutil.vbs set msftpsvc/DefaultLogonDomain "YourDomainName"
This will set the default logon domain for all FTP sites.
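
To set it per site rather than globally, include the site ID in the metabase path; I believe the following targets site 1, but treat the exact path as an assumption and verify it against your metabase:

cscript adsutil.vbs set msftpsvc/1/DefaultLogonDomain "YourDomainName"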