
Friday, 17 May 2013

ASP.NET fails to detect internet explorer 10 – The patches


We all know about the bug in the .NET 2 and .NET 4 browser definition files that prevents ASP.NET from recognising certain browser types (namely IE10).

There are hotfixes available for this, but not via Microsoft Update – you have to request them and the link is emailed to you.

This is an easy enough process; they can be requested from:


.NET 2.0

http://support.microsoft.com/kb/2600100 - for Win7 SP1/Windows Server 2008 R2 SP1, Windows Vista/Server 2008, Windows XP/Server 2003

http://support.microsoft.com/kb/2608565 - for Win7/Windows Server 2008 R2 RTM
 

Or, if you run Server 2008 R2 SP1, here are the direct download links to save time:

 

.NET 4


 

.NET 2.0

Tuesday, 10 July 2012

Changing IP Settings on an SQL Cluster

This simple five-minute job during an implementation really threw me.

To paint a picture: there is a three-node SQL cluster with two instances (two active nodes, one passive), isolated from clients behind a firewall.

To support a hardened firewall policy that permits only TCP 1433 to the instance resource group IP addresses, and to ensure that only the instance resource group IP address listens on that port (as opposed to the default “listen on all IPs” setting), some changes are required to the network settings in SQL Server Configuration Manager.

On a standalone SQL server it’s simply a matter of changing the settings in the Configuration Manager GUI and restarting the SQL service, and the change takes effect. On a cluster, however, the changes revert to the previous values immediately after clicking OK.

After digging into this issue a bit more I discovered that what I was trying to do wasn’t really documented anywhere, but some other articles pointed me in the general direction of the joys of quorum in clustering. In a nutshell, I was making a change on one box, but because the registry settings being changed are managed by the cluster service, the other two nodes in the cluster won quorum and overwrote the settings.







To change these settings, the cluster’s registry checkpoint for the registry path needs to be removed, the changes made in the registry, and then the checkpoint added back again.

The first step is to get the checkpoint name of the instance you are going to modify; run the following command:

Cluster res /checkpoints




Once you have the instance name, take the SQL Server resource offline in Failover Cluster Manager and run the following command:

cluster res "SQL Server (INSTANCENAME)" /removecheck: "Software\Microsoft\Microsoft SQL Server\MSSQL.INSTANCENAME\MSSQLSERVER"



You should now edit the registry or use SQL Server Configuration Manager to make the changes you wish to make.
Personally I prefer to edit the registry, as this enables you to delete the unused IP addresses and just leave the cluster IP in place, which is much tidier.
The registry path to edit is:



HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.INSTANCENAME\MSSQLServer\SuperSocketNetLib\Tcp\

Delete any of the IPx keys you don’t need, but leave IPAll.
To specify the port for the IP address to listen on, simply modify the TcpPort value and remove the value from TcpDynamicPorts.
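
If you would rather script the registry change than click around in regedit, reg.exe can do it. A rough sketch, assuming the instance key is MSSQL10_50.INSTANCENAME and that the cluster IP lives under the IP1 key (check which IPx key actually holds your cluster IP before running it):

rem set the static port on the key holding the cluster IP (IP1 here - adjust to suit)
reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.INSTANCENAME\MSSQLServer\SuperSocketNetLib\Tcp\IP1" /v TcpPort /t REG_SZ /d "1433" /f

rem clear the dynamic port value on the same key
reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.INSTANCENAME\MSSQLServer\SuperSocketNetLib\Tcp\IP1" /v TcpDynamicPorts /t REG_SZ /d "" /f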

Once you are happy with the changes, run the following command to add the checkpoint back into clustering:


cluster res "SQL Server (INSTANCENAME)" /addcheck: "Software\Microsoft\Microsoft SQL Server\MSSQL.INSTANCENAME\MSSQLSERVER"


Bring the SQL Server resource back online and check SQL Server Configuration Manager; the changes should have taken effect.
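
If you prefer to stay at the command prompt for the offline/online steps as well, cluster.exe handles those too (same placeholder instance name as above):

rem take the resource offline before removing the checkpoint, bring it back online once finished
cluster res "SQL Server (INSTANCENAME)" /offline
cluster res "SQL Server (INSTANCENAME)" /online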

As a result of this change your firewall rules will be more secure, as the massive dynamic port range doesn’t need to be permitted, and if need be both SQL instances can be failed over to the one server without their ports conflicting. There is also the added bonus that the IP configuration in SQL Server Configuration Manager looks a whole lot tidier.



Thursday, 26 January 2012

Are SANs “old hat”? Bring on the DAS


Here goes my first “opinion” post, as opposed to one detailing a useful command or script.

I’m in the middle of a greenfield infrastructure redesign at the moment; a topic that has been playing on my mind is SAN vs. DAS. When I say SAN, I’m talking about a storage area network. That’s several trays of disk attached to a SAN head unit, which is then connected to a pair of fibre switches, or to a 10gig switch via Ethernet. Servers are then connected to the switches (via Fibre or Ethernet). When I say DAS, I’m talking about Direct Attached Storage. That’s several trays of disk attached to a DAS head unit, which is then connected to a number of servers via SAS cables, or indeed just a dumb tray or trays of disk connected directly to the server.

So, what are SANs traditionally used for? In a basic sense they present a large (or small) amount of scalable storage to a number of servers. Why do these servers need this storage? Either because the server hosting the application needs more disk space or spindles than you can fit into the server chassis, or because you are utilising clustering and need shared storage between two servers.

Clustering is what I think changes things. It has always struck me that you build clusters with multiple nodes, NICs, power supplies etc. to offer high availability, and yet the data is still in one place. Therefore the SAN is effectively a single point of failure.

Traditional Cluster

Although a SAN itself will have no single point of failure (dual controllers, multiple paths etc.), the data is still on a single RAID volume, so it could potentially be a victim of bit rot or of the RAID group having a hole punched in it. There is also the obvious risk that the physical file could become corrupt. Software vendors are obviously thinking the same: Exchange 2003/2007 was made highly available in the traditional cluster sense (multiple nodes with the DB on shared storage).

In Exchange 2010, however, you have the concept of DAGs. With DAGs the database itself is replicated between nodes rather than being shared. This means a SAN is not required to provide a highly available Exchange environment. If you can find a server with enough capacity you can run two (or three, or four) Exchange servers in a DAG and have mailbox databases fail over between them. This is actually more resilient than a traditional Exchange cluster because the databases are being replicated rather than shared, which means you have protection against a corrupted database as well as against hardware failure.

Exchange 2010 DAG.


The upcoming SQL Server 2012 “AlwaysOn” feature works in a very similar way to DAGs: the selected databases are replicated between cluster nodes. This means you can now have two core business systems (Exchange and SQL) made highly available without needing any kind of shared storage.

Failover clustering in itself is also moving forward with “shared storage-less clusters”. You can create a cluster and use a file share as a witness, which means that’s another requirement for shared storage out of the window!

If you have services you would like to make highly available and they don’t require a common area to write to, you can easily make them highly available in failover clustering by using a file share witness. If a service does require a common area to read or write data to, then you could always create the directory locally on each server and use DFS replication to keep them in sync.

This brings me on nicely to DAS. With applications moving to a model where shared storage isn’t required, the only real reason you would need a SAN is to present more storage or spindles to a server. Because there isn’t the need for multiple servers to all access a common bit of storage, DAS comes into play. You can buy a dumb tray of 12 disks that can have additional trays daisy-chained off it to provide around 120-ish disks, for about £7k per tray (the Dell PowerVault MD1200, for example); these can be dual-connected to a single host. Or, if you want to connect more hosts to the DAS solution, you can get an “intelligent” DAS head unit that can then have multiple “dumb” trays connected to it to provide 192-ish disks. These can usually support four dual-connected hosts and can be picked up for about £12k (the Dell PowerVault MD3200, for example).

There are still applications that require shared storage, such as Hyper-V or VMware for example. In this scenario the MD3200 (intelligent) with a few MD1200s (dumb) connected to it would be ideal. You can have four nodes in the cluster sharing the storage.

The initial reaction I get to this suggestion is one of shock, as it isn’t scalable in the way a SAN is. I understand the argument, but on the flip side, do you really want 10 to 20 hosts sharing the backplane of your SAN (6 to 12 gig)? With the DAS solution those four hosts are sharing a dual 6-gig backplane. If you need more servers then you’ll probably need more storage, so buy another head unit instead of a dumb tray. This method leaves you with two clusters of four nodes, each with their own 12-gig backplane (2x 6 gig), as opposed to potentially eight nodes sharing the SAN’s backplane.

I’m a big fan of DAS over a SAN for several reasons:

·         The physical trays are cheaper than trays for a SAN

·         There is no requirement for fibre switches, which are eye-wateringly expensive, not only for the tin but also for the port licencing

·         DAS is really simple as the cable goes from the head unit to the server. Simple is fast and also easy to support and fix when it goes wrong.

·         DAS removes a single point of failure. It’s affordable to build two SQL clusters attached to two DAS arrays. Unless you’re a Fortune 500 company, you wouldn’t be able to do this with a SAN.

I can also see the downsides of DAS vs. SAN:

·         The physical limit of SAS cable length means your servers need to be near the DAS head unit.

·         The administrative overhead of many storage arrays vs one SAN.

·         DAS lacks some of the mirroring features that SANs offer.

Based on the above, though, I think the savings of going DAS, both in financial terms and in simplicity, outweigh the disadvantages.

I’m open to constructive feedback on this; I still have an open mind on the subject. However, at the moment I think SANs are a thing of the past in 90% of situations.

Thursday, 27 October 2011

Getting service tag / BIOS info using PowerShell

Following on from my post “script to get service tag from dell device”, I felt a bit “dirty” that I was using VB as opposed to my new favourite thing in the world, PowerShell!

You can use the cmdlet Get-WmiObject to pull all sorts of info from WMI, including the BIOS.

This very simple one-liner will return not only the service tag (or serial number for non-Dell devices), but the BIOS version and a raft of other information.
Here is the command:

Get-WmiObject Win32_BIOS | fl *


The result will look something like this (a full list of BIOS properties, including the serial number and BIOS version):
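
If you only want the tag itself rather than the full property list (to drop into an inventory script, say), you can grab the single property:

PS > (Get-WmiObject Win32_BIOS).SerialNumber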



If you have WinRM remoting configured, you can run this on a remote device by starting an interactive session, and then running the command

PS> Enter-PSSession servername
[servername]: PS> Get-WmiObject Win32_BIOS | fl *
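
Get-WmiObject also has its own -ComputerName parameter (which talks to the remote machine over DCOM/RPC rather than WinRM), so for a quick one-off query you can skip the remote session entirely:

PS > Get-WmiObject Win32_BIOS -ComputerName servername | fl *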

If you don’t have WinRM remoting enabled, run this command on the host to enable it.

PS > winrm quickconfig
WinRM already is set up to receive requests on this machine.
WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:

Create a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
Enable the WinRM firewall exception.

Make these changes [y/n]? y

WinRM has been updated for remote management.

Created a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.
WinRM firewall exception enabled.



Sunday, 25 September 2011

Configuring default FTP logon domain

If you’re still stuck in the dark, insecure age of the internet and using FTP, you may want users to log in to your FTP site using their domain credentials.

By default, the FTP service will use the local user database on the server itself (unless you enter your username in the domain\username format). You can, however, configure IIS to use a domain by default.

Take caution in doing this though: if you’ve ever put an FTP server on the internet, take a look at the event logs; it will have had a ton of brute-force attacks on it within minutes.
By default FTP will be trying to authenticate locally, which is a much smaller attack surface (fewer users); as soon as you point it at your domain, it’s going to have a much larger attack surface (more users).

You need to make sure you don’t have any accounts such as “test” or users like “mary” with passwords of “password” or any dictionary word at all. You should also tie the FTP site down to the specific users that need access, so if an account does get compromised it can’t be used to put data in the FTP directory.
With the above in mind, use an elevated command prompt to run the following on the FTP server:

adsutil set msftpsvc/DefaultLogonDomain "YourDomainName"
This will set the default logon domain for all FTP sites.
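
If the command isn’t recognised, adsutil is a VBScript that normally lives in the IIS AdminScripts folder, so you may need to call it via cscript with the full path (shown here assuming the default location):

cscript.exe C:\Inetpub\AdminScripts\adsutil.vbs set msftpsvc/DefaultLogonDomain "YourDomainName"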

Friday, 13 May 2011

Using a PAC file to set proxy settings

There are many ways to configure proxy settings, via a GPO, via a build, or an application.

Proxy settings can cause issues for mobile users if they use their device away from the corporate LAN: the proxy server will not be reachable, which will render the browser unusable (unless, of course, DirectAccess has been implemented).

There are many solutions to this problem, some common ones are:
1. Teach users to enable and disable proxy settings. This is not the most elegant solution, it is likely to cause a fair amount of support calls, and it also means proxy settings cannot be enforced.

2. Run a third-party app that users can click on to select proxy on or proxy off. I’m not a fan of these types of applications that sit there using up resources for no real reason.

3. Run a login script that sets the proxy settings if you are connected to the corporate LAN, and doesn’t if you are not. This is a long-winded way of doing it, and is not 100% effective.

In my opinion, the most effective and efficient way of configuring proxy settings is to use a proxy auto-config (PAC) file.
A PAC file contains a JavaScript function, FindProxyForURL(url, host). This function returns a string with one or more access method specifications, which cause the user agent to use a particular proxy server or to connect directly.

You configure your browser (this works in all popular browsers) to use a script to configure proxy settings, and this setting remains in place permanently. If the PAC file is placed on a web server accessible only within the corporate LAN, then when the user is away from the LAN the config file is not found and no proxy is used.


When the user is within the LAN, the file is found and the proxy settings are applied.
Some say that a login script can achieve this too; however, a login script requires you to log in for it to take effect.


Take a scenario where a user is in the office, closes the lid on his or her laptop, gets on the train, then opens the lid and connects via 3G.
If proxy settings were configured with a login script, the office proxy settings would still be present unless the user logged off and on again.
With the PAC method in place, the browser looks for the settings each time a page is requested, so it would fail to find the config file and connect directly.

Below is an example PAC file which can be modified to suit your needs. This could be further extended to look at the current IP address of the client and return a different proxy depending on where the client is; e.g. if the client is within an IP range associated with the Paris office, the Paris proxy would be returned, and if the client is on a New York IP range, the New York proxy would be returned (there is a sketch of this after the example below).


function FindProxyForURL(url, host)
 {
        
        // Direct connections to Hosts
         if (isPlainHostName(host) ||
         (host == "127.0.0.1") ||
         (host == "www.a-whole-domain.com") ||
         (shExpMatch(host, "*.a-entire-domain.com")) ||
         (shExpMatch(host, "10.20.30.*"))) {
           return "DIRECT"
         } else {
           return "PROXY proxy-server.domain.com:8080"
         }
 }



Within this file, the IP range 10.20.30.0 - 10.20.30.255 is accessed directly (bypassing the proxy), as is the domain www.a-whole-domain.com. Anything under the domain a-entire-domain.com also bypasses the proxy. Everything else is directed at the proxy server "proxy-server.domain.com" on port 8080.
Add additional sites to the proxy bypass list by copying an existing line and pasting a modified copy below it.
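
As mentioned above, the same file can be extended to branch on the client’s current IP address and hand back a different proxy per office. A rough sketch using the standard myIpAddress() and isInNet() PAC helpers (the ranges and proxy names below are made up, so substitute your own):

function FindProxyForURL(url, host)
 {
         // Hypothetical office ranges - replace with your own
         if (isInNet(myIpAddress(), "10.1.0.0", "255.255.0.0")) {
           return "PROXY paris-proxy.domain.com:8080";      // client is on the Paris LAN
         } else if (isInNet(myIpAddress(), "10.2.0.0", "255.255.0.0")) {
           return "PROXY newyork-proxy.domain.com:8080";    // client is on the New York LAN
         } else {
           return "DIRECT";                                 // off the corporate LAN - go direct
         }
 }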


Although a WPAD file could also offer similar functionality, in my experience a PAC file is much more flexible and will enable changes to take effect instantly.

Wednesday, 24 November 2010

Draining sessions from Remote Desktop Session Hosts / Terminal Servers

Maintaining terminal servers (or remote desktop session hosts as they are known now) in today’s world when users require access 24/7 is a challenge. Setting up an RDS farm, with a session broker will give you load balancing and fault tolerance. (I will write more about remote desktop server farms and session brokers in another article)
However, notice I say “fault tolerance”: this doesn’t mean that you can reboot session hosts without affecting users; it just means that your system will tolerate the failure of a session host. The users who were connected to the rebooted (or failed) session host will lose what they were working on and will have to reconnect.
The nature of a session broker is that it will try to distribute sessions evenly across all members of a farm; this is great, apart from when you want to reboot a session host without annoying your users.
There is no “live migration” of RDS sessions; once a user is on a host, that’s where they will stay until they log off.
So how do you free up a session host to perform maintenance on it? Firstly you will need to plan your work in advance.
You can then use the “chglogon” command to begin “draining” sessions. There are a few ways sessions can be drained, but draining basically means the session host will stop accepting new connections. Once your users have logged off they will not be able to log back onto the draining session host, so they will establish new sessions on another session host, which means the host you are draining will eventually have no users logged into it.
There are five switches for the chglogon command:
/query – this will tell you what mode the session host is currently in
/enable – allows users to establish connections to the session host
/disable – doesn’t allow any new connections, or reconnections to an existing session.
/drain – doesn’t allow any new connections, but does allow users to reconnect to an existing session
/drainuntilrestart – does the same as /drain, but reverts to /enable after a reboot
NOTE: using the /disable switch will prevent you from reconnecting to the server via RDP. You need to ensure you have console access via a method other than RDP, or use the RD configuration utility from another RDS server to change the setting back.
These commands could be utilised to help with automated updates. You could configure RDS1 to automatically install updates on a Saturday at 6PM, then create a scheduled task to run on a Friday at 6AM to run the chglogon /drainuntilrestart command.
This would hopefully mean by Saturday at 6PM there were no users left on RDS1 and it would be safe to automatically reboot after an update installation.
You could then use the same method with RDS2, RDS3 etc., but on different days, to ensure 100% uptime of your RDS farm.
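
As a rough sketch, the drain task itself could be created with schtasks (the task name and times below are only examples; adjust them to suit your own patch window):

rem start draining RDS1 every Friday at 06:00, ready for the Saturday evening update and reboot
schtasks /create /tn "Drain RDS1" /tr "chglogon.exe /drainuntilrestart" /sc weekly /d FRI /st 06:00 /ru SYSTEM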

Monday, 25 October 2010

Using dsget and dsrm to delete users who are a member of a group from Active Directory

I use the “DS” set of commands almost daily; they are a very powerful set of tools, which allow output to be piped between them.

In this example we are going to use the dsget command to retrieve a list of users from a security group, then pipe the result into dsrm to delete them.
This can be useful in an educational environment where lots of users leave at once and hundreds of accounts need removing, or, in the current corporate climate, when an entire department disappears!


Before jumping in at the deep end, I recommend seeing what results you are going to pipe into dsrm, so run the dsget command on its own:

dsget group "cn=year13,ou=groups,ou=myschool,dc=domain,dc=suffix" -members -expand

This will return the members of a group called “Year13” which is in an OU called “Groups”, which is within an OU called “myschool” which is in the domain domain.suffix.




You are telling the dsget command that it is looking at a group by specifying “group” after dsget. The switches at the end are also important:


-members tells dsget to return the members of the group
-expand returns all members of the group; if this isn’t used, the output is limited to 100
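
If you want to sanity-check the list with friendly names rather than raw distinguished names, the same output can be piped into dsget user first (a quick sketch using the example group above, assuming the group only contains user accounts):

dsget group "cn=year13,ou=groups,ou=myschool,dc=domain,dc=suffix" -members -expand | dsget user -samid -display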

If you are happy with the results returned, you can pipe the results into DSRM. Piping is just like typing something into a command yourself, only you’re letting the previous command do the work.

To get the pipe character (|), hold Shift and press the backslash key.

dsget group "cn=year13,ou=groups,ou=myschool,dc=domain,dc=suffix" -members -expand | dsrm -noprompt

The -noprompt switch prevents dsrm from asking you to confirm before deleting each object. If you’re deleting a large number of objects this is well worth using (as long as you are confident the results being output by dsget are correct).

Friday, 8 October 2010

Error code 0xC004C020 when activating windows

When activating Windows using a MAK key, if you receive the error code 0xC004C020 it means you have run out of activations on that key.

You can log in to the Microsoft licensing website to check how many activations you have remaining on your MAK keys, and also find the contact information to get additional keys if required.
https://www.microsoft.com/licensing/servicecenter/

Further information on activation error codes can be found here:

http://support.microsoft.com/kb/938450

Monday, 4 October 2010

Implementing AppLocker – some important steps before you start!

AppLocker is a feature within Windows 7 and Server 2008 R2 which uses rules and properties of files to provide access control for applications.
In an environment where you want to prevent the use of certain applications, or even to deny all applications and only allow the applications you name, AppLocker is the solution for you.


Before you get started, there are some prerequisites which aren’t so obvious. Without the prerequisites detailed in this article configured, you’ll still be able to create AppLocker policies, and a gpresult will show them as being applied, but they will not actually be enforced.

The first step is to enable AppLocker rule enforcement. To do this, edit the Group Policy object which you wish to use to apply the AppLocker policies, and navigate to:

Computer Configuration | Policies | Windows Settings | Security Settings | Application Control Policies | AppLocker

Select “Configure rule enforcement”.



Tick all three “Configured” boxes (ensuring that “Enforce rules” is selected in the drop-down boxes) and click OK. This means any policies you put in place will be enforced for executable applications, Windows Installer files and scripts.




The next stage is to ensure the “Application Identity” service is running.
This can be done manually on all your workstations, as part of a generic build, or via Group Policy Preferences. Group Policy is by far the most effective way of doing this, so that is what is detailed here.


Edit the Group Policy object which you want to use to configure the service; this GPO must apply to all computers you wish to have AppLocker policies applied to.

Services can be configured using Group Policy Preferences; to do this, navigate to:
Computer Configuration | Preferences | Control Panel Settings | Services
Right-click on Services and select New | Service

Set the start-up type to “Automatic”, browse for the service named “Application Identity”, ensure the service action is “Start service”, then click OK.
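
If you just want to test AppLocker on a single machine without waiting for Group Policy, the same service can be set to automatic and started from an elevated command prompt (AppIDSvc is the short name of the Application Identity service):

rem set the Application Identity service to start automatically, then start it
sc config AppIDSvc start= auto
net start AppIDSvc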



Because these are all machine policies, the workstations may need to be rebooted twice for them to take effect.

Thursday, 23 September 2010

Installing a System Center Essentials 2010 agent manually

You may come across a situation where you need to manually install the SCE agent; here is how.

Run setupsce.exe and click on install essentials agent

Specify the FQDN of your SCE server, and the management group name (by default this is SCEServername_mg)

You will also need the Update Services SSL certificate and the code-signing certificate; you can find these on the SCE server in:

C:\program files\system center essentials\certificates

Copy these to the PC you are installing the agent on, and then browse to them in the installer setup window.

Once the agent is successfully installed, it will need to be manually approved in the SCE management console.

Launch the SCE management console and select the Administration section, then expand Device Management | Pending Management.

Under Pending Management you should see a section called “Manual Agent Install”; simply right-click on the computer listed and click Approve.

Your agent should now check in.

Wednesday, 22 September 2010

SAN Certificates – a great way to get more for your money

When it comes to SSL certificates you traditionally have two choices: go for a standard SSL certificate for a single domain, or get a wildcard cert for *.yourdomain.com.

I’ve always been a fan of wildcard certificates; I believe in the long run these are cheaper as a single certificate will cater for all of your SSL needs; however wildcard certificates come at a price.

When using IIS or ISA/TMG you have the ability to host multiple domains on a single IP address or web listener using host headers; however, this only applies to HTTP traffic. When using SSL, only one SSL certificate can be applied to an IP address.

This causes a problem: do you assign lots of different IP addresses to your web server or ISA/TMG server and use a certificate for each domain, or do you buy a wildcard certificate?

In the environment I work in we are able to get certificates for pennies, but this doesn’t cover wildcard certificates. This makes it difficult to justify a wildcard certificate.

However, there is a way! You can create what is known as a “Subject Alternative Name” (SAN) certificate. This is just like a normal certificate, but it is also valid for any other domains you specify.

For example, I could request a SAN cert for:

Webmail.yourdomain.com
Portal.yourdomain.com
Crm.yourdomain.com
Anythingelseyouwant.yourdomain.com


To request a SAN cert, open an MMC and add the Certificates snap-in to it (ensure you select Local Computer).

Expand Certificates | Personal | Certificates

Right-click on Certificates and select All Tasks | Advanced Operations | Create Custom Request.

Click Next on the first two prompts, then select the Web Server template and click Next.

Click the Details button to expand the Web Server certificate template, and then click Properties.

Add the normal subject names such as Organisation and country. Then add as many common names (domains) as you like!



Follow the rest of the wizard to completion; you will then have a CSR to upload to your certificate provider. The resulting certificate will be valid for all of the domains you specified. If you think far enough ahead and include some domains you may need in the future, it will save even more money!
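
If you’d rather not click through the MMC wizard, a similar request can be generated from the command line with certreq and an INF file. This is only a sketch based on the documented SAN request format; the subject, organisation and domain names below are placeholders:

; sancert.inf - a certificate request with subject alternative names
[Version]
Signature="$Windows NT$"

[NewRequest]
Subject = "CN=webmail.yourdomain.com, O=Your Organisation, C=GB"
KeyLength = 2048
Exportable = TRUE
MachineKeySet = TRUE
RequestType = PKCS10

[Extensions]
; 2.5.29.17 is the Subject Alternative Name extension
2.5.29.17 = "{text}"
_continue_ = "dns=webmail.yourdomain.com&"
_continue_ = "dns=portal.yourdomain.com&"
_continue_ = "dns=crm.yourdomain.com"

Running certreq -new sancert.inf sancert.csr then produces the CSR in much the same way as the wizard does.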

Saturday, 11 September 2010

Using servermanagercmd to automate the installation of common roles and features in Windows Server 2008 (inc R2)

I try to configure as many settings on a server as possible via Group Policy. This not only saves time but provides 100% consistency and a very simple way of making system-wide changes. An example of this is configuring SNMP settings via Group Policy.



This is all well and good, but many of the configuration settings depend on a “role” or “feature” that may not be installed (such as SNMP), and there is no built-in way to automatically install roles and features using Group Policy like there is for configuring services or firewall rules.


This is where servermanagercmd comes in. As you will probably gather, this is a command-line interface to the Server Manager GUI.


If you’re deploying a large number of servers and want to avoid manually installing a common role or feature, this is very useful and will save hundreds of clicks!


Within the Active Directory design of the network I support, each server role has its own OU, which is under a generic servers OU. In most cases there is a group policy applied to each OU, so settings specific to a server role can be set.


Because of this, I can use Group Policy Preferences to create a registry entry under the RunOnce key which will run servermanagercmd with the appropriate switches to install what I need based on the role of the server.


The RunOnce key is located here:


HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce


Any REG_SZ string that is created under this key will be run once at startup (hence the name).
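
As a sketch of the sort of entry the Group Policy preference ends up creating (the value name here is arbitrary), the equivalent reg.exe command would be:

rem create a one-shot RunOnce value that installs SNMP services at the next startup
reg add "HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce" /v InstallSNMP /t REG_SZ /d "servermanagercmd -install snmp-services -allsubfeatures" /f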


I typically install three features on most of the servers I look after: Telnet Client, SNMP Services and Failover Clustering. These are the commands I use:


servermanagercmd -install telnet-client


servermanagercmd -install snmp-services -allsubfeatures


servermanagercmd -install failover-clustering


You will notice that on the SNMP command I have added -allsubfeatures; this installs all the sub-features under the feature or role you have specified.


More information on servermanagercmd, as well as switches for other roles and features, can be found here:


http://technet.microsoft.com/en-us/library/cc748918(WS.10).aspx


Other switches of servermanagercmd worth noting are:


-query – this will output the currently installed roles and features


-remove – this does the opposite of -install

Friday, 10 September 2010

Shibboleth IdP not writing logs to the logs directory

I’ve recently been tasked with implementing a Shibboleth IdP for the network I support. The service has been implemented on a Windows Server 2008 R2 server running Tomcat and fronted by Apache and Microsoft Forefront TMG 2010.



I will post more on the implementation of this later, but a quick bit of info to resolve an issue that had me pulling my hair out for most of a day.


Shibboleth has a logs directory within its installation directory; in my setup the Shibboleth directory was c:\program files (x86)\shibboleth-idp.


I found that the Shibboleth logs directory wasn’t filling up with anything. I searched around for hours trying to discover the cause, and eventually found the answer here:


https://spaces.internet2.edu/display/SHIB2/IdPLogging


Basically, the logging mechanism used by the Shibboleth IdP does not support a path with brackets in it, so on any x64 system, where the default path contains “(x86)”, this fails.


To resolve this I changed the log paths in the logging.xml file in the Shibboleth conf directory to point to c:\shiblogs.


After bouncing the Tomcat service, the logs appeared.

Thursday, 9 September 2010

Labelling those NICs

Tracing cables in a busy rack is a nightmare. Many people who share my switch-port labelling OCD will always label the port on the switch in a format such as “Link to MyServer01”, and this is good. However, it seems it’s not such common practice to label which port on the server the connection goes to, and then, at the server end, where the connection goes.


Most servers today have PCI slots labelled numerically, and anyone with common sense will count the NICs from left to right. With the huge uptake of virtualisation, servers are now packed with NICs. Therefore my plea starts today: let’s start labelling server NIC ports in the same way we label switch ports.


The image below is a screenshot from one of the servers I look after. There are two quad-port NICs installed in PCI slots one and two. Therefore I have adopted the naming convention “Slot X – NIC X – Link to Switchname giX-X”; on the switch side it’s reversed, and the port is labelled “Link to MyServer01 NIC x\x”. Implementing this method of labelling does take a bit of extra time and effort to keep up to date, but it will make your life, and the lives of others supporting the network, so much easier. You can now tell exactly where a link goes from either the server or the switch end. I also add to the server NIC label what the NIC is used for, e.g. “Slot X – NIC X – Link to Switchname giX-X – Hyper-V Host Management”; it’s a long label but you’ll find it very useful!