Friday, 13 May 2011

Using a PAC file to set proxy settings

There are many ways to configure proxy settings: via a GPO, as part of a build, or through an application.

Proxy settings can cause issues for mobile users: when they use their device away from the corporate LAN, the proxy server will not be reachable, which renders the internet browser unusable (unless, of course, DirectAccess has been implemented).

There are many solutions to this problem, some common ones are:
1. Teach users to enable and disable proxy settings. This is not the most elegant solution, is likely to cause a fair number of support calls, and also means proxy settings cannot be enforced.

2. Run a 3rd party app that users can click on to select proxy on or proxy off. I'm not a fan of these types of applications, which sit there using up resources for no real reason.

3. Run a login script that sets the proxy settings if you are connected to the corporate LAN, and doesn't if you are not. This is a long-winded way of doing it, and is not 100% effective.

In my opinion, the most effective and efficient way of configuring proxy settings is to use a proxy auto-config (PAC) file.
A PAC file contains a JavaScript function, "FindProxyForURL(url, host)". This function returns a string with one or more access method specifications, which cause the user agent to use a particular proxy server or to connect directly.

You configure your browser (this works in all popular browsers) to use a script to configure proxy settings, and this setting remains in place permanently. If the PAC file is placed on a web server accessible only within the corporate LAN, then when the user is away from the LAN the config file is not found, and therefore no proxy is used.


When the user is within the LAN, the file is found and the proxy settings are configured.
Some say that a login script can achieve this too; however, a login script requires you to log in for it to take effect.


Take a scenario where a user is in the office, closes the lid on his or her laptop, gets on the train then opens the lid, and connects via 3G.
If proxy settings were configured with a login script, the office proxy settings would still be present unless the user logged off and on again.
With a PAC method in place, the browser looks for the settings each time a page is requested, therefore it would fail to find the config file and connect directly.

Below is an example PAC file which can be modified to suit your needs. This could be further extended to look at the current IP of the client and return a different proxy depending on where the client is. For example, if the client is within an IP range associated with the Paris office, the Paris proxy would be returned; if the client is on a New York IP range, the New York proxy would be returned.


function FindProxyForURL(url, host)
{
    // Hosts that should be connected to directly, bypassing the proxy
    if (isPlainHostName(host) ||
        (host == "127.0.0.1") ||
        (host == "www.a-whole-domain.com") ||
        shExpMatch(host, "*.a-entire-domain.com") ||
        shExpMatch(host, "10.20.30.*")) {
        return "DIRECT";
    } else {
        return "PROXY proxy-server.domain.com:8080";
    }
}



Within this file, the IP range 10.20.30.0 to 10.20.30.255 is accessed directly (bypassing the proxy), as is the domain www.a-whole-domain.com. Anything under the domain a-entire-domain.com also bypasses the proxy. Everything else is directed at the proxy server "proxy-server.domain.com" on port 8080.
Add additional sites to the proxy bypass list by copying an existing line and pasting it below.
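To illustrate the location-aware idea mentioned earlier, here is a sketch of such a PAC file. The office ranges and proxy names (Paris on 10.1.0.0/16, New York on 10.2.0.0/16) are made-up examples. Browsers provide isInNet() and myIpAddress() natively; minimal stand-ins are included here only so the sketch can be run and checked outside a browser.

```javascript
// Minimal stand-in for the PAC helper isInNet(): compare an IPv4
// address against a network pattern/mask. Browsers supply this natively.
function isInNet(ip, pattern, mask) {
  var toInt = function (addr) {
    return addr.split(".").reduce(function (acc, octet) {
      return acc * 256 + parseInt(octet, 10);
    }, 0);
  };
  return (toInt(ip) & toInt(mask)) === (toInt(pattern) & toInt(mask));
}

// Stand-in for myIpAddress(): in a real PAC environment the browser
// returns the client's current IP here.
function myIpAddress() {
  return "10.1.22.7"; // pretend we are on the Paris LAN
}

function FindProxyForURL(url, host) {
  var me = myIpAddress();
  // Paris office range (example): use the Paris proxy
  if (isInNet(me, "10.1.0.0", "255.255.0.0")) {
    return "PROXY paris-proxy.domain.com:8080";
  }
  // New York office range (example): use the New York proxy
  if (isInNet(me, "10.2.0.0", "255.255.0.0")) {
    return "PROXY newyork-proxy.domain.com:8080";
  }
  // Unknown network: fall back to the central proxy, then direct
  return "PROXY proxy-server.domain.com:8080; DIRECT";
}
```

As a bonus, a return value such as "PROXY proxy-server.domain.com:8080; DIRECT" gives the browser a fallback: if the listed proxy is unreachable, it tries a direct connection instead.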


Although a WPAD file could also offer similar functionality, in my experience a PAC file is much more flexible and will enable changes to take effect instantly.

Tuesday, 25 January 2011

Using PowerShell to grant access to all user mailboxes, or a whole Exchange database

You may have a requirement to be able to open any user's mailbox in your Exchange 2010 environment.

The first thing to consider is how you will control access: will you add individual users, or a security group containing users?
A security group is by far the most efficient and tidiest option, so this post will assume you are using a security group.

Method one

The first option is to give the security group full access to all user mailboxes

Advantage

The permissions will follow the mailbox around when it is moved between databases

Disadvantage

You will have to apply the permission to all new users you create

To use this method, use the Exchange Management Shell (also known as PowerShell or EMS) to get all the mailboxes in your organisation, and then pipe the result into a command that sets the permissions:




Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010 -ErrorAction SilentlyContinue

$userAccounts = Get-Mailbox -ResultSize Unlimited

ForEach ($user in $userAccounts)
{
    Add-MailboxPermission -Identity $user -User "Your Security Group Name" -AccessRights FullAccess
}


Method two:
The second option is to apply the permissions to the Exchange mailbox database, so all mailboxes within that database will inherit those permissions.
Advantage

All new users will automatically inherit the permissions you set on the database

Disadvantage

If different permissions are set on different databases, when users are moved between databases they will not be subject to the permissions that were assigned to the original database.

Use EMS to run the following command:


Add-ADPermission -Identity YourDatabaseName -User "Your Security Group Name" -AccessRights GenericAll

Wednesday, 24 November 2010

Draining sessions from Remote Desktop Session Hosts / Terminal Servers

Maintaining terminal servers (or remote desktop session hosts as they are known now) in today’s world when users require access 24/7 is a challenge. Setting up an RDS farm, with a session broker will give you load balancing and fault tolerance. (I will write more about remote desktop server farms and session brokers in another article)
However, notice I say "fault tolerance": this doesn't mean you can reboot session hosts without affecting users, it just means your system will tolerate the failure of a session host. The users who were connected to the rebooted (or failed) session host will lose what they were working on and will have to reconnect.
The nature of a session broker is that it will try to distribute sessions evenly across all members of a farm; this is great, apart from when you want to reboot a session host without annoying your users.
There is no “live migration” of RDS sessions, once a user is on a host, that’s where they will stay until they log off. 
So how do you free up a session host to perform maintenance on it? Firstly you will need to plan your work in advance.
You can then use the "chglogon" command to begin "draining" sessions. There are several drain modes, but draining basically means the session host stops accepting new connections. Users who log off will have to establish their next session on another session host, so eventually the session host you are draining will have no users logged into it.
There are four switches for the chglogon command:
/query – this will tell you what mode the session host is currently in
/enable – allows users to establish connections to the session host
/disable – doesn’t allow any new connections, or reconnections to an existing session.
/drain – doesn’t allow any new connections, but does allow users to reconnect to an existing session
/drainuntilrestart – does the same as /drain, but reverts to /enable after a reboot
NOTE: using the /disable switch will prevent you from reconnecting to the server via RDP. Ensure you have access to the console via a method other than RDP, or use the RD configuration utility from another RDS server to change the setting.
These commands could be utilised to help with automated updates. You could configure RDS1 to automatically install updates on a Saturday at 6PM, then create a scheduled task to run on a Friday at 6AM to run the chglogon /drainuntilrestart command.
This would hopefully mean by Saturday at 6PM there were no users left on RDS1 and it would be safe to automatically reboot after an update installation.
You could then use the same method with RDS2, RDS3, etc., but on different days, to ensure 100% uptime of your RDS farm.

Monday, 25 October 2010

Using dsget and dsrm to delete users who are a member of a group from Active Directory

I use the "DS" set of commands almost daily; they are a very powerful set of tools whose output can be piped from one to another.

In this example, we are going to use the dsget command, to retrieve a list of users from a security group, then pipe the result into dsrm to delete them.
This can be useful in an educational environment where lots of users leave at once and hundreds of accounts need removing, or in the current corporate climate, when an entire department disappears!


Before jumping in at the deep end, I recommend seeing what results you are going to pipe into dsrm, so run the dsget command on its own.

dsget group "cn=year13,ou=groups,ou=myschool,dc=domain,dc=suffix" -members -expand

This will return the members of a group called "Year13", which is in an OU called "Groups", within an OU called "myschool", in the domain domain.suffix.




You are telling the dsget command that it is looking at a group by specifying "group" after dsget. The switches at the end are also important:


-members tells dsget to return the members of the group
-expand returns all members of the group; if this isn't used, the output is limited to 100

If you are happy with the results returned, you can pipe them into dsrm. Piping is just like typing the output into a command yourself, only you're letting the previous command do the work.

To get the pipe character, hold shift and press your backslash key

dsget group "cn=year13,ou=groups,ou=myschool,dc=domain,dc=suffix" -members -expand | dsrm -noprompt

The -noprompt switch prevents dsrm from asking you to confirm before deleting each object. If you're deleting a large number of objects this is well worth using (as long as you are confident the results output by dsget are correct).

Friday, 8 October 2010

Error code 0xC004C020 when activating Windows

When activating Windows using a MAK key, if you receive the error code 0xC004C020 it means you have run out of activations on that key.

You can log in to the Microsoft licensing website to check how many activations you have remaining on your MAK keys, and also find the contact information to get additional keys if required.
https://www.microsoft.com/licensing/servicecenter/

Further information on activation error codes can be found here:

http://support.microsoft.com/kb/938450

Monday, 4 October 2010

Implementing AppLocker – some important steps before you start!

AppLocker is a feature within Windows 7 and Server 2008 R2 which uses rules and properties of files to provide access control for applications.
In an environment where you want to prevent the use of certain applications, or even to deny all applications and only allow the applications you name, AppLocker is the solution for you.


Before you get started, there are some prerequisites which aren't so obvious. Without the prerequisites detailed in this article, although you will be able to configure AppLocker policies, and a gpresult will show them as being applied, they will not actually be enforced.

The first step is to enable AppLocker rule enforcement. To do this, edit the group policy object which you wish to use to apply the AppLocker policies, and navigate to:

Computer Configuration | Policies | Windows Settings | Security Settings | Application Control Policies | Applocker

Select "Configure rule enforcement"



Tick all three "Configured" boxes (ensuring that "Enforce rules" is selected in the drop-down boxes) and click OK. Any policies you put in place will now be applied to executable applications, Windows Installer packages and scripts.




The next stage is to ensure the "Application Identity" service is running.
This can be done manually on all your workstations, as part of a generic build, or via Group Policy Preferences. Group Policy is by far the most effective way of doing this, so it is detailed here.


Edit the group policy object which you want to use to configure the service; this GPO must apply to all computers on which you wish to have AppLocker policies applied.

Services can be configured using Group Policy Preferences. To configure this, navigate to:
Computer Configuration | Preferences | Control Panel Settings | Services
Right-click on Services and select New | Service

Modify the start-up to be "Automatic", browse for the service named "Application Identity", ensure the service action is "Start service", then click OK.



Because these are all machine policies, the workstations may need to be rebooted twice for them to take effect.

Monday, 27 September 2010

Essential tools for today’s admins and where to download them

These are the tools I use pretty much every day, so I thought I would share them and where to get them from:
(click on the name to be taken to the download page)

Remote Server Administration Tools

Install these on your Windows 7 client. Within the "Turn Windows features on or off" section of Control Panel, a new option called "Remote Server Administration Tools" will appear; enable it to get tools such as Active Directory Users and Computers, Hyper-V Manager, etc.

Putty

A great SSH, Telnet and console client all in one


Remote Desktop Connection Manager

If you manage a lot of servers via RDP, this tool is a godsend. You can add multiple computers to one console and group them into different roles; you can also right-click on an entire group of servers (e.g. domain controllers) and log into them all at once. Many clicks are saved!

Notepad ++

Does what it says on the tin. If you edit any kind of code (XML, etc.) this is essential.

ImgBurn

Although Windows 7 has burning features built in, it still lacks some of the required functionality. This tool has a tiny footprint, some great functionality and some cool sound effects!

Thursday, 23 September 2010

Installing a System Center Essentials 2010 agent manually

You may run into a situation where you need to manually install the SCE agent; here is how.

Run setupsce.exe and click on "Install Essentials Agent"

Specify the FQDN of your SCE server, and the management group name (by default this is SCEServername_mg)

You will also need the Update Services SSL certificate and the code signing certificate; you can find these on the SCE server in:

C:\program files\system center essentials\certificates

Copy these to the PC you are installing the agent on, and then browse to them in the installer setup window.

Once the agent is successfully installed, it will need to be manually approved in the SCE management console

Launch the SCE management console, select the Administration section, then expand Device Management | Pending Management

Under Pending Management you should see a section called "Manual Agent Install"; simply right-click on the computer listed and click Approve

Your agent should now check in.

Wednesday, 22 September 2010

SAN Certificates – a great way to get more for your money

When it comes to SSL certificates, you have two choices: go for a standard SSL certificate for a single domain, or get a wildcard certificate for *.yourdomain.com

I’ve always been a fan of wildcard certificates; I believe in the long run these are cheaper as a single certificate will cater for all of your SSL needs; however wildcard certificates come at a price.

When using IIS or ISA/TMG you have the ability to host multiple domains on a single IP address or web listener using host headers; however, this only applies to HTTP traffic. When using SSL, only one SSL certificate can be applied to an IP address.

This causes a problem: do you assign lots of different IP addresses to your web server or ISA/TMG server and use a certificate for each domain, or do you buy a wildcard certificate?

In the environment I work in, we are able to get certificates for pennies, but this doesn't cover wildcard certificates. This makes it difficult to justify a wildcard certificate.

However, there is a way! You can create what is known as a "Subject Alternative Name" (SAN) certificate. This is just like a normal certificate, but it is also valid for any other domains you specify.

For example, I could request a SAN cert for:

Webmail.yourdomain.com
Portal.yourdomain.com
Crm.yourdomain.com
Anythingelseyouwant.yourdomain.com


To request a SAN cert, open an MMC and add the Certificates snap-in (ensure you select Local Computer)

Expand Certificates | Personal | Certificates

Right-click on Certificates and select All Tasks | Advanced Operations | Create Custom Request

Click next on the first two prompts, then select the web server template and click next.

Click the details button to expand the web server certificate template, and then click properties.

Add the normal subject fields such as organisation and country, then add as many alternative names (of type DNS) as you like!



Follow the rest of the wizard until completion; you will then have a CSR to upload to your certificate provider. The certificate will be valid for all of the domains you specified. If you plan ahead and include domains you think you may need in the future, it will save even more money!
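As a side note, the same kind of SAN request can be generated without the MMC wizard by feeding certreq an INF file. Below is a minimal sketch using the example host names from above; the subject values are placeholders to adjust for your organisation, and the OID 2.5.29.17 identifies the Subject Alternative Name extension.

```ini
[Version]
Signature = "$Windows NT$"

[NewRequest]
Subject = "CN=webmail.yourdomain.com, O=Your Organisation, C=GB"
KeyLength = 2048
Exportable = TRUE
MachineKeySet = TRUE
RequestType = PKCS10

[Extensions]
; 2.5.29.17 is the OID for the Subject Alternative Name extension
2.5.29.17 = "{text}"
_continue_ = "dns=webmail.yourdomain.com&"
_continue_ = "dns=portal.yourdomain.com&"
_continue_ = "dns=crm.yourdomain.com"
```

Running "certreq -new request.inf request.csr" then produces the CSR to upload, just as with the wizard.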

Configuring a Shibboleth IdP to talk to Active Directory

For those of you implementing a Shibboleth IdP in an Active Directory environment, here is how login.config and the LDAP configuration within attribute-resolver.xml should look.
When implementing our IdP I found lots of conflicting information on how it should be set up. We run a Windows Server 2008 R2 forest and domain functional level where all domain controllers are also global catalogues. I can confirm these settings work:


LDAP configuration in attribute-resolver.xml

    <!-- Example LDAP Connector -->
    <resolver:DataConnector id="myLDAP" xsi:type="LDAPDirectory" xmlns="urn:mace:shibboleth:2.0:resolver:dc"
        ldapURL="ldap://domain.suffix:3268"
        baseDN="dc=domain,dc=suffix"
        principal="idpserviceaccount@domain.suffix"
        principalCredential="passwordgoeshere">
        <FilterTemplate>
            <![CDATA[
                (samAccountName=$requestContext.principalName)
            ]]>
        </FilterTemplate>
    </resolver:DataConnector>

LDAP authentication configuration in login.config
// Example LDAP authentication
// See: https://spaces.internet2.edu/display/SHIB2/IdPAuthUserPass
edu.vt.middleware.ldap.jaas.LdapLoginModule required
   host="domain.suffix"
   port="3268"
   base="dc=domain,dc=suffix"
   userField="sAMAccountName"
   subtreeSearch="true"
   serviceUser="serviceaccount@domain.suffix"
   serviceCredential="passwordgoeshere";