Tuesday, September 23, 2014

Configuring CRM Claims Based Authentication with User Certificates


Microsoft Dynamics CRM 2011 supports claims-based authentication in both internal and Internet-facing deployments. Claims-based authentication leverages Active Directory Federation Services (ADFS) to enable Single Sign-On (SSO) and allows you to establish complex trust relationships with other domains and organizations.

One of the issues with enabling claims-based authentication on CRM, or with any relying party that passes a wsignin1.0 request to your Security Token Service (STS) with the optional ‘wauth’ parameter, is that the parameter pins down the method used to authenticate the user. For CRM, this is fine if you want to authenticate your internal users with Windows authentication and your external users with forms-based authentication, but if you want to use some other mode, you are stuck. You can configure and reconfigure the default settings for ADFS all day and night, but as long as the relying party is sending its request with ‘wauth’ in the URL, ADFS will ignore your default settings.
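For illustration, here is roughly what a WS-Federation sign-in request with the ‘wauth’ parameter looks like. The host name and realm below are placeholders, and in a real request the values would be URL-encoded:

https://adfs.example.com/adfs/ls/?wa=wsignin1.0
  &wtrealm=https://crm.example.com/
  &wauth=urn:federation:authentication:windows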

Of course, I refused to believe this, so I reconfigured my ADFS server to prefer certificate authentication and then tried to connect to CRM. The system attempted a Windows Integrated-style authentication anyway, displaying a dialog box prompting me for my user name and password. I typed it in and was able to access CRM, but it wasn’t until I clicked “Log Off” in the CRM interface and was redirected to the “You have been signed off.” ADFS page that I was prompted for my certificate!

Changing 'wauth' in the CRM Configuration Database

So, how can we get around this?  We need to get CRM to set that ‘wauth’ parameter correctly to tell ADFS that we want to authenticate the user with a certificate. A little research revealed that this is possible with CRM, though likely unsupported, so take the following advice totally at your own risk:

Open up SQL Management Studio and connect to the CRM configuration database “MSCRM_CONFIG”. Let’s take a look at the current state of the federation provider settings by running a quick query:

SELECT * FROM dbo.FederationProviderProperties

The results should show a few rows; the ones we’re interested in are those with a “ColumnName” of “IntegratedAuthenticationMethod” or “IfdAuthenticationMethod”. If you take a look at the “NVarCharColumn”, *ahem*, column for those two values, you should see the default entries of “urn:federation:authentication:windows” and “urn:oasis:names:tc:SAML:1.0:am:password”, respectively. I’m not sure who was responsible for the names in this schema, but I promise you it wasn’t me.

These values are used by CRM to build the URL passed to the STS when an authentication request is made. Assuming that you have claims-based authentication working and configured, go ahead and update “IntegratedAuthenticationMethod” to tell the STS that we would like a certificate-based authentication method:

UPDATE dbo.FederationProviderProperties
SET NVarCharColumn='urn:ietf:rfc:2246'
WHERE ColumnName='IntegratedAuthenticationMethod'

Here, “urn:ietf:rfc:2246” is what specifies SSL/TLS Certificate Based Client Authentication. A list of some of the authentication methods defined in the SAML specification can be found here. I haven't experimented with anything other than the certificate based authentication method.
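If you ever need to back the change out, you can restore the default value noted above with a similar query (the same caveats apply, and follow it with an “iisreset”):

UPDATE dbo.FederationProviderProperties
SET NVarCharColumn='urn:federation:authentication:windows'
WHERE ColumnName='IntegratedAuthenticationMethod'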

As it turns out, since we are authenticating using a user certificate that is associated with an Active Directory account, we don’t have to make any changes to the claims rules in our ADFS configuration.  So, open up a command window, do an “iisreset”, and try to connect to your CRM server again. Your browser should now prompt you to present a certificate to the server, and if you’ve already generated a user certificate that corresponds to an Active Directory account that is associated with CRM (you have, haven’t you?) then you can select and submit it. You should now be logged onto CRM using claims-based authentication with a certificate-based authentication method!

Other Thoughts

Going into the CRM database and making this change is certainly unsupported by Microsoft, but I have tested this successfully on both Internet Explorer and Firefox with a user certificate and have had no problems. To be honest with you, I’m as surprised as you are that it still works after making this change. My next step will be to see if I can get SharePoint to work using the same authentication method and demonstrate a working SSO environment.

I am also left wondering whether it still makes sense to go through the trouble of configuring CRM for a full-blown Internet Facing Deployment when using this method. The main purpose of an IFD is to provide the forms-based login page, since a Windows integrated login doesn’t make sense over the Internet. If you are using certificates for authentication both internally and externally, it seems reasonable to use claims-based authentication with certificates to handle both scenarios.

Sunday, September 14, 2014

Enabling Wireless Connections Automatically (Scripting Windows Events)

If you're like me, you prefer your computers with front-loading reel-to-reel tapes and a massive control panel that includes lots of switches and incandescent lights. Unfortunately, this isn't very flexible in today's annoyingly mobile world, so you're forced to use a laptop computer with a docking station. Of course, you would like to have the system automatically disable the wireless adapter and prefer Ethernet when it's in the dock--but how do you do it? Well, on Windows 7, it's a lot more complicated than it should be, but now you have this article to help you get there.

Of course, this advice can be adapted to suit any problem where you'd like to run a PowerShell script when a Windows Event is logged.  I'm sure you'll find something even more creative to do.

Detecting When You're Connected

In order to detect when the system is connected to Ethernet and run a scheduled task, we'll need to find an event to trigger on. My system, a Lenovo ThinkPad X230, has an Intel 82579LM Ethernet adapter, which conveniently notifies me whenever the state of the network link changes in the Windows System event log:



When "e1cexpress" logs an Event ID 32, I know the Ethernet cable is connected and I can safely shut off the wireless adapter. When an Event ID 27 is logged, then the cable is disconnected and I should turn the wireless adapter back on. How nice.

Putting Together A PowerShell Script

Now that we have an event we can trigger on, we need to put together a script to turn the adapter on and off. The best way to do this is to put together a PowerShell script that we can call to adjust the adapter setting. There is a really helpful article here that covers how to do it using a program called devcon, using WMI, or using the NetAdapter module.

If you're on Windows 7, you'll have a hard time finding a copy of devcon that will actually let you toggle the adapter. There is evidence that the version included with Windows Server 2003 x64 will work on Windows 7 x64, but I didn't explore that any further. The NetAdapter module is only available in PowerShell 3.0 on Windows 8. So, that leaves us with WMI. With the help of the aforementioned article, I put together this simple PowerShell script:
param([string]$mode)

# Find the wireless adapter -- adjust the filter to match your adapter's name
$wmi = Get-WmiObject -Class Win32_NetworkAdapter -Filter "Name LIKE '%Centrino%'"

if($mode -eq "enable") {
    echo "Enabling adapter"
    $wmi.Enable()
} elseif($mode -eq "disable") {
    echo "Disabling adapter"
    $wmi.Disable()
} else {
    echo "Usage: <command> enable/disable"
}

When this script is called with "enable" or "disable" as an argument, it will search for network adapters whose name contains "Centrino", store a reference to the match, and then either enable or disable the adapter. I'm searching for "Centrino" because the registered name of my wireless adapter is "Intel(R) Centrino(R) Ultimate-N 6300 AGN"; yours may be different. If you run the command "Get-WmiObject -Class Win32_NetworkAdapter" you'll get a list of all the adapters on your system and can put together a query that meets your needs. Just make sure that you're returning only the one adapter that you want to toggle. You'll need to save your finished script into a text file with a .ps1 extension. I saved mine to C:\Windows\scripts\switch_wireless.ps1.
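To help build that query, you can list the adapters WMI knows about and pick out the one you want to control. A rough sketch, assuming Windows 7 or later where the PhysicalAdapter property is available:

Get-WmiObject -Class Win32_NetworkAdapter |
    Where-Object { $_.PhysicalAdapter } |
    Select-Object Name, NetConnectionID, NetEnabled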

This almost certainly is not the best way to do it, but it works. I'm sure there is a PowerShell expert out there somewhere who has already written this on one line. If you've got a better way, please share it.


Wasting Time With Authenticode

PowerShell includes extra security for scripts. Which is to say that you won't be able to run them unless you do some extra work. If you want to take advantage of all the security Windows has to offer, then you'll need to sign your new script, trust yourself as a code publisher, and set the execution policy for your machine to "AllSigned". If you want to just get on with it, you can add the argument "-executionpolicy bypass" in the next step and ignore all of this.  In hindsight, that is probably what I should have done.

If you want to do it the "right way," you'll need to sign your script. If you're already equipped to do this, you probably don't need the help of this article. If you're not, you probably want to generate some self-signed certificates, install them, and sign the application. I used the helpful tutorial here to generate a code signing certificate, install it into the appropriate certificate stores, and sign my script. You'll need the makecert utility, which means you'll need to download and install either the Windows SDK or a copy of Visual Studio Express from the Microsoft site. If you're using Windows 8, that step can be skipped, and the whole process is greatly simplified by this helpful script, which isn't going to work on Windows 7, so don't try.
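For reference, once your code-signing certificate is sitting in your personal store, the signing step itself is only a couple of lines. This sketch assumes a single code-signing certificate in the current user's store and the script path I used above:

# Grab the first code-signing certificate from the current user's personal store
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
# Sign the script with it
Set-AuthenticodeSignature -FilePath C:\Windows\scripts\switch_wireless.ps1 -Certificate $cert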

An important final step not mentioned by the signing tutorial is that you need to install your signing certificate as a trusted publisher on your system. You can do this by running the following commands:
# Load your code-signing certificate from the current user's personal store
$cert = Get-ChildItem Certificate::CurrentUser\My\<certificate thumbprint>
# Open the machine's Trusted Publishers store and add the certificate to it
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store "TrustedPublisher","LocalMachine"
$store.Open("ReadWrite")
$store.Add($cert)
$store.Close()
Once you have done this, you should be able to run "gci cert:\LocalMachine\TrustedPublisher" and see your certificate installed.

Finally, set your execution policy by running "Set-ExecutionPolicy AllSigned" and answering yes when prompted by the scary warning message. Now you should be able to run your freshly signed script securely.

Configuring the Scheduled Task

Now that you have created your PowerShell script, and hopefully wrestled Authenticode into submission, you can link your script to the adapter events to finally arrive at our desired behavior. Go back to your Windows logs, find the event that you want to trigger on, and select it.

On the Action Pane, under "Event", click "Attach Task To This Event..." Go through the wizard, and create a task for both the connected and disconnected events, referencing your PowerShell script as the executable.


Once you've done this, you'll need to open up Task Scheduler and make some adjustments before it works correctly. If you don't, you're just going to see a text box with your script pop up every time the adapter is connected or disconnected.  Find your "Disable" task, and double-click it to make changes:


  1. Set the user account for the task to "SYSTEM", and make sure it's set to "Run with highest privileges" and to run "Hidden".
  2. On the Actions tab, edit the action that was automatically generated by the wizard. The action should be set to "Start a program", and the program to run should be changed to just "powershell". In the arguments field, you should have something like "-noprofile -file C:\Windows\scripts\switch_wireless.ps1 disable" for the event that will disable the adapter, and a similar argument with "enable" for your other event (the full command line is sketched just after this list). If you ignored everything about Authenticode in the previous section, this is where you want to add "-executionpolicy bypass" to your arguments so that your script won't be blocked by the execution policy.
  3. On the Conditions tab, make sure that nothing is checked, like "Start the task only if the computer is on AC power".
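For reference, the action for the "Disable" task ends up amounting to a command line like this (add "-executionpolicy bypass" only if you skipped the signing steps, and swap "disable" for "enable" in the other task):

powershell -noprofile -file C:\Windows\scripts\switch_wireless.ps1 disable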
When you're done, it should look something like this:


Rinse and repeat for the "Enable" event, making sure to pass the correct argument to your script.

Testing It Out

OK, now test it out. You should have been testing and experimenting along the way, but if you haven't, now is the time. With both scheduled tasks set, your adapter should toggle on and off when you connect/disconnect your system from Ethernet.  Pretty neat, and it frees up your wireless bandwidth so that all of your iDevices aren't impeded from downloading cat videos.

Friday, September 12, 2014

Misadventures with Network Location Awareness

Recently, I ran into a situation where an IIS-based application (Microsoft Dynamics CRM 2011) was working very slowly.  It was working—but pages were taking a very long time to load, on the order of minutes.  Lots of troubleshooting went into determining the cause of this issue: checking database configuration and integrity, application settings, and network latency and routing issues.  Ultimately, in an illogical act of total desperation, all of the systems supporting the application were configured to have their host-based firewalls completely disabled.  After that, the application page loaded instantly, and everything was very responsive!

This wouldn’t have been so surprising if the Windows Advanced Firewall hadn’t already been configured to allow all of the required connections between the servers.  A little more investigation into the issue revealed that the Domain-connected network adapters were either assigned to the Public profile, or were stuck in the “Identifying…” state.  This is a big problem, because some of the firewall rules necessary to support the service were configured to apply only to network connections assigned to the Domain profile.

So, I had found the root cause of the problem.  But, why were the network adapters being assigned to the incorrect profile, or not being assigned a profile at all?  According to Microsoft, in order for the Network Location Awareness service, which is responsible for assigning profiles to each of your networks, to assign the Domain profile, two conditions must be met:
  1. The system must be able to communicate with a DNS server that has the same connection-specific DNS name that it is configured for.
  2. The system must be able to contact a Domain Controller via LDAP.
If either of these conditions is not met, the system cannot be assigned the Domain profile.  I suspect that on this system, during the setup phase, there were network configuration issues or interruptions that prevented NLA from connecting to the DNS server when it was trying to identify the network.

There isn’t a prescribed situation in which a system should get stuck in the “Identifying…” state, but I can say that I have certainly seen it happen.  I suspect it might be a bug in the NLA service, as even if the system can’t get to a DNS server or contact the Domain Controller, it should end up in some state, even if it is the public network profile.  If I have time I would like to do more investigation into why this happens.  Until then, you can usually coerce it into making a decision by manually restarting the Network Location Awareness service, NlaSvc.
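From an elevated PowerShell prompt, that is just the following; -Force is there because other services depend on NlaSvc:

Restart-Service NlaSvc -Force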

You might also find that you have lots of unused or incorrect profile information stored on your systems.  You can clear this out and let the NLA service regenerate it by removing all of the keys stored in HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Nla.
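A sketch of how you might do that from an elevated PowerShell prompt; export the key first so you can restore it if something goes sideways, and adjust the backup path to taste:

# Back up the key before touching it
reg export "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Nla" C:\Temp\Nla-backup.reg
# Remove the cached keys so NLA regenerates them
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Nla' | Remove-Item -Recurse
# Nudge NLA into re-profiling the networks
Restart-Service NlaSvc -Force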

I also discovered a corner case that is very unlikely to occur, but is worth considering if you are using NLA and the Windows Advanced Firewall to help secure your network: if you have a complete system outage, including your primary domain controllers and DNS servers, those servers might get assigned the incorrect (public) profile when they are restarted.  A solution for this is to create a scheduled task that triggers on DNS server event ID 4 and runs the PowerShell command “restart-service dhcp -force”.  This works for two reasons: the DNS server logs an event ID 4 when it has finished loading the AD-integrated zones and the domain is fully available, and NLA will re-profile the network when DHCP is restarted.  With this scheduled task in place, you can be more confident that in the event of a complete outage, the correct rules end up assigned to your AD/DNS servers.
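Putting that task together works much like the event-triggered task in the wireless adapter post above: attach a task to event ID 4 in the DNS Server log (under Applications and Services Logs), set the program to "powershell", and pass arguments along these lines. Treat this as a sketch and verify the log name and the DHCP Client service name ("dhcp") on your own servers:

-noprofile -command "Restart-Service dhcp -Force"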

AppLocker, Service Accounts, and Group Policy Pitfalls


AppLocker is managed and enforced by two Windows services: AppIDSvc and AppID.  AppIDSvc handles the configuration and management of AppLocker, and is the service primarily discussed in Microsoft documentation.  But AppID is the service that AppIDSvc depends on to do the actual enforcement of AppLocker policy.  In order for AppLocker to work correctly, both of these services must be enabled and running.
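A quick way to check that both are actually running on a given system (AppID is implemented as a kernel driver, so sc.exe is the easier way to query it); consider this a sketch:

sc.exe query appid
Get-Service AppIDSvc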

For this reason, it makes sense to include some extra settings in your group policy to ensure that the AppIDSvc is running on your target systems: Naturally, you’ll open up your Group Policy Management Editor and navigate to Computer Configuration -> Preferences -> Control Panel Settings -> Services.  Then, right-click, select New, and then Service.  Type in the Service name, “AppIDSvc”, and set the Service action to Start Service.  Click the radio button to set the Log on account to Local System account, click OK and then you are done!  Right?



Wrong!  If you do this, you’ll end up spending the next 4 hours trying to trace the cause of “Error 31: A device attached to the system is not functioning” while simultaneously being baffled by the fact that some of your systems are still enforcing AppLocker policy and logging AppLocker error messages.

You need to make sure that your service is configured to run as “NT Authority\Local Service” in your group policy settings.  If you (like me) have accidentally pushed out a policy with the “Local System account” set as the Log on account, you can go back into your group policy settings, change the Log on as setting to This account: and type in “NT Authority\Local Service”.  You will need to specify a password to satisfy the dialog box—but it doesn’t matter what it is; the Local Service account doesn’t have a password, and whatever you type there will be ignored by the client system.  I used a single space.  Now you should see the account correctly displayed in the Group Policy editor.
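Once the corrected policy has applied, you can spot-check a client to make sure the service really is running as Local Service. A quick sketch using WMI:

Get-WmiObject Win32_Service -Filter "Name='AppIDSvc'" |
    Select-Object Name, State, StartMode, StartName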

 
But why were my AppLocker policies that were already pushed out still being enforced even though the service stopped running?  Remember the AppID service?  The AppID service is actually there to register a kernel-mode device driver that handles the actual enforcement of policy.  So even if you break AppIDSvc, which is responsible for managing and applying policy, AppID will keep running and keep enforcing whatever policy you had in place when you decided to break AppIDSvc.

You shouldn’t need to specify anything for this service to start, as it’s a registered dependency of AppIDSvc.  If you wanted to be extra-thorough, you could verify that the service isn’t marked as disabled in the registry.  Implementing that step, perhaps by tattooing the registry, is left as an exercise to the reader.  Details about both services are located in the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\.  Details about those details (metadetails?) are described in this Microsoft TechNet article.
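For example, to peek at the start type and log-on account recorded in the registry for both services (a Start value of 4 means the service is disabled), something like the following should do; treat it as a sketch:

Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\services\AppIDSvc', 'HKLM:\SYSTEM\CurrentControlSet\services\AppID' |
    Select-Object PSChildName, Start, ObjectName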

Of course, once you've got the services enabled and working correctly, you’ll still need to configure your AppLocker Application Control Policies, link your Group Policy Object to your target OU, and refresh group policy on your clients before you start to see AppLocker in action.  Don’t forget to look in Server Manager -> Diagnostics -> Event Viewer -> Applications and Services Logs -> Microsoft -> Windows -> AppLocker to see information about how your policies are being enforced on your system.

First post.

I have been thinking of starting a blog for a long time.  Mostly I have hesitated because I don't like the word "blog."  Also, I couldn't think of a good name for a blog that wasn't taken.  But, an overwhelming desire to document solutions to various problems in a persistent and accessible way was the forcing function that enabled me to finally overcome those petty obstacles.

I think this blog is going to be mostly technical in nature, and will probably focus on IT-related issues.  I might use it to discuss my other hobbies, which include, and are mostly limited to, photography and cars.  I'm going to do my best to not label or organize it.  I want it to have a nice 1990's, "Web 1.0", web-ring kind of a feel to it.  I'm off to a bad start already because I'm using Blogger--but convenience, laziness, and all.

Finally, I expect that no one will ever read this first post, and should anyone ever end up here besides myself, it will have been a result of a panicked Google search in an effort to solve a problem that I have already run into, written about, and hopefully solved.  I hope that you find your solution to whatever problem we might have had the misfortune of sharing.