Preparing Active Directory for O365 in a Hybrid IBM Domino Environment

NOTE:  This is the first in a series of articles I am writing as my organization moves from IBM Domino to Office 365.

For those who are moving from IBM Domino as their email and directory platform, there are a number of things to think about in your Active Directory environment. If you have had AD and Domino in the same environment, you have most likely invested more time in making sure your Domino directory is updated with what I like to call rich contact information. This includes details such as address, telephone number, manager and work location. When you begin to move to O365, it becomes important to make sure that AD contains this same rich information.

Directory Cleanup

One of the first aspects to look at is what information you want to store in AD. The most obvious pieces of information to store are basic address details:

  • Street address
  • City
  • State/Province
  • Zip/Postal Code
  • Telephone number
  • Mobile number

This is basic information related to the user’s location. Beyond that, there is information related to the user’s job:

  • Employee number – This field allows us to tie an employee with our HR system.
  • Employee type – We mark this field to denote what kind of employee/account this is. We use EMPLOYEE, VOLUNTEER, TEMPAGENCY, OFFICER, RETIREDOFFICER and POSTRETIREMENTSERVICE as an example.
  • Manager – The manager field becomes more important as we move to O365 as there are many views in O365 that expose our relationship with our manager and others in our corporation.
  • Job title – This is the person’s job title.
  • Officer ID – This is an additional tie we have to other external systems for a subset of our users.

Finally, there are some fields that are more specific to the user’s location and where it fits in the corporation:

  • Department
  • Company
  • Office/Location
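Once you settle on the attribute set, the stamping itself is easy to script. Here is a minimal sketch using splatting; all of the values and the sample user name are placeholders, not our real data:

```powershell
# Build the attribute set as a hashtable so the same shape can be reused per user.
$userAttributes = @{
    StreetAddress = "123 Main Street"   # placeholder values throughout
    City          = "Springfield"
    State         = "IL"
    PostalCode    = "62701"
    OfficePhone   = "217-555-0100"
    MobilePhone   = "217-555-0101"
    Title         = "Program Director"
    Department    = "Community Services"
    Company       = "Contoso"
    Office        = "Springfield Citadel"
}

# In a domain-joined session with the ActiveDirectory module loaded:
# Set-ADUser -Identity "jsmith" @userAttributes
```

Building the hashtable once also makes it easy to validate the values before they ever touch AD.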

Domino adds something interesting to this last section: corporate hierarchy fields, which allow you to set the corporate hierarchy up to six levels deep and across multiple job descriptions. Because we are part of a large international organization, we have worked on setting these fields with that in mind. Here is how we have used them:

  • Corporate Hierarchy 1 Level 0 – We use this to denote our Territory’s name.
  • Corporate Hierarchy 1 Level 1 – We use this to denote the division or command.
  • Corporate Hierarchy 1 Level 2 – This shows the area command (if a location is part of an area command).
  • Corporate Hierarchy 1 Level 3 – The physical location where a user is located.
  • Corporate Hierarchy 1 Level 4 – The department or role that the user is in.

AD doesn’t have those corporate hierarchy fields, so we decided to create a custom schema extension to add those attributes, along with some other attributes to assist with our AD/Domino coexistence. Here are the custom fields we created in AD to match Domino:

  • CorpHierarchy1Level0 – 6
  • CorpHierarchy2Level0 – 6
  • DominoAbbreviatedName
  • DominoDN
  • DominoMailFile
  • DominoMailServer
  • DominoSync

Creating the fields and making them available is one thing, but populating the data is a whole other challenge, as is getting the data in consistently. For example, is it “Street” or “St.”? To help with consistency, we created location groups in AD: for each of our more than 780 locations, we created an AD group. This also allows other initiatives to grant access to users based upon their location. We then tied the address information and the corporate hierarchy to these groups. There are a couple of ways to do this: put the address information directly into the group (using custom fields), or keep a database of all your properties that contains the proper location information.

Once we had the location groups created, we started assigning each user to a location. I have also implemented a front-end process so that whenever a new user is created, a location (among other data) must be assigned. This enables us (using PowerShell, of course) to stamp critical location information onto each user based upon their group membership, whether the account is brand new or has been around for a while.
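The group-based stamping can be sketched as below. The group naming convention and the address table are hypothetical, and the live AD calls are commented out so you can adapt them to your environment:

```powershell
# Hypothetical convention: one AD group per location, named "LOC-<location>",
# with the address data kept in a lookup table (or a database, as noted above).
$locationData = @{
    "LOC-SpringfieldCitadel" = @{
        StreetAddress = "123 Main Street"
        City          = "Springfield"
        State         = "IL"
        PostalCode    = "62701"
    }
}

foreach ($groupName in $locationData.Keys) {
    $attrs = $locationData[$groupName]   # address set for this location
    # In a live environment, stamp every user member of the location group:
    # Get-ADGroupMember -Identity $groupName |
    #     Where-Object { $_.objectClass -eq 'user' } |
    #     ForEach-Object { Set-ADUser -Identity $_.SamAccountName @attrs }
}
```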

This gets some of our vital corporate information onto our user accounts. Manager hierarchy is another important area, so we have worked with HR to get each employee’s manager. The best tie we have to our HR system is the employee number. We have found this to be the best solution because many of our users don’t go by their full legal name; whether they use a middle name as their primary name, a shortened version of their given name, or something totally unrelated to their given name that is nonetheless the only way people know them, the employee number is the definitive tie.
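Since the employee number is the tie, the manager stamping can be sketched as follows. The HR export layout (EmployeeNumber and ManagerEmployeeNumber columns) is an assumption, and the AD calls are commented out:

```powershell
# Simulate one HR export row; in practice this comes from Import-Csv on the HR file.
$hrRows = @(
    [pscustomobject]@{ EmployeeNumber = "100234"; ManagerEmployeeNumber = "100001" }
)

foreach ($row in $hrRows) {
    # Look up both user and manager by employeeNumber, then stamp the manager:
    # $user    = Get-ADUser -Filter "employeeNumber -eq '$($row.EmployeeNumber)'"
    # $manager = Get-ADUser -Filter "employeeNumber -eq '$($row.ManagerEmployeeNumber)'"
    # if ($user -and $manager) { Set-ADUser -Identity $user -Manager $manager }
}
```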

If you already have much of this rich information populated in Domino, the quickest path will be to sync your AD and Domino environment and get that rich information over to AD. I’ll discuss later the process we use to sync our directories. As part of this process, it is also important to identify your directory of record. Is it going to be AD or Domino or will it change over time? Much of this depends upon the tools you provide for users to update their information. We have made AD our primary directory and provided tools to allow our users to update limited subsets of information about themselves.

Changing Domino Password when AD Password Changes–Part 4–Using PowerShell to Capture Change and Send to Domino

The final piece of the puzzle is to take the information that has been written to our LDAP store and push those changes to Domino.  The blog article I mentioned in part one (where we got the modified password sync NSF) uses TDI to accomplish this.  Originally, that was our plan as well, until we found out how complicated TDI is to use.  Despite working with a vendor who had used TDI before, we were unable to get all of the pieces working through TDI even after several hours.  In fact, the recommendation was to have two TDI servers: one where the config changes were made and a second that actually ran the process.  This seemed overly complex for what we wanted and took a lot more effort to maintain than I desired.

Because of that, I decided to fall back to my old familiar friend, PowerShell!  Since the information is stored in AD, I can easily retrieve that information and process it.  As we discussed requirements, we wanted to capture password changes at least every five minutes and retry changing the passwords if the initial attempt failed.  To accomplish this, I wrote the following PowerShell function (that calls the function I created in part 1 to do the actual change):

function Get-USSPasswordChange {
<#
    .SYNOPSIS
        Retrieves passwords as they are changed in AD and updates them in Domino.
    
    .DESCRIPTION
        This retrieves any password changes that were made in AD and writes those password changes to Domino.  It attempts the password change 
        up to 3 times.  If unsuccessful, it will send an email error message.  This will only retrieve objects that have a password.  Once the password has
        been changed, it will be cleared in AD.
    
    .EXAMPLE
                PS C:\> Get-USSPasswordChange
    
    .NOTES
    ===========================================================================
     Created with:  SAPIEN Technologies, Inc., PowerShell Studio 2017 v5.4.135
     Created on:    2/27/2017 3:39 PM
     Created by:    Doug Neely
     Organization:  TSA
     Filename:      Get-USSPasswordChange
     Version:       1.0.0
    ===========================================================================

#>
    
    [CmdletBinding()]
    param ()
    
    BEGIN 
    {
        Import-Module Domino
        Import-Module ADProxySAUSS
    }
    PROCESS
    {
        $UserToUpdate = Get-ADObject -Filter { (objectclass -eq "ibm-diPerson") } -Properties "ibm-diUserID", "ibm-diPassword", "ibm-diCustomData", "description" | Where-Object {$_."ibm-diPassword" -like "*"}
        foreach ($user in $UserToUpdate) {
            $IBMDIPersonDN = $user.DistinguishedName
            $DomainController = $user."ibm-diCustomData"
            $Attempt = $user.description
            $Username = $user.name
            $HexPassword = $user."ibm-diPassword"
            # ibm-diPassword is an Octet String, so AD returns it as a byte array;
            # convert the whole array back to text in one call.
            $PasswordArray = [System.Text.Encoding]::ASCII.GetString([byte[]]$HexPassword)
            
            $ADUser = Get-ADUser -Identity $user.Name -Properties tsaDominoAbbreviatedName
            #TODO: Add error handling for when there isn't a Domino account.
            $ADDomAbbreviatedName = $ADUser.tsaDominoAbbreviatedName
            If ($ADDomAbbreviatedName) {
                Write-Verbose "DN:                $IBMDIPersonDN"
                Write-Verbose "Password:          $PasswordArray"
                Write-Verbose "DominoAbbreviated: $ADDomAbbreviatedName"
                $DominoPwdChange = Set-USSDominoPassword -DominoAbbreviatedName $ADDomAbbreviatedName -Password $PasswordArray -DomainController $DomainController
                $HTTPChange = $DominoPwdChange.HTTPResultCode
                $IDVaultChange = $DominoPwdChange.IDVaultResultCode
                If ($HTTPChange -eq "HTTP Password changed successfully.") { 
                    $HTTPChange = "SUCCESS"
                }
                If ($IDVaultChange -eq "ID Password changed successfully.") {
                    $IDVaultChange = "SUCCESS"
                }
                If (($HTTPChange -eq "SUCCESS") -and ($IDVaultChange -eq "SUCCESS")) {
                    #If both changes are successful, I will clear the password.  Otherwise, I will increment the attempts
                    Set-ADObject -Identity $IBMDIPersonDN -Clear "ibm-diPassword"
                } Else {
                    # Cast to [int]: the description attribute comes back as a string,
                    # and "1" + 1 would concatenate rather than add.
                    $Attempt = [int]$Attempt + 1
                    If ($Attempt -eq 3) {
                        Set-ADObject -Identity $IBMDIPersonDN -Clear "ibm-diPassword"
                        $BodyContent = "Failed to change Domino Password <b>$Username</b>.</br>  <b>HTTP Result: </b>$HTTPChange</br> <b>ID Vault Result: </b>$IDVaultChange</br>`
                    <b>Domino Abbreviated Name: </b> $ADDomAbbreviatedName"
                        Send-USSEMail -To $EmailErrors -Subject "Domino Password Change Failed: $UserName" -BodyContent $BodyContent -Verbose
                    } Else {
                        Set-ADObject -Identity $IBMDIPersonDN -Description $Attempt
                    }
                } #End if HTTP and IDVault Password Successful
            } #End If  ADDomAbbreviatedName
            #Remove-ADObject -Identity $IBMDIPersonDN -Confirm:$false
        } #End User
    }
    END
    {
        
    }
}
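To meet the five-minute capture requirement mentioned above, the function can be run from a scheduled task. Here is a sketch; the task name is a placeholder, the function is assumed to be available to the task’s account, and on Server 2012 R2 the trigger may also need a RepetitionDuration:

```powershell
# Five-minute repetition interval for the capture job.
$interval = New-TimeSpan -Minutes 5

# On the server running the sync (Windows 8/Server 2012+ ScheduledTasks module),
# register the task under an account with rights to the TDI OU and the Domino web service:
# $action  = New-ScheduledTaskAction -Execute "powershell.exe" `
#     -Argument '-NoProfile -Command "Get-USSPasswordChange"'
# $trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval $interval
# Register-ScheduledTask -TaskName "Domino Password Sync" -Action $action -Trigger $trigger
```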

I hope that this series will help you as you work with TDI 7.1.1 and AD to sync passwords between AD and Domino.



Changing Domino Password when AD Password Changes–Part 3–AD Schema Update

As we were preparing everything for these changes, we ran across numerous references to custom IBM schema fields used by the password sync process (when using LDAP as the store), but very little detail about what the schema change does and how it is used.  I don’t like to apply schema updates without a few more details, so I will try to explain some of it here.

To find the schema updates, go to the etc folder under the pwd_plugins folder.  Here you will find an ldif file named ibm-diPersonSchemaForAD.ldif.  This file makes the following changes:

  • ibm-diPerson – This is essentially the equivalent of a user or contact.  It is the top-level object class that contains all of the following attributes.
  • secretKey – I honestly am not sure what this is being used for.  I do not see any changes being made to this attribute when a password change occurs.
  • ibm-diUserId – This stores the samaccountname of the account where the password was changed.  This lets you tie the account to an existing AD user.
  • ibm-diPassword – This is where the captured password is stored.  It is in plain text unless you configure the pwsync.props file to encrypt the password.  I’ll discuss some other methods that can be used to secure this password later.
  • ibm-diExtendedData – This appears to store the common name of the user whose password was changed.
  • ibm-diCustomData – Using the pwsync.props file, you can pass custom data to this field.  An example of the custom data may be the server name on which the password change was intercepted.
  • ibm-diTimestamp – This is the timestamp of when the password change was intercepted.  It doesn’t seem to get filled in on the initial creation of the ibm-diPerson (first password change), but does seem to get filled in on subsequent changes.

Once the schema has been added to AD, these new attributes are available for use by the password sync process.  Now you will want to create an OU where the new ibm-diPerson objects can be created by the password sync process.  (I called mine TDI.)  You will also want to create a service account.  (The service account information is stored in the pwsync.props file.)  Here is where you can add an additional layer of security to the password sync process.  Typically, new objects can be read by anyone.  I would recommend changing the security on the new OU so that only domain admins and the new service account can read the OU and objects in it.  (As always, I recommend using a group to delegate those permissions.)

Once everything is working, you will see new objects created in the OU.

[Screenshot: new ibm-diPerson objects created in the TDI OU]
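If you prefer to check from PowerShell rather than the GUI, a quick read-only query in a domain-joined session shows the same thing (the OU path below matches the ldap.suffix configured in part two; adjust for your domain):

```powershell
# List captured password-change records and the metadata attributes described above.
Get-ADObject -SearchBase "OU=TDI,DC=contoso,DC=net" `
    -Filter { objectClass -eq "ibm-diPerson" } `
    -Properties "ibm-diUserId", "ibm-diCustomData", "ibm-diTimestamp" |
    Select-Object Name, "ibm-diUserId", "ibm-diCustomData", "ibm-diTimestamp"
```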

On the object, you can see these attributes created:

[Screenshot: the new attributes populated on an ibm-diPerson object]

One of the interesting parts about ibm-diPassword is that it is of the type String(Octet).  While it is viewable in Attribute Editor as plain text, reading the attribute any other way requires converting it from a byte array.
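The conversion is a one-liner. Here the stored value is simulated with a local byte array rather than a live AD read, since Get-ADObject returns an Octet String attribute as byte[]:

```powershell
# Simulate the byte[] value that Get-ADObject returns for an Octet String attribute.
$octetValue = [System.Text.Encoding]::ASCII.GetBytes("P@ssw0rd!")

# Convert the whole byte array back to a readable string in one call.
$plainText = [System.Text.Encoding]::ASCII.GetString($octetValue)
$plainText   # → P@ssw0rd!
```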

Changing Domino Password when AD Password Changes–Part 2–Configuring Domain Controller to Capture Passwords

The next part of the process, now that we have a web service we can call to change the HTTP and ID Vault passwords, is to actually capture the password.  To complete this, we will do two things: configure a new password capture tool on each read/write domain controller, and perform a schema update to provide a location where we can temporarily store the password change.

To capture the password change on a domain controller, we will use Tivoli Directory Integrator 7.1.1 as it is part of our entitlement with Domino.  Here are a few notes regarding this installation:

  • FP6 breaks the password capture process, which caused us much frustration trying to figure out why things weren’t working.  Currently, I am installing the password sync with no fixpack.  We opened an incident, and IBM was able to duplicate the FP6 issue.  FP5 does seem to work, though.
  • To run the installer on Windows 2012 R2 (and probably on 2012, as well), you have to run it in Windows 7 compatibility mode and as an administrator.
  • We chose to install it to a path without any spaces.  Some of this came as we were troubleshooting the FP6 issue and some as we just found it was easier to put the paths into the pwsync.props file.

After installing just the Password Sync to the domain controller, there are a few post installation steps:

  • Copy the tdipwflt_64.dll file to C:\Windows\System32
  • Add the name of the dll file to “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\LSA\Notification Packages”
  • Run registerpwsync.reg to add info to the registry on where to find the pwsync.props file.
  • Reboot the DC.
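The four post-install steps can also be scripted for each DC. This is a hedged sketch: the source file locations are assumptions based on the no-spaces install path above, and the convention of listing the LSA package without the .dll extension should be verified against the IBM documents linked at the end of this part:

```powershell
# Run elevated on the domain controller.
Copy-Item "C:\TDI\V711\pwd_plugins\windows\tdipwflt_64.dll" "C:\Windows\System32"

# Append the filter name to the LSA "Notification Packages" multi-string value.
$lsaKey   = "HKLM:\SYSTEM\CurrentControlSet\Control\LSA"
$packages = (Get-ItemProperty -Path $lsaKey)."Notification Packages"
if ($packages -notcontains "tdipwflt_64") {
    Set-ItemProperty -Path $lsaKey -Name "Notification Packages" -Value ($packages + "tdipwflt_64")
}

# Import the pwsync registry settings silently, then reboot.
regedit /s "C:\TDI\V711\pwd_plugins\windows\registerpwsync.reg"
Restart-Computer
```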

Before any of this will work, it is necessary to configure the pwsync.props file.  At this point, it is important to decide what method you will use to capture the password.  Here are a few of the options:

  • Log Password – This is recommended for testing purposes only.
  • MQe (Message Queue Element) – With this you need to have something like IBM’s Websphere MQ server or other such product.
  • LDAP – This stores the information in an LDAP directory.  This is the method we chose to use.

There are a few sections in the pwsync.props file that are important to modify.

# Executable (binary or shell script) used to start the Java Proxy.
# If this property is set, both ‘jvmPath’ and ‘jvmClassPath’ will be ignored.
proxyStartExe=C:\\TDI\\V711/pwd_plugins/bin/startProxy.bat


# Port number, on which the Java Proxy listens for commands.
serverPort=18001


# The log file of the Plug-in part of the Password Synchronizer.
# If empty, no logging will be done.
logFile=C:\\TDI\\V711/pwd_plugins/windows/plugin.log


# Whether to reject password changes if the Password Store is down.
checkRepository=true

For testing purposes, you can configure the User filtering configuration section.  For example, you might require that a user be in a specific group for a password change to be passed to the password sync process.

#
# The Password Store component
#

syncClass=com.ibm.di.plugin.pwstore.ldap.LDAPPasswordStore

The LDAP section is the last section to configure:

#
# LDAP Password Store Configuration
#


# LDAP server host
ldap.hostname=DCNameHere


# LDAP server port
ldap.port=389


# LDAP bind dn
ldap.admindn=CN=SVC TDILDAP,DC=contoso,DC=net


# LDAP bind password
# This field must be encoded. Use the ‘encryptPasswd’ utility.
ldap.password=db8450d230833c9ae307c065401058d36d8a6b


# If set to true, password changes will be committed synchronously to the Password Store when
# a password change notification is received. The source of the password change will be blocked
# until the password change is written to the Password Store.
#
# If set to false, the commit will be asynchronous. Use the ‘ldap.delayMillis’ property to configure
# the time to wait before committing the password change.
ldap.waitForStore=true


# Time to wait (in milliseconds), before committing the password change to the Password Store.
# Will be ignored if ‘waitForStore’ is set to true.
# ldap.delayMillis=2000


# Use SSL for LDAP communication.
# If set to true, JSSE must be configured (set the javax.net.ssl.trustStore and javax.net.ssl.keyStore properties).
# ldap.ssl=false


# Location in the LDAP directory tree, where the Password Synchronizer will store data.
ldap.suffix=OU=TDI,DC=contoso,DC=net


# Name of an LDAP object class used to hold information for a given user.
ldap.schemaPersonObjectName=ibm-diPerson


# Name of an LDAP attribute which represents user identifier.
# This attribute must be a member of the object class specified by the ‘ldap.schemaPersonObjectName’ property.
ldap.schemaUseridAttributeName=ibm-diUserId


# Name of an LDAP attribute which represents user password.
# This attribute must be a member of the object class specified by the ‘ldap.schemaPersonObjectName’ property.
ldap.schemaPasswordAttributeName=ibm-diPassword

You will notice some special schema attributes mentioned in this section.  The next part of this blog will cover the schema attributes that need to be added and detail how they are used.

Once you have the pwsync.props file configured, you can copy it to each server (into the C:\TDI\V711\pwd_plugins\windows folder if you use the same install path that I did).  You will just need to update a couple of lines to reflect the correct server name.
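The copy-and-update step can be scripted with a simple per-line regex replace. The template lines below stand in for reading a real pwsync.props with Get-Content, and $dc, the DC list, and the UNC path are placeholders:

```powershell
# Stand-in for: $templateLines = Get-Content "C:\Build\pwsync.props"
$templateLines = @(
    "# LDAP server host",
    "ldap.hostname=DCNameHere",
    "ldap.port=389"
)

$dc = "DC01"
# -replace operates on each line of the array; only the hostname line matches.
$updated = $templateLines -replace '^ldap\.hostname=.*', "ldap.hostname=$dc"

# Push the updated file to the DC (placeholder UNC path):
# Set-Content -Path "\\$dc\c$\TDI\V711\pwd_plugins\windows\pwsync.props" -Value $updated
```

Wrapping this in a loop over your read/write DCs keeps all the copies in sync with one template.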

Here are a couple of links to IBM support documents regarding this:

https://www.ibm.com/support/knowledgecenter/SSCQGF_7.1.1/com.ibm.IBMDI.doc_7.1.1/pluginsguide26.htm#win_postinstall

https://www.ibm.com/support/knowledgecenter/SSCQGF_7.1.1/com.ibm.IBMDI.doc_7.1.1/pluginsguide.pdf

Changing Domino Password when AD Password Changes–Part 1–Changing the HTTP and ID Vault Passwords

As an organization, one of our goals is to get to a single login.  For us, this means that we want users to use the same username and password across various systems.  Our primary authentication system is Active Directory, but we still have a number of legacy Domino databases around.  To accomplish our goal, we want passwords to sync between AD and Domino.  There are a number of commercial systems that promise this ability, but many of them are costly.  Since we are already entitled to use Tivoli Directory Integrator, we decided to use it to implement the password syncing.  Of course, since this is an IBM product, there is very little to no clear documentation on how to implement this.  Here is my attempt to document the process for others to use.

When dealing with IBM Domino, there are two places where passwords are stored: the internet password (otherwise known as the HTTP password) and the ID file password.  The internet password field is used by Domino when accessing resources over the internet (such as iNotes or other web-enabled applications).  The ID file used for “thick client” installations contains its own password.  If your company has implemented the ID Vault, the best way to change this password is by changing it in the ID Vault.

To help us accomplish this, we found some great instructions on setting up a web service that enables changing both the HTTP password and the ID Vault password.  Our journey started here: http://www.cloudevangelist.in/2015/07/tivoli-directory-integrator-sync-ad.html .  Attached to that blog article is a Domino NSF that the author modified from the default pwdresetsample.nsf available on a standard Domino server installation.  To get it working, there are a few security settings you will have to modify so that the database runs in a context that can reset ID Vault passwords.  The biggest one is making sure the database can update the ID Vault password; for this, we signed both the database and the web service with the server certificate and gave the server password reset permissions on the ID Vault.  It also needs to run on a server with HTTP enabled, as it is a SOAP service.  Internally, you can also modify the database to use a shared secret so that the password change can be called securely.  Review any other security aspects to make sure it works securely in your environment.

Once we had the above up and running, we tested it (using PowerShell, of course).  Here is a function that can change both the HTTP and ID Vault passwords.

function Set-USSDominoPassword {
<#
    .SYNOPSIS
        This will set the Domino HTTP and ID Vault password.
    
    .DESCRIPTION
        This uses a SOAP call to change the user's HTTP and ID Vault passwords.
    
    .PARAMETER DominoAbbreviatedName
        This is the user's name in Domino AbbreviatedName format: First Last/OU/Org
    
    .PARAMETER Password
        The new password to change to.
    
    .EXAMPLE
        PS C:\> Set-USSDominoPassword -DominoAbbreviatedName 'Joe Bob/AOK/SArmy' -Password "New Password Here"
    
    .NOTES
        ===========================================================================
        Created with:   SAPIEN Technologies, Inc., PowerShell Studio 2016 v5.2.120
        Created on:     2/27/2017 2:14 PM
        Created by:     Doug Neely
        Organization:   The Salvation Army
        Filename:       Set-USSDominoPassword.ps1
        Reference:  https://foxdeploy.com/2014/11/19/working-with-web-services-soap-php-and-all-the-rest-with-powershell/
        ===========================================================================
#>
    
    [CmdletBinding()]
    param
    (
        [Parameter(Mandatory = $true,
                   ValueFromPipeline = $true)]
        [string]$DominoAbbreviatedName,
        [Parameter(Mandatory = $true)]
        [string]$Password
    )
    
    $url = "http://servernamehere/PwdResetSample.nsf/passwordSync?WSDL"
    $HTTPMethodName = "CHANGEWEBPASSWORD"
    $IDVaultMethodName = "CHANGEIDPASSWORD"
    $SecretKey = "SecretKey"
    Write-Verbose $DominoAbbreviatedName
    foreach ($User in $DominoAbbreviatedName) {
        $ADUser = Get-ADUser -Filter { tsaDominoAbbreviatedName -eq $User } -Properties department, company
        $ADName = $ADUser.samaccountname
        $ADDepartment = $ADUser.Department
        $ADCompany = $ADUser.Company
        Write-Verbose "$ADName in $ADDepartment and $ADCompany"

        $proxy = New-WebServiceProxy $url
        $HTTPResults = $proxy.$HTTPMethodName($DominoAbbreviatedName, $Password, $SecretKey)
        #Start-Sleep 10
        $ProxyID = New-WebServiceProxy $url
        $IDVaultResults = $ProxyID.$IDVaultMethodName($DominoAbbreviatedName, $Password, $SecretKey)
        

        $obj = New-Object PSObject -Property ([Ordered] @{
                "DominoAbbreviatedName" = $DominoAbbreviatedName
                "SAMAccountName" = $ADName
                "Department" = $ADDepartment
                "Company" = $ADCompany
                "HTTPResultCode" = $HTTPResults
                "IDVaultResultCode" = $IDVaultResults
                
            }) #End PSObject
        
        Write-Output $obj
        Write-Verbose "Writing to DB and sending email"
        $Table = "dbo.DominoResetPasswordReport"
        $DBColumns = "Username, Company, Department, DominoAbbreviatedName, HTTPResults, IDVaultResults, DateTime"
        $DBValues = "'$ADName','$ADCompany','$ADDepartment','$DominoAbbreviatedName','$HTTPResults','$IDVaultResults'"
        Edit-ADProxyDBReport -Table $Table -DBColumns $DBColumns -DBValues $DBValues
        
    } #End foreach
}

This is the end of part one, where I configure the password change.  Part two will cover the configuration of the password capture on the domain controllers, part three will cover the AD schema changes and part four will tie everything together with a PowerShell script to capture the password changes and write them to Domino.

Auto Disabling Inactive Users with PowerShell

A number of years ago I posted some VBScript for auto-disabling inactive users.  My system before was a two-part system: I found the inactive users, wrote them to a text file and then my proxy system handled them.  Recently, I have been reviewing all of my scripts and moving them over to PowerShell.  (Note that I now always expect PowerShell 3.0 at minimum due to the improvement in native AD cmdlets.)

In addition, we have had to make sure that we are complying with PCI audit requirements so that inactive user accounts are disabled on a timely basis.  In the below PowerShell script, I have done a few different things:

  • Create an array of OUs to look in for inactive accounts.
  • Look for users who have never logged on and were created 45 days or more ago.
  • Next, I look for users whose accounts have expired.
  • I send each user I find to my Disable-ADProxyUsers function.  This lets me do everything I want to when I disable a user account.

Here is that part of the script:

Import-Module ADProxy             
#First run just does regular accounts.            
$SearchBaseArray = "OU=User Accounts,DC=contoso,DC=org", "OU=Specialized Workstations and Users,DC=contoso,DC=org"            
            
foreach ($SearchBase in $SearchBaseArray) {               
 Write-Verbose "First, let's get any user who has never logged on and the creation date is more than 45 days"            
  $a = Get-Date            
  $b = $a.AddDays(-45)            
  $Justification = "Disabled: Auto Disabled by Proxy - User Never Logged On."            
  Get-ADUser -f {(lastlogontimestamp -notlike "*") -and (enabled -eq $true) -and (whencreated -lt $b) } `
     -searchbase $SearchBase |           
  Disable-ADProxyUser -Requestor "Proxy System" -ProcessType "AutoNeverLoggedOn" `
     -Justification $Justification -Notes "Yes"            
              
 Write-Verbose "Now let's look for inactive users.  Search AD accountinactive automatically adds 15 days to the timespan"            
  $NbDays = 104            
  Write-Debug "Get the current date"            
  $currentDate = [System.DateTime]::Now            
  # Convert the local time to UTC format because all dates are expressed in UTC (GMT) format in Active Directory            
  $currentDateUtc = $currentDate.ToUniversalTime()            
  # Calculate the time stamp in Large Integer/Interval format using the $NbDays specified on the command line            
  $lastLogonTimeStampLimit = $currentDateUtc.AddDays(- $NbDays)            
  $lastLogonIntervalLimit = $lastLogonTimeStampLimit.ToFileTime()            
  $Justification = "Disabled: Disabled by Proxy - User Account Inactive for more than 90 days."            
  Get-ADUser -f { (lastlogontimestamp -lt $lastLogonIntervalLimit) -and (enabled -eq $true) `
     -and (whencreated -lt $b)} -searchbase $SearchBase |            
  Disable-ADProxyUser -Requestor "Proxy System" -ProcessType "AutoInactive" `
     -Justification $Justification -Notes "Yes"            
              
 Write-Verbose "Finally let's look for accounts that have expired"            
  $Justification = "Account expired.  If you reenable, update the Account Expiration date."            
  Search-ADAccount -accountexpired -usersonly -searchbase $SearchBase |             
   Disable-ADProxyUser -Requestor "Proxy System" -ProcessType "AutoExpired" `
     -Justification $Justification -Notes "Yes"            
            
}

Finally, I send any users I find to a custom AD module to disable the user. I have made this into a function that is part of a larger module that I use to manage most changes I make in AD.  This custom function handles the disable by doing the following:

  • Disables the account
  • Updates the description with information on when the account was disabled and who disabled it.
  • Puts the disabled date into a custom schema field called contosoDisabledDate.  I then use the date in this field to auto delete accounts after they have been disabled for 60 days.
  • Sets msNPAllowDialin to false.
  • Moves the account to a Pending Delete OU.
  • Records information about this work in a SQL table so I can produce a monthly report on all AD changes.  I’ll share some of my code on this in a future blog post.
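The 60-day auto-delete that keys off contosoDisabledDate can be sketched as follows. The OU path is a placeholder, the attribute is assumed to hold a file-time Large Integer (as with the TSADisableDate field described later), and the destructive call is left commented with -WhatIf:

```powershell
# Delete accounts that have been disabled for more than 60 days.
$cutoff = (Get-Date).AddDays(-60).ToFileTimeUtc()

# Get-ADUser -SearchBase "OU=Pending Delete,DC=contoso,DC=org" `
#     -Filter { enabled -eq $false } -Properties contosoDisabledDate |
#     Where-Object { $_.contosoDisabledDate -and [int64]$_.contosoDisabledDate -lt $cutoff } |
#     Remove-ADUser -Confirm:$false -WhatIf   # remove -WhatIf once verified
```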

One of the benefits of this function (as part of my module) is that I can send users to disable using the PowerShell pipeline.  If you aren’t familiar with running functions, you can save the code below as a file and dot source it.  (You can find more information about dot sourcing by reading this blog article: http://mctexpert.blogspot.com/2011/04/dot-sourcing-powershell-script.html )

    function Disable-ADProxyUser {            
    <# 
     .Synopsis
      Disables Users in AD, moves them to pending delete and updates them with the correct information.
    
     .Description
      This will disable a user using our processes in place here in USW.  This includes disabling the account,
      moving it to Pending Delete OU and updating appropriate fields (description - contains date user was disabled
      and who disabled it, info - has the reason the user was disabled, division - old location for placing disabled
      date, and TSADisableDate - new location for storing the disable date using a Large Integer).  Finally it 
      reports all of these updates to the ADProxy reporting database.  
      
      This command accepts the user name via either direct parameter input or via pipeline.
      
     .Parameter Requestor
      The name of the person requesting the disable.
    
     .Parameter ProcessType
      This lets us know if it was done manually or through some other process.
      
     .Parameter Justification
      This is the reason that the account is being disabled.
      
     .Parameter Notes
  If we also want to disable the user's Notes account, this field needs to be marked with a "Y".
       
     .Parameter Admin
      Setting this to "Y" will disable the user account and move it to the Pending Delete IT Admin OU.
       
     .Parameter WorkOrder
      The Service Desk Work Order number related to this disable.
    
     .Example
       # Disable a user and record the disable information to the ADProxy report DB.
       $LDAPFilter = "(&(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2))(lastLogonTimeStamp<=" + $lastLogonIntervalLimit + "))"
       Get-ADUser -SearchBase "OU=User Accounts,DC=usw,DC=ad,DC=salvationarmy,DC=org" -LDAPFilter $LDAPFilter -Properties * | Disable-ADProxyUser -Requestor "Proxy System" -ProcessType "Auto" -Justification "Disabled: Auto Disabled by Proxy - Inactive 75 days."
    
     .Example
       # Disable an individual user
       Disable-ADProxyUser -UserName "Joe.Bob" -Requestor "Proxy System" -ProcessType "Auto" -Justification "Disabled: Auto Disabled by Proxy - Inactive 75 days." -Notes "Y"
       
      .Example
      	#Disable a user with pipeline from a text file containing one user name per line
      	Get-Content names.txt | Disable-ADProxyUser -Requestor "Doug Neely" -ProcessType "ADCleanup" -Justification "User is inactive."
    #>            
    param(            
     [Parameter(Mandatory=$true,ValueFromPipeline=$True)]            
     [string] $UserName,            
        [parameter(Mandatory=$true)]            
        [string] $Requestor,            
        [parameter(Mandatory=$true)]            
        [string] $ProcessType,            
        [parameter(Mandatory=$true)]            
        [string] $Justification,            
        [ValidateSet("Y","N")]            
        [string] $Notes,            
        [ValidateSet("Y","N")]            
        [string] $Admin,            
        [string] $WorkOrder            
        )            
        BEGIN {            
        "Disabling User Accounts:"             
     #Set the termination date format used in the AD Description            
     $TerminationDate = Get-Date -Format "MMMM dd yyyy"            
     $TerminationDate = [string]$TerminationDate            
     $TermDateDivision = Get-Date -Format d            
     #The ContosoDisabledDate field in AD is a Large Integer.  Converting today's date to the FILETIME (large integer) format.
     $ContosoDisableDate = (Get-Date).ToFileTime()            
     }            
     PROCESS {            
                   
      foreach ($User in $UserName) {            
       $User            
          $ADAccount = Get-ADUser -identity $User -properties *            
          $ADSAM = $ADAccount.SamAccountName            
          $ADCompany = $ADAccount.Company            
          $ADDepartment = $ADAccount.Department            
        $ADEmployeeNumber = $ADAccount.EmployeeNumber            
        $ADEmployeeType = $ADAccount.EmployeeType            
       $ADLastLogon = $ADAccount.LastLogonTimestamp            
       If ($ADLastLogon -ne "") {            
        #$ConvertedLastLogin = [DateTime]::FromFileTime( [Int64]::Parse($ADLastLogon) )            
       }            
       $ADDescription = "Disabled on " + $TerminationDate + " by " + $Requestor            
       $DBReason = $Justification            
       #$Justification = "Inactive computer - Last logon: " + $ConvertedLastLogin            
       If ($ProcessType -eq "HR"){            
        $Justification = "User terminated by HR. Do not reenable unless HR rehires this user. $DBReason"            
       }            
       If ($ProcessType -eq "AutoInactive") {
        $WorkOrder = " "
        #$Notes must be "Y" to pass the ValidateSet on the parameter
        $Notes = "Y"
       }
                      
       #Disable Account            
       Get-ADUser -identity $ADSAM -properties *| Disable-ADAccount            
       #Set AD Fields            
       Get-ADUser -identity $ADSAM -properties *|             
         Set-ADUser -Description $ADDescription            
        Get-ADUser -identity $ADSAM -properties *|             
         Set-ADUser -clear info,division,ContosoDisabledDate            
        Get-ADUser -identity $ADSAM -properties *|             
         Set-ADObject -ADD @{info=$Justification;division="$TermDateDivision";ContosoDisabledDate=$ContosoDisableDate}             
        Get-ADUser -identity $ADSAM -properties *|            
         Set-ADuser -replace @{msnpallowdialin=$false}            
        #Move to Pending Delete            
      
         Get-ADUser -identity $ADSAM -properties *|             
          move-ADObject -TargetPath 'ou=Pending Delete,dc=contoso,dc=org'            
               
             
        #dbo.DisableUsersReport            
        #ProcessType, Username, Company, Department, Reason, LotusNotes, WorkOrder, Requestor, EmployeeNumber, EmployeeType, DateTime            
        $DBColumns = "ProcessType, Username, Company, Department, Reason, LotusNotes, WorkOrder, Requestor, EmployeeNumber, EmployeeType, DateTime"
        #The column list includes DateTime, so supply a value for it as well
        $DBDateTime = Get-Date
        $DBValues = "'$ProcessType','$ADSAM','$ADCompany','$ADDepartment','$DBReason','$Notes','$WorkOrder','$Requestor','$ADEmployeeNumber','$ADEmployeeType','$DBDateTime'"
        $DBValues            
        Edit-ADProxyDBReport -Table "dbo.DisableUsersReport" -DBColumns $DBColumns -DBValues $DBValues            
                  
        #Clear values at the end            
        $ADAccount = $null            
        $ADSAM = $null            
        $ADCompany = $null            
        $ADDepartment = $null            
        $ADEmployeeNumber = $null            
        $ADEmployeeType = $null            
        $ADLastLogon = $null            
      }            
     }            
     END { "Complete Disable ADProxy User"}            
    }            
    Export-ModuleMember -function Disable-ADProxyUser

    PowerShell Editors

    One of the things to consider when working with PowerShell is which editor you want to use.

    Since I spend a fair amount of time in a script editor on a day to day basis, I have definitely found some things that I prefer.  Here are a few things that I really want in any script editor:

    • Color coding (a requirement, though nearly every script editor includes it).
    • Auto-complete – finishing what you are typing with the correct code, usually via Tab or a similar key.
    • Quick and easy commenting – I often experiment in my scripts, so I want to be able to select a number of lines and quickly comment them out for testing.
    • Opening the editor and having the last scripts I was working on open automatically.  This is really just a convenience, as I often return to the same scripts and want to keep editing them between sessions.
    • Auto-signing of scripts.  I feel it is important to have all of my scripts signed automatically when I save them, so I can be sure that scripts haven’t been modified by someone else before running them.  I am in and out of various scripts all the time and don’t want to have to think about the signing process.

    These are just a few of the things that are important to me in an editor.  I have tried out a number of editors over the years.  My primary one has been PrimalScript 2011 by SAPIEN Technologies.  I have really enjoyed using it, but upgrading to the latest version is very expensive.

    I am now starting to play with PowerShell ISE (which comes free from Microsoft and is “in the box”).  Unfortunately, it doesn’t do everything on my must-have list for script editors.  This has started me on a quest to see if I can get it to do everything I need.  The nice thing about PowerShell ISE is that it supports add-ons to fill in the gaps.  One of the first add-ons I am using provides multi-line comment and uncomment and the ability to save state and exit.

    http://blog.danskingdom.com/powershell-ise-multiline-comment-and-uncomment-done-right-and-other-ise-gui-must-haves/

    Another piece of the puzzle for me is signing the scripts automatically.  It isn’t quite the built-in ability that I like with PrimalScript, but I think it will do the job for me. 

    http://huddledmasses.org/signing-powershell-scripts-automatically/

    Performance Testing Slow Startups

    On occasion, I will have users complain that their computer is starting up slowly.  Usually the first thing that gets blamed is Group Policy, but I know that it usually isn’t the primary cause (just a symptom of some other issue).  Often it indicates a problem specific to that machine.  How do I know that?  We have spent time over the years doing regular performance testing and timing our startups.  This used to involve a stopwatch, but it is difficult to get accurate times that way.

    Today, we use a tool from the Windows Performance Toolkit (specifically XPerf) to do our testing.  It gives us an accurate reading of our startup times and helps us troubleshoot systems with a slow startup.  Here are some example startup times in our environment:

    [Image: table of example startup times by location and connection type]

    *WTG is Windows To Go running from a USB key

    You can see from this that if a computer is at a location without a domain controller (one of our corps), the startup time increases but is still typically under two minutes.  Startup times increase a bit more when going over DirectAccess, but they are still well under the three-minute mark we like to see.

    So, how do I use XPerf to do my testing?  Well, rather than retype it all, I want to share an article (that has some good additional links in it) that covers setup and testing scenarios:

    http://blogs.technet.com/b/askpfeplat/archive/2012/06/09/slow-boot-slow-logon-sbsl-a-tool-called-xperf-and-links-you-need-to-read.aspx

    Sending Emails via IBM Notes and PowerShell

    I have been working for a while now to move a lot of my code over to PowerShell.  I have found it to be very efficient and easy to read (it just took a bit of a mentality change to switch over from VBScript).

    In our environment, I have always found it much more reliable to send emails (usually notification emails to end users about expiring passwords or accounts) directly through IBM Lotus Notes instead of just using our SMTP server.  Previously, I worked with my boss and we created a project in Visual Studio calling some of the Notes COM objects to send emails.  I decided to move all that code over and send the email directly from my PowerShell scripts.

    In searching the web, though, I didn’t find many examples that covered what I wanted to do, including sending HTML emails.  Today my boss and I finished working up this code, which I have packaged as a PowerShell module (save it as a .psm1 file) that I can now call from my other scripts.

    A couple of important notes to get this working.  First, you must have the IBM Notes client installed on the machine where you run the script.  Second, since it makes a COM call to the 32-bit Notes client, you must be running the x86 version of PowerShell.  (On newer versions of Windows such as Server 2012, PowerShell defaults to x64.)

    Here is the code:

    Function Send-IBMNotesMail { 
    <# 
     .Synopsis
      Sends Email using IBM Notes.

     .Description
      This uses a local install of Notes to send emails.  It seems that this is a bit more 
      reliable in our environment than using a regular SMTP server for some reason.  It requires
      Notes to be installed on the computer that this script is being run on.  It calls the 
      Lotus.NotesSession COM object, which requires PowerShell to be running in x86 mode.

     .Parameter NotesInstallDir
      The directory where Notes is installed.  It defaults to C:\Program Files (x86)\IBM\Notes.

     .Parameter Password
      The password of the ID file used to send email. 

     .Parameter MailFileServer
      The server where the mail file resides.  

     .Parameter MailFile
      The mail file to use to send the email.  

     .Parameter To
      The email address you are sending an email to.

     .Parameter From
      The email address of the sender.

     .Parameter Subject
      The subject line of the email.

     .Parameter BodyContent
      The body of the email in HTML format.

     .Parameter BCC
      (Optional) A BCC recipient.

     .Parameter Attachment
      (Optional) File to attach.

     .Example
       # Send an email.
       Send-IBMNotesMail -From "joe.bob@gmail.com" -To "joe.bob@outlook.com" -Subject "Hello" -BodyContent "Body of the email" -BCC "joe.smith@outlook.com"
    #>
    [CmdletBinding()] 
    param( 
        [Parameter()]  
        [string] 
        $NotesInstallDir = "C:\Program Files (x86)\IBM\Notes", 
        [Parameter()]  
        [string] 
        $Password = "password", 
        [Parameter()]  
        [string] 
        [ValidateNotNullOrEmpty()] 
        $MailFileServer = "mailserver", 
        [Parameter()]  
        [string] 
        [ValidateNotNullOrEmpty()] 
        $MailFile = "mailfile", 
        [Parameter(Position=0, Mandatory=$true)]  
        [string[]] 
        [ValidateNotNullOrEmpty()] 
        $To,
        [Parameter(Position=1)]  
        [string] 
        [ValidateNotNullOrEmpty()] 
        $From, 
        [Parameter(Position=2, Mandatory=$true)]  
        [string] 
        $Subject, 
        [Parameter(Position=3, Mandatory=$true)]  
        [string] 
        $BodyContent, 
        [Parameter()]
        [string]
        $BCC,
        [Parameter()]  
        [string] 
        $Attachment
    ) 

    #For information on the GetDatabase option:
    #http://www-12.lotus.com/ldd/doc/domino_notes/rnext/help6_designer.nsf/f4b82fbb75e942a6852566ac0037f284/320b2e0e83e87bca85256c54004d45a1 
    $Notes = New-Object -ComObject Lotus.NotesSession 
    $Notes.Initialize("$Password") 
    $MailDB = $Notes.GetDatabase("$MailFileServer", "$MailFile") 

    If (-not $MailDB.isopen) {
        Write-Host "Couldn't open the Mail Database...Trying the Cluster Failover Server"
        $MailDB = $Notes.GetDatabase("ClusterFailoverServer","$MailFile")
    }
    if ($MailDB.isopen) 
        { 
        $doc = $MailDB.CreateDocument()
        $doc.AppendItemValue("Form","Memo")

        $stream = $Notes.CreateStream()
        $Notes.ConvertMime = $false

        $body = $doc.CreateMIMEEntity()

        $header = $body.CreateHeader("Subject")
        $header.SetHeaderVal("$Subject")

        $header = $body.CreateHeader("To")
        $header.SetHeaderVal("$To")

        $header = $body.CreateHeader("From")
        $header.SetHeaderVal("$From")

        $header = $body.CreateHeader("Principal")
        $header.SetHeaderVal("$From")

        If ($BCC -ne "") {
            $header = $body.CreateHeader("BCC")
            $header.SetHeaderVal("$BCC")
        }
        #This is so the message shows up in Sent items properly.
        $Date = Get-Date
        $doc.AppendItemValue("PostedDate",$Date)

        $stream.WriteText($BodyContent) 

        #Notes ENC_IDENTITY_7BIT MIME encoding constant (1728 per lsconst.lss - verify for your Notes version)
        $ENC_IDENTITY_7BIT = 1728
        $body.SetContentFromText($stream,"text/HTML;charset=UTF-8",$ENC_IDENTITY_7BIT)

        if ($Attachment -ne "") { 
            #1454 = EMBED_ATTACHMENT
            $($doc.CreateRichTextItem("Attachment")).EmbedObject(1454, "", "$Attachment", "Attachment")
        } 

        $doc.Save($true,$False)
        $doc.Send($False) 
        $Notes.ConvertMime = $true
        } 
    }

    Export-ModuleMember -Function Send-IBMNotesMail

    I hope you find this a useful way to send emails from your scripts.  I’ll come back in a few days with some of the other PowerShell modules I am working on and how I integrate them with this function.

    PowerShell

    I guess it is confession time.  I am a recent convert to PowerShell.  After having used VBScript for years, I have recently discovered that I can do so much more with PowerShell using a lot less code.  I have a long way to go, especially in learning best practices for writing scripts and functions.

    I thought I would share with you a few scripts that I am using and how they have helped me.  I am assuming the use of PowerShell 3.0 here.  (Note: to load a function so you can call it from the PowerShell prompt, you want to “dot-source” it, for example “. .\YourScript.ps1”.)

    Get-ServiceAccountUsage

    Have you ever had service accounts that have been around for ages, yet no idea what servers they are running on or what they are actually doing?  I stumbled across Get-ServiceAccountUsage and am very impressed with it.  Using the following command, I can have it search our Servers OU and tell me every place this account is in use (whether as a scheduled task, a service, or even an IIS application pool identity):

    (Get-ADComputer -Filter * -SearchBase "OU=Servers,DC…").name | Get-ServiceAccountUsage -UserAccount 'domain\service.account'

    Set-AdminUser

    As user accounts have moved around through the years and been in various groups, I found that some were once members of protected admin groups (such as Account Operators or Print Operators).  When this happens, a special flag is set on the account and security inheritance is turned off.  Simply removing a user from those protected groups doesn’t automatically clear this adminCount flag.  Set-AdminUser lets you reset the flag and turn inheritance back on.  Note that if the account should still be protected, the SDProp process will refresh the adminCount flag at its next run (every 60 minutes by default), so accounts that need to be protected stay protected.

    Password Not Required field

    On occasion, I have found some accounts that are set to not require a password.  It isn’t a major issue, since the GUI won’t allow these users to have a blank password, but auditors definitely don’t like to see it.  Here is a one-liner to reset any user accounts that have the password-not-required flag (bit 32 of userAccountControl) set improperly:

    Get-ADUser -searchscope subtree -ldapfilter "(&(objectCategory=User)(userAccountControl:1.2.840.113556.1.4.803:=32))" | Set-ADAccountControl -PasswordNotRequired $false

    and a similar one to do the same for computer accounts:

    Get-ADComputer -searchscope subtree -ldapfilter "(&(objectCategory=Computer)(userAccountControl:1.2.840.113556.1.4.803:=32))" | Set-ADAccountControl -PasswordNotRequired $false
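The :1.2.840.113556.1.4.803: piece of those LDAP filters is the bitwise-AND matching rule: userAccountControl is a bit field, and the filter matches when the given bit is set (32 is PASSWD_NOTREQD; 2, used earlier in this post, is ACCOUNTDISABLE).  Here is a small Python sketch of the test the filter performs, using the documented UF_ flag values:

```python
# userAccountControl is a bit field; the LDAP matching rule
# 1.2.840.113556.1.4.803 in the filters above is a bitwise AND test.
UF_ACCOUNTDISABLE = 0x0002  # 2   - account is disabled
UF_PASSWD_NOTREQD = 0x0020  # 32  - password not required
UF_NORMAL_ACCOUNT = 0x0200  # 512 - typical enabled user account

def has_flag(user_account_control: int, flag: int) -> bool:
    """Mimics (userAccountControl:1.2.840.113556.1.4.803:=flag)."""
    return (user_account_control & flag) == flag

# A normal account with "password not required" set: 512 + 32 = 544.
uac = UF_NORMAL_ACCOUNT | UF_PASSWD_NOTREQD
print(has_flag(uac, UF_PASSWD_NOTREQD))  # True  - matched by the filter above
print(has_flag(uac, UF_ACCOUNTDISABLE))  # False - account is not disabled
```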

    Disable Computers Using CSV File Input

    On occasion, you might find it necessary to disable a list of computers.  In our environment, when we disable a computer, we do a few things:

    • Disable account
    • Set the description to note when we disabled this computer
    • Mark two fields (info and a custom schema attribute) with the reason we disabled the account and the date we disabled it.  (The date is used because another scheduled task runs and automatically deletes disabled objects after a certain period of time.)
    • Move the account to a Pending Delete OU.
    • Finally, we also record all of this info in a database for future reference, as well as some of our regular reports on what work is done in AD.
    # ==============================================================================================
    # 
    # Microsoft PowerShell Source File -- Created with SAPIEN Technologies PrimalScript 2011
    # 
    # NAME: DisableADComputerfromCSV.ps1
    # 
    # AUTHOR: Doug Neely , TSA
    # DATE  : 3/1/2013
    # 
    # COMMENT: 
    # 
    # ==============================================================================================
    Import-Module ActiveDirectory 
    $datetime = Get-Date -Format MM_dd_yy_HH_mm
    $csv = "R:\Scripts\ADPowershell\OldServers.csv"
    $TerminationDate = Get-Date -Format M/d/yyyy
    $TerminationDate = [string]$TerminationDate

    #Import CSV File
    $TerminateCSV = Import-Csv $csv 

    foreach ($Computer in $TerminateCSV) {
        $ADSAM = $Computer.name

        $ADAccount = Get-ADComputer -identity $ADSAM -properties *
        $ADLastLogon = $ADAccount.LastLogonTimestamp
        $ConvertedLastLogin = [DateTime]::FromFileTime( [Int64]::Parse($ADLastLogon) )
        $ADDescription = "Disabled on " + $TerminationDate + " by This.User"
        $Justification = "Inactive computer - Last logon: " + $ConvertedLastLogin
        $ADOS = $ADAccount.OperatingSystem
        $ADOSSP = $ADAccount.OperatingSystemServicePack
        $ADFullOS = $ADOS + " " + $ADOSSP
        $Requestor = "This.UserSA"
        $DBDateTime = Get-Date

        #Disable Account
        Get-ADComputer -identity $ADSAM -properties *| Disable-ADAccount
        #Set AD Fields
        Get-ADComputer -identity $ADSAM -properties *| 
            Set-ADComputer -Description $ADDescription
        Get-ADComputer -identity $ADSAM -properties *| 
            Set-ADObject -clear info,tsaDisabledDate
        Get-ADComputer -identity $ADSAM -properties *| 
            Set-ADObject -ADD @{info=$Justification;tsaDisabledDate=$TerminationDate} 
        #Move to Pending Delete
        Get-ADComputer -identity $ADSAM -properties *| 
            move-ADObject -TargetPath 'ou=Pending Delete,dc=yourdomain,dc=org'

        #SQL Database Record information
        #Datafields: ProcessType, ComputerName, OS, Reason, Requestor, DateTime
        $Server = "ServerName"
        $Database = "DBName"
        $UserID = "User"
        $UserPassword = "Love really long passwords for others to use."
        $Table = "dbo.DisableComputersReport"
        $Connection = New-Object System.Data.SQLClient.SQLConnection
        $Connection.ConnectionString = "server=$Server;Initial Catalog=$Database;User ID=$UserID;Password=$UserPassword;"
        $Connection.Open()
        $Command = New-Object System.Data.SQLClient.SQLCommand
        $Command.Connection = $Connection
        #Use SQL parameters instead of string concatenation so values like $Justification can't break the INSERT
        $Command.CommandText = "INSERT INTO $Table (ProcessType, ComputerName, OS, Reason, Requestor, DateTime) VALUES (@ProcessType, @ComputerName, @OS, @Reason, @Requestor, @DateTime)"
        [void]$Command.Parameters.AddWithValue("@ProcessType","ADCleanup")
        [void]$Command.Parameters.AddWithValue("@ComputerName",$ADSAM)
        [void]$Command.Parameters.AddWithValue("@OS",$ADFullOS)
        [void]$Command.Parameters.AddWithValue("@Reason",$Justification)
        [void]$Command.Parameters.AddWithValue("@Requestor",$Requestor)
        [void]$Command.Parameters.AddWithValue("@DateTime",$DBDateTime)
        $Command.ExecuteNonQuery()
        $Connection.Close()
    }

    These are a few PowerShell scripts that I have found useful recently.  I hope you find some of them useful as well.