Office365 and dirsync – the multiple accounts with the same UPN/mail/proxyaddresses value

For everyone working on an Office 365 email deployment with DirSync, you are probably familiar with DirSync errors and the hunt for duplicated proxy addresses within AD. For everyone who hasn't started one of these: several attribute values must be unique within the FOREST, and proxy addresses are among them. DirSync will find instances of duplicated proxy addresses and will error on those objects.

Running into this at work, I decided to see if someone had already done the heavy lifting of writing a script to find duplicated proxy address, mail, and UPN values. A quick Google didn't turn up anything that suited my needs and wants.

What this script does is pull all of the user, contact, and group objects within the forest, selecting the canonical name, mail, UPN, and proxy addresses values for each, and put them all into a single array. It then prompts for a string to search for. Once that is entered, a straight search through all of the collected objects is performed and any matches are displayed. The input loop then repeats so I don't have to recollect all the data each time.

# This code is deliberately inefficient on the Get-ADObject command so that the script can be adapted for other
# duplicate searches, such as for the mail user object property being non-unique. One advantage to this code is that it looks for
# all user and contact objects and gets their UPN, mail, and proxyaddresses values, rather than just those with homeMDB populated. I have seen
# accounts that have had their mailboxes rudely disassociated, leaving proxyaddresses values that are not searchable via EMC/EMS.
#
$domainlist= (get-adforest).domains
foreach ($d in $domainlist)
    {
        Write-host "Processing domain $d. Please be patient. This may take some time depending on the number of user objects."
        $userlist = get-adobject -LDAPFilter "(&(objectClass=User)(objectCategory=person))" -Server $d -properties canonicalname,proxyaddresses,mail,userprincipalname | select canonicalname,proxyaddresses,mail,userprincipalname
        $contactlist = get-adobject -LDAPFilter "objectClass=Contact" -Server $d -properties canonicalname,proxyaddresses,mail,userprincipalname | select canonicalname,proxyaddresses,mail,userprincipalname
        $grouplist = get-adobject -LDAPFilter "objectClass=group" -Server $d -properties canonicalname,proxyaddresses,mail,userprincipalname | select canonicalname,proxyaddresses,mail,userprincipalname
	    [array]$allobjs += $userlist
	    [array]$allobjs += $contactlist
	    [array]$allobjs += $grouplist
    }
$total = $allobjs.count
write-host " "
write-host "-------------------"
write-host "Total user, contact, and group objects collected : " $total
$count = 1
$MatchingObjs = $null
write-host " "
write-host "-------------------"
$address = read-Host "Enter search address. Hit ENTER or type exit to exit"
If (($address.Length -gt 0) -and ($address -ne "exit"))
    {
        Do
            {
                foreach ($ao in $allobjs)
                    {
                        Write-Progress -Activity "Scanning for $address" -PercentComplete ($count/$total * 100)
                        $MatchFound = $False
                        ForEach ($pa in $ao.ProxyAddresses) 
                            {
                                If ($pa -Match $address)
                                    {
                                        $MatchFound = $True
                                    }
                            }
                        #add matches to array 
                        If ($ao.mail -Match $address)
                            {
                                $MatchFound = $True
                            }
                        If ($ao.userprincipalname -Match $address)
                            {
                                $MatchFound = $True
                            }
                        If ($MatchFound) 
                            {
                                [array]$MatchingObjs += $ao
                            }
                        $count++
                    }
                write-host " "
                write-host "-------------------"
                Write-host "Matching objects:"
                write-host "-------------------"
                foreach ($mo in $MatchingObjs)
                    {
                        write-host $mo.canonicalname
                    }
                write-host "-------------------"
                write-host " "
                $count = 1
                $MatchingObjs = $null
                $address = read-Host "Enter search address. Hit ENTER or type exit to exit"
            }
        Until (($address.Length -eq 0) -or ($address -eq "exit"))
    }

Running a different Linux at home now

For years I’ve been using Ubuntu at home, but I finally decided to switch to something else.

It wasn’t one thing in particular that did it, more a collection of things. Ever since I upgraded to the 13.04 version the system threw an error at boot. After some searching I found this was a common issue having to do with, IIRC, the Gnome components conflicting with some other packages. I’d fix it, and then either the next version or some update would come along and re-introduce the problem. I also got tired of opening the Software Center and getting ads for programs when all I wanted to do was add something like GIMP. Even system updates were buggy, sometimes having to be started manually in order to get updates.

After doing a little research I decided to go with Linux Mint. So far I’ve been impressed. The install was smooth and the end result has been like a breath of fresh air. I haven’t re-installed everything yet, but what I have installed has not balked or errored out.

The only thing I found frustrating, as I did with Ubuntu, was getting printers installed. I don’t blame either distro for that. Instead it is the fault of the printer manufacturers, who want to hide how their hardware works. I really do wish the printer makers (I’m looking at you, Epson, HP, and Brother) would get their act together and start publishing either Linux drivers that are easy to install, or the APIs so that the community could do it.

Interlude II : Excel with PowerShell

In my last post I finished with this code:

$xl = New-Object -ComObject "Excel.Application"
$xl.visible = $true
$xlbook = $xl.workbooks.Add()
$xlbook.worksheets.Add() | out-null
$xlsheets = $xlbook.worksheets
$xlsheet1 = $xlsheets.item(1)
$xlsheet1.name = "Report"
$xlsheet1.Cells.Item(1,1) = "Header1"
$xlsheet1.Cells.Item(1,2) = "Header2"
$xlsheet1.Cells.Item(1,3) = "Header3"
$xlbook.SaveAs("REPORT.XLSX")
$xl.quit() | Out-Null

As I said, this can be made much better looking for a report. In addition, we can do things a bit smarter to make our job easier down the road.

First off, let’s break our code up into functions.

Function NewExcelObj
     {
          Set-Variable -Name xl -Value (New-Object -ComObject "Excel.Application") -Scope 1
          $xl.visible = $true
          Set-Variable -Name xlbook -Value ($xl.workbooks.Add()) -Scope 1
         $xlbook.worksheets.Add() | out-null
     }
Function AddSheets
     {
          Set-Variable -Name xlsheet -Value ($xlbook.worksheets) -Scope 1
          Set-Variable -Name xlsheet1 -Value ($xlsheet.item(1)) -Scope 1
          $xlsheet1.name = "Report"
     }
Function AddHeaders
     {
          $xlsheet1.Cells.Item(1,1) = "Header1"
          $xlsheet1.Cells.Item(1,2) = "Header2"
          $xlsheet1.Cells.Item(1,3) = "Header3"
     }
Function CloseExcel
     {
          $xlbook.SaveAs("REPORT.XLSX")
          $xl.quit() | Out-Null
     }
# Main code
NewExcelObj
AddSheets
AddHeaders
CloseExcel

What we’ve done is moved all of our code into functions. Note we had to change how certain variables are defined.

In PowerShell there are different scopes. Variables, objects, and constants reside within the scopes. Variables (note: I’m using this as shorthand to include objects and constants as well) defined in the main script are available to all levels of the script. A variable defined within a function is normally available to just that function’s scope. When the code returns from the function the variable defined within the function is lost.

Because of that, you’ll see I have redone the definitions of some of the items used previously. Instead of the normal "$Variable = value" methodology I have swapped in "Set-Variable -Name Variable -Value (value) -Scope 1". This format lets me use the Scope option to say that the variable is defined one scope level above the function’s scope – in this case the base script scope. With the variables created that way, I can make use of them in other functions.
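A minimal, standalone sketch of that scoping behaviour (the variable names here are mine, not from the report script):

```powershell
Function Set-Vars
     {
          # Normal assignment: lives only in the function's own scope.
          $localOnly = "inside"
          # -Scope 1 writes one level up - here, the calling script's scope.
          Set-Variable -Name Promoted -Value "outside" -Scope 1
     }
Set-Vars
"Promoted  = $Promoted"       # the promoted variable survives the function call
"LocalOnly = $localOnly"      # empty - the function-scoped variable was discarded
```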

This cleans up the code some, but what about the spreadsheet? It’s still as ugly as before.

Let’s do something about that:

Function AddHeaders ($AHhl)
     {
          $col = 1
          Foreach ($t in $AHhl)
               {
                    $xlsheet1.Cells.Item(1,$col) = $t
                    $xlsheet1.Cells.Item(1,$col).Font.Bold = $True
                    $xlsheet1.Cells.Item(1,$col).Interior.ColorIndex = 15
                    $col++
               }
     }
# Main code
NewExcelObj
AddSheets
[array]$headerlist = "Header1","Header2","Header3"
AddHeaders $headerlist
CloseExcel


We are doing a lot of things here at once. First, just before the call to AddHeaders we define an array that contains our header values. We then pass that variable to AddHeaders – notice the call is now "AddHeaders $headerlist" and our function header is "Function AddHeaders ($AHhl)", where $AHhl contains the passed array values of $headerlist. (Note that the function definition has to come before the main code that calls it – PowerShell reads a script top to bottom.) We now have a function-local variable called $col to keep track of which column we are in. We go into a standard Foreach loop where we step through each member of $AHhl, populate the header column, and increment $col ($col++). Between the populating and incrementing lines are two lines of formatting instructions for the cell we just touched: the first sets the font in the cell to bold, the second gives the cell a light grey fill colour.

That makes our headers stand out.

There’s still one more thing we should do to our sheet – autofit the columns. There is nothing more annoying to people looking at a spreadsheet than having to change the column width. We can accomplish this with just two lines of code added to our CloseExcel function.

Function CloseExcel
     {
         $UsedRange = $xlsheet1.UsedRange
         [void] $UsedRange.EntireColumn.Autofit()
         $xlbook.SaveAs("REPORT.XLSX")
         $xl.quit() | Out-Null
     }

We define a variable in the scope of the function called $UsedRange and assign the used range property of the sheet to it. After that we use the Autofit() method on the property EntireColumn forcing all of the used columns to be autofit to the data within the column. Finally we perform our previously defined quit statements, saving and closing the workbook.

Interlude I : Excel with PowerShell

One of the things that comes up frequently is being able to provide auditors with documentation. Often the result is something that can be done in Excel. With PowerShell this can be done as part of a script.

First we need to recognize that the machine generating the Excel document will need Excel installed. No Excel, no document using this method.

First we need to start by creating a new COM object instance of Excel:

$xl = New-Object -ComObject "Excel.Application"

So we can watch as the document is populated by the script, we add the following:

$xl.visible = $true


With this we just get a nice empty Excel window. Nothing at all going on. Next we need to add a workbook and then sheets to the workbook.

$xlbook = $xl.workbooks.Add()
$xlbook.worksheets.Add() | out-null


That’s looking better. Now we create an object to reference the sheets within the workbook more easily:

$xlsheets = $xlbook.worksheets
$xlsheet1 = $xlsheets.item(1)
$xlsheet1.name = "Report"

What we’ve done is reduce our typing so that we don’t have to type $xlbook.Worksheets.item(1).name each time. It also makes more sense to us slow-brained humans than a long string of properties run together – translation: lower risk of error.

Now we can add our column headers.

$xlsheet1.Cells.Item(1,1) = "Header1"
$xlsheet1.Cells.Item(1,2) = "Header2"
$xlsheet1.Cells.Item(1,3) = "Header3"


It should be obvious in the above that the value "1,1" refers to row 1, column 1; "1,2" to row 1, column 2; and so on. Knowing this, it should be easy to figure out how to place data into cells within the sheet.
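As a sketch of what that looks like for data rows (this assumes Excel is installed, and the Get-Service sample data is purely illustrative – substitute whatever your report collects):

```powershell
# Build the workbook as in the post, then write records row by row.
$xl = New-Object -ComObject "Excel.Application"
$xl.visible = $true
$xlbook = $xl.workbooks.Add()
$xlsheet1 = $xlbook.worksheets.item(1)
$xlsheet1.Cells.Item(1,1) = "Name"
$xlsheet1.Cells.Item(1,2) = "Status"
$xlsheet1.Cells.Item(1,3) = "DisplayName"
$row = 2                                   # data starts on row 2, under the headers
foreach ($svc in (Get-Service | Select-Object -First 5)) {
    $xlsheet1.Cells.Item($row,1) = $svc.Name
    $xlsheet1.Cells.Item($row,2) = $svc.Status.ToString()
    $xlsheet1.Cells.Item($row,3) = $svc.DisplayName
    $row++                                 # next record goes on the next row
}
```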

All of this is kind of useful, but what would be the point of scripting everything up to this point if you don’t go ahead and save the workbook? That takes just two lines of code:

$xlbook.SaveAs("REPORT.XLSX")
$xl.quit() | Out-Null

The code talked about in this post:

$xl = New-Object -ComObject "Excel.Application"
$xl.visible = $true
$xlbook = $xl.workbooks.Add()
$xlbook.worksheets.Add() | out-null
$xlsheets = $xlbook.worksheets
$xlsheet1 = $xlsheets.item(1)
$xlsheet1.name = "Report"
$xlsheet1.Cells.Item(1,1) = "Header1"
$xlsheet1.Cells.Item(1,2) = "Header2"
$xlsheet1.Cells.Item(1,3) = "Header3"
$xlbook.SaveAs("REPORT.XLSX")
$xl.quit() | Out-Null

Next post I’ll talk about how to make this better looking.

Getting the members of a local group via PowerShell, Part II

In my last post I showed how after several steps we ran

(Get-WmiObject win32_groupuser -Filter $query).PartComponent

and got

\\PC\root\cimv2:Win32_UserAccount.Domain="PC",Name="Administrator"
\\PC\root\cimv2:Win32_Group.Domain="Domain",Name="Desktop Admins"
\\PC\root\cimv2:Win32_Group.Domain="Domain",Name="Domain Admins"
\\PC\root\cimv2:Win32_UserAccount.Domain="Domain",Name="ME"

Now we’re going to clean this up into a format that is more readable. First let’s take that last command and assign it to a variable:

$list = (Get-WmiObject win32_groupuser -Filter $query).PartComponent

This variable, $list, is actually an array, so we can create a foreach loop and process each line. The first thing we’re going to do is isolate the domain portion.

foreach ($l in $list)
   {
       $domain = $l.Substring($l.IndexOf("`"")+1)
       $domain = $domain.Substring(0,$domain.IndexOf("`""))
       $domain | Out-Default
   }

So what we have done is pull the domain out. Looking at the first line where $domain is set: we find the first double-quote character and remove everything up to and including it, by first using the IndexOf method to find the position of the " and then the Substring method to crop off the data to its left. By adding 1 to the IndexOf value we made sure to include the " itself in the removed text. Note I had to use the sequence `" (that is the backtick – the key to the left of the 1 key on a US keyboard – NOT a single quote) to be able to include the " character in the search string. The next line finds the next " and removes everything from it to the right. So our output from the loop is:

PC
Domain
Domain
Domain

Now let’s add some code to get the user/group name.

foreach ($l in $list)
   {
      $domain = $l.Substring($l.IndexOf("`"")+1)
      $UG = $domain
      $domain = $domain.Substring(0,$domain.IndexOf("`""))
      $UG = $UG.Substring($UG.IndexOf("`"")+1)
      $UG = $UG.Substring($UG.IndexOf("`"")+1)
      $UG = $UG.Substring(0,($UG.Length-1))
      $domain + "\" + $UG | Out-Default
   }

We’ve added a new variable, $UG. Our first step is to capture the portion of the string after the first " is stripped off – we need the value at the end. The next time we set $UG we are stripping everything up to and including the second " in the original string. Then we repeat, stripping up to the third ". Finally we strip off the last ". Adding a "\" between the $domain and $UG values in our output gives us this:

PC\Administrator
Domain\Desktop Admins
Domain\Domain Admins
Domain\ME
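For comparison, here is an alternative sketch that does the same Domain\Name extraction with a regular expression instead of the IndexOf/Substring chain (the sample strings are copied from the output above):

```powershell
# The same extraction done with a single -match per line,
# capturing the quoted Domain and Name values into named groups.
$list = @(
    '\\PC\root\cimv2:Win32_UserAccount.Domain="PC",Name="Administrator"',
    '\\PC\root\cimv2:Win32_Group.Domain="Domain",Name="Desktop Admins"'
)
foreach ($l in $list) {
    if ($l -match 'Domain="(?<d>[^"]+)",Name="(?<n>[^"]+)"') {
        "{0}\{1}" -f $Matches['d'], $Matches['n']
    }
}
# Output:
# PC\Administrator
# Domain\Desktop Admins
```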

Next time I’ll go over building a report for this information so it can be handed off to auditors.

Getting the members of a local group via PowerShell

I’m always looking for ways to do things from the command line, particularly when I can’t get where I want to go quickly otherwise. Finding the members of a local group is one of those things where I like being able to look things up quickly.

One area that comes up frequently, particularly in an audit, is who the local admins are. Most people head to the Computer Management console. I’ve got a better way, particularly if you have to deal with more than one machine.

With PowerShell I can make use of the WMI classes to perform the query. I start with the knowledge that I want to use the Win32_Group class to get information about groups. (See this for a list of the management interfaces.)

I could start with this:

Get-WmiObject Win32_Group

That would be a bad thing to do on a domain, though, particularly a large one. So I also add a filter:

Get-WmiObject Win32_Group -Filter "LocalAccount=True"

That gets me just the local machine groups. The one I am interested in is the Administrators group. I have two ways of getting it, both involving expanding my filter. The first is to expand the filter to "LocalAccount=True AND Name='Administrators'". While this works, it fails to return anything if the group has been renamed. Instead I’ll expand the filter to look the group up by SID.

get-wmiobject win32_group -Filter "LocalAccount=True AND SID='S-1-5-32-544'"

Being a built-in group, the SID is predefined and will not change, so using the SID is guaranteed to get me the right group. At this point I have this output:

Caption              Domain    Name              SID
-------              ------    ----              ---
PC\Administrators    PC        Administrators    S-1-5-32-544

Not exactly what I was looking for. What I need to do now is pull some information from that into a query statement I can use with another WMI class. We’ll take the above command and assign it to a variable, $admingroup.

$admingroup = get-wmiobject win32_group -Filter "LocalAccount=True AND SID='S-1-5-32-544'"

To get the right results I am going to use two properties of the value stored in $admingroup, specifically Domain and Name. Here’s the query:

$query = "GroupComponent = `"Win32_Group.Domain='" + $admingroup.Domain + "',NAME='" + $admingroup.Name + "'`""

GroupComponent is a property of the Win32_GroupUser class that references the group containing the information I am looking for. I’m setting the query filter to look for the domain retrieved from the Win32_Group class and, now that I’ve gotten the real name of the group, to filter on the group’s name as well.

My next bit is to pull the data of the membership using the query I’ve just built:

Get-WmiObject win32_groupuser -Filter $query

Now if I were to run this I would get a BUNCH of information per object in the group. I want to clean it up:

(Get-WmiObject win32_groupuser -Filter $query).PartComponent

If you tried the previous command you would have seen, buried with the other information, the PartComponent property. The above command isolates out just that information:

\\PC\root\cimv2:Win32_UserAccount.Domain="PC",Name="Administrator"
\\PC\root\cimv2:Win32_Group.Domain="Domain",Name="Desktop Admins"
\\PC\root\cimv2:Win32_Group.Domain="Domain",Name="Domain Admins"
\\PC\root\cimv2:Win32_UserAccount.Domain="Domain",Name="ME"

It’s ugly but it is correct.

In my next post I’ll discuss how to clean it up.

Content type and TNEF handling in Exchange 2010 for remote domains

At work it came up that we needed to have our emails downgraded to plain text if they were being sent out to the internet. The reason for this was so that our filtering software could remove PII without having to worry about the HTML code behind the human-readable version getting false flagged. As the filter was on the edge appliance internal emails could be left as HTML without fear of being false-flagged.

This sounds pretty simple to do. Unfortunately, it turned into a trip down the rabbit hole.

Under Exchange 2010 there are policies for remote domains, with the Default policy naturally handling anything not already defined. I proceeded to do what one would naturally expect would accomplish the desired goal, issuing the following commands:

Set-RemoteDomain Default -ContentType MimeText
Set-RemoteDomain Default -TNEFEnabled $False

Everything appeared to work at first. Then I found Outlook clients weren’t sending their messages as plain text. This resulted in me spending several hours trying to puzzle it out, and finally a support call to Microsoft.

The call managed to explain why I was seeing what I was seeing.

There are three points of action : Outlook client, Contact/Mail user, and Remote Domain settings on the server.

For TNEF settings the order of application is exactly that – Outlook client, Contact/Mail user, and Remote Domain settings on the server. The server setting for the remote domain overrides anything set for a mail user/contact entry which overrides anything from Outlook.
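As a sketch of those three layers on the server side (the contact name here is hypothetical; the cmdlets and parameters are from the Exchange 2010 toolset):

```powershell
# Layer 3 (highest precedence for TNEF): the remote domain policy,
# as set in this post.
Set-RemoteDomain Default -TNEFEnabled $false

# Layer 2: a per-recipient override on a mail contact. "External Partner"
# is a hypothetical contact name - substitute a real one.
Set-MailContact -Identity "External Partner" -UseMapiRichTextFormat Never

# Layer 1, the Outlook client's own format setting, sits below both of these
# for TNEF - but, as described next, wins for ContentType.
```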

However, ContentType does not work that way. Instead it is the exact opposite, with the Outlook client having the last say. Most people are now thinking "WHAT?! Why would the client be allowed to override the server’s setting?!" I was given no clear answer on this from Microsoft, instead receiving the patented "By design" response.

What I was able to work out with Microsoft was the following:
1] For OWA clients I had to disable the Premium experience. While this did not prevent reading in HTML, it did force ALL new, reply, and forward emails to be plain text.
2] For Outlook clients the implementation was a little more complicated. I had to create a GPO with the following settings:
User Configuration|Policies|Administrative Templates|Classic Administrative Templates|Microsoft Outlook 2010|Disable Items in User Interface|Custom
    Disable command bar buttons and menu items : 5564,5565
User Configuration|Policies|Administrative Templates|Classic Administrative Templates|Microsoft Outlook 2010|Mail Format|Internet Formatting
    Outlook Rich Text Options : Enabled
    Use this format : Convert to plain text
User Configuration|Policies|Administrative Templates|Classic Administrative Templates|Microsoft Outlook 2010|Mail Format|Internet Formatting|Message Format
    Set message format : Enabled
    Use the following format for e-mail messages : Plain Text
User Configuration|Policies|Administrative Templates|Classic Administrative Templates|Microsoft Outlook 2010|Outlook Options|Preferences|E-mail Options
    Read e-mail as plain text : Enabled
Essentially this forced Outlook to read messages in plain text and to compose new messages in plain text. It also disabled the buttons under the Format tab for RTF and HTML. A user could still read a message in HTML by clicking the info bar and choosing Display in HTML, and from there reply or forward in HTML, but that is not something most people are going to do for every message every time it is opened.

Hopefully Microsoft will one day correct the precedence of ContentType conversion so that it works the way administrators would expect and want, namely server overrides client.
