
Shortest Path from CSV to Excel

So, there are lots and lots of options out there these days for converting from CSV to Excel. You can find dozens of helpful scripts and functions with little more than a quick Google search. So you’re probably already thinking: with so many pre-built options, why on earth am I writing this blog post? Well, I’m glad you asked (even if you didn’t, just go with me on this).

What I noticed about most of these available options is a complete lack of consistency in how they used the ‘excel.application’ COM object. Some methods involved creating new worksheets and copying content in, while others used cell-based approaches, among many others. Another thing they all had in common was how very LONG the functions were, even when they had minimal error handling. This was somewhat confusing to me, since I could open Excel and do a CSV import fairly quickly without needing to do copies or create worksheets, so there had to be a faster way. Again, I’m sure you’re sitting there thinking that you could just grab a script from online and carry on, and you could…but what happens if you don’t have internet access, or you are in a hurry and don’t want to copy a script for a simple execution? What? These thoughts didn’t occur to you, or you really don’t care? Well, they occurred to me and, because I’m a compulsive sharer, I’m now writing this article.

So, let’s break it down to the bare basics of what we are trying to accomplish, namely:

  1. Initiate Excel
  2. Import our CSV file, complete with cells and columns
  3. Save the file out to the latest Excel format
  4. Close out Excel
  5. Move on with our lives…oh, and we want to do all this without having to see Excel, deal with any prompts, and without having to resort to a script

Now, of course, this all assumes nothing with regards to pretty formatting, or auto-fitting, or anything fancy…just dump to Excel and move on.

First, of course, we have to bind to Excel like so:

$excel = New-Object -com excel.application

Easy enough. Now, the next part is to suppress any Excel alerts that might bother us with things we don’t care about, which isn’t strictly necessary when interactive, but it is if you are batching, so we do that like so:

$excel.DisplayAlerts = $false

Now for the usually tricky part, as this is where most scripts start doing all kinds of crazy things…open the csv file in Excel and save it out as a true Excel file, which we can do in just TWO lines, like so:
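A sketch of what those two lines look like (the exact call from the original post is not preserved here; the parameter values are reconstructed from the documented Workbooks.OpenText signature, and the file paths are placeholders):

```powershell
# Open the CSV as delimited text: filename, origin, start row, data type (1 = xlDelimited),
# text qualifier (1 = double quote), then positional flags through the ninth (comma) slot
$excel.Workbooks.OpenText("C:\temp\input.csv",$null,1,1,1,$false,$false,$false,$true)

# Save as a true Excel workbook (51 = xlWorkbookDefault)
$excel.ActiveWorkbook.SaveAs("C:\temp\output.xlsx",51)
```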



Finally, we close out the file and exit Excel like so:
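In sketch form, assuming the workbook opened above is still the active one:

```powershell
# Close the workbook and exit the Excel application
$excel.Workbooks.Close()
$excel.Quit()
```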



So, now we’ve taken what generally seems to take 30+ lines in most scripts I’ve seen and cut it down to just 6. Granted, the two middle lines may seem a little daunting to remember, but they do have a nice little pattern to them. Now, at this point you’re probably wondering why there are so many little extra things…I mean, can’t we just open the file, save it, and quit? Well, unfortunately, the answer is not so much. The difficulty here is in making sure we get true Excel formatting and end up with a true Excel file type. If we just open the file without any of the additional options, we end up without any columns, just as if we had simply double-clicked on the CSV file. In theory, we don’t actually need ALL of the items, but we still have to provide them because of the way COM objects work. Unlike a true .NET object, COM objects don’t allow you to name your parameters as you pass them, so each one must be provided in positional order, which in turn means we have to supply more parameters than we would really like or need. For instance, in our case we theoretically only need to specify the filename, the data type, the text qualifier, and an indication that the file is comma separated, but these are the first, fourth, fifth, and ninth items, so we have to specify everything in between as well.

In case you’re wondering what the other items are, you can review all of the available object properties and value options here –

This all said, I do feel it’s important to point out something at this point. The above example works with the default output produced by the Export-CSV cmdlet with UTF8 encoding and the ‘-NoTypeInformation’ switch specified. If you are exporting type information but want to skip those lines, you will need to change the third parameter to the row number from which you wish to start, to avoid confusing Excel. In addition, if you are using some delimiter other than a comma, you will want to adjust the true and false entries accordingly based on the link provided above.
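For example, if the first line of the CSV holds type information you want Excel to skip, bumping the third (StartRow) parameter from 1 to 2 starts the import at the second line (the path here is a placeholder):

```powershell
# Start the import at row 2 to skip a leading type-information line
$excel.Workbooks.OpenText("C:\temp\input.csv",$null,2,1,1,$false,$false,$false,$true)
```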

Now, on to the saving portion. You’ve probably noticed that this piece is MUCH shorter than the opening line. Again, it’d be lovely if we could just specify the filename and call it a day, but this results in an empty file (not even an empty worksheet, just a workbook with no worksheets in it) which is decidedly NOT what we are looking for. The reason is that the second parameter is what actually specifies the file format that, for whatever reason, Excel can’t figure out for itself based on the file extension, but I’ll leave that minor rant for another day. When you go and do a file-saveas manually, you obviously have lots of choices within the drop down to select from, and it’s important that we properly match up our file extension with the intended file type. In the example above, the value of ’51’ is actually a reference to ‘xlWorkbookDefault’ which will match the default save option specified in your installation of Office. In my case, I’m using Office 2013 with the default save option set to the new file format with the ‘.xlsx’ file extension, but if you were using an older version of Excel, or one that is set to save in the old ‘.xls’ format by default instead, you would want to make sure your file extension for the destination file matched. Just on the off chance you might want to use a different file format, you can see a full list of the available options and their corresponding numbers from here –
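As a quick illustration, saving in the legacy format instead would mean matching the format number to the extension; 56 is the documented value for xlExcel8, the old binary format:

```powershell
# 56 = xlExcel8, the legacy .xls binary format; the extension must match the format number
$excel.ActiveWorkbook.SaveAs("C:\temp\output.xls",56)
```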

There are, of course, lots of other parameters that you can specify during saving, such as setting passwords or access modes, but our goal here was to find the least number of characters to produce the desired result rather than to explore all the options. If you’d like to check out all the parameters because you have different needs, or if you just have a burning desire to know all the things (like I do), then you can review the full list of parameters here –

So, now we’ve explored all the bits and pieces, but the reality is that this was mostly just an exercise because, at the end of the day, we all know you’ll probably not commit this to memory and you are more likely to go for the script angle. Of course, you could take all of the above and readily add in all the possible options and error handling yourself (and I thoroughly encourage you to do so), but because I’m a nice guy I think I’ll give you a quick and dirty version you can use to start from. To that end, and in keeping with the premise of this post, you’ll find a short scripted version without options or error handling from which you can base a more comprehensive script.

Function Convert-CSVtoExcel {
    # Parameter names reconstructed; only $srcfile appears in the original fragment,
    # $destfile is assumed as the save target
    Param(
        [Parameter(Mandatory=$true)][string]$srcfile,
        [Parameter(Mandatory=$true)][string]$destfile
    )

    $excel = New-Object -com excel.application

    $excel.DisplayAlerts = $false

    $file = (Resolve-Path $srcfile).Path

    if(!($file)){throw "No valid source file found!!"}

    # Import the CSV as delimited text (comma separated, double-quote qualified)
    $excel.Workbooks.OpenText($file,$null,1,1,1,$false,$false,$false,$true)

    # Save in the default workbook format (51 = xlWorkbookDefault)
    $excel.ActiveWorkbook.SaveAs($destfile,51)

    # Close the workbook and exit Excel
    $excel.Workbooks.Close()
    $excel.Quit()
}

Well folks, that’s about all I’ve got time for at the moment. I hope it brought you some new knowledge and, until next time, always remember that fear is the mind-killer and PowerShell is life!!

So I was hunting around for something PowerShell oriented just the other day, and one of the search results that came back was from a post by an absolutely awesome PowerShell MVP by the name of Brandon Shell. In this instance, the linked post wasn’t what I was looking for, but its title ‘Dealing with Parameters in PowerShell’ looked potentially interesting and, since I am always learning ‘new’ things from Brandon’s blog, I earmarked it for later review. I just got around to reviewing the post today and, while I did find Brandon’s post as helpful as always, it was something he said in response to a user’s comment that caused me to write this post. I would have dropped it into the comments, but the last comment was from June of 2010 and further commenting appears to be blocked due to the age of the post, so here it is on my little blog, and I hope someone may find it helpful.

The comment in particular was from someone who used the name ‘Harsha’, and he was asking if it was possible to use ‘throw’ to render a particular parameter as required, but to also call the usage function. Brandon’s response, based on the example provided, was essentially accurate: not so far as he was aware. That got me thinking, though, and I wondered if there might be a way after all. The example in the comments was as follows:

Param($a=$(throw "param 'a' is required"))

Now, if my function were purely that, then Brandon is right in that there isn’t really a way to accomplish what the commenter wanted. That said, running a quick ‘Get-Help about_Throw’ reveals that the ‘throw’ statement can throw an object in addition to a simple string (at least in PoSh 2.0; I’m not sure about PoSh 1.0). Armed with this knowledge and, of course, since we all know that objects can be created from script blocks, this should theoretically be possible. The problem, of course, lies in where the throw statement is being called. Since the above example uses a parameter, which sits at the beginning of our function, we can’t call a sub-function because it wouldn’t have been read in yet and wouldn’t be available to the throw. That means the usage function would either need to already be established in the shell session, or we would at least need a pre-existing variable with our usage stored as a string.

So how can we handle this in an at least somewhat elegant manner that doesn’t involve just tons of additional external effort? Well, at least one way that I can think of, is to use a simple ‘if’ statement instead of checking in the parameter, which I will of course outline for you here (See, aren’t I a giver?).

Instead of calling the throw in the original parameter line, as shown in the original example, we instead move the check for the value further down the line, placing it after our ‘usage’ string or function definition, as shown in the example below.

Function Test-Throw {
    Param($a)
    $usage = @"
This is my usage
"@
    if(!($a)){
        $UseError = @"
You must specify a value for 'a'
$usage
"@
        throw $UseError
    }
    # Other function stuff here
}
Now, if you aren’t familiar with what I did in the ‘if’ statement, don’t worry, I’ll explain. The whole purpose of an ‘if’ statement is typically to determine whether the expression contained in the parentheses evaluates as ‘true’ and, if it does, the content in the curly braces is then executed. By placing the exclamation point in front and embedding my expression inside another set of parentheses, I am instead checking to see if the value returned by my statement is ‘false’ and then, if so, executing the additional steps in the curly braces. Alternatively, I could have stuck with the ‘true’ evaluation by using ‘if($a -eq $null){blah}’, but that involved more typing and, let’s face it, we automate because we are essentially too lazy to want to do it manually (at least, that’s my excuse anyway lol), so I opted for the shorter syntax; either method should produce the same results.

The other technique I used that you may have noted was to create a variable with those @ symbols both before and after my double quotes. That is what’s known as a ‘here-string’, and it allows me to apply a degree of formatting to my string without having to do a bunch of ‘Write-Host’ statements if all I’m looking for is some spacing and indenting. In addition, because I am using double instead of single quotes, any variables (including those containing script blocks) are processed first, and the string output is contained in my final here-string variable on output. Be careful with script blocks that return objects, though, as this could produce a value like ‘[System.Diagnostics.Process]’ (or whatever other type of object you end up with) instead of the desired string output. Using an object is fine, just spit it out to a string first or, alternatively, you can check the value of the object by calling the ‘TargetObject’ property of the $error variable (ex. $error[0].TargetObject). If you want to learn more about ‘here-strings’ you can check out this Microsoft TechNet article on the subject.
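As a quick illustration of the difference, single-quoted here-strings leave variables alone while double-quoted ones expand them:

```powershell
$name = "World"

# Double-quoted here-string: variables are expanded
$expanded = @"
Hello, $name
"@

# Single-quoted here-string: contents are taken literally
$literal = @'
Hello, $name
'@

# $expanded now contains "Hello, World"; $literal contains "Hello, $name"
```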

Now, of course, the above is a very ‘PoSh 1.0’ way of getting the job done, though it is effective. Yes, I know I said I didn’t know if ‘throw’ even supports throwing objects in PoSh 1.0, and I don’t have access to a system with PoSh 1.0 at the moment to check, but what I am actually referring to is the approach rather than the function of it. PoSh 2.0 gives us a very important capability not previously available however, which is the ability to include comment based help directly in the function, thus negating the need for a usage function or separate text file (though you can also use an XML formatted help file, but I won’t cover that topic here. Just run ‘get-help about_Comment_Based_Help’ if you want more info on that).

So what would such a thing look like? Don’t worry, I’m here for YOU (see, a GIVER I tell you!) and I have included an example below that should get you started. For simplicity, I have gone ahead and stuck with the same ‘$a’ example from before.

Function Test-Throw {
<#
.SYNOPSIS
Tests throwing with help.
.DESCRIPTION
Looks for value for 'a' and throws help error without it.
.PARAMETER a
This is the test parameter and is required.
.OUTPUTS
The same string you put in.
.EXAMPLE
C:\PS> Test-Throw -a "Some String"
Some String
#>
    Param(
           $a=$(throw $("You must specify a value for 'a'!" + `
              $((Get-Help Test-Throw).syntax | Out-String) + `
              "For more, type 'Get-Help Test-Throw -Full'")))
    Write-Output $a
}

What I have done with the value for ‘throw’ in the example above is probably more complicated than it strictly needs to be, but it produces significantly less junk on the screen than just running ‘Get-Help Test-Throw -Full | Out-String’, even if I left off the ‘-Full’. All I really need in this instance is the basic syntax, so that’s what I filtered it down to. Now, if I were to paste the example above into my console and run the function without arguments, this is what I would get:

C:\PS> Test-Throw
You must specify a value for 'a'!
test-throw [[-a] <String>] [<CommonParameters>]
For more, type 'Get-Help Test-Throw -Full'
At line:22 char:22
+            $a=$(throw <<<<  $("You must specify a value for 'a'!" + `
    + CategoryInfo          : OperationStopped: (You must -Full':String) [], RuntimeException
    + FullyQualifiedErrorId : You must specify a value for 'a'!
test-throw [[-a] <String>] [<CommonParameters>]
For more, type 'Get-Help Test-Throw -Full'

Of course, the above is strictly aimed at providing an answer to the original comment. In the end, the truly preferred approach, from a best practices standpoint at least, would be to leverage the new ‘advanced parameters’. Again, I won’t go through all of that here as you can just type ‘Get-Help about_Functions_Advanced_Parameters’ and read all about them yourself of course. Using the above example however, we would essentially configure the parameter to be mandatory and possibly set some quick help text like so:

Function Test-Mandatory{
    Param(
        [Parameter(Mandatory=$true,
                HelpMessage="Ex. Test-Mandatory -a 'SomeString'")]
        [string]$a
    )
    Write-Output $a
}

C:\PS> Test-Mandatory

cmdlet Test-Mandatory at command pipeline position 1
Supply values for the following parameters:
(Type !? for Help.)
a: !?
Ex. Test-Mandatory -a 'SomeString'

I personally much prefer this approach because the person executing it will be prompted continually until they provide a value for ‘a’, rather than just getting a terminating error and having to execute the function again. I strongly suggest checking out these advanced capabilities, as there is MUCH more you can do than just making parameters mandatory, such as binding to a particular position, validating correct values, or even configuring pipeline support, just to name a few.
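A quick sketch of a few of those extras together (the function name here is made up for illustration): position binding, value validation, and pipeline support all hang off the same parameter declaration.

```powershell
Function Test-Advanced {
    Param(
        [Parameter(Mandatory=$true,
                   Position=0,
                   ValueFromPipeline=$true,
                   HelpMessage="Ex. Test-Advanced -a 'SomeString'")]
        [ValidateNotNullOrEmpty()]   # reject empty or null values up front
        [string]$a
    )
    # Process block runs once per pipeline input object
    Process { Write-Output $a }
}
```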

Whew! That was a long post for such a simple thing. No posts from me in forever and then two really long ones in a row. When it rains, it pours right? Hopefully it’s at least helpful.

Until next time,


File Aging–Redux

I know I keep apologizing for not blogging and then just continue not blogging, but in my defense I have been kept pretty busy of late. I’ve done quite a bit of PowerShelling, but there has just been no time left in my day to actually post anything about it. I know, boo wahhhh right? Well I’m posting something now, so hopefully it will be helpful to some of you.

Recently I needed to scan a file cluster to determine roughly how much space might be recovered when we implemented Symantec’s Enterprise Vault. They have tools to tell you such things with regards to email and SharePoint, but apparently they don’t have much in the files department just yet (or if they do I couldn’t find it). I could have just jumped straight to PowerShell without even checking, but I really hate reinventing the wheel when I don’t have to.

Initially I started off with just some basic file information gathering using the Get-ChildItem cmdlet, and this technically could have gotten me there, but I felt sure there had to be something out there that was possibly better. So I loaded up my trusty shell and proceeded to run a search on PoshCode…nothing…hrmm…ok, search Bing from PowerShell…aha!! A hit…from 2006. As it turns out, good old Sapien Technologies posted a sample script (viewable here) back in the day that gets some file aging info, but didn’t get around to updating for PowerShell 2.0, though they did update it to output objects instead of strings (viewable here) about a day later.

Now, I don’t even remotely claim to be anywhere NEAR Mr. Jeffrey Hicks level, but I thought I might be able to improve on his original efforts at this point and, since I didn’t find better out there during my search, I thought I might share the results here and try to make a learning opportunity out of it. Hopefully you will find this helpful and, as always, please feel free to hit me up with improvements as I know I still have a LONG way to go before I can say I have ‘mastered’ PowerShell.

So, the first thing I decided to change was the setup of the initial parameters and initial error checking. Rather than expecting the user to specify a path, I made it default to the current location with the option to override and I changed the error checking to verify the path exists rather than merely that the path is a string like so:

Param([string]$dir=".",[switch]$Summary)
if(!(Test-Path -Path $dir)){
    Throw "Directory does not exist or access is denied"
}

Next, I made a minor change to the ‘where’ statement to use an existing property instead of executing a method for each object. Both methods technically produce the same results, however using an existing object property to filter on is considered to be significantly faster and certainly results in less system overhead in my personal experience.

#Old way
$files = Get-ChildItem -path $dir -recurse | `
Where {($_.GetType()).name -eq "FileInfo"}

#New way
$files = Get-ChildItem -path $dir -recurse | `
Where {$_.PSIsContainer -eq $false}

The next bit is where the real ‘optimizations’ kick in. In the original object version of the script, there were a bunch of lines that pre-created place holder variables which were later used during the foreach loop. In my version, I dumped all but two of these; $count, and a new one $out. The former I now use as part of the Write-Progress cmdlet to help determine the percent complete in addition to showing current file and directory, though it doesn’t look like this variable actually got used in the original script as it didn’t show percent complete. The latter variable is a placeholder into which I am storing my custom objects prior to final output which allows me a much greater amount of control when I do get back my results.

Next, within the foreach loop, I make another hefty chunk of changes for both efficiency and length. In the original script, a switch statement was used to increment each of the placeholder values, as well as another variable holding the length of the individual files found. While I did keep the switch statement, I dumped a lot of the other processing that was being done as part of the loop. I also consolidated pretty much everything done after the foreach loop and instead created an object for every file processed with four properties (Age, MB, FileName, and DirPath), with the output being incrementally added to my ‘$out’ placeholder created previously. The updated elements are now as follows:

foreach ($file in $files) {
    $age = ($now.Subtract(($file.LastWriteTime))).Days
    $count++
    Write-Progress -Activity "File Aging Report" `
        -Status $file.DirectoryName -CurrentOperation $file.Name `
        -PercentComplete ($count/$files.Count*100)
    switch ($age) {
        {$age -ge 730} {$ac = "2Yrs";break}
        {$age -ge 365} {$ac = "1Yrs";break}
        {$age -ge 90} {$ac = "3Mo";break}
        {$age -ge 30} {$ac = "1Mo";break}
        {$age -ge 7} {$ac = "1wk";break}
        {$age -lt 7} {$ac = "current";break}
    }
    $obj = New-Object PSObject -Property @{
        Age      = $ac
        MB       = "{0:N2}" -f ($file.Length/1MB)
        FileName = $file.Name
        DirPath  = $file.DirectoryName
    }
    $out += $obj
}

Of course, one might quickly note that my final output would be far different than the original being as the original was not only color coded, but broken down by type and providing counts of each grouping. The new object output can give us much the same thing however, and some of it is even accounted for in the final line, which deals with the ‘Summary’ switch parameter that I added. By first sorting the object, then grouping it, and then finally using format-table I get similar results to what was achieved before with the exception of the color coding, which is easy enough to add back in.

If($Summary){$out | sort Age | group Age | ft name,count}Else{Write-Output $out}

Where my changes bring value, in my opinion, is that there is more I can do, such as easy output to Out-GridView or Export-CSV, if I output a raw object that I then filter or manipulate from there, rather than the very targeted format it was in before. For example, what if I needed to get a list of all files older than a year? The original script contains some of the pieces to make this happen, but it wasn’t designed with that in mind, so you would have to write another version for that. What probably would have been more efficient still would have been to keep the original ‘FileInfo’ object and use the ‘Add-Member’ cmdlet to add ‘Age’ as a new NoteProperty with the appropriate value, but that wouldn’t have been nearly as interesting.
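That alternative might look something like the following sketch: keep the FileInfo objects and just bolt the age on, then filter on it.

```powershell
# Keep the original FileInfo objects and attach the age in days as a NoteProperty,
# then filter for files older than a year
$now = Get-Date
Get-ChildItem -Path $dir -Recurse | Where-Object {$_.PSIsContainer -eq $false} |
    ForEach-Object {
        $_ | Add-Member -MemberType NoteProperty -Name Age `
            -Value ($now.Subtract($_.LastWriteTime)).Days -PassThru
    } | Where-Object { $_.Age -gt 365 }
```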

For convenience, the full script is shown below so you don’t have to piece it together from the above.

# ==============================================================================================
# Microsoft PowerShell Source File -- Created with SAPIEN Technologies PrimalScript 2009
# NAME: Get-FileAging
# AUTHOR: Chris Whitfield, Paranet Solutions
# DATE  : 3/25/2011
# ==============================================================================================
Param([string]$dir=".",[switch]$Summary)
#check for path as well as path existence
if(!(Test-Path -Path $dir)){
    Throw "Directory does not exist or access is denied"
}

$files = Get-ChildItem -Path $dir -Recurse | Where-Object {$_.PSIsContainer -eq $false}
$now = Get-Date
$count = 0
$out = @()

#enumerate files and get information
foreach ($file in $files) {
    $age = ($now.Subtract(($file.LastWriteTime))).Days
    $count++
    Write-Progress -Activity "File Aging Report" `
        -Status $file.DirectoryName -CurrentOperation $file.Name `
        -PercentComplete ($count/$files.Count*100)
    switch ($age) {
        {$age -ge 730} {$ac = "2Yrs";break}
        {$age -ge 365} {$ac = "1Yrs";break}
        {$age -ge 90} {$ac = "3Mo";break}
        {$age -ge 30} {$ac = "1Mo";break}
        {$age -ge 7} {$ac = "1wk";break}
        {$age -lt 7} {$ac = "current";break}
    }
    $obj = New-Object PSObject -Property @{
        Age      = $ac
        MB       = "{0:N2}" -f ($file.Length/1MB)
        FileName = $file.Name
        DirPath  = $file.DirectoryName
    }
    $out += $obj
}

#output results, summarized if requested
If($Summary){$out | sort Age | group Age | ft Name,Count}Else{Write-Output $out}


A Post-Updates Service Checker

‘A what?’ you might be asking…yes, a service checker to scan all your systems and generate a simple html doc displaying the results. While the title of this entry says ‘post-update’, it could actually be used in any number of situations, I just wrote it in response to that specific need.

The scenario is simple enough: an admin initiates a patch update process against a bunch of systems all at once and then bounces them. In this particular scenario, about 30 servers or so were being affected at a time via a management utility called Kaseya, and the admin needed to verify that all services set to auto-start were actually started after the boot. I can only assume they had some sort of issue with this before.

I elected to write a script in PowerShell, but had to deal with the fact that PoSh would not be available on any system except the main management node. Technically of course, a script was not even required since good old Get-WMIObject could do the job fairly quickly as follows:

PS> Get-Content C:\servers.txt | foreach{Get-WMIObject Win32_Service -Computer $_ | Where{$_.StartMode -eq 'Auto' -and $_.State -eq 'Stopped' }}

Of course, the requirements rapidly went up from there. I had to deal with a PoSh newb who would be learning on the go, so whatever I made had to be robust. It was also to be used as part of an automated process, so I needed to account for at least some minor error handling. On top of that, I wanted it to be flexible and able to run readily without the automation. And lastly, I needed to output an HTML file and exclude common services they didn’t care about, such as ‘Performance Logs & Alerts’, which is usually set to Auto on servers and all too often fails to start.

I had originally hoped to leverage some of the new PoSh v2 features to accept pipeline input, but ended up skipping that based on an additional requirement added later, which was to pull systems directly from AD. What I ended up with is a script that accepts 4 arguments, all optional; specify a single server, specify an input file for feeding a list of systems, specify an output file (a default is set for this one), and specify additional services to exclude. Any or all of these can be used without issue, even the single server as well as a file. The one caveat is the interaction with the default machine source, which is from AD based on the domain membership of the system where the script is run. By default, if no objects are specified, it pulls all servers from AD by filtering on the OS name. Anyway, here is the full script and, as always, feel free to post comments, suggestions, etc.

# ==============================================================================================
# Microsoft PowerShell Source File — Created with SAPIEN Technologies PrimalScript 2007
# NAME: Check-Services
# AUTHOR: Christopher Whitfield , Paranet
# DATE : 3/9/2010
# VERSION: 1.5
# COMMENT: Takes either a single argument or a file containing server names.
# Each server name is checked to determine online status, and
# then checked for any services set as automatic but not running.
# Results are then appended to an HTML file located in C:\Temp unless
# overridden using the switch -outfile with a different filename or location.
# 3/9/2010 – Took out hard stop for FileIn making it optional and added auto pull from AD
# 3/11/2010 – Added default service filter as well as a parameter for additional services
# and one for specification of a single server
# ==============================================================================================

Param (
    # Parameter names reconstructed from their usage later in the script;
    # the default output path is assumed (the comments above only note C:\Temp)
    [string]$SrvIn,
    [string]$FileIn,
    [string]$FileOut = "C:\Temp\ServiceCheck.html",
    [string[]]$SvcFilt
)

# Check for value in the FileIn variable and populate from AD if empty
if(!($FileIn)){
    # Create LDAP filter
    $filter = "(&(objectcategory=computer)(OperatingSystem=Windows*Server*))"

    # Create DirectorySearcher that connects to default context and uses above filter
    $ds = New-Object DirectoryServices.DirectorySearcher([ADSI]"",$filter)

    # Execute search
    $objServers = $ds.FindAll()

    # Get server names from results
    $Servers = $objServers | Foreach{$_.Properties.name}
}else{
    # If an input file was specified, verify file exists and throw error if not
    if(!(Test-Path $FileIn)){
        Throw "Specified input file does not exist! Check path and try again."
    }
    # If file path valid, get server names from content
    $Servers = Get-Content $FileIn
}

# Now we check for manually specified servers and add them to the list
if($SrvIn){ $Servers = $Servers + $SrvIn }

# Setup service filter and append others if present
$DefSvcFilt = @('Performance Logs & Alerts')
if($SvcFilt){ $DefSvcFilt = $DefSvcFilt + $SvcFilt }

# Header details for html file
$header = @"
<html>
<head>
<meta name="GENERATOR" content="SAPIEN Technologies PrimalScript 2007">
<title>Auto Service Check - $(Get-Date)</title>
<style>
TABLE{border-width: 1px;border-style: solid;border-color: black;border-collapse: collapse;}
TH{border-width: 1px;padding: 2px;border-style: solid;border-color: black;background-color:thistle}
TD{border-width: 1px;padding: 2px;border-style: solid;border-color: black;background-color:palegoldenrod}
</style>
</head>
<body>
"@

# Footer details for html file
$footer = @"
</body>
</html>
"@

# Check for presence of specified $FileOut value
if(!(Test-Path $FileOut)){
    # If file does not exist, create it
    New-Item -Path $FileOut -ItemType file | Out-Null

    # Once file has been created, add header information
    Add-Content -Value $header -Path $FileOut
}else{
    # If file exists, rename by appending current date to beginning of filename
    Rename-Item -Path $FileOut -NewName ((Get-Date -UFormat "%m%d%Y%H%M%S") + "-" + (Split-Path $FileOut -Leaf))

    # Create a new file
    New-Item -Path $FileOut -ItemType file | Out-Null

    # Once file has been created, add header information
    Add-Content -Value $header -Path $FileOut
}

# Process-System sub-function
Function Process-System {
    Param([string]$SrvName)

    # Create custom object to store and format output
    $DataOut = New-Object object

    # Add server name and run date/time to object
    $DataOut | Add-Member -MemberType NoteProperty ServerName -Value $SrvName
    $DataOut | Add-Member -MemberType NoteProperty CheckTime -Value (Get-Date -Format g)

    # Verify online status and write output to object if not
    if(!(Test-Connection -ComputerName $SrvName -Quiet)){
        $DataOut | Add-Member -MemberType NoteProperty Online -Value "NO"
    }else{
        # If system is online, write to object and continue
        $DataOut | Add-Member -MemberType NoteProperty Online -Value "YES"

        # Use WMI to get Win32_Service from specified system and filter for stopped services set to auto start
        $CheckServices = Get-WmiObject -Class Win32_Service -ComputerName $SrvName |
            Where-Object{ $_.StartMode -eq 'Auto' -and $_.State -eq 'Stopped' } | Select-Object DisplayName

        # If any services returned update object
        if($CheckServices){
            $svcnum = 1
            Foreach($cs in $CheckServices){
                if(!($DefSvcFilt -match $cs.DisplayName)){
                    $DataOut | Add-Member -MemberType NoteProperty ("DownService" + $svcnum) -Value $cs.DisplayName
                    $svcnum++
                }
            }
        }else{
            # If no data comes back, update object
            $DataOut | Add-Member -MemberType NoteProperty DownServices -Value "None"
        }
    }

    # Output collected data to file
    $FinalData = $DataOut | ConvertTo-Html -Fragment -PostContent "</P>"
    Add-Content -Value $FinalData -Path $FileOut

    # Destroy object for next run
    $DataOut = $null
}

# Process servers
$Servers | foreach{ Process-System $_ }

# Add footer to file
Add-Content -Value $footer -Path $FileOut


Well, enjoy!


P.S. You might have noticed the much prettier looking code than I have posted in the past. This is thanks to awesome work by Vladimir Averkin and Lee Holmes in making a Set-ClipboardScript script that applies syntax highlighting to a file by adding HTML. I was able to just paste it into Live Writer as-is and it looks pretty decent…not counting the indenting issue of course.


So, in the recent past, I had a need to build out a new ISA array for a special project. Now, you might be thinking at this point ‘OK, sure, but why PowerShell? Isn’t that overkill?’ and you would be right under normal circumstances. The thing that made this particular job a tad more painful was the need to generate 50 network definitions and access rules in order to accommodate the specific use to which ISA would be put. Now, I don’t know about you, but the need to generate 50 rules, each with minor yet predictable variations, just screams script.

So now maybe you are asking ‘But why reinvent the wheel since there are tons of examples out there using VBScript that do pretty much the same thing right?’ and the answer is…yes and no. There are examples out there for creating networks and creating rules both (see for some really great examples), but I needed to generate 50 of them and I didn’t necessarily want to generate a script, I just wanted to DO it. In cases like this especially, even when working with COM objects, PoSh is still the best tool for the job…or so my thinking went. I quickly discovered that, as great as ISA is as a product, its COM interface could use some work.

So we’ll start this journey at the beginning by initializing the COM object in PoSh, which can be done as so:

PS C:\>$root = New-Object -com "FPC.Root" -Strict

Now you will notice that I used the –Strict switch on my object. Running the cmdlet through the help indicates that this is done to cause an alert to be raised when the COM object uses an interop assembly so that you can more readily distinguish between actual COM objects and COM-callable wrappers for .NET objects. The reasons for needing this distinction in this case are not entirely clear to me, not being a programmer, but this appears to be the preferred method in all examples I have found, so we’ll go with it.

Next thing we need to do is set up bindings for a couple of the elements that we will be working with, namely networks, rule elements, and rules within an array. If we pass our $root object to Get-Member, we can see that the best method to use is likely going to be the ‘GetContainingArray’ method like so:

PS C:\>$array = $root.GetContainingArray()

This works fine when you have only a single array, but I don’t think this will bind properly to what we need if you have multiple arrays. In that case, it might be better to bind in a slightly different manner using the name of the array like so:

PS C:\>$array = $root.Arrays | Where-Object {$ -like 'MyArray'}

We also want a handle on the array's networks collection, which we will be adding to shortly (reached through NetworkConfiguration in the ISA COM hierarchy):

PS C:\>$networks = $array.NetworkConfiguration.Networks

My preference in this instance will be to do a 'Do-Until' loop, so we need a simple counter placeholder that we will go ahead and start at 1:

PS C:\>$i = 1

Don’t forget that, before every new Do – Until loop that uses $i, you will first need to set $i back to 1 before starting your loop. Otherwise, the loop body will run only a single time before the Until condition stops it, since $i will still be greater than 50 from the previous loop. Alternately, you could use different variables for each loop, but I think it’s just as easy the other way.
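To make that reset concrete, here is a bare sketch of two back-to-back loops (the loop bodies are just placeholders for the real work shown later):

```powershell
$i = 1
Do {
    # ...create network number $i...
    $i ++
} Until ($i -gt 50)

# Reset the counter, or the next loop stops after a single pass
$i = 1
Do {
    # ...create user set number $i...
    $i ++
} Until ($i -gt 50)
```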

Next we want to create our networks. If we pass $networks to Get-Member, we quickly see that there is no obvious method for ‘Adding’ a new network. This is what I was referring to earlier on the COM interface needing a bit of work. In my mind, a properly built COM object would provide what you need via Get-Member even if it is an inherited item from a parent. Unfortunately, even checking the parent objects doesn’t reveal a method for adding a new network, so how do we do it?

For this I had to resort to downloading and combing through the ISA SDK. True, this is available online via MSDN, but it’s much easier to search for what I am looking for with the CHM format. In addition to this, not all of the samples are available in full format online so that is another reason to have the SDK installed since these examples are then installed locally.

After reviewing the CHM, it turns out that the majority of COM interfaces on ISA have an ‘Add’ method even when it isn’t otherwise reflected via Get-Member. Why this is the case is beyond me, but we’ll roll with it. When calling the ‘Add’ method for the COM objects associated with ISA, it always returns an instance of the new object for further manipulation. One of the nice aspects of this is that none of the changes are committed until you call the ‘Save’ method. This is helpful because we need to specify all the settings we want for our new network before we commit (not that we have commitment issues, we just like to work up to it gently). Since an object is returned, we will want to bind it to a variable so we can work with it which would look something like this:

PS C:\>$net = $networks.Add(("MyNetwork" + $i))

Now we have the variable $net bound to our new network which is called MyNetwork plus whatever the current value of ‘i’ is at the time. You may have noticed that I used double parenthesis (if you didn’t its ok, I don’t expect you to know everything this time…but next time there will be a test). The reason for this is a matter of execution priorities within PoSh. I want it to figure out what ‘"MyNetwork" + i’ is before it executes the ‘Add’ method since the method likely lacks the processing instructions to do it after. Methods are simple…they want what they want when they want it and only the way they want it (kinda like my two year old) so we don’t want to startle or scare our little method by asking it to do odd things. Thus, we do the processing first and provide only the results. I could have done this outside of the method call, but then I would have to assign it to a variable and increase the number of lines in my script. Doing it this way instead has the same effect, but requires less effort in my opinion.
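For completeness, the variable-first alternative I just mentioned would look something like this (using a throwaway $netname variable of my own naming):

```powershell
# Compute the name first, then hand the finished string to the method
$netname = "MyNetwork" + $i
$net = $networks.Add($netname)
```

Same result, one extra line.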

So now we have a new object called $net. We can pass this to Get-Member, and it provides us with a handy little list of the remaining properties to be set. Each item that we care about needs to be set before we commit the changes like so:

PS C:\>$net.AutoDiscoveryPort = "80"

PS C:\>$net.Description = ("Test isolation network" + $i)

PS C:\>$net.EnableAutoDiscoveryPort = $true

PS C:\>$net.EnableFirewallClients = $true

PS C:\>$net.EnableWebProxyClients = $true

PS C:\>$net.IpRangeSet.IP_From = ("192.168." + $i + ".1")

PS C:\>$net.IpRangeSet.IP_To = ("192.168." + $i + ".50")

PS C:\>$net.LDT = ""


Of course, if you ran $net through Get-Member, you will probably notice that I didn’t specify every property and that some items even have several sub elements. As to why some and not others, some items have a default setting that was already what I needed while others required changing and yet others cannot be set. Items that cannot be set are usually obvious via the output of Get-Member because they indicate only {get} and not {set} which indicates they are read only properties. This could be because the property is managed by the system, or it could be because these properties actually contain ‘enumeration collections’ which are themselves objects and require you to set the sub objects rather than the property itself.

This is the case with the ‘IpRangeSet’ property. Get-Member showed this property with only the ability to {get} and so running $net.IpRangeSet through Get-Member showed me the sub object properties that needed to be set. This is also evident in the SDK help file as the property is shown as read only, but derived from a sub object whose properties you can set. Others might indicate that the property is completely read only without a reference to another object and these you just have to accept the defaults for.

In my case, the above properties were all I needed to set, but you might need others so I would strongly suggest grabbing the SDK from the Microsoft downloads site if you think you need other properties (though you can always sift through each property using Get-Member as well…but my way is shorter…trust me). The last thing I need to do is to save my changes like so:

PS C:\>$net.Save()

It’s important to note that, while this network is available now to ISA to use in the creation of rules and the like, it isn’t actually active yet. If you have worked with ISA for any length of time, you will probably realize that this is because we haven’t applied the changes yet (and no, save is NOT the same as apply).

So, all of the above goes into a Do – Until loop, of course with an increment on each loop for $i making this whole portion of the script look like this:

PS C:\>Do {

> $net = $networks.Add(("MyNetwork" + $i))

> $net.AutoDiscoveryPort = "80"

> $net.Description = ("Test isolation network" + $i)

> $net.EnableAutoDiscoveryPort = $true

> $net.EnableFirewallClients = $true

> $net.EnableWebProxyClients = $true

> $net.IpRangeSet.IP_From = ("192.168." + $i + ".1")

> $net.IpRangeSet.IP_To = ("192.168." + $i + ".50")

> $net.LDT = ""

> $net.Save()

> $i ++

> } Until ($i -gt 50)


The next part, of course, is to define the network set and routing rules for these networks. Rather than loop this though, we can simply make them the next logical step in our script like so:

PS C:\> $netset = $array.NetworkConfiguration.NetworkSets.Add("All Test Iso Networks")

PS C:\> $netset.Networks.Add("External", 0)

PS C:\> $netset.Networks.Add("Internal", 0)

PS C:\> $netset.Networks.Add("Local Host", 0)

PS C:\> $netset.Networks.Add("Quarantined VPN Clients", 0)

PS C:\> $netset.Networks.Add("VPN Clients", 0)

PS C:\> $netset.Save()


You may have noticed that we didn’t add any of the MyNetwork networks we created before. We could have done this via another Do – Until loop, but we don’t have to go to all that trouble. Instead, we use the all-inclusive option for creating a network set and specify only the networks we want Excluded from this set. We do this by calling the ‘Add’ method (since we are referencing an existing object) and specifying the value for the Name property that represents the network we wish to exclude, and then setting either a 0 to exclude or a 1 to include the network. Since the only networks that exist on this ISA array other than the defaults are the ones I created, this represents a much smaller list than adding in all 50 of my networks I created.

Next, we have to define the routing relationship between this network and other networks. We do this almost the same way we did with the NetworkSet as follows:

PS C:\> $netrule = $array.NetworkConfiguration.NetworkRules.Add("MyNetwork to Internal")

PS C:\> $netrule.DestinationSelectionIPs.Networks.Add("Internal", 1)

PS C:\> $netrule.DestinationSelectionIPs.Networks.Add("VPN Clients", 1)

PS C:\> $netrule.SourceSelectionIPs.Networks.Add("All Test Iso Networks", 1)

PS C:\> $netrule.RoutingType = 0

PS C:\> $netrule.Save()


Since we already had the network set available, we simply added it to the source and our other ‘Internal’ networks to the destination. These network rules cover traffic in both directions, so there is no need to create any additional rules for traffic headed the other way. The only other item we set was the RoutingType, which controls whether the network uses a NAT or Route style when navigating traffic. Setting a value of ‘0’ sets the type to Route while a value of ‘1’ indicates NAT. The way I figured this out was to review the FpcCfg.vbs available in the ‘inc’ directory where you have installed the SDK for ISA. It provides value options for just about every item you might need to set. This was cross referenced with the information provided in the help file which indicates only that a value of either fpcRoute (0) or fpcNAT (1) needs to be set. I searched the vbs for those values to find out what they translated to.

The next piece, in my case, was to populate user sets that were tied to groups in AD. This was so that I could restrict access to each network to only those people who had been granted access. This process is fairly simple as well…except for the binding to the right AD user part which is slightly annoying. In my case, I had pre-created 50 AD groups with the same name, but incremented numbers on the end just to keep things consistent. I did that with PoSh as well using the Quest AD cmdlets, but will not cover that step in this post. The general process for adding each user set looks like this:

PS C:\> $uset = $array.RuleElements.UserSets.Add(("MyNetwork" + $i))

PS C:\> $uset.Accounts.Add(("mydomain\MyNetworkGrp" + $i))

PS C:\> $uset.Save()


As before, you will want to stick that into a loop like we did with the creation of the networks themselves. You could be asking yourself why we didn’t go ahead and create all of this inside of one loop at this point. Some things, such as the networks and usersets, we probably actually could, but others, like the Access Rules themselves and the network set, require the creation of the other elements first.
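For reference, the whole user-set piece dropped into its Do – Until loop looks like this (assembled straight from the lines above, counter reset included):

```powershell
$i = 1
Do {
    $uset = $array.RuleElements.UserSets.Add(("MyNetwork" + $i))
    $uset.Accounts.Add(("mydomain\MyNetworkGrp" + $i))
    $uset.Save()
    $i ++
} Until ($i -gt 50)
```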

So now we are up to the next to last part, which is creating the Access Rules to allow traffic to actually flow between the networks. You see, it isn’t enough to simply specify a network has a routed or NATed relationship, you also have to specify what traffic is allowed to flow for travel between any two networks unless they use the same interface on ISA. In my case, it was fairly simple as all I wanted to do was allow inbound access from the internal networks to the desired MyNetwork based upon being in the correct AD group. You will also likely want to create a rule that allows your individual networks outside for web browsing or whatever, but I will not demonstrate that here on this post since the general process will be covered by the add rules loop I am about to show you. The only difference is that you will add the Network set instead of an individual network and the ‘All Users’ user set instead of a specific AD group user set. The process for creating the new access rule is as follows:

PS C:\> $accrule = $array.ArrayPolicy.PolicyRules.Add(("Int to MyNetwork" + $i))

PS C:\> $accrule.SourceSelectionIPs.Networks.Add("Internal", 0)

PS C:\> $accrule.AccessProperties.DestinationSelectionIPs.Networks.Add(("MyNetwork" + $i), 0)

PS C:\> $accrule.AccessProperties.UserSets.Add(("MyNetwork" + $i), 0)

PS C:\> $accrule.Action = 0

PS C:\> $accrule.Save()

PS C:\> $i ++


As before, the above will be enclosed in a Do – Until loop just like the first one. As before, I only set those properties that I felt needed to be set for my purposes and there are still many more that could be set. As shown, the above will generate an access rule that allows unrestricted access on any protocol for users on the internal network who are a member of the correct group for the chosen destination network.
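Assembled the same way, the access-rule loop comes out like this:

```powershell
$i = 1
Do {
    $accrule = $array.ArrayPolicy.PolicyRules.Add(("Int to MyNetwork" + $i))
    $accrule.SourceSelectionIPs.Networks.Add("Internal", 0)
    $accrule.AccessProperties.DestinationSelectionIPs.Networks.Add(("MyNetwork" + $i), 0)
    $accrule.AccessProperties.UserSets.Add(("MyNetwork" + $i), 0)
    $accrule.Action = 0
    $accrule.Save()
    $i ++
} Until ($i -gt 50)
```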

The very last step is to commit all our changes and additions to ISA to make them live which we do like this:

PS C:\> $root.ApplyChanges()

In this instance, we have to commit or dump these changes at the root level because we are accessing an array. If you are using ISA standard, you could actually do this using the $array instead of $root I believe.

Anyway, I hope this post proves useful to someone and, as always, please feel free to reply with your suggestions for improvements or questions if you are stuck.


Quickie Inventory Example

So, after I posted the prior post, I realized that in my sleep deprived state I had overwritten this one. I’ll skip the boring bits and just provide the script.


Function Get-QuickInv {
    # Computer names to inventory
    Param($Computers)

    Write-Host "Getting inventory from selected systems"

    $outobj = @()

    $Computers | foreach{

        $csinfo = gwmi -class Win32_ComputerSystem -computer $_ | select name,manufacturer,model,numberofprocessors
        $procinfo = gwmi -class Win32_Processor -computer $_ | select -first 1 currentclockspeed
        $diskinfo = gwmi -class Win32_LogicalDisk -computer $_ | where {$_.DriveType -eq 3} | select name,size,freespace
        $nicinfo = gwmi -class Win32_NetworkAdapter -computer $_ | where {$_.AdapterTypeID -eq 0 -and $_.PhysicalAdapter -eq $True}

        $output = New-Object System.Object
        $output | Add-Member -type NoteProperty -name System -value $csinfo.Name
        $output | Add-Member -type NoteProperty -name Manufacturer -value $csinfo.Manufacturer
        $output | Add-Member -type NoteProperty -name Model -value $csinfo.Model
        $output | Add-Member -type NoteProperty -name NumProcs -value $csinfo.NumberOfProcessors
        $output | Add-Member -type NoteProperty -name ClockSpeed -value $procinfo.CurrentClockSpeed
        $output | Add-Member -type NoteProperty -name NumNICs -value ($nicinfo.count)

        foreach($disk in $diskinfo){
            $output | Add-Member -type NoteProperty -name ("Drive-" + $ -value $
            $output | Add-Member -type NoteProperty -name ("Size-" + $ -value ([math]::truncate($disk.size/1mb))
            $output | Add-Member -type NoteProperty -name ("FreeSpace-" + $ -value ([math]::truncate($disk.freespace/1mb))
        }

        $outobj += $output
    }

    Write-Host "Inventory complete."

    Write-Output $outobj
}








So the same colleague of mine hit me up for a little more help. He needed some specific information in a nice readable format for 75 Exchange 2003 servers. Some of this info could be gotten from the Exchange 2003 PowerPak for PowerGUI, but not cleanly associated with a hierarchy. Some of the info was available via WMI, but not all of it. The only place where almost everything could be found was AD. The script I made for him is below, though keep in mind this was put together while I was suffering from a bout of insomnia, so I may have the formatting off, and my little test environment only has a single Exchange server of course, but it should work to get all the info, just not sure how it will format. I expect it will break though, since I don’t think I looped things effectively, but I’m finally tired so am going to bed. I will try to post an update some time tomorrow when I fix it. This new approach uses XML instead of the way I was doing it before, which was to create a custom object and try and stuff everything in there. The problem with this was that child objects were not readily associated with their parent objects which was the whole point, otherwise the PowerGUI PowerPak would have worked just fine.

There are some theoretical downsides to this (more like manual steps since I didn’t put it into the script yet) in that you will want to tie the output to a variable like so:

56# [xml]$testing = Get-QuickExchInv

After doing this, you can save to a file like this:

57# $testing.Save("C:\testtest.xml")

This will save the output of the command to an XML file on your C: drive called ‘testtest.xml’ so you can review it as well as easily re-import it to work with it as an object. I tried simply outputting to CSV or XML via the cmdlets, but this didn’t go well. The CSV didn’t enumerate everything and the XML changed all the greater than and less than symbols to other text which broke the whole XML idea.
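Getting the data back later is just as easy; cast the file contents back to XML and you are working with an object again:

```powershell
# Re-import the saved file for object-style navigation
[xml]$testing = Get-Content C:\testtest.xml
```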

#requires -version 2

Function Get-QuickExchInv {

    Write-Host "Getting inventory from selected systems"

    $root = New-Object system.directoryservices.directoryentry("LDAP://RootDSE")
    $msexch = New-Object system.DirectoryServices.DirectoryEntry("LDAP://CN=Microsoft Exchange,CN=Services,CN=Configuration," + $root.defaultnamingcontext)
    $exchorgs = $msexch.children | Where-Object {$_.objectClass -contains 'msExchOrganizationContainer'}

    $output = "<ExchangeOrg>"
    $output += "<Organization Name=""$(($"">"

    $admgrps = ($exchorgs.children | Where-Object {$_.objectClass -contains 'msExchAdminGroupContainer'}).children
    $output += "<AdmGrps>"
    foreach($admgrp in $admgrps){
        $output += "<Group Name=""$(($"">"
        $exchsrvrs = ($admgrp.children | Where-Object {$_.objectClass -contains 'msExchServersContainer'}).children
        $output += "<Servers>"
        foreach($srv in $exchsrvrs){
            $mysrv = $
            $output += "<Server Name=""$(($"">"
            $istores = ($srv.children | Where-Object {$_.objectClass -contains 'msExchInformationStore'}).children
            $output += "<InfoStores>"
            foreach($is in $istores){
                $output += "<Store Name=""$(($"">"
                $ismdbs = $is.children | Where-Object {$_.objectClass -contains 'msExchMDB'}
                $output += "<StoreDBs>"
                foreach($ismdb in $ismdbs){
                    $mailboxcnt = (Get-WmiObject -Namespace 'root\MicrosoftExchangeV2' -Class 'Exchange_Mailbox' -computer $mysrv | ?{$_.StoreName -eq $}).count
                    $output += "<DBName>""$(($""</DBName>"
                    $output += "<DBPath>""$(($ismdb.msExchEDBFile).ToString())""</DBPath>"
                    $output += "<DBMailBoxCount>""$mailboxcnt""</DBMailBoxCount>"
                }
                $output += "</StoreDBs>"
                $output += "</Store>"
            }
            $output += "</InfoStores>"
            $output += "</Server>"
        }
        $output += "</Servers>"
        $output += "</Group>"
    }
    $output += "</AdmGrps>"
    $output += "</Organization>"
    $output += "</ExchangeOrg>"

    Write-Host "Inventory complete."

    Write-Output $output
}



If you come up with a better approach before I do, please feel free to share!



As usual, it has been some time since I have posted anything, but sadly, I have had little opportunity to do any significant scripting lately. Fortunately, that has changed.

One of the many services my company provides is to perform health checks for AD, MOSS, Exchange, etc just like MS consulting services does. Unlike MCS however, we don’t have their nifty tools that do a healthy chunk of the work for you to gather data. Yes, we are an MS partner, but even partners don’t get to use these tools. Of course, there are a myriad of tools out there that get this info both commercial and free. The commercial option is great if our customer already owns it, but I tend to want to steer clear of this option because it sometimes gives the impression to the client that you have some agenda regarding this or that product. While this may sometimes be the case, it can be damaging to the trust relationship. This leaves me with the free alternatives.

There are many hundreds of tools out there to gather all kinds of info, each with its own way of getting and presenting data which, of course, then must be massaged into a bigger picture and analyzed, then made presentable. This is a lot of work…and I really hate work (yes, in spite of how hard I work to avoid work). Even MCS realizes this, but nobody wants to reinvent the wheel, so MCS built their tools to leverage the existing ones. Basically they made a bunch of wrapper WSF scripts and a nifty interface and called it a day (which is really why I am surprised they won’t let partners use it).

So, to that end, I have started a pet project to build a tool for use by my company. The advantage to any readers of this blog, of course, is that you get to enjoy the benefits of some of my labor since this is where I will be posting the different sub pieces as I get them done so that I don’t lose them down the road. Hopefully this helps others out and, who knows, maybe I’ll even accept a few beta testers from outside my company when I get further along. The scripts, however, are free to use as you will and, as always, please feel free to drop me a line if you have improvements.

With goal number 1 being to avoid commercial products (even the free ones), I know I have to use scripting either via ADSI or .NET…and frankly I have never been a huge fan of ADSI and I have some vague notion that I will eventually be able to do some programming down the road so .NET it is. The MCS AD Rapid Assessment Tool, which I do happen to have an older copy of (and no, I won’t send it to you), is being used as my guide for my own AD Info tool. I am not copying it, just using it to make sure I get all the pieces.

Part 1 is going to focus on the AD Sites aspect of AD replication; namely, collecting a bunch of useful info about sites and putting it into a usable format. For this I will be using the System.DirectoryServices.ActiveDirectory namespace, for which you can find detailed information on the MSDN library here –

Since I will eventually be trying to get a rather large chunk of info, I will start by binding to the forest.

$forest = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest()

This gets the forest of which the system I am running the script from is a member. There are lots of useful pieces in here and I can see already it’s a good start.





ApplicationPartitions – Gets a collection of all application partitions in the forest.

Domains – Gets a collection of all domains in the forest.

ForestMode – Gets the operating mode of the forest.

GlobalCatalogs – Gets a collection of all global catalogs in the forest.

Name – Gets the name of the forest.

NamingRoleOwner – Gets the domain controller that holds the FSMO naming master role for the forest.

RootDomain – Gets the first domain that was created in a forest.

Schema – Gets the schema of the forest.

SchemaRoleOwner – Gets the domain controller that holds the FSMO schema master role for the forest.

Sites – Gets a collection of sites that are contained in the forest.

Since site information is what I want right now, I’ll just try binding to sites like this:

$sites = $forest.sites

Now $sites produces a relatively nice listing of all my sites

PS C:\Users\BobbyBuche> $sites



Name : Default-First-Site-Name

Domains : {}

Subnets : {}

Servers : {}

AdjacentSites : {TestSite}


InterSiteTopologyGenerator :

Options : None

Location : Dallas

BridgeheadServers : {}

PreferredSmtpBridgeheadServers : {}

PreferredRpcBridgeheadServers : {}

IntraSiteReplicationSchedule : System.DirectoryServices.ActiveDirectory.ActiveDirectorySchedule


Name : TestSite

Domains : {}

Subnets : {}

Servers : {}

AdjacentSites : {Default-First-Site-Name}


InterSiteTopologyGenerator :

Options : None

Location : Houston

BridgeheadServers : {}

PreferredSmtpBridgeheadServers : {}

PreferredRpcBridgeheadServers : {}

IntraSiteReplicationSchedule :


Well, almost nice…there are all those elements inside the curly braces {} that, while fine in my little bitty environment, could be a problem in a larger one with many sites, subnets, and servers, etc. So I need to find a way to expand that information. I am sure there is some way to do this iteratively, but I am not sure how to do this so I’ll start out by trying to bind to a specific site and see if I can get to it that way.
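For the record, one simple iterative option (just a rough sketch, and not the route I ended up taking) would be to walk each site’s collection properties yourself:

```powershell
# Expand the collection-valued properties one site at a time
foreach($site in $sites){
    Write-Host ("Site: " + $site.Name)
    $site.Servers | ForEach-Object { Write-Host ("  Server: " + $_.Name) }
    $site.AdjacentSites | ForEach-Object { Write-Host ("  Adjacent: " + $_.Name) }
}
```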

As it turns out though, binding to a site via .NET is a bit harder than I anticipated and there are not any examples of this I could find anywhere using PowerShell. There are lots about adding subnets or sites, just none about modifying existing sites. A quick look through the namespace (Of course, finding this took a little longer than a ‘quick look’ but the details wouldn’t help you so I’ll skip em) reveals another promising class: System.DirectoryServices.ActiveDirectory.ActiveDirectorySite

This class has a method called ‘FindByName’ that looks like it will allow me to bind to a single site by name, something I couldn’t do via $forest.sites. Unfortunately, it has two overloads that are required. One is, of course, the string that is the name of the site, but the first one is ‘DirectoryContext’ and the MSDN page on the subject isn’t very enlightening. So with no clue how to bind to it, and after several failed attempts, back to Google I go with a search for PowerShell and the namespace with the DirectoryContext thrown in. Mr. Richard Siddaway ( comes to my rescue with an example of how to set up a context which, coupled with the MSDN information allows me to get this:

$context = [System.DirectoryServices.ActiveDirectory.DirectoryContextType]"forest"

$forcon = New-Object System.DirectoryServices.ActiveDirectory.DirectoryContext($context,$forest)


This lets me bind to a specific site by name like this:

$mytestsite = [System.DirectoryServices.ActiveDirectory.ActiveDirectorySite]::FindByName($forcon, "TestSite")


My output from typing $mytestsite still produces similar output to above, but now I can type $mytestsite.subnets, for example, and get an expanded listing of the subnet. I could, at this point, create a custom object and add everything to it using Add-Member, but I remembered another way that I tried out (and it worked) using Format-Table as shown here:

$siteslist = $sites | ft name,
                        @{Label="Adjacent Sites";Expression={$_.adjacentsites}},
                        @{Label="Site Links";Expression={$_.sitelinks}},
                        @{Label="Bridgehead Servers";Expression={$_.bridgeheadservers}},
                        @{Label="Preferred SMTP Bridgeheads";Expression={$_.preferredsmtpbridgeheadservers}},
                        @{Label="Preferred RPC Bridgeheads";Expression={$_.preferredrpcbridgeheadservers}}


This creates a nicely formatted table output with each item expanded like I wanted. You may have noted that I didn’t include the subnet info. This is on purpose as I plan to gather that later along with some additional info on undefined subnets.

In part 2, I plan to move on to getting detailed Replication Configuration, Replication Status, as well as the aforementioned subnet info and AD Convergence checks (Brandon Shell has a great script for this over on his blog at which I am totally gonna steal…er…borrow). Hopefully it won’t be another 4 months before I can get to it, but there may be a few other posts in the meantime for some other things I got going on.

Until next time, Enjoy!


Well, it’s easier anyway.

So what’s the deal you ask? Well, it’s a TV show where they have these briefcases and…oh…you meant with the title.

*clears throat*

I knew that. I was just testing you.

So the deal is this; I recently had a colleague who was looking for a way to update a few users…about 173,000 users to be imprecise. Now this is certainly something that could be done with the good old trusty VBScript, for which there are literally thousands of examples for on the internet…except for the fact that each user had an inconsistent number of fields and values to be updated and seven of the columns actually had to be combined into a single value, but only the ones that had data to avoid dumping in blank data.

Now I have written some pretty decently complex VBScripts in the past, but the logic associated with checking each column for each user and acting on it dynamically was really something much easier done in PowerShell. So, of course, I wrote up a quickie little PowerShell function using the Quest AD Cmdlets and sent it on its merry way. I also try to steer clear of VBScript these days if at all possible since, if I don’t make myself use the newer tech, I will never get used to it in time before I HAVE to be able to use it. If you work predominantly in the Microsoft space like I do, the PoSh wave is coming and you WILL have to know how to use PowerShell, even if you never write a single script (but you will write scripts, cause it’s just too easy to do not to).

After that was off and away though, I took a quick little jaunt around the net in my handy dandy Google-Mobile to see what others had done for this type of mass update using PowerShell. What I found was mostly a lot of specific use snippets or updates for only a single AD attribute, but not much in the way of mass user updating that was flexible for any situation. I found this fairly odd since, as anyone who has been using PowerShell for any amount of time knows, you really don’t need to even use scripts most of the time, just good one-liners and the pipeline. For example, let’s say you had a spreadsheet with users who were moving from one office to another and the spreadsheet had columns for username, street, city, and zip (let’s assume they are staying in the same state). You could simply update the username column to be ‘domain\username’ and then run the following one-liner:

Import-CSV C:\usermoves.csv | ForEach-Object { Set-QADUser $_.username -StreetAddress $_.street -City $_.city -PostalCode $ }

As long as you have data in every column, you’re done. If a column is empty, the data already in that attribute simply gets deleted, which is fine as long as that’s what you intend. But what about the scenario with my colleague, and what about all those poor souls out there who haven’t yet discovered how great and easy PowerShell is? They deserve a solution too, don’t they, something that they can easily find in their wanderings on the net?
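If blanking attributes from empty cells isn’t acceptable but you still want to stay in one-liner territory, parameter splatting is one way out. This is just a sketch using the hypothetical column names from the example above (username, street, city, zip); the helper name `Get-PopulatedParams` is mine, not part of the Quest cmdlets:

```powershell
# Build a parameter table containing only the cells that actually have data,
# so empty columns are skipped instead of blanking the AD attribute.
function Get-PopulatedParams($row) {
    # CSV column name -> Set-QADUser parameter name
    $map = @{ street = 'StreetAddress'; city = 'City'; zip = 'PostalCode' }
    $params = @{ Identity = $row.username }
    foreach ($col in $map.Keys) {
        if ($row.$col) { $params[$map[$col]] = $row.$col }
    }
    return $params
}

# Usage (assumes the Quest snap-in is loaded):
# Import-Csv C:\usermoves.csv | ForEach-Object { $p = Get-PopulatedParams $_; Set-QADUser @p }
```

Splatting passes the whole hashtable as named parameters, so a skipped column is never mentioned to the cmdlet at all and the existing attribute value is left alone.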

… Well maybe not precisely ‘deserve’ since they could theoretically take the time to learn PoSh rather than depending on the kindness of strangers, but I’m a softy…a giver you might say.

So, what I did was write a function called Update-BTFUser that you can see below. I wrote it with the requirement that you have the Quest AD Cmdlets installed in order to run the function, but the Quest cmdlets are free, so I didn’t think that too onerous. Yes, you could do the same thing using ADSI, and there are those of my peers who might think less of me for not doing it that way, but in my mind, why should I reinvent the wheel when Quest’s wheel is so nice and shiny and pretty…like one of those sparkly 22-inch jobs with the spinners…if you’re into that kind of thing anyway…you get the point.

So, if Quest’s AD cmdlets are soooo great, why did I write a function to implement them, you might ask? Quest’s cmdlets ARE great, but they don’t have a native ability to map your column titles to a particular execution path (if you work for Quest on these cmdlets…hint hint *wink*). In addition, there are several options in the Set-QADUser cmdlet that don’t map to a particular AD attribute name, such as –PasswordNeverExpires. So the below function deals with all that and also compensates for having empty cells without clearing the existing attribute values by checking for a ‘clear’ value. If the cell is merely empty, it gets ignored, but if it contains the word ‘clear’, then it deletes the existing attribute data. In addition, it looks for PasswordNeverExpires and UserMustChangePassword columns and, if the value of the column is Y, it sets the value to True, while N sets it to False (empty is still ignored). It even provides a UserPassword column for updating passwords en masse. I also accounted for the ObjectAttributes switch so that you can specify your own custom attributes, or those not covered by me in the function (which shouldn’t be too many). I didn’t account for every possible switch for the cmdlet, but with most of them there, you can add any that you need that I left out.

So, without further prattling on by me (despite how much I KNOW you love hanging on my every typed word *wink*), here is a link to a spreadsheet template that you can use, complete with examples of what goes in the different fields, and the function is below.
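In case the template link ever goes stale, here is a hypothetical sketch of the layout (made-up users and values; the columns are named for the AD attributes the function expects, and an ObjectAttributes cell holds the body of a hashtable):

```csv
sAMAccountName,givenName,sn,title,telephoneNumber,PasswordNeverExpires,UserMustChangePassword,ObjectAttributes
jdoe,John,Doe,Analyst,555-0100,N,,
asmith,,clear,Manager,,Y,N,employeeID='12345'
```

Note how asmith’s empty givenName cell is ignored, while the ‘clear’ in the sn column wipes the existing last name.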

# ==============================================================================================
# Microsoft PowerShell Source File - Created with SAPIEN Technologies PrimalScript 2007
#
# NAME: Update-BTFUser
#
# AUTHOR: Christopher Whitfield, BT
# DATE : 9/12/2008
#
# COMMENT: Must have Quest AD Cmdlets installed to use this function. Must also provide input
#          from CSV file with columns named for AD attributes via the pipeline.
# ==============================================================================================

Function Global:Update-BTFUser {

    begin {

        # Set error behavior
        $erroractionpreference = "SilentlyContinue"

        # Get domain (a blank answer falls back to the current user's domain)
        $domprompt = Read-Host "Please specify the netbios domain name"

        # Set dom variable for script
        if ($domprompt) {
            $dom = $domprompt + "\"
        }
        else {
            $dom = $ENV:USERDOMAIN + "\"
        }

        # Create placeholder variables
        $processed = 0
        $procerrors = 0
        $probuserout = @()

        # Map CSV column names (AD attribute names) to Set-QADUser parameter names
        $attribmap = @{
            'givenName' = 'FirstName';  'sn' = 'LastName';  'initials' = 'Initials'
            'displayName' = 'DisplayName';  'userPrincipalName' = 'UserPrincipalName'
            'homeDrive' = 'HomeDrive';  'homeDirectory' = 'HomeDirectory'
            'profilePath' = 'ProfilePath';  'scriptPath' = 'LogonScript'
            'mail' = 'Email';  'wWWHomePage' = 'WebPage';  'company' = 'Company'
            'department' = 'Department';  'manager' = 'Manager';  'title' = 'Title'
            'physicalDeliveryOfficeName' = 'Office';  'streetAddress' = 'StreetAddress'
            'l' = 'City';  'st' = 'StateOrProvince';  'postalCode' = 'PostalCode'
            'postOfficeBox' = 'PostOfficeBox';  'telephoneNumber' = 'PhoneNumber'
            'facsimileTelephoneNumber' = 'Fax';  'homePhone' = 'HomePhone'
            'mobile' = 'MobilePhone';  'pager' = 'Pager';  'info' = 'Notes'
            'description' = 'Description'
        }
    }

    process {
        if ($input) {
            foreach ($in in $input) {
                $user = $dom + $in.sAMAccountName

                # Set initial execution cmd string
                $execcmd = " -Identity '$user'"

                # Append to execution cmd string with data from cells using following steps:
                #     First check to see if cell has data and ignore if empty
                #     Next check to see if cell is marked 'clear' which indicates to delete current value from attribute
                #     If cell is not marked 'clear', add data in cell to cmd with appropriate switch
                foreach ($column in $attribmap.Keys) {
                    $value = $in.$column
                    if ($value) {
                        if ($value -like "clear") {
                            $execcmd += " -$($attribmap[$column]) ''"
                        }
                        else {
                            $execcmd += " -$($attribmap[$column]) '$value'"
                        }
                    }
                }

                # Y sets the flag, N clears it, empty is ignored
                if ($in.PasswordNeverExpires) {
                    if ($in.PasswordNeverExpires -like 'N') { $execcmd += ' -PasswordNeverExpires $false' }
                    else { $execcmd += ' -PasswordNeverExpires $true' }
                }

                if ($in.UserMustChangePassword) {
                    if ($in.UserMustChangePassword -like 'N') { $execcmd += ' -UserMustChangePassword $false' }
                    else { $execcmd += ' -UserMustChangePassword $true' }
                }

                # Optional password reset column
                if ($in.UserPassword) {
                    $execcmd += " -UserPassword '$($in.UserPassword)'"
                }

                # Optional custom attributes column (cell holds the body of a hashtable)
                if ($in.ObjectAttributes) {
                    $execcmd += " -ObjectAttributes @{$($in.ObjectAttributes)}"
                }

                # Execute combined command for current user
                $error.Clear()
                Invoke-Expression "Set-QADUser$execcmd"

                # Increment number of processed users
                $processed += 1

                # Check for errors and store for output
                if ($error) {
                    $procerrors += 1
                    $probusers = New-Object System.Object
                    $probusers | Add-Member -type NoteProperty -name "User" -value $in.sAMAccountName
                    $probuserout += $probusers
                }
            }
        }
        else {
            throw "You must provide pipeline input from associated CSV file or another CSV with columns named for attributes. Exiting…"
        }
    }

    end {
        Write-Host "Finished updating $processed users!"
        Write-Host "Had $procerrors problems updating the following users:"
        $probuserout | Sort-Object User
    }
}

# ==============================================================================================
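Once the function is loaded into your session (dot-source it or just paste it in), running it against your spreadsheet is a one-liner. The CSV path below is only an example, and the Quest snap-in must be available:

```powershell
# Load the Quest cmdlets if they aren't already in the session, then pipe the CSV in
Add-PSSnapin Quest.ActiveRoles.ADManagement -ErrorAction SilentlyContinue
Import-Csv C:\userupdates.csv | Update-BTFUser
```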

As always, please feel free to provide your own comments, feedback, and especially improvements. I am far from the best of PoSh scripters; I know there are often better ways, and I love to learn them.

Until next time…


