Using the CIM Cmdlets in Windows PowerShell to solve a challenge

Introduced in PowerShell 3.0 and further enhanced in PowerShell 4.0, the CIM (Common Information Model) cmdlets make it easier to work with WMI (Windows Management Instrumentation). You can find an introduction to the new CIM cmdlets in this article on the Windows PowerShell Team's blog.

In this article we will look at a usage scenario for the new CIM Cmdlets.

The Challenge

When adding a disk to a Windows Failover Cluster, the clustered disk is added to the cluster as a resource. The resource is automatically named “Cluster Disk N”, where N is the first available number. The challenge we are going to solve is renaming the newly added cluster disk based on the name (File System Label) of the underlying disk volume. This is typically an NTFS volume which is initialized and formatted before the disk is added to the cluster:

[Screenshot: the initialized and formatted NTFS volume]

Using the Failover Cluster Manager in Windows Server, we can view the name of the underlying volume:

[Screenshot: Failover Cluster Manager showing the volume label of the clustered disk]

Based on this observation we can manually rename the cluster resource. However, in a large cluster with many disks this task is a good candidate for automation.

Solving the challenge using CIM Cmdlets

By using the Get-ClusterResource cmdlet in the FailoverClusters PowerShell module we can view information about the disk, but there isn't an easy way to view the name of the underlying volume.

Instead, we can leverage the Get-CimInstance cmdlet to list all cluster resources using the MSCluster_Resource class, which lives in the root\MSCluster namespace:

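Get-CimInstance -Namespace root\MSCluster -ClassName MSCluster_Resource -Filter "Type='Physical Disk'"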

We use the -Filter parameter to specify that we only want cluster resources of the type "Physical Disk"; otherwise, we would also get cluster resources such as cluster IP addresses, cluster names and so on.

Based on the information available on the object produced by Get-CimInstance, we can see many cluster-related properties. However, no information about the underlying volume is available.

There is one cmdlet in the CimCmdlets module which can get all the associated instances for a particular instance: the Get-CimAssociatedInstance cmdlet. In this scenario, we want to find all related volumes for the cluster disks. We start by finding all related instances for the first disk (a sketch, storing the resource in a variable first):

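$resource = Get-CimInstance -Namespace root\MSCluster -ClassName MSCluster_Resource -Filter "Name='Cluster Disk 1'"
Get-CimAssociatedInstance -InputObject $resource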

This produced a large list of different kinds of objects. We can use Get-Member to view information about the objects, and Select-Object to view the unique object types:

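Get-CimAssociatedInstance -InputObject $resource | Get-Member | Select-Object -Property TypeName -Unique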

Based on the above output, the MSCluster_DiskPartition class looks like a good candidate. We can use the -ResultClassName parameter of Get-CimAssociatedInstance in order to narrow down the results to the class we want:

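$partition = Get-CimAssociatedInstance -InputObject $resource -ResultClassName MSCluster_DiskPartition
$partition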

This is exactly what we want, as we can see the VolumeLabel property in the above output.

Now that we have the information we want (the VolumeLabel property), we need to find a way to rename the cluster resource ("Cluster Disk 1"). We can start by looking for a Rename method on the cluster resource object. Normally, we would use Get-Member to view the available methods:

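# No Rename method shows up; the reason follows below
$resource | Get-Member -MemberType Method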

Since the CIM cmdlets use the WS-Man (WS-Management) protocol, the objects returned are serialized and deserialized. The objects' methods are thus removed, since we aren't working with a "live" object. Instead, the Get-CimClass cmdlet can be used to get the class schema of a CIM class, which includes the methods. Let's use it to view the methods available on the MSCluster_Resource class:

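(Get-CimClass -Namespace root\MSCluster -ClassName MSCluster_Resource).CimClassMethods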

As we can see, there is a Rename method which takes one parameter: newName.

In order to invoke the method, we can use the Invoke-CimMethod cmdlet, where we specify Rename as the MethodName and provide the value we want to configure in a hash table on the Arguments parameter:

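Invoke-CimMethod -InputObject $resource -MethodName Rename -Arguments @{ newName = $partition.VolumeLabel }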

We provided the VolumeLabel property returned by the Get-CimAssociatedInstance cmdlet as the value for the newName parameter, effectively solving the challenge.

After invoking the method, we can see the name of the cluster disk is immediately updated in Failover Cluster Manager:

[Screenshot: Failover Cluster Manager showing the renamed cluster disk]

When working with multiple cluster disks, we can use a foreach loop to invoke the Rename method for every disk where the name of the cluster resource does not match the name of the underlying disk volume:
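# A sketch, assuming one partition per physical disk resource
$resources = Get-CimInstance -Namespace root\MSCluster -ClassName MSCluster_Resource -Filter "Type='Physical Disk'"

foreach ($resource in $resources) {
    $partition = Get-CimAssociatedInstance -InputObject $resource -ResultClassName MSCluster_DiskPartition

    if ($partition -and $resource.Name -ne $partition.VolumeLabel) {
        Invoke-CimMethod -InputObject $resource -MethodName Rename -Arguments @{ newName = $partition.VolumeLabel }
    }
}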


Get-PrinterDriver driver version

Windows Server 2012 introduced the Print Management cmdlets in Windows PowerShell, which are also available on Windows 8 if you install the Remote Server Administration Tools.

What I want to show in this article is a challenge related to version information when working with the Get-PrinterDriver cmdlet.

Let us have a look at the default output:

[Screenshot: the default output of Get-PrinterDriver, including the MajorVersion column]

With an IT professional's mindset, the MajorVersion property shown in the default output would probably be the printer driver's version, right?

Let us have a look at the same printer drivers using the Print Management MMC Console:

[Screenshot: the same printer drivers in the Print Management MMC Console, showing the Driver Version column]

If we compare the two outputs, an educated guess would tell us that the MajorVersion property in the output of Get-PrinterDriver is actually the driver type (for example “Type 3 – User Mode”).

So how can we get the "Driver Version" value shown in the Print Management MMC Console using PowerShell? Let us use Get-Member to explore what properties are available on an object produced by Get-PrinterDriver:

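Get-PrinterDriver | Get-Member -MemberType Property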

It seems like the DriverVersion property is what we want; let us try:

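Get-PrinterDriver | Select-Object -Property Name, DriverVersion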

This does not look very promising. The data we want is there, but not in a human-readable format. The data is in uint64 format and needs to be converted in order to see the same values as we get in the Print Management MMC Console.

We can find an explanation on how the data is represented in this article on MSDN:

[Excerpt from the MSDN article describing how the 64-bit driver version value is represented]

By using Select-Object with a calculated property, we can convert the DriverVersion value into the same format as the Print Management MMC Console:

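# A sketch: split the 64-bit value into four 16-bit words (major.minor.build.revision),
# assuming the packing described in the MSDN article (the -shr operator requires PowerShell 3.0)
Get-PrinterDriver | Select-Object -Property Name, @{
    Name       = 'DriverVersion'
    Expression = {
        $version = $_.DriverVersion
        '{0}.{1}.{2}.{3}' -f (($version -shr 48) -band 0xFFFF),
                             (($version -shr 32) -band 0xFFFF),
                             (($version -shr 16) -band 0xFFFF),
                             ($version -band 0xFFFF)
    }
}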

Thanks to PowerShell MVP Keith Hill for assisting with the conversion process.

I have also filed a suggestion on Microsoft Connect, proposing that more user-friendly driver version information should be available by default on the objects created by the Get-PrinterDriver cmdlet.

If you find the need to provide feedback to Microsoft, whether it is bug reports or feature suggestions, you can find more information on how to do this in a previous article I have written.

Automating Microsoft Lync using Windows PowerShell

Microsoft Lync is the client for Microsoft's unified communications platform, Lync Server. The Lync client API includes the Microsoft Lync Model API and the Microsoft Lync Extensibility API, which are primarily targeted at developers building custom Lync applications or integrations with line-of-business applications. The API is available in the Lync SDK (Software Development Kit); the SDK for the latest version of the Lync client is available here. Since the SDK is .NET-based, it is possible to leverage Windows PowerShell to access it. When the SDK is installed, you simply import the assembly Microsoft.Lync.Model.dll as a binary module using Import-Module:

Import-Module -Name "C:\Program Files (x86)\Microsoft Office\Office15\LyncSDK\Assemblies\Desktop\Microsoft.Lync.Model.dll"

In the Lync 2010 SDK documentation there is a great introduction to working with the Lync SDK from PowerShell, which also applies to the Lync 2013 SDK. See the article PowerShell Scripting Lync 2010 SDK Using Lync Model API.

In order to have a practical task to solve, I decided to try to configure the availability for the Lync client. Typically I want to configure my availability to “Available” when I get to work in the morning, and “Off Work” when I leave at the end of the day (my work computer is always powered on). I often forget to change the status, so I decided to use the new Scheduled Jobs functionality introduced in Windows PowerShell 3.0 in order to schedule the configuration of my Lync availability.

I have created a PowerShell function, Publish-LyncContactInformation, which makes it possible to configure Lync availability, location and personal note. A minimal sketch of the function is shown below; the SDK path assumes a default installation, and the availability values (other than "Available") and enum member names should be verified against the documentation referenced later in this article:
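function Publish-LyncContactInformation {
    param (
        [ValidateSet('Available', 'Busy', 'Do Not Disturb', 'Be Right Back', 'Away', 'Off Work')]
        [string]$Availability,
        [string]$ActivityId,
        [string]$PersonalNote,
        [string]$Location
    )

    Import-Module -Name 'C:\Program Files (x86)\Microsoft Office\Office15\LyncSDK\Assemblies\Desktop\Microsoft.Lync.Model.dll'

    $client = [Microsoft.Lync.Model.LyncClient]::GetClient()
    $contactInfo = New-Object 'System.Collections.Generic.Dictionary[Microsoft.Lync.Model.PublishableContactInformationType, object]'

    # Map the friendly string to the integer value the API expects ("Available" is 3000;
    # the remaining mappings are omitted in this sketch)
    $availabilityId = switch ($Availability) {
        'Available' { 3000 }
        'Off Work'  { 15500 }   # assumed value; "Off Work" also requires -ActivityId off-work
    }

    if ($availabilityId) { $contactInfo.Add([Microsoft.Lync.Model.PublishableContactInformationType]::Availability, $availabilityId) }
    if ($ActivityId)     { $contactInfo.Add([Microsoft.Lync.Model.PublishableContactInformationType]::ActivityId, $ActivityId) }
    if ($PersonalNote)   { $contactInfo.Add([Microsoft.Lync.Model.PublishableContactInformationType]::PersonalNote, $PersonalNote) }
    if ($Location)       { $contactInfo.Add([Microsoft.Lync.Model.PublishableContactInformationType]::LocationName, $Location) }

    $publication = $client.Self.BeginPublishContactInformation($contactInfo, $null, $null)
    $client.Self.EndPublishContactInformation($publication)
}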

For the Availability parameter, the ValidateSet parameter validation attribute is used to validate the supplied value. This also gives us IntelliSense in the PowerShell ISE:

[Screenshot: IntelliSense in the PowerShell ISE showing the valid Availability values]

Availability is provided as an integer value; the possible values are available here. To make the function more user-friendly, a switch statement is used to map string values to the correct integer values. This makes it possible for the user to supply "Available" instead of "3000" as the parameter value.

What I could not find a value for was the "Off Work" status. It turns out that the Activity Id must also be configured in order to set the availability to "Off Work" (thanks to Jens Trier Rasmussen for helping me with this one); more info here. Example usage: Publish-LyncContactInformation -Availability "Off Work" -ActivityId off-work

There are a number of presence items which can be configured; you can find more info here. In addition to the Availability and Activity Id, I also included Personal Note and Location as parameters. It is also possible to add more items if needed, for example Photo URL.

Several examples of how to use the function are included in the help; run Get-Help Publish-LyncContactInformation -Examples to view them:

[Screenshot: the examples section of the function help]

In example 5, a function is used to set the personal note in Lync. You can find the function here. After running the command in example 5, my Lync client presence looked like this:

[Screenshot: my Lync client presence after running the command in example 5]

Using Register-ScheduledJob, I have configured the function to run at 08:00 in the morning and 16:00 in the afternoon. A sketch of the morning job, assuming the function has been saved to a script file (the path is hypothetical):

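$trigger = New-JobTrigger -Daily -At '08:00'

Register-ScheduledJob -Name 'Publish-LyncContactInformation - Available' -Trigger $trigger -ScriptBlock {
    . 'C:\Scripts\Publish-LyncContactInformation.ps1'   # dot-source the function first
    Publish-LyncContactInformation -Availability 'Available'
}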

You can find an example on how to schedule the Publish-LyncContactInformation function here.

There are a couple of gotchas I would like to mention at the end:

#1: When using Register-ScheduledJob, the task created in \Microsoft\Windows\PowerShell\ScheduledJobs in the Task Scheduler is configured with the option “Run whether the user is logged on or not”. In order for [Microsoft.Lync.Model.LyncClient]::GetClient() to work properly from the scheduled job, the option must be set to “Run only when user is logged on”:

[Screenshot: the "Run only when user is logged on" option in Task Scheduler]

I have not found a way to configure this option using the cmdlets in the PSScheduledJob module, so I configured it manually from the Task Scheduler. I will update this article when I have more information.

Update 2013-08-09: Using the Set-ScheduledTask cmdlet from the ScheduledTasks module works fine:

$principal = New-ScheduledTaskPrincipal -LogonType Interactive -UserId "$($env:USERDOMAIN)\$($env:USERNAME)"
Set-ScheduledTask -TaskName "\Microsoft\Windows\PowerShell\ScheduledJobs\Publish-LyncContactInformation - Off Work" -Principal $principal
Set-ScheduledTask -TaskName "\Microsoft\Windows\PowerShell\ScheduledJobs\Publish-LyncContactInformation - Available" -Principal $principal

The above will configure the “Run only when user is logged on” option for the two scheduled jobs in the examples.

about_Scheduled_Jobs states the following regarding scheduled jobs and scheduled tasks:

NOTE: You can view and manage Windows PowerShell scheduled jobs in Task Scheduler, but the Windows PowerShell job and Scheduled Job cmdlets work only on scheduled jobs that are created in Windows PowerShell.

Based on this, my understanding is that modifying a scheduled task created by Register-ScheduledJob using Set-ScheduledTask is supported.

#2: When running the Lync SDK setup file I encountered the following error:

[Screenshot: the error message from the Lync SDK setup]

In order to avoid having to install Visual Studio, extract the exe file using, for example, 7-Zip. You can then install the Lync SDK by running the MSI file from the extracted folder.

Automating the System Center Configuration Manager 2012 client

When working with the Server Core installation of Windows Server, most GUI applications do not work. This is expected, since the dependencies for those GUIs are not present on Server Core. One such example is Software Center, which is part of the System Center 2012 Configuration Manager client.

When trying to launch Software Center (C:\Windows\CCM\SCClient.exe) on Server Core, you might get an error such as "SCClient has stopped working":

[Screenshot: the "SCClient has stopped working" error dialog]

For the most part you won't need to launch Software Center on a Server Core installation of Windows Server. I recently worked with software updates on Windows Server 2008 R2 Hyper-V servers running Server Core. Although the WSUS integration in System Center Virtual Machine Manager might be a better option for working with Windows updates on Hyper-V, System Center Configuration Manager 2012 was used in this particular scenario. Since Hyper-V servers need to be in maintenance mode before maintenance work such as Windows patching, triggering the installation of new software updates in the SCCM client needed to be performed manually in the scenario I worked with.

The first thing I tried was the UDA.CCMUpdatesDeployment COM object, which Boe Prox has written an excellent article about. However, this COM object is part of the System Center Configuration Manager 2007 client and is not available in the System Center Configuration Manager 2012 client.

It turns out there is a new WMI namespace in System Center Configuration Manager 2012, ROOT\ccm\ClientSDK, which provides similar capabilities. If you have not worked much with WMI before, exploring this namespace might be a challenge. There is a project on CodePlex which makes this a bit easier, the Client Center for Configuration Manager project:

[Screenshot: the Client Center for Configuration Manager application]

This is a WPF GUI application for working with ROOT\ccm\ClientSDK remotely, using WinRM. A nice feature of the GUI is that the underlying PowerShell commands are shown at the bottom:

[Screenshot: the underlying PowerShell command shown at the bottom of the GUI]

When clicking on "Missing Updates", the PowerShell command actually running on the remote computer is Get-WmiObject -Query "SELECT * FROM CCM_SoftwareUpdate" -Namespace "ROOT\ccm\ClientSDK".

When you click Install all, you will see that the InstallUpdates method is used to trigger the installation of missing updates:

[Screenshot: the InstallUpdates method shown in the GUI when clicking Install all]

Based on this information, we can create PowerShell functions to make it easier to work with these commands. Here are a few basic sketches (the function names are my own, and the InstallUpdates method is assumed to be exposed by the CCM_SoftwareUpdatesManager class):
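function Get-SCCMSoftwareUpdate {
    param ([string]$ComputerName = $env:COMPUTERNAME)

    Get-WmiObject -ComputerName $ComputerName -Namespace 'ROOT\ccm\ClientSDK' -Query 'SELECT * FROM CCM_SoftwareUpdate'
}

function Install-SCCMSoftwareUpdate {
    param ([string]$ComputerName = $env:COMPUTERNAME)

    # InstallUpdates expects the missing updates as a single array argument
    $updates = @(Get-SCCMSoftwareUpdate -ComputerName $ComputerName)
    if ($updates.Count -gt 0) {
        Invoke-WmiMethod -ComputerName $ComputerName -Namespace 'ROOT\ccm\ClientSDK' -Class CCM_SoftwareUpdatesManager -Name InstallUpdates -ArgumentList (,$updates)
    }
}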

These functions can then be used on computers with the System Center Configuration Manager 2012 client installed (SRV01 below is a hypothetical computer name):

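Get-SCCMSoftwareUpdate -ComputerName SRV01 | Select-Object -Property Name, EvaluationState
Install-SCCMSoftwareUpdate -ComputerName SRV01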

Ideas for further work:

  • Use the CIM cmdlets available in PowerShell 3.0 and above to create functions that will work with either DCOM (legacy WMI) or WSMan (PowerShell Remoting). You can find excellent examples in some of the 2013 Scripting Games entries. One example is the Get-SystemInventory function from Event 2.
  • Use PowerShell or the Configuration Manager Trace Log Tool to track installation status from SCCM log files such as C:\Windows\CCM\Logs\WUAHandler.log:

[Screenshot: WUAHandler.log opened in the Configuration Manager Trace Log Tool]

  • Use the Get-PendingReboot function to track whether a reboot is required after installing updates:

[Screenshot: output from the Get-PendingReboot function]

2013 Scripting Games – Learning points from Event 6

The 2013 Scripting Games started April 25th, and event 6 is now open for voting. This is the last event, and I must say I have learned a lot by reviewing scripts from all of the events. One of the great things about PowerShell is that there are many ways to perform a task, and seeing how others solve it is a great learning opportunity. As I did in the previous events, I will pick out two entries from each event and provide some learning points to highlight both good and bad practices.

Before reading my comments, you might want to download the event instructions:

The first entry I will comment on is from Beginner 6:

The #requires statement is used to ensure that version 3.0 of PowerShell is being used when executing the script. This is good, since functionality only available in version 3.0, such as the DHCP Server module, is leveraged.

  • The #requires statement could also have been used to ensure that the DHCP Server module is present on the system: #Requires -Modules DhcpServer. This would have saved some code, compared to using an if statement to check whether the module is present.
  • The passwords could have been hardcoded, but for security reasons it is a good idea to prompt the user.
  • The event did not specify that the DHCP server is running Windows Server 2012, but I think it is OK to assume so in this event, so that the DHCP Server module can be leveraged to query the server. A foreach loop with error handling is used to obtain the IP address for each computer.
  • The information obtained from the previous foreach loop is saved to an array ($ipAddrs), which a new foreach statement loops through in order to add the computers to the domain. Using Add-Computer is OK, but there is a gotcha not everyone thought of: the cmdlet uses WMI (the JoinDomainOrWorkgroup method of the Win32_ComputerSystem class), which is not available by default. Since the new computers are running Windows Server 2012, we can rely on PowerShell Remoting being enabled by default. One approach could then be to use Invoke-Command and pass the Add-Computer cmdlet to Invoke-Command's ScriptBlock parameter, as sketched below.
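
A sketch of that approach (the domain name and credential handling are illustrative):

$localCred  = Get-Credential -Message 'Local administrator on the new computers'
$domainCred = Get-Credential -Message 'Domain join credentials'

Invoke-Command -ComputerName $ipAddrs -Credential $localCred -ScriptBlock {
    Add-Computer -DomainName 'contoso.com' -Credential $using:domainCred -Restart
}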

The next submission I will comment on is from Advanced 6:

  • Comment-based help is leveraged to provide a synopsis, description, help for all parameters, input/output type information, notes and a link to the event.
  • The #requires statement is used to ensure that the PowerShell version is at least 3.0, and that the DHCP Server module is available on the system. This is very good.
  • The parameters are well defined. For the OU parameter, input validation is performed within the function; I would rather have leveraged ValidateScript for the validation, as is done for the MacAddress parameter.
  • PowerShell Workflow is leveraged, but its remoting capabilities are not. Instead, the Windows Firewall is disabled in order to use the Restart-Computer cmdlet with its default protocol (DCOM). It is possible to specify WS-Man as the protocol for the Restart-Computer cmdlet (-Protocol WSMan), and disabling the firewall is bad in terms of security. Note that if using PowerShell Remoting, we would also need to modify the TrustedHosts list in order to communicate with a workgroup computer. Some entries used * as the value for the TrustedHosts list; a better approach from a security standpoint would be to add the IP address of the computer we are connecting to, and to remove it after the remote commands have executed. Both techniques are sketched below.
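
A sketch (the $ip variable is illustrative):

# Restart over WS-Man instead of disabling the firewall for DCOM
Restart-Computer -ComputerName $ip -Protocol WSMan -Wait -For PowerShell -Force

# Scope TrustedHosts to the specific IP address instead of *, and clean up afterwards
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value $ip -Force
# ... run the remote commands ...
Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value '' -Force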

PS: Mike F Robbins has a very good article showing how to use PowerShell Workflow in this event.

Thank you to everyone who participated, I already look forward to the next Scripting Games. For those of you who did not participate this time, using the events as tasks for yourself (or your local PowerShell User Group) is a great learning opportunity.

2013 Scripting Games – Learning points from Event 4

The 2013 Scripting Games started April 25th, and event 4 is now open for voting. As I did in the previous events, I will pick out two entries from each event and provide some learning points to highlight both good and bad practices.

Before reading my comments, you might want to download the event instructions:

The first entry I will comment on is from Beginner 4:

  • Comment-based help is leveraged to provide a synopsis, description and basic usage. Comment-based help and the Active Directory module, which is leveraged, were introduced in PowerShell 2.0; thus, #requires -version 2.0 could have been used to prevent users running this in PowerShell 1.0 from getting error messages.
  • HTML formatting (head, pre- and post-content) as well as a couple of other variables are defined (note that I have updated my previous article with more info on working with HTML in PowerShell). I would also suggest specifying the search base as a variable rather than as a hard-coded path in the script, although this is specifically stated in the synopsis.
  • The time stamp for when the report was created is included at the bottom of the report, as required by the event instructions. In addition, the username of the user generating the report is added, which is nice.
  • The actual pipeline generating the report is a one-liner, nicely formatted on multiple lines for readability.
  • The lastLogon property is used to determine when the users last logged on to the domain. There are several ways to retrieve this information, and not all of them are accurate. For example, the lastLogonTimestamp property will be 9-14 days behind the current date; for more information, see this article on the "Ask the Directory Services Team" blog. To get a more accurate time stamp, you might want to query all domain controllers; see this TechNet article for one example of how to accomplish this. Although I would not withdraw any points for not catching this "gotcha", I would give a bonus point to those who did.

 

The second entry I have decided to comment on is from Advanced 4:

  • Leveraging the #requires statement is a good practice; however, it should be separated into two lines here. When the script is run as-is in PowerShell 2.0, the following error is returned: "Cannot process the "#requires" statement at line 1 because it is not in the correct format." If separated into two lines, it will work in both 2.0 and 3.0:

#Requires -Version 3.0
#Requires -Module ActiveDirectory

  • Comment-based help is leveraged in an excellent way, providing both help for all parameters and several examples.
  • [CmdletBinding()] is defined in order to use [Parameter()] decorators to validate input. Although there is nothing wrong with parameterizing this as a script, I would consider turning it into a function (a personal preference).
  • The FilePath parameter is marked as mandatory, as well as positional. This is perfectly fine, but another option would be to default to a path using an environment variable (for example $env:USERPROFILE\Documents\Report.html). The requirement of validating the filename extension (htm or html) is accomplished using a regular expression inside a ValidateScript validation (see the sketch after this list).
  • The Count parameter is configured with a default value of 20, as required. In addition, ValidateRange is leveraged to make sure the number specified is valid. This is not strictly necessary, as defining the parameter as an integer would perform the necessary validation:

Cannot convert value "2222222222222222" to type "System.Int32". Error: "Value was either too large or too small for an Int32."

  • Two additional parameters are supplied: one for overwriting the report file if it exists (-Force) and one for returning the newly created HTML file (-PassThru). These two parameters provide additional functionality not required by the instructions.
  • The parameter values for ConvertTo-Html are quite large, and providing them as a hash table and leveraging splatting makes the code more readable. This also makes it possible to collapse/expand the HTML code (the hash table) in the PowerShell ISE.
  • The PSCustomObject type accelerator is used to create new objects. There are many ways to create new objects in PowerShell 3.0, but this is the easiest/preferred way in my opinion. Also, the new [ordered] type adapter in PowerShell 3.0 is leveraged to decide the order of the properties, rather than using Select-Object, which would actually create new objects (more overhead).
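
A sketch of the parameter techniques described above (names and values are illustrative):

function New-HtmlReport {
    [CmdletBinding()]
    param (
        [ValidateScript({ $_ -match '\.html?$' })]   # require a .htm or .html extension
        [string]$FilePath = "$env:USERPROFILE\Documents\Report.html",

        [ValidateRange(1, 10000)]
        [int]$Count = 20
    )

    # [PSCustomObject] with a hash table literal preserves property order in PowerShell 3.0
    [PSCustomObject]@{
        FilePath = $FilePath
        Count    = $Count
    }
}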

Like the previous events, there have been many great submissions in event 4. Keep up the good work, and good luck with event 5 – “The Logfile Labyrinth”.

2013 Scripting Games – Learning points from Event 3

The 2013 Scripting Games started April 25th, and event 3 is now open for voting. As I did in the previous events, I will pick out two entries from each event and provide some learning points to highlight both good and bad practices.

Before reading my comments, you might want to download the event instructions:

The first entry I will comment on is from Beginner 3:

  • Two pieces of information are required for this event, which we typically find in WMI. Since the instructions clearly stated that both CIM and WMI are available, and we do not have to think about authentication or firewalls, I find leveraging CIM a good practice. CIM is the newer technology, based on standards-based management, and it requires just one open firewall port, compared to WMI, which requires a lot more. I would not give fewer points if WMI was leveraged in this event, though, since this is also a valid option per the instructions. In the real world, however, we would typically need to think about authentication and firewalls in addition to other considerations, such as legacy systems. If Windows 2000 was still present, CIM would not have worked against WMI on that system without specifying the DCOM protocol for the CIM session.
  • Filtering is performed on the server side, which speeds up the query. A bad practice would have been skipping the -Filter parameter and performing the filtering on the client side using Where-Object.
  • Extracting the data required by the instructions is done using Select-Object. What I also like is that formatting the data is done in the same operation: the -f operator is used to format the data, and PowerShell's built-in constants for converting values to MB and GB are leveraged.
  • PowerShell has built-in capabilities to convert objects to HTML, in the form of the ConvertTo-Html cmdlet. This is leveraged in a way that meets the goals of the event instructions: the 3 properties needed are specified, and a header and a footer are added with the required data. However, a bonus point would have been added if the computer name had been included in the header, as specified in the event instructions. This is an example of how important it is to read the instructions carefully; I would recommend reading through the instructions one more time before submitting your entry, as you might miss things like this.
  • The FilePath parameter is omitted on Out-File. This works fine since the parameter is positional. However, I prefer to use full parameter names when sharing scripts or even one-liners with others.
  • The instructions required a one-liner. Many people take this literally and place everything on one line, which is very bad for readability. This submission leverages the pipeline symbol, the comma and the backtick to continue on the next line, which still fully counts as a one-liner. However, the semi-colon, which some people used, does not qualify for a one-liner; it is simply a statement separator, and it will break the pipeline. A sketch of such a one-liner follows this list.
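
To illustrate, a sketch of a one-liner formatted across multiple lines (class, properties and path are illustrative):

Get-CimInstance -ClassName Win32_LogicalDisk -Filter 'DriveType=3' |
    Select-Object -Property DeviceID,
                  @{ Name = 'Size (GB)';      Expression = { '{0:N2}' -f ($_.Size / 1GB) } },
                  @{ Name = 'FreeSpace (GB)'; Expression = { '{0:N2}' -f ($_.FreeSpace / 1GB) } } |
    ConvertTo-Html -PreContent "<h1>Disk report for $env:COMPUTERNAME</h1>" -PostContent "<p>Generated $(Get-Date)</p>" |
    Out-File -FilePath C:\Reports\DiskReport.html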

The second entry I have decided to comment on is from Advanced 3:

This entry does produce a nice-looking HTML report with the required data. However, there are some bad practices I would like to highlight as learning points:

  • A ping function is created. This is not required, as PowerShell has a Test-Connection cmdlet providing the same functionality.
  • The naming of the function is not very descriptive for the end user sitting at the helpdesk.
  • The HTML code is created in a here-string rather than by leveraging PowerShell's ConvertTo-Html cmdlet. This makes it harder than necessary.
  • Aliases are used. This is OK when working interactively and you want to type fast, but not in a script or function you will share with others (think readability).
  • Write-Host is used (don't kill puppies); use Write-Verbose to output additional information. Alternatively, output an object even if the computer could not be contacted: you could add a "Connectivity" property to inform the user whether the computer was reachable or not, and add this to the report.

There have been many great suggestions in this event. Keep up the good work, and good luck with event 4 – “An Auditing Adventure”.

Update 2013-05-21: The good thing about the Scripting Games is that everyone learns something, including the judges. I haven't been using here-strings for creating HTML, but as Rob Campbell (@mjolinor) stated on Twitter: "Here strings are an AWESOME tool for creating HTML or XML documents." -Jeffrey Snover (link). Of course, this implies that you know HTML. On the topic of HTML, I would also like to recommend Don Jones' free ebook "Creating HTML Reports in PowerShell". Available with the book is also a PowerShell module for working with HTML. In addition, there is a very good session recording from the 2013 PowerShell Summit by Jeffrey Hicks called "Creating HTML Reports With style".

2013 Scripting Games – Learning points from Event 2

The 2013 Scripting Games started April 25th, and event 2 is now open for voting. As I did in event 1, I will pick out two entries from each event and provide some learning points to highlight both good and bad practices.

Before reading my comments, you might want to download the event instructions:

The first entry I will comment on is from Beginner 2:

Note: I am testing Gist as a way to share code snippets. I may need to edit the article after posting, please refresh your browser or go to the Gist URL manually if you do not see the code.

  • I find using [System.Net.Dns]::GetHostEntry unnecessary in this case, as the names of the computers will be available from the WMI calls being used.
  • Whether to use the foreach statement or the ForEach-Object cmdlet is not that relevant in this case, as saving the contents of a small text file into memory does not consume many resources. However, using ForEach-Object two times is unnecessary; there is nothing that prevents wrapping everything in one foreach loop.
  • The event instructions did not require you to provide alternate credentials. Since it is done anyway in this submission, I would rather save the credentials into a variable once and re-use that variable inside the foreach loop, as sketched after this list. If not, the user will be prompted for credentials two times for each computer in the text file.
  • Re-writing the property names in the last line to make them more user-friendly is good, but there is some room for improvement in this line as well. Consider using Select-Object instead of Format-Table; this leaves it up to the user what to do with the output, for example exporting it to a CSV file using Export-Csv or creating an HTML report using ConvertTo-Html. Also consider breaking the line into multiple lines to make it easier to read, and do not use aliases (ft in this case) when sharing a script, as this makes it harder to read. Aliases are good, though, when you work interactively in the shell and want to type as quickly as possible.
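
A sketch of the credential handling (the file name and WMI class are illustrative):

$credential = Get-Credential

foreach ($computer in (Get-Content -Path .\computers.txt)) {
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computer -Credential $credential
}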

The second entry I have decided to comment on is from Advanced 2:

  • The use of the #requires statement is good, since this function leverages new functionality in PowerShell 3.0 which would have caused errors in previous versions of PowerShell.
  • Comment-based help is leveraged in an excellent way, showing a very detailed description, help for each parameter, several examples, and what kind of object the function accepts as input and produces as output. .PARAMETER ShowProtocol is missing, although this parameter is documented in the description.
  • The function accepts input from the pipeline by using [Parameter(ValueFromPipeline=$True)], which makes it possible to pipe the contents of a txt-file to the function as shown in one of the examples.
  • A new CIM session option object is created in the begin block, defining DCOM as the protocol. This is a good example of how to use the begin block, which is typically used for operations that need to be performed once for all of the objects that will be processed in the process block. One of the most common examples of using the begin block is connecting to a database server, but this function shows another usage scenario; a sketch of the pattern follows this list.
  • The new CIM cmdlets in PowerShell 3.0 are leveraged when WSMAN 3.0 is available; if not, the legacy DCOM protocol is used. This way, the function will work against both newer systems like Windows Server 2012 and older systems like Windows 2000 Server. As stated in the description: "The Cim cmdlets have been used to gain maximum efficiency so only one connection will be made to each computer."
  • Error handling is implemented, ensuring that even if the function is unable to connect to a computer, an object is returned stating that it was not possible to connect to the computer.
  • Splatting is leveraged for passing parameters, see examples on line 71 and line 95.
  • The function outputs a custom object rather than text or formatting objects, which leaves it up to the user of the function to decide what to do with the data. The new [ordered] syntax in PowerShell 3.0 is leveraged when creating the hash table which is passed to New-Object, in order to control the order of the properties of the custom object.
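
A sketch of the begin/process pattern described above (the function name and class are illustrative):

function Get-SystemInfo {
    [CmdletBinding()]
    param (
        [Parameter(ValueFromPipeline = $true)]
        [string[]]$ComputerName = $env:COMPUTERNAME
    )

    begin {
        # Created once, reused for every computer processed below
        $dcomOption = New-CimSessionOption -Protocol Dcom
    }

    process {
        foreach ($computer in $ComputerName) {
            $session = New-CimSession -ComputerName $computer -SessionOption $dcomOption
            Get-CimInstance -CimSession $session -ClassName Win32_OperatingSystem
            Remove-CimSession -CimSession $session
        }
    }
}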

It has been fun to read through many of the great submissions made to event 2. Keep up the good work, and good luck with event 3 which starts May 9th.

2013 Scripting Games – Learning points from Event 1

The 2013 Scripting Games started April 25th, and event 1 is now open for voting. As I am participating as a judge this year, I will pick out two entries from each event and provide some learning points to highlight both good and bad practices.

The first entry I have decided to comment on is from Beginner 1: An Archival Atrocity.

Learning Points

  • Add comment-based help. This makes it easier for others to leverage the script, and you also get the benefit of using Get-Help on your function or script.
  • The instructions did not require any output; consider using Write-Verbose instead of Write-Output. This leaves it up to the user of the script to decide if any output is wanted, by editing the $VerbosePreference preference variable (or, for an advanced script or function, using the -Verbose switch); see the sketch after this list.
  • Avoid using aliases in scripts (Echo and Mkdir in this case), although they are great for interactive use when you want to type quickly.
  • When sharing a script, I also prefer to use full parameter names. For example Get-ChildItem -Path $RootPath instead of Get-ChildItem $RootPath.
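
A sketch of the Write-Verbose approach (the function name and path are illustrative):

function Move-OldLogFile {
    [CmdletBinding()]
    param ([string]$RootPath = 'C:\Application\Log')

    foreach ($file in Get-ChildItem -Path $RootPath -Filter *.log) {
        Write-Verbose -Message "Processing $($file.FullName)"
    }
}

# Output is only shown when requested
Move-OldLogFile -Verbose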

The second entry is from Advanced 1: An Archival Atrocity.

Learning Points

  • The use of a function makes this tool easier to reuse without the need to call a script each time.
  • The use of an advanced function makes it work more like an actual cmdlet, since you can do more advanced things such as marking a parameter as mandatory and validating the arguments. For more information, see Get-Help about_Functions_Advanced.
  • This is also a good example of leveraging comment-based help. All parameters are properly described, and a few examples are provided. Often the first thing people look at before reading the actual help is the examples, so it is important to provide at least a couple of them.
  • The use of parameter validation is excellent, as this simplifies the function and leverages functionality PowerShell provides "for free".
  • Default values are configured for the LogPath and Period parameters. This means that these values will be used if the user does not specify the parameters. It is also easy to change the values if needed, which was a requirement for this task, as sketched below.
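
A sketch of such defaults (the parameter names are from the entry; the values are illustrative):

param (
    [string]$LogPath = 'C:\Application\Log',
    [int]$Period = 90
)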

There have been many great submissions for the first event. Keep up the good work, and good luck with event 2 which starts May 2.