Get-PrinterDriver driver version

Windows Server 2012 introduced the Print Management Cmdlets in Windows PowerShell, which are also available on Windows 8 if you install the Remote Server Administration Tools.

What I want to show in this article is a challenge related to version information when working with the Get-PrinterDriver cmdlet.

Let us have a look at the default output:

image

With an IT Professional's mindset, the MajorVersion property shown in the default output would probably be the printer driver's version, right?

Let us have a look at the same printer drivers using the Print Management MMC Console:

image

If we compare the two outputs, an educated guess would tell us that the MajorVersion property in the output of Get-PrinterDriver is actually the driver type (for example “Type 3 – User Mode”).

So how can we get the “Driver Version” property shown in the Print Management MMC Console in PowerShell? Let us use Get-Member to explore what properties are available on an object produced by Get-PrinterDriver:

image

It seems like the “DriverVersion” property is what we want, so let us try:

image

This does not look very promising. The data we want is there, but not in a human-readable format. The data is in uint64 format and needs to be converted in order to see the same values as in the Print Management MMC Console.

We can find an explanation of how the data is represented in this article on MSDN:

image

By using Select-Object we can convert the DriverVersion value into the same format as the Print Management MMC Console:

image
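For reference, a minimal sketch of such a conversion is shown below. The bit shifting assumes the packed layout described in the MSDN article, where each 16 bits of the 64-bit value represents one part of the version number; the calculated property name is just an illustration.

Get-PrinterDriver | Select-Object -Property Name, @{
    Name       = 'DriverVersion'
    Expression = {
        # Split the 64-bit value into four 16-bit parts (w.x.y.z)
        $version = $_.DriverVersion
        '{0}.{1}.{2}.{3}' -f (($version -shr 48) -band 0xFFFF),
                             (($version -shr 32) -band 0xFFFF),
                             (($version -shr 16) -band 0xFFFF),
                             ($version -band 0xFFFF)
    }
}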

Thanks to PowerShell MVP Keith Hill for assisting with the conversion process.

I have also filed a suggestion on Microsoft Connect, suggesting that more user-friendly driver version information should be available by default on the objects created by the Get-PrinterDriver cmdlet.

If you find the need to provide feedback to Microsoft, whether it is bug reports or feature suggestions, you can find more information on how to do this in a previous article I have written.

Automating Microsoft Lync using Windows PowerShell

Microsoft Lync is the client for Microsoft's unified communications platform, Lync Server. The Lync client API includes the Microsoft Lync Model API and the Microsoft Lync Extensibility API, which are primarily targeted at developers who are building custom Lync applications or integrations with line-of-business applications. The API is available in the Lync SDK (Software Development Kit); the SDK for the latest version of the Lync client is available here. Since the SDK is .NET-based, it is possible to leverage Windows PowerShell to access it. When the SDK is installed, you simply import the assembly Microsoft.Lync.Model.dll as a binary module using Import-Module:

Import-Module -Name "C:\Program Files (x86)\Microsoft Office\Office15\LyncSDK\Assemblies\Desktop\Microsoft.Lync.Model.dll"

In the Lync 2010 SDK documentation there is a great introduction to working with the Lync SDK from PowerShell, which also applies to the Lync 2013 SDK. See the article PowerShell Scripting Lync 2010 SDK Using Lync Model API.

In order to have a practical task to solve, I decided to try to configure the availability for the Lync client. Typically I want to configure my availability to “Available” when I get to work in the morning, and “Off Work” when I leave at the end of the day (my work computer is always powered on). I often forget to change the status, so I decided to use the new Scheduled Jobs functionality introduced in Windows PowerShell 3.0 in order to schedule the configuration of my Lync availability.

I have created a PowerShell function, Publish-LyncContactInformation, which makes it possible to configure Lync availability, location and personal note:
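A condensed sketch of the core of such a function is shown below. The exact parameter set, the availability mappings beyond “Available” (which maps to 3000) and the publishing calls in the published version may differ, so treat this as an outline rather than the finished function:

function Publish-LyncContactInformation {
    param (
        [ValidateSet('Available','Busy','Away','Be Right Back','Do Not Disturb','Off Work')]
        [string]$Availability,
        [string]$ActivityId,
        [string]$PersonalNote,
        [string]$Location
    )
    # Map the friendly string to the integer value expected by the Lync Model API.
    # Only the documented "Available" mapping is shown here; the rest are omitted.
    $availabilityId = switch ($Availability) {
        'Available' { 3000 }
    }
    $client = [Microsoft.Lync.Model.LyncClient]::GetClient()
    $contactInfo = New-Object 'System.Collections.Generic.Dictionary[Microsoft.Lync.Model.PublishableContactInformationType, object]'
    if ($availabilityId) { $contactInfo.Add([Microsoft.Lync.Model.PublishableContactInformationType]::Availability, $availabilityId) }
    if ($ActivityId)     { $contactInfo.Add([Microsoft.Lync.Model.PublishableContactInformationType]::ActivityId, $ActivityId) }
    if ($PersonalNote)   { $contactInfo.Add([Microsoft.Lync.Model.PublishableContactInformationType]::PersonalNote, $PersonalNote) }
    if ($Location)       { $contactInfo.Add([Microsoft.Lync.Model.PublishableContactInformationType]::LocationName, $Location) }
    # Publish the collected presence items for the signed-in user.
    $publishData = $client.Self.BeginPublishContactInformation($contactInfo, $null, $null)
    $client.Self.EndPublishContactInformation($publishData)
}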

For the Availability parameter, the ValidateSet parameter validation attribute is used to validate the supplied value. This also gives us IntelliSense in the PowerShell ISE:

image

Availability is specified using integer values; the possible values are available here. To make it more user-friendly, a switch statement is used to map the string values to the correct integer value, making it possible for the user to supply “Available” instead of “3000” as the parameter value.

What I could not find a value for was the “Off Work” status. It turns out that the Activity Id must also be configured in order to set availability to “Off Work” (thanks to Jens Trier Rasmussen for helping me with this one); more info here. Example usage: Publish-LyncContactInformation -Availability “Off Work” -ActivityId off-work

There are a number of presence items which can be configured; you can find more info here. In addition to Availability and Activity Id, I also included Personal Note and Location as parameters. It is also possible to add more items if needed, for example Photo URL.

Several examples of how to use the function are included in the help; run Get-Help Publish-LyncContactInformation -Examples to view them:

image

In example 5, a function is used to set the personal note in Lync. You can find the function here. After running the command in example 5, my Lync client presence looked like this:

image

Using Register-ScheduledJob, I have configured the function to run at 08:00 in the morning and 16:00 in the afternoon:

image

You can find an example on how to schedule the Publish-LyncContactInformation function here.
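For reference, a minimal sketch of such a registration is shown below; the trigger times match the text, while the job names and the path to the script containing the function are assumptions:

# Morning: set availability to Available at 08:00
$morningTrigger = New-JobTrigger -Daily -At '08:00'
Register-ScheduledJob -Name 'Publish-LyncContactInformation - Available' -Trigger $morningTrigger -ScriptBlock {
    . 'C:\Scripts\Publish-LyncContactInformation.ps1'
    Publish-LyncContactInformation -Availability 'Available'
}

# Afternoon: set availability to Off Work at 16:00
$afternoonTrigger = New-JobTrigger -Daily -At '16:00'
Register-ScheduledJob -Name 'Publish-LyncContactInformation - Off Work' -Trigger $afternoonTrigger -ScriptBlock {
    . 'C:\Scripts\Publish-LyncContactInformation.ps1'
    Publish-LyncContactInformation -Availability 'Off Work' -ActivityId off-work
}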

There are a couple of gotchas I would like to mention at the end:

#1: When using Register-ScheduledJob, the task created in \Microsoft\Windows\PowerShell\ScheduledJobs in the Task Scheduler is configured with the option “Run whether the user is logged on or not”. In order for [Microsoft.Lync.Model.LyncClient]::GetClient() to work properly from the scheduled job, the option must be set to “Run only when user is logged on”:

image

I have not found any way to configure this option using the cmdlets in the PSScheduledJob module, so I configured it manually from Task Scheduler. I will update this article when I have more information.

Update 2013-08-09: Using the Set-ScheduledTask cmdlet from the ScheduledTasks module works fine:

$principal = New-ScheduledTaskPrincipal -LogonType Interactive -UserId "$($env:USERDOMAIN)\$($env:USERNAME)"
Set-ScheduledTask -TaskName "\Microsoft\Windows\PowerShell\ScheduledJobs\Publish-LyncContactInformation - Off Work" -Principal $principal
Set-ScheduledTask -TaskName "\Microsoft\Windows\PowerShell\ScheduledJobs\Publish-LyncContactInformation - Available" -Principal $principal

The above will configure the “Run only when user is logged on” option for the two scheduled jobs in the examples.

about_Scheduled_Jobs states the following regarding scheduled jobs and scheduled tasks:

NOTE: You can view and manage Windows PowerShell scheduled jobs in Task Scheduler, but the Windows PowerShell job and Scheduled Job cmdlets work only on scheduled jobs that are created in Windows PowerShell.

Based on this, my understanding is that modifying a scheduled task created by Register-ScheduledJob using Set-ScheduledTask is supported.

#2: When running the Lync SDK setup file I encountered the following error:

image

In order to avoid having to install Visual Studio, extract the exe file using, for example, 7-Zip. Then you can install the Lync SDK by running the MSI file from the extracted folder.

Browsing PowerShell Commands using a grid window

Get-CommandGridView is a PowerShell function written by PowerShell MVP Jeffery Hicks, which he blogged about here. I think this is a nice way to explore what is available in PowerShell modules and snap-ins. The function shows the available cmdlets in the specified module or snap-in in a grid window, displaying the properties Name, Noun, Verb, CommandType and ModuleName:

image

I think this is a great way to explore new modules/snap-ins, but I also thought it would be a good idea to leverage the new -PassThru switch on Out-GridView introduced in PowerShell 3.0 to let the user select commands to be sent to Get-Help -ShowWindow to further explore the help for interesting cmdlets/functions. I modified Jeff's original function to provide this functionality:
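The essence of the modification is sketched below; the parameter names and the default value for MaxHelpWindowCount are illustrative, and Jeff's original function also handles snap-ins:

function Get-CommandGridView {
    param (
        [string]$Module,
        [ValidateRange(1, 10)]
        [int]$MaxHelpWindowCount = 5
    )
    # Show the commands in a grid window and let the user pick one or more (-PassThru).
    $selected = Get-Command -Module $Module -CommandType Cmdlet, Function |
        Select-Object -Property Name, Noun, Verb, CommandType, ModuleName |
        Out-GridView -Title "Commands in $Module" -PassThru |
        Select-Object -First $MaxHelpWindowCount

    # Open a help window for each selected command.
    foreach ($command in $selected) {
        Get-Help -Name $command.Name -ShowWindow
    }
}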

Here is an example from running Get-CommandGridView -Module Hyper-V:

image

You can now select one or more cmdlets (using Shift or Ctrl + select) and then hit the OK button to bring up help windows for the selected cmdlets:

image

image

If no module is specified, all cmdlets and functions will be retrieved. That could potentially be thousands of cmdlets/functions, so in order to protect the user from opening too many help windows I added a validation attribute to the MaxHelpWindowCount parameter.

The functionality of the Get-CommandGridView function is similar to the Show Command Add-on introduced in PowerShell ISE 3.0, but I think it is more convenient when exploring new modules.

Here is a screenshot of the Show Command Add-on for those who have not seen it:

image

Provide bug reports and feature suggestions for Windows PowerShell

Windows PowerShell is software, and software in general will always have bugs as well as needs for new features and improvements. Microsoft has a feedback channel called Microsoft Connect where customers can report bugs and provide feature suggestions for their software. There is a dedicated Connect website for Windows PowerShell, connect.microsoft.com/PowerShell:

 

image

Many of the changes since Windows PowerShell 1.0 are based upon feedback from customers. To help Microsoft prioritize what is important to customers, voting is available for each submission:

image

There are also dedicated Connect programs for other Microsoft software, which you can find browsing the Connect Directory.

Personally, I have submitted three feature suggestions recently. While evaluating the new Desired State Configuration (DSC) feature in Windows PowerShell 4.0 in my lab environment, I found that some features I needed were not available the way I would like. For example, the DSC feature provides logging to Windows event logs (ETW), and I would find it very useful to be able to retrieve new items from these event logs in a PowerShell session as they arrive. You can find more information about this suggestion, as well as the other two I recently submitted, on the respective Connect submissions:

Using the Connect feedback channel provides you with a way to possibly influence new features in the product, and helps Microsoft understand what features customers want.

Another feedback opportunity you should know about for Windows PowerShell relates to the help system. Since version 3, the help system is updatable (Update-Help), which makes it possible for Microsoft to update the help files with more information as well as correct errors on a regular basis. To report errors or suggestions for the help system, you can use the e-mail alias write-help@microsoft.com. PowerShell MVP Thomas Lee has written a blog post where you can find more information.

Automating the System Center Configuration Manager 2012 client

When working with the Server Core installation of Windows Server, most GUI applications do not work. This is expected for most GUI applications, since their dependencies are not present on Server Core. One such example is Software Center, which is part of the System Center 2012 Configuration Manager client.

When trying to launch Software Center (C:\Windows\CCM\SCClient.exe) on Server Core, you might get an error such as “SCClient has stopped working”:

image

For the most part you won't need to launch Software Center on a Server Core installation of Windows Server. I recently worked with software updates on Windows Server 2008 R2 Hyper-V with Server Core. Although the WSUS integration in System Center Virtual Machine Manager might be a better option for working with Windows updates on Hyper-V, System Center Configuration Manager 2012 was used in this particular scenario. As Hyper-V servers need to be in maintenance mode before maintenance work such as Windows patching, triggering the installation of new software updates in the SCCM client needed to be performed manually in the scenario I worked with.

The first thing I tried was the UDA.CCMUpdatesDeployment COM object, which Boe Prox has written an excellent article about. However, this COM object is part of the System Center Configuration Manager 2007 client, and is not available in the System Center Configuration Manager 2012 client.

It turns out there is a new WMI namespace in System Center Configuration Manager 2012, the ROOT\ccm\ClientSDK, which provides similar capabilities. If you have not worked a lot with WMI before, exploring this namespace might be a challenge. There is a project on CodePlex to make this a bit easier, the Client Center for Configuration Manager project:

image

This is a WPF GUI application for working with ROOT\ccm\ClientSDK remotely, using WinRM. A nice feature of the GUI is that the underlying PowerShell commands are shown at the bottom:

image

When clicking on “Missing Updates”, the PowerShell command actually run on the remote computer is Get-WmiObject -Query "SELECT * FROM CCM_SoftwareUpdate" -Namespace "ROOT\ccm\ClientSDK".

When you click Install all, you will see that the InstallUpdates method is used to trigger installation of missing updates:

image

Based on this information, we can create PowerShell functions to make it easier to work with these commands. Here are a few basic examples:
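A couple of basic sketches along those lines are shown below; the function names are my own, and the InstallUpdates call follows the commonly used pattern for the CCM_SoftwareUpdatesManager class:

function Get-CMMissingUpdate {
    param ([string]$ComputerName = $env:COMPUTERNAME)
    # List the software updates the Configuration Manager 2012 client reports as applicable.
    Get-WmiObject -ComputerName $ComputerName -Namespace 'ROOT\ccm\ClientSDK' -Query 'SELECT * FROM CCM_SoftwareUpdate'
}

function Install-CMMissingUpdate {
    param ([string]$ComputerName = $env:COMPUTERNAME)
    # Trigger installation of all missing updates using the InstallUpdates method.
    $updates = @(Get-CMMissingUpdate -ComputerName $ComputerName)
    if ($updates.Count -gt 0) {
        Invoke-WmiMethod -ComputerName $ComputerName -Namespace 'ROOT\ccm\ClientSDK' -Class CCM_SoftwareUpdatesManager -Name InstallUpdates -ArgumentList (,$updates)
    }
}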

These functions can be used on computers with the System Center Configuration Manager 2012 client installed:

image

Ideas for further work:

  • Use the CIM cmdlets available in PowerShell 3.0 and above to create functions that will work with either DCOM (legacy WMI) or WSMan (PowerShell Remoting), as sketched after this list. You can find excellent examples in some of the 2013 Scripting Games entries. One example is the Get-SystemInventory function from Event 2.
  • Use PowerShell or the Configuration Manager Trace Log Tool to track installation status from SCCM log files such as C:\Windows\CCM\Logs\WUAHandler.log:

image

  • Use the Get-PendingReboot function to track whether a reboot is required after installing updates:

image
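As an illustration of the first idea in the list above, a minimal sketch of a function that tries WSMan first and falls back to DCOM (legacy WMI) is shown below; the fallback logic is deliberately simplified:

function Get-CMMissingUpdateCim {
    param ([string]$ComputerName = $env:COMPUTERNAME)
    try {
        # Prefer WSMan (the PowerShell Remoting stack) when available.
        $session = New-CimSession -ComputerName $ComputerName -ErrorAction Stop
    }
    catch {
        # Fall back to DCOM (legacy WMI) for older systems.
        $dcomOption = New-CimSessionOption -Protocol Dcom
        $session = New-CimSession -ComputerName $ComputerName -SessionOption $dcomOption
    }
    Get-CimInstance -CimSession $session -Namespace 'ROOT\ccm\ClientSDK' -Query 'SELECT * FROM CCM_SoftwareUpdate'
}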

2013 Scripting Games – Learning points from Event 6

The 2013 Scripting Games started April 25th, and event 6 is now open for voting. This is the last event, and I must say I have learned a lot by reviewing scripts from all of the events. One of the great things about PowerShell is that there are many ways to perform a task, and seeing how others solve it is a great learning opportunity. As I did in the previous events, I will pick out two entries from each event and provide some learning points to highlight both good and bad practices.

Before reading my comments, you might want to download the event instructions:

The first entry I will comment on is from Beginner 6:

The #requires statement is used to ensure that version 3.0 of PowerShell is being used when executing the script. This is good, since functionality only available in version 3.0 is leveraged, such as the DHCP Server module.

  • The #requires statement could also have been used to ensure that the DHCP Server module is present on the system: #Requires -Modules DhcpServer. This would have saved some code, instead of using an if statement to check whether the module is present.
  • The passwords could have been hardcoded, but for security reasons it is a good idea to prompt the user.
  • The event did not specify that the DHCP server is running Windows Server 2012, but I think it is OK to assume that in this event, so the DHCP Server module can be leveraged to query the server. A foreach loop with error handling is used to obtain the IP address for each computer.
  • The information obtained from the previous foreach loop is saved to an array ($ipAddrs), which a new foreach statement loops through in order to add the computers to the domain. Using Add-Computer is OK, but there is a gotcha not everyone thought of: the cmdlet uses WMI (the JoinDomainOrWorkgroup method of the Win32_ComputerSystem class), which is not remotely accessible by default. Since the new computers are running Windows Server 2012, we can rely on PowerShell Remoting being enabled by default. One approach could then be to use Invoke-Command and pass the Add-Computer cmdlet to Invoke-Command's ScriptBlock parameter, as sketched below.
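A minimal sketch of that approach; the credential handling and the domain name are assumptions, while $ipAddrs is the array mentioned above:

# Run Add-Computer on the remote machines over PowerShell Remoting,
# which is enabled by default on Windows Server 2012.
$localCred  = Get-Credential -Message 'Local administrator credentials'
$domainCred = Get-Credential -Message 'Domain join credentials'
foreach ($ip in $ipAddrs) {
    Invoke-Command -ComputerName $ip -Credential $localCred -ScriptBlock {
        Add-Computer -DomainName 'contoso.com' -Credential $using:domainCred -Restart
    }
}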

The next submission I will comment on is from Advanced 6:

  • Comment-based help is leveraged to provide a synopsis, description, help for all parameters, input/output type information, notes and a link to the event.
  • The #requires statement is used to ensure that the PowerShell version is at least 3.0, and that the DHCP Server module is available on the system. This is very good.
  • The parameters are well defined. For the OU parameter, input validation is performed within the function. I would rather have leveraged ValidateScript for the validation, as is used for the MacAddress parameter.
  • PowerShell Workflow is leveraged, but its remoting capabilities are not. Instead, the Windows Firewall is disabled in order to use the Restart-Computer cmdlet with its default protocol (DCOM). It is possible to specify WSMan as the protocol for the Restart-Computer cmdlet (-Protocol WSMan). Disabling the firewall is bad in terms of security. Note that if using PowerShell Remoting, we would also need to modify the TrustedHosts list in order to communicate with a workgroup computer. Some entries used * as the value for the TrustedHosts list. A better approach from a security standpoint would be to add the IP address of the computer we are connecting to, and to remove it after the remote commands are executed.

PS: Mike F Robbins has a very good article showing how to use PowerShell Workflow in this event.

Thank you to everyone who participated; I already look forward to the next Scripting Games. For those of you who did not participate this time, using the events as tasks for yourself (or your local PowerShell user group) is a great learning opportunity.

2013 Scripting Games – Learning points from Event 5

The 2013 Scripting Games started April 25th, and event 5 is now open for voting. As I did in the previous events, I will pick out two entries from each event and provide some learning points to highlight both good and bad practices.

Before reading my comments, you might want to download the event instructions:

The first entry I will comment on is from Beginner 5:

  • Although the event instructions indicated that a one-liner would be sufficient, this is a parameterized script with comment-based help added. Normally this is a very good idea with regard to documenting the script and making it easier to use against other paths. But in this case, Dr. Scripto specifically stated that he is keeping his log files in a fixed path (C:\Reporting\Logfiles).
  • The PathtoLogs parameter is mandatory. Nothing wrong with that either, but I would have skipped that and added Dr. Scripto's fixed path as a default value for the parameter, in line with the instructions: [string]$PathtoLogs = "C:\Reporting\Logfiles"
  • A regular expression is used to find IP addresses in the log files. This is a good idea which many participants used. However, this particular regex will also match invalid IP addresses, such as 999.999.999.999.
  • -AllMatches is added to Select-String so that all matches are returned, not only the first one.
  • Lastly, Select-Object -Unique is used to get the unique IP addresses.

There were many creative ways of solving this event, and many did so with a one-liner. One alternative approach used in one of the submissions was a technique found on Stack Overflow: using Import-Csv:

param ($logfile)
import-csv -Delimiter " " -Header "date","time","s-ip","cs-method","cs-uri-stem","cs-uri-query","s-port","cs-username","c-ip","csUser-Agent","sc-status","sc-substatus","sc-win32-status","time-taken" -path $logfile | where { !($_.Date.StartsWith('#')) }

Another interesting approach was using the [ipaddress] type accelerator.
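As a quick illustration (not taken from any particular entry), the accelerator can be used to test whether a matched string is a legal IP address:

[ipaddress]::TryParse('999.999.999.999', [ref]$null)   # False - not a legal address
[ipaddress]::TryParse('192.168.1.10', [ref]$null)      # True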

For those using a regular expression, Chris Warwick has written an excellent article on how to validate an IPv4 address using regex. PowerShell is using .NET regular expressions, so you might also want to bookmark a .NET regular expression cheat sheet.

Moving on to a submission from Advanced 5:

  • First of all, this is a very well documented function containing full documentation for all parameters, several examples, information about the function's input and output, as well as very detailed information in the notes.
  • ValidateScript is used to validate that the specified folder exists, and to make sure it contains .log files.
  • ValidatePattern is used to provide IP filtering capabilities; if not specified, it defaults to *.
  • ValidateRange is used to validate input to the ReadCount parameter. A tested value of 1200 is provided as the default.
  • The last two parameters are switch parameters, providing options to get instance counts per ClientIP, as well as outputting a hash table instead of objects.

Every step in the script block is well documented, and I find it unnecessary to comment any further. This is an excellent submission, which has clearly been tested in a production environment. The only thing I would have added is support for sub-folders, by adding a -Recurse switch, although this was not specifically required by the event instructions. We can work around it by using Get-ChildItem -Directory to get the sub-folders: Get-ChildItem -Directory -Path "C:\Reporting\LogFiles" | Get-ClientIPFromIISLogs

Now we have only one event left before the 2013 Scripting Games are over. Keep up the good work, and good luck with event 6 – “The Core Configurator”.

2013 Scripting Games – Learning points from Event 4

The 2013 Scripting Games started April 25th, and event 4 is now open for voting. As I did in the previous events, I will pick out two entries from each event and provide some learning points to highlight both good and bad practices.

Before reading my comments, you might want to download the event instructions:

The first entry I will comment on is from Beginner 4:

  • Comment-based help is leveraged to provide a synopsis, description and basic usage. Comment-based help and the Active Directory module, which are leveraged here, were introduced in PowerShell 2.0; thus #Requires -Version 2.0 could have been used to prevent users running this in PowerShell 1.0 from getting error messages.
  • HTML formatting (head, pre- and post-content) as well as a couple of other variables are defined (note that I have updated my previous article with more info on working with HTML in PowerShell). I would also suggest specifying the search base as a variable rather than a hard-coded path in the script, although the path is specifically stated in the synopsis.
  • The time stamp for when the report was created is included at the bottom of the report, as required by the event instructions. In addition, the username of the user generating the report is added, which is nice.
  • The actual cmdlets generating the report form a one-liner, nicely formatted across multiple lines for readability.
  • The lastlogon date property is used to determine when the users last logged on to the domain. There are several ways to retrieve this information, and not all of them are accurate. For example, the lastlogontimestamp property will be 9-14 days behind the current date. For more information, see this article on the “Ask the Directory Services Team” blog. To get a more accurate time stamp, you might want to query all domain controllers; see this TechNet article for one example of how to accomplish this. Although I would not deduct any points for not catching this “gotcha”, I would give a bonus point to those who did.

 

The second entry I have decided to comment on is from Advanced 4:

  • Leveraging the #requires statement is a good practice; however, it should be separated into two lines. When the script is run “as is” in PowerShell 2.0, the following error will be returned: “Cannot process the “#requires” statement at line 1 because it is not in the correct format.” If separated into two lines, it will work in both 2.0 and 3.0:

#Requires -Version 3.0
#Requires -Modules ActiveDirectory

  • Comment-based help is leveraged in an excellent way, providing both help for all parameters as well as several examples.
  • [CmdletBinding()] is defined in order to use [Parameter()] decorators to validate input. Although there is nothing wrong with parameterizing this as a script, I would consider turning it into a function (a personal preference).
  • The FilePath parameter is marked as mandatory, as well as positional. This is perfectly fine, but another option would be to default to a path using an environment variable (for example $env:userprofile\Documents\Report.html). The requirement of validating the filename extension (htm or html) is accomplished using a regular expression inside a ValidateScript validation.
  • The Count parameter is configured with a default value of 20, as required. In addition, ValidateRange is leveraged to make sure the number specified is valid. This is not strictly necessary, as defining the parameter as an integer would perform the necessary validation:

Cannot convert value "2222222222222222" to type "System.Int32". Error: "Value was either too large or too small for an Int32."

  • Two additional parameters are supplied: one for overwriting the report file if it exists (-Force) and one for returning the newly created HTML file (-PassThru). These two parameters provide additional functionality not required by the instructions.
  • The parameter values for ConvertTo-Html are quite large, and providing these in a hash table and leveraging splatting makes the call more readable. This also makes it possible to collapse/expand the HTML code (the hash table) in the PowerShell ISE.
  • The PSCustomObject type accelerator is used to create new objects. There are many ways to create new objects in PowerShell 3.0, but this is the easiest/preferred way in my opinion. Also, the new [ordered] attribute in PowerShell 3.0 is leveraged to decide the order of the properties, rather than using Select-Object, which would actually create new objects (more overhead). Both techniques are sketched below.
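A small sketch of those two techniques combined; the property names, HTML parameters and variable names are illustrative and not taken from the entry:

# Splatting keeps the long ConvertTo-Html parameter list readable,
# and the hash table can be collapsed/expanded in the ISE.
$htmlParameters = @{
    Title       = 'Active Directory user report'
    PreContent  = '<h1>User report</h1>'
    PostContent = "<p>Generated $(Get-Date)</p>"
}

# [ordered] controls the property order; casting to [PSCustomObject] creates the output object.
$report = foreach ($user in $users) {
    [PSCustomObject][ordered]@{
        Name           = $user.Name
        SamAccountName = $user.SamAccountName
        LastLogonDate  = $user.LastLogonDate
    }
}

$report | ConvertTo-Html @htmlParameters | Out-File -FilePath $FilePath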

Like the previous events, there have been many great submissions in event 4. Keep up the good work, and good luck with event 5 – “The Logfile Labyrinth”.

2013 Scripting Games – Learning points from Event 3

The 2013 Scripting Games started April 25th, and event 3 is now open for voting. As I did in the previous events, I will pick out two entries from each event and provide some learning points to highlight both good and bad practices.

Before reading my comments, you might want to download the event instructions:

The first entry I will comment on is from Beginner 3:

  • Two pieces of information are required for this event, which we typically find in WMI. Since the instructions clearly stated that both CIM and WMI are available, and we do not have to think about authentication or firewalls, I find leveraging CIM a good practice. CIM is the newer technology, based on standards-based management. It also requires just one firewall port to be open, compared to WMI, which requires a lot more. However, I would not give fewer points if WMI was leveraged in this event, since this is also a valid option per the instructions. In the real world, we would typically need to think about authentication and firewalls in addition to other considerations, such as legacy systems. If Windows 2000 was still present, CIM would not have worked against WMI on that system without specifying the DCOM protocol for the CIM session.
  • Filtering is performed on the server side, which speeds up the query. A bad practice would have been skipping the -Filter parameter and performing the filtering on the client side using Where-Object.
  • Extracting the data required by the instructions is done using Select-Object. What I also like is that formatting the data is done in the same operation. The -f operator is used to format the data, and PowerShell's built-in constants for converting values to MB and GB are leveraged.
  • PowerShell has built-in capabilities to convert objects to HTML, in the form of the ConvertTo-Html cmdlet. This is leveraged in a way that meets the goals of the event instructions: the three properties needed are specified, and a header and a footer are added with the required data (see the sketch after this list). However, a bonus point would have been given if the computer name had been added to the header as specified in the event instructions. This is an example of how important it is to read the instructions carefully. I recommend reading through the instructions one more time before submitting your entry, as you might miss things like this.
  • The FilePath parameter is omitted on Out-File. This works fine since the parameter is positional. However, I prefer to use full parameter names when sharing scripts or even one-liners with others.
  • The instructions required a one-liner. Many people take this literally and place everything on one line. That is very bad for readability. This submission leverages the pipeline symbol, the comma and the backtick to continue on the next line. This still counts as a one-liner. However, the semicolon, which some people used, does not qualify: it is simply a statement separator and will break the pipeline.
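To tie these points together, here is a sketch of what such a one-liner could look like; the class, property names and the exact header and footer text are assumptions, not the actual entry:

Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DeviceID='C:'" |
    Select-Object -Property DeviceID,
        @{Name = 'Size (GB)';      Expression = { '{0:N2}' -f ($_.Size / 1GB) }},
        @{Name = 'FreeSpace (GB)'; Expression = { '{0:N2}' -f ($_.FreeSpace / 1GB) }} |
    ConvertTo-Html -PreContent "<h1>Disk report for $env:COMPUTERNAME</h1>" -PostContent "<p>$(Get-Date)</p>" |
    Out-File -FilePath C:\Report.html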

The second entry I have decided to comment on is from Advanced 3:

This entry does produce a nice-looking HTML report with the required data. However, there are some bad practices I would like to highlight as learning points:

  • A ping function is created. This is not required, as PowerShell has a Test-Connection cmdlet providing the same functionality.
  • The naming of the function is not very descriptive for the end user sitting at the helpdesk.
  • The HTML code is created in a here-string rather than leveraging PowerShell's ConvertTo-Html cmdlet. This makes it harder than necessary.
  • Aliases are used. This is OK when working interactively when you want to type fast, but not in a script or function you will share with others (think readability).
  • Write-Host is used (don't kill puppies); use Write-Verbose to output additional information. Alternatively, output an object even if the computer could not be contacted. You could add a “Connectivity” property to inform the user whether the computer was reachable or not, and add this to the report.

There have been many great suggestions in this event. Keep up the good work, and good luck with event 4 – “An Auditing Adventure”.

Update 2013-05-21: The good thing about the Scripting Games is that everyone learns something, including the judges. I haven't been using here-strings for creating HTML, but as Rob Campbell (@mjolinor) stated on Twitter: “Here strings are an AWESOME tool for creating HTML or XML documents.” -Jeffrey Snover (link). Of course, this implies that you know HTML. On the topic of HTML, I would also like to recommend Don Jones' free ebook “Creating HTML Reports in PowerShell”. Available with the book is also a PowerShell module for working with HTML. In addition, there is a very good session recording from the 2013 PowerShell Summit by Jeffrey Hicks called “Creating HTML Reports With Style”.

2013 Scripting Games – Learning points from Event 2

The 2013 Scripting Games started April 25th, and event 2 is now open for voting. As I did in event 1, I will pick out two entries from each event and provide some learning points to highlight both good and bad practices.

Before reading my comments, you might want to download the event instructions:

The first entry I will comment on is from Beginner 2:

Note: I am testing Gist as a way to share code snippets. I may need to edit the article after posting, please refresh your browser or go to the Gist URL manually if you do not see the code.

  • I find using [System.Net.Dns]::GetHostEntry unnecessary in this case, as the names of the computers will be available from the WMI calls being used.
  • Whether to use the foreach statement or the ForEach-Object cmdlet is not that relevant in this case, as saving the contents of a small text file into memory does not consume many resources.
    However, using ForEach-Object two times is unnecessary; there is nothing preventing you from wrapping everything in one foreach loop.
  • The event instructions did not require you to provide alternate credentials. Since it is done anyway in this submission, I would rather save the credentials into a variable once and re-use that variable inside the foreach loop. If not, the user will be prompted for credentials twice for each computer in the text file.
  • Re-writing the property names in the last line to make them more user-friendly is good, but there is some room for improvement in this line as well. Consider using Select-Object instead of Format-Table; this leaves it up to the user what to do with the output, for example outputting it to a CSV file using Export-Csv or creating an HTML report using ConvertTo-Html. Consider breaking the line into multiple lines to make it easier to read. Do not use aliases (ft in this case) when sharing a script; this makes it harder to read. Aliases are good, though, when you work interactively in the shell and want to type as quickly as possible. A sketch combining these suggestions follows below.
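A sketch combining these suggestions; the WMI class, the property names and the path to the text file are assumptions:

# Prompt once and reuse the credential for every computer in the list.
$credential = Get-Credential
$computers  = Get-Content -Path C:\Computers.txt

foreach ($computer in $computers) {
    Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computer -Credential $credential |
        Select-Object -Property @{Name = 'Computer Name';    Expression = { $_.CSName }},
                                @{Name = 'Operating System'; Expression = { $_.Caption }},
                                @{Name = 'Free Memory (MB)'; Expression = { '{0:N0}' -f ($_.FreePhysicalMemory / 1KB) }}
}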

The second entry I have decided to comment on is from Advanced 2:

  • The use of the #requires statement is good, since this function leverages new functionality in PowerShell 3.0 which would have caused errors in previous versions of PowerShell.
  • Comment-based help is leveraged in an excellent way, showing a very detailed description, help for each parameter, several examples, and what kind of object the function accepts as input and produces as output. The .PARAMETER entry for ShowProtocol is missing, although this parameter is documented in the description.
  • The function accepts input from the pipeline by using [Parameter(ValueFromPipeline=$True)], which makes it possible to pipe the contents of a txt-file to the function as shown in one of the examples.
  • A new CIM session option object is created in the begin block, defining DCOM as the protocol. This is a good example of how to use the begin block, which is typically used for operations that need to be performed once for all of the objects that will be processed in the process block. One of the most common examples of using the begin block is connecting to a database server, but this function shows another usage scenario.
  • The new CIM cmdlets in PowerShell 3.0 are leveraged when WSMan 3.0 is available; if not, the legacy DCOM protocol is used. This way, the function will work both against newer systems like Windows Server 2012 and against older systems like Windows 2000 Server. As stated in the description, “The Cim cmdlets have been used to gain maximum efficiency so only one connection will be made to each computer.”
  • Error handling is implemented, ensuring that even if the function is unable to connect to a computer, an object is returned stating that it was not possible to connect to the computer.
  • Splatting is leveraged for passing parameters; see examples on line 71 and line 95.
  • The function outputs a custom object rather than text or formatting objects. This leaves it up to the user of the function to decide what to do with the data. The new [ordered] syntax in PowerShell 3.0 is leveraged when creating the hash table which is passed to New-Object, in order to control the order of the properties of the custom object. A skeleton of these patterns is sketched below.
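A skeleton illustrating some of those patterns (pipeline input, the begin/process structure with a DCOM session option prepared once, and an [ordered] hash table passed to New-Object); the function and property names are illustrative, and the real entry is considerably more complete:

function Get-SystemInventory {
    [CmdletBinding()]
    param (
        [Parameter(ValueFromPipeline = $true)]
        [string[]]$ComputerName
    )
    begin {
        # Performed once for all pipeline input: prepare the DCOM session option.
        $dcomOption = New-CimSessionOption -Protocol Dcom
    }
    process {
        foreach ($computer in $ComputerName) {
            # ... create a CIM session over WSMan 3.0 if available, otherwise with $dcomOption,
            #     query the required classes and handle connection errors ...
            $properties = [ordered]@{
                ComputerName = $computer
                Connectivity = 'OK'
            }
            New-Object -TypeName PSObject -Property $properties
        }
    }
}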

It has been fun to read through many of the great submissions made to event 2. Keep up the good work, and good luck with event 3, which starts May 9th.