Tag Archives: PowerShell

Creating PowerShell Modules, the easy way.

Today I’m going to show you a new way to build a PowerShell module: I think it’s the fastest, easiest way to turn a few PowerShell script files into a reusable, shareable module.

Here’s the “too long; didn’t read” version: I’ve written a function that will allow you to specify a list of scripts and collect them into a module. Get the script from the end of this post, and you can run this command to build a module named “MyUtilities” within the current directory, moving the ps1 scripts into the new sub-directory:

New-ScriptModule .\MyUtilities *.ps1 -move

Or you can run this command to create the “MyUtilities” module in your regular modules folder from all the ps1 or psm1 scripts in this directory or any sub-directories, overwriting any existing module:

New-ScriptModule MyUtilities *.ps1, *.psm1 -recurse -force

If you want greater precision, you can pipe the files in. This command will let you collect only the ps1 and psm1 files in the current directory, but recursively collect ALL files from sub-directories (like language data files, or nested modules), and generate the module containing them.

dir *.ps1, *.psm1, *\* | dir -s | New-ScriptModule MyUtility

Once you’ve created a script module using New-ScriptModule, you can run this command to update it with the files that are currently in its folder tree, incrementing the version number:

New-ScriptModule MyUtilities -Upgrade

Note: by default these commands copy the scripts rather than moving them; add the -Move switch (as in the first example) to move them instead.

It’s important to note that the scripts you put into a module need to define functions (they can’t be scripts meant to be run by file name, or their top-level code will execute when you import the module).
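
For example (the file and function names here are just made up for illustration), a script like this is safe to include, because importing it only defines a function:

# Get-Greeting.ps1 -- defines a function; nothing runs when the module imports it
function Get-Greeting {
  param( $Name = $Env:UserName )
  "Hello, $Name"
}

# By contrast, a "pure" script whose top-level code does work, e.g.:
#    Restart-Service Spooler
# ... would run that code every time the module is imported.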

What follows is an explanation of what this function does (and steps for doing it by hand). The first step, of course, is to create a “MyUtilities” folder in your PSModulePath, and copy all of the script files into it. Then we can get started with a simple Root Module.

The Root Module (ModuleToProcess)

The root module is the main script or assembly that’s going to be executed/loaded when the module is imported. In our example case, it’s the “MyUtilities.psm1” file (the name matches the folder and ends in psm1). Basically, copy the line below into a new file and save it as “MyUtilities.psm1”:

Get-ChildItem $PSScriptRoot -Recurse -Filter *.ps1 | Import-Module

That simple version just imports every ps1 file in the module folder into the module itself (it’s just like dot-sourcing them, but cleaner to write). In the end, I decided that I wanted to have the list of scripts that would be imported hard-coded in the root module (which can be signed), to make the whole thing harder to tamper with, so in the script below you’ll see I’m building a slightly more complicated root module.

The Module Manifest

I want you to end up with a good reusable and redistributable module, so you should definitely create a module manifest. You don’t really have to set any properties in the manifest except the RootModule and the version (and the version defaults to 1.0, so you don’t have to worry about it at first), but I recommend you also pass at least:

  • The Author parameter (it defaults to your $Env:UserName which is usually not a full name).
  • The CompanyName (otherwise it comes out as “Unknown”; you could at least set it to your blog URL).
  • The FileList (which will matter for packaging someday).
  • The Description.

Here’s an example of how to use New-ModuleManifest to create one:

$Module = "~\Documents\WindowsPowerShell\Modules\MyUtilities"

$Files = Get-ChildItem $Module\*.* -recurse | Resolve-Path -Relative

New-ModuleManifest -Path "$Module\MyUtilities.psd1" -RootModule "MyUtilities.psm1" -FileList $Files -Author "Joel Bennett" -Functions "*-*" -Aliases "*" -Cmdlets $null -Variables "MyUtilities*" -Types "*Types.ps1xml" -Format "*Formats.ps1xml"

That’s it. We’re done. You can now try Import-Module MyUtilities and Get-Command -Module MyUtilities to see the results!

Some additional information, and the script.

I want to add some detail here about the conventions I follow which are enforced or encouraged by this function and the modules and manifests that it produces, but as I’ve gone on long enough, I will save that for another post.

I do want to state one caveat: although the function below produces much more complete modules than the few lines above, it’s still basically a beta — it’s only been tested by me (and a couple of people from IRC, thanks guys), so it’s possible that everything is not perfect. That said:

  • It supports piping in the files to build the module from, as well as specifying a pattern to match files (and even a -Recurse switch).
  • It handles nested psm1 modules as well as ps1 scripts, and treats *.Types.ps1xml and *.Formats.ps1xml files appropriately.
  • It sets reasonable defaults for the export filters of functions, aliases and variables.
  • It supports upgrading modules, updating the file list and incrementing the version number, but won’t overwrite your customizations.

The current version generates a default root module which specifically lists the files to import, and then pushes location to the module folder before importing the scripts one at a time, writing progress and error messages as appropriate.
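
To give you a feel for it, here is a rough sketch of the kind of root module that gets generated; this is not the exact code the function emits, and the script names are placeholders:

# MyUtilities.psm1 (sketch of a generated root module)
$ScriptList = "Get-Something.ps1", "Set-Something.ps1", "MyNested.psm1"

Push-Location $PSScriptRoot
try {
  $i = 0
  foreach($script in $ScriptList) {
    Write-Progress "Importing MyUtilities" $script -PercentComplete ((++$i / $ScriptList.Count) * 100)
    try {
      Import-Module (Join-Path $PSScriptRoot $script)
    } catch {
      Write-Error "Failed to import ${script}: $_"
    }
  }
} finally {
  Pop-Location
}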

This function is part of the larger PoshCode Packaging module that I am working on. Hopefully you’ll find it an easy way to put a bunch of scripts into a module. It should work in PowerShell 2-4 (although I haven’t been able to test on 2 yet), but let me know if you have any problems (get it from poshcode).

A new way to zip and unzip in PowerShell 3 and .Net 4.5

I’ve got an article coming about WPF 4.5 and what’s new there for ShowUI, but in the meantime I thought I’d share this little nugget I noticed today: the System.IO.Compression.ZipFile class (in the System.IO.Compression.FileSystem assembly). You can use that class and its buddy ZipArchive to do all sorts of zipping and unzipping tasks.

In order to use these new classes, you have to use Add-Type to import the System.IO.Compression.FileSystem assembly. If you don’t understand that, don’t worry about it; just copy the first line of the examples below. You only have to do it once per session, so it might be worth sticking it in your profile (or in a module file with a couple of scripts) if you think you’ll do a lot of this sort of thing.

Add-Type -As System.IO.Compression.FileSystem
[IO.Compression.ZipFile]::CreateFromDirectory( (Split-Path $Profile), "WindowsPowerShell.zip", "Optimal", $true )
 

That’s basically a one-liner to zip up a folder. Of course, you should have a one-liner to unzip it too:

Add-Type -As System.IO.Compression.FileSystem

$ZipFile = Get-Item ~\Downloads\Wasp.zip
$ModulePath = $Env:PSModulePath -split ";" -like "$Home*" | Select -first 1

[IO.Compression.ZipFile]::ExtractToDirectory( $ZipFile, $ModulePath )
 

In order for the second example to work the way you want it to, you need a zip file where all the files are in a single folder — which is why in the first example I used the overload of CreateFromDirectory that takes a boolean for whether or not to include the root directory in the zip (as opposed to just its contents). Otherwise, you would need to create the “WASP” folder first, and then extract to that directory. Of course, if you do that when there is a directory in the zip (as there was, in this case), you’ll end up with Modules\WASP\WASP … which in PS3 will work (although it shouldn’t), but is rather frustrating.

So, to avoid ending up with folders inside folders, we can use the ZipArchive class.

The easiest way to get an actual ZipArchive is to use the “Open” method on the ZipFile class. Once you’ve done that, you can easily check all the files in it with $archive.Entries | Format-Table, and you can extract a single entry using ExtractToFile or the whole archive using ExtractToDirectory.
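
For example (assuming the assembly is loaded as above, and using a zip path that’s just for illustration):

$Archive = [IO.Compression.ZipFile]::Open( "$Home\Downloads\Wasp.zip", "Read" )

# List everything in the archive
$Archive.Entries | Format-Table FullName, Length, LastWriteTime

# Extract a single entry ($true allows overwriting an existing file)
[IO.Compression.ZipFileExtensions]::ExtractToFile( $Archive.Entries[0], "$Home\Downloads\FirstEntry.txt", $true )

# Or extract the whole archive to a folder
[IO.Compression.ZipFileExtensions]::ExtractToDirectory( $Archive, "$Home\Downloads\Wasp" )

$Archive.Dispose()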

So for a final example, I’ll use ZipArchive to create a script that will always unzip to a new folder (unless there’s just one single file in the zip), and will create an “archive name” folder if there isn’t already a single folder root inside the archive. In fact, as you’ll see below, I went further and forced the resulting folder to always end up named after the archive.


Add-Type -As System.IO.Compression.FileSystem

function Expand-ZipFile {
  #.Synopsis
  #  Expand a zip file, ensuring it's contents go to a single folder ...
  [CmdletBinding()]
  param(
    # The path of the zip file that needs to be extracted
    [Parameter(ValueFromPipelineByPropertyName=$true, Position=0, Mandatory=$true)]
    [Alias("PSPath")]
    $FilePath,

    # The path where we want the output folder to end up
    $FolderPath = $Pwd,

    # Make sure the resulting folder is always named the same as the archive
    [Switch]$Force
  )
  process {
    $ZipFile = Get-Item $FilePath
    $Archive = [System.IO.Compression.ZipFile]::Open( $ZipFile, "Read" )
    # The place where we expect it to end up
    $Destination = join-path $FolderPath $ZipFile.BaseName
    # The root folder of the first entry ...
    $ArchiveRoot = $Archive.Entries[0].FullName.split("/",2)[0]

    # If any of the files are not in the same root folder ...
    if($Archive.Entries.FullName | Where-Object { $_.split("/",2)[0] -ne $ArchiveRoot }) {
      # extract it into a new folder (the DirectoryInfo from New-Item becomes this branch's output):
      New-Item $Destination -Type Directory -Force
      [System.IO.Compression.ZipFileExtensions]::ExtractToDirectory( $Archive, $Destination )
    } else {
      # otherwise, extract it to the FolderPath
      [System.IO.Compression.ZipFileExtensions]::ExtractToDirectory( $Archive, $FolderPath )

      # If there was only a single file in the archive, then we'll just output that file...
      if($Archive.Entries.Count -eq 1) {
        Get-Item (Join-Path $FolderPath $Archive.Entries[0].FullName)
      } elseif($Force) {
        # Otherwise let's make sure that we move it to where we expect it to go, in case the zip's been renamed
        if($ArchiveRoot -ne $ZipFile.BaseName) {
          Move-Item (join-path $FolderPath $ArchiveRoot) $Destination
          Get-Item $Destination
        }
      } else {
        Get-Item (Join-Path $FolderPath $ArchiveRoot)
      }
    }

    $Archive.Dispose()
  }
}
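
Since the FilePath parameter binds from the pipeline (via the PSPath alias), you can use it like this, for example:

# Expand a downloaded module zip straight into your personal Modules folder
$ModulePath = $Env:PSModulePath -split ";" -like "$Home*" | Select-Object -First 1
Get-Item ~\Downloads\Wasp.zip | Expand-ZipFile -FolderPath $ModulePath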
 

Update and continuation

I just posted a new version of this to PoshCode including a New-ZipFile function, just because I can’t stop myself, and I’ve always wanted a zip function that behaved exactly the way I wanted it to. If anyone wants to add to that ZipFile module, that would be the place to do it, for now. Here’s the current version:

Long-running background tasks in ShowUI GUIs from PowerShell

A lot of the time when you’re writing ShowUI user interfaces in PowerShell, you’re just asking users for inputs, prompting them for choices, or showing them the results of some calculations (whether that be dashboards, graphics, etc).

However, sometimes you need to prompt the user for input and then do some work, and you want to present the users with output as the work progresses. With any graphical user interface, when you want to do any significant amount of work, you need to do it on a background thread. The ShowUI way to do that is to use the Invoke-Background command, which gives you an event-driven way to run a script block (or command) and capture all of its various outputs.

I’ll show you an example here which writes output and progress events, and show you how to handle updating the user interface with both (hopefully you’ll be able to figure out the other output options, including the errors).


Import-Module ShowUI -Req 1.4

New-Grid -Columns "*","5","70" -Rows "21","Auto","5","*" -Margin 5 {
  ProgressBar -Name ProgressBar -ColumnSpan 3 -Maximum 100 -Margin "0,0,0,5"

  TextBox -Name count -Text 12 -MinWidth 150 -Row 1

  TextBox -Name output -MinHeight 100 -ColumnSpan 3 -IsReadOnly -Row 3

  Button "Start" -Name StartButton -Column 2 -Row 1 -On_Click {
    # If you need to pass values from your form to the background function
    # They need to be serializable, and go through the -Parameter hashtable:
    Invoke-Background -Parameter @{ work = [int]$count.Text } {
      param($work=10)
      # You would do real work here, but I'll just fake it
      foreach($item in 1..$work) {
        # Anything that goes to Output triggers the OutputChanged
        Write-Output "Processing item $item of $work"
        Start-Sleep -milli 500
        # And of course, progress triggers the ProgressChanged
        $progress = (([double]$item / [double]$work) * 100)
        Write-Progress "Processing" -PercentComplete $progress
      }
    } -On_OutputChanged {
      # The actual event is on the DataContext object
      $output.Text = $args[0].DataContext.Output -join "`n" | Out-String
    } -On_ProgressChanged {
      $ProgressBar.Value = $args[0].DataContext.LastProgress.PercentComplete
    }
  }
} -Show
 

The first key, of course, is knowing that Invoke-Background sets a DataContext on the object (either the object specified to the -Control parameter, or the parent whose event handler calls it). That DataContext is a ShowUI.PowerShellDataSource (its code is in the C# folder in the ShowUI module), and it has events and properties for each of the streams: Output, Error, Warning, Verbose, Debug, and Progress. It also has a property for the “Last” item from each of those streams, and a TimeStampedOutput property which collects all of the output, in order, with a NoteProperty for the Stream and the TimeStamp. More importantly, it has events which fire on every output.

The second key is knowing that the properties like “Output” contain all the output, and properties like “LastOutput” contain only the very last thing output. Since in the case of the progress we only care about the very last value, we’re able to use the “LastProgress” property. Since we want to show all of the output, rather than try to collect the “last” one each time, we can just overwrite the textbox’s text with the output array.

It’s also very important to remember that the properties are arrays, and in particular to remember that the Output is an array of PSObject, and the others are arrays of ProgressRecords, DebugRecords, ErrorRecords, etc… so we use Out-String to make sure that we convert them to a string before we set the Text of our output pane.
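
The other streams follow the same pattern, so (assuming the event and property names match the pattern above, e.g. -On_ErrorChanged and LastError) you could add one more handler to the Invoke-Background call to surface errors:

-On_ErrorChanged {
  # Prepend the most recent error to the output box so the user sees it immediately
  $output.Text = "ERROR: $($args[0].DataContext.LastError | Out-String)`n" + $output.Text
}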

Hopefully this function will help you out — I’ll try to improve its help and examples in the next release of ShowUI.

Get-Help for Modules you don’t have installed

If you have ever wished you could run Get-Help on commands from modules you don’t have installed locally, or have had problems using Save-Help to get module help onto servers that don’t have an internet connection (because your clients don’t have those modules installed), I have a solution.

With the new Update-Help command, the PowerShell team has made it possible to Save-Help to disk, and then move that help to servers that are offline or behind strict firewalls and update them there. However, there’s no built-in way to download help for a module that’s not installed on your local client … and you can’t use the output of Save-Help to read the help on your development box if you don’t have the module installed there.
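
For reference, the built-in workflow that does work (when the module is installed on an internet-connected client) looks something like this; the module name and share path are just examples:

# On a client with internet access and the module installed:
Save-Help -Module DhcpServer -DestinationPath \\FileServer\PSHelp

# On the offline server (or any machine that can reach the share):
Update-Help -Module DhcpServer -SourcePath \\FileServer\PSHelp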

My new module HelpModules aims to solve both those problems with two commands:

New-HelpModule

New-HelpModule will let you generate a simple module stub that contains just enough information to convince Update-Help and Save-Help to do their jobs. What’s more, it works on the pipeline so you can use Invoke-Command to get the module information from remote servers and pipe it straight into New-HelpModule:

Invoke-Command -ComputerName Server1 {
   Get-Module -ListAvailable | Where HelpInfoUri } | New-HelpModule
 

This example would actually list all the modules from a server named “Server1” that have updatable help, generate stubs for them in your local $PSModuleHelpRoot (more about that later), and update the help files (locally).

You can also generate a stub by hand, given the information about the module. In other words, call your mate up on the phone, have them run Get-Module Hyper-V -List | Format-List * and then read you the GUID, Version, and HelpInfoUri … then, you just run:

New-HelpModule Hyper-V '1.0' 'af4bddd0-8583-4ff2-84b2-a33f5c8de8a7' 'http://go.microsoft.com/fwlink/?LinkId=206726'
 

StubFunctions

The second problem we have is that we can’t run Get-Help on commands that don’t exist on our system. There are two ways around that, using this module. The simplest is to just pass the -StubFunctions switch when you’re calling New-HelpModule. This will generate empty function stubs for each command that’s in the original module — they have no parameters, no code, nothing.

StubFunctions are enough to let Get-Help work on those commands, but you’ll have to add the $PSModuleHelpRoot to your $Env:PSModulePath in order to take advantage of them. The problem with that is that you’ll pollute your session with modules and commands that don’t really exist (or at least, don’t do anything). Incidentally, I promised more information about PSModuleHelpRoot:

PSModuleHelpRoot

This variable is exported by the HelpModules module, and it’s the path to where New-HelpModule will generate modules (and where Get-ModuleHelp will read from: more on that next). The path defaults to a “WindowsPowerShellHelpModules” folder in your Documents directory, but you can set it to anything you like after importing the module.
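
For example, to relocate it, and to make the stub modules visible to Get-Help in the current session (the path is just an example):

Import-Module HelpModules
# Optional: change where stub modules are generated and read from
$PSModuleHelpRoot = "D:\PowerShellHelpModules"
# Optional: let Get-Help find the stub modules (current session only)
$Env:PSModulePath += ";$PSModuleHelpRoot"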

Get-ModuleHelp

Get-ModuleHelp is basically a simplified version of Get-Help that works straight on the XML files in your $PSModuleHelpRoot modules. Instead of searching for commands, it searches for help.

It basically works the same as Get-Help, so I’m not going to bother with documentation here — the point is, unlike Get-Help, this doesn’t require you to add $PSModuleHelpRoot to your $Env:PSModulePath, and thus doesn’t add empty modules and commands to your session. It’s a little harder to work with, since you have to know what help you have available, and you have to type the full command name (no wildcard support) but that seemed worth it to me.

Get HelpModules from PoshCode.org (you’ll want to save it as “HelpModules.psm1” to a path in your PSModulePath like: ~\Documents\WindowsPowerShell\Modules\HelpModules\HelpModules.psm1).

PowerShell PowerUser Tips: The Hash TabExpansion

One of my favorite features in the built-in TabExpansion in PowerShell is one that many (if not most) users are unaware of: the “#” hash mark.

In order to test this tip, you’re going to need a command console that you’ve used and typed several commands into, so that Get-History will return more than a few different commands. Now, actually run the Get-History command so you can see the list of your last few commands.

The basic tip is pretty simple: if you type “#” and then hit the Tab key, PowerShell completes the full previous command-line.

You can also hit tab again to complete the next oldest command-line, and so on, right back to the beginning (it actually wraps around). You can even hit Shift-Tab to reverse direction if you go past the command-line you wanted. Additionally, this works on your history, so it even completes multi-line items. The one weird thing is that if you tab-complete past an item with multiple lines, the TabExpansion function doesn’t realize the cursor’s not on the prompt line anymore, so it doesn’t quite redraw right, but it’s mostly ok: the commands still work.

Of course, if that’s all there was to this, I’d just have tweeted it and gone back to preparing for my presentation at the Windows Server 2012 Launch Event in Rochester.

The really cool thing is that you can filter the feature. That is: if you type # and then the first few characters of some command-line in your history, when you hit tab you will get the most recent command-line that starts with those characters, and as before, you can hit tab repeatedly to cycle through all the commands in your history that match.

There’s one more part to the hash-tab feature: numbers. If you know the history id of the command you want to type, you can type, for instance, #20{Tab} to complete the 20th command from your PowerShell session. It’s basically the same as using the “r” shortcut for Invoke-History, except you hit Tab after the number instead of typing a space before it, and you get to see the command (and edit it) before you press Enter.

So to sum up:

  • hash-tab – completes command-lines from your history
  • hash-txt-tab – filters your history like Get-History | Where { $_.CommandLine -like "txt*" }
  • hash-id-tab – completes the command from history with the matching id

Get-Command in PowerShell 3 (NOTE: CTP2 Bug causes module loading)

I don’t normally blog about the bugs I find in beta software, but I posted this bug to PowerShell’s Connect and I feel like it got ignored and never voted up, so I’m going to try to explain myself better here … The bug is on Connect, but let me talk to you first about how Get-Command is supposed to work.

In PowerShell, Get-Command is a command that serves two purposes: first it lets you search for commands using verb, noun, wildcards, module names etc. and then it also returns metadata about commands. In PowerShell 2, it could only search commands that were in modules (or snapins) you had already imported, or executables & scripts that were in your PATH.

So here’s the deal: Get-Command has always behaved differently when it thinks you’re searching. The only way it can tell that you’re searching is that you don’t provide a full command name. So, if you use a wildcard (e.g.: Get-Command Get-Acl* or even Get-Command Get-Ac[l]), or search using a Noun or Verb (e.g.: Get-Command -Verb Get or Get-Command -Noun Acl or even Get-Command -Verb Get -Noun Acl), then PowerShell assumes you’re searching (and won’t throw an error when no command is found).

In PowerShell 3, because modules can be loaded automatically when you try to run a command from them, Get-Command had to be modified to be able to return commands that aren’t already loaded. The problem the PowerShell team faced is that in order to get the metadata about a command, they needed to actually import the module. What they came up with is that if you’re searching … then Get-Command will not load modules which aren’t already loaded. If you specify a full command name with no wildcards, then PowerShell will load any module(s) where it finds a matching command in order to get the metadata (parameter sets, assembly info, help, etc). And of course, if you specify a full command that doesn’t exist, you’ll get an error!

Perhaps a few examples will help:

Launch PowerShell 3 using:

powershell -noprofile -noexit -command "function prompt {'[$($myinvocation.historyID)]: '}"
 

And then try this, noticing how much more information you get when you specify a specific full name:


[1]: Get-Module
[2]: Import-Module Microsoft.PowerShell.Utility
[3]: Get-Command -Verb Get -Noun Acl | Format-List

Name             : Get-Acl
Capability       : Cmdlet
Definition       : Get-Acl
Path             :
AssemblyInfo     :
DLL              :
HelpFile         :
ParameterSets    : {}
ImplementingType :
Verb             : Get
Noun             : Acl


[4]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}

[5]: Get-Command Get-Acl | Format-List

Name             : Get-Acl
Capability       : Cmdlet
Definition       : Get-Acl [[-Path] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]

                   Get-Acl -InputObject <psobject> [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]

                   Get-Acl [[-LiteralPath] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]
Path             :
AssemblyInfo     :
DLL              : C:\Windows\Microsoft.Net\assembly\GAC_MSIL\
                   Microsoft.PowerShell.Security\
                   v4.0_3.0.0.0__31bf3856ad364e35\
                   Microsoft.PowerShell.Security.dll
HelpFile         : Microsoft.PowerShell.Security.dll-Help.xml
ParameterSets    : {[[-Path] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>],
                   -InputObject <psobject> [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>],
                   [[-LiteralPath] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]}
ImplementingType : Microsoft.PowerShell.Commands.GetAclCommand
Verb             : Get
Noun             : Acl


[6]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Security       {ConvertFrom-Sec...}
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}
 

But there are several problems:

Get-Command has another parameter: -Module, which allows you to specify which modules should be searched, and in PowerShell 3, it changes the behavior in weird (buggy) ways:

  1. If you specify a single module, then that module is imported (to search it more thoroughly?), even if you specify a specific command that’s not in that module.
  2. If you specify a single module that does not have a command that matches, then Microsoft.PowerShell.Management is loaded also. I don’t know why yet.
  3. If you specify more than one module, and you’re searching, and none of them have a command that matches … it’s just as though you hadn’t specified modules, and nothing unexpected happens.
  4. If you specify more than one module, and a specific command, then it gets really weird:
    • If the command is in one (or more) of the specified modules, the first of those modules (in PATH order, not the order you specified) that has the command is imported.
    • If it’s a valid command in a different module, the first module with the command is loaded … and so is Microsoft.PowerShell.Management. I don’t know why! Oh, and you still get the error because it can’t find the command where you told it to look.

I filed a bug on Connect to cover that last scenario, where the module containing the command is loaded even though you gave Get-Command a list of modules to look in. Here’s another example; notice that all I do here is run the same command over and over (I added some Get-Module calls to show you WHY you get these results, but it’s the same without them), yet I get different results:


[1]: Import-Module Microsoft.PowerShell.Utility
[2]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}


[3]: Get-Command Get-Acl -module (Get-Module) # Passes one module
Get-Command : The term 'get-acl' is not recognized as the name of a
cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the
path is correct and try again.

[4]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Management     {Add-Computer, ...}
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}


[5]: Get-Command Get-Acl -module (Get-Module) # Passes two modules
Get-Command : The term 'get-acl' is not recognized as the name of a
cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the
path is correct and try again.

[6]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Management     {Add-Computer, ...}
Manifest   Microsoft.PowerShell.Security       {ConvertFrom-Sec...}
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}

[7]: # This time it will include Microsoft.PowerShell.Security!
[7]: Get-Command Get-Acl -module (Get-Module)

Capability      Name                ModuleName
----------      ----                ----------
Cmdlet          Get-Acl             Microsoft.PowerShell.Security
 

Rich formatting for PowerShell help

[updated] Ok, I just updated this with a new post on PoshCode. I posted the HtmlHelp module to PoshCode for generating HTML web pages based on the help for functions or cmdlets. It basically has one command: Get-HtmlHelp, which takes a couple of parameters now. The only mandatory parameter is the name of the command you want help for; in that case, the HTML is output on the pipeline and can be redirected into a file.

Get-HtmlHelp Get-HtmlHelp | Set-Content Get-HtmlHelp.html
 

The markup generated is (I hope) reasonable and lightweight, with some simple css rules pre-applied. Feel free to customize the script to generate help however you like.

[new] Generating many at once

I forgot to mention the other parameters on Get-HtmlHelp. They’re pretty cool, because if you want to upload your help you can do so with them. Say you created a module, and you wanted to generate all the help into files for uploading: set the -BaseUrl to the location you will upload them to, and then use the -OutputFolder parameter to generate an HTML file for each command into the specified folder:

Get-Command -Module ModuleName |
Get-HtmlHelp -BaseUrl http://HuddledMasses.org/HtmlHelp/ -OutputFolder ~\sites\HuddledMasses\HtmlHelp\
 

Now you can just take those files and upload them to the right spot on your website. I actually have some scripts which I can wrap around this to post the help to a wiki, but you’re just going to have to wait for that until the next time I get inspired to work on help …

Show-Help

I did include a little function in the comments and in the help for Get-HtmlHelp which uses ShowUI to display the rich formatted help in a popup window:

function Show-Help {
   [CmdletBinding()]
   param([String]$Name)
   Window { WebBrowser -Name wb } -On_Loaded {
      $wb.NavigateToString((Get-HtmlHelp $Name))
      $this.Title = "Get-Help $Name"
   } -Show
}
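
With the HtmlHelp module and ShowUI both loaded, using it is just, for example:

Show-Help Get-HtmlHelp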
 

So anyway, enough of that. When you run it, it looks like the screenshot below.

[Screenshot: the Show-Help window displaying the formatted help for a command in a WebBrowser control]

Did you know PowerShell can use Selenium?

This is sort of a placeholder for a full-length post that I really ought to write about driving web testing from PowerShell using Selenium. I actually have a little module around for doing that with WatiN, but honestly the Selenium project seems to be a lot more active, and has quite a bit of muscle behind it since they’ve merged with WebDriver…


Add-Type -path ~\Downloads\selenium-dotnet-2.16.0\net40\WebDriver.dll

# Navigate to google in IE (or Firefox, Chrome, Opera, etc)
$driver = New-Object OpenQA.Selenium.IE.InternetExplorerDriver
$driver.Url = "http://google.com"

# Type PowerShell into the query box, the page will update via AJAX
# Note we won't hit enter or anything
$q = $driver.FindElementByName("q")
$q.SendKeys("PowerShell")

# Use a CSS selector to find the first result link and click it
$driver.FindElementByCssSelector("li.g h3.r a").Click()
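
When you’re done driving the page, it’s polite to clean up the browser and the WebDriver session:

# Check where the click landed, then close the browser
$driver.Title
$driver.Quit()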
 

One Catch

If you try this with IE and you get the error Unexpected error launching Internet Explorer. Protected Mode must be set to the same value (enabled or disabled) for all zones ... it means exactly what it says. You need to open “Internet Options” from your start menu (or from IE), switch to the Security tab, and go through each “zone” setting the “Enable Protected Mode” check box to the same value for every zone (either all checked, obviously the most secure, or all unchecked). I’m not going to debate whether setting them all unprotected is a good idea … I set mine to all protected, but I don’t generally use IE anyway.

If you want more help, Selenium’s documentation is great, and there’s a section on Getting Started with Selenium WebDriver which I found quite helpful (make sure your examples are in “csharp” and you can almost just copy and paste — someone should offer to do them in PowerShell).

If you want more information about the Internet Explorer driver and this problem in particular, the short answer is that “Protected Mode” is a security boundary, so if you cross over it the COM automation object doesn’t work — thus, you need to make sure you always stay on the same side. There’s a good discussion on the mailing list archive about how it works and why, as well as a weird alternative documented on the Selenium JavaDocs.

PowerShell 3 – Finally on the DLR!

For those of you living in a cave: PowerShell 3 will be released in Windows 8, and we got a CTP at roughly the same time as the Windows 8 Developer Preview was released (at Microsoft’s new //Build/ conference in September 2011). A second CTP was released just in time for Christmas.

I’ve been playing with PowerShell 3 for a few months now, and I guess it’s long past time I started blogging about it.

There are a lot of new things coming in this release, but for me, the biggest change is the fact that PowerShell is now based on the Dynamic Language Runtime, a runtime environment that adds a set of services for dynamic languages to the Common Language Runtime (CLR), which is the core of the .NET Framework. The DLR makes it easier to develop dynamic languages that run on the .NET Framework. Of course, PowerShell is a dynamic language that runs on the .NET Framework, but work on it began before the DLR was released, so it’s only now that it has finally been adapted to the DLR.

However, although PowerShell 3 is implemented using the DLR, it’s not a DLR language in every way that IronPython or IronRuby are. Let me borrow a couple of graphics from the DLR overview documentation.

The DLR Overview

[Figure: DLR overview, showing the hosting, runtime, and language implementation layers]

You can see there are three major sections to the DLR as it’s available on CodePlex: hosting, runtime, and language. However, not all of the DLR actually shipped in the .NET Framework 4.0 CLR.

The DLR shipped in CLR 4

[Figure: the subset of the DLR that shipped in the .NET 4 CLR]

PowerShell 3 takes advantage of all (or most) of what shipped in the CLR, but since the PowerShell team wasn’t willing to be responsible for shipping the rest of the DLR in the operating system, they didn’t implement the rest of it. Which is to say, PowerShell 3 is using the DLR Language Implementation code, with Shared AST and Expression trees, as well as the DynamicObject and Call Site Caching portions of the runtime, but none of the Common Hosting pieces like ScriptRuntime, ScriptScope, ScriptSource, or CompiledCode …

This means that you cannot use the same hosting APIs for PowerShell that you use for IronPython or IronRuby. However, even though you’re stuck using the same hosting APIs that you used with PowerShell 2 … you do get to use dynamic instead of PSObject when you’re working with the output in C#.

This really is a big deal

I wouldn’t care to speculate how many of the changes you’ll see in PowerShell 3 are directly due to the conversion to the DLR, but there are a few changes that you ought to be aware of. The first thing that you’ll probably notice is the difference in execution and performance. Anything you’ve learned about the relative performance of Scripts vs. Functions vs. Cmdlets and the load time of binary vs. script modules is going to go right out the window with PowerShell 3, as scripts and functions are now no longer (re)interpreted each time they’re run, but are compiled, executed, and (sometimes) cached. The result is that initial runs of scripts and imports of script modules are sometimes slower than they used to be, but subsequent runs of the same script or execution of functions from script modules run much faster, and this applies in particular to actual scripts in files and pre-defined functions in modules. Running a function repeatedly is now much faster than pasting the same code repeatedly into the console.
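
You can get a rough feel for that yourself with Measure-Command; this is just a quick, unscientific sketch, and your numbers will vary:

function Test-Compile { foreach($i in 1..100000) { $null = $i * 2 } }

# The first call pays the compilation cost ...
Measure-Command { Test-Compile }
# ... repeat calls to the same function are typically much faster in PowerShell 3
Measure-Command { Test-Compile }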

A more subtle, but significant difference is the change to PSObject.

In PowerShell 3, PSObject is a true dynamic object, and thus the output of cmdlets or scripts called in C# can be used with the dynamic keyword in C# instead of with the pseudo-reflection methods which are required for working with PSObject. However, this is just the tip of the iceberg, so to speak.

In PowerShell 2, all of PowerShell’s Extended Type System (ETS) was based on PSObject. New members were always added to a PSObject which is wrapped around the actual “BaseObject” — regardless of whether they came from a types.ps1xml file, or from calling Add-Member on an object. If you use Add-Member on strongly-typed objects that are not already wrapped in a PSObject, you have to specify the -Passthru parameter and capture the output in order to have your object wrapped into a PSObject that the new member can be added to. In addition, when you cast an object to a specific type, those ETS members are mostly lost. Take this script for example:


$psObject = Get-ChildItem
$psObject.Count
$Count1 = ($psObject | where { $_.PSIsContainer }).Count

[IO.FileSystemInfo[]]$ioObject = Get-ChildItem
$ioObject.Count
$Count2 = ($ioObject | where { $_.PSIsContainer }).Count

$Count3 = ($ioObject | where { $_ -is [IO.DirectoryInfo] }).Count
 

In PowerShell 2, $Count1 and $Count3 will be the number of folders in the current directory, but $Count2 will always be ZERO, because the PSIsContainer property is actually an ETS property that’s lost when you cast the object to FileSystemInfo (and therefore it always evaluates as $null, so the Where-Object filter never matches).

However, in PowerShell 3 that’s no longer true. PowerShell now works with everything as dynamic objects, and Add-Member no longer needs the PSObject wrapper to keep track of these ETS members. That script will now produce equal values for $Count1, $Count2, and $Count3, as expected. Obviously the -Passthru switch on Add-Member is only needed when you’re trying to pipeline things, and not for simple assignments. However, there may also be other implications as to when things get wrapped in a PSObject, and when it matters.

I think you’ll agree that having PowerShell on the DLR is awesome! But be aware that there are a few subtle breaking changes hiding in this kind of stuff. For example, after running that script above, try these three lines on PowerShell 2 and PowerShell 3 CTP2:


$Count1 -eq $Count2
$e = $ioObject[0] | Add-Member NoteProperty Note "This is a note" -Passthru
$f = $ioObject[0] | Add-Member NoteProperty Note "This is a note" -Passthru
 

In PowerShell 2, you’ll get False, and then the next two lines will work fine. In PowerShell 3 the first line will return True, and since Add-Member actually affects the underlying object even when it’s not wrapped in a PSObject, the third line will actually cause an error, because “Add-Member : Cannot add a member with the name “Note” because a member with that name already exists.”

Anyway, I’m sure I’ll have more to write about the DLR and the changes it’s bringing to PowerShell, but for now, I hope that’s enough to get you thinking ;-)

Arrange – Act – Assert: Intuitive Testing

Today I have a new module to introduce you to. It’s a relatively simple module for testing, and you can pick it up in short order and start testing your scripts, modules, and even compiled .Net code. If you put it together with WASP you can pretty much test anything ;-)

The basis for the module is the arrange-act-assert model of testing. First we arrange the things we’re going to test: set up data structures or whatever you need for testing. Then we act on them: we perform the actual test steps. Finally, we assert the expected output of the test. Normally, the expectation is that during the assert step we’ll return $false if the test failed, and that’s all there is to it. Of course, there’s plenty more to testing, but let’s move on to my new module.

The module is called PSaint (pronounced “saint”), and it stands, loosely, for PowerShell Arrange-Act-Assert in testing. Of course, what it stands for isn’t important, just remember the name is PSaint :)

PSaint is really a very simple module, with only a few functions. There are two major functions which we’ll discuss in detail: Test-Code and New-RhinoMock, and then a few helpers which you may or may not even use:

Set-TestFilter

Sets filters (include and/or exclude) for the tests by name or category.

Set-TestSetup (alias “Setup”)

Sets the test setup ScriptBlock which will be run before each test.

Set-TestTeardown (alias “Teardown”)

Sets the test teardown ScriptBlock which will be run after each test.

Assert-That

Asserts something about an object (or the output of a scriptblock) and throws if that assertion is false. This function supports asserting that an exception should be thrown, or that a test is false … and supports customizing the error message as well.

Assert-PropertyEqual

This is a wrapper around Compare-Object to compare the properties of two objects.

How to test with PSaint: Test-Code

Test-Code (alias “Test”) is the main driver of functionality in PSaint, and you use it to define the tests that you want to run. Let’s jump to an example or two so you can see the usefulness of this module.

Let’s start with an extremely simple function that we want to write: New-Guid. We want a function that generates a valid random GUID as a string. We’ll start by writing a couple of tests. First we’ll test that the output of the function is a valid GUID.

test "New-Guid outputs a Guid" {
   act {
      $guid = New-Guid
   }
   assert {
      $guid -is [string]
      New-Object Guid $guid
   }
}
 

Now, to verify that the test works, you should define this function (the GUID-looking thing is one letter short) and then run that test:

function New-Guid { "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaa" }
 

Another proof that it works would be that it should fail on this function too, because “x” is not a valid character in a Guid:

function New-Guid { "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" }
 

So, let’s write a minimal New-Guid that actually generates a valid Guid:

function New-Guid { "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }
 

If you run our test on that, you will see:

   Result: Pass

Result Name                          Category
------ ----                          --------
Pass   New-Guid outputs a Guid
 

If you don’t like the fact that the Category is empty, you could add a category or two to the end of our test. We should also switch to using Assert-That if we want to know which test in the assert failed. Finally, we want to write another test which would test that New-Guid doesn’t just return the same Guid every time, the way ours does right now:

test "New-Guid outputs a Guid" {
   act {
      $guid = New-Guid
   }
   assert {
      Assert-That { $guid -is [string] } -FailMessage "New-Guid returned a $($guid.GetType().FullName)"
      New-Object Guid $guid  # Throws relevant errors already
   }
} -Category Output, ValidGuid

test "New-Guid outputs different Guids" {
   arrange {
      $guids = @()
      $count = 100
   }
   act {
      # generate a bunch of Guids
      for($i=0; $i -lt $count; $i++) {
         $guids += New-Guid
      }
   }
   assert {
      # compare each guid to all the ones after it
      for($i=0; $i -lt $count; $i++) {
         for($j=$i+1; $j -lt $count; $j++) {
            Assert-That ($guids[$i] -ne $guids[$j]) -FailMessage "There were equal Guids: $($guids[$i])"
         }
      }
   }
} -Category Output, RandomGuids
 

Now, we have to actually fix our New-Guid function to generate real random Guids:

function New-Guid { [System.Guid]::NewGuid().ToString() }
 

And at that point, we should have a function, and a couple of tests that verify its functionality…

The finer points of assertions

One thing you’ll notice the first time you use Get-Member after loading the PSaint module is that a few script methods have been added to everything. I did this because I found myself writing the same Assert-That calls over and over and decided that it would be slicker to make these extension methods than to write new functions for each one:

MustBeA([Type]$Expected,[string]$Message)
MustBeFalse([string]$Message)
MustBeTrue([string]$Message)
MustEqual([Object]$Expected,[string]$Message)
MustNotEqual([Object]$Expected,[string]$Message)
 

There’s also a MustThrow([Type]$Expected, [string]$Message) which can be used on script blocks (note that it executes the ScriptBlock immediately, so be careful how you use it).
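
For example (assuming MustThrow matches on the exception type that Get-Item actually throws here):

# Assert that reading a file which doesn't exist throws the expected exception
{ Get-Item "C:\NoSuchFile.txt" -ErrorAction Stop }.MustThrow( [System.Management.Automation.ItemNotFoundException], "Missing files should throw" )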

We can use these to tidy up our tests quite a bit, while still getting good error messages when tests fail:

test "New-Guid outputs a Guid String" {
   act {
      $guid = New-Guid
   }
   assert {
      $guid.MustBeA( [string] )
      New-Object Guid $guid # Throws relevant errors already
   }
} -Category Output, ValidGuid

test "New-Guid outputs different Guids" {
   arrange {
      $guids = @()
      $count = 100
   }
   act {
      # generate a bunch of Guids
      for($i=0; $i -lt $count; $i++) {
         $guids += New-Guid
      }
   }
   assert {
      # compare each guid to all the ones after it
      for($i=0; $i -lt $count; $i++) {
         for($j=$i+1; $j -lt $count; $j++) {
            $guids[$i].MustNotEqual($guids[$j])
         }
      }
   }
} -Category Output, RandomGuids
 

COM Objects

PSaint also has a wrapper for COM objects to help with testing them. It adds GetProperty and SetProperty methods to allow you to access COM object properties which don’t show up on boxed COM objects (a common problem when working with MSOffice, for instance). It also adds InvokeMethod for COM objects to invoke methods that don’t show up for similar reasons. These, of course, only help you if you’re already fairly literate with the COM object in question.
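
A heavily hedged sketch of what that looks like; I’m assuming the wrappers take the COM member name (and a value, for SetProperty), so check the module’s help for the real signatures:

$excel = New-Object -ComObject Excel.Application
# Assumption: GetProperty/SetProperty/InvokeMethod take the COM member name (plus arguments)
$excel.SetProperty( "Visible", $true )
$excel.GetProperty( "Visible" )
$excel.InvokeMethod( "Quit" )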

Mock Objects

PSaint includes New-RhinoMock, a function for generating a new mock object using RhinoMocks (which is included). Rhino Mocks is a BSD-licensed dynamic mock object framework for the .Net platform. Its purpose is to ease testing by allowing the developer to create mock implementations of custom objects and verify the interactions using unit testing.

I have to admit that this New-RhinoMock function is incomplete, and exposes only a fraction of the options and power in RhinoMocks, but it’s been sufficient for the few times when I’ve wanted to actually mock objects from PowerShell, so I’m including it here.

For those of you (developers) who want to know why RhinoMocks instead of your favorite mocking framework, the answer is astonishingly simple: it had the fewest required generic methods (which are impossible to call in PowerShell 2).