Get-Help for Modules you don’t have installed

If you have ever wished you could run Get-Help for commands from modules you don’t have installed locally, or have had trouble using Save-Help to get module help onto servers that don’t have an internet connection because your clients don’t have those modules installed, then I have a solution.

With the new Update-Help command, the PowerShell team has made it possible to Save-Help to disk, and then move that help and update servers that are offline or behind strict firewalls. However, there’s no built-in way to download help for a module that’s not installed on your local client … and you can’t use the output of Save-Help to read the help on your development box if you don’t have the module installed there.
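
For reference, the built-in offline workflow looks like this (the module name and share path are just placeholders, and as noted above, this only works when the client actually has the module installed):

# On an internet-connected client that has the module installed:
Save-Help -Module ActiveDirectory -DestinationPath \\FileServer\PSHelp

# Later, on the offline server:
Update-Help -Module ActiveDirectory -SourcePath \\FileServer\PSHelp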

My new module HelpModules aims to solve both those problems with two commands:

New-HelpModule

New-HelpModule will let you generate a simple module stub that contains just enough information to convince Update-Help and Save-Help to do their jobs. What’s more, it works on the pipeline so you can use Invoke-Command to get the module information from remote servers and pipe it straight into New-HelpModule:

Invoke-Command -ComputerName Server1 {
   Get-Module -ListAvailable | Where HelpInfoUri } | New-HelpModule
 

This example would actually list all the modules from a server named “Server1” that have updatable help, generate stubs for them in your local $PSModuleHelpRoot (more about that later), and update the help files (locally).

You can also generate a stub by hand, given the information about the module. In other words, call your mate up on the phone, have them run Get-Module Hyper-V -List | Format-List * and then read you the GUID, Version, and HelpInfoUri … then, you just run:

New-HelpModule Hyper-V '1.0' 'af4bddd0-8583-4ff2-84b2-a33f5c8de8a7' 'http://go.microsoft.com/fwlink/?LinkId=206726'
 

StubFunctions

The second problem we have is that we can’t run Get-Help on commands that don’t exist on our system. There are two ways around that, using this module. The simplest is to just pass the -StubFunctions switch when you’re calling New-HelpModule. This will generate empty function stubs for each command that’s in the original module — they have no parameters, no code, nothing.

StubFunctions are enough to let Get-Help work on those commands, but you’ll have to add $PSModuleHelpRoot to your $Env:PSModulePath in order to take advantage of them. The problem with that is that you’ll pollute your session with modules and commands that don’t really exist (or at least, don’t do anything). Incidentally, I promised more information about PSModuleHelpRoot:

PSModuleHelpRoot

This variable is exported by the HelpModules module, and it’s the path to where New-HelpModule will generate modules (and where Get-ModuleHelp will read from: more on that next). The path defaults to a “WindowsPowerShellHelpModules” folder in your Documents directory, but you can set it to anything you like after importing the module.
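
For example (a quick sketch; the drive and folder name are just placeholders), after importing the module you can point the root somewhere else, and optionally expose the generated stubs to plain Get-Help:

Import-Module HelpModules
# Redirect where New-HelpModule writes (and Get-ModuleHelp reads) the stub modules
$PSModuleHelpRoot = 'D:\PowerShellHelp'
# Only needed if you generated -StubFunctions and want regular Get-Help to find them
$Env:PSModulePath += ";$PSModuleHelpRoot"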

Get-ModuleHelp

Get-ModuleHelp is basically a simplified version of Get-Help that works straight on the XML files in your $PSModuleHelpRoot modules. Instead of searching for commands, it searches for help.

It basically works the same as Get-Help, so I’m not going to bother with documentation here — the point is, unlike Get-Help, this doesn’t require you to add $PSModuleHelpRoot to your $Env:PSModulePath, and thus doesn’t add empty modules and commands to your session. It’s a little harder to work with, since you have to know what help you have available, and you have to type the full command name (no wildcard support) but that seemed worth it to me.
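
So, assuming you generated the Hyper-V stub from the earlier example and updated its help, reading the help looks something like this (Get-VM is just an example command name, and remember you need the full name, no wildcards):

Get-ModuleHelp Get-VM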

Get HelpModules from PoshCode.org (you’ll want to save it as “HelpModules.psm1” to a path in your PSModulePath, like: ~\Documents\WindowsPowerShell\Modules\HelpModules\HelpModules.psm1).

Why I’m not excited about Windows 8 Certified Store Apps

It’s come up a few times recently, and I’m frustrated enough that I thought I’d just post this here for reference.

The Windows 8 App Certification requirements include one particular requirement that makes me (as a life-long scripter) very unhappy:

3.9 All app logic must originate from, and reside in, your app package

Your app must not attempt to change or extend the packaged content through any form of dynamic inclusion of code or data that changes how the application interacts with the Windows Runtime, or behaves with regard to Store policy. It is not permissible, for example, to download a remote script and subsequently execute that script in the local context of your app package

Bottom line: you cannot write extensible apps for the Windows Store. In fact, although Windows PowerShell is shipped even on Windows RT, you can’t use it from a certified Windows 8 Store app.

I don’t know about you, but the apps that I use on a regular basis are almost all extensible, and most of them have both plugins and scripting:

  • Visual Studio (Thank goodness for NuGet, ReSharper, StyleCop, GhostDoc, NCrunch etc)
  • Notepad++ and Sublime Text 2 and PyCharm
  • PowerShell and ConEmu
  • Microsoft Office: Word, Excel
  • KeePass
  • Firefox, and even Chrome and IE
  • XChat and even Trillian

I’ve been using Windows 8 for months now, but every app pinned on my taskbar is extensible. Leaving aside video games, I can only see three apps I’ve used in the last month which aren’t readily extensible: PeaZip (which does have some scripting capabilities, but I don’t use them since I script from PowerShell), Zune, and Trillian (which is technically extensible, but all the plugins I use ship in the box).

Even Windows File Manager has shell extensions.

Now, I’m not saying I won’t use an app that’s not extensible … but without even thinking about it, most of the apps I use are scriptable and/or extensible, and I bet that’s true of most of the apps you use too. As a side note, one of the coolest new phone apps from Microsoft is on{x}, an automation app which is only available on Android (and can’t ever pass validation on the Windows Store because of this policy).

So yeah. Most of the stuff I do with computers is about automation, scripting, robotics… or gaming. I can’t see myself getting really fired up about that App Store stuff.

Let me know when 3.9 is revoked.

Now, I have faith in Microsoft. I’m sure they’re not trying to kill off running multiple windows on a desktop, but I don’t understand why they would write terms in their certification requirements that would prevent an app like Sublime Text 2, KeePass, or Firefox from being written. I certainly hope that they can be convinced to rewrite that constraint to allow for users who choose to install modules and scripts.

As a side note, there’s another point in there that I’m not too happy with either:

4.4 Your app must not be designed or marketed to perform, instruct, or encourage tasks that could cause physical harm to a customer or any other person

We would consider an app that allows for control of a device without human manipulation, or that is marketed for use to resolve emergency or lifesaving situations to violate this requirement.

At first, that one seemed fine. But when you read the detail, it’s clear that any app that is for robotics/AI and wants to interface with external devices is basically going to be refused. Your Lego Mindstorms apps are only allowed if they’re remote controls which require human manipulation, because they … might cause harm?

As long as we’ve got desktop mode and sideloading of non-certified apps, we’re ok (I guess), but Microsoft needs to stop limiting certified apps before they alienate the hackers and tinkerers. I’m a big fan (and author) of Open Source software, but I don’t want a world where all the commercial software companies lock out the geeks and our only option is Open Source.

PowerShell PowerUser Tips: The Hash TabExpansion

One of my favorite features in the built-in TabExpansion in PowerShell is one that many (if not most) users are unaware of: the “#” hash mark.

In order to test this tip, you’re going to need a command console that you’ve used and typed several commands into, so that Get-History will return more than a few different commands. Now, actually run the Get-History command so you can see the list of your last few commands.

The basic tip is pretty simple: if you type “#” and then hit the Tab key, PowerShell completes the full previous command-line.

You can also hit tab again to complete the next oldest command-line, and so on, right back to the beginning (it actually wraps around). You can even hit Shift-Tab to reverse direction if you go past the command-line you wanted. Additionally, this works on your history, so it even completes multi-line items. The one weird thing is that if you tab-complete past an item with multiple lines, the TabExpansion function doesn’t realize the cursor’s not on the prompt line anymore, so it doesn’t quite redraw right, but it’s mostly ok: the commands still work.

Of course, if that’s all there was to this, I’d just have tweeted and gone back to preparing for my presentation at the Windows Server 2012 Launch Event in Rochester.

The really cool thing is that you can filter the feature. That is: if you type # and then the first few characters of some command-line in your history, when you hit tab you will get the most recent command-line that starts with those characters, and as before, you can hit tab repeatedly to cycle through all the commands in your history that match.

There’s one more part to the hash-tab feature: numbers. If you know the history ID of the command you want to type, you can type, for instance, #20{Tab} to complete the 20th command from your PowerShell session. It’s basically the same as using the “r” shortcut for Invoke-History, except you hit Tab after the number instead of Space before it, and you get to see the command (and edit it) before you press Enter.
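
If you’d rather see all of the matches at once instead of Tab-cycling through them, you can approximate the filter with Get-History (a quick sketch; “Get-Ch” is just an example prefix):

Get-History |
    Where-Object { $_.CommandLine -like 'Get-Ch*' } |
    Sort-Object Id -Descending |
    Select-Object Id, CommandLine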

So to sum up:

  • hash-tab – completes command-lines from your history
  • hash-txt-tab – filters your history like Get-History | Where { $_.CommandLine -like "txt*" }
  • hash-id-tab – completes the command from history with the matching id

Adventures getting MSBuild, TFS and SQL Server Data Tools to work together

We recently found that our database project at work goes from a 40-minute build and compile to about 20 minutes when we upgrade from the VS2010 (SQL 2008) database projects (with the old .dbproj files) to the new SQL Server Data Tools (SSDT) projects with the .sqlproj files, even though we’re still deploying to SQL Server 2008 R2. So our goal immediately became:

Get the SSDT projects to compile with parameters from MSBuild

The problem is that with the new .sqlproj and the SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets there’s no built-in way to pass the database name or even a connection string when building the project via MSBuild — which is a critical part of our continuous integration builds. The project I’m working on these days has 7 or 8 teams working in up to twice that many branches, and all those branches need CI builds, every one of which deploys the database project and validates it. Since the branches change at least once a week, it’s way too much work to run around modifying publish.xml files to change database names every time we create a new branch (which we need to do to avoid the builds deploying over the top of each other).

With the old .dbproj format, there was a SQLDeploy task called in the TeamData\Microsoft.Data.Schema.SqlTasks.targets build target file which included a whole bunch of variables that could be overridden on the command line, so we could pass TargetDatabase and TargetConnectionString as MSBuild arguments, and then, to be able to compile the whole solution and still call the Deploy target, we added this to the project file:


  <PropertyGroup>
    <DBDeployOnBuild Condition="'$(DBDeployOnBuild)' == ''">False</DBDeployOnBuild>
  </PropertyGroup>
  <Target Name="AfterBuild">
    <CallTarget Targets="Deploy" Condition="'$(DBDeployOnBuild)'=='True'" />
  </Target>
 

In our workflow, we redefine MSBuildArguments in the workflow, and now we can msbuild the whole solution and the database will be deployed:


MSBuildArguments & " /p:DBDeployOnBuild=True;TargetDatabase=""" & BuildDetail.BuildDefinition.Name & """;TargetConnectionString=..."
 

But that doesn’t work with the new SSDT project type.

First of all, they don’t deploy with all their dependencies; instead, they have to be published. It’s basically the same thing with a different name, but the SqlPublish task requires all the parameters to be in an XML file, and there are no build properties we can override, because the properties are hiding in that publish.xml file, which doesn’t get tokenized.

I’ve spent the last couple of days figuring out a workaround, so I figured I should blog it up here and help the next guy. The process is not simple. The bottom line is that I haven’t found a way to get the SqlPublish task to take its values from anywhere except a publish XML file, so the solution I came up with was to rewrite the publish file using the XDT transform tasks defined for Web.config transforms.

Web.Config Transforms, on random XML files

The cool thing is, there’s actually a ParameterizeTransformXml task which allows you to define your transform as a string in the build file.


  <UsingTask TaskName="ParameterizeTransformXml" AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Web\Microsoft.Web.Publishing.Tasks.dll" />
 

In web projects, that task is used to replace connection strings (to hide them in web packages), but we can use it to replace the database name in our publish.xml. In fact, I can actually add the same properties to the build that we used to have with the old project format (and which we’ll define on the command-line, exactly the way we did before). We put some default properties for TargetDatabaseName and TargetConnectionString in our Debug.publish.xml and our CI.publish.xml and then we just replace them during the build.

It’s a lot more complicated than what we had to do previously, partly because we need to set the SqlPublishProfilePath for the Publish task. We have to use CallTarget to call the Publish target (not Deploy this time), and CallTarget doesn’t support passing properties, nor does the target you call inherit properties that are defined in your scope. This means we need to define the SqlPublishProfilePath property in the BeforePublish target, which the Publish target depends on (the “DependsOn” relationship does inherit defined properties).


  <Target Name="AfterBuild" Condition="'$(DBDeployOnBuild)'=='True'">
    <CallTarget Targets="Publish" />
  </Target>
  <Target Name="BeforePublish" Condition="('$(TargetDatabase)' != '' Or '$(TargetConnectionString)' != '') And Exists($(TransformOutputFile))">
    <PropertyGroup>
      <SqlPublishProfilePath>$(TransformOutputFile)</SqlPublishProfilePath>
    </PropertyGroup>
  </Target>
 

But the real work is actually setting up the TransformPublishXml property with the right XML to replace the nodes with the properties from the command-line arguments, and then actually calling the task. Since we imported the task before, we just need a property group to define our variables with default values, and then a BeforeBuild target to actually call the ParameterizeTransformXml:


  <PropertyGroup Condition="'$(TargetDatabase)' != '' Or '$(TargetConnectionString)' != ''">
    <DBDeployOnBuild Condition="'$(DBDeployOnBuild)' == ''">False</DBDeployOnBuild>
    <TargetConnectionStringXml Condition="'$(TargetConnectionString)' != ''">
      &lt;TargetConnectionString xdt:Transform="Replace"&gt;$(TargetConnectionString)&lt;/TargetConnectionString&gt;
    </TargetConnectionStringXml>
    <TargetDatabaseXml Condition="'$(TargetDatabase)' != ''">
      &lt;TargetDatabaseName xdt:Transform="Replace"&gt;$(TargetDatabase)&lt;/TargetDatabaseName&gt;
    </TargetDatabaseXml>
    <TransformPublishXml>&lt;?xml version="1.0"?&gt;
        &lt;Project xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"&gt;
        &lt;PropertyGroup&gt;$(TargetConnectionStringXml)$(TargetDatabaseXml)&lt;/PropertyGroup&gt;
        &lt;/Project&gt;
    </TransformPublishXml>
    <TransformFile Condition="'$(SqlPublishProfilePath)' != ''">$(SqlPublishProfilePath)</TransformFile>
    <TransformFile Condition="'$(SqlPublishProfilePath)' == ''">$(Configuration).publish.xml</TransformFile>
    <TransformFile Condition="'$([System.IO.Path]::IsPathRooted($(TransformFile)))' == 'False'">$(MSBuildProjectDirectory)\$(TransformFile)</TransformFile>
    <!-- In order to do a transform, we HAVE to change the SqlPublishProfilePath-->
    <BuildDefinitionName Condition="'$(BuildDefinitionName)' ==''">VSBuild</BuildDefinitionName>
    <TransformOutputFile>$(MSBuildProjectDirectory)\$(BuildDefinitionName)_$(Configuration).publish.xml</TransformOutputFile>
    <TransformScope>$([System.IO.Path]::GetFullPath($(MSBuildProjectDirectory)))</TransformScope>
    <TransformStackTraceEnabled Condition="'$(TransformStackTraceEnabled)'==''">False</TransformStackTraceEnabled>
  </PropertyGroup>
  <UsingTask TaskName="ParameterizeTransformXml" AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v$(VisualStudioVersion)\Web\Microsoft.Web.Publishing.Tasks.dll" />
  <Target Name="BeforeBuild" Condition="('$(TargetDatabase)' != '' Or '$(TargetConnectionString)' != '')">
    <Message Text="The Target Database: '$(TargetDatabase)' and Connection String: '$(TargetConnectionString)'" Importance="high" />
    <!-- If TargetDatabase or TargetConnectionString is passed in
        Then we use the tokenize transform to create a parameterized sql publish file-->
    <Error Condition="!Exists($(TransformFile))" Text="The SqlPublish Profile '$(TransformFile)' does not exist, please specify a valid file using msbuild /p:SqlPublishProfilePath='Path'" />
    <ParameterizeTransformXml Source="$(TransformFile)" IsSourceAFile="True" Transform="$(TransformPublishXml)" IsTransformAFile="False" Destination="$(TransformOutputFile)" IsDestinationAFile="True" Scope="$(TransformScope)" StackTrace="$(TransformStackTraceEnabled)" SourceRootPath="$(MSBuildProjectDirectory)">
    </ParameterizeTransformXml>
  </Target>
 

So all you have to do is put those three blocks of XML at the bottom of your .sqlproj file, and then call msbuild with /p:TargetDatabase=DBName;TargetConnectionString="Data Source=DBServer;User ID=sa;Password=password";DBDeployOnBuild=True to get the database project to build and deploy to the database you want.
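
For example, a full invocation ends up looking something like this, mirroring the pattern above (the project file, database name, server, and credentials are placeholders):

msbuild MyDatabase.sqlproj /p:TargetDatabase=Feature42_CI;TargetConnectionString="Data Source=DBServer;User ID=sa;Password=password";DBDeployOnBuild=True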

If you’ve got questions, post ‘em — I’m writing this at 1:30 in the morning so I’m not at my most lucid :-)

Get-Command in PowerShell 3 (NOTE: CTP2 Bug causes module loading)

I don’t normally blog about the bugs I find in beta software, but I posted this bug to PowerShell’s Connect and I feel like it got ignored and not voted on, so I’m going to try to explain myself better here … The bug is on Connect, but let me talk to you first about how Get-Command is supposed to work.

In PowerShell, Get-Command is a command that serves two purposes: first it lets you search for commands using verb, noun, wildcards, module names etc. and then it also returns metadata about commands. In PowerShell 2, it could only search commands that were in modules (or snapins) you had already imported, or executables & scripts that were in your PATH.

So here’s the deal: Get-Command has always behaved differently when it thinks you’re searching. The only way it can tell that you’re searching is that you don’t provide a full command name. So, if you use a wildcard (e.g.: Get-Command Get-Acl* or even Get-Command Get-Ac[l]), or search using a Noun or Verb (e.g.: Get-Command -Verb Get or Get-Command -Noun Acl or even Get-Command -Verb Get -Noun Acl), then PowerShell assumes you’re searching (and won’t throw an error when no command is found).

In PowerShell 3, because modules can be loaded automatically when you try to run a command from them, Get-Command had to be modified to be able to return commands that aren’t already loaded. The problem the PowerShell team faced is that in order to get the metadata about a command, they needed to actually import the module. What they came up with is that if you’re searching … then Get-Command will not load modules which aren’t already loaded. If you specify a full command name with no wildcards, then PowerShell will load any module(s) where it finds a matching command in order to get the metadata (parameter sets, assembly info, help, etc). And of course, if you specify a full command that doesn’t exist, you’ll get an error!

Perhaps a few examples will help:

Launch PowerShell 3 using:

powershell -noprofile -noexit -command "function prompt {'[$($myinvocation.historyID)]: '}"
 

And then try this, noticing how much more information you get when you specify a specific full name:


[1]: Get-Module
[2]: Import-Module Microsoft.PowerShell.Utility
[3]: Get-Command -Verb Get -Noun Acl | Format-List

Name             : Get-Acl
Capability       : Cmdlet
Definition       : Get-Acl
Path             :
AssemblyInfo     :
DLL              :
HelpFile         :
ParameterSets    : {}
ImplementingType :
Verb             : Get
Noun             : Acl


[4]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}

[5]: Get-Command Get-Acl | Format-List

Name             : Get-Acl
Capability       : Cmdlet
Definition       : Get-Acl [[-Path] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]

                   Get-Acl -InputObject <psobject> [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]

                   Get-Acl [[-LiteralPath] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]
Path             :
AssemblyInfo     :
DLL              : C:\Windows\Microsoft.Net\assembly\GAC_MSIL\
                   Microsoft.PowerShell.Security\
                   v4.0_3.0.0.0__31bf3856ad364e35\
                   Microsoft.PowerShell.Security.dll
HelpFile         : Microsoft.PowerShell.Security.dll-Help.xml
ParameterSets    : {[[-Path] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>],
                   -InputObject <psobject> [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>],
                   [[-LiteralPath] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]}
ImplementingType : Microsoft.PowerShell.Commands.GetAclCommand
Verb             : Get
Noun             : Acl


[6]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Security       {ConvertFrom-Sec...}
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}
 

But there are several problems:

Get-Command has another parameter: -Module, which allows you to specify which modules should be searched, and in PowerShell 3, it changes the behavior in weird (buggy) ways:

  1. If you specify a single module, then that module is imported (to search it more thoroughly?), even if you specify a specific command that’s not in that module.
  2. If you specify a single module that does not have a command that matches, then Microsoft.PowerShell.Management is loaded also. I don’t know why yet.
  3. If you specify more than one module, and you’re searching, and none of them have a command that matches … it’s just as though you hadn’t specified modules, and nothing unexpected happens.
  4. If you specify more than one module, and a specific command, then it gets really weird:
    • If the command is in one (or more) of the specified modules, the first module (in PATH order, not the order you specified) which you listed that has the command is imported.
    • If it’s a valid command in a different module, the first module with the command is loaded … and so is Microsoft.PowerShell.Management. I don’t know why! Oh, and you still get the error because it can’t find the command where you told it to look.

I filed a bug on Connect to cover that last scenario, where the module containing the command is loaded even though you gave Get-Command a list of modules to look in. Here’s another example. Notice that even though all I do here is run the same command over and over (I added some Get-Module calls to show you WHY you get these results, but it’s the same without them), I get different results:


[1]: Import-Module Microsoft.PowerShell.Utility
[2]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}


[3]: Get-Command Get-Acl -module (Get-Module) # Passes one module
Get-Command : The term 'get-acl' is not recognized as the name of a
cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the
path is correct and try again.

[4]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Management     {Add-Computer, ...}
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}


[5]: Get-Command Get-Acl -module (Get-Module) # Passes two modules
Get-Command : The term 'get-acl' is not recognized as the name of a
cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the
path is correct and try again.

[6]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Management     {Add-Computer, ...}
Manifest   Microsoft.PowerShell.Security       {ConvertFrom-Sec...}
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}

[7]: # This time it will include Microsoft.PowerShell.Security!
[7]: Get-Command Get-Acl -module (Get-Module)

Capability      Name                ModuleName
----------      ----                ----------
Cmdlet          Get-Acl             Microsoft.PowerShell.Security
 

Rich formatting for PowerShell help

[updated] Ok, I just updated this with a new post on PoshCode. I posted the HtmlHelp module to PoshCode for generating HTML web pages based on the help for functions or cmdlets. It basically has one command: Get-HtmlHelp, which takes a couple of parameters now. The only mandatory parameter is the name of the command you want help for; in that case, the HTML is output to the pipeline and can be redirected into a file.

Get-HtmlHelp Get-HtmlHelp | Set-Content Get-HtmlHelp.html
 

The markup generated is (I hope) reasonable and lightweight, with some simple css rules pre-applied. Feel free to customize the script to generate help however you like.

[new] Generating many at once

I forgot to mention the other parameters on Get-HtmlHelp. They’re pretty cool, because if you want to upload your help you can do so with this. Say you created a module, and you wanted to generate all the help into files for uploading. You want to set the -BaseUrl to the location you will upload them to, and then use the -OutputFolder parameter to generate an html file for each command into the specified folder:

Get-Command -Module ModuleName |
Get-HtmlHelp -BaseUrl http://HuddledMasses.org/HtmlHelp/ -OutputFolder ~\sites\HuddledMasses\HtmlHelp\
 

Now you can just take those files and upload them to the right spot on your website. I actually have some scripts which I can wrap around this to post the help to a wiki, but you’re just going to have to wait for that until the next time I get inspired to work on help …

Show-Help

I did include a little function in the comments and in the help for Get-HtmlHelp which uses ShowUI to display the rich formatted help in a popup window:

function Show-Help {
[CmdletBinding()]
param([String]$Name)  
   Window { WebBrowser -Name wb } -On_Loaded {
      $wb.NavigateToString((Get-HtmlHelp $Name))
      $this.Title = "Get-Help $Name"
   } -Show
}
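
# For example, once the ShowUI module is available (an assumption here), you can just call:
Show-Help Get-HtmlHelp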
 

So anyway, enough of that. When you run it, it looks like this:

[Screenshot: the Show-Help window displaying the rich-formatted help]

Did you know PowerShell can use Selenium?

This is sort-of a place-holder for a full-length post that I really ought to write about driving web testing from PowerShell using Selenium. I actually have a little module around for doing that with WaTiN, but honestly the Selenium project seems to be a lot more active, and has quite a bit of muscle behind it since they’ve merged with WebDriver…


Add-Type -path ~\Downloads\selenium-dotnet-2.16.0\net40\WebDriver.dll

# Navigate to google in IE (or Firefox, Chrome, Opera, etc)
$driver = New-Object OpenQA.Selenium.IE.InternetExplorerDriver
$driver.Url = "http://google.com"

# Type PowerShell into the query box, the page will update via AJAX
# Note we won't hit enter or anything
$q = $driver.FindElementByName("q")
$q.SendKeys("PowerShell")

# Use a CSS selector to find the first result link and click it
$driver.FindElementByCssSelector("li.g h3.r a").Click()
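
# When you're finished, close the browser and end the WebDriver session
$driver.Quit()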
 

One Catch

If you try this with IE and you get the error Unexpected error launching Internet Explorer. Protected Mode must be set to the same value (enabled or disabled) for all zones ... it means exactly what it says. You need to open “Internet Options” from your start menu (or from IE), go to the Security tab, and go through each “zone” and set the “Enable Protected Mode” check box to the same value for each zone (either all checked, obviously the most secure, or all unchecked). I’m not going to debate whether setting them all unprotected is a good idea … I set mine to all protected, but I don’t generally use IE anyway.

If you want more help, Selenium’s documentation is great, and there’s a section on Getting Started with Selenium WebDriver which I found quite helpful (make sure your examples are in “csharp” and you can almost just copy and paste — someone should offer to do them in PowerShell).

If you want more information about the Internet Explorer driver and this problem in particular, the short answer is that “Protected Mode” is a security boundary, so if you cross over it the COM automation object doesn’t work; thus, you need to make sure you always stay on the same side. There’s a good discussion on the mailing list archive about how it works and why, as well as a weird alternative documented in the Selenium JavaDocs.

PowerShell 3 – Finally on the DLR!

For those of you living in a cave: PowerShell 3 will be released in Windows 8, and we got a CTP at roughly the same time as the Windows 8 Developer Preview was released (at Microsoft’s new //Build/ conference in September 2011). A second CTP was released just in time for Christmas.

I’ve been playing with PowerShell 3 for a few months now, and I guess it’s long past time I started blogging about it.

There are a lot of new things coming in this release, but for me, the biggest change is the fact that PowerShell is now based on the Dynamic Language Runtime, a runtime environment that adds a set of services for dynamic languages to the Common Language Runtime (CLR), which is the core of the .NET Framework. The DLR makes it easier to develop dynamic languages to run on the .NET Framework. Of course, PowerShell is a dynamic language that runs on the .NET framework, but it was originally begun before the DLR had been released, so it’s only now that it’s finally been adapted to the DLR.

However, although PowerShell 3 is implemented using the DLR, it’s not a DLR language in every way that IronPython or IronRuby are. Let me borrow a couple of graphics from the DLR overview documentation.

The DLR Overview

DLR Overview

You can see there are three major sections to the DLR as it’s available on CodePlex: hosting, runtime, and language. However, not all of the DLR actually shipped in the .NET Framework 4.0 CLR.

The DLR shipped in CLR 4

DLR Shipped in CLR4

PowerShell 3 takes advantage of all (or most) of what shipped in the CLR, but since the PowerShell team wasn’t willing to be responsible for shipping the rest of the DLR in the operating system, they didn’t implement the rest of it. Which is to say, PowerShell 3 is using the DLR Language Implementation code, with Shared AST and Expression trees, as well as the DynamicObject and Call Site Caching portions of the runtime, but none of the Common Hosting pieces like ScriptRuntime, ScriptScope, ScriptSource, or CompiledCode …

This means that you cannot use the same hosting APIs for PowerShell that you use for IronPython or IronRuby. However, even though you’re stuck using the same hosting APIs that you used with PowerShell 2 … you do get to use dynamic instead of PSObject when you’re working with the output in C#.

This really is a big deal

I wouldn’t care to speculate how many of the changes you’ll see in PowerShell 3 are directly due to the conversion to the DLR, but there are a few changes that you ought to be aware of. The first thing you’ll probably notice is the difference in execution and performance. Anything you’ve learned about the relative performance of scripts vs. functions vs. cmdlets, and the load time of binary vs. script modules, is going to go right out the window with PowerShell 3: scripts and functions are no longer (re)interpreted each time they’re run, but are compiled, executed, and (sometimes) cached. The result is that initial runs of scripts and imports of script modules are sometimes slower than they used to be, but subsequent runs of the same script, and of functions from script modules, run much faster. This applies in particular to actual scripts in files and pre-defined functions in modules: running a function repeatedly is now much faster than pasting the same code repeatedly into the console.
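
If you want a rough feel for the difference yourself, here’s an unscientific sketch using Measure-Command; the absolute numbers (and exactly when the engine decides to compile and cache) will vary:

function Test-Loop { $sum = 0; foreach ($i in 1..100000) { $sum += $i }; $sum }

# Compare the first run against later runs of the same function
1..5 | ForEach-Object { (Measure-Command { Test-Loop }).TotalMilliseconds }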

A more subtle, but significant difference is the change to PSObject.

In PowerShell 3, PSObject is a true dynamic object, and thus the output of cmdlets or scripts called in C# can be used with the dynamic keyword in C# instead of with the pseudo-reflection methods which are required for working with PSObject. However, this is just the tip of the iceberg, so to speak.

In PowerShell 2, all of PowerShell’s Extended Type System (ETS) was based on PSObject. New members were always added to a PSObject wrapped around the actual “BaseObject”, regardless of whether they came from a types.ps1xml file or from calling Add-Member on an object. If you use Add-Member on strongly typed objects that are not already wrapped in a PSObject, you have to specify the -Passthru parameter and capture the output in order to have your object wrapped into a PSObject that the new member can be added to. In addition, when you cast an object to a specific type, those ETS members are mostly lost. Take this script for example:


$psObject = Get-ChildItem
$psObject.Count
$Count1 = ($psObject | where { $_.PSIsContainer }).Count

[IO.FileSystemInfo[]]$ioObject = Get-ChildItem
$ioObject.Count
$Count2 = ($ioObject | where { $_.PSIsContainer }).Count

$Count3 = ($ioObject | where { $_ -is [IO.DirectoryInfo] }).Count
 

In PowerShell 2, $Count1 and $Count3 will be the number of folders in the current directory, but $Count2 will always be ZERO, because the PSIsContainer property is actually an ETS property that’s lost when you cast the object to FileSystemInfo (and therefore it always evaluates as $null, so nothing passes the Where filter).

However, in PowerShell 3 that’s no longer true. PowerShell now works with everything as dynamic objects, and Add-Member no longer needs the PSObject to keep track of these ETS members. With this script, $Count1, $Count2, and $Count3 will now all be equal, as expected. Obviously the -Passthru switch on Add-Member is now only needed when you’re trying to pipeline things, and not for simple assignments. However, there may also be other implications for when things get wrapped into a PSObject, and when it matters.

I think you’ll agree that having PowerShell on the DLR is awesome! But be aware that there are a few inconsequential breaking changes hiding in this kind of stuff. For example, after running that script above, try these three lines on PowerShell 2 and PowerShell 3 CTP2:


$Count1 -eq $Count2
$e = $ioObject[0] | Add-Member NoteProperty Note "This is a note" -Passthru
$f = $ioObject[0] | Add-Member NoteProperty Note "This is a note" -Passthru
 

In PowerShell 2, you’ll get False, and then the next two lines will work fine. In PowerShell 3 the first line will return True, and since Add-Member actually affects the underlying object even when it’s not wrapped in a PSObject, the third line will actually cause an error, because “Add-Member : Cannot add a member with the name “Note” because a member with that name already exists.”

Anyway, I’m sure I’ll have more to write about the DLR and the changes it’s bringing to PowerShell, but for now, I hope that’s enough to get you thinking ;-)

Arrange – Act – Assert: Intuitive Testing

Today I have a new module to introduce you to. It’s a relatively simple module for testing, and you can pick it up in short order and start testing your scripts, modules, and even compiled .Net code. If you put it together with WASP you can pretty much test anything ;-)

The basis for the module is the arrange-act-assert model of testing. First we arrange the things we’re going to test: set up data structures or whatever you need for testing. Then we act on them: we perform the actual test steps. Finally, we assert the expected output of the test. Normally, the expectation is that during the assert step we’ll return $false if the test failed, and that’s all there is to it. Of course, there’s plenty more to testing, but let’s move on to my new module.

The module is called PSaint (pronounced “saint”), and it stands, loosely, for PowerShell Arrange-Act-Assert in testing. Of course, what it stands for isn’t important, just remember the name is PSaint :)

PSaint is really a very simple module, with only a few functions. There are two major functions which we’ll discuss in detail: Test-Code and New-RhinoMock, and then a few helpers which you may or may not even use:

Set-TestFilter

Sets filters (include and/or exclude) for the tests by name or category.

Set-TestSetup (alias “Setup”)

Sets the test setup ScriptBlock which will be run before each test.

Set-TestTeardown (alias “Teardown”)

Sets the test teardown ScriptBlock which will be run after each test.

Assert-That

Asserts something about an object (or the output of a scriptblock) and throws if that assertion is false. This function supports asserting that an exception should be thrown, or that a test is false … and supports customizing the error message as well.

Assert-PropertyEqual

This is a wrapper around Compare-Object to compare the properties of two objects.
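
Here’s a hedged sketch of how those helpers wrap around your tests. I’m going only from the descriptions above, so treat the exact parameter names (like -Category) and the positional ScriptBlock syntax as assumptions:

# Only run tests tagged with the "Output" category
Set-TestFilter -Category Output

# Run before each test ...
Setup    { $script:TempFile = [IO.Path]::GetTempFileName() }
# ... and clean up after each test
Teardown { Remove-Item $script:TempFile -ErrorAction SilentlyContinue }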

How to test with PSaint: Test-Code

Test-Code (alias “Test”) is the main driver of functionality in PSaint, and you use it to define the tests that you want to run. Let’s jump to an example or two so you can see the usefulness of this module.

Let’s start with an extremely simple function that we want to write: New-Guid. We want a function that generates a valid random GUID as a string. We’ll start by writing a couple of tests. First we’ll test that the output of the function is a valid GUID.

test "New-Guid outputs a Guid" {
   act {
      $guid = New-Guid
   }
   assert {
      $guid -is [string]
      New-Object Guid $guid
   }
}
 

Now, to verify that the test works, you should define this function (the GUID-looking thing is one letter short) and then run that test:

function New-Guid { "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaa" }
 

Another proof that it works would be that it should fail on this function too, because “x” is not a valid character in a Guid:

function New-Guid { "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" }
 

So, let’s write a minimal New-Guid that actually generates a valid Guid:

function New-Guid { "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }
 

If you run our test on that, you will see:

   Result: Pass

Result Name                          Category
------ ----                          --------
Pass   New-Guid outputs a Guid
 

If you don’t like the fact that the Category is empty, you could add a category or two to the end of our test. We should also switch to using Assert-That if we want to know which test in the assert failed. Finally, we want to write another test which would test that New-Guid doesn’t just return the same Guid every time, the way ours does right now:

test "New-Guid outputs a Guid" {
   act {
      $guid = New-Guid
   }
   assert {
      Assert-That { $guid -is [string] } -FailMessage "New-Guid returned a $($guid.GetType().FullName)"
      New-Object Guid $guid  # Throws relevant errors already
   }
} -Category Output, ValidGuid

test "New-Guid outputs different Guids" {
   arrange {
      $guids = @()
      $count = 100
   }
   act {
      # generate a bunch of Guids
      for($i=0; $i -lt $count; $i++) {
         $guids += New-Guid
      }
   }
   assert {
      # compare each guid to all the ones after it
      for($i=0; $i -lt $count; $i++) {
         for($j=$i+1; $j -lt $count; $j++) {
            Assert-That ($guids[$i] -ne $guids[$j]) -FailMessage "There were equal Guids: $($guids[$i])"
         }
      }
   }
} -Category Output, RandomGuids
 

Now, we have to actually fix our New-Guid function to generate real random Guids:

function New-Guid { [System.Guid]::NewGuid().ToString() }
 

And at that point, we should have a function, and a couple of tests that verify its functionality…

The finer points of assertions

One thing you’ll notice the first time you use Get-Member after loading the PSaint module is that a few script methods have been added to everything. I did this because I found myself writing the same Assert-That calls over and over, and decided that it would be slicker to make these extension methods than to write new functions for each one:

MustBeA([Type]$Expected,[string]$Message)
MustBeFalse([string]$Message)
MustBeTrue([string]$Message)
MustEqual([Object]$Expected,[string]$Message)
MustNotEqual([Object]$Expected,[string]$Message)
 

There’s also a MustThrow([Type]$Expected, [string]$Message) which can be used on script blocks (note that this function executes the ScriptBlock immediately, so be careful how you use it).

We can use these to tidy up our tests quite a bit, while still getting good error messages when tests fail:

test "New-Guid outputs a Guid String" {
   act {
      $guid = New-Guid
   }
   assert {
      $guid.MustBeA( [string] )
      New-Object Guid $guid # Throws relevant errors already
   }
} -Category Output, ValidGuid

test "New-Guid outputs different Guids" {
   arrange {
      $guids = @()
      $count = 100
   }
   act {
      # generate a bunch of Guids
      for($i=0; $i -lt $count; $i++) {
         $guids += New-Guid
      }
   }
   assert {
      # compare each guid to all the ones after it
      for($i=0; $i -lt $count; $i++) {
         for($j=$i+1; $j -lt $count; $j++) {
            $guids[$i].MustNotEqual($guids[$j])
         }
      }
   }
} -Category Output, RandomGuids
 

COM Objects

PSaint also has a wrapper for COM objects to help with testing them. It adds GetProperty and SetProperty methods to allow you to access COM object properties which don’t show up on boxed COM objects (a common problem when working with MSOffice, for instance). It also adds InvokeMethod for COM objects to invoke methods that don’t show up for similar reasons. These, of course, only help you if you’re already fairly literate with the COM object in question.
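
For example, a hedged sketch using Excel as the COM object (the member names come from the description above, but treat the exact calling pattern as an assumption):

$excel = New-Object -ComObject Excel.Application
# GetProperty / SetProperty / InvokeMethod are the members PSaint adds to COM objects
$excel.SetProperty('Visible', $true)
$excel.GetProperty('Visible')
$excel.InvokeMethod('Quit')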

Mock Objects

PSaint includes New-RhinoMock, a function for generating a new mock object using RhinoMocks (which is included). Rhino Mocks is a BSD-licensed dynamic mock object framework for the .Net platform. Its purpose is to ease testing by allowing the developer to create mock implementations of custom objects and verify the interactions using unit testing.

I have to admit that this New-RhinoMock function is incomplete, and exposes only a fraction of the options and power in RhinoMocks, but it’s been sufficient for the few times when I’ve wanted to actually mock objects from PowerShell, so I’m including it here.

For those of you (developers) who want to know why RhinoMocks instead of your favorite mocking framework, the answer is astonishingly simple: it required the fewest generic methods (which are impossible to call in PowerShell 2).

Ramblings about computer markets …

This is what I call a stream of consciousness, edited. Please don’t rip my head off, call me names, etc.

Microsoft is a product of the commodity hardware era.

That is, Microsoft is a company that couldn’t have existed previously, in an era where computers cost tens of thousands of dollars (or hundreds of thousands of dollars). When the computers themselves were expensive, the hardware was the item of value.

There was only one business model.

Companies producing them were not thinking about the software running on them as valuable (and even missed patenting things like windowed interfaces, pointing devices, and networking protocols), because they thought the valuable thing would be the hardware forever.

Modern business models for personal computing.

In the modern era, we have three major business models: hardware, software, service.

Apple represents the hardware model: they’re making billions selling top-end hardware (phones, laptops, computers, tablets). They’re basically giving away their software. They’re trying to commodify the software — making it trivial and making it easier for thousands of companies and even individuals to produce competing versions of every possible application on their hardware.

Microsoft represents the software model: they’re making billions selling software licenses for software that runs on any kind of hardware. They’re actively trying to commodify hardware, making their software run on any kind of hardware from hundreds of manufacturers, and driving prices of home computers, netbooks, and tablets down to a couple of hundred dollars.

RedHat (and Linux vendors in general) represents the service model: they’re making millions selling service contracts for free software that runs on that same really cheap hardware that Microsoft’s model has been driving.

How might that affect service, software, and hardware?

It wouldn’t be any surprise if service companies didn’t care much about how user-friendly the software is: after all, they only make money if you need support to use it.

It would also not be surprising if software companies didn’t care how confusing the hardware choices are — they want to make sure that there are lots of hardware companies, because that’s how you maintain the commodity status of the hardware.

And of course, it wouldn’t be a surprise if hardware companies made high end hardware, released their software updates freely or cheaply, treating the software like firmware designed to run only on their specific hardware models. These companies might have the lowest barriers to switching hardware architectures and creating new hardware form factors, since they’re providing a stand-alone platform (the software is like firmware) without much regard for disruption of software platforms.

Which market is easier to enter?

It seems to me like the most sustainable of these models, and more importantly, the most favorable to the consumer … is the software model. I prefer business models which encourage competition, and I think that hardware and service models do not.

On the one hand it’s easier to grow a software company than it is to grow a new service company. It’s very hard to compete with the big guys on service because of reach (you can’t provide world-wide service), and as a result, very few service companies make it past the mom-and-pop size to medium-sized businesses employing more than 10 people.

On the other hand, it’s easier to create a new software company than it is to create a new hardware company. It’s very hard to compete with the big guys on hardware because of costs (you can’t get hardware costs down without substantial volume).

You only have to look at the past few years to see that it takes a long time for a new hardware company to grow to a useful size, and that service companies tend to consolidate long before they become competitive.

But you’re not taking X into account

Of course, this is a fairly simple interpretation of the personal computer marketplace. I know that. Tell me something I don’t know below.
