Tag Archives: Scripting

Scripts I’ve written or customized. Mostly Windows scripts, but all sorts of automation tools pop up here.

Why I’m not excited about Windows 8 Certified Store Apps

It’s come up a few times recently, and I’m frustrated enough that I thought I’d just post this here for reference.

The Windows 8 App Certification requirements include one particular requirement that makes me (as a life-long scripter) very unhappy:

3.9 All app logic must originate from, and reside in, your app package

Your app must not attempt to change or extend the packaged content through any form of dynamic inclusion of code or data that changes how the application interacts with the Windows Runtime, or behaves with regard to Store policy. It is not permissible, for example, to download a remote script and subsequently execute that script in the local context of your app package

Bottom line: you cannot write extensible apps for the Windows Store. In fact, although Windows PowerShell is shipped even on Windows RT, you can’t use it from a certified Windows 8 Store app.

I don’t know about you, but the apps that I use on a regular basis are almost all extensible, and most of them have both plugins and scripting:

  • Visual Studio (Thank goodness for NuGet, ReSharper, StyleCop, GhostDoc, NCrunch etc)
  • Notepad++ and Sublime Text 2 and PyCharm
  • PowerShell and ConEmu
  • Microsoft Office: Word, Excel
  • KeePass
  • Firefox, and even Chrome and IE
  • XChat and even Trillian

I’ve been using Windows 8 for months now, and every app pinned on my taskbar is extensible. Leaving aside video games, I can only see three apps I’ve used in the last month which aren’t readily extensible: PeaZip (which does have some scripting capabilities, but I don’t use them since I script from PowerShell), Zune, and Trillian (which is technically extensible, but all the plugins I use ship in the box).

Even Windows File Manager has shell extensions.

Now, I’m not saying I won’t use an app that’s not extensible … but without even thinking about it, most of the apps I use are scriptable and/or extensible, and I bet that’s true of most of the apps you use too. As a side note, one of the coolest new phone apps from Microsoft is on{x}, an automation app which is only available on Android (and can’t ever pass validation on the Windows Store because of this policy).

So yeah. Most of the stuff I do with computers is about automation, scripting, robotics… or gaming. I can’t see myself getting really fired up about that App Store stuff.

Let me know when 3.9 is revoked.

Now, I have faith in Microsoft. I’m sure they’re not trying to kill off running multiple windows on a desktop, but I don’t understand why they would write terms in their certification requirements that would prevent an app like Sublime Text 2, KeePass, or Firefox from being written. I certainly hope that they can be convinced to rewrite that constraint to allow for users who choose to install modules and scripts.

As a side note, there’s another point in there that I’m not too happy with either:

4.4 Your app must not be designed or marketed to perform, instruct, or encourage tasks that could cause physical harm to a customer or any other person

We would consider an app that allows for control of a device without human manipulation, or that is marketed for use to resolve emergency or lifesaving situations to violate this requirement.

At first, that one seemed fine. But when you read the details, it’s clear that any app for robotics/AI that wants to interface with external devices is basically going to be refused. Your Lego Mindstorms apps are only allowed if they’re remote controls requiring human manipulation, because they … might cause harm?

As long as we’ve got desktop mode and sideloading of non-certified apps, we’re ok (I guess), but Microsoft needs to stop limiting certified apps before they alienate the hackers and tinkerers. I’m a big fan (and author) of Open Source software, but I don’t want a world where all the commercial software companies lock out the geeks and our only option is Open Source.

Arrange – Act – Assert: Intuitive Testing

Today I have a new module to introduce you to. It’s a relatively simple module for testing, and you can pick it up in short order and start testing your scripts, modules, and even compiled .Net code. If you put it together with WASP you can pretty much test anything ;-)

The basis for the module is the arrange-act-assert model of testing. First we arrange the things we’re going to test: set up data structures or whatever you need for testing. Then we act on them: we perform the actual test steps. Finally, we assert the expected output of the test. Normally, the expectation is that during the assert step we’ll return $false if the test failed, and that’s all there is to it. Of course, there’s plenty more to testing, but let’s move on to my new module.

The module is called PSaint (pronounced “saint”), and it stands, loosely, for PowerShell Arrange-Act-Assert in testing. Of course, what it stands for isn’t important, just remember the name is PSaint :)

PSaint is really a very simple module, with only a few functions. There are two major functions which we’ll discuss in detail: Test-Code and New-RhinoMock, and then a few helpers which you may or may not even use:

Set-TestFilter

Sets filters (include and/or exclude) for the tests by name or category.

Set-TestSetup (alias “Setup”)

Sets the test setup ScriptBlock which will be run before each test.

Set-TestTeardown (alias “Teardown”)

Sets the test teardown ScriptBlock which will be run after each test.

Assert-That

Asserts something about an object (or the output of a script block) and throws if that assertion is false. This function supports asserting that an exception should be thrown, or that a condition is false … and supports customizing the error message as well.

Assert-PropertyEqual

This is a wrapper around Compare-Object to compare the properties of two objects.

How to test with PSaint: Test-Code

Test-Code (alias “Test”) is the main driver of functionality in PSaint, and you use it to define the tests that you want to run. Let’s jump to an example or two so you can see the usefulness of this module.

Let’s start with an extremely simple function that we want to write: New-Guid. We want a function that generates a valid random GUID as a string. We’ll start by writing a couple of tests. First we’ll test that the output of the function is a valid GUID.

test "New-Guid outputs a Guid" {
   act {
      $guid = New-Guid
   }
   assert {
      $guid -is [string]
      New-Object Guid $guid
   }
}
 

Now, to verify that the test works, you should define this function (the GUID-looking thing is one letter short) and then run that test:

function New-Guid { "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaa" }
 

As further proof that the test works, it should also fail on this function, because “x” is not a valid character in a Guid:

function New-Guid { "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" }
 

So, let’s write a minimal New-Guid that actually generates a valid Guid:

function New-Guid { "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" }
 

If you run our test on that, you will see:

   Result: Pass

Result Name                          Category
------ ----                          --------
Pass   New-Guid outputs a Guid
 

If you don’t like the fact that the Category is empty, you could add a category or two to the end of our test. We should also switch to using Assert-That if we want to know which assertion in the assert block failed. Finally, we want to write another test to verify that New-Guid doesn’t just return the same Guid every time, the way ours does right now:

test "New-Guid outputs a Guid" {
   act {
      $guid = New-Guid
   }
   assert {
      Assert-That { $guid -is [string] } -FailMessage "New-Guid returned a $($guid.GetType().FullName)"
      New-Object Guid $guid  # Throws relevant errors already
   }
} -Category Output, ValidGuid

test "New-Guid outputs different Guids" {
   arrange {
      $guids = @()
      $count = 100
   }
   act {
      # generate a bunch of Guids
      for($i=0; $i -lt $count; $i++) {
         $guids += New-Guid
      }
   }
   assert {
      # compare each guid to all the ones after it
      for($i=0; $i -lt $count; $i++) {
         for($j=$i+1; $j -lt $count; $j++) {
            Assert-That ($guids[$i] -ne $guids[$j]) -FailMessage "There were equal Guids: $($guids[$i])"
         }
      }
   }
} -Category Output, RandomGuids
 

Now, we have to actually fix our New-Guid function to generate real random Guids:

function New-Guid { [System.Guid]::NewGuid().ToString() }
 

And at that point, we should have a function, and a couple of tests that verify its functionality…

The finer points of assertions

One thing you’ll notice the first time you use Get-Member after loading the PSaint module is that a few script methods have been added to everything. I did this because I found myself writing the same Assert-That calls over and over, and decided it would be slicker to make these extension methods than to write a new function for each one:

MustBeA([Type]$Expected,[string]$Message)
MustBeFalse([string]$Message)
MustBeTrue([string]$Message)
MustEqual([Object]$Expected,[string]$Message)
MustNotEqual([Object]$Expected,[string]$Message)
 

There’s also a MustThrow([Type]$Expected, [string]$Message) which can be used on script blocks (note that this function executes the ScriptBlock immediately, so be careful how you use it).
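For instance, assuming MustThrow follows the same pattern as the other extension methods, you can assert that a script block fails (the file path here is made up for illustration):

# The ScriptBlock is executed immediately, so keep side effects in mind:
{ Get-Item "C:\NoSuchFile.txt" -ErrorAction Stop }.MustThrow( [System.Management.Automation.ItemNotFoundException], "Expected a missing-file error" )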

We can use these to tidy up our tests quite a bit, while still getting good error messages when tests fail:

test "New-Guid outputs a Guid String" {
   act {
      $guid = New-Guid
   }
   assert {
      $guid.MustBeA( [string] )
      New-Object Guid $guid # Throws relevant errors already
   }
} -Category Output, ValidGuid

test "New-Guid outputs different Guids" {
   arrange {
      $guids = @()
      $count = 100
   }
   act {
      # generate a bunch of Guids
      for($i=0; $i -lt $count; $i++) {
         $guids += New-Guid
      }
   }
   assert {
      # compare each guid to all the ones after it
      for($i=0; $i -lt $count; $i++) {
         for($j=$i+1; $j -lt $count; $j++) {
            $guids[$i].MustNotEqual($guids[$j])
         }
      }
   }
} -Category Output, RandomGuids
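While we’re here, the Setup and Teardown helpers described earlier follow the same pattern. This is just a minimal sketch (the temp-file scenario is made up, and I’m assuming the setup block’s variables need to be visible to the tests, hence the script: scope prefix):

setup {
   $script:TempFile = [IO.Path]::GetTempFileName()
}
teardown {
   Remove-Item $script:TempFile -ErrorAction SilentlyContinue
}

test "Out-File writes the text to disk" {
   act {
      "hello" | Out-File $script:TempFile
   }
   assert {
      Assert-That { (Get-Content $script:TempFile) -eq "hello" } -FailMessage "Unexpected file content"
   }
}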
 

COM Objects

PSaint also has a wrapper for COM objects to help with testing them. It adds GetProperty and SetProperty methods to allow you to access COM object properties which don’t show up on boxed COM objects (a common problem when working with MSOffice, for instance). It also adds InvokeMethod for COM objects to invoke methods that don’t show up for similar reasons. These, of course, only help you if you’re already fairly literate with the COM object in question.
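As a quick sketch of what that looks like against Excel (assuming it’s installed; Visible and Quit are standard Excel COM members, but check the PSaint docs for the wrapper’s exact behavior):

$excel = New-Object -ComObject Excel.Application
# Read and set a property that may not show up on the boxed COM object:
$visible = $excel.GetProperty("Visible")
$excel.SetProperty("Visible", $false)
# Invoke a method the same way:
$excel.InvokeMethod("Quit")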

Mock Objects

PSaint includes New-RhinoMock, a function for generating a new mock object using RhinoMocks (which is included). Rhino Mocks is a BSD-licensed dynamic mock object framework for the .Net platform. Its purpose is to ease testing by allowing the developer to create mock implementations of custom objects and verify the interactions using unit testing.

I have to admit that this New-RhinoMock function is incomplete, and exposes only a fraction of the options and power in RhinoMocks, but it’s been sufficient for the few times when I’ve wanted to actually mock objects from PowerShell, so I’m including it here.

For those of you (developers) who want to know why RhinoMocks instead of your favorite mocking framework, the answer is astonishingly simple: it had the fewest necessary generic methods (which are impossible to call in PowerShell 2).

More Custom Attributes for PowerShell (Parameter Transformation)

I wrote a post a while back about using custom attributes for PowerShell parameter validation, but when I did, I focused on the use of attributes to improve the error messages output by validation (specifically, by ValidatePattern).

There are many other things that can be done with custom attributes. PowerShell ships with two base types for attributes which derive from CmdletMetadataAttribute, and it applies special processing to parameters of functions or cmdlets which have these attributes: ValidateArguments and ArgumentTransformation. Since I’ve already written about custom argument validation, I figured a post about argument transformations would be appropriate.

It just so happens that I had cause to write such an attribute recently:


using System;
using System.ComponentModel;
using System.Management.Automation;
using System.Reflection;
using System.Text.RegularExpressions;
using System.Windows.Automation;

[AttributeUsage(AttributeTargets.Field | AttributeTargets.Property)]
public class StaticFieldAttribute : ArgumentTransformationAttribute {
   private Type _class;

   public override string ToString() {
      return string.Format("[StaticField(OfClass='{0}')]", OfClass.FullName);
   }

   public override Object Transform( EngineIntrinsics engineIntrinsics, Object inputData) {
      if(inputData is string && !string.IsNullOrEmpty(inputData as string)) {
         System.Reflection.FieldInfo field = _class.GetField(inputData as string, BindingFlags.Static | BindingFlags.Public);
         if(field != null) {
            return field.GetValue(null);
         }
      }
      return inputData;
   }
   
   public StaticFieldAttribute( Type ofClass ) {
      OfClass = ofClass;
   }

   public Type OfClass {
      get { return _class; }
      set { _class = value; }
   }  
}

This was written in C#, but you can wrap it up in an Add-Type -TypeDefinition call in PowerShell, as I did in the latest preview of my Windows Automation Scripts for PowerShell … and basically embed it in a script, a module, or your profile. A note of warning: you really should specify a namespace, to avoid type-name collisions.

However, if you can’t read C#, or if you’re not experienced with reflection, you may not even be able to tell what that code does, so let me explain quickly.

Basically, an ArgumentTransformationAttribute is very simple: it just has to have a Transform method which converts an input object into an output object. The Transform method has access to the PowerShell EngineIntrinsics class which gives you access to the current Host, the current SessionState to get variable values, etc. as well as being able to Invoke Commands or PSProviders…

In the example above, I’m trying to transform a string into an object. The objects I want are defined as static fields on a certain class, so I created an argument transformation which takes the string, looks for a static field with that name, and returns its value. I made the class that it looks on configurable, for flexibility, so the actual usage looks something like this:

param(
   [Parameter(Mandatory=$false)]
   [System.Windows.Automation.ControlType]
   [StaticField(([System.Windows.Automation.ControlType]))]$ControlType
)

This parameter will now take a ControlType object, or the NAME of one of the fields on ControlType (which it will look up and return), so instead of always having to call Select-UIElement -ControlType ([System.Windows.Automation.ControlType]::Button), I can just write Select-UIElement -ControlType Button … which is clearly a bit nicer to use.
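Side by side (Select-UIElement is from my WASP module), that’s the difference between these two calls:

# Without the transform, you have to pass the actual static field value:
Select-UIElement -ControlType ([System.Windows.Automation.ControlType]::Button)

# With [StaticField], the field name alone is enough:
Select-UIElement -ControlType Button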

PowerShell has one of these argument transformations included for use with credentials, so whenever you write a script that has a PSCredential parameter, you should decorate it with the CredentialAttribute like this:

param(
   [Parameter(Mandatory=$false)]
   [System.Management.Automation.PSCredential]
   [System.Management.Automation.Credential()]$Credential = [System.Management.Automation.PSCredential]::Empty
)

That one’s a little confusing because you leave off the “Attribute” part of the attribute’s name (i.e.: you don’t have to specify [System.Management.Automation.CredentialAttribute()]), so at first glance, it looks like you’re specifying the Credential type twice. Of course, in reality this is another use of parentheses in PowerShell: to specify an attribute, you use square brackets as with types, but with parentheses inside them (even if the attribute doesn’t require any parameters).

Type then Attribute, and non-mandatory parameters

When you specify a transformation on a parameter, you must be careful to specify the type first and then the transformation attribute (although this may seem counter-intuitive if you’re a developer). This puts the transformation closest to the variable name, and ensures that it is called before the value is cast to the parameter type.

When you don’t specify a parameter as mandatory, and you do specify a transformation attribute, the attribute’s Transform method will still be called, even if the user doesn’t provide you with an input. This is so that the transform attribute can provide a default value if need be. However, it’s called with a null value for the input data — this means that your attribute needs to be able to deal with a null value and output something which the cmdlet or script can deal with.

Depending on your use cases, it may be enough to just output null, but in the case of the Credential parameter, passing an empty string causes the credential entry dialog to pop up. That’s the desired behavior if you want the parameter to be mandatory, but otherwise, you need to be sure to provide a default value such as the Empty credentials — this will suppress the prompt, but it will return an empty credential object you can easily distinguish from a passed-in value. In any case, if nothing is passed and the parameter isn’t marked mandatory, then even if the transform attribute creates a default value, $PSBoundParameters should still be empty.

One Last Hurrah

To make this post as useful as I can, I’ve written a TransformAttribute that takes a script block to do the transform with, along with a few examples of using it. In these examples I always assume the input is a string, which gets converted into: the value of an environment variable, a static field value (this example does exactly the same thing as the class above, but using the generic ScriptBlock-based transform), and even an email address (by looking a user name up in Active Directory). I’ll post the code in PowerShell format this time, with the requisite Add-Type wrapped around it:


Add-Type -TypeDefinition @"
using System;
using System.ComponentModel;
using System.Management.Automation;
using System.Collections.ObjectModel;

[AttributeUsage(AttributeTargets.Field | AttributeTargets.Property)]
public class TransformAttribute : ArgumentTransformationAttribute {
   private ScriptBlock _scriptblock;
   private string _noOutputMessage = "Transform Script had no output.";

   public override string ToString() {
      return string.Format("[Transform(Script='{{{0}}}')]", Script);
   }

   public override Object Transform( EngineIntrinsics engine, Object inputData) {
      try {
         Collection<PSObject> output =
            engine.InvokeCommand.InvokeScript( engine.SessionState, Script, inputData );

         if(output.Count > 1) {
            Object[] transformed = new Object[output.Count];
            for(int i = 0; i < output.Count; i++) {
               transformed[i] = output[i].BaseObject;
            }
            return transformed;
         } else if(output.Count == 1) {
            return output[0].BaseObject;
         } else {
            throw new ArgumentTransformationMetadataException(NoOutputMessage);
         }
      } catch (ArgumentTransformationMetadataException) {
         throw;
      } catch (Exception e) {
         throw new ArgumentTransformationMetadataException(string.Format("Transform Script threw an exception ('{0}'). See `$Error[0].Exception.InnerException.InnerException for more details.", e.Message), e);
      }
   }

   public TransformAttribute() {
      this.Script = ScriptBlock.Create("{`$args}");
   }

   public TransformAttribute( ScriptBlock Script ) {
      this.Script = Script;
   }

   public ScriptBlock Script {
      get { return _scriptblock; }
      set { _scriptblock = value; }
   }

   public string NoOutputMessage {
      get { return _noOutputMessage; }
      set { _noOutputMessage = value; }
   }
}
"@

## Some example transformations:

## Convert a string into the value of the named environment variable (or error)
function Test-TransformEnvironment {
param(
   [Parameter(Mandatory=$true)]
   [string]
   [Transform({ Get-Content "Env:$($args[0])" })]
   $Environment
)
process { Write-Host $Environment }
}

# Test TransformEnvironment
Test-TransformEnvironment UserName
# Test Error Message 1:
Test-TransformEnvironment "This is not an environment variable name"


Add-Type -Assembly UIAutomationTypes
function Test-TransformStaticFieldValue {
param(
   [Parameter(Mandatory=$true)]
   [System.Windows.Automation.ControlType]
   [Transform({
      param([Parameter(Mandatory=$true)][string]$FieldName)
      foreach($field in [System.Windows.Automation.ControlType].GetField( $FieldName, "IgnoreCase,Public,Static" ) | Where { $_ }) {
         $field.GetValue($null)
      }
   })]
   $ControlType
)
process { $ControlType }
}

# Test TransformStaticFieldValue
Test-TransformStaticFieldValue Button
# Test Error Message 2:
Test-TransformStaticFieldValue DoorHandle


function Test-TransformEmail {
param(
   [Parameter(Mandatory=$false)]
   [String[]]
   [Transform(NoOutputMessage = "Specified value is not an email address, and we could not find a user by that name", Script = {
      param([Parameter(Mandatory=$true)][string[]]$UserName)
      if(!$UserName) {
         $UserName = Read-Host "Username"
      }
      $ads = New-Object System.DirectoryServices.DirectorySearcher([ADSI]'')
      foreach($a in $UserName){
         if("$a".Contains("@")) { write-output $a } else {
            $ads.filter = "(|(samAccountName=$a)(displayName=$a))"
            foreach($user in $ads.FindAll().GetEnumerator()) {
               $user.GetDirectoryEntry().Mail
            }
         }
      }
   })]
   $UserEmail
     
)
process { Write-Host $UserEmail }
}

# Test TransformEmail
Test-TransformEmail -User $Env:UserName
# Test Error Message 3:
Test-TransformEmail -User Gremlins

function Test-StackingTransforms {
param(
   [Parameter(Mandatory=$false)]
   [String[]]
   ## Transform UserNames to Email Addresses
   [Transform(NoOutputMessage = "Specified value is not an email address, and we could not find a user by that name", Script = {
      param([Parameter(Mandatory=$true)][string[]]$UserName)
      $ads = New-Object System.DirectoryServices.DirectorySearcher([ADSI]'')
      foreach($a in $UserName){
         if("$a".Contains("@")) { write-output $a } else {
            $ads.filter = "(|(samAccountName=$a)(displayName=$a))"
            foreach($user in $ads.FindAll().GetEnumerator()) {
               $user.GetDirectoryEntry().Mail
            }
         }
      }
   })]
   ## Transform a path to its content
   [Transform({ if($args[0]) { Get-Content $args[0] } else { Get-Content Env:\Username } })]
   $UserEmail
     
)
process { $UserEmail }
}

## Rely on the default value
Test-StackingTransforms

## Specify an environment variable
Test-StackingTransforms Env:\UserName

## Read from a file (first, build the list)
$Env:UserName > $pwd\users.txt
$Env:UserName >> $pwd\users.txt
$Env:UserName >> $pwd\users.txt
$Env:UserName >> $pwd\users.txt
Test-StackingTransforms $pwd\users.txt
 

I want to point out two things about these examples:

In the TransformAttribute there is custom error handling for the script: exceptions are handled and printed, the case where you get no output is handled and printed, and you can provide your own error message for that no output case.

Notice that not all of the transformed parameters are mandatory: in the last two I’ve dealt with null input specially by either prompting the user from inside the transform script, or using a default value when none is provided. These are powerful options which you should use with care. Prompting from inside the script will allow you to build up pretty much anything you need in the transformation attribute, and having a default value can be just as useful, but you need to make sure that you keep the assumptions in your script in line with what actually happens in the parameter processing.

Hopefully this will give you some ideas for other transformations :) If you’re inspired, please feel free to share examples in the comments: just use a code tag around them: <code lang="posh">

Logging Robocopy errors to the Event Log using PowerShell

Someone came into IRC last week asking for help converting a rather large vbscript into PowerShell, and got me interested in turning Robocopy logs into Windows Events…

The original VBScript is about 68 lines of code, and writes one event per log file. We duplicated its functionality with the following 11 lines of code:

Param($LogPath, $LogName, $ArchiveDays)
[string]$Log = Join-Path $LogPath $LogName
if(Select-String -Path $Log -pattern "0x0000") {
   $logError = "$Log.ERROR.$(get-date -format 'yyyy-MM-dd-hhmmss')"
   write-eventlog Application -Source Robocopy -EventId 12 -EntryType Error -Message "Robocopy Job Failed -please check log file $LogError"
   move-item $log $logError
} else {
   # If you do not want to send Success Events to the Application Log, comment out the following line
   write-eventlog Application -Source Robocopy -EventId 1 -EntryType Information -Message "Robocopy Job Succeeded -log file archived"
   move-item $Log "$Log.ARCHIVE.$(Get-Date -f 'yyyy-MM-dd-hhmmss')"
}
## Remove archive logs older than $archiveDate
Get-ChildItem "$Log.ARCHIVE.*" | Where { $_.CreationTime -lt (Get-Date).AddDays(-$archiveDays) } | Remove-Item

Actually, even that first attempt extended the functionality a little, because we potentially make multiple archive copies which we only delete once they pass the archive date.


There are two catches. First, you need PowerShell 2.0. Second, before you can run that script the first time, you have to create the “Robocopy” event source for the machine by running this command in an elevated PowerShell console (that is, you have to run it “as Administrator”):

New-EventLog Application Robocopy

Of course, being a good geek, I couldn’t leave well enough alone, so we changed the script so that it would log each unique error to the event log (including the line following the error, which has more details), so that there’s no need to go “check the log file” on the machine, since you can retrieve the event logs remotely. The finished script looks like this:

#requires -Version 2.0
## BEFORE you use this the FIRST time (only once per machine)
## you must run the following command elevated (as Administrator):
## New-EventLog Application Robocopy

Param(  [string]$LogPath  = "C:\Logs\",
        [string]$LogName  = "Robocopy-log-file.log",
        [int]$ArchiveDays = 30
)
[string]$Log = Join-Path $LogPath $LogName

$Archive = "ARCHIVE"
foreach($errorEvent in Select-String -Path $Log -Pattern 'ERROR .*0x0000.*$' -context 0,1 | sort {$_.matches[0].value} -Unique )
{
        $Archive = "ERROR"
        write-eventlog Application -Source Robocopy -EventId 12 -EntryType Error -Message $($errorEvent.Line + "`n" + $errorEvent.Context.PostContext)
}

switch($Archive) {
        "ERROR" { ## Archive the log file as an ERROR (we never delete these automatically)
                move $Log "$Log.ERROR.$(get-date -format 'yyyy-MM-dd-hhmmss')"
        }
        "ARCHIVE" {
                write-eventlog Application -Source Robocopy -EventId 1 -EntryType Information  -Message "Robocopy successful"
                ## Archive the log file
                move-item $Log "$Log.ARCHIVE.$(Get-Date -f 'yyyy-MM-dd-hhmmss')" -Force
        }
}

## Remove archive logs older than $archiveDate
$archiveDate = (Get-Date).AddDays(-$archiveDays)
Get-ChildItem "$Log.ARCHIVE.*" | Where { $_.CreationTime -lt $archiveDate } | Remove-Item

Notice that we cleaned up the parameters a little bit and put some defaults in, but we still haven’t written “help” ... that’s partly because I’m still not sure this is the best option for logging :) Another way would be to write just one log event, but with details about the errors, like:

#requires -Version 2.0
## BEFORE you use this the FIRST time (only once per machine)
## you must run the following command elevated (as Administrator):
## New-EventLog Application Robocopy

Param(  [string]$LogPath  = "C:\Logs\",
        [string]$LogName  = "Robocopy-log-file.log",
        [int]$ArchiveDays = 30
)
[string]$Log = Join-Path $LogPath $LogName
[string]$LogError = "$Log.ERROR.$(get-date -format 'yyyy-MM-dd-hhmmss')"
[string]$LogArchive = "$Log.ARCHIVE.$(get-date -format 'yyyy-MM-dd-hhmmss')"

$Errors = Select-String -Path $Log -Pattern 'ERROR .*0x0000.*$' -context 0,1 |
          Group-Object { $_.Context.PostContext } |
          Format-Table Count, Name -HideTableHeaders -AutoSize | Out-String
if($Errors) {
        write-eventlog Application -Source Robocopy -EventId 12 -EntryType Error -Message "$errors`n`nPlease check: $LogError"
        move $Log $LogError
} else {
        write-eventlog Application -Source Robocopy -EventId 1 -EntryType Information  -Message "Robocopy successful. Log archived: $LogArchive"
        move-item $Log $LogArchive
}

## Remove archive logs older than $archiveDate
$archiveDate = (Get-Date).AddDays(-$archiveDays)
Get-ChildItem "$Log.ARCHIVE.*" | Where { $_.CreationTime -lt $archiveDate } | Remove-Item

A DSL for XML in PowerShell: New-XDocument

In July of last year I wrote a PowerShell script with the goal of allowing me to generate XML from PowerShell with a simple markup that would look a little like the resulting XML ... this week I was using that script again, and had a couple of issues that made me go back and look at the source.

While I was playing with the source and tweaking things a little bit to improve the way it handles namespaces, I started playing with the idea that I could improve the syntax. At the very least, I thought, I ought to be able to do away with all those “xe” aliases…

Well, I was able to (there’s a new version). And what’s more, I managed to dramatically clean up the way namespaces work, and make it so that really, the only ugly part of the syntax is the initial declaration of namespaces! I’m going to start with two examples, and use them to walk you through the features :)

Example 1

The simplest example I could think of is to list all the files in a folder, with the file size and last modified stamp:

[string]$xml = New-XDocument folder -path $pwd {
   foreach($file in Get-ChildItem) {
      file -Modified $file.LastWriteTimeUtc -Size $file.Length { $file.Name }
   }
}

The output of that, when run on my formats folder, looks like this:

<folder path="C:\Users\Jaykul\Documents\WindowsPowerShell\formats">
  <file modified="2009-11-07T07:27:00Z" size="30474">CliXml.xsd</file>
  <file modified="2009-11-07T07:27:40.48001Z" size="14314">format.xsd</file>
  <file modified="2010-01-16T21:30:06.0562796Z" size="18275">NppExternalLexers.xml</file>
  <file modified="2009-03-18T21:28:51.6579351Z" size="5802">Recommender.Types.Format.ps1xml</file>
  <file modified="2009-11-07T07:27:40.518029Z" size="5107">types.xsd</file>
</folder>

You can immediately see what the script does: New-XDocument (which is aliased as ‘xml’) actually generates the root xml node, so the first argument to it is the name of that node, and any other arguments become attributes … except for the script block. That script block turns into the contents of the node.

Inside the script block, PowerShell code is parsed as usual, but whenever a command that doesn’t exist is encountered, it is turned into an xml node! Pretty simple, right? Of course, if you wanted to create a node with a name that’s already taken by a PowerShell command, you can just replace file with New-XElement file, or (using aliases) xe 'file', which explicitly creates an xml node with the given name.
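For example, since select is already an alias for Select-Object, an element by that name has to be created explicitly. A contrived sketch (the menu/option node names are made up):

[string]$xml = New-XDocument menu {
   # "select" would invoke Select-Object, so create the element explicitly:
   xe 'select' -id "color" {
      option { "red" }
      option { "blue" }
   }
}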

That’s pretty much it for our first example, so let’s look at a more complicated example, with multiple namespaces, and deeper nesting.

Example 2

This time, we’ll create an Atom document, and we’ll include some namespace extensions (including a made-up one for listing my files as we did above):

New-XDocument (([XNamespace]"http://www.w3.org/2005/Atom") + "feed")          `
              -fi ([XNamespace]"http://huddledmasses.org/schemas/FileInfo")   `
              -dc ([XNamespace]"http://purl.org/dc/elements/1.1")             `
              -$([XNamespace]::Xml +'lang') "en-US" -Encoding "UTF-16"        `
{
   title {"Huddled Masses: You can do more than breathe for free..."}
   link {"http://HuddledMasses.org/"}
   updated {(Get-Date -f u) -replace " ","T"}
   author {
      name {"Joel Bennett"}
      uri {"http://HuddledMasses.org/"}
   }
   id {"http://HuddledMasses.org/" }

   entry {
      title {"A DSL for XML in PowerShell: New-XDocument"}
      link {"http://HuddledMasses.org/A-DSL-for-XML-in-PowerShell-New-XDocument/" }
      id {"http://HuddledMasses.org/A-DSL-for-XML-in-PowerShell-New-XDocument/" }
      updated {(Get-Date 2010/03/03 -f u) -replace " ","T"}
      summary {"A while back, I posted a simple mini language for generating XML from PowerShell script. However, I was using it the other day, and I really just felt that the markup was ugly, since it was littered with 'xe' marks and such."}
      link -rel license -href "http://creativecommons.org/licenses/by/3.0/" -title "CC By-Attribution"
      dc:rights { "Copyright 2010, Some rights reserved (licensed under the Creative Commons Attribution 3.0 Unported license)" }
      category -scheme "http://huddledmasses.org/tag/" -term "xml"
      category -scheme "http://huddledmasses.org/tag/" -term "PowerShell"
      category -scheme "http://huddledmasses.org/tag/" -term "DSL"
      fi:folder -Path "~\Formats" {
         foreach($file in Get-ChildItem (Join-Path (Split-Path $profile) Formats)) {
            fi:file -Created $file.CreationTimeUtc -Modified $file.LastWriteTimeUtc -Size $file.Length { $file.Name }
         }
      }
   }
} | % { $_.Declaration.ToString(); $_.ToString() }

There are four things you should notice, in particular:

First: the initial tag has an [XNamespace] added to it. You can specify a tag name that has a namespace by adding them together this way, or by embedding the namespace in the string like "{http://www.w3.org/2005/Atom}feed" instead. Either way works. This initial namespace becomes the default namespace for the document. If you don’t specify a namespace on tags later, they automatically belong to that one.
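
In other words, these two root declarations should produce the same document (assuming the module is loaded):

```powershell
# Adding an [XNamespace] and a string produces a namespace-qualified name ...
New-XDocument (([XNamespace]"http://www.w3.org/2005/Atom") + "feed") { title {"Example"} }
# ... and so does embedding the namespace in the string itself
New-XDocument "{http://www.w3.org/2005/Atom}feed" { title {"Example"} }
```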

Second: when you want to add additional namespaces, you can do so with a custom prefix like: -dc ([XNamespace]"http://purl.org/dc/elements/1.1"), and that prefix (dc) takes on a special meaning. When you want to have a tag later on that is part of that namespace, you just prefix the tag, like dc:rights, the same way you would in XML.

Third: any number of attributes can be specified using the -name value syntax, but anything in a {scriptblock} becomes the content — and is subject to the same rules as the outer sections.

Fourth: This generates an XDocument. When you cast an XDocument to string, the xml declaration is left off, so if you want it, you need to manually add it via $XDocument.Declaration. Incidentally, XDocuments are not XMLDocuments, but they are trivially castable to them.
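
So if you want the declaration in your output (or a classic XmlDocument to work with), something like this sketch should do it:

```powershell
$doc = New-XDocument feed -Encoding "UTF-16" { title {"Example"} }

# Casting to [string] (or calling ToString) leaves off the <?xml ... ?> declaration,
# so emit it yourself when you need it:
$doc.Declaration.ToString()
$doc.ToString()

# An XDocument is not an XmlDocument, but round-tripping through a string
# gets you one trivially:
[xml]$xmlDoc = $doc.ToString()
```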

The output of that particular section of New-XDocument is this:

<feed xmlns:dc="http://purl.org/dc/elements/1.1" xmlns:fi="http://huddledmasses.org/schemas/FileInfo" xml:lang="en-US" xmlns="http://www.w3.org/2005/Atom">
  <title>Huddled Masses: You can do more than breathe for free...</title>
  <link>http://HuddledMasses.org/</link>
  <updated>2010-03-04T00:44:31Z</updated>
  <author>
    <name>Joel Bennett</name>
    <uri>http://HuddledMasses.org/</uri>
  </author>
  <id>http://HuddledMasses.org/</id>
  <entry>
    <title>A DSL for XML in PowerShell: New-XDocument</title>
    <link>http://HuddledMasses.org/A-DSL-for-XML-in-PowerShell-New-XDocument/</link>
    <id>http://HuddledMasses.org/A-DSL-for-XML-in-PowerShell-New-XDocument/</id>
    <updated>2010-03-03T00:00:00Z</updated>
    <summary>A while back, I posted a simple mini language for generating XML from PowerShell script. However, I was using it the other day, and I really just felt that the markup was ugly, since it was littered with 'xe' marks and such.</summary>
    <link rel="license" href="http://creativecommons.org/licenses/by/3.0/" title="CC By-Attribution" />
    <dc:rights>Copyright 2010, Some rights reserved (licensed under the Creative Commons Attribution 3.0 Unported license)</dc:rights>
    <category scheme="http://huddledmasses.org/tag/" term="xml" />
    <category scheme="http://huddledmasses.org/tag/" term="PowerShell" />
    <category scheme="http://huddledmasses.org/tag/" term="DSL" />
    <fi:folder path="~\Formats">
      <fi:file created="2009-11-07T07:27:00Z" modified="2009-11-07T07:27:00Z" size="30474">CliXml.xsd</fi:file>
      <fi:file created="2009-11-07T07:27:40.4529965Z" modified="2009-11-07T07:27:40.48001Z" size="14314">format.xsd</fi:file>
      <fi:file created="2009-02-07T13:56:12Z" modified="2010-01-16T21:30:06.0562796Z" size="18275">NppExternalLexers.xml</fi:file>
      <fi:file created="2009-08-09T19:10:06.3647094Z" modified="2009-03-18T21:28:51.6579351Z" size="5802">Recommender.Types.Format.ps1xml</fi:file>
      <fi:file created="2009-11-07T07:27:40.4970185Z" modified="2009-11-07T07:27:40.518029Z" size="5107">types.xsd</fi:file>
    </fi:folder>
  </entry>
</feed>

The New-XDocument script itself is on PoshCode in the Xml Module 4, along with a few interesting functions like Select-Xml (which improves on the built-in by letting you ignore namespaces when you write XPath) and Remove-XmlNamespace (which was instrumental in removing namespaces for Select-Xml). There’s also a Format-Xml for pretty-printing, and a Convert-Xml for processing XSL transformations.

I’ll probably post some more examples of this in the next week or two, and I really should write some commentary about the function itself, which uses the tokenizer to discover which “commands” are really xml nodes … but for now, I’ll leave you to enjoy.


Are you interested in a virtual PowerShell brown-bag event?

I just put up a poll on the PowerShell Virtual Group to see if people are interested in a low-planning brown-bag event.

The initial question is: would you attend a weekly (or monthly) virtual brown-bag lunch if I put one together?

The idea is that we would start each session with a short collection of interesting links, tips and tricks, or connect issues, and then have a presentation or discussion or script club or open-mic session, depending on interest.

Basically, this is meant to flow, on a week-to-week basis, from script-club to formal user group presentations.

Some weeks we would have presenters from various local user groups share content they had prepared for their local groups. We would encourage you to take ownership of this time, request topics, and prepare presentations (no matter how short).

Some weeks we would have an open-format script club using our private pastebin, IRC channel, and LiveMeeting voice chat. This time could vary from real-world problem solving to scripting games and “project Euler”-style challenges.

Some weeks we would have open-mic time and solicit feedback for the PowerShell team from anyone who cared to give it. I would make an effort to make sure I wasn’t the only MVP there, so you could feel that even if there wasn’t a Microsoft employee present on a given day, your voice could be heard when you make suggestions or vent frustrations …

The point is: the format would vary a bit, and we would adjust it over time to fit whatever works the best within our virtual meeting and time constraints.

So, what do you think? Are you interested?

Fun with PInvoke and Aero Peek

There are so many fun things you can do in Windows when your scripting language allows you to make PInvoke calls to Win32 APIs … but I have to say it’s amazing how many things have been added to Windows recently and still left out of the .Net framework …

Anyway, on to the Aero Peek stuff. If you haven’t seen it, Aero Peek is a feature of Windows 7 which lets you get a peek at your desktop, or at a single window, for a moment. Basically, you can press Win+Space (the Windows logo key and the space bar) and all of your open windows instantly turn transparent, revealing … whatever was on your desktop: wallpaper, icons, and gadgets. You can also use it by hovering your mouse on the right corner of the taskbar, or you can peek at a single window by hovering over its taskbar button and then over its thumbnail.

In any case, I have a couple of windows which I would like to have stay visible on the desktop when I hit the aero peek hotkey: Rainlendar and Miranda. It turns out there’s a simple API call for this: DwmSetWindowAttribute which lets you set the DWMWA_EXCLUDED_FROM_PEEK attribute to ENABLED … causing a window to no longer hide when you press that hotkey. Of course, that API call should be made by those apps, in response to a user setting (so I’ve told their authors about it), but it doesn’t have to be (so I wrote a script to do it myself).

In the old days, I would have written a little systray app which would give you a popup list of all windows, or perhaps added a menu item to a window’s right-click menu … and I would have had to deal with creating some way to persist which apps you wanted to apply this to, and then I could have applied the setting to them whenever you opened them.

But now, I have PowerShell. I don’t need to give you menus and store settings, because I can just let you edit a little script instead.

So here’s a script which will let you turn off Aero Peek transparency for windows by window title and/or process name … Once you have this function available, you can keep Rainlendar’s calendar, tasks, and event windows all visible by just running Remove-AeroPeek -Process Rainlendar2 or you can keep your Miranda contact list visible by running Remove-AeroPeek "Miranda IM" (although you should note that this depends on the window title matching just that one window, and Miranda lets you change what your title is, so you may have to adjust it).

Of course, that script really deserves explanation, because it’s showing off quite a few advanced things…

The first thing is that I’m using a Try/Catch block in the BEGIN block to make sure I only execute that code once. You can’t call Add-Type with the same code multiple times in a single PowerShell session, because the type will already exist when you call it the second time. So the code in the try block will throw an exception if the type doesn’t already exist, and in the catch handler, we’ll create the type, and define the other function we need.

Add-Type is a super-powerful cmdlet which compiles code on the fly (or imports types from pre-compiled assemblies). In this case we’re using it to import a little class called Dwm which I started writing myself from PInvoke.net and the MSDN documentation, but then eventually copied most of it from a NeoWin forum thread… All this class really does is define the API function and the flags we need to pass to it, and then provide a wrapper for the DwmSetWindowAttribute call. We could have written that call in PowerShell, but at the end of the day, once you start compiling C# code in PowerShell, it’s hard to know when to stop ;)
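
Put together, the pattern looks roughly like this. It’s a sketch (the Huddled.Dwm type name and the ExcludeFromPeek wrapper are my own inventions here), but the DwmSetWindowAttribute signature and the DWMWA_EXCLUDED_FROM_PEEK value of 12 come straight from the Windows SDK headers:

```powershell
try {
   # If the type is already compiled in this session, this line is a no-op ...
   $null = [Huddled.Dwm]
} catch {
   # ... otherwise it throws, and we compile the type exactly once
   Add-Type -Namespace Huddled -Name Dwm -MemberDefinition @'
      [DllImport("dwmapi.dll", PreserveSig = false)]
      public static extern void DwmSetWindowAttribute(IntPtr hwnd, int attribute, ref int value, int size);

      public const int DWMWA_EXCLUDED_FROM_PEEK = 12;

      public static void ExcludeFromPeek(IntPtr hwnd) {
         int enabled = 1; // TRUE
         DwmSetWindowAttribute(hwnd, DWMWA_EXCLUDED_FROM_PEEK, ref enabled, sizeof(int));
      }
'@
}
```

With that in place, excluding a window is just [Huddled.Dwm]::ExcludeFromPeek($handle).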

The Select-Window function is (yet another customized version of) a function I wrote a while back on PoshCode as part of my (still in progress) rewrite of WASP to use the UIAutomationClient … I’ve just modified it to add only the three properties of the window that I’m interested in: Title and ProcessId (for identifying the correct windows) and Handle (for passing to the DwmSetWindowAttribute call). It uses the RootElement property of System.Windows.Automation.AutomationElement to do a search, and then a series of GetCurrentPropertyValue calls to determine the Name, ProcessId, and NativeWindowHandle of the windows it finds.
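
The heart of that search looks roughly like this (a sketch: the property identifiers are real UIAutomation names, but I’ve trimmed out all the parameter handling):

```powershell
Add-Type -AssemblyName UIAutomationClient, UIAutomationTypes

$root = [System.Windows.Automation.AutomationElement]::RootElement
# Top-level windows are the children of the desktop root element
$windows = $root.FindAll([System.Windows.Automation.TreeScope]::Children,
                         [System.Windows.Automation.Condition]::TrueCondition)
foreach($window in $windows) {
   New-Object PSObject -Property @{
      Title     = $window.GetCurrentPropertyValue([System.Windows.Automation.AutomationElement]::NameProperty)
      ProcessId = $window.GetCurrentPropertyValue([System.Windows.Automation.AutomationElement]::ProcessIdProperty)
      Handle    = $window.GetCurrentPropertyValue([System.Windows.Automation.AutomationElement]::NativeWindowHandleProperty)
   }
}
```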

That’s pretty much all there is to it, other than filtering out the window(s) that we want and actually calling the API. I think I’m going to have to play a little bit more with this to see what else we can do — I’ve already realized that this means we can make little widgets with PowerBoots and set them to stick around just like regular desktop gadgets …


More Growl for Windows from PowerShell

Well, I’m back already with an update for the Growl module I posted yesterday.

This new version is a true PowerShell 2.0 only module, because I found that the Growl callbacks can only be handled in PowerShell 2.0 anyway, so in order to add support for that, I went ahead and upgraded the rest.

The Growl module is now designed to be used BY your PowerShell scripts and modules. The idea is that if you wrote, say, a script/module to check for email and called it PoshMail … you could start up Growl like this:


Import-Module Growl

## At least once (e.g.: on the first Type you register) you should include an AppIcon :)
Register-GrowlType PoshMail NewMail -AppIcon $PoshMailFolder\Email-48.png
## If you want to, you can still override the icon per notice type
Register-GrowlType PoshMail Hotmail -Icon $PoshMailFolder\Hotmail-48.png
Register-GrowlType PoshMail GMail -Icon $PoshMailFolder\GMail-48.png
 

Now, technically that’s all we have to do. At that point, we could pop up Growl notices for either Hotmail or GMail … let’s say our fictitious script (which is running in the background on an event timer) discovers a new message … you could notify the user with a Url callback. Let’s assume that you have a few variables set after checking for email:

  • $Number is the number of email messages
  • $Subjects is an array of email subject lines
  • $Urls is an array of links to the emails


Send-Growl PoshMail GMail "You have $number new messages" ($Subjects[0..2] -join "`n") -Url $Urls[0]
# OR ...
Send-Growl PoshMail Hotmail "You have $number new messages" ($Subjects[0..2] -join "`n") -Url $Urls[0]
 

Of course, if you wanted to launch your Outlook 2010 preview because you discovered new POP or IMAP mail … or because you want to use Outlook to read your Hotmail/GMail … then a callback URL isn’t going to cut it. In that case, you want to handle the click event yourself:



## We would need it to launch something appropriate on receipt of new POP3 email, for instance.
Register-GrowlCallback {
   PARAM( $response, $context )
   # This is just here for your sake, because I know you want to know what else is in there:
   Write-Host $("Response Type: {0}`nNotification ID: {1}`nCallback Data: {2}`nCallback Data Type: {3}" -f $context.Result, $context.NotificationID, $context.Data, $context.Type) -fore Yellow
   if($context.Result -eq "Click") {
      ## Start the default email client
      Start-Process $(
         $MailTo = (gp Registry::HKEY_CLASSES_ROOT\mailto\shell\open\command)."(default)" -split " "
         for($i = 0; $i -lt $MailTo.Count; $i++) {
            $email = "$($MailTo[0..$i])".Trim('"')
            # Test-Path won't write errors when the partial path doesn't exist yet
            if(Test-Path $email) { return $email }
         }
      )

   }
}
 

Something like that should work regardless of your actual email client, and then you just have to pass a callback value to make sure your function gets called:


# This would trigger the callback REGARDLESS of whether it was clicked.
Send-Growl PoshMail NewMail "You have $number new messages" ($Subjects[0..2] -join "`n") -CallbackData "Data" "POP3 Callback"
 

There are a lot of other possibilities here, from alerting when long running commands finish (think PSJobs, or even remote jobs) to … writing popup-based PowerShell instant messengers, or even … using Growl as a ghetto inter-process communication medium which works on multiple PCs. Ok, that’s maybe a bit much, but the point is: sky’s the limit. Have a little fun. Note that the machine SENDING the popups doesn’t necessarily have to have Growl installed — you could just copy the libraries over and then send remote growls….

Growl for Windows – From PowerShell

Growl

This is just a quick post (as I promised, recently) to let you all know that I’ve published the first release of my Growl for Windows module for PowerShell over on PoshCode.

I haven’t been able to get callbacks to work, and I spent way too much time playing with them instead of publishing this, or working on remote growls … but nonetheless, it’s sufficient to let you pop up growl notifications from scripts, which you can, of course, customize in Growl itself.

You can add a few additional notice types very simply (you just hard code them in the script, copying lines 19 and 21 — remember you need to use the same notice names each time anyway) ... and then tweak them in Growl to have different display types, or to make some of your notice types sticky, or have them forwarded to your iPhone or whatever.

Here’s the first version of the script:

Note: if you want to use this on PowerShell 1.0, just comment out the [Parameter()] attribute lines and dot-source it; it should work fine. In PowerShell 2.0 you’ll want to load it as a module, because doing so will hide some of those ugly script variables.

Control your PC with your voice … and PowerShell

  • Have you ever wanted to be able to ask your computer questions and have it answer you out loud?
  • Have you ever wondered if your computer could be more like the ones running the Star Trek Enterprise, responding to voice queries and commands?
  • Have you played with Z-Wave or X10 home automation and thought that voice control of your devices would be an obvious next step?

Well, ok … I’m not going to show you how to turn on lights or work with home automation — but that’s the main thing that keeps me thinking about this voice-recognition stuff. What geek doesn’t want to walk into the living room and say “Computer: Lights On” and have it work?

Instead, as a first step to all of that, let me show you how to use PowerShell to do simple voice command recognition scripts … which can fire off any PowerShell script you care to write!
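
In the meantime, here’s a bare-bones sketch of the kind of thing I mean, using the .NET System.Speech assembly (the phrases and the spoken response are obviously just placeholders):

```powershell
Add-Type -AssemblyName System.Speech

$recognizer = New-Object System.Speech.Recognition.SpeechRecognitionEngine
$recognizer.SetInputToDefaultAudioDevice()

# A tiny grammar: only these exact phrases will be recognized
$choices = New-Object System.Speech.Recognition.Choices
$choices.Add("computer lights on")
$choices.Add("computer lights off")
$builder = New-Object System.Speech.Recognition.GrammarBuilder
$builder.Append($choices)
$recognizer.LoadGrammar((New-Object System.Speech.Recognition.Grammar $builder))

# Block until one of the phrases is heard, then talk back
$result = $recognizer.Recognize()
$voice = New-Object System.Speech.Synthesis.SpeechSynthesizer
$voice.Speak("Acknowledged: $($result.Text)")
```

From there, firing off an arbitrary script in response is just an if or a switch on $result.Text.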