
Last night I gave a presentation on psake and PowerShell to the Virtual ALT.NET (VAN) group. I had a fun time demonstrating how to write a psake build script, examining some psake internals, discussing the current state of the project, and generally making a fool of myself by showing how much of a PowerShell noob I really am. I believe that the presentation was recorded and will be posted online in the next few days. Then you too can see me fumbling around trying to remember PowerShell syntax. I consider myself a professional developer when it comes to many areas, but in terms of PowerShell I am a hack who learns just enough to get the job done.
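For anyone who missed the talk, here is a rough sketch of the shape of a psake build script – my own minimal example, not the script from the demo, and the task names and solution path are made up. You run it by passing the file to psake.ps1.

# A minimal psake build script (e.g. default.ps1)
$solution = '.\MyApp.sln'   # hypothetical solution name

task default -depends Test

task Clean {
  msbuild $solution /t:Clean
}

task Compile -depends Clean {
  msbuild $solution /t:Build /p:Configuration=Release
}

task Test -depends Compile {
  # invoke your unit test runner here, e.g. nunit-console
}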

As promised, here are the links from the meeting…

psake Resources

Project Homepage

Users mailing list

Dev mailing list

PowerShell Resources

PowerShell Cheat Sheet

Windows PowerShell in Action (book)

Windows PowerShell Team Blog

On Twitter, I have a search for #psake. If you have a question, comment, or quibble about psake, you can use the #psake hashtag or @JamesKovacs to get my attention.

P.S. A number of people expressed interest in some of my dev-related PowerShell scripts, such as removing unversioned files from a SVN working copy, updating all SVN working copies off a common directory, cleaning a solution, … I’ll be putting them in a publicly accessible location soon and blogging about those scripts. So please be patient and don’t adjust your sets.
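As a teaser, here is the rough shape of the unversioned-files cleanup – a quick sketch of mine, not the script I'll be publishing. svn status prefixes unversioned items with ? (and ignored items with I when run with --no-ignore):

# Sketch: remove unversioned/ignored files from an SVN working copy
svn status --no-ignore | foreach {
  if ($_ -match '^[?I]\s+(.+)$') {
    remove-item $matches[1] -recurse -force
  }
}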

Parthenon Under Construction

I don't have to remind everyone that we're in the middle of a world-wide economic downturn. When the economy is good, it is hard enough to convince your boss to re-build an application from scratch. When the economy is bad, it is bloody near impossible. In the coming months (and potentially years), I expect that as developers we're going to be seeing more and more brownfield projects, rather than greenfield ones. We're going to see more push for evolutionary development of applications rather than wholesale replacement. We will be called upon to improve existing codebases, implement new features, and take these projects in initially unforeseen directions. We will have to learn how to be Working Effectively with Legacy Code. (Took some effort to coerce the title of Michael Feathers' excellent book into that last sentence.)

A lot of companies have tremendous investments in existing "classic" ASP.NET websites, and there is a desire to evolve these sites rather than replace them, especially given these tough economic times. Howard Dierking, editor of MSDN Magazine, has asked me to write a 9-week series entitled From Web Dev to RIA Dev in which we will explore refactoring an existing "classic" ASP.NET site. We want to improve an existing ASP.NET site using new technologies, such as AJAX, jQuery, and ASP.NET MVC. We want to show that you can adopt better practices, such as continuous integration, web testing (e.g. WatiN, WatiR, Selenium), integration testing, separation of concerns, layering, and more.

So I have two questions for you, Dear Reader…

  1. Can you think of a representative “classic” ASP.NET website (or websites) for the project?
  2. What topics would you like to see covered?

I should clarify what I mean…

“Classic” ASP.NET Applications

I'm currently considering PetShop, IBuySpy, DasBlog, SubText, and ScrewTurn Wiki. I'm not looking for one rife with bad practices. Just an ASP.NET project in need of some TLC – one that doesn't have a decent build script, isn't under CI, is a bit shy on the testing, has little to no AJAX, etc. The code should be representative of what you would see in a typical ASP.NET application. (For that reason, I am probably going to discount IBuySpy as it is built using a funky webpart-like framework, which is not typical of most ASP.NET applications.) Some of the ASP.NET applications that I just mentioned don't exactly qualify because they do have build scripts, tests, and other features that I would like to demonstrate. I will get permission from the project owner(s) before embarking on this quest and plan to contribute any code back to the project. Needless to say, the project must have source available to be considered for this article series. So please make some suggestions!

Topics

I have a lot of ideas for technologies and techniques to explore, including proper XHTML/CSS layout, jQuery, QUnit, AJAX, HTTP Modules/Handlers, build scripts, continuous integration (CI), ASP.NET MVC, web testing (probably WatiN or Selenium), refactoring to separate domain logic from codebehind/sprocs, … I will cover one major topic per week over the 9-week series. So I've got lots of room for cool ideas. What would you like to see? What do you think is the biggest bang for your buck in terms of improving an existing ASP.NET application?

Depending on the topics covered (based on your feedback here), I might use one site for the entire series or different sites to cover each topic. It would add some continuity to the series to use a single site over the 9 weeks, but after a brief inspection of the codebases mentioned above, I am having my doubts about finding a single representative site. We’ll have to see. Please leave your suggestions in the comments below. Thanks in advance!

I’ve been having fun writing about my adventures in PowerShell. I would like to thank everyone for their encouragement and feedback. Something that I haven’t explicitly stated – which should go without saying as this is a blog – is that I am not a PowerShell expert. This is one man’s journey learning about PowerShell. I consider myself an expert on C#, .NET, and many other things, but as for PowerShell, I am a hacker. I learn enough to get the job done.

Yes, I wrote psake, which is a cool little PowerShell-based build tool, if I do say so myself. I wrote it in part to learn more about PowerShell and what was possible. (I surprised myself that I was able to write a task-based build system in a few hours with about 100 lines of PowerShell, ignoring comments.)

If you’re looking for PowerShell gospel, I would recommend checking out the Windows PowerShell Blog (the blog of Jeffrey Snover and the rest of the PowerShell team), Windows PowerShell in Action by Bruce Payette, the PowerScripting Podcast, or any of the myriad PowerShell MVP blogs. They are the experts. I’m just a hacker having fun.

With that disclaimer, I hope that by documenting my PowerShell learnings in public, I will help other developers learn PowerShell. I know that I am learning great things about PowerShell from my readers. In Getting Started with PowerShell – Developer Edition, I lamented the lack of grep. My friend, Chris Tavares – known for his work on Unity and ASP.NET MVC – pointed out that Select-String can perform similar functions. Awesome! Then in PowerShell, Processes, and Piping, Jeffrey Snover himself pointed out that PowerShell supports KB, MB, and GB – with TB and PB in v2 – so that you can write:

get-process | where { $_.PrivateMemorySize -gt 200MB }

rather than having to translate 200MB into 200*1024*1024 as I originally did. Fantastic!
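Chris's Select-String tip deserves a quick illustration too. Here is a grep-alike sketch (the pattern and file filter are mine, not from his comment):

# grep-style search: find 'TODO' in every .cs file under the current directory
dir -recurse -filter *.cs | foreach { select-string -path $_.FullName -pattern 'TODO' }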

In Writing Re-usable Scripts with PowerShell, wekempf, Peter, and Josh discussed the merits of setting your execution policy to Unrestricted. I corrected the post to use RemoteSigned, which means that downloaded PowerShell scripts have to be unblocked before running, but local scripts can run without requiring signing/re-signing. Thanks, guys. I agree that RemoteSigned is a better option.

Let’s talk security for a second. I am careful about security. I run as a normal user on Vista and have a separate admin account. When setting up teamcity.codebetter.com, the build agent runs under a least privilege account, which is why we can’t run NCover on the build server yet. (NCover currently requires admin privs, though Gnoso is working on fixing that in short order.) (Imagine if we did run builds as an Administrator or Local System. Someone could write a unit test that added a new user with admin privs to the box, log in remotely and start installing bots, malware, and other evil.) So I tend to be careful about security.

Now for my real question… What is the threat model for PowerShell that requires script signing? Maybe I’m being really dense here, but I don’t get it. Let’s say I want to do something really evil like formatting your hard drive. I create a PowerShell script with “format c:” in it, exploit a security vulnerability to drop it onto your box, and exploit another security vulnerability to launch PowerShell to execute the script. (Or I name it the same as a common script, but earlier in your search path, and wait for you to execute it.) But you’ve been anal-retentive about security and only allow signed scripts. So the script won’t execute. Damn! Foiled again! But wait! Let me just rename it from foo.ps1 to foo.cmd or foo.bat and execute it from cmd.exe. If I can execute code on your computer, there are easier ways for me to do bad things than writing PowerShell scripts. Given that we can’t require signing for *.cmd and *.bat files as this would horribly break legacy compatibility, what is the advantage of requiring PowerShell scripts to be signed by default? Dear readers, please enlighten me!

UPDATE: Joel “Jaykul” Bennett provided a good explanation in the comments. I would recommend reading:

http://blogs.msdn.com/powershell/archive/2008/09/30/powershell-s-security-guiding-principles.aspx

as it explains the PowerShell Team's design decision. The intention wasn't to force everyone to sign scripts, but to disable script execution for most users (as they won't use PowerShell) while allowing PowerShell users to opt into RemoteSigned or Unrestricted as they so choose. Script signing is meant for administrators who set group policy and use signed scripts for administration (as one example use case of script signing).

Thanks again, Joel! That was faster than sifting through the myriad posts on script signing trying to find the reasoning behind it. Once again, the advantages of learning as a community!

Continuing on from last time, I will now talk about writing re-usable scripts in PowerShell. Any command that we have executed at the PowerShell command line can be dropped into a script file. I have lots of little PowerShell scripts for common tasks sitting in c:\Utilities\Scripts, which I include in my path. Let's say that I want to stop all running copies of Cassini (aka the Visual Studio Web Development Server aka WebDev.WebServer.exe).

Stop-Process -name WebDev.WebServer -ErrorAction SilentlyContinue

This will terminate all running copies of the above-named process. (Note that process names do not include the .exe extension.) ErrorAction is a common parameter supported by every PowerShell cmdlet; SilentlyContinue tells PowerShell to ignore failures. (By default, Stop-Process would fail if no processes with that name were found.)
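To see the difference -ErrorAction makes, try the command against a process name that isn't running (the name below is obviously made up):

# Writes a non-terminating error: no process with that name exists
Stop-Process -name NoSuchProcess

# Same command, but the failure is suppressed
Stop-Process -name NoSuchProcess -ErrorAction SilentlyContinue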

We’ve got our command. Now we want to turn it into a script so that we don’t have to type it every time. Simply create a new text file with the above command text called “Stop-Cassini.ps1” on your desktop using the text editor of your choice. (The script can be in any directory, but we’ll put it on our desktop to start.) Let’s execute the script by typing the following at the PowerShell prompt:

Stop-Cassini

Current directory not in search path by default

What just happened? Why can’t PowerShell find my script? By default, PowerShell doesn’t include the current directory in its search path, unlike cmd.exe. To run a script from the current directory, type the following:

.\Stop-Cassini

Another option is to add the current directory to the search path by modifying Computer… Properties… Advanced… Environment Variables… Path. Or you can modify it for the current PowerShell session using:

$env:Path += ';.\'

($env: provides access to environment variables in PowerShell. Try $env:ComputerName, $env:OS, $env:NUMBER_OF_PROCESSORS, etc.)
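In fact, env: is a full-fledged PowerShell drive, so you can enumerate every environment variable the same way you would list files in a directory:

# env: is a drive; list its contents like a directory
get-childitem env: | sort Name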

You could also modify your PowerShell startup script, but we’ll talk about that in a future instalment. Let’s run our script again:

ExecutionPolicy error

No dice again. By default, PowerShell does not allow unsigned scripts to run. This is a good policy on servers, but is a royal pain on your own machine. That means that every time you create or edit a script, you have to sign it. This doesn’t promote the use of quick scripts for simplifying development and administration tasks. So I turn off the requirement for script signing by running the following command from an elevated (aka Administrator) PowerShell prompt:

Set-ExecutionPolicy RemoteSigned

Set-ExecutionPolicy succeeded

If this command fails with an access denied error:

Set-ExecutionPolicy failed

then make sure that you launched a new PowerShell prompt via right-click Run as administrator…

Third time is the charm…

Success!

We are now able to write and use re-usable scripts in PowerShell. In my next instalment, we’ll start pulling apart some more complicated scripts that simplify common developer tasks.

UPDATE: As pointed out by Josh in the comments, setting your execution policy to RemoteSigned (rather than Unrestricted) is a better idea. Downloaded scripts will require you to unblock them (Right-click… Properties… Unblock or ZoneStripper if you have a lot) before execution. Thanks for the correction.
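If you want to double-check which policy is currently in effect, there is a matching cmdlet:

# Should report RemoteSigned after running the command above
Get-ExecutionPolicy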

Coffee and Code

Joey deVilla (aka The Accordion Guy) from Microsoft's Toronto office started Coffee and Code a few weeks ago in Toronto and John Bristowe is bringing the experience to Calgary. When John contacted me about the event, I thought to myself, "I like coffee. I like code. I want to be involved!" (Heck, I would order an Americano via intravenous drip if I could.) So John and I will be hanging at the Kawa Espresso Bar this Friday for the entire day drinking coffee, cutting code, and talking to anyone and everyone about software development. John is broadly familiar with a wide variety of Microsoft development technologies, as am I. I'll also be happy to talk about Castle Windsor (DI/IoC), NHibernate (ORM), OOP and SOLID, TDD/BDD, continuous integration, software architectures, ASP.NET MVC, WPF/Prism, build automation with psake, … Curious what ALT.NET is about? I'll be happy to talk about that too! I got my cast off today from my ice skating accident two weeks ago and am in a half-cast now. So I am hopeful that I'll be able to demonstrate some ReSharper Jedi skills for those curious about the amazing tool that is ReSharper. (I am going to be daring and have a nightly build of ReSharper 4.5 on my laptop to show off some new features.) So come join John and me for some caffeinated coding fun at the Kawa Espresso Bar anytime between 9am and 4pm Friday, March 13, 2009.

This post has been brought to you by the letter C and the number 4…

teamcity.codebetter.com

CodeBetter – in collaboration with JetBrains, IdeaVine, and Devlicio.us – is proud to announce the launch of TeamCity.CodeBetter.com – a continuous integration server farm for open source projects. JetBrains is generously supporting our community efforts by funding the monthly costs of the server farm and providing a TeamCity Enterprise license. Volunteers from CodeBetter, IdeaVine, and Devlicio.us are administering the servers and setting up OSS projects on the build grid. We are currently providing CI for the following projects (in alphabetical order):

We will be adding additional OSS projects in the coming weeks/months. You can register for an account here or log in as a guest. By default, new users can view all hosted projects. If you are a project member, you can email us at teamcity@codebetter.com to have us add you as a project member. (N.B. You only need to be a project member on TeamCity if you need to manage/modify the build.)

The current build grid consists of:

  • TeamCity – Dual CPU Quad-Core Xeon 5310 @ 1.60 GHz (Clovertown) with 4GB RAM & 2x250GB SATA II in RAID-1
  • Agents – Single CPU Dual-Core Xeon 5130 @ 2.00 GHz (Woodcrest) with 4GB RAM & 2x250GB SATA II

Both are physical servers hosted by SoftLayer. As we add more projects, we will add additional agent servers to distribute the load. Each agent will have the following software installed:

  • Microsoft Windows Server 2003 R2 Standard x64 Edition SP2
  • Microsoft .NET Framework 1.1, 2.0 SP2, 3.0 SP2, 3.5 SP1
  • Microsoft .NET Framework 2.0 SDK
  • Windows SDK 6.1
  • Microsoft SQL Server 2008 Express (64-bit)
  • Ruby 1.8.6-26 (including rake, rails, activerecord, and rubyzip)

Build scripts can be authored in NAnt, MSBuild, Rake, or any other build runner supported by TeamCity. The build farm monitors your current version control system – at SourceForge.net, Google Code, or elsewhere – for changes and supports Subversion, CVS, and other popular source control systems. (TeamCity 4.0.2 – the current version – does not support Git. Git support is planned for the 4.1 release, which should be released at the end of March. We will upgrade to TeamCity 4.1 as soon as it is released.)

Projects can use SQL Express for integration testing. N.B. We will not be manually setting up databases, virtual directories, or other services for projects. If you need a database created, your build script must include its creation/teardown.
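For example, a build step along these lines would satisfy that requirement – a sketch only, with placeholder instance and database names; the equivalent in NAnt, MSBuild, or Rake will look different:

# Create a scratch database on the local SQL Express instance,
# run the integration tests, then drop the database.
sqlcmd -S '.\SQLEXPRESS' -E -Q 'CREATE DATABASE MyProjectTests'
# ... run integration tests here ...
sqlcmd -S '.\SQLEXPRESS' -E -Q 'DROP DATABASE MyProjectTests'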

If your build script includes unit/integration tests, TeamCity can display the results in the UI if they are in the correct format. We can work with individual projects to ensure that this is the case. TeamCity can archive build artifacts and make them available for download if projects want to make CI builds available to the community.

TeamCity has rich notification mechanisms for communicating the build status of projects, including email, IDE (VS, IntelliJ, Eclipse), and Windows tray notifiers. Alternatively, you can subscribe to the build server's RSS feed for succeeded and failed builds, succeeded builds only, or failed builds only. You can make use of these tools to stay apprised of current build health as team members check in changes to source control. All notifiers can be downloaded and configured through the My Settings & Tools menu on the TeamCity server itself.

If you would like your OSS project considered for free CI hosting, you must meet the following requirements:

  • Active project with a commit in the last 3 months.
  • OSI-approved OSS license with a publicly available source.

We will prioritize requests for hosting solely at our discretion, though we will try to accommodate as many requests as possible. (We do have day jobs, you know.) 🙂 We reserve the right to remove projects from the build farm that are monopolizing farm resources. (i.e. If a build script pegs all CPUs at 100% for one hour at a time, it's going to get disabled so as to be fair to other projects.)

To apply to have us host CI for your OSS project:

  • Register a user account here.
  • Email teamcity@codebetter.com with the following information:
    • Your user account name, which you created above.
    • Project name & URL.
    • Link to your OSI-approved OSS license.
    • URL and type (SVN, CVS, …) of your source control system.
    • Build runner (NAnt, MSBuild, Rake, etc.) and default target.
    • Any additional requirements you might have.

CodeBetter, JetBrains, IdeaVine, and Devlicio.us are looking forward to providing free continuous integration hosting for the open source community. Please email us at teamcity@codebetter.com if you have any questions or comments.

James in Cast

Unfortunately I'm going to have to postpone my presentation on Tuesday as I broke my left wrist late this afternoon while ice skating with my older son. (I was practicing skating backwards, slipped, and landed with all my weight on the one wrist.) It's a distal radial fracture, which means lots o' pain meds for a few days and a cast for 6-8 weeks. 😞 You can see the effects of the Percocet kicking in in the photo to the right. On a positive note, they let you pick the colour of the fibreglass cast. Glad to know that you can break your bones, but still be fashion conscious. Unfortunately they didn't have my corporate colour green, which would have been cool.

So coding is going to be excruciatingly slow for awhile. I’ll reschedule the presentation once the cast comes off.

Coming to a .NET User Group near you*… This Tuesday only…

Topic: Light Up Your Application with Convention-Over-Configuration
Date: Tuesday, February 24, 2009 (Postponed)
Time: 5:00 pm – 5:15 pm (registration)
  5:30 pm – ??? (presentation)
Location: Nexen Conference Center
801-7th Ave. S.W., Calgary, AB. (Plus 15 level)
Map

Inversion of Control (IoC) containers, such as Castle Windsor, increase the flexibility and testability of your architecture by decoupling dependencies, but as an application grows, container configuration can become onerous. We will examine how convention-over-configuration can allow us to achieve simplicity in IoC configuration while still maintaining flexibility and testability. You can have your cake and eat it too!

* Assuming that you live in Calgary. 🙂

A friend, having recently upgraded to Rhino Mocks 3.5, expressed his confusion regarding when to use mocks vs. stubs. He had read Martin Fowler's Mocks Aren't Stubs (recommended), but was still confused with how to actually decide whether to use a mock or a stub in practice. (For a pictorial overview, check out Jeff Atwood's slightly NSFW photo montage of dummies, fakes, stubs, and mocks.) I thought I'd share my response which cleared up the confusion for my friend…

It's easy to get confused. Basically, mocks specify expectations. Stubs are just stand-in objects that return whatever you give them. For example, if you were testing that invoices over $10,000 required a digital signature…

// Arrange
var signature = DigitalSignature.Null;
var invoice = MockRepository.GenerateStub<IInvoice>();
invoice.Amount = new Money(10001M);
invoice.Signature = signature;
var signatureVerifier = MockRepository.GenerateMock<ISignatureVerifier>();
signatureVerifier.Expect(v => v.Verify(signature)).Return(false);
var invoiceRepository = MockRepository.GenerateMock<IInvoiceRepository>();
var accountsPayable = new AccountsPayable(signatureVerifier, invoiceRepository);
 
// Act 
accountsPayable.Receive(invoice);
 
// Assert 
invoiceRepository.AssertWasNotCalled(r => r.Insert(invoice));
signatureVerifier.VerifyAllExpectations(); 

I don’t have a real invoice. It’s a proxy generated by Rhino Mocks using Castle DynamicProxy. You just set/get values on the properties. Generally I use the real object, but stubs can be handy if the real objects are complex to set up. (Then again, I would consider using an ObjectMother first.)

Mocks on the other hand act as probes to detect behaviour. We are detecting whether the invoice was inserted into the database without requiring an actual database. We are also expecting the SignatureVerifier to be called and specifying its return value.

Now the confusing part… You can stub out methods on mocks too. If you don't care whether a method/property on a mock is called (but you do care about other aspects of the mock), you can stub out just that part. You cannot, however, call Expect or Stub on stubs.

UPDATE: I’m including my comments inline as they respond to important points raised by Aaron and John in the comments here and many readers don’t bother looking through comments. 🙂

@Aaron Jensen – As Aaron points out in the comments, you are really mocking or stubbing a method or property, rather than an object. The object is just a dynamically generated proxy to intercept these calls and relay them back to Rhino Mocks. Whether it’s a mock/stub/dummy/fake doesn’t matter.

Like Aaron, I prefer AssertWasCalled/AssertWasNotCalled. I only use Expect/Verify if the API requires me to supply return values from a method/property as shown above.

I also have to agree that Rhino Mocks, while a great mocking framework that I use every day, is showing its age. It has at least 3 different mocking syntaxes (one of which I contributed), which increases the confusion. It's powerful and flexible, but maybe a bit too much. Rhino Mocks vNext would likely benefit from deprecating all but the AAA syntax (the one borrowed from Moq) and doing some house-cleaning on the API. I haven't given Moq an honest try since its initial release so I can't comment on it.

@John Chapman – Thanks for the correction. I've had Rhino Mocks throw an exception when calling Expect/Stub on a stub. I assumed it was expected behaviour that these methods failed for stubs, but it looks like a bug. (The failure in question was part of an overly complex test and I can't repro the issue in a simple test right now. Switching from stub to mock did fix the issue though.) stub.Stub() is useful for read-only properties, but generally I prefer getting/setting stub.Property directly. Still, stub.Expect() and stub.AssertWasCalled() seem deeply wrong to me. 🙂

Last time, I discussed why you as a developer might be interested in PowerShell and gave you some commands to start playing with. I said we’d cover re-usable scripts, but I’m going to delay that until next post as I want to talk more about life in the shell…

PowerShell feels a lot like cmd.exe, but with a lot more flexibility and power. If you’re an old Unix hack like me, you’ll appreciate the ability to combine (aka pipe) commands together to do more complex operations. Even more powerful than Unix command shells is the fact that rather than inputting/outputting strings as Unix shells do, PowerShell inputs and outputs objects. Let me prove it to you…

  1. At a PowerShell prompt, run “get-process” to get a list of running processes. (Remember that PowerShell uses single nouns for consistency.)
  2. Use an array indexer to get the first process: “(get-process)[0]” (The parentheses tell PowerShell to run the command.)
  3. Now let’s get really crazy… “(get-process)[0].GetType().FullName”

As a .NET developer, you should recognize “.GetType().FullName”. You’re getting the class object (aka System.Type) for the object returned by (get-process)[0] and then asking it for its type name. What does this command return?

System.Diagnostics.Process

That’s right! The PowerShell command, get-process, returns an array of System.Diagnostics.Process objects. So anything you can do to a Process object, you can do in PowerShell. To figure out what else we can do with a Process object, you can look up your MSDN docs or just ask PowerShell itself.

get-member -inputObject (get-process)[0]

Out comes a long list of methods, properties, script properties, and more. Methods and properties are the ones defined on the .NET object. Script properties, alias properties, property sets, etc. are defined as object extensions by PowerShell to make common .NET objects friendlier for scripting.
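Get-Member can also filter on member type, which makes it easy to see just the extensions PowerShell has layered on:

# Show only the script properties PowerShell adds to System.Diagnostics.Process
get-member -inputObject (get-process)[0] -memberType ScriptProperty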

Let’s try something more complex and find all processes using more than 200MB of memory:

get-process | where { $_.PrivateMemorySize -gt 200*1024*1024 }

Wow. We’ve got a lot to talk about. The pipe (|) takes the objects output from get-process and provides them as the input for the next command, where – which is an alias for Where-Object. Where requires a scriptblock denoted by {}, which is PowerShell’s name for a lambda function (aka anonymous delegate). The where command evaluates each object with the scriptblock and passes along any objects that return true. $_ indicates the current object. So we’re just looking at Process.PrivateMemorySize for each process and seeing if it is greater than 200 MB.
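Once the pattern clicks, pipelines compose naturally. For instance, here is a sketch that chains in two more cmdlets:

# The five biggest processes by private memory, largest first
get-process | sort PrivateMemorySize -descending | select -first 5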

Now why does PowerShell use -gt, -lt, -eq, etc. for comparison rather than >, <, ==, etc.? The reason is that for decades shells have been using > and < for input/output redirection. Let's write to the console:

'Hello, world!'

Rather than writing to the console, we can redirect the output to a file like this:

'Hello, world!' > Hello.txt

You’ll notice that a file is created called Hello.txt. We can read the contents using Get-Content (or its alias, type).

get-content Hello.txt

Hello, world!

Since > and < already have a well-established use in the shell world, the PowerShell team had to come up with another syntax for comparison operators. They turned to Unix once again and the test command. The operators that the Unix test command has used for 30 years are the ones PowerShell uses.*
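You can try the operators directly at the prompt. Note that string comparisons are case-insensitive by default:

10 -gt 5          # True
'abc' -eq 'ABC'   # True: -eq ignores case; use -ceq for a case-sensitive comparison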

So helpful tidbits about piping and redirection…

  • Use pipe (|) to pass objects returned by one command as input to the next command.
    • ls | where { $_.Name.StartsWith('S') }
  • Use output redirection (>) to redirect the console (aka stdout) to a file. (N.B. This overwrites the destination file. You can use >> to append to the destination file instead.)
    • ps > Processes.txt
  • Do not use input redirection (<) as it is not implemented in PowerShell v1. 😞

So there you have it. We can now manipulate objects returned by PowerShell commands just like any old .NET object, hook commands together with pipes, and redirect output to files. Happy scripting!

* From Windows PowerShell in Action by Bruce Payette, p. 101. This is a great book for anyone experimenting with PowerShell. It has lots of useful examples and tricks of the PowerShell trade. Highly recommended.