Posts Tagged ‘corgibytes’

Recipe: How to make your ruby version and gemset more visible when using rvm

Friday, January 28th, 2011

I recently started using rvm for all of my projects. rvm is designed to help ruby developers work with multiple versions of ruby on their system. I also came up with a great way of always knowing which version of ruby rvm has active. But before I go into that, let's talk about some details of rvm.

Installing a few rubies

Once you get rvm installed, you only need to run rvm install 1.9.2. That command will download and build the latest version of ruby 1.9.2 from source. If you also work with Phusion’s ruby enterprise edition, you can install it from source by running rvm install ree.

After running those two commands, you will have three versions of ruby installed on your computer: the system version, ruby 1.9.2, and ruby enterprise edition. However, if you run ruby --version, you'll notice that the system version is the one that gets executed. Here's what that looks like on my Mac running Mac OS X version 10.6.6.

cloudraker:~ mscottford$ ruby --version
ruby 1.8.7 (2009-06-12 patchlevel 174) [universal-darwin10.0]

However, if you first run rvm use 1.9.2, then running ruby --version should give you exactly what you expect. Try switching to ruby enterprise edition with rvm use ree. Again, running ruby --version should confirm that the switch took place correctly. Should you want to return to using your system's version of ruby, just execute rvm use system.

If you ever need to check which version of ruby is active, you can run rvm current. This will output the name of the ruby that rvm has set up. We'll discuss a better way to determine which ruby is active a little later. But first, let's talk about how rvm helps us manage gems for each project.

Working with gemsets

Since just having different versions of ruby is not enough, rvm also gives us the ability to create different sets of gems that are completely isolated from each other. By default each version of ruby that we install gets its own gemset. We also have the ability to create named gemsets.

We create gemsets with the command rvm gemset create gemset_name. This will create a gemset for the currently selected version of ruby. One thing to keep in mind is that creating a gemset does not automatically switch you to that gemset. To do that you'll need to use the rvm use command, for example rvm use 1.9.2@gemset_name. If you need to figure out which gemset is active, you can run the rvm current command. Once again, a better way to keep track of this is on its way.

Here’s a longer example that shows how to create and work with gemsets.

$ rvm use 1.8.6
Using /Users/mscottford/.rvm/gems/ruby-1.8.6-p399
$ rvm gemset create funkyness
'funkyness' gemset created (/Users/mscottford/.rvm/gems/ruby-1.8.6-p399@funkyness).
$ rvm current
ruby-1.8.6-p399
$ rvm use 1.8.6@funkyness
Using /Users/mscottford/.rvm/gems/ruby-1.8.6-p399 with gemset funkyness
$ rvm current
ruby-1.8.6-p399@funkyness
$ rvm use 1.8.6
Using /Users/mscottford/.rvm/gems/ruby-1.8.6-p399
$ rvm current
ruby-1.8.6-p399

Start using .rvmrc, and stop thinking

To make it impossible to forget which version of ruby, and even which gemset, each of your projects uses, rvm will look for a .rvmrc file in each directory that you switch into with the cd command.

Here’s an example.

$ rvm current
system
$ cd funkyness
$ rvm current
ruby-1.8.6-p399@funkyness

Okay. That looks like magic. What’s going on?

To answer that question, let’s take a peek inside of ~/funkyness/.rvmrc.

rvm 1.8.6@funkyness --create

With that one line, rvm will switch to ruby version 1.8.6 and the funkyness gemset. It will even create the gemset for you if it does not exist.

Since this feature could potentially be used to trick you into running malicious code on your system, rvm asks you to trust a .rvmrc file the first time that it reads it. You only have to do this once, however.

What’s this post about again?

Now that I’ve explained the finer points about using rvm, I can finally start to vent a little.

I have several ruby projects that I’m working on at the moment. Some are for fun, but most are for my paying clients. I only recently started using .rvmrc files, and I’ve yet to create them for all of my projects. This means that for some projects, I don’t really need to think about which version of ruby is getting run, because it is the version that I’ve specified in the .rvmrc file. For other projects, however, I need to remember to run rvm use with the correct version of ruby for that project.

But I'd hate to run rvm use if I don't need to. And running rvm current all the time seems a little silly. The solution that I've come up with is to alter the bash prompt so that it always tells me which ruby rvm currently has active.

To get started, I used my favorite search engine to see if someone had already tackled this problem. I found one really good example that even introduced some color; however, it also used some git magic to include the current branch in the prompt. A few modifications later, I came up with my own version that just displays the ruby that is in use by rvm, and it does so while looking like it was copied and pasted out of TextMate.

Here’s what it looks like.

:rvm => 'system'
~ $ cd funkyness

:rvm => 'ruby-1.8.6@funkyness'
funkyness $ cd ..

:rvm => 'system'
~ $

CSS Unit Testing

Tuesday, October 19th, 2010

CSS is often not treated as code, but I'd like to argue that it should be. For instance, it needs to be easier to refactor CSS documents, and it needs to be possible to detect when there are CSS rules that are no longer needed.

I’ve read some recent discussions where the question of CSS unit testing has been raised. Many of these discussions devolved into a debate about whether or not CSS was “code”. A lot of these commentators complained about CSS not being a Turing complete language. I’d like to claim that this debate, with respect to unit testing, is a giant waste of time. Whether or not CSS is Turing complete has nothing to do with the reasons why one would like to write tests against CSS.

But to avoid that debate, I'll avoid describing CSS as code. Instead, I'd like to propose that CSS is actually a domain specific language that is used to control the way a browser works. For simplicity, let's think of CSS as a configuration syntax.

CSS is a language that affects the way that HTML documents are displayed by web browsers. As CSS has become the primary method for altering the way information is displayed, HTML documents have become more and more semantic. The additional tags that have been added to HTML 5 have made the documents even more semantic.

This means that HTML is basically just data that is displayed by a web browser. Web browsers have a default way of presenting this information. CSS is used to alter this default presentation, which means that CSS is simply a method for configuring the workings of a browser. Since CSS affects the execution of the program that displays the information, the web browser in this case, it is important to ensure that the configuration is accurate for the task at hand.

This is where testing comes in. Testing should be employed any time that we want to ensure the correct operation of an application.

So can we stop the bitching and get started on a decent method for testing CSS already? I’ve got some ideas, but I’ll have to write about them later, once I’ve had a chance to work up some experiments.

Dynamic DNS with Rackspace Apps Control Panel

Monday, May 24th, 2010

I use Rackspace Apps for email across all of my domains, and I am using them as a domain registrar, too. A few days ago, I wanted to create a subdomain that pointed to my computer at home. I didn't want to use one of the free dynamic DNS services, and I wanted to be able to create the subdomain for a domain that I already own.

Through the Rackspace Apps control panel, I can change all of the DNS entries for any of the domains that they are hosting. To create a subdomain, all I have to do is create an A record entry for "example" that points to my home IP address. I used an external lookup service to find that address. I clicked "save", and the address started resolving right away. Perfect. Well, at least until my ISP hands out a different IP address.

What I needed was a programmatic way to detect that my IP address has changed and then update the A record entry with the new IP address.

I dug through the Rackspace Apps API documentation looking for a published way to do this, but I was unable to find one. Then I realized that I could just treat the control panel website as an API by driving it with a headless browser, like HtmlUnit.

There are several ruby gems that provide the ability to drive headless browsers. I took a quick look at celerity, mechanize, steam, and webdriver. I settled on using steam, because it seemed to have the fewest layers between it and the headless browser. I had also never used it before, and I wanted to get a feel for how well it worked.

I have posted the resulting script as a github gist. Take a peek. Comments and forks are welcome. Note that all domain names, user names, and passwords have been replaced with made up examples.

As for running the script, I set up an @hourly entry in cron. The script only contacts the Rackspace Apps control panel if it detects an IP address change, so the risk of accidentally hammering their web server with this should be low. (I point that out in case any of my old coworkers stumbles across this. :))

The current implementation can only update an existing DNS entry, but it should not be too hard to extend it to support creating additional DNS entries. Anyone that goes to implement this should make sure to correctly handle clicking the add link if there are not any empty rows in the entry table.

In addition to supporting the creation of new entries, there are a few improvements that I would like to make to this script. (1) I'd like to have an external service resolve the domain. This is going to become critical, because I want the domain to resolve to the private IP address for devices that are on the private network. (2) I'd like to stop relying on an external site to discover my public IP address.

I’m thinking of writing a small web service that I can install on my server that will address both of these. The service will be able to do DNS resolution, and it will be able to detect the public ip address of the caller.

Rake recipes for working with Visual Studio projects

Wednesday, February 3rd, 2010

Despite spending my day job coding in Microsoft-land, I find myself using ruby tools more and more during my daily development. I recently wrote some rake tasks that I think are worth sharing and explaining. Specifically, I wrote tasks to control building with msbuild (seems redundant, I know) and some tasks for starting and stopping Cassini (or webdev.webserver, as it is now named).

Monkey patch Pathname for Windows paths

Since I am using the Pathname class to build paths in my examples, I need to give you the monkey patch that I use to make Pathname correctly display win32 paths.

require 'pathname'

class Pathname
  alias_method :original_to_s, :to_s

  # Display paths with Windows-style backslashes.
  def to_s
    original_to_s.gsub('/', '\\')
  end
end

Visual Studio command line environment

Running command line Visual Studio tools requires having certain environment variables loaded. This can be done by running vsvars32.bat directly or by launching a Visual Studio command prompt from the start menu. This is something that I always forget to do; my terminal windows spring into life when I type cmd in the run box. So, I wanted to write a task to ensure that the environment was properly set up.

I am working with Visual Studio 2005. If you want to use the Visual Studio 2008 tools then you will need to adjust the vsvars32_bat variable accordingly.

vsvars32_bat =
  "c:\\program files\\" +
  "microsoft visual studio 8\\" +
  "common7\\tools\\vsvars32.bat")

desc "Load the Visual Studio command line environment"
task :vsvars do
  `"#{vsvars32_bat}" && set`.each_line do |line|
    if line =~ /(\w+)=(.+)/
      ENV[$1] = $2
    end
  end
  raise "Eek!" if ENV["VSINSTALLDIR"].nil?
end

This code is the product of about 30 minutes of googling. I eventually found this trick in the shoes rakefile[1].

Now any task that needs to call a Visual Studio command line tool just needs to declare vsvars as a prerequisite, like so.

task :csc => [:vsvars] do
  sh "csc test.cs"
end

Building with msbuild

This is actually pretty easy once we have the environment set up correctly. Just create a task that calls msbuild from a sh call.

namespace :build do
  desc "Build the core project"
  task :core => [:vsvars] do
    sh "msbuild #{core_solution_path}"
  end
end

Controlling Cassini (or webdev.webserver)

There is a lot going on here, so let me first overwhelm you with the code, and then explain what it is doing.

def wait_until_site_loaded
  puts "Please be patient. Waiting for site to respond..."
  site_loaded = false
  until site_loaded
    site_loaded = system(
      "curl -L -I -f http://localhost:2088/Default.aspx > NUL 2>&1")
  end
  puts "done."
end

namespace :web do
  desc "Start the local web server"
  task :start => [:vsvars] do do
      sh "webdev.webserver /path:#{web_root_path} /port:2088 /vpath:/"
    end
    wait_until_site_loaded
  end

  desc "Stop the local web server"
  task :stop => [:vsvars] do
    `taskkill /im webdev.webserver.exe > NUL 2>&1`
  end

  desc "Restart the local web server"
  task :restart => [:vsvars, :stop, :start]
end

Before the code block above will work, you will need to create a web_root_path variable that points to the absolute path of your website. Relative paths will not work.

The web:start task will start the web server. If an instance is already running at the specified port, then you will get an error message in the form of a dialog box. I wish I knew how to make it fail silently. (I also wish I knew how to prevent it from displaying an annoying balloon notification.)

After starting the web server, web:start calls out to curl to make sure that the site is responding to GET requests. I have curl installed as a result of installing cygwin. There are other ways to get curl, but you will need to make sure your path points at its location to use the code above without modifications. curl is pretty chatty, so I have silenced it by routing its standard out and standard error streams to NUL.

The web:stop task will kill all instances of Cassini. This might be annoying if you have more than one instance running. If that is the case, then you will need to write in some form of accounting for the process id of the web server process, or develop a way to figure out which process id owns the port you want. Once you know the specific pid, you can call taskkill /pid and pass it the pid of the process you want to kill.

The web:restart task will call the web:stop task followed by the web:start task.

One more thing: display available tasks from default task

Note: This recipe does not apply to just Windows development. It will work on any platform.

You can get a list of documented tasks by calling rake -T, but I always forget to do that. I usually just call rake when I want to know what it does. So I created a :default task that displays the same task list that you get when you call rake -T.

task :default do |task|
  puts "You must specify a task. Available tasks are listed below:"
  task.application.options.show_task_pattern = /.*/
  task.application.display_tasks_and_comments
end

That’s it. I hope you found these recipes helpful. Happy coding!

[1]: This is way off topic, so I stuck it in a footnote. I know it’s old news, but the way _why, the creator of shoes, committed Internet suicide really irritates me. There were a lot of people benefiting from his contributions to the ruby world, and then one day he just decides to take his toys and go home. He could have bowed out graciously, explaining that he had moved on, but he instead chose identity death. His works remain in archived form, but I am not sure if there is still any energy behind them.

Watir Wait

Thursday, January 7th, 2010

I have been working with watir over the last couple of days. I quickly became frustrated with numerous errors claiming that the element I wanted to perform an operation on did not exist. I found the Watir::Waiter class and started using it extensively. So extensively that I decided to write a little monkey patch to make my life easier.

The application that I am working with performs a lot of client-side DOM manipulation. This created instances where my script was asking Watir to perform operations on DOM objects that didn't exist. To defend against that, every time I called click or set or select on a DOM object, I wrote two additional statements: one to make sure the browser had finished whatever it was working on, and one to make sure that the element I was about to interact with actually existed.

The code looked something like this.

  @browser = # assuming the classic IE driver, which is where Watir::Waiter lives

  Watir::Waiter.wait_until { @browser.text_field(:name, /UserName/).exists? }
  @browser.text_field(:name, /UserName/).set("Admin")

  Watir::Waiter.wait_until { @browser.text_field(:name, /Password/).exists? }
  @browser.text_field(:name, /Password/).set("Password")

  Watir::Waiter.wait_until { @browser.button(:name, /Submit/).exists? }
  @browser.button(:name, /Submit/).click

While that works, I got really sick of having to re-type the selector for the DOM element that I wanted to muck with. What I wanted to do was write code that looked something like this.

  @browser = # assuming the classic IE driver

  @browser.text_field(:name, /UserName/).wait_to_set("Admin")
  @browser.text_field(:name, /Password/).wait_to_set("Password")
  @browser.button(:name, /Submit/).wait_to_click

Wow. That is much more concise and easier to understand. Even a non-programmer can understand what is happening now.

To make this code actually work, I decided to write a quick monkey patch that adds a “wait_to_” alternative for every method that can be called on input elements and links. These methods call @browser.wait, ask Watir::Waiter to wait for the element to exist, and then call the requested method.

I called my monkey patch Watir Wait. (Get it? I crack myself up! :)) Take a peek and let me know what you think. If I get enough positive feedback, I’ll rework this into a proper patch and submit it to the Watir team for inclusion.

Database Dump: export the contents of your Oracle database

Friday, December 18th, 2009

I created another small utility written in ruby. This one dumps the entire contents of a database to a text file. Contents are spewed to standard out, so you will have to pipe the output to a file if you want to do anything useful with it later.


Ever wanted a database equivalent to grep?

Thursday, December 17th, 2009

I am always banging my head against the wall when working with legacy databases, because it is difficult to tell where information is stored. Reading through an entire application code base to find the location of a string on the user interface is a very frustrating task. It would be much faster if I could just run grep '.*message.*' against the database. I have been wishing that such a thing existed for quite a while, but I was unable to find one that did what I wanted. Start with a dash of ruby, add an hour of my time, throw in some tinkering, and bang. It's done.

This version only works with Oracle databases, but it should not be too difficult to rework it to talk to whichever database management system you are mucking with.

Requirements: ruby 1.8.7 or later and ruby-oci8 version 2.0. (If on Windows, make sure you download the binary gem from rubyforge instead of trying to do gem install ruby-oci8.)
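The core of the idea is just one generated LIKE query per text column. A sketch (in the real tool the table/column pairs would come from Oracle's ALL_TAB_COLUMNS view; the SQL shape here is my illustration, not the script's exact output):

```ruby
# Build one search query per (table, column) pair. The pairs would be
# discovered from the data dictionary; here they are passed in, and the
# SQL shape is illustrative rather than the script's actual output.
def grep_queries(text_columns, pattern)
  text_columns.map do |table, column|
    "SELECT '#{table}.#{column}' AS location, #{column} " \
      "FROM #{table} WHERE #{column} LIKE '%#{pattern}%'"
  end
end
```

Running each generated query and printing any rows that come back gives you the grep-like report of where a string lives.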


Instrumenting assemblies with Mono.Cecil and IronRuby

Wednesday, December 16th, 2009

I just finished working on a script that I am really proud of. So proud that I want to share it with all of you.

I am working on making modifications to a third party application. I have source for some of the application, but unfortunately just having the source has not answered all of my questions. The application’s architecture is rather convoluted, and the source code is filled with hints that it was produced by very inexperienced hands. To steal a quote that I read on twitter a couple of months ago, “I was hoping to at least get spaghetti, but this code is just soup.”

So, I wanted to instrument the code so that I could get a better idea about what was going on in one particular library, specifically one that leverages Microsoft's Workflow Foundation. My first stab at this was to just read the source in and add code to each method that marks the entry and exit points. I was planning on using the System.CodeDom libraries for this, and I was rather disappointed to discover that CodeParser is not implemented by the .NET Framework for C#.

So I turned to Mono.Cecil instead. I wrote a utility that modifies every constructor and every method. For each one, a message is inserted at the beginning of the method to note that it has started, and a message is inserted right before the return statement to note that the method is complete. Messages are transmitted through log4net, so you will need to play with your app.config to make the tracing messages show up.

The utility is written in ruby and will only run under IronRuby, because it makes heavy use of the .NET Framework. Oh, and the utility can both apply and remove the instrumentation, so you can put the assembly back the way you found it.

Enjoy and let me know if you find any problems.

Using log4net with IronRuby

Monday, December 7th, 2009

Using log4net with IronRuby is something of a pain. This is for two reasons.

  1. log4net violates Microsoft's API naming guidelines by naming the root namespace in the log4net assembly 'log4net'. A conforming name would look more like 'Log4Net'.
  2. IronRuby ignores any namespace that starts with a lowercase letter. It will flat out refuse to load them.

These two facts together lead to total suck, but I have found a workaround. I wrote a wrapper class that invokes the log4net assembly via reflection. This lets you call log4net.Config.BasicConfigurator.Configure() so that log4net gets configured from the app.config file. The wrapper class also allows you to access named loggers and provides a way to output the log levels that are configured for the root logger.


IronRuby and the Configuration (app.config or exe.config)

Friday, December 4th, 2009

I was trying to write a quick little IronRuby application that talks to a third-party library that I am working with. I ran into some problems related to configuration files, and I thought I would share how I got around the problem.[2]

The library I am working with requires that some values exist in the application's configuration file, which could be either the app.config file or the executable_name.exe.config file. But I have no way to specify these values, because IronRuby's ir.exe[1] has its own configuration file, ir.exe.config, that sets up paths and other options for the Dynamic Language Runtime (DLR). Any application that you execute with IronRuby is run within the context of ir.exe, and so it inherits ir.exe's configuration.

I should mention I could have added the values directly to ir.exe.config, but I dismissed this solution as unacceptable. I am really a stubborn person.

During my extensive research into the issue I encountered several suggested solutions, but none of them worked. Most discussions that I came across ended with someone giving up and modifying ir.exe.config.

The .NET Framework provides no approved way to modify the configuration once it is loaded into memory. I imagine that this is due to security issues. You would not want malicious code to get access to the configuration file and change the values. My second of two attempts to solve this problem resulted in success.

First, I tried creating a new AppDomain with its own configuration. However, I was not able to use any IronRuby constructs to get code to execute within the context of the child AppDomain.

To do this, I first tried creating a MarshalByRef descendant that contained the code requiring the configuration settings. However, the way IronRuby creates CLR versions of the Ruby types made this very difficult. It looks like the types are created in an in-memory assembly, but I could not get a reference to that assembly that would let me load my custom type into a different AppDomain. I kept getting errors complaining that the assembly could not be found. After hours of trying and trying, I gave up and decided to call AppDomain.DoCallBack instead.

Here I encountered an issue with IronRuby delegates. I created a proc with the code that I wanted to execute and passed it into the constructor for the delegate type that is expected by the DoCallBack method. However, I got a really strange error complaining about not being able to serialize the delegate into the new AppDomain. Strike two. At this point, giving up was starting to look like a really good option.

Not knowing another way to solve the problem, I decided to hack my way to a solution. With my friendly companion, Reflector, I started deciphering the logic that reads configuration files into memory. I wanted to find out how to change the configuration file that the current AppDomain is using and then force the AppDomain to read from the new configuration file. The result is the ConfigurationSettingsHackery class. It uses reflection to dig into System.Configuration and change some key private members. After doing so, the AppDomain re-reads the configuration the next time that configuration information is requested.

I hope this helps someone. It would have really been nice to have this class two days ago. I should warn you, however, that this is a nasty, nasty hack. As such, it will most likely not work on the next version of the .NET Framework.

[1]: I am using IronRuby 0.9.2.
[2]: This discussion is also applicable to IronPython users that are trying to do the same thing, as it has the same issues and limitations.

Mono.Cecil and Type Forwarding

Monday, November 23rd, 2009

Just a quick note to help those that may be searching for the ability to use Mono.Cecil to create an assembly that forwards types to another assembly. After trying several different ways to call the library to do what I wanted, I decided it was time to dive into the source and see what was going on. The answer to my frustrations was found after much searching. Take a peek at the source for Mono.Cecil.ReflectionWriter and search for TODO. You will find the VisitExternType method. It contains nothing but the comment TODO. Oh, and the method is never called, so good luck trying to figure out how it is supposed to work.

I am going to try to get this to work with Microsoft’s CCI instead. I will report my findings in another post.

Importing an existing git repository into subversion

Thursday, November 12th, 2009

I've been scouring the net for a way to take an existing local git repository and apply all of its commits to a subversion repository. I finally found the answer. I am going to rewrite the procedure here while I wait for my code to be committed.

Assuming you have an existing git repository and are currently in its directory, run the following commands to link your git repository to the subversion repository.

  $ git svn init -s svn://my/svn/server
  $ git svn fetch

The result of the fetch command should display a series of revisions from the subversion repository.

Now run the following command and store the result somewhere.

  $ git show-ref trunk

This should yield a sha-1 hash for the remote repository.

Now we need to grab the hash for the local repository.

  $ git log --pretty=oneline master | tail -n1

Finally, we need to let git know that these two revisions should be "grafted" onto one another. To do that, run the following.

  $ echo "<second value>  <first value>" >> .git/info/grafts

Running git log should reveal that the last commit from subversion now appears right before the first commit in your local repository. Perfect!

Now run the following command to push everything into the subversion repository.

  $ git svn dcommit

Sit back and watch the output scroll by. My commit is still running, even after typing this entire post. :)

Go, Go gadget Google!

Wednesday, November 11th, 2009

Okay, so the title of this post needs some work, but I wanted to take a few moments to comment about the new programming language on the street today, Go.

Go was born out of one of Google’s famous 20% projects. I have been reading through the documentation on the project site, and I am starting to get a feel for the motivation behind the development of the language.

It appears that someone at Google was a really big fan of C. Such a big fan that they designed a language with the same basic feel, but with some new and improved syntactic sugar.

With most of the sexy languages in the land being of the dynamic variety, it is interesting to see such an improvement in the static space. The Go language adopts many features that are really popular in dynamic languages, but provides advantages that come only as an afterthought with most dynamic languages.

It is going to be interesting to see whether this language gets adopted. I, for one, am not anxious to start using it, mainly because I have been working with Ruby in my free time. (Ha! Free. Right. More on that some other day.)

I am going to remember Go though for one particular use case. If I find myself unhappy with Ruby runtime performance, and I want to optimize by writing closer to the metal, then I am more likely to reach for Go than I am to reach for C or C++. Very interesting.

sudo, Ubuntu, and the PATH environment variable – a love story (of sorts)

Saturday, November 7th, 2009

I just started setting up an Ubuntu Karmic Koala (9.10) server in the cloud, and I very quickly became very frustrated with the default behavior that is compiled into sudo. Since there is not much info lying around the net on how to solve this problem, I thought I would throw this post together. So if the big search engine in the sky brought you my way, then I hope this helps you.

Sudo on Ubuntu Karmic has been compiled with the --with-secure-path option. This causes sudo to ignore any changes to the PATH environment variable. And I do mean any changes. Changing the path in the user's environment, a la PATH=$PATH:/opt/other-bin sudo gem, will not work. Neither will modifying the path variable in the /etc/environment file. And don't try to modify the PATH in /etc/profile or /root/.profile or /root/.bashrc, because none of those will work either.

If you want to see the path that sudo is using then take a peek at /usr/share/doc/sudo/OPTIONS. There you will see the exact path that was compiled into the sudo command.

This “secure path” can be modified. But before I tell you how, I should insert a word of caution. My research indicated that this was done for your protection. As with many things that are done for your protection, it is annoying as hell. But it evidently makes it harder for trojans to run commands as root. So make sure that you think twice before making changes to the “secure path” that sudo uses when it runs.

Thanks for patiently reading the disclaimer. Now for the juicy details. To modify sudo’s “secure path” you just need to add a line to the /etc/sudoers file. This file is best modified using the visudo command. So fire up visudo and add the following line.

  Defaults        secure_path=<your new path>

I highly recommend that you start with the value that sudo was compiled with and then append to it.

I hope that helps you.

It would have been really nice if this was documented better somewhere. I was only able to piece this solution together after reading a lot of confusing forum posts and after several head-scratching reads of the sudo man page.

Working with C# Anonymous objects

Monday, June 8th, 2009

Anonymous objects in C# are very handy, especially given the way they are supported by the ASP.NET MVC framework.

I recently ran into a case where I wanted to interact with an anonymous object. Specifically, I was testing the data that I provided to a JsonResult. I handed the JsonResult a pretty complicated anonymous object with several layers of nesting. This is a great use of anonymous objects, because in code they look a lot like JSON. So, how do I make sure that the JsonResult is getting the correct data? The answer is reflection. But, as with all things, there is a hard way and an easy way.

First the hard way.

var example = new {
  stringData = "string data",
  integerData = 12,
  booleanData = true
};

Given the block of code above, if we want to retrieve one of the values from the example instance we have to do the following.

string value = (string) example.GetType().InvokeMember(
    "stringData",
    BindingFlags.GetProperty,  // requires using System.Reflection;
    null,
    example,
    new object[] { });

Replace “stringData” with the name of any of the fields and there you go. The trouble is that this block of code is seven kinds of ugly (yes, I counted) and it is not the kind of thing that you want to type over and over. Wouldn’t it be nice if there was an easier way?

What if we use the following extension class instead? It is also available as a gist on my github account.

static class ObjectExtensions
{
    public static T Property<T>(this object target, string name)
    {
        return (T)target.GetType().InvokeMember(
            name,
            BindingFlags.GetProperty,  // requires using System.Reflection;
            null,
            target,
            new object[] { });
    }
}

Now we can access properties on our example object by writing the following code.

string value = example.Property<string>("stringData");
int otherValue = example.Property<int>("integerData");
bool yetAnother = example.Property<bool>("booleanData");

I think that this looks much better. I hope this helps you as much as it has helped me.

Generating an ASP.NET MVC Menu from a SiteMap

Wednesday, May 27th, 2009

I needed an MVC helper method that generated the same markup that the WebForms Menu control does. I am not sure that it is 100% complete, but you can take a look at my first cut. I welcome any improvements or suggestions.

Storing Cucumber scenarios inside TFS

Thursday, April 23rd, 2009

Ever since seeing it demoed at Agile 2008, I have fallen in love with cucumber. Yesterday, I posted the source code for CucumberTFS. Read on for more information about what it does.

I have been trying to introduce some Behavior Driven Development to the project that I am working on. We recently received a large batch of requirements/stories. Our goal was to develop a better way to communicate functionality with our testing team. I suggested using the cucumber given, when, then format to describe all of our scenarios.
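
As an illustration, here is what a scenario in that format looks like (this particular feature and its steps are hypothetical, not taken from our actual requirements):

  Feature: Customer login
    Scenario: Successful login
      Given a registered customer named "Alice"
      When she logs in with a valid password
      Then she should see her account dashboard

Because the format is plain English with just a few structural keywords, our testing team could read and write these scenarios without any programming knowledge.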

We stored these scenarios in our Team Foundation Server (TFS) instance. This lets us track code changes against the scenarios, assign them to different people and anything else that can be done with a TFS work item. I got the idea to retrieve the scenarios from TFS and format them so that they could be run through cucumber. Enter CucumberTFS.

CucumberTFS’s first iteration attempted to wrap the call to cucumber directly. Given the problems I had with redirecting cucumber’s output so that it appeared to come from CucumberTFS, I decided to modify the tool to just generate a single feature file that contains the contents of all the scenarios in TFS.

The tool still needs some work before it can be used by the masses. I want to create a binary release with an installer, for example. And there needs to be more control over the name of the file that is generated and more control over the set of TFS work items that are retrieved.

So, if you are using TFS and want to integrate it with cucumber, then check out CucumberTFS. Head over to github, fork the project and make your own changes.

Working with IIS 7.0 PowerShell provider

Monday, December 8th, 2008

I have spent several hours with the new IIS 7.0 PowerShell provider, and I am very happy. I want to post a quick tip for a question that I went looking for but could not find an answer to.

Suppose you want to write a script that is going to use the features that are loaded by the provider. The first thing that I would expect the script to do is to make sure that the provider is loaded into the environment, and then load it if needed.

So here is the chunk of PowerShell script that will do just that.

# Load the IIS provider if it is not loaded already.
$foundSnapIn = $False
Get-PsSnapIn | ForEach-Object {
  if ($_.Name -eq "IIsProviderSnapIn") {
    $foundSnapIn = $True
  }
}
if ($foundSnapIn -eq $False) {
  Write-Host "Adding IIS Snapin...."
  Add-PsSnapIn IIsProviderSnapIn
  Write-Host "Added."
}

Update 2008.12.10:
Here is a better way to handle this.

if (!(Test-Path IIS:\)) { Add-PsSnapIn IIsProviderSnapIn }

Cucumber Rocks

Friday, August 29th, 2008

The most impressive thing that I saw at Agile 2008 was Aslak Hellesoy’s unveiling of cucumber. Cucumber is meant to replace the rspec story syntax with a format that is targeted towards non-technical test authors. I think that cucumber is going to go a long way toward making it very easy for non-developers to write tests.

I have been working with cucumber a lot since I saw Aslak’s presentation. I have forked the project on github with the hope of adding a way to turn off the colored output. It causes issues when running cucumber with jRuby on Win32. This is because the ruby gem that adds support for ansi colors in the Win32 console requires a C extension, which jRuby does not support.

I have created an additional sample that I need to push to my github repository. I added a sample that calls .NET code with the assistance of the rubydotnet project (careful, there are two). I would have liked to use IronRuby, but it is just not far enough along, and I was not willing to wait for it to get there. Before trying the rubydotnet project, I also tried creating a Java stub for a .NET assembly using IKVM. I did not have much success with that route. I have a feeling that I was having classpath issues, but after a few hours of trying, my patience ran out.

I will leave a note here after I post the additional sample. The sample will include a readme file explaining all of the dependencies that are needed. The sample will also include the Rakefile that I have been using. That should go up some time this weekend.

I have even started using cucumber on one of my projects at work. It has really come in handy. Writing functional tests is just dead simple.

Sometime this week I hope to write another sample demonstrating how to use cucumber to test Objective-C code using RubyCocoa. I have been trying to get cucumber to work with MacRuby, but that is going to have to wait until gem support is working. Or until I get a better understanding of the inner workings of Ruby. That would not exactly be a bad thing.

Oh, one last cucumber note. I have plans to create a Fit/FitNesse style web application that acts as a repository for cucumber tests and provides a built-in editor for writing them. I think that with a project like that around, cucumber would really be able to take off. And that is something that I would really like to push for.

Writer’s note: I typically would have gone through this post and linked to all of the things that I am talking about, but I am feeling a little lazy. I also need to hurry up and get on the road. I am hanging out at a campground in Luray, VA. One of my really good friends has driven his RV there. It should be a weekend filled with good fun. But it will not start unless I get moving. So yeah, long story short, I will update this post later with links.

Asp.Net ViewState is annoying

Tuesday, July 1st, 2008