
Recipe: How to make your ruby version and gemset more visible when using rvm

Friday, January 28th, 2011

I recently started using rvm for all of my projects. rvm is designed to help ruby developers work with multiple versions of ruby on their system. I recently came up with a great way of always knowing which version of ruby is in use by rvm. But before I go into that, let’s talk about some of the details of rvm.

Installing a few rubies

Once you get rvm installed, you only need to run rvm install 1.9.2. That command will download and build the latest version of ruby 1.9.2 from source. If you also work with Phusion’s ruby enterprise edition, you can install it from source by running rvm install ree.

After running those two commands, you will have three versions of ruby installed on your computer, the system version, ruby 1.9.2 and ruby enterprise edition. However, if you run ruby --version you’ll notice that the system version is the one that is getting executed. Here’s what doing so looks like on my Mac running Mac OS X version 10.6.6.

cloudraker:~ mscottford$ ruby --version
ruby 1.8.7 (2009-06-12 patchlevel 174) [universal-darwin10.0]

However, if you first run rvm use 1.9.2, then running ruby --version should give you exactly what you expect. Try switching to ruby enterprise edition with rvm use ree. Again, running ruby --version should confirm that the switch took place correctly. Should you want to return to using your system’s version of ruby, just execute rvm use system.

If you ever need to check which version of ruby is active, you can run rvm current. This will output the name of the ruby that rvm has set up. We’ll discuss a better way to determine which ruby is active a little later. But first, let’s talk about how rvm helps us manage gems for each project.

Working with gemsets

Since just having different versions of ruby is not enough, rvm also gives us the ability to create different sets of gems that are completely isolated from each other. By default each version of ruby that we install gets its own gemset. We also have the ability to create named gemsets.

We create gemsets with the command rvm gemset create gemset_name. This will create a gemset for the currently selected version of ruby. One thing to keep in mind is that creating a gemset does not automatically switch you to that gemset. To do that you’ll need to use the rvm use command, for example rvm use 1.9.2@gemset_name. If you need to figure out which gemset is active, you can run the rvm current command. Once again, a better way to keep track of this is on its way.

Here’s a longer example that shows how to create and work with gemsets.

$ rvm use 1.8.6
Using /Users/mscottford/.rvm/gems/ruby-1.8.6-p399
$ rvm gemset create funkyness
'funkyness' gemset created (/Users/mscottford/.rvm/gems/ruby-1.8.6-p399@funkyness).
$ rvm current
ruby-1.8.6-p399
$ rvm use 1.8.6@funkyness
Using /Users/mscottford/.rvm/gems/ruby-1.8.6-p399 with gemset funkyness
$ rvm current
ruby-1.8.6-p399@funkyness
$ rvm use 1.8.6
Using /Users/mscottford/.rvm/gems/ruby-1.8.6-p399
$ rvm current
ruby-1.8.6-p399

Start using .rvmrc, and stop thinking

To make it impossible to forget which of your projects use which versions of ruby, and which gemsets, rvm looks for a .rvmrc file in each directory that you switch into with the cd command.

Here’s an example.

$ rvm current
system
$ cd funkyness
$ rvm current
ruby-1.8.6-p399@funkyness

Okay. That looks like magic. What’s going on?

To answer that question, let’s take a peek inside of ~/funkyness/.rvmrc.

rvm 1.8.6@funkyness --create

With that one line, rvm will switch to ruby version 1.8.6 and the funkyness gemset. It will even create the gemset for you if it does not exist.

Since this feature could potentially be used to trick you into running malicious code on your system, rvm asks you to trust a .rvmrc file the first time that it reads it. You only have to do this once, however.

What’s this post about again?

Now that I’ve explained the finer points about using rvm, I can finally start to vent a little.

I have several ruby projects that I’m working on at the moment. Some are for fun, but most are for my paying clients. I only recently started using .rvmrc files, and I’ve yet to create them for all of my projects. This means that for some projects, I don’t really need to think about which version of ruby is getting run, because it is the version that I’ve specified in the .rvmrc file. For other projects, however, I need to remember to run rvm use with the correct version of ruby for that project.

But I’d hate to run rvm use if I don’t need to. And running rvm current all the time seems a little silly. The solution that I’ve come up with is to alter my bash prompt to always let me know the current version of ruby that is in use by rvm.

To get started, I used my favorite search engine to see if someone had already tackled this problem. I found one really good example that even introduced some color; however, it was also using some git magic to include the current branch in the prompt. A few modifications later, I came up with my own version that just displays the ruby that is in use by rvm, and it does so while looking like it was copied and pasted out of TextMate.

Here’s what it looks like.

:rvm => 'system'
~ $ cd funkyness

:rvm => 'ruby-1.8.6@funkyness'
funkyness $ cd ..

:rvm => 'system'
~ $
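A minimal version of the prompt tweak looks something like this (a sketch only: the TextMate-style colors are omitted, and it assumes rvm’s rvm-prompt helper is on your PATH — rvm installs it under $rvm_path/bin):

```shell
# Sketch for ~/.bash_profile: print the active rvm ruby above each prompt.
# Assumes rvm's rvm-prompt helper is on the PATH; colors are omitted here.
rvm_version_prompt() {
  local ruby
  ruby=$(rvm-prompt 2>/dev/null)
  echo ":rvm => '${ruby:-system}'"
}

# A blank line, the :rvm line, then the working directory and a $.
PS1="\n\$(rvm_version_prompt)\n\W \$ "
```

Dropping something along those lines into ~/.bash_profile and opening a new terminal should reproduce the behavior shown above.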

CSS Unit Testing

Tuesday, October 19th, 2010

CSS is often not treated as code, but I’d like to make the argument that it should be treated as code. For instance, it needs to be easier to refactor CSS documents, and it needs to be possible to detect when there are CSS rules that are no longer needed.

I’ve read some recent discussions where the question of CSS unit testing has been raised. Many of these discussions devolved into a debate about whether or not CSS was “code”. A lot of these commentators complained about CSS not being a Turing complete language. I’d like to claim that this debate, with respect to unit testing, is a giant waste of time. Whether or not CSS is Turing complete has nothing to do with the reasons why one would like to write tests against CSS.

But to avoid that debate, I’ll avoid describing CSS as code. Instead, I’d like to propose that CSS is actually a domain specific language that is used to control the way a browser works. For simplicity, let’s think of CSS as a configuration syntax.

CSS is a language that affects the way that HTML documents are displayed by web browsers. As CSS has become the primary method for altering the way information is displayed, HTML documents have become more and more semantic. The additional tags that have been added to HTML5 have made documents even more semantic.

This means that HTML is basically just data that is displayed by a web browser. Web browsers have a default way of presenting this information. CSS is used to alter this default presentation, which means that CSS is simply a method for configuring the workings of a browser. Since CSS affects the execution of a program that is used for displaying information (the web browser, in this case), it is important to ensure that the configuration is accurate for the task at hand.

This is where testing comes in. Testing should be employed any time that we want to ensure the correct operation of an application.

So can we stop the bitching and get started on a decent method for testing CSS already? I’ve got some ideas, but I’ll have to write about them later, once I’ve had a chance to work up some experiments.

Dynamic DNS with Rackspace Apps Control Panel

Monday, May 24th, 2010

I use Rackspace Apps for email across all of my domains, and I am using them as a domain registrar, too. A few days ago, I wanted to create a subdomain that pointed to my computer at home. I didn’t want to use one of the free dynamic DNS services, and I wanted to be able to create the subdomain for a domain that I already own.

Through the Rackspace Apps control panel, I can change all of the DNS entries for any of the domains that they are hosting. To create a subdomain, all I have to do is create an A record entry for “example” that points to my home ip address. I used an external “what is my ip” service to look that up. I clicked “save”, and the address started resolving right away. Perfect. Well, at least until my ISP hands out a different ip address.

What I needed was a programmatic way to detect that my ip address has changed and then update the A record entry with the new ip address.

I dug through the Rackspace Apps API documentation looking for a published way to do this, but I was unable to find one. Then I realized that I could just treat the control panel website as an API by driving it with a headless browser, like HtmlUnit.

There are several ruby gems that provide the ability to drive headless browsers. I took a quick look at celerity, mechanize, steam, and webdriver. I settled on using steam, because it seemed to have the fewest layers between it and the headless browser. I had also never used it before, and I wanted to get a feel for how well it worked.

I have posted the resulting script as a github gist. Take a peek. Comments and forks are welcome. Note that all domain names, user names, and passwords have been replaced with made up examples.
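The gist is the authoritative version, but the heart of it is a simple ip-change guard, which looks something like this (a sketch only: the lookup URL and cache file location are made-up placeholders, and the steam-driven control panel part is omitted):

```ruby
require 'net/http'
require 'uri'

# Ask a "what is my ip" service for the current public address. The URL
# here is a placeholder for whichever service the real script polls.
def current_public_ip(uri = URI("http://checkip.example.com/"))
  Net::HTTP.get(uri)[/\d+(\.\d+){3}/]
end

# Returns true (and records the new address) only when the ip has changed,
# so the control panel is only touched when an update is actually needed.
def ip_changed?(new_ip, cache_file)
  last = File.exist?(cache_file) ? File.read(cache_file).strip : nil
  return false if new_ip == last
  File.write(cache_file, new_ip)
  true
end
```

An hourly run that sees an unchanged address exits without touching the control panel at all.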

As for running the script, I set up an @hourly entry in cron. The script only contacts the Rackspace Apps control panel if it detects an ip address change, so the risk of accidentally hammering their web server should be low. (I point that out in case any of my old coworkers stumbles across this. :))
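For reference, the crontab entry amounts to a single line (the script path here is a made-up placeholder):

```
# Run the DNS updater once an hour; cron expands @hourly to "0 * * * *".
@hourly /usr/bin/ruby /home/mscottford/bin/update_dns.rb
```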

The current implementation can only update an existing DNS entry, but it should not be too hard to extend it to support creating additional DNS entries. Anyone who implements this should make sure to correctly handle clicking the add link if there are not any empty rows in the entry table.

In addition to supporting the creation of new entries, there are a few improvements that I would like to make to this script. (1) I’d like to have an external service resolve the domain. This is going to become critical, because I want the domain to resolve to the private ip address for devices that are on the private network. (2) I’d like to stop relying on a third-party service to look up my public ip address.

I’m thinking of writing a small web service that I can install on my server that will address both of these. The service will be able to do DNS resolution, and it will be able to detect the public ip address of the caller.

Marvel Civil War: Reading Order

Thursday, April 8th, 2010

The recent release of the comic book applications for the iPhone and iPad has sparked my interest in the comic book world. I am fascinated by the concept of the Civil War story arc, but I want to make sure that I read the content in the best order. After doing some research, this is what I came up with. I have based this list on the Marvel Civil War Cover gallery, which seemed to be the most complete listing.

This is a ton of content. Wish me luck!

  1. Amazing Spider-Man #529
  2. Amazing Spider-Man: Decisions
  3. Road to Civil War TPB
  4. Amazing Spider-Man #530
  5. Amazing Spider-Man #531
  6. Fantastic Four #536
  7. New Avengers: Illuminati Special
  8. Civil War: Opening Shot Sketchbook
  9. Fantastic Four #537
  10. Civil War #1
  11. She-Hulk #8
  12. Wolverine #42
  13. Amazing Spider-Man #532
  14. Civil War: Front Line #1
  15. Thunderbolts #103
  16. Thunderbolts: Swimming With Sharks
  17. Civil War #2
  18. Civil War: Front Line #2
  19. Amazing Spider-Man #533
  20. New Avengers #21
  21. Fantastic Four #538
  22. Wolverine #43
  23. X-Factor #8
  24. Civil War: Front Line #3
  25. Thunderbolts #104
  26. Civil War #3
  27. Civil War: X-Men #1
  28. Cable & Deadpool #30
  29. X-Factor #9
  30. New Avengers #22
  31. Black Panther #18
  32. Wolverine #44
  33. Civil War: Young Avengers & Runaways #1
  34. Daily Bugle Special Edition: Civil War
  35. Civil War: Front Line #4
  36. Amazing Spider-Man #534
  37. Fantastic Four #539
  38. Ms. Marvel #6
  39. Civil War: Front Line #5
  40. Thunderbolts #105
  41. Civil War: X-Men #2
  42. Heroes For Hire #1
  43. Wolverine #45
  44. New Avengers #23
  45. Cable & Deadpool #31
  46. Civil War: Young Avengers & Runaways #2
  47. Civil War Files
  48. Ms. Marvel #7
  49. Civil War #4
  50. Wolverine #46
  51. Civil War: X-Men #3
  52. Amazing Spider-Man #535
  53. Civil War: Front Line #6
  54. Heroes For Hire #2
  55. Cable & Deadpool #32
  56. Captain America #22
  57. Fantastic Four #540
  58. Civil War: Front Line #7
  59. Ms. Marvel #8
  60. Civil War: X-Men #4
  61. Wolverine #47
  62. New Avengers #24
  63. Civil War: Choosing Sides
  64. Captain America #23
  65. Heroes For Hire #3
  66. Black Panther #21
  67. Black Panther: War Crimes TPB
  68. Civil War: Young Avengers & Runaways #4
  69. Civil War #5
  70. New Avengers #25
  71. Iron Man #13
  72. Amazing Spider-Man #536
  73. Civil War: Front Line #8
  74. Wolverine #48
  75. Punisher War Journal #1
  76. Black Panther #22
  77. Captain America #24
  78. Civil War: War Crimes #1
  79. Civil War: Casualties of War TPB
  80. Iron Man #14
  81. Iron Man/Captain America: Casualties of War
  82. Fantastic Four #541
  83. Civil War: Front Line #9
  84. Winter Soldier: Winter Kills
  85. Black Panther #23
  86. Civil War #6
  87. Civil War: Front Line #10
  88. Amazing Spider-Man #537
  89. Punisher War Journal #2
  90. Thunderbolts #110
  91. Blade #5
  92. Fantastic Four #542
  93. Punisher War Journal #3
  94. Civil War: The Return #1
  95. Moon Knight #7
  96. Moon Knight #8
  97. Ghost Rider #8
  98. Ghost Rider #9
  99. Ghost Rider #10
  100. Ghost Rider #11
  101. Black Panther #24
  102. Civil War #7
  103. Amazing Spider-Man #538
  104. Civil War Poster Book
  105. Black Panther #25
  106. Civil War: Front Line #11
  107. Captain America #25
  108. Civil War: The Confession
  109. Civil War: The Initiative
  110. Fallen Son: Death of Captain America TPB
  111. Fantastic Four #543
  112. Civil War: Battle Damage Report
  113. Marvel Spotlight: Civil War Aftermath
  114. Marvel Spotlight: Captain America Remembered
  115. Civil War Chronicles #1
  116. Civil War Chronicles #2
  117. Civil War Chronicles #3
  118. Civil War Chronicles #4
  119. Civil War Chronicles #5
  120. Civil War Chronicles #6
  121. Civil War Chronicles #7
  122. Civil War Chronicles #8
  123. Civil War Chronicles #9
  124. Civil War Chronicles #10
  125. Civil War Chronicles #11
  126. Civil War Chronicles #12

Open Source Fear, Uncertainty and Doubt (FUD)

Wednesday, April 7th, 2010

I just read an article that Andrea sent me a while back titled, Lobby Group Says Open-Source Threatens Capitalism. I must say that I am a little shocked. But before I rant about why, take a few minutes and read the article. I’ll be here when you get back.

Read it? Good. Time for my rant.

In case you have a horrible memory (or, more likely, you did not actually read it yet. Tsk. Tsk.), the short version of the article is that an intellectual property group has requested that certain countries be added to the “Special 301 watchlist” because they advocate using open source technologies for government work.

The first issue that came to mind when I read that is that the organization forgot to request the same status for Oregon and Massachusetts. You know, states that are part of the United States. I am sure that I could have dug up more states if I had spent more time with my pal Google. But two minutes seemed like enough to prove my point.

To address the second issue, I need to draw your attention to the organization’s own words. Let me paraphrase.

“The Indonesian government’s policy … weakens the software industry … [because] it fails to build respect for intellectual property rights.”

Ugh! Without strong intellectual property rights open source would not be possible. Every open source license builds on the foundation of the author’s copyright as established by law. These licenses provide authors with a legal framework for granting permissions to others.


I really hope this request was not taken seriously. I had thought that fear, uncertainty and doubt (FUD) attacks against open source were behind us. This was a tactic that Microsoft employed very heavily during its spats with Netscape and the U.S. Department of Justice.

But in recent years, Microsoft has started developing new products out in the open under open source licenses. This is in addition to the large amount of code that was initially developed internally and then later released to the public as open source. New projects include IronRuby and the ASP.NET MVC library, while the latter class includes WiX[1] and a host more.

[1]: Shameless plug: I have contributed code to the WiX project, and it was accepted!

The final issue that I will point out is the incredibly large number of companies that this group represents. It does not represent any company directly, but instead does so through other industry groups. But the list is quite large. And amusingly enough, if you dig deep enough into the member lists, you will find some big corporate open source supporters such as Adobe, Apple, IBM and Microsoft.

  • Association of American Publishers (AAP) – [274 members]
  • Business Software Alliance (BSA) – [35 members]
  • Entertainment Software Association (ESA) – [29 members]
  • Independent Film & Television Alliance (IFTA) – [143 members]
  • Motion Picture Association of America (MPAA) – [6 members]
  • National Music Publishers’ Association (NMPA) – [more than 800 members]
  • Recording Industry Association of America (RIAA) – [3905 members]

(Note: the member counts may be a little misleading, because many companies are represented by several of these groups. Don’t just add them all up if you are trying to get a total count.)

This is normally where I would stick a nice little conclusion paragraph to tie my whole post together, but I am too tired. So, I’ll leave that part as an exercise for the reader. What conclusions do you draw from this information? Leave them in the comments.

What iPad keyboard layout does Wayne Westerman use?

Sunday, March 28th, 2010

I have been rather disappointed by the iPad’s apparent exclusion of the Dvorak keyboard layout. But the other day, I had a funny thought: the creator of the iPod Touch/iPhone/iPad touch technology, Wayne Westerman, originally developed the technology for use in keyboards. These keyboards were sold by his company, FingerWorks. The acknowledgements section of Westerman’s dissertation provides evidence that his own repetitive stress injury (RSI) was the driving force behind developing a keyboard that did not require a finger to exert any pressure to press a key. He specifically thanks several individuals that he had type for him, until he was able to “[perfect] less fatiguing forms of data entry”.

Furthermore, the shining star of the FingerWorks product lineup, the TouchStream, supported a multitude of firmware-enabled keyboard layouts. One of these was an experimental layout named Qwerak. It was meant to be a modified version of the Dvorak keyboard layout, with the hope of addressing two problems: (1) the difficulty involved when typing on a surface with no tactile feedback, and (2) the steep learning curve that is involved in switching from Qwerty to Dvorak. (<bragging>Although, it can’t be too steep, because Andrea was able to pick it up in about three weeks. But then again, she is super smart.</bragging>)

Did Wayne Westerman use the Qwerak keyboard layout himself? I don’t know; I can only speculate. But as an RSI sufferer myself that types in a non-Qwerty layout, I can confidently state two facts. (1) Without a doubt, typing with Dvorak induces less pain than typing in Qwerty, and (2) when forced to use the Qwerty keyboard layout to interact with a computer, my productivity becomes dramatically reduced. I estimate that my productivity when using Qwerty is about 1/5th of my productivity when I am using Dvorak. From these two pieces of data, I speculate that if Mr. Westerman found Qwerak to be even easier to type with than either Qwerty or Dvorak, then it would be very painful, frustrating, and counter-productive for him to type with any other keyboard layout.

So this brings me back to my original question: When Wayne Westerman turns on an iPad, launches the email app, touches the compose email icon and then touches inside the address bar, what keyboard layout is displayed on the screen? I’d be willing to bet a whole bunch of donuts that it is not Qwerty.

Hopefully, one day, when I repeat those same steps, I’ll be presented with a Dvorak keyboard layout. Until I can be assured that I will, I won’t be spending a dime on an iPad. Who knows, maybe Google will come out with an Android-based tablet device. If so, I’ll flock to that. On that platform, developers have the ability to write custom input methods, including alternative keyboard layouts.

What do you think Google will call such a device, if they develop one? I like Nexus Prime. But, since I name my computers after Transformers, I might not be the best person to ask about potential names.

FF7: Voices of the Lifestream

Tuesday, February 9th, 2010

<sigh/> For once, I have more to say than will fit in a tweet.

Over the weekend I pulled down OC ReMix’s Final Fantasy VII: Voices of the Lifestream project. I listened to the entire four-disc set while coding at work yesterday, and about half of it while coding in bed last night.

I really enjoyed listening to the album while coding, but I did not enjoy listening to the songs as my primary focus. I have two theories about this. One, these songs were composed to be background music, so if you pay too much attention to them, you break the spell. Or two, listening to these songs directly conjures up images of tirelessly sitting in front of a small television with a PlayStation controller in my sweaty hands, desperately trying to figure out where to go next, and hoping that I don’t run into a battle along the way, because the battle music always makes me jump out of my skin when it starts at 3 AM, and I am pretty low on life anyway, though I really could use the XP, all the while wondering why Aeris had to die.

Yeah, my money is on the second theory.

[Disclaimer: I played this game on and off from mid-1998 through about 2005. I never finished; I’m stuck in the ice mountains on the third disc. Oh, and the movie was beautiful, but it sucked, so I protested by buying it anyway.]

Rake recipes for working with Visual Studio projects

Wednesday, February 3rd, 2010

Despite spending my day job coding in Microsoft-land, I find myself using ruby tools more and more during my daily development. I recently wrote some rake tasks that I think are worth sharing and explaining. Specifically, I wrote tasks to control building with msbuild (seems redundant, I know) and some tasks for starting and stopping Cassini (or webdev.webserver, as it is now named).

Monkey patch Pathname for Windows paths

Since I am using the Pathname class to build paths in my examples, I need to give you the monkey patch that I use to make Pathname correctly display win32 paths.

require 'pathname'

class Pathname
  alias_method :original_to_s, :to_s

  # Render the path with backslashes so it can be handed to win32 tools.
  def to_s
    original_to_s.gsub('/', '\\')
  end
end

Visual Studio command line environment

Running command line Visual Studio tools requires having certain environment variables loaded. This can be done by running vsvars32.bat directly or by launching a Visual Studio command prompt from the start menu. This is something that I always forget to do; my terminal windows spring into life by typing cmd in the run box. So, I wanted to write a task to ensure that the environment was properly set up.

I am working with Visual Studio 2005. If you want to use the Visual Studio 2008 tools then you will need to adjust the vsvars32_bat variable accordingly.

vsvars32_bat = Pathname.new("c:\\program files\\") +
  "microsoft visual studio 8" +
  "common7\\tools\\vsvars32.bat"

task :vsvars do
  # Run vsvars32.bat in a subshell, dump the resulting environment with
  # `set`, and copy each variable into our own environment.
  `\"#{vsvars32_bat}\" && set`.each_line do |line|
    if line =~ /(\w+)=(.+)/
      ENV[$1] = $2
    end
  end
  raise "Eek!" if ENV["VSINSTALLDIR"].nil?
end

This code is the product of about 30 minutes of googling. I eventually found this trick in the shoes rakefile[1].

Now any task that needs to call a Visual Studio command line tool just needs to declare vsvars as a prerequisite, like so.

task :csc => [:vsvars] do
  sh "csc test.cs"
end

Building with msbuild

This is actually pretty easy once we have the environment set up correctly. Just create a task that calls msbuild from a sh call.

namespace :build do
  desc "Build the core project"
  task :core => [:vsvars] do
    sh "msbuild #{core_solution_path}"
  end
end

Controlling Cassini (or webdev.webserver)

There is a lot going on here, so let me first overwhelm you with the code, and then explain what it is doing.

def wait_until_site_loaded
  puts "Please be patient. Waiting for site to respond...."
  site_loaded = false
  until site_loaded
    site_loaded = system(
      "curl -L -I -f http://localhost:2088/Default.aspx > NUL 2>&1")
  end
  puts "done."
end

namespace :web do
  desc "Start the local web server"
  task :start => [:vsvars] do
    # Launch the server on a background thread so the task can keep going.
    Thread.new do
      sh "webdev.webserver /path:#{web_root_path} /port:2088 /vpath:/"
    end
    wait_until_site_loaded
  end

  desc "Stop the local web server"
  task :stop => [:vsvars] do
    `taskkill /im webdev.webserver.exe > NUL 2>&1`
  end

  desc "Restart the local web server"
  task :restart => [:vsvars, :stop, :start]
end

Before the code block above will work, you will need to create a web_root_path variable that points to the absolute path of your website. Relative paths will not work.

The web:start task will start the web server. If an instance is already running at the specified port, then you will get an error message in the form of a dialog box. I wish I knew how to make it fail silently. (I also wish I knew how to prevent it from displaying an annoying balloon notification.)

After starting the web server, the web:start task calls out to curl to make sure that the site is responding to GET requests. I have curl installed as a result of installing cygwin. There are other ways to get curl, but you will need to make sure your path points at its location to use the code above without modifications. curl is pretty chatty, so I have silenced it by routing its standard out and standard error streams to NUL.

The web:stop task will kill all instances of Cassini. This might be annoying if you have more than one instance running. If that is the case, then you will need to write in some form of accounting for the process id of the web server process, or develop a way to figure out which process id owns the port you want. Once you know the specific pid, you can call taskkill /pid and pass it the pid of the process you want to kill.

The web:restart task will call web:stop task followed by web:start task.

One more thing: display available tasks from default task

Note: This recipe does not apply to just Windows development. It will work on any platform.

You can get a list of documented tasks by calling rake -T, but I always forget to do that. I usually just call rake when I want to know what it does. So I created a :default task that displays the same task list that you get when you call rake -T.

task :default do |task|
  puts "You must specify a task. Available tasks are listed below:"
  task.application.options.show_task_pattern = /.*/
  task.application.display_tasks_and_comments
end

That’s it. I hope you found these recipes helpful. Happy coding!

[1]: This is way off topic, so I stuck it in a footnote. I know it’s old news, but the way _why, the creator of shoes, committed Internet suicide really irritates me. There were a lot of people benefiting from his contributions to the ruby world, and then one day he just decides to take his toys and go home. He could have bowed out graciously, explaining that he had moved on, but he instead chose identity death. His works remain in archived form, but I am not sure if there is still any energy behind them.

Watir Wait

Thursday, January 7th, 2010

I have been working with watir over the last couple of days. I quickly became frustrated with numerous errors claiming that the element I wanted to perform an operation on did not exist. I found the Watir::Waiter class and started using it extensively. So extensively that I decided to write a little monkey patch to make my life easier.

The application that I am working with performs a lot of client-side DOM manipulation. This created instances where my script was asking Watir to perform operations on DOM objects that didn’t exist yet. To defend against that, every time I called click or set or select on various DOM objects, I wrote two additional statements: one to make sure the browser had finished whatever it was working on, and one to make sure that the element I was about to interact with actually existed.

The code looked something like this.

  @browser = Watir::IE.new

  Watir::Waiter.wait_until { @browser.text_field(:name, /UserName/).exists? }
  @browser.text_field(:name, /UserName/).set("Admin")

  Watir::Waiter.wait_until { @browser.text_field(:name, /Password/).exists? }
  @browser.text_field(:name, /Password/).set("Password")

  Watir::Waiter.wait_until { @browser.button(:name, /Submit/).exists? }
  @browser.button(:name, /Submit/).click

While that works, I got really sick of having to re-type the selector for the DOM element that I wanted to muck with. What I wanted to do was write code that looked something like this.

  @browser = Watir::IE.new

  @browser.text_field(:name, /UserName/).wait_to_set("Admin")
  @browser.text_field(:name, /Password/).wait_to_set("Password")
  @browser.button(:name, /Submit/).wait_to_click

Wow. That is much more concise and easier to understand. Even a non-programmer can understand what is happening now.

To make this code actually work, I decided to write a quick monkey patch that adds a “wait_to_” alternative for every method that can be called on input elements and links. These methods call @browser.wait, ask Watir::Waiter to wait for the element to exist, and then call the requested method.
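The general shape of the patch looks like the sketch below. To keep it self-contained, it runs against a made-up stand-in element class instead of Watir’s real input element and link classes, and a plain sleep loop stands in for Watir::Waiter.wait_until:

```ruby
# The shape of the "wait_to_" monkey patch. The real patch would target
# Watir's element classes and poll with Watir::Waiter; FakeField and the
# sleep loop here are stand-ins so the sketch runs on its own.
module WaitTo
  # For each named method, define a wait_to_ twin that polls exists?
  # before delegating to the original method.
  def self.patch(klass, *method_names)
    method_names.each do |name|
      klass.send(:define_method, "wait_to_#{name}") do |*args|
        sleep 0.01 until exists?   # stands in for Watir::Waiter.wait_until
        send(name, *args)
      end
    end
  end
end

# Stand-in for a DOM element that only "exists" after a few polls.
class FakeField
  attr_reader :value

  def initialize(polls_needed)
    @polls_left = polls_needed
  end

  def exists?
    (@polls_left -= 1) <= 0
  end

  def set(value)
    @value = value
  end
end

WaitTo.patch(FakeField, :set)
```

With the real classes patched the same way, @browser.text_field(:name, /UserName/).wait_to_set("Admin") behaves like the wait-then-set pair from the first example.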

I called my monkey patch Watir Wait. (Get it? I crack myself up! :)) Take a peek and let me know what you think. If I get enough positive feedback, I’ll rework this into a proper patch and submit it to the Watir team for inclusion.

Blood sucking time vampire

Tuesday, December 29th, 2009

A blood sucking time vampire. That is what XKCD is. I clicked on a link from twitter, and 25 minutes later, I had all of the following pages open in tabs, because I wanted to share them. Too long for a tweet.

More letters: this time, Sprint about device pricing and cancellation fees

Tuesday, December 22nd, 2009

It appears that I am on a letter writing binge. Here is the email that I just fired off to Sprint’s CEO, Dan Hesse, and President of Strategy and Corporate Initiatives, Keith Cowan.

Mr. Dan Hesse and Mr. Keith Cowan:

I am writing to talk about the differences I have noticed between the way Sprint treats new customers versus the way it treats existing customers. My observations are specifically based on pricing, but I feel that the pricing sends a message. I am not sure it is the message that Sprint intends to send. Furthermore, I would like to propose a new method for calculating device discounts and cancellation fees.

I have been a data-only user of the Sprint network since starting service in 03/2009. I use a PC card with my laptop when I am out of the house. I am very happy with the coverage area and the speeds that I have been getting with the device on Sprint’s network. I have recommended Sprint’s data services to friends and family, and I plan on continuing to do so.

I recently learned about the Novatel MiFi 2200, and I wanted to know the cost for purchasing one. I am greatly discouraged by what I have learned.

Sprint is selling the Novatel MiFi 2200 to new customers for $50, after instant savings and a mail-in rebate. Since this offer is only available to new customers, I was expecting to have to pay a little more for the device. My price for the device is $299. No instant savings and no mail-in rebates. Full price. Repeat, no discounts.

After talking to a representative in a Sprint store, I learned that after one year of service, I am eligible for a $75 discount on a new device. After two years of service, I am eligible for a $150 discount on a new device. The Sprint store employee was only able to estimate my cancellation fee at about $150.

Reading through the website and a quick customer service chat, transcript attached, confirmed this information. The chat customer service representative was able to provide me the exact cancellation fee: $140.

Here are the options that I put together after collecting this information.
* remain a current customer and purchase the new device: $299.
* terminate service and sign a new contract: $190.
* wait until I have been a customer for one year: $224.
* wait until I have been a customer for two years: $150.

It appears that preferential pricing is being given to new customers, while existing customers are forced to pay higher prices. In my case, the cancellation fee combined with the cheap introductory price is encouraging me to cancel my service. I find it startling that Sprint would ever introduce pricing schemes that make it appealing for me to discontinue service, because if I am willing to sign a new contract with Sprint, then why not with a competitor?

I feel that the discount pricing and cancellation pricing have become too confusing. Steps need to be taken to simplify the pricing model. While doing so, care should be taken to ensure that discounts are applied fairly to both new and existing customers. Care should also be taken to ensure that cancellation fees do not appear to be alarmingly high.

Arguments that have been given to the FCC and Congress by cell phone providers in response to inquiries about high cancellation fees have led me to assume that the cancellation fee exists to defray the cost of discounting devices for customers.

If this is truly the case, then it should be made clear in Sprint’s initial pricing. And it should apply to upgrade pricing as well. To make this clear, I have a three-part proposal.

First, make device prices the same for everyone, but vary the discount based on how long the person has been a customer. New customers pay full price minus $240, called the “new customer discount”. (Example: $299 – $240 = $59) Existing customers pay full price minus $10 multiplied by the number of months the customer has had service, called the “existing customer discount”. (Example: $299 – (10 * 9) = $209) In this scheme, an existing customer earns the same discount as a new customer every two years.

Second, the cancellation fee should be $240 minus $10 multiplied by the number of months since the customer last purchased a device with a discount. (Example: $240 – (10 * 9) = $150)

Third, make the device discount and the cancellation fee very visible. They should be visible when you log in to the website, and they should appear on the paper bill. Being more transparent about the fees and discounts is going to significantly cut down on the confusion and frustration surrounding them.

In this scheme, customers are incentivized to stay with Sprint in two ways: accumulating discounts towards new devices and avoiding a high cancellation fee.

I thank you for taking the time to read my suggestions. I wish you and your family a happy holiday season.

M. Scott Ford

I wrote my representatives today. About food.

Monday, December 21st, 2009

I sent the following letter to my congressman, Eric Cantor and to my senators, Mark R. Warner and Jim Webb.

<Mr. Representative>:

This letter was prompted by my viewing the recent documentary, Food, Inc. by filmmaker Robert Kenner.

I encourage you to reintroduce, or support the reintroduction, of “Kevin’s Law,” introduced in the 109th Congress as H.R.3160. I feel strongly that granting additional authority to the Secretary of Agriculture and subsidiary agencies will lead to a safer food system and a healthier public.

I also encourage you to support the “local food” movement. The public needs better access to local food sources. Having a “local food” section in every grocery store will go a long way to providing this access. A more practical solution is to encourage and support the development and growth of farmer’s markets, especially in urban areas.

I would like to close with an appeal to watch the documentary, Food, Inc. This movie has greatly affected how I feel about the food that reaches my mouth; however, I feel powerless to effect change. My hope is that if more public officials with the power to effect change become aware of the current situation, then true change will begin to take shape.

Thank you for your time. I wish you and your family a wonderful holiday season.

-M. Scott Ford

Database Dump: export the contents of your Oracle database

Friday, December 18th, 2009

I created another small utility written in ruby. This one dumps the entire contents of a database to a text file. Contents are spewed to standard out, so you will have to pipe the output to a file if you want to do anything useful with it later.
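The core of such a dumper can be sketched in a few lines. This is a minimal sketch rather than the original utility: the helper names (`dump_table`, `dump_database`) are illustrative, it assumes a connection object that behaves like ruby-oci8’s `OCI8` (where `exec` yields each result row as an array), and it reads the table list from Oracle’s `user_tables` dictionary view.

```ruby
# Minimal sketch of an Oracle dumper in the spirit of the utility described
# above. Assumes `conn` behaves like an OCI8 connection from ruby-oci8:
# conn.exec(sql) { |row| ... } yields each result row as an array.

def dump_table(conn, table, io = $stdout)
  io.puts "== #{table} =="
  conn.exec("SELECT * FROM #{table}") do |row|
    io.puts row.map { |value| value.nil? ? 'NULL' : value.to_s }.join(' | ')
  end
end

def dump_database(conn, io = $stdout)
  tables = []
  conn.exec('SELECT table_name FROM user_tables') { |row| tables << row.first }
  tables.sort.each { |table| dump_table(conn, table, io) }
end
```

With ruby-oci8 installed, something like `dump_database(OCI8.new(user, password, database))` would print every table to standard out, ready to be piped to a file.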


Ever wanted a database equivalent to grep?

Thursday, December 17th, 2009

I am always banging my head against the wall when working with legacy databases, because it is difficult to tell where information is stored. Reading through the entire application code base to find the location of a string on the user interface is a very frustrating task. It would be much faster if I could just run grep '.*message.*'. I have been wishing that such a thing existed for quite a while, but I was unable to find one that did what I wanted. Start with a dash of ruby, add an hour of my time, throw in some tinkering, and bang. It’s done.

This version only works with Oracle databases, but it should not be too difficult to rework it to talk to whichever database management system you are mucking with.

Requirements: ruby 1.8.7 or later, ruby-oci8 version 2.0 (if on Windows, make sure you download the binary gem from rubyforge instead of trying to do gem install ruby-oci8)
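Such a tool is small enough to sketch here. This is a minimal sketch rather than the original script: the helper names are illustrative, it assumes an OCI8-style connection from ruby-oci8 (`exec` yields each row as an array and accepts bind variables), and it discovers searchable columns through Oracle’s `user_tab_columns` dictionary view.

```ruby
# Minimal sketch of a "grep for databases" along the lines described above.
# Assumes `conn` behaves like an OCI8 connection from ruby-oci8.

TEXT_TYPES = %w[VARCHAR2 CHAR NVARCHAR2 NCHAR].freeze

# Yield every [table, column] pair whose column holds text data.
def each_text_column(conn)
  sql = 'SELECT table_name, column_name FROM user_tab_columns ' \
        "WHERE data_type IN (#{TEXT_TYPES.map { |t| "'#{t}'" }.join(', ')})"
  conn.exec(sql) { |table, column| yield table, column }
end

# Print every text value in the database that contains `pattern`.
def db_grep(conn, pattern, io = $stdout)
  each_text_column(conn) do |table, column|
    sql = "SELECT #{column} FROM #{table} WHERE #{column} LIKE :1"
    conn.exec(sql, "%#{pattern}%") do |row|
      io.puts "#{table}.#{column}: #{row.first}"
    end
  end
end
```

Running `db_grep(conn, 'message')` would then report every table and column where the string appears, which is exactly the question that is so painful to answer by reading application code.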


Instrumenting assemblies with Mono.Cecil and IronRuby

Wednesday, December 16th, 2009

I just finished working on a script that I am really proud of. So proud that I want to share it with all of you.

I am working on making modifications to a third party application. I have source for some of the application, but unfortunately just having the source has not answered all of my questions. The application’s architecture is rather convoluted, and the source code is filled with hints that it was produced by very inexperienced hands. To steal a quote that I read on twitter a couple of months ago, “I was hoping to at least get spaghetti, but this code is just soup.”

So, I wanted to instrument the code so that I could get a better idea about what was going on in one particular library, specifically one that leverages Microsoft’s Workflow Foundation. My first stab at this was to read the source in and add code to each method marking the entry and exit points. I was planning on using the System.CodeDom libraries for this, and I was rather disappointed to discover that CodeParser is not implemented by the .NET Framework for C#.

So I turned to Mono.Cecil instead. I wrote a utility that modifies every constructor and every method. For each one, a message is inserted at the beginning of the method to note that it has started, and a message is inserted right before the return statement to note that the method is complete. Messages are transmitted through log4net, so you will need to play with your app.config to make the tracing messages show up.

The utility is written in ruby and will only run from IronRuby, because it makes heavy use of the .NET Framework. Oh, and the utility has the ability to apply and remove the instrumentation to an assembly, so you can put it back the way you found it.
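The script itself does its work at the IL level with Mono.Cecil, but the shape of the instrumentation, wrapping each method so it announces its entry and its exit, can be sketched in plain Ruby. This runtime version is only an analogy, not the Cecil-based utility: the `Instrumentation` module, the logger setup, and the method names are all illustrative.

```ruby
require 'logger'

module Instrumentation
  # Wrap every instance method defined directly on `klass` so that entry and
  # exit are logged. The exit message is emitted even when the method raises,
  # and the original return value is preserved.
  def self.instrument(klass, logger: Logger.new($stdout))
    klass.instance_methods(false).each do |name|
      original = klass.instance_method(name)
      klass.define_method(name) do |*args, &block|
        logger.info("enter #{klass}##{name}")
        begin
          original.bind(self).call(*args, &block)
        ensure
          logger.info("exit #{klass}##{name}")
        end
      end
    end
  end
end
```

After calling `Instrumentation.instrument(SomeClass)`, every call to one of its methods produces an enter line, the method’s normal behavior, and an exit line, which is the same trace the Cecil version injects into the compiled assembly.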

Enjoy and let me know if you find any problems.

Using log4net with IronRuby

Monday, December 7th, 2009

Using log4net with IronRuby is something of a pain. This is for two reasons.

  1. log4net violates Microsoft’s API naming guidelines by naming the root namespace in the log4net assembly as ‘log4net’. A conforming name would look more like ‘Log4Net’.
  2. IronRuby ignores any namespaces that start with a lowercase letter. It will flat out refuse to load them.

These two facts together lead to total suck, but I have found a workaround. I wrote a wrapper class that invokes the log4net assembly via reflection. This lets you call log4net.Config.BasicConfigurator.Configure() so that log4net gets configured from the app.config file. The wrapper class also allows you to access named loggers and provides a way to output the log levels that are configured for the root logger.


IronRuby and the Configuration (app.config or exe.config)

Friday, December 4th, 2009

I was trying to write a quick little IronRuby application that talks to a third-party library that I am working with. I ran into some problems related to configuration files, and I thought I would share how I got around the problem.[2]

The library I am working with requires that some values exist in the application’s configuration file, which could be either the app.config file or the executable_name.exe.config file. But I have no way to specify these values, because IronRuby’s ir.exe[1] has its own configuration file, ir.exe.config, which sets up paths and other options for the Dynamic Language Runtime (DLR). Any application that you execute with IronRuby is run within the context of ir.exe, and so it inherits ir.exe‘s configuration.

I should mention I could have added the values directly to ir.exe.config, but I dismissed this solution as unacceptable. I am really a stubborn person.

During my extensive research into the issue I encountered several suggested solutions, but none of them worked. Most discussions that I came across ended with someone giving up and modifying ir.exe.config.

The .NET Framework provides no approved way to modify the configuration once it is loaded into memory. I imagine that this is due to security issues. You would not want malicious code to get access to the configuration file and change the values. My second of two attempts to solve this problem resulted in success.

First, I tried creating a new AppDomain with its own configuration. However, I was not able to use any IronRuby constructs to get code to execute within the context of the child AppDomain.

To do this, I first tried creating a MarshalByRef descendant that contained the code requiring the configuration settings. However, the way IronRuby creates CLR versions of the Ruby types made this very difficult. It looks like the types are created in an in-memory assembly, but I could not get a reference to that assembly that would let me load my custom type into a different AppDomain. I kept getting errors complaining that the assembly could not be found. After hours of trying and trying, I gave up and decided to call AppDomain.DoCallBack instead.

Here I encountered an issue with IronRuby delegates. I created a proc with the code that I wanted to execute and passed it into the constructor for the delegate type that is expected by the DoCallBack method. However, I got a really strange error complaining about not being able to serialize the delegate into the new AppDomain. Strike two. At this point, giving up was starting to look like a really good option.

Not knowing another way to solve the problem, I decided to hack my way to a solution. With my friendly companion, Reflector, I started deciphering the logic that reads configuration files into memory. I wanted to find out how to change the configuration file that the current AppDomain is using and then force the AppDomain to read from the new configuration file. The result is the ConfigurationSettingsHackery class. It uses reflection to dig into System.Configuration and change some key private members. After doing so, the AppDomain re-reads the configuration the next time that configuration information is requested.
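The C# class is not reproduced here, but the flavor of the trick translates to plain Ruby: use reflection to reach past access controls, swap the private backing state, and clear the cache so it gets rebuilt on the next read. Everything in this sketch is hypothetical; the `Settings` class merely stands in for System.Configuration’s internals.

```ruby
# Plain-Ruby analogy of the ConfigurationSettingsHackery approach. `Settings`
# is a hypothetical stand-in for System.Configuration: it caches its data in
# private state that is normally loaded once and never refreshed.
class Settings
  def self.value_for(key)
    @cache ||= load_config
    @cache[key]
  end

  def self.load_config
    @source ||= { 'greeting' => 'hello' }
  end
  private_class_method :load_config
end

# The "hackery": reflection lets us swap the private backing source and clear
# the cache, forcing the next lookup to re-read configuration. This is the
# moral equivalent of digging into key private members with System.Reflection.
def reload_settings_with(klass, new_source)
  klass.instance_variable_set(:@source, new_source)
  klass.instance_variable_set(:@cache, nil)
end
```

The real class has to do considerably more surgery, since System.Configuration spreads its cached state across several private members, but the principle is the same: mutate state the API never intended you to touch, then let the normal read path rebuild itself.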

I hope this helps someone. It would have really been nice to have this class two days ago. I should warn you, however, that this is a nasty, nasty hack. As such, it will most likely not work on the next version of the .NET Framework.

[1]: I am using IronRuby 0.9.2.
[2]: This discussion is also applicable to IronPython users that are trying to do the same thing, as it has the same issues and limitations.

Mono.Cecil and Type Forwarding

Monday, November 23rd, 2009

Just a quick note to help those who may be searching for the ability to use Mono.Cecil to create an assembly that forwards types to another assembly. After trying several different ways to call the library to do what I wanted, I decided it was time to dive into the source and see what was going on. Well, the answer to my frustrations was found after much searching. Take a peek at the source for Mono.Cecil.ReflectionWriter and search for TODO. You will find the VisitExternType method. It contains nothing but the comment, TODO. Oh, and the method is never called, so good luck trying to figure out how it is supposed to work.

I am going to try to get this to work with Microsoft’s CCI instead. I will report my findings in another post.

Importing an existing git repository into subversion

Thursday, November 12th, 2009

I have been scouring the net for a way to take an existing local git repository and apply all of its commits to a subversion repository. I finally found the answer. I am going to rewrite the procedure here while I wait for my code to be committed.

Assuming you have an existing git repository and are currently in that directory, run the following commands to link your git repository to the subversion repository.

  $ git svn init -s svn://my/svn/server
  $ git svn fetch

The result of the fetch command should display a series of revisions from the subversion repository.

Now run the following command and store the result somewhere.

  $ git show-ref trunk

This should yield a sha-1 hash for the remote repository.

Now we need to grab the hash for the local repository.

  $ git log --pretty=oneline master | tail -n1

Finally, we need to let git know that these two revisions should be “grafted” onto one another. To do that, run the following.

  $ echo "<second value>  <first value>" >> .git/info/grafts

Running git log should reveal that the last commit from subversion now appears right before the first commit in your local repository. Perfect!
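To see concretely what that grafts line does, here is a self-contained sketch using plain git, with no subversion server involved. This is an illustration rather than part of the original recipe: commit A plays the part of the last revision fetched from subversion, and commit B plays the part of the first commit in the local repository.

```ruby
require 'tmpdir'
require 'open3'

# Run a git command in `dir`, raising on failure and returning its output.
def git(*args, dir:)
  out, err, status = Open3.capture3('git', *args, chdir: dir)
  raise "git #{args.join(' ')} failed: #{err}" unless status.success?
  out
end

# Build a throwaway repo with two disconnected histories, join them with a
# grafts entry, and return the resulting log.
def demonstrate_graft
  Dir.mktmpdir do |repo|
    git('init', '-q', dir: repo)
    git('config', 'user.email', 'demo@example.com', dir: repo)
    git('config', 'user.name', 'Demo', dir: repo)

    # Commit A stands in for the last revision fetched from subversion.
    File.write(File.join(repo, 'a.txt'), 'from svn')
    git('add', 'a.txt', dir: repo)
    git('commit', '-q', '-m', 'last svn commit', dir: repo)
    svn_sha = git('rev-parse', 'HEAD', dir: repo).strip

    # Commit B, on an orphan branch, stands in for the first local commit.
    git('checkout', '-q', '--orphan', 'local', dir: repo)
    File.write(File.join(repo, 'b.txt'), 'local work')
    git('add', 'b.txt', dir: repo)
    git('commit', '-q', '-m', 'first local commit', dir: repo)
    local_sha = git('rev-parse', 'HEAD', dir: repo).strip

    # The grafts entry, "<child> <parent>", mirrors the echo command above.
    File.write(File.join(repo, '.git', 'info', 'grafts'),
               "#{local_sha} #{svn_sha}\n")

    # With the graft in place, the two histories read as one.
    git('log', '--pretty=oneline', dir: repo)
  end
end

puts demonstrate_graft
```

Before the grafts entry exists, `git log` on the orphan branch shows only commit B; after it, B is treated as if A were its parent, which is exactly the joined history the recipe relies on before running dcommit.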

Now run the following command to push everything into the subversion repository.

  $ git svn dcommit

Sit back and watch the output scroll by. My commit is still running, even after typing this entire post. :)

Go, Go gadget Google!

Wednesday, November 11th, 2009

Okay, so the title of this post needs some work, but I wanted to take a few moments to comment about the new programming language on the street today, Go.

Go was born out of one of Google’s famous 20% projects. I have been reading through the documentation on the project site, and I am starting to get a feel for the motivation behind the development of the language.

It appears that someone at Google was a really big fan of C. Such a big fan that they designed a language with the same basic feel, but with some newer and improved syntactic sugar.

With most of the sexy languages in the land being of the dynamic variety, it is interesting to see such an improvement in the static space. The Go language adopts many features that are really popular in dynamic languages, but provides advantages that come only as an afterthought in most dynamic languages.

It is going to be interesting to see whether this language gets adopted. I, for one, am not anxious to start using it, mainly because I have been working with Ruby in my free time. (Ha! Free. Right. More on that some other day.)

I am going to remember Go though for one particular use case. If I find myself unhappy with Ruby runtime performance, and I want to optimize by writing closer to the metal, then I am more likely to reach for Go than I am to reach for C or C++. Very interesting.