When I started my next project I switched from WatiN to Selenium, and I incorporated the Page Object Model. I had recently watched John Sonmez’s Pluralsight videos on this topic (http://simpleprogrammer.com/2013/09/28/creating-automated-testing-framework-selenium/), so a lot of his ideas were shining through. There was a Pages class with static properties for all of the Page objects.

Here are some of the highlights of that solution. We created some additional extension methods so that any web element could perform some common functions. Because Selenium’s FindElement normally only searches beneath an element, and we needed a way of looking above an element, we adapted this hack using the XPath parent axis. Another really useful function is the ability to extract table information.

    public static class WebElementExtensions
    {
        public static IWebElement GetParent(this IWebElement element)
        {
            return element.FindElement(By.XPath("parent::*"));
        }

        public static IWebElement FindParentByClassName(
            this IWebElement element, string className)
        {
            if (element == null)
                return null;

            var classValue = element.GetAttribute("class");
            if (classValue.Contains(className))
                return element;

            return FindParentByClassName(element.GetParent(), className);
        }

        public static List<string[]> ToTable(this IWebElement element)
        {
            var rows = new List<string[]>();
            foreach (var tr in element.FindElements(By.TagName("tr")))
            {
                // Header rows use th cells, body rows use td cells.
                var thOrTds = tr.FindElements(By.XPath("th|td"));
                rows.Add(thOrTds.Select(c => c.Text).ToArray());
            }

            return rows;
        }
    }
In addition to the normal page object model, there are often menus or toolbars that cross pages. Originally we handled this with base classes, but we soon started needing the base classes for things like steps in a wizard. So instead we moved those to extension methods as well, based off the BasePage. That way, when we created a new page that used an existing menu partial, we could use the extension methods to call those methods easily without any modifications. We found the easiest way to do this was with empty marker interfaces, because extension methods don’t really support attributes and we needed some way of describing which extension methods were legal on which objects.

    public interface IHaveAdminMenu
    {
    }

    public static class AdminMenuExtensions
    {
        public static void AdminMenuClickItems(this IHaveAdminMenu adminMenu)
        {
            var basePage = (BasePage) adminMenu;
            // ... use basePage to locate the admin menu and click its items ...
        }
    }
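A page that includes the admin menu partial then simply declares the marker interface; the page class name below is illustrative, but the pattern is the one described above:

```csharp
// Hypothetical page class: it inherits BasePage as usual, and opting in to
// the empty marker interface makes the admin-menu extension methods legal.
public class AdminHomePage : BasePage, IHaveAdminMenu
{
}

// In a test, the extension method now shows up on this page:
// Pages.AdminHome.AdminMenuClickItems();
```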

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 4: Extension methods in Page Object Model

Whether you end up with WatiN or Selenium for automating the browser actually doesn’t matter that much. Whichever mechanism you use should be hidden behind a Page Object Model. This took me a while to discover because it wasn’t really in your face on the WatiN and Selenium forums. In fact, even once I knew about the pattern I didn’t feel the need for it at first; it seemed like overkill, similar to having a domain controller for a couple of computers. However, as the sites I was writing and testing got more complicated, I needed a way of organizing the methods that manipulate the pages into a logical grouping. It makes sense to make an object model that encapsulates the IDs, classes, tags, etc. inside a page so that they can be reused easily. Let’s look at a simple example in WatiN, prior to putting in the Page Object Model.

    [Given(@"I am on an item details page")]
    public void GivenIAmOnAnItemDetailsPage()
    {
        browser = new IE("http://localhost:12345/items/details/1?test=true");
    }

    [When(@"I update the item information")]
    public void WhenIUpdateTheItemInformation()
    {
        browser.TextField(Find.ByName("Name"))
            .TypeTextQuickly("New item name");
        browser.TextField(Find.ByName("Details"))
            .TypeTextQuickly("This is the new item description");
        var fileUpload = browser.FileUpload(Find.ByName("pictureFile"));
        string codebase = new Uri(GetType().Assembly.CodeBase).AbsolutePath;
        string baseDir = Path.GetDirectoryName(codebase);
        string path = Path.Combine(baseDir, @"..\..\DM.png");
        fileUpload.Set(path);
    }

The ?test=true in the first method is interesting, but that is the subject of another blog post. Instead, notice the Find.ByName(“Name”) in the second method. Now what if there is another method where I need to check the name to see what is there? And yet another where I need to both check it *and* update it? That would give me three places and four lines where Find.ByName(“Name”) is used.

What happens when I change the element to have a different name? Every test where I used Find.ByName(“Name”) breaks, and I have to go through, find them all, and update them.

Let’s look at the same two methods, but this time with a Page Object Model.

    [Given(@"I am on an item details page")]
    public void GivenIAmOnAnItemDetailsPage()
    {
        browser = new IE(Pages.ItemDetails.Url);
    }

    [When(@"I update the item information")]
    public void WhenIUpdateTheItemInformation()
    {
        Pages.ItemDetails.SetName("New item name");
        Pages.ItemDetails.SetDetails("This is the new item description");
    }

A couple of interesting things happened. The first is that the test is a lot more readable. The second is that I now have a central place to change when something on the page changes. I fix one line, and all of the tests are working again.
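For completeness, here is a minimal sketch of what the page object behind those calls might look like. This is not the actual code from the project; the Pages wiring and member names are illustrative, and the browser reference would be set up by the test infrastructure:

```csharp
// Hypothetical sketch of the Pages entry point and one page object.
public static class Pages
{
    // Assumed to be assigned by the test setup code.
    public static Browser Browser { get; set; }

    public static ItemDetailsPage ItemDetails
    {
        get { return new ItemDetailsPage(Browser); }
    }
}

public class ItemDetailsPage
{
    private readonly Browser browser;

    public ItemDetailsPage(Browser browser)
    {
        this.browser = browser;
    }

    // The page owns its URL and element lookups, so tests never
    // repeat Find.ByName("Name") themselves.
    public string Url
    {
        get { return "http://localhost:12345/items/details/1?test=true"; }
    }

    public void SetName(string name)
    {
        browser.TextField(Find.ByName("Name")).TypeText(name);
    }

    public void SetDetails(string details)
    {
        browser.TextField(Find.ByName("Details")).TypeText(details);
    }
}
```

When the element’s name attribute changes, only the one lookup inside the page object needs to change.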

So to recap, Page Object Models are great when either the pages are volatile or the same pages are being used for lots of different tests.

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 3: Page Object Model

Because of the two problems I mentioned with back-door web testing (changes to layout and no JS testing), I was looking to pursue front-door web testing toward the end of 2012.

My first thought was that whatever framework I chose should have a test recorder so that writing the tests would be much easier than having to code up every little click and wait. The problem with this philosophy is that most of these test recorders generate code. It turns out that generating code in a maintainable way is hard, and all code should be maintainable, even test code. So I scrapped that path, and started looking at using a nice API to drive the browser.

I looked at two different frameworks in .NET for accomplishing this: WatiN and Selenium. Both had great feature sets, and either one would have been suitable. At the time, Selenium’s documentation was way too fragmented. There were multiple versions: Selenium 1.0, Selenium RC, Selenium 2.0, etc. Because I was new, I wasn’t sure which one to use (e.g. was 2.0 stable?). I would do a search and end up on a blog post using an outdated method, or one that didn’t indicate which version of the API was being used. I found WatiN’s documentation to be much clearer on the .NET side, so I went with that.

[Update: Selenium has been using 2.0 for a while, and the older documentation is becoming less relevant in search engines, so I would probably go with Selenium today]

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 2: Front-door testing tools

Llewellyn Falco and I had a conversation many years ago (June 2010?) about the best way to test Web UI. During that conversation we referred to the two classifications/mechanisms of web testing as front-door and back-door web testing. That is how I still think of the two types many years later, although I recognize that not many people in the industry use those terms.

In front-door web testing you use the browser to drive the test, which more closely tests what the user sees, but offers limited ability to manipulate or control the data and other dependencies. The other drawback of this type of testing is that if the test modifies data, there needs to be some way to get back to a clean slate after the test finishes.

In back-door web testing you call the controller or presenter directly (this assumes you are using the MVC pattern, or have done a good job separating the greedy view into a presenter). The advantage of this approach is that you can more easily control the dependencies and data context under which the test runs by using in-memory repositories, mocks, and things of that nature. The main issue with this type of testing is that these controller methods return some sort of model and view name, making it difficult to test what the user sees. Because of this, you can have complete test coverage over the controllers and still have bugs in the view.
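To make the trade-off concrete, a back-door test might look something like the following sketch; the controller, repository, and model names are hypothetical:

```csharp
// Hypothetical NUnit test calling an MVC controller directly.
[Test]
public void DetailsActionReturnsTheRequestedItem()
{
    // Swap in an in-memory repository instead of the real database.
    var repository = new InMemoryItemRepository();
    repository.Add(new Item { Id = 1, Name = "Widget" });
    var controller = new ItemsController(repository);

    var result = (ViewResult) controller.Details(1);
    var model = (ItemDetailsModel) result.Model;

    // We can verify the model and view name, but not the rendered page.
    Assert.AreEqual("Widget", model.Name);
}
```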

In January of 2011, ASP.NET MVC 3 was released, which allowed different view engines to be used to render the views into the HTML that would be sent back to the client. Because the view engines were easily pluggable and the Razor engine was packaged separately, back-door tests could call the engine themselves to produce HTML. This brought back-door web testing closer to what the user was seeing, and eventually resulted in Llewellyn augmenting ApprovalTests with a mechanism for approving HTML.
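In spirit, such a test looks something like this. The rendering helper below is a stand-in for whatever code invokes the Razor engine outside of a web server; HtmlApprovals.VerifyHtml is the ApprovalTests entry point for HTML:

```csharp
// Hypothetical sketch; RenderRazorView is an assumed helper, not a real API.
[Test]
public void DetailsViewRendersApprovedHtml()
{
    var model = new ItemDetailsModel { Name = "Widget" };

    // Render the view to a string using the pluggable Razor engine.
    string html = RenderRazorView("~/Views/Items/Details.cshtml", model);

    // Diffs the HTML against a previously approved copy on disk.
    HtmlApprovals.VerifyHtml(html);
}
```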

However, there are still problems with this approach. Two of the biggest are:

  1. changes to the layout template break all tests
  2. inability to test JavaScript manipulations of the page

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 1: Front-door and back-door testing

I think this is my 6th year speaking at and 7th year attending SoCalCodeCamp (San Diego edition).  For the past several years I have tried to give an advanced talk and an intro talk.  For the advanced talk I decided to give OData again, for several reasons.

  1. Nobody else was speaking on it
  2. I had already signed up to give a webinar on it
  3. I still think it is relevant
  4. It is so flipping cool

The slides and demos for that are in the previous blog post.

When it came time to pick an intro talk I combed the SoCal Code Camp web site looking for gaps.  The gap I found was not what I am used to speaking about.  I was going to have to give a talk about . . . the front end.  (NOOOOO!!! Wait, I mean Sooooo what? What’s the big deal about the front-end?)  Specifically, I wanted to give a talk about using an off-the-shelf CSS framework called Twitter Bootstrap.  I had used it in my job, and in my side project, so I figured that should qualify me :) .  I put in the abstract and then forgot about it.

Months go by, and here it is the week before the talk.  I check back and realize a couple of things:

  1. I had not used Twitter Bootstrap since I wrote the abstract
  2. There are over 80 people interested in the talk
  3. My daughter Julia has a soccer tournament on the same weekend

Long story short I wasn’t quite as prepared as I should have been.

The first day was pretty hectic.  First I saw Robin Shahan give a talk on Windows Azure in Real Life.  She good-naturedly accused me of heckling, but I think I was just encouraging audience participation :) .  The second talk I saw was Search engine building with Lucene and Solr, but I left as the speaker transitioned into Solr.  I ran home to help Julia get ready, and then back to Code Camp to see Windows Azure Mobile Services by Bret Stateham.  Great talk as always, but I had to leave early to see the soccer game, which they lost 2-3.  I hustled back for NancyFX, which was probably the most influential talk of the weekend for me, then left again to see the final minutes of a 3-2 victory. Woohoo!!!

That night at the Geek dinner we found out that Woody was passing the baton to Hattan after 8 years of running the SoCal Code Camps. I also found out that my daughter’s team was playing the Surf at 10:00 (the same time as my OData talk), which sucked.

The second day I suffered through Data Flow Architectures before giving my OData talk.  While speaking I learned that my daughter’s team had lost in penalty kicks after a 2-2 game.  The bright side was that I didn’t need to worry about a conflict that afternoon.  I then went on to see Timothy Strimple on Git and GitHub, and stayed in the room for Llewellyn and Chris Lucian talking about Agile Metrics, which helped me see some problems with the project that I was on.

The Twitter Bootstrap talk was the final talk of the day.  It was in the TV building, and the room could probably only support 30-40 people, but it was packed, with people standing along the outside.  Also mildly surprising: there were children in the class!  And they asked questions!?!?  And the questions were good!!!

I had no idea how the timing of the talk was going to go, and in fact it ran over, but it was very well received.  I was applauded (some even gave standing ovations – see previous paragraph) and several people stayed behind to congratulate me on a talk that I felt could have definitely gone smoother. I am glad everyone enjoyed it.

Thanks everyone for attending.  Here are the demos and slides. Slides are also on slideshare.