Angular 2.0 was finally released on September 15th. We started a new project in early October, so we decided to try it out. Pretty quickly the question came up: which module loader should we use for the new application?

The Angular 2.0 tutorials use SystemJS, except for a few pages referencing Webpack. So we started leaning towards SystemJS. Then I came across a webpack article in the Angular documentation, which says:

It’s an excellent alternative to the SystemJS approach we use throughout the documentation

Well, if it is such an “excellent alternative” why wasn’t it used in the documentation instead of SystemJS itself?

I also found this on Stack Overflow:

Webpack is a flexible module bundler. This means that it goes further [edit: than SystemJS] and doesn’t only handle modules but also provides a way to package your application (concat files, uglify files, …). It also provides a dev server with live reload for development.

SystemJS and Webpack are different but with SystemJS, you still have work to do (with Gulp or SystemJS builder for example) to package your Angular2 application for production.

So Webpack can do more, point for Webpack.

And then I found this article:

Angular 2 CLI moves from SystemJS to Webpack

Google itself is now using webpack? Game over, webpack wins.

This blog was cross posted on the Crafting Bytes blog at Webpack vs SystemJS

When I started my next project I switched from WatiN to Selenium, and I incorporated the Page Object Model. I had recently watched John Sonmez’s Pluralsight videos around this topic (http://simpleprogrammer.com/2013/09/28/creating-automated-testing-framework-selenium/), so a lot of his ideas were shining through. There was a Pages class which had static properties for all of the Page objects.

Here are some of the highlights of that solution. We created some additional extension methods for any web element to be able to perform some common functions. Because Selenium’s FindElement normally only looks beneath an element, and we needed a way of looking above an element, we adapted this hack using the XPath parent axis. Another really useful function is the ability to extract table information.

    
    using System.Collections.Generic;
    using System.Linq;
    using OpenQA.Selenium;

    public static class WebElementExtensions
    {
        // Selenium's FindElement only searches beneath an element;
        // the XPath parent axis lets us navigate upward instead.
        public static IWebElement GetParent(this IWebElement element)
        {
            return element.FindElement(By.XPath("parent::*"));
        }

        // Climbs the ancestor chain until it finds an element whose class
        // attribute contains the given class name.
        public static IWebElement FindParentByClassName(
            this IWebElement element, string className)
        {
            if (element == null)
            {
                return null;
            }

            var classValue = element.GetAttribute("class");
            if (classValue.Contains(className))
            {
                return element;
            }

            return FindParentByClassName(element.GetParent(), className);
        }

        // Flattens an HTML table: one string[] per row, one entry per th/td cell.
        public static List<string[]> ToTable(this IWebElement element)
        {
            var rows = new List<string[]>();
            foreach (var tr in element.FindElements(By.TagName("tr")))
            {
                var thOrTds = tr.FindElements(By.TagName("th"))
                    .Union(tr.FindElements(By.TagName("td")));
                rows.Add(thOrTds.Select(c => c.Text).ToArray());
            }

            return rows;
        }
    }
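For example, assuming an IWebDriver named driver (the ID and class names here are hypothetical):

var grid = driver.FindElement(By.Id("resultsGrid"));
List<string[]> rows = grid.ToTable();   // rows[0] holds the header cells

var cell = grid.FindElement(By.ClassName("selected"));
IWebElement row = cell.FindParentByClassName("resultRow");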

In addition to the normal page object model, there are oftentimes menus or toolbars that cross pages. The original way we handled this was just to use base classes, but we soon started needing the base classes for things like steps in a wizard. So instead we moved those to extension methods as well, based off the BasePage. That way, when we created a new page that used an existing menu partial, we could use the extension methods to call those methods easily without any modifications. We found the easiest way to do this was based off empty interfaces, because extension methods don’t really support attributes and we needed some way of describing which extension methods were legal on which objects.

public interface IHaveAdminMenu
{
}

public static class AdminMenuExtensions
{
    public static void AdminMenuClickItems(this IHaveAdminMenu adminMenu)
    {
        var basePage = (BasePage) adminMenu;
        basePage.Driver.FindElement(By.Id("itemsLink")).Click();
    }
}
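A page that contains the admin menu partial then just declares the marker interface (DashboardPage is a hypothetical example):

public class DashboardPage : BasePage, IHaveAdminMenu
{
}

A test can now call Pages.Dashboard.AdminMenuClickItems() on it, and the cast to BasePage inside the extension method is safe by convention, because only classes derived from BasePage declare these marker interfaces.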

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 4: Extension methods in Page Object Model

Whether you end up using WatiN or Selenium for automating the browser actually doesn’t matter that much. Whichever mechanism you use should be hidden behind a Page Object Model. This actually took me a while to discover because it wasn’t really in your face on the WatiN and Selenium forums. In fact, even once I knew about the pattern I didn’t feel the need for it at first; it seemed like overkill, similar to setting up a domain controller for a couple of computers. However, as the sites I was writing and testing got more complicated, I needed a way of organizing the methods that manipulate the pages into a logical grouping. It makes sense to make an object model that encapsulates the IDs, classes, tags, etc. inside a page so that they can be reused easily. Let’s look at a simple example in WatiN, prior to putting in the Page Object Model.

[Given(@"I am on an item details page")]
public void GivenIAmOnAnItemDetailsPage()
{
    browser = new IE("http://localhost:12345/items/details/1?test=true");
}

[When(@"I update the item information")]
public void WhenIUpdateTheItemInformation()
{
    browser.TextField(Find.ByName("Name"))
        .TypeTextQuickly("New item name");
    browser.TextField(Find.ByName("Description"))
        .TypeTextQuickly("This is the new item description");
    var fileUpload = browser.FileUpload(Find.ByName("pictureFile"));
    string codebase = new Uri(GetType().Assembly.CodeBase).AbsolutePath;
    string baseDir = Path.GetDirectoryName(codebase);
    string path = Path.Combine(baseDir, @"..\..\DM.png");
    fileUpload.Set(Path.GetFullPath(path));
}

The ?test=true in the first method is interesting, but that is the subject of another blog post. Instead, notice the Find.ByName(“Name”) in the second method. Now what if there is another method where I need to check the name to see what is there? And yet another where I need to both check it *and* update it? That would be three places and four lines where Find.ByName(“Name”) is used.

What happens when I change the element to have a different name? Every test that uses Find.ByName(“Name”) breaks, and I have to go through, find them all, and update them.

Let’s look at the same two methods, but this time with a Page Object Model.

[Given(@"I am on an item details page")]
public void GivenIAmOnAnItemDetailsPage()
{
	browser = new IE(Pages.ItemDetails.Url);
}

[When(@"I update the item information")]
public void WhenIUpdateTheItemInformation()
{
	Pages.ItemDetails.SetName("New item name");
	Pages.ItemDetails.SetDetails("This is the new item description");
	Pages.ItemDetails.SetPictureFile("DM.png");
}

A couple of interesting things happened. The first is that the test is a lot more readable. The second is that I now have a central place to change when something on the page changes. I fix one line, and all of the tests are running again.
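That central place is the page object itself. Here is a minimal sketch of what ItemDetailsPage might look like (the shape is an assumption based on the test code above; only the Url property and two of the setters are shown, and Pages.ItemDetails would be a static property returning an instance of this class):

public class ItemDetailsPage
{
	// Supplied by the test infrastructure.
	public Browser Browser { get; set; }

	public string Url
	{
		get { return "http://localhost:12345/items/details/1?test=true"; }
	}

	// The only place in the test suite that knows the element is named "Name".
	public void SetName(string name)
	{
		Browser.TextField(Find.ByName("Name")).TypeTextQuickly(name);
	}

	public void SetDetails(string description)
	{
		Browser.TextField(Find.ByName("Description")).TypeTextQuickly(description);
	}
}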

So to recap, Page Object Models are great when either the pages are volatile or the same pages are being used for lots of different tests.

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 3: Page Object Model

Because of the two problems I mentioned with back-door web testing (changes to layout and no JS testing), I was looking to pursue front-door web testing toward the end of 2012.

My first thought was that whatever framework I chose should have a test recorder so that writing the tests would be much easier than having to code up every little click and wait. The problem with this philosophy is that most of these test recorders generate code. It turns out that generating code in a maintainable way is hard, and all code should be maintainable, even test code. So I scrapped that path, and started looking at using a nice API to drive the browser.

I looked at two different frameworks in .NET for accomplishing this: WatiN and Selenium. Both had great feature sets and either one would have been suitable. At the time, Selenium’s documentation was way too fragmented. There were multiple versions: Selenium 1.0, Selenium RC, Selenium 2.0, etc. Because I was new, I wasn’t sure which one to use (e.g. was 2.0 stable?). I would do a search and end up on a blog post using an outdated method, or the blog post didn’t indicate which version of the API was being used. I found WatiN’s documentation to be much clearer on the .NET side, so I went with that.

[Update: Selenium has been using 2.0 for a while, and the older documentation is becoming less relevant in search engines, so I would probably go with Selenium today]

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 2: Front-door testing tools

Llewellyn Falco and I had a conversation many years ago (June 2010?) about the best way to test Web UI. During that conversation we referred to the two classifications/mechanisms of web testing as front-door and back-door web testing. That is how I still think of the two types many years later, although I recognize that not many people in the industry use those terms.

In front-door web testing you use the browser to drive the test, which more closely tests what the user sees, but offers limited ability to manipulate or control the data and other dependencies. The other drawback of this type of testing is that if the test modifies data, there needs to be some way to get back to a clean slate after the test finishes.

In back-door web testing you call the controller or presenter directly (this assumes you are using the MVC pattern, or have done a good job of separating the greedy view into a presenter). The advantage of this approach is that you can more easily control the dependencies and data context under which the test runs by using in-memory repositories, mocks, and things of that nature. The main issue with this type of testing is that these controller methods return some sort of model and view name, making it difficult to test what the user sees. Because of this, you can have complete test coverage over the controllers but still have bugs in the view.

In January of 2011, ASP.NET MVC 3 was released, which allowed different view engines to be used to render the views into the HTML that would be sent back to the client. Because the view engines were easily pluggable and the Razor engine was packaged separately, back-door testing could now call the engine to produce HTML. This allowed back-door web testing to get closer to what the user was seeing, and eventually resulted in Llewellyn augmenting ApprovalTests with a mechanism for approving HTML.
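The shape of such a test is roughly this (a sketch, not the actual ApprovalTests mechanism; it assumes the RazorEngine package’s Razor.Parse for rendering, HtmlApprovals.VerifyHtml from the ApprovalTests library, and a hypothetical ItemDetailsModel):

[Test]
public void DetailsViewRendersTheItem()
{
    // In-memory model: no database or web server involved.
    var model = new ItemDetailsModel { Name = "Widget" };

    // Render the Razor template to the HTML the user would actually see.
    string template = File.ReadAllText(@"Views\Items\Details.cshtml");
    string html = Razor.Parse(template, model);

    // Compare against the previously approved HTML on disk.
    HtmlApprovals.VerifyHtml(html);
}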

However, there are still problems with this approach. Two of the biggest problems are:

  1. changes to the layout template break all tests
  2. inability to test JavaScript manipulations of the page

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 1: Front-door and back-door testing

When I started doing more complicated things with ASP.NET MVC, it was using Razor. In some ways that was unfortunate, because some of these things were actually a little easier in prior versions. It starts to get complicated when you start composing partial views and multiple JavaScript files. First, some JavaScript files depend on other JavaScript files. Second, partial views need certain scripts to be included that the main page doesn’t necessarily know about. The problem is that Razor doesn’t really deal with these things very well.

For this blog entry I am going to focus on getting JavaScript files included from partial views. This question has been asked numerous times on Stack Overflow:

  • http://stackoverflow.com/questions/863436/is-it-bad-practice-to-return-partial-views-that-contain-javascript
  • http://stackoverflow.com/questions/912755/include-javascript-file-in-partial-views
  • http://stackoverflow.com/questions/4707982/how-to-include-javasscript-from-a-partial-view-in-asp-net-mvc3
  • http://stackoverflow.com/questions/5376102/mvc-partial-views-and-unobtrusive-jquery-javascript
  • http://stackoverflow.com/questions/7556400/injecting-content-into-specific-sections-from-a-partial-view-asp-net-mvc-3-with
  • http://stackoverflow.com/questions/11098198/is-it-ok-to-put-javascript-in-partial-views

In fact, I bet if you combined all of these questions into one, it would have quite a point total. But it is spread across so many slightly different questions that it is tough to quantify.

So first, to define the problem. The ideal place for scripts is right before the close of the body tag. The default template’s master/layout view contains a scripts section for this purpose. Unfortunately, sections can only be defined, not added to. That means the main view is the only one that can place script files in that section. It gets very awkward if there are script files that are very specific to a partial view, especially if the main view includes a number of partials. Basically the master view has to maintain the list of scripts needed by the entire tree of partial views.
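To illustrate the mechanism in question (this is the standard template’s pattern): the layout renders a section, and only a top-level view can fill it in; a @section block inside a partial view is silently ignored.

@* _Layout.cshtml: the section renders just before the closing body tag *@
    @RenderSection("scripts", required: false)
</body>

@* Index.cshtml: a full view can contribute to the section, a partial cannot *@
@section scripts {
    <script src="@Url.Content("~/Scripts/App/index.js")"></script>
}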

Let’s make the problem more concrete. Let’s say I have three main views that each include a partial view, and that partial view uses another partial view. If I change the leaf partial view so that it needs some JavaScript, I have to find all of the main views that ultimately include it (of course, none of them includes it directly) and add the script to each of them. In short – YUCK.

While researching the problem, I came across a couple of promising solutions, namely:

http://stackoverflow.com/questions/5433531/using-sections-in-editor-display-templates/

And

http://forloop.co.uk/blog/managing-scripts-for-razor-partial-views-and-templates-in-asp.net-mvc

The first didn’t take into account paths, and the second was way too complicated to use, so I came up with this nice simple hybrid of the two solutions.

Here is an example of its use.
Either at the top of the view file or in the web.config, you need to import the namespace:

@using PartialsWithScripts.Helpers

To include a script in a partial view, simply add it like so:

@{ Html.AddScript("~/Scripts/App/contact.js"); }
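And in the layout, right before the closing body tag, render everything that was registered (this is the other half of the helper below):

    @Html.RenderScripts()
</body>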

Here is the code

using System.Collections.Generic;
using System.Text;
using System.Web;
using System.Web.Mvc;

public static class ScriptHelpers
{
    // Script paths are collected per-request in HttpContext.Items under this key.
    const string ScriptContextKey = "ScriptContext";

    // Called from views and partials; registers a script path for later rendering.
    public static void AddScript(this HtmlHelper htmlHelper, string path)
    {
        var scriptContext = GetScriptContext(htmlHelper);
        scriptContext.Add(path);
    }

    // Called once from the layout; emits one script tag per registered path.
    // The HashSet means a file added by several partials renders only once.
    public static IHtmlString RenderScripts(this HtmlHelper htmlHelper)
    {
        var httpContext = htmlHelper.ViewContext.HttpContext;
        var scriptContext = httpContext.Items[ScriptContextKey] as HashSet<string>;
        if (scriptContext != null)
        {
            var builder = new StringBuilder();
            var urlHelper = new UrlHelper(htmlHelper.ViewContext.RequestContext,
                                htmlHelper.RouteCollection);
            foreach (var scriptFile in scriptContext)
            {
                builder.AppendLine("<script type='text/javascript' src='" 
                    + urlHelper.Content(scriptFile) + "'></script>");
            }
            return new MvcHtmlString(builder.ToString());
        }
        return MvcHtmlString.Empty;
    }

    private static HashSet<string> GetScriptContext(HtmlHelper htmlHelper)
    {
        var httpContext = htmlHelper.ViewContext.HttpContext;
        var scriptContext = httpContext.Items[ScriptContextKey] as HashSet<string>;
        if (scriptContext == null)
        {
            scriptContext = new HashSet<string>();
            htmlHelper.ViewContext.HttpContext.Items[ScriptContextKey] = scriptContext;
        }
        return scriptContext;
    }
}

I use WordPress for my blog. I wanted to update the blog to use HTML5 and CSS3. It wasn’t like I was using tables for layout or anything, but there were three things that I didn’t really like the look of.

The first thing I wanted to fix was the web font. I was using JavaScript to produce the font, and it was not giving good results in all browsers. I searched for and found the Whiteboard Regular font I was using. Unfortunately, not every font file format is supported in every browser; to support all browsers you need at least two files. I downloaded and copied all of the necessary files (ttf, eot, woff, and svg). Then I edited my main stylesheet and added this to the top:

/* @font-face kit by Fonts2u (http://www.fonts2u.com) */ 
@font-face
{
	font-family:"Whiteboard";
	src: url("House_Whiteboard_font_by_callsignKateJones.eot");
	src: url("House_Whiteboard_font_by_callsignKateJones.eot?#iefix") format("embedded-opentype"),
	url("House_Whiteboard_font_by_callsignKateJones.woff") format("woff"),
	url("House_Whiteboard_font_by_callsignKateJones.ttf") format("truetype"),
	url("House_Whiteboard_font_by_callsignKateJones.svg#Whiteboard") format("svg");
	font-weight:normal;font-style:normal;
}

I removed the typeface.js scripts and the extraneous styles, and voila, I now had working fonts in all browsers.

The next thing I wanted to focus on was the text on the sticky note. I wanted it to look written on. For this I needed the new CSS3 rotate transform:

.sticky
{
	-webkit-transform:rotate(-9deg);
	-moz-transform:rotate(-9deg);
	-o-transform:rotate(-9deg);
	/* filter:progid:DXImageTransform.Microsoft.BasicImage(rotation=-0.1); */
	-ms-transform:rotate(-9deg);
}
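These days you would also include the unprefixed standard property (it wasn’t widely supported at the time, which is why only the vendor-prefixed forms appear above):

.sticky
{
	transform:rotate(-9deg);
}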

The last thing I didn’t like was the whiteboard border. It looked alright in Chrome (which was my default browser), but not so good in IE. I tried a couple of things, including border gradients and background gradients, but I couldn’t get it to look right. Finally I had to resort to JavaScript, which sucks.

I had a number of musings and thoughts that I had written down in various places over the past 6 months, and I wanted to collect them and organize them into some sort of blog form.

So which blog engine should I use? After looking around for a while, I decided to build my own. I know that will surprise and dismay a number of people (including myself) – but hear me out. The reason why I am doing this is because I am *not* a web developer. Wait a minute – why would this make me *more* likely to develop a web application? Because I need to hone my skills of course. Part of being a good developer is working on lots of different types of applications, and frankly it has been quite a while since I have played over in the web world. I could use a standard blog engine, and invent some other sort of project for myself, but why not kill two birds with one stone?

I chose an older, lengthier entry as the first one to port, which as it turns out may have been a mistake. For this particular entry, I had a large amount of code that I needed to annotate in addition to colorize. I wanted to make it easy to post the entry and have it automatically look like it does inside Visual Studio. After looking around a bit, I found a place that had a code control, but not one that enabled you to annotate the code (e.g. highlight certain sections, or cross out certain lines that weren’t needed anymore). I didn’t spend a ton of time looking, because as I mentioned, one of the goals was to see what was involved in writing a bigger web application.

The ultimate goal was to paste in a block of code like this:

<code>
class Program
{
	static void Main(string[] args)
	{
		Console.WriteLine("Hello world");
	}
}
</code>

and it would look great when rendered.

However, if I wanted to talk about how the args parameter wasn’t needed, I could indicate this by surrounding the args with a span containing some CSS class, like this:

<code>
class Program
{
	static void Main(<span style="text-decoration: line-through;">string[] args</span>)
	{
		Console.WriteLine("Hello world");
	}
}
</code>

and it would render in the correct coloring, but with a strikeout of the args like this:

class Program
{
	static void Main(string[] args)
	{
		Console.WriteLine("Hello world");
	}
}

In order to support this I had to come up with an easy way to parse/recognize code. I didn’t need a professional-grade parser; I just wanted a simple coloring mechanism. Here I decided to use some simple regular expressions to do the trick. These regular expressions are based on a set of keywords read from a config file like so:

	  <add key="C#Keywords" value="#region.*\n,#endregion.*\n,abstract,event,new,struct,
explicit,null,switch,base,extern,object,this,bool,false,operator,
throw,break,finally,out,true,byte,fixed,override,try,case,float,
params,typeof,catch,private,uint,char,foreach,protected,ulong,
checked,goto,public,unchecked,class,readonly,unsafe,const,
implicit,ushort,continue,return,using,decimal,sbyte,virtual,
default,interface,sealed,volatile,delegate,internal,short,void,
sizeof,while,double,lock,stackalloc,else,long,static,enum,
namespace,string,ref,int,for,if,else if,do,is,in,as"/>

The other requirement that I had (because I chose this initial entry to port) was the need to do the same set of annotations for XML files. This just meant supporting two different languages.

As with most things, I was able to get 80% of the functionality in 20% of the time, but the last 20% of the functionality took a while. Here is the final result. It is weird to think about, but it is actually colorizing itself :)
[Update: when I switched to using WordPress I also changed any code that did not have bolding or line-through to use SyntaxHighlighter]

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Configuration;
using System.Text;
using System.Text.RegularExpressions;
using System.Web.UI;
using BrainHz.Blog;

[ParseChildren(true)]
public partial class CodeControl : UserControl
{
	private Regex regex;
	private string[] groupNames;

	string language;
	public string Language
	{
		get { return language; }
		set { language = value; }
	}

	string textContent;
	[PersistenceMode(PersistenceMode.InnerDefaultProperty)]
	[DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
	public string Content
	{
		get { return textContent; }
		set { textContent = value; }
	}

	private List<KnownType> knownTypes;
	public List<KnownType> KnownTypes
	{
		get { return knownTypes; }
		set { knownTypes = value; }
	}

	string spanString;

	protected void Page_Load(object sender, EventArgs e)
	{
		// Pass hand-written annotation spans (open or close tags) through untouched.
		spanString = string.Format("(?<{0}></*{0}[^>]*>)|", "span");
		if (language == "csharp")
		{
			regex = CreateCodeRegex();
			groupNames = new string[] {"comment", "quotated", "keyword", "knownType"};
		}

		if (language == "xml")
		{
			regex = CreateXmlRegex();
			groupNames = new string[] { "elementName", "attributeName", "attributeValue" };
		}
	}

	private Regex CreateXmlRegex()
	{
		StringBuilder exp = new StringBuilder(spanString +
			"&lt;/?(?<elementName>[\\S]+)|" +
			"(?<attribute>(?<attributeName>\\w+)=(&quot;|\")(?<attributeValue>[^&\"]*)(&quot;|\"))"); //+ "|" +
			//"(?<elementName>[\\S]+)&gt;");
		return new Regex(exp.ToString());
	}

	/// <summary>
	/// This method creates the regular expression which will be used to
	/// identify special words.
	/// The keywords are read from the application configuration file.
	/// The known types are configured per control usage.
	/// </summary>
	/// <returns>Regex object</returns>
	private Regex CreateCodeRegex()
	{
		StringBuilder expression = new StringBuilder(spanString + "(?<quotated>(\".*\"))|(?<comment>(//.*))");
		string keywords = ConfigurationManager.AppSettings["C#Keywords"];

		string[] splitKeywords = keywords.Split(',');
		string keywordExpression = GetRegexForSpecificWords("keyword", splitKeywords);
		expression.Append(keywordExpression);

		if (knownTypes != null && knownTypes.Count > 0)
		{
			List<string> types = new List<string>();
			foreach (KnownType type in knownTypes)
				types.Add(type.Name);
			string knownTypeExpression = GetRegexForSpecificWords("knownType", types);
			expression.Append(knownTypeExpression);
		}

		return new Regex(expression.ToString());
	}

	private static string GetRegexForSpecificWords(string collectionName, ICollection<string> words)
	{
		if (words == null) return string.Empty;
		if (words.Count == 0) return string.Empty;

		StringBuilder exp = new StringBuilder();
		exp.AppendFormat("|(?<{0}>\\b(", collectionName);

		bool needsPipe = false;
		foreach (string s in words)
		{
			if (needsPipe)
				exp.Append("|");
			exp.Append(s);
			needsPipe = true;
		}
		exp.Append("\\b))");
		return exp.ToString();
	}

	class CaptureInfo
	{
		public string GroupName;
		public Capture Capture;
		public CaptureInfo(string groupName, Capture capture)
		{
			GroupName = groupName;
			Capture = capture;
		}
	}

	/// <summary>
	/// This method takes an input string from a source file and
	/// outputs the string with the spans and classes.
	/// </summary>
	/// <param name="writer">place to write to</param>
	/// <param name="line">single line of source code</param>
	private void Colorize(HtmlTextWriter writer, string line)
	{
		int idx = 0;
		Match m = regex.Match(line);

		while (m != null && m.Success)
		{
			writer.Write(line.Substring(idx, m.Index - idx));
			idx = m.Index;

			// create a sorted list of captured info
			SortedDictionary<int, CaptureInfo> captures = new SortedDictionary<int, CaptureInfo>();
			foreach (string groupName in groupNames)
			{
				Group group = m.Groups[groupName];
				if (!group.Success)
					continue;
				foreach (Capture cap in group.Captures)
					captures[cap.Index] = new CaptureInfo(groupName, cap);
			}

			foreach (KeyValuePair<int, CaptureInfo> kv in captures)
			{
				string groupName = kv.Value.GroupName;
				Capture cap = kv.Value.Capture;

				if (idx != cap.Index)
				{
					// write any non-formatted stuff
					writer.Write(line.Substring(idx, cap.Index - idx));
					idx = cap.Index;
				}

				writer.AddAttribute(HtmlTextWriterAttribute.Class, groupName);
				writer.RenderBeginTag(HtmlTextWriterTag.Span);
				writer.Write(cap.Value);
				idx += cap.Length;
				writer.RenderEndTag();
			}

			// write out remaining
			writer.Write(line.Substring(idx, m.Index + m.Length - idx));
			idx = m.Index + m.Length;

			m = m.NextMatch();
		}

		writer.Write(line.Substring(idx));
	}

	protected override void Render(HtmlTextWriter writer)
	{
		writer.AddAttribute(HtmlTextWriterAttribute.Class, "code");
		writer.RenderBeginTag(HtmlTextWriterTag.Pre);

		string[] lines = textContent.Split(new string[] {"\r\n"}, StringSplitOptions.None);
		foreach (string line in lines)
		{
			Colorize(writer, line);
			writer.WriteLine();
		}
		writer.RenderEndTag();
	}
}