I am by no means an expert on Workflow. But after e-mailing back and forth with Jim Bears and Dave McCarter about a possible talk for the July meeting of the San Diego Developers Group, we concluded that not many people understand, use, or appreciate workflow. This could be due to bad experiences with prior versions. Since the 4.0 version has addressed a lot of those shortcomings, we decided that it would be a great topic for discussion at a user group.

As per usual, I was too busy to prepare for the talk more than a day in advance. But I put together some slides, and formed a pretty good idea of the demo I wanted to do before I went in.

The first hour of the talk went pretty well. It was when I got into the unrehearsed part of the demo that things started to go awry. I had packaged up the first demo into a custom activity and was trying to reuse it inside a Flowchart activity, while simultaneously switching from a WorkflowInvoker to running the workflow from a WorkflowApplication and adding persistence. I noted that I was still passing in the custom activity and not the workflow that contained the Flowchart, but I knew that something else was going on too. I eventually found it: I was forgetting to call Run on the WorkflowApplication. Once I fixed that, however, I forgot to go back and switch to the outer activity. That was the reason nothing else was working.
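For the record, the two hosting styles differ in one important way: WorkflowInvoker.Invoke runs the activity synchronously, while a WorkflowApplication does nothing until you explicitly call Run – which is exactly the call I had missed. A minimal sketch, using the QuestionForPrize activity from the demo:

WorkflowInvoker.Invoke(new QuestionForPrize());   // synchronous – blocks until the workflow finishes

var app = new WorkflowApplication(new QuestionForPrize());
app.Run();                                        // asynchronous – forget this call and nothing runs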

This morning before I went to work, I fixed that problem, and corrected a couple of typos in the slides. Now that I am back home, I am going to finish the rest of the demo. That was always my plan, because I didn’t think that I would have time to do everything live.

I had already created one event – the completed event. But now that I allow the workflow to persist, I want to capture another event – the PersistableIdle event.
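For reference, the Completed hookup from the first demo looked roughly like this (a trimmed-down version of the handler you will see in the full listing below):

app.Completed += e =>
{
	Console.WriteLine("Workflow completed.");
	_completedEvent.Set();   // release the thread waiting in Main
};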

Here is an example of hooking up the PersistableIdle event:

app.PersistableIdle = e =>
{
	Console.WriteLine("Persisting...");
	_persistingEvent.Set();               // signal the waiting Main thread
	return PersistableIdleAction.Persist; // persist, but keep the instance in memory
};

Then I want to change my Main from this:

WorkflowApplication app = CreateNewWorkflow();
app.Run();
_completedEvent.WaitOne();

to this:

WorkflowApplication app = CreateNewWorkflow();
app.Run();
_persistingEvent.WaitOne();
app.ResumeBookmark("readPrizeCode", Console.ReadLine());
_completedEvent.WaitOne();

Run it again, and everything works. But now what happens if I close the console application after receiving the prize code?
I need some way of loading the existing workflow, but I don't have anything to load it by. As I mentioned last night, there are a couple of ways to do this:
1) Use promotable properties, so that some of your properties are persisted along with the workflow.
2) Track the mapping between your custom property and the workflow instance ID in a separate table.

I chose the second option, and I had already added the DataAccessLayer to do this. But to take advantage of it I need to add the mapping record when the workflow goes idle and remove the mapping when it completes. Note that the idle handler below returns PersistableIdleAction.Unload rather than Persist, so the instance leaves memory entirely and has to be loaded back by its ID. The final result looks something like this:

static readonly ManualResetEvent _completedEvent = new ManualResetEvent(false);
static readonly ManualResetEvent _persistingEvent = new ManualResetEvent(false);
static readonly ManualResetEvent _unloadedEvent = new ManualResetEvent(false);

static void Main(string[] args)
{
	string email = null;
	while (string.IsNullOrWhiteSpace(email))
	{
		Console.WriteLine("Enter your e-mail address:");
		email = Console.ReadLine();
	}

	bool done = false;
	while (!done)
	{
		WorkflowApplication app = CreateNewWorkflow(email);

		// See whether a persisted instance already exists for this e-mail address
		Guid instanceId;
		using (var session = new EfSession())
		{
			instanceId = session.Workflows.GetWorkflowInstance(email);
		}
		if (instanceId != Guid.Empty)
		{
			app.Load(instanceId);

			string prizeCode = null;
			while (string.IsNullOrWhiteSpace(prizeCode))
			{
				Console.WriteLine("Enter your prize code:");
				prizeCode = Console.ReadLine();
			}

			app.ResumeBookmark("readPrizeCode", prizeCode);
			_completedEvent.WaitOne();
			done = true;
		}
		else
		{
			app.Run();
			_unloadedEvent.WaitOne();
		}
	}
}

public static WorkflowApplication CreateNewWorkflow(string email)
{
	IDictionary<string, object> outputs = null;
	var app = new WorkflowApplication(
		new QuestionForPrize());
	app.Completed += e =>
	{
		outputs = e.Outputs;
		Console.WriteLine("Removing instance {0}...", app.Id);
		using (var session = new EfSession())
		{
			var instances =
				from wi in session.Workflows.All
				where wi.EmailAddress == email && wi.WorkflowInstanceId == app.Id
				select wi;
			var instance = instances.SingleOrDefault();
			if (instance != null)
			{
				session.Workflows.Delete(instance);
				session.Save();
			}
		}
		_completedEvent.Set();
	};
	app.PersistableIdle = e =>
	{
		Console.WriteLine("Persisting instance {0}...", app.Id);
		var wi = new WorkflowInstance
		{
			EmailAddress = email,
			WorkflowInstanceId = app.Id,
		};
		using (var session = new EfSession())
		{
			session.Workflows.Add(wi);
			session.Save();
		}
		_persistingEvent.Set();
		return PersistableIdleAction.Unload; // unload completely; we will Load it again by instance ID
	};

	app.Unloaded = e =>
	{
		Console.WriteLine("Instance {0} has been unloaded", app.Id);
		_unloadedEvent.Set();
	};

	app.InstanceStore = GetInstanceStore();
	return app;
}

private static InstanceStore GetInstanceStore()
{
	var instanceStore = new SqlWorkflowInstanceStore(
		ConfigurationManager.ConnectionStrings["Workflow"].ConnectionString)
	{
		HostLockRenewalPeriod = TimeSpan.FromSeconds(1) // let locks held by a dead host expire quickly
	};

	// Register this host as a workflow owner so the store will let it load and run instances
	InstanceHandle handle = instanceStore.CreateInstanceHandle();
	InstanceView view = instanceStore.Execute(
		handle, new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(30));
	handle.Free();
	instanceStore.DefaultInstanceOwner = view.InstanceOwner;
	return instanceStore;
}

Here are the slides and demos for the talk. Thanks everyone for coming!

This was my fourth year presenting at the San Diego Code Camp (and the fifth year attending).

On Saturday I went to see Dustin Davis speak on Aspect Oriented Programming with PostSharp, because we had been e-mailing back and forth about combining our possible AOP talks. I then stopped by to see Mark Rotenburg speak about NetDuino, which I hadn't really played with before – but after that talk I really wanted to.

After lunch I tried to get in to see Jon Galloway speak about MVC 3, but the place was so packed I went next door to see Brad Cunningham speak about “Becoming a better ninja”.

Ike Ellis and I spoke next about SQL Azure vs. Amazon RDS. This was probably the easiest talk I have ever given with no preparation. Ike and I interact with one another quite a lot, and it was really no different in front of an audience than in front of a group of company developers.

Unfortunately I was peppered with questions after the talk, so I missed the final session.

On Sunday I gave the first talk, on Aspecting EF and WCF. I was glad to see that I was given a bigger room this year. Jim Houghton and Mark Taparauskas from DevelopMentor were also there. I was nervous because of the large amount of material I had to cover, but I think the talk went well considering how advanced the topic was. In the hour-long talk I managed to go over the WCF extensibility model, write an IOperationInvoker, discuss the EF provider model, plug in a custom provider, discuss how lambdas can be used to call services in a type-safe manner, describe the decorator pattern, talk about how to bundle up reusable calling logic, and switch the calling model of my existing client. There were three demos: aspecting WCF on the service side, EF on the client side, and WCF on the client side.

After my talk I headed over to see Alex Shah talk about PhoneGap. Having done a little phone development myself I found the topic interesting, although the talk could have used a little polish.

I went to lunch with Mark, and when I got back Llewellyn was waiting to prepare our talk on Reactive Extensions. My original talk had been to introduce some concepts and then build a couple of nifty demos. When Llewellyn joined, he wanted to show the Koans for each particular set of functions. So we added a demo slide after every slide I wanted to cover, each with the name of a method or two from the Koans. We finished early, but unfortunately not in time to catch the talk before ours, so we just hung out until the end and gave the final talk.

During the talk, Bart texted to say that Run is now called ForEach. Not sure if I agree with that rename, but there it is. The final demos were on drag and drop with Silverlight, and on throttling text events to issue web service calls. We had one question at the end around Throttle, which I will post about when I get a chance.

I love the Reactive Framework, and I will get to why in a minute. I even did a talk on it at the Guerrilla.NET class in May. That talk didn't go so well, because they keep changing the freakin' API! Every single time I upgrade I have to fix things. At this last class, unfortunately, I didn't even realize that I was upgrading: I made the mistake of creating a project from scratch and using NuGet to pull down the latest version of Reactive. All of the demo code that I was using broke in subtle ways that I couldn't quite figure out live, so I had to pull out a project that I had built a week prior. I have only now had the time to sift through the changes and make everything work again.

There are still two major ways to subscribe to UI events. The API used to be called FromEvent; now it is FromEventPattern, even though FromEvent still exists for some reason. The important part isn't whether the API is called FromEvent or FromEventPattern. The distinguishing feature is whether you pass one generic type argument or two. If you pass just one, you get to use the old string-based method, like this:

var mousemoves1 =
	from evt in Observable.FromEventPattern<MouseEventArgs>(rect, "MouseMove")
	select evt.EventArgs.GetPosition(LayoutRoot);

If you pass in two (the first being the delegate type, and the second being the EventArgs type), you get to use the more confusing but more flexible API. It makes sense after you think about it for a while. That one looks like this:

var mousemoves2 =
	from evt in Observable.FromEventPattern<MouseEventHandler, MouseEventArgs>(
		h => rect.MouseMove += h, h => rect.MouseMove -= h)
	select evt.EventArgs.GetPosition(LayoutRoot);
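Either way you end up with the same observable stream of positions, so consuming it looks identical. For example:

mousemoves2.Subscribe(pt => Debug.WriteLine(pt)); // pt is the Point returned by GetPosition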

The reason you can't simply pass MouseMove is that the add_ and remove_ methods for the event aren't exposed directly, so you have to supply lambdas that attach and detach the handler for you.

Anyway, one of the best features of Reactive is the fact that you can transform a set of UI-specific events that can only be handled by the UI into a set of events that can be handled by anyone – a ViewModel, for instance.

var textChanged = Observable.FromEventPattern<TextChangedEventHandler, TextChangedEventArgs>(
				h => textBox.TextChanged += h,
				h => textBox.TextChanged -= h);
var stringsChanging = 
	from tc in textChanged
	select ((TextBox)tc.Sender).Text;

// Now that we have transformed these events we can pass 
// them safely into the ViewModel
// which will still know nothing of how they were generated
viewModel.SetupSubscription(stringsChanging);
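SetupSubscription itself isn't shown here, but a minimal sketch of the ViewModel side might look like this (assuming the ViewModel just throttles the strings and kicks off a hypothetical Search call):

public void SetupSubscription(IObservable<string> stringsChanging)
{
	stringsChanging
		.Throttle(TimeSpan.FromMilliseconds(500)) // wait for the typing to settle down
		.DistinctUntilChanged()                   // ignore unchanged text
		.Subscribe(text => Search(text));         // Search is a hypothetical web-service call
}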

The idea of transforming and passing around events is one of the main reasons why I still love Reactive, even though they break me every month or so…

I just finished a Guerrilla.NET in Boston with Michael Kennedy and Mark Smith. Here are the topics we covered.

  • Introduction to WPF and Silverlight
  • ASP.NET MVC 3.0: Beyond the Basics
  • LINQ to Objects and LINQ to XML
  • Entity Framework
  • Model-View-ViewModel for WPF and Silverlight
  • PFx: Task and The Parallel Class
  • Thread Safety and Concurrent Data Structures
  • Building WCF REST Services
  • C# 3.0, 4.0, and 5.0
  • Entity Framework and the Repository Pattern
  • jQuery
  • Cloud Computing for the .NET Developer: IaaS, PaaS, and Patterns
  • The NoSQL Movement, LINQ, and MongoDB
  • iOS Programming with .NET and MonoTouch
  • Design Patterns for Testability (DI, IoC, and unit testing)
  • Reactive Framework for .NET (Rx)
  • WCF Data Services
  • Power Debugging with WinDBG

I had a great time, and as an added bonus I learned some things. I was monkeying for Mark while he was doing the “Entity Framework and the Repository Pattern”. He did two things that I thought were better than I had done in the past. But first some background…

I used to structure my repositories using an interface similar to this:

public interface IRepository<T, TKey>
{
	IEnumerable<T> GetAll();   // or sometimes returning an IList
	T GetById(TKey id);        // sometimes just taking an object
	void Add(T t);
	void Delete(TKey id);      // sometimes taking a T
	void Save();
}

I may also have other interfaces that specialize a particular IRepository with additional query methods.
Sometimes when I am implementing the repository pattern I combine it with a session object. This is especially nice when using NHibernate or doing web work (where the controller method creates the session and closes it before returning). The session hangs on to the context so that different repositories can be called together in a transaction.
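As a rough sketch of the session idea (the shape here is assumed; MyContext, EfRepository, and Customer are stand-ins for your own EF context, repository implementation, and entities):

public class EfSession : IDisposable
{
	private readonly MyContext _context = new MyContext();

	// Every repository is handed the same context, so their
	// changes can be committed together
	public IRepository<Customer, int> Customers
	{
		get { return new EfRepository<Customer, int>(_context); }
	}

	public void Save() { _context.SaveChanges(); }
	public void Dispose() { _context.Dispose(); }
}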

Mark pointed out two things that work differently in Entity Framework and LINQ than in other ORMs.
1) Save() doesn't need to go on the individual repositories anymore; it can be moved up into the session object, because SaveChanges() commits all changes across everything using the same context.
2) If you change the All method to return an IQueryable, you can remove all of the other Get methods, because they can be built up with LINQ.

So the new IRepository looks like:

public interface IRepository<T>
{
	IQueryable<T> All { get; }
	void Add(T t);
	void Delete(T t);
}

That is a good simplification, thanks Mark.
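For example, GetById no longer needs to be on the interface; it falls out of All as a one-liner (assuming an entity with an Id property, and a session exposing a Customers repository like the sketch above):

var customer = session.Customers.All.SingleOrDefault(c => c.Id == id);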

A nerve-racking morning

Today I woke up ready to start preparing for my GeekSpeak talk. You could argue that I severely procrastinated in waiting until today, and you would be right. I hadn’t done this particular demo before, but I had played with both the Diagnostics APIs and the Service Management APIs pretty extensively, so I was fairly confident. Even though I was starting at 7:30, I felt sure I would have a polished demo by 11:30. Let’s just say I now have an even healthier respect for good backup strategies.

Early on I made a couple of glitches that set me back, like pasting code from the 1.2 version of the Azure SDK that used "DiagnosticsConnectionString", because the new 1.3 version uses "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString". All I got was a completely unhelpful error message: "Error on startup". I managed to get through that, and I got the custom performance counter created.
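For anyone hitting the same wall, the working setup looked something along these lines (a sketch from memory; the custom counter name is made up, but the connection-string setting name is the part that changed between 1.2 and 1.3):

// In the role's OnStart
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
{
	CounterSpecifier = @"\MyApp\Requests Per Second", // hypothetical custom counter
	SampleRate = TimeSpan.FromSeconds(5)
});
config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
DiagnosticMonitor.Start(
	"Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);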

Collecting was a bit tricky, because I was trying to demonstrate a product that I hadn’t used in a while called Cerebrata Azure Diagnostics Manager. I kept starting it up and trying to connect before the WADPerformanceCounterTable had been created. Once I learned patience, I was able to get it running successfully.

At this point it was about 9:30, and I was ready to start on the Service Management piece. I opened up a project I had used on numerous occasions in the past to eliminate all of the running instances on a list of subscription IDs. I had one little heart attack when I realized that, out of all the APIs I had utilized, I had never pulled out the configuration info and never called Change Deployment Configuration. I managed to get the configuration extracted and base-64 decoded fairly quickly, used some LINQ to XML to modify the configuration XML, and then it was time to POST. And… it doesn't work… WHAT! What do you mean you can't read the configuration information? I have a demo in 45 minutes! After a little searching I found that you have to prepend the XML declaration:

<?xml version="1.0" encoding="utf-8" ?>

Even though the declaration was there when I extracted the configuration, it gets dropped during the transformation (XDocument.ToString() omits it). With that – voila! – it works. And I still had 15 minutes to spare. Whew! Not too bad for a morning's work.
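The fix was simply to glue the declaration back on before base-64 encoding the configuration for the POST. Roughly like this (encodedConfig stands in for whatever the Get Deployment call returned):

string xml = Encoding.UTF8.GetString(Convert.FromBase64String(encodedConfig));
XDocument doc = XDocument.Parse(xml);
// ... LINQ to XML edits to the instance count go here ...
string fixedXml = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" + Environment.NewLine + doc.ToString();
string newEncodedConfig = Convert.ToBase64String(Encoding.UTF8.GetBytes(fixedXml));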

And here was where the "benevolent being" thought, "Man, this guy is cocky. I will teach him a lesson." What happened next I still don't clearly understand. I had done all of the work around extracting and updating configuration information inside the CleanUpAzureDeployments solution, and I wanted to move it over to the AutoScaling solution where I had done all of the performance work. Three simple actions and I lost almost two hours' worth of work:
1) I used Ctrl-X to cut the GetDeploymentConfiguration, AddAnotherInstanceToConfiguration, and UpdateConfigurationInformation functions (about 100 lines of code).
2) Seeing that the code was removed successfully, I assumed I had it in my clipboard, so I closed that solution.
3) I used Ctrl-V to paste it into the new project.

And nothing appeared…

I pressed it again; maybe I didn't press it hard enough. Then at least a minute passed while I sat looking at the computer screen in a state of slowly growing terror. After the initial shock, I started cursing Visual Studio in the vilest sort of language for not supporting automatic backups like Eclipse does.

At this point it was time for the call, but I had just lost my most important demo, the one that took me almost two hours to get working. I placed the call to Glen, explained the situation, and started frantically trying to recreate the demo before everyone joined the call. Thank goodness for a weird form of the 80/20 rule, which states that you can redo work you have recently done in about 20 percent of the time. During the sound checks and introductions I was coding away, trying to recreate the 100 lines that I had lost, and I managed to finish them before the curtain came up.

I think the stress probably took a month off my life expectancy. I am not sure my nerves had time to recover by the time I finished the call.

The talk itself

The talk itself might have gone well. It was tough to tell from my perspective, because I was so freaked out that I couldn't think straight. Anyway, thanks to everyone who was on the call. Here are the newly recreated auto-scaling demos.

Also I had some questions:
1) What was the name of an easy Auto-Scaling product?
I should have remembered the name, because it is similar to the name of the Amazon solution (CloudWatch): AzureWatch.
2) Can we get a list of what built-in performance counters are available on Azure?
This one is a little tricky to answer, because any performance counter you can use on premises you can also use in Azure. A better question might be: "Which performance counters can't I use in Azure?" The short answer is none, although of course you can only use counters from software that is actually installed, which in practice means all of the core Windows counters.
3) Can Azure monitor the Security channel of the Windows event log?
I understand that it is possible, although I have not done it myself. To read an event log that is strongly ACLed, though, I am pretty sure you will need to include:

<Runtime executionContext="elevated"/>

in the ServiceDefinition.csdef file.
4) Can I get a full list of what’s in the DiagnosticMonitor namespace?
The best overall picture of what’s going on is found here, about midway down.
5) Where is the information on the Rest API for changing configuration settings, how the cert works, etc?
That can all be found in the normal MSDN documentation here.

Thanks again!

Thoughts about how Azure is architected have been jumping around my brain for a long time now. Subconsciously I was trying to tie all of these thoughts together, but as usual it took some idle time when I wasn’t thinking about anything else for it to come to the foreground.

I first began seeing problems with the way Azure is architected over a year ago at the San Diego Day of Azure (Saturday, Oct 03, 2009). There I ran into a brilliant guy I hadn't seen in a while, Jason Diamond. Jason is a fellow DevelopMentor instructor and former co-worker. He was playing with NServiceBus and asked whether you could deploy both web and worker roles to a single machine. This pointed out a limit in Azure's ability to scale down. While we were talking I pointed out another limit: for the service level agreements (SLAs) to take effect, you have to run two instances of each role. Together, these two problems meant that if you have both a web role and a worker role, you basically need four dedicated instances to achieve the SLAs. Ouch!

Then in preparation for writing my cloud course I started reading more about Google App Engine. I was marveling at how they could offer so much for free, until I realized that they weren't dedicating *any* hardware to a particular App Engine "customer." As a customer you might be running on a box with hundreds or even thousands of other customers. Heck, for all you know you might not be running on a box at all. The interesting thing is, until you hit the limits, you don't really care. When you do hit the limits, you can start paying money and Google might upgrade you to your own machine (actually, I am not really sure what they do – it is difficult to tell from the skimpy documentation on how it actually works under the covers).

Then last Friday Ike Ellis and I were writing an article about SQL Azure vs. Amazon RDS. Probably the most interesting parts of the article were the graphs (price and performance).

I think that if SQL Azure can flatten the storage cost line a little bit, it becomes a much more compelling offering. It is also more "cloudy". By that I mean that SQL Azure is SQL Server re-architected for the cloud, not just an instance of SQL Server running in the cloud. SQL Azure is multi-tenant, it maintains three replicas automatically, and if a box is "getting hot" SQL Azure can move a database to another box with less running on it, in order to better support that customer's needs. I think it is a great abstraction and will ultimately win in the long run.

Regardless of what the marketing materials say, Azure was architected as Infrastructure as a Service. I know it is positioned as Platform as a Service, but under the covers it is definitely – without a doubt – an infrastructure-based system. That is both good and bad. It is great once you get big enough to need your own dedicated hardware, but until you reach that point, you really don't need all of the expense that goes along with paying for multiple CPUs owned solely by you. Google has proved that putting lots of customers together on the same hardware is much cheaper than giving each customer their own hardware. That is how they can afford to give away so much for free.

If Azure really is a platform, then it should start acting like one. To me a platform is something that you can stand on without having to know how it was constructed underneath. In Azure, thanks to the law of leaky abstractions, some of the infrastructure details come leaking through. This is most notable in the fact that you have to manually or programmatically adjust the number of instances that your application is running on. "Instances?! I am running on instances? I thought I was running on telepathic robots! I am going over to Google, where telepathic robots do my work for me; instances are so 2000-and-late."

If Azure had the same free entry model as Google, where applications run in a multi-tenant environment, then you would simply deploy your application to the platform, and the platform would make sure that it never fell down. Microsoft knows how to set up a system like this, as they have demonstrated with SQL Azure. This is the ideal entry-level system, and an ideal on-ramp for customers. As applications outgrow the free system, they can move to dedicated hardware. That is something Google currently doesn't offer, and it would give companies the best of both worlds. In fact Microsoft could apply the same philosophy to SQL Azure, and compete against Amazon RDS's high-end database-in-the-cloud scenarios.

When I finally sat down and started writing the cloud course, one of the first slides I wrote was what I consider the cloud philosophies to be.

However, two announcements by Amazon in the past couple of months have made this chart slightly less accurate: Elastic Beanstalk and, just yesterday, CloudFormation. Beanstalk is Amazon's first foray into Platform as a Service (PaaS); the platform they refer to is Java. CloudFormation lets you create essentially what Azure calls a service model. It is almost as if Amazon realized that they were the most complicated of the cloud platforms and started thinking of ways to simplify it :)

I wasn't feeling all that great, but I let Ike Ellis drive me to Cloud Camp in San Diego. I had been to the one in San Jose with Michael Kennedy, and this one had a very similar format. The most amusing aspect was when the presenter almost got into a fight with one of the audience members over the definition of the cloud. I let Brian Loesgen talk me into holding an impromptu talk on Microsoft Azure, and to be fair I felt I should also hold one on Amazon AWS. The Microsoft Azure talk was pretty well attended, and I did my best organizing and answering everybody's questions. The AWS talk was up against some talks that I would rather have attended, but I didn't think I should abandon my own talk – so the turnout for that one was a little lower.

Yesterday we had the second Day of Azure in San Diego. All of the usual suspects were there, including Brian Loesgen, Lynn Langit, Ike Ellis, and yours truly. I was tasked with demoing my way through a ton of different features to give everyone some hands-on experience of what it is like to develop applications from scratch using Azure. The title was "Azure by Demo – From 0 to 60", and as you can see from the slide deck, the talk consisted of 8 fairly major demos one after the other. The problem was that, due to all the questions (which I love), I ran out of time. I didn't get to finish the queue demo, and I didn't have time to go into deployment, although I did mention the two biggest caveats.

Here are the demos after I removed my secret keys, and switched back to development storage.

My buddy Llewellyn showed me this a while back: Java enums rock. It seems to me that in the Java/C# war, whichever language gets a feature last does a better job, because they can see the other language's pain points. Enums were baked into .NET 1.0 (~2000), whereas Java got them in 2005. As a result, Java did a better job.

The example they give in the tutorial is this one:

public enum Planet {
    MERCURY (3.303e+23, 2.4397e6),
    VENUS   (4.869e+24, 6.0518e6),
    EARTH   (5.976e+24, 6.37814e6),
    MARS    (6.421e+23, 3.3972e6),
    JUPITER (1.9e+27,   7.1492e7),
    SATURN  (5.688e+26, 6.0268e7),
    URANUS  (8.686e+25, 2.5559e7),
    NEPTUNE (1.024e+26, 2.4746e7);

    private final double mass;   // in kilograms
    private final double radius; // in meters
    Planet(double mass, double radius) {
        this.mass = mass;
        this.radius = radius;
    }

    public double getMass()   { return mass; }
    public double getRadius() { return radius; }

    // universal gravitational constant  (m3 kg-1 s-2)
    public static final double G = 6.67300E-11;

    double surfaceGravity() {
        return G * mass / (radius * radius);
    }

    double surfaceWeight(double otherMass) {
        return otherMass * surfaceGravity();
    }

    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("Usage:  java Planet <earth_weight>");
            System.exit(-1);
        }
        double earthWeight = Double.parseDouble(args[0]);
        double mass = earthWeight/Planet.EARTH.surfaceGravity();
        for (Planet p : Planet.values())
           System.out.printf("Your weight on %s is %f%n", p, p.surfaceWeight(mass));
    }
}

Pasted from here.

There was one syntax error in the code, and I changed a couple of other things because I didn't understand the point (perhaps if I were a better Java developer I would have).

Let’s see what it would take to produce this in C#, shall we? I moved the main into a separate file (even in the Java world).
My first pass was this:

public class Planet
{
	public static readonly Planet MERCURY = new Planet(3.303e+23, 2.4397e6);
	public static readonly Planet VENUS   = new Planet(4.869e+24, 6.0518e6);
	public static readonly Planet EARTH   = new Planet(5.976e+24, 6.37814e6);
	public static readonly Planet MARS    = new Planet(6.421e+23, 3.3972e6);
	public static readonly Planet JUPITER = new Planet(1.9e+27, 7.1492e7);
	public static readonly Planet SATURN  = new Planet(5.688e+26, 6.0268e7);
	public static readonly Planet URANUS  = new Planet(8.686e+25, 2.5559e7);
	public static readonly Planet NEPTUNE = new Planet(1.024e+26, 2.4746e7);

	public static Planet[] values()
	{
		return new Planet[]
		{
			MERCURY,
			VENUS  ,
			EARTH  ,
			MARS   ,
			JUPITER,
			SATURN ,
			URANUS ,
			NEPTUNE,
		};
	}

	private readonly double mass;   // in kilograms
	private readonly double radius; // in meters
	private Planet(double mass, double radius)
	{
		this.mass = mass;
		this.radius = radius;
	}
	public double getMass() { return mass; }
	public double getRadius() { return radius; }

	// universal gravitational constant  (m3 kg-1 s-2)
	public static readonly double G = 6.67300E-11;

	public double surfaceGravity()
	{
		return G * mass / (radius * radius);
	}
	public double surfaceWeight(double otherMass)
	{
		return otherMass * surfaceGravity();
	}
}

Notice that I had to do explicit initialization, and I had to add the values function as well.
The only problem with this was that I didn't get a proper ToString (or toString for the Java folks). I could just add it, but that would be tricky. Instead, let me try another way, basically using normal .NET enums to my advantage.

public enum PlanetEnum
{
	MERCURY,
	VENUS  ,
	EARTH  ,
	MARS   ,
	JUPITER,
	SATURN ,
	URANUS ,
	NEPTUNE,
}

public class Planet
{
	public static readonly Planet MERCURY = new Planet(PlanetEnum.MERCURY, 3.303e+23, 2.4397e6);
	public static readonly Planet VENUS = new Planet(PlanetEnum.VENUS, 4.869e+24, 6.0518e6);
	public static readonly Planet EARTH = new Planet(PlanetEnum.EARTH, 5.976e+24, 6.37814e6);
	public static readonly Planet MARS = new Planet(PlanetEnum.MARS, 6.421e+23, 3.3972e6);
	public static readonly Planet JUPITER = new Planet(PlanetEnum.JUPITER, 1.9e+27, 7.1492e7);
	public static readonly Planet SATURN = new Planet(PlanetEnum.SATURN, 5.688e+26, 6.0268e7);
	public static readonly Planet URANUS = new Planet(PlanetEnum.URANUS, 8.686e+25, 2.5559e7);
	public static readonly Planet NEPTUNE = new Planet(PlanetEnum.NEPTUNE, 1.024e+26, 2.4746e7);

	public static Planet[] values()
	{
		return new Planet[]
		{
			MERCURY,
			VENUS  ,
			EARTH  ,
			MARS   ,
			JUPITER,
			SATURN ,
			URANUS ,
			NEPTUNE,
		};
	}

	private readonly double mass;   // in kilograms
	private readonly double radius; // in meters
	private readonly PlanetEnum planetEnum;

	private Planet(PlanetEnum planetEnum, double mass, double radius)
	{
		this.planetEnum = planetEnum;
		this.mass = mass;
		this.radius = radius;
	}
	public double getMass() { return mass; }
	public double getRadius() { return radius; }

	// universal gravitational constant  (m3 kg-1 s-2)
	public static readonly double G = 6.67300E-11;

	public double surfaceGravity()
	{
		return G * mass / (radius * radius);
	}
	public double surfaceWeight(double otherMass)
	{
		return otherMass * surfaceGravity();
	}

	public override string ToString()
	{
		return planetEnum.ToString();
	}
}
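As a quick parity check, here is the Java main ported over (hard-coding the weight rather than reading it from args):

double earthWeight = 175;
double mass = earthWeight / Planet.EARTH.surfaceGravity();
foreach (Planet p in Planet.values())
	Console.WriteLine("Your weight on {0} is {1}", p, p.surfaceWeight(mass));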

That achieves parity, but it is ugly.

Score 1 for Java I guess…

This brought me to the end of the first three modules (OO Programming concepts, Language Basics, and Classes and Objects) in the “Learning the Java Language trail”. Next time I will continue with “Interfaces and Inheritance”.