I missed the first session because of lack of sleep and the fact that I hadn’t finished my slides for the day’s second talk. I came in at the end of Bret Stateham’s talk on Getting Started on Azure. I didn’t expect anyone to show up for my talk on Azure Tricks and Tips, given the advanced nature of the talk and the fact that not many people are doing Azure yet. I was pleasantly surprised to find 10-15 people there, which was a great number for that kind of talk. The talk was well received and there were lots of good questions. I have uploaded the slides for the talk.

Afterwards I went to lunch with the DM gang. After I came back I was going to check out David Pallmann’s talk, but I was about five minutes late, and it was in the same room as mine, so it was hot, stuffy, and crowded. Instead I ventured over to John Bowen’s talk on the Future of XAML for XAML Developers, which was in a much nicer room. I wanted to be in the same room for the talk after the next one, and Llewellyn assured me that it was going to be crowded, so instead of heading over to several other talks I just hung out in the same room and suffered through Fundamentals of Metro Style Applications. Then it was time for my favorite talk of the day – Michael Palermo’s HTML5 for the Real World. The crazy thing was that it was my favorite talk despite knowing all the material! His dynamic and engaging style was simply fun to listen to. After that was over I headed over to a fairly good talk by Paul Mendoza on Writing Maintainable JavaScript.

The Geek dinner was great – Llewellyn was congratulated on the schedule, and I met a few people and had some interesting conversations on the CAP theorem and light particles, as well as digitizing old film.

I was undecided as to which topic to attend first the next morning. On a whim I decided to attend the Hacking Your Memory session. Much to my surprise, that became my favorite session of the entire camp – It Was Awesome! The speaker (Gary Hoffman) did a great job, the slides were well prepared, the topic was interesting, and the audience was really engaged. Check out the site if you are interested.

Next I was trying to decide between WordPress Ninja! and .NET TDD Kickstart by Barry Stahl, whom I had met two nights previously. I made the wrong choice and attended the WordPress Ninja! talk, which should have been renamed WordPress Beginner!. During lunch Llewellyn talked me into doing the afternoon sessions that were supposed to be given with Woody Zuill, but due to family emergencies Woody had to cancel. I begged off the first one to attend a Node.js talk. I then trekked back over and helped give the talk on Testing EF, ASP.NET and ASP.NET MVC. I stayed in the room to attend the final talk, User Driven Development, which had some interesting discussion.

Great conference as always. Kudos to Woody Pewitt, Bret Stateham, Llewellyn Falco, and the rest of the volunteers for their efforts.

A nerve-racking morning

Today I woke up ready to start preparing for my GeekSpeak talk. You could argue that I severely procrastinated in waiting until today, and you would be right. I hadn’t done this particular demo before, but I had played with both the Diagnostics APIs and the Service Management APIs pretty extensively, so I was fairly confident. Even though I was starting at 7:30, I felt sure I would have a polished demo by 11:30. Let’s just say I now have an even healthier respect for good backup strategies.

Early on I hit a couple of glitches that set me back, like pasting code from the 1.2 version of the Azure SDK that used “DiagnosticsConnectionString”, when the new 1.3 version uses “Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString”. I was getting a completely unhelpful error message along the lines of “Error on startup”. I managed to get through that, and I got the custom performance counter created.
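
For reference, here is roughly what that setup looks like in OnStart against the 1.3 SDK (Microsoft.WindowsAzure.Diagnostics). This is a sketch rather than my exact demo code; the counter specifier and the sample rate are just placeholders:

// Sketch only – configure collection of a custom performance counter with SDK 1.3.
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

// Sample the counter every few seconds and push the results to the
// WADPerformanceCounterTable once a minute.
config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
{
    CounterSpecifier = @"\MyCategory\MyCustomCounter",  // placeholder counter name
    SampleRate = TimeSpan.FromSeconds(5)
});
config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

// Note the new setting name – using the old "DiagnosticsConnectionString" here
// is what produced the unhelpful startup error.
DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);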

Collecting was a bit tricky, because I was trying to demonstrate a product that I hadn’t used in a while, Cerebrata’s Azure Diagnostics Manager. I kept starting it up and trying to connect before the WADPerformanceCounterTable had been created. Once I learned patience, I was able to get it running successfully.

At this point it was about 9:30, and I was ready to start on the Service Management piece. I opened up a project I had used on numerous occasions in the past to eliminate all of the running instances on a list of subscription IDs. I had one little heart attack when I realized that out of all the APIs I had utilized, I hadn’t pulled out the configuration info, and I hadn’t called Change Deployment Configuration. I managed to get the configuration extracted and base-64 decoded fairly quickly. I used some LINQ to XML to modify the configuration XML, and now it was time to POST. And… it doesn’t work… WHAT! What do you mean you can’t read the configuration information? I have a demo in 45 minutes! After a little searching I found that you have to prepend the XML declaration

<?xml version="1.0" encoding="utf-8" ?>

Even though the declaration was there when I extracted the configuration, it somehow gets removed during the transformation. With that – voila! – it works. And I still have 15 minutes to spare. Whew! Not too bad for a morning’s work.
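
For the curious, here is a simplified sketch of what that configuration-update step boils down to. It is not the actual code from my demo (the helper name and the role lookup are just for illustration), but it shows the declaration being put back before the configuration is re-encoded and POSTed to Change Deployment Configuration:

using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Xml.Linq;

public static class ConfigurationHelper
{
    // Takes the base-64 encoded .cscfg returned by the Get Deployment call,
    // bumps the instance count for the given role, and returns the updated
    // configuration base-64 encoded again.
    public static string AddInstance(string base64Config, string roleName)
    {
        byte[] bytes = Convert.FromBase64String(base64Config);
        XDocument doc;
        using (var stream = new MemoryStream(bytes))
        {
            doc = XDocument.Load(stream);
        }
        XNamespace ns = doc.Root.Name.Namespace;

        // Find the <Instances count="..."/> element for the role and add one.
        XElement instances = doc.Descendants(ns + "Role")
            .Single(r => (string)r.Attribute("name") == roleName)
            .Element(ns + "Instances");
        int count = (int)instances.Attribute("count");
        instances.SetAttributeValue("count", count + 1);

        // XDocument.ToString() drops the XML declaration, so put it back by
        // hand before re-encoding; without it the POST complains that it
        // cannot read the configuration.
        string updated = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                         Environment.NewLine + doc.ToString();
        return Convert.ToBase64String(Encoding.UTF8.GetBytes(updated));
    }
}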

And here was where the “benevolent being” thought, “Man, this guy is cocky. I will teach him a lesson.” What happened next I still don’t clearly understand. I had done all of the work around extracting configuration information and updating it inside the CleanUpAzureDeployments solution. I wanted to move it over to the AutoScaling solution that I had done all of the performance work in. Three simple actions and I lost almost two hours’ worth of work.
1) I used Ctrl-X to remove the GetDeploymentConfiguration, AddAnotherInstanceToConfiguration, and UpdateConfigurationInformation functions (about 100 lines of code)
2) Seeing that the code was removed successfully, I assumed I had it in my buffer, so I closed that solution.
3) I used Ctrl-V to paste the information into the new project.

And nothing appeared…

I pressed it again – maybe I didn’t press it hard enough. Then at least a minute passes while I sit looking at the computer screen in a state of slowly growing terror. After the initial shock, I start cursing Visual Studio in the vilest sort of language for not supporting automatic backups like Eclipse does.

At this point it is time for the call, but I have just lost my most important demo – the one that took me almost two hours to get working. I place the call to Glen, explain the situation, and start frantically trying to recreate the demo before everyone joins the call. Thank goodness for a weird form of the 80/20 rule, which states that you can redo all of the work that you have recently done in about 20 percent of the time. During the sound checks and introductions I was coding away, trying to recreate the 100 lines that I had lost, and I managed to finish them before the curtain came up.

I think the stress probably took a month off of my life expectancy. I am not sure that my nerves had time to recover by the time I finished the call.

The talk itself

The talk itself might have gone well. It was tough to tell from my perspective, because I was so freaked out that I couldn’t think straight. Anyway, thanks to everyone who was on the call. Here are the newly recreated auto scaling demos.

Also I had some questions:
1) What was the name of an easy Auto-Scaling product?
I should have remembered the name, because it is so similar to the name of the Amazon solution (CloudWatch): the product is AzureWatch.
2) Can we get a list of what built-in performance counters are available on Azure?
This one is a little tricky to answer, because any performance counter you can use on premises you can use in Azure. A better question might be: “Which performance counters can’t I use in Azure?” The short answer is none – although of course you can only use counters from software that is actually installed on the instance, so in practice that means all of the core Windows counters.
3) Can Azure monitor the security channel from the windows event log?
I understand that it is possible, although I have not done it myself. To read an event log that is that strongly ACLed, though, I am pretty sure you will need to include:

<Runtime executionContext="elevated"/>

in the ServiceDefinition.csdef file (there is a rough sketch of the matching diagnostics configuration after this list).
4) Can I get a full list of what’s in the DiagnosticMonitor namespace?
The best overall picture of what’s going on is found here, about midway down.
5) Where is the information on the Rest API for changing configuration settings, how the cert works, etc?
That can all be found in the normal MSDN documentation here.
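
Following up on question 3, here is a rough sketch of what the diagnostics side of that might look like. I have not actually run this against the Security channel myself, so treat the data source string and the transfer settings as a best guess rather than a recipe:

// Sketch only – collect entries from the Security event log and push them to
// the WADWindowsEventLogsTable once a minute. This is in addition to the
// elevated <Runtime> element in ServiceDefinition.csdef shown above.
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
config.WindowsEventLog.DataSources.Add("Security!*");
config.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);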

Thanks again!

Thoughts about how Azure is architected have been jumping around my brain for a long time now. Subconsciously I was trying to tie all of these thoughts together, but as usual it took some idle time, when I wasn’t thinking about anything else, for them to come to the foreground.

I first began seeing problems with the way Azure is architected over a year ago at the San Diego Day of Azure (Saturday, Oct 03, 2009). There I ran into a brilliant guy whom I hadn’t seen in a while, Jason Diamond. Jason is a fellow DevelopMentor instructor and former co-worker. He was playing with NServiceBus and asked if you could deploy both web and worker roles to a single machine. This pointed out a limit in the ability of Azure to scale down. While we were talking I pointed out another limit – that in order for the service level agreements (SLAs) to take effect you have to run two instances of each role. These two problems together meant that if you had both a web role and a worker role, you basically needed four dedicated instances in order to achieve the SLAs. Ouch!

Then in preparation for writing my cloud course I started reading more about Google App Engine. I was marveling at how they could offer so much for free, until I realized that they weren’t dedicating *any* hardware to a particular App Engine “customer.” As a customer you might be running on a box with hundreds or even thousands of other customers. Heck, for all you know you might not be running on a box at all. The interesting thing is, until you hit the limits, you don’t really care. When you do hit the limits, you can start paying money and Google might upgrade you to your own machine (actually, I am not really sure what they do – it is difficult to tell from reading the skimpy documentation on how it actually works under the covers).

Then last Friday Ike Ellis and I were writing an article about SQL Azure vs. Amazon RDS. Probably the most interesting parts of the article were the graphs (price and performance).

I think that if SQL Azure can flatten the storage cost line a little bit, it becomes a much more compelling option. It is also more “cloudy”. By that I mean that SQL Azure is SQL Server re-architected for the cloud, not just an instance of SQL Server running in the cloud. SQL Azure is multi-tenant, it automatically maintains three replicas, and if a box is “getting hot” SQL Azure can move a database to another box with less running on it in order to better support that customer’s needs. I think it is a great abstraction and ultimately will win in the long run.

Regardless of what the marketing materials say, Azure was architected as Infrastructure as a Service. I know it is positioned as Platform as a Service, but underneath the covers it is definitely – without a doubt – an infrastructure-based system. That is both good and bad. It is great once you get big enough to need your own dedicated hardware, but until you get to that point, you really don’t need all of the expense that goes along with paying for multiple CPUs owned solely by you. Google has proved that putting lots of customers together on the same hardware is much cheaper than giving each customer their own hardware. That is how they can afford to give away so much for free.

If Azure really is a platform, then it should start acting like one. To me a platform is something that you can stand on without having to know how it was constructed underneath. In Azure, due to the law of leaky abstractions, some of the infrastructure details come leaking through. This is most notable in the fact that you have to manually or programmatically adjust the number of instances that your application is running on. “Instances?! I am running on instances? I thought I was running on telepathic robots! I am going over to Google, where telepathic robots do my work for me; instances are so 2000-and-late.”

If Azure had the same free entry model as Google, where applications run in a multi-tenant environment, then you would simply deploy your application to the platform, and the platform would make sure that it never fell down. Microsoft knows how to set up a system like this, as they have demonstrated with SQL Azure. This is the ideal entry-level system, and an ideal on-ramp for customers. As applications outgrow the free system, they can move to dedicated hardware. That is something Google currently doesn’t offer, and it would give companies the best of both worlds. In fact Microsoft could apply that same philosophy to SQL Azure, and compete against Amazon RDS’s high-end database-in-the-cloud scenarios.

Yesterday we had the second Day of Azure in San Diego. All of the usual suspects were there, including Brian Loesgen, Lynn Langit, Ike Ellis, and yours truly. I was tasked with demoing my way through a ton of different features to give everyone some hands-on experience of what it is like to develop applications from scratch using Azure. The title was “Azure by Demo – From 0 to 60”, and as you can see from the slide deck, the talk consisted of 8 fairly major demos one after the other. The problem was that, due to all the questions (which I love), I ran out of time. I didn’t get to finish the queue demo, and I didn’t have time to go into deployment, although I did mention the two biggest caveats.

Here are the demos after I removed my secret keys, and switched back to development storage.

I spoke tonight at the San Diego .NET User Group Architecture SIG. Brian had asked me to give a talk for the upcoming Day of Azure (Deep Dive). The topic wasn’t even decided until the 4th, but as I teach Azure anyway, it was no big deal. The final topic chosen was around the storage aspects of Azure. I tried to cover a brief introduction to both the relational and non-relational (NoSQL) aspects of storage in the cloud, complete with demos. I chose to talk about SQL Azure first to give them something they could relate to (haha, sorry, couldn’t resist) before I led them into a technology that most of them didn’t have any exposure to. Thanks, everyone, for coming out.
Here are the demos.

I am teaching Azure in England this week to Microsoft, and one of my students said that he was having trouble getting an actual crash dump file to be produced, both locally and in the cloud. I think the problem was that the obvious way to write a program that crashes is to throw an exception soon after startup. The trouble is that it then crashes every time, which doesn’t give Azure a chance to transfer the crash dump file. To make this work, what I wanted was a program that crashed every *other* time, allowing it to crash, then send the dump, then crash, then send the dump, and so on.

I was able to get that working successfully by creating a NumberRepository and using it to keep track of how many times the role has run. Here are some excerpts from the code:

First the OnStart of the WorkerRole:

// Transfer directory-based logs (which include the crash dump directory) every minute.
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
// Enable collection of crash dumps (true = full dumps rather than mini dumps).
CrashDumps.EnableCollection(true);
// Note: with SDK 1.3 the setting name is "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString".
DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

Then in the Run:

// Read the persisted run count, bump it, and crash on every other run so the
// dump can be transferred on the runs in between.
int number = NumberRepository.GetNumber();
NumberRepository.UpdateNumber(++number);
if (number % 2 == 1)
{
    throw new Exception("Bye");
}

I ran it one time (outside the debugger) and it crashed; I ran it again, waited about a minute and a half, and voila – the dump appeared in the wad-crash-dumps storage container.
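
The NumberRepository itself isn’t anything special – here is a minimal sketch of one way to do it, just persisting the count to a file so that it survives the crash (blob or table storage would work equally well):

using System;
using System.IO;

public static class NumberRepository
{
    // Persist the run count outside the process so it survives the crash.
    private static readonly string CounterFile =
        Path.Combine(Path.GetTempPath(), "runcount.txt");

    public static int GetNumber()
    {
        // First run: no file yet, so start the count at zero.
        if (!File.Exists(CounterFile)) return 0;
        return int.Parse(File.ReadAllText(CounterFile));
    }

    public static void UpdateNumber(int number)
    {
        File.WriteAllText(CounterFile, number.ToString());
    }
}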