On Signing Git Tags on Windows

Since I started working at Couchbase, I’ve had to get used to a whole new set of tools and processes. Most of this new process revolves around Git. I’ve previously lamented the Git experience on Windows. I’m not sure whether it’s gotten better or I’ve just gotten better at using Git. Maybe it’s a little of both…

Still, there are pain points. Git certainly has a strong *nix bias. There are assumptions about what tools are available or installed on your machine. As a Windows developer, you’ll find yourself searching for Windows variants of these tools. This was the case today when I set out to create a signed tag on the Couchbase .NET Client.

In theory, it’s a simple command.

git tag -s v1.0 -m "A message"

To sign a tag, you need to have GPG installed. I installed Gpg4win, which installs a nice GUI for managing keys and the GPG command line interface. My ignorance of the process was clear as I repeatedly attempted to use the GUI (GNU Privacy Assistant – Key Manager) to create my key. That GUI appears to create valid keys, but wherever it stores the related key part files is not where the GPG command line expects to find them.

Instead, be sure to use the command line client. Start with:

gpg --gen-key

Answer the questions when prompted (the defaults are fine). Make sure the email address you set for the key matches your user.email git config value. If key creation fails, you might need to manually create the directory c:\users\<USER>\.gnupg, which GPG apparently won’t do on its own. After you create the key using --gen-key, you should have no problem running the tag command above.

The errors that I was seeing along the way were “gpg: no writable public keyring found” and “signing failed: secret key not available.”
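With the key created, it also helps to tell Git explicitly which key to sign with, and to verify a signed tag afterward. The commands look roughly like this (YOURKEYID is a placeholder; the first command will show you the real id):

```
gpg --list-secret-keys
git config --global user.signingkey YOURKEYID
git tag -s v1.0 -m "A message"
git tag -v v1.0
```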


On Wicked Easy NoSQL with Couchbase Server

NoSQL doesn’t have to be difficult. Generally speaking, it isn’t. But admittedly, and especially on Windows, it’s not always as clean as it could be. Each database has its challenges. Some are difficult to install. Some are difficult to configure. Some have poor server admin tools. Some lack strong client library support. One of the NoSQL offerings that really gets things right is Couchbase Server. OK, full disclosure, this is my first post as a Developer Advocate for Couchbase!

Couchbase was formed when Membase and CouchOne merged. Couchbase Server 2.0 is going to be a hybrid NoSQL database, combining features from both distributed key/value stores and document oriented databases. The 2.0 product will be released in 2012. In January 2012, an interim 1.8 release will be the first official release of the merged Couchbase Server, formerly Membase Server. As part of my responsibilities at Couchbase, I’m working on the .NET client library for Couchbase Server 1.8. Below is a preview of what’s coming. If you’ve used Membase Server with .NET, you should be familiar with the code below. If you’re new to Couchbase Server, I’m going to start from the beginning.

Installing Couchbase Server

Most NoSQL databases have Windows installers, though sometimes these aren’t kept up to date. In the case of CouchDB, there are a couple of different MSI packages available, but only one works (at least as of October 2011). MongoDB has a command line installer for its service. Couchbase Server, fortunately, has an officially supported Windows installer. You can download the latest installer here. As I write this, the latest server version is 1.7.2. Check back later in January for 1.8. Grab the Community Edition of Membase Server, which is appropriate for development purposes. Membase Server will be renamed Couchbase Server with the 1.8 release. You can also grab the 2.0 Developer Preview, which already sports the new name.

After you run the installer, you’ll be taken to the web-based admin console. The admin console is where you’ll configure your cluster and manage the nodes within that cluster. In local development, you’ll likely have a single-node cluster (i.e., your dev machine).

Hello, Couchbase!

Once you’ve gotten the server up and running, it’s time to write some code. If you create a simple console app, the easiest way to include the Couchbase .NET client library in your app is to use NuGet. After you add the reference, add the following using statement:

using Couchbase;

Then add the following line to your Main method:

static void Main(string[] args) {
    var client = new CouchbaseClient();
}

After you add these lines, compile the app. You’ll probably get a strange compilation error saying the namespace ‘Couchbase’ can’t be found. The reason is that Visual Studio 2010 (I’ve made the assumption that you’re using 2010) defaults console projects to the .NET 4 Client Profile, which is a subset of .NET 4. You’ll need to change the target framework to the full .NET 4.0 (or 3.5). After making this change, you’ll be able to build.
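If you prefer editing the project file directly, the Client Profile shows up as a TargetFrameworkProfile element in the .csproj (this fragment omits the rest of the property group):

```xml
<PropertyGroup>
  <TargetFrameworkVersion>v4.0</TargetFrameworkVersion>
  <!-- Delete this element to target the full framework instead of the Client Profile -->
  <TargetFrameworkProfile>Client</TargetFrameworkProfile>
</PropertyGroup>
```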

Next, you’ll need to add some configuration info to your app.config. The entire file should look as follows:
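Here’s a minimal sketch of the Membase-era configuration (the section type name and the default 8091 port are from memory; check the client docs for your version):

```xml
<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="membase" type="Membase.Configuration.MembaseClientSection, Membase" />
  </configSections>
  <membase>
    <servers bucket="default">
      <add uri="http://127.0.0.1:8091/pools/default" />
    </servers>
  </membase>
</configuration>
```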


In the config section, you provide the client with details on how to connect and where data will be written. To tell the CouchbaseClient to use the app.config section, update the declaration as follows:

var client = new CouchbaseClient("membase");

Saving and reading user-defined types is just as easy as saving and reading primitive data types. So I’ll create a couple of classes, Brewery and Beer (intentionally simplified).

[Serializable]
public class Brewery {
    public string Name { get; set; }
    public string City { get; set; }
    public string State { get; set; }
}

[Serializable]
public class Beer {
    public string Name { get; set; }
    public Brewery Brewery { get; set; }
    public decimal ABV { get; set; }
}

These classes are just POCOs (Plain Old CLR Objects) that have been marked Serializable. I’ll new up an instance of each in my Main method.

var brewery = new Brewery {
    City = "Hartford",
    State = "CT",
    Name = "Thomas Hooker Brewery"
};

var beer = new Beer {
    Brewery = brewery,
    Name = "American Pale Ale",
    ABV = 5.3m
};

Next, I’ll persist the Beer instance by calling the client’s Store method. Note that the StoreMode requires the additional using statement be added for Enyim.Caching.Memcached.

client.Store(StoreMode.Set, beer.Name, beer);

After storing, I’ll read the Beer back out (using the same key) and display its name.

var savedBeer = client.Get<Beer>(beer.Name);
Console.WriteLine(savedBeer.Name);

And that’s it, wicked easy, right? Admittedly, this is a very simplified introduction to Couchbase Server. As I’m now taking over the .NET client library, I’ll be posting more detailed tutorials and samples.


On Getting Started with Node.js, MongoDB and Heroku on Windows

I recently had the opportunity to see David Padbury present on Node.js at the New England Code Camp. An earlier version of his slides is available here. Recently while prepping for an upcoming presentation on Windows Phone 7 location services, I decided that I’d incorporate Node.js into my demo. This post will briefly introduce the technologies in its title and describe the process of getting them installed and running on a Windows box.


Hopefully at this point, you have at least heard of Node.js. If you haven’t, stop what you’re doing right now and Bing it. Node is one of those disruptive technologies that’s going to change the way we build web apps in the years ahead. Well, it might not totally change how we build them. But things will start to look very different because of it.

OK, so what is Node.js? Node is an evented I/O web server. Such web servers use a single thread (yes, you read that correctly) to handle all requests and rely on non-blocking I/O and callbacks to achieve massive levels of concurrency. The operating system provides facilities for performing non-blocking I/O. Windows supports this feature via I/O Completion Ports. Basically, that single thread gets a request, sends it off to do something using non-blocking I/O, waits for a callback upon I/O completion and then sends the response.


It’s somewhat forgivable if you haven’t heard of Node.js. We all miss our exits from time to time. But MongoDB? There’s really no excuse for not knowing about this NoSQL staple! It’s an amazing product. But I’ll mention briefly that MongoDB is a document-oriented, schema-less database. JavaScript is used for queries and commands and documents are stored as BSON (Binary JSON). If you want more introductory material, check out my presentations.


Heroku started out as a service for deploying Ruby on Rails apps to EC2, though recently they’ve expanded their offerings. Heroku now supports Node.js, Python and Java, among others. They offer a platform that allows your app to be deployed via a Git push. You get some basic web and database resources for free and pay as your app needs to scale out. It’s all seamless. AppHarbor offers a similar service for ASP.NET apps and is arguably simpler.

Installing Node on Windows

There isn’t an installer for Node. You simply grab the latest executable from http://nodejs.org/#download and drop it somewhere like c:\Program Files\Node\Node.exe. Add the directory to your path. Open up cmd and type node. You should get the interactive node console. I won’t bother reposting the “Hello, World!” exercise. But you should go through the exercise either in the interactive shell or by simply working in a text editor and creating a JS file and then running “node yourfile.js” from the command line.

OK, I’ll re-post so you don’t have to click over.

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

Save that code snippet to a file named hello.js and run “node hello.js”.

Browse to http://localhost:1337 and Node should respond with the expected “Hello, World!” response. What’s most exciting about this exercise is that Node is running on Windows, thanks to Microsoft. Be warned that if you scroll to the comments in that post, there are some clear Linux zealots who say some dumb, dumb things about Microsoft. Don’t get me wrong, I have a dual boot of Ubuntu and Win7 on one of my Vaios. But I hate the “MS can’t do anything right” crowd. Leaving the soapbox…

Installing NPM on Windows

OK, so bare bones Node is now running. To do more, you’ll need to grab some of the numerous node modules. As Nuget is to .NET, NPM is to Node. For a young framework, Node has a shockingly rich package library accessible via NPM. Keep in mind, NPM on Windows is considered experimental.

Installation requires Git, so make sure you have that installed. Once you do, run the following commands:

git clone --recursive https://github.com/isaacs/npm.git npm
cd npm
node cli.js install npm -gf

Make sure you didn’t skip the --recursive flag; things break if you do. Also keep in mind that if your Node.exe directory is under Program Files, it’s protected by Windows and you’ll need to run cmd as an admin, since the NPM installation creates files in the Node.exe directory. You can test your installation by grabbing the express web framework for Node.js.

npm install express

Creating a Heroku Node.js App

The first thing you need to do is create an empty git repository in the directory in which you’ll create your app.

git init

After you’ve created your Heroku account and the git repo, you need to install the Heroku client. Once you’ve done that, open a command line and enter:

heroku login

You’ll be prompted to enter your email and password. After you successfully authenticate, you should receive a message that you need to either upload an existing SSH public key or create a new one. If you haven’t created an SSH key before, you can also use ssh-keygen to do so.
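If you’ve never created an SSH key, the whole exchange from a Windows command prompt looks roughly like this (the path is the default that ssh-keygen suggests):

```
ssh-keygen -t rsa
heroku keys:add %USERPROFILE%\.ssh\id_rsa.pub
```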

Next, open your text editor of choice and create a .js file named client.js (or whatever you want to name it). Add the following code (from Heroku’s Node quick start docs).

var express = require('express');
var app = express.createServer(express.logger());

app.get('/', function(request, response) {
  response.send('Hello World!');
});

var port = process.env.PORT || 3000;
app.listen(port, function() {
  console.log("Listening on " + port);
});
Note that this snippet assumes you’ve used NPM to install express as outlined above. You can test the code by running:

Node.exe client.js

and browsing to http://localhost:3000.

Next, you’re going to need to create a package.json file to tell Heroku which dependencies your app needs. The file should look like the one below (the current version of express is 2.5.0; yours might vary). I’ve also included mongoskin, which we’ll use later in this post.

{
  "name": "your-app-name",
  "version": "0.0.1",
  "dependencies": {
    "express": "2.5.0",
    "mongoskin": "0.1.3"
  }
}
From the directory where your node app lives, run the following command:

npm install

Because this command copies dependencies locally into your directory structure, you’ll want to add the following line to your .gitignore file. Heroku will use NPM to manage your dependencies on the server side.

node_modules
Next, create a file named Procfile (case sensitive) with no extension. Its content should be:

web: node client.js

This file will tell Heroku to start a web process using node to run client.js.

Next, you’ll want to create the app on Heroku, using the cedar runtime stack. The runtime stack is basically just the Heroku environment config (OS, installed libraries, etc.).

heroku create --stack cedar

This will create an app with a unique name, which you can rename to something more meaningful via the following command:

heroku rename mymeaningfulname

Heroku also creates a git remote for you, so you can push your code to Heroku.

git push heroku master

One last step to get your app running on Heroku.

heroku ps:scale web=1

This command will start a web process, which you can verify is running with the following command:

heroku ps
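If heroku ps shows the process crashing instead of running, the logs are the first place to look:

```
heroku logs
```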

Browse to http://mymeaningfulname.herokuapp.com to see that your Hello, World! app is indeed running.

MongoDB and NodeJS

There are several options for using MongoDB with Node.js. The native driver is accessible via NPM (npm install mongodb). But don’t start here. While the driver is powerful, it requires maddeningly dense code to use. Your closures will have closures will have closures will have closures…

Mongoose looks promising and simpler. It’s a full ORM style Document Mapper. But for a quick start, I prefer Mongoskin. It’s succinct and easy to use. It doesn’t offer all of the document mapping features of Mongoose, but I’m pretty sure it has the lowest barrier to entry for using MongoDB on Node.js.

npm install mongoskin

On Windows, you will likely see a message from NPM that it is “Not building native library for cygwin” followed by “command not found” and “unary operator expected” errors. I admittedly don’t know what the cause or meaning of these errors is, but they can be ignored for dev.

After installing mongoskin, modify your client.js file so it looks like:

var express = require('express');
var mongo = require('mongoskin');

var app = express.createServer(express.logger());

app.get('/', function(request, response) {
  var conn = mongo.db('mongodb://localhost:27017/yourdbname');
  conn.collection('users').find().toArray(function(err, items) {
    if (err) throw err;
    response.send(items);
  });
});

var port = process.env.PORT || 3000;
app.listen(port, function() {
  console.log('Listening on ' + port);
});
This code assumes you have MongoDB running locally and that you have a collection named users with at least one document inserted. Restart your server (Ctrl+C the running Node process and rerun node client.js). Browse again to the app and you should see the JSON output.
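If you don’t have a users collection yet, the mongo shell is the quickest way to seed one (the database, collection and field names here just match the example):

```
> use yourdbname
> db.users.insert({ name : "John" })
> db.users.find()
```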

MongoHQ and Heroku

MongoHQ is a hosted Mongo solution that offers integration with Heroku. You can enable it by navigating to your app in the Heroku admin and clicking on “Add-Ons” in the upper right hand corner. There’s a free version you can get started with (note, this requires a credit card though you’re not billed).

Click through to the MongoHQ admin pages and create a user and get your connection string. Update the line in client.js that creates the connection so it looks something like:

var conn = mongo.db('mongodb://USER:PASSWORD@SUBDOMAIN.mongohq.com:PORT/DBNAME');

That should be it. Now commit and push.

git add .
git commit -m "Commit message"
git push heroku master

Browse to your app and you’ll probably see nothing. Using the MongoHQ admin pages, you can create the “users” collection and add a document. If you do that and refresh, you should see the JSON-formatted user document.


Now that Node.js runs on Windows, devs need to take a serious look. Microsoft is not ignoring this technology and neither should those of us who develop Windows-based apps. This walkthrough hopefully gets you familiar with the tools and technologies you need to get started. Heroku isn’t running Windows and it certainly isn’t necessary for running Node, but as a sandbox it’s generally pretty useful to have.

At some point, I plan to get Node and MongoDB running on Azure. When I do, I’ll post about that too.


On Updating Records with Object (Non)Relational Mappers in F#

I’m delivering a talk on NoSQL and F# at the F# User Group in Cambridge, MA in a few weeks. As I started prepping my old C# and NoSQL presentation for an F# makeover, it didn’t take long for me to find some critical differences that more or less fly in the face of common ORM patterns.

The fetch-modify-update idiom is often used when dealing with updates to a record. If you consider a form where an Artist record may be edited, the (web) workflow is typically to:

  1. Fetch the record for display on an edit form
  2. Re-fetch the record when changes are submitted
  3. Update any properties that were changed by the form (perhaps with MVVM and AutoMapper)
  4. Save the record that was fetched and updated

In abbreviated code, this might look like:

var artists = _mongoDatabase.GetCollection<Artist>(COLLECTION);
var artist = artists.FindOne(Query.EQ("_id", id));
artist.Name = viewModel.Name;
artists.Save(artist);

This pattern works well with ORMs such as the Entity Framework and NHibernate. It also works with O(Non)RMs such as the MongoDB C# driver, which is shown in the code above. Again, the idea is to retrieve a record, modify some properties and save it back to the database, whether relational or NoSQL. However, this pattern breaks one of the core functional programming concepts, namely immutability.

As I sat down with the F# equivalent of my NoSQL demoware, I had a choice to make. I could take advantage of F#’s multi-paradigm language features and simply allow for mutable Artist instances or I could work through the problem using immutability. I chose the latter option, which I’ll describe below. But first, I’ll note that this post is certainly geared towards C# developers who are new to F#. F# developers probably don’t get too hung up on value bindings vs. variables as I do.

The code in F# is still only 4 lines, but there are a couple of notable differences.

let artists = mongoDatabase.GetCollection<Artist>("Artists")
let artist = artists.FindOne(Query.EQ("_id", id))
let updatedArtist = { artist with Name = viewModel.Name }
artists.Save(updatedArtist) |> ignore

The requirement that artist be immutable leads to the use of an F# record. An F# record is sort of like an anonymous class in C# (no constructor or methods), but it has a name and built-in facilities for copying one record to another. That copying facility is what’s demonstrated on the third F# line. The updatedArtist value is a member-for-member copy of artist, except for the Name property, which is set to the view model’s name property.

For reference, the Artist record type is defined as:

type Artist = { Name : string; Genre : string; Id : ObjectId }
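To make the copy-and-update semantics concrete, here’s a quick sketch (the values are made up); the original record is untouched, and the copy shares every field except Name:

```fsharp
let original = { Name = "Original Artist"; Genre = "Rock"; Id = ObjectId.GenerateNewId() }
let renamed = { original with Name = "Renamed Artist" }
// original.Name is still "Original Artist"; renamed.Genre and renamed.Id
// hold the same values as original's
```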

Records seem to work particularly well with ORMs and O(Non)RMs where POCOs are used to represent database records, whether documents or table rows.


On Creating an Orchard Module without Having to RTFM

Recently, I created my first Orchard module. Of course, rather than read the manual, I learned by copying the code for an existing plugin. So if you’re like me and you take a code-first approach to learning, read on as I’m going to document my experience below. Again, I haven’t RTFM (yet). So assume that some of what I’m writing is either incomplete or wrong. But the code works (or appears to be working). So I must have gotten something right…

The module I created is a wrapper around LinkedIn’s Member Profile widget. I modeled it after the Facebook.Like module by the Orchard team. Both modules are basically wrappers around JavaScript includes.

I found it easiest to develop within the context of the Orchard source code, though it’s not strictly necessary. Either download the source or clone the repository. Set up an application in IIS that points to the Orchard.Web directory. After compiling the solution, you should be able to browse to Orchard and start setting up the site.

Modules are MVC apps. Start by creating an empty ASP.NET MVC 3 project, choosing Razor as the view engine. Save the new project to the Modules directory under the Orchard.Web directory. Clean out any of the superfluous scripts or styles. Add a reference to Orchard.Framework and Orchard.Core.

Basically, what my module needs to do is capture the five possible pieces of information that must be passed to the LinkedIn scripts:

  1. Public profile URL
  2. Display mode (inline, icon and name, icon)
  3. Display text (displayed with the icon and name mode)
  4. Display behavior (on click or on hover)
  5. Whether to show connections

So these five pieces of information will be used to create the model for the module. This model will be used to create the module’s backing table, the editor template and the module’s view.

The first piece to create in the module is the actual model class. In my case, the model is the Profile (Profile.cs in the Models directory). There are actually two models to create, one is a subclass of ContentPartRecord and one a subclass of ContentPart.

The ContentPartRecord is basically the class that represents the database schema. Again, I need to read more, but I can say from a couple of YSODs that a ProfilePartRecord class needs to have properties that match column names and all properties must be marked virtual. I believe the virtual requirement is related to NHibernate being used for data access, but it could be something else with DynamicProxy.

public class ProfilePartRecord : ContentPartRecord {

	public virtual string ProfileUrl { get; set; }
	public virtual string DisplayText { get; set; }
	public virtual string DisplayBehavior { get; set; }
	public virtual string DisplayMode { get; set; }
	public virtual bool ShowConnections { get; set; }

	public ProfilePartRecord() {
		DisplayBehavior = "hover";
		DisplayMode = "inline";
		ShowConnections = false;
	}
}
The ContentPart is essentially the view model for the module. It’s the class that gets bound to the editor template and the actual module view template. There won’t always be a one-to-one mapping between the ContentPartRecord and ContentPart, but in the case of my LinkedIn.Profile module, there was. However, as I write this explanation, I suspect I could/should change that correspondence. Note the use of the DataAnnotations validation attributes. The editor template will make use of these.

public class ProfilePart : ContentPart<ProfilePartRecord> {
	public string ProfileUrl {
		get { return Record.ProfileUrl; }
		set { Record.ProfileUrl = value; }
	}

	public string DisplayText {
		get { return Record.DisplayText; }
		set { Record.DisplayText = value; }
	}

	public string DisplayBehavior {
		get { return Record.DisplayBehavior; }
		set { Record.DisplayBehavior = value; }
	}

	public string DisplayMode {
		get { return Record.DisplayMode; }
		set { Record.DisplayMode = value; }
	}

	public bool ShowConnections {
		get { return Record.ShowConnections; }
		set { Record.ShowConnections = value; }
	}
}
Once the model is created, the rest of the module dependencies can be filled in. The database migration probably makes sense as a next step. A Migrations.cs file at the root of the project contains the Migrations class, which subclasses DataMigrationImpl. If you’re not familiar with the concept of database migrations, I wrote about them in .NET a couple of years ago. Basically, you create a class that represents a schema change to the database. In the case of Orchard, Migrations use a Create method to add a table to the schema.

public class Migrations : DataMigrationImpl {

	public int Create() {

		SchemaBuilder.CreateTable("ProfilePartRecord", table => table
			.Column("ProfileUrl", DbType.String)
			.Column("DisplayText", DbType.String)
			.Column("DisplayBehavior", DbType.String)
			.Column("DisplayMode", DbType.String)
			.Column("ShowConnections", DbType.Boolean));

		ContentDefinitionManager.AlterPartDefinition(
			typeof(ProfilePart).Name, cfg => cfg.Attachable());

		return 1;
	}

The table and column names must map to the model’s ContentPartRecord subclass. If you’re worried about collisions, don’t be; tables are prefixed (with the namespace, I think). I admit I’m not sure what the call to AlterPartDefinition with the lambda is doing, but it was in the Facebook.Like module, so I assume it’s necessary. Remember, I said I didn’t RTFM… I’m also not clear on what the widget mapping in the migration does, but again, it was in Facebook.Like and appears to be meaningful. I’ll eventually look into these items.

	public int UpdateFrom1() {
		// Create a new widget content type with our map
		ContentDefinitionManager.AlterTypeDefinition("LinkedInProfileWidget", cfg => cfg
			.WithSetting("Stereotype", "Widget"));

		return 2;
	}
}
The next piece that needs to be included is the content part driver. Under Drivers, I have ProfileDriver.cs. ProfileDriver is a subclass of ContentPartDriver. Basically, this class is what gets data to the templates. ProfileDriver overrides Display and two Editor methods. In the Display method, the ProfilePart is mapped to a dynamic, which becomes the model for the module’s display view. The Editor methods provide a means for locating and displaying the editor template and saving changes, both within the context of the admin pages.

public class ProfileDriver : ContentPartDriver<ProfilePart> {
	protected override DriverResult Display(ProfilePart part, string displayType, dynamic shapeHelper) {
		return ContentShape("Parts_Profile",
			() => shapeHelper.Parts_Profile(ProfileUrl: part.ProfileUrl,
											DisplayText: part.DisplayText,
											DisplayBehavior: part.DisplayBehavior,
											DisplayMode: part.DisplayMode,
											ShowConnections: part.ShowConnections));
	}

	protected override DriverResult Editor(ProfilePart part, dynamic shapeHelper) {
		return ContentShape("Parts_Profile_Edit",
			() => shapeHelper.EditorTemplate(
				TemplateName: "Parts/Profile",
				Model: part,
				Prefix: Prefix));
	}

	protected override DriverResult Editor(ProfilePart part, IUpdateModel updater, dynamic shapeHelper) {
		updater.TryUpdateModel(part, Prefix, null, null);
		return Editor(part, shapeHelper);
	}
}

In the Handlers namespace, the ProfileHandler extends ContentHandler. Content handlers define what happens to a content part during certain events, such as OnActivated and OnLoading. In the case of the ProfileHandler in my LinkedIn project, none of these events is used. The only code in the handler is that which adds the repository dependency to the Filters collection of the handler.

public class ProfileHandler : ContentHandler {
	public ProfileHandler(IRepository<ProfilePartRecord> repository) {
		Filters.Add(StorageFilter.For(repository));
	}
}

In terms of actual code, that’s it. The next step is to create the Razor templates. Under Views/EditorTemplates/Parts is Profile.cshtml. This is simply a CRUD form that will be loaded in the admin pages.

@model LinkedIn.Profile.Models.ProfilePart

LinkedIn Profile Widget
@Html.LabelFor(model => model.ProfileUrl)
@Html.TextBoxFor(model => model.ProfileUrl) @Html.ValidationMessageFor(model => model.ProfileUrl)
@Html.LabelFor(model => model.ShowConnections)
@Html.CheckBoxFor(model => model.ShowConnections) @Html.ValidationMessageFor(model => model.ShowConnections)

The Views/Parts/Profile.cshtml template is where the LinkedIn scripts are loaded in. Right now, I have the model setup to reflect the different settings, but some settings are mutually exclusive. Until I have a chance to update the UI to reflect these mutual exclusions, I have some hacky logic to determine which script to load. The logic is:

  • If the display mode is inline, everything shows and there is no need for hover or click behavior
  • Else If the display mode is icon, show the icon only (no name) and set hover or click behavior as set in the admin tools
  • Else same as Else If, but include the display text (typically person’s name)

This logic could be tighter and I’ll eventually update the ProfilePart to be more in line with how the scripts render. But for now, it’s simple and works.

@if (Model.DisplayMode == "inline") {
	@* inline script include: everything shows, no hover/click behavior *@
} else if (Model.DisplayMode == "icon") {
	@* icon-only script include, with the hover/click behavior from the admin tools *@
} else {
	@* icon and name script include, with the display text and hover/click behavior *@
}
The placement.info file in the root of the project contains a chunk of XML used to place the Profile templates within the broader template structure of the site.
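From memory, a placement file for the two shapes defined in the driver looks something like this (the zone positions are arbitrary):

```xml
<Placement>
    <Place Parts_Profile="Content:10" />
    <Place Parts_Profile_Edit="Content:7.5" />
</Placement>
```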


Finally, the Module.txt contains information used by the admin pages to link the module to the listing of installed modules. Here you define items like the author, dependencies, name and website for the module.

Name: LinkedIn.Profile
AntiForgery: enabled
Author: John Zablocki
Website: http://bitbucket.org/johnzablocki/orchardlinkedinprofile
Version: 0.1
OrchardVersion: 1.0
Description: This is a module for using the LinkedIn Profile widget
Features:
    LinkedIn.Profile:
        Name: LinkedIn Profile
        Description: LinkedIn Profile widgets.
        Category: Widget
        Dependencies: Orchard.Widgets

If you have been playing along at home, you can now test out the module. After you’ve built the solution, you can navigate to your local site. In the admin dashboard, LinkedIn.Profile should show up on the list of installed modules (Installed tab of Modules page).

If you click the name of the module (LinkedIn.Profile) you should be taken to the Features tab of the Modules page, anchored at the LinkedIn.Profile module. Click enable.

Navigate next to Widgets. Click “Add” in any one of the content areas.

LinkedIn.Profile should be an option. Click through and you should see the form you created under EditorTemplates.

Fill out the values and view your home page. You should then see the LinkedIn Profile widget.

Well that’s an Orchard Module in a nutshell. This example is hardly the most complex, but it illustrates several basic concepts of putting a module together from scratch. I should also note that there are some command line tools included with Orchard to help automate some of the project setup. I think they’re part of the Web Platform Installer. Honestly, I haven’t tried them yet.


On Free Orchard Hosting with Bitbucket and AppHarbor

If you’re not familiar with the Orchard Project, you should be. Orchard is an open source, mostly-CMS framework built on top of ASP.NET MVC. I say mostly-CMS because the project has goals of becoming more of a generalized component framework.

I first saw Orchard in action at the NYC GiveCamp. One of the teams built out a site for a charity using Orchard, which at that point had been released for about a week (I think). As a CMS, it’s reasonably feature rich. It’s not quite WordPress yet, but it’s more than adequate for use in a typical content heavy site.

I’m working on starting an ALT.NET group up here in Boston – Beantown ALT.NET. My first efforts with getting the group off the ground were of course to buy a domain and put up a site. I grabbed the domain beantownalt.net (bostonalt.net was taken) from Namecheap, since Bob Parsons is a sick individual.

For the site, I considered the usual suspects. I know Fairfield/Westchester .NET uses WordPress. But running an ALT.NET site on a PHP app didn’t feel right. I think that group used to run on Dot Net Nuke, but running an ALT.NET site on VB.NET is worse than running it on PHP! I also considered Meringue, since I know the developer. But it doesn’t have the module support I wanted.

Ultimately, Orchard was an easy choice. It’s built on MVC and lots of ALT.NET technologies, such as Autofac and NHibernate.

At first, I set up BeantownALT.NET on a hosted VPS (damn impulse purchases) that I have. But I can’t really justify the cost of that server for the demoware that I host there. So I’m going to move the sites on that box to a cheaper solution, namely AppHarbor.

AppHarbor is Heroku for .NET. You commit your code to either a Mercurial or Git repository then push it to AppHarbor. AppHarbor then builds your solution, runs your tests and voila, your app is live in the cloud. It’s really that simple. In fact, it’s so simple, I’m going to move Beantown ALT.NET’s website to AppHarbor as I write this post.

First things first. You’ll need to grab the Orchard bits. You can grab them from CodePlex here. I’m just grabbing the web zip as I’m going to push the prebuilt web package to a Mercurial repository set up on Bitbucket.

OK, so Bitbucket… If you’re a Windows developer and you’re using Git and GitHub, I have to ask why. The Windows experience with Mercurial is simply cleaner. More importantly, Bitbucket offers free private repositories. That’s right, free and private. GitHub doesn’t offer that. I know, I know… Rails is hosted at GitHub, so everyone uses it. Get over it and get some free repositories, devs!

Anyway, so once you’ve done the sensible thing and signed up for Bitbucket, create a new repository and clone it locally.

hg clone https://username@bitbucket.org/username/project project

Back to Orchard… You could try to get the source code and set it up to build at AppHarbor, but really there’s no need. If you do, start by renaming Orchard.sln to AppHarbor.sln so that AppHarbor can disambiguate which solution to build (Orchard has a few in the source tree). Getting the Orchard source to build is a topic for a different post.

Since you’ll likely just be using your Orchard site for content management, just download the web files zip instead. This package has the compiled bits ready to be deployed. Extract those files to your Mercurial repository. Once you do that, move the files out of the Orchard directory into the root of your repository (sibling with the .hg directory). You need to move these files since we’re pushing only content and this is the website root.

Your repository should look something like:


Now go ahead and hg add and hg commit the source files. If you have an .hgignore file set up to ignore your bin directories, you’ll need to reverse that here. Remember, this is the prebuilt Orchard and you’re going to want to have the binaries deployed as well.

Next login to AppHarbor (assuming you’ve already signed up). Create a new application and click into its properties pages (just click on the name of the app on the left hand side of the top navigation bar). Under the repository URL, you should see a link “Create build URL: (show/hide).” Click show/hide and copy the link that appears. Also, while in AppHarbor, click on “Settings” and check the option to allow write access, as Orchard will need this setting to manage modules and the like.

Next, you’ll need to navigate to the “Admin” tab for your repository on Bitbucket. Look for the “Additional options/settings” widget and click “Services.” Select “POST” from the list and paste the build URL from AppHarbor. Save the changes. You’ve now wired Bitbucket to AppHarbor. Also, in the admin section, find the “User access” widget and give read access to the “apphb” user. Don’t miss this step or it won’t work! Every push to Bitbucket will now trigger a build at AppHarbor. Yes, it’s that easy…

Go ahead and push your changes to Bitbucket. Once the push completes, you should see a “Builds” section appear in your application’s properties pages at AppHarbor. You’ll also see a list of Application URLs. Navigate to one and you should see the setup page for your new Orchard site.

At AppHarbor, on the property pages for your application, you can add a SQL database. When you create that database, copy the connection string and paste it into the form for creating your site. Once you hit submit, you’re in Orchard.

So that’s it. Free hosting for your Orchard site (assuming you can stay within the pretty generous free limits), deployed via Mercurial. Kind of awesome, right?

Oh yeah, the one thing I should definitely mention is that all builds are clean! What this means is that each time you hg push, you get a clean deploy of Orchard, meaning your settings will be lost. So you have two options. The first option is not to push again. Manage your site and updates through Orchard. This option is how I plan to work. Option two is to run your site locally, make updates locally, add content locally, etc. You should be able to connect to your AppHarbor DB locally. Then just push your code when it’s ready to go live.

Leave a comment

On a Quick and Dirty Asynchronous Python Client for SimpleGeo

SimpleGeo maintains a Python client library that is both easy to use and feature complete (in terms of API coverage). Searching by long/lat is demonstrated below:

from simplegeo import Client
client = Client("KEY", "SECRET")
places = client.places
features = places.search(41.773847, -72.672477)
print features[0].properties["name"]

The client library performs two basic tasks: wrapping the API call along with its HTTP/OAuth internals and deserializing the JSON response into domain objects. While the JSON deserialization will likely prove plenty robust for most scenarios, there is a case where the client’s HTTP component is insufficient. The specific scenario occurs when working with an evented I/O web server like Tornado.
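To make the deserialization half concrete, here’s what decoding a FeatureCollection looks like with nothing but the stdlib. The payload below is invented for illustration, but the structure (features, each with a properties dict) matches the GeoJSON shape described later in this post.

```python
import json

# A trimmed, made-up FeatureCollection payload in the GeoJSON shape
# the Places API returns.
payload = '''{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "id": "SG_example",
     "geometry": {"type": "Point", "coordinates": [-72.672477, 41.773847]},
     "properties": {"name": "Example Cafe", "city": "Hartford"}}
  ]
}'''

fc = json.loads(payload)
names = [f["properties"]["name"] for f in fc["features"]]
print(names[0])  # Example Cafe
```

The real client does the same decode and then hydrates Feature objects via Feature.from_dict, as shown below.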

Tornado is a single-threaded web server that relies on non-blocking I/O and callbacks to handle large numbers (thousands) of concurrent requests with minimal overhead. For more on evented I/O web servers, see http://codevoyeur.com/presentations.aspx#tornado. The important thing to understand about this class of web servers is that any synchronous code you run on that single thread will block all other requests.

SimpleGeo’s Python client uses the httplib2 module, which is a blocking client. In other words, while a request to SimpleGeo is in flight, Tornado cannot handle any other requests. While SimpleGeo certainly seems to have infrastructure in place to handle large volumes, it’s still unwise to block the server while waiting on a network resource.

Fortunately, Tornado includes an asynchronous HTTP client that uses its non-blocking socket library. I’ve started to rewrite pieces of the SimpleGeo Python client to use Tornado’s AsyncHTTPClient. I’ll walk through these changes below.

First, I think it’s reasonable to leave as much of the original SimpleGeo code in place as possible. With that in mind, I keep a dependency on the simplegeo module and reuse the models and utilities from there. The only changes needed are to the Client hierarchy (each of SimpleGeo’s APIs has a corresponding client class).

All client requests are made via the _request method in the base Client class (found in __init__.py in the simplegeo directory). Most of the code in this function deals with constructing the request details, before sending the request. The response content is returned along with the headers.

def _request(self, endpoint, method, data=None):
    # ... request construction elided (builds headers and body) ...
    self.headers, content = self.http.request(endpoint, method, body=body, headers=headers)
    return self.headers, content

Modifying the blocking code above to use Tornado wasn’t too difficult. I had to modify the signature to accept a callback and replace the blocking call with the AsyncHTTPClient call below. All of the request construction stayed in place. Notice too that the new _request no longer has a return value; the data will come back in the callback.

def _request(self, endpoint, method, data=None, callback=None):
    # ... request construction elided (builds headers and body) ...
    http_client = AsyncHTTPClient()
    http_client.fetch(HTTPRequest(url=endpoint, method=method, headers=headers, body=body),
                      callback=callback)

So far I’ve modified only the Places API client. That’s the API that lets you find places data using addresses, lat/long, IP address, etc. The Places Client has a search method for searching by lat/long. It has some code for constructing the endpoint URI for the corresponding API call and then calls _request from the base Client. It then returns the object graph using the SimpleGeo-provided models.

def search(self, lat, lon, radius=None, query=None, category=None):
    # ... endpoint and kwargs construction elided ...
    result = self._request(endpoint, 'GET', data=kwargs)[1]

    fc = json_decode(result)
    return [Feature.from_dict(f) for f in fc['features']]

Making this method work with the asynchronous version of _request first requires accepting a callback to pass to _request. However, this callback requires a layer of indirection because of the flow of work.

The callback that the app layer wants will be passed a Feature list. The callback that the AsyncHTTPClient requires is passed the HTTP response. So I can’t simply pass the same callback through the layers. To get around that, I wrap the app layer callback in an inline function that decodes the response and in turn calls the app layer’s callback.

def search(self, lat, lon, radius=None, query=None, category=None, callback=None):
    def callback_wrapper(response):
        fc = json_decode(response.body)
        callback([Feature.from_dict(f) for f in fc['features']])

    self._request(endpoint, 'GET', data=kwargs, callback=callback_wrapper)

Finally, the calling app creates the Client instance and provides the callback that will receive the features.

def get(self):
    client = Client("vvc2y7nAjkx6fUaJqQ94FT7nAdZCWQrA", "CJYFj8Sy3WwDL2sFfQXJnDdyXh7BqDU2")
    client.places.search(41.773847, -72.672477, callback=self.on_search_complete)

def on_search_complete(self, features):
    for feature in features:
        self.write(feature.properties["name"] + "\n")
    self.finish()

Hopefully the point of the callback_wrapper is clearer having seen the Tornado code. Again, the problem is that the code in the RequestHandler expects to work with a Feature list. So it passes a callback to the client that will work with a feature list. The code in the Client layer expects to work with an HTTPResponse, so the Places Client can’t simply pass its Feature list callback through to the AsyncHTTPClient.
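Stripped of Tornado and HTTP entirely, the indirection reduces to the sketch below. FakeResponse and fetch are stand-ins I made up for Tornado’s HTTPResponse and AsyncHTTPClient.fetch; the wrapping pattern is the same.

```python
import json

class FakeResponse(object):
    def __init__(self, body):
        self.body = body

def fetch(url, callback):
    # Pretend the network came back with a FeatureCollection.
    callback(FakeResponse(json.dumps(
        {"features": [{"properties": {"name": "Example Cafe"}}]})))

def search(callback):
    # The app's callback wants a list of features, but fetch hands us a
    # response object -- so wrap the app callback before passing it down.
    def callback_wrapper(response):
        fc = json.loads(response.body)
        callback(fc["features"])
    fetch("http://api.simplegeo.com/...", callback_wrapper)

results = []
search(results.extend)
print(results[0]["properties"]["name"])  # Example Cafe
```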

There are a couple of pieces that I have to add back in, namely checking the response status and raising appropriate errors when the status is not in the 200 or 300 range. I’ll also build out the other search functions as I need them. But for now, if you’re in need of a Tornado/SimpleGeo client, this should get you started.

Code is available at https://bitbucket.org/johnzablocki/asyncsimplegeo/.

Leave a comment

On (Almost) Speaking at the PostgreSQL Conference

Shortly before leaving to go speak at the PostgreSQL East conference today, I tweeted rather ominously -

Heading over to deliver a .NET and #MongoDB talk at the PostgreSQL conference. Anticipating an empty room or lots of angry DBAs! #pgeast

As it turned out, I had not a single attendee join me for my would-be discussion. I can’t say I’m surprised. I submitted a talk not expecting to be selected. I didn’t have time to prepare a new, more relevant talk for the event. But I had one prepared so I figured it was worth a shot.

I was selected, which I think had more to do with my relationship with 10gen. I’ve spoken on MongoDB and .NET at the Hartford, New York City and Philly code camps and the NYC and Philly ALT.NET groups, and I have two more talks coming up in Philly and New Jersey. So 10gen has been good about engaging me in the community, recently inviting me to speak at the Mongo Philly event. Anyway, long story short, 10gen is great at community outreach and I love to talk about MongoDB and .NET.

I knew I was topically out of place at this conference. At one point on the speakers’ distribution list, the following statement was made (in response to the suggestion of using PowerPoint over OpenOffice):

I beg to disagree, using Powerpoint on a FLOSS conference simply doesn’t look right to me

Yeah, this is a crowd ready for a talk on .NET and MongoDB.

I’m not bitter about not having anyone show up to my talk. I generally get a respectable turnout at code camps and user group meetings. I don’t feel slighted. But I do now realize that there’s really just no interest in the broader development community in learning from or about .NET and our development world.

This notion was made clear to me when one person walked into my room about 10 minutes after my talk should have started. I joked with him that he could have dedicated MongoDB training. He offered that he didn’t know .NET so it wouldn’t be of value to him.

At the NYC Code Camp, topics like node.js and Ruby on Rails are popular. A good segment of .NET devs have been schooled in the idea that you can learn a lot about your platform by learning about other platforms. This is a tenet of ALT.NET. It seems clear to me that such thinking doesn’t exist in communities like PostgreSQL’s.

To be fair, I have never used PostgreSQL and can’t imagine finding myself needing to try it. But I wouldn’t dismiss it. And I’ve worked with MySQL quite a bit; were it not for MongoDB and SQL Server having a development edition, I’d probably give it a try. But anyone who knows me in the technical sense knows that I try to avoid the class of problems that needs to be solved by an RDBMS.

So as Microsoft actively courts the open source community by sponsoring projects and foundations such as Python, Mercurial and Apache, the open source community still treats MS as if it were 2001, with Steve Ballmer calling Linux a cancer.

I invested little more than changing a title slide and walking a few blocks to the event today, so I’m not particularly bothered by the fact that my talk didn’t get an attendee. But I’m disappointed because I was looking forward to a healthy debate about .NET, MongoDB and exposing a group of people to something novel. After all, that’s why we present, right? And after all, what’s more novel to a bunch of PHP/Java/Ruby Postgres devs and DBAs than a talk about .NET and MongoDB?

So having heard comments like “Microsoft shouldn’t make operating systems anymore” and the line above about PowerPoint, I won’t be submitting proposals to such narrowly focused product conferences – at least ones where I’m likely to be the only .NET developer in the building.

Leave a comment

On a Quick and Dirty SimpleGeo Client for .NET

SimpleGeo is a free service providing, among other location-based services, a places API. Provide a longitude and latitude and get details including the names and addresses of businesses and points of interest near this location. While the data is not complete (you may or may not get a phone number), it is still impressively comprehensive.

The SimpleGeo team has provided client libraries for Python, Java and a few other platforms, but no .NET. There was a CodePlex project named SimpleGeo.NET that was actively maintained until around April or May 2010. Unfortunately, it seems to have fallen out of sync with the official SimpleGeo API. It’s also written in VB.NET, which I think prevented the community from rescuing the codebase.

More recently, another SimpleGeo.NET client popped up on GitHub. This codebase is written in C#, but it’s still very early in its development. Its developer, Jörg Battermann, seems to be developing a rich API with support for Silverlight and Windows Phone 7. While it seems likely that this project will eventually become for SimpleGeo what NoRM was to MongoDB, it is mostly throwing NotImplementedExceptions at this time.

Unfortunately, I have a somewhat immediate need for a SimpleGeo client in C#. This meant having to hack something together for rapid consumption. It was actually pretty easy to get working, so I figured I’d share the code for those too impatient to wait on Jörg Battermann’s project.

SimpleGeo has a straightforward RESTful API. Authentication is done through two-legged OAuth. Basically, it’s just a signed request with no security token used for user authorization. API calls return GeoJSON.
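For the curious, two-legged OAuth 1.0 signing boils down to an HMAC-SHA1 over a normalized base string, with an empty token secret. Here’s a minimal sketch in Python (the helper name and parameter values are mine, and a real request also signs oauth_nonce, oauth_timestamp, oauth_consumer_key, etc.); in the C# client below, Hammock handles all of this for you.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote, urlencode

def sign_request(method, url, params, consumer_secret, token_secret=""):
    # OAuth 1.0 base string: METHOD & encoded(url) & encoded(sorted params)
    normalized = urlencode(sorted(params.items()), quote_via=quote)
    base_string = "&".join([method.upper(), quote(url, safe=""), quote(normalized, safe="")])
    # Two-legged OAuth: no access token, so the token-secret half is empty.
    key = "&".join([quote(consumer_secret, safe=""), quote(token_secret, safe="")])
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = sign_request("GET", "http://api.simplegeo.com/1.0/places/41.7,-72.6.json",
                   {"q": "coffee"}, "SECRET")
```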

Places are known as “Features” in SimpleGeo terms. When looking up places near a given longitude/latitude pair, the GeoJSON returns a collection of features. I created a brute force domain model that simply mirrors the GeoJSON.

public class FeatureCollection {

    public int Total { get; set; }
    public string Type { get; set; }
    public IList<Feature> Features { get; set; }
}


Features look like:

public class Feature {

    public Geometry Geometry { get; set; }
    public string Type { get; set; }
    public string Id { get; set; }
    public Properties Properties { get; set; }
}


Most of the interesting data is in the Properties class.

public class Properties {

    public string Province { get; set; }
    public string City { get; set; }
    public string Name { get; set; }
    public IList<string> Tags { get; set; }
    public string Country { get; set; }
    public string Phone { get; set; }
    public string Href { get; set; }
    public string Address { get; set; }
    public string Owner { get; set; }
    public string Postcode { get; set; }
    public IList<Classifier> Classifiers { get; set; }
}

Geometry and Classifier classes round out the model.

public class Geometry {

    public string Point { get; set; }
    public IList<double> Coordinates { get; set; }
}

public class Classifier {

    public string Category { get; set; }
    public string Type { get; set; }
    public string Subcategory { get; set; }
}

With that model in place, it was somewhat trivial to get the rest working. Borrowing a bit from Jörg Battermann’s work, I created a Client class that extends Hammock for REST’s RestClient.

public class Client : RestClient {

        internal const string AUTHORITY = "http://api.simplegeo.com";
        internal const string VERSIONPATH = "1.0";

        private readonly OAuthCredentials _credentials = null;
        public event Action RequestCompleteEventHandler;

Hammock has built-in support for OAuth, so I just used that, borrowing again from Jörg’s constructor.

public Client(string oAuthKey, string oAuthSecret) {
    _credentials = new OAuthCredentials() {
        Type = OAuthType.RequestToken,
        SignatureMethod = OAuthSignatureMethod.HmacSha1,
        ParameterHandling = OAuthParameterHandling.HttpAuthorizationHeader,
        ConsumerKey = oAuthKey,
        ConsumerSecret = oAuthSecret
    };

    Authority = AUTHORITY;
    VersionPath = VERSIONPATH;
    Credentials = _credentials;
}


My very, very hacky GetNearbyPlaces implementation is as follows:

public FeatureCollection GetNearbyPlaces(double latitude, double longitude, string query = null, string category = null) {

    var path = string.Format("places/{0},{1}.json", latitude, longitude);
    //TODO: less ternary operator!
    path += !string.IsNullOrEmpty(query) ? "?q=" + query : "";
    path += !string.IsNullOrEmpty(category) ?
        (string.IsNullOrEmpty(query) ? "?" : "&") + "category=" + category : "";

    var request = new RestRequest() { Path = path };
    var response = this.Request(request);

    if (response.StatusCode == HttpStatusCode.OK) {
        return JsonConvert.DeserializeObject<FeatureCollection>(response.Content);
    } else {
        //TODO: raise intelligent exceptions
        return new FeatureCollection() { Features = new List<Feature>() };
    }
}
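That ad hoc ?/& bookkeeping in the path building above is exactly what a query-string helper removes. As an illustration only (the helper name is mine, and this is Python rather than the C# client), the same logic looks like:

```python
from urllib.parse import urlencode

def places_path(lat, lon, query=None, category=None):
    # Collect only the optional params that were supplied; urlencode handles
    # escaping and the &-joining, and "?" is added only when needed.
    params = [(k, v) for k, v in (("q", query), ("category", category)) if v]
    path = "places/{0},{1}.json".format(lat, lon)
    return path + ("?" + urlencode(params) if params else "")

print(places_path(37.7645, -122.4294, query="Starbucks"))
# places/37.7645,-122.4294.json?q=Starbucks
```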

I use Hammock’s request and response sending/receiving methods and pass the result to JSON.NET, which does a brilliant job of hydrating my FeatureCollection object graph.

The code is far from perfect and is clearly a work in progress. But it works.

public void Can_Get_NearbyPlaces_With_Query() {
    Client client = new Client(OAUTH_KEY, OAUTH_SECRET);
    var collection = client.GetNearbyPlaces(37.7645, -122.4294, "Starbucks");
    foreach (var feature in collection.Features) {
        Console.WriteLine(feature.Properties.Name + " at " + feature.Properties.Address);
    }
}
If you want an even hackier version, check out my SimpleGeo WP7 experiment.

Leave a comment

On Ignoring Windows Phone 7 App Build Directories in Mercurial

When working with any modern version control system, it’s pretty common to have some global or project specific ignore file that tells your VCS of choice to ignore certain types of files on adds and commits. When working with Mercurial, you can create a file named .hgignore and save it to the root of your project.

As I’ve done every time I work with a new VCS, I Google around to find someone else’s ignore patterns and use that in my project. This practice followed me from Subversion to Git to Mercurial, my current VCS of choice.

This evening while setting up a new solution that contains a Windows Phone VS2010 project, I noticed that hg status was showing .dll and .pdb files as untracked. My ignore file clearly excludes the bin directory and I’d never seen this behavior before. What I quickly discovered was that by default, WP7 apps build to a Bin directory, not bin as web and class library projects do. The obj directory is still lowercase obj. This is a small but subtle difference and could result in an unnecessary add or two. Adding Bin to your .hgignore solves the problem.
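The bin/Bin difference comes down to case-sensitive glob matching. Python’s fnmatchcase (just an illustration of case-sensitive globbing, not Mercurial’s actual matcher) shows why a lowercase bin pattern never touches WP7’s Bin output:

```python
from fnmatch import fnmatchcase

# A lowercase pattern matches lowercase paths...
print(fnmatchcase("bin/Debug/App.dll", "bin/*"))  # True
# ...but not the capitalized Bin directory WP7 projects build to.
print(fnmatchcase("Bin/Debug/App.dll", "bin/*"))  # False
print(fnmatchcase("Bin/Debug/App.dll", "Bin/*"))  # True
```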

To add to the Web’s collection of .hgignore files, I’ve included mine below…

syntax: glob

#-- Files

#-- Directories
bin
Bin
obj

#-- Microsoft Visual Studio specific

Leave a comment