Harnessing the BackPack API - Part IV

  This article is the fourth in a series on the BackPack API. Part IV adds the ability to remove, or delete, pages, notes, tasks, and other items.
By Michael K. Campbell

Difficulty: Easy
Time Required: 3-6 hours
Cost: Free
Software: Visual Studio Express Editions, BackPack API

Previously on BackPack API...

Well, here we are: on the fourth and final installment of the articles on Xml4Fun dedicated to exploring the BackPack API. In the previous articles we've covered a lot of ground. We covered the API itself in the first article, along with ways to translate user input into dynamically generated XML and pass data back and forth between our application and the BackPack servers. In the second article we took things a step further by creating an object model, using it to consume data expressed as XML, and configuring it so that it could consume that XML either from disk or from the web—all without any noticeable difference to our application. In the third article we examined the flow of information between our user interface, the middle tier, the BackPack servers, and then back out to the UI again (via the middle tier). We also added a TreeView control to visually represent our BackPack data, and loaded each TreeNode's Tag property with a ResourceDescriptor object designed to allow for quick and easy lookups to the kind of data represented.


In previous articles, we've worked on the basics of our object model, but left out a key component: the ability to remove, or delete, pages, notes, tasks, etc. Coding the functionality to remove objects is fairly straightforward, and just involves wiring up a bit of logic to our existing infrastructure. Adding a new context menu item allows users to right-click and delete nodes.


When end users select the delete option, the node's Tag property is evaluated to determine what kind of node (Page, Task, Note, etc.) is being flagged for deletion. A command representing the deletion request is then assembled by either the PageManager or the Page in question (if it's the child object of a Page that is being deleted), and the command is then routed to the local BackPackGateway instance for processing against the BackPack servers. Once the operation is completed against the servers, the PageManager removes the corresponding PendingOperation instance representing the operation in question, modifies any local data as needed (i.e., deletes it), and then notifies the UI that the operation has completed, upon which the UI can remove the corresponding node from the TreeView.
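As a sketch of that dispatch logic, consider the following. The type and member names here (ResourceDescriptor's fields, the router itself) are assumptions standing in for the article's actual object model, not code from the sample:

```csharp
using System;

// Hypothetical stand-ins for the article's object model; these names and
// fields are assumptions, not code from the sample application.
enum ResourceType { Page, Note, Task }

class ResourceDescriptor
{
    public ResourceType Type;
    public long Id;
    public long ParentPageId;
}

static class DeletionRouter
{
    // Decide which object should assemble the delete command, based on
    // the descriptor normally stored in the TreeNode's Tag property.
    public static string RouteDeletion(ResourceDescriptor descriptor)
    {
        if (descriptor.Type == ResourceType.Page)
        {
            // Top-level pages are handled by the PageManager itself.
            return "PageManager deletes page " + descriptor.Id;
        }
        // Child objects are handled by their owning Page.
        return "Page " + descriptor.ParentPageId + " deletes " +
               descriptor.Type + " " + descriptor.Id;
    }

    static void Main()
    {
        ResourceDescriptor note = new ResourceDescriptor();
        note.Type = ResourceType.Note;
        note.Id = 17;
        note.ParentPageId = 3;
        Console.WriteLine(RouteDeletion(note));
        // prints: Page 3 deletes Note 17
    }
}
```

In the real application, of course, the router would hand back an assembled command for the BackPackGateway rather than a string; the point is simply that the Tag-stored descriptor drives the dispatch.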

There is, however, one catch: when Pages assemble commands for processing on the server, they don't communicate those commands directly to the BackPackGateway object. Instead, they raise an event that lets the PageManager know that one of its Pages needs to marshal a change up to the server. The PageManager, in turn, determines whether to push the change immediately (if online) or store it for processing later (once connectivity has been restored). The problem is that because Pages announce their changes via events, we need to unbind event handlers in the PageManager before we can remove any given Page. Removing an event handler is a fairly straightforward operation, and looks like the following:

Visual C#

Page removed = e.PendingOperation.State as Page;
if (removed != null)
{
    removed.OperationAssembled -=
        new OperationAssembledEventHandler(this.Page_OperationAssembled);
    removed = null;
}

Visual Basic

Dim removed As Page = TryCast(e.PendingOperation.State, Page)

If (Not removed Is Nothing) Then
    RemoveHandler removed.OperationAssembled, _
        New OperationAssembledEventHandler( _
            AddressOf Me.Page_OperationAssembled)
    removed = Nothing
End If

Once the event handler is removed, the page can then be dropped from the middle tier, and the PageManager can inform the UI that the page is now gone, upon which the corresponding Node (and all associated lookups) can then be removed from the UI tier. If you traverse the code that's required to delete BackPack data from beginning to end, you'll see that there's an awful lot of activity involved. Part of it is housekeeping functionality spread over two tiers (UI and a business tier), part of it is threading code designed to ensure that the application's Winform stays responsive while everything occurs in the background, and a large portion of the code involved relates to handling the commands themselves and what to do with the commands based upon network connectivity.

Network Awareness (It's alive!!!)

The ability to determine current network connectivity and detect changes is a pretty critical piece of functionality for an application whose stated goal is to allow you to work with an online application while you are offline. As such, we need a clean way to determine network connectivity. Happily, version 2.0 of the .NET Framework introduces a bunch of new functionality that makes all of that possible. The new System.Net.NetworkInformation namespace presents a number of very handy classes and utilities that let you easily determine your current network status, as well as alert you to changes in that status provided that you set up the proper event handlers. The functionality is, however, spread across a number of classes, so encapsulating core sections of that logic into a single class will make interacting with that functionality much easier for our application. The abstraction is really nothing special, and in the interest of time I'm going to skip detailing the implementation (the included code NetworkStatus.cs/vb should be clear enough—though I did blog about the implementation a while back if you'd like more info). Once successfully abstracted, the NetworkStatus class ends up looking like so:

[Class diagram: the NetworkStatus class]

Adding an instance of the NetworkStatus class to the PageManager enables it to determine current network connectivity status and handle requested changes as needed by either routing them directly to the Server (via the BackPackGateway object), or by adding them to the PageManager's internal collection of Pending Commands that will be run as soon as connectivity is re-established. If the application is closed prior to regaining connectivity, the pending commands will be persisted to disk to ensure that they are not lost—enabling us to take our BackPack data offline, and even make modifications while offline.

Visual C#

bool connected =
    this._networkStatus.ConnectivityStatus == ConnectionStatus.Connected;
bool gatewayConfigured = (this._gateway != null &&
    this._gateway.ConnectionInfo != null);
if (connected && gatewayConfigured)
{
    AsyncRemoteOperation async =
        new AsyncRemoteOperation(this._gateway.ExecuteWebMethod);
    async.BeginInvoke(url, args, operation, null, null);
}
else
{
    // queue the command until connectivity is restored
    this._pendingCommands.Add(new Command(url, args, operation));
}

Visual Basic

Dim connected As Boolean = _
    (Me._networkStatus.ConnectivityStatus = ConnectionStatus.Connected)
Dim gatewayConfigured As Boolean = ((Not Me._gateway Is Nothing) _
    AndAlso (Not Me._gateway.ConnectionInfo Is Nothing))
If (connected AndAlso gatewayConfigured) Then
    Dim async As AsyncRemoteOperation = _
        New AsyncRemoteOperation(AddressOf Me._gateway.ExecuteWebMethod)
    async.BeginInvoke(url, args, operation, Nothing, Nothing)
Else
    ' queue the command until connectivity is restored
    Me._pendingCommands.Add(New Command(url, args, operation))
End If

And, of course, because of the way everything has been architected, the UI is, effectively, oblivious to all of this and can provide almost full functionality while offline. I made the decision to disallow edits to newly created objects while offline merely to keep synchronization of temporary IDs to a minimum.

Persisting Commands

One of the great things about developing applications with an object model is that as you get closer to the finish line, tasks tend to become increasingly easier to complete, as they are able to leverage existing code and functionality. Because of all of the work already done to persist various collections of objects, persisting commands is a terribly easy task. When the user closes the Winform, logic will check to see if any pending operations were added during the current session. If they were, the user will be prompted to save their changes like so:

Visual C#

bool changesPending = this._pageManager.PendingOperations.Count > 0;

// changes may BE pending - but they could be from a previous session,
// in which case there is no need to save
bool madeThisSession = this._pageManager.ChangeCount > 0;

if (changesPending && madeThisSession)
{
    string message = "Changes made are still pending against the " +
        "server. Do you wish to save changes from this Session?" +
        System.Environment.NewLine + "(Changes made in previous " +
        "Sessions will still be persisted if you don't save now.)";

    DialogResult res = MessageBox.Show(
        message, "Save Changes?", MessageBoxButtons.YesNoCancel);
    switch (res)
    {
        case DialogResult.Cancel:
            e.Cancel = true;
            break;
        case DialogResult.No:
            // discard this session's changes
            break;
        case DialogResult.Yes:
            this.PersistPages();
            this.PersistCommands();
            break;
    }
}

Visual Basic

Dim changesPending As Boolean = (Me._pageManager.PendingOperations.Count > 0)

' changes may BE pending - but they could be
' from a previous session, in which case there is no need to save
Dim madeThisSession As Boolean = (Me._pageManager.ChangeCount > 0)

If (changesPending AndAlso madeThisSession) Then
    Dim message As String = "Changes made are still pending against the " & _
        "server. Do you wish to save changes from this Session?" & _
        System.Environment.NewLine & "(Changes made in previous " & _
        "Sessions will still be persisted if you don't save now.)"

    Dim res As DialogResult = MessageBox.Show(message, "Save Changes?", _
        MessageBoxButtons.YesNoCancel)
    Select Case res
        Case DialogResult.Cancel
            e.Cancel = True
        Case DialogResult.No
            ' discard this session's changes
        Case DialogResult.Yes
            Me.PersistPages()
            Me.PersistCommands()
    End Select
End If

The calls to PersistPages() and PersistCommands() just wrap calls to the helper method SerializeToFile() created in Article #2. That method serializes a collection of generics to disk.
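As a refresher, that kind of helper is typically just a thin wrapper around XmlSerializer. The sketch below is an assumption about the shape of the Article #2 helper (the names PersistenceHelper, SerializeToFile, and DeserializeFromFile are mine), not the sample's actual code:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// A minimal sketch of the SerializeToFile pattern; the actual helper
// from Article #2 may differ in name and signature.
public static class PersistenceHelper
{
    // Write a generic list to disk as XML.
    public static void SerializeToFile<T>(List<T> items, string path)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(List<T>));
        using (StreamWriter writer = new StreamWriter(path))
        {
            serializer.Serialize(writer, items);
        }
    }

    // Rehydrate a generic list previously written by SerializeToFile.
    public static List<T> DeserializeFromFile<T>(string path)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(List<T>));
        using (StreamReader reader = new StreamReader(path))
        {
            return (List<T>)serializer.Deserialize(reader);
        }
    }
}
```

Because Command objects are just data (URL, arguments, operation), round-tripping a collection of them through helpers like these is all "persisting commands" really amounts to.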

Each time the Winform is loaded, the PageManager will look for persisted Commands as well as Pending Operations and flag their existence as needed. If the end-user then logs in and loads Pages from the server, all pending operations are discarded (the assumption is that they're getting a new copy of the data from the server, instead of loading their previous, offline changes). If the end-user loads the Pages from disk, any pending operations against that data are then loaded as well. Pending Commands, if there are any, will be immediately executed against the server provided user credentials are available. (Likewise, pending commands "rolling around in memory" will also be executed against the server if network connectivity is restored while the application is running in offline mode.)

Of course, managing the interaction between all of the menu options, connectivity states, and permutations of pending operations was a bit messy. I believe my implementation of the various business rules governing what to do based upon "pending-ness" and connectivity is fairly logical—but it's entirely possible that the way I implemented these choices may not mesh with everyone's logic. (I regret nothing!) That's the bad news. The good news, however, is that while determining when to send pending commands to the server is a bit tricky, actually doing it is a snap. Commands themselves are nothing more than persisted data sent into a single method, so putting commands back into play involves nothing more than sending them back to the method in question. For example, I've tapped the event handler that detects when network connectivity returns to "rehydrate" commands: in that routine I thaw out any commands that were serialized to disk, add them to the list of pending commands already in memory, and then send the whole lot off to be processed as follows:

Visual C#

public void NetworkStatusChanged(object sender,
    NetworkStatusChangedEventArgs e)
{
    bool online = e.ConnectionState == ConnectionStatus.Connected;
    bool connected = this._gateway != null && this._gateway.Loaded;
    if (online && connected)
    {
        SerializableList<Command> savedCommands = (SerializableList<Command>)
            this.LoadSerializableList(System.Environment.CurrentDirectory +
            CommandsFileName, typeof(SerializableList<Command>));

        if (savedCommands != null && savedCommands.Count > 0)
        {
            // AddRange would work, but would also create dupes/collisions
            foreach (Command c in savedCommands)
            {
                if (!this._pendingCommands.Contains(c))
                    this._pendingCommands.Add(c);
            }
        }

        if (this._pendingCommands.Count > 0)
        {
            Command[] queued = new Command[this._pendingCommands.Count];
            this._pendingCommands.CopyTo(queued);
            this.ProcessPendingCommands(queued);
        }
    }
}

Visual Basic

Public Sub NetworkStatusChanged(ByVal sender As Object, _
        ByVal e As NetworkStatusChangedEventArgs)
    Dim online As Boolean = (e.ConnectionState = ConnectionStatus.Connected)
    Dim connected As Boolean = ((Not Me._gateway Is Nothing) _
        AndAlso Me._gateway.Loaded)
    If (online AndAlso connected) Then
        Dim savedCommands As SerializableList(Of Command) = _
            CType(Me.LoadSerializableList( _
                (System.Environment.CurrentDirectory + CommandsFileName), _
                GetType(SerializableList(Of Command))), _
                SerializableList(Of Command))

        If ((Not savedCommands Is Nothing) _
                AndAlso (savedCommands.Count > 0)) Then
            ' Me._pendingCommands.AddRange(savedCommands) would
            ' work, but would also create dupes/collisions
            For Each c As Command In savedCommands
                If Not Me._pendingCommands.Contains(c) Then
                    Me._pendingCommands.Add(c)
                End If
            Next
        End If

        If (Me._pendingCommands.Count > 0) Then
            Dim queued(Me._pendingCommands.Count - 1) As Command
            Me._pendingCommands.CopyTo(queued)
            Me.ProcessPendingCommands(queued)
        End If
    End If
End Sub

As for processing the commands themselves, there is one small hitch: what to do if connectivity disappears. Or, so you would think. Remember that if connectivity isn't present, the PageManager detects it before handing commands off to the gateway, and just persists them. The actual processing of the commands is handled by a helper function, which allows for processing of commands to occur when network connectivity is restored to an already-running instance of the application, or to route the commands when a new session is started with existing connectivity. The helper function does nothing more than break up the Command object and route it to the method responsible for routing commands to the BackPackGateway object:

Visual C#

private void ProcessPendingCommands(Command[] commands)
{
    for (int i = 0; i < commands.Length; i++)
    {
        Command current = commands[i];
        this.InvokeOperation(current.Url, current.Arguments,
            current.Operation);
    }
}

Visual Basic

Private Sub ProcessPendingCommands(ByVal commands() As Command)
    Dim i As Integer = 0
    Do While (i < commands.Length)
        Dim current As Command = commands(i)
        Me.InvokeOperation(current.Url, current.Arguments, _
            current.Operation)
        i = (i + 1)
    Loop
End Sub

Because the code that handles the processing of the actual commands is asynchronous, and announces results by way of events and handlers, nothing more is needed. If the commands process successfully against the server, event handlers in the PageManager and Winform will react accordingly. If there is a problem, it will be passed along by the same event, and will be handled as needed by the PageManager and UI. If connectivity somehow only "flickered" back on, and then quickly disappeared, the currently running command might run into serious problems (that's just the way of networked applications—though there is rollback functionality in place). However, subsequent pending commands will be routed into the InvokeOperation() method. Because that method checks for connectivity, and queues commands if there is no connectivity available, the commands will just be re-queued (and persisted as needed) until connectivity returns.

Professional Driver; Closed Circuit

With everything now in place, we're now ready for a full-blown test drive. While the UI is still a bit awkward around the whole 'sign in' process and my use of menus (I'm obviously not going to be putting 37Signals out of business with my app), it is still possible to take the entire app for a test drive and watch it work while on- and offline. I'd suggest something similar to the following (though any similar permutation should work fine):

  • Build and run the application by pressing F5.
  • Enter your account name and API Key, then press the Log In button. (Save your Credentials so that you don't have to retype them by using the Credentials | Save menu option.)
  • Load pages from the server (you'll need to be online for this, obviously) using the Pages menu.
  • Make a few changes while online. Adding new objects should show you how TreeNodes will be briefly "dirty" until the change is approved on the server. (Modifying most other nodes happens too quickly to see most of the time—though changes to the title or name of a node won't be reflected in the UI until the change has been made on the server, so that's a good way to see things in action.)
  • Once you've played with the application online for a while, either go offline (disable your network connection somehow) or save your pages, close the application, and then go offline.
  • With pages loaded, and while offline, make a few changes. You'll see that everything behaves as it did while online, only changes don't get marked as "un-dirty" immediately and newly added nodes aren't editable.
  • At this point you can save your changes by closing the application and selecting Yes when prompted to save, or reconnect your network connection.
  • If you simply recover your network connection (which will take the application a few seconds to notice), you'll start to see "dirty" nodes get replaced with their normal icons once they are updated on the server.
  • If you closed and saved your changes, you'll see that upon opening the application, your pending nodes are still flagged as pending. Only once you reconnect will they be marked as clean, or unpending.


Taking the application for a spin is a definite hoot once you realize what is under the covers and what is going on. It's probably not quite ready to be handed off to your pointy-haired boss, or grandma, without some work; but there's a decent framework in place that will let you stub in the rest of the BackPack API functionality (such as tagging, linking, sharing, and duplicating pages) if you are so inclined. In other words, the application isn't really intended for resale. Rather, it's a sample application intended for developers—to help them get an idea of how to communicate back and forth with servers that use XML as their interface, as well as a chance to explore options for serialization and the dynamic generation of XML. In other words, it's been an excuse to code with XML for fun.
