Harnessing the BackPack API - Part II

This article is the second in a series on the BackPack API. It focuses on another key consideration and needed component of the final application: local storage of BackPack data, which lets the application operate offline against a local copy of a user's BackPack data. If the application is online, modifications will be sent immediately; otherwise, commands will be shelved until the application regains an Internet connection and can pass them up to the BackPack servers as needed.
By Michael K. Campbell

Difficulty: Easy
Time Required: 3-6 hours
Cost: Free
Software: Visual Studio Express Editions, BackPack API
Hardware:
Download:

One of the exciting new features of .NET 2.0 is the introduction of Generics: special template-like classes that allow developers to quickly and easily interact with collections of objects in an efficient, strongly typed manner. However, while a wealth of information can be found by searching the Internet for articles and documentation on Generics, there's very little information available about the serialization of Generics into XML. In fact, documentation is virtually non-existent. This is too bad, because Generics lend themselves very well to serialization, and can really help make serializing collections of objects quite easy once a few simple but key points are addressed. This article will take a look at some of those key points and provide some concrete examples of serializing Generics, as well as an overview of just how beneficial Generics are when it comes to serializing collections of objects.

The BackPack API - Roadmap to our sample application

In my previous article, I introduced the BackPack API and mentioned that I'd spend three articles looking into how to build a Winforms application that would not only allow for direct interaction with BackPackIt.com servers to manipulate user information, but that would also allow the data to be pulled down locally and manipulated off-line. In the previous article the focus was on providing an overview of BackPack, its accompanying API, and on writing code that dynamically generated XML data that represented user input which could then be sent back to the BackPack servers for processing. While the sample application in the previous article was a bit weak, it did cover some of the key concepts of dynamic XML generation with the DOM, and also showcased how easy it is to communicate with an 'XML' Web server—all of which were key components in building the final application which will be completed in the third installment of this series.

The focus of this article will be on another key consideration and needed component of the final application: local storage of BackPack data. In order to facilitate being able to use BackPack off-line, the final application will need to operate against a local copy of a user's BackPack data. In the final application, when the user wants to change their data, the change will first be made against the local data store, and a corresponding modification will be sent for processing against the server. If the application is online, the modification will be sent immediately; otherwise, the command will be shelved until the application regains an Internet connection and can pass the command up to the BackPack servers as needed. The primary goal, therefore, will be to make BackPack "portable"—and XML will be a key player in making that so (and serializing Generics will play a large role in harnessing XML for our needs).

Of course, until operations have been successfully completed on the server, we'll want some way to know that a local copy of a page is "dirty" or pending an update on the server. We'll also want to make sure that if something happens, and the modification can't be correctly executed on the server, that we'll have some way of recovering from such an issue. But we'll deal with all of those details in the next article. Here we'll focus on a framework that will allow us to save BackPack data locally so that it can be queried as needed—and serve as the basis for making changes against the server whether we're online or offline. In other words, the goal of this article will be to provide a few more key building blocks towards the final application in the form of a simple application that will allow us to:

  1. load pages from the server,
  2. save them to disk, and then
  3. load them from the server or from disk as needed.

Architecture and Game Plan

The BackPack API returns data as XML. That's an excellent transport mechanism, but not something we want to work with in our final application. What we really want is the ability to interact with objects—to be able to tell them WHAT to do, and not have to worry about HOW that gets done. In other words, we want encapsulation: the ability to hide implementation details from other "players" in our application such that if the HOW needs to be changed, our application doesn't break. A quick review of the BackPack API shows that it provides a number of ways to access user data. For example, notes for a given page can be pulled back by querying a specific URL which will return an XML "data-gram" full of current notes associated with the requested page, as shown in the following screen capture taken from the API's documentation:

In other words, the API provides the ability to SLICE and DICE data—which is handy, but something our final application won't need. The goal of the final application will be to encapsulate EVERYTHING (well close to everything) available in the API, and, since the BackPack application revolves around pages, our application will also target pages. In other words, if we have a local copy of every page, we can then see which lists, notes, tags, links, etc. belong to any given page as needed—without requiring a bunch of chatty interactions with the server each time we need to know something. We'll therefore want to architect our application in such a way that it's page-centric. As such, the general focus and flow of this article will be to:

  1. Create a page-centric Object Model where interacting with a page provides us the ability to see assets (data) that belongs to said page, and (later, in article 3) modifying data belonging to the page is handled by the Page object itself.
  2. Construct the basis of a Business Object, or middle tier, that can manipulate page objects as needed—and can handle all of the gory details associated with bringing pages in and out of existence, whether from the server or from disk.
  3. Finally, we'll need to wrap the above functionality in a rudimentary GUI that will let us easily handle operations around loading and saving pages.

Building the Object Model

To build a suitable page-centric object model, we just need to gain an idea of what data points, or attributes, are found in each BackPack page, and create a corresponding data point, or property, for each in our classes. (In the third article we'll model behaviors, or methods that will allow us to modify and interact with page objects.) The quickest way to gain a sense of what data points need to be represented is to just take a look at a sample page documented in the API:

As I mentioned in the previous article, the BackPack API is well documented with good, solid examples. This is true of the sample page covered in the API; it successfully models all of the data points potentially needed in a single page. As you can see, a Page consists of an id, a title, a body, and potential collections of lists, notes, tags, etc. Modeling these resource types is easy, and it's the logical place to start, as they are the constituent elements of any given page.

NOTE: Because Generic collections are frequently stored in a System.Collections.Generic.List, I've opted to call BackPack's lists Tasks in my object model, just to make sure there won't be any confusion between the two.

A few seconds of typing in C# Express (using Code Snippets to generate the property definitions—woot!) and Links, Notes, Tags, and Tasks have been quickly modeled (though the following image was generated in VS 2005):

Once these objects are defined, a Page object, which contains these data types as child nodes, can then be created in similar fashion. Like the classes above, it too consists of an id property, along with a couple of other string fields, and then potential collections of the classes created above:
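Since those class definitions appear in the article only as screenshots, here's a minimal sketch of what the model might look like. This is a sketch under assumptions: the property names are inferred from the serialized XML shown later, auto-properties are used for brevity where the VS 2005-era sample would use explicit backing fields, and Created is kept as a string because BackPack's timestamp format isn't an XML-Schema dateTime.

```csharp
using System;
using System.Collections.Generic;
using System.Xml.Serialization;

// The Generic collection type used for each child collection; its XmlType
// name is derived from T, as discussed in the next section.
[XmlType("{T}s")]
public class SerializableList<T> : List<T> { }

// One of the constituent types; Link, Tag, and Task follow the same pattern.
public class Note
{
    public string Title { get; set; }
    public string Body { get; set; }
    public long Id { get; set; }
    public string Created { get; set; } // e.g. "2005-09-29 15:08:08"
}

// The page-centric root of the object model.
public class Page
{
    public bool PageIsDirty { get; set; }
    public long Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
    public SerializableList<Note> Notes { get; set; }
    // ...plus Tasks, Links, and Tags collections, modeled the same way.
}
```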

At this point, our object model is complete—at least from the standpoint of storing data at run time.

Let the Serialization Begin!

With the object model in place, it's time to turn our attention to the process of translating our objects into XML and back out again to meet our persistence needs. For the final application, we'll need to "create" page objects from XML, whether they're being loaded from the BackPackIt.com servers or from the File System, where we've saved local copies to use when there is no connection to the Internet. Normally when basing objects off of XML provided by an outside party, it's a good idea to make sure that if the external party changes the XML, it doesn't break your application—or at least that such a change is easily dealt with. A good way to do that is to run the external XML through a transform that converts it to a dependable format used internally by your application. This can easily be done via an XML Transform, which can be thought of as a buffer interface designed to help abstract details of the foreign XML away from our application.

Using a transform also provides another benefit: If we transform the incoming XML into the same format used by our application, then we can deserialize objects from BackPack servers and the file system using, effectively, the same functionality. In other words, a single page loaded from the server, once transformed, will look the same to our application as a collection of pages serialized to disk. And speaking of collections, that's where we need to turn our attention back to Generics. If you look at the implementation of the Page class, you'll see that the Notes, Tasks, Tags, and Links are implemented as a Generic Type: SerializableList. That's a custom class that I've created especially for this application. The entire declaration of that type is as follows:

Visual C#

[XmlType("{T}s")]
public class SerializableList<T> : List<T> { }

Visual Basic


<XmlType("{T}s")> _
Public Class SerializableList(Of T)
Inherits List(Of T)
End Class

Not a lot of code, is there? That's because a SerializableList is just a class that derives from the System.Collections.Generic.List class (a container for Generics) that ships with the .NET Framework 2.0. Under the covers it behaves exactly the same way as a List object, but there's one subtle difference: this custom class has an XML Serialization attribute that describes the type of information that will be serialized. That's what the [XmlType("{T}s")] attribute denotes, where the {T} is a token used as a placeholder for the type of object held in the collection. Here's a conceptual view of how it would serialize in the wild:

<{T}s>
<TypeName>data</TypeName>
<TypeName>data</TypeName>
</{T}s>

Where the type of the object contained would be output as a parent element, denoted by the {T} token, and then subsequent child elements would be serialized as needed. So, if the SerializableList contained something simple, like a string (SerializableList<string>), the XML tags would look like:

<strings>
<string>one</string>
<string>two</string>
<string>etc.</string>
</strings>

See? Simple. Complex objects serialize the same way, making Generics a powerful ally in the serialization of collections. The only trick is coming up with an XmlTypeAttribute name that is generic enough to describe whatever is being serialized; that way, one collection object will fit all of your serialization needs. In my case, I've just added an s to the end of the type name itself, and called it good—but you could easily do something like add "list" to the end of yours and that would work equally well.
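To see this in action, here's a small, self-contained sketch that serializes a SerializableList of strings to a string (the empty-namespace trick used later in the article is applied here too, to keep the output clean; the class and method names are my own for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

[XmlType("{T}s")]
public class SerializableList<T> : List<T> { }

public static class SerializationDemo
{
    public static string SerializeStrings()
    {
        SerializableList<string> list = new SerializableList<string>();
        list.Add("one");
        list.Add("two");
        list.Add("etc.");

        XmlSerializer serializer = new XmlSerializer(typeof(SerializableList<string>));

        // Replace the default xsd/xsi namespaces with a blank one,
        // keeping the serialized XML clean and tidy.
        XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
        ns.Add(string.Empty, string.Empty);

        using (StringWriter writer = new StringWriter())
        {
            serializer.Serialize(writer, list, ns);
            return writer.ToString();
        }
    }
}
```

Calling SerializationDemo.SerializeStrings() yields a strings element containing one string child per item, just as described above.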

So, with that bit of information covered, a Page object will serialize cleanly, and any Tasks, Tags, Links, or Notes will serialize out as lists of the corresponding entities whenever data is present. Here's an example:

<?xml version="1.0" ?>
<Pages>
<Page>
<PageIsDirty>false</PageIsDirty>
<Title>Example Page</Title>
<Body>This page was created via the BackPack API.</Body>
<Tasks />
<Notes>
<Note>
<Title>Sample Note</Title>
<Body>This is the body of a note</Body>
<Id>279762</Id>
<Created>2005-09-29 15:08:08</Created>
</Note>
<Note>
<Title>Another Note</Title>
<Body>More body/sample</Body>
<Id>279385</Id>
<Created>2005-09-29 17:05:35</Created>
</Note>
</Notes>
<Links />
<Tags />
</Page>
</Pages>

Note how the collection of Note objects serialized as <Notes> (or <{T}s>, where the name of the type was Note). What's more, when we go to serialize a collection of Pages, they too serialize the same way: as a <Pages> collection, with corresponding, nested collections of their constituent data, making everything tidy and easy to handle.

<Pages>
<Page>
<PageIsDirty>false</PageIsDirty>
<Title>Example Page</Title>
<Body>This page was created via the BackPack API.</Body>
<Tasks>
<!-- elided from above -->
</Tasks>
<Links />
<Tags />
</Page>
<Page>
<PageIsDirty>false</PageIsDirty>
<Title>Another Page</Title>
<Body>This page doesn't have any data in it yet—other than title/body.</Body>
<Tasks />
<Notes />
<Links />
<Tags />
</Page>
<Page>
<PageIsDirty>false</PageIsDirty>
<Title>And Yet Another Page</Title>
<Body>Unlike the above page, this one has some data loaded...</Body>
<Tasks>
<Task>
<Text>Get Stuff Done</Text>
<Id>1347087</Id>
<Complete>false</Complete>
</Task>
<Task>
<Text>Do more stuff</Text>
<Id>1347088</Id>
<Complete>false</Complete>
</Task>
</Tasks>
<Notes />
<Links />
<Tags />
</Page>
</Pages>

Serializing objects is also terribly easy at this point. We just need to know the type of object to serialize (in this case a Generic Collection), a location to save the information, and an instance of that type containing the data to serialize. Because we'll be serializing credentials and connection information, as well as pages (and later on even commands to modify data when offline), I've created a helper method as follows:

Visual C#

private void SerializeToFile(Type type, object data, string path, SavedFileType resourceType)
{
    try
    {
        XmlSerializer serializer = new XmlSerializer(type);
        XmlTextWriter tw = new XmlTextWriter(path, Encoding.UTF8);

        tw.Formatting = Formatting.Indented;
        tw.WriteRaw("<?xml version=\"1.0\" ?>");

        XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
        ns.Add(string.Empty, string.Empty);

        serializer.Serialize(tw, data, ns);
        tw.Close();

        this.OnSerializationComplete(new SerializationCompleteEventArgs(resourceType));
    }
    catch
    {
        // etc.
    }
}

Visual Basic

Private Sub SerializeToFile(ByVal type As Type, ByVal data As Object, ByVal path As String, ByVal resourceType As SavedFileType)
    Try
        Dim serializer As XmlSerializer = New XmlSerializer(type)
        Dim tw As XmlTextWriter = New XmlTextWriter(path, Encoding.UTF8)
        tw.Formatting = Formatting.Indented
        tw.WriteRaw("<?xml version=""1.0"" ?>")

        Dim ns As XmlSerializerNamespaces = New XmlSerializerNamespaces
        ns.Add(String.Empty, String.Empty)

        serializer.Serialize(tw, data, ns)
        tw.Close()

        Me.UpdateResourceStates(resourceType)

        Me.OnSerializationComplete( _
            New SerializationCompleteEventArgs(resourceType))
    Catch ex As Exception
        Throw New NotImplementedException( _
            "Need to do a serialization exception..", ex)
    End Try
End Sub

The code is pretty straightforward: it just creates an XmlSerializer instance for the type indicated, and contains a few snippets of code that replace the default XML Serialization namespaces (injected into the serialized XML) with null, or blank, namespaces to keep the serialized XML clean and tidy.

Transforming incoming XML

Now that we've seen what our pages look like once serialized, we can build a quick XSL transform to convert incoming BackPack XML into that same format so that we can deserialize BackPack pages from the server just as easily as those stored locally in our own format. The transform is a simple one, because both the BackPack XML and our own format are very element-heavy, which makes mapping really just a question of changing names from the incoming document to match the names of our objects.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" />
<xsl:template match="/">
<Page>
<PageIsDirty>false</PageIsDirty>
<Title><xsl:value-of select="response/page/@title" /></Title>
<Body><xsl:value-of select="response/page/description" /></Body>
<xsl:if test="count(response/page/items/*) > 0">
<Tasks>
<xsl:apply-templates select="response/page/items" />
</Tasks>
</xsl:if>
<xsl:if test="count(response/page/notes) > 0">
<Notes>
<xsl:apply-templates select="response/page/notes" />
</Notes>
</xsl:if>
<xsl:if test="count(response/page/links) > 0">
<Links>
<xsl:apply-templates select="response/page/links" />
</Links>
</xsl:if>
<xsl:if test="count(response/page/tags) > 0">
<Tags>
<xsl:apply-templates select="response/page/tags" />
</Tags>
</xsl:if>
</Page>
</xsl:template>
<xsl:template match="item">
<Task>
<Text><xsl:value-of select="." /></Text>
<Id><xsl:value-of select="./@id" /></Id>
<Complete><xsl:value-of select="./@completed" /></Complete>
</Task>
</xsl:template>
<!-- elided for brevity (see sample code for more info) -->
</xsl:stylesheet>

I'm also a bit of a *cough* clean freak, so I've added a bit of logic to ensure that empty elements arriving from the server aren't included in our transformed XML. That's a total waste of energy and cycles, in case anyone was wondering, but it serves as a simple example of how you could go about it if you wanted your XML more human-readable.

As for performing the transform, that will be done in memory as pages arrive from the server, as demonstrated by the following code:

Visual C#

private void SinglePageReturned(string pageData)
{
    // TODO: add try/catch etc.
    byte[] data = Encoding.UTF8.GetBytes(pageData);
    MemoryStream stream = new MemoryStream(data);
    XPathDocument input = new XPathDocument(stream);

    XslCompiledTransform xsl = new XslCompiledTransform();
    XmlNode stylesheet = this.LoadTransformDocument();
    xsl.Load(stylesheet);

    MemoryStream ms = new MemoryStream();
    StreamWriter sw = new StreamWriter(ms);

    xsl.Transform(input, null, sw);

    XmlDocument page = new XmlDocument();
    byte[] bytes = ms.ToArray();
    string transformedXml = Encoding.UTF8.GetString(bytes);
    page.LoadXml(transformedXml);
    sw.Dispose();
    ms.Close();
    ms.Dispose();

    XmlNode node = page.SelectSingleNode("/");
    Page output = this.GetPageFromXml(node);

    this.AddPage(output);
}

Visual Basic

Private Sub SinglePageReturned(ByVal pageData As String)
    ' TODO: add try/catch etc.
    Dim data() As Byte = Encoding.UTF8.GetBytes(pageData)
    Dim stream As MemoryStream = New MemoryStream(data)
    Dim input As XPathDocument = New XPathDocument(stream)

    Dim xsl As XslCompiledTransform = New XslCompiledTransform
    Dim stylesheet As XmlNode = Me.LoadTransformDocument
    xsl.Load(stylesheet)

    Dim ms As MemoryStream = New MemoryStream
    Dim sw As StreamWriter = New StreamWriter(ms)

    xsl.Transform(input, Nothing, sw)

    Dim page As XmlDocument = New XmlDocument
    Dim bytes() As Byte = ms.ToArray
    Dim transformedXml As String = Encoding.UTF8.GetString(bytes)
    page.LoadXml(transformedXml)
    sw.Dispose()
    ms.Close()
    ms.Dispose()

    Dim node As XmlNode = page.SelectSingleNode("/")
    Dim output As Page = Me.GetPageFromXml(node)

    Me.AddPage(output)
End Sub

Note the helper method there that grabs the XSL file: LoadTransformDocument(). I've added that method to abstract the mechanism used for grabbing the transform document listed above. In this sample application I'm grabbing the XSL document from the assembly itself. I did that ONLY to show that it can be done; it's a handy trick if you want to load XSL transforms completely in memory without depending on satellite files. But in our case it doesn't really make the most sense, since we're partially using the XSL document to shield us against changes to the incoming XML, meaning that if 37Signals ever changed their XML format, we'd have to build a new transform and then recompile our application to get the XSL document back into the Assembly's Resource Stream. So, with the caveat that I only did it this way to model how to pull XML documents out of your assembly, here's the code that fetches the XSL document:

Visual C#

private XmlNode LoadTransformDocument()
{
    if (this._stylesheetNode == null)
    {
        // pull back the xslt document from the manifest:
        Assembly current = Assembly.GetExecutingAssembly();
        XmlDocument doc = new XmlDocument();

        using (StreamReader sr = new StreamReader(current.GetManifestResourceStream(this.GetType(), "transform.xsl")))
            doc.Load(sr);

        XmlNode output = doc.SelectSingleNode(".");
        this._stylesheetNode = output;
    }
    return this._stylesheetNode;
}

Visual Basic


Private Function LoadTransformDocument() As XmlNode
    If (Me._stylesheetNode Is Nothing) Then
        ' pull back the xslt document from the manifest:
        Dim current As Assembly = Assembly.GetExecutingAssembly
        Dim doc As XmlDocument = New XmlDocument

        Dim sr As StreamReader = _
            New StreamReader(current.GetManifestResourceStream( _
                Me.GetType, "transform.xsl"))
        doc.Load(sr)

        Dim output As XmlNode = doc.SelectSingleNode(".")
        Me._stylesheetNode = output
    End If
    Return Me._stylesheetNode
End Function

In the final application, it would probably be better to have this method open a stream to the current directory where the application resides and pull in an .xsl document; that way, if the format ever needed to change, you could just replace the transform document without recompiling the application.
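A sketch of that file-based approach might look like the following. Assumptions: the transform lives in a transform.xsl file in the application's base directory, the method is public here simply to make it easy to exercise (the sample keeps it private), and the caching field matches the embedded-resource version above:

```csharp
using System;
using System.IO;
using System.Xml;

public class PageManager
{
    private XmlNode _stylesheetNode;

    // File-based alternative to the embedded-resource version: the transform
    // can be swapped out on disk without recompiling the application.
    public XmlNode LoadTransformDocument()
    {
        if (this._stylesheetNode == null)
        {
            string path = Path.Combine(
                AppDomain.CurrentDomain.BaseDirectory, "transform.xsl");

            XmlDocument doc = new XmlDocument();
            doc.Load(path);

            // cache the loaded stylesheet for subsequent calls
            this._stylesheetNode = doc.SelectSingleNode(".");
        }
        return this._stylesheetNode;
    }
}
```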

Translating Pages into Objects from XML

With the transform built, and with a method to serialize pages to disk, we now just need to focus on pulling pages back from the BackPack server and turning them into objects. Then we can turn our attention to converting pages saved to disk back into objects as well. But, because the XML formats are the same, both operations will end up being quite similar—and are covered by the following two methods (one of which returns a single Page, the other which returns a Collection, or SerializableList, of Pages):

Visual C#

private Page GetPageFromXml(XmlNode input)
{
XmlSerializer s = new XmlSerializer(typeof(Page));
Page page = (Page)s.Deserialize(new XmlNodeReader(input));

return page;
}

Visual Basic

Private Function GetPageFromXml(ByVal input As XmlNode) As Page
Dim s As XmlSerializer = New XmlSerializer(GetType(Page))
Dim page As Page = CType(s.Deserialize(New XmlNodeReader(input)), Page)
Return page
End Function

Visual C#

private SerializableList<Page> GetPagesFromXml(XmlNode input)
{
XmlSerializer s = new XmlSerializer(typeof(SerializableList<Page>));
SerializableList<Page> list =
(SerializableList<Page>)s.Deserialize(new XmlNodeReader(input));

return list;
}

Visual Basic

Private Function GetPagesFromXml(ByVal input As XmlNode) As SerializableList(Of Page)
    Dim s As XmlSerializer = _
        New XmlSerializer(GetType(SerializableList(Of Page)))
    Dim list As SerializableList(Of Page) = _
        CType(s.Deserialize(New XmlNodeReader(input)), SerializableList(Of Page))
    Return list
End Function

Because the XML representing a BackPack page has been translated to the very format output by our serialized objects, hydration is a quick and simple operation. And, because our Generic SerializableList has an XML Serialization attribute that lets it morph element names based upon the type of object collection being serialized, all of our nested objects line up perfectly and can therefore be hydrated and dehydrated as needed.

Loading Pages - Without Freezing the User Interface

Many of the key components of the final application are now complete. We have a connection mechanism, created in the first article, that allows us to fetch XML from the BackPackIt.com servers, and we now have a mechanism in place that can turn XML into objects that can be used in our application. We also now have the ability to store those objects as a collection of Pages on disk for use offline. All we really need at this point is to wire up existing functionality from the last article to request pages from the server so that they can be turned into objects that can be used by our application. We'll also need a user interface that will allow us to specify credentials, and direct operations—such as loading pages from the server (or from the file) as well as being able to save the pages to disk.

In building the user interface, one thing we'll want to ensure is that long-running interactions with the file system or remote servers don't stall, or freeze, the UI. To accomplish that, we'll implement those interactions asynchronously, which will prevent locking the UI thread and will avoid the possibility of collisions, or race conditions, with data returned from either the remote server or the file system. Alongside asynchronously invoking long-running methods, we'll make use of events to "announce" the arrival of new pieces of data, and then handle binding that data to the user interface as needed. This will allow the Winform to remain responsive while individual pages are being loaded from either the server or the file system, and will further serve as the framework for the modification-oriented operations that we'll work on in the next article.

Asynchronous Architecture

To keep the application responsive, the user interface will route user requests into the business logic layer (the PageManager object), where an asynchronous delegate will be used to perform the actual operation requested, leaving the UI thread free to stay focused on user interaction. Once the requested operation completes, a RemoteMethodComplete event will be raised in the PageManager and then bubbled up and handled by the Winform. Handler logic associated with each event type will let the Winform process incoming information as needed, and the actual details of the operation itself (such as an individual page) will be passed along in the event as part of a corresponding EventArgs class accompanying each event. This ensures that the state, or data, associated with the completion of each operation doesn't run the risk of being overwritten by another operation completing concurrently (which was the case in the first article, where I used a member variable to hold state data returned by the logging delegate).
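The plumbing types referenced throughout this section don't appear in the article listings, so here's a hedged sketch of how the delegate and event arguments might be declared. Names follow the prose; the exact signatures are assumptions:

```csharp
using System;
using System.Xml;

// Delegate matching the BackPackGateway.ExecuteWebMethod signature; invoked
// asynchronously (via BeginInvoke) so the UI thread never blocks on the network.
public delegate void AsyncRemoteOperation(string url, XmlElement[] args);

public enum CompletionStatus { Success, Fail }

// Carries all state for one completed operation, so concurrent operations
// can't overwrite each other's data via shared member variables.
public class RemoteMethodCompleteEventArgs : EventArgs
{
    public readonly string MethodUrl;
    public readonly string ResponseText;
    public readonly CompletionStatus OperationStatus;
    public readonly Exception Exception;

    public RemoteMethodCompleteEventArgs(string methodUrl, string responseText,
        CompletionStatus operationStatus, Exception exception)
    {
        this.MethodUrl = methodUrl;
        this.ResponseText = responseText;
        this.OperationStatus = operationStatus;
        this.Exception = exception;
    }
}
```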

The actual invocation of any of the methods needed to request data from the BackPack servers will be routed through a helper method in the PageManager that bundles up any needed XML along with the corresponding URL, and places the call implicitly on its own thread by "scheduling" the operation for completion, invoking it asynchronously on a delegate signature bound to the Connection object:

Visual C#

private void InvokeRemoteOperation(string url, XmlElement[] args)
{
    if (this._gateway.ConnectionInfo.IsOnline)
    {
        AsyncRemoteOperation operation = new AsyncRemoteOperation(this._gateway.ExecuteWebMethod);
        operation.BeginInvoke(url, args, null, null);
    }
    else
    {
        // covered in part III
    }
}

Visual Basic

Private Overloads Sub InvokeRemoteOperation(ByVal url As String, _
    ByVal args() As XmlElement)

    If Me._gateway.ConnectionInfo.IsOnline Then
        Dim operation As AsyncRemoteOperation = _
            New AsyncRemoteOperation( _
                AddressOf Me._gateway.ExecuteWebMethod)
        operation.BeginInvoke(url, args, Nothing, Nothing)
    Else
        ' covered in part III
    End If
End Sub

Note too that, since this helper method serves to route all commands, it's the perfect place either to route the commands to the BackPack servers or to store them for later execution, which will be covered in the next article. As with the example from the last article, XML data is generated as needed and passed in as an XmlElement array of args that will be fired against the appropriate URL. In the last article, however, the Winform was responsible for bundling up and preparing that data. Logically, such interaction should be handled for the Winform by the PageManager class, which knows how to handle the details of bundling XML and locating the correct URL, so that's how we'll do it in this article. Once the XML is generated and the URL is built, the PageManager routes the request into the InvokeRemoteOperation() method, which asynchronously routes the commands into a BackPackGateway object, a slight modification of the BackPackRequest class used in the previous article. The BackPackGateway object handles the connection details, transfers data as needed, and, once complete, raises an event which is then handled by the PageManager, which can evaluate the type of event raised and determine what to do with the information returned:

Visual C#

public void GatewayResponded(object sender, RemoteMethodCompleteEventArgs e)
{
    if (e.OperationStatus == CompletionStatus.Fail)
    {
        throw new BackPackServerException("Asynchronous BackPack Method Failed.", e.Exception);
    }

    // later: we'll parse the returned info much more fully, and handle it as
    // appropriate - for now we just need to know if it was a single page, or a
    // list of all the pages:
    if (e.MethodUrl == "/ws/pages/all")
    {
        this.PagesListed(e.ResponseText);
    }
    else // it's an individual page
    {
        this.SinglePageReturned(e.ResponseText);
    }
}

Visual Basic

Public Sub GatewayResponded(ByVal sender As Object, _
    ByVal e As RemoteMethodCompleteEventArgs)

    If (e.OperationStatus = CompletionStatus.Fail) Then
        Throw New BackPackServerException( _
            "Asynchronous BackPack Method Failed.", e.Exception)
    End If

    ' later: we'll parse the returned info much more fully, and handle it
    ' as appropriate - for now we just need to know if it was a single page,
    ' or a list of all the pages:
    If (e.MethodUrl = "/ws/pages/all") Then
        Me.PagesListed(e.ResponseText)
    Else
        Me.SinglePageReturned(e.ResponseText)
    End If
End Sub

When the PageManager detects a list of pages being returned, it knows that those pages need to be loaded, and will therefore proceed to request each page in the collection on the current background thread. A URL is built for each request and routed into the BackPackGateway object, which will again bundle the request with user credentials, route it to the appropriate URL on the BackPack server, process the response, and bundle the results into a corresponding RemoteMethodCompleteEventArgs class, which is bubbled up to the PageManager via the RemoteMethodComplete event. The PageManager will then again handle the event in the GatewayResponded handler; this time it will see that an individual page has been returned. The data from the event, XML representing the page, will be routed into a method that transforms the XML into our own persistence format and then translates the resultant XML into a full-blown Page object. Once the page is fully hydrated, the PageManager will add it to its own internal collection of Pages, bundle it into a PageAddedEventArgs class, and announce the arrival of the new page to the Winform via the PageAdded event. The Winform, which has been sitting around waiting for user input (or notification from back-end processes), will handle the PageAdded event and will, in this article, respond by binding the Page's title to the TreeView control used to represent our Page objects.

In this way, the actions required to load the pages take place transparently under the covers, and the pages just appear in the UI once they are fully loaded and announced to the Winform by backend logic and processing. Once the pages are fully loaded from the server, the UI will toggle a Pages Menu Item command that will let us save pages to disk—which can be accomplished by simply sending the PageManager's internal collection of Page objects to the serialization helper method that we created earlier (SerializeToFile()).

Once the pages are saved to disk, they can then be loaded from disk in much the same way as they are loaded from the server: an asynchronous delegate will open a stream to the persisted XML and hand it off to the GetPagesFromXml() helper method, which will return a SerializableList of Pages. This Generic Collection of Pages can then be assigned to the PageManager's internal collection of Pages, and each page can be announced to the Winform by bundling it up in a PageAddedEventArgs and bubbling it up via the PageAdded event, so that it can be bound to the Winform's TreeView control in the same manner as individual pages arriving from the BackPack server.
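As a rough, self-contained sketch of that load-from-disk path (member names here are assumptions based on the prose, the Page class is trimmed to a few fields, and LoadPagesFromDisk is public so it can also be called synchronously):

```csharp
using System;
using System.Collections.Generic;
using System.Xml;
using System.Xml.Serialization;

[XmlType("{T}s")]
public class SerializableList<T> : List<T> { }

public class Page
{
    public bool PageIsDirty { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
}

public class PageAddedEventArgs : EventArgs
{
    public readonly Page Page;
    public PageAddedEventArgs(Page page) { this.Page = page; }
}

public class PageManager
{
    public event EventHandler<PageAddedEventArgs> PageAdded;

    private delegate void AsyncFileLoad(string path);
    private SerializableList<Page> _pages;

    // Schedule the load on a background thread so the UI stays responsive.
    public void BeginLoadPagesFromDisk(string path)
    {
        AsyncFileLoad loader = new AsyncFileLoad(this.LoadPagesFromDisk);
        loader.BeginInvoke(path, null, null);
    }

    public void LoadPagesFromDisk(string path)
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(path);

        // Same deserialization path as pages arriving from the server.
        XmlSerializer s = new XmlSerializer(typeof(SerializableList<Page>));
        SerializableList<Page> pages = (SerializableList<Page>)
            s.Deserialize(new XmlNodeReader(doc.SelectSingleNode("/")));

        this._pages = pages;
        foreach (Page page in pages)
        {
            // Announce each page so the Winform can bind it to the TreeView.
            if (this.PageAdded != null)
                this.PageAdded(this, new PageAddedEventArgs(page));
        }
    }
}
```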

Wrapping it all up

We're now, effectively, feature-complete for this article: we have the ability to 1) load pages from the server, 2) save them to disk, and then 3) reload them from disk. To the application, specifically the Winform, there's really no difference between pages loaded from the server and those loaded from disk—they're just objects which were once expressed as XML. In the next article we'll look at interacting with those objects, and marshalling changes made to them back and forth between our application and the BackPack servers. If the application is offline, then the commands to modify the data will be persisted locally as XML which can be re-hydrated and sent to the server once the application regains an Internet connection. Once the modification against the server is complete, an event will be raised by the BackPackGateway, processed by the PageManager, and announced to the Winform as needed—be it a success or a failure.

Now, if you've made it this far, there's some homework: go take a look at the sample code. Note that the helper method used to serialize pages to disk is also used for serializing credentials, which can now be saved to disk as well, so you don't need to keep pasting in the API Key each time you fire off the application for testing and debugging. Check out the event model, and how it keeps data channels clear and free of the potential collisions, or race conditions, that would otherwise occur if we were using shared variables between multiple, concurrent threads. Then take the application for a test ride, and watch how responsive it remains while long-running operations take place in the background. That responsiveness will be a key consideration when we change data in the next article: data will be flagged as changed locally, and once the change is completed and committed on the server, an event will be raised locally to un-flag that particular data point as dirty. And the cool thing? This will happen whether we're online or offline, making it possible to visually keep an eye on nodes that have been altered while our BackPack is offline.
