# Sunday, August 10, 2008

Visual Studio Team System 2008 Database Edition is a mouthful to say, so a lot of people affectionately call it “Data Dude”.

Data Dude provides a set of tools integrated into Visual Studio that assist developers in managing and deploying SQL Server database objects.

There are four tools in this product that I have found particularly useful: the Database Project; the Schema Compare tool; the Data Compare tool; and Database Unit Tests.

A Database Project is a Visual Studio project, just as a class library or ASP.Net web project is.  However, instead of holding .Net source code, a Database Project holds the source code for database objects, such as tables, views and stored procedures.  This code is typically written in SQL Data Definition Language (DDL).  Storing this code in a Database Project makes it easier to check it into a source code repository such as Team Foundation Server (TFS) and simplifies the process of migrating database objects to other environments.

The Schema Compare tool is most useful when comparing a database with a Visual Studio Database Project.  Developers can use this tool after adding, modifying or deleting objects in a database in order to propagate those changes to a Database Project.  Later, a Database Administrator (DBA) can compare the Database Project to a different database to see what objects have been added, dropped or modified since the last compare.  The DBA can then deploy those changes to the other database.  This is useful for migrating database objects from one environment to another, for example when moving code changes from a Development database to a QA or Production database.

The Data Compare tool is another aid for migrating from one database environment to the next.  This tool facilitates the migration of records in a given table from one database to another.  The table must have the same structure in both the source and destination databases.  I use this when I want to seed values into lookup tables, such as a list of states or a list of valid customer types stored in database tables.

Unit tests have increased in popularity over the last few years as developers have come to realize their importance in maintaining robust, error-free code.  But unit testing stored procedures is still relatively rare, even though code in stored procedures is no less important than code in .Net assemblies.  Data Dude provides the ability to write unit tests for stored procedures using the same testing framework (MS Test) you use for unit tests of .Net code.  The tests work the same as your other unit tests - you write code and assert what you expect to be true.  Each test passes only if all its assertions are true at runtime.  The only difference is that your test conditions are written in T-SQL, instead of C# or Visual Basic.Net.
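Under the hood, a database unit test boils down to executing T-SQL and asserting on the results.  Data Dude generates this plumbing for you, but a hand-rolled sketch makes the idea concrete.  The connection string, database name and stored procedure name below are hypothetical placeholders:

using System.Data;
using System.Data.SqlClient;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class StoredProcedureTests
{
    [TestMethod]
    public void GetCustomers_ReturnsAtLeastOneRow()
    {
        // Hypothetical connection string and stored procedure name.
        string connString = @"Data Source=.\SQLEXPRESS;Initial Catalog=Demo;Integrated Security=True";

        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand("dbo.GetCustomers", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();

            // Count the rows returned by the stored procedure.
            int rowCount = 0;
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    rowCount++;
                }
            }

            // Assert what we expect to be true, just as in any other unit test.
            Assert.IsTrue(rowCount > 0, "Expected at least one customer row.");
        }
    }
}

Data Dude's generated tests wrap this same pattern, letting you supply the T-SQL and the test conditions through a designer instead of writing the ADO.Net code yourself.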

There are some limitations.  In order to use Data Dude, you must have either SQL Server 2005 or SQL Express installed locally on your development machine, and you (the logged-in user) must have "Create Database" rights on that local installation.  To my knowledge, Data Dude only works with SQL Server 2000 and 2005 databases.  Plans to integrate with SQL Server 2008 have been announced, but I don't know Microsoft's plans for other database engines.  I also occasionally find myself wishing Data Dude could accomplish its tasks more easily or in a more automated fashion.  I wish, for example, I could specify that I always want to ignore database users and always want to migrate everything else when using the Schema Compare tool.  But overall, the tools in this product have increased my productivity significantly.  Nearly every application I write has a database element to it, and anything that can help me with database development, management and deployment improves the quality of my applications.

.Net | SQL Server | VSTS
Sunday, August 10, 2008 1:34:41 PM (GMT Daylight Time, UTC+01:00)
# Saturday, August 9, 2008

When applications service a large number of simultaneous users, the developer needs to take this into account and find ways to ease the application’s bottlenecks. 

One way to help speed up a stressed application is to load into memory resources that will be requested by multiple users.  Reading from memory is much faster than reading from a hard drive or a database, so this can significantly speed up an application. 
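The pattern behind this - check memory first, fall back to the slow store, then cache the result - is often called cache-aside.  Here is a minimal single-machine sketch; the loader delegate stands in for an expensive database or disk read, and a distributed cache like Velocity effectively spreads the dictionary below across many servers (Velocity's actual API differs):

using System;
using System.Collections.Generic;

public class SimpleCache<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> _store = new Dictionary<TKey, TValue>();

    public TValue GetOrLoad(TKey key, Func<TKey, TValue> load)
    {
        TValue value;
        if (!_store.TryGetValue(key, out value))
        {
            value = load(key);   // slow path: hit the backing store
            _store[key] = value; // cache it for subsequent readers
        }
        return value;            // fast path: served from memory
    }
}

After the first request for a given key, every subsequent request is served from memory without touching the backing store.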

However, each computer contains a finite amount of memory, so there is a limit to how much data you can store there.

Microsoft Distributed Cache (code named "Velocity") attempts to address this problem.  It allows your code to store data in an in-memory cache and it allows that cache to be stored on multiple servers, thus increasing the amount of memory available for storage. 

Velocity even ships with a provider that allows you to store a web site's session state, making it possible to increase the amount of memory available to your session data.

Microsoft has not yet published a release date for Velocity, but it is available as a Community Technology Preview (CTP).  You can download these bits and read more about it at http://code.msdn.microsoft.com/velocity.

The current CTP is not production ready - I had trouble keeping the service running on my Vista machine - but the technology shows enough promise that it is worth checking out.  When the glitches are fixed, this will make .Net an even more appealing choice for developing enterprise applications.

Saturday, August 9, 2008 2:38:56 PM (GMT Daylight Time, UTC+01:00)
# Friday, August 8, 2008

I am a recent convert to Agile methodologies. 

Until last year, I worked for a large consulting company that had established a solid reputation using a waterfall approach to deliver solutions.

My current employer is committed to the agile methodology SCRUM.  They have developed their own variation of SCRUM and several consultants here have even made a name for themselves delivering presentations on this methodology to customers and at conferences.

So it's only natural that I have been engrossed in SCRUM since joining.  Nine months of agile software development have sold me on its benefits. 

The biggest advantage I see to SCRUM is the short delivery schedule pushed by the sprints.  For those who don’t know, a sprint is a set of features scheduled for delivery in a short period of time (typically 1-4 weeks).  A sprint forces (or at least encourages) frequent delivery of working software and provides a great feedback loop to the development team. 

When users can actually see, touch and use functioning software, they don't just get value more quickly - they are able to evaluate it more quickly and provide valuable feedback.  That feedback might be a rethinking of original assumptions; it might be new ideas sparked by using the software; it might be a reshuffling of priorities, or it might be a clarification of some miscommunication between the users and the developers.  It will probably be several of these things. 

That miscommunication issue is one that occurs far too often on software projects.  Catching these misunderstandings early in the life of an application can save a huge amount of time and money.  We all know that the cost of making a change to software goes up exponentially the later that change is made.

By delivering something useable to the customer several times a month, we are providing value to the customer in a timely manner.  At best, this value comes in the form of software that enhances their ability to perform their job.  At worst, we provide something they didn't ask for.  But this worst-case scenario also adds value because we can use the delivery to clarify the misunderstandings and poor assumptions that leaked through the design.

I think back to the last waterfall project in which I was involved.  Our team was charged with designing and building an integration layer between an e-commerce web application (that was being designed at the same time) and dozens of backend systems (that were in a state of constant flux).  We spent months designing this integration layer.  During these months, the systems with which we planned to integrate changed dozens of times.  These changes included adding or removing fields; placing a web service in front of an existing interface; and completely redesigning and rewriting backend systems.

Each of these changes forced us to re-examine all the design work we had done and to modify all our documents to match the changed requirements.  In some cases, we had to start our design over from scratch.

An agile approach would have helped immensely.  Instead of designing everything completely before we started building anything, we could have minimized changes by designing, building, and deploying the integration service for one back-end system at a time.  By selecting a single integration point, we might have been able to quickly deliver a single piece of functionality while waiting for the other backend systems to stabilize. 

I'm not going to suggest that agile is the appropriate methodology for every software project or that no other methodologies have value.  My former employer delivered countless successful projects using waterfall techniques. 

But it pays to recognize when agile will help your project and it is definitely a useful tool for any developer, architect or project manager to have in his or her toolbox.

Friday, August 8, 2008 3:58:27 PM (GMT Daylight Time, UTC+01:00)
# Thursday, August 7, 2008

Microsoft recently released the Managed Extensibility Framework (MEF) which allows developers to add hooks into their applications so that the application can be extended at runtime.

Using MEF is a two-step process.  The first step is performed by the application developer, who adds attributes or code at defined points in the application.  At these points, the application searches for extensible objects and adds or calls them at runtime. 

The second step is performed by third-party developers, who use the MEF application programming interface (API) to mark classes in an "extension" assembly as extensible so that they will be discoverable by the above-mentioned applications.

The two steps are loosely-coupled, meaning neither the application nor the extension assembly needs to know anything about the other.  We don't even need to set a reference from one project to another in order to call across these boundaries.
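The discovery half of this can be sketched with plain reflection.  This is not MEF's actual API - the marker attribute and host class below are hypothetical - but it shows how an application can find and count classes it has no compile-time reference to:

using System;
using System.Reflection;

// Hypothetical marker attribute standing in for MEF's export mechanism.
[AttributeUsage(AttributeTargets.Class)]
public class PluginAttribute : Attribute { }

// An "extension" class the host has no hard-coded knowledge of.
[Plugin]
public class HelloCommand
{
    public string Run() { return "Hello from an extension"; }
}

public static class PluginHost
{
    // Scan an assembly for classes flagged as plugins - the kind of
    // discovery MEF automates across extension assemblies for you.
    public static int CountPlugins(Assembly assembly)
    {
        int count = 0;
        foreach (Type type in assembly.GetTypes())
        {
            if (type.GetCustomAttributes(typeof(PluginAttribute), false).Length > 0)
            {
                count++;
            }
        }
        return count;
    }
}

In a real deployment, the host would load extension assemblies from a well-known directory rather than scanning its own assembly, and MEF adds composition and metadata on top of this basic discovery.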

I can think of two scenarios where this technology would be useful.

In scenario 1, an independent software vendor develops and sells a package with many pluggable modules.  Customers may choose to buy and install one module or all modules in the package.  For example, an Accounting package may offer General Ledger, Accounts Payable, Accounts Receivable, Payroll and Reporting modules, but not all users will want to pay for every module.  By using MEF, the software could search a well-known directory for any module assemblies (flagged as extensions by MEF) and add to the menu only those that are installed.  With MEF in place, more modules could be added at a later time with no recompiling.

In scenario 2, developers create and deploy an application with a given set of functionality and create points at which other developers are allowed to extend the application using MEF.  By publishing these extendable points, they can allow developers to add functionality to the application without modifying or overriding the original source code.  This is a much safer way of extending functionality.  Extensions could be anything from new business rules or workflow to additional UI elements on forms.

All the extensions happen at runtime and MEF gives developers the ability to add metadata to better describe their extension classes.  By querying this metadata, we can conditionally load only those extensions that meet expected criteria.  The best part of this feature is that we can query a class's metadata without actually loading that class into memory.  This can be a huge resource saving over similar methods, such as Reflection.

MEF is currently released as a Community Technology Preview (CTP), so the API is likely to change before its final release.  You can download the CTP and read more about it at http://code.msdn.microsoft.com/mef.  By learning it now, you can be prepared to add extensibility to your applications when MEF is fully released.

Thursday, August 7, 2008 8:37:42 PM (GMT Daylight Time, UTC+01:00)
# Friday, July 4, 2008

July 3 (Contribupendence Day) is almost over.  As promised, I wrote reviews for several colleagues on LinkedIn.  Most of the people I reviewed were those I worked with directly at my last employer.  I am currently on a project that allows me to work directly with a couple of folks from my current employer.

I became motivated enough that I ended up writing 7 reviews tonight (I had only promised 5), and I invited quite a few new people to connect with me on LinkedIn and Plaxo.

I was surprised at the number of past and present colleagues that are using these networking sites.  My connections should at least double in the next few days.

Hopefully the people I recommended will be inspired to pick 5 people of their own and this thing will explode.

My plan is to write more recommendations in the coming weeks.  I have some that I've written but cannot make official until the recipient accepts my connection invitation. 

I won't copy my recommendations on this site but you are welcome to read them here.

Again thanks to Jeff for suggesting this.

Friday, July 4, 2008 3:08:05 AM (GMT Daylight Time, UTC+01:00)
# Thursday, July 3, 2008

Tomorrow - July 3 - is Contribupendence Day.

What is Contribupendence Day, you ask?  Well it's the day when all readers of my blog agree to send me $20.

Not really.  Contribupendence Day is the brainchild of Microsoft Developer Evangelist Jeff Blankenburg, who woke up one day and noticed the untapped potential of networking sites such as LinkedIn, Plaxo and Facebook.  These sites give us the opportunity to recommend or comment on those with whom we have worked.  Unfortunately, few of us take advantage of this feature, which is a shame.  These recommendations could be a good source of feedback for potential employers and might make the difference in getting an interview or landing a job.

He's right of course. I've been on LinkedIn for a couple months and I am connected to a few dozen people but these are very passive connections.  In the back of my mind, I tell myself I'll focus on LinkedIn the next time I look for a job (which hopefully won't be for a long time).  The problem with this attitude is that I'm relying on everyone else to motivate themselves around my schedule.  When I'm ready to look for a job, will others have the time to write a glowing review for me?  If I worked for or with them, will they even remember my specific accomplishments?

So, at Jeff's urging, I'm being proactive.  Tomorrow I resolve to write reviews of 5 people on LinkedIn.  In doing so, I hope to inspire these 5 to either review me or to review someone else, which will set in motion a process that may very well come back to me.

I've worked with some great people in my life so it wasn't difficult to pick five that I can rave about.  I'll be reviewing them tomorrow and I urge you to do the same.

Happy Contribupendence Day everybody!  And you are welcome to review me.  Or, if that's too much trouble, just send me the 20 bucks.

Thursday, July 3, 2008 3:24:25 AM (GMT Daylight Time, UTC+01:00)
# Tuesday, July 1, 2008

The Ann Arbor Give Camp is July 11-13 at Washtenaw Community College - less than two weeks away.  For those who haven't heard, this is a great opportunity to contribute to some worthy causes, flex your tech muscles and network with the developer community. 

At this event, software developers, DBAs, project managers and UI designers will get together and develop projects for local charities.  So many charities requested projects for this camp that most had to be turned away due to lack of resources.  The more people involved, the more charities we can help.  The facilities will be available and staffed round-the-clock that weekend, and refreshments will be provided.  For security reasons, you must register in advance in order to participate.  Jennifer Marsman of Microsoft is organizing the camp. 

I will be out of town with my son the weekend of the event, but I volunteered to help with some of the evaluations of the projects because I really wanted to contribute.

You can get more information and you can register for this great outing at http://www.annarborgivecamp.org/.  If you will be in town, please take a look and consider giving your time.

Tuesday, July 1, 2008 4:49:47 PM (GMT Daylight Time, UTC+01:00)
# Thursday, June 26, 2008

Binding to a GridView

ASP.Net data binding is a great technology for developers who want to create interactive web applications without writing a lot of code. For example, a developer can drop a GridView and a DataObject onto a web form, set a few properties and have a fully editable grid, allowing users to easily view and update data. 

Even sorting the grid data can be performed without writing any code.  Just set the GridView's AllowSorting property to "true" and set the SortExpression property of any "sortable" column. 



  <asp:BoundField DataField="Name"
      HeaderText="Product Name"
      SortExpression="Name" />


By default, the grid renders a link at the top of each sortable column.  Clicking this link toggles the sort order between 3 states: Ascending, Descending and No Sort. 

Sorting Limitations

But there is a "gotcha".  The GridView can be bound to any set of data - that is, to any object that implements the System.Collections.IEnumerable interface.  This includes a DataTable, ArrayList, List and Hashtable, among others.  However, the automatic sorting only works when binding to a DataTable.

Personally, I prefer to work with and bind to generic Lists.  A List is more flexible than a DataTable and does a better job of abstracting the user interface from the back-end data source, making it easier to swap out one or the other.

But by binding to a List, I sacrifice the automatic sorting that happens when I bind to a DataTable.  Fortunately, it doesn't take a lot of code to implement sorting on a GridView bound to a generic List.

How to sort a GridView bound to a List

First, we set the GridView's AllowSorting property and each column's SortExpression property as described above.  This provides "sort" links at the top of each column.  Unfortunately, these links will not work properly - in fact, clicking them will generate an error.

To get this to work, you must do the following:

  1. Add an extra "sortBy" parameter of type string to your Select method.
  2. Add code to your Select method to sort the list before returning it (more on this later).
  3. Add a SortParameterName attribute to your ObjectDataSource.  The value of this attribute should match the name of the parameter you added to your Select method.

By setting the SortParameterName attribute, we are telling ASP.Net to pass sorting information to the Select method and which parameter to pass it to.  The Select method gets called when the grid loads or refreshes and whenever the user clicks the "sort" links at the top of each column.  Most of the time, the value passed to the sortBy parameter is an empty string (indicating no sort order), but if the user clicks a "sort" link, this parameter will contain one of the following three values:

  • "<SortField> ASC"
  • "<SortField> DESC"
  • ""

where <SortField> is the string specified in the SortExpression attribute of the column clicked.  With each click, the grid column cycles its sort between ascending order, descending order, and no sort.  The parameter value passed depends on which is currently the active sort. 

The ObjectDataSource markup looks something like this:

<asp:ObjectDataSource ID="ProductObjectDataSource" runat="server" 
    SelectMethod="GetProductsList"
    SortParameterName="sortBy" />

Now, how do we sort a List?  More specifically, how do we sort a list when we don't know in advance on which column we are sorting or in which direction (ascending or descending)? 

A List has a Sort method, so we can call that.  But what does it mean to sort a list of objects?  An object has properties and we can sort on any one (or more) of those properties, as long as the property is of a type that can be sorted.  We need to specify on which object property we will be sorting.  To do this, we use an overload of the List.Sort method that accepts an IComparer object.  IComparer has a Compare method that tells the Sort method how to order each pair of objects in the list.  We can create a class that implements IComparer, override the Compare method and use reflection to determine at runtime on which property to sort.  The name of the property and the sort order (Ascending or Descending) can be passed into the class constructor. 

The code for this class (named GenericComparer) is shown below:

using System;
using System.Collections.Generic;
using System.Reflection;

namespace DemoGridBusLayer
{
    /// <summary>
    /// This class is used to compare any
    /// type (property) of a class for sorting.
    /// It automatically fetches the type of
    /// the property and compares.
    /// </summary>
    public sealed class GenericComparer<T> : IComparer<T>
    {
        public enum SortOrder { Ascending, Descending };

        #region Constructors

        public GenericComparer(string sortColumn, SortOrder sortingOrder)
        {
            this._sortColumn = sortColumn;
            this._sortingOrder = sortingOrder;
        }

        /// <summary>
        /// Constructor when passing in a sort expression containing both the Sort Column and the Sort Order,
        /// e.g., "BPCode ASC".
        /// </summary>
        /// <param name="sortExpression"></param>
        /// <remarks>
        /// This constructor is useful when using this class with the ASP.NET ObjectDataSource,
        /// which passes the SortParameterName value in this format.
        /// </remarks>
        public GenericComparer(string sortExpression)
        {
            string[] sortExprArray = sortExpression.Split(' ');
            string sortColumn = sortExprArray[0];
            SortOrder sortingOrder = SortOrder.Ascending;
            if (sortExprArray.Length > 1 && sortExprArray[1].ToUpper() == "DESC")
            {
                sortingOrder = SortOrder.Descending;
            }

            this._sortColumn = sortColumn;
            this._sortingOrder = sortingOrder;
        }

        #endregion

        #region Public properties

        /// <summary>
        /// Column name (public property of the class) to be sorted.
        /// </summary>
        private string _sortColumn;
        public string SortColumn
        {
            get { return _sortColumn; }
        }

        /// <summary>
        /// Sorting order (ASC or DESC)
        /// </summary>
        private SortOrder _sortingOrder;
        public SortOrder SortingOrder
        {
            get { return _sortingOrder; }
        }

        #endregion

        /// <summary>
        /// Compare two objects of the same class,
        /// based on the value of a given property
        /// </summary>
        /// <param name="x">First object</param>
        /// <param name="y">Second object</param>
        /// <returns>int</returns>
        public int Compare(T x, T y)
        {
            // Use reflection to get the property
            PropertyInfo propertyInfo = typeof(T).GetProperty(_sortColumn);

            // Cast the property values to IComparable, so we can use the built-in compare.
            IComparable obj1 = (IComparable)propertyInfo.GetValue(x, null);
            IComparable obj2 = (IComparable)propertyInfo.GetValue(y, null);

            // Order depends on Asc vs. Desc.
            if (_sortingOrder == SortOrder.Ascending)
            {
                return obj1.CompareTo(obj2);
            }
            return obj2.CompareTo(obj1);
        }
    }
}
Within our Select method, we call the List's Sort method and pass in an instance of the GenericComparer class, specifying on which column and in which direction to sort the list.  The Select method is shown below.  (The details of querying a database and storing the results in a List of objects are omitted.)

        /// <summary>
        /// Get a sorted list of all products
        /// </summary>
        /// <param name="sortBy"></param>
        /// <returns></returns>
        public static List<Product> GetProductsList(string sortBy)
        {
            // Get an unsorted list of products
            List<Product> prodList = GetProductsList();

            // Sort the list
            if (sortBy != "")
            {
                GenericComparer<Product> cmp = new GenericComparer<Product>(sortBy);
                prodList.Sort(cmp);
            }

            return prodList;
        }



With a small amount of code, we can enable sorting of a List of objects bound to a GridView in the same way that sorting is enabled for a DataTable bound to a GridView.


You can download the entire sample described in this article here: DemoGridSort.zip (39.63 KB).  You will need the AdventureWorks sample SQL Server database for this demo.

Thursday, June 26, 2008 5:05:33 PM (GMT Daylight Time, UTC+01:00)
# Tuesday, June 24, 2008

Did you know that every time an Ajax control performs a partial postback, every event in the life cycle of the control's containing page (and its master page) fires?

To me, this seems counterintuitive - there is no visible refresh of the containing page or of the master page, yet the Page_Load events of both fire.

I ran into this when I witnessed some unexpected behavior in a colleague's Ajax control.  After some investigation, I saw the behavior was caused by code in the Page_Load event handler.  I thought this was a bug until I learned it was by design.  So we ended up bracketing some of the Page_Load code with a test of Page.IsPostBack to prevent that code from running when it should not.
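The guard we ended up with looks roughly like this; BindData is a hypothetical one-time initialization method standing in for the colleague's actual code:

using System;
using System.Web.UI;

public partial class DemoPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Page_Load fires on the initial request, on full postbacks,
        // AND on Ajax partial postbacks. Bracket one-time setup so it
        // runs only on the initial request.
        if (!Page.IsPostBack)
        {
            BindData(); // hypothetical one-time initialization
        }
    }

    private void BindData()
    {
        // ...load and bind data here...
    }
}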

Tuesday, June 24, 2008 3:51:57 PM (GMT Daylight Time, UTC+01:00)
# Sunday, June 22, 2008

I had a terrific time yesterday at the Lansing Day of .Net. 

This was the last in an ambitious string of community-sponsored events in Michigan and Ohio under the "Day of .Net" branding.  In the past two months, DODN events have been held in Wilmington, OH, Grand Rapids, MI, Cleveland, OH, and Lansing, MI.  I managed to make the two Michigan events but family commitments kept me from the ones in Ohio.

A Day of .Net event features numerous speakers (usually about 30) speaking on topics related to software development.  The primary focus is .Net development but peripheral topics are almost always included.  I heard a very good talk yesterday by Dan Rigsby on the agile methodology in which software was barely mentioned.

Prior to yesterday, I wondered if the Lansing event might be anticlimactic, coming so soon after three similar events.  I worried for nothing.  In fact, the opposite was true.  They managed to attract an excellent group of speakers, a full slate of sponsors (meaning, among other things, many cool door prizes), one of the better facilities I've seen (Lansing Community College West Campus) and the mayor of Lansing.  People were generally excited about this event.  I've heard - but can't confirm - that the Day of .Net was covered by two TV stations.  Jeff McWhirter and his group did a great job putting this together.  I don't know who thought of inviting the mayor, but that was a good idea.

The best part of these events is interacting with the people in the community.  There were a lot of good discussions about various projects, the state of the industry, the role of the community and the various approaches to developing software. 

When it was over, many of us headed over to Jeff's house to celebrate into the night.  I left at around 11 and the place was still packed and the bonfire was blazing.  Mike Wood, an old friend from my Cincinnati days, stayed at my house before heading home this morning.

I picked up some nice swag - a copy of Camtasia, a logo t-shirt, and a pint glass featuring the LDODN logo.  This morning, I noticed that the t-shirt includes the slogan "I was there" but the pint glass has a modified slogan "I think I was there".

Here are some photos of the day: Photos.


Sunday, June 22, 2008 5:13:40 PM (GMT Daylight Time, UTC+01:00)